ChatGPT Can String Words Together. Only Humans Can Write.

Since its release in November 2022, ChatGPT has disrupted a myriad of fields, including education, publishing, market research, advertising, and online advice-giving, sometimes with disastrous results.
ChatGPT is just one mainstream example of generative artificial intelligence (AI), a field that includes Google’s Gemini and Microsoft’s Copilot. These programs create new content by hoovering up what has already been posted on the internet—legally and illegally—and spewing it out in a new form without ever thinking about the text.
Naturally, these capacities make generative AI controversial among those who work with words. The technology has occasioned lawsuits in publishing and debates among writers. No matter what position we take, it’s here to stay, as John Warner concedes in his new book More Than Words: How to Think About Writing in the Age of AI. “There is no wishing away AI at this point,” he writes, “meaning it must be grappled with and done so in a way that preserves our humanity.”
Warner approaches this topic from the perspective of a 20-year college writing teacher, weekly book columnist for the Chicago Tribune, and author of guides like Why They Can’t Write and The Writer’s Practice. In More Than Words, he focuses on ChatGPT as it relates to writing and reading. Although his primary audience is writing and English teachers, his insights and speculations also apply to writers, readers, and professors in other disciplines.
Warner starts with this goal: “I hope to convince you that we vastly underappreciate the importance of the act of writing to the work of being human, and that very little writing that has any meaning can be successfully outsourced to syntax-generation technology.”
As a writer and editor, I confess I’ve not spent a lot of time researching generative AI. I know writers who outright dismiss it and writers who embrace it wholeheartedly, as well as writers who fall in between the two extremes. Although people use it to create published materials like blog posts, business newsletters, articles, and even books, the results tend to be flat, with no evidence of thinking and feeling, which Warner insists are part of being human.
I’ve noticed that flatness and have not been impressed. Thus, I was prepared to like More Than Words before I started reading it. I was not disappointed.
In part 1, Warner explains what ChatGPT is and what it does. As an engine of generative AI, it’s an example of a large language model that processes content, specifically text, by mimicking what has been written. It does not write. This, Warner emphasizes, is an important distinction.
Writing … is a fully embodied experience. When we do it, we are thinking and feeling. We are bringing our unique intelligences to the table and attempting to demonstrate them to the world, even when our intelligences don’t seem too intelligent. ChatGPT is the opposite, a literal averaging of intelligence, a featureless landscape of pattern-derived text.
Warner drives home the point that generative AI represents automation, not genuine human intelligence. As such, we need to be discerning in how and when we use it, recognizing that “generative AI has been born in sin and that it is already an ethical, moral, and environmental nightmare.”
For instance, it is trained on copyrighted intellectual property, which it then spews out as “writing.” Like many other writers, I find this process especially disturbing. That training has already led to copyright-infringement lawsuits. Even if the plaintiffs don’t win—and many haven’t—we can’t ignore the ethical issues involved.
Furthermore, Warner writes, the use of generative AI, like all supercomputing, requires enormous amounts of power to run computer servers and vast quantities of water to keep them cool, taking a real toll on the environment. (This is not an argument I’d encountered before among AI opponents.)
In part 2, Warner elaborates on the distinctions between humans writing and generative AI processing text. As he emphasizes, the two clearly are not the same. Writing is foremost the experience of wrestling with ideas and relating them to specific readers. ChatGPT is merely stringing together words gathered from a variety of sources, without any particular goal or thought process. Besides thinking, Warner declares, writing also involves feeling, which is communicated in our words and hopefully touches readers. Machines have no capacity for any of this.
Although he has plenty of reasons to be wary of generative AI, Warner understands it has value in other areas and uses it himself as a writer. For instance, ChatGPT can produce text summaries and lists much more quickly than any person can. It’s “like having an on-demand generator of CliffsNotes for just about anything you can think of.”
This simile aptly describes the AI-generated summaries that now appear at the top of Google searches. In mere seconds, I can get a quick summary in response to the words I put in the search box. But as Warner notes, we should remember that AI-generated text may not be wholly—or even partly—accurate. It’s important to go back to original sources to verify the information, though the summaries do provide a starting point for exploring a topic.
Then, in part 3, Warner explores—and speculates on—how ChatGPT may affect the future for writers, readers, and educators. Thus far, the results of using ChatGPT have been far from stellar—and often just plain wrong. Examples include a “lawyer citing nonexistent cases” in court after relying on AI for research and students using AI shortcuts to write papers for their classes.
Warner fears that as generative AI works better in the future, we’ll accept the text it produces instead of engaging with the original works behind the words. And as a result, we’ll lose a piece of what it means to be human to a machine.
Warner’s experience of growing up with Tang, a powdered orange drink that doesn’t taste much like orange juice, resonated with me since Tang was also part of my childhood. After all, it was “the drink of astronauts.” If you only drink Tang—a cheaper, convenient imitation—and never drink real orange juice, you don’t know how superior the juice tastes. Likewise, if all students and readers know is AI-processed text, they’ll never learn why real writing—which reflects the author’s thinking, feeling, and experiences—is vastly superior.
The future of generative AI is unpredictable. In the closing sections of his book, Warner posits “a framework for how to think about this technology going forward.” He builds this framework on three broad categories: resisting, renewing, and exploring.
Here he circles back to examine how writing is related to being human: “I believe we have to orient toward goals that are associated with human flourishing, and make use of artificial intelligence where it is useful in those goals and reject it where it is a hindrance.”
Resistance starts with remembering that “artificial intelligence” is a misnomer; what the technology actually delivers is automation, not intelligence. Instead of rushing ahead and embracing it wholeheartedly, we need to take the time to discern what benefits it genuinely offers. Just because it’s new and shiny doesn’t mean we should adopt it without thinking through the implications of using it.
Thankfully, many writers are resisting the consequences and implications of AI in publishing. Last year, for instance, the Authors Guild and The New York Times filed separate lawsuits against OpenAI and Microsoft for copyright infringement.
Not only is generative AI being trained on copyrighted material; it is also being used to steal authors’ names and reputations, which threatens to hurt their sales as readers lose trust in them. Last summer I heard Jane Friedman, a 25-year publishing veteran who reports on the business, talk about this very situation at a writers conference. She discovered AI-produced books with her byline for sale on Amazon and listed in her profile on Goodreads. She’s not the only author compelled to deal with the theft of her content and name. Generative AI makes this piracy quicker.
“Very little, if any, of the early excitement about generative AI has been tied to demonstrable improvement in the quality of products and outcomes,” writes Warner. “In fact, most of the outputs from generative AI models are acknowledged as inherently inferior. The biggest difference is the speed with which they are produced.”
Furthermore, says Warner, we have trouble resisting AI technology because we’re disconnected from knowing what a good life truly is. We are not machines, as some scientists propose, nor the products of algorithms that reduce our lives to averages. Thus, we need to renew ourselves as sentient, discerning individuals who have values and are rooted in community with other humans.
Besides resisting the technology and renewing our humanity, Warner strongly believes, we need to explore both the potential and the pitfalls of generative AI, with emphasis on the latter. Doing so, he argues, is a matter of urgent public interest rather than a purely private concern. He advocates public discussion, debate, and regulation, especially in relation to schools since this is his area of expertise.
Warner takes what could be a dry, technical subject and enlivens it with plenty of personal experiences and real AI responses to prompts to illustrate his points. He adds over 14 pages of notes that reveal his research and incorporate opposing viewpoints. This is not an academic treatise or a diatribe against generative AI output, although Warner admits he sometimes wishes it would disappear.
More Than Words is not a book to race through but to chew and digest. It gave me a broader understanding of generative AI and the need to regulate its increasing encroachment on our lives.
If, like Warner, we value writing as readers or writers or teachers, we won’t settle for AI-produced imitation. Instead, we would do well to heed his warning: “Only humans can read. Only humans can write. Don’t let anyone tell you otherwise.”
Lin Johnson is a freelance editor and writer, the editor-administrator of The Christian Writers Market Guide, and the former owner-director of the Write to Publish Conference.