‘Language Is Something Very Personal’
A Q&A with linguist Naomi Baron ’68 on the threats posed by ChatGPT.
By Chris Quirk
Naomi Baron ’68 (Photo Credit: Nikhil Bhattacharya)
“Will a future relative of ChatGPT be writing my next book instead of me?” asks linguist Naomi Baron ’68 in her latest volume, “Who Wrote This? How AI and the Lure of Efficiency Threaten Human Writing” (Stanford University Press, 2023).
It’s not a rhetorical question, says Baron, professor emerita of linguistics at American University.
Over the past year, OpenAI’s chatbot, built on a sophisticated artificial-intelligence tool known as a large language model, has dramatically intensified debates over how to safeguard self-expression and human creativity, prevent disinformation, and protect democracy.
Within months of ChatGPT’s public release in November 2022, OpenAI had launched even more-advanced versions. Google and Meta responded with their own large language models.
These chatbots can write essays, poems, and computer code; compose music; generate images from verbal descriptions; and answer test questions. Their accuracy, on the other hand, leaves a lot to be desired: they sometimes fabricate information and draw faulty conclusions.
Here, Baron answers some fundamental questions about our future with generative AI. What might it cause us to gain — and lose — as individuals and as a culture? What should we relinquish to AI? And what should we protect?
How do AI large language models like ChatGPT produce text?
They work by predicting, given the words so far, what the next word is likely to be. The highest-probability words are the ones most readily chosen, and these are generally less-interesting words. You could identify text produced by the original version of ChatGPT because it was plain vanilla.
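To make that concrete, here is a toy sketch in Python. It is nothing like the neural network inside GPT, which learns probabilities over subword tokens from enormous corpora, but it illustrates the basic move of always choosing a likely continuation:

    # Toy next-word predictor: count which word most often follows each
    # word in a tiny corpus, then generate by always picking the most
    # frequent continuation. Real large language models condition on far
    # more context and learn probabilities with a neural network.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # Tally how often each word follows each other word (bigram counts).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the corpus."""
        counts = following[word]
        return counts.most_common(1)[0][0] if counts else None

    # Always taking the single most likely word yields bland, repetitive
    # text (the "plain vanilla" quality Baron describes).
    words = ["the"]
    for _ in range(5):
        words.append(predict_next(words[-1]))
    print(" ".join(words))  # prints "the cat sat on the cat"

Sampling from the full range of likely words, rather than always taking the single most frequent one, is one way real systems produce more varied prose.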
But the technology has evolved. We’re now up to GPT-4. These days, you can harness a large language model in almost any way you can imagine. You need a sonnet? ChatGPT can write one, no problem. (See the sidebar.)
My hunch, and that of most people thinking about generative AI, is that however remarkable GPT-4 might be, it won’t be producing writing like Shakespeare, or Aeschylus, or Joyce could produce. Great literary works have appeal across time because they reveal truths about the lived human condition, truths readers identify with.
AI has no lived experience, no cultural perspective. And, at least so far, it lacks genuine eloquence.
What makes the act of writing uniquely valuable for humans?
Writing gives us an opportunity to see right before our very eyes what we’re thinking. It helps us think.
Consider all the times you’ve started writing and thought, I don’t quite know what I want to say, or even what I believe. You try to figure this out by putting some ideas down. You look at your words and think, Now I see the argument. Or you ask yourself, Am I being consistent here? Have I forgotten something? What would someone else say about my argument?
A week or so after ChatGPT was launched, Norwegian teachers of language and literature spoke out. They were concerned that if students used ChatGPT, they would lose the motivation to hone their own thinking skills. Without such skills, you don’t know how to construct an argument; you don’t know how to choose among arguments. These teachers were worried about raising the next generation of citizens in a democratic society.
Do automated language tools jeopardize self-expression?
Early in my research career, I developed a sense that language is something very personal. It’s something I can mold and control. I can use language to differentiate myself from other people.
We know your brain changes when you become literate. What does that do to the rest of your thinking, broadly? The links between language and thinking are highly intricate, and no one understands how all of them work.
If we don’t use language to spur our thinking, to refine our thoughts, we become less smart. We also become less empowered.
So it matters that what AI produces — a poem or an essay, for example — lacks human intent?
Yes. Knowing a living person created a written work is meaningful. We lose that human connection to creativity if we look only at the text.
Over the years, there have been literary theories, like New Criticism, that said, Just look at the work. Don’t look at the author, don’t look at the times, don’t look at the historical context. Perhaps one could do that in a time before a large language model could create poetry. But these days, I think it’s especially important to contextualize creativity, including in writing.
Even if a piece of music, or art, or writing that is AI-generated seems incredibly creative, we tend to value it less than an equally impressive work that is human-made. We still go to live concerts, even though flawless recordings by artists are available. We pay many times more for a handmade Oriental rug than for an equally elegant one that’s machine-made.
Can generative AI be genuinely creative?
This is a tricky question, in part because it’s so hard to define creativity, but also because AI technology keeps evolving. As human societies, and as individuals within them, we make different judgments about whether something is valuable. So maybe another way of assessing creativity is asking if we believe a work is valuable.
How can we manage AI’s potential dangers while harnessing its abilities?
In one word, education.
AI tools can do fascinating things. You might use them to improve your English, for example. And it’s wonderful to take a 20-page article written in a language you can’t read, put it into Microsoft Translator, and get the whole thing translated for you. Is it exactly the way a human would translate the article? Probably not, though it’s not bad.
But we need to give people motivations to write for themselves, reasons to feel it’s worth doing. One reason is that knowing how to write — and write well — is a vital tool for maintaining one’s individuality. The output of ChatGPT and other generative AI tools is often described as “beige,” meaning neutrally acceptable but in no way distinctive. Participants in my research speak of wanting what they write to convey their personal voice. For that to happen, students need to do the work.
A second reason is that you may not always have AI tools for writing or editing at your disposal. Can you still write when the internet is down? If not, you’re in trouble.
It sounds like you’re making a case for the liberal arts.
Which I really do believe in. I would love to see a cultural shift toward honoring writing as a means of thinking and self-expression. However, now that AI can work its magic in both editing and composing, it’s hard to convince students that they need to nurture their own writing skills.
At the same time, I do foresee a positive role for AI in the human writing process, particularly as a pedagogical grammar aid and an idea generator.
We will need to figure out the right balance between humans using their own writing skills and turning to AI for assistance.