"Then only" vs "Only then": ChatGPT's grammar fixes and cultural power
Journalist and novelist Vauhini Vara on AI's limits and hidden biases
Welcome to CWO, where Eleanor and Kenn explore AI, writing, ideas, humanity, &c.
✏️ The day after I (Eleanor) quit my job to write my first novel, I gave Claude my credit card info and locked in. I wanted to know if AI could be of any help.
👻 I looked for inspiration from other AI-curious writers. At the top of that list: Vauhini Vara, a journalist and author whose debut novel was a Pulitzer finalist. Her most recent book, Searches: Selfhood in the Digital Age, explores how Big Tech influences our communication through a series of personal essays. It also includes a version of her viral short story ‘Ghosts,’ in which she used a ChatGPT predecessor to write about her sister’s death.
💡 I felt a strong kinship with Vara, not just as a fellow WSJ alum (as is Kenn), but as someone actively experimenting with AI and coming up disappointed. I was overjoyed when she kindly agreed to speak to CWO. Our chat gave me an appreciation of how my experiments, and those of other writers, can be a form of creative resistance in the face of ubiquitous tech. The interview is edited for brevity and clarity.
🔑 Key takeaways from my conversation
AI bias is not always obvious: We might recognize blatant examples but miss subtler ones, such as English being edited into a more Americanized version. These micro-corrections embed cultural hierarchies into our everyday communication.
Language is a tool of communion and oppression: This tension has existed throughout history, but companies such as OpenAI and Google are using language in ways that consolidate their power beyond what has ever been done before.
LLMs could be powerful tools of persuasion for commercial aims: Imagine an AI that mirrors your communication style to build artificial trust, knows exactly how to suggest you buy a product, then takes a cut of the revenue when you make the purchase. (The FT reported this week that OpenAI is considering exactly this.)
EW: In the writing of Searches, what surprised you in terms of the engagement you had with this technology?
VV: I was very attuned to the question of the extent to which ChatGPT, when I dialogued with it about my book, would potentially behave in ways that didn't serve my artistic goals. I thought it might happen, but I wasn't sure how it would show up. So I was very surprised by the intensity with which ChatGPT worked to convince me to write a book that was more positive about Big Tech, about OpenAI, about Sam Altman. It was even more than I had anticipated.
EW: This also makes me think about expertise. If you are an expert in tech and an expert writer, you are attuned to what’s pushing back against you. But many times, we’re not conscious of that feedback.
VV: Absolutely. What I was hoping to do in the book was to provide enough information in the more informational chapters about how these products work, so that readers would then be armed with the tools to critically analyze that ChatGPT dialog. The goal is for people to read a book like this and come away thinking more critically about their own engagement with these technologies.
EW: Can you speak a bit more about the bias that is inherent in these tools?
VV: There is a very strong research consensus now that the prominent commercial large language models and image generation models produce text and images that reinforce biases. It's a fact that I knew from my research, but I've seen it manifest in all kinds of ways in my own experiences.
One that is very striking to me is the way in which my dad has taken to using ChatGPT to edit his texts, including his emails. I had sent him parts of my book that he appears in to get his feedback, and he responded in an email that he had edited with ChatGPT. In that email, ChatGPT had changed a line of his to say, “If you really want to understand what people are going through, you have to really sit down and talk with them. Only then can you understand their experiences.”
The line was almost identical to what he had originally written. It was just that, in his version, he wrote “then only” rather than “only then,” and ChatGPT had changed it. “Then only” is a standard construction in Indian English, the English he grew up learning. “Only then” is the construction in American English and other forms of English.
If what OpenAI sells us is the idea that the language it generates is more proper, more correct, then there’s an implicit message sent when it edits my dad’s email: Indian English is wrong English, and the way to correct that English is to turn it into American English.
I like that example in particular because it’s more subtle than what we’re used to. We have all heard stories about Elon Musk’s AI platform talking about the genocide of white people in South Africa and that sort of thing. Those examples are obvious to us, and we think we’ll recognize bias when we see it. But there are all these forms of bias that are harder to immediately recognize.
EW: This makes me think of my visit to Samuel Johnson’s house yesterday. He is known for creating the first comprehensive English dictionary, which served to improve literacy but also to impose a standardized English across British colonies. What does all of this mean for communication more broadly?
VV: Since the beginning of communication, whether textual, oral, or visual, there has probably always been this push and pull between the use of communication to consolidate power and wealth and its use for other forms of communion.
It's easy to say we've always used language just to interact with each other, and now companies like OpenAI have come along and are trying to colonize language. I think the reality is that those two forces have always been at play. In the case of Samuel Johnson, for example, both were happening at the same time. That dictionary is both a tool for communion and a tool of oppression.
OpenAI, Google, and so on are building these AI tools that consolidate their power and wealth using language in ways that go beyond what has been done before. The reason I'm interested in engaging with these technologies in my work is that, as with any technology we've invented, there is always room for us to express our agency in using these tools in ways that might subvert the intentions of the companies behind them. If I can include a ChatGPT dialog in my book in a way that maybe undermines OpenAI’s selling point for its technology, I find that interesting and exciting.
EW: If this tension has always existed, what is different about large language models?
VV: This is the first time that I can think of that human language has been used as a raw material for a language-based product. But what I find even more intellectually interesting is what the product ultimately could be.
All of these companies are under pressure to make a return on their investment, and so they are all slowly talking more openly about how they might monetize their free users. That's when these questions about the colonization of language become really interesting, because of what these companies can learn. Technology companies have always employed our use of language to create a product. When we search for things on Google, for example, our use of language is productized in the form of Google telling marketers that they can target us with messages.
With large language models, the possibility now arises for these companies to understand, through our dialogs with them, exactly how language might be used to convince us. They now know not only what we're interested in, but how we use language and how we respond to language. One can easily imagine a future in which these models get more sophisticated in, for example, matching the tone, style and substance of the way we talk, so that we feel more of a sense of affinity with the chatbot.
So in that future, let’s say I'm in Madrid and I'm having some kind of health concern. I turn to ChatGPT to ask it for advice. It has enough information to suggest that I go buy an herbal remedy at the shop that is half a block away, and it knows how to suggest that to me in a convincing way. When I click on the link to that shop, OpenAI could potentially get a share of the revenue the shop earns from me. To be clear, this is all very hypothetical. I'm not saying that the companies are doing it now, but it's a way in which this technology could certainly be deployed.
EW: Tell me about your writing process and how it involves AI.
VV: Let me talk about this book in particular, because I don’t know if I would describe myself as somebody who has a regular writing practice that involves AI. In this book, every time I used AI, I thought about what the purpose was. I usually had some rhetorical goal in mind. Not necessarily while writing, because I'm always experimenting with all kinds of things, and I don't put pressure on myself to know why I'm doing it while I'm doing it. But at the point at which I decide to publish something, I want to have a clear sense of what it is accomplishing.
The first AI experiment that I have in the book is the chapter called ‘Ghosts,’ in which I use a predecessor to the models underlying ChatGPT, called GPT-3. I use GPT-3 to write this essay about my sister's death and my grief over it, and my rhetorical goal there was to show what a technology like this claimed it could do, how far it got in fulfilling that promise, and where it fell short. To the extent that the essay has a meta argument, it is: this product can't do what it claims to do.
EW: The version of ‘Ghosts’ in the book is not the same as the version you originally published, right?
VV: When it was a standalone essay, people read it and were like, “Oh, cool. You can use AI to express yourself.” My goal had not been to show that. My goal had been to show how, ultimately, that promise falls short. I made that edit in the book to try to underscore in the form itself what the rhetorical point was.
With ‘Ghosts,’ because these large language models were still relatively new to the public, I went into that exercise more in good faith, thinking, “Who knows? Maybe this product is going to express something for me that I don't know how to express.” When I saw that it didn't, that laid the groundwork that allowed me to go into these other experiments knowing that the promise probably wouldn't be fulfilled. So in the ChatGPT dialog [in the book], I'm playing the character of a literary clown, acting more naive than I really am.
EW: I’m working on another novel and grappling with how much AI-generated text could be a part of that. What would you say?
VV: One piece of advice I give to writers who ask, “How should I approach this?” is: transparency, transparency, transparency. As long as you are clear to a reader about what you're doing here, it's all fair game.
🧠 Final note: Vara has been running an anonymous survey of women about what it is like to be alive in the world. Respond here.
Other voices, other rooms…
🧠 The cognitive cost of AI assistance: People are agog about an MIT study showing that four-fifths of students who wrote an essay with AI couldn't accurately quote what they had submitted minutes earlier, compared to just one in ten who wrote it themselves. I’m surprised people are surprised… more shocking are the 11% who couldn’t cite their own work, no? My quasi-sentient colleagues at The Economist noodled this week over whether AI makes us dumb.
🏫 Institutional response: Yes, a “National Academy for AI Instruction” has just launched in the US, a teacher-training center to prepare little humans for a radically different workplace. It’s backed by the teachers’ unions AFT and UFT, and is the brainchild of Roy Bahat, who heads Bloomberg Beta — and is an AFT member. (There’s surely a story in that…) I love the development because, while politics is probably not downstream from culture, social attitudes are absolutely downstream from primary-school teachers. Read more.
💭 Personal reckonings: Last week I (Kenn) moderated a virtual panel on AI for a think tank and fretted that I had screwed up, ending the session too abruptly. So I asked the organizer for feedback. Her reply had useful tips… but was expressed in that now-familiar formula of problem statement, bullet points and summation. I immediately phoned her; my intuition was right. She leaned in: “Nobody should be ashamed to use AI for writing — it’s a helpful tool.” We agreed I could have turned to AI myself for feedback.
☕️ Thank you for joining us on the journey. What are some unexpected uses for AI in writing you’ve been seeing? Let us know at chiefwordofficer@substack.com.
In last week’s post, we used AI for interview transcription and to write a first draft of the takeaways. Find out how we used AI here in next week’s missive. Also, there is one intentional typo this week and an unsubtle reference to a Truman Capote novel.