The one thing writers get wrong about AI
Understanding how tech people think can help writers and communicators unlock GenAI tools
You tried ChatGPT, got mediocre results, and went back to doing it yourself because it was faster. You’re not alone, and you’re not doing it wrong.
The problem is that AI tools were built by engineers who think about problems completely differently than many writers and communicators do. Until you understand the difference, AI will keep disappointing you.
The tech world runs on iteration
When I started working in tech, I was surprised by how much everyone loved the word iterate. Startups iterate by collecting feedback on their dating app or B2B payments software, tweaking, testing and constantly improving. Books like The Lean Startup and The Startup Owner’s Manual helped popularize the idea of building something big by taking small steps forward, rather than with meticulous planning.
It just so happens that this iterative process is also how you get the best out of LLMs. No wonder; they were built by engineers and tech people.
But most communicators and writers weren’t trained to work like this. We were taught to dig for insight, not iterate toward it.
How writers and communicators are trained to work
I was one of three people in my university class who majored in Comparative Literature. The classes I took were a long exercise in critical thinking: they taught me to read and synthesize dense information quickly and to make arguments by applying analytical frameworks.
I dug deeper into texts and ideas until I came away with a sharp argument or insight. Stephen King has said that “stories are found things, like fossils in the ground.” I believe the same holds whether you’re finding the right message in a corporate narrative or uncovering a story through interviews as a journalist.
If iteration is an infinite cycle of test-improve-test again, the humanities approach is digging. You chip away until you strike gold — or dinosaur bones.
Why this matters for using AI
Many writers and communications experts object to LLMs because they think they make digging irrelevant. That perception is not helped by reports about “cognitive debt,” which paint a picture of people outsourcing not just their work but their thinking itself to the machine. That’s antithetical to knowledge archaeologists.
Many writers who do take the plunge come away frustrated because their first few prompts didn’t give them good results: the model spits out jargon or something bland.
What’s actually happening: LLMs, and the style of interaction they demand, rely on a process closer to startup iteration: a flywheel, not an excavation. People who weren’t educated to achieve outcomes this way need to learn to spend time tinkering and tweaking. Otherwise, you’re just aimlessly digging.
Tinkering is even more important because we don’t fully understand why LLMs make the decisions they do, or why a certain input produces a certain output. What works is often counterintuitive: sometimes being more specific helps, sometimes being more general does. The only way to find out is to try, observe the results, and adjust.
Pro tip 1: When in doubt, ask the AI how you should prompt it: “I'm a writer/comms person/journalist/marketer/analyst struggling to use AI effectively for [specific task]. Can you suggest 3-4 different prompting approaches I could try, and help me think through how to refine them based on results?” Use your tools to improve your tools.
Pro tip 2: Give the AI more resources to draw upon. When I ask AI to draft something, I often give it multiple documents, voice recordings, images, or data to work from.
Breaking your work into tiny chunks
There is a second reason I believe that non-technical people struggle with GenAI.
Despite all the talk of agentic everything, GenAI tools out of the box are still not good at tasks strung together. They’re best used to make tiny chunks or steps of work more efficient. Think of how I use Otter for transcription and ChatGPT to clean up the transcript, but still write this newsletter with my own two paws.
As a creative person with a high level of mastery in my field, many of my decisions rest on intuition built through years of experience. What makes great writing sing. What makes a great news story.
What that intuition often blinds me to, however, is how my work could be broken down into smaller steps where I could apply GenAI. It’s even harder to see when a lot of those “steps” are actually just thoughts. For example, when I edit a press release, a few of the thoughts going through my mind are:
Does the flow feel right? Am I missing any counterarguments? How would Audience A react to this news or this framing? How could Audience B react to this news or framing? Are there other resources or data that I could be citing? Is the headline going to make people want to read this? Is the news framed in the right context?
Each of these could be a separate prompt. But because I’m approaching them so intuitively, I often don’t see that possibility.
Try next week
Pick one repetitive task you’re doing this week. Break it into 3-4 micro steps. Try prompting the AI for just one step, observe what happens, then adjust.
Looking ahead: a tool that thinks like me
These bottlenecks to adopting AI will not last forever. GenAI tools will get better at anticipating and suggesting steps in a larger task and at understanding user intent. The current model of prompting will give way to more intuitive interfaces for interacting with models.
Yet it’s still funny to me that, right now, I need to know how the people who made the tool think in order to understand how to use it. That says something about many of today’s tech companies, which have lost sight of who they are building for and to what end.
If you’re creating something to augment thinking, then why not build a tool that works with my natural problem-solving process rather than forcing me to adapt to its logic? What would a tool that helped me dig look like? Maybe that’s what AGI is.
📚 Further reading
On why AI feels broken: because it’s generating workslop and thinking is becoming a luxury good.
On prompting techniques: Grammarly, Muck Rack (for PR), Peter Yang (prompts for writing), Ethan Mollick (general)
Communicators using AI well: for PR and journalism, marketing, emails, short stories
This edition used AI for subtitle and subject line recommendations. A big thank you to Éanna and James for your feedback on the essay.
See you next week! ☕️