The case for friction
Culture/tech connector and ex-Googler Suhair Khan on art's role in creating new frameworks for AI
Japanese emails and letters traditionally start with a seasonal greeting. So I (Eleanor) was amused when I arrived this week to 35°C heat in Tokyo and received a maladroit email in English, wishing me well in this “scorching hell”. It felt like a ChatGPT translation of a seasonal phrase I’m missing, or perhaps the author is just very poetic…
This week, CWO is zooming out to think about AI and creativity more generally. Our guide is Suhair Khan (suhairk), an entrepreneur, AI advisor and ex-Googler who is bringing technology and culture closer. Our conversation made us excited about the possibility of collaborations between art and tech that make tech better for us all.
🔑 Key takeaways from our conversation
🎨 Beyond just AI, society doesn't value artists. We focus on the issue of copyrighted works being used to train AI, but forget a bigger problem: society in general does not value artists or creative work.
🔬 Art as tech R&D. Creative institutions and artists are using art to explore and develop new frameworks for AI. Tech companies need to open themselves up to recognizing and integrating these efforts as legitimate R&D.
🧠 Friction and imagination are what keep us human. Technology's promise to remove friction actually prevents us from doing the hard work of growth. Preserving our capacity for imagination requires defending activities like reading books and engaging with diverse cultural stories.
Our interview, edited for brevity and clarity, is below.
Eleanor Warnock: You went from Wall Street to a master’s in international development to Google. What did working in technology teach you?
Suhair Khan: A lot of my work today is about, ‘What is the intentionality of this? How do we think about where it goes next, and what is the landscape in which we should apply it?’ Google and Amazon, all these companies can build anything at this stage, but why are they doing it and who are they doing it for — this has been lost because they have unlimited budgets and the capacity to build at scale.
EW: I feel this with GenAI tools. What do you think about intentionality there?
SK: [AI] wasn't invented to steal ideas or to steal rights. It was invented to say, ‘How can you make this bigger and better?’ It speaks a lot to the culture of the tech sector. Look at Uber; you break lots of rules in the beginning because you assume that you're going to create economic value in new areas, for yourself and your shareholders.
The second layer of intentionality now is geopolitics and power. If you have access to information resources in the sector, it makes you powerful because it generates wealth. But it's not just about wealth. Abu Dhabi, Saudi Arabia — they’re very wealthy countries, but there is power in saying you ‘own innovation’ or you ‘own progress.’
EW: It seems like the ethics of something like using creative content to train models has been left out of this intentionality. Have you seen a case where an ethical framework was applied successfully by a technology or service after the fact in the service of encouraging creativity?
SK: I think a bigger point about protecting creativity and culture is: ‘How does society value creativity and culture?’
Think about creative people here during COVID. If you're a freelancer, you didn't get furlough: you were essentially left out of the economy and abandoned. Do we really value reading or books or literature or ideas in the way that we should, or the people who create and build them? Do we, as governments, give money to cultural institutions in the way that we should? That is as questionable as the fact that most governments right now are trying to facilitate the success of technology companies at a very large scale.
EW: You’re saying this doesn’t just apply to tech companies, but to society as a whole.
SK: Right. But to respond to your original question [about applying ethical frameworks], people are trying. There are many amazing things happening.
Not to be cynical, but I don’t know how much it’s going to matter. On the very specific question of copyright and AI, it seems like much has already been lost. If your content has been sucked up, whether it’s done legally or illegally, it’s now part of these models.
Ethics is not just about ‘be transparent’. It’s about understanding cultural values: how important are artists to the world? How much do women matter? That's why we need more companies and more startups to be building this space. We need more women founders. We need doctors or gynecologists informing how these platforms are designed from the ground up.
A reframing is not just about making big rules for AI; it's industry-specific, it's community-specific. For example, there are specific guidelines from the World Health Organization about brain data: who owns it? How is it distributed? Where is it stored? What is consent? What is attribution? These are all very specific questions that are relevant across AI, but you can't really apply them unless you're looking at a particular area.
It shouldn't just be left in this really high level of regulation for big tech companies, because they're not going to want to comply with everything, and they're not going to know everything. It's not their job to know about, for example, someone's brain and the data in their brain.
EW: So you’re saying that we need to take the debate down to the level of specific communities or experts?
SK: You have expertise; you are a writer. So do rabbis, so do doctors. The biggest challenge of AI is that it wipes that out, and it says, ‘I’ll help you or I’ll replace you.’
That’s why we need to think about valuing and investing more in expertise than just investing in the future of data centers in the UK and the next nuclear reactor. We need to think about who we are seeding and how we are supporting them.
EW: People in tech should be talking to the art world, but it often feels like they aren’t doing this because the walls of the tech world are so high. I see powerful people in tech intimidated by the art world.
SK: I love the Serpentine’s technology and art work. I tried to help them figure out how to frame themselves as R&D for technology. Creative R&D is creating more ethical frameworks for where AI could go next.
For example, the Serpentine had an exhibition a few months ago by two artists called Holly Herndon and Mat Dryhurst. They did a choral AI piece where they got choral singers to share their voices. They created a data trust for these choral singers to protect the ownership of their own voices and to make sure that if anything was monetized down the line, they had consented to share their voices. [The singers] are now part owners in the trust that contains their data.
There is R&D happening on the ground, but it’s not being treated as such because it’s not feeding into the next iteration of the next large language model. If you go to Google, there is constant R&D, but it’s increasingly siloed. You have a particular end goal, which is efficiency and usage, and you have far less of a diversity of voices inputting.
If we want real meaning, if we want real depth, [tech companies] are going to have to start to open up more, in a serious way, to interdisciplinary R&D in frontier technology.
EW: Have you seen other creatives embrace AI and do interesting things?
SK: There’s a startup in India that makes comic books using AI, and they do it from out-of-copyright novels. Their thesis is that people aren’t reading Tolstoy anymore, but young people love comic books. If you can provide them with a story or a deep dive into Russian literature, that’s amazing.
What’s interesting is that they work with professors of literature all over the world. They’re working with professors to make sure that they’re conveying the completeness of the story, enough nuance to be relevant, and that they are representing things in a way that feels appropriate. They’re not just taking stories and dumping them into a model.
EW: It makes me think about how writing used to be linear. You had an idea, you wrote it, and there was an output. But now it’s a circle, because what I write becomes remixed into a comic book or a video.
SK: You use your imagination to see things in your own head. Look at synesthesia and things like that; the neural networks that allow for that are not going to exist if you stop reading proper books and stuff. But equally, a lot of people in the world have never read books. Lots of people have unwritten languages. So I think of this as a new portal, a new gateway, but we really have to think about how we invest, and how we enforce reading and writing.
EW: You talked about owning progress as a political goal, but it sounds like it should also be a goal for countries to nurture populations who have strong imaginations.
SK: There’s an artist called Anab Jain, and she works a lot with AI. She grew up in Gujarat, in India. She had all these myths and magic stories that she grew up with, which were not at all Western. She now does a lot of work on multi-species intelligence, and she said that a lot of her inspiration for thinking about other forms of intelligence came from her childhood and the magic of believing that other beings exist. She believes in other ways of seeing the world because she had the privilege of growing up with this freedom.
EW: So she is doing things that no one has thought about.
SK: In my career, I always did what was most exciting, interesting and inspiring. That’s difficult, that’s friction. Friction is what keeps you doing the hard stuff, leaving your job and trying to build a path that feels right in a way where you’re trying to remind yourself what you believe in.
Removing friction, as technology does, doesn’t help us move forward. It helps us in a journey, but it doesn’t actually take us forward.
EW: Where can we build friction? How does this relate to how we see AI?
SK: I think the idea of what we find inspiring or exciting about technology, or what we’re mad about has been reduced into boxes that are almost fed to us. You’re not going to know unless it makes you feel bad one day, and you say, ‘ChatGPT made me feel bad about myself.’ You have to know that it did that in order to confront it and ask questions.
EW: This reminds me of an experience I had where I had a personal problem that I went to Claude about, and it gave me very bad advice.
SK: We’re quite vulnerable now. I think in the next couple of years we’re going to look back at how these systems changed us and who we are. It’s not that we’re dumber, but they’re changing what we think. There is something about that feedback loop, what these systems are doing to us. There is a day-to-day subversive feedback loop that is feeding into our work, into our choices, our relationships, our decisions.
To read more from Suhair, follow her Substack.
Other voices, other rooms
👩‍🏫 One teacher reports that her students don’t regard ChatGPT as cheating; they’re simply demoralized. Why write if a machine can? I (Kenn) think she’s on to something. One generation lost its attention span to social media. Another is about to lose its ambition to genAI. (Headline from 2027: “The new age of AIpathy.”)
🚨 Will GenAI cleave businesses into those who optimize what is, versus those who invent the new? Save money or earn new income? Lower the denominator or increase the numerator? The challenge has been nicely articulated for media. But it applies more broadly. In every industry, can you imagine a challenger brand using AI to do totally new things? Incumbents beware!
💡 Spanish philosopher José Ortega y Gasset published “The Revolt of the Masses” in 1930. He was aghast that modern man was turning off his critical faculties and individuality: “Anybody who is not like everybody, who does not think like everybody, runs the risk of being eliminated.” Ortega foresaw the rise of fascism, where the individual is subsumed by the mass, a commingling which produces a bland gray. Do we not already see “AI gray” in the stream of LinkedIn posts and other AI texts? I’m looking to Ortega for answers.
Thank you for reading! In the next few weeks, we’ll be exploring AI and writing from the point of view of a neuroscientist and a wedding speech writer. Who else should we speak to?
Last week’s use of AI was in the interview transcription, not writing. Kenn placed one intentional typo into this week’s edition. If you were forwarded this email, subscribe below.