What ikigai really means—and why it matters for AI
We speak to scientist Ken Mogi about Totoro, Pokémon and the joy of butterflies
This week’s conversation is with Ken Mogi, a neuroscientist and author whose work explores the messy intersection of brain science and those indefinable human experiences that make us us.
Ken studies qualia—think "the redness of red"—and the "explanatory gap that exists between the subjective qualities of our perception and the physical system that we call the brain." After Microsoft AI’s CEO warned this week about mistaking AI for conscious beings (Seemingly Conscious AI), Ken’s research feels urgent.
Beyond consciousness, we dug into ikigai—the Japanese concept that your career coach and yoga teacher keep butchering. Ken wrote a global bestseller on it, and spoiler alert: it's not about finding your "life purpose" through a Venn diagram.
What might this Japanese concept teach us about how to approach AI?
Our conversation is below, edited for clarity and brevity.
P.S. Ken shared his take on SCAI with me this week: it's theoretically possible but unlikely to produce genuine consciousness. The catch? Proving its absence might be impossible. (His full paper on AI and consciousness is worth your time.)
EW: Hello! First question — do you use GenAI in your writing?
KM: I have written three books; the third, about Stoicism, is coming out this September. I’ve never used GenAI, not even for the last one.
I don’t like the qualia associated with the sentences generated by ChatGPT. There’s something unnatural about them. I suspect that this enthusiasm and fascination with GenAI might actually be a fad.
The people at DeepMind studied how amino acid sequences are mapped into 3D structures in nature, reproduced it, and applied it to unknown amino acid sequences. We have a lot of proteins out there as a natural exhibit of this principle.
The same can be said about sentences. They are like proteins, folding in their own unique way, just as I am sure you have your own unique style. So do I. We have many examples of sentences, but we don’t yet know exactly how these sentences form in our brains. GenAI takes examples of these things and learns from the way we do things using next-token prediction. But I sometimes feel that we haven’t actually understood the way people create sentences.
Do you enjoy text generated by ChatGPT?
EW: No, not really. It’s just not interesting to me. There’s something strange about it.
KM: Yoshiharu Habu [a legendary Japanese shogi player] used to say he didn’t like to play shogi with AI. Because he’s such a great shogi player, he could feel the mind behind the other side, and he felt that there was something strange about the play and the mindset of AI.
EW: And yet everyone says that these tools will solve everything for us.
KM: I really appreciate the wonderful work of the people in California, the technological Prometheans. But they don’t distinguish between finely written prose and machine-generated prose. As you say, the world is dominated by those who don’t care about subtlety.
Do you know the famous incident on the NHK documentary when Hayao Miyazaki [an animator and filmmaker] said that he was “disgusted” when presented with an AI animation experiment? I think people, when they generate anime images and movies with GenAI and put them on X, assume that those images are as good as those made by humans, but they aren’t. There’s something very strange about these videos.
EW: You said these tools might be a fad, but do you think that something has actually changed with the advent of these tools? For the first time, we can draw on a large volume of human language and create new outputs from that.
KM: The ChatGPT moment is a real thing, totally astonishing, and nobody expected that. However, I was talking to physicist Shunichi Amari this year, the founder of information geometry, and he said something fascinating: no one knows how LLMs work.
Some people are pursuing a field of research called mechanistic interpretability to reverse engineer neural networks. Unlike with the human brain, we can actually monitor the internal state of large language models, but we still don’t understand how GenAI works. That means we cannot really compare it, in qualitative or quantitative terms, to the human creative process.
We are living in a truly fascinating era. Do you know that Kindle limits the number of books you can publish each day? I found this phenomenon where texts that are not protected by copyright are, I suspect, machine-translated and then put in Kindle format. Some of them are of really bad quality. So now on Kindle, I always go for established publications.
EW: One argument people make for why AI won’t proliferate is that the audience will recognize bad quality. But in many cases, it’s good enough. People are buying those books on Kindle.
KM: The signal-to-noise ratio is really deteriorating, and the same can be said about social media, which is why John Oliver’s recent take on AI was important.
EW: In a world where there is so much noise, how do you feel you can have any impact?
KM: Some people know what is good, and that's the only hope, isn't it?
The authentic Studio Ghibli paintings and drawings are different from those that people create with AI. There are people who care about these differences, and there are people who don't care. I want to be somebody who cares about these differences, because that's for the well-being of one’s soul. I know ‘well-being’ is a big word, but I don't want to read garbage.
EW: I think we should assign value to the process and labor that went into creating something, to the human being involved. What do you think?
KM: I know Elon Musk is not your cup of tea, but he is obsessed with the truth, and I think the truth is very important. How do you feel when you see fake videos of animals or people? There is no true event that was captured in those videos. The fact that someone actually wrote something is very important. There is truth there.
But when you talk to engineers and technologists, they are very cavalier about this. I’ll tell you a funny story. I was discussing the gut-brain axis with Jun Rekimoto, my colleague at Sony Computer Science Lab and a professor at the University of Tokyo. The gut-brain axis is an important embodied aspect of our cognition. But he thought the gut microbiome was superfluous to our interests, and that by uploading our brains we could move to an unembodied, digital self. I can’t believe that.
EW: I think embodiment is a core part of the feeling of ikigai, which you have written about.
KM: First, ikigai has been culturally appropriated. I’m not finger-pointing at anyone, but it has been interpreted in a Western civilization-friendly way, by defining ikigai as a cross-section of what you love, what you need, what you’re paid for, and what society needs. But you can have your ikigai for things you aren’t good at. You can have your ikigai for things you can’t be paid for, for things society doesn’t need. The Venn diagram is totally wrong.
I was asked what my ikigai was, and I talked about how when I see a butterfly flying, I feel so happy. For the record, I have never been paid a penny in my life for my love of butterflies.
EW: If ikigai is about love or joy, then I feel ikigai with writing because I love to write. That’s why I’m upset if a machine can do it for me.
KM: So here’s the fun part. Take Hayao Miyazaki. He has so much ikigai when he draws. When you see a film like My Neighbor Totoro, you can feel the inner joy that Miyazaki experiences when he makes these things. He never thinks about the Academy Awards or the box office.
The founder of the company that makes Pokémon games is the same. He wanted to express his childhood joy chasing insects and other creatures. The ikigai of childhood, playing with your friends and forgetting time. It’s not about making money, and it’s not about marketing. Japanese culture is at its best at evoking people’s emotions when creators have experienced ikigai while making something. It’s a deep, layered communication between people.
EW: So what does this mean for how we should think about AI?
KM: Translating ikigai as the purpose or goal of life is wrong. I think ikigai has a lot to do with the phenomenology of the flow of consciousness and the very fundamental feeling of living. Artificial intelligence systems, unfortunately, do not live. So I think one of the great ways that we can align with AI is that we humans try to live in the here and now, because that's what the AI cannot do.
Other voices, other rooms
🧠 Suleyman’s article (quoted in the intro) makes me (Kenn) feel like technologists are finally turning a corner. The dogma in Silicon Valley has long been that there’s no rational reason why AI can’t be conscious and that LLMs are on the path to AGI. Suleyman upends the debate by prefixing the idea with “seemingly”, i.e., not really. It reminds me of Kiev-born religious philosopher Lev Shestov’s critique of rationality: “Reason has done so much, therefore reason can do everything. But ‘much’ does not mean ‘everything’…they belong to two irreducible categories.” (Cf. Marci Shore at 32:30.)
🎠 How might AI undermine free speech and human rights? I had to write an essay on the topic this week for the fabulous Index on Censorship. The editor wrote a detailed description of the assignment, which I fed into Anthropic’s Claude to produce a draft. I wanted to “centaur” the 2,500-word essay, i.e., write it in tandem with the AI as an experiment. Sadly, it didn’t work. Though there were some good parts (the world being “startled” and a sentence on the Greek word techne), I couldn’t use the material, even as a duet. I’ll post my essay when it’s published, along with Claude’s version.
Reading this week
🕹️ Using Claude to launch a brick-and-mortar board game café (h/t Claire Vo)
✏️ A reminder of what good writing should feel like (h/t Artificial Whimsy)
🪦 “I found out about my death the way everybody finds out everything: from Google.” (h/t Dave Barry)
💩 Eleanor quoted in Sifted about how founders should build their brand in the age of AI slop.
Thank you for reading this week! Next week, we’re speaking to a preacher who is launching an AI tool to help other preachers write better sermons. Should God be angry?
Last week’s missive used AI for subject line brainstorming. Kenn resisted deliberately adding a typo this week, to shake things up a bit. Send feedback to ChiefWordOfficer@substack.com
“There are people who care about these differences, and there are people who don't care. I want to be somebody who cares about these differences, because that's for the well-being of one’s soul.”
As someone who identifies deeply with this sentiment (as well as the idea of a soul — as slippery as it is to define), I wonder: are we at risk of a linguistic chasm opening between the two sides of this debate? The language I resort to when defending my own sanity in these conversations is amorphous, bordering on the woo-woo: souls, spirits, forces. These terms elude materialist definitions, and technicians therefore find them easier to disregard. What means do we have of closing that gap?