ChatGPT usage numbers keep climbing while public sentiment toward AI stays sour. Nilay Patel’s written and video essay, surfaced by Simon Willison and also flagged by John Gruber, tries to explain that contradiction directly. The core argument: technologists who see the world through what Patel calls “software brain” are becoming increasingly detached from everyone else, and AI is accelerating that split rather than bridging it.
Willison, who describes the piece as “a superb piece of commentary, and something I expect I’ll be thinking about for a long time to come,” links to the essay with minimal editorializing. The substance is Patel’s. But the fact that Willison — a developer who writes frequently about practical AI use — finds it worth highlighting suggests it’s landing in the right community.
What software brain actually means
Patel’s framing is that software brain is a particular way of seeing the world: as something to be automated, modeled in terms of information flows and data, optimized as if it were a business process. According to his essay, “software brain has ruled the business world for a long time. AI has just made it easier than ever for more people to make more software than ever before — for every kind of business to automate big chunks of itself with software.”
The problem he identifies isn’t that automation is bad. It’s that the software brain worldview has a limit, and that limit is the full scope of human experience. As he puts it: “not everything is a business. Not everything is a loop! The entire human experience cannot be captured in a database. That’s the limit of software brain. That’s why people hate AI. It flattens them.”
That word — flattens — carries most of the argumentative weight. The complaint isn’t that AI is incompetent. It’s that competent AI applied to human experience produces something that feels reductive rather than expansive.
The smart home as evidence
Patel reaches for a specific example to make this concrete: smart home automation. He notes that he is himself “a full-on smart home sicko,” with automated lights, shades, and climate controls throughout his house. But he then observes that Apple, Google, and Amazon — companies with enormous resources and years of effort — have struggled to make mainstream consumers care about smart home automation at all. “And they just don’t,” he writes.
This is a useful empirical data point. Smart home technology is no longer expensive or difficult to access; the barriers of cost and complexity have fallen substantially. And yet adoption among general consumers has remained narrow. The technology worked, in the sense that the lights turn on. But it didn't work in the sense that the general public ever decided they wanted lights that turn on that way.
From Patel’s perspective, this is not a failure of marketing or of execution. It’s a failure of premise. The premise was that people want their home to be optimized, that they see their environment as a system to be automated. According to the essay, “regular people don’t see the opportunity to write code as an opportunity at all.” They are not waiting for a tool that lets them script their environment. They didn’t have a frustration that the smart thermostat was designed to solve.
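To make "scripting your environment" concrete: this is roughly what a smart home rule looks like to someone with software brain — the home reduced to state plus triggers inside a loop. A purely illustrative sketch; the `Light` class and `evening_rule` function are hypothetical, not any real smart home API.

```python
from datetime import time

class Light:
    """A home device modeled, software-brain style, as a bit of state."""
    def __init__(self, name):
        self.name = name
        self.on = False

    def turn_on(self):
        self.on = True

def evening_rule(light, now):
    """A trigger: after 18:00, turn the light on. The home as a business process."""
    if now >= time(18, 0) and not light.on:
        light.turn_on()
    return light.on

porch = Light("porch")
evening_rule(porch, time(19, 30))
print(porch.on)  # prints True: the rule fired
```

The point of the sketch is how natural this feels from inside the worldview — state, condition, action — and how little it corresponds to any frustration most people actually had with their porch light.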
The gap that AI is widening
The broader claim in Patel’s essay is that AI is widening the gap between software-brain thinkers and everyone else, not closing it. As the essay describes, the cutting edge of advertising and marketing is already AI-driven automation — and it is not about creativity in the old sense. For people working inside that system, the affordances AI provides feel like liberation. For people outside it, the outputs feel impersonal and off.
This tension is not new. But the scale is. AI makes it possible for more people with software brain to build more automated systems faster than ever before. The gap between the worldview of builders and the worldview of users was manageable when building was slow and expensive. When building is fast and cheap, that gap can propagate at speed.
The practical implication, which follows from Patel's framing rather than constituting a separate analytical claim, is that AI adoption numbers and AI enthusiasm numbers measure different things. Usage is not the same as buy-in. ChatGPT's growing user base coexists with widespread skepticism about AI because people can use a tool without endorsing what that tool represents or how the people who built it think about the world.
Willison doesn’t editorialize further on the implications. He links, quotes, and notes it’s worth thinking about. That restraint is itself notable — the argument is worth letting sit rather than immediately resolving into a hot take.
The smart home parallel is the most economical part of Patel’s case. Huge companies, long time horizons, falling costs, still no mass adoption. The people do not yearn for automation, however elegant the automation becomes.