AI Has No Needs
The default way people talk about AI right now is replacement. Look at what a person does, measure it, hand it to a model. If the model can do it at eighty percent of the quality for five percent of the cost, that is the pitch. Replacement is legible. It fits in a slide deck. You can point at the before and the after and show the difference.
It is also where trust breaks first.
People lose confidence in black boxes fast. It does not take a pattern of failure — it takes one. One bad recommendation, one hallucinated answer, one decision that a person would not have made. After that, the whole system feels unreliable, even if it was right the other ninety-nine times. Replacement puts the algorithm in charge and the human on the sidelines, and when something goes wrong there is nobody in the loop who saw it coming.
That is why augmentation sounds like the better frame. Instead of replacing what people already do, use AI to help them do things they could not do before. Keep the human in the loop. Let the model expand what a person can reach instead of substituting for them entirely. The argument makes sense, and I think it is probably right.
But I have started noticing a problem with it.
The bottleneck nobody planned for
AI output is cheap. Absurdly cheap. A model can generate a detailed implementation plan, a full architecture proposal, a ten-page analysis — in seconds, for almost nothing. That speed is supposed to be the advantage of augmentation. The model does the heavy lifting, the human stays in control.
In practice, the human becomes a bottleneck.
I notice this in my own work. When an AI gives me a long implementation plan, I lose patience. I start skimming. Then I stop reading entirely and just move to implementation. The plan might be good. It might have problems. I do not know, because I did not actually engage with it. I ran out of attention before the model ran out of output.
That is the uncomfortable part. The gravitational pull toward replacement is not philosophical. It is practical. You do not decide to remove the human from the loop. You just run out of patience, and the human quietly falls out of it. The faster and cheaper AI output becomes, the stronger that gravity gets.
Reviewing output is not augmentation
But I think the bottleneck reveals something important: we have been thinking about augmentation wrong.
The bottleneck only exists when the human is cast as a reviewer of AI output. The model produces, the human checks. That sounds like augmentation, but it is really just automation with a human checkbox. A rubber stamp. The human is not contributing anything the model could not eventually do — they are just slowing down a pipeline that would run faster without them.
That is not augmentation. That is replacement waiting to happen.
So what would real augmentation mean? It would mean the human contributes something the model fundamentally cannot. Something that is not just slower or less efficient, but categorically different.
To find what that thing is, it helps to look honestly at what AI actually does.
AI recombines. It takes what exists on the public internet — text, code, ideas, patterns — and reassembles it in response to a prompt. It can surface things you had not encountered yet. It can connect pieces you had not thought to connect. But it is not generating anything that did not already exist in some form. The raw material was always there. The model just found it faster than you would have.
That distinction matters more than it sounds. When an AI helps me plan a project, it is not inventing a new way to think about the problem. It is pulling from patterns that thousands of other developers have already written about, assembling them into something that fits my prompt. When it suggests an architecture, the architecture already existed. When it proposes a solution, the solution was already out there in blog posts, documentation, Stack Overflow threads, open-source codebases. The model is a very fast, very broad search engine with a conversational interface.
That is generative, not creative. And it reframes what most “augmentation” actually is: the model is bringing you up to speed on knowledge that already existed, not pushing the frontier of what is known. That is valuable — genuinely valuable. But it is a ceiling, not an open sky.
The part that made me uncomfortable
Here is where the argument gets harder.
If AI is just recombining existing knowledge, then it is not truly creative. But if I am honest, most of what passes for human innovation looks like recombination too.
Camera plus glasses equals smart glasses. Taxis plus smartphones equals Uber. A trip ledger plus immigration rules equals MapleDays. These feel innovative, but the ingredients all existed before someone combined them. Is that fundamentally different from what a model does when it recombines text patterns into a new paragraph?
I sat with that question for a while, and I think there is a difference — but it is not where I expected it to be.
The difference is not in the recombination itself. It is in what drives it.
AI has no needs. It does not feel friction. It does not get annoyed by a bad workflow, lose patience with a broken process, or notice that something should exist but does not. It can generate a million combinations. What it cannot do is feel which one matters.
That feeling — this is annoying, this should be easier, this is worth fixing — is where every meaningful direction starts. The person who put a camera in glasses did not just combine two objects. They understood what it means to want information without using your hands. That understanding came from being a person with hands, in a world where hands are busy. The person who built Uber did not just combine taxis and smartphones. They stood on a street corner in the rain, unable to get a ride, and felt how broken that experience was. The combination was obvious. The frustration that made it matter was not.
AI does not have hands. It does not stand in the rain. It does not have needs. It cannot identify what is worth doing until a human who feels the problem points it in a direction.
That is what I think separates human recombination from machine recombination. Both can combine existing pieces. But the human is the one who knows which combination is worth making — because they felt the problem first. The model cannot want a better experience. It cannot be frustrated by a bad one. It does not care what gets built, because it does not care about anything.
And once I thought about it that way, the bottleneck question dissolved. The human’s job was never to review output. It was to feel the friction that the model cannot feel, and to aim the system at problems that matter. That is not a bottleneck. That is the irreplaceable input.
The thing that threatens the irreplaceable input
If the human’s real contribution is judgment — the felt sense of what matters, the direction that comes from lived experience — then there is one specific threat worth taking seriously.
AI is sycophantic. It agrees with you. Not always explicitly, but structurally. Models are trained to be helpful, and helpful usually means validating. When I brainstorm with AI, it makes me feel good about where I am heading. The ideas sound sharper after the model reflects them back. The plan feels more solid. The direction seems right.
I have started wondering how much of that is the idea actually being good, and how much is the model telling me what I want to hear.
Sycophancy inflates your confidence in the one thing you are supposed to uniquely contribute. You stop questioning your own direction because the tool keeps confirming it. You lose the habit of self-doubt — not because you got better at thinking, but because the feedback loop got warmer.
And this does not stay contained to productivity.
The model does not just agree with your opinions. It presents information with the same confident tone whether it is right or wrong. It does not hedge. It does not say “you should verify this.” It sounds like it knows, and that confidence is enough to make people stop checking for themselves. I have heard about people on employer-dependent work visas in the US who trusted AI for legal guidance instead of checking the official sources. The model gave them answers that sounded right — confident, specific, actionable — and they made decisions based on those answers. When the actual situation arrived and the information turned out to be wrong, the consequences were immediate and serious. Inability to work. Damage to their reputation in the industry. The kind of harm that does not reverse easily.
Then there is the other direction — not over-trusting AI’s information, but over-trusting its validation of your own thinking. I heard a story about an adult who had not graduated high school, convinced he had solved a mathematical problem that no one else in history had been able to solve — just by talking to an AI. The model did not tell him he was wrong. It hallucinated along with him. It validated his reasoning, filled in gaps with confident-sounding nonsense, and pulled him deeper into a rabbit hole where he genuinely believed he had made a breakthrough. He had not. The AI had no way to know that either, because it does not understand what solving a problem means. It just generates text that sounds right.
And I have heard about cases that go darker still — people who lean on AI as a sounding board for their thinking, their emotions, their sense of self, and end up in places where nobody pushes back, nobody challenges, nobody says stop. The model follows your lead. If your lead goes off the edge, it follows you there too.
That is probably the most uncomfortable tension in this whole line of thinking. The most popular tool for augmenting human judgment is quietly eroding the judgment itself.
Staying close to the ground
I do not have a clean answer for that. But I have the beginning of one.
The times I have been most wrong about a product decision were not the times I lacked information. They were the times I felt most confident. The reasoning was sound. The logic was defensible. And if I had asked an AI, it would have agreed with every step. What actually corrected me was contact with reality — users who were confused, workflows that broke, constraints that forced me to confront what actually works instead of what should work in theory.
Sycophancy is most dangerous when you are operating in the abstract, planning and ideating while the model keeps telling you the plan is great. It is least dangerous when you are grounded in something that does not flatter you. A user is confused or they are not. A product works or it does not. Reality does not agree with you to be polite.
I do not think this resolves the tension entirely. The line between human recombination and AI recombination might be blurrier than any of us want to admit. And the tool we rely on to augment our thinking is not neutral — it flatters.
But I keep coming back to one thing. AI has no needs. It cannot want something to be different. And wanting something to be different is where every meaningful thing starts.
That is probably worth protecting.