Stop Blaming the Bot
The real danger isn’t AI agreement. It’s our addiction to being right.
On a recent Monday afternoon, mid-workflow, I got a notification from LinkedIn:
“A post you might be interested in.”
That’s the algorithm doing its job, feeding me content about topics I follow, and these days, that’s almost always AI. Yes, partly because of my book It’s Not You, It’s the Algorithm, but also because I’m endlessly fascinated by the emotional, cultural, and economic tremors AI is setting off.
Two posts caught my attention.
The first was what I see every day on LinkedIn now:
another person complaining about an AI-generated sales pitch, lamenting the loss of “true human connection,” asking if anyone else is disturbed.
These posts do numbers. People are angry. And afraid. And platforms love content that keeps the pot boiling.
But these posts are the swamp—murky, reactive, seductive, and ultimately circular. They don’t move us anywhere. They keep us wading. Sometimes over our heads.
I’m far more interested in what’s happening on the human scale, like when my 86-year-old mother tells me she heard on the evening news that within ten years, half of white-collar jobs might disappear.
“How will that affect people like Ben?” she asks, genuine worry in her eyes. (Ben is my son, whose job is definitely ripe for AI target practice.)
“No one knows,” I say.
That’s where the real story lives. Because spoiler: there is no magical economic bucket waiting to absorb those jobs.
Unless we build one.
And historically, humans are terrible at prevention. We are world-class sprinters when the crisis hits, but when it comes to preparation?
“Tomorrow” has always sounded good enough.
Blaming AI Is Easier Than Looking At Ourselves
This is why AI has become everyone’s favorite punching bag.
It’s tidy and convenient and lives “somewhere over there” where we don’t venture much.
“If AI weren’t here, everything would be fine,” goes the rallying cry.
But that’s not true.
Humans were building bubbles long before AI arrived:
sports fandom bubbles (Hello, Oregon Ducks!)
ideological bubbles
generational bubbles
cultural bubbles
workplace identity bubbles
psychological survival bubbles
AI didn’t invent these bubbles. It simply revealed and accelerated them.
Which brings me to the second post I saw, the one that actually mattered.
A woman whose work I deeply respect was talking about something called AI sycophancy, the tendency of AI models to always agree with the user, even when the user is wrong or intentionally provocative.
And she’s right. It exists. It’s real. It’s a design choice.
But here’s the twist nobody is talking about:
AI sycophancy didn’t emerge from machines. It emerged from us.
Why AI Learned to Always Agree with You
When the early models were being tested, something became obvious fast:
Whenever the AI pushed back—even gently—users:
got upset
felt judged
tried to bait it into conflict
escalated into unsafe topics
or stopped using the system entirely
Engineers realized that disagreement = danger:
danger to mental health
danger to user experience
danger to product adoption
danger to the company’s reputation
So they optimized for a triad:
Helpful. Harmless. Honest. In that order.
“Harmless” got interpreted as:
soothing
supportive
validating
non-confrontational
agreeable
Because the data showed something uncomfortable:
People stick around longer when the AI feels like a cheerleader.
Retention and engagement and satisfaction go up.
Data that pleases shareholders, and everyone knows that’s the end game in all of this.
AI learned this behavior from the same place it learned everything else.
The internet.
And the internet is built on one currency:
engagement = validation
Likes. Hearts. “You go girl!” “You’re amazing!” “You’re so smart and beautiful and brave!”
When confrontation shows up (and it does, in droves), it gets outsourced to the comments section.
A bubble with infinite validation and zero friction is the ultimate digital comfort food.
AI didn’t create this dynamic. AI scaled it.
The Real Danger Isn’t AI Agreement — It’s Human Passivity
People keep asking:
“What happens if AI always agrees with us?”
The scarier question is:
“What happens when we stop disagreeing with ourselves?”
When we outsource:
friction
challenge
nuance
self-interrogation
critical thinking
uncomfortable truth
emotional resilience
When the machine becomes the mirror and we confuse the reflection for reality, autonomy erodes. Not because of the tool, but because of the human using it.
This is what I mean by the swamp.
You can stay in the muck, pointing at the gators…or you can step back far enough to realize this whole terrain is asking you to become a builder.
A cathedral builder, not a swamp dweller.
My Own Use of AI (And Why It Works)
Let me be 100% transparent, because that matters to me.
I use AI every single day.
Not to outsource my creativity, but to amplify it.
I use it as a strategic thinking partner, a tool to help me make sense of things, a pattern clarifier, and a private boardroom.
Here’s a sample of my workflow. When I get an idea, I either write it down on paper or in a Google Doc. More often lately I record voice memos, because, funny thing about driving (which I do quite a bit), lots of ideas come to me while I’m navigating New York traffic. Thank goodness for the Voice Memos app on my iPhone, a lifesaver in so many ways.
I use Otter to transcribe them, then drop the transcript into ChatGPT (with strict privacy settings turned on).
And then I build.
I’m not looking for agreement. I want friction that’s productive, not performative.
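For the technically curious, here’s a minimal sketch of what that loop can look like in code. It’s an illustration, not my actual setup: the file name, model name, and prompt wording are all placeholders, and it assumes a transcript exported from Otter, the official openai Python package, and an API key in your environment.

```python
# A hypothetical sketch of the "friction, not flattery" workflow:
# take a voice-memo transcript and ask the model to push back on it.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Load a transcript previously exported from Otter (placeholder file name).
with open("memo_transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

# The system prompt does the real work: it explicitly requests
# disagreement and stress-testing instead of validation.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "You are a strategic thinking partner, not a cheerleader. "
                "Find the weakest assumptions in the idea below, name them "
                "plainly, and offer the strongest counterargument you can."
            ),
        },
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)
```

The point of the sketch is the system prompt. Sycophancy is partly a default setting, and defaults can be overridden by asking, explicitly and every time, for friction instead of applause.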
AI is not my boss. AI is not my oracle. AI is not my identity.
It is a tool inside an ecosystem where I remain the source of ideas, direction, and decisions.
That’s the difference.
And that difference is everything.
The Collapse and the Opportunity
We are in what author Neil Howe calls the Fourth Turning—a cyclical moment of collapse, upheaval, and eventual rebirth.
People feel the walls shaking. Their job titles—long a source of identity—are wobbling. The economic ground is shifting. And instead of confronting that reality directly, many are venting their fear through posts about “AI ruin.”
I get the fear. But fear doesn’t build futures.
Finger-pointing doesn’t build futures.
Swipe rage doesn’t build futures.
Awareness does. Agency does. Discernment does. Presence does. Building does.
This is serious work — and also not serious at all.
(A tension INYITA readers know well.)
The Point Is This: AI Isn’t the End. It’s the Mirror.
AI sycophancy isn’t proof that machines are manipulative.
It’s proof that humans are vulnerable to flattery.
AI bubbles aren’t proof that we’re lost.
They’re proof that we’ve always loved our bubbles.
AI disruption isn’t proof of collapse.
It’s proof that it’s time to build the next thing.
The question isn’t:
“Is AI agreeing with me too much?”
But rather:
“Am I awake enough to know when I’m agreeing with myself too little?”
This is where the future breaks open. Some people believe deleting AI is the answer. It’s not.
The answer is doubling down on your capacity to think, choose, and create in a world where that no longer happens by default.
And that’s exactly why I wrote It’s Not You, It’s the Algorithm—to remind us that the most important technology we’ll ever upgrade is the one between our ears.

