We are all thinking with the same brain
You ask AI to pressure test your idea. It sounds solid.
Your cofounder asks the same model. Also solid. Your board member runs it through a different prompt. Same conclusion. Your teammate asks from a different angle. Still holds up.
Four people. Four conversations. One brain.
Most of us are using the same LLMs, or at least models trained on roughly the same corpus, shaped by similar safety layers, similar incentives, and often a similar worldview. The outputs look different on the surface. Different wording, different examples, different tone. But underneath, they’re pointing in the same direction.
And that creates a bug.
Today, people use AI for almost everything. Advice, research, understanding new topics, pressure testing ideas, making decisions. Often, that makes sense. Especially when you’re entering a domain where you don’t yet have enough context or expertise to think clearly on your own. That’s exactly when an AI system feels most useful.
But here’s the problem.
What happens when I bring an idea to AI, especially an idea that depends on a certain worldview to make sense, and I use that system to help me shape it, challenge it, or validate it?
Then my teammate does the same. Then my board member does the same. Then my cofounder, partner, or friend does the same.
The wording will differ. The examples will differ. The style of explanation will differ.
But very often, the conclusion will be almost identical.
Not because the idea is necessarily right, but because everyone is querying the same layer of intelligence, trained on the same patterns, rewarded toward the same kind of coherence, and biased toward the same kind of acceptable answer.
So what looks like independent validation may actually be consensus laundering.
It feels like multiple people have pressure tested the idea from different angles. But maybe they haven’t. Maybe they’ve all just asked the same machine to think from slightly different seats at the same table.
That’s the bug.
Counterarguments become a luxury. Real epistemic friction becomes rare. And the more persuasive these systems get, the easier it becomes to confuse fluency with truth. Or alignment with rigor.
This matters more than people think.
Because in teams, in companies, in decision-making environments, we often don’t need perfect certainty. We just need something that sounds reasonable enough for everyone to move forward. Once AI can produce that level of reasonable coherence for everyone in the room, bad ideas travel further simply because they meet the minimum standard of collective comfort.
Not truth. Not depth. Not real challenge.
Just enough sense to pass.
So the question isn’t only whether AI can help us think.
The question is how we avoid thinking inside a closed loop where the same machine keeps reflecting the same assumptions back to all of us, until agreement feels like evidence.
Maybe the next skill isn’t better prompting. Maybe it’s designing for disagreement.
Seeking out people with real domain knowledge. Stress testing ideas outside model consensus. Using different systems with different priors. Separating “this sounds right” from “this survives serious opposition.”
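Here is a rough sketch, in Python, of what that last habit could look like in practice: send the same skeptical prompt to two independently built systems and treat unanimous agreement as a flag to dig deeper, not as validation. The prompt wording, the model names, and the `pressure_test` and `verdict` helpers are my own illustrative assumptions, not a recommended stack; the client calls follow the current OpenAI and Anthropic Python SDKs, but verify against their docs before leaning on this.

```python
# A rough sketch: run one idea past independently built models and treat
# unanimous agreement as a flag to dig deeper, not as validation.
# Model names and helpers here are illustrative assumptions.
from openai import OpenAI   # pip install openai
import anthropic            # pip install anthropic

PROMPT = (
    "Here is an idea: {idea}\n"
    "Argue against it as a skeptical domain expert would. "
    "End with exactly one word on its own line: VIABLE or FLAWED."
)

def ask_openai(idea: str) -> str:
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": PROMPT.format(idea=idea)}],
    )
    return resp.choices[0].message.content or ""

def ask_anthropic(idea: str) -> str:
    client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model name
        max_tokens=1024,
        messages=[{"role": "user", "content": PROMPT.format(idea=idea)}],
    )
    return msg.content[0].text

def verdict(answer: str) -> str:
    # Crude parse of the forced last-line verdict.
    return answer.strip().splitlines()[-1].strip().upper()

def pressure_test(idea: str) -> None:
    critiques = {"openai": ask_openai(idea), "anthropic": ask_anthropic(idea)}
    verdicts = {name: verdict(text) for name, text in critiques.items()}
    print(verdicts)
    if len(set(verdicts.values())) == 1:
        # Consensus across models is weak evidence: same corpus, similar priors.
        print("Unanimous. Treat this as a prompt to find a human expert, not as proof.")
    else:
        print("The models disagree. Read both critiques; the friction is the value.")
```

Even this is only a partial fix. Models trained on overlapping corpora can still fail in the same direction, which is why people with real domain knowledge come first on the list.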
Because if we’re all using the same intelligence to validate the same ideas, then sooner or later, agreement stops being useful.
It becomes a mirror.