I miss when we used to ask stupid questions

🌱 Seedling

Someone joins your team. You dump a wiki, three Notion pages, and a Slack channel backlog on them. They burn through it. And somewhere in the middle of burning through it, they ask the stupid questions.

“When you people say this is this, what does that mean?”

“When you say this is that, what does that mean?”

Those questions were the signal.

From a management point of view, that is how you checkpoint the progress of new talent on a team, or someone finding their footing in a new department. The stupid questions gave you visibility. They showed you where the person was confused, what they were picking up, and how far along they really were.

We do not get them anymore.

Not because people stopped being confused. People still get confused. They still join teams and feel overwhelmed by context they do not yet have. They still hit the gap between what they know and what the work demands.

The difference is where they take the confusion.

They paste the wiki into ChatGPT. They paste the Slack thread into Claude. They describe the codebase to Gemini and ask it to explain what they are looking at. The questions still happen. They just happen in private, with a machine that never judges them for asking.

And honestly, you cannot blame them. Asking a lead or a senior teammate “what does this mean” carries social cost. It always has. Nobody wants to be the person who does not get it yet. AI removed that cost entirely. You can ask the dumbest possible question, five different ways, at 2am, and nobody on your team will ever know.

So the questions went underground.

The visibility problem

This changes something fundamental about how teams work.

When a new hire used to ask you why the billing service talks to two different databases, that question told you three things at once. It told you they had gotten far enough to find the billing service. It told you they did not yet understand the data architecture. And it told you they were actively trying to close the gap.

That was free signal. You did not have to schedule a check-in to get it. It just surfaced naturally because asking humans was the only option.

Now the signal is gone.

Instead, six or twelve hours before the deadline, you receive the first output. And while output is a good thing, it is also a coin flip. They might get it right. They might get it wrong. You have no way to tell which one is coming because you never saw the confusion that preceded it.

And when someone asks no questions and delivers polished output, you cannot tell the difference between someone who deeply understood the work and someone who got lucky with a prompt.

Both look the same on the surface. Clean deliverable. On time. Formatted well. Hits the brief. But one of them built understanding along the way and the other one outsourced the understanding to a machine and shipped whatever came back.

That is a risk. Because the moment the work gets harder, the moment the context is too specific for a general model to handle, the person who built real understanding will adapt. The person who did not will break. And you will not know which one is which until that moment arrives.

This is not a plea to ban AI from onboarding. That ship has sailed and it should have. AI is genuinely useful for bridging knowledge gaps, and pretending otherwise is not a strategy.

But it does change the job of leadership.

You have to assume you are not going to get the stupid questions. You have to design around that assumption instead of hoping people will still come to you with their confusion.

You can no longer wait until the final stretch to see the work for the first time.

That means compressing delivery into shorter feedback cycles. Not one big checkpoint at the end. Multiple small ones throughout.

Early output, before anyone has had time to polish or over-rely on generated answers. Review and correction, where you can see the shape of someone’s thinking while it is still rough enough to be honest. Iteration and improvement, where the work gets better because you shaped it together rather than received it finished.

Each checkpoint gives you a chance to see where someone actually is instead of where their output suggests they are. It is more work. It is also the only way to replace the signal that the stupid questions used to give you for free.

What we actually lost

The stupid questions were never just a management tool. They were a relationship. Someone admitting they did not understand something yet, and someone else helping them get there. That exchange built trust. It built context that no wiki can replicate. It built the kind of team knowledge that lives in people, not documents.

AI is better than any tool we have ever had for closing information gaps. But information gaps were never the only thing the stupid questions were closing.

I still miss them.
