The Erosion of Struggle
AI fills the gaps before your brain gets a chance to. That might be costing you more than you think.
There is a specific kind of thinking that happens in the shower, on a walk, or in the middle of cooking dinner. It is not focused thinking. It is the background process your brain runs after you have spent real time stuck on something. You sit with a problem, you feel the discomfort of not knowing, and eventually — sometimes days later — something clicks.
That process is slower and less comfortable than asking an AI. It is also where most real understanding gets built.
The gap is the point
When you do not know something, there is a window where your brain actively works on it. It pulls at what you do know. It makes wrong guesses, corrects them, builds a model. The frustration is not a side effect of learning. It is part of the mechanism.
AI closes that window almost immediately. You get a clean answer, structured, confident, ready to use. The discomfort never builds. Neither does the understanding.
This is not a complaint about AI being wrong, though it often is. It is about what happens when it is right. You can get a correct answer and still learn nothing from it, because you skipped the part that would have made it stick.
Having an answer is not the same as understanding it
If you have ever asked an AI to explain a concept, thought “yes, that makes sense,” and then failed to apply it an hour later, you already know what I mean.
Comprehension and understanding are different. Comprehension is recognising that something is coherent. Understanding is knowing why, knowing the edges, knowing what breaks it. Understanding is what lets you adapt when the situation is not quite what the answer assumed.
You build understanding by doing the work: by being wrong, noticing you are wrong, figuring out why, and correcting. That loop requires friction. The AI is very efficient at removing friction.
This is not new
The same concern existed about calculators. About Google. About Stack Overflow. “You will never learn to multiply if you always reach for the calculator.” That was true, and it was also fine, because almost nobody needs to multiply large numbers by hand anymore.
So the question is not whether AI assistance is categorically bad. It is whether what we are offloading matters. Arithmetic: probably fine to hand over. The judgment call you make when debugging a system you do not fully understand: less fine, because the act of debugging is how you build the understanding you will need later.
The calculator concern was about mechanical skill. This one is about thinking itself.
The dependency trap
The risk is not that AI will give you a wrong answer. It is that after long enough, you will not be able to tell. Not because you are incapable, but because you stopped practicing the thing that would have kept you capable.
Junior developers who lean on AI heavily from the start are asked to evaluate its output without ever having built the baseline that makes evaluation possible. They can tell you whether an answer looks right. They cannot always tell you whether it is right. That distinction matters the moment something subtle goes wrong.
(Seniors are not immune. The atrophy just starts from a higher baseline.)
When to sit with it
None of this is an argument for avoiding AI. It is an argument for noticing when you reach for it too quickly.
A useful rule: if the thing you are struggling with is central to what you are trying to get better at, sit with it longer before asking. The discomfort is information. It is telling you that your model is incomplete. Let it be incomplete for a while.
If the struggle is incidental, something adjacent to your actual problem that you just need to clear out of the way, reach for the tool. Knowing the difference is the skill worth keeping.
The shower moments are not gone. But you have to earn them first.