AI Is an Amplifier. What Are You Amplifying?


I’ve been running AI tools hard for the past couple of years. Multiple tools, deeply integrated into my workflow – not as a novelty, but as actual infrastructure. And the more I’ve used them, the more I’ve come to believe that most of the conversation about AI is aimed at the wrong question.

People keep asking: Is AI making us smarter or dumber?

That’s not the right question. The right question is: What are you bringing to it?

The Research Is Catching Up

A 2025 MIT Media Lab study put participants into three groups – LLM-assisted writing, search engine, and no tools – and measured cognitive load via EEG across multiple sessions. The brain-only group showed the strongest neural connectivity. LLM users showed the weakest. More strikingly, after months of LLM-assisted writing, participants who switched back to working without AI showed signs of under-engagement – their brains, in a measurable sense, had gotten lazy. The researchers called it “cognitive debt.”

Separately, a study published through IE University found a non-linear relationship between AI use and cognition. Moderate AI use had no significant negative effect on critical thinking. Excessive reliance, though, produced diminishing cognitive returns – and the effect was stronger in younger users and those with less domain expertise.

Deloitte’s Productivity+ research (800 workers, 27 industry sectors) found a similar curve on the productivity side: there’s a sweet spot in the number of AI tools a person uses. Below it, you’re leaving leverage on the table. Above it, you’re fragmenting your attention and the returns start to drop.

Same pattern. Two different directions of failure. Use too little and you underleverage. Use too much, or use it badly, and you start to erode the very thing you're trying to augment.

The Treadmill

I’m an avid runner. Road, trail, treadmill – I’ve done all of it, and they’re not the same workout.

The treadmill is genuinely useful. It scales your training volume in a way outdoor running can’t match – you can get miles in regardless of weather, time of day, or what neighborhood you’re in. It lets you control pace precisely, which is great for interval work. At scale, it sharpens specific mechanics: cadence, foot strike, turnover.

But a runner who only trains on a treadmill will get humbled on a trail. The treadmill surface is forgiving and predictable. It doesn’t develop the stabilizer muscles, the proprioception, the split-second adaptation to uneven terrain. Road running adds wind resistance and varied surface; trail running adds lateral complexity, rocks, roots, elevation changes that no belt can simulate.

The treadmill amplifies your running. But it can't install the ability to run in the first place. And if you use it as a substitute for real-world running rather than a supplement to it, you'll find out the hard way.

AI works the same way.

What AI Actually Does

AI is extraordinarily good at scale and permutation. It can explore a solution space faster than any human, surface creative approaches you might not have considered, draft things at volume. In that sense, it’s a lot like simulation-based training – massive scale, controlled environment, near-infinite repetitions.

But it's bounded by what it's been trained to understand. Current AI reasoning about the physical world – about novel context, about things that haven't been labeled and learned – is still shallow. Just like treadmill physics is a simplified model of outdoor running, AI reasoning is a compressed model of human cognition. Useful precisely because of that compression. Limited for the same reason.

When I use AI on a hard problem and it comes back with something genuinely creative, I’ve noticed it’s most useful when I’ve already done enough thinking to know what’s wrong with the answer. When I haven’t, I take it at face value and often end up going in circles.

That’s the cognitive debt trap. AI generates plausible, fluent output. If you’re not bringing enough domain knowledge and critical thinking to evaluate it, you’ll mistake fluency for correctness. You’ll outsource not just the work but the judgment about whether the work is any good.

Amplification Only Works If There’s a Signal

The runner with strong biomechanics gets the most out of the treadmill. The runner with bad form just bakes that form in at higher volume.

Same dynamic with AI. If you bring sharp thinking, genuine expertise, and real opinions to AI, it helps you go faster and further. It finds the angles you missed, stress-tests your reasoning, drafts the things that don’t need your unique insight so you can focus on the things that do.

If you show up without that – if you’re using AI to generate the thinking rather than extend it – you get output that looks like reasoning but isn’t. And the more you outsource that process, the harder it gets to tell the difference.

This is what I mean when I say AI is an amplifier. Amplifiers don’t create signal. They amplify what’s there. Point one at silence and you get louder silence. Point one at something real and you get something real, faster and at scale.

The Practical Implication

This isn’t an argument against using AI – I use it constantly, and I think most people are still under-leveraging it. It’s an argument for being intentional about how you use it.

A few things I’ve found actually matter:

Keep grinding on hard problems yourself, first. Before you ask AI, spend real time with the question. Not forever – just enough to develop an opinion, a hypothesis, a sense of what the answer space looks like. Then use AI to stress-test or extend that. You’ll get far more useful output and you’ll be able to evaluate it.

Notice when you’re asking AI to think instead of to help you think. There’s a difference between “help me develop this argument” and “write me an argument.” One uses AI as a thinking partner. The other outsources the cognition entirely.

Mind the tool count. The productivity research suggests there's a ceiling on the number of AI tools any one person can use well. More tools mean more context switching, more overhead, more noise. Find the set that actually fits your workflow and go deep on them.

Run on trails, too. Do hard things without AI. Write something long. Work through a difficult problem on paper. Code something from scratch. Not because AI is bad, but because the capability AI amplifies has to stay sharp. It atrophies if you don’t use it.

Where This Goes

The people who get the most out of AI over the next decade won’t be the ones who use it most. They’ll be the ones who use it most intelligently – who bring enough to the table that the amplification actually produces something worth having.

The treadmill is a serious training tool. But the goal is still to run.


The MIT study referenced is: Kosmyna et al., “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” arXiv:2506.08872 (2025). The Gerlich study (2025) was published via IE University’s Center for Health and Well-being. Deloitte’s Productivity+ series surveyed 800 workers across 27 sectors on AI tool adoption and productivity outcomes.