COLUMN | The real cost of convenience
Archie Bergosa
We love to sensationalize ChatGPT as the killer of critical thinking. But here’s the thing no one wants to admit: it’s not the tool that’s making us lazy; it’s our lazy dependence on the tool.
A recent MIT study has ignited digital debate once again: apparently, using ChatGPT can “erode critical thinking.” EEG scans from the study showed that those who relied on AI assistance exhibited lower brain activity, less memory engagement, and decreased creative output compared to those who wrote without it. Headlines quickly jumped the gun: “AI is making us dumb.” “ChatGPT kills creativity.” “The death of thinking.”
But there’s a more accurate headline: Humans are making themselves stupid. ChatGPT is just making it easier.
ChatGPT didn’t kill critical thinking. We did, by depending on it too much and questioning too little. Like every tech tool before it, it was built to assist, not replace us. But in a world that values speed over substance, the real problem isn’t AI’s power. It’s our willingness to stop thinking the moment something else can do it for us.
The MIT study makes a valid argument: when people lean on ChatGPT as a crutch rather than a compass, they surrender mental effort. They allow the tool to dictate instead of guide. They don’t verify, synthesize, or reflect. But let’s not pretend this is some uniquely AI-related sin. People have always sought shortcuts. The difference now is that the shortcut speaks with confidence, fluency, and surprisingly decent grammar.
The point is that it’s not ChatGPT that’s replacing us. We’ve been outsourcing effort for years. Throughout history, tools have transformed the way we think, work, and create. The printing press didn’t destroy storytelling; it preserved it. The typewriter didn’t kill the writing craft; it amplified voices. The calculator didn’t make us mathematically illiterate; it freed us to solve bigger problems. And remember when internet search arrived? Spoiler alert: libraries still exist.
Critics of ChatGPT in education or workplaces are missing the bigger picture. You don’t fight fires by banning water. You learn how to use it wisely. The question shouldn’t be, “Is ChatGPT making us dumb?” The question is, “Are we teaching people how to use it intelligently?”
And that brings us to the real problem: it’s not the tool—it’s the environment we’ve placed it in.
We don’t need to blame the tool. Blame the tendency to offload responsibility when the world demands more than we can mentally juggle. Blame the systems that reward speed over thought, output over insight, and grades over genuine understanding. Today’s students are being pushed to perform, not to learn. In a world where deadlines pile up and attention spans shrink, it’s no surprise they turn to AI not for exploration, but for survival.
There’s a reason why students now treat ChatGPT like a lifeline. Not because they’re lazy, but because they’re buried. Deadlines. Metrics. Academic rubrics that reward polish over process. We built an education system that tells students to write for a grade, not for meaning. Of course they’ll reach for the tool that gives them a head start. When the goal is submission, not comprehension, who’s surprised that shortcuts win?
When education becomes a race to the finish line—when effort is measured by how fast you can respond rather than how well you understand—it’s not AI that’s eroding critical thinking. It’s the culture of performance that made thinking optional. AI just happens to be the most efficient shortcut yet.
We’ve built a culture that rewards output over depth, and convenience over curiosity. That’s what makes the overreliance on tools so dangerous. Not because the tools are powerful, but because we’ve made ourselves too passive to question them.
ChatGPT just fits perfectly into that culture. It’s fluent. It’s fast. It gives you exactly what you asked for. Which, more often than not, is the bare minimum required to pass. And if that’s all we expect from learners and workers, then no wonder they’re handing over the reins.
If we really want to talk about the erosion of critical thinking, then let’s talk about the way our institutions treat thinking itself. In classrooms where teachers are overworked and students are tested more than they’re taught, thinking is a luxury. In jobs where employees are rewarded for performance metrics and “optimization,” reflection is a threat to productivity.
So no, the real threat isn’t ChatGPT. It’s the systems that make its misuse feel necessary.
Let’s not forget: the greatest tools in history weren’t feared. They were taught. People learned how to use the printing press. They were taught how to navigate the internet. We didn’t panic when Google made encyclopedias obsolete. We adapted. But adapting requires something we’re not investing in enough right now: literacy. Digital literacy. AI literacy. The kind of understanding that teaches users to ask, verify, and think critically about what they’re being shown.
Without that foundation, we’re not just handing people advanced tools. We’re handing them weapons they don’t know how to use.
This is why the MIT study should be a wake-up call to rethink how we prepare people to live with AI. Because AI isn’t going anywhere. In fact, it’s only getting better, faster, and more convincing. And if the only thing we do in response is blame the tool, then we’ve already lost the plot. At the end of the day, this isn’t about preserving the purity of thought. It’s about protecting our capacity for it.
If you really believe manual effort is the gold standard of learning, then write your research with pen and paper and look up your citations in the library using card catalogs. But don’t confuse nostalgia with rigor. Real learning isn’t about how hard you work. It’s about how honestly you engage.
The future of thinking doesn’t belong to those who avoid AI. It belongs to those who know how to challenge it. Those who ask harder questions. Those who can take what the model gives and still ask, “But is this right?” or “Does this make sense?” or even better, “Can I say it better?”
ChatGPT didn’t kill our ability to think. We did that ourselves when we stopped asking why. And unless we fix that, no amount of digital regulation, content moderation, or algorithmic transparency is going to save us. The machine can’t think for you. But it will gladly think instead of you. That choice is still ours, for now.