There was a time when losing one’s way meant wrestling with a road atlas and arguing over compass bearings. Today algorithms steer traffic through cities and decisions through offices. The mental relief is real. So is the cognitive cost. According to a series of recent studies, including one from Microsoft and Carnegie Mellon University, professionals who lean on generative artificial intelligence (AI) tools such as ChatGPT, Gemini or Copilot are showing signs of degraded critical thinking. The worry is not that AI gives bad answers. It is that it dulls the human faculties needed to recognise when it does.

The phenomenon, known as cognitive offloading, is neither new nor unique to AI. From calculators to autocorrect, technology has a history of displacing certain mental labours. But generative AI operates across a wider domain, offering everything from intuitive summaries to reasoned advice. This shifts user effort from active analysis to passive validation. In the words of the Microsoft-CMU paper, AI tools divert cognitive engagement “toward verifying that the response is ‘good enough’”, and away from tasks that stretch mental “musculature”, such as analysing, evaluating or creating. Over time the opportunity to practise those skills dwindles.

That shift is observable in professional contexts. The study surveyed 319 knowledge workers who use generative AI at least weekly. They logged 912 work tasks involving AI—grouped into three categories: creation (like writing emails), information (summarising articles) and advice (guidance or data visualisation). Fewer than four in ten reported engaging in critical thinking to mitigate AI risks. Most simply accepted or lightly edited the output. A user drafting a performance review using ChatGPT double-checked for reputational risk but did not attempt to improve the substance. Another adjusted an AI-written email to match a boss’s expectations on hierarchy. Few ventured beyond surface-level correction.

A parallel study of 666 British users, led by Michael Gerlich of SBS Swiss Business School, drew starker conclusions. Frequent AI users consistently scored lower on a standardised critical-thinking assessment. The causation may run the other way, of course: it is possible that those with stronger analytical faculties rely less on AI to begin with. Still, anecdotal follow-up from educators paints a worrying picture. Teachers reported a soaring number of students unable to complete basic problem-solving exercises without digital assistance.

The same pattern holds in more experimental settings. A team at MIT fitted participants with electroencephalogram (EEG) headsets while they composed essays, with and without the help of ChatGPT. Brain activity associated with creative thought and attention fell in the AI-assisted group. These participants also struggled to recall key points from the essays they had submitted, indicating a shallower form of cognitive engagement.

Much of the worry is not about the nature of AI outputs but about how those outputs shape habits. As Evan Risko, a psychologist at the University of Waterloo, explains, people are “cognitive misers”: inclined to exert minimal mental effort unless provoked. If AI routinely gives acceptable results, even on complex tasks, users may grow conditioned to accept them rather than question them. Over time that can become a feedback loop: less thinking leads to weaker skills, which in turn increases reliance on the tool that caused the erosion.

The implications are especially acute in domains where mistakes carry heavy consequences. In legal and forensic work, for example, AI tools are being adopted to draft briefs or evaluate evidence. Yet overreliance has already led to blunders: plagiarised citations, hallucinated rulings, even fabricated evidence. A study published in Societies flagged an emerging pattern: professionals in high-stakes fields showing a diminished capacity to challenge or contextualise AI-generated data. Such knowledge gaps are not easily corrected after the fact.

The productivity benefits of generative AI are undeniable; the tools offer scale and fluency. But there is a sting in the tail. A study at the University of Toronto had participants propose creative uses for common items such as tyres or trousers. Those who had seen AI-generated ideas beforehand produced solutions that were less diverse and more conventional. Where the chatbot saw a scarecrow, unaided participants saw bird feeders and mobile planters. Creativity, it seems, does not flourish when originality is outsourced.

Some technologists hope to design their way out of the trap. Microsoft researchers are testing tools that interrupt users with “provocations” mid-task, nudging them to reflect or revise. Academics at Emory and Stanford suggest rewiring chatbots into “thinking assistants” that ask questions rather than present answers. But such interventions are unlikely to gain traction. Users tend to resist anything that slows them down. A study from Abilene Christian University found that coders distracted by AI “coaches” performed worse than those left alone. Even brief delays or input requirements are unpopular. In a 16-country survey by Oliver Wyman, nearly half of respondents said they would use generative AI tools even if their employer banned them.

The human brain is adaptable but lazy. Tools that reduce effort quickly become the default. That is not inherently dangerous. Few regret giving up long division or hand-drawn cartography. But the risk with generative AI is that it relieves users not of drudgery but of judgment. If critical thinking atrophies through disuse, it may not return when needed. In that case, what began as a productivity revolution could end as a competence recession.

To mitigate this, users will need to treat AI less as an oracle and more as a sparring partner. Experts suggest structuring prompts to stimulate intermediate reasoning, or delaying access to AI until a first attempt has been made solo. These tactics sound promising but are hard to scale. Cognitive laziness, unlike technical inefficiency, has no simple engineering fix.

In time firms and regulators may be forced to weigh short-term efficiency gains against the long-term degradation of judgment. If AI continues to erode the very skills needed to supervise it, the cost of convenience may be an economy populated by highly assisted, but poorly equipped, decision-makers. The brain, like any other tool, dulls when unused. ■