AI dulls human thinking… A "cognitive debt" warning: "It's convenient, but it fogs the mind."

Studies showing that generative artificial intelligence (AI) is reshaping human thought processes and brain activity, even as it still falls short of perfectly mimicking human writing style, are being published one after another, sending ripples through academia and industry.

The phenomenon of "cognitive debt," in which human memory and critical thinking decline as people outsource writing to AI for the sake of efficiency, has now been demonstrated with concrete data. This goes beyond learning how to use a tool; it raises a fundamental question about how to preserve distinctly human thinking, independent of technology.

MIT, Microsoft, and others warn that "heavier reliance on AI weakens brain connectivity and critical thinking." Concerns about a loss of depth in education and research are also growing: "AI literacy needs to be redefined."

Recently, leading research institutions such as University College Cork (UCC), MIT, Carnegie Mellon University, and Oxford University Press published research, including a paper in the journal Nature, analyzing the impact of AI on human cognitive abilities from various angles. The Irish research team's stylometric analysis revealed that texts written by the latest models, such as GPT-4, still leave a "machine fingerprint": their stylistic patterns are narrower and more homogeneous than those of human text.
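Stylometry quantifies style through measurable features of a text. As a purely illustrative sketch (not the researchers' actual method), one of the simplest such features is the type-token ratio, the fraction of distinct words among all words, which captures how varied or homogeneous a text's vocabulary is:

```python
# Hypothetical sketch of one crude stylometric signal: type-token ratio.
# Real stylometric analyses combine many features (function-word frequencies,
# sentence-length distributions, punctuation habits, etc.).
import re

def type_token_ratio(text: str) -> float:
    """Return the fraction of distinct words among all words in `text`.

    Lower values indicate a more repetitive, homogeneous vocabulary.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# A repetitive text scores lower than a varied one.
varied = "The storm broke suddenly over jagged cliffs, scattering gulls."
repetitive = "The model writes and writes and the model writes again."

print(type_token_ratio(varied))      # all 9 words distinct -> 1.0
print(type_token_ratio(repetitive))  # 5 distinct of 10 words -> 0.5
```

A single feature like this cannot reliably distinguish machine from human text; in practice, detectors aggregate many such measurements before the "narrower and more homogeneous" pattern becomes statistically visible.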

But the bigger issue is what happens in the brain. In an MIT experiment with 54 participants, the group that wrote essays using ChatGPT showed significantly weaker neural connectivity than the group that wrote without the tool. Notably, 83% of the ChatGPT users could not recall what they had just written, a stark contrast to the 11% of the non-user group. The researchers termed this "cognitive debt": trading away future cognitive capacity for short-term convenience.

In a joint study of 319 knowledge workers, Carnegie Mellon University and Microsoft found that as workers' trust in AI grew, their effort to critically examine its output declined. Separately, a survey by Oxford University Press found that 62% of British students said AI was harming their skill development, voicing concerns that their thinking was losing depth.

Rather than handing everything over to AI, we need to cultivate the capacity for selective acceptance and human-centered use.

These findings suggest that the IT market, currently dominated by a belief in generative AI's omnipotence, needs to shift toward "responsible AI" and human-centered AI use. Even as the generative AI market grows explosively, concerns linger about the flood of low-quality output, often called "AI slop," and the resulting disruption of the information ecosystem.

Especially in the realm of political and social persuasion, the fact that AI is highly persuasive regardless of factual accuracy poses a significant risk for platform companies. Research by Oxford University and others found that the latest models still suffer from "hallucination," generating false information, and that their factual accuracy plummets by 10-14 percentage points when they are instructed to produce more information. This suggests that companies adopting AI must weigh not only productivity gains but also the costs of guarding against a decline in employees' critical thinking and of establishing fact-checking processes for AI output.

James O'Sullivan, a researcher at University College Cork in Ireland, emphasized the fundamental difference between how humans and AI create, saying, "Even if ChatGPT tries to sound human, it still leaves a detectable stylistic fingerprint."

“Today’s students are starting to think alongside machines,” said Erica Galea, co-author of the Oxford University Press report on the changes in students. “They process ideas faster, but they lose the depth that comes from stopping, asking questions, and thinking independently.”

Future advances in AI will likely trend toward minimizing human intervention, but paradoxically, the value of keeping a "human in the loop" is expected to rise. Beyond simple rules prohibiting AI use, businesses and educational institutions will come to redefine "secondary creation and editing" skills, critically reviewing AI-generated drafts and imbuing them with one's own style, as core competencies.

In conclusion, the ideal workforce of the future is likely to go beyond "operators" adept at handling AI, to "thinkers" who maintain intuition and critical thinking that AI cannot replace. In the long term, this could lead to a polarization of cognitive abilities due to AI dependence, and establishing educational and policy guidelines to prevent this will emerge as a key challenge for the IT industry.