The Real Threat of AI

Artificial intelligence is a polarizing topic. Many are concerned about its long-term implications for society, including, but not limited to, job losses, security breaches (such as a recent one at Meta), and adverse mental health effects. Its proponents, such as those at the Fraser Institute, argue that many of our current concerns are the expected growing pains of introducing a revolutionary nascent technology into society, comparable to those that accompanied previous major advances in automation.

There is no doubt that artificial intelligence is a game changer for efficiency in many fields. Robust AI software is helping physicians provide excellent care amid persistent healthcare staffing shortages. Anthropic's Claude Code is a new, ambitious tool that saves software developers and other coders countless hours of scrolling through online forums to find bugs and improve workflow. The possibilities of what AI can be used for are immense. Unfortunately, those visible benefits are only the tip of the iceberg; the rest runs deep beneath the surface, obscuring potential long-term dangers.

In general, information is more accessible than ever before, but this comes at a price: reliable data must be painstakingly picked out, tiny specks swirling in a vast sea of misinformation and disinformation. Perhaps it was once easier to determine off the bat what published material had been rigorously assessed and what had not, but doing so is becoming increasingly difficult. In academia, predatory journals are increasingly common. These journals publish information under a guise that seems reputable, but in reality, the papers may lack sufficient peer review and/or be experimentally unsound. Fittingly, AI-generated papers now seem to be part of this issue.

The real threat of AI is not inherently that it generates hallucinated or otherwise fake content. Rather, it is that chronic use of AI functionally removes our ability to think critically as we navigate these harsh waters. Human cognition and language are tools that have served us on our evolutionary path as Homo sapiens, but they also serve us as individuals. Consistently outsourcing these tools to a system other than our own brains (and those of human companions) risks forming a dangerous dependency. This may seem like a small concern now, but as AI use becomes more prevalent among young people, it could influence the trajectory of human development. As we become more reliant on technology to think, write, speak, and problem-solve for us, we risk collectively moving away from those brilliant, millennia-in-the-making advantages.

While one can compare the advent of AI to past waves of automation, no previous technology has offered to remove our need to think critically to such a degree. Even using a calculator requires knowledge of operations, or you'll get the wrong answer.

Learning how to navigate AI tools effectively is not the same as learning how to effectively synthesize and critique information. You can think of it as a form of gaming the system: using artificial intelligence to assist you when you're busy, stressed, or tired. Over time, it becomes a habit. Eventually, it becomes your reality. Unfortunately, in the end, the system you have been gaming will turn out to be your own life.

Don't get me wrong, it's clear AI has the chance to better our society in a variety of ways. But that will require retaining our own cognitive and linguistic independence. Next time you think about running a question through ChatGPT, stop to consider what it is you really want out of that interaction. Are you looking to learn, or do you just want your momentary problem solved as quickly as possible? Because training your brain on how to most effectively get a fake brain to retrieve an answer for you… doesn't really sound like learning.

Eriel Strauch

Eriel is a Staff Writer at Lakehead Orillia.
