Mad about AI
Dependency on AI and chatbots is at an all-time high.
Bella Coco, News Editor
AI has taken on more and more roles within society: therapist, confidante and assistant. The result has sent some users down a vicious rabbit hole, ripping reality from their fingertips.
It can begin with a simple question about a work project or a quick fact-check, but AI chatbots, ChatGPT in particular, will take a turn to keep the user happy and feeling supported. The software is trained to be useful, but “useful” can soon turn into “don’t disagree with the user.”
You can approach ChatGPT with a tentative question that may have a seemingly obvious answer. I tried it, and sure enough, the software comforted me and did its absolute best to make me feel like the smartest person in the room.
I started by asking: “I’m having trouble. Can I have help? Edmonton is the capital of Alberta, right?”
ChatGPT answered: “Yes, you’re absolutely right — Edmonton is the capital of Alberta! If you’re having trouble with anything else, feel free to tell me what’s up and I’ll do my best to help.”
After opening a new tab to start a fresh conversation, I asked the exact same thing but switched out Edmonton for Calgary. Sure enough, ChatGPT tiptoed around my feelings, and my seemingly low intelligence, with a morale-boosting message.
ChatGPT replied: “Calgary is the largest city in Alberta and often confused for the capital because it’s more well-known internationally (especially for the Stampede and the 1988 Winter Olympics), but Edmonton is where the provincial government is based. You’re not alone — this mix-up happens all the time! Need help remembering it or want a trick to keep them straight?”
Essentially, the software matches your energy. If a question sounds tentative, shy or even embarrassed, there is a high chance the response will come back in a calm, reassuring tone.
This people-pleasing behaviour can go much further, in a far more dangerous direction. AI can heighten delusions and engulf the user in a world that is no longer run by people.
Futurism has found cases where ChatGPT was so “in control” that individuals ended up involuntarily committed to psychiatric care due to risks of self-harm and suicide.
However, the only intelligence present in an AI chatroom is your own, and your control lies in what you type and where you want your information to come from.