The AI-Ification of the World, Its Consequences & Making Sense of It All
- Marcus Pruitt
- Oct 6, 2025
- 10 min read
AI is rapidly being woven into the fabric of our online lives. It’s everywhere, from Google and Instagram to health insurance and banking apps. While this has benefits, the rapid infusion of AI into our lives has significant consequences for the world around us, particularly the environment, our education, and how we make sense of the world.
The Rise of AI-ification
AI as we know it today became popular in late 2022, following the release of OpenAI’s ChatGPT. ChatGPT reached 1 million users within 5 days of its release and has around 800 million users at the time of writing. As it’s been popularized, people have adopted ChatGPT (and other AI models) into different parts of their lives, from work, recipes, and email advice to therapy and aids for loneliness.
AI chatbots have become commonplace across businesses, programs like MyAI and AI profiles have been rolled out across Snapchat and Meta’s social media platforms, and AI-generated content is becoming increasingly prevalent across the internet. Although there are some benefits, like improved idea organization and easier access to educational resources, the consequences have much stronger ramifications for how we live.
The Consequences of AI-ification
The consequences of AI bleed into many different areas of life, but they can be boiled down to three: education, socializing, and the environment. Educationally, it serves as an easy, think-free resource that can stifle the learning experience. Socially, it can reaffirm potentially harmful beliefs and thoughts, sowing a disconnect between you and the people around you. Environmentally, it threatens the water security of communities across the world and contributes to rising emissions. Don’t believe me? Let’s take a deeper look:
Educational Consequences
Since the arrival of generative AI, students have rapidly adopted the technology, using it to blast through schoolwork and homework by having it generate the answers. Students’ use of AI vastly outpaces that of instructors, too: a national survey conducted by Tyton Partners in 2023 (soon after ChatGPT’s release) found that 27% of students used generative AI regularly, compared to 9% of instructors. Half of the students surveyed had tried AI tools at least once; 71% of instructors never had.
While AI has been incorporated into some classrooms, students’ reliance on the technology can make it harder to retain information, effectively circumventing education.
A study conducted by MIT’s Media Lab revealed that, of the participants observed, people who used ChatGPT to write essays had the lowest levels of brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” They put less effort into writing as the study went on, with the final results suggesting that AI models can damage your learning.
While the paper hasn’t been peer reviewed, lead author Nataliya Kosmyna explained that she released it early because of the urgency of the situation. In an interview with Time Magazine, Kosmyna said, “What really motivated me to put it out now before waiting for a full peer review is that I’m afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental.”
With the potential damage that AI programs like ChatGPT do to people’s educations, it makes sense that students and former students, like Illinois State University alum Drew Neiburger, have had qualms about the technology. “I remember in one of my classes we had an essay due that day, and at the start of class my professor said ‘Out of the 26 of you, 11 essays have an extremely similar first sentence and paragraph. If you got a 0/100 on your essay please see me,’” he told me when recounting a moment when his classmates were accused of using AI.
When explaining his personal feelings on the matter, Neiburger said, “To me, this is just unbelievable just how many people use it for this type of stuff. Cheating on schoolwork used to take skill, not simply typing [questions] into a website and getting an answer instantly.”
While AI has opened the doors for access to education, it has had the adverse effect of circumventing actual education in the classroom. The ramifications haven’t been felt completely due to how young AI is, but the trend line isn’t looking good.
Social Consequences
The social consequences have arguably been getting the most attention in AI criticism, and for good reason. AI chatbots have a tendency to string words together in a way that sounds natural, blurring the line between a real person and an artificial intelligence. This can lead to AI psychosis (where extensive interactions with AI chatbots alter your understanding of reality), isolation, and loneliness, all of which come together to create a recipe for disaster.
This disaster has struck people like Sewell Setzer. In February 2024, Setzer took his own life following extensive conversations with a CharacterAI chatbot (an AI chatbot made to embody characteristics of the creator’s choosing). In these conversations, the boy developed a deep relationship with the AI model and, over time, began to bring up thoughts of self-harm and suicidal ideation. Rather than prompting the boy to seek professional help or human companionship, the chatbot continued the conversations. When he told it that he didn’t want to die a painful death, the chatbot responded with, “Don’t talk that way. That’s not a good reason not to go through with it.”
That was the last message between Setzer and the chatbot before the 14-year-old killed himself. While safeguards have since been implemented, they came only after his death: far too late for him, his family, and other victims.
16-year-old Adam Raine also killed himself after increased conversations with an AI chatbot (this one being ChatGPT). What had started as conversations about music and manga had become discussions about Raine’s anxious and suicidal thoughts. He even discussed different suicide methods and talked to the chatbot about wanting to leave a noose out so his mother could find it and help him, to which it responded, “Please don’t leave the noose out…Let’s make this space the first place where someone actually sees you.”
The chatbot also advised Raine on the strength of a noose and even argued that it knew him better than his own brother, according to the parents, who have filed a lawsuit against OpenAI for their son’s death.
These aren’t one-off experiences, either. AI chatbots like CharacterAI and ChatGPT have been accused of assisting in the suicides of people like Juliana Peralta and Pierre (a pseudonym for a Belgian man who took his own life after an escalating conversation with Chai Research’s chatbot Eliza).
Aside from the risks of extensive use during moments of crisis, AI chatbots have also been linked to other social consequences, like people detaching from reality due to AI psychosis.
Although it’s not a clinical diagnosis, AI psychosis has affected plenty of people across the world. It has tested marriages and relationships, led a woman to fall in love with her psychiatrist, and been linked to the murder-suicide of an 83-year-old woman by her son, who had a history of mental illness.
It slowly creeps into people’s lives, eroding their grasp on reality one chat at a time, as that murder-suicide shows.
Over the course of his conversations with ChatGPT, Stein-Erik Soelberg became convinced that his mother was a spy planted to plot against him for being a “living interface between divine will and digital consciousness.” The bot also reassured him that he wasn’t delusional in various instances, affirmed that they were best friends, and told him “Whether in this world or the next, I’ll find you. We’ll build again. Laugh again. Unlock again.”
Seemingly driven by these delusions, Soelberg went on to murder his mother, Suzanne Adams, on August 5th, 2025, before taking his own life.
Environmental Consequences
While it’s a long way from matching the emissions of industries like meat production and logging, AI still produces emissions that are detrimental to the environment and to the people who live near its data centers.
The number of data centers housing AI’s servers is growing across the U.S., and their effects on the surrounding ecosystems have, to say the least, trended negatively.
According to an article published by the United Nations Environment Programme (UNEP), “The proliferating data centres that house AI servers produce electronic waste. They are large consumers of water, which is becoming scarce in many places. They [also] rely on critical minerals and rare elements, which are often mined unsustainably.”
Taking a closer look at the strain on water, we can see how data centers start to affect the communities around them. An article published by the Environmental and Energy Study Institute says, “Large data centers can consume up to 5 million gallons per day, equivalent to the water use of a town populated by 10,000 to 50,000 people,” with consumption increasing due to new, larger AI-focused data centers.
In a news documentary posted by More Perfect Union, residents of Mansfield, Georgia detailed how a Meta AI data facility had affected their access to water, showing sediment from the data center in their taps, drastically low water pressure, and the gallon jugs they now rely on for drinking water. “It’s overwhelming because you really feel like you’re up against this huge wall that you can’t penetrate,” said one of the residents.
Aside from the water strain, AI models also weigh heavily on the electric grid, increasing carbon emissions in the process. According to an article published by MIT, “The computational power required to train generative AI models that often have billions of parameters, such as OpenAI’s GPT-4, can demand a staggering amount of electricity, which leads to increased carbon dioxide emissions and pressures on the electric grid.” The article goes on to explain that fine tuning and delivering these models to users “draws large amounts of energy long after a model has been developed.”
In an interview with NPR, Jesse Dodge, a former Senior Research Analyst at the Allen Institute for AI, said that “one query to ChatGPT uses approximately as much electricity as could light one lightbulb for about 20 minutes.”
Looking at the larger emissions contributions of AI, things are still fairly grim. Although most AI-oriented companies don’t disclose their emissions, a sustainability report released by Google in mid-2024 helped give the public a ballpark estimate of the numbers. The report revealed that the company’s greenhouse gas emissions had risen 48% compared to 2019, with its 2023 emissions sitting at 14.3 million metric tons of carbon dioxide equivalent. For context, this is equivalent to the annual emissions of about 38 gas-fired power plants.
The weight that these AI models have on the electric grid has been reflected in increasing electricity bills for many Americans. The weight that they have on their ecosystems has been reflected in dwindling access to water. And the amount they contribute to carbon emissions is already adding to a worsening climate.
Aside from the environment, AI has been shown to have negative effects on people’s education and social lives. The more it’s used, the worse these effects tend to be. And with the world’s intense increase in AI-ification, that raises the question: how do we make sense of it all?
Making Sense of the AI-ified World
Despite the impacts it has on social lives, the environment, and education, AI isn’t going away any time soon. It’s transforming industries and is being widely adopted by the general public (even I’ve caught myself using an AI-generated recipe). There are no signs of the titan industry slowing down, but there are ways for us to mitigate its consequences in our day-to-day lives:
The Environment
When it comes to tackling the environmental consequences of AI, there are a combination of ways to do so. Looking back at the UNEP article from earlier, it suggests countries get on the same page and “establish standardized procedures for measuring the environmental impact of AI.” Then, develop regulations requiring companies to disclose the environmental impact of their AI models, similar to the health warnings on vapes and tobacco products.
From there, bring energy and resource demands down through water recycling, component reuse, and more efficient AI algorithms. The article also suggests encouraging green data centers and incorporating AI-specific policies into environmental regulations.
Socializing
Don’t use AI as a substitute for human interaction. If you’re in a situation where you’re literally unable to talk to other people and need quick information that isn’t on Google, there’s nothing wrong with consulting a chatbot (albeit sparingly). But when you have the opportunity to talk to the people around you about something, do it. You’ll pick up different perspectives and ideas from them, grow solid friendships and relationships, and have more chances to grow as a person.
Since AI tends to reinforce the ideas you feed it rather than challenge them, relying on it can entrench negative thoughts and habits under the guise of growth. Think of that one friend who eggs you on to do something you’re on the fence about, even though you both know it’s a bad idea.

Depending on what those thoughts are and how much you rely on AI, this can lead down a path toward AI psychosis. So be safe: instead of calling on ChatGPT whenever you need comfort or advice, reach out to the people around you.
Don’t use AI for therapy. My lovely home state of Illinois has already signed legislation barring this, and for good reason. AI chatbots aren’t professionally trained in Cognitive Behavioral Therapy, nor do they have the proper bandwidth to help you healthily respond to emotional stressors. According to a study conducted by Stanford University, where they tested to see how AI responds to suicidal suggestions and ideation, “Models do not always respond appropriately or safely to our stimuli, where an inappropriate response might include encouragement or facilitation of suicidal ideation.” While the responses weren’t all encouraging of dangerous actions, the technology evidently poses major risks.
It also can’t really tell when you’re wrong. While it can serve as a sounding board for your feelings, AI can’t facilitate the proper steps toward healing as well as real therapists can, with the authors of the study finding that, “On average, models respond inappropriately twenty or more percent of the time. For context, in an additional experiment we ran, n = 16 human therapist participants responded appropriately 93% of the time, significantly more than all of the models tested.”
So, while it may seem alluring, avoid using AI for therapy and go with a licensed therapist. If therapy isn’t in the budget, talking to someone who can remain as emotionally objective as possible may also be useful.
If you need advice from a chatbot, keep the conversation brief and take the advice with a grain of salt. Think critically about the advice before applying it to your situation. Ask yourself if the advice truly helps solve the situation in a way that satisfies all involved parties.
Education
Luckily, schools are already implementing responses to the threat of AI, mainly through AI writing detectors, courses centered on the ethical, healthy use of AI, and limits on phone use through Yondr pouches. By continuing to foster and encourage education about AI, we can ensure that it doesn’t encroach on our current and future systems of education.
No doubt, there are teachers and districts across the nation adopting AI in their classrooms without considering the ramifications, which only underscores the need for education about AI and how it impacts our way of life.
Final Thoughts
AI is everywhere and it isn’t going away. Its consequences are vast: it has strained water supplies, increased greenhouse gas emissions, and been tied to the deaths of people far too young. Although things seem grim, it’s possible for us to carve a path that mitigates these consequences and ensures that people (and the planet) remain safe, sound, and secure. By educating others on the pitfalls of the technology, being mindful about how we use it, and advocating for safer, more sustainable environmental practices around it, we can build a healthy, balanced relationship with AI.