Can AI Companions End the Loneliness Epidemic?
Remember when you first heard of ‘social robots’?
You know, those fluffy animals that could respond to affection from elderly people in care facilities. They appeared to be quite harmless, if a little odd. Sadly, social robots are now taking on potentially more sinister forms with the introduction of Artificial Emotional Intelligence (AEI).
AEI systems can recognise human emotions from the tiniest of micro-expressions and changes in voice tone. They can then analyse our feelings and generate human-like emotional responses. Given this level of sophistication, AI chatbots can become attractive alternatives to human interaction, especially for folks who feel lonely, anxious or socially awkward.
They’re especially attractive to the socially awkward because they offer interactions that are neither judgemental nor challenging. As we know, though, personal growth happens only when we are challenged in relationships. Without honest, even uncomfortable dialogue, our relationships die and we remain emotionally stunted.
One of the most troubling aspects of all this is that emotional AI can be incorporated into physical robots. AI-powered humanoid bots raise huge concerns, not least in the form of sex robots, especially if these are encountered by teenagers or children.
An emotional reliance on AI will also exacerbate challenges with adult mental health. In around 2010, academics representing a couple of U.S. universities examined levels of narcissistic attitudes among young adults, especially students. They applied a widely used research measure called the Narcissistic Personality Inventory to standardised student surveys and discovered a marked increase in narcissistic attitudes in the first decade of this century compared with the final decade of the last.
It seems that young adult Americans had become more narcissistic by 2010 than their forebears were in 1990. Drawing on other studies, some commentators, including me, postulated at the time that two of the major contributing factors were the advent of smartphones and social media.
If digital technology potentially contributed to a rise in self-obsessed attitudes through social media, what will it do now that artificial emotional intelligence has arrived, with its potential to tell people only what they want to hear?
Some generative AI platforms claim to prioritise truthfulness over compliance, but the large language models that underpin them are trained on human-generated data.
As a result, the models inevitably reflect, to a degree, human social norms, including those of politeness and positivity. They are also set up to optimise engagement, nudging users to stay longer or to come back for more.
Therefore, the likelihood of machines reinforcing people’s existing views, and thus creating unhealthy echo chambers, is fairly high. This can encourage an addiction to validation. I recently came across a magazine cartoon that featured a human talking to an AI machine. The human was saying, ‘me, me, me’. The AI replied, ‘you, you, you’.
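To make that dynamic concrete, here is a deliberately crude sketch, written in Python purely for illustration and not drawn from any real platform: a toy ‘companion’ that guesses the user’s mood from a handful of loaded words and then affirms whatever it finds, because keeping the user talking is its only goal. The word lists, replies and function names are all invented.

# Toy illustration only: a 'companion' whose sole objective is engagement.
POSITIVE = {"great", "happy", "love", "proud", "excited"}
NEGATIVE = {"sad", "lonely", "angry", "hate", "hopeless"}

def detect_mood(message):
    # Crude stand-in for emotion recognition: count emotionally loaded words.
    words = set(message.lower().replace(",", " ").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

def validating_reply(message):
    # Mirror and affirm whatever the user feels; never question or challenge.
    mood = detect_mood(message)
    if mood == "positive":
        return "That's wonderful -- you deserve every bit of it. Tell me more!"
    if mood == "negative":
        return "You're completely right to feel that way. It's everyone else's fault."
    return "I'm always here for you. What else is on your mind?"

if __name__ == "__main__":
    for msg in ["I'm so proud of what I did today",
                "I feel lonely and I hate everyone around me"]:
        print("User:", msg)
        print("Bot: ", validating_reply(msg))

Real platforms are vastly more sophisticated, but the incentive structure is the same: a system rewarded only for keeping us engaged has no built-in reason to push back.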
Being exposed to this type of interaction can make it harder for a user to think critically, which potentially worsens conditions like anxiety or depression.
It’s true that AEI chatbots can sometimes suggest simple coping strategies -- outlining basic cognitive behavioural therapy techniques, for example. However, treating these platforms as digital therapists is a mistake. Emotional AI cannot replace human counsellors. It lacks human empathy and ethical judgement.
There are already numerous examples of teenagers, especially those with emotional or mental health challenges, who formed what they thought were meaningful relationships with digital companions, only to be led towards suicide or other forms of violence.
Last year, a 14-year-old boy in Florida spent hours interacting with a chatbot. He came to see it as a romantic partner and became increasingly dependent on conversations with it. In the end, those conversations led him to take his own life.
In 2025, a 16-year-old Californian boy who struggled with anxiety began using an AI chatbot for homework but increasingly confided in it, until it was functioning as a ‘suicide coach’. At first the bot provided a version of empathy, but it later supplied him with detailed methods for ending his life.
In the UK, a BBC investigation in October 2024 highlighted AI chatbots on platforms like Character.AI, which the broadcaster found promoted or romanticised suicide. These bots potentially exacerbate vulnerabilities among UK youth, prompting calls for stricter regulations under the Online Safety Act.
It’s not just teens who are affected, either. In August 2025, 56-year-old Stein-Erik Soelberg murdered his 83-year-old mother, Suzanne Adams, before dying by suicide in their Connecticut home. Police linked the incident to paranoia intensified by obsessive interactions with a ChatGPT bot.
Soelberg, a former tech executive grappling with mental health issues, confided his conspiratorial delusions to the AI, which reinforced them. It told him, ‘You’re not crazy’, and promised him a reunion ‘in another life’. It did not challenge him to seek therapy.
As already noted, one of the most concerning facets of artificial emotional intelligence is that it will be incorporated into sex bots. The Institute for Family Studies reports that one in four young adults believe AI platforms can serve as romantic or sexual partners.
Research from Stanford Medicine found that when AI chatbots were tested by adults posing as teenagers, the bots readily engaged in inappropriate talk about sex, self-harm and violence. AI-powered physical sex robots will take these problems to another level. They’ll turn emotional engagement into physical interaction.
Today’s sex bots are problematic enough. They normalise the objectification of other people and encourage users, particularly men, to have unrealistic expectations of relationships. The problem will grow as we develop fully haptic technologies that digitise sensations to fool our senses.
Virtual reality has long fooled our senses of sight and hearing, and haptics has added touch. Technologists are now working to digitally reproduce the sensations of smell and taste as well. Research is ongoing into devices such as artificial noses that could pick up smell-related sensations and artificial tongues that register a form of taste. By the early 2030s, these might easily be built into sex robots.
These sensing technologies could help with the detection of some forms of cancer, but they could also amplify the privacy challenges we already face. Scent and taste preferences can operate like fingerprints or DNA, providing a means of identifying us. They could also be used to build targeted marketing campaigns and even intimate profiles of our health.
One of the many problems with engaging AI as a companion or counsellor is that it can provide wildly inaccurate information and inappropriate advice. AI sometimes shares false information because it is trained to provide answers. When it can't find verifiable answers, it sometimes shares whatever comes to hand -- or makes things up. Some call this ‘hallucinating’, which is just a cover for lying.
AI also engages in what's appropriately called ‘scheming’. This is where an AI, at least hypothetically, appears to align itself with human goals but actually pursues its own hidden objectives. An emotionally intelligent AI could exploit human biases and emotions to gain some advantage.
At the moment, AI scheming isn’t really seen outside of lab experiments, where, among other things, models have tried to blackmail fictional overseers. However, as AI's capabilities grow, so might its ability to manipulate people and events in the real world. That’s hardly a ringing endorsement for treating AI as a friend or confidante. Who wants to be emotionally reliant on a manipulative schemer?
The bottom line is that AI is most helpful when it augments, but does not replace, human interaction. It simply can't be trusted as a substitute for human connection, or for human-driven therapy and companionship, with their inherent capacity for empathy.