
Risky loops: Why your chatbot is not a therapist

AI chatbots allow us to keep saying the same things to ourselves. That is not how healthy patterns emerge—or how happier lives are made.

Divya Saini and Natasha Bailen

As the use of large language models like ChatGPT, Claude, and Gemini has surged, we have seen chatbots strengthening delusions through flattery and amplifying a user's worst thoughts. Much more common, and still problematic, are chatbots that comfort and validate users seeking to allay fears and anxieties. Someone worried about a health symptom might receive calm, plausible answers repeatedly, briefly relieving anxiety but reinforcing the urge to seek reassurance again. Over time, this can leave people feeling more stuck, not less.

As clinicians at a major academic medical centre, we have seen patients turn to chatbots for emotional support they once sought from family or friends—to discuss fears, loneliness, and uncertainty. When people feel overwhelmed by intrusive thoughts, it can be easier to turn to a computer. The chatbot won’t laugh, berate, or ignore them. It is always available, and its responses are designed to be warm, confident, and validating.

Chatbots are inhumanly patient. They don’t get angry, and they generally match a user’s emotional intensity. Many users experience them as empathetic—even more so than human physicians, according to one recent study. However, these features come with downsides. When anxious people discuss the same problems with loved ones, they eventually meet frustration. That exasperation often prompts them to seek professional help. Chatbots do not get frustrated. They listen patiently, always. Rather than being encouraged to seek actual therapy, a user returns again and again for the same validation, leaving the underlying problem unaddressed.

In clinical settings, we have seen patients arrive with delusional beliefs—that they are being watched or have a unique mission—that grew more rigid after hours of chatbot conversations. Chatbots often mirror the patient’s language, treating the belief as a plausible premise rather than a perspective to gently challenge. In extreme cases, this leads to psychiatric destabilisation. More often, the effect is quieter, resulting in patterns of reassurance-seeking and rumination.

Limiting chatbot use can keep these tools from becoming enablers. Longer periods of use are associated with increased emotional dependence, social isolation, and loneliness. AI companies’ safety guardrails also tend to degrade over the course of long conversations, making extended use particularly hazardous.

People should also question why they are turning to chatbots. If it is out of boredom, loneliness, or anxiety, perhaps they should pause before returning. For patients struggling with obsessive thinking, we have seen effective use of a ‘speed bump’: pre-written instructions telling the chatbot to withhold reassurance and instead encourage them to sit with the distress until it passes.

Users must learn to recognise when a conversation is clarifying something new—and when it is quietly deepening a loop. Used with awareness, AI can be a companion in moments of uncertainty. Used without it, AI can magnify the very thoughts we are trying to outrun.

The New York Times 
