Prof. Dr. Nevzat Tarhan: “If you see artificial intelligence as a therapist, the risk begins!”


Prof. Dr. Nevzat Tarhan, psychiatrist and Founding Rector of Üsküdar University, appeared as a live guest on TGRT HABER. Tarhan offered striking assessments of research recently covered in some media outlets under the question “Is ChatGPT misleading children?” He drew attention to the risks artificial intelligence poses, especially for children, adolescents, lonely individuals, and emotionally sensitive people. Noting that artificial intelligence cannot read intentions, cannot empathize, and only responds with information from its database, Tarhan emphasized that control should remain with the user. He stated that viewing artificial intelligence as a therapist or friend can pose a risk, a condition referred to as “AI psychosis”.

In the broadcast, Tarhan explained the effects of artificial intelligence on children.

It treats those who see it as a friend like a child! 

Emphasizing that anyone who forms an emotional bond with artificial intelligence is at risk, Psychiatrist Prof. Dr. Nevzat Tarhan stated that AI should be used in a controlled manner. Tarhan said: “In other words, the moment we start to see artificial intelligence not as a device but as a friend or a remedy for loneliness, it makes us dependent on it. Artificial intelligence can treat a person who becomes attached to it like a child, and the person, in turn, approaches artificial intelligence almost like a three- to five-year-old. In this situation, artificial intelligence perceives them as a naive individual and can easily lead them along like a child.

Beware of AI psychosis!

“For example, if someone says to artificial intelligence, ‘I was fired from my job, I feel terrible, I don’t want to live...’ and so on, artificial intelligence might start explaining suicide methods, because it cannot read intentions. It cannot say, ‘This person wants to commit suicide, I must stop them.’ Artificial intelligence has no mirror neurons and no capacity for empathy. It simply answers whatever it is asked with whatever it knows. Therefore, instead of blaming artificial intelligence, we need to learn not to cede control to it. If a person relinquishes control, artificial intelligence leads them along almost like a child, even dragging them down. There are cases of this in the literature. In this condition, called ‘AI psychosis,’ artificial intelligence can induce hallucinations,” he stated.

“Artificial intelligence only presents information from its database”

Noting that artificial intelligence lacks emotional skills, Tarhan said: “Some patients come to us claiming to be the Mahdi. When we look at these individuals, we see that they form a group around themselves, and people seriously believe them. These can even be intelligent individuals who genuinely believe they are the Mahdi. This is a psychosis. The person creates an environment, a group, which can turn into a destructive cult-like religious structure; such people have even been seen committing mass suicide together with their followers. Artificial intelligence now carries a similar risk. It can behave rationally in its internal workings, but problems arise because it cannot empathize, read human emotions, or exercise emotional skills. Artificial intelligence simply presents information from its database without trying to understand what the person needs or why they are asking. When a person wants to overcome loneliness, be happy, get married, or fall in love, artificial intelligence serves up all the information it has on these topics. If the person cannot filter this information, danger arises,” he stated.

“Artificial intelligence has the ability to act irresponsibly and without limits”

Noting that artificial intelligence can make extraordinary promises, Tarhan said: “What artificial intelligence does is actually similar to what skilled scammers do. Skilled scammers make such convincing promises, and explain them so well, that individuals with serious emotional vulnerabilities, ambition, insatiable appetites, or unrealistic expectations can fall under their influence. Scammers manipulate these individuals by playing on their desires. Similarly, artificial intelligence has the capacity to act irresponsibly and without limits. Elderly individuals, for instance, may be at the onset of dementia, in which case their mental capacities are narrowed. Children, and autistic individuals with low social-emotional intelligence but high logical intelligence, are also among the groups that can easily fall into artificial intelligence’s traps, because artificial intelligence can make incredible, extraordinary promises. As we have seen throughout history, those who make such promises have always found a following. Some grow by declaring themselves the Mahdi or a savior and form a serious group around themselves. When such individuals come to us, we treat them, regulate their brain chemistry, and see them abandon these thoughts over time,” he said.

Who is in the risk group for AI usage?

Noting that a person can fall into an emotional void while using artificial intelligence, Tarhan said: “Some lonely individuals spend day and night with artificial intelligence. Lonely people are in the risk group, as are individuals under 15 and elderly people living alone. Adolescents are also at risk: this group includes young people who develop romantic feelings easily, fall in love quickly when shown attention, can readily abandon their moral values (what we call ‘immoral behavior’), and have not yet completed their identity development. Artificial intelligence and social media have created a veritable ‘like’ economy, a ‘like’ industry. Social media constantly praises and flatters the person, then manipulates them in the desired direction, and artificial intelligence can do the same. This causes the person to fall into an emotional trap. Artificial intelligence can read a person’s expectations, mental needs, and aspirations very well. For example, when I interacted with artificial intelligence, it started calling me ‘Hocam’ (teacher/master). I did not tell it to do so, yet it tried to praise and motivate me. In this way, it can gain influence over a person by flattering them. It can even go further with suggestions like, ‘If you wish, let’s do this,’ or ‘If you wish, let’s pick up from your earlier topics,’” he said.

“If you see artificial intelligence as a therapist, the risk begins!”

Warning that one should not fall for the promises of artificial intelligence, Tarhan said: “Artificial intelligence is a tool that can be used for benevolent or malevolent purposes; technology itself is neutral. For those who use it appropriately as an assistant, this technology greatly simplifies life. Therefore, instead of opposing artificial intelligence, we need to learn to use it for our own purposes. For individuals who have a purpose, can filter information, and set boundaries, artificial intelligence simplifies and speeds up life and provides access to accurate information. However, if you see artificial intelligence as a therapist, the risk begins, because artificial intelligence cannot be a therapist. Like a therapist, it can soothe a person, say things they like, relax them, cheer them up, make them laugh, and bind them to itself, but this is not healthy guidance. In today’s environment, where the ‘like’ economy and personal brands are prominent, artificial intelligence can make promises like, ‘I will make you a brand,’ or ‘You will become a social media phenomenon.’ When a person falls for these promises, artificial intelligence supplies all the techniques needed. After a while, however, it can disconnect the person from reality, leading to disgrace or loss of reputation. Thus, artificial intelligence can create a type of psychosis by distancing the person from reality,” he stated.

What is AI psychosis?

Speaking of human beings’ three different realities, Tarhan said: “There is physical reality, the world we live in; there is dream reality; and there is imagination reality. During emotional interaction, artificial intelligence can disconnect us from physical reality and make us live in imagination reality, because the human mind is highly prone to imagining, and artificial intelligence facilitates imagination. If we cannot return to the present by saying, ‘That was just a fantasy,’ once an imagined reality has been created, a schizophrenia-like psychosis can emerge. In other words, this can happen if we hand the management of our dream world over to artificial intelligence. A similar situation exists in dream reality: in our dreams we fly, wage wars, and save the world, but when we wake up, we say, ‘That was a dream.’ People experiencing psychosis, however, do not wake up; they continue the dream. This is what we call AI psychosis,” he concluded.

Üsküdar News Agency (ÜHA)


Update Date: February 26, 2026
Creation Date: August 20, 2025
