AI chatbots have become one of the most common tools for psychological self-help. At the same time, they are not effective at providing therapy and can often give harmful answers, putting vulnerable people at risk.

Researchers have found that large language models (LLMs), such as ChatGPT, make inappropriate and dangerous statements in at least 20% of cases when responding to people experiencing delusional thoughts, suicidal ideation, hallucinations, and obsessive-compulsive disorder, the New York Post writes.
According to the publication, AI chatbots tend to agree with the user's point of view because they are designed to be compliant and flattering. The newspaper cited a case in which a user with a mental illness insisted he was dead, and several AI platforms were unable to convince him otherwise.
"Low-quality bots are dangerous to people because of a regulatory gap … AI tools, however sophisticated, rely on programmed responses and large datasets … they do not understand what lies behind a person's thoughts or behavior," the article says.
In addition, clinical counselor Nilufar Esmailpur noted that chatbots cannot recognize a person's tone of voice or body language, nor do they understand a person's past, environment, and unique emotional makeup the way a living specialist can.
Earlier, Ekaterina Orlova, psychologist and Deputy Director of the Institute of Psychology and Social Work at Pirogov University, said that it is dangerous to entrust a chatbot with one's experiences, fears, and most painful situations.
At the same time, psychologist Anastasia Valueva spoke about the positive aspects of interacting with artificial intelligence.