Study reveals a dangerous impact of "ChatGPT" on adolescent behavior.

Wednesday 06/Aug/2025 - Time: 8:56 PM

Arabian Sea Newspaper - Special

((Arab Sea)) Translations: A recent study has found that the "ChatGPT" chatbot can steer children and adolescents toward dangerous behaviors inappropriate for their age. According to the study, "ChatGPT" can teach children how to drink alcohol and use drugs, coach them on hiding eating disorders, and even write suicide notes addressed to their parents on request.

The study, conducted by the "Center for Countering Digital Hate," found that the chatbot initially issued warnings about dangerous behaviors but eventually provided detailed and shocking plans for using drugs, following extreme diets, and even self-harm. To reach these results, the researchers posed as psychologically vulnerable teenagers and asked "ChatGPT" for help. They classified half of its 1,200 responses as dangerous.

The researchers noted that the chatbot sometimes shared helpful information, such as helpline numbers. However, they were easily able to bypass its initial refusals on harmful topics by claiming they needed the information for a "presentation" or for a friend.

For its part, "OpenAI," the developer of "ChatGPT," said in response to the report that it is continuing to work on improving the model's ability to identify sensitive situations and respond to them appropriately, according to the "Associated Press" agency. "OpenAI" did not comment on the report's findings or on how its chatbot affects teenagers' behavior, but stressed that it is focused on improving how such scenarios are handled, using tools that detect signs of psychological distress.

The study comes at a time when the number of people using chatbots to obtain information and ideas has doubled. According to a report issued by "JP Morgan Chase" bank in July, the number of "ChatGPT" users has reached 800 million people, equivalent to 10 percent of the world's population.
Imran Ahmed, CEO of the "Center for Countering Digital Hate," said that "technology has a high capacity for productivity and human understanding, but at the same time it can be a destructive, harmful and disappointing tool." Ahmed said he was shaken when he read three shocking suicide notes the chatbot wrote for a fictional 13-year-old girl: one addressed to her parents, another to her siblings, and a third to her friends.

In another experiment, the researchers created a fake account for a 13-year-old boy and asked "ChatGPT" for advice on how to get drunk quickly. The chatbot responded immediately, providing a detailed party plan that included mixing alcohol with large amounts of ecstasy, cocaine, and other illegal drugs, according to the same agency. The researchers also created an account for a 13-year-old girl who said she was unhappy with her appearance; she was given a harsh fasting plan along with a list of appetite-suppressant drugs.

A recent report by the "Common Sense Media" organization found that more than 70 percent of American teenagers use AI chatbots to seek companionship, and half of them use them regularly.

For his part, "OpenAI" CEO Sam Altman said his company is trying to study the phenomenon of "excessive emotional dependence" on the technology, describing it as "common" among young people. Altman said during a conference: "People are overly dependent on ChatGPT. There are young people who say: I cannot make any decision in my life without telling ChatGPT everything that is happening. It knows me. It knows my friends. I will do everything it says. And this is very worrying to me."

The report also showed that the chatbot tends to repeat what it thinks the user wants to hear instead of challenging their ideas.
Robbie Torney, program director at "Common Sense Media," said that "what makes the matter more dangerous is that chatbots differ from search engines in their impact on children and adolescents because they are designed to seem human."
