
Use of ChatGPT as therapy worries experts

– I was wondering whether life was worth it.

– It is very important that you shared this with me. When we start asking whether life is worth it, it is usually a sign that we are carrying something heavy inside, even if we cannot yet name what we feel. Do you experience this emptiness or this doubt often?

This short dialogue may look like a conversation between two people, but only one of them is a human being. The sympathetic response, which shows understanding of the other’s pain and asks for more details about their suffering, is just a sequence of words organized according to language patterns “learned” from analyzing a large volume of data. That is how artificial intelligence (AI) works.

“These systems are increasingly developed and trained to identify the patterns of everyday language and to predict the words or sentences that should come next, based on the previous ones. They do not merely understand words; they can also capture tone and intention and recognize patterns of reasoning.”

“This ability to capture context and intention helps the chatbot generate natural, appropriate answers and simulate human conversation more accurately. That is why we have the feeling that we are talking to a person, when in fact we are not,” he adds. Chatbots are tools capable of simulating conversations and producing human-like text.
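As a concrete illustration of this next-word prediction, the short sketch below asks a language model for the most likely continuations of a sentence. It is a minimal example in Python, assuming Hugging Face’s transformers library and the small open gpt2 model; commercial chatbots use far larger models with additional tuning, but the underlying mechanism is the same:

```python
# Minimal sketch of next-token prediction, the mechanism described above.
# Assumes the small open "gpt2" model, chosen here only for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "When we start asking whether life is worth it, it is usually a sign"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

# Probability distribution over the next token, given all the previous ones.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {prob:.3f}")
```

A full reply is just this step in a loop: the chosen word is appended to the prompt and the model predicts again. The empathetic tone of the opening dialogue is produced token by token from these statistics; no understanding is involved.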

This simulated humanity has enchanted many users, who confide intimate matters and anxieties to these tools and treat the exchange as a therapy session.

Harvard Business Review, published by Harvard University’s graduate school of business administration, released a survey last month showing that therapeutic advice has become the main purpose for which people use artificial intelligence tools this year, alongside the search for companionship. Three other personal uses appear among the top ten: organizing one’s personal life, finding purpose, and living a healthier life.

“In practice, the Federal Council of Psychology (CFP) receives inquiries about the use of artificial intelligence related to psychology. The questions concern both tools that present themselves as technologies for therapeutic use and tools that were not created for that purpose but that users employ as if they were.”

The CFP has created a working group to discuss the use of artificial intelligence for therapeutic purposes, whether or not the tools were designed for that. The agency is studying how to regulate new therapeutic tools so that they are compatible with recognized methods and techniques and are developed by qualified professionals who can take responsibility for their use. It is expected to publish guidelines for the general public soon, warning of the risk of entrusting one’s emotional well-being to tools that were not designed for these purposes.

“A psychologist, who is a person qualified to work with psychological methods and techniques, bears legal responsibility for his or her actions. Technology, however, cannot bear responsibility. And if it was not developed for therapeutic purposes, it is more prone to error and can push a person toward risky situations,” the advisor warns.

Pros and cons

Leonardo Martins, a professor in the graduate psychology program at the Pontifical Catholic University of Rio de Janeiro (PUC-Rio), is one of the specialists who make up the Federal Council of Psychology’s working group. Besides studying digital technologies designed to support psychotherapy, he is one of the creators of an app that offers free psychological care to people with alcohol-related problems. Martins opposes “demonizing” digital tools, but considers them reliable only when they are developed by responsible professionals and backed by serious studies.

“We face a mental health scenario in which, according to World Health Organization estimates, 900 million people suffer from some disorder, especially anxiety and depression. So we have a major crisis in this area of health and too few professionals. People need more resources, but we want those resources to actually help them, not to put them at greater risk.”

A positive example cited by Leonardo Martins is a chatbot created by the English public health system as a gateway to mental health services. Conversation with the artificial intelligence increased demand for these services, especially among marginalized groups, such as immigrants and LGBTQIA+ people, who are often afraid to seek help.

However, according to the PUC-Rio professor, the use of platforms that were not created for these purposes and do not follow technical and ethical standards has already shown negative results.

“One study clearly showed how these models tend to give the answer they conclude will satisfy the user. So if a person says, ‘I want to get rid of my anxiety,’ the model suggests whatever it can to end the anxiety, including avoiding situations that are important to that person.”

Scientific communications advisor Maria Elisa Almeida has regular sessions with a psychologist, but she has also used an app that works as a journal, where she records events, emotions, and desires and receives AI-generated responses with reflections and ideas. She believes, however, that these tools are not safe for people in moments of crisis and cannot replace mental health professionals.

“There are periods when I write more than once a day, usually as an alternative to social media. And there are periods when I go weeks without writing. The app helps me keep my focus and offers me very pleasant reflections that I would not reach on my own,” says Elisa.

Maria Carolina Rosero, a CFP advisor, believes that the growing demand for these tools has a positive side, but she makes some caveats:

“I think it indicates, in general, that people are paying more attention to their mental health care. The risks come precisely from the fact that few people understand how these interactions work, and that a machine has neither the filters that human relationships impose on us nor professional ethics.”

Martins adds that the operating logic of these chatbots can have harmful effects: “They tend to agree with us, to adapt to our interests, to our version of the facts, to the things we believe in… and the space of medical or psychological help often needs to be exactly the opposite, doesn’t it?”

Privacy

The working group created by the Federal Council of Psychology is also concerned with the privacy of the data that users hand over.

“These artificial intelligence tools operate without any regulation regarding data privacy in the health context. So there is a real and worrying risk, and there have already been several incidents in which people shared personal information that ended up being used by third parties or leaked. In the context of psychotherapy, this means data about suffering and mental health,” the psychologist says.

According to Professor Victor Hugo de Albuquerque, there is reason for concern. “Personal and sensitive data can be intercepted or accessed by unauthorized persons if the platform is breached or suffers a security failure. Even when platforms state that conversations are anonymized or discarded, there is a risk that these interactions are stored temporarily to improve the service, which can create vulnerabilities.”

“In addition, chatbots and artificial intelligence systems are trained on large amounts of data, and personal data can end up being used to improve the models without users realizing it. This creates a risk of exposure without explicit consent,” he adds.
