AI researcher develops tool to identify suicidal thoughts in social media posts

The Evolution of AI in Mental Health Detection

Long before the emergence of the COVID-19 pandemic and the widespread integration of artificial intelligence, researchers in China began to notice a significant trend: individuals were increasingly expressing their most profound emotions on social media platforms. Some users conveyed more than just sadness, openly stating their desire to end their lives. This observation prompted the development of a computer model designed to sift through thousands of social media posts, identifying those that required clinical attention.

Jie Tao, an associate professor of business analytics at Fairfield University, was driven by the need to make his research more impactful. Despite his expertise in machine learning, AI, and natural language processing, he felt his work lacked real-world application. “I get some satisfaction from doing research,” he said, “but I was asking myself, what’s the point of writing papers that my own mother won’t read? That, to me, is ridiculous. If nobody’s reading it, then there’s no point.”

Tao, who has been at Fairfield since 2015, committed himself to using his skills for societal benefit. As lockdowns during the pandemic isolated people, he and his team expanded on earlier research from China, creating an AI model capable of accurately identifying suicidal ideation in social media posts.

The Development of KETCH

The AI model, named the Knowledge-Enhanced Transformer-Based Approach to Suicidal Ideation Detection from Social Media Content (KETCH), scans six major social media platforms, including Reddit, X, and Sina Weibo, searching for specific phrases and combinations that may indicate suicidal thoughts. When such posts are identified, human clinicians are alerted to intervene if necessary.
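
The article does not describe KETCH's internals, but the flag-and-refer pattern it outlines (scoring each public post with a transformer-based classifier and passing anything above a risk threshold to human reviewers) can be sketched in a few lines. Everything below is a hypothetical illustration: the model is a generic stand-in, a real system would use a classifier fine-tuned on clinically labeled posts, and the threshold and output fields are assumptions.

```python
# Hypothetical sketch of a transformer-based flag-and-refer step; this is not
# KETCH's code. The model is a generic stand-in, and the 0.9 threshold is an
# arbitrary value chosen only to illustrate the score-then-review pattern.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # placeholder model
)

def flag_for_review(posts, threshold=0.9):
    """Return posts whose classifier score exceeds the threshold so that
    human clinicians can review them and intervene if necessary."""
    flagged = []
    for post in posts:
        result = classifier(post)[0]  # e.g. {"label": "...", "score": 0.97}
        if result["score"] >= threshold:
            flagged.append({"post": post, "label": result["label"], "score": result["score"]})
    return flagged

# Example: candidates = flag_for_review(["text of a public post", "another post"])
```

The design point the article stresses is that the model only surfaces candidates; the decision to reach out remains with clinicians.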

Initial research in China found that approximately 70% of the posts flagged as suicidal were genuine, meaning roughly three in ten flags were false positives. Tao tested his AI against the same data set and found it significantly more accurate, with over 80% of flagged posts reflecting genuine suicidal ideation. This improvement increased the number of referrals to psychologists from about five per week to nearly 25.
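
As a back-of-the-envelope illustration of how those figures relate to false alarms, treat them as precision, the share of flagged posts that are true cases. The counts below are made up to match the quoted percentages and are not data from either study.

```python
# Illustrative arithmetic only: the counts are hypothetical, chosen to match the
# quoted 70% and 80% figures, and are not taken from either study.
def precision(true_positives: int, false_positives: int) -> float:
    """Share of flagged posts that reflect genuine suicidal ideation."""
    return true_positives / (true_positives + false_positives)

print(precision(70, 30))  # 0.7 -> about 30 false alarms per 100 flagged posts
print(precision(80, 20))  # 0.8 -> about 20 false alarms per 100 flagged posts
```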

Ethical Considerations and Challenges

While the potential benefits of AI in mental health detection are clear, ethical concerns remain. Ridgefield resident Arthur Caplan, founder of the medical ethics department at New York University Langone Medical Center, highlighted the challenge of limited mental health resources. “If a Connecticut parent wanted their child to see a therapist, they might face a waitlist that could last a year,” he said. “There just aren’t enough psychologists and psychiatrists dealing with young people.”

Caplan also raised concerns about consent. While the tool only examines public posts, he questioned whether individuals should have the right to opt out of having their posts scanned for mental health issues. “If you or I don’t have that right, it’s certainly something that needs a thorough debate in the legislature,” he added.

Gijo Mathew, chief product officer at Spring Health, emphasized the rapid advancement of AI compared to the development of ethical frameworks. “AI is advancing faster than our ability to set clear guardrails,” he said. In response, Spring Health convened an ethics council to evaluate the safety, ethics, and clinical effectiveness of AI tools in mental health.

The Role of Human Intervention

Mathew stressed the importance of human intervention in mental health care. “AI won’t replace therapists or psychiatrists, but it will change how they work,” he said. “The future is human-led care supported by AI: tools that make therapy more effective, more continuous, and more accessible.”

Tao acknowledged the risks of both false positives and missed cases, emphasizing the need for accuracy. “Both kinds of mistakes are very scary,” he said. “The final safeguard of the system is we want it to be overly comprehensive so we don’t bother people who are not going to commit suicide, even if they’re just joking, or miss somebody who’s actually going to commit suicide.”

Expanding the Scope of AI in Healthcare

Beyond detecting suicidal ideation, Tao is exploring ways to expand the use of AI in mental health, considering applications for conditions such as depression, ADHD, and PTSD. He is also developing an AI system to assist patients with cancer by searching for new medical research, validating it, and translating it into understandable language.

“This AI agent will actually go out every day on its own to search the internet to find out what’s new today,” Tao said. “It’s multiple AIs working together, some doing research, some validating the research, some being careful to make sure they’re understandable.”
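
The article gives only this high-level description, but the division of labor Tao describes (one agent searching, another validating, another rewriting in plain language) can be sketched as a simple pipeline. The sketch below is entirely hypothetical structure: the agents are stubs standing in for whatever retrieval, validation, and summarization components such a system would actually use.

```python
# Hypothetical pipeline sketch of the search -> validate -> translate division of
# labor described above. The agent functions are stubs, not a real implementation.
from dataclasses import dataclass
from typing import List

@dataclass
class Finding:
    title: str
    summary: str
    validated: bool = False
    plain_language: str = ""

def search_agent() -> List[Finding]:
    # Stub: a real agent would query medical literature sources each day.
    return [Finding(title="Example oncology study",
                    summary="Reports results of a recent clinical trial.")]

def validation_agent(findings: List[Finding]) -> List[Finding]:
    # Stub: a real agent would cross-check sources and study quality.
    for f in findings:
        f.validated = True
    return [f for f in findings if f.validated]

def translation_agent(findings: List[Finding]) -> List[Finding]:
    # Stub: a real agent would rewrite validated findings in plain language.
    for f in findings:
        f.plain_language = "In plain terms: " + f.summary
    return findings

if __name__ == "__main__":
    for finding in translation_agent(validation_agent(search_agent())):
        print(finding.title, "->", finding.plain_language)
```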

Resources for Those in Need

Anyone experiencing thoughts of harming themselves or seeking access to free and confidential mental health support can call or text the National Suicide Prevention Lifeline at 988 or 800-273-8255 (en español: 888-628-9454; Deaf and Hard of Hearing dial 711 and then 988) or visit 988Lifeline.org.
