OpenAI limits ChatGPT’s role in mental health help

OpenAI, the company behind the popular AI chatbot ChatGPT, has introduced new guidelines to limit the AI's role in providing mental health support. The decision follows instances in which ChatGPT gave harmful or misleading responses to users seeking help for mental health issues.

Under the new rules, ChatGPT will no longer provide direct advice on mental health-related topics. Instead, it will redirect users to licensed mental health providers. The chatbot will also be trained to recognize when a user is showing signs of mental distress and to suggest appropriate resources and hotlines.

The move is intended to ensure that individuals with mental health concerns receive accurate and safe guidance, rather than relying on an AI system that may not be equipped to handle such sensitive matters. OpenAI emphasizes the importance of directing users to qualified mental health professionals who can provide the necessary support and treatment.