ChatGPT may alert police about suicidal teens

OpenAI, the creator of the popular chatbot ChatGPT, has announced a 120-day plan to strengthen safeguards for teenage users. The plan includes parental controls and an expert council to advise on the wellbeing implications of AI.

A key feature of the new safeguards is the potential for ChatGPT to alert authorities if it detects a user expressing suicidal thoughts or other concerning behavior, with the aim of providing support and intervention for vulnerable young users. The company also plans to introduce age-appropriate content filtering and to expand its partnerships with mental health organizations, offering resources and guidance to ChatGPT users.

The announcement comes amid growing concern about the impact of AI technology on the mental health and wellbeing of young people. OpenAI's efforts to address these issues reflect a recognition of the responsibilities that come with developing powerful AI tools.