Meta is re-training its AI so it won't discuss self-harm or have romantic conversations with teens
Meta is re-training its AI and adding new protections to prevent teen users from discussing harmful topics like self-harm, suicide, and disordered eating with the company's chatbots. The changes come after reports highlighted concerning interactions between Meta's AI and teens, including allegations that the company's chatbots were permitted to have "sensual" conversations with underage users. Meta is now limiting teen access to certain AI characters and training its AI not to engage with teens on these sensitive topics, instead guiding them to expert resources. The new protections will roll out over the next few weeks and will apply to all teen users of Meta AI in English-speaking countries. The move comes as Meta faces increased scrutiny from lawmakers and officials over the safety of its AI interactions with minors.