OpenAI has disrupted (more) Chinese accounts using ChatGPT to create social media surveillance tools
OpenAI has disclosed that an account believed to originate in China used ChatGPT to develop a social media listening tool for a purported government client. The tool was designed to crawl multiple social media platforms and monitor content on specific political, ethnic, or religious topics. OpenAI also banned an account that was using ChatGPT to build a "High-Risk Uyghur-Related Inflow Warning Model" intended to track the movements of Uyghur-related individuals, against a backdrop of alleged human rights abuses targeting Uyghur Muslims in China.

Additionally, OpenAI identified Russian-, Korean-, and Chinese-speaking developers using ChatGPT to refine malware, as well as networks in Cambodia, Myanmar, and Nigeria using the chatbot to craft scams. The company estimates, however, that ChatGPT is used to detect scams roughly three times more often than it is used to create them. OpenAI has also disrupted operations in Iran, Russia, and China that used ChatGPT to generate content for online influence campaigns across various social media platforms, both within the originating countries and internationally.