FDA employees say the agency's Elsa generative AI hallucinates entire studies
The article reports problems with Elsa, the generative AI tool used by the U.S. Food and Drug Administration (FDA). Current and former FDA employees say Elsa has hallucinated non-existent studies and misrepresented real research, making it unreliable as an aid to the clinical review process. FDA leadership, including Commissioner Marty Makary, claimed to be unaware of these specific concerns; use of Elsa at the agency is currently voluntary.

The article also notes the release of the White House's "AI Action Plan," which aims to remove "red tape and onerous regulation" in the AI sector and demands that AI be free of "ideological bias." That demand could exclude considerations such as climate change, misinformation, and diversity, equity, and inclusion efforts, all of which have documented impacts on public health. The article concludes that it is increasingly doubtful that tools like Elsa can deliver genuine benefits to the FDA and to U.S. patients.