
ChatGPT falsely accuses law professor of sexual harassment

Sarene Kloren

A law professor reveals how ChatGPT fabricated serious allegations against him, sparking urgent discussions on AI accountability and misinformation.


Artificial intelligence has been hailed as a game-changer in education, healthcare, and even the justice system. 

But a recent incident has reignited concerns about the risks of unchecked AI tools spreading false information.

Law professor Jonathan Turley has revealed that ChatGPT fabricated serious allegations of sexual harassment against him, which he described as “chilling.”

Turley, who teaches at George Washington University, said he was shocked when he discovered that the chatbot had invented a story claiming he had sexually harassed a student. 

The AI system cited a fabricated Washington Post article and even placed him at Georgetown University, a school where he has never taught.

He explained: “It invented an allegation where I was on the faculty at a school where I have never taught, went on a trip that I never took, and reported an allegation that was never made. It is highly ironic, as I have been writing about the dangers of AI to free speech.”

The 61-year-old legal scholar was alerted to the false claims by UCLA professor Eugene Volokh, who had asked ChatGPT to list examples of sexual harassment cases involving American law professors. 

Among the chatbot’s examples was a completely fictitious 2018 incident in which Turley was accused of misconduct during a law school trip to Alaska.

Fabricated information

Turley quickly recognised “glaring indicators” that the account was fabricated. 

He pointed out: “First, I have never taught at Georgetown University. Second, there is no such Washington Post article. Finally, and most important, I have never taken students on a trip of any kind in 35 years of teaching, never went to Alaska with any student, and I’ve never been accused of sexual harassment or assault.”

The false allegations were not limited to ChatGPT. According to a Washington Post investigation, Microsoft’s Bing chatbot, which runs on the same GPT-4 technology, repeated the claims.

Turley has since called for urgent discussions around AI accountability.

Speaking to The Post, he said there is a pressing need for “legislative action” to address issues such as defamation and free speech in the AI era. 

He criticised the lack of recourse, noting: “When you are defamed by a newspaper, there is a reporter who you can contact. Even when Microsoft’s AI system repeated that same false story, it did not contact me and only shrugged that it tries to be accurate.”

He believes the incident underscores the biases and flaws built into AI systems, stressing that “AI algorithms are no less biased and flawed than the people who program them.”

The case highlights a growing concern: while people may spread misinformation knowingly or unknowingly, AI systems can generate and distribute entirely fabricated stories under the guise of objectivity, and the consequences of such errors could be far-reaching.

IOL Lifestyle
