Large Libel Models: ChatGPT-4 Erroneously Reporting Supposed … – Reason
Some law professor colleagues and I are writing about whether Large Language Model creators (e.g., OpenAI, the creator of ChatGPT-4) could be sued for libel. Some recent stories allege that OpenAI's models do yield false and defamatory statements: Ted Rall published an article in the Wall Street Journal yesterday making that allegation, and another site published something similar last Sunday (though there the apparently false statement concerned a dead person, so it's not technically libel). When I asked the same questions those authors reported having asked, ChatGPT-4 gave different answers, but that is apparently normal behavior for ChatGPT-4.
This morning, though, I tried this myself, and I saw not just apparently false accusations but also apparently spurious quotes, attributed to media sources such as Reuters and the Washington Post. I appreciate that Large Language Models just combine words from sources in the training data, and perhaps this one just assembled such words…
source: https://news.oneseocompany.com/2023/03/17/large-libel-models-chatgpt-4-erroneously-reporting-supposed-reason_2023031742192.html