In a shocking incident, Microsoft’s AI-powered Copilot has falsely accused veteran German court reporter Martin Bernklau of heinous crimes that he had in fact reported on as a journalist. The accusations, which include child abuse, exploiting dependents, and a dramatic escape from a psychiatric hospital, threaten to severely damage Bernklau’s reputation and compromise his privacy, as reported by OSnews.
COAI reports that Bernklau, who has decades of experience covering court cases, discovered the false allegations while searching for his own name on Copilot. To his horror, the AI system not only attributed to him the crimes he had reported on but also disclosed his personal information, including his full address and phone number, along with a route planner.
The false claims made by Copilot are not only distressing for Bernklau but also raise serious concerns about the reliability and accountability of AI systems. The incident highlights the potential for these systems to cause significant harm to individuals’ reputations and privacy if not properly regulated and monitored.
The incident has sparked reactions from legal authorities and data protection agencies in Germany. The Tübingen Public Prosecutor’s Office stated that no criminal offense had been committed, as the false accusations were made by an AI system rather than an actual person. This illustrates the legal challenges in addressing AI-generated misinformation and the need for clear accountability frameworks.
The Bavarian Data Protection Office intervened by contacting Microsoft, which initially filtered Copilot’s replies about Bernklau. However, the false information reappeared days later, underscoring the ongoing difficulties in preventing the spread of AI-generated misinformation.
In response to the false accusations and the disclosure of his personal information, Bernklau has taken legal action against Microsoft. He hired a lawyer and is suing the company for defamation and invasion of privacy, challenging Microsoft’s attempt to disclaim responsibility through its terms of service. This lawsuit raises important questions about the liability of AI companies for the content generated by their systems and the adequacy of current legal frameworks in addressing these issues.
Image credit: Wikimedia