A robot commits libel. Who is responsible?
By Guest Blogger Peter Georgiev, a graduate research assistant at Investigative Reporters and Editors (IRE) and a foreign correspondent for Bulgarian National Television.
“I will work tirelessly to keep you informed as texts will be typed into my system uninterrupted.”
This is how Xinhua News’ artificial intelligence presenter announced itself to a global audience at the World Internet Conference in November. Modeled on real anchor Zhang Zhao, the virtual newsreader is the first of its kind, according to China’s state news agency. But the signs that automated journalism will soon play a central role in the news media industry have been there for a while.
For news organizations, algorithms that generate compelling narratives are an exciting prospect. Many raised an eyebrow when the Associated Press started relying on automation to cover minor league baseball and turn corporate earnings reports into publishable stories. Fast forward a couple of years, and it now seems almost impossible to find a major news outlet that is not experimenting with a robot reporter of its own.
From a business perspective, that makes complete sense. News bots are convenient, cheap and don’t complain when asked to produce an article at 3 a.m. on a Saturday. Most of all, they are quick. In 2015, NPR’s Planet Money podcast set up a writing contest between one of its journalists and an algorithm. Spoiler alert: the algorithm won. It wasn’t even close.
Yet for all their apparent infallibility, bots, like their human counterparts, are vulnerable to mistakes. In the news business, one of the worst mistakes is committing libel. So how should courts treat cases in which a robot generates a defamatory statement? Legal and tech experts believe now is the time to decide.
Thanks to a series of landmark rulings by the U.S. Supreme Court in the second half of the 20th century, the First Amendment provides strong protection to journalists in defamation lawsuits. Public officials can’t recover damages for libel without first proving that the defendant acted with “actual malice”: knowing that a statement was false or showing reckless disregard for the truth.
“That just doesn’t work very well with an algorithm,” says Lyrissa Lidsky, dean of the University of Missouri School of Law and an expert in First Amendment law. “It’s hard to talk about the knowledge that an algorithm has or whether an algorithm acted recklessly.”
Bots don’t make conscious choices when producing content. They behave on the basis of human-written code. Yet, programmers may not always be able to predict every single word of a story or its connotation, especially when machine learning is involved.
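To make that point concrete, here is a minimal, hypothetical sketch of how a rule-based story generator works (the function, company name, thresholds and phrasing below are illustrative assumptions, not any outlet’s actual system). The output is fully determined by the code and the incoming data, so if the data feed is wrong, the bot publishes a false statement without anything resembling knowledge or recklessness.

```python
# Hypothetical sketch of a rule-based earnings bot. All names, thresholds
# and phrasing are illustrative assumptions, not any outlet's real system.

def earnings_story(company: str, expected: float, actual: float) -> str:
    """Turn structured earnings data into a one-sentence story."""
    if actual >= expected:
        verdict = "beat"
    else:
        # One hard-coded phrase covers every miss, whatever the cause.
        # If the data feed supplies a wrong `actual`, the bot will
        # confidently publish a false claim with no way of "knowing" it.
        verdict = "fell short of"
    return (f"{company} reported earnings of ${actual:.2f} per share, "
            f"which {verdict} analyst expectations of ${expected:.2f}.")

print(earnings_story("Acme Corp", expected=1.10, actual=0.87))
```

With machine learning in the pipeline, even this much predictability disappears: the phrasing is no longer a fixed template the programmer can read in advance.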
“As these cases start to arise and be litigated, there’s going to be a lot of education of the public about how algorithms work and what choices are made in designing algorithms,” Lidsky says.
While a bot cannot act with actual malice, the people who design its algorithms and publish its output can, and those choices are where courts are likely to look.