The U.S. Supreme Court is about to review Section 230 of the Communications Decency Act, a law that has shielded internet companies from liability for user-generated content for decades. The justices' decision could be a game-changer for developers of AI-powered search tools like Google's Bard or Microsoft's Bing.
The key provision of Section 230 states: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." In practice, this protects "interactive computer service" companies from lawsuits over third-party content while also allowing them to moderate that content as they see fit, even though moderation decisions routinely draw outside criticism.
Next week, the Supreme Court will hear arguments in Gonzalez v. Google. The plaintiffs' relative was killed in the 2015 terrorist attacks in Paris, and they argue that YouTube's recommendation algorithms promoted terrorist recruitment videos before the attacks. Google is backed by both large IT corporations and online communities such as Reddit, who argue that a ruling against it would set a dangerous precedent and could expose non-algorithmic forms of recommendation, and even individual users, to lawsuits.
At the hearing, the justices will consider whether algorithmic recommendations deserve the full legal protection of Section 230.
Traditional search engine interfaces can likely rely on that protection even when they link to inaccurate information, since they merely point to content produced by other sources.
The situation with search chatbots is more complicated. If they receive Section 230 protection, they could repeat defamation and inaccurate claims with impunity merely by slightly rephrasing the original text.
Meanwhile, Microsoft CEO Satya Nadella believes the AI-powered Bing faces the same legal challenges as a conventional search engine, with copyright infringement in generated content being the main source of claims.
Both IT giants' chatbots have already been caught making mistakes: Google's Bard made a factual error in its first public demonstration, and Bing AI gave incorrect information several times during its presentation.