Section 230 of the Communications Decency Act of 1996 has lately drawn lawmakers' scrutiny, mainly over how much liability social media platforms such as Twitter and Instagram bear for content posted on their sites. Section 230 has traditionally shielded these platforms, as it did telecom providers, by treating them as carriers rather than publishers; publishers are held to a higher standard and thus carry more liability for what they communicate. With regulation of AI nearly nonexistent, the potential for defamation claims against AI chatbots is rising. Section 230, however, may provide some protection.
According to an analysis from Oklahoma-based law firm Worden & Carbitcher (wordenfirm.com), Section 230 makes AI chatbots, and their owners, less likely than humans to be accused of defamation or of spreading false information. Research from Worden & Carbitcher indicates that AI chatbots have been involved in just two percent of all defamation cases, compared with nearly a third (32 percent) for human users.
“Section 230 has been a game-changer for AI chatbots, providing legal protection and allowing businesses to leverage this technology with confidence,” says Attorney Andrea Worden, the firm’s founder. “It has revolutionized the way we interact with technology and has opened up new opportunities for businesses to improve their customer experience.”