For the first time in the internet’s history, revising Section 230 of the Communications Decency Act, which immunizes social media platforms from liability for third-party content, seems to many not just viable, but necessary. Most such calls for reform are built around the longstanding common law liability principles of duty and reasonableness, namely conditioning Section 230 immunity on platforms acting reasonably to “prevent or address” third-party content that might be harmful or illegal. These reforms find common cause with several legislative and executive efforts seeking to compel platforms to adhere to “reasonable” or “politically neutral” moderation policies or else face increased liability for user speech. And calls for entirely new regulatory regimes for social media, some of which would also create new federal agencies to implement them, advocate similar approaches.
This Article is the first comprehensive response to these efforts. Using the guidance of the common law to unpack the connections between reasonableness, imminence, and intermediary liability, this Article argues that these proposed reforms are misguided as a matter of technology and information policy and are so legally dubious that they have little chance of surviving the court challenges that would inevitably follow their adoption. It demonstrates the many problems with adopting a common-law-derived standard of civil liability like “reasonableness” as a regulatory baseline for prospective platform intermediary fault. “Reasonableness”-based Section 230 reforms would also lead to unintended, speech-averse results. And even if Section 230 were revised, serious constitutional problems would remain with holding social media platforms liable, either civilly or criminally, for third-party user content.