On one hand, social media companies say they are merely platforms for people who post content, and that as platform providers they are not responsible for what appears. On the other hand (isn’t there always), they actively determine what appears on their platforms, much as newspapers decide what stories to run.
Can social media platforms really say with a straight face that they are not responsible for what appears on their platforms when they determine what constitutes suitable content?
Internet social media platforms are granted broad safe harbor protections against legal liability for any content users post on their platforms. The arguments these platforms make for escaping legal liability are spelled out in one sentence in Section 230 of the 1996 Communications Decency Act: “No provider of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In essence, Section 230 gives websites immunity from liability for what their users post.
As Congress considers amending or repealing Section 230, perhaps one immediate step should be to give the Federal Communications Commission oversight of the platforms’ content decisions.
The Communications Decency Act passed in 1996 when the Internet was in its infancy and Congress was concerned that subjecting hosting platforms to the same civil liability as all other businesses would retard their growth. It was written before Facebook and Google existed.
In effect, big tech companies benefit from a federal law that specifically protects them. The same sweetheart deal is not available to traditional media companies and publishers. When you grant platforms complete immunity for the content their users post, you also reduce their incentives to remove content that causes social harm.
Congress’s expectation in enacting Section 230 was at least two-fold. First, it hoped protection from civil suits would provide an incentive for websites to create a family-friendly online environment that would shield children, hence the Good Samaritan title of this section. Second, Congress hoped it would promote the growth of the fledgling Internet economy by giving it partial protection from federal and state regulation.
Fast forward 25 years and things look a whole lot different than they did in 1996. The Section 230 protections are now desperately out of date. The largest and most powerful companies today are big tech firms with enormous resources and advanced algorithms for moderating content. It is time to rethink and revise the protections.
There is growing consensus that Section 230 needs updating. Both Democrats and Republicans apparently agree that these companies should not receive this government subsidy free of any responsibility, and that they should moderate content in a politically neutral manner to provide “a forum for a true diversity of political discourse.” During his presidential campaign, President Biden said Section 230 should be “revoked, immediately.” Senator Lindsey Graham (R-SC) has said: “Section 230 as it exists today has got to give.”
Before amending Section 230, Congress should make sure that changing it won’t do more harm than good. While lawmakers argue about whether Section 230 should be amended or indeed repealed, one simple and immediate step toward making big tech companies more transparent would be to require them to submit to an external audit conducted by the Federal Communications Commission.
Such an approach is not perfect, of course, but it would force the platform companies to prove that their algorithms and content-removal practices are politically neutral rather than partisan instruments, and that they prioritize truthfulness and accuracy over user engagement.
This would be consistent with one of Congress’s findings when it enacted Section 230: “The Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”