Keywords: counterterrorism; social media; EU; hate speech; radicalization; fake news; European Internet Forum; censorship creep; terms of service; transparency; hash; free expression
Silicon Valley has long been viewed as a full-throated champion of First Amendment values. The dominant online platforms, however, have recently adopted speech policies and processes that depart from the U.S. model. In an agreement with the European Commission, tech companies have pledged to respond to reports of hate speech within twenty-four hours, a hasty process that may trade valuable expression for speedy results. Plans have been announced for an industry database that will allow the same companies to share hashed images of banned extremist content for review and removal elsewhere.

These changes are less the result of voluntary market choices than of bowing to governmental pressure. Private speech rules and policies about extremist content have been altered to stave off threatened European regulation. Far more than illegal hate speech and violent terrorist imagery are in EU lawmakers’ sights; so too are online radicalization and “fake news.” Newsworthy content may end up being removed along with terrorist beheading videos, “kill lists” of U.S. servicemen, and instructions on how to blow up houses of worship.

The impact of extralegal coercion will be far reaching. Unlike national laws, which are limited by geographic borders, terms-of-service agreements apply to platforms’ services on a global scale. Whereas local courts can only order platforms to block material viewed in their jurisdictions, a blacklist database raises the risk of total censorship. Companies should counter the serious potential for censorship creep with definitional clarity, robust accountability, detailed transparency, and ombudsman oversight.