Amid Terrorists’ Use of Social Media, Where Does a Social Media Platform’s Right to Ensure Free Speech End?

Thursday, April 13th, 2017 at 6:17 pm by Caroline Black

The Islamic State (ISIS) and other terror groups have revolutionized terror recruitment by using social media to fundraise, recruit, indoctrinate, and train new members. ISIS’s social media outreach, especially on Twitter, has spurred people from around the world to join its ranks and to plan and carry out attacks in their home countries. ISIS has, for instance, specifically targeted women and girls as recruits through social media. Indeed, the largest U.S. technology companies have been described as “the command-and-control networks of choice for terrorists,” as terror organizations increasingly use Twitter for recruitment and YouTube to post content such as beheading videos, which have been reposted thousands of times.

In response to terrorists’ use of social media, Google, Facebook, Twitter, YouTube, and others have suspended thousands of accounts for threatening or promoting acts of terrorism. However, the families of victims of terror attacks in Israel, Paris, and Brussels do not feel these companies have done enough, and have filed suit. For instance, Cain v. Twitter was filed in December 2016 by the widow of a victim of the Brussels airport bombing and the mother of a victim of the 2015 Paris attacks. Cain seeks monetary damages for the victims’ deaths and accuses Twitter of having “knowingly provided material support and resources to ISIS in the form of Twitter’s online social network platform and communication services.” The plaintiffs further assert that ISIS has “used and relied on Twitter’s online social network platform and communications services as among its most important tools to facilitate and carry out its terrorist activity,” including the attacks in Brussels and Paris. Cain’s lawyer stated that “among social media platforms, Twitter has most brazenly refused to cut off its services to terrorists, taking the position that ‘the tweets must flow’ even if it means assisting in mass murders.” Twitter has done all this, the suit alleges, “despite receiving numerous complaints and widespread media and other attention for providing its online social media platform and communications services to ISIS.”

Another suit, Gonzalez v. Twitter, similarly claims that Twitter, Google, and Facebook have provided “material support” to ISIS by “knowingly permitt[ing] … ISIS to use their social networks as a tool for spreading extremist propaganda,” and that the growth of ISIS would not have been possible without social media. Gonzalez seeks damages for the death of his daughter in the Paris attacks. Similar suits have also been filed over social media platforms’ alleged role in the Pulse nightclub shooting in Florida.

Although the question of whether social media platforms may be held liable for providing material support to terrorists, and as a result be required to pay damages, remains open, the suits seem unlikely to succeed. Last year, a federal judge dismissed a case, brought over a contractor’s terrorism-related death in Jordan, that blamed Twitter and Facebook for facilitating the rise of terrorism. In addition, as many legal experts have pointed out, Section 230 of the Communications Decency Act shields digital platforms from liability for user-generated and other third-party content. This provision has been crucial to the growth of a wide array of platforms built on user-generated content, from YouTube to blog comments. So far, the resistance to terrorists’ use of social media platforms has been limited to platforms’ voluntary removal of pages and content. However, with terror attacks, both at home and in Europe, becoming increasingly frequent, and many in some way linked to social media platforms, public opinion and the law may begin to hold social media platforms liable if they fail to take adequate action against terrorists using their sites. Nevertheless, government-mandated self-policing by social media platforms would face numerous obstacles.

The first major issue is, obviously, free speech and the First Amendment. If Congress eventually passes a law allowing social media platforms to be held liable, would the Supreme Court strike that law down as a violation of the platforms’ right to free speech? One person’s hate speech is another person’s free speech, and even hateful speech is generally protected under the Constitution.

Second, it is unclear how Congress would define the content it would require social media platforms to remove. Essentially, how would Congress legally differentiate between propaganda that “promotes terrorism” and protected political speech? And how would it ensure that its attempt to protect does not veer into censorship?

A third major issue is whether, as a society, Americans want social media platforms to take on the task of aggressively pursuing and removing any and all speech the platforms see fit. How much freedom of expression will Americans in particular (as many social media platforms are headquartered in the U.S.) be willing to give up in exchange for a possible increase in safety for everyone around the world? If America’s reaction to the FBI’s request that Apple unlock the San Bernardino shooter’s iPhone is any indication, Americans will not give up their nearly unrestrained freedom of expression on social media, particularly on Twitter, without great benefits in return. And those benefits are, in this case, impossible to quantify or guarantee.

Overall, the status quo, in which social media platforms self-police as they see fit, relying on users reporting other users and acting only against the most egregious promotion of terrorism, does not seem likely to change any time soon. However, if terrorist attacks and terrorism promoted and propagated through social media continue to occur with greater frequency and the reality of the danger hits closer to home, will America begin to view free speech a little more as Europe does? And will companies face the legal uncertainty of never knowing whether they are doing enough to avoid liability should their platforms somehow be linked to terrorism?