June 12th, 2024 | Amelia Frank

Edited by Stella Lee

The issue of whether internet platforms have an obligation to restrict harmful speech and content has been hotly contested, figuring prominently in Supreme Court and federal appellate cases like Twitter v. Taamneh, Gonzalez v. Google [1], Force v. Facebook [2], and Dyroff v. The Ultimate Software Group [3]. Twitter v. Taamneh [4], decided in May 2023, concerned the killing of a Jordanian citizen in an ISIS attack on the Reina nightclub in Istanbul. Abdulkadir Masharipov carried out the attack in the name of ISIS after receiving training from al-Qaeda in Afghanistan. In January 2017, Masharipov opened fire on a large crowd, killing thirty-nine people and injuring seventy others. The statement ISIS released the following day took full and clear responsibility for the Reina attack, couching it in religious rhetoric. The plaintiffs, in this case the victim’s family, sued Twitter, Facebook, and Google (which owns YouTube) rather than ISIS. They claimed that Twitter actively participated in aiding and abetting terror recruitment and attack coordination through its recommendation algorithms, in addition to profiting from advertisements placed on ISIS’s propaganda posts. The language of “aiding and abetting” originates in the Justice Against Sponsors of Terrorism Act, which imposes secondary civil liability under conditions of conspiracy or “knowingly providing substantial assistance” [5]. The onus was therefore on the plaintiffs to establish Twitter’s aiding and abetting under one or both of these statutory limitations. With origins in common law, aiding and abetting also hinges on elements of culpability and a voluntary or conscious choice. This raises an interesting question: does the creation of recommendation algorithms satisfy the voluntary assistance element, which would require that a publisher of third-party content be “voluntarily aiding” the dissemination of harmful content?

A previous ruling by the U.S. Court of Appeals for the Ninth Circuit had held that these social media companies met the aiding and abetting threshold that allows plaintiffs to seek damages under the Anti-Terrorism Act. The Supreme Court reversed this ruling on the grounds that the Ninth Circuit’s decision obscured the “aiding and abetting” criterion. The crux of the reversal’s justification was that the Ninth Circuit focused on the value the platforms brought to ISIS rather than on the platforms’ culpable association with ISIS’s actions. Furthermore, the Ninth Circuit attempted to draw bright-line distinctions in the matter of aiding and abetting, ignoring common law and the Halberstam precedent. Halberstam established that a defendant must possess a general awareness of their role in an illegal or tortious activity [6]. The case also entrenched the twin requirements of conscious and culpable participation in aiding and abetting. In the Ninth Circuit decision, the plaintiffs alleged only that the defendants designed their platforms knowing that content removal would prove difficult given how recommendation algorithms work. Because recommendation algorithms sort through metadata, data that describes other data, such as tags, timestamps, and engagement signals, one could argue that it would be extremely difficult to comb through ISIS content and flag it or fully remove it from a platform. Justice Clarence Thomas affirmed this in his opinion, citing that every minute roughly 500 hours of video are uploaded to YouTube, 510,000 comments are posted on Facebook, and 347,000 tweets are shared on Twitter. His argument implies that Google, Facebook, and Twitter act as neutral computer services: recommendation algorithms and advertisement placement are integral to each platform’s functioning. Thomas also points out that, under the plaintiffs’ theory, any U.S. national who fell victim to a terrorist attack could bring the same claims against social media companies for aiding in the carrying out of attacks and recruitment. From a technological standpoint, restructuring how these platforms work would mean a complete overhaul of their foundational infrastructure. Removing recommendation algorithms would produce a vastly different consumer experience, one in which an individual would have to explicitly search for exact content rather than encountering videos, posts, and tweets surfaced from prior searches and interests. Recommendation algorithms have streamlined the internet, generated an immense amount of consumption, and enabled the functioning of the free internet as we know it today.
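To make concrete why these systems surface content without exercising judgment about its substance, here is a minimal sketch of a content-based recommender. It is purely illustrative, not any platform’s actual system: posts carry only metadata tags, a user profile is aggregated from prior engagement, and ranking is a simple overlap score. All names and data in the sketch are hypothetical.

```python
from collections import Counter

# Hypothetical posts: the system sees only metadata (tags), never the
# substance of what a post actually says.
posts = {
    "post_1": {"tags": {"news", "politics", "middle-east"}},
    "post_2": {"tags": {"cooking", "video"}},
    "post_3": {"tags": {"politics", "video", "middle-east"}},
}

def build_profile(engagement_history):
    """Aggregate tags from posts the user previously engaged with."""
    profile = Counter()
    for post_id in engagement_history:
        profile.update(posts[post_id]["tags"])
    return profile

def rank(profile, candidates):
    """Order candidate posts by how strongly their tags match the profile.

    Note that nothing here inspects content: harmful posts whose tags
    overlap a user's interests are surfaced exactly like benign ones.
    """
    def score(post_id):
        return sum(profile[tag] for tag in posts[post_id]["tags"])
    return sorted(candidates, key=score, reverse=True)

# A user who engaged with post_1 gets post_3 ranked above post_2,
# based solely on overlapping metadata.
profile = build_profile(["post_1"])
print(rank(profile, ["post_2", "post_3"]))  # ['post_3', 'post_2']
```

Even in this toy version, the ranking step is mechanical; identifying and removing a specific category of content would require an entirely separate classification layer on top of it.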

The Supreme Court’s ruling that Twitter and the other platforms did not aid and abet the ISIS attack in Istanbul falls in line with a similar Supreme Court case decided the same day: Gonzalez v. Google. Both cases uphold the necessity of recommendation algorithms, and the outcome in Google’s favor in particular suggests that forced removal of content would undermine the free internet. Simultaneously true, however, is that these platforms have had a heavy hand in the dissemination of ISIS propaganda over the past few years. According to PIRUS data, in 2016 alone social media played a role in 90% of extremist radicalization cases [7]. Events like the January 6th insurrection, or even the recent Hamas attack, could not have occurred at the same scale without social media coordination and information circulation. The data show that while social media use does not improve the success rate of radicalization, it certainly accelerates terror radicalization and mobilization by streamlining recruitment. The implicated platforms also possess the ability to sift through content and ban or “shadow ban” certain content creators or published information. Though moderation may not always be accurate, these platforms have stripped many influential individuals of their power through content removal.
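For intuition, the two moderation levers just mentioned, outright removal and the quieter shadow ban, can be sketched in a few lines. This is entirely hypothetical: real platforms rely on machine-learning classifiers and human review rather than simple account lists.

```python
# Hypothetical moderation lists; real systems are classifier-driven.
BANNED_ACCOUNTS = {"propaganda_account"}        # content removed outright
SHADOW_BANNED_ACCOUNTS = {"borderline_account"}  # content hidden from feeds

def filter_feed(candidate_posts):
    """Apply both moderation levers before a feed is served."""
    visible = []
    for post in candidate_posts:
        if post["author"] in BANNED_ACCOUNTS:
            continue  # removal: the post is never served to anyone
        if post["author"] in SHADOW_BANNED_ACCOUNTS:
            continue  # shadow ban: the author can still post, but
                      # recommendation feeds silently skip their content
        visible.append(post)
    return visible

feed = filter_feed([
    {"author": "propaganda_account", "text": "..."},
    {"author": "borderline_account", "text": "..."},
    {"author": "ordinary_account", "text": "..."},
])
print([p["author"] for p in feed])  # ['ordinary_account']
```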

Twitter’s own policy reads that one may not “threaten terrorism and/or violent extremism nor promote violent and hateful entities” [8]. Yet this clearly does not hold up empirically. One could posit that if Twitter and other media platforms ban or remove content, ISIS and other entities will simply move to platforms like Telegram or Paltalk, or to other encrypted services, for added security. But this argument is undermined by reach: as of 2023, Twitter has 528.3 million monetizable active users [9], YouTube has more than 2.7 billion active users [10], and Facebook boasts nearly 3 billion [11].

Though the legal logic of the unanimous Supreme Court decision in Twitter’s favor is sound, there is much to be said for promoting a regulatory environment for platforms. As terror attacks grow in lethality through advances in drone and weapons technology, social media platforms like Twitter may only be lending a helping hand to propaganda and recruitment campaigns. Whether legal change comes through Supreme Court decisions or new legislation, there is great impetus for politicians and world leaders to intervene. The Digital Services Act, enacted this past year in the EU, takes important steps toward regulation and should be replicated by policymakers in the U.S. The legislation first categorizes platforms as minor enterprises or as large-scale internet operations that could pose systemic risk [12]. Platforms identified as having the capacity to inflict great harm on consumers are regulated at either the system or the design level. This regulation should not be conflated with censorship; rather, in service of the larger goal of security, it requires platforms to systematize risk management in the design of their algorithms. To reduce error in these content-sifting systems, the U.S. should allocate further funding toward the improvement of algorithmic and AI moderation technology. In conjunction with the proposed government regulation, platforms would gain the most by self-regulating. Many larger social media companies have eschewed self-regulation on the pretext that it would produce company-wide economic contraction, but this claim is contradicted by the record in similar sectors. Increased self-regulation would yield long-term growth for platforms like Twitter by producing safer platforms that are not subject to lawsuits or shifting regulations. By embracing corporate social responsibility, companies preemptively circumvent problems that emerge from a looser regulatory environment, creating prospects for platform stability and success.
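As a rough illustration of the DSA’s tiering, the sketch below sorts platforms by average monthly EU users against the Act’s stated threshold for “very large online platforms” (45 million, roughly 10% of the EU population). The platform names and figures are placeholders; actual designation is a formal European Commission decision, not a simple head count.

```python
# DSA Article 33 sets the "very large online platform" (VLOP) threshold
# at 45 million average monthly active recipients in the EU (~10% of the
# EU population). Figures below are illustrative placeholders.
VLOP_THRESHOLD = 45_000_000

platforms = {
    "ExampleSocial": 120_000_000,  # hypothetical large platform
    "NicheForum": 2_500_000,       # hypothetical small platform
}

def dsa_tier(monthly_eu_users: int) -> str:
    """Classify a platform into the DSA's broad obligation tiers."""
    if monthly_eu_users >= VLOP_THRESHOLD:
        # VLOPs face extra duties: systemic-risk assessments, independent
        # audits, and regulator scrutiny of recommender-system design.
        return "very large online platform (systemic-risk obligations)"
    return "standard platform (baseline due-diligence obligations)"

for name, users in platforms.items():
    print(f"{name}: {dsa_tier(users)}")
```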

Sources:

[1] Gonzalez v. Google, No. 21-1333 (U.S. May 18, 2023).

[2] Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).

[3] Dyroff v. The Ultimate Software Group, No. 18-15175 (9th Cir. Aug. 20, 2019).

[4] Twitter v. Taamneh, No. 21-1496 (U.S. May 18, 2023).

[5] U.S. Congress. Justice Against Sponsors of Terrorism Act. Public Law 114-222, 114th Cong., 2nd sess., 28 Sept. 2016, govinfo.gov/content/pkg/PLAW-114publ222/html/PLAW-114publ222.htm.

[6] Halberstam v. Welch, No. 81-0903, mem. op. at 5 (D.D.C. Mar. 24, 1982).

[7] National Consortium for the Study of Terrorism and Responses to Terrorism, comp., The Use of Social Media by United States Extremists, accessed December 3, 2023, https://www.start.umd.edu/pubs/START_PIRUS_UseOfSocialMediaByUSExtremists_ResearchBrief_July2018.pdf.

[8] Twitter, “Violent and Hateful Entities Policy,” Twitter Help Center, last modified April 2023, accessed December 3, 2023, https://help.twitter.com/en/rules-and-policies/violent-entities#:~:text=You%20may%20not%20threaten%20terrorism,means%20to%20further%20their%20cause.

[9] Rohit Shewale, “Twitter Statistics In 2023 — (Facts After ‘X’ Rebranding),” Demand Sage, last modified September 2023, https://www.demandsage.com/twitter-statistics/#:~:text=Twitter%20Statistics%202023%20(Top%20Picks)&text=Twitter%20has%20around%20528.3%20million,billion%20in%20revenue%20in%202022.

[10] Rohit Shewale, “YouTube Statistics For 2023 (Demographics & Usage),” Demand Sage, last modified September 2023, accessed December 3, 2023, https://www.demandsage.com/youtube-stats/#:~:text=As%20of%202023%2C%20YouTube%20is,in%20the%20world%20access%20YouTube.

[11] Simon Kemp, “Facebook Users, Stats, Data & Trends,” Datareportal, last modified May 2023, accessed December 3, 2023, https://datareportal.com/essential-facebook-stats#:~:text=Number%20of%20Facebook%20users%20in,)%3A%202.989%20billion%20(April%202023)&text=Number%20of%20people%20who%20use,)%3A%202.037%20billion%20(April%202023)&text=Share%20of%20Facebook%27s%20monthly%20active,%3A%2068%25%20(April%202023)&text=Size%20of%20Facebook%27s%20global%20advertising,2.249%20billion*%20(April%202023

[12] European Union, “The Digital Services Act Package,” European Commission, last modified September 2023, accessed December 3, 2023, https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package.
