December 31st, 2024 | Leo Lin
Edited by Peyton Mikolayek
The integration of artificial intelligence (AI) in the criminal justice system has introduced a new era of predictive capabilities through tools like predictive policing algorithms. Designed to enhance law enforcement efficiency, these algorithms analyze historical crime data to predict potential criminal activities and identify “high-risk” individuals and locations. Proponents argue that AI-driven policing offers an unprecedented level of objectivity, efficiency, and resource optimization in crime prevention. However, significant concerns about racial bias, fairness, and transparency emerge as these tools gain traction.
Critics worry that the data that drives these algorithms often reflects historical biases, such as the disproportionate policing and arrest rates in minority communities, resulting in a self-perpetuating cycle of prejudice against these groups. This raises an essential question: do predictive policing algorithms inadvertently amplify systemic biases within the criminal justice system? Fundamental rights protected by the U.S. Constitution are at stake, particularly under the Fourteenth Amendment’s Equal Protection Clause, which prohibits discriminatory practices. If predictive policing disproportionately targets certain communities based on flawed data, it risks undermining the very principles of fairness and justice it seeks to uphold.
This article examines whether predictive policing algorithms can coexist with constitutional protections, particularly under the Fourteenth Amendment. By analyzing relevant constitutional law, the Equal Protection Clause, and potential Fourth Amendment privacy concerns, this article aims to address both the promises and pitfalls of predictive policing. In weighing the benefits and risks, it seeks to provide a balanced perspective on whether predictive policing can truly be an instrument of justice or whether it instead perpetuates inequity within the criminal justice system.
Overview of Predictive Policing Algorithms
Predictive policing algorithms are data-driven tools that predict where crimes are likely to occur or identify individuals who are likely to be involved in criminal activity, allowing law enforcement to shift from reactive to proactive crime prevention. Drawing on decades of crime data, these algorithms help agencies gather and interpret connections and patterns across financial records, geospatial imagery, surveillance camera footage, social media data, public records, news feeds, and other open and proprietary sources. A better understanding of the time, location, and demographic details of crime allows agencies to allocate their scarce resources more effectively, with the goal of reducing crime.
Predictive policing works through a complex structure of data input, pattern recognition, risk scoring, and continuous learning. The algorithms are first fed large datasets, including detailed historical crime records such as arrest rates and types of offenses, geographic information like neighborhood layouts and crime hotspots, and, in some cases, demographic details such as the age, ethnicity, and socioeconomic status of individuals associated with criminal activity. Machine learning models then analyze the data to detect recurring crime patterns, such as the correlation between time of day, type of crime, and location. Following analysis, areas or individuals are assigned a “risk score,” which reflects the likelihood of criminal activity based on factors such as recent crime trends, the frequency of past offenses, demographic and socioeconomic indicators, and geographic characteristics like proximity to known crime hotspots. A high risk score prompts increased law enforcement presence or surveillance of those areas or individuals, while a low score may result in fewer resources being allocated. These scores carry significant implications, as they can influence how communities are policed and raise concerns about over-policing or neglect in certain areas. New data inputs, such as recent crime reports or arrest records, feed back into the models to refine future predictions; in theory, this accumulation of data should make the predictions more accurate.
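To make this pipeline concrete, the sketch below shows, in Python, how a toy place-based risk-scoring model might be trained and applied. The features, data values, and choice of a logistic regression model are illustrative assumptions made for this article; they do not describe any deployed system.

```python
# Illustrative sketch of a place-based risk-scoring model.
# All features, data, and weights are hypothetical; no real system is represented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical features per map cell:
# [incidents in last 30 days, incidents in last 365 days, distance to known hotspot (km)]
X_history = np.array([
    [4, 60, 0.2],
    [0,  3, 2.5],
    [2, 25, 0.8],
    [7, 90, 0.1],
    [1, 10, 1.9],
    [0,  1, 3.0],
])
# Label: whether a reported incident occurred in the following week (hypothetical)
y_history = np.array([1, 0, 1, 1, 0, 0])

# "Pattern recognition": fit a model to the historical data
model = LogisticRegression().fit(X_history, y_history)

# "Risk scoring": score current cells and rank them for patrol allocation
X_today = np.array([
    [3, 40, 0.3],   # cell A
    [0,  5, 2.2],   # cell B
])
risk_scores = model.predict_proba(X_today)[:, 1]
for cell, score in zip(["A", "B"], risk_scores):
    print(f"cell {cell}: risk score {score:.2f}")

# "Continuous learning": as new reports arrive, the cycle repeats -- but the new
# labels are themselves shaped by where patrols were sent in the previous cycle.
```

The final comment points to the design issue at the heart of the debate: the “new data” the model learns from is partly a record of where officers were directed to look, which is the feedback concern discussed in the next section.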
Four main types of information inform the algorithms’ datasets. The first is historical crime records, including the location and type of each offense. Geographic and environmental factors of crime-prone locations, such as street lighting or nearby public facilities, are also a crucial portion of the dataset. Furthermore, many models use demographic information associated with past crimes, stirring ethical concerns about reinforcing stereotypes. Finally, predictive policing algorithms factor in time-related data to predict high-risk periods, such as particular days or seasons.
The Equal Protection Clause and Systemic Bias Concerns
The Fourteenth Amendment’s Equal Protection Clause requires that laws and policies, including those governing law enforcement, not result in unjust discrimination. While jurisdictions that use these tools argue they enhance public safety, there is growing evidence that AI-driven predictive policing perpetuates racial bias, violates privacy rights, and undermines public trust in law enforcement. This is because the predictions rest on compiled records of past crimes and past police activity. Relying heavily on that history builds bias into policing decisions, as the data reflect the disproportionate harm that targeted over-policing and discriminatory criminal laws have inflicted on Black communities. A letter from US Senators to the Department of Justice (DOJ) makes this point, arguing that “mounting evidence indicates that predictive policing technologies do not reduce crime… Instead, they worsen the unequal treatment of Americans of color by law enforcement” [1]. The Senators called for defunding predictive policing systems until more comprehensive AI regulations are developed. Although predictive policing does not explicitly target individuals based on race, its outcomes may still disproportionately affect certain groups, which raises the question of whether such systems violate equal protection by perpetuating systemic bias.
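The self-reinforcing dynamic described above can be illustrated with a deliberately simplified simulation. In the sketch below, two neighborhoods are given identical underlying offense rates by construction; the only difference is how heavily each is patrolled at the start. All numbers are arbitrary assumptions chosen to show the mechanism, not empirical estimates.

```python
# Toy simulation of the feedback loop critics describe: recorded crime depends
# on where police look, and where police look depends on recorded crime.
# All values are invented for illustration.

true_offense_rate = {"Neighborhood A": 10.0, "Neighborhood B": 10.0}  # identical by construction
patrol_share = {"Neighborhood A": 0.7, "Neighborhood B": 0.3}         # historical over-policing of A

for period in range(1, 6):
    # Detected incidents scale with patrol presence, not with the true rate alone
    detected = {n: true_offense_rate[n] * patrol_share[n] for n in patrol_share}
    total = sum(detected.values())
    # "Data-driven" reallocation: next period's patrols follow detected crime
    patrol_share = {n: detected[n] / total for n in detected}
    print(f"period {period}: detected={detected}, next patrol share={patrol_share}")
```

Even though the two neighborhoods are identical by construction, the recorded data reproduce the initial imbalance in every period: Neighborhood A keeps generating more “evidence” of crime simply because it is watched more closely, and the model keeps sending officers back.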
Under current law, proving an Equal Protection violation typically requires demonstrating discriminatory intent, not just discriminatory impact, as established in Washington v. Davis, in which the Supreme Court held that a policy with a disproportionate impact on a racial group does not violate the Equal Protection Clause absent clear evidence of intentional discrimination. Predictive policing, which is designed to be objective, lacks such intent. However, critics argue that the impact of these tools should matter just as much as intent, given the significant role they now play in policing. Emerging legal approaches, like those seen in State v. Loomis, highlight growing judicial awareness of algorithmic bias: there, a defendant challenged the use of a risk assessment algorithm in his sentencing on the grounds that it was opaque and potentially biased. Some courts, including the Loomis court, have urged caution in using such tools and emphasized the risk of reinforcing inequities given the lack of transparency and accountability in their design and operation. As the debate continues, some advocates suggest a “disparate outcome” standard for algorithms, similar to disparate-impact doctrine under Title VII employment law, which would allow bias to be addressed without proof of intent [2].
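If courts or regulators adopted such a standard, one way it might be operationalized is a disparate-impact audit comparing how often an algorithm flags members of different groups, loosely analogous to the “four-fifths rule” used in Title VII analysis. The sketch below uses invented counts and hypothetical group labels; it illustrates the metric, not a legal test.

```python
# Hypothetical disparate-impact audit of an algorithm's "high-risk" flags,
# borrowing the ratio-of-rates idea behind Title VII's four-fifths rule.
# All counts and group labels are invented for illustration.

flags = {"group_1": 180, "group_2": 60}        # people flagged "high risk"
scored = {"group_1": 1_000, "group_2": 1_000}  # people scored in each group

rates = {g: flags[g] / scored[g] for g in flags}

# In employment, selection is a benefit, so the four-fifths rule asks whether a
# group's selection rate falls below 80% of the highest group's rate. Being
# flagged for extra policing is a burden, so the analogous question is whether
# one group's flag rate greatly exceeds another's.
ratio = max(rates.values()) / min(rates.values())

for group, rate in rates.items():
    print(f"{group}: flag rate {rate:.1%}")
print(f"highest-to-lowest flag-rate ratio: {ratio:.1f}x")
```

A reporting requirement built around a metric like this would give courts and oversight bodies something concrete to examine, even where intent cannot be shown.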
Beyond legal doctrine, predictive policing contributes to structural discrimination by disproportionately monitoring communities that are already subject to heightened police scrutiny. Some critics term this effect “digital redlining,” wherein algorithms reflect and reinforce historical inequities [3]. To ensure compliance with the Equal Protection Clause, predictive policing tools need regulation to prevent them from replicating and amplifying existing biases.
Privacy and the Fourth Amendment
The Fourth Amendment protects citizens against “unreasonable searches and seizures,” which traditionally requires that law enforcement actions intruding upon privacy be supported by probable cause [4]. Predictive policing algorithms, however, raise unique privacy concerns, particularly given their reliance on data-driven predictions to identify individuals and locations deemed “at risk” of crime without concrete suspicion of wrongdoing.
Predictive policing often involves surveillance of individuals and neighborhoods based on algorithmic outputs. Although these predictions rely on publicly available data, critics argue that the widespread data collection and use of AI for surveillance erodes privacy protections. Under Katz v. United States, Fourth Amendment protections apply where there is an expectation of privacy that society deems reasonable. Predictive algorithms, however, contrast with traditional suspicion-based policing, in which an officer needs a specific reason or suspicion to investigate someone; instead, they generate risk profiles from data patterns that flag individuals as “high risk” without any individualized suspicion.
Predictive policing relies on historical crime data, geolocation information, and other surveillance tools to preemptively target specific areas and individuals. The use of such data, without individualized suspicion or probable cause, may constitute an “unreasonable search.” While the question has not yet been directly addressed in court, the Supreme Court’s decision in Carpenter v. United States, which held that accessing historical cell phone location data without a warrant violated the Fourth Amendment, signals that law enforcement’s reliance on digital data for surveillance may require a warrant, especially when it involves highly personal or location-specific data collected over time.
Proponents argue that predictive policing can improve safety by relying on data rather than invasive physical searches. Concerns arise, however, when algorithms direct more police presence to specific communities based on crime trends rather than direct evidence. Such approaches risk normalizing constant surveillance in targeted areas, eroding privacy while disproportionately impacting minority communities. As predictive policing evolves, its alignment with Fourth Amendment standards remains uncertain, potentially calling for stricter regulations or transparency requirements to protect privacy and uphold constitutional protections.
Conclusion
Predictive policing algorithms offer potential benefits in efficiency and crime prevention, but they raise profound constitutional concerns, especially under the Fourteenth Amendment’s Equal Protection Clause. While these algorithms aim for objective decision-making, their reliance on historical data risks entrenching systemic racial bias within the criminal justice system. And while traditional equal protection frameworks require proof of discriminatory intent, the significant and often unintended consequences of predictive policing warrant reconsidering whether impact should matter as much as intent.
As AI continues to transform criminal justice, there is a pressing need for regulatory standards that address accountability, transparency, and fairness. Policymakers and courts should explore frameworks to ensure these technologies do not compromise constitutional rights. By balancing innovation with civil liberties, the criminal justice system can become both more effective and more equitable.
Notes:
[1] NAACP, “Artificial Intelligence in Predictive Policing Issue Brief,” February 15, 2024, https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief.
[2] Ibid.
[3] Ángel Díaz, “Data-Driven Policing’s Threat to Our Constitutional Rights,” Brookings, September 13, 2021, https://www.brookings.edu/articles/data-driven-policings-threat-to-our-constitutional-rights/.
[4] Congress.gov, “U.S. Constitution – Fourteenth Amendment | Resources | Constitution Annotated | Congress.gov | Library of Congress,” 2024, https://constitution.congress.gov/constitution/amendment-14/#:~:text=No%20State%20shall%20make%20or,equal%20protection%20of%20the%20laws.
Sources:
[1] Ángel Díaz. 2021. “Data-Driven Policing’s Threat to Our Constitutional Rights.” Brookings. September 13, 2021. https://www.brookings.edu/articles/data-driven-policings-threat-to-our-constitutional-rights/.
[2] “Artificial Intelligence in Predictive Policing Issue Brief.” 2024. NAACP. February 15, 2024. https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief.
[3] “Artificial Intelligence and Predictive Policing: Risks and Challenges. Recommendation Paper 2.” n.d. EUCPN. https://eucpn.org/sites/default/files/document/files/PP%20%282%29.pdf.
[4] “Cogent | Blog | Predictive Policing Using Machine Learning (with Examples).” 2020. Cogentinfo.com. 2020. https://www.cogentinfo.com/resources/predictive-policing-using-machine-learning-with-examples.
[5] Halley, Catherine. 2022. “What Happens When Police Use AI to Predict and Prevent Crime? – JSTOR Daily.” JSTOR Daily. February 23, 2022. https://daily.jstor.org/what-happens-when-police-use-ai-to-predict-and-prevent-crime/.
[6] Heaven, Will Douglas. 2020. “Predictive Policing Algorithms Are Racist. They Need to Be Dismantled.” MIT Technology Review, July. https://doi.org/1085094/10-breakthrough-technologies-2024.
[7] “Interpretation: The Fourteenth Amendment Due Process Clause | Constitution Center.” 2015. National Constitution Center – Constitutioncenter.org. 2015. https://constitutioncenter.org/the-constitution/articles/amendment-xiv/clauses/701.
[8] “Surveillance and Predictive Policing through AI.” 2022. Deloitte. https://www.deloitte.com/global/en/Industries/government-public/perspectives/urban-future-with-a-purpose/surveillance-and-predictive-policing-through-ai.html.
[9] “Turning the Tide on Crime with Predictive Policing – Our World.” 2019. Unu.edu. 2019. https://ourworld.unu.edu/en/turning-the-tide-on-crime-with-predictive-policing.
[10] “U.S. Constitution – Fourteenth Amendment | Resources | Constitution Annotated | Congress.gov | Library of Congress.” 2024. Congress.gov. 2024. https://constitution.congress.gov/constitution/amendment-14/#:~:text=No%20State%20shall%20make%20or,equal%20protection%20of%20the%20laws.
[11] White, Margaret. 2024. “Navigating the Future of Policing.” Police Chief Magazine. April 3, 2024. https://www.policechiefmagazine.org/navigating-future-ai-chatgpt/.