The professionalization of America’s police forces that began in the early 1900s has had, as an obvious consequence, an increasing reliance on science and technology to address the crime problem. To this end, police have used crime mapping and criminological theory to complement foot and random motor patrols in order to suppress crime. More recently, police are turning to predictive analytics—sophisticated computer-generated algorithms—to predict (and, ideally, prevent) crime. At first glance, the allure of divining and preventing crime may seem free of potential pitfalls. A closer inspection, however, reveals several ethical dilemmas that implementers must acknowledge and address if the ideals embraced by predictive policing’s architects are to be achieved.
History and Origins of Predictive Policing
The arrival of the 20th century ushered in an increasing reliance on science and technology to rationally structure police operations. Police abandoned beat foot patrols in favor of random motor patrols while emphasizing rapid response and retroactive investigations to deter and solve crimes. In this sense, policing was largely reactive. In subsequent years, police began to consult emerging theories on crime and delinquency to identify environmental and structural factors that revealed where crime was concentrated—sometimes referred to as “hot spots”—and, through the analysis of statistics, to recognize that crime was not evenly distributed over time (e.g., home burglaries are more likely during the day, when residents are at work). In response, police began to allocate resources accordingly and to become more proactive in their approach.
Contemporary predictive policing is a product of this natural evolution of proactive policing. It has, however, a defining characteristic that distinguishes it from its ancestors. Generally speaking, predictive policing relies on “big data” and computer algorithms to derive statistical models suggesting where and when crime may occur. Police may be told, for example, that an automobile theft has a 30.2 percent chance of occurring in a particular geographic area (say, one or two city blocks) between 12:00 p.m. and 1:00 p.m. It is the generation, interpretation, and application of these probabilistic models, and the police responses that follow, that create ethical dilemmas and potential legal issues deserving consideration.
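For illustration only, the sketch below shows how such a place-and-time figure could be produced by a deliberately naive frequency count over historical records. The grid cells, incident list, and 90-day observation window are hypothetical, and no commercial system is limited to so simple a baseline.

```python
from datetime import datetime

# Hypothetical historical incident records as (grid_cell_id, timestamp).
# Real systems use far richer data and proprietary models; this is only
# a naive frequency baseline for illustration.
incidents = [
    ("cell_17", datetime(2013, 3, 4, 12, 40)),
    ("cell_17", datetime(2013, 3, 11, 12, 15)),
    ("cell_09", datetime(2013, 3, 11, 22, 5)),
    # ...many more records in practice
]

def hourly_theft_probability(cell, hour, records, n_days_observed):
    """Estimate P(at least one incident in `cell` during `hour`) as the
    fraction of observed days on which such an incident was recorded."""
    days_with_incident = {
        ts.date() for c, ts in records if c == cell and ts.hour == hour
    }
    return len(days_with_incident) / n_days_observed

# e.g., the chance of an auto theft in cell_17 between 12:00 and 13:00
p = hourly_theft_probability("cell_17", 12, incidents, n_days_observed=90)
print(f"Estimated probability: {p:.1%}")  # 2.2% with these toy records
```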
Reliability and Validity of Data and Models
The derivation of probabilistic models from sophisticated computer algorithms and official police records to shape police actions calls into question the reliability, and therefore the validity, of these models. To be sure, the models are only valid to the extent that they are actually able to predict crime. It does not matter which variables are used (e.g., day of the week, moon phase, local events) to predict that crime, only that the model “significantly” increases the likelihood of accurately divining the event beyond what would be expected by chance alone. The chosen statistical criteria aside (i.e., the levels of statistical and substantive significance used to establish that validity), it is well known that the data used to generate these models may not be reliable. This is problematic in that reliability is a necessary condition for validity. Stated differently, if the data used to derive the model are not reliable, then the model is (in theory) incapable of making accurate predictions.
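One way to frame the validity question concretely is to ask whether the model’s hit rate among flagged locations and times exceeds what chance alone would produce. The sketch below performs a simple exact binomial check; the flagged counts and the citywide base rate are invented for illustration, and, as noted above, a real evaluation would also have to weigh substantive significance rather than a p-value alone.

```python
from math import comb

def binomial_pvalue_upper(k, n, p0):
    """P(X >= k) for X ~ Binomial(n, p0): the chance of seeing at least k
    hits if the model were no better than the baseline rate p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Hypothetical evaluation: the model flagged 200 cell-hours as high risk,
# and 18 of them actually experienced a burglary. Suppose burglaries
# occurred in 4 percent of all cell-hours citywide, so chance alone would
# predict about 8 hits among the flagged set.
flagged, hits, base_rate = 200, 18, 0.04

lift = (hits / flagged) / base_rate
p_value = binomial_pvalue_upper(hits, flagged, base_rate)
print(f"Hit rate {hits / flagged:.1%}, lift over chance {lift:.2f}x, p = {p_value:.4f}")
```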
Perhaps the greatest threat to reliability experienced by police agencies is the underreporting of crime. In fact, the Bureau of Justice Statistics estimates that 52 percent of all violent crimes committed between 2006 and 2010 were not reported to the police. What is more, the accuracy of the data may also be imperiled by inaccurate classification or data entry by police. This reliability problem carries two potential pitfalls. First, inaccurate predictive models lead to an ineffective distribution of resources, meaning that police effort is misdirected and the aggregate crime rate may not be reduced. Second, subordinating police intuition to invalid predictive models risks exposing people and property to harms that might not otherwise have occurred.
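A brief hypothetical calculation makes the reliability point concrete: if two areas experience identical numbers of true offenses but differ in reporting rates, the recorded counts a model learns from will diverge. The reporting rates below are assumptions chosen only for illustration.

```python
# Hypothetical illustration of how uneven reporting distorts the records
# a model is trained on. Both areas experience the same number of true
# offenses; only the (assumed) reporting rates differ.
true_offenses = {"area_A": 100, "area_B": 100}
reporting_rate = {"area_A": 0.70, "area_B": 0.40}  # assumptions, not measurements

recorded = {a: true_offenses[a] * reporting_rate[a] for a in true_offenses}
print(recorded)  # {'area_A': 70.0, 'area_B': 40.0}

# A model trained on the recorded counts will rank area_A as markedly
# higher risk even though the underlying risk is identical.
```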
Relatedly, many of the algorithms used to forecast crime are either proprietary—and therefore unlikely to be validated through a peer-reviewed process, in order to protect the intellectual property of the firms that developed them—or cloaked in secrecy to protect police operations. This may contribute to the CSI effect (named after the television series CSI: Crime Scene Investigation), where jurors ascribe more legitimacy to these probabilistic models not only because they are a product of “science,” and therefore perceived to be more legitimate than an officer’s intuition, but also because defense attorneys may be unable to fully expose their potential limitations in the absence of external scientific review. Finally, because predictive policing has largely been applied to property crimes (e.g., burglary and car theft), where reporting rates are higher, presumably because insurance claims give victims an incentive to report, extending predictive policing to violent or gang crimes may present challenges to achieving statistically valid models.
One consequence of the rational distribution of police resources based on criminological theories and aggregated statistics has been increased tension between police and community members—typically poor minorities—who live in areas where crime is concentrated. Focusing police resources on specific areas produces a discriminatory effect by increasing the likelihood of detection and arrest for individuals living in those communities relative to places with fewer police resources, ceteris paribus. Here, algorithm-driven predictive policing offers the promise of improving community relations by narrowly tailoring the areas and times targeted, rather than designating broad areas as “high crime” at all times. Unfortunately, directing resources into a particular area also creates the potential for a self-fulfilling prophecy: a concentration of police resources—even if narrowly tailored—leads to increased arrests in those areas, which in turn perpetuates the existing models.
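The toy simulation below illustrates that feedback dynamic under deliberately simplified, hypothetical assumptions: two areas with identical underlying crime, patrols dispatched each day to whichever area has the larger recorded count, and incidents entering the records only where officers are present.

```python
# Toy simulation of the self-fulfilling prophecy: two areas with identical
# true crime, patrols sent each day to whichever area has the larger
# recorded count, and incidents recorded only where officers are present.
true_daily_incidents = {"area_A": 3, "area_B": 3}  # identical underlying risk
recorded = {"area_A": 11, "area_B": 10}            # small initial gap

for day in range(30):
    target = max(recorded, key=recorded.get)           # "predicted hot spot"
    recorded[target] += true_daily_incidents[target]   # data accrue only there

print(recorded)  # {'area_A': 101, 'area_B': 10}
# area_A's record grows solely because it was patrolled first, and the
# widening gap keeps sending patrols back: the model confirms itself.
```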
Probability and Reasonable Suspicion
The courts have historically relied on probability in determining the constitutionality of police-initiated contacts with citizens. In fact, the courts have ruled that the existence of any fact that makes a criminal event more or less likely may serve as a basis for a police-initiated contact. To be sure, Terry v. Ohio (1968) acknowledged that an articulable fact, combined with an officer’s experience, that would lead a reasonable person to believe a crime had been, or was about to be, committed serves as the basis for a police-initiated contact.
This recognizes that the presence of a fact that increases the probability of a crime can serve as the legal basis for a stop and possibly a frisk. Applying these probabilistic models as a basis for police-initiated contacts poses complex issues for courts to consider. For example, suppose a patrol officer is directed to a particular city block at a particular time because the algorithm has predicted the likelihood of a motor vehicle theft there to be 10.5 percent.
If the officer observes a person carrying a screwdriver, does the officer—based upon the predictive model—have the right to initiate a contact? In other words, does the model serve as probabilistic evidence, and if so, what is the minimum acceptable threshold? Should officers be able to initiate a contact based upon a 10.5 percent predicted chance that a crime will be committed, or only upon some higher standard, such as 30 percent or 40 percent? These are issues that will be addressed as legal challenges are brought before the courts. Finally, one potential consequence of using these predictive models is that they may subordinate police intuition to probabilistic models in the courtroom and thereby weaken justifications for stops.
For example, suppose an officer is dispatched to a location at a particular time as a result of a prediction but witnesses no crime; after leaving the identified location, the officer sees nearby an individual matching the profile generated by the algorithm (e.g., an individual walking aimlessly in front of an automated teller machine). Because the encounter no longer matches the place predicted by the model, the justification for initiating the contact may actually be weakened by the algorithm’s failure to predict a crime in that area. What is more, courts, which have generally deferred to an officer’s professional judgment in determining the likelihood of a crime, may begin to rely instead on these probabilistic models.
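Returning to the screwdriver scenario, the purely hypothetical calculation sketched below shows how a court might be asked to combine the model’s area-level figure with an observed fact using Bayes’ rule. Treating an area-level prediction as a prior about one specific encounter is itself a contestable interpretive leap (one of the very questions courts would have to resolve), and the likelihoods attached to carrying a screwdriver are invented for illustration.

```python
# Purely hypothetical arithmetic; none of these likelihoods come from the
# essay or from any real system. It treats the model's 10.5 percent
# area-level prediction as a prior that this encounter involves a theft,
# which is itself a contestable interpretive step.
p_theft = 0.105                # model's prediction for the block and hour
p_tool_given_theft = 0.60      # assumed: chance an offender carries such a tool
p_tool_given_innocent = 0.05   # assumed: chance an innocent passerby does

numerator = p_tool_given_theft * p_theft
posterior = numerator / (numerator + p_tool_given_innocent * (1 - p_theft))
print(f"P(theft | flagged block, screwdriver) = {posterior:.0%}")  # about 58%
```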
Conclusion
The professionalization of the public police force has resulted in an increased reliance on science and technology to more rationally structure police operations and distribute scarce resources. Contemporary predictive policing is a natural outgrowth of efforts to make policing more efficient, effective, and proactive. The allure of divining crime to intercept criminals before they act is seductive but not without potential ethical dilemmas. Specifically, predictive policing presents challenges related to reliability in measurement, and therefore to predictive validity, while at the same time subordinating officer intuition to potentially faulty probabilistic models that may increase citizen risk, adversely affect crime rates, and perpetuate subsequent invalid models.
Courts will also have to determine if these predictive models can be used to establish a legal basis for an officer-initiated contact and what level of precision these models must achieve to do so. Finally, police may wish to consider the potential for these models to undermine their efforts should the models begin to supplant officer experience and intuition in establishing reasonable suspicion in the courtroom, while considering the impact of defense attorneys who will likely highlight their inappropriate application and limits.
Bibliography:
- Ferguson, Andrew. “Predictive Policing and Reasonable Suspicion.” Emory Law Journal, v.62 (2012).
- Short, M. B., M. R. D’Orsogna, P. J. Brantingham, and G. E. Tita. “Measuring and Modeling Repeat and Near-Repeat Burglary Effects.” Journal of Quantitative Criminology, v.25 (2009).
- U.S. Department of Justice, Bureau of Justice Statistics. “Victimizations Not Reported to the Police, 2006–2010.” http://bjs.gov/content/pub/pdf/vnrp0610.pdf (Accessed May 2013).