Policing and the Multitask Problem

It is relatively easy to observe whether the work of a police officer (or other law-enforcement officer) has led to a case being cleared. It is relatively difficult to observe whether a police officer’s work led to a false arrest or false conviction. Thus, one-sided incentives to clear cases would create a multitask problem in policing. Unfortunately, the police have a strong incentive to clear cases.

It is well established that the crime clearance rate is a standard measure of police efficiency57, even though some scholars have questioned the legitimacy of using clearance rates as an indicator of effective policing58. There is also variation in how cleared crimes are measured – should the criterion be arrests made, actual convictions, or something else? Although policing scholars note that the police serve many functions59, their performance is often measured by reductions in crime rates, number of arrests, response time, and crime clearance rates.60 In many cases these goals are linked to high-powered incentives, which then creates a multitask problem.

In the context of the FBI, Posner61 has expressed support for the use of high-powered incentives. The “outputs” of criminal investigation, “number of arrests, prosecutions, convictions, length of sentences, and amount of property recovered” are, Posner says, “relatively hard to manipulate (at least legally)” and can therefore be “feasibly measured. FBI agents can thus be motivated by ‘high-powered’ incentives, that is by basing promotion and other career benefits on objectively measured, individual performance at the field-office level.”62 This argument neglects the multitask problem. Posner63 quickly made a partial concession to this point, saying, “a weakness in the use of arrests, convictions, and sentences as criteria for evaluating the performance of law enforcement personnel is that it is difficult to weight the criteria by the probability that the arrest, conviction, or sentence in a particular case was unlawful and may have imposed heavy costs on an innocent person. This problem amplifies the social costs of the conflict of interest of the crime labs and undermines the objectivity of the performance criteria used by law enforcement agencies.”

When police investigators have high-powered incentives to clear cases, they have no corresponding incentive to discriminate between the guilty and the innocent. Such skewed incentives create the risk of false arrest and conviction. A vital aspect of the proper function of law enforcement is to discriminate between the guilty and the innocent. But the police can clear cases by arresting persons who are poor, uneducated, or mentally weak. Such persons may be less able to mount a vigorous defense and more likely to make a false confession. (We discuss false confessions below.) It would be comforting to imagine that the problem is only that a few “rogue cops” or “bad apples” may willfully prey on weak victims. As we have noted, however, even honest errors may be skewed by incentives. Thus, high-powered incentives to make arrests and to clear cases create the risk that even the most scrupulous and conscientious law enforcement officers will act in ways that needlessly increase the risk of false arrest and conviction.
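The logic of the incentive asymmetry can be made concrete with a toy simulation (our illustration, not the authors’; all parameters are arbitrary). A stylized officer arrests whenever the expected career payoff is positive. When clearances are rewarded and false arrests carry no cost, the officer’s optimal policy is to arrest every suspect, guilty or not; attaching even a modest cost to false arrests restores the incentive to discriminate:

```python
import random

random.seed(0)

REWARD_PER_CLEARANCE = 1.0   # career benefit for clearing a case (illustrative)

def arrests(penalty_for_false_arrest, n=10_000):
    """Count arrests and false arrests under a given incentive scheme.

    Each case presents a suspect whose probability of guilt, as the
    officer estimates it, is drawn uniformly.  The stylized officer
    arrests whenever expected payoff is positive: the clearance reward
    minus the expected penalty for arresting an innocent person.
    """
    arrested = false_arrests = 0
    for _ in range(n):
        p_guilty = random.random()  # officer's estimate of guilt
        expected_payoff = (REWARD_PER_CLEARANCE
                           - (1 - p_guilty) * penalty_for_false_arrest)
        if expected_payoff > 0:
            arrested += 1
            if random.random() > p_guilty:  # suspect was actually innocent
                false_arrests += 1
    return arrested, false_arrests

# One-sided incentives: clearances rewarded, false arrests cost nothing.
one_sided = arrests(penalty_for_false_arrest=0.0)
# Two-sided incentives: a false arrest costs four times the reward.
two_sided = arrests(penalty_for_false_arrest=4.0)
print("one-sided (arrests, false arrests):", one_sided)
print("two-sided (arrests, false arrests):", two_sided)
```

Under the one-sided scheme the officer arrests all 10,000 suspects and roughly half the arrests are false; under the two-sided scheme the officer arrests only when estimated guilt is high, and false arrests fall sharply. The point is not the particular numbers but the structure: with no penalty term, guilt probability drops out of the officer’s decision entirely.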

COMPSTAT, the system of computer-driven statistics used by the New York City Police Department, creates high-powered incentives for reducing reported crime rates, as well as for arrests and case clearances. Unfortunately, in at least one precinct these high-powered incentives seem to have produced results similar to those we saw earlier with the Atlanta Public Schools. COMPSTAT, short for Compare Statistics, provides up-to-date computerized crime statistics for all precincts in New York City. With the development of COMPSTAT, police managers face extraordinary pressures of accountability for their precincts’ performance.

Implementation of COMPSTAT in its first few years seemed to coincide with a major decline in serious crime, but sometimes the numbers do lie. Numerous press reports have alleged that New York City police officers intentionally “fudge” the numbers to keep the officially recorded serious crime rate low. Reports indicate that police officers manipulate the statistics by intentionally downgrading felonies to misdemeanors, undervaluing property to keep crimes from reaching the felony level, purposely not filing reports, encouraging victims not to file complaints, and so on.64 In New York as recently as 2009, some “precinct bosses threaten street cops if they don't make their quotas of arrests and stop-and-frisks, but also tell them not to take certain robbery reports in order to manipulate crime statistics”.65 Indeed, this was just one of the allegations brought to the surface by the bombshell 2010 release of audio recordings made by Adrian Schoolcraft, then an eight-year veteran of the NYPD. Schoolcraft, who recorded all work-related events occurring at his precinct between June 1, 2008, and October 31, 2009, claimed that he had grown concerned about the quality of police service being delivered to the public. The recordings revealed a systematic orientation, stemming from top management, toward keeping official crime statistics down. The pressure to keep these numbers down is made clear by one officer, who explains to Schoolcraft how robberies are typically downgraded to lower-level crimes: “A lot of 61s – if it’s a robbery, they’ll make it a petty larceny. I saw a 61, at T/P/O [time and place of occurrence], a civilian punched in the face, menaced with a gun, and his wallet removed, and they wrote ‘lost property’ ”.66 This is just one example of the practices used to skew the statistics.

In fact, the 81st Precinct in Bedford-Stuyvesant had adopted a policy, not sanctioned by the NYPD, that officers would not take a complaint from a victim unless the victim came to the stationhouse in person. If the victim could not come to the stationhouse, then no report was filed and no crime was documented. Schoolcraft also recorded his eventual meeting with the Quality Assurance Division (QAD), a unit similar to Internal Affairs. At the end of this meeting, a supervisor explained to Schoolcraft the pressures managers face to lower the crime statistics: “the mayor’s looking for it, the police commissioner is looking for it…every commanding officer wants to show it”.67 Just three weeks after his meeting with QAD investigators, upper-level police managers had Schoolcraft committed to a mental institution for six days, claiming that he was mentally unstable.68 Committing Schoolcraft to an institution seems an obvious, and unsuccessful, attempt to discredit and silence the man who had revealed the systematic manipulation of crime statistics in one precinct of the New York City Police Department.

Similar computer-driven policing strategies have been implemented in other cities, with comparable reports of statistical manipulation.69 COMPSTAT systems can be found internationally as well. For example, the United Kingdom implemented a numbers-driven accountability approach similar to COMPSTAT. Unsurprisingly, press reports described problems such as undercounting of crime and manipulation of crime category classification, with one report concluding that the police department's recording policy “was designed to have the effect of artificially reducing recorded crime to a more politically acceptable level”.70 In sum, the accountability structure inherent in statistics-driven policing creates an incentive for police departments to keep reported crime levels low, especially for more serious offenses. Although measures of citizen satisfaction and perceptions of safety are also used as performance indicators71, reported crime levels have become strong measures of departmental efficiency. These measures certainly fit with the renewed “crime-control orientation” of our current criminal justice system, characterized as it is by a focus on accountability. Most of these performance measures are inextricably linked to subsequent stages in the criminal justice process, most notably the prosecution of crime. Prosecutors use evidence obtained by the police to build their cases, giving police another incentive to obtain the strongest evidence possible.

The pressure to clear cases creates an incentive to under-report crimes. It also creates the risk that the evidence collection process will be skewed in ways that needlessly increase the risk of false arrest and conviction. Eyewitness identification, confession, and forensic evidence all share a common thread – they are typically regarded as the most persuasive types of evidence in a criminal case.72 During the last twenty years, forensic evidence, and more specifically DNA evidence, has become an important part of criminal investigations. In addition to helping police investigators, DNA evidence has come to play an integral role in aiding offenders who raise claims of wrongful conviction. The Innocence Project, founded by Barry C. Scheck and Peter J. Neufeld in 1992, is the leading organization dedicated to exonerating wrongfully convicted individuals through the use of DNA evidence. Before forensic evidence became widespread, confession and eyewitness identification evidence, when available, typically exerted the most influence in criminal cases. The influence of such evidence begins with the police and extends to prosecutors, for whom it represents a solid chance of a conviction.

To illustrate this point, we consider how police treat confessions by individuals suspected of crimes. The police are charged with investigating crimes and making arrests based on the evidence; confessions are considered evidence, yet they are often extracted on the presumption that a suspect is guilty.73 To put this concept into the proper context, it is important to understand the concept of “tunnel vision” in the criminal justice process, whereby one suspect becomes the selected focus of an investigation to the exclusion of others. According to Martin, this is the inclination to “focus on a suspect, select and filter the evidence that will 'build a case' for conviction, while ignoring or suppressing evidence that points away from guilt.”74

Tunnel vision begins at early stages of case processing and may involve a hunch or feeling about a suspect75, but it becomes ever more salient when police conduct what is known as the preinterrogation interview, or “Behavioral Analysis Interview,” to determine deception on the part of a suspect.76 This process focuses on behavioral cues thought to indicate deception, and police are trained to assess various such cues to assist in their determinations.77 However, research has shown that the cues police are trained to look for, such as fidgeting and aversion of eye contact by the suspect, are not necessarily reliable and may not indicate deception at all.78 A further problem is the confidence police place in the accuracy of these assessments. An extant body of literature has addressed this issue, often finding that trained law enforcement personnel are little better, if at all, at detecting deception than the layperson.79 Nevertheless, the goal of the preinterrogation interview is to determine a suspect's guilt.

This fact distinguishes police interviews from interrogations. An interview is designed to obtain information that leads to fact-finding and ultimately the truth, whereas an interrogation is designed to elicit a confession of guilt.80 The confession, as police are well aware, is one of the best pieces of evidence a prosecutor can use in the courtroom81, thereby making the confession an important goal of investigation. Unfortunately, the interrogation, designed as a psychologically coercive and manipulative technique, leads not only the guilty to confess but also the innocent. The existence of false confessions due to psychologically coercive police interrogations is well-documented.82

The fallibility of eyewitness identification evidence has also been documented extensively in the field of psychology.83 The police conduct eyewitness identification procedures, otherwise known as line-ups. In a simultaneous line-up, traditionally used by police departments, a suspect is placed among other people and shown to a witness all at once.84 A variety of factors are linked to mistaken eyewitness identifications, usually falling under either estimator variables or system variables. Estimator variables are those not controlled by our criminal justice system, such as environmental conditions during the criminal event, stress experienced by a witness, and the impact of cross-race identification on the witness. System variables are those controlled by the legal system, including the line-up presentation method used with witnesses, line-up instructions provided by the police, and techniques employed by investigators who interview witnesses.85 As with the confession, the police use the line-up to establish guilt, and the procedures used to administer line-ups can therefore influence witnesses. These factors lead to a high rate of mistaken eyewitness identifications, and eyewitness testimony is responsible for more wrongful convictions than any other type of evidence.86

The infirmities of eyewitness testimony, then, are well understood.87 Police lineups, for example, invite error if not properly structured.88 Eyewitnesses are more likely to misidentify a person perceived to have a different “race”.89 These infirmities may lead to false convictions when police are consciously or unconsciously motivated to maximize convictions.

The problems associated with confession evidence and eyewitness identification evidence are therefore twofold: that evidence widely thought of as reliable is fallible; and that police investigations are used to confirm suspect guilt rather than obtain the truth.


57. See Geoffrey P. Alpert & Mark H. Moore, Measuring Police Performance in the New Paradigm of Policing, in Performance Measures for the Criminal Justice System (1993); Paul-Philippe Pare, Richard Felson & Marc Ouimet, Community Variation in Crime Clearance: A Multilevel Analysis with Comments on Assessing Police Performance, 23 J. QUANTITATIVE CRIMINOLOGY 243 (2007).

58. See Jean-Paul Brodeur, How to Recognize Good Policing: Problems and Issues (1998); Mark H. Moore & Anthony Braga, The 'Bottom Line' of Policing: What Citizens Should Value (and Measure!) in Police Performance (2000).

59. See David H. Bayley, Police for the Future (1994); Wesley Skogan, Susan Hartnett, Jill DuBois, Jennifer Comey, Marianne Kaiser & Justine Lovig, On the Beat: Police and Community Problem Solving (1999).

60. See Alpert & Moore, supra note 57; Mark H. Moore & Margaret Poethig, The Police as an Agency of Municipal Government: Implications for Measuring Police Effectiveness, in Robert Langworthy (ed.), Measuring What Matters (1999).

61. Richard A. Posner, From the New Institutional Economics to Organization Economics: With Applications to Corporate Governance, Government Agencies, and Legal Institutions, at 22, 6 J. INSTITUTIONAL ECON. 1 (2010).

62. Id.

63. Richard A. Posner, Reply to comments, at 140, 6 J. INSTITUTIONAL ECON. 139 (2010).

64. See Robert Zink, The Trouble with Compstat, PBA Magazine (2004), available at http://www.nycpba.org/publications/mag-04-summer/compstat.html (last visited Oct. 2, 2011).

65. Graham Rayman, The NYPD Tapes: Inside Bed-Stuy’s 81st Precinct, Village Voice, May 4, 2010, available at http://www.villagevoice.com/content/printVersion/1797847/ (last visited July 19, 2011).

66. Id.

67. Id.

68. Id.

69. See John A. Eterno and Eli B. Silverman, The NYPD’s Compstat: Compare statistics or compose statistics? 12 INT’L J. POLICE SCI. & MANAGEMENT 426 (2010).

70. Nick Davies, Watching the Detectives: How the Police Cheat in Fight Against Crime, The Guardian, Mar. 8, 1999, at 3.

71. See Robert C. Davis, Introduction: the use of policing indicators in the developing world, 12 INT’L J. POLICE SCI. & MANAGEMENT 140 (2010); Wesley Skogan and Susan Hartnett, Community Policing, Chicago Style (1997).

72. See Richard A. Wise, Kirsten A. Dauphinais & Martin A. Safer, A Tripartite Solution to Eyewitness Error, 97 J. CRIM. L. & CRIMINOLOGY 807 (2007).

73. See Saul M. Kassin and Gisli H. Gudjonsson, The Psychology of Confessions: A Review of the Literature and Issues, 5 PSY. SCIENCES IN THE PUB. INTEREST 33 (2004).

74. Dianne Martin, Lessons about Justice from the Laboratory of Wrongful Convictions: Tunnel Vision, the Construction of Guilt, and Informer Evidence, at 848, 70 U.M.K.C. L. REV. 847 (2002).

75. Richard J. Ofshe & Richard A. Leo, The Decision to Confess Falsely: Rational Choice and Irrational Action. 74 DENVER UNIV. L. REV. 979 (1997).

76. See Richard A. Leo & Deborah Davis, From False Confession to Wrongful Conviction: Seven Psychological Processes, 38 J. PSYCHIATRY & L. 9 (1997).

77. See Kassin and Gudjonsson, supra note 73.

78. See Kassin and Gudjonsson, supra note 73; Aldert Vrij, Why Professionals Fail to Catch Liars and How They Can Improve, 9 LEGAL & CRIMINOLOGICAL PSY. 159 (2004).

79. See Vrij, supra note 78; Charles F. Bond, Jr. & Bella M. DePaulo, Accuracy of Deception Judgments, 10 PERSONALITY & SOC. PSY. REV. 214 (2006).

80. See Kassin and Gudjonsson, supra note 73; Steven A. Drizin & Richard A. Leo, The Problem of False Confessions in the Post-DNA World, 82 N.C. L. REV. 891 (2004).

81. See Ofshe & Leo, supra note 75.

82. See Ofshe & Leo, supra note 75; Saul M. Kassin, On the Psychology of Confessions: Does Innocence Put Innocents at Risk? 60 AMER. PSYCHOLOGIST 215 (2005).

83. See Carolyn Semmler, Neil Brewer & Gary L. Wells, Effects of Postidentification Feedback on Eyewitness Identification and Nonidentification Confidence, 89 J. APPLIED PSY. 334 (2004); Gary L. Wells & Amy L. Bradfield, 'Good, You Identified the Suspect': Feedback to Eyewitnesses Distorts Their Reports of the Witnessing Experience, 83 J. APPLIED PSY. 360 (1998); Gary Wells & Elizabeth Olson, Eyewitness Testimony, 54 ANN. REV. PSY. 277 (2003).

84. See Wells and Olson, supra note 83.

85. See Gary L. Wells, Amina Memon and Steven D. Penrod, Eyewitness Evidence: Improving its Probative Value, 7 PSY. SCI. IN THE PUB. INT. 45 (2006).

86. Id.

-- Roger Koppl & Meghan Sacks

from "The Criminal Justice System Creates Incentives for False Convictions"

Quoted on Sun Sep 22nd, 2013