The Law & Probability: A Tricky Relationship.

(Image by Clyde Robinson)

This is the first entry in a two-part series. 

Every step of the criminal justice process involves judgments of probabilities. Police, prosecutors, judges, and juries all have to decide how evidence affects the chances (or appearance) of a suspect’s guilt, weighing both its trustworthiness and its strength. This evidence is often probabilistic in nature, meaning that every step of the criminal justice process is vulnerable to a host of heuristics, fallacies, and plain errors. Most recently, this has appeared in our national dialogue around how implicit bias affects police treatment of black suspects. Just this October, the International Association of Chiefs of Police apologized for “historical mistreatment of communities of color”. But our criminal justice system also has a broader, unfortunate history of poorly integrating scientific and statistical reasoning into its judgments. These failures span every level of the system: police and attorneys often rely on untrustworthy or unfounded methods, while courts and juries put too much trust in, or misinterpret, the evidence those methods produce.

In a now-infamous 1964 case, the couple Malcolm and Janet Collins stood trial for the robbery of a Mrs. Juanita Brooks. Neither Mrs. Brooks nor the only other witness could positively identify the accused, but both described the robbers as a black man with a beard and mustache and a white woman with a blonde ponytail driving a yellow car. The Collinses were arrested after a Los Angeles police officer saw their yellow car and noticed that they fit the witnesses’ description. The prosecution’s case rested entirely on the testimony of someone described by the California Supreme Court only as “an instructor of mathematics at a state college”, who provided the jury with the following table of “conservative” probability estimates:

  • Partly yellow car: 1/10
  • Man with mustache: 1/4
  • Black man with beard: 1/10
  • Woman with ponytail: 1/10
  • Woman with blonde hair: 1/3
  • Interracial couple in car: 1/1000

This expert witness then explained to the jury the product rule (that the chance of several independent events all occurring is the product of their individual probabilities) and estimated for the jury that the chance a randomly chosen couple would fit all of the above criteria is therefore about 1 in 12 million. The witness concluded that the probability that the Collinses were innocent was 1 in 12 million, and the jury returned a guilty verdict.
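
To see where the 1 in 12 million figure comes from, here is a minimal sketch that simply multiplies the expert’s six estimates, treating them as independent (which, as discussed below, is exactly where the calculation goes wrong):

```python
# A minimal sketch of the expert's arithmetic: multiply the six estimates
# as if the events were independent.
probabilities = {
    "partly yellow car": 1 / 10,
    "man with mustache": 1 / 4,
    "black man with beard": 1 / 10,
    "woman with ponytail": 1 / 10,
    "woman with blonde hair": 1 / 3,
    "interracial couple in car": 1 / 1000,
}

joint = 1.0
for p in probabilities.values():
    joint *= p

print(f"Claimed chance of a random couple matching: 1 in {round(1 / joint):,}")
# -> Claimed chance of a random couple matching: 1 in 12,000,000
```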

There are a couple of glaring issues with this analysis. First, the probabilities that the expert witness cites are not independent, so the product rule does not apply. A man having a beard raises the estimated probability of that same man having a mustache; the two probabilities are related. Second, the witness’s analysis is a classic example of the prosecutor’s fallacy: the chance of a randomly selected couple matching the guilty couple’s description is not the relevant probability for a jury to consider. The relevant question is the probability that a couple matching those characteristics is the guilty couple. Suppose that in L.A., a city of almost 4 million people, only two couples match the witnesses’ description. By the prosecution’s reasoning, based purely on the description given, each of these couples would have only a 1 in 12 million chance of being innocent. In actuality, each would have a 1 in 2 chance of being innocent, which is hardly guilt beyond a reasonable doubt.
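
The gap between those two numbers can be made concrete with a rough Bayesian sketch. The pool size below is a hypothetical, not a figure from the case; it is chosen only to show how even a 1-in-12-million match rate can leave roughly even odds once you condition on the match:

```python
# Prosecutor's fallacy, sketched with Bayes' rule.
# The pool size is a hypothetical assumption for illustration.
match_rate = 1 / 12_000_000  # expert's P(matches description | innocent couple)
pool_size = 12_000_000       # hypothetical number of couples who could be the culprits

p_guilty = 1 / pool_size     # prior: exactly one guilty couple somewhere in the pool
p_innocent = 1 - p_guilty

# Total probability that a couple drawn from the pool matches the description
# (the guilty couple matches by definition; innocent couples match by chance).
p_match = p_guilty * 1.0 + p_innocent * match_rate

# What the jury actually needs: P(innocent | this couple matches the description)
p_innocent_given_match = (p_innocent * match_rate) / p_match

print(f"P(innocent | match) = {p_innocent_given_match:.2f}")  # about 0.50
```

With these hypothetical numbers we expect about one innocent matching couple in addition to the guilty one, which is exactly the 1-in-2 intuition above.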

It’s important to note that not only the jury but also the police and the prosecutor (even before the introduction of the expert witness!) were convinced of the Collinses’ guilt solely by a statistical fallacy. And although the California Supreme Court overturned the Collinses’ conviction, recognizing that the statistical reasoning behind the jury’s decision was flawed, other methods rarely come under the same scrutiny from the courts.

One of these methods is a $2 roadside test used by police across the country to identify illegal drugs. Not long after its development in 1973 (the test remains substantially the same today), a 1978 Department of Justice report determined that the test “should not be used for evidential purposes”. More recent research from the Florida Department of Law Enforcement lab system shows a 21% false positive rate for methamphetamine (the tests also respond to a range of other compounds, and police often misapply or misread the results) and that half of those false hits are not any type of illegal drug at all. The tests are now inadmissible in nearly every trial jurisdiction, which doesn’t mean much when, across the country, around 90% of felony drug convictions are settled by way of plea deals. Suspects are often not aware of the tests’ false positive rate or their inadmissibility in court. Faced with a choice between going to trial against supposedly scientific evidence and accepting a much lighter sentence for a guilty plea, most take the plea.
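
To put those figures in concrete terms, here is a back-of-the-envelope sketch; the batch size is arbitrary, chosen only to show the scale of the problem:

```python
# Back-of-the-envelope arithmetic using the figures above: a 21% false-hit rate
# for methamphetamine, with half of those false hits involving no illegal drug.
# The batch size is a hypothetical assumption.
positive_field_tests = 1_000                 # hypothetical batch of roadside positives
false_hits = positive_field_tests * 0.21     # later contradicted by the lab
no_illegal_drug = false_hits * 0.5           # not any type of illegal drug at all

print(f"Of {positive_field_tests} positive field tests, roughly {false_hits:.0f} are false hits "
      f"and roughly {no_illegal_drug:.0f} involve no illegal drug at all.")
```

Every one of those wrongly flagged suspects then faces the same plea-or-trial decision described above.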

The ease with which many innocent drug suspects are talked into guilty pleas demonstrates a crucial fact: the public has a very skewed idea of how trustworthy the forensic sciences are, and that view is shared by many within the criminal justice system as well.

Kody Carmody is a 1st-year MPP student at the College of William & Mary and an Associate Editor of the William & Mary Policy Review. 
