Note: This is a continuation of a two-part series that began yesterday. The first part may be found here.
(Image by Clyde Robinson)
DNA tests are regarded in the criminal justice system as the gold standard for forensic evidence. The same is true among the public as a whole—DNA evidence is heavily associated with groups like the Innocence Project, and it is seen as righting the wrongs of an imperfect justice system. When DNA evidence was first being introduced to courtrooms, experts testified that false positives are impossible in DNA testing. And while it is true that testing on single-source samples is virtually error-free, that claim ignores the significant opportunity for human error to mar test results. In his book The Drunkard’s Walk: How Randomness Rules Our Lives (2008), Leonard Mlodinow points out that “many experts” have estimated the false positive rate of DNA testing at about 1%—far higher than the 1 in 1 billion testified to by expert witnesses—but that courts often do not allow testimony on estimated human error rates in DNA testing.
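A back-of-the-envelope Bayes calculation shows why the human error rate, not the coincidental-match rate, dominates the evidentiary value of a reported match. The sketch below uses hypothetical numbers for illustration only (the priors are assumptions, not figures from Mlodinow or any court record):

```python
def match_is_true_positive(prior, false_positive_rate, true_positive_rate=1.0):
    """P(suspect is the source | lab reports a DNA match), via Bayes' theorem.

    prior               -- probability the suspect is the source before testing
    false_positive_rate -- chance the lab reports a match for a non-source
    true_positive_rate  -- chance the lab reports a match for the true source
    """
    p_match = true_positive_rate * prior + false_positive_rate * (1 - prior)
    return true_positive_rate * prior / p_match

# Even odds before the test, purely for illustration:
prior = 0.5

# Courtroom figure: coincidental-match probability of 1 in 1 billion.
print(match_is_true_positive(prior, 1e-9))  # ~0.999999999

# A ~1% overall error rate (contamination, mislabeling, misinterpretation)
# caps the posterior at about 0.99, no matter how rare coincidental matches are.
print(match_is_true_positive(prior, 0.01))  # ~0.990

# With a weak prior (e.g., a suspect found only via a database trawl),
# the same 1% error rate leaves the match far from conclusive.
print(match_is_true_positive(0.001, 0.01))  # ~0.091
```

The point of the sketch is that once any human error rate enters the denominator, the astronomically small coincidental-match probability stops doing the work jurors assume it does.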
News stories about crime lab scandals are no longer uncommon: labs in Baltimore, Chicago, Cleveland, Los Angeles, Fort Worth, Montana, Oklahoma City, San Antonio, Seattle, Virginia, and West Virginia have all been the subject of news exposés. The Houston Police Department Crime Laboratory regularly processes evidence for about 500 cases per year. In June 2016, a local news channel had dozens of the lab’s DNA profiles independently analyzed. As the Atlantic described the results, it “appeared that Houston police technicians were routinely misinterpreting even the most basic samples”. In one particularly egregious case, a lab technician collected three samples of blood and saliva, all from the same man, and returned three substantially different DNA profiles—that is, the lab could not even match a man’s DNA to his own DNA.
Worse still, the public may view mixed-sample DNA testing under the same light of infallibility that shines on single-sample testing. A 2016 report on Forensic Science in Criminal Courts by the President’s Council of Advisors on Science and Technology (PCAST), however, found that DNA analysis of complex-mixture samples, which contain DNA from three or more unknown individuals, relies heavily on subjective choices made by lab technicians. PCAST concludes that the foundational validity of complex-mixture DNA analysis has not been established except in very narrow circumstances (“a three-person mixture in which the minor contributor constitutes at least 20 percent of the intact DNA”), having earlier stated that a method lacking this foundational validity “is scientifically meaningless: it has no probative value, and considerable potential for prejudicial impact”. Yet a prosecutor, judge, juror, or police officer presented with this evidence may view it through the same lens as a single-sample analysis, conflating both as “DNA evidence”. It is worth noting that the Houston man referenced above, whose DNA profile was botched by the crime lab, served more than four years in jail for a crime he did not commit, based largely on a mixed-sample DNA analysis conducted by the same lab.
The 2016 PCAST report cited above assesses the scientific validity and reliability of seven “feature-comparison” forensic methods—analysis of single- and mixed-sample DNA, bitemarks, latent fingerprints, ballistics, footwear, and hair—and fails to find substantial validity for any except single-sample DNA analysis. This points to a sort of “CSI effect”, in which police officers, prosecutors, and jurors put far more stock in these methods than is scientifically warranted. As the name implies, this trust is the product of a long history of use, especially for DNA, ballistics, and fingerprint analysis, in both real life and film. PCAST warns, for example, that the false positive rate for latent fingerprint matching “is substantial and is likely to be higher than expected by many jurors… in reporting results of latent-fingerprint examination, it is important to state the false-positives rates based on properly designed validation studies”.
This post has detailed several methodologically unsound strategies that inform criminal justice outcomes, but there are many more—too many to examine in one blog post. Some, such as polygraph tests, have been virtually removed from our criminal justice system, while others, such as eyewitness testimony and the use of drug-sniffing dogs, have proved extremely resilient to any reform at all. In a later post, I will examine some promising reforms to police methods, court processes, education, and our national research agenda, as well as what stands in the way of these changes.
Kody Carmody is a first-year MPP student at the College of William & Mary and an Associate Editor of the William & Mary Policy Review.