Fingerprint

In the wake of the Brandon Mayfield case (2004), which raised serious questions about the accuracy of fingerprint identification by the FBI, the National Academy of Sciences was asked to perform a scientific assessment of the accuracy and reliability of latent fingerprint identification in criminal cases.  Initial results were published in:

Proceedings of the National Academy of Sciences (PNAS)

Bradford T. Ulery et al., 108 (19): 7733–7738, doi: 10.1073/pnas.1018707108

Accuracy and reliability of forensic latent fingerprint decisions

Bradford T. Ulery (a), R. Austin Hicklin (a), JoAnn Buscaglia (b), and Maria Antonia Roberts (c)

Edited by Stephen E. Fienberg, Carnegie Mellon University, Pittsburgh, PA, and approved March 31, 2011 (received for review December 16, 2010)

ABSTRACT

The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. The National Research Council of the National Academies and the legal and forensic sciences communities have called for research to measure the accuracy and reliability of latent print examiners' decisions, a challenging and complex problem in need of systematic analysis. Our research is focused on the development of empirical approaches to studying this problem. Here, we report on the first large-scale study of the accuracy and reliability of latent print examiners' decisions, in which 169 latent print examiners each compared approximately 100 pairs of latent and exemplar fingerprints from a pool of 744 pairs. The fingerprints were selected to include a range of attributes and quality encountered in forensic casework, and to be comparable to searches of an automated fingerprint identification system containing more than 58 million subjects. This study evaluated examiners on key decision points in the fingerprint examination process; procedures used operationally include additional safeguards designed to minimize errors. Five examiners made false positive errors for an overall false positive rate of 0.1%. Eighty-five percent of examiners made at least one false negative error for an overall false negative rate of 7.5%. Independent examination of the same comparisons by different participants (analogous to blind verification) was found to detect all false positive errors and the majority of false negative errors in this study. Examiners frequently differed on whether fingerprints were suitable for reaching a conclusion.

http://www.pnas.org/content/108/19/7733.full

Authors

Bradford T. Ulery

(a) Noblis, 3150 Fairview Park Drive, Falls Church, VA 22042;

R. Austin Hicklin (a) Noblis, 3150 Fairview Park Drive, Falls Church, VA 22042;

JoAnn Buscaglia (b) Counterterrorism and Forensic Science Research Unit, Federal Bureau of Investigation Laboratory Division, 2501 Investigation Parkway, Quantico, VA 22135; and

Maria Antonia Roberts (c) Latent Print Support Unit, Federal Bureau of Investigation Laboratory Division, 2501 Investigation Parkway, Quantico, VA 22135

Whether a 0.1 percent false positive rate is "small" is a subjective value judgment. Would you drive across a bridge that had a 1 in 1,000 (0.1 percent) chance of collapsing and killing you as you drove across it? No, probably not.

In addition, the 0.1 percent false positive rate is based on a small sample of less than 1,000 test cases, 744 pairs of latent and exemplar fingerprints. Federal fingerprint databases such as the ones used in the Brandon Mayfield case have millions of people in them and may eventually include all US citizens (over 300 million people). How does this "small" rate extrapolate when a fingerprint is compared to every fingerprint in the United States or the world?
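To make the extrapolation concrete, here is an illustrative back-of-envelope calculation, not from the paper: it naively treats the study's 0.1 percent rate as an independent per-comparison false positive probability and scales it up to a database-sized search. Real AFIS searches do not work exactly this way, so the numbers are only a sketch of how quickly a "small" per-comparison rate compounds.

```python
# Naive extrapolation of a 0.1% per-comparison false positive rate,
# assuming each comparison is an independent trial (an illustrative
# simplification, not the paper's methodology).

def prob_at_least_one_false_positive(rate, n_comparisons):
    """Probability of one or more false positives in n independent comparisons."""
    return 1.0 - (1.0 - rate) ** n_comparisons

def expected_false_positives(rate, n_comparisons):
    """Expected number of false positives in n independent comparisons."""
    return rate * n_comparisons

rate = 0.001  # 0.1% per-comparison false positive rate from the study

# 58 million matches the AFIS size mentioned in the abstract.
for n in (100, 1_000, 58_000_000):
    p = prob_at_least_one_false_positive(rate, n)
    e = expected_false_positives(rate, n)
    print(f"{n:>12,} comparisons: P(>=1 false positive) = {p:.4f}, "
          f"expected false positives = {e:,.0f}")
```

Under these assumptions, a search against tens of millions of prints would be expected to produce tens of thousands of false positives, which is why operational systems cannot rely on the raw per-comparison rate alone.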

One might wonder why such an assessment was not done a long time ago.

This is a report on the Brandon Mayfield case:

https://oig.justice.gov/special/s0601/exec.pdf

The National Research Council also published a detailed study, Strengthening Forensic Science in the United States: A Path Forward (2009), addressing the scientific issues raised by the Mayfield case and other questions about the scientific validity of forensic science methods.

Fingerprint identification: advances since the 2009 National Research Council report by Christophe Champod (Philos Trans R Soc Lond B Biol Sci. 2015 Aug 5; 370(1674): 20140259.
doi: 10.1098/rstb.2014.0259) has a summary of work on the issue since the 2009 National Research Council report.

The bottom line is that fingerprints are much more accurate than random chance but hardly infallible, as used to be widely believed.

(C) 2017 John F. McGowan, Ph.D.

Credits

The fingerprint image is from the US National Institute of Standards and Technology (NIST) by way of Wikimedia Commons and is in the public domain.

About the Author

John F. McGowan, Ph.D. solves problems using mathematics and mathematical software, including developing gesture recognition for touch devices, video compression and speech recognition technologies. He has extensive experience developing software in C, C++, MATLAB, Python, Visual Basic and many other programming languages. He has been a Visiting Scholar at HP Labs developing computer vision algorithms and software for mobile devices. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech).