Addressing Standard Criticisms of the Firearm and Toolmark Identification Discipline
Summary
This document provides an overview of standard criticisms of the firearm and toolmark identification discipline that are often raised in courts. It examines reports from organizations such as the National Academies of Sciences to illustrate the ongoing evolution of the discipline and its standards.
Full Transcript
Addressing Standard Criticisms of the Firearm and Toolmark Identification Discipline – Part 1
NATIONAL ACADEMY OF SCIENCES REPORTS AND PCAST
COPYRIGHT 2023 - NICHOLS FORENSIC SCIENCE CONSULTING

Welcome to Addressing Standard Criticisms of the Firearm and Toolmark Identification Discipline. This is designed to be a more focused discussion of some of the standard criticisms you might encounter in courts, whether in evidentiary hearings or court trials. It is recognized that this area is undergoing constant evolution in the courts, and while attempts will be made to keep this material up to date, it is an area that you, as an individual examiner, must be prepared to keep up on, lest you find yourself facing what could be a very unpleasant experience.

Overview
Part 1 – Reports
◦ NAS Report, Ballistic Imaging, 2008
◦ NAS Report, Strengthening Forensic Science in the United States: A Path Forward, 2009
◦ PCAST Report, Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods, 2016
Part 2 – Other related issues
◦ Error rates
◦ Inconclusive results
◦ Repeatability
◦ Reproducibility

Ballistic Imaging
National Research Council of the National Academies, 2008. Ballistic Imaging. The National Academies Press, Washington, DC.

Ballistic Imaging
The purpose of the report was to explore the feasibility of creating a national image database of test fires collected from every new firearm manufactured or imported into the United States.
The report is explicitly clear that it should not be used to measure the validity of the firearm and toolmark discipline:
◦ “First, and more significantly, this study is neither a verdict on the uniqueness of firearms-related toolmarks generally nor an assessment of the validity of firearms identification as a discipline.” (p 18)
◦ “We also note that the committee does not provide an overall assessment of firearms identification as a discipline nor does it advise on the admissibility of firearms-related toolmark evidence in legal proceedings: these topics are not within its charge.” (p 3)

The purpose of this study was to explore the feasibility of creating a national image database (composed of 2D images or 3D data) of test fires collected from every new firearm manufactured or imported into the United States. The report was published in 2008, and while it is believed many of the machine-based studies would have been beneficial in assisting this group with its charge, most were not published at that point, and it is unknown whether the committee examined the few that were.
The 2008 NAS report is explicitly clear that it should not be used to measure the validity of the firearm and toolmark discipline: “First, and more significantly, this study is neither a verdict on the uniqueness of firearms-related toolmarks generally nor an assessment of the validity of firearms identification as a discipline.” (p 18) “We also note that the committee does not provide an overall assessment of firearms identification as a discipline nor does it advise on the admissibility of firearms-related toolmark evidence in legal proceedings: these topics are not within its charge.” (p 3)

Ballistic Imaging
Critic Point 1 – “Finding: The validity of the fundamental assumptions of uniqueness and reproducibility of firearms-related toolmarks has not yet been fully demonstrated.” (p 3)
Responding to this point
◦ The context of this statement was absolute exclusions, which was set out in the paragraph above it: “Underlying the specific tasks with which the committee was charged is the question of whether firearms-related toolmarks are unique: that is, whether a particular set of toolmarks can be shown to come from one weapon to the exclusion of all others. Very early in its work the committee found that this question cannot now be definitively answered.”
◦ Practical conclusions have been the standard for 30 years, not absolute conclusions
◦ AFTE Theory of Identification
◦ US DOJ Uniform Language for Testimony and Reports (ULTR)
◦ OSAC Range of Conclusions draft provides similar limitations, qualifications, and guidance

Despite these clear and explicitly stated limitations, this has not stopped the use of the report to suggest it contains a damning verdict on firearm examination.
The most often quoted line from the 2008 report is the single sentence found on page 3: “Finding: The validity of the fundamental assumptions of uniqueness and reproducibility of firearms-related toolmarks has not yet been fully demonstrated.” However, this quote must be placed in its proper context, found in the paragraph above this “finding,” where it states, “Underlying the specific tasks with which the committee was charged is the question of whether firearms-related toolmarks are unique: that is, whether a particular set of toolmarks can be shown to come from one weapon to the exclusion of all others. Very early in its work the committee found that this question cannot now be definitively answered.” Trying to prove a set of toolmarks is “unique” and exclusive to a single firearm, “to the exclusion of all others,” is an impossible task because one will never examine and compare all the firearms that have ever existed. So, in this context: yes, this premise had not been (and never will be) “definitively answered.” That is not to say that testimony never looked like that. It did. In fact, I recall meeting one individual who had been in the profession for a while, and he was surprised that such language was not included in the AFTE Theory because he thought it was. His response when I said it wasn’t? “Well, I guess I will stop saying it.”

For 30 years the firearm and toolmark profession has provided guidance that limits opinions to “practical” conclusions, not absolute ones. More recently, the US Department of Justice has published the Uniform Language for Testimony and Reports (ULTR). This professional guidance provides even clearer guide rails that steer the profession away from implying uniqueness or infallibility. Additionally, the draft OSAC Range of Conclusions provides similar limitations, qualifications, and guidance. Firearm and toolmark examination professional guidance does not support declarations of absolute uniqueness.
Instead, if an examiner observes features that provide extremely strong support for the hypothesis of same source, as well as weak or negligible support for different source, then he or she may conclude an identification. Science often does not have precisely defined thresholds. The lack of a numerical threshold for reaching source attribution conclusions is sometimes raised as proof that firearm and toolmark examination is unscientific. However, holding firearm and toolmark examination to this criterion would place it far above what is accepted practice in research science, applied science, and forensic science. Firearm and toolmark examiners can demonstrate the agreement observed with casework documentation, much as in other forensic disciplines such as latent prints. Furthermore, laboratories will demonstrate the reproducibility of the result with quality control measures like verification, where a second examiner must reach the same result as the primary examiner.

Ballistic Imaging
Critic Point 2 – Critics claim that the 2008 committee found no evidence to support the discipline. This is not true, as stated in the 2008 report.
◦ “Notwithstanding this finding, we accept a minimal baseline standard regarding ballistics evidence. Although they are subject to numerous sources of variability, firearms-related toolmarks are not completely random and volatile; one can find similar marks on bullets and cartridge cases from the same gun.” (p 3)
◦ “A significant amount of research would be needed to scientifically determine the degree to which firearms-related toolmarks are unique or even to quantitatively characterize the probability of uniqueness. Assessing uniqueness at, say, a submicroscopic level, though probably technically possible, would be extremely difficult and time consuming compared with less definitive but more practical and generally available methods at the macroscopic level.
It is an issue of policy and of economics as to whether doing so would be worthwhile. The committee did not and could not undertake such research, nor does it offer any conclusions about undertaking such research.” (p 3)

Finally, critics also state that the 2008 committee found no evidence to support the discipline. This is not true, as explicitly stated in the 2008 report: “Notwithstanding this finding, we accept a minimal baseline standard regarding ballistics evidence. Although they are subject to numerous sources of variability, firearms-related toolmarks are not completely random and volatile; one can find similar marks on bullets and cartridge cases from the same gun.” (p 3) “A significant amount of research would be needed to scientifically determine the degree to which firearms-related toolmarks are unique or even to quantitatively characterize the probability of uniqueness. Assessing uniqueness at, say, a submicroscopic level, though probably technically possible, would be extremely difficult and time consuming compared with less definitive but more practical and generally available methods at the macroscopic level. It is an issue of policy and of economics as to whether doing so would be worthwhile. The committee did not and could not undertake such research, nor does it offer any conclusions about undertaking such research.” (p 3)

The committee did find evidence to support the validity of the discipline but had neither the charge nor the time to examine it more thoroughly. They left that testing to future study, and, as demonstrated by the past decade of research and error rate testing that is congruent with past research and testing, the firearm and toolmark discipline rests on a reliable foundation.

Strengthening Forensic Science in the United States: A Path Forward
National Research Council of the National Academies, 2009.
Strengthening Forensic Science in the United States: A Path Forward. The National Academies Press, Washington, DC.

Strengthening Forensic Science
The committee recognized its own limitations, stating in its report, “The committee decided early in its work that it would not be feasible to develop a detailed evaluation of each discipline in terms of its scientific underpinning, level of development, and ability to provide evidence to address the major types of questions raised in criminal prosecutions and civil litigation.” (p 7) The section on firearm and toolmark identification is only 5-1/2 pages, and the committee admitted that because of the 2008 Ballistic Imaging report, independent research into the discipline would be limited.

Just as in the 2008 report, the NAS committee strongly cautioned readers about using the report’s policy recommendations as a verdict for or against any particular discipline, stating, “The committee decided early in its work that it would not be feasible to develop a detailed evaluation of each discipline in terms of its scientific underpinning, level of development, and ability to provide evidence to address the major types of questions raised in criminal prosecutions and civil litigation.” The section on firearm and toolmark identification is only 5-1/2 pages, and the committee admitted that because of the 2008 Ballistic Imaging report, independent research into the discipline would be limited. The issue is that the charge of the 2008 committee was different than that of the 2009 committee, and the research done for the 2008 report would be inadequate for the 2009 report. This points to a fundamental misunderstanding of the discipline.

Strengthening Forensic Science
“The committee agrees that class characteristics are helpful in narrowing the pool of tools that may have left a distinctive mark.
Individual patterns from manufacture or from wear might, in some cases, be distinctive enough to suggest one particular source, but additional studies should be performed to make the process of individualization more precise and repeatable.” (p 154) The committee also goes on to state where it believes the gap exists with the prior research: “For example, a report from Hamby, Brundage, and Thorpe includes capsule summaries of 68 toolmark and firearms studies. But the capsule summaries suggest a heavy reliance on the subjective findings of examiners rather than on the rigorous quantification and analysis of sources of variability.” (p 155)

The 2009 committee acknowledges a baseline level of existing research but recommends additional study be undertaken: “The committee agrees that class characteristics are helpful in narrowing the pool of tools that may have left a distinctive mark. Individual patterns from manufacture or from wear might, in some cases, be distinctive enough to suggest one particular source, but additional studies should be performed to make the process of individualization more precise and repeatable.” The committee also goes on to state where it believes the gap exists with the prior research: “For example, a report from Hamby, Brundage, and Thorpe includes capsule summaries of 68 toolmark and firearms studies. But the capsule summaries suggest a heavy reliance on the subjective findings of examiners rather than on the rigorous quantification and analysis of sources of variability.” This has changed significantly with the current landscape. Many machine-based studies have done just that, with over two million independent data points analyzed demonstrating that there is an objective, quantifiable distinction between known matching and known non-matching data.
What is good about these studies is that their results are congruent with the earlier studies, confirming the validity of those studies and helping us to better understand and appreciate them.

Strengthening Forensic Science
Finally, critics will use the 2009 report to highlight a perception that firearm and toolmark examination lacks a defined process or method, and therefore is not scientific. This characterization of firearm and toolmark practice is not accurate.
◦ Each accredited laboratory will have a standard operating procedure that specifies the validated and approved methods to be used within that laboratory.
◦ A recent collaboration of NIST, AFTE, and OSAC has resulted in the publication of a Firearms Process Map. This 25-page document is a step-by-step process map demonstrating that firearm and toolmark examinations have distinct steps that can result in identification, elimination, and inconclusive conclusions.

Finally, critics will use the 2009 report to highlight a perception that firearm and toolmark examination lacks a defined process or method, and therefore is not scientific. This characterization of firearm and toolmark practice is not accurate. Each accredited laboratory will have a standard operating procedure that specifies the validated and approved methods to be used within that laboratory. A recent collaboration of NIST, AFTE, and OSAC has resulted in the publication of a Firearms Process Map. This 25-page document is a step-by-step process map demonstrating that firearm and toolmark examinations have distinct steps that can result in identification, elimination, and inconclusive conclusions. Both NAS reports helped point out that more research would be useful, especially research that provided more objective data.
This call for additional research was answered, as shown by the machine-based studies (some of which were published before PCAST) and the error rate studies cited in this declaration. As in any science, this research is ongoing and has tremendous potential to assist examiners by further supporting their conclusions.

Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods
President’s Council of Advisors on Science and Technology (PCAST) Report. Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods. September 2016.

PCAST
This is a non-peer-reviewed report released to President Obama by the President’s Council of Advisors on Science and Technology. There are several issues with the report, each of which will be addressed in turn.

In September of 2016 PCAST published its final report. This is a non-peer-reviewed report released to President Obama by the President’s Council of Advisors on Science and Technology. The publication examined and reported on the validity of numerous forensic science disciplines, the focus of which in this declaration will be the firearm and toolmark discipline. There are several issues with this report, each of which will be addressed in turn.

PCAST
Issue 1 – The report used narrow criteria for testing the validity of the firearms and toolmark discipline: at least two “appropriately” designed “black-box” studies.
◦ Ultimately PCAST deemed “… that firearms analysis currently falls short of the criteria for foundational validity, because there is only a single appropriately designed study to measure validity and estimate reliability.”
◦ As noted by PCAST Co-chair Dr.
Lander in 2018, PCAST had found firearms one study short of foundational validity, stating, “With only a single well-designed study estimating accuracy, PCAST judged that firearms analysis fell just short of the criteria for scientific validity, which requires reproducibility. A second study would solve this problem.” Since then, no fewer than four studies have been published that meet the criteria set forth by PCAST: Keisler et al, Bajic et al (Ames II), Chapnick et al, Guyll et al.

Issue 1 – The report used narrow criteria for testing the validity of the firearms and toolmark discipline: at least two “appropriately” designed “black-box” studies. PCAST defines a black-box study as “…an empirical study that assesses a subjective method by having examiners analyze samples and render opinions about the origin or similarity of samples.” The only thing that would be considered is the answer, not the reasoning behind the answer. Ultimately PCAST deemed “… that firearms analysis currently falls short of the criteria for foundational validity, because there is only a single appropriately designed study to measure validity and estimate reliability.” The study to which they were referring was the Baldwin study, also known as Ames I. As noted by PCAST Co-chair Dr. Lander in 2018, PCAST had found firearms one study short of foundational validity, stating, “With only a single well-designed study estimating accuracy, PCAST judged that firearms analysis fell just short of the criteria for scientific validity, which requires reproducibility. A second study would solve this problem.” Since then, no fewer than four studies have been published that meet the criteria set forth by PCAST, these criteria being a black-box study that was open set and included separate sample sets so that error rates could be more easily calculated.
The publication of Keisler et al in 2018 alone was the second study that would have satisfied PCAST’s requirements. Since Keisler, additional studies, such as Monson et al (Ames II), Chapnick et al, Law and Morris, Best and Gardner, and Guyll et al, the latter published in the Proceedings of the National Academy of Sciences, provide further data that meet PCAST’s thresholds for foundational validity. Details of the error rates associated with these various studies are available in Module 5. Of course, critics are now finding other problems with those studies, which, considering their approach, is ironic: they are criticizing them in a direction that PCAST never advocated. So here we have a group of “eminent” scientists to whom we should supposedly pay close attention, yet the critics themselves ignore this very same group when they advocate for things that PCAST did not or would not.

PCAST
Issue 2 – Sole reliance on the PCAST report to judge the validity of firearms and toolmarks is problematic because the analysis contained within is flawed.
◦ PCAST’s listing of 417 references may be misleading.
◦ “PCAST members and staff identified and reviewed those papers that were relevant to establishing scientific validity.” (p 67 – emphasis added)

Sole reliance on the PCAST report to judge the validity of firearms and toolmarks is problematic because the analysis contained within is flawed. The PCAST report lists 417 citations under the firearms and toolmarks section. When referring to these references, the PCAST report states the following.
“PCAST compiled a list of 2,019 [all encompassing] papers from various sources—including bibliographies prepared by the National Science and Technology Council’s Subcommittee on Forensic Science, the relevant Scientific Working Groups (predecessors to the current OSAC), and the relevant OSAC committees; submissions in response to PCAST’s request for information from the forensic-science stakeholder community; and our own literature searches. PCAST members and staff identified and reviewed those papers that were relevant to establishing scientific validity.” The listing of these references might imply due diligence and a thorough consideration of the peer-reviewed literature. However, the last sentence is critical (emphasis added): “PCAST members and staff identified and reviewed those papers that were relevant to establishing scientific validity.” By PCAST’s own admission, it appears that the past studies of variability, machine-based studies, and other foundational knowledge might have been deemed irrelevant. It was this narrow definition of empirical research and validity that allowed PCAST to disregard hundreds of research papers and ultimately consider only nine error rate studies.

PCAST
Issue 3 – OSAC replied to the PCAST report, and PCAST ultimately disregarded or ignored the concerns OSAC expressed.
◦ OSAC found that PCAST had made errors or omitted data in its analysis of the firearms validation studies.
◦ Brundage study
◦ Hamby study
◦ Fadul pistol slides study
◦ Fadul EBIS barrels study
◦ PCAST did not correct these.

The OSAC firearms and toolmark subcommittee responded with a review of the PCAST report. OSAC found PCAST had made errors or omitted data in its analysis of the firearms validation studies (4 of the 9 article summaries had errors).
Brundage study – PCAST stated that 30 examiners received test sets consisting of 20 questioned bullets to compare with 15 standards, with 300 returned answers. This is not true: there were 30 examiners, but the test sets consisted of 15 unknowns to compare with 10 pairs of standards, with 450 returned results.

Hamby study – PCAST had the test set construction correct, but the total number of examiners and results were incorrect. They reported 440 examiners when the Hamby study had 477 examiners (507 minus the original 30). PCAST also stated 6,600 returned answers (when there were 7,155) with 6,593 correct assignments (when there were 7,148).

Fadul pistol slides study – The summary provided by PCAST is correct for Phase I, but they completely ignored Phase II of the study, which was part of the very same report they referenced.

Fadul EBIS barrels study – PCAST indicated that the test kits consisted of 15 questioned samples fired from 10 pistols, with 2 of the 15 questioned samples coming from a firearm for which the known standards were not provided. This is not true. Each test kit had two known standards from each of eight pistols and 10 questioned samples, not 15. Furthermore, one pair of knowns did not have a corresponding unknown, and two unknowns did not have a corresponding known.

PCAST
Issue 3 – continued
◦ OSAC concluded that PCAST’s preferred study design was too narrow, and other types of studies have value in assessing overall error.
◦ The Baldwin study did not mimic casework situations as other studies did (such as the Smith et al study), which required a comparison of bullets and cartridge cases to determine how many firearms were involved.
◦ It was OSAC’s view that when taken as a whole, each validation study provides independent data points that show a low overall error rate.
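The Brundage and Hamby corrections above are simple arithmetic, and a minimal sketch can confirm that the corrected figures are internally consistent (assuming, per the test design described above, 15 unknowns per returned test kit; the variable names are ours, not from either report):

```python
# Brundage correction: 30 examiners x 15 unknowns = 450 returned results,
# not the 300 that PCAST reported.
brundage_returns = 30 * 15
assert brundage_returns == 450

# Hamby correction: 507 total examiners minus the original 30 leaves 477,
# and 477 examiners x 15 unknowns gives 7,155 returned answers.
hamby_examiners = 507 - 30
hamby_returns = hamby_examiners * 15
assert hamby_examiners == 477
assert hamby_returns == 7155

# With 7,148 correct assignments, 7 answers were not correct assignments,
# a rate of under 0.1% of returned answers.
errors = hamby_returns - 7148
rate_pct = round(errors / hamby_returns * 100, 3)
print(hamby_examiners, hamby_returns, errors, rate_pct)  # prints: 477 7155 7 0.098
```

Notably, PCAST's own misstated figures (440 examiners, 6,600 answers, 6,593 correct) imply the same difference of 7, so the corrections change the totals, not the error count.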
Additionally, OSAC concluded that PCAST’s preferred study design was too narrow, and other types of studies have value in assessing overall error. For example, PCAST failed to recognize that many of the validation studies used consecutively manufactured firearms. By doing so, the firearm and toolmark profession was attempting to create error-rate tests with worst-case-scenario samples. Despite these challenging samples, test takers reported few false identifications. Additionally, PCAST preferred the Baldwin et al study test design, where examiners compared and reported on only one questioned item at a time. The Baldwin study did not mimic casework situations as other studies did (such as the Smith et al study), which required a comparison of bullets and cartridge cases to determine how many firearms were involved. While this test design has utility (it allows for precise and easy error rate calculations), it does not mimic typical casework, where examiners are tasked with inter-comparing numerous items all at once. That being said, error rates are more difficult to calculate from the other study designs, and that can be a challenge when trying to use them in error rate discussions. It was OSAC’s view that when taken as a whole, each validation study provides independent data points that show a low overall error rate.

PCAST
Issue 3 – continued
◦ PCAST’s response condensed the 13-page reply into two sentences
◦ “The Organization of Scientific Area Committee’s Firearms and Toolmarks Subcommittee (OSAC FTS) took the more extreme position that all set-based designs are appropriate and that they reflect actual casework, because examiners often start their examinations by sorting sets of ammunition from a crime scene.
◦ OSAC FTS’s argument is unconvincing because (i) it fails to recognize that the results from certain set-based designs are wildly inconsistent with those from appropriately designed black-box studies, and (ii) the key conclusions presented in court do not concern the ability to sort collections of ammunition (as tested by set-based designs) but rather the ability to accurately associate ammunition with a specific gun (as tested by appropriately designed black-box studies).”
◦ This is not a thorough or correct summary of the 13-page OSAC response.
◦ PCAST failed to address the four major points of disagreement submitted by the OSAC firearms subcommittee.
◦ PCAST did not acknowledge or correct the misstated figures from the four studies.

It was hoped PCAST would acknowledge its mistakes and recognize the value of different study designs. Instead, it condensed the 13-page OSAC response into the following two sentences: “The Organization of Scientific Area Committee’s Firearms and Toolmarks Subcommittee (OSAC FTS) took the more extreme position that all set-based designs are appropriate and that they reflect actual casework, because examiners often start their examinations by sorting sets of ammunition from a crime scene. OSAC FTS’s argument is unconvincing because (i) it fails to recognize that the results from certain set-based designs are wildly inconsistent with those from appropriately designed black-box studies, and (ii) the key conclusions presented in court do not concern the ability to sort collections of ammunition (as tested by set-based designs) but rather the ability to accurately associate ammunition with a specific gun (as tested by appropriately designed black-box studies).” This is not a thorough or correct summary of the 13-page OSAC response. PCAST failed to address the four major points of disagreement submitted by the OSAC firearms subcommittee.
PCAST also neither acknowledged nor corrected the misstated figures from the four studies.

PCAST
Issue 4 – PCAST also insisted that error rates from studies be required with examiner testimony.
◦ This practice had already been considered and logically rejected.
◦ In 1996 the National Academy of Sciences report cautioned against attempting to derive an industry-wide error rate, stating, “Estimating rates at which nonmatching samples are declared to match from historical performance on proficiency tests is almost certain to yield wrong values. When errors are discovered, they are investigated thoroughly so that corrections can be made. A laboratory is not likely to make the same error again, so the error probability is correspondingly reduced.”
◦ In a peer-reviewed article, Dr. Budowle and colleagues also considered and rejected the idea.

PCAST also insisted that error rates from studies be required with examiner testimony. This practice had already been considered and logically rejected. In 1996 the National Academy of Sciences report cautioned against attempting to derive an industry-wide error rate, stating, “Estimating rates at which nonmatching samples are declared to match from historical performance on proficiency tests is almost certain to yield wrong values. When errors are discovered, they are investigated thoroughly so that corrections can be made. A laboratory is not likely to make the same error again, so the error probability is correspondingly reduced.” In a peer-reviewed article, Dr. Budowle and colleagues also considered and rejected the idea: “Providing error rates, along with or in combination with the association, has been proffered as a meaningful way to convey the strength of the evidence. However, suggesting that a specific error rate must be presented adds little value to the discussion on reliability.
A community-wide error rate is not meaningful, because it falsely reduces the rate of error for those who might commit the most errors and wrongly increases the rate for those who are the most proficient. Moreover, when an error of consequence occurs, for instance a false inclusion caused by human error, QA demands that corrective action be taken, which includes review of cases analyzed by the examiner prior to and after discovery of the error.”

PCAST
Summary
◦ The “bar” set by PCAST for “foundational validity” has been met.
◦ The PCAST report has counting and calculation errors. These were pointed out to PCAST, and they failed to correct them.
◦ The PCAST conclusion that test design is the main influence on rates of error is refuted by the results of multiple tests. This disproves the premise for rejecting other error rate studies as reliable measures of examiner error.
◦ The recommendation that error rates be reported by examiners for each case was already considered and rejected by the National Academy of Sciences in 1996.

In conclusion, the PCAST report should not be viewed as a reliable critique of firearm and toolmark examination. First, the PCAST report is out of date. As noted in the section of this declaration dealing with Premise 2, multiple open “black-box” studies have been published, and each shows low error rates. Therefore, the bar set by PCAST for “foundational validity” has been met. Second, the PCAST report has counting and calculation errors. These were pointed out to PCAST, and they failed to correct them. Third, the PCAST conclusion that test design is the main influence on rates of error is refuted by the results of multiple tests. This disproves the premise for rejecting other error rate studies as reliable measures of examiner error.
Finally, the recommendation that error rates be reported by examiners for each case was already considered and rejected by the National Academy of Sciences in 1996.

Concluding Thoughts
The reports issued by the NAS and PCAST have been misinterpreted and misrepresented in declarations offered to the courts as well as in court trials by attorneys. The 2008 NAS, 2009 NAS, and PCAST reports are sometimes held up as the consensus view of “real scientists” or of “academia.”
◦ However, this ignores the wealth of research performed by academia and non-practitioner individuals that has demonstrated with objective data not only that different tools produce different toolmarks but also that examiners can reliably interpret similarities and differences and render accurate common source determinations.
◦ Furthermore, careful reading of the reports can help to identify the misrepresentations, as well as the errors the reports themselves contain (e.g., PCAST) that remain uncorrected.

The reports issued by the NAS and PCAST have been misinterpreted and misrepresented in declarations offered to the courts as well as in court trials by attorneys. The 2008 NAS, 2009 NAS, and PCAST reports are sometimes held up as the consensus view of “real scientists” or of “academia.” In other words, if “real” scientists were tasked with researching firearm and toolmark evidence, then the research would prove that the fundamentals of firearm and toolmark examination are unfounded. However, this ignores the wealth of research performed by academia and non-practitioner individuals that has demonstrated with objective data not only that different tools produce different toolmarks but also that examiners can reliably interpret similarities and differences and render accurate common source determinations.
Furthermore, careful reading of the reports can help to identify the misrepresentations, as well as the errors the reports themselves contain (e.g., PCAST) that remain uncorrected. One of the interesting developments is that the PCAST report remains heavily relied upon for its “appropriate” criticisms of the discipline. And yet, when it comes to some things, such as error rate calculations and the treatment of inconclusive results, critics seem to steer away from the PCAST report and how these “eminent” scientists performed their work, and have instead hitched their wagons to others who take a more extreme view on how these things should be handled.

Questions