Coming in Spring 2022
Conference Chair: Professor Jane Campbell Moriarty, Carol Los Mansmann Chair in Faculty Scholarship
In an age when cyber-surveillance, facial recognition, and other forms of "techno-policing" have begun to replace more traditional forms of evidence, it is time to revisit the past and examine the future. In this conference, several speakers will examine the role of machines and artificial intelligence as evidence in criminal cases. Other speakers will consider evidence that continues to pose "black box" problems of accuracy and bias, such as eyewitness testimony, neuroimaging evidence, and abusive head trauma testimony.
Machine-driven evidence may be on the verge of replacing more traditional forms of evidence such as handwriting comparison, eyewitness identification, and even police testimony. But despite the technical sophistication of such evidence, concerns remain about its accuracy, about meaningful confrontation of witnesses, and about the danger of encoding existing biases. Please join us for a fascinating look at where past and present meet at the intersection of criminal law and evidence.
Beyond Cross Examination: The Meaning of Confrontation in the Machine Age
Keynote Speaker: Andrea Roth, Professor of Law, Faculty Co-Director, The Berkeley Center for Law and Technology
The rise of machine-generated proof has laid bare the inadequacy of physical confrontation, cross-examination, and the oath as a means for the accused to be "confronted with the witnesses against him." As it turns out, the rise of the machine offers an opportunity to make the whole of evidence law more coherent, by recognizing "confrontation" as a right of meaningful impeachment rather than merely a trial right to cross-examination; recognizing that much of the discovery and contestation of historical fact can and should occur outside the courtroom; and recognizing the false dichotomy between "physical" and "testimonial" evidence.
Facial Recognition Software v. Eyewitness Identification
Valena Elizabeth Beety, Professor of Law and Director of the West Virginia Innocence Project at West Virginia University
Using a wrongful convictions lens, this presentation compares identifications by machines with identifications by humans and advocates for greater reliability checks on both before use against a criminal defendant. It queries the relative use of Daubert hearings, forensic scientific committees on eyewitness identification and facial recognition software, and the importance of pre-trial discovery and contextual information in analyzing both forms of evidence. The presentation also examines the influence of facial recognition software on eyewitness identifications themselves, and the related potential for greater errors.
Racing the Future, Racing Evidence
I. Bennett Capers, Stanley A. August Professor of Law, Brooklyn Law School
Professor Capers turns to Afrofuturism to imagine what policing might look like in a majority-minority future in which people of color not only make up the majority of the population but also wield most political and economic power. He will explore how the Rules of Evidence might change, both as a result of people of color insisting on fairer rules and as a result of new technologies. Is it possible to have race-free machines? And if so, should we gird ourselves for a world where all testimony is machine testimony?
Medicine Without Science: Shaken Baby Syndrome and Other 'Diagnoses' of Crime
Keith A. Findley, Associate Professor, University of Wisconsin Law School
Shaken Baby Syndrome (SBS), now known more broadly as Abusive Head Trauma (AHT), is a purported diagnosis through which physicians render opinions that fully satisfy all that is needed to prove a crime: cause and manner of death or injury, the actus reus (what happened); the mental state of the perpetrator, the mens rea (was the injury sustained accidentally or naturally, or with intent or recklessness?); and even identity, based on the timing of the injuries. But the scientific foundation for the diagnosis is remarkably weak and riddled with circularity and confounds. Like so many of the forensic disciplines reviewed critically by both the National Academy of Sciences (2009) and the President's Council of Advisors on Science and Technology (2017), this discipline ultimately rests upon subjective judgments, what physicians call clinical judgment. Also like so many of the pattern-matching forensic disciplines, medical determinations of criminal acts rest upon unknown error rates and unsupportable statistical claims of certainty. Moreover, medical diagnosis of crime suffers an additional disadvantage: it is not measurable against ground truth, and hence does not provide the meaningful feedback that enables clinicians to learn from experience, which is essential to any discipline that relies extensively on subjective judgment calls.
AI and the New White Witness
Margaret Hu, Associate Professor of Law, Washington and Lee University School of Law
In the 1854 decision People v. Hall, the California Supreme Court held that Chinese people, together with African Americans and Native Americans, were not allowed to testify in court. The Court relied upon an 1850 California statute providing: "No black or mulatto, or Indian, shall be permitted to give evidence in favor of, or against, a white person." The decision reversed a murder conviction of George W. Hall, "a free white citizen" of the State of California, by striking the eyewitness testimony of three Chinese witnesses. After the Chinese Exclusion Act of 1882 was extended by the Geary Act of 1892, federal law called for "two white witnesses" to "testify" to the lawfulness of a Chinese resident's presence in the U.S. With the introduction of AI- and algorithm-based evidence, the presumption of credibility of the "AI witness," and implicit and explicit bias against human witnesses, this presentation questions how machine learning has the potential to replicate the historical role of the "white witness." It compares the presumption of credibility of the "white witness" in the nineteenth century with the presumption of credibility of the "AI witness" in the twenty-first century.
Big Brother Is Reading You: Linguistic Cyber-surveillance
Patrick Juola, Professor of Computer Science and Coordinator of the Cybersecurity Studies Program, Duquesne University McAnulty College and Graduate School of Liberal Arts, Department of Mathematics and Computer Science
One key to privacy protection is keeping "personally identifiable information" out of the public eye. Recent advances in text analysis, however, have shown that mere content lists (e.g., "full names, SSNs, driver's license numbers") are not enough. By looking at the writing style of a document, it is practical to infer not only identity but also personal traits up to and including medical conditions. This presentation summarizes recent research and discusses the issues of online identification, including privacy and forensic science implications.
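The kind of stylometric inference described above can be sketched in a few lines. This is an illustrative toy, not Professor Juola's actual tools: it compares character trigram frequency profiles with cosine similarity to guess which candidate author's known writing is stylistically closest to an unknown document. The function names and the n-gram approach are assumptions chosen for simplicity; real authorship attribution systems use far richer feature sets.

```python
# Toy stylometry sketch: attribute a document to one of several candidate
# authors by comparing character n-gram frequency profiles.
from collections import Counter
from math import sqrt

def ngram_profile(text, n=3):
    """Normalized character n-gram frequencies for a text."""
    text = " ".join(text.lower().split())  # collapse whitespace
    counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def cosine(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(p[g] * q.get(g, 0.0) for g in p)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def most_similar_author(unknown, candidates):
    """Return the candidate whose writing sample is stylistically
    closest to the unknown document."""
    profile = ngram_profile(unknown)
    return max(candidates, key=lambda a: cosine(profile, ngram_profile(candidates[a])))
```

Even this crude profile captures habits (spacing, common letter sequences, function words) that persist across documents, which is why removing names and SSNs alone does not anonymize a text.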
Peering into the Machine: Neuroimaging and the Unknown
Jane Campbell Moriarty, Associate Dean for Faculty Scholarship, Carol Los Mansmann Chair in Faculty Scholarship, and Professor of Law, Duquesne University School of Law
Neuroscience and neuroimaging evidence are often admitted in both civil and criminal cases, premised on claims of validity, reliability, and accuracy. But is such evidence as certain as it is often claimed to be? Can judges really be gatekeepers when it comes to neuroimaging machine evidence? This presentation explores some of the less visible concerns surrounding neuroscience machine evidence, including the problems of sensitivity/specificity, replication, and expertise.
Black Box Investigations
Erin Murphy, Professor, NYU School of Law
This presentation explores procedural questions such as the disclosure and discovery obligations of the government with regard to investigations involving machine evidence generated by third party vendors. The talk uses the specific example of genetic genealogical investigations (such as the Golden State Killer) to explore the implications of court orders that permit the government to withhold basic investigative information from the defense and the public (such as about the type of DNA testing conducted, the databases searched, and the ensuing investigation).
Coding Suspicion for Drug Interdiction Stops
Wesley M. Oliver, Associate Dean for Academic Affairs, Director of the Criminal Justice Program and Professor of Law, Duquesne University School of Law
Every criminal procedure professor will tell you that none of the cases in the casebook teach students how to meaningfully distinguish facts sufficient for reasonable suspicion or probable cause from facts that fall short of those thresholds. All legal standards carry some ambiguity, but totality-of-the-circumstances tests like reasonable suspicion and probable cause are among the least clear standards known to the law. No human can read all the cases and determine how much weight to give each factor, in combination with every permutation of other factors, to assess the degree of suspicion present. But the fact that no human could do it does not mean that it is impossible.
As machine learning problems go, this is one of the simpler ones to solve, especially within a limited universe of possible bases of suspicion and a single type of suspected crime. Drug interdiction turns out to be the perfect place to start building an algorithm to assess suspicion. There are only so many things an officer can observe during a brief traffic stop to determine whether there is enough to hold the motorist for a drug dog's sniff, or to search the car for drugs. The fear with all algorithms is bias, but the drug interdiction context allows an immediate check not only on a system's accuracy but also on its bias.
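The kind of model described above can be sketched as a simple logistic regression. Everything here is assumed for illustration: the feature names are hypothetical observable factors, the training history is synthetic, and this is not Professor Oliver's actual system; it only shows how past stop outcomes could turn a checklist of factors into a numeric degree of suspicion.

```python
# Illustrative sketch: learn weights for observable traffic-stop factors
# from (synthetic) past outcomes, then score a new stop.
import math
import random

FEATURES = ["air_freshener", "rental_car", "inconsistent_story", "nervousness"]

def score(weights, bias, x):
    """Logistic score in [0, 1]: modeled probability drugs are present."""
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

def train(stops, outcomes, lr=0.1, epochs=500):
    """Fit logistic-regression weights by stochastic gradient descent."""
    weights, bias = [0.0] * len(FEATURES), 0.0
    for _ in range(epochs):
        for x, y in zip(stops, outcomes):
            err = score(weights, bias, x) - y  # prediction error
            bias -= lr * err
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
    return weights, bias

# Synthetic history: drugs were found mostly when the story was inconsistent.
random.seed(0)
stops = [[random.randint(0, 1) for _ in FEATURES] for _ in range(200)]
outcomes = [1 if x[2] and random.random() < 0.9 else (1 if random.random() < 0.1 else 0)
            for x in stops]
weights, bias = train(stops, outcomes)
```

Because both the learned weights and the training outcomes are inspectable, the same record that trains the model also supports the accuracy and bias audit the paragraph above describes.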