The Death of Eyewitness Testimony & The Rise of Machine Evidence
Friday, April 8, 2022, 8 a.m. to 3 p.m.
Conference Chair: Professor Jane Campbell Moriarty, Carol Los Mansmann Chair in Faculty Scholarship
Attendees will earn 4 Substantive CLE credits and 1 Ethics CLE credit.
Registration for this CLE is closed.
Duquesne University Power Center Ballroom
1105 Forbes Ave.
Pittsburgh, PA 15219
Cost: $225 (Includes continental breakfast and lunch)
The legal system is increasingly reliant on machine-driven evidence including biometric identification, cell-site location information, neuroimaging, and computer-automated DNA profiles. Although these technologies are remarkable, they pose challenging legal and ethical questions. Speakers at the conference will address constitutional concerns about privacy, self-incrimination, and confrontation; the reliability of machine evidence; the role of racial discrimination and bias in technology; and the ethical implications of technological evidence.
What Machines Can Teach Us About Confrontation
Keynote Speaker: Andrea Roth, Professor of Law, Faculty Co-Director, The Berkeley Center for Law and Technology
The Supreme Court has narrowly construed the Sixth Amendment right of the accused to be "confronted with the witnesses against him" as guaranteeing only certain in-court trial safeguards: cross-examination, physical confrontation, and the oath. But the recent rise of "machine" witnesses, such as complex DNA algorithms that declare that a defendant was or was not a likely contributor to a DNA mixture, has newly exposed this narrow view of confrontation as untenable. Machines cannot be physically confronted or cross-examined, and yet their accusations merit adversarial scrutiny just as much as those of human witnesses. Drawing on newly published research by others on the history of cross-examination, this Essay makes the most comprehensive argument yet that a broader view of confrontation, as a right to a meaningful opportunity to scrutinize the government's proof, is the only view consistent with the Constitution's history, text, and purpose. It then explores what a broader view of confrontation would mean, both for machine and human witnesses.
Facial Recognition Software v. Eyewitness Identification
Valena Elizabeth Beety, Professor of Law at Arizona State University Sandra Day O'Connor College of Law and the Deputy Director of the Academy for Justice
Using a wrongful convictions lens, this presentation compares identifications by machines, notably facial recognition software, with identifications by humans, and advocates for greater reliability checks on both before either is used against a criminal defendant. The presentation examines the cascading influence of facial recognition software on eyewitness identifications themselves, and the related potential for compounding errors. As a solution, it advocates including eyewitness identification in the Organization of Scientific Area Committees for Forensic Science (OSAC) review of facial recognition software, for a more robust examination of the software and its usage. It also encourages police departments to adopt double-blind procedures for eyewitness identifications, including when "matching" photos from facial recognition software are included. Finally, the presentation concludes by speculating where these two fields will stand ten years from now, in 2032.
Racing the Future, Racing Evidence
Bennett Capers, Professor at Fordham University School of Law and Director of Fordham's Center on Race, Law, and Justice
Capers' work focuses on policing and race, where he imagines a majority-minority future with perfect surveillance. In such a world, what evidence will be admissible? Will we need new rules? In his presentation, Professor Capers will go beyond eyewitness testimony to contemplate a world with perfect surveillance, from eye-in-the-sky technology to the ubiquity of surveillance cameras and facial recognition technology to the almost instantaneous access to big data. In such a world, where actual human testimony might be the exception and machine "testimony" the rule, might it make sense to fashion different Rules of Evidence? More broadly, what will evidence rules look like in the future?
Biometric AI and an AI Bill of Rights
Margaret Hu, Professor of Law and International Affairs at Penn State Law and School of International Affairs at the Pennsylvania State University
An informed discussion on an AI Bill of Rights requires grappling with biometric data and its integration into emerging AI systems. Biometric AI systems serve a wide range of governmental and defense purposes, including policing, border security and immigration enforcement, and biometric cyberintelligence and biometric-enabled warfare. Biometric AI systems are increasingly categorized as "high-risk" when they are deployed in ways that may impact fundamental constitutional rights and human rights. There is growing recognition that high-risk biometric AI systems, such as facial recognition systems, can pose unprecedented challenges to criminal procedure rights. This Essay concludes that a failure to recognize these challenges will lead to an underappreciation of the constitutional threats posed by emerging biometric AI systems and the need for an AI Bill of Rights.
Something Wicked This Way Thumbs: Personal Contact Concerns of Text-Based Attorney Marketing
Ashley M. London, Director of Bar Studies and Assistant Professor of Legal Skills at Duquesne University School of Law
When the American Bar Association (ABA) announced its latest revisions to Model Rules 7.1-7.5, governing attorney advertising, solicitation, and information about legal services in general, the organization may have unintentionally created a way for attorneys to hack directly into the brains of potential clients for purposes of pecuniary gain.
Brushing aside decades of precedent, the rule on Solicitation of Clients now allows real-time electronic solicitation, including text messages and Tweets. These developments raise the question whether the ABA committee charged with redefining this rule actually understands the power and pervasiveness of cell phones, or how the use of this technology is changing our cognitive capacity and consumer behaviors.
Recent studies of cognition suggest that signals from one's own cellular device, whether the ping of a text message or a Tweet from an attorney advertising legal services, activate the same attention system as the sound of one's own name. The mere sending and receiving of text messages releases dopamine in the brain, setting up an addiction-like cycle that leads to more texting. The total number of text messages sent in 2017 was a mind-blowing 8.3 trillion, or about 23 billion per day, each virtually impossible for the recipient not to open immediately and, most likely, respond to.
The newly amended rules for attorney advertising do not comport with the most up-to-date cognitive research on how smartphone use, and text messaging (short message service, or SMS) in particular, affects the consumer brain. As technology continues to grow at a breakneck pace, an increasingly tortuous interpretation of Rule 7.3 simply cannot be revised fast enough, or thoroughly enough, to keep pace with technology and its impact on human cognition.
The Inscrutability Problem: From First-Generation Forensic Science to Neuroimaging Evidence
Conference Chair: Jane Campbell Moriarty, Carol Los Mansmann Chair in Faculty Scholarship and Professor at Duquesne University School of Law
Expert testimony continues to turn away from human-based skills to embrace machine-based evidence. Technology is used to identify and locate individuals, unlock encrypted devices, and even evaluate criminal responsibility. Perhaps this is a positive change. The shortcomings of first-generation forensic identification specialties are well known, including the inscrutability of their subjective comparisons. As such, this newer generation of evidence may well be an improvement. Yet machine-based evidence relies on "black box" links among hardware, software, algorithms, statistics, and engineering to reach a result, one created and interpreted by humans subject to bias and cognitive error. Much of this evidence is also inscrutable, given its complexity. Focusing on functional neuroimaging evidence, this article discusses its foundational reliability, its opacity, and its increased reliance on machine learning.
Technology - Revealing or Framing the Truth? A Jurisprudential Debate
Dana Neacșu, Associate Professor of Legal Skills and Director of the Duquesne Center for Legal Information and the Allegheny County Law Library
Technology is so much more than a prosthetic. But how much more? And what else is it? In the legal realm, its role is not yet clear, in part because the meaning of technology has usually been assumed. This lack of elucidation becomes problematic, especially when technology has the ability to convert assumptions into facts and takes on a truth-making, rather than a merely truth-revealing, mission. Legally, it is problematic to let technology stand in for reflective thinking. Evidentiary rules enable technology to decide what can be proven, ergo what truth is, and this paper is about clarifying that possibility.
Technology is a fork in the road of the legal meaning-making process; it may simultaneously obscure and reveal legal truth. Given this position in the process of negotiating the appearance of legal truth, this paper discusses technology philosophically and ontologically, from determinist and phenomenological perspectives, directing the reader's gaze to what constitutes legal truth, and then extends this approach to the evidentiary context of DNA sample testing.
Coding Suspicion for Drug Interdiction Stops
Wesley M. Oliver, Director of the Criminal Justice Program and Professor at Duquesne University School of Law
Every criminal procedure professor will tell you that none of the cases in the book teach students how to meaningfully differentiate facts sufficient for either reasonable suspicion or probable cause from facts that do not cross these thresholds. All legal standards have some ambiguity, but totality-of-the-circumstances tests like reasonable suspicion and probable cause are among the least clear standards known to the law. Humans cannot read all the cases and determine how much weight is to be given each factor, when presented with every permutation of other factors, to assess the degree of suspicion present. But the fact that no human could do it does not mean that it is impossible.
As machine learning problems go, this is one of the simpler problems to solve, especially in a limited universe of possible bases of suspicion and a single type of crime suspected. Drug interdiction turns out to be the perfect place to start to build an algorithm to assess suspicion. There are only so many different things an officer can observe in a brief traffic stop to determine whether there is enough to hold the motorist for a drug dog's sniff, or to search the car for drugs. The fear with all algorithms is bias, but the drug interdiction context allows an immediate check not only on the system's accuracy but also its bias.
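The kind of model Professor Oliver describes, one that learns how much weight each observable factor deserves from the outcomes of past stops, can be illustrated with a deliberately toy sketch. The factor names, synthetic data, and choice of a simple logistic regression below are all invented for illustration and are not drawn from his project or any real dataset; an interpretable model is used so the learned weight on each factor is visible, which is also what makes auditing for bias possible.

```python
import math
import random

# Hypothetical binary "bases of suspicion" an officer might record during a
# brief traffic stop. These names are illustrative only.
FACTORS = ["air_freshener", "nervous_demeanor", "rental_car",
           "conflicting_travel_story", "third_party_vehicle"]

def train_logistic(rows, labels, lr=0.1, epochs=500):
    """Fit a tiny logistic-regression model by stochastic gradient descent.

    Each row is a list of 0/1 factor indicators; the label is 1 if the stop
    actually turned up drugs. The learned weight on each factor shows how
    much it contributes to the model's estimate of suspicion.
    """
    n = len(rows[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of drugs
            err = p - y
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def suspicion_score(w, b, x):
    """Probability-like suspicion estimate for one stop's factor vector."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic training data: by construction, a conflicting travel story and a
# third-party vehicle are the factors that actually correlate with outcomes.
random.seed(0)
rows, labels = [], []
for _ in range(400):
    x = [random.randint(0, 1) for _ in FACTORS]
    signal = x[3] + x[4]  # the two "predictive" factors in this toy world
    labels.append(1 if signal + random.random() > 1.5 else 0)
    rows.append(x)

w, b = train_logistic(rows, labels)
for name, weight in zip(FACTORS, w):
    print(f"{name:25s} {weight:+.2f}")
```

In this toy run the model assigns its largest weights to the two factors that genuinely predicted outcomes and near-zero weights to the noise factors, which is the transparency the abstract's bias check depends on: one can inspect whether the learned weights track legitimate indicia of drug activity or a proxy for something impermissible.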
We want to extend a special thank you to our generous event sponsors: