AI Justice

The Sixth Amendment of the United States Bill of Rights guarantees defendants certain rights, an important one being the right to a fair and impartial jury. In practice, however, it is extremely challenging for any juror or potential juror to avoid bias introduced by information, and misinformation, from sources like social media and 24-hour television news channels. This is especially true for high-profile cases like the recent Johnny Depp/Amber Heard defamation case and the O.J. Simpson trial in 1994.

Sources of bias also affect everyday, lower-profile cases, which might receive only local news coverage. Even in these cases (or in cases with no direct coverage), potential jurors can become biased through other means, such as seeing social media posts on a topic relevant to the case. Given this, one has to wonder whether it is even remotely possible to find an impartial juror, let alone an impartial jury, when access to information and misinformation through news and social media outlets is so prevalent.

To counteract these challenges to seating a fair and impartial jury, some futurists have proposed using artificial intelligence (AI) jurors or even AI judges to eliminate bias. Although an AI would not be swayed by news stories and social media posts, implicit bias is still a concern. For the AI to make "accurate" (i.e., fair) decisions, it must first be trained, which typically means using the outcomes of previous cases, along with the factors considered in them, as training data. That approach rests on the assumption that the outcomes of prior cases were themselves fair, which historically has not been the case.

Research has shown that women and people of color can receive as little as half as much in damages for lost wages as a white man in the same situation [1]. This is because race- and gender-based actuarial tables reflect historical norms; they are therefore poorly suited to projecting future earnings, since they fail to account for potential future progress in eliminating gender and racial disparities. (Note: as of January 1, 2020, reducing damages for lost future earnings based on race, gender, or ethnicity is prohibited in personal injury and wrongful death suits in California.)

If you think this couldn't happen with state-of-the-art machine learning, it already has. Amazon built an AI to identify the best candidates for its job openings, in part to avoid gender bias in hiring. However, the training data consisted of resumes from its existing, predominantly male applicant pool and workforce, so the system learned that the "best" candidates looked male and penalized resumes that indicated the applicant was a woman. Clearly, AI is not a panacea that will instantly solve every impartiality problem.

While AI is likely not yet capable of completely replacing jurors or judges, it can still play a significant role for legal and judicial teams. Working with behavioral scientists, we can, for example, use psychographics and neurolinguistics to better identify biases in potential jurors and to gauge whether external information (e.g., news, social media) may be unduly influencing their opinions about a case. Psychographics is the science of understanding people based on their interests, activities, and opinions, and many AI tools already exist to generate an individual's psychographic profile. The most sophisticated models can use public information, such as social media posts and LinkedIn biographies, to generate personality trait scores and predict a person's opinions, attitudes, hobbies, and even political party affiliation. At an individual level, then, AI can decode a person.
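To give a concrete, if simplified, sense of how such profiling works, the sketch below scores a piece of public text against a tiny trait lexicon. The lexicon, weights, and trait names are purely illustrative assumptions; real tools rely on models trained on large labeled datasets rather than hand-built word lists.

```python
# Minimal sketch of lexicon-based psychographic scoring.
# The lexicon and weights below are hypothetical; production tools use
# models trained on large labeled corpora, not hand-built word lists.
import re
from collections import Counter

TRAIT_LEXICON = {
    "extraversion":      {"party": 1.0, "friends": 0.8, "exciting": 0.6, "alone": -0.7},
    "conscientiousness": {"plan": 0.9, "deadline": 0.8, "careful": 0.7, "spontaneous": -0.5},
}

def trait_scores(text: str) -> dict:
    """Return a crude score per trait based on word counts in `text`."""
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    total = sum(words.values()) or 1
    scores = {}
    for trait, lexicon in TRAIT_LEXICON.items():
        raw = sum(weight * words[w] for w, weight in lexicon.items())
        scores[trait] = raw / total  # normalize by document length
    return scores

if __name__ == "__main__":
    post = "I love planning every detail ahead of a deadline, but big parties drain me."
    print(trait_scores(post))
```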

However, the important question isn't "Can I accurately measure this person?" What we really want to know is "Can my measurement of this person accurately predict whether they'll be a good juror for my client?", and answering that frequently requires more than simple demographic data or whatever can be found on an individual's social media. Psychographic data can be complemented with neurolinguistics, the science of how language is represented in the brain. How language is represented is more or less unique to each person (every word carries a slightly different, subjective meaning), so a person's language use can give insight into their values, commitment level, and beliefs. Importantly, because language use is largely implicit and automatic, it is difficult for someone to deliberately conceal what it reveals.
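As an illustration of the kind of surface-level language signals a neurolinguistic analysis might start from, the sketch below counts hedging words, absolutist words, and first-person pronouns in a juror's written answer. The word lists and features are hypothetical placeholders, not an actual neurolinguistic model.

```python
# Sketch: extract simple language-use features from a juror's written answer.
# The word lists below are illustrative only.
import re

HEDGES = {"maybe", "perhaps", "possibly", "somewhat", "probably"}
ABSOLUTES = {"always", "never", "definitely", "completely", "totally"}
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def language_features(answer: str) -> dict:
    words = re.findall(r"[a-z']+", answer.lower())
    n = len(words) or 1
    return {
        "hedging_rate": sum(w in HEDGES for w in words) / n,
        "absolutist_rate": sum(w in ABSOLUTES for w in words) / n,
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / n,
        "word_count": len(words),
    }

print(language_features("I definitely think the police always act properly."))
```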

Let's go back to the O.J. Simpson trial (even though this was a criminal trial, the example still works). In 1994, when the trial occurred, AI was nowhere near as capable as it is now, but prospective jurors were still required to complete a jury selection questionnaire [2]. The questionnaire was designed to identify which jurors would be biased and, because of the case's high profile, ran 75 pages; alongside the usual demographic questions, it covered a wide range of other topics, including "Have you ever written to a celebrity?" and "Are you a fan of the USC Trojans football team?"

If psychographic profiling and neurolinguistic AI tools had existed at the time, the questionnaire data alone would have given the AI enough to assess each potential juror. The attorneys could have known how each individual felt about O.J. Simpson and the Los Angeles Police Department, and whether they leaned toward bias in either direction. Other valuable information, such as which media channels each prospective juror was likely to engage with, would have helped the attorneys gauge the probability of media coverage shaping that juror's opinions. Finally, for the individuals ultimately seated on the jury, a baseline measure of impartiality would have existed; as the case progressed, their psychographic and neurolinguistic profiles could have been reassessed to check whether external information was influencing their perception of the case. Even with intense media coverage, these AI tools could thus help sustain an impartial jury.
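One simple way that "baseline and reassess" idea could be operationalized is a drift check between a juror's initial profile and a later one, as in the sketch below. The trait names, distance measure, and flagging threshold are arbitrary assumptions for illustration; a real system would need to calibrate them empirically.

```python
# Sketch: flag jurors whose profiles drift from their baseline during trial.
# The 0.25 threshold and Euclidean distance are arbitrary illustrative choices.
import math

def profile_drift(baseline: dict, current: dict) -> float:
    """Euclidean distance between two trait-score dictionaries."""
    return math.sqrt(sum((current[k] - baseline[k]) ** 2 for k in baseline))

def flag_drifting_jurors(baselines: dict, reassessments: dict, threshold: float = 0.25):
    flagged = []
    for juror_id, baseline in baselines.items():
        drift = profile_drift(baseline, reassessments[juror_id])
        if drift > threshold:
            flagged.append((juror_id, round(drift, 3)))
    return flagged

baselines = {"juror_3": {"case_sympathy": 0.1, "media_trust": 0.5}}
reassessed = {"juror_3": {"case_sympathy": 0.6, "media_trust": 0.4}}
print(flag_drifting_jurors(baselines, reassessed))
```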

You may be reading this and thinking, “This is all well and good, but I don’t know how to apply this to my cases,” or “I collect demographic information on my potential jurors; isn’t that enough?” or “There’s no way a judge will allow me to give a 75-page questionnaire to every potential juror”. You’re probably right regarding that last point, but the good news is that unless you’re working on a case involving an A-list celebrity, you probably won’t need 75 pages. 

To apply AI to your case, the first step is to collect training data. Recruit a large pool of virtual jurors (online survey respondents) and gather, for each one, demographic and psychographic data, the degree to which they support the plaintiff or the defendant, how much liability they assign to each party, and how much they believe the plaintiff should receive in damages.
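As a rough sketch, one record in that training dataset might look something like the following; the field names, scales, and traits are assumptions, not a standard schema.

```python
# Hypothetical record collected from each virtual juror. Field names,
# scales, and traits are illustrative assumptions, not a standard schema.
from dataclasses import dataclass, asdict

@dataclass
class VirtualJurorResponse:
    juror_id: str
    age: int
    gender: str
    county: str
    extraversion: float         # psychographic trait scores, 0-1
    conscientiousness: float
    supports_plaintiff: float   # 0 = fully for defendant, 1 = fully for plaintiff
    defendant_liability: float  # share of fault assigned to defendant, 0-1
    damages_awarded: float      # dollars

row = VirtualJurorResponse("vj_001", 42, "F", "Travis", 0.31, 0.77, 0.64, 0.80, 250_000.0)
print(asdict(row))
```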

Make sure the responses you get from your virtual jurors are high quality and come from a representative sample: avoid the better-known online recruitment sites, such as Craigslist or Amazon Mechanical Turk, which attract "survey trolls," and recruit at the county level rather than the state level [3]. The demographic and psychographic data are then used to predict the favorability of the case (call this the "win rate" or "win metric") and to identify which specific aspects of a psychographic profile predict that win rate. It is highly unlikely that every psychographic trait you measure will be predictive; far more likely, only a handful are. For example, extraversion might be negatively predictive and conscientiousness positively predictive of win rate, in which case your ideal juror would have low extraversion and high conscientiousness.
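A hedged sketch of that modeling step is below, using synthetic data and an off-the-shelf logistic regression. The traits, the simulated relationship, and the resulting coefficients are fabricated purely to show the mechanics of identifying which traits predict the win metric.

```python
# Sketch: which psychographic traits predict the "win metric"?
# Synthetic data only; in practice the features and labels would come
# from the virtual-juror responses described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
extraversion = rng.uniform(0, 1, n)
conscientiousness = rng.uniform(0, 1, n)

# Simulate favorability: conscientiousness helps, extraversion hurts.
logit = 2.0 * conscientiousness - 1.5 * extraversion + rng.normal(0, 0.5, n)
favors_client = (logit > 0.25).astype(int)

X = np.column_stack([extraversion, conscientiousness])
model = LogisticRegression().fit(X, favors_client)

for name, coef in zip(["extraversion", "conscientiousness"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")  # sign shows the direction of the effect
```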

The second step is to ask the judge to allow a Supplemental Juror Questionnaire (SJQ) so you can collect the same measures from the actual jury pool. If the judge declines, the same questions can still be worked into voir dire.

Third and finally, make sure the voir dire (or SJQ) questions that measured the predictive traits (in our example, extraversion and conscientiousness) are put to the potential jurors, then rank the jurors by their scores on those traits. Those who rank highly are the jurors you want to keep, whereas the low-ranked jurors are the ones most likely to be biased against your client.
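The ranking itself can be as simple as scoring each potential juror with the trait weights learned in step one and sorting, as in the sketch below; the juror names, trait scores, and weights are invented for illustration.

```python
# Sketch: rank potential jurors by predicted favorability.
# The weights would come from the fitted model above; these values and
# the juror trait scores are invented for illustration.
weights = {"extraversion": -1.5, "conscientiousness": 2.0}

panel = {
    "juror_A": {"extraversion": 0.8, "conscientiousness": 0.4},
    "juror_B": {"extraversion": 0.2, "conscientiousness": 0.9},
    "juror_C": {"extraversion": 0.5, "conscientiousness": 0.5},
}

def favorability(traits: dict) -> float:
    return sum(weights[t] * v for t, v in traits.items())

ranked = sorted(panel.items(), key=lambda kv: favorability(kv[1]), reverse=True)
for juror, traits in ranked:
    print(juror, round(favorability(traits), 2))
```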

AI has developed rapidly in the past few years and is likely to keep doing so, to the point that datasets covering hundreds or thousands of individuals' attitudes and beliefs about a specific case can be fully analyzed in minutes. There is no silver-bullet solution to juror bias; eliminating it entirely is likely an impossible goal. Rather, AI can improve our ability to seat an impartial jury by identifying and excluding potential jurors who hold biases and prejudices about a specific case. That is why it is worthwhile to explore the use of psychographic profiling and neurolinguistic AI tools in our legal system.

References:

  1. https://www.torklaw.com/info/how-race-impacts-personal-injury-cases/
  2. https://rexsorgatz.medium.com/document-the-jury-selection-questionnaire-from-the-oj-simpson-trial-7483d3b8995
  3. https://juryanalyst.com/blog/focus-group-samples/