AI Justice

The Sixth Amendment to the United States Constitution guarantees defendants certain rights, an important one being the right to a fair and impartial jury. Despite this, it’s extremely challenging for any juror or potential juror to avoid bias from the information and misinformation disseminated by sources like social media and 24-hour news channels. This is especially true for high-profile cases like the recent Johnny Depp/Amber Heard defamation case and the O.J. Simpson trial in 1994.

Sources of bias also apply to everyday, lower-profile cases that might only receive local news coverage. Even in these cases (or in cases with no direct coverage), potential jurors can become biased through other means, such as viewing posts on a relevant topic on social media. Given this, one has to wonder whether it’s even remotely possible to find an impartial juror – let alone an impartial jury – when access to information and misinformation through news and social media outlets is so prevalent.

In an attempt to counteract these challenges to seating a fair and impartial jury, some futurists have proposed using artificial intelligence (AI) jurors, or even AI judges, to ensure a lack of bias. Although an AI would not be swayed by news stories and social media posts, there are still implicit bias considerations to take into account. For example, in order for the AI to make “accurate” (i.e., fair) decisions, it first needs to be trained, which would typically mean using the results and important factors from previous cases as training data. However, this rests on the assumption that the outcomes of those previous cases were themselves fair, which historically has not been the case.

Research has shown that women and people of color may receive as little as half as much in damages for lost wages as a white male in the same situation [1]. This happens because race- and gender-based actuarial tables are built on historical norms, which makes them inherently inaccurate for projecting future earnings, and because they fail to account for potential future progress in eliminating gender and racial disparities. (Note: as of January 1, 2020, reducing damages for lost future earnings based on race, gender, or ethnicity is prohibited in personal injury and wrongful death suits in California.)

If you think this couldn’t happen with state-of-the-art technology and machine learning, it already has. Amazon tried to create an AI to find the best candidates for its job openings in order to avoid gender bias. However, the training data it used was the resumes of current employees, who are predominantly male due to existing gender biases, so the model learned to penalize resumes that signaled a female candidate because the biased training data taught it that the best candidates had to be male. As such, it’s clear that the use of AI is not a panacea that will instantly solve all impartiality problems.

While AI is likely not at the point where it can completely replace jurors or judges, it can certainly still play a major role for legal and judicial teams. Working with behavioral scientists, we can, for example, use psychographics and neurolinguistics to better identify biases in potential jurors and to see whether external information (e.g., news, social media) may be overly influencing their opinion on the case at hand. Psychographics is the science of understanding people based on their interests, activities, and opinions. Many AI tools already exist to generate psychographic profiles of an individual. The most sophisticated models can use public information like social media posts and LinkedIn biographies to generate personality trait scores and predict a person’s opinions, attitudes, hobbies, and even political party affiliation. Thus, at an individualized level, AI can decode a person.
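To make this concrete, below is a minimal sketch of how text-based psychographic scoring can work, assuming access to writing samples annotated with personality trait scores. The example posts, the placeholder trait annotations, and the simple TF-IDF-plus-ridge pipeline are illustrative assumptions – they are not the commercial profiling tools referenced above, which are far more sophisticated.

```python
# A minimal sketch of text-based psychographic scoring.
# The posts and trait scores below are hypothetical placeholders; a real
# system would be trained on a large, validated, annotated dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical public posts paired with annotated trait scores (0-1 scale).
posts = [
    "Love big parties and meeting new people every weekend!",
    "Spent the evening reorganizing my files and planning next quarter.",
    "Prefer a quiet night in with a book over any social event.",
    "Finished every item on my checklist before the deadline again.",
]
extraversion_scores = [0.9, 0.4, 0.1, 0.3]       # placeholder annotations
conscientiousness_scores = [0.3, 0.9, 0.5, 0.9]  # placeholder annotations

def fit_trait_model(texts, scores):
    """Fit a simple text-to-trait regressor (TF-IDF features + ridge regression)."""
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
    model.fit(texts, scores)
    return model

extraversion_model = fit_trait_model(posts, extraversion_scores)
conscientiousness_model = fit_trait_model(posts, conscientiousness_scores)

new_post = ["Hosting a huge game night this weekend, everyone is invited!"]
print("Estimated extraversion:", round(extraversion_model.predict(new_post)[0], 2))
print("Estimated conscientiousness:", round(conscientiousness_model.predict(new_post)[0], 2))
```

The same idea scales up to richer features and far larger annotated corpora; the point is simply that trait scores are predictions made from a person’s text, not direct measurements.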

However, what’s important isn’t “can I accurately measure this person?” – what we want to know is “can my measure of this person accurately predict whether they’ll be a good juror for my client?” Answering that frequently requires more than simple demographic data or information found on an individual’s social media. Psychographic data can then be complemented with neurolinguistics, the science of how language is represented in the brain. Put simply, the way each person uses and represents language is more or less unique to them (every word carries a slightly different, subjective meaning), which can give insight into their values, commitment level, and beliefs. Additionally, and importantly, it is very difficult to conceal because it is an implicit, automatic process.
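As a rough illustration of the language-use side, the sketch below counts a few crude linguistic markers – hedging words, certainty words, and first-person pronouns – in a written answer. The word lists and categories are assumptions chosen for demonstration, not a validated neurolinguistic instrument.

```python
# A rough sketch of extracting simple language-use markers from a juror's
# written answers. The word lists are illustrative assumptions only.
import re
from collections import Counter

HEDGES = {"maybe", "perhaps", "possibly", "might", "somewhat", "sort", "kind"}
CERTAINTY = {"always", "never", "definitely", "absolutely", "certainly", "must"}
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def language_markers(text: str) -> dict:
    """Return per-100-word rates for a few crude linguistic categories."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    counts = Counter(words)
    rate = lambda vocab: 100 * sum(counts[w] for w in vocab) / total
    return {
        "hedging": rate(HEDGES),              # tentative, uncommitted language
        "certainty": rate(CERTAINTY),         # absolute, high-commitment language
        "first_person": rate(FIRST_PERSON),   # self-focus
        "word_count": total,
    }

answer = "I definitely think the police always act properly, maybe with rare exceptions."
print(language_markers(answer))
```

Because these usage patterns are produced automatically rather than deliberately, they are harder for a respondent to conceal or game than direct attitude questions.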

Let’s go back to the O.J. Simpson trial (even though it was a criminal trial, the example still works). In 1994, when the trial took place, AI was nowhere near as capable as it is now, but prospective jurors were still required to complete a jury selection questionnaire [2]. This questionnaire was administered in an attempt to determine which jurors would be biased, and it ran 75 pages due to the high-profile nature of the case, covering the usual demographic questions plus a wide range of other topics, including “Have you ever written to a celebrity?” and “Are you a fan of the USC Trojans football team?”

If psychographic profiling and neurolinguistic AI tools had existed at the time, the AI would have had enough data to assess each potential juror from the questionnaire alone. The attorneys could have known ahead of time how each individual felt about O.J. Simpson and the Los Angeles Police Department, along with any potential biases in either direction. Furthermore, other valuable information, such as which media channels each prospective juror was likely to engage with, would have helped the attorneys assess the probability of media persuasion shaping that juror’s opinions. Lastly, for the individuals selected to serve on the jury, a baseline measure of their impartiality would have existed, and as the case progressed, their psychographics and neurolinguistics could have been reassessed to see whether external information was influencing their perception of the case. Hence, even with intense media coverage, these AI tools can help attorneys better sustain an impartial jury.

You may be reading this and thinking “this is all well and good, but I don’t know how to apply it to my cases”, or “I collect demographic information on my potential jurors, isn’t that enough?”, or “There’s no way a judge will allow me to give a 75-page questionnaire to every potential juror”. You’re probably right with regard to that last point, but the good news is that unless you’re working a case involving an A-list celebrity, you probably won’t need 75 pages.

To apply AI to your case, the first step is to collect training data. That is, gather a large number of responses to your case from virtual jurors, including demographic and psychographic data, the degree to which each virtual juror supports the plaintiff or the defendant, how much liability each party bears, and/or how much the plaintiff should receive in damages.
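As an illustration of what you would be collecting, one virtual-juror record might be structured like the sketch below. The field names and scales are assumptions for demonstration purposes and should be adapted to your own survey instrument and case questions.

```python
# A minimal sketch of one virtual-juror record for the training set.
# Field names and scales are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class VirtualJurorResponse:
    juror_id: str
    # Demographics
    age: int
    gender: str
    county: str
    # Psychographic trait scores, e.g. on a 0-1 scale
    extraversion: float
    conscientiousness: float
    openness: float
    # Case-specific outcomes from the virtual-jury exercise
    supports_plaintiff: bool          # which side the virtual juror favored
    plaintiff_liability_pct: float    # share of fault assigned, 0-100
    damages_awarded: float            # suggested award in dollars

example = VirtualJurorResponse(
    juror_id="VJ-0001", age=42, gender="F", county="Los Angeles",
    extraversion=0.35, conscientiousness=0.80, openness=0.55,
    supports_plaintiff=True, plaintiff_liability_pct=20.0,
    damages_awarded=150_000.0,
)
print(example)
```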

Do make sure that the responses you get from your virtual jurors are high quality and come from a representative sample, which means avoiding some of the more well-known online recruitment sources, such as Craigslist or crowdsourced Amazon survey takers, and recruiting at the county level rather than the state level [3]. The demographic and psychographic data are then used to predict the favorability of the case (call this the “win rate” or “win metric”) and to find out which specific aspects of a psychographic profile are predictive of that win rate. It’s highly unlikely that every single psychographic trait you measure will be predictive – it’s much more likely that only a handful are. For example, it could be the case that extraversion is negatively predictive and conscientiousness is positively predictive of win rate. In that case, your ideal juror would have low extraversion and high conscientiousness.
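Assuming the virtual-juror responses above have been collected, the modeling step might look like the following sketch. It fits a logistic regression on synthetic data that is deliberately constructed so that extraversion hurts and conscientiousness helps, mirroring the example in the text; with real data, the learned coefficient signs and sizes tell you which traits actually predict your win rate.

```python
# A sketch of the modeling step on synthetic data. The trait values and the
# "sided with client" outcome are generated, not real case data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 500
extraversion = rng.uniform(0, 1, n)
conscientiousness = rng.uniform(0, 1, n)
openness = rng.uniform(0, 1, n)

# Synthetic outcome built so that extraversion hurts and conscientiousness
# helps, mirroring the example in the text.
logit = -2.0 * extraversion + 2.5 * conscientiousness + rng.normal(0, 0.5, n)
sided_with_client = (logit > 0.3).astype(int)

X = np.column_stack([extraversion, conscientiousness, openness])
traits = ["extraversion", "conscientiousness", "openness"]

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, sided_with_client)

# Standardized coefficients: sign and magnitude show which traits predict win rate.
for trait, coef in zip(traits, model.named_steps["logisticregression"].coef_[0]):
    print(f"{trait:>18}: {coef:+.2f}")
```

Standardizing the features first makes the coefficient magnitudes roughly comparable across traits, so the handful of genuinely predictive traits stand out.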

The second step is to ask your judge for a Supplemental Juror Questionnaire (SJQ). We talk about the SJQ in detail, including why and how to get it approved, here. If the request is denied, the same process can still be carried out through voir dire.

Third and finally, make sure the questions that measured the important traits (in our example, extraversion and conscientiousness) are asked of the potential jurors, then rank the jurors by their scores on those traits. Those who rank highly are the jurors you want to keep, whereas the low-ranked jurors are more likely to be biased against your client.
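A minimal sketch of that ranking step, under the assumption that the earlier modeling step produced weights for extraversion and conscientiousness, is shown below. The weights and the SJQ-derived juror scores are hypothetical.

```python
# A sketch of ranking prospective jurors by the traits that proved predictive.
# The weights stand in for coefficients learned in the modeling step, and the
# juror scores are hypothetical SJQ-derived values on a 0-1 scale.
TRAIT_WEIGHTS = {"extraversion": -1.8, "conscientiousness": +2.1}

prospective_jurors = {
    "Juror 03": {"extraversion": 0.20, "conscientiousness": 0.85},
    "Juror 11": {"extraversion": 0.90, "conscientiousness": 0.30},
    "Juror 17": {"extraversion": 0.55, "conscientiousness": 0.60},
}

def favorability(scores: dict) -> float:
    """Weighted sum of trait scores; higher means a more favorable juror."""
    return sum(TRAIT_WEIGHTS[t] * scores.get(t, 0.0) for t in TRAIT_WEIGHTS)

ranked = sorted(prospective_jurors.items(),
                key=lambda item: favorability(item[1]), reverse=True)
for juror_id, scores in ranked:
    print(f"{juror_id}: favorability = {favorability(scores):+.2f}")
```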

AI has developed rapidly in the past few years and is likely to keep doing so, to the point that datasets examining hundreds or thousands of individuals’ attitudes and beliefs regarding a specific case can be analyzed completely in minutes. There is no silver-bullet solution to juror bias, and eliminating it entirely is likely an impossible goal. Rather, AI can be used to improve our ability to provide an impartial jury by excluding potential jurors who do hold biases and prejudices with regard to a specific case. That is why it is worthwhile to explore the use of psychographic profiling and/or neurolinguistic AI tools for our legal system.

References:

  1. https://www.torklaw.com/info/how-race-impacts-personal-injury-cases/
  2. https://rexsorgatz.medium.com/document-the-jury-selection-questionnaire-from-the-oj-simpson-trial-7483d3b8995
  3. https://juryanalyst.com/blog/focus-group-samples/