
A Fresh Take

Insights on M&A, litigation, and corporate governance in the US.

Reposted from Freshfields Technology Quotient

Are we being too hard on HAL? Some thoughts on the legal need for "explainable" artificial intelligence.

Do we really need to psychoanalyze robots? The conventional wisdom is that we do. As artificial intelligence plays a bigger role in our lives, many regulators and courts are starting to demand that AI be "explainable" to at least some degree. In simple terms, the law is demanding that AI users identify (a) what information the AI has deduced about certain topics, and (b) how the AI has deduced it. The problem is popping up in all sorts of legal contexts:

  • Data protection laws increasingly give people the right to know how companies hold and process personal information, and sometimes even the particular information that companies hold on them. Some also allow people to object to automated decision-making (fancy jargon for "computers computing things"). So regulators and companies are trying to figure out how to identify both the information that AI has compiled on a person and how the AI compiled it. For example, my UK colleagues recently explored how the UK data protection authority has suggested a highly reticulated method for explaining AI. In the regulator's view, there are actually six types of explanation that companies should be prepared to give about their AI, not just the two I identified above.
  • Courts frequently call on companies and individuals to explain themselves when confronted with accusations of bias. Particularly in employment contexts, a defendant in a discrimination suit will need to explain what factors motivated an employment decision and, perhaps more important, what factors didn't motivate an employment decision. When AI algorithms are used to screen resumes, determine bonuses, or make similar decisions, questions are bound to arise about how the AI came to a particular decision.
  • Courts themselves increasingly need to explain how they use AI. Famously, many courts are now using AI tools to predict recidivism, which then factors into a judge's decision on sentencing. At least in common and civil law systems, the nature of judging inherently requires judges to give reasons for their decisions; the duty is observed even more strictly when a judge's decision deprives individuals of their freedom. And so there is robust debate on how judges need to explain the AI that they use when sentencing criminal defendants. (A good exploration of the problem in the US is this recent law review article; some interesting explorations of the problem in the UK are this recent report from the Law Society and this other recent report from the Centre for Data Ethics and Innovation.)

These are just three examples among many. The simple point is that the law constantly demands that people explain themselves in a variety of contexts, so the more people use AI, the more they'll be called on to explain it.

A solution, we're told, is "explainable AI," or "xAI." That simply means AI that provides some insight into the inputs or decision-making process it uses when coming to a decision. 
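To make that a little more concrete, here is a minimal sketch in Python using the scikit-learn library. The "hiring screen" framing and the three input labels are purely hypothetical; the point is only to show one crude form of explanation, namely asking a trained model how much each of its inputs actually mattered to its decisions.

```python
# A minimal, illustrative sketch (not any real product): one crude kind of
# "explanation" is a report of how much each input mattered to the model.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a hypothetical resume-screening dataset.
X, y = make_classification(n_samples=500, n_features=3, n_informative=2,
                           n_redundant=0, random_state=0)
feature_names = ["years_experience", "test_score", "typos_in_resume"]  # illustrative only

model = LogisticRegression().fit(X, y)

# Permutation importance: shuffle one input at a time and measure how much
# the model's accuracy drops. A bigger drop means the input mattered more.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, drop in zip(feature_names, result.importances_mean):
    print(f"{name}: {drop:.3f}")
```

Even that modest report only says which inputs mattered on average; it says nothing about why a particular decision came out the way it did, which is often the question the law actually asks.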

There are technical reasons not to get too excited about explainable AI. AI algorithms don't necessarily follow human thought patterns. Really smart AI algorithms—those based on machine learning or neural networks, for example—may generate a host of "coefficients" to measure this, that, or the other thing. But often, the AI algorithm itself is defining what "this," "that," or "the other thing" are. A human may not be able to understand what these coefficients represent. 
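A toy illustration of the point, again in Python with scikit-learn (the data is synthetic and the network deliberately tiny): even a small neural network invents intermediate features of its own, and nothing in the trained model says what, if anything, those features mean in human terms.

```python
# A minimal sketch, assuming scikit-learn: a small neural network whose
# learned "coefficients" carry no ready human labels.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Ten human-chosen inputs go in; the network builds 32 intermediate
# features of its own on the way to a yes/no answer.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X, y)

first_layer = net.coefs_[0]   # weight matrix, shape (10 inputs, 32 hidden units)
print(first_layer.shape)
print(first_layer[:, 0])      # the 10 numbers that define hidden unit 0

# Each hidden unit is a learned blend of all ten inputs. The model never
# says what concept, if any, hidden unit 0 represents.
```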

But even if we got explainable AI working from a technical standpoint, there are harder theoretical and legal questions that need to be answered before we go all-in on requiring AI algorithms to explain themselves.

First, defining AI isn't easy, which means it's not easy to define what sorts of algorithms need explanation. A recent Congressional attempt defines AI so broadly that it seemingly encompasses almost everything—not useful. On the other hand, there's the so-called "AI Effect": as certain AI technologies become commonplace, people stop considering them to be AI. Optical character recognition (OCR) is a classic example. OCR relies on a lot of technologies that are, strictly speaking, based on machine learning and other things that we would usually call AI. In the 1990s, everybody thought "wow, computers that can read!" Today, the tech is humdrum. Even when OCRed material is used in a legal proceeding, nobody bothers to ask how OCR works—it's just taken for granted. Similarly, pretty much anyone today would agree that facial recognition technology is an example of AI. And, by the way, it's another area where people are demanding that decisions be explainable. But in 25 years, will the tech be taken for granted?

Second, we (meaning, humans) aren't great at explaining things anyway. If someone showed you a photo of a bird, asked you what it was, and then asked you to explain how you knew what it was, could you do it? Could you identify all the inputs that you've received over your entire life about what a bird is? Could you explain how your brain wired them together to create the concept "bird"? Could you explain how you knew that little flying thing wasn't a bat or a bug?

Third, even when we know what factors a human brain considered and how it weighed each factor, we don't have a great legal construct for assessing that decision. In discrimination cases, for example, courts regularly engage in byzantine debates about what it means for a decision to be "based on" something or "because of" something else. An appeals court may vacate one criminal sentence if a trial judge didn't fully explore a particular issue, while the same appeals court may affirm a different sentence where the trial judge explored a complicated topic in a footnote. And I don't know anyone who thinks courts are really consistent when assessing whether an administrative agency has given sufficient reasons for an adjudication or rulemaking. Bottom line, even when human brains are making the decisions, we lack reliable legal frameworks for deciding when those decisions are based on the right factors (or wrong factors), and for deciding how much explanation of the decision is enough.

Finally, even if we could get past the technical, theoretical, and legal difficulties in requiring AI to explain itself, will explainability requirements do more harm than good? At least two unintended consequences come to mind. People could misuse explainability requirements to try to reverse engineer how AI algorithms work. That could introduce security risks. It could also lead to companies stealing and free-riding off the AI innovations of competitors. On the flip side, companies may feel the need to simplify AI algorithms to make them explainable, even if simplifying the algorithms makes them less effective or efficient. I'm reminded of an aphorism: the mark of a truly smart person is being able to explain complicated concepts in a simple way. That might not always be possible when it comes to AI. To make AI explainable in a simple way, you might need to dumb it down.

And so before legislatures, regulators, and courts rush to create a new law of explainable AI, it is worth asking: Are we being too hard on HAL?

Tags

ai, data protection, litigation, global