NYU Law’s Algorithms and Explanations

Last week, on April 27th and 28th, I attended Algorithms and Explanations, an interdisciplinary conference hosted by NYU Law School’s Information Law Institute. The thrust of the conference could be summarized as follows:

  1. Humans make decisions that affect the lives of other humans
  2. In a number of regulatory contexts, humans must explain decisions, e.g.
    • Bail, parole, and sentencing decisions
    • Approving a line of credit
  3. Increasingly, algorithms “make” decisions traditionally made by humans, e.g.
    • Risk models already used to make decisions regarding incarceration
    • Algorithmically-determined default risks already used to make loans
  4. This poses serious questions for regulators in various domains:
    • Can these algorithms offer explanations?
    • What sorts of explanations can they offer?
    • Do these explanations satisfy the requirements of the law?
    • Can humans actually explain their decisions in the first place?

The conference was organized into nine panels, each featuring between three and five 20-minute talks followed by a moderated discussion and Q&A. The first panel, moderated by Helen Nissenbaum (NYU & Cornell Tech), featured legal scholars (including conference organizer Katherine Strandburg) and addressed the legal arguments for requiring explanations in the first place. A second panel featured sociologists Duncan Watts (MSR) and Jenna Burrell (Berkeley), as well as Solon Barocas (MSR), an organizer of the Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) workshop.

Katherine Jo Strandburg, NYU Law professor and conference organizer

I participated in the third panel, which addressed technical issues regarding approaches to explaining algorithms (and the feasibility of the pursuit itself). I presented some of the arguments from my position piece The Mythos of Model Interpretability. The pith of my talk was as follows:

  1. There are two separate questions that must be asked:
    • Can we develop algorithms (of any kind) that some day might satisfactorily explain their actions? (theoretical)
    • Can we satisfactorily explain the actions of the algorithms we’re using in the real world today? (actually matters now)
  2. Nearly all machine learning in the real world is supervised learning.
  3. Supervised learning operates on frozen snapshots of data and thus knows nothing of the dynamics driving the observations.
  4. Supervised learning merely models conditional probabilities; it has no knowledge of the decision theory strapped on post-hoc to take actions, nor of the downstream consequences of those actions (see the sketch after this list).
  5. Supervised learning models are nearly always trained with biased data and are often trained to optimize the wrong objective (e.g. clicks vs newsworthiness).
  6. To summarize as a tweet: “Modern machine learning: We train the wrong models on the wrong data to solve the wrong problems & feed the results into the wrong software”.
  7. With ML + naive decision theory making consequential decisions, a concerned society asks (reasonably) for explanations.
  8. The machine learning community generally lacks the critical thinking skills to understand the question.
  9. While a niche of machine learning interpretability research has emerged, papers rarely identify what question they are asking, let alone provide answers.
  10. The research generally attempts to answer, mechanistically, “what patterns did the model learn?” but not “why are those patterns there?”
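
To make points 3 and 4 above concrete, here is a minimal, purely illustrative sketch (synthetic data and hypothetical names like promote; it assumes NumPy and scikit-learn are available, and is not anything presented at the conference): the learner only estimates a conditional probability from a frozen log, while the decision rule and the objective we actually care about (newsworthiness rather than clicks, say) sit entirely outside the model.

```python
# Illustrative sketch only: a click model plus a post-hoc decision rule.
# The learner estimates P(click | features) from a frozen snapshot of logged data;
# the action it drives, and the objective we actually care about, live outside the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Frozen snapshot of logged articles: features and whether each was clicked.
X_logged = rng.normal(size=(1000, 5))                                  # article features
clicked = (X_logged[:, 0] + rng.normal(size=1000) > 0.5).astype(int)   # proxy label: clicks

model = LogisticRegression().fit(X_logged, clicked)   # learns P(click | x), nothing more

# "Strapped-on" decision theory: a threshold chosen after the fact.
def promote(article_features, threshold=0.5):
    p_click = model.predict_proba(article_features.reshape(1, -1))[0, 1]
    return p_click > threshold   # nothing here encodes newsworthiness or downstream effects

print(promote(rng.normal(size=5)))
```

Nothing in this pipeline knows why the clicks occurred, what promoting an article does to future behavior, or whether clicks were the right target in the first place.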

Also in the technical panel, Anupam Datta (CMU) discussed an approach for inferring whether a model has reconstructed any sensitive features (like race) via an intermediate representation in the model. Krishna Gummadi (Max Planck Institute) presented an empirical case study of the explanations Facebook offers for the ads it shows.

Alexandra Chouldechova presented a deep look at recidivism prediction: the practice of predicting, based on features of an inmate, the likelihood that they will re-offend in the future. Of course, the ground-truth data only captures those inmates who both re-offend and get caught; we never observe those who commit crimes but are not caught. Typically, these predictions (probabilistic scores between 0 and 1) are used as risk scores to guide decisions regarding incarceration.

Whether recidivism predictions from supervised models represent a reasonable or a fundamentally flawed criterion for making parole or sentencing decisions was a recurring debate throughout both days of the conference. Personally, I’m inclined to believe that the entire practice of risk-based incarceration is fundamentally immoral/unfair, issues of bias aside.

Regardless of one’s take on the morality of risk-based incarceration, Chouldechova’s analysis was fascinating. In her talk, she motivated model comparison as a way of understanding the effects of a black-box algorithm’s decisions, comparing the scores assigned by COMPAS, a proprietary model, against the simple baseline of using the count of prior offenses as a measure of risk.

While predicting risk based on the number of prior offenses has the benefit of punishing only for crimes already committed (not future crimes forecasted), it has the drawback of disproportionately punishing older people who may have been prolific criminals in their youth but have since outgrown crime. For a deeper dive into this line of research, see Chouldechova’s recent publication on the topic.
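
For intuition only, here is a toy sketch of that comparison, with made-up numbers and a made-up stand-in for the learned score (toy_score); it is not Chouldechova’s analysis and not real COMPAS data. Ranking by total prior count never “forgets” youthful offenses, while a recency-weighted score can rank the same two people in the opposite order.

```python
# Toy, synthetic illustration of the model-comparison idea; not Chouldechova's analysis
# and not real COMPAS data. Compare ranking by prior-offense count against a
# hypothetical score that weights recent offenses more heavily.
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int
    priors: int          # total prior offenses, however long ago
    recent_priors: int   # offenses in, say, the last five years

people = [
    Person("A", age=52, priors=9, recent_priors=0),  # prolific in youth, desisted since
    Person("B", age=23, priors=3, recent_priors=3),  # fewer priors, all of them recent
]

# Baseline: risk = count of prior offenses (punishes only past crimes, but never "forgets").
by_prior_count = sorted(people, key=lambda p: p.priors, reverse=True)

# Stand-in for a learned score: weight recent offenses more heavily (purely illustrative).
def toy_score(p: Person) -> float:
    return 0.2 * p.priors + 1.0 * p.recent_priors

by_toy_score = sorted(people, key=toy_score, reverse=True)

print([p.name for p in by_prior_count])  # ['A', 'B']: the older desister ranks as riskier
print([p.name for p in by_toy_score])    # ['B', 'A']: the recency-weighted score flips it
```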

The day concluded with a second panel of legal scholars. The Q&A here erupted into fireworks as a lively debate ensued over what protections are actually afforded by the European Union’s recently passed General Data Protection Regulation (GDPR), set to take effect in 2018. While I won’t recap the debate in full detail, it centered on Sandra Wachter’s Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. In the paper, Wachter interrogates the “ambiguity and limited scope of the [GDPR’s] ‘right not to be subject to automated decision-making’”, suggesting that this “raises questions over the protection actually afforded to data subjects” and “runs the risk of being toothless”.

The second day of the conference turned towards specific application areas. Panels addressed:

  • Health: Federico Cabitza, Rich Caruana, Francesca Rossi, Ignacio Cofone
  • Consumer credit: Dan Raviv (Lendbuzz), Aaron Rieke (Upturn), Frank Pasquale (University of Maryland), Yafit Lev-Aretz (NYU)
  • The media: Gilad Lotan (Buzzfeed), Nicholas Diakopoulos (University of Maryland), Brad Greenberg (Yale), Madelyn Sanfilippo (NYU)
  • The courts: Julius Adebayo (FastForward Labs), Paul Rifelj (Wisconsin Public Defenders), Andrea Roth (UC Berkeley), Amanda Levendowski (NYU)
  • Predictive policing: Jeremy Heffner (Hunchlab), Dean Esserman (Police Foundation), Kiel Brennan-Marquez (NYU), Rebecca Wexler (Yale)

As parting thoughts, the following themes recurred throughout the conference (in talks, in discussions, or in my private ruminations):

  1. When we ask about explanations, who are they for?
    • Model builders?
    • Consumers?
    • Regulators?
  2. Discussions of the trade-off between accuracy and explainability are often ill-posed:
    • We often lose sight of the fact that models are typically optimized to do the wrong thing. Predicting clicks accurately is not the same thing as successfully choosing newsworthy content.
    • If we’re optimizing the wrong thing in the first place, how can we assess a trade-off between accuracy and explainability?
    • What does it mean to compare humans to algorithms quantitatively when the task is misspecified?
  3. This research needs a home:
    • As with The Human Use of Machine Learning before it, this conference did a wonderful job of bringing together scholars from a variety of disciplines. Adding in FAT/ML, it appears that a solid community is coalescing to study the social impacts of machine learning.
    • However, for publishing, the community remains fractured. Purely technical contributions (a new algorithm or visualization technique, say) have a home in the traditional venues, and discussions of policy have a home in legal journals.
    • It’s not clear where truly interdisciplinary research belongs. The failure of machine learning publications to entertain critical papers seems problematic. Perhaps it’s time that a proper conference or journal emerged from this community?

Author: Zachary C. Lipton

Zachary Chase Lipton is an assistant professor at Carnegie Mellon University. He is interested in both core machine learning methodology and applications to healthcare and dialogue systems. He is also a visiting scientist at Amazon AI, and has worked with Amazon Core Machine Learning, Microsoft Research Redmond, & Microsoft Research Bangalore.

4 thoughts on “NYU Law’s Algorithms and Explanations”

  1. Thank you for this nice summary of the meeting. I wonder if you are overemphasizing the difference between the optimization objective of the learning algorithm and the objective of a decision maker. Many loss functions are very closely related to one another, so even though they are not identical, we can relate their error rates. There is a large class of proper loss functions that have desirable properties in this regard. In general, with a sufficiently large data set, the learned function will reproduce the decisions of whoever labeled the data. The real issue, as you point out, is that we often do not know whether those decisions were good (just, fair, unbiased), and we don’t know whether the decisions were based on additional information not captured in the input features. In this regard, a learned classifier is just like any formal rule. Modus Ponens is called the “law of detachment” precisely because it severs the connection from the original decision-making context.

    1. Hi Thomas, nice to meet you again in cyberspace, and I hope to meet you in person one of these days :). I think we mostly agree here. To be clear, I am using the term objective in the common-language sense. I do not mean to say something like “we are optimizing {log loss, hinge loss, squared error} but really care about {accuracy, precision@k,…}”.

      Instead I mean that we are often training machine learning models to perform the wrong task. We are grabbing the convenient labels (things like clicks), not the right labels. In many contexts, ML practitioners go after tasks that easily fit our familiar supervised learning problem statements and for which labels are abundant. We then train pattern recognition systems, strap on some afterthought decision theory, and use the trained models to take actions in a different context (like curating news articles). Such systems can exhibit unpredictable or undesirable behavior.

      And, as you mention, even when we attack the right problem, issues of bias in the data (Are groups under-represented? Are the important features present? Are the labels themselves biased?) remain.
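
      As a tiny, purely illustrative aside on the proper-loss point (a toy numerical check of my own, nothing from the conference): minimizing expected log loss does recover the conditional probability, which is exactly why the mismatch I worry about lives in the labels and the task, not in the surrogate loss.

      ```python
      # Toy numerical check (illustrative only): the expected log loss of predicting q
      # for a Bernoulli(p) outcome, -(p*log(q) + (1-p)*log(1-q)), is minimized at q = p,
      # so a proper loss recovers the conditional probability; the trouble lies elsewhere.
      import numpy as np

      p = 0.3                                  # true probability of the positive label
      qs = np.linspace(0.01, 0.99, 99)         # candidate predictions
      expected_log_loss = -(p * np.log(qs) + (1 - p) * np.log(1 - qs))

      print(qs[np.argmin(expected_log_loss)])  # ~0.30: the true probability minimizes it
      ```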

  2. I’m curious whether it was mentioned (here or anywhere else) why explanations for decisions made by computer-implemented algorithms are a compelling right, while explanations for decisions made by organizationally-implemented algorithms or by individuals aren’t. Most of the risks present with algorithmic decision-making are also present, or amplified, when the decisions are made by humans, yeah?

  3. Actually, humans are required in a number of contexts to issue explanations / justifications. The law spends considerable energy on what are and aren’t justifiable reasons for making certain kinds of decisions. These include the grounds for approving or denying credit, job, and housing applications. Now whether those attempts have successfully led us to a more just society, whether we can trust anyone’s explanations, or whether you believe justice is real at all may be up for debate. But it’s not true that explanations are never expected of individuals.
