Troubling Trends in Machine Learning Scholarship

By Zachary C. Lipton* & Jacob Steinhardt*
*equal authorship

Originally presented at ICML 2018: Machine Learning Debates [arXiv link]
Published in Communications of the ACM

1   Introduction

Collectively, machine learning (ML) researchers are engaged in the creation and dissemination of knowledge about data-driven algorithms. In a given paper, researchers might aspire to any subset of the following goals, among others: to theoretically characterize what is learnable, to obtain understanding through empirically rigorous experiments, or to build a working system that has high predictive accuracy. While determining which knowledge warrants inquiry may be subjective, once the topic is fixed, papers are most valuable to the community when they act in service of the reader, creating foundational knowledge and communicating as clearly as possible.

What sort of papers best serve their readers? We can enumerate desirable characteristics: these papers should (i) provide intuition to aid the reader’s understanding, but clearly distinguish it from stronger conclusions supported by evidence; (ii) describe empirical investigations that consider and rule out alternative hypotheses [62]; (iii) make clear the relationship between theoretical analysis and intuitive or empirical claims [64]; and (iv) use language to empower the reader, choosing terminology to avoid misleading or unproven connotations, collisions with other definitions, or conflation with other related but distinct concepts [56].

Recent progress in machine learning comes despite frequent departures from these ideals. In this paper, we focus on the following four patterns that appear to us to be trending in ML scholarship:

  1. Failure to distinguish between explanation and speculation.
  2. Failure to identify the sources of empirical gains, e.g. emphasizing unnecessary modifications to neural architectures when gains actually stem from hyper-parameter tuning.
  3. Mathiness: the use of mathematics that obfuscates or impresses rather than clarifies, e.g. by confusing technical and non-technical concepts.
  4. Misuse of language, e.g. by choosing terms of art with colloquial connotations or by overloading established technical terms.

From AI to ML to AI: On Swirling Nomenclature & Slurried Thought

Artificial intelligence is transforming the way we work (Venture Beat), turning all of us into hyper-productive business centaurs (The Next Web). Artificial intelligence will merge with human brains to transform the way we think (The Verge). Artificial intelligence is the new electricity (Andrew Ng). Within five years, artificial intelligence will be behind your every decision (Ginni Rometty of IBM via Computer World).

Before committing all future posts to the coming revolution, or abandoning the blog altogether to beseech good favor from our AI overlords at the AI church, perhaps we should ask, why are today’s headlines, startups and even academic institutions suddenly all embracing the term artificial intelligence (AI)?

In this blog post, I hope to prod all stakeholders (researchers, entrepreneurs, venture capitalists, journalists, think-fluencers, and casual observers alike) to ask the following questions:

  1. What substantive transformation does this switch in the nomenclature from machine learning (ML) to artificial intelligence (AI) signal?
  2. If the research hasn’t categorically changed, then why are we rebranding it?
  3. What are the dangers, to both scholarship and society, of mindlessly shifting the way we talk about research to maximize buzz?

Continue reading “From AI to ML to AI: On Swirling Nomenclature & Slurried Thought”

Leveraging GANs to combat adversarial examples

In 2014, Szegedy et al. published an ICLR paper with a surprising discovery: modern deep neural networks trained for image classification are vulnerable to a subtle attack. By making only slight alterations to an input image, it’s possible to fool a model that would otherwise classify the image correctly (say, as a dog) into outputting a completely wrong label (say, banana). Moreover, this attack is possible even with perturbations so tiny that a human couldn’t distinguish the altered image from the original.

These doctored images are called adversarial examples, and the study of how to make neural networks robust to such attacks is an increasingly active area of machine learning research.
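
The post below focuses on defenses; as a quick illustration of just how simple the attack side can be, here is a minimal sketch of the fast gradient sign method (FGSM) of Goodfellow et al., one standard way to construct such perturbations. This is not the GAN-based approach discussed in the post, and the classifier model, the labeled batch (x, y), and the budget epsilon are assumed placeholders; the snippet presumes a PyTorch classifier.

    # Illustrative sketch (not the post's method): fast gradient sign method (FGSM),
    # assuming a differentiable PyTorch classifier `model` and a labeled image batch (x, y).
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.03):
        """Return x plus a small adversarial perturbation of per-pixel magnitude epsilon."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)    # loss under the correct labels
        loss.backward()                        # gradient of the loss w.r.t. the input pixels
        x_adv = x + epsilon * x.grad.sign()    # nudge each pixel in the direction that increases the loss
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in a valid range

With epsilon this small the change is imperceptible to a human, yet it is often enough to flip the prediction of an undefended network.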

Continue reading “Leveraging GANs to combat adversarial examples”