Welcome to the Black Box

Cathy O'Neil

Algorithms have increasing power over our lives — and they're not as objective as we might think.


Terms like “big data” and “machine learning” have entered the mainstream, and are often presented as solutions to thorny economic and social problems. Much less consideration is given to their negative impacts, and the ways that mathematical algorithms affect the lives of ordinary people. But Cathy O’Neil — a mathematician and former Wall Street quantitative analyst (a “quant”) — thinks we should be paying attention.

Her new book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy shows how popular algorithms used by companies and institutions often harm poor and working people and benefit elites. Jacobin sat down with O’Neil to discuss these destructive algorithms and what we can do about them.


What led you to write a popular book about math?

Cathy O'Neil

It’s funny, I don’t actually think of Weapons of Math Destruction as a book about math. It’s more a book about instruments of social control, and how they are masquerading as mathematical.

I wrote the book because I think there is considerable harm being done by destructive algorithms, and as a mathematician I’m in a unique position to explain those harms. I worked as a hedge fund quant during the 2008 financial crisis and as a data scientist at the height of the big data revolution. So I have been living behind the scenes, and I know how this stuff works.

At the same time, I’m an occupier. I joined Occupy in October 2011, forming and facilitating the alternative banking group, which has met weekly at Columbia University since then. Our weekly discussions have established a lens through which I’ve learned to examine the world, especially as it connects to money and power.

So now when I come across an automated decision-making system, I always wonder who is benefiting from that system and who is suffering. And the conclusion I keep coming to when considering systems that rely on algorithms is that poor people, black and brown people, and the mentally ill are consistently being shut out by these algorithmic black-box structures.

Machine learning has been presented to us as trustworthy, because it’s mathematically sophisticated and because algorithms have no agendas. But the data itself cannot be decontextualized from our historical practices, nor can the choices of the modelers who build the models and choose objective functions.

In other words, we don’t move past discrimination through the use of algorithms, but rather sanitize and obscure our historical-cultural practices and patterns. In the process we risk reinforcing and even exacerbating those patterns.

I wanted to share this truth with people, so I quit my data science job in the summer of 2012 and devoted myself to writing a book about destructive algorithms and how they shape our society.

Can you briefly explain what you mean by “weapons of math destruction”? How are these WMDs different from more benign, or even useful, mathematical models?

Cathy O'Neil

One of the main things I do in the book is perform some triage on the population of algorithms — to carve out a definition that allows us to focus attention on the most worrisome algorithms.

So much of the discussion about the potential harms of surveillance and data collection is unfocused, and often when you’re in one of those conversations you end up with nothing more than a vague notion that someday, maybe, bad things will happen. But destructive algorithms — “weapons of math destruction” — already exist and are already harming us.

I designate as “weapons of math destruction” algorithms with three primary characteristics — they’re widespread, mysterious, and destructive. Widespread because I only care about algorithms that affect a lot of people and have important consequences for those people. So if the algorithm decides whether someone gets a job, or goes to jail for longer, or gets a loan, or votes, then it’s a big deal.

I call WMDs mysterious because the algorithms I worry about are secret. They come from hidden formulas owned by private companies and organizations and are guarded as valuable “secret sauce.” That means the people targeted by their scoring systems are unaware of how their scores are computed, and they’re often even unaware that they are being scored in the first place.

Along with this secrecy comes a lack of accountability on the part of the institution that deploys the scoring systems, and of course a lack of an appeals process. After all, how can you appeal a score you didn’t know was computed? And how can you argue the score is wrong if you have no access to the underlying formula?

Finally these algorithms are extremely destructive. The scoring systems they utilize ruin people’s lives. Moreover, they engender larger feedback loops that undermine their original purpose, which is often well-intentioned. This is the hallmark of a bad scoring system that is nevertheless given enormous trust and power: it creates its own reality and distorts the reality around it.

Could you give us some examples of WMDs?

Cathy O'Neil

So the teacher value-added model is the first WMD I came across outside finance. It’s an opaque scoring system that uses student test scores to assess teacher ability. The value-added model is statistically unreliable and comes with little explanation or advice for teachers to improve.

Even so, it’s been used in high-stakes decisions. I interviewed a teacher named Sarah Wysocki who got fired in the Washington DC area because of a low score that she had reason to believe was caused by a previous teacher’s cheating.

Some version of the teacher value-added model is being used all across the country, with a bias towards urban school districts. And although its purported goal is to locate bad teachers with an eye towards removing them, it has had the opposite effect: some great teachers are being unjustly fired, while others are choosing to retire early or get jobs in districts that don’t have such an arbitrary and punitive system in place. Increasingly it is the best teachers, not the worst, who are leaving the poorest neighborhoods. The overall feedback loop, in other words, is counterproductive.

Another important example of a WMD comes from criminal justice in the form of “predictive policing” algorithms. These are algorithms that look at patterns of past crimes and try to predict where future crimes will occur, and then send police to those areas with the goal of deterring crime.

The fundamental problem with this concept is that it reinforces already uneven and racist policing practices. Again, a pernicious feedback loop. Algorithms get trained on the data that they are fed, which in this case are historical police-civilian interactions.

If we had a perfect policing system, that would be great, and we might want to automate it. But we do not have a perfect system, as we’ve recently seen from the Ferguson report and the Baltimore report among others. We have a “broken windows” policing system, and the data that “teaches” these algorithms reflect this system.

Put another way, if the police had been sent to Wall Street after the financial crisis to arrest the masterminds of that disaster, our police data would be very different, and the predictive policing algorithm would continue to send police to Wall Street to search out, and find, criminal activity. That’s not what happened.
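
To make that feedback loop concrete, here is a minimal, hypothetical simulation; the neighborhoods, crime rates, and patrol numbers are all invented for illustration and are not drawn from the book. Two neighborhoods have the same underlying crime rate, but patrols are sent to whichever one has the most recorded incidents, and more patrols mean more incidents get recorded.

```python
# Hypothetical sketch of a predictive-policing feedback loop.
# Both neighborhoods have the SAME underlying crime rate; one simply starts
# with more recorded incidents because it was policed more in the past.

TRUE_CRIME_RATE = 100        # identical in both neighborhoods, per year
DETECTION_PER_PATROL = 0.04  # fraction of crimes recorded per patrol unit
PATROLS_HOTSPOT, PATROLS_OTHER = 8, 2

records = {"A": 55, "B": 45}  # historical skew: A was policed a bit more

for year in range(1, 6):
    # The "model": rank neighborhoods by past records, flood the top one.
    hotspot = max(records, key=records.get)
    for hood in records:
        patrols = PATROLS_HOTSPOT if hood == hotspot else PATROLS_OTHER
        # Recorded crime tracks how many officers are watching,
        # not the (identical) underlying rate.
        detected = TRUE_CRIME_RATE * min(1.0, DETECTION_PER_PATROL * patrols)
        records[hood] += detected
    print(f"year {year}: recorded incidents A={records['A']:.0f}, B={records['B']:.0f}")
```

Nothing about the underlying behavior differs between the two neighborhoods, yet the one that starts with more records pulls further ahead every year, and the model's predictions keep looking "correct." That is the self-confirming loop described above.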

One of the interesting points you make in the book is that the personalized nature of many of these models means that the same algorithm can have disparate effects on different segments of society. So for example, the same advertising algorithms that target the poor for predatory payday loans and fly-by-night for-profit colleges also “place the able classes of society in their own marketing silos. They jet them off to vacations in Aruba or wait-list them at Wharton.”

Can you talk about how class manifests itself in the era of big data?

Cathy O'Neil

The tipping point for me — the moment that made me quit my job to write this book — was when I heard a venture capitalist describe his ideal for the future of tailored advertising: a world where he’d receive only offers for jet skis and trips to Aruba, and where he’d never again have to see “another University of Phoenix ad,” because that’s not for people like him. That’s when I realized that one of capitalism’s most profitable technologies lies in its ability to segment and segregate people into categories, so that the rich can be given opportunities and the poor can be preyed upon.

Tailored advertisement is an auction system, so the person who’s willing to pay the most gets the opportunity to put an ad in front of you, on Google or Facebook or wherever. That means companies with products that I want to buy — probably expensive yarn — will advertise to me, because I have extra pocket money and I’m highly vulnerable to glistening jewel-toned alpaca wool.

But that also means that a poor and poorly educated single mother struggling to make a living to support her kids will be extremely valuable, and vulnerable, to a predatory payday loan company or a for-profit college promising to solve all her problems. Tailored advertising allows predatory industries to find their targets extremely efficiently.
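
As a rough illustration of that auction mechanic, here is a minimal sketch; the advertisers, segment labels, and bid amounts are all invented. Each advertiser bids according to the viewer's profile segment, and the ad slot simply goes to the highest bidder.

```python
# Hypothetical sketch of a targeted-ad auction: bids depend on the viewer's
# profile segment, and the highest bidder wins the slot.

def run_ad_auction(user_segment, advertisers):
    """Return (winner, bid) for the advertiser bidding most on this viewer."""
    bids = {name: bid_for(user_segment) for name, bid_for in advertisers.items()}
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

advertisers = {
    # Each advertiser values different segments very differently.
    "luxury_travel":     lambda seg: 4.00 if seg == "affluent" else 0.05,
    "payday_lender":     lambda seg: 6.50 if seg == "financially_distressed" else 0.01,
    "forprofit_college": lambda seg: 5.75 if seg == "financially_distressed" else 0.10,
}

for segment in ("affluent", "financially_distressed"):
    winner, price = run_ad_auction(segment, advertisers)
    print(f"{segment}: slot won by {winner} at ${price:.2f}")
```

The same auction that shows one profile a vacation ad shows the other a payday loan. The mechanism is neutral on its face, but the targeting is not.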

Do you think that mathematical models are used to obscure and hide political agendas? If so, how prevalent do you think this is and how much of the impulse to adopt WMDs is that they transform political questions into questions that appear technical and objective?

Cathy O'Neil

Since writing the book I’ve been able to step back and survey all the examples I’ve researched, and I’ve noticed the following pattern: WMDs show up when there’s a responsibility that nobody wants to take on.

Sometimes this responsibility stems from a real problem, like a problematic and racist justice system. At other times it’s an artificial or poorly understood problem, as we saw with the original education report, A Nation at Risk, which induced political panic but turned out to rest on a misinterpretation of statistics. Even so, the perceived problem served as a useful justification for the decades-long war on teachers.

When a sticky, complicated social issue like education or justice or hiring comes along, people would like to solve it with an algorithmic black box, and then they’d like to distance themselves from how that black box actually works. They sometimes don’t even acknowledge that the black box is making choices, as we’ve seen recently with controversy surrounding Facebook’s trending news algorithm.

We do not replace human judgment and morality with algorithms. WMDs serve simply to obscure them. Just as model evangelists seek to sanitize historical practices, they try to frame the models as being “beyond moral decisions.”

Of course, this just means that the morals embedded in a given algorithm will be the default, the one defined by its objective function — how it defines success — and its cost function — how it penalizes mistakes. In commercial examples, the goal is almost always to maximize profit, so the inferred morality can be loosely interpreted as, “whatever profits the company that wields the algorithm is good for the world.”
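
To see how much of that "morality" lives in the choice of objective, here is a minimal, hypothetical sketch; the lending decisions, groups, and fairness weight are invented. The same set of decisions is scored twice: once by profit alone, and once by profit minus a penalty for unequal false-denial rates across two groups.

```python
# Hypothetical sketch: the "values" of a model live in its objective function.
# Each row is (group, approved, would_have_repaid, profit_if_approved).
decisions = [
    ("A", True,  True,  100), ("A", True,  True, 100), ("A", True,  False, -150),
    ("B", False, True,    0), ("B", False, True,   0), ("B", True,  True,  100),
]

def profit_objective(rows):
    """Success = total profit from approved loans, nothing else."""
    return sum(p for _, approved, _, p in rows if approved)

def false_denial_rate(rows, group):
    """Share of creditworthy applicants in a group who were denied."""
    denied_good = sum(1 for g, a, repaid, _ in rows if g == group and not a and repaid)
    good = sum(1 for g, _, repaid, _ in rows if g == group and repaid)
    return denied_good / good if good else 0.0

def profit_with_fairness(rows, weight=300):
    """Same profit, but mistakes that fall unevenly across groups now cost something."""
    gap = abs(false_denial_rate(rows, "A") - false_denial_rate(rows, "B"))
    return profit_objective(rows) - weight * gap

print("profit only:      ", profit_objective(decisions))      # looks good
print("profit + fairness:", profit_with_fairness(decisions))  # looks bad
```

The decisions never change; only the definition of success does. Choosing between those two objectives is a moral decision, even when it is made implicitly by whoever writes the code.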

How can we fight back against WMDs? Will it be possible to eliminate WMDs without fundamentally challenging the imperatives of neoliberalism? As long as there are gross inequities in power in society, is it inevitable that math will be used as a weapon by the rich against the poor?

Cathy O'Neil

I have some hope, or I wouldn’t have bothered writing this book. I actually think most people want to consider themselves fair-minded, and with that I hope that we will develop a rubric for algorithms so that, when they rise to a certain level of impact, we will have rules about how transparent and accountable they must be to their targets and to the general public. I don’t think it will happen overnight, and it also won’t happen simply by appealing to companies to trade in profit for fairness.

Challenging WMDs will require a movement of people who refuse to bow down to the algorithmic gods, who band together, collect evidence of their harm, and demand better laws from policy-makers. We already have some prototypes in the anti-discrimination laws of the 1970s, not to mention privacy laws in Europe. I do think change can happen, and I’m hoping to help.

Having said that, yes, I think algorithms will always be weaponized, especially as companies collect and sell more and more data about each and every person and can use their ever-improving technology to minutely segment people. There will always be opportunities in this context for companies to profit from different groups of people using whatever tactics are available to them.

Do you think big data and mathematics can play a liberatory role in society? How?

Cathy O'Neil

There’s a growing group of data scientists who are aware of the power of bad algorithms, and we are beginning to develop tools that will allow us to “see into the black box.” I’m trying to start a company that will do just that, and plan to develop methodology and technical tools that I can someday give to regulators.
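
One family of techniques for probing a model from the outside, sketched below with an entirely invented scoring function standing in for a real hidden one, is to feed it paired inputs that differ in only a single attribute, such as zip code, and measure how far the score moves.

```python
# Hypothetical black-box audit by input perturbation. The scoring function
# here is an invented stand-in; in practice we could only query it, not read it.

def opaque_score(applicant):
    # Pretend proprietary model we cannot inspect directly.
    score = 600 + 2 * applicant["years_employed"]
    if applicant["zip_code"] in {"10456", "60624"}:  # invented "redlined" zips
        score -= 80
    return score

def audit_attribute(scorer, base_applicant, attribute, values):
    """Hold everything else fixed, vary one attribute, report each score."""
    scores = {}
    for v in values:
        probe = dict(base_applicant, **{attribute: v})
        scores[v] = scorer(probe)
    return scores

base = {"years_employed": 8, "zip_code": "10023"}
print(audit_attribute(opaque_score, base, "zip_code", ["10456", "60624", "10023"]))
# A large spread across otherwise identical applicants flags zip code,
# a common proxy for race and class, as driving the score.
```

That kind of evidence is something a regulator could ask an institution to explain, even without access to the formula itself.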

This isn’t to say it’ll be easy to impose algorithms on the powerful the way the powerful impose them on the powerless; powerful people can afford privacy. Teachers get evaluated with black box algorithms, not school chancellors or governors.

In the short run, we should focus on forming political alliances that fight back against WMDs using legal and moral arguments, to demand accountability and transparency. My hope is that the book I’ve written gives people the courage to realize that this isn’t really about math at all, it’s about power.

But most importantly, there are absolutely algorithms that help people rather than hurt them. In fact the same models could be used for good or for bad: a model that anticipates health problems would be great if it were used by your doctor to keep you healthy, but would be horrific in the hands of employers looking to avoid hiring people who represent extra insurance costs. A model that finds struggling college freshmen could pair those kids with extra resources, to keep them engaged and focused, or could put targets on their backs so the colleges can dump them.