Are machines racist? Are algorithms and artificial intelligence inherently prejudiced? Do Facebook, Google, and Twitter have political biases? The answers to those questions are complicated.
But if the question is whether the tech industry is doing enough to address these biases, the straightforward response is no.
Warnings that AI and machine learning systems are being trained using “bad data” abound. The oft-touted solution is to ensure that humans train the systems with unbiased data, meaning that humans need to avoid bias themselves. But that would require tech companies to train their engineers and data scientists to understand cognitive bias, as well as how to “combat” it. Has anyone stopped to ask whether the humans who feed the machines really understand what bias means?
Companies such as Facebook—my former employer—Google, and Twitter have repeatedly come under attack for a variety of bias-laden algorithms. In response to these legitimate fears, their leaders have vowed to conduct internal audits and asserted that they will combat this exponential threat. Humans cannot wholly avoid bias, as countless studies and publications have shown. Insisting otherwise is an intellectually dishonest and lazy response to a very real problem.
In my six months at Facebook, where I was hired to be the head of global elections integrity ops in the company’s business integrity division, I participated in numerous discussions about the topic. I did not know anyone who intentionally wanted to incorporate bias into their work. But I also did not find anyone who actually knew what it meant to counter bias in any true and methodical way.
Over more than a decade working as a CIA officer, I went through months of training and routine retraining on structural methods for checking assumptions and understanding cognitive biases. It is one of the most important skills for an intelligence officer to develop. Analysts and operatives must hone the ability to test assumptions and do the uncomfortable and often time-consuming work of rigorously evaluating one’s own biases when analyzing events. They must also examine the biases of those providing information—assets, foreign governments, media, adversaries—to collectors.
This kind of training has traditionally been reserved for those in fields requiring critical analytic thinking and, to the best of my knowledge and experience, is less common in technical fields. While tech companies often have mandatory “managing bias” training to help with diversity and inclusion issues, I did not see any training in cognitive bias and decision making, particularly as it relates to how products and processes are built and secured.
Judging by some of the ideas batted around by my Facebook colleagues, none of the things I had spent years doing—structured analytic techniques, weighing evidence, not jumping to conclusions, challenging assumptions—were normal practice, even when it came to solving for the real-world consequences of the products they were building. In large part, the “move fast” culture is antithetical to these techniques, as they require slowing down when facing important decisions.
Several seemingly small but concerning examples from my time at Facebook demonstrate that, despite well-meaning intentions, these companies are missing the boat. In preparation for the 2018 US midterm elections, we asked our teams whether there was any risk that we would be accused of an anti-conservative bias in our political ads integrity policies. Some of the solutions they proposed showed that they had no idea how to actually identify or measure bias. One program manager suggested doing a straight data comparison of how many liberal or conservative ads were rejected; no other analysts or PMs flagged this as problematic. My explanations of the inherent flaws in this idea, and of why such a comparison would not actually prove a lack of bias, did not seem to dissuade them.
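To make the flaw concrete, here is a minimal sketch using entirely hypothetical numbers (not Facebook data): equal raw rejection counts can hide very different rejection rates, and neither figure, on its own, separates reviewer bias from genuine differences in how often ads violate policy.

```python
# Hypothetical numbers for illustration only.
submitted = {"liberal": 10_000, "conservative": 4_000}   # ads submitted per group
rejected = {"liberal": 500, "conservative": 500}          # ads rejected per group

for group in submitted:
    rate = rejected[group] / submitted[group]
    print(f"{group}: {rejected[group]} rejected of {submitted[group]} ({rate:.1%})")

# The raw counts are identical (500 vs. 500), yet the rejection rates are
# 5.0% and 12.5%. And even the rates say nothing about bias unless you also
# know how often ads in each group genuinely violated policy.
```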
In other exercises, employees would sometimes mischaracterize ads based on their own inherent biases. In one glaring example, an associate mistakenly categorized a pro-LGBT ad run by a conservative group as an anti-LGBT ad. When I pointed out that she had let her assumptions about conservative groups’ opinions on LGBT issues lead to incorrect labeling, my comments were met with silence up and down the chain. These mischaracterizations are incorporated into manuals that train both human reviewers and machines.
These are mistakes made while trying to do the right thing. But they demonstrate why tasking untrained engineers and data scientists with correcting bias is naïve at a broad level and, at the leadership level, insincere.
I believe that many of my former coworkers at Facebook fundamentally want to make the world a better place. I have no doubt that they feel they are building products that have been tested and analyzed to ensure they are not perpetuating the nastiest biases. But the company has created its own sort of insular bubble in which its employees' perception of the world is the product of a number of biases that are ingrained within the Silicon Valley tech and innovation scene.
This is exactly why the tech industry needs to actually invest in real cognitive bias training and empower true experts to address these issues, as opposed to spouting platitudes. Countering bias takes work. While I don’t expect companies to put their employees through the same rigorous training as intelligence analysts, raising awareness of their cognitive limitations through workshops and training would be one concrete step.
Last year, when I attended a workshop in Sweden, a trainer started a session with a typical test. As soon as he put the slide up, I knew this was a cognitive bias exercise; my brain scrambled to find the trick. Yet despite my critical thinking skills and analytic integrity, I still fell right into the trap of what is called “pattern bias,” in which we see the patterns we expect to see. A few months later, at a workshop I gave in New York, a group of trained intelligence and security analysts likewise fell for a number of bias traps.
No matter how trained or skilled you may be, it is 100 percent human to rely on cognitive bias to make decisions. Daniel Kahneman’s work challenging the assumptions of human rationality, along with other research in behavioral economics and heuristics, drives home the point that human beings cannot overcome all forms of bias. But slowing down and learning what those traps are, and how to recognize and challenge them, is critical, especially as humans continue to train models on everything from stopping hate speech online to labeling political advertising to making hiring and promotion practices more fair and equitable.
Becoming overly reliant on data—which in itself is a product of availability bias—is a huge part of the problem. In my time at Facebook, I was frustrated by the immediate jump to “data” as the solution to all questions. That impulse often overshadowed necessary critical thinking to ensure that the information provided wasn't tainted by issues of confirmation, pattern, or other cognitive biases.
There is not always a strict data-driven answer to human nature. The belief that simply running a data set will solve for every challenge and every bias is problematic and myopic. To counter algorithmic, machine, and AI bias, human intelligence must be incorporated into solutions, as opposed to an over-reliance on so-called “pure” data.
While there are positive signs that the industry is seeking real solutions—such as IBM Research’s work to reduce discrimination already present in a training dataset—such efforts will not solve for human nature. Some of the proposed fixes include revisiting algorithms or updating the data being fed to machines. But it is still humans who are developing the underlying systems. Attempting to avoid bias without a clear understanding of what that truly means will inevitably fail.
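For readers curious what “reducing discrimination already present in a training dataset” can look like in practice, here is a hedged sketch using IBM’s open-source AI Fairness 360 toolkit, which grew out of that line of IBM Research work. The choice of the bundled Adult income dataset and of “sex” as the protected attribute are illustrative assumptions, not details from this piece.

```python
# Illustrative sketch, not the specific study cited above.
# Requires: pip install aif360 (plus the Adult census files the toolkit expects).
from aif360.datasets import AdultDataset
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

# Protected attribute chosen purely for illustration.
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

data = AdultDataset()  # census income data used in the toolkit's examples

# Measure the disparity already baked into the raw training data.
before = BinaryLabelDatasetMetric(
    data, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("mean outcome difference before:", before.mean_difference())

# Reweighing adjusts instance weights so favorable outcomes are balanced
# across groups before any model is trained on the data.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(data)

after = BinaryLabelDatasetMetric(
    transformed, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("mean outcome difference after:", after.mean_difference())
```

Tools like this can shrink a measurable disparity in a dataset, but as argued above, they do not address the judgment of the humans deciding what to measure in the first place.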