There are promising conversations emerging about ethics in the tech sector. But while any turn toward ethics is an important one, it is incomplete without a racial lens.
To date, the conversation around ethics and race in tech has been shaped by three main approaches: examining the pipeline for diverse talent, supporting “diversity and inclusion,” and testing for implicit bias. But each of these has encountered its own pitfalls.
There is a well-measured “diversity dividend”: a real, measurable benefit to diverse tech development teams. A Harvard Business Review study found that more diverse workplaces performed better financially. Yet while many tech companies acknowledge diversity in hiring as a goal, the industry as a whole still falls short of achieving it.
People in tech have told us while putting this report together that talking about “diversity and inclusion” (D&I) is often a way to avoid talking about racial issues directly. Instead, people talk about “background” or “experience” or “under-represented groups,” terms that can obscure how serious a problem systemic racism really is. And the tiny percentage of black and Latinx people who do get hired in tech face the added burden of doing the work of racial literacy for their co-workers, supervisors, and company culture.
Another predominant way the tech industry has sought to address “diversity and inclusion” is through implicit bias trainings. Implicit bias is the idea that human prejudices are ingrained at a deep, unconscious level. These trainings use a computer-assisted “implicit association test” (IAT) that measures the strength of associations between groups of people (e.g., black people) and evaluations (e.g., good, bad) or stereotypes (e.g., athletic, clumsy). Such IATs consistently demonstrate that we are all more biased than we’re comfortable acknowledging. Yet after two decades, using implicit bias to diagnose racial bias has not paid off.
The notion that our brains are “hardwired” for bias leaves us in a kind of cul-de-sac, unable to escape the programming of our minds. If we want a truly ethical technology, we need a different approach, one that looks to ways we can build the skills we need in order to address racial bias.
Tech companies and their users are globally connected across national and cultural boundaries. Whether in startups of a few people or in well-established firms, tech companies often work with international teams creating products that will launch in a wide array of social, cultural, and financial contexts. According to one report, about 71% of tech company employees in Silicon Valley were born outside the United States. These international teams work in, and are shaped by, a dominant culture that has race embedded in it, often in confusing ways. The tech products created in Silicon Valley and distributed globally are not racially neutral; they carry the imprint of the dominant culture in which they are created, exporting its assumptions and ideas to other cultures. Racial literacy helps make clear that cultural localization should be an active part of any expansion of services, rather than a reactive customer service strategy.