Can technology be used to undo the wrongs of the past?

Rashida Richardson, Director of Policy Research at the AI Now Institute at NYU.

In support of our work championing responsible technology, we’re inviting leading practitioners, thinkers and funders from around the globe, who are exploring the impact of the internet on people and society, to share their insights and experience.

For this initial series, we asked contributors to give their view on how the field of responsible technology is changing. Each author addresses different questions, which may relate to the history of the movement, why it is important, where it may be going and what’s needed to bring more of the thinking into practice.

In this piece, Rashida Richardson, Director of Policy Research at the AI Now Institute at NYU, discusses how tech developers’ own biases and prejudices, together with the problems in how we remember the past, can result in technologies reproducing prejudice and inequity. But can tech also be a tool to illuminate and mitigate social and political stratification?

Events and stages of society and history generally repeat themselves in cycles. While the people and facts vary in each iteration, common themes and consequences recur. And within each recurrence, most forms of social and political stratification remain in flux or even worsen over time.

There is a natural inclination to assume that technology developed in good faith should reduce rather than increase social and political stratification. This is, however, a flawed presumption, because technology developers are often unaware of their own blind spots, preferences and biases. These blind spots and biases can ultimately hinder their ability to perform a comprehensive assessment of the social impact of their technology. And even if a technology is created with the best of intentions, its developers will often lack a sufficient understanding of the historical and social contexts of the issues they are attempting to address with their tech solution. When a technological solution is developed without taking into account the overwhelming influence of relevant systemic and structural forces, including political, economic and cultural ones, the technology can end up contributing to social and political stratification.

This is evident in the numerous and embarrassing news reports of racist and sexist chatbots, racist map search results, gender-biased job search engines, and offensive photo labelling. All of these examples result, in part, from technologies reproducing prejudices observed in the underlying data and in society at large. Rather than accepting responsibility for the way their technologies interacted with this larger environment, the companies in each of these examples addressed the problem simply by prohibiting the offensive results or outcomes as and when they emerged.

Addressing negative consequences on a case-by-case basis fails to tackle the bigger problem and will not necessarily prevent inadvertent or future harms.

Responsible Technology, as defined by Doteveryone, considers the social impact it creates and the unintended consequences it might cause. Thus, for technology to be responsible, it must be designed and developed with an awareness of the underlying inequities that contribute to biased and problematic outcomes and, I would argue, even act as a tool in and of itself to address them.

Only then can it potentially function as a tipping point towards a more equitable future.

The Problem with Historical Memory

To break the cycle of ever-increasing social and political stratification, one must confront the problem of historical memory. The concept of historical memory refers to the practice of groups and communities constructing and identifying with particular narratives about historical periods or events. It can be intentional and therefore function as propaganda.[1] It can also be an unintentional consequence of limited or ignored information. For example, in the United States, lynching was considered wrong, but it was commonly accepted in and outside of the American South as punishment for criminal acts by Black people. It was not until Ida B. Wells published The Red Record, using statistics and relevant reporting to illuminate how lynching was often used in the South to control or punish Black people who posed economic competition, that this narrative was challenged. It was this revelation that led to the introduction of anti-lynching legislation in Congress.

How we remember our past not only informs how we deal with the present but also how we prepare for the future. So if we fail to understand historical periods as they actually occurred, then we will only understand certain outcomes and problems as explained through the prevailing narrative. This is, in part, why many forms of social and political stratification persist. It is difficult to avoid replicating problems, or to develop effective solutions, if we are working from a flawed understanding of the relevant circumstances and information.

Technology to re-write the wrongs

Social and political stratifications also persist and worsen because they are embedded into our social and legal systems and structures. Thus, it is difficult for most people to see and understand how bias and inequalities have been automated or operationalized over time. Responsible technology can serve an important role in helping illuminate and mitigate the existence of these social and political stratifications as well as how they function in our social and legal systems and structures.

For instance, in many parts of the United States, a person’s zip code can function as a proxy for their race because of a long history of housing segregation based on race.[2] Using technology to remove this variable can help avoid built-in bias or other unintended consequences when making banking or insurance decisions, for example. Yet an alternative role for technology would be to illuminate why zip code can function as a proxy for race. If we know certain inequalities in technology derive from historical data that exhibits societal inequalities and biases, then we can use technology to prove these facts and help advocates push for more comprehensive and systemic reform to address the underlying issues.
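As a minimal sketch of this second role, the snippet below uses an invented loan-application dataset (all column names, zip codes and figures are hypothetical, not drawn from any real source) to show how one might quantify a proxy relationship rather than simply delete the column: it measures how strongly zip code reveals race, and whether outcomes differ across groups even though race is never an explicit input.

```python
# Illustrative sketch only: all data below is synthetic and invented.
import pandas as pd

# Hypothetical loan applications: zip code, applicant's race, loan decision.
df = pd.DataFrame({
    "zip_code": ["10001", "10001", "10002", "10002", "10003", "10003"] * 50,
    "race":     ["white", "white", "black", "black", "white", "black"] * 50,
    "approved": [1, 1, 0, 1, 1, 0] * 50,
})

# 1. Proxy strength: for each zip code, the share of applicants belonging to
#    its most common racial group. Values near 1.0 mean knowing the zip code
#    nearly reveals race, so dropping the race column does not remove the signal.
proxy_strength = pd.crosstab(df["zip_code"], df["race"], normalize="index").max(axis=1)
print("Share of applicants in each zip code's most common racial group:")
print(proxy_strength)

# 2. Disparate outcomes: approval rates by group, even though race is never
#    used directly in the decision.
print("\nApproval rate by racial group:")
print(df.groupby("race")["approved"].mean())
```

Surfacing numbers like these documents why the proxy exists and how the disparity plays out, which is the kind of evidence advocates can use to press for systemic reform, rather than simply hiding the variable.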

For technology to be fully responsible, it must take historical memory into account. As well as being aware of existing inequities, its developers must be aware that it is being built under fairly regressive legal and policy frameworks: frameworks that have failed to meaningfully address facially neutral practices that perpetuate bias and inequality, and that are also hesitant to promote policies explicitly conscious of bias and structural or systemic inequalities.

So for technology to really embody the Latin adage of “first, do no harm,” it must not knowingly deepen existing inequalities or create new ones, but it must also, when possible, help quantify or otherwise highlight how existing inequalities have functioned over time and how they relate to our current structures and systems. In essence, technology has the potential to serve as an intervention against the flawed narratives that have prevailed as a result of historical memory, by demonstrating the realities present in historical and current data. This type of information will be a welcome tool for advocates, and it can help identify interventions that may reduce, rather than perpetuate or increase, social and political stratification.

[1] E.g., the initial purpose of the FBI’s Counterintelligence Program (COINTELPRO) was to disrupt or otherwise neutralize groups the FBI believed to be subversive. COINTELPRO operations therefore included soliciting journalists to produce false news items aimed at discrediting communist and civil rights leaders.

[2] Housing segregation in the United States is the result of a variety of practices by government and private actors. For example, redlining is the systemic denial of services, most often financial such as banking or insurance, to certain neighborhoods based on community demographics.


Rashida Richardson is the Director of Policy Research at the AI Now Institute at NYU, where she designs, implements, and coordinates AI Now’s research strategy and initiatives on the topics of law, policy, and civil rights.

Rashida joined AI Now after working as Legislative Counsel at the New York Civil Liberties Union (NYCLU), the New York state affiliate of the American Civil Liberties Union, where she led the organization’s work on privacy, technology, surveillance, and education issues. Prior to the NYCLU, she was a staff attorney at the Center for HIV Law and Policy, where she worked on a wide range of HIV-related legal and policy issues nationally, and she previously worked at Facebook Inc. and HIP Investor in San Francisco.

Rashida currently serves on the Advisory Board of the Civil Rights and Restorative Justice Project and the Board of Directors of the College & Community Fellowship.

She received her BA with honors in the College of Social Studies at Wesleyan University and her JD from Northeastern University School of Law.
