Tech ethics, who are they good for?

This is a very short post, because the alternative would be to write a very, very long one and it’s Friday night — and well, I’m not Peter Singer, so I should keep it brief.

But here are two things, posted in the last week, that assume humanity is a single, undifferentiated entity. Neither of them acknowledges that tech ethics exist within a complex set of power relations, or that something that is good for one person is often, of necessity, bad for another. Both of them are statements without a subject.

This slide from this year’s edition of Mary Meeker’s legendary trends deck stood out to me:

Slide 36 from the 2018 Kleiner Perkins Internet Trends deck

Because it raises two questions:

  1. Who are the people experiencing the unintended consequences?
  2. Who are the people benefitting from innovation and progress?

Very often, they are not the same people. You can insert different technologies into these questions to see how they play out. For instance:

  1. Who are the people experiencing the unintended consequences of bias in automated decisions?
  2. Who are the people benefitting from the innovation and progress created by bias in automated decisions?

In this instance, the people who suffer the consequences of (1) are unlikely to be the people who reap the rewards of (2). The people whose equity is diminished are not the same as the people who collect the dividend. The one doesn’t cancel the other out. Sure, sometimes there are ambivalent outcomes, but not in the space of the big, capital-E “Ethical” dilemmas.

A few days later, Google published their Principles for AI.

They raise two very similar questions:

  1. Socially beneficial for whom?
  2. Foreseeable risks and downsides for whom?

The UN Sustainable Development Goals are the closest we have to a definition of things that are good for everyone. They would be a fairly unambiguous set of social benefits to sign up to.

Google haven’t done this. Instead, according to these principles, they are going to weigh the pros and cons in the balance against a “broad range of social and economic factors”, without declaring their politics or their underlying value set. What is good for me might not be good for you. What is bad for me might be fantastic for millions of others. And so it goes on.

Ethical declarations like these need to have subjects. It needs to be clear who they are referring to. “Socially beneficial for engineers with equity” is different to “socially beneficial for the poorest 50% of people in the world”. As Google Duplex demonstrated, “socially beneficial for people who hate making telephone calls” is not the same as “socially beneficial for people who have a right to know if they are talking to a robot”.

Subject-less statements are too imprecise to truly be called “principles” or “ethics”. If they are to be useful and taken seriously, we need to know both who they will be good for and who they will harm.

