How "data ethics" could actually work. In real life.

I’ve been in countless (well, I could probably count them, but I don’t want to) presentations that discuss “data ethics” and its status as the looming NEXT BIG THING. I’ve been told many, many times that organizations will have to go “beyond compliance” (whatever that means) and create ethical frameworks that help them make decisions about what to do with personal data when there is no law explicitly telling them they can’t do what would seem to be an immoral thing.

This way, we can avoid onerous regulation that will be unlikely to keep up with technological innovation and we can all live in a happy utopia where our benevolent corporate masters let us know what is okay and what is not okay to do with our personal data.

Sorry, that was perhaps a bit too cynical even for me. But you get the idea.

Almost always, the examples of how this would work are framed around humanitarian issues that may seem to present ethical dilemmas, but which don’t really. Such as:

There’s an Ebola outbreak. We need to get as many experimental vaccine doses into as many people as possible as fast as possible. Therefore, we will harvest cell phone data from the big carriers to see where people are traveling the most and where the choke points are, and we will put vaccine stations there.

BUT WAIT. We don’t have a legal basis to process that mobile data in that way. WE’RE VIOLATING THEIR PRIVACY.

Oh ho! But I have run this through my ethical framework and the good of preventing the spread of Ebola outweighs the privacy harms in this case. WE ARE GOOD TO GO.

Which is all well and good. But ethics isn’t about the obvious cases. We don’t need deep thinkers and ethicists to figure out that keeping thousands of people from dying outweighs infringing on some people’s location privacy in the aggregate. That’s pretty cut and dried.

Rather, ethics is about the harder cases where it’s not so clear: Hey, we have this data. There’s nothing that says we CAN’T use it in this way we want. What should we do?

Not long ago, I wrote about how Ever had decided it would take all the AI learning it has done on people’s uploaded photos and sell it as facial recognition technology and make bank. Now I read about RealNetworks doing the same thing. I’m guessing nobody ran the decision to sell facial recognition built on data gathered from users through an ethical framework:

Vance says the SAFR technology, which uses less bandwidth and computing power than some competing facial recognition tools, evolved from the company’s years of expertise processing images and video for products like the once-ubiquitous RealPlayer streaming media player and RealTimes, which processes personal photos and videos and includes facial detection features. (In January, RealNetworks doubled its stake in Rhapsody International, effectively making the company the owner of legendary streaming music service Napster.)

The SAFR software—designed to identify faces in real-world conditions, including people in motion, in dim lighting, and at occluded angles—can “reliably match against millions of faces in under a second,” the company says.

So, here’s a true ethical balancing test, right? If you were operating RealNetworks’ ethics department, would you have gone forward with this business plan? First, let’s look at the “harms.”

  1. Processing people’s images, which were uploaded for a completely different purpose than the one now intended. Their privacy is violated.

  2. Using the labor of people who had no idea what they were contributing to, for another’s profit, without any direct compensation. Their personal autonomy is violated. They didn’t choose to provide you with money-making labor for free.

  3. Giving institutions of all kinds the ability to “track” people who have done nothing other than inhabit a space, identifying their gender, race, etc., and categorizing them, without their permission. Their privacy is violated.

  4. Despite claims of “reliably” matching faces to people, we know facial recognition makes mistakes, thus subjecting at least some small number of people to unwarranted detentions of some kind. Plus, we know that these unwarranted detentions can sometimes go bad, so at least some of those innocent people will be physically harmed, even potentially killed, by scared law enforcement or security types. Plus, they suffer financial harm when they have to pay their medical bills.

  5. Despite claims of “reliability,” we know that facial recognition technology disproportionately misidentifies people of color, subjecting them to yet more discrimination, just because they happened to inhabit a public space. This could lead to actual physical and/or monetary harm in a variety of ways, including simply not being allowed to enjoy a sporting event you’ve paid to attend because, for a couple of hours, they mistakenly think you’re a “bad guy.”

Now, let’s look at the benefits:

  1. RealNetworks’ owners and employees will make some money. Hey, that’s a benefit. We need a functioning economy. But it’s hard to argue there aren’t other avenues for these people to make a living.

  2. Museums and other places report they’ll be able to offer a “more tailored experience.” I guess that’s a benefit, though I’ve never thought that my museum or sporting experience was particularly “untailored” in a bad way.

  3. People can open things with their faces. I mean, do we really suffer from bad door-opening technology? Like, this helps people who can’t remember their badges or keys? And what about the false positives and false negatives that mistakenly disallow or allow access? Is it really possible the good outweighs the bad here? Maybe.

  4. We keep bad guys out of places they shouldn’t be. This is the big one people will point to: Hey, this might prevent a stadium getting bombed or a school shooting or an abduction! I guess so. But how many terrorist attacks are carried out by people we would have recognized on a facial recognition camera? Is the access control at schools really so bad that we can’t keep known offenders from entering the doors without facial recognition? Can’t we just force them to show ID before allowing them in? And if you’re saying, “well, bad guys can falsify IDs, not their faces!,” let me tell you that a good pair of sunglasses and a hat defeats facial recognition 99 times out of 100.

To me, that ethical balance leans toward: Don’t do that. The harms are pretty clear and real, whereas the benefits are either personal for the people monetizing the tech or dubious for society at large. Ethical fail.

And what are the consequences for this ethical fail? Absolutely nothing. What is the incentive, then, for this “data ethics” landslide I keep being told is coming? Unless you are consumer-facing in some way, and you face some kind of consumer revolt that would affect your bottom line, there are no incentives whatsoever to act ethically and “go beyond compliance with the law.” It’s nonsense trumped up by well-meaning people to make themselves feel better about the fact that we’ve let technology run rampant without having meaningful conversations about what we should allow to happen in a democratic society.

And there are many, many other obvious examples where even a cursory ethical balancing test would show the immoral nature of something tech is doing: Apple hosts apps that are quite obviously used to stalk and harass women. Why? Israeli company XYZ (I don’t even care enough to Google who they actually are) makes a technology that helps governments spy on “bad guys,” but OOPS, it happens to wind up in the hands of crime bosses who use it to find and kill their enemies (like, oh, journalists).

Maybe you shouldn’t have made and sold that tech in the first place, dudes.

The only way this works is if ethical companies show themselves. Refuse to work with companies who are monetizing data in unethical ways. Be like Microsoft employees who revolted against Microsoft selling tech to ICE and forced the company to issue a statement calling for the end of ICE’s child separation policy. Ask for an “ethical impact assessment” before buying new tech, to see that companies have actually thought through the harms and benefits and made a proactive decision they can defend.

Show some ethical leadership and don’t buy facial recognition technology! Do not be cowed by your fears and let them overrule your sense of fairness and human dignity. We are more than the records in the database that are associated with our faces. We are more than the mathematical tendencies displayed by people of our ages, genders, and races. We are individual humans with rights to freedom and autonomy that should not be superseded just because a technology is possible.

Only if our corporate leaders demand better will we get better. Otherwise, “data ethics” will remain a topic for round-tables at corporate events while the latest “tech start-up” pays for another 10 mansions in the California hills.

Sam Pfeifle