When I see headlines like ‘Uber cars run red lights during unauthorized real-world testing’, ‘Google image captioning in photos labelled black people as gorillas’ and ‘Microsoft chatbot Tay goes racist’, I don’t like the way they talk about these systems. But it wasn’t until I watched the presentation ‘Debunking Robot Rights: Metaphysically, Ethically and Legally’ on YouTube by Jelle van Dijk, Abeba Birhane and Frank Pasquale that I could say what bothered me.
My main problems with these headlines are these:
- There is no human responsible in the story
- We talk about an automatic system as if it has agency and free will
What I really liked about the presentation¹ is that the authors make a very good case against rights for robots. And by robots I mean ‘artificial intelligence’.
They articulated many points that bother me in news about AI. Let me try to repeat them here.
- We talk about current automated solutions as if they were fantasy AI
- We ignore that robots are artefacts of human work
Fantasy robots vs actual systems
What is that fantasy AI? Think Commander Data in Star Trek, R2-D2 in Star Wars, and so on.
These ‘robots’ don’t exist. Talking about robotic overlords and grand artificial intelligence has absolutely nothing to do with automated systems
as they are currently used. We have actual problems with actual systems. We can care about the hypothetical fantasy robots when they arrive. We should focus on societal problems that are causing real hurt.
The interesting thing is that most machine learning practitioners I talk to never ever talk about the fantasy robots. We are too focused on making the stupid system run.
Robots are artefacts of human decisions
All ‘AI’ systems are actually based on human work: the data is collected, cleaned and labelled by humans. Someone made a decision to use a certain algorithm, to train it with certain data, and to deploy the resulting model. Someone made the decision to use that model on actual human beings.
So what can we do?
We can change the way we talk about these systems, taking into account that they are tools, and that someone made a decision to apply them.
I suggest we practice by replacing AI with different words, and inserting the humans responsible for the decisions.
Replacing AI with tool-nouns
The term ‘artificial intelligence’ is misleading, but we are stuck with it, I’m afraid. In my mind, these systems are just tools, like a big pair of scissors. But maybe it would help if you replaced AI in headlines with ‘big scissors’ and checked whether the headlines still make sense, like the Chrome extension that replaces ‘AI’ and ‘ML’ with ‘Lots and Lots of Math’.
For instance, The Verge: ‘Microsoft lays off journalists to replace them with AI’.
Turn it into: ‘Microsoft lays off journalists to replace them with BIG SCISSORS’.
Now the true questions arise:
- Who will operate the scissors?
- What if the Scissors hurt someone?
- Who decided to replace the journalists? The company?
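If you want to play this game mechanically, here is a minimal sketch in Python. The replacement map and the `de_hype` function are my own invention for illustration; this is not the actual Chrome extension.

```python
import re

# Toy replacement map for the "replace AI with a tool-noun" exercise.
# These mappings are made up for illustration.
REPLACEMENTS = {
    r"\bAI\b": "BIG SCISSORS",
    r"\bML\b": "LOTS AND LOTS OF MATH",
    r"\bartificial intelligence\b": "big scissors",
}

def de_hype(headline: str) -> str:
    """Swap AI buzzwords for tool-nouns so the headline has to stand on its own."""
    for pattern, tool in REPLACEMENTS.items():
        headline = re.sub(pattern, tool, headline, flags=re.IGNORECASE)
    return headline

print(de_hype("Microsoft lays off journalists to replace them with AI"))
# Microsoft lays off journalists to replace them with BIG SCISSORS
```

If the rewritten headline suddenly raises the questions above (who operates the tool, who is liable when it hurts someone), the original headline was hiding them.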
Inserting the humans responsible for decisions
- ‘Uber cars run red lights during unauthorized real-world testing’
- Uber CEO Dara Khosrowshahi allowed automated vehicles to put residents of San Francisco in danger.
Talking about corporations as if they are humans makes it seem as if no one is responsible, but that is not the case: the CEO is responsible. So call him (or her) out.
Better conversations about AI
By using the term AI we muddy the waters; it brings to mind sentient robots that do not actually exist. By talking about an automated system as if it were sentient, or as if it had agency, we deflect responsibility from the makers and decision makers.
We cannot say the algorithm went wrong: the algorithm is doing what it is supposed to do, and it has no agency. It is not doing anything on its own. Someone made a decision to apply it, and now, because of that decision (or chain of decisions), other people are hurt.
We should talk about what happens to victims and perpetrators.
So let’s practice:
‘Google image captioning in photos labels black people as gorillas’
- Good: we talk about the victims
- Bad: Google, the company? No, someone made a decision, and that responsibility lies with Sundar Pichai as CEO of Google (I don’t know exactly when this happened; it could have been Larry Page, since 2015 was a transition year)
- Good: we are not talking about ‘AI’, it is a tool: giving images captions
- Why did this happen? (Decision: unverified, unchecked input data is appropriate as training data)
‘Microsoft chatbot Tay goes racist’
- Bad: we give agency to a robot, a chatbot
- Bad: we talk about a corporation as if it is a human, dodging responsibility
- Why did this happen? (Decision: unverified, unchecked input data is appropriate as training data)
‘Welfare surveillance system violates human rights, Dutch court rules’
- Good: talks about human rights
- Good: surveillance system as term, not AI
- Bad: no one is responsible; in fact, even now Lodewijk Asscher (Minister of Social Affairs) is dodging his responsibility
I’m publishing this as part of 100 Days To Offload. You can join in yourself by visiting https://100daystooffload.com. This is post 42/100.
Find other posts tagged #100DaysToOffload here
References
Birhane, Abeba, and Jelle van Dijk. “Robot Rights? Let’s Talk about Human Welfare Instead.” arXiv:2001.05046 [cs], January 14, 2020. https://doi.org/10.1145/3375627.3375855.
telegraph.co.uk: Google Photos labels black people as ‘gorillas’
Welfare surveillance system violates human rights, Dutch court rules
Microsoft Chat Bot Goes On Racist, Genocidal Twitter Rampage
The presentation is based on the paper “Robot Rights? Let’s Talk about Human Welfare Instead” by Abeba Birhane and Jelle van Dijk. ↩︎