What's the big deal about Google Duplex?
10 May 2018

On Tuesday, Google demoed an update to its virtual assistant that calls up businesses on your behalf to book appointments and make reservations.
In the demo, Duplex makes two calls: one to book a hair appointment and another to reserve a table at a restaurant. The reservation call is especially impressive because when it turns out the restaurant only takes reservations for five or more people, Duplex politely takes no for an answer and knows to ask whether there’s likely to be a wait.
In neither conversation did Duplex identify itself as non-human. It even goes so far as to introduce speech disfluencies (ummms and aahs and at least one sassy mmm-hmmmmm), which Google acknowledges are mostly about sounding more human. I wouldn’t mind that kind of thing if I knew I was talking to a computer, but it feels like an especially harsh betrayal to be manipulated into believing an AI is human in that way.
Why did Google, with its armies of PR people, think it was okay to demo a product that misrepresents itself as human to unwitting service workers?
They knew this demo would cause a backlash; they just didn’t know how big. And now they’re getting to find out. So far we have:
- TechCrunch - Duplex shows Google failing at ethical and creative AI design
- PCMag - Google Duplex is classist: Here’s how to fix it
- The Verge - The selfishness of Google Duplex
- Slate - Am I speaking to a human?
As of a few hours ago, CNET is reporting that Google has already started “clarifying”:
> We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.
But that doesn’t really answer the specific questions I want answered:
- Does the assistant identify itself as non-human up front, or only when asked?
- Can businesses opt out of receiving calls from Duplex?
- What action does Duplex take when the conversation fails?
Instead, it looks like Google is stalling for time. They’re “incorporating feedback”. In other words, they’re waiting to see what we decide is okay and then they’ll see what they can get away with.
This kind of tactic can shift the Overton window on what behaviors we’re okay with from our technology. Maybe that’s already what’s happening here.
One thing is for sure: there are plenty of other questions like “Can computers misrepresent themselves as human?” that AI companies want answered. By presenting their answer at a product demo, they get to set the terms of the discussion. The tone of that demo (here it is, by the way) was clearly projecting this is fine. If that’s where the conversation starts, maybe we end up closer to that position than we would have otherwise.
That’s clearly what Google’s hoping, anyway.
But it doesn’t have to be that way. In ethics classes, in technology journalism, in science fiction, in conversations with friends, on social media! We can be talking about these questions. We can decide what we’re okay with and what we’re not. And we don’t have to accept that just because something already seems to be a certain way, it always has to be that way.