This NYT article discusses the “new” scourge of rude people interacting with their phones in public via voice thanks in large part to Siri, Apple’s new virtual assistant.
This article reminded me of something slightly different about human interaction with virtual assistants or automation. In a 2004 paper, researchers Parasuraman and Miller wondered if automation that possessed human-like qualities would cause people to alter their behavior.
They compared automation that made suggestions politely with automation that made them rudely (constantly interrupting the user). As you might expect, the polite automation elicited higher ratings of trust and dependence.
This might be one reason why Siri has a playful, almost human-like personality instead of being a robot servant that merely carries out your commands. The danger is that when assistants are perceived as human-like, people may raise their expectations to unreasonable levels, such as mistakenly ascribing political motivations to Siri.
Lastly, the graph shown below was in the latest issue of Wired magazine. I think it's a nice complement to the perceived reliability graph we showed in a previous post: