One Moment Please*

posted by: Carole Lucock // 11:59 PM // June 13, 2006 // ID TRAIL MIX

“Agent, agent, agent!” A contorted face yells into a cell phone. A frantic pumping of the keypad follows this. Momentarily, the tightened muscles on the face relax and pull toward a smile; the pumping of the keypad stops. A reassuring voice has appeared on the other end of the line: “How may I help you?” Apparently, it is a real voice. The voice of someone real. The voice of a someone.

There follows a rather ordinary voice exchange in which the caller is successfully instructed on how to hook up a piece of telecommunications equipment:

“Okay, so the red plug goes to the converter? That’s what I was doing wrong.”
“Probably. Call back if you’re still having trouble. Now, is there anything else I can help you with today?”
“I think I’m fine now. Thank you.”

The ‘machine voice’ and the ‘human voice’. At first, the machine voice, or the machine simulating a human voice, beginning a series of prompts. And responding to this series, obediently on cue, a series of human responses. “If you want English, say ‘English’”, the machine voice instructs. “English”, comes the reply of the cooperative supplicant. “Is that a business or a residence?” “Residence.” After a few rounds of this, the human voice, impatient and frustrated, tries frantically to override and free itself from the incessant questioning of the machine voice. “Agent, Agent, Agent!” the voice yells, as the human hands pound on the keypad. And finally, a human voice appears on the other end.

This mini-drama has become increasingly common and ordinary. It begins in a machine-to-human exchange. Sometimes that is enough to satisfy the human. Other times, as above, the human becomes impatient and frustrated, or the issue cannot be addressed by the machine voice. The drama then ends in a human-to-human exchange.

Reflecting on this drama, I think about my encounters with these voice machines. I register a certain uneasiness. What is the source of my unease? What are its grounds?

To begin, there is my impatience and even irritation at having to negotiate a maze of questions. My time and energy are increasingly used up in useless, unwanted options as I seek to get ‘customer service’. But that cannot be the root source of my unease with the machine voice. After all, in many cases where a real voice or human being is on the other end of the line, I have to endure a similar sort of inane questioning as this human voice – live and in real time, but machine-like all the same – reads me through a script of questions it is programmed to ask. That too can make me impatient and irritated. But the machine-like human voice is different from the human-like machine voice.

To be sure, I have a certain unease about the machine-like human voice. And this unease too is not reducible to impatience or irritation. And the precise grounds of that unease would be worth chasing down. All the same, I prefer the human voice, even when it is machine-like, to the machine voice that is or tries to be like a human voice. And so, as in the drama above, I try to get to a human voice as quickly as possible: “Agent, Agent, Agent!” I endure and respond to series upon series of inane options in order to make my way as quickly as permitted to a place – a voice – where my questions or problems can be addressed. I even search the internet to see if creative others have found ways of overriding the system.

In a few of these automated options mazes, if one makes the right move there is a possibility of skipping the options entirely (at any stage) and speaking to another human being directly. But in any given case there may not be such a right move, or one may not know what it is. I dread, and therefore assiduously seek to avoid, making a wrong move or mistake that would loop me back to the beginning of the options series. And I am relieved when (if) the human voice appears: “How may I help you today?”

So my unease is not just impatience or irritation. And neither is it the passive obedience that one is steered into as a supplicant, on the way to a desired endpoint: problem solved, or a real person. To be sure, I do not like this passivity. “English, please,” I answer dutifully and politely to one of its prompts, adding the ‘please’ half-forgetting that it is, after all, a machine I am responding to. But even when I am dealing with other human beings (machine-like or not), in situations where I am a supplicant, I am conditioned or programmed to a certain passivity in order to make my way toward my desired endpoint. And politeness sometimes helps.

So it is not the fact of my conditioned or even programmed responses that is the source of my unease about the human-like machine voice. To be sure, this gives me a certain unease, and that unease would also be worth chasing down, but it is an unease that is not unique to the machine voice.

And the unease that I feel grows as we move from automated keypad choice to voice-tone choice (if choice is the right word at all here), and becomes greater as these voices take on a calm and friendly persona and try to feign real-person dialogue. The machine voice tries to ‘understand’ and ‘interpret’ the variety of responses provided in answer to specific questions, as though it were interacting with the caller. Currently, the technology is not particularly good and it’s obvious that one is dealing with a machine voice, a machine that stands between oneself and a real person (even if one occasionally forgets oneself and slips into the polite form of response one might give in answer to a person: “English, please”). However, the technology will get better. And one can imagine a Turing moment when it will not be so obvious that the voice on the other end of the phone is a machine, or that one’s responses are being steered towards its programmed ends.

As we move in this direction, I wonder if our current, passive submission to the machine-voice command (speak out loud to the voice-machine) is preparing or conditioning us to embrace a possible tomorrow in which the real person at the other end of the phone has been eclipsed. A person who, to be sure, may be machine-like and programmed. But a person all the same. A being to whom it is not absurd to say ‘please’ or ‘thank you’. A being with whom, after a series of inane machine-like questions and answers, one might meaningfully (even if vainly) terminate the exchange with a profanity and a charge, accusation or put-down that the person is nothing but a machine or a robot.

*The author of the present piece warrants that, notwithstanding the ideological programming she has received, it has not been (entirely) machine-written.

Comments

carole,

i immediately connected with your reflections, not only by way of personal experience but by recollection of something neil postman once warned us of in his book TECHNOPOLY. He said that:

"The fundamental metaphorical message of the computer, in short, is that we are machines-thinking machines, to be sure, but machines nonetheless. It is for this reason that the computer is the quintessential, incomparable, near perfect machine for Technopoly. It subordinates the claims of our nature, our biology, our emotions, our spirituality. The computer claims sovereignty over the whole range of human experience, and supports its claim by showing that it "thinks" better than we can."

Postman noted that the metaphorical expression of computer ideology is not restricted to the "human as machine" - the metaphor has since been inverted:

"[W]hat we have here is a case of metaphor gone mad. From the proposition that humans are in some respects like machines, we move to the proposition that humans are little else but machines and, finally, that humans are machines. And then, inevitably…to the proposition that machines are human beings."

as your piece so beautifully suggests, i am not sure which metaphorical direction is more dangerous...

Posted by: Ian Kerr at June 13, 2006 11:02 AM

Hi Carole. Thanks for this -- a very engaging exploration! I wonder: Do you think the sense of unease would be diminished if the technologies in question were designed such that it was constantly salient to any user that the voice heard was not a human voice (e.g. by making the voice as unlike a human voice as practically possible, or by having the voice frequently make a remark to the effect of "And remember, you're just talking to a machine")? Alternatively, do you think the sense of unease would increase were the voice to become even more human-like in appearance? Thinking about the answer to this might help us in turn decide whether the source of the unease is primarily (1) a general concern about machines taking over the roles of humans, or (2) a more specific concern about machines taking over the roles of humans *while at the same time causing us to overlook the fact that it is machines playing those roles*. I'm inclined to think that it's more (2), and less (1) that's doing the work. But I'm not sure ...

Posted by: David Matheson at June 13, 2006 09:50 PM
