understanding the importance and impact of anonymity and authentication in a networked society

From the "Sign of Impending Doom" Department

posted by:Angela Long // 10:52 PM // June 29, 2006 // General

In a television interview last month, the president of Applied Digital Solutions, maker of the implantable RFID device called the VeriChip, suggested that the VeriChip be used to track migrant workers and immigrants in the US. A sign of the impending right wing apocalypse, perhaps, or just an enterprising entrepreneur capitalising on his nifty invention? Is the VeriChip a digital angel? Digital devil is more like it...

| Comments (0) |


A Watchful Eye Influences Behaviour

posted by:Carole Lucock // 05:00 PM // June 28, 2006 //

Today, CBC reported on a study suggesting that being watched by a pair of eyes in a picture can influence behaviour. An "honesty box" for drinks received roughly three times as much money when accompanied by a picture of eyes as when it was accompanied by a picture of flowers.

| Comments (1) |


AT&T's Privacy Policy

posted by:Angela Long // 11:59 PM // June 27, 2006 // ID TRAIL MIX


During my usual pre-work web-surfing (a.k.a. my technique of seemingly interminable procrastination) last week, I came upon a post on boingboing.net with the title "AT&T retrofits privacy policy: your data is not yours". The title piqued my curiosity, given its relevance to privacy law and the involvement of one of the world’s largest telecommunications companies. During our contracts course this past year, Ian Kerr and I routinely used Canadian telecommunications contracts and privacy policies to provide ‘real world’ examples of contracts with which the students would have had some personal (and often frustrating) experience. Having read those contracts and policies in great detail (and even fashioned an exam question based on one such contract), I was interested to see what changes AT&T was making.

Apparently AT&T has revamped its privacy policy (a misnomer if ever I’ve heard one – ‘privacy policies’ usually provide protection for almost everything EXCEPT privacy) to provide even less protection for its customers’ confidential information. The boingboing.net post links to an article by David Lazarus of the San Francisco Chronicle. Lazarus describes the new policy, which applies only to AT&T Yahoo! internet users, as markedly different from the company’s previous policy in that it specifically ascribes ownership of customer data to AT&T. The new policy states, in a section dealing with AT&T’s legal obligations and fraud:

While your Account Information may be personal to you, these records constitute business records that are owned by AT&T. As such, AT&T may disclose such records to protect its legitimate business interests, safeguard others, or respond to legal process.

The policy also requires customer agreement with its terms as a condition of the service. It states (in bold print):

Please read this Privacy Policy carefully. Before using your Service(s), you must agree to this Policy.

In other words, if you don’t agree with the policy, which means agreeing to the use of your personal information in the ways set out by AT&T, you can’t use AT&T’s service.

Much of the brouhaha surrounding AT&T’s latest antics in the U.S. has to do with allegations that the company has been giving the National Security Agency warrantless access not only to customer account information, but also to data that customers have transmitted through AT&T’s services, such as e-mails, in the name of national security and the prevention of potential terrorist attacks on the US – an ongoing red-hot issue for privacy advocates. The company’s new policy widens the scope of to whom, and in what circumstances, it can provide its customers’ information to government authorities. It states:

We may disclose your information in response to subpoenas, court orders, and other legal process, or to establish or exercise our legal rights or defend against legal claims. We may also use your information in order to investigate, prevent, or take action regarding illegal activities, suspected fraud, situations involving potential threats to the physical safety of any person, violations of the Service Terms or the Acceptable Use Policy, or as otherwise required or permitted by law.

While Lazarus focuses on the ownership-of-information issue (an issue that is no doubt of interest to privacy advocates) in this article and in a follow-up article, I will instead focus on the contractual issue of required agreement to privacy policies, which, I have discovered, has implications for Canadians dealing with telecommunications companies, as well as all other commercial enterprises. As I stated above, AT&T has made it a term of its internet service agreement that customers agree to the privacy policy. If you don’t agree to the policy, you don’t get the service. In contractual lingo, we call this a take-it-or-leave-it offer. AT&T, as the offeror, is the master of the offer. The customer, as the offeree, may attempt to negotiate (though the chances of this happening in modern commercial relationships are slim), but has no real power over the terms upon which the offer rests. The only choice the offeree has is to accept the terms of the offer or take her business elsewhere. I am no expert in US privacy law, but given the lack of emphasis on the take-it-or-leave-it change to AT&T’s policy in the media coverage, we can assume that it is legal for AT&T to take such an approach. In addressing this issue, Lazarus states:

Meanwhile, what can AT&T customers do if they choose to distance themselves from AT&T? Dozens of readers have put that question to me since Wednesday’s column ran. Short answer: Not all that much. There are other local and long-distance companies...but they often rely on AT&T’s network to get calls through or have policies similar to AT&T’s.

After reading about the situation with AT&T, I became curious about the state of affairs in Canada with respect to take-it-or-leave-it offers based on acceptance of privacy policies. Having some familiarity with Canadian telecommunications privacy policies, I didn’t recall seeing a similar term requiring acceptance of the privacy policy in order to receive the service. But upon further investigation, I realized that Rogers has a similar term in its End User Agreement for Rogers Yahoo! Highspeed Internet. The preamble states:

As a condition of using the Services, you agree to and must comply with the terms and conditions of this Agreement, which will be binding on you.

Clause 8c then incorporates the Rogers Privacy Policy into the End User Agreement. So by agreeing to the End User Agreement, a customer agrees to the Privacy Policy – indeed, must agree to it, since it is a term of the contract itself. The result is much the same as in the AT&T situation: consumers have to take it or leave it. And the Canadian situation for consumers, especially with respect to telecommunications, is at least as bad as, if not worse than, the American one, with a few large corporations dominating the market for these services. If consumers don’t agree with the content of these privacy policies, or with the ways their information will be used under them, they will increasingly be out of luck in finding the services elsewhere at a reasonable price.

One difference in the Canadian situation, I thought, might be the existence of PIPEDA (the Personal Information Protection and Electronic Documents Act), a federal act designed to help protect personal information within commercial transactions. I thought there might be some sort of recourse for people who wish to acquire a product or service without necessarily consenting to the use of their personal information as outlined in a company’s privacy policy. However, after looking at PIPEDA, I was sorely disappointed (well, actually, I was a bit confused at first, as is often the case when I first read legislation). As it turns out, Rogers can require assent to its privacy policy as a term of its service agreement, meaning that Rogers can decline to enter into contractual relations with people who do not want to consent to the use of their information in the ways that Rogers has outlined in its policy.

Schedule 1 of PIPEDA provides the principles to which commercial enterprises are to adhere with respect to the collection, retention and dissemination of personal customer information. The cornerstone of PIPEDA is the consent of the individual: people must consent to all collection, retention and dissemination of information at the time it is collected by a company. This means that companies must tell their customers upfront what information they will collect and how they will use it. This all seems well and good, until we consider whether there are any limits on the kinds of information that companies are able to collect or on the uses of that information. It appears that there are no such limits; as long as the customer is informed of what the company is doing with the information, there is compliance with the principles of PIPEDA. To me, it all seems largely circular. Companies are essentially allowed to collect, retain and use personal information for any purpose, as long as that purpose is identified by the company, communicated to the customer and consented to by the customer. There are no real limits on the kinds of purposes, since the purposes are defined by the companies themselves.

To illustrate my point, look at Principles 4.2.2 and 4.2.3 of Schedule 1:

4.2 Principle 2 — Identifying Purposes
The purposes for which personal information is collected shall be identified by the organization at or before the time the information is collected.
4.2.2
Identifying the purposes for which personal information is collected at or before the time of collection allows organizations to determine the information they need to collect to fulfil these purposes. The Limiting Collection principle (Clause 4.4) requires an organization to collect only that information necessary for the purposes that have been identified.
4.2.3
The identified purposes should be specified at or before the time of collection to the individual from whom the personal information is collected. Depending upon the way in which the information is collected, this can be done orally or in writing. An application form, for example, may give notice of the purposes.

The purposes are to be identified by the company itself. That may not be problematic in and of itself, but when looked at alongside the other principles contained within PIPEDA, it becomes harder to swallow. Principle 4.3.3 states:

4.3.3
An organization shall not, as a condition of the supply of a product or service, require an individual to consent to the collection, use, or disclosure of information beyond that required to fulfil the explicitly specified, and legitimate purposes.

A company cannot require an individual to consent to a purpose that was not explicitly specified in order to obtain a product or service. The problem is that the corollary of this statement must also be true: a company CAN require an individual to consent to the collection, use or disclosure of information in order to obtain a product or service where that collection, use or disclosure has been defined and explicitly specified as a legitimate purpose. And who determines the legitimate purposes? Going back to Principle 4.2, the companies themselves are able to set their own purposes for gathering information. If consumers don’t agree with these purposes and do not wish to consent to them, they are out of luck, as the company will not be required to contract with them.

This state of affairs seems unfair, to say the least. To allow companies to set their own purposes for the collection and use of personal information, some of which may not be seen by consumers as legitimate (e.g. the sharing of information with other companies within the same corporate family, or even worse transgressions), and then to allow them to deny the provision of a product or service on the basis of disagreement with such purposes, does not seem to be in line with the general purpose of PIPEDA, which is to protect individuals’ information. This may be acceptable (a big MAY) in some situations where there is ample choice in the market: consumers can choose to go to companies whose information purposes are more in line with their own views. But as Lazarus points out, this kind of consumer choice is waning. First, there is less and less choice about whom to do business with, especially in the telecommunications industry, where virtual corporate monopolies exist. Second, more and more companies are invoking all-encompassing privacy policies that give them wide scope to deal with the personal information of their customers. And as long as they disclose this to customers at the outset, they have complied with PIPEDA. Increasingly, then, consumers must either consent (I would actually question whether this is true consent given the circumstances) to such policies or go without products and services. And as more and more companies adopt broad collection and use purposes, there is less and less privacy. Given this state of affairs, my question is whether there actually exists any substantive protection for personal information collected within the commercial sphere at all.

| Comments (2) |


Identity Exchange

posted by:Angela Long // 12:49 PM // June 21, 2006 // General

Listening to CBC Radio this afternoon, I heard an interesting interview with a performance artist named Nancy Nisbet, whose name I remembered from an article I had pulled on RFID technology. She is interested in how RFIDs affect our personal identity. To further her research, she had not one but two RFIDs implanted into her body (in her hands), which actually gives her two separate digital identities. This is something I hadn't thought about before... what if we implant more than one RFID into our bodies? How does this affect our identity and our privacy?

She is currently undertaking a North American performance tour that aims to engage the public in a discussion about identity issues. Check out her website here.

You can also check out Wired Magazine's article about her RFID implants, which includes a cool video of the implantation procedure.

| Comments (1) |


Emerging Technologies of Ownership

posted by:Spike Gronim // 11:59 PM // June 20, 2006 // ID TRAIL MIX


The social concept of ownership as applied to physical objects is broadly accepted and easily enforced. Whoever controls an object and has the right to transfer this control to others owns the object. In principle this definition applies to digital content as well. Difficulties arise, though, when abstract legal principles meet common social practice and technological realities. People typically describe the (legally purchased) music on their computers as "my music." In fact, as the Apple iTunes® terms of service state[1], this music remains the property of the content producers and their affiliates. A group of companies called the Trusted Computing Platform Alliance (TCPA) is working hard to enforce ownership of digital content using new technologies. These new technologies extend current software-only content control systems with hardware components. What can these technologies do, what are they being used for, and what does all this mean for consumers' autonomy?

The Trusted Platform Module (TPM) [2] is a representative example of a digital ownership enforcement technology. The TPM is a microprocessor attached to the motherboard of a notebook or desktop computer. Its purpose is to provide a "platform root of trust" [3] that allows a computer to prove things about itself to other computers across a network. For example, a TPM allows the computer to prove that it was manufactured by a certain company and has an authentic TPM chip. To accomplish this, the TPM has cryptographic capabilities built in, such as RSA[4] encryption and signatures. Before a computer leaves the factory, the TPM generates an RSA public/private key pair that serves as its "Endorsement Key"[5]. The TPM holds the private key in its own memory. Ideally, nobody (including the manufacturer) knows this private key. The manufacturer then uses its own RSA key pair to sign the TPM's public key, creating a certificate of authenticity endorsing the TPM. All the various manufacturers' RSA key pairs are in turn endorsed with certificates from the TCPA organization. This forms a chain of trust from the TCPA to the TPM, allowing any Internet-connected computer to verify the authenticity of a given TPM.
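To make the chain-of-trust idea concrete, here is a minimal Python sketch (using the cryptography package). It is an illustration of the general idea only: the "certificates" below are bare signatures over DER-encoded public keys, and the key sizes and padding are assumptions, not the actual TCPA certificate format.

```python
# Sketch of the endorsement chain: TCPA signs the manufacturer's key,
# the manufacturer signs the TPM's Endorsement Key, and a remote verifier
# that trusts only the TCPA key checks both links.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

def make_key():
    return rsa.generate_private_key(public_exponent=65537, key_size=2048)

def public_bytes(key):
    # Serialize a public key so it can be signed like any other message.
    return key.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)

def sign(signer, message):
    return signer.sign(message, padding.PKCS1v15(), hashes.SHA256())

def verify(public_key, signature, message):
    try:
        public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Key pairs for each link (in reality these live at the TCPA, in the
# manufacturer's factory, and inside the TPM chip itself).
tcpa_key = make_key()
manufacturer_key = make_key()
endorsement_key = make_key()   # generated inside the TPM; private half never leaves

# Each level endorses the public key of the level below.
manufacturer_cert = sign(tcpa_key, public_bytes(manufacturer_key))
tpm_cert = sign(manufacturer_key, public_bytes(endorsement_key))

# A remote verifier that trusts only the TCPA public key walks the chain.
chain_ok = (verify(tcpa_key.public_key(), manufacturer_cert, public_bytes(manufacturer_key))
            and verify(manufacturer_key.public_key(), tpm_cert, public_bytes(endorsement_key)))
print("TPM endorsement chain verifies:", chain_ok)
```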

Remotely authenticating a TPM might seem like an obscure and technical procedure, but without hardware support content control will always be imperfect. A non-TPM computer can do anything the user programs it to do. Apple iTunes® uses software tools to prevent unauthorized use of downloaded content [6]. The content is encrypted, and the software will only decrypt the content for use on authorized computers for authorized purposes. Software-only content control is fundamentally weak. The root of this weakness is the user's complete control over the computer. In order to play the music it must be decrypted, if only temporarily, and the decrypted form must appear somewhere in the computer's memory. A skilled user can access any portion of their computer's memory at any time. Finding the right parts of memory efficiently is genuinely difficult in practice, which gives the content producer some practical security benefit. Regardless of those practical challenges, it is theoretically possible to circumvent any software-only content control system.
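As a toy illustration of that weakness (this is not Apple's FairPlay; the key handling is deliberately naive and all names are invented), consider a player that ships with its content key baked in: the moment it decrypts a track, both the key and the plaintext sit in ordinary process memory, under the user's control.

```python
# Software-only content control: the player must hold the key and the
# decrypted bytes in its own memory, where a determined user can read them.
from cryptography.fernet import Fernet

content_key = Fernet.generate_key()          # shipped, obfuscated, inside the player
locked_track = Fernet(content_key).encrypt(b"...audio samples...")

def play(token):
    plaintext = Fernet(content_key).decrypt(token)   # decrypted copy now in RAM
    # render(plaintext) -- at this moment the user, who controls the machine,
    # can dump `plaintext` (or `content_key`) with a debugger.
    return plaintext

play(locked_track)
```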

Fully incorporating the TPM into a computer's software "stack" could enable strong content control. The term stack refers to the layered nature of computer software: applications depend on operating systems that in turn depend on hardware. The remote authentication feature of the TPM allows remote computers to establish a secure channel with the TPM. This means that a content producer can send messages to a consumer's computer addressed to the computer's TPM. Using basic cryptographic tools and the Endorsement Key, the TPM and the content producer can each determine whether the consumer programmed their computer to alter the messages in transit. The content producer now knows that the TPM is authentic and therefore not under the consumer's control. The TPM can then examine the operating system and tell the content producer whether or not it is identical to the version released by the vendor. At this point the content producer has authenticated the bottom layers of the stack - the hardware and the operating system. Proceeding similarly, the content producer can move up the stack until they are confident that everything is in order - that is, not under the consumer's control.
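The following sketch shows the "measure each layer" idea behind this kind of attestation. Real TPMs accumulate measurements in Platform Configuration Registers with a hash-extend operation (roughly, new value = hash of old value plus the new measurement); the layer names and images below are invented for illustration.

```python
# Measured boot, sketched: each layer's image is hashed into a running
# register; any change to any layer changes the final digest.
import hashlib

def extend(register, measurement):
    return hashlib.sha256(register + hashlib.sha256(measurement).digest()).digest()

def measure_stack(layers):
    register = b"\x00" * 32          # register starts at a known value
    for image in layers:
        register = extend(register, image)
    return register

expected = measure_stack([b"firmware-image", b"bootloader-image",
                          b"os-kernel-image", b"player-app-image"])
reported = measure_stack([b"firmware-image", b"bootloader-image",
                          b"patched-kernel", b"player-app-image"])

# The content producer compares the reported value against the value it
# expects for an unmodified stack.
print("stack unmodified:", expected == reported)   # False: tampering detected
```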

Before going any further, we must address some of the hype and fear surrounding TPM. The members of the TCPA hype TPM as a broad solution to many security and privacy problems. Without going into the details, I have reservations about the technical and business arguments for the benefits of TPM. I am not convinced that some of the problems allegedly solved by TPM require it, nor am I convinced that the TPM architecture is sound and secure. As TPM matures, the market and the security community will answer these questions. For now I take vendors' technical statements at face value in order to evaluate how TPM affects ownership of digital content. Some security researchers and open source software advocates fear that TPM will be blatantly abused by its creators: content, including the operating system itself, will be remotely disabled by vendors without cause; Linux won't run on TPM-enabled computers; governments will compel vendors to use TPM to ban certain documents. In reality these fears are overblown. Intel recommends that TPM units be shipped disabled by default and with certain potentially invasive features disabled permanently[7]. There are serious backwards-compatibility issues involved in implementing full-stack TPM that have so far kept proposed uses in niche areas, such as digital music ownership.

The striking thing about TPM is that it takes on the role of the owner. Using the techniques outlined above, control of content is transferred from the producer to the TPM itself. Returning to our initial definition of ownership, we see that the TPM, not the consumer, ends up owning TPM-protected content. The user cannot control the content; the encryption key, held only by the TPM, prevents this. Nor can the user transfer control without the TPM's consent. In the case of digital music this new technical reality seems to be in line with existing copyright principles. Problems arise when the technical implications of TPM content ownership are worked through. From the content producer's point of view it is meaningless to attempt TPM control of only the application layer, because the application's entire memory is accessible to the operating system. In general, control of any lower layer of the stack implies theoretical control of higher layers. Thus TPM ownership of music requires TPM ownership of the operating system.
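One way to picture this is "sealing": the key that unlocks the content is released only when the platform's measured state matches the state the content was sealed against. The toy class below is an assumption-laden sketch of that idea, not the real TPM command set; a real TPM keeps the sealed key in hardware rather than in a Python dictionary.

```python
# Sealing, sketched: the key is handed out only if the current platform
# measurement equals the measurement recorded when the content was sealed.
import hashlib

class ToyTPM:
    def __init__(self):
        self._sealed = {}                       # blob id -> (expected state, key)

    def seal(self, blob_id, expected_state, key):
        self._sealed[blob_id] = (expected_state, key)

    def unseal(self, blob_id, current_state):
        expected_state, key = self._sealed[blob_id]
        if current_state != expected_state:
            raise PermissionError("platform state changed; key withheld")
        return key

tpm = ToyTPM()
approved = hashlib.sha256(b"approved-os + approved-player").digest()
tpm.seal("track-42", approved, key=b"secret-content-key")

print(tpm.unseal("track-42", approved))          # approved stack: key released
modified = hashlib.sha256(b"patched-os + approved-player").digest()
try:
    tpm.unseal("track-42", modified)             # modified stack: key withheld
except PermissionError as e:
    print("unseal refused:", e)
```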

At this point we have deviated from the established principles of ownership. Of course operating system vendors' copyrights imply ownership of their software, but the requirements of TPM-controlled music go beyond the rights granted to copyright holders. Non-TPM operating systems grant their users full autonomy except in narrowly defined situations (such as unauthorized copying of the operating system itself). A full-stack TPM-enabled operating system introduces an extensible system by which consumers' activities can be directly controlled by a network of third parties. Regardless of the legal and ethical validity of such controls' end purposes, TPM implies a transition from general autonomy with specific controls to general controls.

William "Spike" Gronim (spike.gronim@alumni.cmu.edu) is a software developer and alumnus of the Carnegie Mellon University Data Privacy Lab.

[1] http://www.apple.com/support/itunes/legal/terms.html §13 (a)
[2] Bajika, Sundeep. Trusted Platform Module (TPM) based Security on Notebook PCs - White Paper. June 20, 2002. Accessed at http://developer.intel.com/design/mobile/platform/downloads/Trusted_Platform_Module_White_Paper.pdf on June 18, 2006.
[3] Ibid. p. 7.
[4] RSA can be used to encrypt/decrypt and sign messages. Suppose Alice wishes to send Bob a confidential message. Bob generates two long numbers with certain mathematical properties, a public key and a private key. The public key is made available to everyone, while only Bob knows his private key. Alice can use Bob's public key to transform her message such that it is only meaningful to someone in possession of Bob's private key. If Bob wants to prove his identity he can sign his message by creating a short number (the signature) based on his private key and the message. Alice can use the signature, the message, and Bob's public key to determine whether someone with knowledge of Bob's private key created the signature. See http://en.wikipedia.org/wiki/RSA for a more complete description.
[5] Bajika 2002, p. 8.
[6] This software is called FairPlay. See http://en.wikipedia.org/wiki/FairPlay.
[7] Bajika 2002, p. 19.

| Comments (1) |


Call For Papers: Graduate Student Symposium @ NYU

posted by:Jeremy Hessing-Lewis // 01:58 PM // June 19, 2006 // Digital Activism and Advocacy | General | Walking On the Identity Trail

Identity and Identification in a Networked World: A Multidisciplinary Graduate Student Symposium

Increasingly, who we are is represented by key bits of information scattered throughout the data-intensive, networked world. Online and off, these core identifiers mediate our sense of self, social interactions, movements through space, and access to goods and services. There is much at stake in designing systems of identification and identity management, deciding who or what will be in control of them, and building in adequate protection for our bits of identity permeating the network.

The symposium will examine critical and controversial issues surrounding the socio-technical systems of identity, identifiability and identification. The goal is to showcase emerging graduate student scholarship at the cutting edge of the humanities, social sciences, the arts, systems design & engineering, philosophy, law, and policy, working towards a clearer understanding of these complex problems and building foundations for future collaborative work.

In addition to presenting and discussing their work, students will have the opportunity to interact with prominent scholars and professionals related to their fields of interest. The symposium will feature a keynote talk by Ian Kerr, Canada Research Chair in Ethics, Law & Technology at the University of Ottawa.

Submission Information:
We invite submissions on the function of identity, identifiability and identification in the following general areas:

# Media & communication: DRM systems, e-mail & instant messaging, discussion forums
# Online: Identity 2.0, web cookies, IP logging, firewalls, personal encryption
# Social interaction: online social networks, blogging, meetups
# Consumer culture: RFID product tags, reputational systems, commercial data aggregation
# Mobility: electronic tolls, auto black boxes, RFID passports, SecureFlight, V-ID cards
# Security: video surveillance, facial recognition, biometric identification systems, national ID cards

Please submit abstracts, position pieces, demos or full papers for a 10-15 minute presentation to michael.zimmer@nyu.edu by July 5, 2006. Include contact and brief biographical information with your submission. Notification of submission acceptance will be given by July 17, 2006. Limited travel stipends will be available for presenters. Students in need of travel funds should indicate so with their submission.

--------------------

Program chairs:
Tim Schneider, JD student, New York University School of Law
Michael Zimmer, Ph.D. candidate, NYU Steinhardt Department of Culture & Communication
Faculty advisor: Helen Nissenbaum, NYU Steinhardt Department of Culture & Communication

Sponsors:
New York University Coordinating Council for Culture and Communications, Journalism, and Media Studies
New York University, Steinhardt School, Department of Culture and Communication
New York University Information Law Institute
New York University School of Law

For more information, visit the Symposium's Site Here.

| Comments (1) |


Chips in Chips: New Guidelines, Growing Concerns

posted by:Carole Lucock // 11:02 AM // // Commentary &/or random thoughts

Today, Ontario’s Privacy Commissioner issued privacy guidelines for RFID systems. An article in today’s Globe and Mail, which details the increased use of RFID tags to track not just objects but also the behaviour of people, illustrates the need for such guidelines. The article notes that in some casinos, gamblers are monitored through a chip embedded in their gambling chips and that, at the World Cup, soccer fans are being monitored through a chip in their World Cup tickets.

| Comments (0) |


MARK HER WORDS

posted by:Ian Kerr // 11:41 PM // June 15, 2006 //

tonight i attended the Canadian Biometric ID Documents: A Public Forum
an event co-organized by andrew clement and krista boa.

at the event, alice sturgeon, senior director, accessibility, identity management and security, treasury board secretariat, told us that canada has no plans for a national identity card.

just watch and remember ...

| Comments (0) |


Surveillance Goes Mainstream

posted by:Jeremy Hessing-Lewis // 02:02 PM // June 14, 2006 // Commentary &/or random thoughts | General | Surveillance and social sorting | Walking On the Identity Trail

While researching how the major telcos are bundling their products, I was somewhat surprised to see that Telus has now added retail sales of consumer surveillance products to its online store. There are at least three immediate observations to be made about this development.

1. Web-based video surveillance is now mainstream. While similar products have been available for years, Linksys (a division of Cisco Systems) is a major market player with a variety of high-volume retail distributors. Telus is also prominently marketing these products through the main products page of their online store.
2. Web-based video surveillance is easy to use. Unlike the James Bond surveillance of years past, the Linksys models are ready to run out of the box. According to the product description, the Wireless G Video Camera contains its own web server and does not require a computer. Just provide power and a nearby wireless network connection, and the camera will stream live video (with sound) straight to any web browser. For mobile monitoring, the camera can notify a cell phone, pager, or e-mail address whenever the motion sensor is triggered (a rough sketch of this notify-on-motion behaviour follows this list). When operating in "Security Mode," the camera can be configured to send short video clips to up to 3 e-mail addresses.

3. Web-based video surveillance is cheap. Telus offers two models. The cheaper version retails for $99.95 and contains all the basic functionality. For $274.95, the deluxe version includes a motion sensor and microphone.
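To give a sense of how simple the notify-on-motion behaviour described in point 2 is to build, here is a rough Python sketch. It is not the Linksys firmware: the SMTP host, addresses, and sensor hooks are invented placeholders.

```python
# Minimal motion-alert loop: poll a motion sensor, and when it fires,
# e-mail a short clip to the configured recipients.
import smtplib
import time
from email.message import EmailMessage

ALERT_RECIPIENTS = ["owner@example.com"]     # the real camera allows up to three
SMTP_HOST = "smtp.example.com"               # placeholder mail server

def motion_detected() -> bool:
    """Placeholder for the camera's motion-sensor flag."""
    return False

def capture_clip() -> bytes:
    """Placeholder for a short recorded video clip."""
    return b""

def send_alert(clip: bytes) -> None:
    msg = EmailMessage()
    msg["Subject"] = "Motion detected"
    msg["From"] = "camera@example.com"
    msg["To"] = ", ".join(ALERT_RECIPIENTS)
    msg.set_content("Motion was detected; a short clip is attached.")
    msg.add_attachment(clip, maintype="video", subtype="mp4", filename="clip.mp4")
    with smtplib.SMTP(SMTP_HOST) as server:
        server.send_message(msg)

while True:
    if motion_detected():
        send_alert(capture_clip())
    time.sleep(1)
```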

Such products will likely have significant privacy implications. Their ease of use and low cost will open a much broader market of users than previous versions did. It is foreseeable that many of these users will devise illicit uses beyond the "home monitoring" described by Telus. As these products continue to shrink in size and their wireless capabilities improve, the threat is only likely to grow.

We are left with the recurring question: Does the democratization of surveillance equipment present a threat?

One might argue, as has Steve Mann with the concept of sousveillance, that providing such tools to citizens counterbalances the powers of otherwise one-sided surveillance. I consider this to be somewhat of a "right to bear arms" argument and am forced to wonder whether such a state is at all desirable. Are many weapons preferable to a single weapon?

In contrast, one might also see Telus' foray into video surveillance as part of the surveillance "arms race" that will inevitably be a race to the bottom (the always enjoyable skeptic's position).

Alas, I fear this moral debate will only be resolved by the great oracle of our time... the market.

| Comments (1) |


Captain Copyright v. The Corruptibles

posted by:Natalie Senst // 09:14 AM // // Commentary &/or random thoughts

Access Copyright's Captain Copyright comic seems to have been designed to teach kids about the dangers of copyright infringement, and school boards in Vancouver, Richmond and Halton (Ontario) began linking to it (only Richmond still has a link up). Then other sites began linking to the comic in ways that treated it not so favourably. Contractual issues have ensued over linking policy, with Access Copyright repeatedly revising its terms for linking to the comic, leaving others confused about what obligations are created for a person who provides links to another's website (for more on this, see Michael Geist).

That issue aside, I would like to bring up The Corruptibles – the brainchild comic of the EFF (its focus on American copyright issues notwithstanding). In my view, this is quite an interesting circumvention of the entire legal uproar over linking rights. The EFF has made a point of showing "superheroes" as something akin to wolves in sheep's clothing, particularly when it comes to the assertion of copyright (funny how similar that "Corruptible" logo looks to a copyright logo!). Viewing both these comic creations together puts the good "Captain" in perspective without having to link to him directly, and is possibly an even stronger message of warning (oh the joys of interactive cartoons!). But perhaps if school boards provide a link to Captain Copyright beside a link to The Corruptibles, this proximity between links may become the subject of another term up for discussion in the legality of linking.

Here's hoping the artistic creativity of EFF will be appreciated, and provide a good dose of skepticism when classifying "superheroes" from now on (at least when it comes to copyright).

| Comments (1) |


One Moment Please*

posted by:Carole Lucock // 11:59 PM // June 13, 2006 // ID TRAIL MIX


“Agent, agent, agent!” A contorted face yells into a cell phone. A frantic pumping of the keypad follows this. Momentarily, the tightened muscles on the face relax and pull toward a smile; the pumping of the keypad stops. A reassuring voice has appeared on the other end of the line: “How may I help you?” Apparently, it is a real voice. The voice of someone real. The voice of a someone.

There follows a rather ordinary voice exchange in which the caller is successfully instructed on how to hook up a telecommunications technology:

“Okay, so the red plug goes to the converter? That’s what I was doing wrong.”
“Probably. Call back if you’re still having trouble. Now, is there anything else I can help you with today?”
“I think I’m fine now. Thank you.”

The ‘machine voice’ and the ‘human voice’. At first, the machine voice, or the machine simulating human voice, beginning a series of prompts. And responding to this series, obediently on cue, a series of human responses. “If you want English, say ‘English’”, the machine voice instructs. “English”, comes the reply of the cooperative supplicant. “Is that a business or a residence”? “Residence.” After a few rounds of this, the human voice, impatient and frustrated, tries frantically to override and free itself from the incessant questioning of the machine voice. “Agent, Agent, Agent!” the voice yells, as the human hands pound on the keyboard. And finally, a human voice appears on the other end.

This mini-drama is increasingly common and ordinary today. It begins in a machine-to-human exchange. Sometimes that is enough to satisfy the human. Other times, as above, the human becomes impatient and frustrated, or the issue cannot be addressed by the machine voice. The drama ends in a human-to-human exchange.

Reflecting on this drama, I think about my encounters with these voice machines. I register a certain uneasiness. What is the source of my unease? What are its grounds?

To begin, there is my impatience and even irritation at having to negotiate a maze of questions. My time and energy are increasingly used up in useless, unwanted options as I seek to get ‘customer service’. But that cannot be the root source of my unease with the machine voice. After all, in many cases where a real voice or human being is on the other end of the line, I have to endure a similar sort of inane questioning as this human voice – live and in real time, but machine like all the same – reads me through a script of questions it is programmed to ask. That too can make me impatient and irritated. But the machine like human voice is different from the human like machine voice.

To be sure, I have a certain unease about the machine like human voice. And this unease too is not reducible to impatience or irritation. And the precise grounds of that unease would be worth chasing down. All the same, I prefer the human voice, even when it is machine like, to the machine voice that is or tries to be like a human voice. And so, as in the drama above, I try to get to a human voice as quickly as possible: “Agent, Agent, Agent!” I endure and respond to an inane series upon series of options in order to make my way as quickly as permitted to a place – a voice – where my questions or problems can be addressed. I even search the internet to see if creative others have found ways of overriding the system.

In some few of these automated options mazes, if one makes the right move there is a possibility of skipping the options entirely (at any stage) and speaking to another human being directly. But in this or that case there may not be such a right move, or one may not know what it is. I dread, and therefore assiduously seek to avoid, making a wrong move or mistake that would loop me back to the beginning of the options series. And I am relieved when (if) the human voice appears: “How may I help you today”.

So my unease is not just impatience or irritation. And neither is it the passive obedience that one is steered into as a supplicant, on the way to a desired endpoint: problem solved, or a real person. To be sure, I do not like this passivity. “English please,” I answer dutifully and politely to one of its prompts, adding the ‘please’ half-forgetful that it is after all a machine I am responding to. But even when I am dealing with other human beings (machine like or not), in situations where I am a supplicant, I am conditioned or programmed to a certain passivity in order to make my way toward my desired end point. And politeness sometimes helps.

So it is not the fact of my conditioned or even programmed responses that is the source of my unease about the human like machine voice. To be sure, this gives me a certain unease, and that unease would also be worth chasing down, but it is an unease that is not unique to the machine voice.

And that unease that I feel grows as we move from automated keypad choice to voice-tone choice (if choice is the right word at all here), and becomes greater as these voices take on a calm and friendly persona and try to feign real-person dialogue. The machine-voice tries to ‘understand’ and ‘interpret’ the variety of responses provided in answer to specific questions, as though it were interacting with the caller. Currently, the technology is not particularly good and it’s obvious that one is dealing with a machine-voice, a machine that stands between you and a real person (even if one occasionally forgets oneself and slips into the polite form of response one might give in answer to a person: “English, please”). However, the technology will get better. And one can imagine a Turing moment when it will not be so obvious that the voice on the other end of the phone is a machine, or that one’s responses are pre-programmed towards its programmed ends.

As we move in this direction, I wonder if our current, passive submission to answer the machine-voice command (speak out loud to the voice-machine) is preparing or conditioning us to embrace a possible tomorrow in which the real person at the other end of the phone has been eclipsed. A person who, to be sure, may be machine like and programmed. But a person all the same. A being to whom it is not absurd to say ‘please’ or thank you. A being to whom, after a series of inane machine like questions and answers, one might meaningfully (even if vainly) terminate the exchange with a profanity and a charge, accusation or put down that the person is nothing but a machine or a robot.

*The author of the present piece warrants that, notwithstanding the ideological programming she has received, it has not been (entirely) machine written.

| Comments (2) |


Flying sans ID

posted by:Jeremy Clark // 03:02 PM // June 11, 2006 // Walking On the Identity Trail

Here is a quick but amusing article on Jim Harper (Cato) taking a challenge from John Gilmore (EFF) to fly on a domestic US flight without ID.

| Comments (0) |


The Nexus of Intellectual Privacy and Copyright

posted by:Alex Cameron // 11:59 PM // June 06, 2006 // ID TRAIL MIX


For nearly three centuries since the enactment of the world’s first copyright statute, individuals have been free to travel the kingdom of copyright as countrymen, enjoying the delightful objects to be found there, in private and without any notice taken. Historically, neither copyright law nor copyright holders have interfered with individuals’ freedom to enjoy copyright works in private. This centuries-old relationship between copyright and privacy has changed dramatically in the recent past.

Copyright and privacy have increasingly come into conflict over the course of the past decade. This conflict has led to a diminishment of individuals’ privacy and autonomy in connection with their enjoyment of copyright works. Digital rights management (DRM) technologies that use surveillance and restrict individuals’ activities are a prime example of this conflict.

Failure to gain a richer understanding of the conflict and relationship between copyright and privacy may leave us with little or no room to travel our vibrant copyright kingdoms in private. Permitting privacy to be diminished in the name of copyright may also lead to the impoverishment of the very copyright kingdoms that we purport to be enriching in so doing.

This short ID Trail Mix briefly discusses why, quite apart from its intrinsic worth, authors’ intellectual privacy is and has historically been instrumental in furthering the goals of copyright. This ID Trail Mix raises the question of whether the rationale behind authorial privacy’s historical utility in promoting the goals of copyright can provide arguments in support of protecting individuals’ intellectual privacy in connection with their enjoyment of copyright works. The ultimate question posed here is what role individuals’ intellectual privacy could or should play in the copyright balance.

Copyright and authors’ intellectual privacy

Copyright and privacy share a fascinating and complex historical relationship. At first blush, one might have thought that copyright and privacy have come to implicate one another only over the course of the past decade since the advent of digital networked technology. The kind of conflicts that have emerged in the recent past – the ones that involve conflict between copyright and individuals’ private enjoyment of copyright works – do appear to be a uniquely contemporary phenomenon. However, copyright and privacy also share a much older and more foundational relationship, a complementary relationship. For example, Sunny Handa has characterized privacy as one of the “theoretical pillars” of copyright. i

In “The Right to Privacy”, Warren and Brandeis sketched a picture of where copyright and privacy might lie in respect of one another as of 1890. In examining the nature and basis of the right to control the act of publication of a copyright work, Warren and Brandeis described how the right does not depend on whether the subject matter has any economic value or would otherwise be protected as intellectual property. In other words, the common law right to control the act of publication is not merely the right of control that copyright provides, nor is it motivated by protecting precisely the same interests that copyright protects. Distinguishing the right from principles of private property, Warren and Brandeis identified it as one instance of the more general right of privacy, the right “to be let alone”.

The aspect of the relationship between copyright and privacy identified by Warren and Brandeis is based principally on the distinction between published and unpublished works. Though not offering complete privacy protection (because facts could be disclosed without infringing copyright), the right of first publication of a copyright work is a privacy-like right protected at common law. Once published, the rights in the work became primarily rooted in copyright law.

In addition, there are a number of cases where copyright has been invoked to protect confidential information and in some cases what one might consider to be privacy interests. In these cases, which continue to arise, copyright has played an instrumental role in protecting privacy interests, typically in situations involving the attempted publication of personal materials such as letters. In a similar way, copyright has effectively protected privacy-related interests in the area of commissioned photographs and portraits.

Privacy can thus be viewed as playing at least two key roles in terms of furthering the objectives of copyright. First, privacy protects the act of first publication. This protection helps to encourage the development and expression of new ideas. Sunny Handa discusses this concept in the negative, noting the risk inherent in having less than absolute privacy protection in this area:

Making the right of privacy [protecting first publication] less than absolute, creates a chilling effect whereby confidential works will not be committed to paper for fear of their being divulged. This is similar to the approach of the courts to the U.S. first amendment law. Thus, privacy protections [protecting first publication] should be paramount. It is both an important right – considered a fundamental freedom by some – and a fragile one. Once it is lost, privacy cannot be regained. It should be removed from the reach of copyright exceptions [such as fair dealing/use or public interest exceptions]. ii

By avoiding the potential chilling effect described in this passage, an absolute privacy right can be seen as encouraging the development and expression of new ideas, which is part of the purpose of copyright. It creates a refuge for building ideas, an intellectual ‘breathing space’, a veil behind which authors can explore ideas and develop new expressions. This privacy right ultimately protects authors’ right to determine whether and when they will publish their expressions.

A second way that privacy contributes to the objectives of copyright law lies in the protection of moral rights. Although rights in a work are primarily rooted in copyright upon publication, this is not to say that privacy is no longer relevant. In jurisdictions with moral rights regimes, like Canada for example, privacy plays a role in copyright in so far as authors have moral rights to remain anonymous or to use a pseudonym. These are forms of a right of privacy. Moral rights contribute to the development and dissemination of new expression, at least to the extent that such rights encourage authors to create and disseminate works that they would not otherwise create or disseminate. Moral rights, and hence a form of privacy, can therefore be viewed as an important part of the incentive package that copyright offers to creators.

These are a few examples of ways that copyright and privacy share a complementary relationship in ways that further the goals of copyright policy. However, the privacy rights discussed thus far have been the privacy rights of authors. But what of the privacy rights of individuals who wish to access and use copyright works? Can their privacy rights possibly further the goals of copyright policy when in recent years they seem to have so often come in conflict with copyright holders? Can authorial privacy’s utility in promoting the goals of copyright be extended to arguments in support of protecting individuals’ privacy in relation to their enjoyment of copyright works?

Copyright and individuals’ intellectual privacy

Prior to the conflicts of the recent past, individuals were free to roam the kingdom of copyright in private, without any notice taken. Copyright has traditionally not interfered with individuals’ freedom to access and enjoy copyright works in private. Rather than focusing on the private activities of individuals, copyright has heretofore been principally concerned with protecting publishers against copying by competing publishers. As Daniel Gervais explains, copyright law never used to concern itself with the private activities of individuals who access and use copyright works:

The fact that copyright was not meant to be routinely used in the private sphere is further evidenced by the fact that exceptions and limitations to copyright were also written in the days of the professional intermediary as the user. This explains why in several national laws, the main exceptions can be grouped into two categories: private use, which governments previously regarded as “unregulatable” (i.e., where copyright law abdicated its authority by nature)… Still today, there are several very broad exceptions for “private use” (e.g., Italy, Japan) that were adopted in the days when the end-user was just that, the end of the distribution chain. End-users have always enjoyed both “room to move” because of exceptions such as fair use and rights stemming from their ownership of a physical copy. There was thus an intrinsic balance that recognized that end-users who did not significantly affect the commercial exploitation of works by their individual use should not be on the copyright radar.
…[copyright’s recent] invasion of the private sphere is at odds with the history of copyright, where it never forayed except, as just mentioned, in the case of levies. There was an implicit recognition that copyright did not apply to end uses, even though formally users were making copies and, in rarer cases, performing or communication works. iii

Of course, as Gervais alludes to when he mentions ownership of a physical copy, it is worth emphasizing two additional reasons why copyright has not conflicted with privacy in the past. First, individuals typically did not have the means to infringe copyright works, let alone on a scale that would have an impact on the exploitation of the work – e.g. individuals could not very easily copy, distribute and sell thousands of books. Second, copyright holders have traditionally not had an efficient or effective means to invade the private sphere; there were no ways that they could track individuals’ access and use of physical copies of copyright works in order to prevent or detect illegal or unauthorized activities. The context in which copyright and individual privacy now interact is dramatically different.

Through exceptions like fair dealing, copyright law continues to attempt to carve out private space for individuals to access and enjoy copyright works. However, copyright holders increasingly have the legal and technological means by which to foreclose those spaces and to track individuals’ private activities. This applies not only in the case of online digital content delivery services, but now also in the case of physical copies of works like CDs, as demonstrated by the infamous Sony BMG rootkit controversy iv. The scope of private, anonymous and/or autonomous use previously afforded by the ownership of tangible goods is eroding.

On the other hand of course, many copyright holders argue that individuals increasingly have the means by which to infringe copyright on a scale that impacts the commercial exploitation of works. For example, in a matter of seconds, a single individual can perfectly copy and make a copyright work available to millions of people for downloading on a p2p network. For these reasons, some copyright holders claim that privacy-invasive measures aimed at responding to infringement are justified.

The modern copyright context thus requires us to consider the nature and scope of individuals’ ability and potential legal right to enjoy copyright works in private, anonymously and autonomously. Authorial privacy suggests that privacy can play a role in furthering the goals of copyright. However, copyright policy has heretofore not adequately considered the potential importance of individuals’ intellectual privacy – individuals’ ability and/or legal right to enjoy copyright works in private, anonymously and/or autonomously – in furthering the goals of copyright. If, as the Supreme Court of Canada has recognized, the purpose of copyright is utilitarian, aimed at balancing the economic rights of creators against promoting the public interest in the encouragement and dissemination of creative works, then we must ask what role individuals’ intellectual privacy could play in that balance.

i Sunny Handa, “Understanding the Modern Law of Copyright in Canada”, (1997) McGill University (Thesis), at 160.
ii Ibid.
iii Daniel Gervais, “Use of Copyright Content on the Internet: Considerations for Excludability and Collective Licensing”, in Michael Geist, ed., In the Public Interest: The Future of Canadian Copyright Law (Toronto: Irwin Law, 2005) at 531, 548.
iv For a discussion of the Sony rootkit controversy, see Jeremy deBeer, “How Restrictive Terms and Technologies Backfired on Sony BMG” (2006) Internet & E-Commerce Law in Canada, Vol. 6, No. 12.

| Comments (3) |


John Doe

posted by:Jeremy Hessing-Lewis // 02:08 PM // June 02, 2006 // Commentary &/or random thoughts | TechLife

Douglas Coupland's newest novel, JPod, features the usual assortment of quirky characters. One in particular is especially clever: his name is John Doe and he is obsessed with being unremarkable. In contrast to the other characters, who subvert their bland cubicle environment with endless self-identifying customization, John is determined to be statistically average.

John's birth name is "crow well mountain juniper" (all lower case). He grew up in a lesbian commune, was home-schooled until the age of fifteen, and never saw a TV set until the age of twelve. His desire to be statistically normal is an attempt to counteract his "wacko upbringing."

His attempts at being normal are a brilliant jab at the way we identify ourselves in a consumer society. He drives a white Ford Taurus and is flattered when people tell him it looks like a rental car. He keeps himself 9 pounds overweight (statistically average). His wardrobe consists of khakis and plain corporate golf shirts.

Essentially, his personality is defined by his desire to not have an identity. Hilarity ensues.

| Comments (0) |


Bloggers Are Journalists Too

posted by:Jason Millar // 02:10 PM // June 01, 2006 // Digital Democracy: law, policy and politics

In a recent ruling, a California court decided to block Apple Computer Inc.'s access to the identity of a blogger's source. The blogger had published information about the company's products in development (information the company famously guards), which had been leaked by an insider. The court decided that bloggers and journalists should receive the same protections because "in no relevant respect do they appear to differ from a reporter or editor for a traditional business-oriented periodical who solicits or otherwise comes into possession of confidential internal information about a company".

| Comments (0) |




This is a SSHRC funded project:
Social Sciences and Humanities Research Council of Canada