understanding the importance and impact of anonymity and authentication in a networked society

Media Sources: Does Anonymity Reduce Credibility?

posted by:Marsha Hanen // 11:59 PM // May 31, 2005 // ID TRAIL MIX

The New York Times for May 8, 2005 carried an interesting piece by Adam Cohen: “The Latest Rumbling in the Blogosphere: Questions About Ethics.” Cohen argues that the largest news blogs can no longer realistically be viewed as outside the mainstream media, citing the presence of bloggers “at national political conventions, at the World Economic Forum at Davos and on the cover of Business Week”, and pointing out that “blogs helped to shape…some of the biggest stories of the last year – the presidential election, tsunami relief, Dan Rather.”

As bloggers become increasingly influential in providing us with a substantial portion of our news, information and opinion, questions about their reliability and ethical standards arise. To what extent can we trust what we read in blogs, particularly when no general ethical rules apply and, in many cases, bloggers are anonymous so that we are unable to check even for basic conflicts of interest?

Cohen suggests that it is hypocrisy for bloggers to insist on standards of ethical journalism – checking facts, inviting response from subjects of stories, avoiding real or apparent conflicts of interest, correcting errors and separating editorial content from advertising – for mainstream media but not for themselves. More importantly, though, he argues that “the real reason for an ethical upgrade is that it is the right way to do journalism, online or offline.”

The anonymity issue is particularly intriguing in light of another article that appeared on the reverse of the very same page in the same issue of the New York Times. This was a piece by Daniel Okrent, the Times’ Public Editor, entitled “Briefers and Leakers and the Newspapers Who Enable Them”.

Okrent discusses the issue of public mistrust of reporters’ use of anonymous sources (e.g., “according to a high-ranking official”) and “background briefings” which challenge our belief in the reliability and accountability of what appears on the news pages. It’s a question of integrity – not so much in the sense that we assume reporters are willfully misleading us, but in the sense that reporters, like others, are not above succumbing to the temptation, even in the face of insufficient evidence, to get something published before the competition does.

The Times’ Executive Editor Bill Keller told Okrent that reporters and editors “are obliged to tell readers how we know what we know…There are cases where we can’t, for excellent reasons – but they have to be exceptional, and they have to be explained to the reader.” The basic idea is that, although certain areas – criminal justice, diplomacy and intelligence reporting – could not function without anonymous reporting, in other areas reporters should use such sources only when there is, essentially, no other way to get the story, and the story is important enough to waive concerns about anonymity.

Coincidentally, the controversial Newsweek piece describing the alleged desecration of the Qur’an at Guantanamo Bay by flushing it down a toilet appeared on May 9, 2005, the very day after these two New York Times articles; and, as we saw, the rush to publish in that case led to dire consequences when passions were inflamed in several countries. At the very least, it would appear to be incumbent on the reporter to be certain of the facts. At this point, we still don’t know whether the desecration actually occurred – it may have, but it may not; and no reliable source has yet been prepared to confirm or deny it convincingly.

Since that article was published, we have seen the publication of the New York Times’ policy which Mr. Okrent was discussing; and, in addition to publishing an apology and a retraction, Newsweek has announced (May 23) a policy to limit the use of anonymous sources and to charge two senior editors with the responsibility of approving such sources. The reaction to the Newsweek “scandal” has brought about a flurry of comment on the Internet, on radio and television as well as in print, and even from White House Press Secretary Scott McClellan.

So it’s clear that the mainstream media are struggling, as they should, with the issue of when and how it is legitimate (and ethical) to use anonymous sources. And it’s reasonable to ask whether questions of anonymous reporting, opinion writing and advertising are different when the writing occurs online from situations in which it occurs in a print medium. Of course, as Cohen points out, bloggers often claim not to be journalists at all, but rather activists, or humorists or something else, and this is fair enough; but if they want to be taken seriously as responsible political writers and commentators, then perhaps it’s not unreasonable to expect similar measures of accountability from bloggers as from “mainstream” journalists.

Anonymity, both on the Internet and more generally, is well known to be one of those “on the one hand” and “on the other hand” issues. In the media there has long been an emphasis on the “scoop”, and Woodward and Bernstein’s All the President’s Men made investigative reporting using “deep throats” glamorous. All to the good, we might say. But the general canons of good journalism, including checking and confirming facts, must still be operative, and quality should, presumably, trump sensationalism.

Probably what should worry us is a situation in which writers are so caught up in a “gotcha” mentality that they think it’s acceptable to cut corners and publish items where they are unable to obtain suitable confirmation. In justification of this practice, some say the give and take of public discourse will correct errors over time. Perhaps so, but in the meantime, reputations can be smeared, actions set in motion and people can be harmed, even killed. At the very least, ethical considerations would appear to require that consequences be taken into account when evaluating the use of anonymous sources. On the reader’s side, the best defense may have to continue to be a healthy and even heightened skepticism.

Marsha Hanen is the President of the Sheldon Chumir Foundation for Ethics in Leadership.


Sensitivity and Personal Information

posted by:David Matheson // 05:01 PM // May 30, 2005 // Core Concepts: language and labels

The sensitivity of personal information

Is personal information necessarily sensitive? That is, from the fact that a given bit of information about an individual is personal does it follow that the information is sensitive?

The question is relevant to privacy concerns, since informational privacy is usually understood in terms of personal information: one can have privacy with respect to personal information about oneself, but never with respect to non-personal information. (That's why, for example, it makes sense to talk about my privacy with respect to information about my sexual or drinking habits, whereas it doesn't make sense to talk about my privacy with respect to information about the physical properties of quarks. The latter is not personal information about me.) And the general consensus in the privacy literature is that, yes, personal information is indeed necessarily sensitive. I think that's a mistake, however, so let me explain why.

There are two main ways in which one might try to maintain that personal information is necessarily sensitive. First, one might claim that (1) in order for information about an individual to be personal, that individual herself must not want the information widely known. Second, one might claim that (2) in order for information about an individual to be personal, the information must be of such a sort that most members of the individual's society would not want information of that sort widely known about themselves, regardless of whether that individual herself wants it. (This second account comes originally from William A. Parent, and various others in the privacy literature have followed his lead here.)

The trouble is that neither of these accounts escapes refutation by clear counterexample. Consider the following counterexample to (1). Jack is a guy who simply doesn't care at all who knows what about him. As far as he's concerned, every fact about him from the details of his fantasy life to how often he clips his toenails is open season to anyone who cares to ask. (The existence of people like Jack is not so bizarre as one might think, given the increasing popularity of highly personal blogs on the Web.) What should we say of Jack? According to (1), there is no such thing as personal information about him, since he's happy to have anyone at all know anything at all about him. But that's clearly wrong. The mere fact that Jack doesn't care who knows what about him -- that he's not sensitive about any of his information -- doesn't mean that none of his information is personal. The right thing to say is rather that he simply couldn't care less about whether anyone knows his personal information.

Consider now the following counterexample to (2). Suppose there is a society in which, as it happens, the majority of individuals don't care whether anyone else knows facts about the intimacies of their (say) sexual lives. Due to the propaganda and brainwashing of state officials, say, almost every individual in this society has become conditioned not to care about this sort of exposure. Does it follow that for any individual in such a society, details about the intimacies of her sexual life are non-personal? Clearly not, yet that is precisely what (2) implies. The mere fact that most people in this society don't care who knows what about their sexual lives doesn't mean that information about their sexual lives is not personal. Contrary to (2), the natural thing to say is that, sadly, most people in this society have been conditioned not to care about whether personal information -- about their sexual lives -- is widely known.

So, both (1) and (2) turn out to be false. Granted, there may be some other way of trying to establish the supposedly inherent sensitivity of personal information, but for my part I can't see how it would go. I think we have to recognize that personal information, whatever it turns out to be, is not necessarily (even if typically) sensitive.

The relativity of privacy

Here's one reason why this is important. If personal information is not necessarily sensitive, then one common argument for the relativity of privacy (which relies on the opposite claim) goes by the wayside. The argument goes like this:

Premise 1. Privacy entails personal information.
Premise 2. Personal information entails sensitive information.
Conclusion 1. Therefore, privacy entails sensitive information.
Premise 3. Sensitive information is relative to individuals or societies.
Conclusion 2. Therefore, privacy is relative to individuals or societies.

Premise 1 here is just another way of putting the point raised in the second paragraph above. Premise 2 captures the idea that personal information is necessarily sensitive. Premise 3 says, in effect, that whether a bit of information about an individual is sensitive depends either on (a) whether that individual wants the information widely known or on (b) whether most members of that individual's society want information of that sort widely known about themselves; and (a) and (b) vary from person to person and from society to society, respectively. And the final step of the reasoning, Conclusion 2, says that whether an individual has privacy with respect to information about her depends on either (a) or (b). Privacy is relative, in other words, in the sense that whatever privacy you have right now could be increased or diminished simply by either changing your mind about whether you want the relevant information widely known, or by people in your society changing their mind in a similar way -- even if nothing else changes.
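To make the shape of the reasoning explicit, here is a minimal, schematic formalization of the argument (the proposition names are placeholders of my own choosing, nothing more). It shows that the inference itself is valid, a simple chain of implications, so the only way to resist Conclusion 2 is to reject one of the premises:

```lean
-- Schematic placeholders (assumptions for this sketch only):
--   Priv : an individual has privacy with respect to some bit of information
--   Pers : that information is personal
--   Sens : that information is sensitive
--   Rel  : the matter in question is relative to individuals or societies
variable (Priv Pers Sens Rel : Prop)

-- Premises 1-3 as implications; Conclusions 1 and 2 follow by chaining them,
-- so the argument can only be resisted by denying one of the premises.
example (p1 : Priv → Pers) (p2 : Pers → Sens) (p3 : Sens → Rel) : Priv → Rel :=
  fun h => p3 (p2 (p1 h))
```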

Once we see that personal information is not necessarily sensitive, however, we see that Premise 2 of this line of reasoning won't do. So, to the extent that we do think privacy is relative in the way that Conclusion 2 suggests, we'll need some other argument. This one fails.

For the record, I don't think privacy is relative in the way that Conclusion 2 suggests: I think you've got to do more to increase or diminish my privacy than merely bring about those changes of mind. I do think privacy is relative in another way, but that's a story for another time.



Blood-powered "vampire" fuel cells

posted by:Chris Young // 04:06 PM // May 28, 2005 // TechLife

So you thought implants were science fiction? vnunet.com reports that a Japanese research team has successfully developed a miniature power generation device that creates an electric current using human blood as the energy source. Used in implants, the current is "enough for simple processing and radio communication". The era of the 'network of people' will soon be here.



The results are in: 2005 Electronic Monitoring & Surveillance Survey

posted by:Marty // 06:31 PM // May 25, 2005 // Surveillance and social sorting

Last week, on May 18, 2005, the American Management Association issued a press release that highlights the results from its 2005 survey on workplace electronic monitoring and surveillance. Here are some notable results:


- 26% have fired workers for misusing the Internet;
- Another 25% have terminated employees for e-mail misuse;
- 6% have fired employees for misusing office telephones;
- 76% of employers monitor workers' Website connections for inappropriate content;
- 65% of companies use software to block connections to inappropriate Websites—a 27% increase since 2001, when AMA and ePolicy Institute last surveyed electronic monitoring and surveillance policies and procedures in the workplace.

"Of those organizations that engage in monitoring and surveillance activities, fully 80% inform workers that the company is monitoring content, keystrokes and time spent at the keyboard; 82% let employees know the company stores and reviews computer files; 86% alert employees to e-mail monitoring; and 89% notify employees that their Web usage is being tracked."

At least 89% of respondents have the decency to notify employees that their usage is being tracked.

Forgetting for a moment obvious issues of discomfort, think, as an aside, of how employees can spin snooping to their benefit and use the fact that they are being watched to create the image of being the great employee. Perhaps they can look at key websites and send useless e-mails complimenting the company and superiors, thereby helping mold a model-employee image. Imagine getting a bigger raise because the boss, who reads your e-mail, is under the impression that you worship the ground she walks on. Of course, how likely is that?

What is really distressing here is not the use of technology to track online and telecommunications activity, but rather the use of technology, such as GPS, to track physical activity.

"Employers who use Assisted Global Positioning or Global Positioning Systems satellite technology are in the minority, with only 5% using GPS to monitor cell phones; 8% using GPS to track company vehicles; and 8% using GSP to monitor employee ID/Smartcards.
The majority (53%) of companies employ Smartcard technology to control physical security and access to buildings and data centers. Trailing far behind is the use of technology that enables fingerprint scans (5%), facial recognition (2%) and iris scans ( 0.5%)."

How long until these numbers skyrocket? All it takes is a decrease in acquisition cost to make the business case all the more viable. Once that occurs, the sanctity of physical movement and activity (or inactivity, as the case may be when trying to sneak a nap at one's desk) will be eroded.



Report is a roadmap to canning 'spam'

posted by:Michael Geist // 11:51 PM // May 24, 2005 // ID TRAIL MIX

Lost amidst the high drama on Parliament Hill last week was the release of "Stopping Spam," the National Task Force on Spam's final report. Established in May 2004 by the minister of industry, the task force was comprised of Internet service providers (ISPs), marketers, consumer groups, and academic experts (I was a member of the task force and served as co-chair of its Law and Enforcement Working Group).

The task force provided Industry Minister David Emerson with a 60-page report featuring 22 recommendations, including a call for a new spam-specific law and a central co-ordinating body to improve enforcement.

Its work was guided by three key principles: (i) no country can single-handedly eliminate spam but each must do its part to curtail the spam that originates from within its borders; (ii) any spam solution must include effective laws, technological solutions, as well as better business and consumer practices; and (iii) new laws should only be pursued if the current Canadian legal framework is ineffective.

The magnitude of the Canadian spam problem quickly became apparent. We heard from ISPs burdened by enormous costs trying to block billions of spam messages each day, from marketers discouraged that the promise of email as an effective marketing channel was rapidly eroding, and from individual Canadians struggling under a daily deluge of junk email.

Moreover, we learned that Canada is a leading source of spam, typically ranked among the top six sources worldwide. Given that situation, all agreed that Canada must take steps to clean up its own spam problem, while working with other countries on international solutions.

Technological and business solutions will clearly play critical roles in combating spam. Working with the task force, the ISP community developed a series of best practices that I believe should be considered mandatory by Canadian ISPs since they provide a framework for dramatically reducing the amount of spam that leaves their networks.

Similarly, individual Canadians should take note of the costs of responding to spam messages and be guided by the maxim found in a task force education campaign of "don't try, don't buy, and don't reply."

Alongside technological and business solutions, Canada also needs an effective anti-spam legal framework. With national privacy legislation, the Competition Act, and the Criminal Code, many of the provisions contained in other countries' anti-spam statutes are admittedly already part of Canadian law. The challenge was therefore to test the effectiveness of current Canadian law in order to identify any gaps or shortcomings.

Together with officials from the Competition Bureau, the Privacy Commissioner of Canada, and law enforcement, the task force targeted the provisions that might be used to launch cases against Canadian spamming activity.

The Competition Bureau completed its case in December, demonstrating that the law could be used to counter spam containing false claims. That same month, the Privacy Commissioner released her first spam decision, responding with a well-founded finding to a complaint that I launched against the Canadian Football League's Ottawa Renegades.

Despite many meetings, the law enforcement initiatives languished, leading the task force to conclude that little progress was made due to a lack of prioritization and questions of jurisdiction.

The test cases demonstrated that while existing laws address specific aspects of spam, they are not sufficient to achieve the overall goal of deterring spammers in Canada. The report therefore recommends legislative and enforcement changes to remedy the problem.

From a legislative perspective, we recommend that the federal government enact a spam-specific law. That law should establish an opt-in regime by making failure to obtain appropriate consents before sending commercial email an offence.

Such an approach would distinguish the Canadian law from its U.S. counterpart, which contains only an opt-out requirement.
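The practical difference between the two regimes is easy to see in a schematic sketch (illustrative code only; neither statute is, of course, written in these terms):

```python
# Illustrative sketch of opt-in versus opt-out sending rules; the address
# lists are invented and no real statute is expressed this way.
consented = {"alice@example.com"}    # recipients who have given prior consent
suppressed = {"bob@example.com"}     # recipients who have asked to opt out

def may_send_opt_in(address: str) -> bool:
    # Task force recommendation: consent must exist before any commercial email.
    return address in consented

def may_send_opt_out(address: str) -> bool:
    # U.S.-style approach: sending is permitted until the recipient opts out.
    return address not in suppressed

print(may_send_opt_in("carol@example.com"))   # False: no prior consent
print(may_send_opt_out("carol@example.com"))  # True: has not opted out
```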

Moreover, an opt-in system in a spam-specific law will take the pressure off the current national privacy statute, which is ill-equipped to deal with serious spam issues since it does not provide the Privacy Commissioner with the ability to levy tough penalties or exercise order-making powers.

The task force identified additional gaps in the current statutory framework. New provisions are needed to address issues such as false or misleading headers, dictionary attacks, and the harvesting of email addresses.

Underlying all of these provisions would be tough penalties, modeled after the Australian system. Backed by a statute that features potential multi-million dollar penalties, Australia has enjoyed some success in ridding itself of local spammers.

In order to engage the private sector in the legal fight against spam, the task force is also calling for the establishment of a private right of action that could facilitate lawsuits against spammers. This would make it far easier for all Canadians, particularly ISPs, to use national law for spam suits, rather than relying on U.S. law, as has been the recent practice.

While a better legislative framework is essential, a more proactive enforcement system is also needed. The task force recommends creating a central co-ordinating body to foster greater collaboration between enforcement agencies and to provide oversight to ensure that anti-spam actions receive appropriate resources and prioritization.

The central co-ordinating body would also keep private sector parties accountable by issuing regular public reports and would assist the public by establishing education programs and a complaints mechanism.

The task force report provides a roadmap to creating a spam-specific statute and enforcement framework that would be far more robust than our current system. Implementing new legislation may be difficult in the current political environment, but given the rising costs associated with spam, failure to act is not an option.

Michael Geist is the Canada Research Chair in Internet and E-commerce Law at the University of Ottawa. He is on-line at www.michaelgeist.ca.




Podcasting -- the next DRM battlefield?

posted by:Jason Millar // 09:10 PM // // TechLife

Heard about podcasting? If you haven't, Apple's recent announcement that it will support podcasting in its upcoming release of iTunes will certainly thrust it into the mainstream vernacular.

Podcasting is two things. First, it's a completely new way of getting your audio published and distributed on the web, which takes advantage of the RSS feeds commonly used for text-based news subscriptions. Second, it is a way of downloading audio directly to a digital media player, such as an iPod, in a manner that is fundamentally different from the traditional solutions offered by Kazaa or Napster. The beauty of the system is that the user can simply subscribe to a syndicated podcasting feed, and the MP3s are downloaded and synchronized to the device automatically as they are published on the Internet, via software like iPodder.
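To make the mechanics concrete, here is a minimal sketch of the core step a podcast client performs: fetch an RSS feed and collect the audio enclosure URLs to download. The feed URL is a made-up placeholder, and a real client such as iPodder or iTunes layers subscription management and device synchronization on top of this.

```python
# Minimal sketch of podcast-feed handling; the feed URL is hypothetical.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.com/podcast/rss.xml"

def enclosure_urls(feed_url: str) -> list[str]:
    """Return the audio file URLs advertised by a podcast RSS feed."""
    with urllib.request.urlopen(feed_url) as response:
        tree = ET.parse(response)
    urls = []
    for item in tree.iter("item"):            # each <item> is one episode
        enclosure = item.find("enclosure")    # the attached audio file
        if enclosure is not None and enclosure.get("url"):
            urls.append(enclosure.get("url"))
    return urls

# A real client remembers which enclosures it has already fetched and
# downloads only the new ones to the media player.
for url in enclosure_urls(FEED_URL):
    print(url)
```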

Typical podcasts consist of homebrewed radio programs presented in an interview style. But sites like GarageBand.com have recently begun offering all of their music via podcasting.

For a much more complete description of the podcasting universe, check out this article about the inventor of podcasting, Adam Curry, and this podcasting blog.

A recent list of endangered devices published by the Electronic Frontier Foundation lists the iPod among those targeted by anti-copyright-infringement laws proposed in the US Congress. Given that podcasting allows the widespread distribution of audio, it will certainly be subject to the ongoing debate surrounding Digital Rights Management, as it offers a new method of MP3 distribution that is quite different from the traditional solutions. However, it also represents a potentially widespread "uninfringing" technology, given that mainstream audio producers such as the CBC are beginning to podcast their content.



RFID/Fingerprint Enabled DVD Players

posted by:Todd Mandel // 05:30 PM // May 20, 2005 // TechLife

From Wired:

Researchers in Los Angeles are developing a new form of piracy protection for DVDs that could make common practices like loaning a movie to a friend impossible.
University of California at Los Angeles engineering professor Rajit Gadh is leading research to turn radio frequency identification, or RFID, tags into an extremely restrictive form of digital rights management to protect DVD movies.

Read the full article at: http://www.wired.com/news/digiwood/0,1412,67556,00.html



Nothing to Hide

posted by:David Matheson // 12:52 PM // // Core Concepts: language and labels

To be apathetic about protecting one's privacy typically involves the belief that one has no reason to protect one's privacy. And, as Teresa Scassa points out in her recent ID Trail Mix post, one common ground of this belief comes out in expressions of the "nothing to hide" attitude. But what is the idea behind this attitude, exactly, and how is it supposed to warrant the apathetic belief?

It seems to me that when someone says "I'm not concerned about my privacy; I've got nothing to hide" they are typically offering up one or the other of two basic arguments. I'll call the first the "Licit Behavior Argument" and the second the "No Desire Argument". My aim here is to explain why I think that neither argument is very good, and hence why I think that the "nothing to hide" attitude fails to serve as a decent basis for apathy about privacy protection.

1. The Licit Behavior Argument

The Licit Behavior Argument is pretty straightforward, and can be captured by the following simple syllogism:

Premise 1: My behavior is licit (i.e. not illegal or immoral in any serious way).
Premise 2: If my behavior is licit, then I have no reason to protect my privacy.
Conclusion: Therefore, I have no reason to protect my privacy.

I'm not myself inclined to challenge those who claim the likes of Premise 1 about themselves. In any case, trying to convince someone that she ought to be more concerned about protecting her privacy by trying to convince her that she's more wicked than she realizes strikes me as a strategy that's unlikely to succeed.

The real problem with the Licit Behavior Argument comes with Premise 2. It assumes that reasons to protect one's privacy are all disreputable, for it assumes that if one does have a reason for protection, then there must be some illicit activity that one would like to keep others from knowing about. But if the literature on privacy has brought anything clearly to light over the last 30 years or so, it's that there is a wealth of reasons for an individual to protect her privacy that have nothing to do with her engaging in illicit behavior. If --as is plausible-- the possession of privacy is a necessary condition on (or at any rate a very useful means to securing) differential social relations such as friendship and intimacy, on individual autonomy, and on excellence in the political arena, then there are plenty of reasons to protect one's privacy (not to mention that of others) that are motivated by nothing but the most noble of goals.

2. The No Desire Argument

The other argument that may be offered in expressions of the "nothing to hide" attitude is unconcerned with (il)licit behavior. It simply points to the absence of desire on the part of the speaker when it comes to protecting her privacy. This No Desire Argument can also be captured quite simply:

Premise 1: I have no desire to protect my privacy.
Premise 2: If I have no desire to protect my privacy, then I have no reason to protect my privacy.
Conclusion: Therefore, I have no reason to protect my privacy.

I suspect that most people who reason in this way are quite right when it comes to the first step, Premise 1: they simply don't have a burning desire to protect their privacy, and couldn't care less about the whole business. And even if we admit (as I think we should) that people can occasionally be quite wrong about what they really desire, I think in most cases these individuals will nonetheless be in a better position than I to say whether they have the relevant desire.

But that still leaves Premise 2. On the face of things, it looks like this second step of the No Desire Argument is trivially true. Doesn't it just make the obvious point that if someone doesn't care about her privacy, then (no surprise!) she doesn't care about her privacy?

Well, no, it doesn't. And far from being trivially true, I think Premise 2 of the No Desire Argument is false for most people. Here's why. The consequent of that premise -- the bit that comes after the 'then' -- talks about having no reason to protect one's privacy. The antecedent of the premise -- the bit that comes after the 'if' -- talks about having no desire to protect one's privacy. So what Premise 2 of the No Desire Argument in effect says is this: No one can have a reason to protect her privacy unless she desires to protect it. (Others might have reasons for protecting their, or even her, privacy, but she herself doesn't if she has no desire to.) But now consider the more general principle that underwrites this idea: No one can have a reason to perform an action unless she desires to perform that action. The falsity of this general principle is easily seen by thinking about the following case (modified from the original case discussed by the late British philosopher Bernard Williams in a famous paper entitled "Internal and External Reasons"). Suppose I love gin, and presently have a strong hankering for a taste of the stuff. There is, in fact, a full bottle of it right in front of me. Sadly, however, I don't take a sip, because I'm under the mistaken impression that the bottle contains lighter fluid. What should we say of this situation? I have no desire to drink from the bottle, due to my false belief about what it contains. That explains why I don't actually drink from the bottle. Nevertheless, I have a reason to drink from it, even if the reason is unknown to me: it would satisfy a desire for gin that I happen to have. (True enough, I would desire to drink from it if I were properly informed about its contents; but since I'm not in fact so informed, I don't in fact desire to drink from it.) So, generally, one can have a reason to perform an action despite having no desire to.

A similar point can now be made about Premise 2 of the No Desire Argument. Due to ignorance about the nature or consequences of protecting one's privacy -- e.g. a failure to understand how important privacy, and hence its protection, is for securing such goods as friendship, intimacy, autonomy, political excellence, etc. -- one can in fact have a reason to protect one's privacy despite having no desire to protect it. If one's ignorance were removed, one would have the desire, given that one desires these other goods; and that suffices to give one a reason for protecting one's privacy in the absence of any actual desire to do so.



FEDERAL COURT DISMISSES CRIA's APPEAL SEEKING DISCLOSURE OF THE IDENTITIES OF 29 PSEUDONYMOUS FILE SHARERS

posted by:Ian Kerr // 03:06 PM // May 19, 2005 // Digital Democracy: law, policy and politics

earlier today, the canadian federal court of appeal released its reasons for judgement in the much publicized litigation between the canadian recording industry association (cria) and 29 pseudonymous P2P file-sharers.

based on a lack of evidence, the court dismissed cria's appeal, upholding a prior ruling that refused to disclose the identities of 29 alleged file-sharers.

in rendering its decision, the court acknowledged that "[c]itizens legitimately worry about encroachment upon their privacy rights" and that such "intrusion not only puts individuals at great personal risk but also subjects their views and beliefs to untenable scrutiny."

at the same time, the court was careful to frame the issue as an attempt to balance "the tension existing between the privacy rights of those who use the Internet and those whose rights may be infringed or abused by anonymous Internet users."

having declared the outcome "a divided success," the court affirmed cria's "right to commence a further application for disclosure of the identity of the 'users'."

although the long-term implications of this divided success are yet unknown, i fear that cria will interpret the dismissal of its claim as an invitation to conduct more aggressive surveillance in furtherance of what michael geist has predicted will result in thousands of suits against individual Canadians in the months ahead.

isn't it likely that, in attempting to affirm privacy in principle, the court's invitation to cria to commence further applications without prejudice might actually undermine the privacy of netizens?

in a recently released draft book chapter titled "Nymity, P2P & ISPs," alex cameron and i predicted that:

... a number of the Court’s findings in BMG v. Doe may quite unintentionally diminish Internet privacy in the future. Recall that the result in BMG v. Doe turned on the inadequate evidence provided by CRIA. The decision openly invites CRIA to come back to court with better evidence of wrongdoing in a future case. Such an invitation may well result in even closer scrutiny of Internet users targeted by CRIA, both to establish a reliable link between their pseudonyms and their IP address and to carefully document the kinds of activities that the individuals were engaged in for the purpose of attempting to show a prima facie copyright violation. It could also motivate the development of even more powerful, more invasive, surreptitious technological means of tracking people online. This increased surveillance might be seen as necessary by potential litigants in any number of situations where one party to an action, seeking to compel disclosure of identity information from an ISP, is motivated to spy on the other, set traps and perhaps even create new nyms in order to impersonate other peer-to-peer file-sharers with the hope of frustrating them, intimidating them, or building a strong prima facie case against them.

regardless of whether cria and other organizations respond along these lines, i think that it is imperative that our courts stop paying lip-service to privacy and start grappling with the gruesome implications of: (i) permitting low evidentiary thresholds for identity disclosure, and (ii) offering open invitations, with little guidance or constraint, that encourage large and powerful organizations to gather more evidence by better monitoring everyone's intellectual consumption habits.

such a course is a recipe for disaster.



RFIDs: Reasons For Infinite Despair?

posted by:Teresa Scassa // 11:50 PM // May 17, 2005 // ID TRAIL MIX

With my colleagues Theo Chiasson, Michael Deturbide and Anne Uteck I have just completed a project on Radio Frequency Identification tags (RFIDs) under the federal Privacy Commissioner’s Contributions Program. As you might imagine, on concluding such a project I am feeling a little bleak about the future of privacy – to be honest, I’m a little bleak about the “present” of privacy.

As you are all probably well aware, RFIDs are tiny chips equipped with antennae. The chips contain data which is transmitted when the tag is activated by an electronic impulse sent to it by a “reader”. The reader in turn is connected to a database. Currently there are plans to place RFID tags in most individual product items within a few years’ time as a means of improving inventory control. The tags will amount to unique product identifiers, as opposed to the generic UPC codes currently on products. Thus, rather than identifying a product as “Brand X soap”, the RFID can identify the product as that package of Brand X soap manufactured in Mississauga, Ontario on June 12, 2004. Tag data can be matched with customer personal information on credit cards or loyalty cards when purchases are made. RFIDs also raise a number of other privacy concerns; if they are not deactivated or removed at the point of purchase, they remain active and able to communicate their information long after the initial transaction has taken place.
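To illustrate the difference this granularity makes, here is a minimal sketch, with invented tag and loyalty-card data, of how an item-level identifier, unlike a generic UPC shared by every unit of a product, can be joined to a particular customer at the checkout:

```python
# Invented data for illustration only.

# A generic UPC is identical for every package of "Brand X soap".
UPC_BRAND_X_SOAP = "064200115896"

# An item-level RFID tag carries one identifier per physical package.
tag_read = {
    "tag_id": "urn:tag:brandx-soap:0000412345",  # unique to this package
    "product": "Brand X soap",
    "manufactured": "2004-06-12",
    "plant": "Mississauga, ON",
}

# The loyalty-card swipe recorded in the same transaction.
loyalty_swipe = {"card_id": "0000-1111", "customer": "J. Smith"}

# The join that raises the privacy concern: the retailer now knows not just
# that this customer bought soap, but exactly which package she carries, and
# an undeactivated tag can keep answering readers after she leaves the store.
purchase_record = {**tag_read, **loyalty_swipe}
print(purchase_record)
```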

There are many reasons to be bleak about the future of privacy. The private sector data industry is booming, and it is hard to get through a day without leaving some data droppings to be avidly collected by data-hungry scavengers. Although we have private sector data protection legislation in Canada (and here I’ll just refer to the federal statute PIPEDA for convenience), the law is frankly not up to the task before it. Enforcement and monitoring of the legislation depend on both consumer awareness and the government’s willingness to provide adequate resources to the Office of the Privacy Commissioner. While admittedly much better than nothing at all, the law as drafted is not adequate to respond properly to emerging technologies of surveillance and data gathering. RFIDs are a perfect example of this. There is nothing in PIPEDA that can address the development, use, or deployment of RFID tags in inventory; the legislation comes into play only at the point where tag data is matched to personal information. Yet the very presence of the tags in consumer items raises privacy issues (industry reassurances notwithstanding).

In some jurisdictions there have been attempts to specifically address RFID technology and place terms or limits on its use. There is a good case to be made for technology-specific legislation or regulations. In the EU, the privacy issues raised by new technologies such as cookies have been recently addressed in a separate Directive. The Directive does not alter the fundamental fair information practices that apply; rather, it speaks directly to unique issues raised by particular technologies. Canada needs to be more proactive in developing technology-specific privacy norms to go along with the technology neutral ones.

PIPEDA also leaves far too many loopholes for governments and their agents to harvest data from private sector companies, or to receive gifts of data surrendered voluntarily by civic minded companies. Fair information principles only take you so far if your data, collected with your consent for a specific purpose, is passed along to government agents for another purpose without your knowledge or consent. And frankly, innocuous fragments of data change their character when combined with other innocuous fragments of data – consent to collection of the individual fragments is hardly fully informed. The privatization of security and law enforcement through the government use of the huge data resources of private sector companies is a major threat to privacy. One need only look south of the border for lessons in this regard. Adding even more refined data, gathered through product-specific RFID tags, simply sharpens the data picture that can be drawn by private corporations or government agencies. It is possible to shrug off RFID privacy concerns (as many proponents of the technology do) on the basis that the tags only alter the quality and not the kind or volume of data already collected. Yet this attitude relies on a level of complacence about the current data harvesting practices of the private sector that is simply not warranted.

It is true that consumers do seem largely indifferent – both to RFIDs and to the more general data collection practices. This complacence is attributable to a number of factors. It is difficult for most people to comprehend the impact that private sector data collection can and will have on personal privacy. Much of the collection and processing is invisible and undetectable. Further, too much onus is placed on individuals to learn how to protect their privacy. In many cases, the technology moves too quickly, or self-help measures require a degree of technological ability that is beyond most consumers. It is one thing, for example, to talk about using a secure browser and paying attention to one’s security settings, but for many Internet users this is just one more daunting technological project that is not necessary for them to perform in order to get where they want to go on the Internet, and which is not perceptibly rewarding. Placing the onus on consumers to remove, deactivate or block tags will result in a blank apathy towards them.

Consumer complacence has other sources as well. We are all prodded to accept “rewards” in exchange for personal information. For many people such rewards make a difference in terms of what they can afford to acquire. The data provided in exchange for the rewards seems trivial – a fair bargain or even an advantageous one for the consumer. There is also the “nothing to hide” attitude: what is the harm in letting others collect trivial data about oneself if one has nothing to hide? There is even an upside if the collection of trivial data about persons reveals criminal activity by others.

It is also the case that the privacy message does not seem to be getting out to consumers in a very effective manner. Privacy is an amorphous concept, and it is difficult to persuade people of privacy threats in the abstract. Identity theft scares people – when it occurs, it makes for compelling news stories. Ironically, though, identity theft can be used as a rationale for increasing private sector data collection. The more a company knows about you and your habits, the easier it is to detect when some credit card or other activity is not conforming to those habits. The more comprehensively your identity is established, the harder it is for anyone else to assume it.
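As a toy illustration of that trade-off (invented figures, far simpler than any real fraud-detection system), the same accumulation of purchase history that erodes privacy is exactly what makes out-of-pattern activity easy to flag:

```python
# Toy sketch only: real systems use far richer profiles than a spending average.
typical_purchases = [42.10, 18.75, 55.00, 23.40, 61.20]  # recent card activity

def looks_anomalous(amount: float, history: list[float], factor: float = 3.0) -> bool:
    """Flag a charge that is far outside the customer's usual spending."""
    average = sum(history) / len(history)
    return amount > factor * average

print(looks_anomalous(39.99, typical_purchases))    # False: fits the habit
print(looks_anomalous(2500.00, typical_purchases))  # True: flagged for review
```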

There is also, of course, the fact that we have a cultural tolerance of privacy invasion that is reinforced at an early age. To some extent this is essential to living in a society with others – we need to accept and even embrace a certain level of benevolent surveillance. However, this cultural tolerance has perhaps not kept up with the times: we teach our children to accept surveillance in a wide variety of contexts, and we have been relatively slow to teach them about privacy beyond privacy of the body and home.

So what do RFIDs add to the mix that was not already there before? Promoters of the technology take the view that RFIDs will lower costs and improve inventory control while having little impact on consumer privacy. Consumers will, in fact, benefit, as better inventory control leads to lower prices. After all, RFID proponents argue, information about shopping habits is already being collected, compiled, used, shared and disclosed. Yet RFIDs allow for the collection of finer detail from transactions – and the finer the detail, the greater the level of surveillance. This is the case, quite apart from any post-transaction collection or use of RFID tag data on items in the possession of consumers. In the context in which we already live, it would be naïve to assume that sophisticated ways to collect, use, and misuse this information will not arise.

What’s to be done? I’m not sure there is much we can do – hence my rather bleak outlook these days. It seems that raising consumer awareness, raising the profile of privacy issues, and developing a political voice that is, if not louder than, at least not entirely drowned out by corporate voices, is a crucial first step. We need to think seriously about placing limits on the sharing of private sector data with government. It may also be time to move away from strict technology neutrality and develop ways in which we can quickly formulate more technology-specific forms of privacy regulation when the need arises.



Personal Data Search Engines

posted by:Shannon Ramdin // 09:17 AM // May 13, 2005 // Surveillance and social sorting

Wired News recently interviewed ZabaSearch’s CEO Robert Zakari and Chairman Nicholas Matzorkis. ZabaSearch.com is one of the most extensive personal-data search engines on the net. The database provides personal information, such as residential addresses and phone numbers (listed & unlisted). For an extra fee, background checks and criminal history reports can also be obtained. Is ZabaSearch.com exploiting data privacy or merely providing a synthesized location with government and public information?

Click here for the full interview.



College seeks access to identify website's creator

posted by:Jennifer Manning // 07:54 AM // // Digital Democracy: law, policy and politics

An upstate New York college is seeking a federal court's help to track down the unknown creators of a website school officials say has been allowing harassing postings against faculty and students.

"This isn't about free speech," said Macreena Doyle, a spokeswoman for St. Lawrence University, 150 miles north of Syracuse. "This has been targeting specific individuals for nothing more than ridicule, and that's different."

Lawyers for the school are seeking a court order giving the university access to records from Time Warner Cable that could help identify the operators of the site.

Click here for the rest of the article.



What your playlist says about you

posted by:Marty // 06:36 PM // May 12, 2005 // TechLife

Listening In: Practices Surrounding iTunes Music Sharing, presented at the 2005 Computer-Human Interaction Conference, details the results of a recent study offering some informative insight into the nuances and dynamics of behavioural representation. The paper speaks to how music playlists in general play a role in establishing and portraying characteristics of ourselves to those around us, including co-workers. Furthermore, the study establishes that sharing digital music can lead to strong group identities.

Of interest to Blog*on*nymity are the study’s revelations on the nexus between privacy and the sharing features of music playlists.

Those who used iTunes as a personal music library prior to the version release that enabled sharing upgraded their versions of iTunes and started sharing immediately. The rest enabled sharing as soon as they started using iTunes; sharing, as it was seen, was part of the “ethos” of the application.

...

By default, one’s own music sharing is turned off; users must explicitly turn it on. One participant (P9) reported that if his music had been automatically shared, he would have strongly resented it and turned it off. Giving users control over whether they share their music from the start respected users’ privacy concerns in sharing.

This study adds another dimension to the notion that technology alters the way humans relate to one another, in large part due to their online or digital identity. Thus, the statement above regarding control exemplifies that control is a central aspect of how one might approach the realities of making one's own personal information or characteristics available in the online world.



Abortion Records, Health Privacy, and De-Identification

posted by:Robert Gellman // 11:06 PM // May 10, 2005 // ID TRAIL MIX

This is a reply to Daphne Gilbert’s recent post titled The Power of Privacy to Obscure Equality: Abortion Rights Under Attack. I am not offering a rebuttal because I am either in agreement with her conclusions or at least sympathetic. But I have a somewhat different view of US health privacy law, and I want to comment on an abortion records case that raises some novel anonymity issues.

Part I. Overview of US Health Privacy Law

Until a few years ago, health privacy law in the US was largely a matter of state law, and most state laws were (and still are) a hodgepodge of statutes and rules covering disparate elements of health privacy. Congress started the process of federalizing health privacy law with the Health Insurance Portability and Accountability Act of 1996, better known as HIPAA. The law ultimately directed the Department of Health and Human Services to promulgate federal health privacy rules. The health privacy rules, which took effect in 2003, can be found at http://www.hhs.gov/ocr/hipaa/. Stronger state and federal laws remain in force, however.

The federal rules generally establish a common policy for all health records, regardless of content or record keeper. This choice is probably correct policy. While people often perceive differences in the sensitivity of health records, it is difficult to draw clear lines based on content. Records sometimes identified as sensitive include records about AIDS, drug or alcohol abuse, sexually transmitted diseases, and genetic records. State laws often have special protections for these types of records, and it can be challenging to comply with laws when more than one applies to the same record. Consider the difficulty applying five or more different laws to records of a patient who is a drug abuser, has AIDS, has the gene for Huntington’s Disease, and is depressed about it all.

If you work at it long enough, you can find sensitivity everywhere. A dentist told me of a patient whose biggest secret was his dentures. Not even the patient’s wife knew that he had false teeth. Are psychiatric records always sensitive? Not to everyone. Some people talk (endlessly) about their psychiatrists. But I have yet to meet anyone who talks about a visit to a proctologist. Sensitivity is not a clear, consistent, or predictable concept in health. One person will guard a cold as a medical secret while the next loudly describes a cancer diagnosis at a cocktail party.

The only records that receive special treatment under HIPAA are a very narrowly defined category of psychotherapy notes. However, the extra protections are limited. For example, the notes can still be disclosed for law enforcement purposes in response to subpoenas and court orders.

Thus, abortion records receive no special treatment or protection under HIPAA. This means, for example, that the records can be disclosed for numerous purposes, including health care oversight, public health, research, law enforcement, national security, and many other activities. HIPAA provides that nearly all routine disclosures of health records can occur without specific notice to or consent of the data subjects. Indeed, disclosures are allowed in most cases even if a patient expressly objects to the disclosure. In sum, HIPAA allows virtually every disclosure necessary or convenient for the health care system, for law enforcement, and for many other governmental purposes.

If there is a saving grace here, it is that the federal rules do not mandate that covered entities make allowable disclosures. Under the rule, disclosures are permissive. However, many other laws compel disclosure. Some are unobjectionable. Physicians have long been required to report communicable diseases to public health authorities. Child abuse must also be reported. Gunshot wounds are also subject to reporting laws, as are birth defects and some other medical conditions in some states.

We are not done with compelled disclosures. Court orders, grand jury subpoenas, search warrants, and the like may require disclosure of health records. In some cases (e.g., subpoenas), the record keeper or the data subject may have an opportunity to contest the order. However, with a search warrant, the police seize records without any opportunity to object.

We can now draw some conclusions about US health privacy law. First, federal privacy rules apply to most health records, although state laws remain relevant. Second, the HIPAA protections against disclosure are weak. Third, abortion records have no special status under the federal rules.

Part II. US Constitutional Law and Privacy

The Supreme Court’s decision to uphold the right to abortion was based in significant part on the right to privacy. I won’t repeat Daphne Gilbert’s discussion of Roe v. Wade. However, there is more to the constitutional analysis of privacy.

In 1977, the Supreme Court addressed privacy issues in Whalen v. Roe, a case involving a clash between health privacy and the ability of the state to mandate reporting of patient information, see: http://supct.law.cornell.edu/supct/html/historics/USSC_CR_0429_0589_ZC1.html. The case involved a constitutional challenge to a New York State statutory requirement that the names and addresses of all persons who obtained certain prescription drugs be reported to the state and stored in a central computerized databank.

In Whalen, the Court described its own past decisions involving privacy as protecting two kinds of interests. One is an individual interest in avoiding disclosure of personal matters. The other is an interest in independence in making certain kinds of important decisions (e.g., matters relating to marriage, procreation, contraception, family relationships, child rearing, and education). These two prongs of privacy are important for the rest of the discussion here.

Whalen involved an individual’s interest in avoiding disclosure. The Court said that the duty to avoid unwarranted disclosures "arguably has its roots in the Constitution." This statement hints at the existence of a constitutional right of informational privacy, but the Court did not squarely hold that the right exists. The Court observed that disclosures of private health information are often an essential part of modern medical practice. The Court could not conclude that reporting to the state was an impermissible invasion of privacy.

The existence and scope of any constitutional protection for information privacy remains uncertain nearly 30 years after Whalen. Subsequent lower court decisions are split. Some courts found that a constitutional right of information privacy exists and some found that it does not. In any event, Whalen suggests that it does not take much of a state interest to overcome an individual’s interest in non-disclosure.

It is the privacy interest in independence in making personal decisions, not the interest in non-disclosure, that is directly relevant to the right to abortion. However, the recent abortion records cases arise at the intersection between the two privacy interests identified by the Supreme Court. Does the right to abortion also encompass a corresponding right to privacy for records documenting the abortion? Or does the constitutional right to informational privacy (if any) provide any protection, special or otherwise, to abortion records? Or will the courts see the legislatively mandated rule issued by the Health and Human Services Department as providing an excuse to duck the harder constitutional issues?

If I part company with Daphne Gilbert’s analysis, it is over her discussion of the investigation being conducted by the Kansas Attorney General. That investigation seeks to force abortion clinics to turn over the complete health records of nearly ninety women and girls. The state’s contention is that the material is needed for an investigation into underage sex and illegal late-term abortions. Gilbert concludes that: “It seems obvious that the Kansas investigation constitutes, at the least, a violation of client-doctor privacy.”

Unfortunately, it isn’t clear that there is much left to the notion of client-doctor privacy under information privacy principles. Remember that the plaintiff in Whalen lost with that argument. Now that we have federal health privacy rules, neither the plaintiff in Whalen nor the subject of a Kansas abortion record is in a better position. HIPAA places no substantive barrier to disclosure to the Attorney General. Disclosures for criminal or civil investigations – no matter what the prosecutor’s real motive may be – can be made consistently with the HIPAA rules. Abortion records have no better protection than other records. The traditional physician-patient privilege is also not likely to help at all. The privilege is often so narrow as to be irrelevant, and it is non-existent in some states. I wouldn’t abandon any of these arguments, but I do not have much hope.

If there is a better argument available here, it may arise under the other prong of the Whalen analysis. The interest in independence in making important personal decisions may provide a different basis for arguing that abortion records need protection. It is not some vague patient right of privacy that is at stake but the right to abortion itself. If abortion records become the subject of routine disclosure for law enforcement, national security, health care oversight, public health, research, and other allowable HIPAA purposes, women may be deterred from seeking abortions.

Whether this argument has any chance in court remains to be seen. Like Daphne Gilbert, I would like to make a traditional information privacy argument, but I don’t see that as a sure winner under current law. The best hope for success may arise under the other prong of privacy, which is (for now) clearly rooted in the Constitution.

Part III. An Unlikely Hero: Posner to the Rescue.

One of the first abortion records cases involved litigation over the constitutionality of the federal law prohibiting so-called partial birth abortions. Several courts wrestled with discovery requests for the records of abortions performed by physicians who were testifying as expert witnesses that the prohibited technique is medically necessary.

The cases produced hand-wringing newspaper editorials about health privacy, few of which showed understanding of the substantial lack of privacy protections in the federal privacy rules or in most state laws. The real issue in the cases had to do with discovery rules in civil litigation.

In March 2004, the Seventh Circuit Court of Appeals decided one of these cases: Northwestern Memorial Hospital v. Ashcroft. Surprisingly, perhaps, Judge Richard Posner, a noted critic of privacy, wrote the majority pro-privacy opinion. He has written elsewhere that most demand for privacy is motivated by the concealment of discreditable information by people who want to project an untrue image, a view held by some economists. I don’t buy it, because privacy has many elements beyond concealment: access, correction, notice, data quality, dignity, and more.

Posner’s opinion addressed the burden of compliance with requests for production of documents, a standard issue in civil discovery. A third party can object to producing documents when the burden would exceed the value of the material to the litigation. Judge Posner used this principle to decide the case by weighing the probative value of the records against the potential privacy loss that would result in a case in which the patient was not a party. Privacy won.

However, it is crucial to understand that the records at stake were not identifiable. The records were to be de-identified before disclosure so that a patient’s identity could not reasonably be ascertained. The federal health privacy rule sets out a stringent de-identification procedure, and the dissent in the case argued with some force that there was no privacy interest left after de-identification. Here are two key paragraphs from Posner’s majority opinion:

Some of these women will be afraid that when their redacted records are made a part of the trial record in New York, persons of their acquaintance, or skillful “Googlers,” sifting the information contained in the medical records concerning each patient’s medical and sex history, will put two and two together, “out” the 45 women, and thereby expose them to threats, humiliation, and obloquy.
Even if there were no possibility that a patient’s identity might be learned from a redacted medical record, there would be an invasion of privacy. Imagine if nude pictures of a woman, uploaded to the Internet without her consent though without identifying her by name, were downloaded in a foreign country by people who will never meet her. She would still feel that her privacy had been invaded. The revelation of the intimate details contained in the record of a late-term abortion may inflict a similar wound. [emphasis provided].

Posner found a privacy interest even where there was no possibility that the patient’s identity could be determined. For someone like Posner, who otherwise does not think much of the value of privacy, this is a remarkable conclusion. Applied more broadly, it would mean that no microdata of any type might ever be disclosed, even if accompanied by a mathematical proof of non-identifiability. Such a holding would have significant consequences for health research and other social science research, among other things. If we read Posner’s conclusion as relating to the disclosure prong of privacy, it is a significant departure from, and expansion of, the existing understanding of privacy. Arguments that wholly de-identified records retain a privacy interest are rare.
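To make the stakes concrete, here is a minimal sketch, in Python, of the kind of identifier-stripping that de-identification involves in practice. The field names, the identifier list, and the date handling are my own illustrative assumptions, not the text of the federal rule or anything drawn from the records in the case; the point is simply that a record cleaned this way still contains the intimate details Posner is worried about.

    # Illustrative only: strip direct identifiers from a record and coarsen
    # dates to the year, roughly in the spirit of the federal rule's
    # "safe harbor" approach. Field names here are hypothetical.

    DIRECT_IDENTIFIERS = {
        "name", "address", "phone", "email", "ssn",
        "medical_record_number", "health_plan_number",
    }

    def deidentify(record: dict) -> dict:
        """Return a copy of the record with direct identifiers removed
        and full dates reduced to the year alone."""
        cleaned = {}
        for field, value in record.items():
            if field in DIRECT_IDENTIFIERS:
                continue  # drop direct identifiers entirely
            if field.endswith("_date") and isinstance(value, str):
                cleaned[field] = value[:4]  # keep only the year, e.g. "2004"
            else:
                cleaned[field] = value
        return cleaned

    if __name__ == "__main__":
        record = {
            "name": "Jane Doe",
            "ssn": "000-00-0000",
            "admission_date": "2004-03-15",
            "procedure": "late-term abortion",
            "gestational_age_weeks": 24,
        }
        print(deidentify(record))
        # {'admission_date': '2004', 'procedure': 'late-term abortion', 'gestational_age_weeks': 24}

Even after this cleaning, the clinical narrative remains, which is precisely why Posner could see a residual wound in disclosure and why the dissent could see no privacy interest left at all.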

However, if we read more between the lines, it is possible to suggest that Posner’s approach derives from the independent decision prong of privacy. The goal is not to protect the privacy of wholly de-identified data, but to protect women seeking abortion from any concern about even the remotest possibility that their information might be released in any form.

When I first read Posner’s opinion, I thought that his notion of privacy for wholly de-identified records broke new ground. However, reading that same idea as a protection for the individual decision prong of privacy, I am much more supportive. In at least some instances, it may make sense to ban disclosure of records whether identifiable or de-identified because we cannot expect that the average person will understand data identifiability distinctions. If there is a reason for concern that disclosure would interfere with individual decisions, then greater protection for health records may be justified. However, providing protection for all types of de-identified data would be considerably more troublesome.

The abortion records cases in the US raise many unexplored issues. The notion of privacy protections for anonymized data is not something that anyone would have predicted as part of any constitutional right to privacy. Posner’s opinion may have opened that door, and it will be interesting to see where it may lead.

| Comments (2) | | TrackBack


Broadcast: If I Were an iPod, live - sort of

posted by:Marty // 09:09 AM // May 05, 2005 // Walking On the Identity Trail

On Monday, May 9, beginning at 11:00am, Local Revolutions-Calgary Talks on CJSW 90.9FM will broadcast "If I Were an iPod: Privacy, Autonomy, and the Internet for Dummies."

http://www.cjsw.com/programming/shows/word.html


Monday, May 9, 11:00am – 12:00pm (Mountain Time)

If I Were an iPod: Privacy, Autonomy and the Internet for Dummies
One of two Sheldon Chumir Foundation broadcasts for the month of May, “If I Were an iPod” features Ian Kerr as he takes the audience through the ethical significance of powerful identification technologies, as well as privacy in the digital age. Ian Kerr holds the Canada Research Chair in Ethics, Law and Technology at the University of Ottawa.
| Comments (0) | | TrackBack


Surveillance and Sousveillance: A continuation of questions for Steve Mann

posted by:Steven Davis // 11:07 PM // May 03, 2005 // ID TRAIL MIX

At the ConcealedI Conference held on March 4th, Steve Mann presented material from his sousveillance at a Sears department store in the Toronto area. In part of his presentation, he showed a video clip in which he asks some Sears sales clerks about the purpose of a device affixed to the ceilings of the sales floors, obviously a surveillance camera. Many of the answers were evasive and indicated, so Steve implied, bad faith on the part of the sales clerks. One of the sales clerks, if my memory serves me correctly, said that the device was for measuring temperature, and another said that he didn’t know what it was for. There was a sales clerk who seemed to own up to what he thought was the cameras’ function; its purpose, the clerk said, was to take pictures of possible shoplifters. In the way Steve presented the material, the sales clerks were portrayed as objects of ridicule. It would appear that part of Steve’s motivation was to show the offensiveness of in-store and similar surveillance cameras. Most of the people in the audience at the conference seemed to agree with Steve’s assessment of the situation and laughed at the store clerks’ responses to his questions.

I want to raise a point about Steve’s presentation and take issue with what he seemed to indicate about the purpose of the Sears surveillance cameras. It seems to me that Steve, like the clerks, missed the purpose of the cameras. They are not primarily meant to spy on the customers and to make sure that they don’t make off with Sears’ merchandise without paying. Such cameras are mainly aimed at the poor sales clerks that Steve was so quick to hold up to ridicule. Most store theft is internal and amounts to billions of dollars a year, theft against which stores have a right to protect themselves. If we stop and think for a moment, the cameras are really not much protection against outside theft. Suppose that I snatch something from Sears. My bit of larceny is caught on film. (I assume that the cameras take pictures rather than being connected to monitors with alert store employees watching the moves of the customers.) Now my face is on film. How could this help the store prevent or discourage me and others from doing what I did? The store can’t track me down to have me arrested; no store has on file images of the thousands of people who use the store every day. So the film doesn’t help in catching me, the offending thief. But it can catch an in-store thief, since the store management can visually identify their employees. The conclusion is that Steve missed the purpose of the cameras. They are not directed at him or us, but at the poor store clerks that he was so quick to make fun of.

Consider another case of surveillance. Why are there cameras in banks? To protect the banks’ money, you might say. In part. But an important role they have is to protect the bank employees. Bank robbers who hold up tellers make off with very little money, but sometimes things go awry and the crooks start shooting up the bank. And when they do, more often than not, it is the poor bank teller who gets it in the neck or some other bodily part. So how do the cameras protect the tellers? Well, bank robbing appears to be a profession, and many who engage in it have records and thus have their mugs in a police data bank somewhere. Thus, capturing their pictures on film might lead to their arrest. It ups the stakes for them, since most of them don’t want to spend time in the slammer. It can then serve to dissuade some from sticking a gun in a teller’s face and telling her to hand over the contents of her cash drawer. (Most tellers are women.) How then should we feel about such cameras?

And there are other cases of the use of cameras whose purpose is to protect rather than to spy. Can anyone be opposed to cameras in underground parking garages? In long metro corridors? At deserted bus stops? Or in places where terrorists might plant bombs, for example in Northern Ireland, which was plagued by bomb attacks against civilians, including women and children? Where then are we opposed to the placing of surveillance cameras? In government buildings? Around military bases? On trains, buses, and subways? If so, we first have to ask why the cameras are there and, second, whom they protect. My guess is that they mostly protect ordinary working class people: the clerks who staff government buildings, the enlisted soldiers who fill our military bases, the ordinary folks who ride the subways, buses, and trains.

Think of the surveillance cameras in mom and pop convenience stores. There, the purpose is to protect against outside theft, but can anyone be opposed to this? Convenience stores are often owned and run by immigrant families who spend many hours behind the counters of these stores. They work from early in the morning until late at night, and for all their sweat and labour they earn very little. But the little they earn can be eaten away by petty snatch-and-grab theft. Surveillance cameras probably help stem the theft. Can we really be opposed to such immigrant families protecting their meager earnings?

What then is the objection to the use of surveillance cameras in Canada? There is, of course, the possibility that surveillance cameras could be overused, but in many cases they serve a quite legitimate purpose. It is a no-brainer to claim that they shouldn’t be overused, and there can be disagreement about when and where they are overused. But it doesn’t follow from this that they have no legitimate use, in both a legal and a moral sense.

Steven Davis is Emeritus Professor of Philosophy and Director of the Centre on Values and Ethics, Carleton University.
| Comments (4) | | TrackBack


The bright pink teddy bear is watching you

posted by:Marty // 11:32 PM // May 01, 2005 // Surveillance and social sorting

While not describing anything innovative or any new issues in surveillance, this article from CNN provides a nice overview of some household surveillance technologies. Of note is the description of the practices of the UK store Spymaster:

The latest equipment is kept hidden, and checks are done on clients wanting to buy the more advanced equipment...

"We run background checkups on any clients that want to purchase more sensitive equipment,"

"You can never be 100 percent sure but you can minimize the risk of selling to the wrong people. We have to be responsible."

Click here for the article.

| Comments (0) | | TrackBack


Schools do not have to give students up

posted by:Marty // 11:20 PM // // Digital Democracy: law, policy and politics

Recently, Judge Russell A. Eliason, a U.S. Federal Magistrate, denied the RIAA's attempt to force two North Carolina colleges to disclose the names of students known only by their file-sharing pseudonyms.

"Durham lawyers Fred Battaglia and Michael Kornbluth represented Jane Doe, the UNC-CH student. They said they were not concerned with allegations of music piracy but with whether their client, whom they declined to name, could have his or her privacy protected."

Follow this article to read more.

| Comments (0) | | TrackBack





This is a SSHRC-funded project:
Social Sciences and Humanities Research Council of Canada