understanding the importance and impact of anonymity and authentication in a networked society


Privacy as a Social Value

posted by:Jane Bailey // 11:59 PM // April 24, 2007 // ID TRAIL MIX


The Canadian case law on hate propaganda, obscenity and child pornography features numerous analyses and discussions on the right to privacy, almost exclusively in the context of the privacy claims of those accused of related offences. Shaped as they are by the contexts in which they are raised, these analyses tend to mirror the negative, individualistic, control-over-access-to-information paradigm that has dominated thinking on the issue for several centuries. Notwithstanding that the vast bulk of Canadian legal analysis focuses on the right of an individual accused against state intrusion on a “private” sphere of activity to the exclusion of consideration of the privacy-related rights of the targets of hate propaganda and obscenity, Canadian courts have recognized that child pornography intrudes upon the privacy-related interests of the individual children abused in its production. The failure to recognize that hate propaganda and obscenity trigger similar intrusions for the members of the groups they target does not necessarily mean that no such intrusions are in fact triggered. Instead, the failure to recognize the triggering of those privacy interests might be understood to be the result of the selection of an individualistic privacy paradigm that, by and large, is conceptually inadequate to capture the collective nature of the privacy-related harms that can be occasioned by all three of these forms of expression.

Individuals targeted directly within hate propaganda and obscenity could muster arguments to squeeze the related privacy intrusions they experience as a result of that targeting into the individualistic paradigm, as has been the case with the analysis of the privacy-related intrusions on the children abused in the production of child pornography. In the case of hate propaganda, however, the typical modus operandi of hate purveyors avoids attacks on individuals, generally focusing on broad categories. In the case of obscenity, the individualistic control-over-information paradigm, combined with patriarchal presumptions that women can be assumed to have consented to sexual activity and abuse, is likely to impose a preliminary threshold of proof of non-waiver. Re-making what are essentially collectively-based claims into individual claims for the purpose of fitting the paradigmatic mould is unlikely, however, to form the basis of a meaningful long-term strategy for equality-seeking groups and their members.

Just as the analyses of privacy in the contexts of abortion and the counseling records and sexual histories of complainants in sexual assault cases have tended to re-personalize political issues, undermine calls for affirmative state action and reinscribe gendered and raced notions of privacy, so too may privacy-based arguments crafted by the direct targets of hate propaganda and obscenity to fit the paradigm. The privacy-related harms of hate propaganda, obscenity and child pornography need also to be understood in the context of social inequalities that allow empowered narratives to constrain the autonomy of otherized individuals, limiting their opportunities for self-definition and replacing them with presumed, imposed characteristics attributed to the equality-seeking groups with which individual targets are identified. The personal intrusion is integrally and intrinsically related to systemic, group-based power imbalances. Claims framed within the individualistic privacy paradigm are more likely to bury that dynamic than to make it understood. Without that recognition, the potential role for state action to address those imbalances – or at least a call for state action reflecting a conscious choice not to reinforce them – is likely to be ignored.

Rather than trying to fit collectively-based harms into an individualistic paradigm, it may be preferable to re-think the paradigm to encompass collective, social considerations. The seeds for this idea were originally sown within aspects of work by authors such as Westin that were largely sidelined in the wake of an individualistic, libertarian drive against state intrusion. They have since been replanted in the work of authors such as Allen and Gavison who have advocated privacy as a producer of social goods such as better social contributions and relationships. However, the drive to articulate privacy as a social value can be found more directly in the work of authors such as Gandy, Regan and Cohen in the context of rising concern as to the broad-ranging privacy implications of digital data collection and use. As fragmented individual data collected for one purpose is aggregated and re-used in other contexts as the basis for labeling and making judgments affecting individuals’ lives with little or no opportunity for reciprocity, the adequacy of individualistic models that focus on control over access to information has increasingly come under scrutiny.

The push, in the context of digital data collection and use, for recognition of privacy as a public, common and collective value, and of the potentially invidious collective forms of discrimination to which its non-observance can give way, offers both threats and opportunities for members of equality-seeking groups. To the extent that those accused of offences relating to hate propaganda, obscenity and child pornography would then be positioned to bootstrap their individualistic privacy arguments with ones premised on societal interests, the competing equality-based interests of the members of target groups may be undermined. On the other hand, thinking collectively about the value of privacy opens up the opportunity to better articulate a more group-based conception of the privacy violation occasioned by perpetuation of the group-based stereotypes prevalent in hate propaganda, obscenity and child pornography. It suggests an opening to argue that privacy shouldn’t be conceived of simply as a producer of individualistic goods like free expression, freedom of conscience and liberty, but also as a producer of the equally important, but too frequently unmentioned, democratic right to substantive equality.

The parameters of a collectively-based privacy argument might be developed from the accounts of authors such as Delgado, Crenshaw, Tsesis and MacKinnon of how hate propaganda, obscenity and child pornography can work to impose social constructions of inhumanity on targeted groups, constructions that are both externally reinforced and sometimes internalized in ways that undermine targets’ abilities to self-define. To the extent that these effects lead individuals to choose to dissociate, or to attempt to dissociate, from the groups so targeted, both the groups themselves and society as a whole stand to lose: our aspirations for diversity, plurality and mutual respect are undermined.

If hate, obscenity and child pornography are understood in this way, certain aspects of the current push for a social conception of privacy within the context of digital data collection might be usefully analogized. Simplistic data derived from these forms of “expression” are used to render social profiles of targeted groups that become a basis for imposed definitions not only on those groups, but their members as well. These socially constructed definitions then form the basis and justification for discriminatory action and treatment of individual members of those groups that can, in some cases, be internalized within their own processes of self-definition.

The fragments of identity misrepresented in hate propaganda, obscenity and child pornography are used to form the bases for social composites that intrude both upon the definition of self and the understanding of self in relation to group. The social constructions produced authorize privacy intrusions that both reflect and reinforce substantive inequality. For equality-seeking communities, privacy understood entirely as a producer of purely individualistic goods like free expression and liberty has too often been an empty proposition. Privacy understood as a social value and producer of collective goods like substantive equality seems like something worth talking about.



A Self-narrative Approach to the Deeply Personal

posted by:David Matheson // 10:13 AM // April 17, 2007 // ID TRAIL MIX


In less than a couple of weeks, I’ll be attending the Computers, Freedom, and Privacy Conference in Montreal to participate in a workshop presentation with other members of the project. The theme of the discussion is the reasonable expectation of privacy. This morning I’d like to give a snapshot of what I’ll be contributing.

Let me start off by noting what seem to be two very general conditions on the reasonable expectation of privacy in informational contexts. First, it seems obvious that in order for someone to have a reasonable expectation of privacy with respect to a piece of information, she can’t have voluntarily exposed it in a general manner. When I walk across the quad on my university’s campus in broad daylight during a busy term weekday, there’s an obvious sense in which I’m voluntarily exposing lots of information about myself: I know that if I walk across the quad, various people are likely to cast an occasional glance in my direction and thereby acquire visual information about my present appearance, location, activity, etc.; and I’m okay with that, so I walk. But no one would say that I have a reasonable expectation of privacy with respect to it, since I’ve voluntarily exposed it – made it known or at least easily knowable – to whomever happens to be in the area.

Second, in order for an individual to have a reasonable expectation of privacy with respect to a bit of information, it must be personal information of a certain sort about her. To say that information is personal is to say, at the very least, that it is about persons. The information that lightning is a rapid discharge of electrons, say, or that the average annual rainfall in Montevideo is 1100mm, is not personal because it’s not about persons – at all. Moreover, personal information, in the usual sense, must be personal information about specific persons. Consider, for example, the following pieces of information, all of which are about persons: that Canada has a population of over 30 million, that all people have certain inalienable rights, and that recent polls show that a majority of Americans favor national anti-obesity programs. Despite being about persons, these bits of information are not about specific persons, and hence don’t count as pieces of personal information in the usual sense.

But not just any personal information counts. In order for an individual to have a reasonable expectation of privacy with respect to a bit of personal information, it must be personal information of the right sort. For consider the following examples of personal information about me: that I am self-identical (to borrow an example from earlier exchanges on this blog with Steven Davis), that it is logically impossible for me to be a circle, and that my rate of free-fall is the same as that of a small pebble. Even if we admit these as examples of personal information, because they are about specific individuals, no one would be inclined to say that they are of the right sort of personal information to be covered by the reasonable expectation of privacy. They can be rationally inferred about specific individuals merely on the basis of nonpersonal pieces of information such as logical or scientific laws.

Let’s call personal information of the right sort – of the sort with respect to which one can have a reasonable expectation of privacy – “deeply personal information.” Accordingly, we can say that in order for an individual to have a reasonable expectation of privacy with respect to a bit of information, she must not have voluntarily exposed it and it must be deeply personal information about her.
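For readers who like their conditions spelled out, the two requirements just stated can be captured in a toy predicate. The sketch below is purely illustrative: the class and field names are my own labels for the notions introduced above, not anything drawn from the privacy literature, and the conditions are necessary, not sufficient.

```python
# A toy formalization of the two necessary conditions discussed above.
# All names here (Info and its fields) are illustrative labels only.

from dataclasses import dataclass

@dataclass
class Info:
    voluntarily_exposed: bool  # made known, or easily knowable, in a general manner
    deeply_personal: bool      # personal information "of the right sort"

def reasonable_expectation_of_privacy(info: Info) -> bool:
    """Necessary conditions only: the individual must not have voluntarily
    exposed the information, and it must be deeply personal."""
    return (not info.voluntarily_exposed) and info.deeply_personal

# Walking across the quad: the visual information is voluntarily exposed,
# so no reasonable expectation of privacy attaches to it.
quad_appearance = Info(voluntarily_exposed=True, deeply_personal=False)
print(reasonable_expectation_of_privacy(quad_appearance))  # False
```

Of course, everything interesting is hidden in the `deeply_personal` flag, which is exactly the question taken up next.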

I want to resist the suggestion that deeply personal information is to be distinguished by means of its sensitivity. The basic idea of this suggestion is that deeply personal information is sensitive personal information, i.e. personal information that individuals don’t want widely known by others. Sensitivity in this sense, according to certain privacy theorists, might come in one of two basic forms. The personal information in question might be sensitive because the person it is specifically about does not want it widely known by others. It might also be sensitive because it is the sort of information that most members of her society don’t want widely known about themselves.

The reason I want to resist this suggestion is two-fold. First, consider the problem of hypersensitivity. This has to do with the fact that some people can be excessively sensitive about information, including personal information that is not deeply personal. Suppose, to illustrate, that for one bizarre reason or another I happen to be very sensitive about the information that I am self-identical, that it is logically impossible for me to be a circle, or that my rate of free-fall is the same as that of a small pebble. It’s quite silly of me to be sensitive about this sort of rationally inferable information, but, nonetheless, let's suppose, I am. And since it’s sensitive information specifically about me, it turns out to be deeply personal information on the sensitivity approach. But that seems wrong. Whether personal information about me is deeply personal in the relevant sense can’t surely depend simply on my sensitivities, which may stray quite wildly away from the realm of where they ought to be.

There’s also the problem of hyposensitivity. This arises because some people can be excessively insensitive about information, even deeply personal information about themselves. We all know the sort of person who opens up at the drop of a hat and shares all sorts of intimate details about themselves with anyone with open ears. Encountering that sort of person is disconcerting, because we want to say that they shouldn’t be sharing so much deeply personal information with us, total strangers.

Of course, an advocate of the sensitivity approach could agree with us here, and point out that the reason the information such a person shares is deeply personal is that it’s the sort of personal information that most members of their society don’t normally want widely known by others. It may not be sensitive personal information for them, but it is for most of their society, and so it is in fact deeply personal.

But it’s not too hard to think of cases in which even the sensitivities of most members of society are deficient. Suppose that the government, or even a large corporation – call it Big Brother – embarks on a propaganda campaign, for one bad reason or another, to convince most members of society not to be sensitive about the intimate details of their sexual and romantic lives, their medical statuses, their on-line activities, etc. Suppose further that the campaign is very successful. We get the result that virtually no one in society cares how widely such personal information about themselves is known by others. Does the very success of the propaganda campaign absolve Big Brother, who then goes on to get his hands on such details about many members of society, from the charge that he’s inappropriately gotten his epistemic hands on deeply personal information of many members of society? Surely not. The right thing to say of this sort of scenario seems to be that Big Brother has, wrongly and sadly, convinced most members of society not to care about large swaths of what remains their deeply personal information.

So if we don’t characterize the nature of deeply personal information along the lines of the sensitivity approach, what’s the alternative? It seems to me that one plausible alternative, at any rate, can be gleaned from paying careful attention to the language that the Supreme Court has employed in such well-known cases as R. v. Plant (1993) and R. v. Tessling (2004). Deeply personal information, the Court says, is what lies at the “biographical core” of personal information, and information whose disclosure may affect the “dignity, integrity, and autonomy” of the individual it is about.

This suggests two very important points about the nature of deeply personal information. First, deeply personal information has something to do with what might be described as the telling of a story about an individual’s life – that’s the “biographical” bit. Second, it also has to do with the individual’s telling her own story, for herself and on her own terms – with “dignity, integrity and autonomy.”

The narrative language of “biography” and the “telling of one’s own story” may be largely metaphorical, but I believe it captures a very familiar element of our day-to-day experience. We are all, everyday, telling stories about ourselves to others in the sense of revealing to (and concealing from) others different pieces of information about ourselves in different contexts. And the capacity to do so in accord with our own considered convictions about who should know what about us in which context is crucial, I think, to our dignity, integrity and autonomy as persons.

We can bring these points together into something like the following (call it) “self-narrative” approach to the nature of deeply personal information. On this approach, deeply personal information is personal information open access to which would seriously undermine the individual’s ability to tell her own unique story. (When I talk about “open access” here, I mean more or less unrestricted access for the public at large, i.e. access for pretty much any member of society who cares to learn the relevant information, regardless of whether the individual that the information is about has voluntarily exposed it.)

To evaluate the plausibility of the self-narrative approach, consider its application to cases already mentioned. The rationally inferable information that I am self-identical, that it is logically impossible for me to be a circle, or that my rate of free-fall is the same as that of a small pebble, despite being about a specific individual, is not deeply personal information. Does the self-narrative approach give us that result? It would seem so. It is very difficult to see how open access to any of these pieces of personal information about me would seriously undermine my ability to tell my own unique story. After all, none of these pieces of information could itself be used to distinguish me from others in any significant way. That it is logically impossible for me to be a circle is certainly about me in particular, but exactly the same sort of information can be known to apply to every other individual in society, simply by rational inference from non-personal information. That’s also true of the information that I am myself or that my rate of free-fall is the same as that of a small pebble. Everyone is self-identical. Everyone’s rate of free-fall is the same as that of a small pebble.

Recall now the Big Brother example. On the sensitivity approach, the very success of Big Brother’s campaign absolves him from the charge of wrongfully getting his epistemic hands on loads of deeply personal information about members of his society. But, as we noted, that seems wrong. On the self-narrative approach, however, we get a more intuitively sound verdict. Big Brother can properly be charged with inappropriately getting his hands on deeply personal information, because the mere success of his propaganda campaign – the mere fact that he’s convinced most members of society not to be sensitive about intimate details of their sexual and romantic lives, medical statuses, on-line activities, etc. – does not suffice to render those details non-deeply personal. Open access to such details would seriously undermine the ability of the individuals concerned to tell their own unique stories: where there is open access, individuals lack control over those details, which constitute precisely the sort of personal information whereby they could significantly distinguish themselves from others. And the fact that open access would seriously undermine their ability in this way remains regardless of whether they are sensitive about the details.
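To make the contrast vivid, the two approaches can be encoded as toy classifiers. The functions below are illustrative renderings of the two definitions as I have stated them, with invented names; they are not established formal criteria.

```python
# Two toy classifiers for "deeply personal", one per approach discussed above.
# Function and parameter names are my own illustrative labels.

def deeply_personal_by_sensitivity(subject_sensitive: bool,
                                   society_sensitive: bool) -> bool:
    """Sensitivity approach: deeply personal iff the subject, or most
    members of her society, don't want the information widely known."""
    return subject_sensitive or society_sensitive

def deeply_personal_by_self_narrative(open_access_undermines_story: bool) -> bool:
    """Self-narrative approach: deeply personal iff open access would
    seriously undermine the individual's ability to tell her own story."""
    return open_access_undermines_story

# The Big Brother case: the propaganda campaign has flattened everyone's
# sensitivities, yet open access to intimate details would still undermine
# individuals' ability to tell their own unique stories.
print(deeply_personal_by_sensitivity(False, False))   # False: the counterintuitive verdict
print(deeply_personal_by_self_narrative(True))        # True: the intuitive verdict
```

The point of the comparison is that the sensitivity approach takes people's actual attitudes as input, and so inherits their defects, while the self-narrative approach keys on what open access would do to self-definition, regardless of how anyone happens to feel.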



GratisCard

posted by:Jeremy Hessing-Lewis // 08:53 AM // April 05, 2007 // Digital Identity Management | General

In the world of digital cash, not all news is bad news for privacy researchers. The Economist ran an article on April 4th detailing the emergence of a new generation of payment cards that aim to give Visa and Mastercard a run for their money (pun intended). Among them is GratisCard, a system that can function as a credit, debit, or prepaid card and can be used anonymously:

Taking aim at both of these flaws is GratisCard, a new payments system backed by Steve Case, the founder of AOL, launched later this month. The card, which can function as a debit, credit or prepaid card, is entirely anonymous. A thief who steals one will not find a customer's name or account number on it, nor will a hacker find anything to decode in the card's magnetic strip. Instead, customer data are stored in GratisCard's data centre in Florida and sent to the till only as needed. GratisCard will be the first to use the internet to zip data among merchants and banks. This allows it to side-step the big payment networks and their stiff interchange fees. Merchants that accept GratisCard simply pay a processing fee capped at 0.5% of a transaction.



Don't have an account? I'll use a shared one.

posted by:Stefan Popoveniuc // 11:59 PM // April 03, 2007 // ID TRAIL MIX


It is generally believed that you have to take extra steps to protect your privacy: look for the SSL lock in your browser, shred your old bank statements, scan your computer for key loggers, etc. Convenience and ease of use are often regarded as antagonists of security and privacy. I have recently discovered a useful website that seems to contradict this paradigm.

Remember all those popular websites that force you to register just because you want to read the entire article, see user comments or download some piece of free software? They all claim that the registration process is simple, but you often find yourself entering your email address, gender, full or partial postal address and phone number, and at the end they ask you to fill out a survey about how many hours you spend on the internet each month, your income level, age, education and so on. But probably most important, you tend to pick your password from the two or three passwords that you reuse across dozens of websites. This is clearly an exposure of what you consider to be private information.

www.bugmenot.com has a collection of public usernames and passwords for some of the most popular sites that require free registration to access their free content, among them www.nytimes.com, www.washingtonpost.com and www.imdb.com. A Firefox extension makes logging in to these websites a breeze: right-click -> Login with BugMeNot. Click-clack, you're in.

Don’t get me wrong, customizing your account and leaving comments under your reserved username is always good, but most of the time you just want to read the end of the article. And you simply don’t want yet another site to know one of your “secret” passwords :)

*The author has absolutely no affiliation with BugMeNot.com, except for sharing the same Internet.



CALL FOR STUDENT ABSTRACTS

posted by:Julia Ladouceur // 09:08 AM // April 02, 2007 // General

CALL FOR STUDENT ABSTRACTS
The Student "I": A student conference on privacy and identity
University of Ottawa, Faculty of Law
October 25, 2007

Graduate and undergraduate students from all disciplines are invited to submit an abstract for The Student “I”, a student conference on October 25, 2007 at the Faculty of Law, University of Ottawa, Canada.

Preceding the Revealed “I” conference hosted by researchers from On the Identity Trail, this day-long student conference brings together students from around the world, selected through a peer-review process, to present research relating to identity, privacy, anonymity, technology, surveillance, and other related topics engaged by the On the Identity Trail project.

Abstracts should not exceed 1,000 words (including notes and citations). Successful abstracts will seek to make an original contribution. Inter-disciplinary submissions are encouraged. Abstracts should be accompanied by a short bio, which should include the student’s program and institution of study, and an email address for correspondence. The deadline for abstracts is July 1, 2007. Send to:

Julia Ladouceur
University of Ottawa
Faculty of Law, Common Law Section
57 Louis Pasteur Street
Ottawa, ON K1N 6N5
Email: anonplan@uottawa.ca

Successful applicants will be notified at the email address provided no later than August 1, 2007. Successful applicants who are unable to obtain funding from their home institution may apply for a student bursary to cover expenses relating to travel and accommodation.






This is an SSHRC-funded project:
Social Sciences and Humanities Research Council of Canada