understanding the importance and impact of anonymity and authentication in a networked society

“A Man’s Home (Page) is His Castle”

posted by:Carlisle Adams // 11:59 PM // November 28, 2006 // ID TRAIL MIX


Many of us have probably heard the saying “a man’s home is his castle”. (By way of parenthetical footnote, let me just mention that I could not find an elegant way to make this old adage gender-neutral: “an [(unspecified gender) entity]’s home is [(unspecified gender) possessive pronoun] castle”. Therefore, for the purposes of this article, I will just state up front that “man” and its variations should be taken to mean “man or woman” and their corresponding variations, and hope that this satisfies any sensitivities in this area.) This saying has been around for quite some time and most people, I think, would probably have some intuitive understanding of its meaning upon hearing or seeing it. However, given some of the new terminology to which we have become accustomed in our Internet age (particularly, “home page”), it may be interesting to explore whether this saying has any relevance in our digital virtual world.

Traditionally, a castle (the residence of a king) has been associated with at least four concepts: protection; identity; privacy; and control. With respect to “protection”, the castle provides a fortress, a stronghold, security from invaders of all kinds. “Identity” suggests ownership, permanence and, at least at some level, a way of authenticating oneself (“I am the king because I have the key to the drawbridge, or because the guard will let me in when I show up”). In terms of “privacy”, the castle is a means of keeping its inhabitants and their discussions away from prying eyes and ears, so that secrets are prevented from flowing out to the general public. The castle also gives a sense of “control” in that the king has ultimate authority over certain aspects of the domain (e.g., the power to decide content and activity within the walls).

When it is said that a man’s home is his castle, the analogy is clear: with his home, the man gets – or at least expects – a measure of protection, a sense of identity, some level of privacy, and a degree of control. So now, out of curiosity if nothing else, one can ask, when we say today that a man has a home page, is the analogy as clear, or is it stretched so thin as to be fragile and useless? Let’s consider the above four associations with the term “castle” to see how well they apply to our digital (rather than physical) homes.

Protection. When Bob sets up a Web server and creates on that server a home page for himself, does he have the security from invaders that a castle might provide? Even a superficial awareness of security issues with websites over the past few years will confirm that the answer is “no”: attackers break into websites every day all over the world to enter, change, or steal data. Well-known buffer overflow or SQL injection attacks are commonly used to break into websites to cause damage. If a website is expecting some user input (such as a username and password), the attacker may send far more data than the buffer allocated to receive this data can hold (a password of 10,000 characters, for example). This input data may “spill over” beyond the buffer into another area of memory. If the overflow area is a place where instructions are executed, and if the overflow characters are carefully constructed to be valid executable instructions, then this attacker will have succeeded in having arbitrary code of his choosing run on Bob’s machine. This is the essence of a buffer overflow attack. With SQL injections, the attack is similar in that data input by the user contains some extra, unexpected characters and these characters are treated as commands to the SQL database that sits behind Bob’s webpage.
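The SQL injection half of this can be sketched concretely. The snippet below is a minimal, hypothetical illustration (the table, the secret, and the attack string are all invented for the example): user input pasted directly into the SQL text is treated as commands by the database behind the webpage, while a parameterized query treats the same input purely as data.

```python
import sqlite3

# A throwaway in-memory database standing in for the one behind Bob's webpage.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('bob', 'drawbridge-key')")

def lookup_unsafe(username):
    # Vulnerable: the input is spliced directly into the SQL text, so
    # extra, unexpected characters in it are interpreted as SQL commands.
    query = "SELECT secret FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchall()

def lookup_safe(username):
    # Parameterized: the driver binds the input as a value, never as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (username,)
    ).fetchall()

# An honest lookup of an unknown user returns nothing...
print(lookup_unsafe("mallory"))        # []
# ...but a crafted input makes the WHERE clause always true.
print(lookup_unsafe("x' OR '1'='1"))   # [('drawbridge-key',)]
# The parameterized version is not fooled by the same input.
print(lookup_safe("x' OR '1'='1"))     # []
```

The attack string simply closes the quoted value early and appends its own condition; the fix is not cleverer string handling but keeping data and commands on separate channels.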

If Bob has routines that do proper input data validation (e.g., make sure that the username that has been entered is really a username and is not longer than a specified value), he may be able to avoid some of these attacks, but it is not always easy to distinguish an attack script from valid data. Unfortunately, the conclusion is that unless he puts extensive effort into setting up particular safeguards, Bob’s home page gives him very little protection from the malicious entities that live outside his “virtual walls”.
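A minimal sketch of the kind of input validation routine described above might look as follows. The length limit and allowed-character policy here are hypothetical choices for illustration, not a universal rule:

```python
import re

MAX_USERNAME_LEN = 32
# Hypothetical policy: a username is 1-32 characters drawn from letters,
# digits, dots, hyphens, and underscores. Anything else is rejected.
USERNAME_RE = re.compile(r"^[A-Za-z0-9._-]{1,%d}$" % MAX_USERNAME_LEN)

def is_valid_username(candidate):
    """Return True only if the input really looks like a username."""
    return bool(USERNAME_RE.match(candidate))

print(is_valid_username("bob"))            # True
print(is_valid_username("x' OR '1'='1"))   # False: injection characters rejected
print(is_valid_username("A" * 10000))      # False: far too long for any fixed buffer
```

Note that this whitelist approach (accept only known-good patterns) is generally more robust than trying to blacklist known attack scripts, which is exactly the difficulty the paragraph above points out.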

Identity. Is Bob’s home page a valid mechanism for showing ownership and authenticating himself? The answer to this question is also negative. Webpage spoofing attacks are not very difficult to perform and can be fairly successful. Making an identical copy of an existing webpage is almost trivial (it is essentially a copy-and-paste operation to move every image, every piece of text, every logo, etc., from one webpage to another). The (slightly trickier) step is to get other people to go to the new site while thinking that they’re going to the old one. This can be accomplished using a technique called DNS cache poisoning. A Domain Name System (DNS) server is a machine that performs an important service on the Web: you give it a host name (such as bob.com) and it gives you back the IP address of that host machine (such as 192.12.67.30). This way, your computer can communicate with that machine using the Internet Protocol (IP). The data pair {bob.com, 192.12.67.30} is stored in the DNS cache (a fast portion of its memory). Cache poisoning is an attack in which the attacker changes the data pair so that bob.com is instead associated with a different (i.e., the attacker’s) IP address in the DNS cache. Now everyone who wants to go to Bob’s machine will send bob.com to DNS, get back the attacker’s IP address, and then go to the attacker’s website, which has been made to look identical to Bob’s site. Thus, the attacker gets Bob’s customers (their money, or their personal data), and Bob has no way of knowing that this has occurred.
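The mechanics can be shown with a toy model. To be clear about the assumptions: the cache is modelled as a simple lookup table and both IP addresses are invented; a real poisoning attack works over the DNS query/response protocol itself rather than by writing directly into the cache, but the effect on subsequent lookups is the same:

```python
# Toy model of a resolver's cache: host name -> IP address.
dns_cache = {"bob.com": "192.12.67.30"}   # Bob's real (hypothetical) address

def resolve(hostname):
    # Every visitor's computer asks the resolver for bob.com's address.
    return dns_cache.get(hostname)

def poison(cache, hostname, attacker_ip):
    # Cache poisoning in miniature: the stored pair is overwritten so the
    # name now maps to the attacker's machine instead of Bob's.
    cache[hostname] = attacker_ip

print(resolve("bob.com"))   # 192.12.67.30 -> visitors reach Bob's site
poison(dns_cache, "bob.com", "10.66.66.66")
print(resolve("bob.com"))   # 10.66.66.66 -> visitors reach the attacker's replica
```

Nothing on Bob's own machine changes during this attack, which is why Bob has no way of knowing it has occurred.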

If Bob’s machine is a Web server, technologies such as SSL server authentication can help, but they provide no guarantee: users often do not check that the certificate of the site they have reached is the one they’re expecting, and often ignore warnings in pop-up windows even if they do check. In general, website spoofing means that Bob’s home page is not sufficient to prove ownership or to validate or authenticate Bob in any way.

Privacy. Does Bob’s home page give him a private place to store his personal and confidential information? At first glance, this seems like an odd question to even be asking: the Internet is a public space and Internet search engines (such as Google) will find a website once it exists (this is what they were invented to do). The idea of putting something on a website and expecting it to be private is a bit like building a house with glass walls and expecting that others will not see what goes on inside. However, it is possible to create private spaces within a website; typically, these are password-controlled areas (anyone with the password can view the page, and all others are redirected to another page telling them that they are not authorized to see the contents). But the problems with passwords as an authentication mechanism are extremely well-known and well-documented. Compounding this is the fact that the site owner usually wants a number of people to access these areas (friends, family, students in a course, etc.) and so will deliberately choose a password that will be easy for that group of people to remember. It may therefore be conjectured that the passwords protecting these areas are even weaker than typical passwords, which are already notoriously weak.
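The weakness of a shared, memorable password can be made concrete. In this hypothetical sketch (the password, its hash, and the guess list are all invented), a single group password guards the “private” area, and a tiny dictionary attack recovers it immediately, precisely because it was chosen to be easy for the whole group to remember:

```python
import hashlib

# Hypothetical gate for a "private" area of Bob's site: one shared
# password for the whole group of friends, family, or students.
SHARED_PASSWORD_HASH = hashlib.sha256(b"family2006").hexdigest()

def can_view_private_page(attempt):
    # Anyone presenting the right password sees the page;
    # everyone else would be redirected to a "not authorized" page.
    return hashlib.sha256(attempt.encode()).hexdigest() == SHARED_PASSWORD_HASH

# A memorable group password is exactly what a dictionary attack tries first.
guesses = ["password", "letmein", "bob", "family2006"]
cracked = [g for g in guesses if can_view_private_page(g)]
print(cracked)   # ['family2006'] -- the easy-to-remember choice falls at once
```

Hashing the stored password (rather than storing it in the clear) is standard practice, but it does nothing to protect a password that sits near the top of every guess list.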

It is unlikely to be the case that Bob’s home page will be a safe place for his private data.

Control. Does Bob have authority over his home page? Does he have the power to decide content and activity? Again we see that the answer to this question is negative. Website defacement (in which a hacker changes the content or appearance of a target website, altering or inserting messages, pictures, or other data) shows that owners do not really have complete power over the content on their sites. Furthermore, session hijacking attacks (in which a hacker takes over an active session and begins interacting with the server as if he was the original client, or interacting with the client as if he was the original server), buffer overflow attacks, SQL injection attacks, and so on, demonstrate that the site owner may also have little power over the activities that take place on his site.

Integrity detection/protection mechanisms, intrusion detection/prevention mechanisms, and good session management practices can all help, but these are hard to do well and, again, unless Bob puts extensive effort into defending his website, he will not have the control over content and activity that he might wish to have.
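The simplest form of the integrity detection mentioned above can be sketched in a few lines. This is a minimal illustration, not a complete defence (it detects defacement after the fact rather than preventing it, and the page content here is invented): record a cryptographic hash of the page as published, then periodically compare.

```python
import hashlib

def fingerprint(page_bytes):
    # A cryptographic hash of the page content as published.
    return hashlib.sha256(page_bytes).hexdigest()

original = b"<html><body>Welcome to Bob's castle</body></html>"
baseline = fingerprint(original)

def is_intact(current_bytes, baseline):
    # Any defacement, however small, changes the hash.
    return fingerprint(current_bytes) == baseline

print(is_intact(original, baseline))                                        # True
print(is_intact(b"<html><body>Hacked by Mallory</body></html>", baseline))  # False
```

The baseline hash must, of course, be stored somewhere the attacker cannot also rewrite; otherwise the check can be defeated along with the page.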

Where do we go from here?

OK, so home pages don’t really have any of the properties we might associate with homes. But “home” and “home page” are just names; what difference does any of this make? At this point, it may be tempting to pull out another old saying: What’s in a name? That which we call a rose by any other name would smell as sweet [1]. There may be much truth in this (after all, Shakespeare was right about a great many things!), but we need to exercise a little caution. The danger lies not in using two different names for the same thing, but rather in using the same name (or very similar names) for two different things: “home” and “home page”. All the security (i.e., protection, privacy, identity, and control) we associate with our home does not translate immediately to our home page. Although there are some similarities (locking the front door with a key is something like password-protecting the website), there are some major differences as well. Consider website spoofing: what in the real world (outside the “twilight zone” [2]) corresponds to an attacker making an exact replica of your house and fooling your family and friends into going there when they want to visit you? Using essentially the same term (“home” and “home page”) could lead the unsuspecting user to think that similar behaviour – both precautions and activities – is appropriate, when in fact this is not the case.

I am not advocating that we should call home pages something else (it’s far too late for that sort of change, although “start page” might have been a good choice that is free of other associations). But I am suggesting that this could serve as a reminder to us that we need to be careful when naming things in our created virtual worlds. Choosing names that are familiar (so that people will more readily identify with the technology and feel comfortable using it) can have unintended consequences (including behaviour that is inappropriate because the technology is not as much like its real-world namesake as the appellation might imply). Let this be a lesson to us: choosing names for concepts should not be taken lightly; we need to think through the connotations of the names we pick and consider whether this may lead to security or privacy problems down the road.

So, a man’s home (page) is his castle? Not really; not in the new Wild West of cyberspace…


[1] Romeo and Juliet, Act II, Scene 2.
[2] http://www.scifi.com/twilightzone/


Agency and Anti-Social Networks

posted by:Ryan Bigge // 11:59 PM // November 21, 2006 // ID TRAIL MIX


“A man opposed to inevitable change needn't invariably be called a Luddite. Another choice might be simply to describe him as slow in his processes.”
-- Francis Wolcott (Deadwood, Season 2, Episode Four)

Let me start with a strange but charming article in the Sunday New York Times, written by a 24-year-old market researcher named Theodora Stites. In “Someone to Watch Over Me (on a Google Map),” Stites details her multiple memberships in various online communities. She describes the safety and security of friendships made online due to the distancing effects of computer mediation and jokes about being unable to “log out of” awkward social situations in the physical world, thus prompting her to join Second Life.

Reading the article, I found myself taken aback -- not by the extent of her electronic immersion but by the amount of work (labour, as it were) her routine appeared to entail. As Stites writes, “Every morning, before I brush my teeth, I sign in to my Instant Messenger to let everyone know I'm awake. I check for new e-mail, messages or views, bulletins, invitations, friend requests, comments on my blog or mentions of me or my blog on my friends' blogs.” [i]

This sounds like a lot of effort. I would undoubtedly forget to brush my teeth. Clearly, MySpace’s target demographic of 14-to-24-year-olds has more free time than beleaguered, 30-something grad students. Although I have social networks in the dirt and flesh world, I do not see the utility of an online equivalent.

Of course, it’s hard not to sound like a young fogey when questioning the curious rituals of the younger generation. I’m reminded of novelist Nicholson Baker, who once published a lengthy, impassioned defense of the card catalog in the New Yorker back in 1994. Swimming against the technological tide is often unpopular, but it remains a useful intellectual exercise.

In a recent online interview between danah boyd, a PhD student at the Berkeley School of Information who studies MySpace, and MIT’s Henry Jenkins, social networking sites are described as vital resources for students entering primary and secondary schools. According to Jenkins, “The early discussion of the digital divide assumed that the most important concern was insuring access to information as if the web were simply a data bank. Its power comes through participation within its social networks.” [ii]

Jenkins raises important questions relating to the digital divide and making good on access. But when did joining MySpace or Facebook become a necessity, rather than an option? Did we skip a step? At what point does not being a member of a social network site become a liability? At what point does it become impossible to not be a member?

Journalism about social networking sites underscores this aspect of inevitability. In a recent New Yorker article by John Cassidy, Facebook co-founder Chris Hughes explains that, “If you don't have a Facebook profile, you don't have an online identity.” He went on to say that, “It doesn't mean that you are antisocial, or you are a bad person, but where are the traces of your existence in this college community? You don't exist -- online, at least. That's why we get so many people to join up. You need to be on it.” [iii]

You need to be on it. Where does choice or agency reside in inevitable change? What if I want to decide for myself? Does that make me “slow in my processes?”

Although I’m aware of the irony inherent in the term (you’re reading this article online, after all), I believe that the neo-Luddite movement offers a useful method of reconsidering the importance of social networking sites. Neo-Luddite philosophy provides a small measure of critical distance from the object of study, along with foregrounding questions of technological determinism. In his recent book Against Technology, Steven E. Jones examines the myth of the Luddites, and how those who smashed looms in 1811 and 1812 continue to inspire and inform debates about technology almost 200 years later.

Incorporating a wide range of writers and thinkers, including William Blake, Mary Shelley, Bill Joy, Edward Tenner and Theodore Kaczynski, Jones investigates how the mythology of the Luddites has persevered and reconfigured itself over time. In its most basic iteration, Jones suggests that, “Many people who identify with the term ‘Luddite’ just want to reduce or control the technology that is all around us and to question its utility – to force us not to take technology for the water in which we swim.” [iv]

The problem for would-be loom-smashers, according to Jones, is that “Modern (and now postmodern) technology is routinely understood as an autonomous, disembodied force operating behind any specific application, the effect of a system that is somehow much less material, more ubiquitous, than any mere ‘machinery.’” [v] My technological skepticism does not run deep enough for me to consider acts of rage against the machinery, but I do think it worthwhile to consider the quality of the water we find ourselves swimming in.

Although not a neo-Luddite, Mark Andrejevic, in his examination of webcams, writes of the Digital Enclosure, a concept that is equally relevant when considering social networking sites. According to Andrejevic, “The de-differentiation of spaces of consumption and production achieved by new media serves as a form of spatial enclosure: a technology for enfolding previously unmonitored activities within the monitoring gaze of marketers.” [vi] I like to think of the digital enclosure as a more theoretically robust update of Rockwell’s 1980s hit “Somebody’s Watching Me.”

There is plenty to surveil. According to various studies, young people spend a significant amount of time using Facebook and MySpace. Cassidy points out that “Two-thirds of Facebook members log on at least once every twenty-four hours, and the typical user spends twenty minutes a day on the site.” [vii] Social networking sites might resemble play, but Andrejevic argues that “Consumers generate marketable commodities by submitting to comprehensive monitoring.” [viii] Which makes MySpace and Facebook participation a form of labour, even if it’s invisible to most users.

Andrejevic’s work helps explain why Rupert Murdoch’s News Corporation paid $580 million last year to purchase MySpace. For Andrejevic, the digital enclosure “promises to undo one of the constituent spatial divisions of capitalist modernity: that between sites of labor and leisure.” [ix] Which is to say that 24-year-old Theodora Stites is clearly working two jobs.

Of course, like any theoretical insight, the digital enclosure doesn’t explain everything. I would complement Andrejevic’s work with Angela McRobbie, who has studied how elements of the UK rave scene seeped into the logic of the cultural industries during the 1990s, creating an environment where “the club culture question of ‘are you on the guest list?’ is extended to recruitment and personnel, so that getting an interview for contract creative work depends on informal knowledge and contacts, often friendships.” [x] Without making it explicit, McRobbie is exploring Pierre Bourdieu’s concept of social and cultural capital – that is, the importance of who you know, not what you know. Bourdieu’s concept has been extended by Sarah Thornton (subcultural capital) and Paul Resnick, who created the term sociotechnical capital to describe “productive resources that inhere in patterns of social relations that are maintained with the support of information and communication technologies.” [xi]

Combining agency and sociotechnical capital forces the question: Is there any difference between those excluded from creating a robust social network and those who choose not to participate? How does a neo-Luddite (that is, a conscientious MySpace objector) differ from someone with social network failure? Or, to put it another way, is it possible to communicate intent through a lack of participation?

It appears as though social network sites now offer two polarized options: either the constant, self-generated surveillance of the type described by Stites or the self-negation (“You don’t exist”) that avoidance entails. In a marketplace built on unlimited choice, this lack of options is rather frustrating.

It almost makes you want to smash something …

About the author
Ryan Bigge is completing his Master’s thesis on the transgressive strategies of Vice magazine in the Joint Programme in Communication and Culture at Ryerson University. (rbigge [a] ryerson [dot] ca). His review essay, Making the Invisible Visible: The Neo-Conceptual Tentacles of Mark Lombardi, was published in the Fall 2005 issue of Left History. Ryan has a BA in history from Simon Fraser University.

Zach Devereaux, a doctoral candidate in the Communication and Culture program at Ryerson University, provided invaluable assistance and brainstorming for this paper. Thanks also to Dr. Greg Elmer, Dr. Edward Slopek and Dr. Jennifer Burwell.

[i] Stites, T. (Jul 9, 2006). Someone to Watch Over Me (on a Google Map). New York Times, pg. 9.8
[ii] Jenkins, H. and boyd, d. “Discussion: MySpace and Deleting Online Predators Act (DOPA)” at http://www.danah.org/papers/MySpaceDOPA.html accessed 28 August 2006.
[iii] Cassidy, J. (2006). Me media. New Yorker, 82(13), 50-59.
[iv] Jones, S. E. (2006). Against Technology: From the Luddites to Neo-Luddism. New York: Routledge. p. 231.
[v] Jones, S. E. (2006). Against Technology: From the Luddites to Neo-Luddism. New York: Routledge. pp. 174-175.
[vi] Andrejevic, M. (2004). Little Brother is Watching: The Webcam Subculture and the Digital Enclosure. In N. Couldry & A. McCarthy (Eds.), MediaSpace: Place, Scale, and Culture in a Media Age. New York: Routledge. (Book retrieved electronically)
[vii] Cassidy, J. (2006). Me media. New Yorker, 82(13), 50-59. (Archived version lacks pagination.)
[viii] Andrejevic, M. (2004). Little Brother is Watching: The Webcam Subculture and the Digital Enclosure. In N. Couldry & A. McCarthy (Eds.), MediaSpace: Place, Scale, and Culture in a Media Age. New York: Routledge. (Book retrieved electronically)
[ix] Andrejevic, M. (2004). Little Brother is Watching: The Webcam Subculture and the Digital Enclosure. In N. Couldry & A. McCarthy (Eds.), MediaSpace: Place, Scale, and Culture in a Media Age. New York: Routledge. (Book retrieved electronically)
[x] McRobbie, A. (2002). Clubs to companies: Notes on the decline of political culture in speeded up creative worlds. Cultural Studies, 16(4), 516-531. [p. 523]
[xi] Resnick, P. (2005). Impersonal Sociotechnical Capital, ICTs, and Collective Action Among Strangers in Dutton, W. H. Transforming enterprise : The economic and social implications of information technology. Cambridge, Mass.: MIT Press. (p. 400).


Article on new UK passports

posted by:Carlisle Adams // 10:39 AM // November 17, 2006 // General

Here's a link to an article from the Guardian on the new UK passports:



Data Security: Quit collecting it if you cannot protect it!

posted by:Jennifer Chandler // 11:59 PM // November 14, 2006 // ID TRAIL MIX


We are busily inventing technologies to gather or create personal information “hand over fist.” Not only are we gathering personal information in more and more ways, but we are creating new personal information types.

In some cases, the new technology itself creates a new type of personal information to be gathered (e.g. the snapshot of our personal interests and curiosity that is contained in search engine query history – see Alex Cameron’s recent post). Other technologies enable the collection of personal information that exists independently of the technology (e.g. the various technologies to track physical location and movement, or to use physical attributes in biometrics – as described recently by Lorraine Kisselburgh and Krista Boa in their posts).

The creation of more and more stores of personal information exposes us to the risk of the misuse of that information in ways that harm our security and dignity. In the context of genetic information, consider the risks of genetic discrimination, or the controversy over “biocriminology,” [1] which has developed the idea of the individual “genetically at risk” of offending against the criminal law. Consider also the many uses to which information about one’s brain that is gathered through improved neuro-imaging techniques might be put. [2]

These new forms of personal data collection may solve some compelling social problems, but they will also expose us to risk. I set aside the full range of risks for the purposes of this blog post in order to focus on one in particular. There is ample evidence that we are better at creating stores of data than at securing them. The compromise of data security exposes the individual to the risk of impersonation as well as to the risk that a third party will use the information to draw conclusions about an individual contrary to that individual’s interests.

The impersonation risk is unfortunately now familiar – everyone knows about ID fraud and insurance companies are busily hawking ID theft insurance to protect us from some of the losses associated with it. Today, ID fraud capitalizes upon the most mundane and widespread of identification and authentication systems, including ID numbers, account numbers and passwords. However, the risk is clearly not restricted to these basic systems. Back in 2002, Tsutomu Matsumoto at the Yokohama National University demonstrated how to create “gummy fingers” using lifted fingerprints. These gummy fingers were alarmingly successful in fooling fingerprint readers. [3] All of this underscores the tremendous importance of protecting the security of stockpiles of personal data that can be used in ways to harm the interests and security of the individuals involved.

Our current legal system is woefully inadequate to deal with this problem. Breaches of data security occur so often [4] that they are becoming a bit of a yawn – a numbing effect that should be deplored. A recent Ponemon Institute survey reports that 81% of companies and governmental entities report having lost or misplaced one or more electronic storage devices such as laptops containing sensitive information within the last year. [5] Another 9% did not know if they had lost any such devices.

Although data custodians often seem to claim that the public relations costs of a major security breach are enough of a threat to encourage efforts to promote data security, the evidence makes me wonder if some additional encouragement would not be helpful. One of the key problems with data security is that a large part of the cost of a data security breach may be borne by persons or entities other than the organization responsible for protecting the data from being compromised. Under these circumstances, one would expect the organizations responsible to be inadequately interested in protecting the data.

One of the functions of tort law is to deter unreasonably risky behaviour. If careless data custodians could be held responsible for the damage to others flowing from breaches in the security of personal information under their control, they would be forced to internalize the very real costs of their carelessness.

There have now been a couple of dozen such lawsuits attempted in the United States and two class actions filed in Canada that raise a claim for damages based on the negligent failure to employ reasonable data security safeguards. The success rate so far is low.

One of the key problems facing plaintiffs in these suits is that a claim in negligence is based on a showing of actual harm. Courts will not treat an increased risk of harm as actual harm. This raises the question of how to characterize the insecurity that a data subject feels when his or her sensitive data has been carelessly exposed. Is the harm an anticipated one, namely eventual misuse by an ID fraudster? Or is the harm better understood as a present harm – the immediate creation of an insecurity that imposes emotional harm as well as financial harm (i.e., the cost of self-protective measures such as credit monitoring services, insurance, closing and re-opening accounts and changing credit card numbers)? So far, the courts have held that actual harm occurs only once ID fraud happens.

It is clearly in the interests of the defendant data custodians that liability depend upon a showing of ID fraud because, it turns out, it is usually extremely difficult for a plaintiff to tie the eventual ID fraud to the breach of data security caused by the defendant. Because our personal information is so widely used and so poorly safeguarded by many data custodians, it becomes quite difficult to establish the necessary causal link between the ID fraud and the defendant data custodian. The data custodians are thus well-protected – no liability for a careless breach until ID fraud occurs, and no liability (usually) once ID fraud occurs because “who knows where the unknown fraudster got the data he or she used.”

The plaintiffs in these cases have also attempted another interesting argument in order to try to obtain compensation flowing from data security breaches. They point to the so-called “medical monitoring” cases in which some courts have permitted plaintiffs to recover the costs of medical monitoring after exposure to toxic chemicals (e.g. PCBs, asbestos, and drugs found to have harmful but latent side effects). The plaintiffs in the data security breach context argue that their predicament is analogous. They must bear present costs in order to monitor for the eventual crystallization of the risk into a concrete loss.

One might argue that the policy reasons for permitting recovery in the medical monitoring cases are not present in the data security breach cases. Indeed, the defendants in these cases often argue that human health is a more compelling interest than financial health and so relaxed liability rules that are justified in the medical context are not justified in the data security breach context. In my view, this argument is not as self-evidently correct as the defendants claim. The harmful effects of financial insecurity and fraudulent impersonation on human health and psychological well-being are well-known.

Perhaps the insecurity felt by a plaintiff whose sensitive personal data has been compromised ought to be understood as a present compensable harm in its own right in appropriate cases. When we look to the future and see the kinds of personal data that are being collected and/or created using novel technologies, the insecurity and vulnerability of the data subject takes on a new urgency. Given that choices are being made now about the development of these technologies and will be made soon about their deployment, it seems to me that there is no time like the present to ensure that the full costs of carelessness in the use of these technologies are internalized by those who seek to use them.

Until those who want to collect personal data can figure out how to keep it reasonably secure, they have no business collecting it.

[1] Nikolas Rose, “The Biology of Culpability: Pathological Identity and Crime Control in a Biological Culture,” (2000) 4(1) Theoretical Criminology 5-34.
[2] Committee on Science and Law, Association of the Bar of the City of New York, “Are your thoughts your own? “Neuroprivacy” and the legal implications of brain imaging,” (2005) <http://www.abcny.org/pdf/report/Neuroprivacy-revisions.pdf>.
[3] Robert Lemos, “This hacker’s got the gummy touch,” CNET News.com (16 May 2002) <http://news.com.com/2100-1001-915580.html>.
[4] See the list of major reported security breaches which is maintained at <http://www.privacyrights.org/ar/chrondatabreaches.htm>.
[5] Ponemon Institute, “U.S. Survey: Confidential Data at Risk,” (15 August 2006), sponsored by Vontu Inc., <http://www.vontu.com/uploadedFiles/global/Ponemon-Vontu_US_Survey-Data_at-Risk.pdf#search=%22ponemon%20vontu%22>.


Anonymity: a relative and functional concept

posted by:Giusella Finocchiaro // 11:59 PM // November 07, 2006 // ID TRAIL MIX


Anonymous data are extremely relevant in Italian and European legislation: in fact, these data are not subject to the laws regarding the processing of personal data. This is stated, for instance, in Recital 26 of European Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data. Moreover, anonymity represents the best way to protect privacy and personal data, as has been affirmed on several occasions by the European Commission and the European Council.

Qualifying anonymous data is not, however, a simple operation.

In common language, “anonymous” evokes an absolute concept: without a name.

This concept of anonymity as namelessness, as the etymology of the word reveals, by definition excludes the identity of the subject to which it refers.

That which is anonymous is therefore faceless and without identity. Anonymity thus evokes an absolute lack of connection between a fact or an act and a person.

However, anonymity is often relative to specific facts, specific subjects and specific purposes.

A composition, for instance, may be anonymous for some but not for others, depending on whether or not they know the author.

So the right to be anonymous, where it is recognized, applies to certain subjects, in predefined circumstances and on specific occasions, which may be specified by law.

In Italian law, anonymous data are defined as data which, in origin or after processing, “cannot be associated with an identified or identifiable data subject”. Data can be anonymous from the outset or can be processed so as to be made anonymous.

The key point of this provision is the phrase “cannot be associated”. In which cases can it be deemed that data cannot be associated with a subject? Must this be a physical or a technological impossibility? Whether the impossibility must be absolute or relative has already been clarified by Recommendation No. R (97) 5 of the Council of Europe on the protection of medical data, which states that information cannot be considered identifiable if identification requires an unreasonable amount of time and manpower. Where the individual is not identifiable, the data are referred to as anonymous.

By contrast, the definition of “personal data” in Italian law is “any information relating to natural or legal persons, bodies or associations that are or can be identified, even indirectly, by reference to any other information including a personal identification number”, while the definition given by the European directive is the following: “any information relating to an identified or identifiable natural person ('data subject'); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity”.
In both definitions the criterion is not only actual reference but also the possibility of referring information to a data subject. This referability is measured in relation to the time, cost and technical means necessary to achieve it. The value and sensitivity of the information should also be taken into account: medical data, for example, require a high level of protection. Relating information to the subject to which it refers is a technical possibility; whether it is lawful, however, depends on legal and contractual boundaries.

Relativity is therefore central to the definition: data can be anonymous for some, but not for others.

Likewise for functionality: data can be anonymous for certain uses but not for others.

In conclusion, since personal data can be lawfully processed only for specified purposes by authorised persons, data can be anonymous only for certain people and under pre-defined conditions. Anonymity in the processing of personal data is therefore not an absolute concept: it is, instead, a relative and functional concept.

Giusella Finocchiaro is a Professor of Internet law and private law at the University of Bologna, Italy.



This is a SSHRC funded project:
Social Sciences and Humanities Research Council of Canada