
Technologies of Identification: Geospatial Systems and Locational Privacy

posted by: Lorraine Kisselburgh // 11:59 PM // October 31, 2006 // ID TRAIL MIX


In an increasingly mobile information society, location has become a new commodity, giving rise to technologies such as wireless cell phones, global positioning systems (GPS), radio-frequency identification (RFID), and geographic information systems (GIS). Location technologies make an individual’s movements and activities visible, revealing patterns of behavior that could not be discerned without them. In a typical day’s activities – using a debit card, an electronic toll pass, an automobile’s GPS navigation system, and a cell phone – information about one’s location can be tracked and stored in many ways.

The desire to protect this information is called location privacy, and is based upon Westin’s (1967) notion of privacy as “the claim of individuals … to determine for themselves when, how, and to what extent information about them is communicated to others”, a framework of autonomy or control of information about one’s self. [1] While much literature focuses on informational and relational privacy, locational privacy is less well studied.

Communication tools, transactional cards, personal locator and navigational systems, radio-frequency identification devices, and surveillance cameras can all provide information about one’s location and behavior. Geospatial technologies in particular, such as global positioning systems (GPS) and geographic information systems (GIS), are powerful in their scope and in their capability to converge locational and tracking technologies. Geographic information systems aggregate data and information from multiple sources, including satellite, aerial, and infrared imagery, geodetic information, and “layered” attribute information (such as property records). Like data mining systems, these aggregates combine collected bits of information into valuable and powerful profiles of the objects they describe.

Boundaries of intrusiveness

A growing number of high-resolution satellites provide imagery for GIS systems. These eyes in the sky raise the question of “how close is too close”: at what level (i.e., resolution) do these images become intrusive to individual privacy? High-resolution commercial satellite systems currently allow general features of facilities to be readily observed: the QuickBird system provides 0.6 m GSD satellite images with 1–14 day sampling. At this resolution, features such as buildings, roads, and large objects are visible (for example, see a 0.6 m GSD [2] image of the Washington D.C. airport). GIS systems also include aerial images that provide details at <0.3 m GSD. Thus, precise geolocation information can be discerned in geospatial systems, especially when information is aggregated with other sources.
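
To get a feel for these numbers, a back-of-the-envelope calculation helps: an object’s pixel footprint is simply its size divided by the GSD. A minimal sketch in Python (the object sizes are my own illustrative assumptions, not figures from the article):

```python
# Back-of-the-envelope: how many pixels an object spans at a given
# ground sample distance (GSD). Object sizes are illustrative assumptions.

def pixels_across(object_size_m: float, gsd_m: float) -> float:
    """Approximate number of pixels an object spans along one axis."""
    return object_size_m / gsd_m

for name, size_m in [("person", 0.5), ("car", 4.5), ("building", 20.0)]:
    for gsd_m in (0.6, 0.3):  # QuickBird-class satellite vs. aerial imagery
        px = pixels_across(size_m, gsd_m)
        print(f"{name:>8} at {gsd_m} m GSD: ~{px:.1f} px")

# A building spans dozens of pixels at 0.6 m GSD, a car a handful, and a
# person less than one -- which is why facilities are readily observed
# while individual people are not resolvable at satellite resolutions.
```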

It is tempting to say that only very high spatial resolution is intrusive. But consider a low spatial resolution object (such as a dot representing an individual) overlaid onto a map and then captured in near-real time, i.e., at high temporal resolution. For example, one can identify a teenager’s location on a map and then track his movements in near-real time through GPS data. In this scenario, even without high spatial resolution, one’s behaviors and actions are identifiable: the system can track movements and infer actions and behaviors from them. Thus the combination of high temporal resolution with either low or high spatial resolution identifies, and becomes intrusive, in ways that a single piece of information would not. Both the spatial and the temporal contexts must therefore be evaluated when determining intrusiveness.
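
A toy sketch makes the combinatory point concrete: even when every position fix is coarse, frequent sampling reveals routine. The trace below is entirely hypothetical, with positions already reduced to coarse grid cells:

```python
from collections import defaultdict

# Hypothetical trace: coarse position fixes (think ~100 m grid cells),
# sampled every ten minutes. Spatial detail is low; temporal detail is high.
fixes = [
    (0, "home"), (10, "home"), (20, "road"), (30, "school"),
    (40, "school"), (50, "school"), (60, "road"), (70, "mall"),
    (80, "mall"), (90, "home"),
]

# Summing time spent per cell exposes a daily routine despite the coarse
# locations: where this person lives, studies, and shops, and when.
dwell = defaultdict(int)
for (t0, cell), (t1, _) in zip(fixes, fixes[1:]):
    dwell[cell] += t1 - t0

for cell, minutes in sorted(dwell.items(), key=lambda kv: -kv[1]):
    print(f"{cell}: {minutes} min")
```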

The Real Time Rome project announced last month by MIT illustrates such applications of GIS and visualization tools, using data on cell-phone usage and on pedestrian and transportation patterns to map how urban space is used. While the visualization is based upon aggregated information, individual-level data is collected.

Rights to locational privacy

What rights do we have to locational privacy? In the United States, common law gives rise to four generally recognized privacy torts: (a) intrusion upon a person's seclusion; (b) public disclosure of private facts; (c) publicity in a false light; and (d) misappropriation of one's likeness. However, the public disclosure tort is limited by the clause “if an event takes place in a public place, the tort is unavailable” (Restatement (Second) of Torts 652D, 1977), and the courts have generally ruled that a person traveling in public places voluntarily conveys location information. But courts have also recognized that “a person does not automatically make public everything he does merely by being in a public place” (Nader v. GMC, 1969, 570-71; see also, Doe v. Mills, 1995).

Constitutional protections for privacy, derived from the Fourth Amendment, restrict government intrusion into our personal life through searches of persons, personal space, and information. In the seminal case Katz v. United States (1967), the United States Supreme Court held that government eavesdropping on a man in a public phone booth violated a reasonable expectation of privacy because the Fourth Amendment protects “people, not places.” The Court held that whatever a person “seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected” (389 U.S. 347, 352, emphasis added). This gave rise to the two-pronged test of constitutional protection: whether an individual has exhibited an actual (subjective) expectation of privacy, and whether that expectation is one society is prepared to recognize as reasonable.

Case law has interpreted these locational privacy rights more specifically, examining intrusions of technology into the private sphere, technologically enhanced government searches, and the use of mobile devices and telecommunication information to derive locational information. While Fourth Amendment protection doesn’t extend to that which is knowingly disclosed to the public, the courts have ruled that the use of technologies not available to the general public can violate the privacy one reasonably expects (Kyllo v. U.S., 2001). But courts have shown a willingness to allow law enforcement to use technologically enhanced vision for searches, including flying over a fenced backyard (California v. Ciraolo, 1986), a greenhouse (Florida v. Riley, 1989), or an industrial plant (Dow Chemical v. U.S., 1986), suggesting that the “open fields” doctrine [3] brings no reasonable expectation of privacy. [4]

This protection does not extend to deriving location information from communication devices. Transaction information such as telephone numbers is not protected (Smith v. Maryland, 1979), but providers are prevented from releasing information that discloses the physical location of an individual (CALEA, 1994/2000; U.S. Telecom v. FCC, 2000). However, using mobile communication devices as tracking devices to derive location information is not constitutionally protected (U.S. v. Meriwether, 1990; U.S. v. Knotts, 1983; U.S. v. Forest, 2004), as courts have ruled that individuals using cell phones, beepers, and pagers do not have a reasonable expectation of privacy when moving from place to place. (This interpretation continues to be challenged.)

Furthermore, while the Electronic Communications Privacy Act (1986) protects against unauthorized interception and disclosure of electronic communications (18 USC §§ 2510–22, 2701–11), it excludes tracking devices (§ 3117). However, the Wireless Communications and Public Safety Act (1999) explicitly protects location information in wireless devices (47 USC § 222(f)), requiring customer approval for disclosure. [5] But the Patriot Act (2001) has nullified some of these protections, granting broad authority for government surveillance, including the ability to use roving wiretaps.

In summary, legal protection for location privacy in the United States is inconsistent and sectoral, providing coverage under certain situations and for specific technologies.

Implications

Emerging geospatial technologies, through their power and invisibility, re-architect our public space and change our patterns of disclosure and interaction within it. Architecture regulates the boundaries of accessibility in human interaction. Just as doors and windows raised barriers and expectations of privacy in 17th-century rural villages, modern technologies are lowering those barriers by extending or enhancing human senses (our eyes, ears, and memory). This changes the architecture of our public sphere and shifts our constructions of public-private space and boundaries. These shifts are at odds with our expectations and sense of personal space, producing a sense of intrusion and, in turn, changing how we disclose to and interact with others in this space.

At the same time, the pervasiveness and invisibility of locational technologies leave individuals without control over access to information about themselves. We are unaware of the presence and activity of such technologies, and thus lack autonomy in regulating the boundaries of accessibility. This has implications for how we navigate and negotiate connectivity in the modern world. In addition, the aggregation of information – whether in data mining systems or geographic information systems – creates very powerful identifiers. Whereas a single bit of information may not be threatening, aggregated bits constitute a pattern of behavior or a profile that can reveal a great deal and threaten one’s privacy and liberty.

Thus, the unique threats of geospatial systems as technologies of identification are based on two primary factors: a) aggregated data creates very powerful identifiers; and b) the invisibility of data collection and use results in a loss of agency in the process by which we are identified. These in turn influence how we interact in our society, and by extension, the construction of our identities.

This raises questions that require further study: What do these technologies of identification mean for our construction of identity in digital realms? That is, when technologies extend human senses, what happens to our construction of personal space and retreat, and our concept of reasonable expectations of privacy? Further, under the current legal framework, how do we address new constructions of space (e.g., reconnaissance of space above private property), new technologies of intrusion (e.g., infrared, RFID, GPS, GIS), and new constructions of scope (e.g., aggregated information)? Additional research is needed to understand how individuals define these ambiguous boundaries, our expectations of private space, and the mechanisms by which we negotiate shifting boundaries in the face of emerging locational technologies.


[1] Westin, A. F. (1967). Privacy and Freedom. New York: Atheneum.
[2] GSD (ground sample distance) refers to the distance on the ground represented between adjacent pixel centers in digital imagery.
[3] See Hester v. United States, 265 U.S. 57 (1924) and Oliver v. United States, 466 U.S. 170 (1984) for a discussion of the “open fields doctrine” which suggests that constitutional protection is not extended to the open fields.
[4] Curry, M. (1996). In plain and open view: GIS and the problem of privacy. Paper presented at the Conference on Law and Information Policy for Spatial Databases, Santa Barbara, CA.
[5] Edmundson, K. E. (2005). Global positioning system implants: Must consumer privacy be lost in order for people to be found? Indiana Law Review, 38.

Lorraine Kisselburgh is a doctoral student in Media, Technology, and Society (Department of Communication) at Purdue University. Portions of this article were presented at the NYU Symposium on “Identity and Identification in a Networked World” and at the International Communication Association in Dresden, Germany, and have been submitted for publication in the “ICA 2006 Theme Session Proceedings.” The author wishes to acknowledge the support of Eugene Spafford (Department of Computer Science, Purdue University) in the conceptualization of this project.


Why Definitions Matter: an Example Drawn from Davis on Privacy

posted by: Jason Millar // 02:09 PM // October 17, 2006 // ID TRAIL MIX


Concepts inform our interpretations of the world. As such, their definitions are important for our common understanding. On a multidisciplinary project like the Identity Trail, confusion over definitions can undermine our ability to discuss issues that rest on complex concepts like privacy. Along these lines I would like to comment on one philosophical project undertaken by Steven Davis during his trip down the Identity Trail, namely his attempt to find a definition of privacy, as outlined in his forthcoming publication (initially entitled) “Privacy, Rights, and Moral Value”. For those who have not (and will not) read the paper, I will offer a preamble on the general problem at hand.

The preamble: Haven’t we heard this before!?

Much of my time on the Identity Trail has exposed me to a number of multidisciplinary perspectives on privacy. Some of those perspectives are legal, offering descriptions of how current laws are challenged by the various privacy-implicating technologies being used and created every day. Others are sociological, describing how technologies are approached and used, with specific focus on the effects or implications of privacy in technologically mediated interactions. Still others are technologically focused, proposing interesting privacy-enhanced/enhancing technologies, often as (partial) solutions to many of the current problems highlighted in the legal and sociological streams of the project. Of course, this description fails to capture the breadth of privacy research being performed on the Identity Trail [1], but it is sufficient to point to a common thread underpinning the work, namely the general concept of privacy.

For anyone interested in understanding privacy, our agreement on the nature of the general term has implications for how we discuss the theories or issues that rely on it (like those mentioned above), just as we must understand what is meant by the word ‘equality’ in order to have a meaningful discussion about laws or public policies that implicate it. Of course, even the importance of understanding the nature of privacy generates much debate in and among the various fields concerned. Exasperated privacy advocates argue that we could better spend our time crafting new policies to deal with the existing backlog of relatively uncontested privacy concerns, while at the other end of the spectrum academic theorists—philosophers and the like—seem uneasy (as they tend to be) about the grounds upon which the issues are being fought. However, it is clear that arguments centered on privacy, in whatever discipline they reside, rely to some degree on an understanding of the general concept of privacy for their force. Whether the parties are content to implicitly borrow concepts of privacy already established in the literature, or act to modify them (explicitly or implicitly) in response to new research, some particular version of the concept of privacy is nonetheless present in the arguments. Oftentimes discussions and disagreements over the particulars of laws, policies or technologies are largely motivated by disagreements over the particulars of the concepts underscoring them. This should not ring controversial. If we are to agree on the implications of privacy in ethics, law, technology or elsewhere, we can make progress by engaging the concept explicitly on some level, given its omnipresence in the discourse. With that in mind, it is a valuable undertaking to pose the question, “What is the nature of privacy?”, even for those who care about privacy issues but not about philosophy [2].

Davis’ Definition of Privacy and Some Implications

In response to Davis’ definition I will focus on a tension that it draws out between one’s own preferences and others’ preferences. I believe the tension points to interesting consequences in our understanding of how generalized privacy laws operate relative to the operation of our individual notions of privacy.

Davis defines privacy as follows:

In society T, S, where S can be an individual, institution, or a group, possesses privacy with respect to some proposition, p, and individual U if and only if

(a) p is personal information about S.
(b) U does not currently know or believe that p.

In society T, p is personal information about S if and only if most people in T would not want it to be known or believed that q, where q is information about them which is similar to p, or S is a very sensitive person who does not want it to be known or believed that p. In both cases, an allowance must be made for information that most people or S make available to a limited number of others.
...

Consider the following scenario. On Saturday, Jane is not sensitive about others knowing her sexual orientation. Other people are able to ascertain her sexual orientation though she never offers it up, and other people, in fact, do ascertain her sexual orientation. In addition, most people in Jane’s society are also not sensitive about others knowing their sexual orientation on Saturday. For some reason, on Sunday most people in Jane’s society develop a severe sensitivity to the idea of others coming to know their sexual orientation. Jane does not develop a similar sensitivity on Sunday, and other people continue to ascertain Jane’s sexual orientation through no action on her part.

On Davis’ account Jane suffers a loss of privacy sometime on Sunday. This seems counterintuitive. Jane’s privacy is linked to sensitivities that others develop—the fact that they stop wanting their sexual orientation to be known is presumably due to some sensitivity to the information—without her having to develop the sensitivity on her own. I will call this type of sensitivity a privacy preference, since the definition links preferences about which information is personal, and which is not, directly to the notion of privacy. In this case the privacy preferences of others seem to place some sort of demand on Jane, though it is not clear what the nature of this demand is. Perhaps it suggests that she should consider her sexual orientation to be a sensitive topic. Whatever the case may be, Jane’s continued indifference to the fact that others are able to ascertain her sexual orientation must be squared with the demand resulting from the claim that Jane has suffered a loss of privacy on Sunday due to the privacy preferences of others.
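
To see the flip mechanically, one can encode a simplified reading of the quoted definition as a predicate. This is only a sketch of the definition as I read it (the allowance for information shared with a limited number of others is omitted), not Davis’ own formalization:

```python
# Sketch of the quoted definition, simplified. All names are illustrative.

def is_personal(majority_sensitive: bool, s_sensitive: bool) -> bool:
    # p is personal information about S iff most people in T would not want
    # similar information about themselves known, OR S is very sensitive.
    return majority_sensitive or s_sensitive

def has_privacy(majority_sensitive: bool, s_sensitive: bool,
                u_knows_p: bool) -> bool:
    # S possesses privacy w.r.t. p and U iff p is personal information
    # about S and U does not currently know or believe that p.
    return is_personal(majority_sensitive, s_sensitive) and not u_knows_p

# Jane: others know her orientation on both days; only the majority's
# sensitivity changes overnight. Jane herself never becomes sensitive.
for day, majority in (("Saturday", False), ("Sunday", True)):
    personal = is_personal(majority, s_sensitive=False)
    privacy = has_privacy(majority, s_sensitive=False, u_knows_p=True)
    print(f"{day}: personal={personal}, privacy={privacy}")

# Saturday: personal=False -- the information is not yet personal, so
# privacy is not at stake. Sunday: personal=True, privacy=False -- Jane
# now lacks privacy with respect to p, through no change of her own.
```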

This tension seems even more problematic when we note that one’s own control over personal information features heavily in the definition yet is undermined by it. Not wanting others to know p is at the core of both the sensitive S’s notion of personal information and the majority’s. The disjunction in the definition of personal information causes problems in that Jane apparently suffers doubly on Sunday: she suffers a loss of privacy due to the shifting privacy preferences of others while at the same time suffering a loss of control over the very nature of information about her. Though the shifting nature of the information may not strike one as something over which one needs to maintain control, many privacy theorists have placed a premium not just on control of the flow of information, but also on control of its nature, in order to maintain the contextual integrity that is seen as necessary for privacy [3]. I would suggest further that a loss of control over the scope of personal information is what leads to the strange new demand that is apparently placed on Jane.

I think we can understand where the demand plays out by addressing an underlying tension between the law’s need for a normative conception of privacy and individuals’ need to navigate privacy on their own terms. As a legal (largely instrumental) definition of privacy, I think Davis’ account gains considerable traction [4]. If a majority of individuals feel that certain information is personal in that they are sensitive to others coming to know it indiscriminately, and if there is a demonstrable harm associated with others coming to know it, then the law can justify prohibiting people from trying to come to know personal information.

However, Davis’ definition of privacy loses traction on the level of the individual. If Jane does not consider a privacy loss to have occurred, the normative claim placed on her by society (and the law) will not change this. The result is that we must ask whether privacy, as defined by Davis, addresses the same kind of transgression, namely the moral kind, that our concern for personal control over information seeks to protect us against. Privacy laws, insofar as they can be invoked where individuals suffer harm, certainly address moral privacy concerns. But a focus on the legal/instrumental conception of privacy and control over personal information ignores the sensitivity that motivates our individual, moral privacy concerns in the first place. If Jane does not feel that her privacy has been violated on Sunday, then the moral notion of privacy may necessarily differ from the legal one, if only so the law may function efficiently.

It has been suggested on the Identity Trail that many people don’t seem to care about their privacy [5]. A great deal of the resulting research has focused on trying to understand why this seems to be the case. Perhaps one factor is that we mistake the legal notion of the concept for the moral one when evaluating the sensibility of people’s actions in certain contexts. Understood this way, the assertion that Jane has suffered a loss of privacy may be isolated to legal concerns. Convincing Jane otherwise may do nothing to secure her privacy.

Notes:
[1] It undoubtedly also fails in its attempt to describe the nature of the work being done in the various streams by the various researchers. To that end I would invite everyone reading this entry to browse the research that has accumulated on the Identity Trail in order to appreciate the full scope of it.
[2] Several collaborators on the Identity Trail have done this explicitly, including Marsha Hanen, Steven Davis and Dave Matheson, to name a few. Others have offered research into privacy implicating activities or technologies, always (I think) with an implicit view to informing or reaffirming our understanding of the concept.
[3] Nagel, T. (1998). Concealment and exposure. Philosophy and Public Affairs, 27(1), 3-30.; Nissenbaum, H. (1998). Protecting privacy in an information age: The problem of privacy in public. Law and Philosophy: An International Journal for Jurisprudence and Legal Philosophy, 17(5-6), 559-596.; Rachels, J. (1975). Why privacy is important. Philosophy and Public Affairs, 4, 323-333.; Scanlon, T. (1975). Thomson on privacy. Philosophy and Public Affairs, 4, 315-322.
[4] I invite the legal theorists to correct me in my discussion of the nature and function of laws if they feel compelled to do so.
[5] For example, Jaquelyn Burkell in this ID Trail Mix piece.



Bouquets and brickbats: the informational privacy of Canadians

posted by: Jeffrey Vicq // 11:59 PM // October 03, 2006 // ID TRAIL MIX


Recently, I spent some time examining the Canadian data brokerage industry.

In the last several years, a number of scandals in the US data brokerage industry made American companies like ChoicePoint and DocuSearch household names, even in many Canadian homes. American journalists prepared several interesting and extensive exposés describing, in rich detail, the sometimes messy results of the marriage of technology and data in the name of convenience, commerce and security.

Yet, the activities of the industry’s players in this country have traditionally been less well understood. Accordingly, working as part of a team under the direction of the talented Pippa Lawson at CIPPIC, a number of us sought to gain a better understanding of the Canadian data brokerage industry—identifying its key players, determining the types of information commonly made available, and tracking personal data as it flowed from consumer to compiler, and from broker to buyer. The final report was quietly released earlier this summer.

In the course of our investigations, I frequently found myself reflecting on two broader questions. First, I wondered how the law could best protect the personal information of Canadians—and by extension the privacy of Canadian citizens—in the Canadian marketplace. Examining the data brokerage industry afforded me the opportunity to consider the effectiveness of privacy legislation in the face of an industry whose sole purpose is to assemble and trade personal information about Canadians. Second, I wondered who bears the greatest responsibility for the slow erosion of personal informational privacy that has occurred in Canada over the last several decades. Considering how data on Canadians is collected, compiled, distributed and used in the data brokerage industry let me weigh culpability from several perspectives.

Given that Parliament has recently reconvened for the fall sitting—and cognizant that PIPEDA, the federal private sector privacy legislation in force in much of the country, is due for review—I thought I might offer up a few thoughts on these points.

With respect to the protection of personal information, it is clear that Canadians enjoy greater informational privacy than our US counterparts—thanks primarily, it would appear, to the impact of private sector privacy legislation. There is seemingly less information available for purchase online about Canadians than about Americans [1], and several companies claim to have curtailed operations or ceased operating altogether in Canada following the introduction of Canada’s private sector privacy legislation. Using provisions contained in the legislation, Canadian consumers can learn what information Canadian companies hold about them and can seek the correction of errors in those records—rights which are unknown to American consumers. In this light, Canada’s data protection laws are arguably the single most valuable instrument available for the protection of Canadian informational privacy.

But these laws are not perfect. This legislation—most glaringly PIPEDA—is hamstrung by the absence of robust enforcement provisions. During my time in private legal practice, it was an all-too-common occurrence that once a client was apprised both of the extensive obligations of the legislation and of the ramifications of non-compliance, the client would elect to ignore the law. And there is reasonably good evidence to suggest that private sector organizations that have attempted to comply with the legislation have done so poorly: see, for example, CIPPIC’s recently published study examining the compliance (or relative lack thereof) of retailers with Canada’s data protection laws. The legislation’s lack of a robust enforcement mechanism undoubtedly plays a role in the high rates of non-compliance CIPPIC found.

To a lesser extent, Canada’s private sector privacy laws have also been maligned for the way they define “personal information.” These definitions limit the “personal information” to which the laws apply to information about “identifiable individuals”; information that has been “anonymized” accordingly falls outside the scope of the legislation. However, data anonymity specialists (including the terrific Latanya Sweeney) have been demonstrating for some time the relative ease and accuracy with which “anonymized” information can be reconnected to identifiable individuals.
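
Sweeney’s best-known demonstrations linked “anonymized” records back to named individuals through quasi-identifiers such as postal code, date of birth, and sex. A toy sketch of the linkage idea, on entirely fabricated data:

```python
# Toy re-identification sketch in the spirit of Sweeney's work: an
# "anonymized" dataset (no names) is joined to a public one (names, no
# sensitive data) on shared quasi-identifiers. All records are fabricated.
anonymized = [
    {"postal": "K1A 0B1", "dob": "1970-03-02", "sex": "F", "dx": "diabetes"},
    {"postal": "K2P 1L4", "dob": "1981-11-19", "sex": "M", "dx": "migraines"},
]
public = [
    {"name": "A. Tremblay", "postal": "K1A 0B1", "dob": "1970-03-02", "sex": "F"},
    {"name": "B. Singh",    "postal": "K2P 1L4", "dob": "1981-11-19", "sex": "M"},
]

quasi = ("postal", "dob", "sex")
index = {tuple(r[k] for k in quasi): r["name"] for r in public}

for rec in anonymized:
    name = index.get(tuple(rec[k] for k in quasi))
    if name:  # a unique quasi-identifier combination re-identifies the record
        print(f"{name}: {rec['dx']}")
```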

Interestingly, my own research into the data brokerage industry indicated that many of these companies are not particularly concerned with the granularity of the information they attribute to individual citizens. For example, several Canadian data compilers rely on data—like public use microdata files—that Statistics Canada makes available and considers “sufficiently anonymized or aggregated to be made publicly available.” Absent the services of someone like Dr. Sweeney, it may indeed be difficult to connect this information to a particular household. However, these data compilers use the aggregated information (like mean household income for dwellings located in a particular postal code set) to attribute characteristics to all households in the set. This information—which on a household-to-household basis may be erroneous—is nonetheless usually of sufficient accuracy for marketing purposes. As such, despite Statistics Canada’s anonymization efforts, this information is still being used by marketers as personal information, to build broader and richer—if somewhat fuzzy—profiles of Canadians.
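
The attribution practice described above amounts to copying an area-level statistic onto every household in the area. A minimal sketch, with fabricated figures:

```python
# Hypothetical sketch of aggregate attribution: a mean household income
# published per postal-code prefix is imputed to every household in it.
area_stats = {"K1A": 62_000, "K2P": 87_000}  # fabricated aggregates

households = [
    {"address": "12 Elm St",  "postal": "K1A 0B1"},
    {"address": "34 Oak Ave", "postal": "K2P 1L4"},
]

for h in households:
    prefix = h["postal"][:3]
    # Wrong for any given household, but accurate enough for marketing.
    h["est_income"] = area_stats.get(prefix)
    print(h)
```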

Given this, some in the privacy community have suggested that the definition of “personal information” should be amended to include all information about an individual, whether identifiable or not. I am not confident, however, that this would represent a feasible or practical response to the problems created by the use of anonymized or aggregated information to impute characteristics to Canadian households. That issue might better be addressed by legislation that precludes the use of data for certain purposes, as opposed to the wholesale revision of the definition of “personal information” itself.

These (and admittedly other) shortcomings aside, Canada’s privacy legislation has been a valuable tool for protecting the informational privacy of Canadian citizens. With certain amendments, the legislation could come to represent a truly effective set of tools to be used in the fight to protect the informational privacy still enjoyed by Canadians.

However, these tools will only be effective if the activities of the culprit primarily responsible for the erosion of the informational privacy of Canadians can be stymied. “Who is this culprit?”, you may ask. There is—both unfortunately and perhaps unsurprisingly—an abundance of candidates among the actors and factors that have significantly affected the informational privacy of Canadians in the last decade: the abundance of cheap and powerful digital database technologies, the growth of the internet, the emergence of the data brokerage industry and the development of a culture of fear in the US, to name but a few.

However, I believe the primary culprits responsible for the erosion of informational privacy are, in fact, Canadians themselves.

In examining the sources of the data commonly exchanged in the data brokerage industry, I was astounded to discover how much sensitive data is provided willingly and openly—for little or no consideration—by Canadians. Admittedly, there are a number of collection vehicles wherein the language used to explain the purpose of the collection and the planned use of the data is vague and/or misleading—if any language is used at all. But there were a remarkable number of occasions where the collection vehicles used clear and unequivocal language to explain the reasons for collection and use, and Canadians still appeared to respond in droves. The examples are numerous: Canadians complete surveys and questionnaires on sensitive topics, enter contests or offers that request extensive information about buying habits or preferences, and obtain free product samples in exchange for providing their personal details. The most recent iteration of one survey used extensively in the Canadian market is over 91 pages long, asking an exhaustive list of sensitive and highly personal questions about the respondent. [2] While consumers are often offered coupons or contest entries in exchange for completing the survey, many surveys offer no reward at all.

The aforementioned collection vehicles are examples of circumstances where it should be reasonably clear to respondents (certainly if the data collector is complying with the requisite legislation) that they have little to gain by disclosing their valuable personal information. Less clear, perhaps, are those circumstances where information is collected from Canadians contemporaneously with the acquisition of goods or services, whether over the internet or via traditional channels. Book, music and movie clubs, along with newspaper and magazine publishers, are fertile sources of information about the hobbies and interests of Canadians. General retailers and service providers are also rich sources.

Drawing on all of this, data brokers have accrued and trade in a broad range of information on many Canadians, including marital status, age, religion, income, property ownership, investments, health information, habits, interests, diet and credit card ownership, amongst others. One Canadian data broker claims to have a file containing the names of 8.7 million Canadians organized by preferred genre of book; 8.1 million organized by hobby; and another 3.1 million organized by the types of financial investments they own and plan to purchase. Another broker offers information on households in which one or more members has experienced any one of a variety of health conditions, including ADHD, arthritis, bedwetting, depression, diabetes, heart or kidney disease, high blood pressure or cholesterol, lactose intolerance, macular degeneration, migraines, neck pain, nut allergies, and urinary tract and yeast infections.

All of this information has been, for the most part, willingly provided by Canadians. And while much has been written about growing public concerns about privacy, the actions of Canadians do not accord with their purported fears. A survey conducted by Forrester Research in 2005 found that “…while 86% of consumers admitted to discomfort with disclosing information to marketers, they participated in online surveys and research for free products or coupons, and entered competitions or sweepstakes at rates nearly equal to consumers who aren’t as concerned. [emphasis added]” [3]

Given this, it is Canadian citizens themselves whom I see as posing the single greatest threat to their own informational privacy. The interests of Canadians do not appear to accord with their actions in this respect, which I take to be the product of a lack of education about how individuals can take more responsibility for protecting their own personal information. There is no question that being privacy savvy takes time and energy. However, the public must be invested with some of the responsibility for safeguarding their own personal information; otherwise, personal data privacy will continue to erode, despite the most finely crafted legislation, the efforts of the Privacy Commissioners and the lobbying of privacy advocates.

In this respect, government does have a role to play in educating the public about why informational privacy is important, and how personal information can be protected. In addition to making the changes to PIPEDA outlined above, government might also work with industry to develop and require the use of short uniform privacy policy templates, which would enable citizens to review and compare organizations’ privacy policies more quickly.

Similarly, those of us who appreciate the importance of data privacy have obligations as well. We must resist the all-too-common predilection to “preach to the choir,” and instead make a concerted effort to educate the public about the importance of personal information privacy. An educated and engaged public can be far more effective in protecting its own informational privacy interests than even the most well-funded Privacy Commissioner or privacy advocate.

[1] In this context, I am considering information that is extant and generally available for purchase, as opposed to the use of the internet to contact parties who might—via pretexting or other means—obtain detailed information about an individual.
[2] It should be noted that this information is not typically made available with names and addresses attached; rather, it is released in an aggregated format.
[3] See "Privacy worries don't keep consumers out of online surveys and promotions" (Jan.30, 2006) Internet Retailer, http://www.internetretailer.com/dailyNews.asp?id=17434.

Jeffrey Vicq is a lawyer and consultant, and a candidate in the Master of Laws (with Concentration in Law and Technology) program at the University of Ottawa.

