
Implanting Dignity: Considering the Use of RFID for Tracking Human Beings

posted by:Angela Long // 11:59 PM // March 27, 2007 // ID TRAIL MIX


* This piece is a summary of the arguments contained in a longer paper that is currently a work-in-progress.

Debate is currently raging over the use of radio frequency identification devices (RFIDs) as a method of identifying unique entities. To date, however, that debate has centered on the general privacy concerns raised by the use of RFIDs. [1] While the privacy implications of RFID use are important, equally important are the unique implications of RFIDs for human dignity. These concerns are especially relevant now that implantable RFIDs have been approved for medical use in the United States. [2] The VeriChip, an implantable RFID manufactured by Applied Digital Solutions, is being marketed to hospitals and doctors as a way of quickly identifying unconscious patients in the emergency room. Implantable RFIDs have also been used, or proposed, for a variety of non-medical purposes, such as tracking English football players and migrant workers in the US. [3] In the non-implantable context, RFIDs are currently being used to monitor patient compliance in pharmaceutical trials, i.e. to ensure that patients are taking their drugs properly. [4] The same approach could easily be applied to patients with mental illnesses who are subject to a community treatment order, to ensure that prescribed drugs are being taken.

It seems likely, then, that the potential uses for implantable RFIDs will only increase in the future. Indeed, as the examples above illustrate, the use of RFIDs, both external and implantable, could shift from a voluntary and consensual model to one that is neither voluntary nor consensual, a prospect that should trouble not only those concerned about privacy but also those concerned about ethics more generally. It is thus imperative to examine in more detail the ethical concerns, concerns about how we treat other human beings, that surround the use of implantable RFIDs.

Many of the same privacy arguments made in the context of non-implantable RFIDs apply equally to implantable RFIDs. However, there is an additional factor in the implantable context that raises our moral antennae: something more than the typical informational privacy and anonymity concerns articulated by those writing on RFIDs generally, something unique to RFIDs that are implanted in human beings or otherwise used to track their actions and movements, and something that has not yet been accounted for in the existing literature. [5] In the popular media, this additional factor has been casually described as a concern for ‘human dignity’. Thomas C. Greene articulates it like this:

Unique RF identity chips and concealed RF readers everywhere: madmen have been complaining about this since the earliest days of radio. That’s how we knew they were madmen. Only an IT industry divorced from any sense of good taste and human dignity, in which technology becomes an end in itself, could strive to make the nightmares of the insane a common reality. And yet, here we are. [6]

And, as stated by Cédric Laurant, Policy Counsel at the Electronic Privacy Information Center:

Monitoring children with RFID tags is a very bad idea. It treats children like livestock or shipment pallets, thereby breaching their right to dignity and privacy they have as human beings. [7]

While this concern for ‘human dignity’ has been raised, it has not been explored in any philosophical or legal depth in the academic literature, and so it remains, to some, mere rhetoric. Such an exploration, however, is necessary in order to properly articulate the concerns these writers have raised. It is also important to consider how a dignity-based analysis relates to, or even encompasses, our concerns about privacy and anonymity in the implantable RFID context, allowing for a new discourse on the myriad concerns surrounding RFIDs that track the movements and actions of human beings. Such a discourse matters in the legal context because human dignity, unlike privacy, has been continually recognized as one of the underlying principles of the Canadian legal system, as enshrined in the Charter of Rights and Freedoms. Framing the tracking of human activity through RFIDs as an infringement of human dignity would therefore ground a legal argument against such uses in one of the most fundamental values of Canadian law, making that argument considerably stronger and likely more effective.

Human dignity is a concept with longstanding meaning both in philosophy and in law, most notably as the basis for modern human rights law, although it is not particularly well defined and often carries very different meanings in different contexts. [8] Most recently, the concept has received renewed attention in the field of bioethics, with scholars striving to get to the root of the concept, to determine how it is used by law and policy makers, and to identify the ‘correct’ conception of the term. The most widely accepted theory of human dignity is based on Kantian deontological philosophy, in which dignity is viewed as the “essence of humanity” [9] that gives each human being intrinsic worth by virtue of possessing a certain quality or qualities (usually agency or autonomy). On the basis of this intrinsic worth, all human beings are to be accorded respect and are to be treated as ends in themselves, never merely as means to an end. The use of both implantable and external RFIDs to track the actions and movements of human beings clearly betrays this imperative: it uses human beings to achieve ends unrelated to the well-being of the subjects themselves, ends usually related to the accumulation of information, information which may in fact be used against the very person about whom it is collected.

Given that Canadian law aims to protect people from violations of their human dignity, at the very least from intrusion by the state under the Charter, any attempt by the state to use RFIDs in a non-consensual and non-voluntary manner may well be contrary to Canadian legal values and could run the risk of being declared of no force and effect under s. 52(1) of the Constitution Act, 1982.

[1] See e.g. Katherine Albrecht & Liz McIntyre, Spychips: How Major Corporations and Government Plan to Track Your Every Move with RFID (Nashville: Nelson Current, 2005); Laura Hildner, “Defusing the Threat of RFID: Protecting Consumer Privacy Through Technology-Specific Legislation at the State Level” (2006) 41 Harv. Civil Rights-Civil Liberties L. Rev. 133.
[2] U.S. Department of Health and Human Services, Food and Drug Administration, 21 CFR Part 880 [Docket No. 2004N-0477] “Medical Devices; Classification of Implantable Radiofrequency Transponder System for Patient Identification and Health Information” (10 December 2004), online: <http://www.fda.gov/ohrms/dockets/98fr/04-27077.htm>. Although most obviously relevant to implantable RFIDs, human dignity concerns are equally implicated in the external use of RFIDs where the specific use is to track the human beings to whom they are linked. One example of such a use in which human dignity concerns were raised is the case of Brittan Elementary School in Sutter, CA, where students were outfitted with RFID tags worn around their necks. Their movements inside the school were tracked by hand-held computers kept by the teachers. See e.g. Garry Boulard, “RFID: Promise or Peril?” State Legislatures (December 2005) 22 at 22.
[3] With respect to tracking migrant workers in the US, see online: LiveScience <http://www.livescience.com/scienceoffiction/060531_rfid_chips.html>. It has also been suggested that RFIDs be used to track soccer players’ on-field movements: see online: Manchester Evening News <http://www.manchestereveningnews.co.uk/news/s/217/217056_man_utd_plan_to_chip_players.html>.
[4] See online: Med-IC Digital Package <http://www.med-ic.biz/certiscan.shtml>.
[5] For example, while Dr. John Halamka discusses the privacy implications of the VeriChip, he appears to do so only within a strict informational privacy analysis, which, in the context of something being implanted into the body, seems somewhat lacking. John Halamka, “Straight from the Shoulder” (2005) 353 New Engl. J. Med. 331.
[6] Thomas C. Greene, “Feds Approve Human RFID Implants” The Register 14 October 2004, online: The Register <www.theregister.co.uk/2004/10/14/human_rfid_implants/>.
[7] Mark David, “Implantable RFID May Be Easy, But That Doesn’t Mean It’s Ethical”, online: Electronic Design <http://www.elecdesign.com/Articles/Index.cfm?AD=1&ArticleID=14794>.
[8] In the bioethical context, see e.g. James F. Childress, “Human Cloning and Human Dignity: The Report of the President’s Council on Bioethics” (2003) 33:3 Hastings Center Report 15 at 16 and Timothy Caulfield, “Human Cloning Laws, Human Dignity and the Poverty of Policy Making Dialogue” (2003) 4:3 BMC Medical Ethics 2.
[9] Deryck Beyleveld & Roger Brownsword, Human Dignity in Bioethics and BioLaw (Oxford: Oxford University Press, 2001) at 64.



Someone has their identity stolen every 4 seconds

posted by:Jeremy Hessing-Lewis // 03:31 PM // March 23, 2007 // Digital Identity Management | General

The Economist has a sponsored article on Identity Theft. Quoting:

A complete identity package, including a permanent resident card (or green card) and a social security card, goes for $150 and takes about 40 minutes to deliver. Armed with those, an illegal immigrant can apply for a driving licence, acquire a bank account, rent an apartment and get a legitimate job.

Full article available HERE.



Kim Cameron and the Seven Laws of Identity

posted by:Jeremy Hessing-Lewis // 03:10 PM // March 20, 2007 // Digital Identity Management

The Globe is running an interview with Kim Cameron, Microsoft's "Chief Architect of Identity" and author of the Seven Laws of Identity. Quoting:

KIM CAMERON: I thought we needed a multi-centred approach to identity, a user-centric one. My blog was well known, and they chose to put me in a position where I could have a growing influence. But Microsoft is so big, over 60,000 people, and they're very focused. But they were reading my stuff as much as people from outside Microsoft. We all wanted to know how we go forward from this.

Full interview available HERE.



Anonymity on Wikipedia: Strength or Weakness?

posted by:Jeremy Hessing-Lewis // 10:04 AM // March 16, 2007 // Digital Identity Management | General

The Economist.com reports on the recent revelation that one of Wikipedia's top contributors, Essjay, turned out to be a 24-year-old college dropout rather than the professor of religious studies he claimed to be. Still, anonymity lends itself to a meritocratic system despite its potential for misuse. Quoting:

That anonymity creates a phoney equality, which puts cranks and experts on the same footing. The same egalitarian approach starts off by regarding all sources as equal, regardless of merit. If a peer-reviewed journal says one thing and a non-specialist newspaper report another, the Wikipedia entry is likely solemnly to cite them both, saying that the truth is disputed. If the cranky believe the latter and the experts the former, the result will be wearisome online editing wars before something approaching the academic mainstream consensus gains the weight it should.

Complete article available HERE.



In summary... (The Economist's Technology Quarterly)

posted by:Jeremy Hessing-Lewis // 03:06 PM // March 15, 2007 // Commentary &/or random thoughts | General | Walking On the Identity Trail

This blog, at its best, can be an excellent distillery. As part of a multidisciplinary project, the idea is to influence each other by sharing incremental developments in our respective fields. Unfortunately, time constraints often narrow our academic focus down to headlines. This gap is mended by forced confrontation during workshops and conferences. The blog operates in between these encounters as a distillery producing a palatable exchange of soundbytes. In light of this raison d'etre (accents are difficult in MovableType), let me offer a distilled techno-update drawn from The Economist's Technology Quarterly.

The full report is available HERE. Distilling after the jump....

1. Call and response
Next-generation call centres with sophisticated "speech analytics" to be deployed as chatbots.
Soundbyte:

Dr Brahnam has also found that the appearance of the chatbot's on-screen persona, or avatar, has a significant impact on how much abuse is leveled at it. "My study showed that you get more abuse and sexual comments with a white female compared with a white male," she says. Black female avatars were the most abused of all.

2. Working the crowd
New start-ups let users install software that tracks their online habits; the resulting data can then be sold through a data market, with a commission going to the software vendor. This is essentially Google's business model, but for entrepreneurial individuals. A rough sketch of the revenue split follows the excerpt below.
Soundbyte:

In effect, Google users trade personal information in return for free use of Google's online services. But some people think this is a bad deal. They think the personal information is worth far more than the services that Google and others offer in return. Seth Goldstein, a serial entrepreneur based in San Francisco, believes that the personal information contained in users' click trails, online chats and transactions is something they ought to take hold of and sell themselves, generating direct payback. “Attention is a valuable resource, and we're getting to the point where it can be parsed in real time,” he says. So he has co-founded a new venture called AttentionTrust.
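
To make the model concrete, here is a minimal Python sketch under my own assumptions; the class names, prices and the 15% commission are invented for illustration and are not AttentionTrust's actual design. The user's tracking software records a click trail, and each sale of that trail through the data market is split between the user and the vendor.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class ClickEvent:
        timestamp: str   # when the page was visited
        url: str         # what was visited

    @dataclass
    class AttentionProfile:
        user_id: str
        events: List[ClickEvent] = field(default_factory=list)

        def record(self, event: ClickEvent) -> None:
            # The tracking software the user installs logs each visit locally.
            self.events.append(event)

    def settle_sale(sale_price: float, vendor_commission: float = 0.15) -> Tuple[float, float]:
        # Split the proceeds of one profile sale between the user and the vendor.
        vendor_cut = round(sale_price * vendor_commission, 2)
        return sale_price - vendor_cut, vendor_cut

    profile = AttentionProfile(user_id="u123")
    profile.record(ClickEvent("2007-03-15T10:02:00", "http://example.com/shoes"))
    user_payout, vendor_cut = settle_sale(sale_price=2.00)
    print(f"user receives ${user_payout:.2f}; vendor keeps ${vendor_cut:.2f}")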

3. Big brother just wants to help
Government agencies are applying data-mining techniques to improve the delivery of public services; a toy illustration of risk-profile segmentation follows the excerpt below.
Soundbyte:

Dr Paul Henman from the University of Queensland, who has written extensively on the subject, raises a rather more philosophical objection to government data-mining: that the technology starts to transform the nature of government itself, so that the population is seen as a collection of sub-populations with different risk profiles—based on factors such as education, health, ethnic origin, gender and so on—rather than a single social body. He worries that this undermines social cohesion.
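
As a purely illustrative sketch of the segmentation Dr Henman describes, here is a toy Python example under my own assumptions: the factors, weights and thresholds are invented and deliberately mundane, not drawn from any real programme, but they show how a data-mining system reduces one population to sub-populations with different risk profiles.

    # Invented example: score each record on a few administrative factors and
    # bucket the population into "risk profiles" rather than treating it as a
    # single social body. All weights and thresholds are arbitrary.
    RISK_WEIGHTS = {"missed_payments": 0.5, "address_changes": 0.3, "prior_flags": 0.2}

    def risk_score(record: dict) -> float:
        return sum(weight * record.get(factor, 0) for factor, weight in RISK_WEIGHTS.items())

    def segment(population: list) -> dict:
        buckets = {"low": [], "medium": [], "high": []}
        for record in population:
            score = risk_score(record)
            if score < 1.0:
                buckets["low"].append(record["id"])
            elif score < 2.5:
                buckets["medium"].append(record["id"])
            else:
                buckets["high"].append(record["id"])
        return buckets

    people = [
        {"id": "A", "missed_payments": 0, "address_changes": 1, "prior_flags": 0},
        {"id": "B", "missed_payments": 4, "address_changes": 3, "prior_flags": 2},
    ]
    print(segment(people))  # {'low': ['A'], 'medium': [], 'high': ['B']}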

4. Go with the flow
Mobile-phone data is being used to map human activity in urban centres.
Soundbyte:

WHERE is everybody? Being able to monitor the flow of people around a city in real time would provide invaluable information to urban planners, transport authorities, traffic engineers and even some businesses. Bus timetables could take account of hourly or daily variations; advertisers would be able to tell which billboards were most valuable.

5. How touching
How haptic (touch) technology is being deployed in consumer electronics. Get your minds out of the gutter; this article is mostly about mobile phones (see e.g. the iPhone).
Soundbyte:

Dr Hayward's idea is that such switches could be used to convey information to the user without the need to look at the device. Skin stretch could be used to present the tactile equivalent of icons to the user, rather like a simple form of Braille.

6. What's in a name?
Bureaucratic glitches arise from converting names from foreign languages, and this is in itself a matter of national security. Software that "enriches" names with cultural information is being applied to name databases. A rough matching sketch follows the excerpt below.
Soundbyte:

Credit-card companies use the software to spot recidivists applying for new cards under modified names. (Names are cross-referenced with addresses, dates of birth and other data.) Developers and users are hesitant to discuss costs. But OMS Services, a British software firm, says government agencies pay a lot more than commercial users, who pay about $50,000 for its NameX programme.
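
For readers curious what this kind of cross-referencing looks like in practice, here is a minimal, purely illustrative Python sketch under my own assumptions; it is not NameX, whose internals are not public. It normalizes two spellings of a name, compares them with a simple similarity ratio, and only flags a likely repeat applicant when a second field such as date of birth or address also matches.

    from difflib import SequenceMatcher

    def normalize(name: str) -> str:
        # Lowercase and strip punctuation/spacing so "Al-Rashid" and "alrashid"
        # compare as the same string; real systems also handle word order,
        # transliteration variants and nicknames.
        return "".join(ch for ch in name.lower() if ch.isalnum())

    def name_similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

    def likely_same_applicant(a: dict, b: dict, threshold: float = 0.85) -> bool:
        # A fuzzy name match alone is weak evidence; as the article notes, the
        # name is cross-referenced with date of birth or address before flagging.
        if name_similarity(a["name"], b["name"]) < threshold:
            return False
        return a["dob"] == b["dob"] or a["address"] == b["address"]

    earlier = {"name": "Mohammed Al-Rashid", "dob": "1970-01-01", "address": "12 Elm St"}
    renewal = {"name": "Mohamed Alrashid", "dob": "1970-01-01", "address": "99 Oak Ave"}
    print(likely_same_applicant(earlier, renewal))  # True: similar name plus matching DOB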

7. Watching the web grow up
Sir Tim Berners-Lee's three trends to watch (beyond the hype of Web 2.0): 1) mobile devices, 2) technology's growing social and political impact, 3) the semantic web.
Soundbyte:

These examples may not sound like a revolution in the making. But doubters would do well to remember the web's own humble origins. In 1989 Sir Tim submitted a rather impenetrable document to his superiors at CERN, entitled “Information Management: A Proposal”, describing what would later become the web. “Vague but exciting” was the comment his boss, the late Mike Sendall, scribbled in the margin.



Where’s Waldo? Spotting the Terrorist using Data Broker Information

posted by:Louisa Garib // 11:59 PM // March 06, 2007 // ID TRAIL MIX


In the fall of 2006, the Ottawa Citizen broke a leading news story based, in part, on work done by the Canadian Internet Policy and Public Interest Clinic (CIPPIC). Pursuant to an access to information request, CIPPIC learned that the Royal Canadian Mounted Police (RCMP) had purchased consumer information from Canadian data brokers for law enforcement purposes. The information that the RCMP obtained from data brokers included individuals’ telephone numbers and addresses, as well as personal information available from public records (On the Data Trail: A Report on the Canadian Data Brokerage Industry, April 2006).

Commercial data brokers on both sides of the border collect personal information from various sources such as public registries, contest ballots, product warranty forms, newspaper and magazine subscriptions, travel bookings, charitable donation records and from companies that track credit-card use. In its coverage of the issue, the Ottawa Citizen reported that since September 2001, the RCMP has been buying and retaining this kind of personal information from data brokers, and in some instances may have forwarded that information to U.S. law enforcement.

Shortly after the story broke, the Canadian Association for Security and Intelligence Studies (CASIS) held its Annual Conference in Ottawa. At the conference, Canadian and U.S. policy officials, judges, academics, and defence analysts met to discuss intelligence gathering and surveillance in the current security environment. One of the conference panels debated the role and relevance of using “open sources” versus secret intelligence and information during law enforcement investigations. “Open source” information can be information freely available on the Internet, data contained in public records such as land title registries, or information collected and sold by the private sector. While the panel discussion focused on using information from press reports and websites, conference participants spoke of making “better” or more “effective use” of open sources, and the need for systems that could analyze open source information. Data brokers could certainly serve that purpose, by collecting, categorizing and conducting a preliminary assessment of open source information for law enforcement. By performing a “first cut” of massive amounts of information, the commercial data brokers can help the state to “spot the terrorist” or identify any other type of criminal.

Also in the fall of 2006, the Ontario Superior Court struck down the definition of “terrorist activity” in the federal Anti-terrorism Act, S.C. 2001, c. 41 (ATA), in R. v. Khawaja, [2006] O.J. No. 4245 (Ont. S.C.J.) (QL). The court found that the “motive clause” contained in the Act infringed Mr. Khawaja’s rights to freedom of conscience and religion, and freedom of expression and association, guaranteed by sections 2(a), (b) and (d) of the Canadian Charter of Rights and Freedoms.

The statutory definition linked terrorism to criminal activity motivated by religion, ideology or political belief. Justice Rutherford reasoned at para. 58 that the “inevitable impact” of making motivation part of anti-terror investigations would be that a “shadow of suspicion and anger” would fall over certain groups in Canada, raising concerns about racial and ethnic profiling. In his decision, Justice Rutherford severed the invalid motive clause from the rest of the anti-terrorism legislation, leaving the remainder of the provisions in force. To date, Mr. Khawaja has not proceeded to trial, as aspects of his case are still before the courts.

While Khawaja, for now, stands as a bar to using motive as evidence of terrorist activity under the ATA, law enforcement’s potential use of personal information collected by data brokers raises the same concerns about racial profiling and creating groups of suspects that Justice Rutherford mentioned in his decision.

Information supplied by data brokers is unreliable. Brokers gather information from a variety of sources and have few incentives to determine and ensure the veracity of the information they collect and sell to law enforcement. Compounding this problem is the lack of transparency for consumers: it is virtually impossible for individuals to be aware of all of the organizations that have collected and retained their personal information over time. Consequently, consumers have minimal recourse to access, challenge and correct the myriad “digital dossiers” (to use Professor Daniel Solove’s term) that often contain inaccurate personal information. The absence of recourse and access rights to ensure the reliability of information sold to law enforcement without consumers’ knowledge or consent also raises concerns about due process.

Nor is it clear what criteria law enforcement would use to assess the relevance, accuracy and reliability of information provided by commercial data brokers. What type of information is being purchased? How would the information be interpreted and contextualized? What valid conclusions or predictions, if any, can be drawn from such information?

The inaccuracy or misinterpretation of information supplied by data brokers to law enforcement, combined with the lack of transparency and oversight surrounding the use of that data, can have dire consequences for targeted individuals and identifiable groups.

Identifying an individual as a security threat, terrorist, or terrorist sympathizer based on questionable information provided by data brokers can destroy a person’s livelihood, family life, reputation, and in some cases their physical security. Although it has not been established that information from data brokers played a role in the “extraordinary rendition,” detention and torture of Canadian citizen Maher Arar, his terrifying ordeal makes it all too easy to contemplate the worst-case scenario for an individual profiled on the basis of information provided by data brokers. Identifying an entire group as suspect using information compiled by data brokers could result in criminalization, stigmatization and marginalization, violating the equality provisions as well as the freedom of religion, thought, expression and association rights contained in the Charter.

Law enforcement’s potential practice of using information compiled by commercial data brokers isn’t only problematic for certain racialized groups or suspicious individuals; the practice implicates all of us. The private sector collects and uses personal information about nearly everyone. A criminal profile could be pieced together from the various purchase records that data brokers compile on any individual. That data could be used to establish a motive and to identify individuals as suspects or potential suspects for any crime, including crimes not yet committed.

We could all, then, be profiled based on fragments of information about us that may be wrong, outdated, distorted, and removed from context. If information collected by the private sector is purchased and used by our government and law enforcement agencies without transparency, oversight and safeguards, it can be dangerously misinterpreted in ways that could prejudice people’s lives.



Username and Password: Repeat ad infinitum

posted by:Jeremy Hessing-Lewis // 07:22 PM // March 03, 2007 // Commentary &/or random thoughts | Digital Identity Management | General | TechLife | Walking On the Identity Trail

The Globe's Ivor Tossell has a nice little piece on online identity management entitled "Who do you want to be?"

Tossell writes:

It's a problem that's older than the Web itself. One of the Internet's basic weaknesses is that there's no central way of keeping track of who you are. In real life, we have one identity that we take everywhere (it's the one on your passport, assuming you can get one these days). But there's no virtual passport in cyberspace: People change names online more often than they change underpants. Every time you go to a new website, you have to start the process of identifying yourself all over again.

Interestingly, I spent 45 minutes trying to find my username and password so that I could log in to make this blog post.

I also broke my usual prohibition on reading comments and was delighted by the following reader wisdom:

B H from Toronto, Canada writes: 'It's not a bug, it's a feature.'

Well said my friend.


