understanding the importance and impact of anonymity and authentication in a networked society

Privacy interest in my (domain) name?

posted by:Hilary Young // 02:32 PM // June 29, 2005 // Core Concepts: language and labels

A couple of weeks ago, Michael Geist's Toronto Star column discussed the domain name dispute that was sparked when the Defend Marriage Coalition created websites under the domain names of MPs (donboudria.ca, davidmcguinty.ca) and used them to denounce the Liberals' same-sex marriage legislation. It isn't clear whether the MPs have any recourse through CIRA (Geist believes they don't), but it is clear that the content itself is protected free speech.

It seems counterintuitive to me that someone could create a hilaryyoung.ca site with my e-mail address and phone number claiming I have free Live8 tickets to give away, and that I would have no recourse (at least not to CIRA). The problem is that I don't own or have a trademark in my name. A simple Google search reveals dozens of other Hilary Youngs (one is an expert in Victorian porcelain; another is in the chorus of The Mikado in Austin, TX) and if they wanted their own hilaryyoung.com sites, they'd be welcome to them. The privacy invasion happens when someone creates a website with my name AND information about me. Is this any different from an unauthorized biography - "The Life and Times of Hilary Young" (zzzz...)? In most respects, no, but I think there is an expectation that when a website's domain name includes a person's or company's name, and the site contains information about that person or company, the website originated with that entity and has some legitimacy. Whether this expectation is enough to provide any legal protection is another matter...

To read Michael Geist's column on the domain name dispute, click here.

| Comments (0) | TrackBack

Trailing Bread Crumbs Online

posted by:Jason Millar // 11:16 AM // // TechLife

Our smart worlds will automatically become smarter and more closely tailored to our individual needs in direct response to our own activities.

Andy Clark, Natural Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence

Andy Clark's prediction is realized in Google's new Personalized Search function that was released this week. Users can log onto the Personalized Search engine to perform regular Google searches. However, Google has designed a memory into its tool that keeps track of the kinds of topics you have searched for, in an attempt to modify future search results to better match your typical interests.

Here's an example. I logged into the system with my newly created Google account, and performed a search on "spam". Spam emails are a hot topic, which was reflected in the list returned by Google--everything from anti-spam programs to articles discussing attempts to increase legal actions against "spammers". I then performed a search on "meat", then one on "bologna", after which I re-entered the "spam" search. This time the top link was one to the company that produces the canned meat product "Spam". I also had quick access to sites dedicated to the wonders of Spam, including recipes and message boards containing everything Spam.

Google's Personalized Search, it would seem, learns about my interests and tendencies and modifies its results to suit them. Is this an example, as Andy Clark would argue, of a technology that acts transparently as an extension of me?
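For the curious, the kind of history-based re-ranking described above can be sketched in a few lines of code. This is purely illustrative - the scoring, topic labels and function names are all my own invention, not Google's actual (and certainly far more sophisticated) algorithm:

```python
# Toy sketch of history-based re-ranking; everything here is invented
# for illustration and bears no relation to Google's real system.
from collections import Counter

def rerank(results, history):
    """Sort results so that those matching topics from the user's
    search history float to the top; ties keep their original order."""
    topic_weight = Counter(history)
    return sorted(results, key=lambda r: topic_weight[r[1]], reverse=True)

# Two plausible hits for the query "spam": one about e-mail, one about food.
results = [("Anti-spam software roundup", "email"),
           ("Hormel's canned meat classic", "food")]

print(rerank(results, history=[])[0][0])                # email result first
print(rerank(results, history=["food", "food"])[0][0])  # food result first
```

Even this toy version reproduces the spam/Spam flip described above: with no history the e-mail sense wins, and after a few food-related searches the canned-meat sense does.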

| Comments (0) | TrackBack

Privacy, Anonymity and Adoption: A Drama Unfolds

posted by:Carole Lucock // 11:50 PM // June 28, 2005 // ID TRAIL MIX

OEDIPUS: With all these indications of the truth
here in my grasp, I cannot end this now.
I must reveal the details of my birth.

Sophocles, Oedipus the King

A proposed change to adoption law in Ontario would provide adoptees with access to their sealed birth records. This has created an intriguing drama that has seen privacy commissioners from across Canada line up alongside Ontario's Information and Privacy Commissioner, Ann Cavoukian, as she calls for amendments to Bill 183. These amendments, if accepted, would prevent the retroactive opening of sealed birth records (which the proposed Bill allows) where birth parents have exercised a veto to prevent this from happening. Groups representing adoptees are calling instead for a contact veto, which would prevent adoptees from contacting birth parents who have exercised it.

The Ontario Commissioner is arguing that at the time of adoption there was “an understanding or social contract that created an expectation of privacy and confidentiality that should not be retroactively revoked.” Others deny that assurances of confidentiality were made, or at least that they were made as a matter of official policy. In addition, the Commissioner raises concerns about the emotional and psychological harm that might be caused if this understanding is changed retroactively.

Although I am concentrating on the issues as they relate to adoptees, it should be noted that Bill 183 also provides a right of access to the adoption records by the birth parents, which has also been criticized by the Ontario Commissioner and others, see for example, Laura Eggertson, “Don’t put these adoptees at risk again” Globe and Mail. June 14, 2005.

This drama heightened when it was reported that sixteen adoptees had filed human rights complaints with the Ontario Human Rights Commission against Ann Cavoukian on the grounds that her statements in connection with Bill 183 “‘intended to incite the infringement’ of an adoptee’s human rights to equal treatment regardless of family or marital status.”

News of these complaints drew rapid responses from Cavoukian and others, who claim that the adoptees’ human rights complaints are groundless in law and fact. Moreover, Cavoukian states that she sees “the filing of this complaint as an effort to silence my voice and discourage me from performing my duties to the public and the Ontario Legislature.”

These developments raise a host of procedural and substantive questions that will no doubt receive further comment from scholars and others in the coming months. These include:

· What is and should be the role and authority of privacy commissioners (and other similar office holders) with respect to the introduction or amendment of legislation and, in particular, the positions they adopt as they offer critical comment? For example, should such comment be based on principles grounded in the legislation that establishes the office? If not, on what should comment be based?

· What is and should be the relevance and weight of past promises, assurances or understandings of confidentiality or privacy when legislation is introduced that aims to change the status quo? What difference does it make if government has itself previously provided the assurance or the promise? If we extrapolate the principles that the privacy commissioners have adduced in this case to other cases, how would they apply? For example, in the context of health information, it would seem that past promises or assurances regarding confidentiality were at least as significant. However, recent legislation pertaining to health information across the country permits access apparently inconsistent with these expectations (for example, by granting access to health information to researchers without consent).

· What implications does the position taken by Canada’s privacy commissioners have for one of the cornerstones of informational privacy legislation and fair information practices in general, that of the right of access to one’s own records? Extrapolating from the adoption case, will the right to privacy trump the right of access when a record is subject to a collateral promise to another who is also implicated by the record?

So far, the debate about this issue has been heated and polarized. This is not surprising. It is difficult to think of a record that is more fundamentally connected to the person than the record of her birth, notwithstanding that promises of secrecy or confidentiality with respect to this record may have been given to somebody also implicated by the record.

Fundamental and ancient questions about the meaning and nature of identity are here at issue, opening up difficult questions about anonymity, privacy and the right to know the truth. The unfolding drama in Ontario could help us explore these issues not only from the perspective of the principled basis for argument and counter-argument, but also in the recognition that we are here dealing with matters that are, in significant ways, beyond our grasp.

| Comments (10) | TrackBack

Promoting File Sharing Devices Attracts Liability, Rules the US Supreme Court

posted by:Carole Lucock // 02:08 PM // // Digital Democracy: law, policy and politics

Yesterday the U.S. Supreme Court handed down its ruling in the MGM v. Grokster & StreamCast case, finding that distributing a device with the intent to promote copyright infringement will attract liability. Grokster and other peer-to-peer services now face the prospect of liability for copyright infringements that occur on their networks.

The Court's unanimous decision was greeted with accolades from the Motion Picture Association of America and concerns from others who fear a chilling effect on the tech industry.

Further Information

| Comments (0) | TrackBack

New U.S. federal regulations violate the privacy rights of Canadian adult industry performers?

posted by:Rafal Morek // 12:38 PM // June 23, 2005 // Digital Democracy: law, policy and politics

From the Ottawa Citizen:

"Canada's privacy watchdog has been asked to determine whether a U.S. clampdown on Internet pornography violates the rights of Canadian adult industry performers.

A lawyer representing a B.C.-based porn company wants Privacy Commissioner Jennifer Stoddart to see if contentious new U.S. federal regulations that take effect today offend Canada's privacy laws.

At issue are revisions to U.S. rules that require porn producers to keep photo ID and release forms from anyone who appears in sexually explicit images. The regulations, intended to combat child pornography, had been in effect since the early 1990s, but today they will be expanded to encompass Internet porn sites..."

Click here for the rest of the article.

| Comments (0) | TrackBack

More about anonymous sources…

posted by:Rob Carey // 11:59 PM // June 21, 2005 // ID TRAIL MIX

Like many people, I was fascinated by recent discussions in the media about the journalistic use of anonymous sources - occasioned in part by the troubles of Newsweek's Michael Isikoff and by the revelation of Deep Throat's identity.

In light of these events, the postings by Marsha Hanen and Dave Matheson on anonymity and credibility as they pertain to journalism were particularly interesting. Both point out that audiences should exercise care when interpreting stories in which unnamed sources are used. And if I understand Dave's post correctly, he also argues that anonymity does not necessarily undermine a source's credibility. Indeed, many news organizations try to take a similarly judicious position - formally, at least - on the use of unnamed attribution, attempting to strike a balance between the need to maintain credibility with readers and the public's right to know. (The American Society of Newspaper Editors provides a compilation of various editorial policies regarding unnamed sources at http://www.asne.org. Because this list was last updated in 2003, it may be more interesting as an artifact than anything else.)

As a former reporter, however, I am struck by a disparity between normative pronouncements about the use of unnamed sources - evident in editorial policies - and the way such sources are actually used in practice.

Many editorial policies regarding anonymity are predicated on the assumption that granting anonymity to a source may be justified when it is necessary to get the story. However, unnamed sources - at least, in stories published by the "prestige press" - are sometimes used quite freely for other purposes. For example, as part of another paper that Jacquie and I are working on, we drew a sample of stories containing anonymous attribution published between 1993 and 2003 in the New York Times, the Washington Post, and the Los Angeles Times. (I should point out that our sample probably underrepresents the actual number of stories in which unnamed sources are used).

Although this wasn't a focus of our analysis, we were somewhat surprised to find that in 74 per cent of the stories we sampled, the key contentions tended not to rest on the testimony of an unnamed source. A closer reading suggested that anonymous sources were frequently invoked simply to make the story more compelling - for example, to provide a vivid quote. In one sense, I suppose it's heartening to see that the majority of stories were not based solely on the testimony of unnamed sources.

On the other hand - and this is just speculation on my part - if it is commonplace for journalists to view anonymous sources as an element of news writing style, to be deployed whenever a story needs to be "beefed up", to use Tina Brown's phrase, this may invite a certain laxness in their use, despite the careful qualifications of many editorial policies. All the more reason to view their use with caution. (As a side note, the Center for Media and Public Affairs (http://www.cmpa.com) recently released a report noting that the use of anonymous sources in major U.S. media dropped by a third between 1981 and 2001. I haven't yet received a copy of the report, so I can't comment on it.)

| Comments (1) | TrackBack

Black Market in Stolen Credit Card Data Thrives on Internet

posted by:Jennifer Manning // 06:54 AM // // TechLife

From the New York Times:
"Want drive fast cars?" asks an advertisement, in broken English, atop the Web site iaaca.com. "Want live in premium hotels? Want own beautiful girls? It's possible with dumps from Zo0mer." A "dump," in the blunt vernacular of a relentlessly flourishing online black market, is a credit card number. And what Zo0mer is peddling is stolen account information - name, billing address, phone - for Gold Visa cards and MasterCards at $100 apiece.

It is not clear whether any data stolen from CardSystems Solutions, the payment processor reported on Friday to have exposed 40 million credit card accounts to possible theft, has entered this black market. But law enforcement officials and security experts say it is a safe bet that the data will eventually be peddled at sites like iaaca.com - its very name a swaggering shorthand for International Association for the Advancement of Criminal Activity.

For despite years of security improvements and tougher, more coordinated law enforcement efforts, the information that criminals siphon - credit card and bank account numbers, and whole buckets of raw consumer information - is boldly hawked on the Internet. The data's value arises from its ready conversion into online purchases, counterfeit card manufacture, or more elaborate identity-theft schemes.

The online trade in credit card and bank account numbers, as well as other raw consumer information, is highly structured. There are buyers and sellers, intermediaries and even service industries. The players come from all over the world, but most of the Web sites where they meet are run from computer servers in the former Soviet Union, making them difficult to police.

Traders quickly earn titles, ratings and reputations for the quality of the goods they deliver - quality that also determines prices. And a wealth of institutional knowledge and shared wisdom is doled out to newcomers seeking entry into the market, like how to move payments and the best time of month to crack an account.

The Federal Trade Commission estimates that roughly 10 million Americans have their personal information pilfered and misused in some way or another every year, costing consumers $5 billion and businesses $48 billion annually.

"There's so much to this," said Jim Melnick, a former Russian affairs analyst for the Defense Intelligence Agency who is now the director of threat development at iDefense, a company in Reston, Va., that tracks cybercrime. "The story that needs to be told is the larger, long-term threat to the American financial industry. It's a cancer. It's not going to kill you now, but slowly, over time."

No one is willing to estimate how many cards and account numbers actually make it to the Internet auction block, but law enforcement agents consistently describe the market as huge. Every day, at sites like iaaca.com and carderportal.org, pseudonymous vendors do business in an arcane slurry of acronyms.

Click here for the rest of the article.

| Comments (1) | TrackBack

Harmony in privacy advocacy and technological development?

posted by:Marty // 04:35 PM // June 19, 2005 // Core Concepts: language and labels

In a stark commentary, David M. Raab takes the privacy community to task.

Perhaps it's appropriate that the privacy community seems comprised mostly of people talking to themselves.
In one corner are public policy advocates who examine every new technology for privacy risks and inevitably find some. Their usual recommendation is to regulate or prohibit the new technology's use. Another corner holds academic and industry researchers working to build a detailed conceptual foundation for comprehensive privacy management. Yet another corner is reserved for software vendors offering standalone products with specific privacy-related functions. In a final corner, or maybe another room altogether, are corporate technology professionals whose only real goal is to satisfy their compliance departments. The corporate managers rarely interact with the other groups except when searching for vendors to help solve an immediate problem.
Still, the disjointed nature of the privacy discussion has a cost. The policy advocates often seem unconcerned with the practical implications of their suggestions, even though some advocates are themselves quite knowledgeable about business and technology. The researchers' conceptual frameworks could be very helpful to corporate systems designers, but only if they relate to infrastructures that actually come to exist. The value of the software point solutions is limited when there is no larger standardized framework for them to fit into.

The thesis of Raab's commentary is that while there is a point to studying the privacy implications of new technologies, the disjointed nature of the various players in the privacy/technology nexus does not allow the harmony necessary for privacy concerns to be fully reflected in technology products, for better or worse (depending on one's perspective). I would argue that this is not universally the case. Our project, On the Identity Trail, is just one avenue of research seeking to narrow this divide. Perhaps those who side with privacy advocacy should take a look at Raab's commentary, and those on the technology side should explore our project and seek out other like-minded research, for only through awareness and dialogue will harmony be achieved.

| Comments (1) | TrackBack

U.S. Government Accountability Office Issues RFID Report

posted by:Marty // 04:21 PM // // Surveillance and social sorting

Better late than never...

The U.S. Government Accountability Office (GAO), Congress's oversight body, issued its report on the promise and perils of RFID use by the U.S. federal government in May 2005 (see "Information Security: Radio Frequency Identification Technology in the Federal Government"). The report highlights the use, or planned use, of RFID technology by federal agencies. It also makes the following findings regarding the privacy and security of information:

Of the 16 agencies that responded to the question on legal issues associated with RFID implementation in our survey, only one identified what it considered to be legal issues. These issues relate to protecting an individual’s right to privacy and tracking sensitive documents and evidence.
Several security and privacy issues are associated with federal and commercial use of RFID technology. The security of tags and databases raises important considerations related to the confidentiality, integrity, and availability of the data on the tags, in the databases, and in how this information is being protected. Tools and practices to address these security issues, such as compliance with the risk-based framework mandated by the Federal Information Security Management Act (FISMA) of 2002 and employing encryption and authentication technologies, can help agencies achieve a stronger security posture. Among the key privacy issues are notifying individuals of the existence or use of the technology; tracking an individual’s movements; profiling an individual’s habits, tastes, or predilections; and allowing secondary uses of information. The Privacy Act of 1974 limits federal agencies’ use and disclosure of personal information, and the privacy impact assessments required by the E-Government Act of 2002 provide an existing framework for agencies to follow in assessing the impact on privacy when implementing RFID technology. Additional measures proposed to mitigate privacy issues, such as using a deactivation mechanism on the tag, incorporating blocking technology to disrupt transmission, and implementing an opt-in/opt-out framework for consumers remain largely prospective.

Supply & Demand Chain Executive features this article, which offers a critical view of the GAO's report.

The GAO report is flawed and provides a relatively unfavorable, potentially damaging view of RFID. The report cites several security-related issues that RFID can present, such as tracking individual movements, preferences, confidential personal information, etc. The report also suggests that interest from government officials in RFID is increasing, especially as costs fall and application uses expand. To compile the report the GAO focused on responses received from a variety of government agencies — 24 in total — including, the departments of State, Energy, Homeland Security, Labor and others.

As always, there are multiple views to every story.

| Comments (0) | TrackBack

Inscription: using panoptic surveillance technologies to improve our memories, health and lives

posted by:Marty // 04:07 PM // // Core Concepts: language and labels

The ever-predictive Howard Rheingold, in a recent article at TheFeature, informs us that Microsoft Research's new manager of social computing, Marc Smith, is pushing panoptic surveillance technologies toward a new kind of authorship. Smith has dubbed this "Inscription".

Since we're going to be snooped, sensed and surveilled by sensors in the environment, why not use sensors attached to our mobile devices to augment our memories, track our health and otherwise enhance our lives? Smith says, "The state is going to be recording everything we do, why shouldn't we make our own recordings -- if only to challenge the accuracy of what others capture?"

See the full article here: http://www.thefeature.com/article?articleid=101694&ref=7818328

Note that this is something that On the Identity Trail's Steve Mann has been advocating and working towards for years.

| Comments (2) | TrackBack

Networked individuals now an imminent reality

posted by:Chris Young // 10:39 AM // June 16, 2005 // TechLife

Ottawa's Zarlink Semiconductor recently announced a new chip designed for short-range, high-speed wireless communications between medical implants and external transceivers. Needless to say, once externalized, the data is easily transferred over traditional Internet links.

This is surprising even for someone forecasting a time when real-time, networked monitoring of physiological states is commonplace. It seems this will occur imminently.

"Physicians can use [this] technology to remotely monitor patient health without requiring regular hospital visits. For example, an ultra low-power RF transceiver in a pacemaker can wirelessly send patient health and device performance data to a bedside base station in the home. Data is then forwarded over the telephone or Internet to a physician's office, and if a problem is detected the patient goes to the hospital where the high-speed two-way RF link can be used to easily monitor and adjust device performance."

This is an important development because this sort of technology, apparently benign as described in this press release, can easily be used for non-emergency medical monitoring, or even non-health related monitoring of physiological states. Computers monitoring equipped individuals could notify third parties, in real-time, of such events as nicotine or drug ingestion, to present but one example.
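To make the data path concrete, here is a hypothetical sketch of the bedside base station's forwarding logic. The field names, threshold and alert format are my own invention for illustration and have nothing to do with Zarlink's actual protocol:

```python
# Hypothetical sketch of the pipeline described in the press release:
# implant reading -> bedside base station -> alert forwarded onward.
# All names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Reading:
    patient_id: str
    heart_rate: int   # beats per minute, as reported by the implant

def base_station_forward(reading, alert_threshold=120):
    """Forward an alert when a reading crosses the threshold;
    otherwise the reading would simply be logged (None here)."""
    if reading.heart_rate > alert_threshold:
        return f"ALERT {reading.patient_id}: heart rate {reading.heart_rate}"
    return None

print(base_station_forward(Reading("patient-42", 135)))  # triggers an alert
print(base_station_forward(Reading("patient-42", 70)))   # stays quiet
```

The privacy worry falls out directly: nothing in such a pipeline restricts the "events" being forwarded to medical emergencies, or the recipients to physicians.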

| Comments (0) | TrackBack

beginning of the end of 'anonymous' use of public transit in the GTA

posted by:Dina Mashayekhi // 08:49 AM // June 15, 2005 // Surveillance and social sorting

- see article below..
- i tried to find more information but these questions remained unanswered -- will cash fares increase thus inducing people to go with the smart card?
- where/how will personal info, travel histories be tracked/stored, who has access?
- the brief privacy notes on the mto page say that a person can still use the system without providing personal info "however some personal data will be required if riders want to make pre-authorized payments, protect their cards against loss or theft or obtain concession fares."
- a pilot project has been in place w/ the use of smart cards in some go transit corridors since 2001, haven't found much privacy related info -- atip possibility?

From the Globe and Mail:

TORONTO — The Ontario government is planning to bring in a single pass system for public transit in the Greater Toronto Area.

The unified-fare card will be good on GO Transit and seven local systems in the region.

The card, announced by Transportation Minister Harinder Takhar, will likely be available in early 2007.

Mr. Takhar says riders won't have to search for exact change, buy tickets or carry different passes to travel on the different transit systems.

Brampton, Burlington, Hamilton, Mississauga, Oakville, Toronto and York Region have all signed on to develop the integrated-fare system.

The plan is expected to be fully in place from Hamilton to Whitby by 2010.

"Creating a transit culture in this province means using the latest technology to improve transit service," Mr. Takhar said.

"The possibilities for this card are endless. In Hong Kong, for example, transit-fare cards can also be used at parking facilities, fast food outlets and vending machines."

| Comments (0) | TrackBack

Why We Need Protection From The Technologies That Protect Copyright

posted by:Ian Kerr // 11:15 PM // June 14, 2005 // ID TRAIL MIX

i. proposed anti-circumvention laws

after nearly a decade of indecision, it looks like canada is finally about to board the mothership.

in its recently released government statement on proposals for copyright reform, canada announced that it will comply with the wipo copyright treaty by tabling its own anti-circumvention laws.

the core provision, we are forewarned, will deem “the circumvention, for infringing purposes, of technological measures (most lawyers call these TPMs) applied to copyright material [to] constitute an infringement of copyright.” a second deeming provision will generate the same result for “the alteration or removal of rights management information (RMI) embedded in copyright material, when done to further or conceal infringement…”

in essence, these deeming provisions are meant to add a new legal layer, one that goes beyond existing copyright and contract laws, in order to deter and provide legal remedies against individuals who, with “infringing purposes,” hack past content-protecting technologies that automatically enforce particular uses of digital material. a central aim of the soon-to-be-proposed legislation (it could happen any day now) is “to provide rights holders with greater confidence to exploit the internet as a medium for the dissemination of their material and provide consumers with a greater choice of legitimate material.”

these are certainly laudable goals and the approach adopted has left some cautiously optimistic that canada’s proposed anti-circumvention provisions will do less harm to copyright’s delicate balance than the laws enacted in the US, europe, and elsewhere.

whether or not this is so, there is less reason to enjoy the same optimism regarding the effect of the proposed anti-circumvention law on personal privacy. when it comes to protecting intellectual privacy (the term julie cohen uses to describe the right to experience intellectual works in private, free from surveillance) the recently released gov statement whispers with the sounds of silence.

although ample statutory language is offered to illustrate how the law will protect TPMs from people, the gov statement offers zero indication as to whether the law will also be used to protect people from TPMs.

it is my contention that statutory silence about the permissible scope of use for TPMs risks too much from a privacy perspective. in particular, i am of the view that any law that protects the surveillance technologies used to enforce copyright must also contain express provisions and penalties that protect citizens from organizations using those TPMs to engage in excessive monitoring or the piracy of personal information. if the copyright industries are correct in claiming a legitimate need for new laws to prevent the circumvention of TPMs, then similar provisions are needed to protect citizens from organizations that use TPMs and the law of contract as a kind of circumvention device.


in order to understand why i think so, one must recognize the role TPMs play within a grander system of intertwining technologies and legal mechanisms that are being used to establish a secure global distribution channel for digital content.

a TPM is a technological method intended to promote the authorized use of digital works, usually by controlling access to such works, or various uses of such works (e.g., copying, distribution, performance, display). TPMs can operate as a kind of ‘virtual fence’ around digitized content and can therefore be used to lock up content (whether or not it enjoys copyright protection). a TPM can be used on its own, or as a building block in a larger system of technological and legal mechanisms – a digital rights management system (DRM).
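to make the ‘virtual fence’ metaphor concrete, here is a toy sketch of a TPM as an access gate. the xor ‘cipher’ is a stand-in for real encryption, and nothing here reflects any actual vendor's implementation - it is purely illustrative:

```python
# Toy 'virtual fence': content is stored scrambled and is only
# unscrambled when the requested use is authorized. The XOR step is a
# stand-in for real encryption; this is illustration, not a real TPM.
KEY = 0x5A

def lock(plaintext):
    return bytes(b ^ KEY for b in plaintext)

def tpm_access(ciphertext, use, permitted):
    """Release the content only if the requested use is inside the fence."""
    if use not in permitted:
        return None                          # fence holds: no access
    return bytes(b ^ KEY for b in ciphertext)

song = lock(b"digital content")
print(tpm_access(song, "play", {"play"}))    # authorized: content released
print(tpm_access(song, "copy", {"play"}))    # unauthorized: blocked
```

note that the gate asks only ‘is this use permitted?’, never ‘is this content under copyright?’ - which is why a fence of this kind can just as easily lock up public domain material.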

if a TPM is a digital lock, then DRM is a digital surveillance system. DRM consists of two components. the first is a set of technologies including: encryption, copy control, digital watermarking, fingerprinting, traitor tracing, authentication, integrity checking, access control, tamper-resistant hard- and software, key management and revocation, as well as risk management architectures. other technologies are used to express copyright permissions in ‘rights expression languages’ and other forms of metadata that make a DRM policy machine-readable.

rights expression languages are the bridge to the second component of DRM, which consists of a set of legal permissions. in the current context, these permissions are typically expressed as a licensing arrangement which, by way of contract, establishes the terms of use for the underlying work.

the technological components of most full-blown DRMs are linked to a database which enables the automated collection and exchange, among rights owners and distributors, of various kinds of information about the particular people who use their products: their identities, their habits, and their particular uses of the digital material subject to copyright. the information that is collected and then stored in these databases can be employed in a number of different ways.

the surveillance features associated with the database are crucial to the technological enforcement of the licensing component. it is through the collection and storage of usage information that DRMs are able to “authorize use” in accordance with the terms of the licensing agreement and thereby “manage” copyrights.
together, the database and the license allow owners of digital content to unbundle their copyrights into discrete and custom-made products. and, since they are capable of controlling, monitoring and metering most uses of a digital work, DRMs can be linked to royalty tracking and accounting systems. on this basis, DRM optimists believe that it will offer a secure framework for distributing digital content, one that promises adequate remuneration for copyright owners while enabling a safe electronic marketplace offering consumers previously unimaginable business models beyond sales and subscriptions, such as highly individualized licensing schemes with variable terms and conditions.

amazingly, the bulk of writing on the subject of DRM has, to date, focused primarily on copyright policy. despite the fact that the capacity to monitor and meter customer habits is an essential feature of DRM, sustained focus on the privacy aspects of DRM is thin in canada and surprisingly sparse worldwide.

although referred to as “rights management” systems, what DRM really manages is information – information about users, which can be gathered 24/7 by way of automated, often surreptitious surveillance technologies. given DRM’s extraordinary surveillance capabilities, it is extremely difficult to imagine why the government's statement mentions no provisions that would directly address any aspect of the privacy implications of DRM in drafting its anti-circumvention laws.

iii. using DRM licences to circumvent privacy

like other contractual devices, an IP licence allows copyright holders to set the terms of use for their products. however, in the DRM context, intelligent agent technologies facilitate the automatic “negotiation” of contractual licences between content providers and users, as well as the plethora of informational transactions that are generated as a result of them.
in an automated environment, most informational transactions take place invisibly through software exchanges between machines, of which few humans are aware and which fewer still have the technical expertise to alter. bits and bytes of data, not to mention various forms of personal information, are collected and inconspicuously interchanged without human intervention and often without knowledge or consent. automation therefore exacerbates an already problematic inequality in bargaining power between licensors and licensees resulting from standard form agreements and mass market licences. the combination of TPMs and contracts in this manner could therefore lead to unfair transactions.

as my european colleague bernt hugenholtz once asked:

Are we heading for a world in which each and every use of information is dictated by fully automated systems? A world in which every information product carries with itself its own unerasable, non-overridable licensing conditions? A world in which what is allowed and what is not, is no longer decided by the law but by computer code?

end user licences are becoming the rule and content providers the rulers. with increasing frequency, the terms of these licences are used to override existing copyright limitations.
while most people are of the view that individuals ought to be free to choose which contracts they enter into, and that the state has no business interfering with contracts freely entered into, the unbridled use of TPMs combined with anti-circumvention legislation and contractual practices would permit content owners to extend their surveillance and personal information collection practices far beyond the bounds of what might otherwise be permitted by canadian privacy law. privacy law’s compromise between the needs of organizations and the right of individuals to privacy with respect to their personal information would be put in serious jeopardy if, irrespective of privacy rules, content owners were able to impose their terms and conditions through standard form contracts with complete impunity.

if anti-circumvention laws are to ensure that Canadians' privacy rights are not reduced or undermined, then the amendments to the Copyright Act must include a different kind of anti-circumvention provision. in addition to prohibiting the circumvention of TPMs for infringing purposes, there must be a balancing counter-measure that expressly prohibits the use of DRM to circumvent the protection of canadian privacy law. “appropriate balance,” in this sense, requires a legal lock aimed against organizations that would use TPMs, the proposed anti-circumvention law and the law of contract as a means of hacking past PIPEDA or its provincial equivalents.

ian kerr holds the canada research chair in ethics, law & technology at the university of ottawa, faculty of law and is the principal investigator of on the identity trail. stay tuned for the release of more details of ian’s research on this topic, including recommendations outlining legal solutions to drm & privacy in the copyright reform context.

| Comments (1) | | TrackBack

Technology as a Propaganda Model

posted by:Jason Millar // 07:22 PM // // TechLife

Almost all technologies act to constrain the choices that we are able to make with regard to their use. Designers of automobile transmissions have embedded control mechanisms that prevent drivers from shifting into reverse while moving forward. Designers of the modern pop can have eliminated the old style of pull tab, which needed to be removed from the can prior to drinking from it; the new ones were introduced in order to prevent consumers from littering. Anyone over thirty will remember seeing those little metal strips on the street or sidewalk almost as often as cigarette butts. Both examples are designs that restrict choice, through the use of embedded control mechanisms, as a means of preventing or eliminating a certain type of harmful behaviour when using the technology.

Those design choices might have been motivated primarily by the moral issues they act to promote—safe driving in the first case, cleaning up the environment in the second. In cases where there is a strong moral element associated with the design it is difficult to argue against the use of those technologies.

Other embedded control mechanisms are not necessarily driven by moral considerations. Take Digital Rights Management (DRM) for example. DRM is a technology being developed for use in digital media such as audio recordings, digital art or photographs, electronic books, and any other digital information that falls under the legal protection of copyright. DRM protects copyrighted material by attaching to it a piece of software that works in concert with your computer to detect and block any unauthorized attempt to copy, or even to open or play it. Although the companies who are developing this technology claim that DRM is primarily a response to alleged immoral behaviour (peer to peer file sharing), the DRM technology being deployed today places extremely strict copy protection into copyrighted files, much stricter than in the past.

Given that the protection measures are so strong, one must ask whether those who are implementing DRM are primarily interested in the moral issue of piracy, or whether some other primary concern is motivating their actions. In the case of DRM, examples of such concerns might include increasing control over copyright far beyond what was previously possible or, as Ian Kerr discusses in his most recent blog on this site ("HACKING@PRIVACY: Why We Need Protection From The Technologies That Protect Copyright"), monitoring the behaviour patterns of consumers.

If embedded control mechanisms like DRM are primarily motivated by concerns other than moral ones, and have as their primary design function the restriction of a certain type of action, there is still a moral component to them. Restricting or preventing any subset of choices that an otherwise autonomous individual might make seems inherently tied to the larger issue of morality. But overriding moral autonomy with a technology that contains only a secondary consideration as its moral justification might not itself be morally justified.

Consider the case of the automobile transmission mentioned earlier. Trumping autonomy on the primary basis of passenger safety is probably justified. However, trumping autonomy on the basis of a secondary moral consideration, such as the prevention of piracy, might prove less tenable.

As such, designs should be scrutinized to ensure that the moral grounds for embedding control mechanisms—secondary ones in particular—are urgent enough to warrant their use.

Designed into pervasive social technologies, such as portable music players, and software such as Windows Media Player, embedded control mechanisms could take on an interesting social role. One interesting possibility is that the social function of such technologies is not to further a particular moral position (in the case of copyright infringement the moral aspect is arguably a thin one when compared to, say, the moral dilemma associated with break and enter or firearms proliferation—both discouraged by the use of locks as embedded control mechanisms). Rather, we might consider those technologies functioning more properly as propaganda mechanisms.

Mass media—the traditional playground of the propagandist—is typically seen as delivering propaganda (used here in a value-neutral sense) through messages such as news stories, posters, images, sounds, etc. Indeed, the literature on propaganda focuses heavily on how language-based messages are used to motivate people towards a certain type of action or belief. But our interactions with the technology of mass media, specifically our exposure to messages that can be delivered through a targeted use of embedded control mechanisms, should also be considered as a means to the same end.

In this sense embedded control mechanisms function primarily as a means of controlling not only behaviour, but also our understanding of the limits of responsible action, through repeated suggestion of the ‘only’ correct way to behave when interacting with such technology. These technologies, in combination with the strong traditional messages delivered via press releases and advertising, function as an essential component of the propaganda mechanism. Having a device that constantly reminds you not to burn music onto a CD, by preventing you from doing so, is not so different from placing that same message on a poster on the wall, or a billboard on the highway. It is simply more effective.

| Comments (0) | | TrackBack

if you think 'chipping' granny might be too invasive, here's your alternative

posted by:Dina Mashayekhi // 10:55 AM // // Surveillance and social sorting

The new geopositioning phone-bracelet detects any departure from a security zone surrounding the residence

The Canadian company Medical Intelligence has developed a bracelet for Alzheimer's patients that can message key people via a GSM network when a patient wanders out of a "secure zone", as monitored via A-GPS. Almost 60% of Alzheimer's patients "wander" or "stray", with a high death rate when they are not found quickly. The company bills the innovation as a definitive solution to the problems that families, caregivers and police authorities must deal with.

Columba, the new geopositioning phone-bracelet, required three years of research and development. Louis Massicotte, founding president of Medical Intelligence, had the idea of creating the bracelet after the repeated wanderings of his own mother, who suffers from Alzheimer's.

To prevent any disappearance, the Columba bracelet automatically detects any departure from a security zone surrounding the residence or nursing home. The "zone" is pre-determined by the patient's family or caregiver. The Columba then alerts a medical assistance centre that promptly contacts the family or caregiver to coordinate assistance efforts.

If required, the medical assistance centre, which operates 24-7, can accurately geoposition the bracelet wearer and establish audio communication using Columba's "handsfree" feature.

The Columba has a GPS-Assisted positioning system, a GSM/GPRS transmitter/receiver with a SIM card for voice and data, and an intelligent alert detection system.
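The core of such an alert system, comparing each A-GPS fix against a pre-set zone, can be sketched roughly as follows. This is only an illustration, not Medical Intelligence's actual algorithm; the coordinates, zone radius and circular-zone assumption are all invented:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius ~6371 km

def has_left_zone(zone_lat, zone_lon, radius_m, fix_lat, fix_lon):
    """Return True if the GPS fix falls outside the security zone,
    i.e. an alert should be sent to the assistance centre."""
    return haversine_m(zone_lat, zone_lon, fix_lat, fix_lon) > radius_m

# Hypothetical 500 m zone around a residence; a fix ~40 m away stays inside.
print(has_left_zone(48.8300, 2.3200, 500, 48.8302, 2.3205))  # False
```

A real deployment would also need to cope with GPS fix error and urban-canyon dropouts, presumably part of what the "intelligent alert detection system" handles.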

The very first implementation of the system will take place this summer in Paris at the Medidep Brune nursing home, and use the Orange phone network.

Nearly 800,000 people suffer from Alzheimer's in France, three-quarters of whom are in a home-care situation. Alzheimer's affects close to 10% of the population over the age of 65.

"To successfully keep Alzheimer's patients in the home, we must do our utmost to guarantee their safety", says Dr. Stephane Bergeron, President and CEO of Medical Intelligence. "In order to responsibly secure the patient's environment, without restricting or isolating him or her, we must be alerted at the very beginning of an instance of wandering or running away. The Columba bracelet ensures such security and enables, when required, the geopositioning of the wearer. You can even speak with him because the phone-bracelet is connected to the Orange network and includes a "handsfree" phone feature."

"Orange has supported the development of this product for the last two years, and we are pleased to see that our mobile phone network can make an effective contribution to patients' security and well-being. We are proud to contribute to the introduction of an innovative mobile service that responds to a major public health problem. The Columba phone-bracelet provides its wearer with a "lifeline", giving him more freedom and more security," stated Jean-Noel Tronc, Director of Strategy at Orange.

The Columba phone-bracelet is expected to be available in drugstores before the end of 2005.

Found at http://www.pcscanada.com/newsstory_details.asp?id=1470&type=

Read the Press Release

| Comments (0) | | TrackBack

consumer profiling gets in your head

posted by:Dina Mashayekhi // 06:54 PM // June 13, 2005 // Surveillance and social sorting

Marketers try high-tech tool to push brain's 'buy button'

Marketers are trying to use brain scans to convince consumers to buy their product, although scientists say the approach may not be ready to be applied.

Peering into someone's brain seems like it may have its benefits for marketers, who aim to find out whether consumers will like a product.

"If you knew exactly how they were hearing your messages, clearly you can choose the best way of making that message to them," said Barry Welford, president of Strategic Marketing Montreal.

Brain scan technology, such as functional MRIs, shows which parts of the brain are activated by impulses. Some marketers theorize that since the scans suggest positive or negative reactions, the technology can help them to fine-tune their message.

"Right now, media tools are pretty much limited in terms of how to reach people," said Fred Auchterlonie, vice-president of PHD Media Canada, one of the first companies to use the technique in Canada. "Really what we're trying to get at is how to influence them. But the technique is not cheap."

A single experiment with at least 12 subjects could cost as much as $7,500.

Continued at CBC News

| Comments (0) | | TrackBack

EPIC 2014

posted by:David Matheson // 08:51 PM // June 10, 2005 // Digital Democracy: law, policy and politics

My wife recently drew my attention to an 8-minute fake documentary, "EPIC 2014." (Not the EPIC that many of you will be familiar with; this one stands for "Evolving Personalized Information Construct.") It was created by a couple of journalists, and contains an interesting vision of the future. Aside from just being a lot of fun (in the way that horror movies can be fun, some might say), there are some nice tie-ins to the interests of project members. For background on the whole thing, you can go here or here. To watch the movie, click here.

| Comments (0) |


posted by:Ian Kerr // 07:57 AM // // TechLife

one common conception of "privacy" is as a kind of "space" that enables intellectual consumption/exploration/achievement by allowing people to be "more or less inaccessible to others, either on the spatial, psychological or informational plane."

to the extent that privacy in this sense is of significant instrumental value, it was interesting to read an item in my inbox this morning from the register detailing a principal's decision to ban iPods in her school because their use "encourages kids to be selfish and lonely." according to the principal of the International Grammar School, "iPod-toting children were isolating themselves into a cocoon of solipsism."

ever since nicholas negroponte coined the concept of the "daily me" (referring to people's growing desire for only that information & news that pertained to them individually), much attention has been paid to network technologies and their ability to isolate rather than connect people.

after years of thinking about this, i still have no firm point of view on this subject -- it is interesting to note that the article on the iPod referred also to the Blog as a technology used by "ego-centric 'social minimizers'" -- but i do think it is worth raising the question whether these technologies are tools of that sort, or whether their use is better understood as a symptom of deeper social ills.

a penetrating example of the latter view is found in an image alan lightman portrays, in an angst-ridden rail against the new technological age, in his book the diagnosis. in the booming, buzzing confusion of technosociety, his characters would put on their headsets and blare music as a last-resort means of achieving intellectual solitude.

so ... where is the problem? and what is the solution?

| Comments (4) | | TrackBack

Privacy about the Future?

posted by:David Matheson // 11:48 AM // June 09, 2005 // Core Concepts: language and labels

I'm partial to a fairly minimalist conception of informational privacy. Informational privacy, I'm inclined to think, is not essentially a matter of control over, or limited access to, personal information (though protecting one's [right to] privacy -- something of huge importance -- surely is); it is rather simply ignorance of personal information.

Much of my recent work has consisted of exploring the plausibility of this conception of privacy, and that in turn has involved addressing a number of putative objections to it. Many of these objections I'm comfortable with, simply because I've come across them before and have had time to sort out why I think they're not as compelling as it might seem on first pass. But every once in a while I'm hit with an objection that seems to come from nowhere, and about which I'm not really sure just what to say.

Recently, while presenting a paper on privacy at the Social Sciences and Humanities Congress in London, an audience member delivered up an objection of this sort. It went something like this. On my account, an individual has privacy relative to a personal fact (bit of personal information) about her and to another individual just in case that other individual does not know the personal fact. But now consider future personal facts: personal facts about what an individual will do, or about what will happen to her. (Note that we're not talking here about facts about what an individual intends to do in the future, for these are present facts about an individual's intentions.) Does another's ignorance of these facts about an individual mean that the individual has privacy with respect to them? It seems not, went the objection; it is odd to claim that individuals can have privacy about future personal facts. Yet my account implies that the individual does in fact have privacy with respect to them. After all, on my account, others' ignorance of personal facts is all that is required for privacy with respect to them.

To illustrate, suppose that it is a future personal fact about Fleeta that she will kiss Alva at 9:42 pm tomorrow evening. It's a fact, but no one knows it yet -- not Alva, not even Fleeta herself. Does it make sense to say that Fleeta has privacy relative to Alva and to the fact that she will kiss Alva at 9:42 tomorrow evening? My account of privacy says that it does, in effect. But that's counterintuitive, says the objection. It doesn't make sense to say that Fleeta has privacy relative to Alva and that future personal fact about her.

Now one way I could respond to this objection is to deny that there are any such things as future personal facts: the future is in some sense open for individuals in a way that the past and present are not, and this is best explained by the idea that whereas there are past and present personal facts, there aren't any future personal facts. And, of course, if there are no future personal facts, then there are no future personal facts for others to be ignorant of, and hence no privacy to be had relative to others and such facts.

But that's not the way I want to respond to the objection. I'm enough of a determinist to agree with the objector's assumption that there are future personal facts. Even so, it seems to me to make perfect sense to say that one can have privacy with respect to future personal facts about oneself.

Consider the movie Minority Report. Here we have a situation in which, due to the powers of the "precognitives", plenty of future personal facts about individuals are known -- e.g. that if so-and-so is not arrested, then he will most certainly commit murder. As a viewer, one can't help feeling that there is some injustice going on here with respect to those individuals who get arrested for their future crimes. (Leaving aside the obvious injustice done to the precognitives through their exploitation.) But what is the injustice? Well, there's the point that, at the time of their arrest, they haven't actually committed the crime yet. But I'm not sure that this point alone is a compelling ground for complaint. If we know that, unless arrested now, so-and-so will most certainly commit murder in the near future, then it might be quite in accord with justice to go ahead and arrest so-and-so.

I suspect that what does ground the feeling that these "future criminals" are being treated unjustly is the sense that they have had their right to privacy violated. Agents of the state who make use of the precognitives to find out future personal facts about individuals are acting as unjustly as they would be if they went on unwarranted fishing expeditions to find out past personal facts about individuals. In both situations, the individuals have their right to privacy violated. And how do they have their right to privacy violated? By having their privacy unwarrantedly taken away with respect to the relevant personal facts. But then it follows that in the case of the "future criminals" depicted in this movie, it makes sense to talk of them having and losing privacy with respect to future personal facts about them. By the state's use of the precognitives, these individuals lose privacy with respect to future personal facts about them. Were the state's use of the precognitives abolished, their privacy with respect to those facts would be regained.

Perhaps this provides a new perspective on Minority Report: the next time you see it, try doing so through the lens of privacy issues. But more to the purpose of this post, I think reflection on what's going on in the movie helps deal with the above-mentioned objection to the minimalist conception of privacy I favor. Assuming there really are such things as personal future facts, the movie nicely illustrates how one can -- contrary to that objection -- possess or lack privacy with respect to them.

| Comments (2) |

Send Us Your Anonymous Tips!

posted by:Annalee Newitz // 11:59 PM // June 07, 2005 // ID TRAIL MIX

When I advocate for anonymous communication in the United States, I always hear the same two objections: you can't trust an anonymous source; and anonymity promotes crime. And yet if you search for the phrase "send anonymous tips" on the Web, it is found most commonly on the Web sites for US police departments. So law enforcement is using untrustworthy, possibly criminal sources to -- stop crime. How do we make sense of this weird contradiction in US sentiment?

Apparently, under some circumstances, we trust anonymous people with our very lives. Indeed, one of the most notorious crimes in US history – the Watergate break-in which eventually brought down the Nixon regime – was brought to light by an anonymous tipster who did not reveal his true identity until last week. And yet most people insist that anonymity is antithetical to honesty and safety.

We resolve this conflicted attitude toward unknown speakers by setting up a false distinction between the trustworthy tipster who reports crimes, and the mendacious anonymous communicator who wants to mislead us into buying damaged Star Wars action figures on eBay or to lure our children into misdeeds.

Regardless of how politically suspect such a distinction might be, it can be helpful if we want to legitimize anonymous communications in everyday life. We can use the figure of the trusty tipster to remind people of the many ways anonymous actors participate in public life. In addition to fighting crime, she is also a whistleblower who calls citizens' and consumers' attention to corporate wrongdoings. A terrific article by Jeanne Linzer in the most recent issue of PLoS Medicine (http://medicine.plosjournals.org/perlserv/) explores how anonymous tips from medical insiders have helped journalists uncover cronyism and corruption in the pharmaceutical industry.

But sometimes our trusty tipster has less weighty matters in mind. He may, for example, be a restaurant reviewer who hides his real identity in order to avoid getting special treatment at the restaurants he visits. We would never trust a reviewer who told restaurant owners that he had come to dine in order to write them up in the Globe and Mail – his meal would almost certainly be prepared with more care than average. The same goes for people who review consumer products. Consumer Reports, a magazine which has been publishing the results of product tests on items like cars since the 1930s, insists that its crew of shoppers purchase every item they review without revealing that they are with the magazine. While these trusty tipsters are not anonymous to their readers, they rely on a cloak of anonymity in order to bring readers unbiased reviews.

Of course the trusty tipster is always in danger of having her identity revealed. Litigious groups or individuals may bring libel lawsuits to find out who has been giving bad reviews to their products or businesses; pharmaceutical companies may claim trade secret violations and initiate legal proceedings to find out who is leaking information about their cozy relationships with government regulatory agencies.

That's why I feel cheerful every time I visit a newspaper, law enforcement agency, or product reviews Web site and see a colorful little box that says "Send us your anonymous tips!" People in the US may complain that criminals use anonymity, but they also know that anonymous people are stopping crime and fraud all the time.

Of course, we can only hope that the Web sites gathering these anonymous tips aren't keeping logs of the IP addresses from which these tips originate, or that the tipsters themselves are using an anonymous network like Tor <http://tor.eff.org> to cover their digital tracks. But that's a subject for another ID trailmix entry . . .
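The logging concern is concrete: whether a tip is anonymous in practice depends entirely on what the receiving server records. A hypothetical sketch of a tip handler that deliberately discards the sender's network address (the function and field names are invented for illustration):

```python
def handle_tip(tip_text, remote_addr):
    """Store the tip content only. The sender's IP address arrives with
    the request (the transport requires it), but it is deliberately
    discarded and never written to any log or database."""
    del remote_addr                  # drop it immediately
    return {"tip": tip_text}         # nothing identifying is retained

print(handle_tip("suspicious activity on elm st", "203.0.113.7"))
```

Even this is only half the story: web-server access logs, upstream proxies, and the tipster's own ISP can all retain the address independently, which is why sender-side tools like Tor still matter.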

| Comments (0) | | TrackBack

Skepticism about Anonymous Sources

posted by:David Matheson // 11:33 AM // // Core Concepts: language and labels

In a very interesting ID Trail Mix post last week (which I would encourage everyone to read), Marsha Hanen concludes with the suggestion that readers manifest a "healthy and even heightened skepticism" about the reports of anonymous sources. Properly understood -- as a call, essentially, to avoid gullibility -- the advice is entirely sound. But I worry that the advice might be misunderstood and taken to an unsound extreme. We don't want to be so skeptical of anonymous sources that we refuse to accept their reports unless or until they either shed their anonymity or we have gathered for ourselves compelling independent confirmation of what they have reported.

Nevertheless, such extreme skepticism is tempting, and one might attempt to justify it along the following lines:

Premise 1. If a source S is anonymous to you, then you don't know who S is.
Premise 2. If you don't know who S is, then you ought not to trust what S says.
Conclusion. Therefore, if a source S is anonymous to you, then you ought not to trust what S says.

Premise 1 here just seems to follow trivially from a standard conception of anonymity. And, combined with Premise 2, it yields the Conclusion with all the force of deductive logic. But what about Premise 2?

I don't think we should buy Premise 2, for it seems to me that there are considerations that might indicate the trustworthiness of an anonymous source S while falling short of revealing who S is. (For the purposes of the present discussion, I'm going to assume that in order properly to trust what a source says you must have some (positive) reason to think that the source is trustworthy. Some would say -- and I think not entirely implausibly -- that in order properly to trust what a source says, all that is required is that you have no (negative) reason to think that the source is untrustworthy; but leave that issue aside for now.)

What considerations do I have in mind? Well, consider the following suggestions:

1. S has her trustworthiness vouched for by others whom you properly trust. You might not know who S is, but others presumably do; and among those others might well be people you know and trust who can assure you that you should accept what S says. In that sort of situation, it might very well be proper of you to trust what S says. And the vouching can be construed quite liberally. For example, if you don't know who S is but do know that the good folks at Princeton University have awarded her a PhD in a field directly related to that of which she speaks, this might count: S's trustworthiness is in effect being vouched for by folks at Princeton whom you have good reason to trust (e.g. faculty members on dissertation committees).

2. S has a good track record when it comes to her past reports. Suppose that S is a Jane Doe involved in a very lengthy legal trial, and that, during the early stages of the trial, she makes a number of substantial claims under oath that prove to be correct and independently verifiable. During a later stage S makes another such claim. Despite your having no knowledge of who S really is (she is a Jane Doe to you, after all), you might nonetheless have good reason to accept this later claim of hers in the absence of independent confirmation.

3. There is a high degree of coherence in what S says. What S says might in fact be a rather long report, containing a number of putatively factual claims. The chances are good that if S is lying or otherwise speaking falsely, these claims won't all fit together in a logically and explanatorily coherent manner. Similarly, the chances are pretty good that if they do fit together in this way, S is speaking the truth.

These are of course only a few considerations that might be relevant. And it would be silly to suggest that such considerations can serve as an airtight guarantee against being mistaken or misled. Still, they are considerations that might pull in favor of the trustworthiness of a source S, even when you don't know who S is. If you have enough considerations of this sort in hand, accordingly, it may very well be the case -- contrary to Premise 2 above -- that you ought to trust S despite not knowing who she is. This allows us to avoid the extreme skepticism suggested by the Conclusion above. Plausibly, you can properly trust what a source says even if the source is anonymous to you.

| Comments (6) |

Google's memory stirs privacy concerns

posted by:Jennifer Manning // 10:50 PM // June 06, 2005 // Surveillance and social sorting

When Google's 19 million daily users look up a long-lost classmate, send email or bounce around the web more quickly with its new Web Accelerator, records of that activity don't go away.

In an era of increased government surveillance, privacy watchdogs worry that Google's vast archive of internet activity could prove a tempting target for abuse.

Like many other online businesses, Google tracks how its search engine and other services are used, and who uses them. Unlike many other businesses, Google holds onto that information for years.

Some privacy experts who otherwise give Google high marks say the company's records could become a handy data bank for government investigators who rely on business records to circumvent Watergate-era laws that limit their own ability to track US residents.

At a time when libraries delete lending records as soon as a book is returned, Google should purge its records after a certain point to protect users, they say.

"What if someone comes up to them and says, 'We want to know whenever this key word comes up?' All the capability is there and it becomes a one-stop shopping centre for all these kinds of things," said Lauren Weinstein, an engineer who co-founded People for Internet Responsibility, a forum for online issues.

Click here for the rest of the article.

| Comments (0) | TrackBack

Pharming and other security woes hector VoIP

posted by:Jennifer Manning // 10:17 PM // // Surveillance and social sorting

From: CNET.com

There are few clearer signs that an information technology has hit the mainstream than when it becomes the focus of pharming and other security attacks.

Low-cost voice over Internet Protocol (VoIP) phone services now capturing the general public's imagination are indeed being targeted by online attackers, who have been known to eavesdrop on calls, deny customers access to their VoIP service and cause "clipping," or degraded service quality, on some accounts, say executives gathered here for Supercomm 2005, a major phone trade show.

VoIP's security problems only heighten concerns simmering since January, when a Harris Interactive poll found that 60 percent of all adults in the United States who are aware of Internet telephony but not using it believe it could be subject to security and privacy issues.

VoIP's security vulnerabilities both highlight the enormous potential of the service and threaten to derail the success of freely distributed VoIP software, which lets any Internet connection also serve as a home or business phone line. About 7.5 million out of 200 million homes and offices have traded in their traditional phone lines for VoIP. But research firm Gartner predicts there could be as many as 25 million VoIP-connected homes by 2008. Among the big draws: VoIP operators' $20-a-month unlimited calling plans.

One of VoIP's flaws is that it is inherently vulnerable to hackers because, like e-mail, VoIP calls find their way by locating an IP (Internet Protocol) address, a unique set of numbers assigned to each device connected to the Web. Yet while scores of commercial VoIP providers have quickly expanded to take advantage of the growing interest in the service, many have not implemented even basic security measures, such as encrypting phone calls.

While information about attacks on VoIP systems is mostly still the stuff of white papers, some businesses using the service are encountering attacks, according to corporate phone-systems integrator BearingPoint Institute, which didn't provide details.

"Security is crucial to broad acceptance of IP telephony," said Christian Stredicke, founder of Berlin-based Snom Technology and a speaker at a Supercomm security summit.

Time may be running out to completely contain VoIP security threats, however. In January, analysts at Gartner said it will be only two years before organized attacks begin on signaling networks, the portions of telephone networks that carry the routing instructions that ensure calls reach the right place.

"Not surprisingly, as many VoIP operators rush to capture new business, hackers are rushing too--to explore and exploit ways to steal or disrupt these services," Stephen Doty and Fred Hoffmann, two BearingPoint managers, wrote in a recently released white paper.

For their part, many VoIP service providers and equipment makers are turning to the relatively new Voice over IP Security Alliance. The alliance will define security requirements across a variety of VoIP deployments and address issues such as security-technology components, architecture and network design, network management, and end-point access and authentication.

New VoIP security threats seem to surface every week, a brisk pace. One that recently emerged is a VoIP version of pharming, one of the latest security scares for Internet users of all sorts.

Pharming exploits vulnerabilities in a piece of network equipment responsible for translating e-mail and Web addresses into IP addresses. Security experts speaking at Supercomm this week said that, by hijacking a domain-name system (DNS) server--a computer that stores and organizes IP addresses--pharmers get control of VoIP calls.

Without their knowledge, VoIP users' calls could then be redirected to IP addresses completely different from the ones the users dialed, warns Paul Mockapetris, the inventor of the domain name system.
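The redirection Mockapetris describes can be illustrated with a toy resolver. This is a hypothetical sketch, not actual attack code: the domain names, IP addresses, and dictionary-based "DNS server" below are all invented for illustration. The point is simply that once an attacker can rewrite the record a VoIP client consults, every later call to the same name silently lands at the attacker's address.

```python
# Toy illustration of DNS-based pharming against a VoIP call.
# The hostnames and IPs are hypothetical; a real pharming attack
# compromises an actual DNS server, not a Python dict.

# The "DNS server": maps a VoIP gateway name to an IP address.
dns_table = {"sip.example-voip.com": "203.0.113.10"}  # legitimate gateway


def resolve(hostname: str) -> str:
    """Look up the IP address a VoIP client would dial."""
    return dns_table[hostname]


def place_call(hostname: str) -> str:
    """Route a call to whatever IP the resolver currently returns."""
    ip = resolve(hostname)
    return f"call routed to {ip}"


# Before the attack: the call reaches the real gateway.
assert place_call("sip.example-voip.com") == "call routed to 203.0.113.10"

# Pharming: the attacker rewrites the DNS record in place...
dns_table["sip.example-voip.com"] = "198.51.100.66"  # attacker's server

# ...and a user dialing the very same name is redirected without
# any change visible on their end.
assert place_call("sip.example-voip.com") == "call routed to 198.51.100.66"
```

The user's client behaves identically in both cases, which is what makes the attack hard to notice: the deception happens in the name-to-address translation, not in the phone software itself.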

The list of different VoIP attacks is growing and highlights the adaptability of the attackers.

One of the earliest VoIP threats identified, Caller ID spoofing, substitutes someone else's Caller ID information for your own.

The security problem known as clipping, meanwhile, occurs when a cable modem is targeted with a huge flood of traffic, creating a "clipping" disruption on VoIP phone calls. Another type of attack, called V-bombing, occurs when thousands of voice mails are targeted simultaneously to a single VoIP mailbox.

| Comments (0) | TrackBack

Group blasts Canada Revenue Agency over security lapse

posted by:Jennifer Manning // 03:06 PM // // TechLife

From the Canadian Press:
A taxpayers' lobby group is upset at what it describes as lax security at the Canada Revenue Agency.

John Williamson, of the Canadian Taxpayers Federation, is responding to an audit that shows a handful of former agency employees had the ability to access sensitive case files long after they left their jobs.

The staff belonged to offices in the Atlantic region.

The security lapse involved identification codes and passwords that employees use to log into the agency's central computer.

Both electronic codes remained active within the agency's computers for months — sometimes years — after the employees left.

Williamson says Canada Revenue should have been more vigilant, especially since identity theft is such a huge issue these days.

He says the incident is bound to shake the confidence of taxpayers.

Despite the slip-up, Canada Revenue says it can find no evidence that former employees actually had unauthorized access to the system.

| Comments (0) | TrackBack

Hacking the Personal Area Network

posted by:Jason Millar // 10:37 AM // June 03, 2005 // TechLife

Innovations in wireless technology are spawning new implantable and wearable devices that will communicate with one another, resulting in the emergence of the Personal Area Network (PAN). Bluetooth, a wireless communication standard, is fast emerging as the means by which these devices will communicate since it is specifically designed for short-range wireless communication between small devices. Examples of Bluetooth devices of this sort include cochlear implants for the deaf, insulin pumps and blood glucose monitors for diabetics, and full body monitoring systems that continuously monitor critical bodily functions and communicate the information to medical professionals. Other Bluetooth devices include cell phones, PDAs, headsets, notebook computers--all of which could be communicating sensitive physiological data or controlling the associated physiological processes over the PAN.

Although the benefits of PAN devices are obvious, they also increase the potential for harm to the person because they provide access to highly personal sets of data. Ian Kerr recently discussed some implications of PANs at a conference in Ottawa, during his presentation entitled "Still Feelin' 'icky': The Utopias of Conrad Chase, Kevin Warwick and other Digital Angels". Compromised security in the PAN could result in any range of problematic outcomes, such as an invasion of privacy, discrimination based on knowledge of physiological conditions, or the loss of control of physiological processes vital to the well-being of the individual. Imagine a hacker suddenly broadcasting audio into a cochlear implant or publishing the details of your personal medical conditions on the World Wide Web.

Cryptographers in the UK recently discovered a security hole in Bluetooth technology that allowed them to take control of a Bluetooth network (a PAN for all intents and purposes) and manipulate the communications within it. Combined with the potential that Bluetooth offers for locating and identifying a person solely based on the unique IDs of their PAN devices, the technology raises serious privacy concerns.

Hacking the PAN will not simply result in lost productivity or a trip to the store to buy the latest anti-virus software. Designers of Bluetooth devices and members of the Bluetooth Special Interests Group need to be aware of the unique potential risks posed by PAN technology so that they can adopt design features that respect and strengthen individuals' privacy within the PAN.

| Comments (1) | TrackBack

Use PETs? You must have something to hide

posted by:Alex Cameron // 07:55 PM // June 02, 2005 // Digital Democracy: law, policy and politics

In this US case, evidence that the accused had PGP (pretty good privacy) technology on his computer was found to be relevant to proving criminal intent to attempt to make child pornography. This was despite findings that every Mac computer that comes out today may have PGP on it and that no encrypted files were present on the accused's computer.

This is a troubling finding because, as I see it, there is no win for privacy here.

On one hand, if ordinary emails and other online communications can be intercepted, they may not attract a reasonable expectation of privacy.

On the other hand, if we use privacy enhancing technology like PGP to establish a reasonable expectation of privacy in our files and communications, then it must be because we have something to hide. Even where there is no clear evidence of our use of encryption for illegal purposes, the fact that we even have it on our computer can be used against us.

Though perhaps not surprising given that child pornography was alleged in this case, it seems troubling that the presence of PGP could be found by a court to be relevant to proving criminal intent, particularly in the absence of evidence about whether and how the PGP was specifically used.

Full text of US court decision: http://pub.bna.com/eclr/K203106.doc

| Comments (0) | TrackBack

ITAC Meeting

posted by:Catherine Thompson // 04:42 PM // June 01, 2005 // Walking On the Identity Trail

Yesterday I went to the Information Technology Association of Canada (ITAC) Cyber Security Forum held at the Standards Council of Canada on behalf of the Canadian Internet Policy and Public Interest Clinic.

A few matters arose that relate directly to the Anonymity Project’s activities.

First, Bill Munson of ITAC mentioned that the RCMP recently gave a presentation to ITAC in which the RCMP indicated it was going ahead with biometric and real-time ID proposals, including chips in passports.

Industry Canada Director General of E-Commerce Richard Simpson spoke about the recently released Spam Task Force Report. At the end of his presentation, Simpson said he wanted to emphasize two points.

First, Simpson said if the Internet is to be an economic infrastructure, authentication and identity management ‘need to be dealt with.’ He said the days when we go on the Internet anonymously might have to end.

Second, Industry Canada will be looking to reinforce the ground rules of the Internet as an economic marketplace. It is Simpson's belief that clear and consistent rules will lead to economic growth and that privacy will necessarily play a part.

Despite the brevity of his comments, it would seem part of Industry Canada’s next major focus could relate specifically to the subject matter of the Anonymity Project.

| Comments (0) | TrackBack



This is a SSHRC funded project:
Social Sciences and Humanities Research Council of Canada