understanding the importance and impact of anonymity and authentication in a networked society

Municipal WiFi is Coming, and Why Privacy Advocates Should Care

posted by:Graham Longford // 07:00 AM // July 25, 2006 // ID TRAIL MIX


At a press conference in March of this year, Toronto Hydro Telecom (THT) announced an ambitious plan to turn Toronto into the largest Wi-Fi (wireless fidelity) internet ‘hot-zone’ in North America. Flanked by Mayor David Miller, THT’s CEO David Dobbin called the availability of Wi-Fi in public spaces, and the ubiquitous, mobile connectivity that it enables, “the new benchmark for urban living.” Miller called the announcement “a historic moment in Toronto’s development as a world-class city.” THT’s announcement vaults Toronto to the forefront of municipal WiFi deployments in North America, alongside “muni WiFi” pioneers like Philadelphia, San Francisco, and Fredericton.

On its face, the case for deploying municipal WiFi systems is a compelling one. Advocates claim that city-wide WiFi schemes promote economic development and tourism, attract and retain skilled workers and investment, increase the efficiency of municipal services, improve emergency response and public safety, and narrow the digital divide. It is for reasons like these that hundreds of municipalities in North America, Europe and Asia are implementing or planning WiFi systems.

Yet, despite its allure, municipal WiFi is controversial, particularly in the U.S. Private sector critics argue that municipalities have no business providing internet service to citizens. Muni WiFi services, they claim, duplicate and unfairly compete against private telecommunications services. Public health advocates, meanwhile, have weighed in with concerns regarding the dangers of electromagnetic radiation emitted by wireless devices. Largely absent from the debate so far, however, have been arguments about the privacy risks of such systems. As major WiFi deployments like Toronto’s are rolled out across Canada and the rest of North America, surveillance and privacy scholars, activists and policy makers must become engaged in order to ensure that such systems are implemented in a manner that is transparent, accountable and as respectful of user rights, including privacy, as possible.

The following offers an overview of the THT WiFi plan and a preliminary analysis of its privacy implications. As THT’s service has yet to be deployed, some of this is unavoidably speculative. We can extrapolate, however, from the experience of other municipalities, including San Francisco and Fredericton, which will also be discussed. I conclude by reviewing a set of guidelines for enhancing the privacy of muni WiFi systems proposed by privacy advocates such as EPIC and EFF, and call for the development of THT’s WiFi system in conformity with them.


One zone, no strings attached?

THT’s WiFi plan, dubbed “One zone, no strings attached,” envisions a wireless “cloud” covering the entire city (630 square kilometres) with ubiquitous internet connectivity within three years. The first phase of the rollout is under way, with THT promising to cover a six-square-kilometre area in the downtown core by the end of 2006. From a technical standpoint, the THT network will use licence-exempt wireless spectrum (the same spectrum used for household devices like garage-door openers and baby monitors). Bandwidth will be supplied through THT’s existing 450 km fibre-optic network, which it uses to monitor Toronto’s electricity grid. THT claims that its WiFi internet service will be up to ten times faster than existing broadband services in the city. The plan also relies on mounting WiFi equipment on many of the city’s 18,000 street lights, which are owned by THT’s parent company, Toronto Hydro. Under the plan, every seventh street light in the city will be equipped with a WiFi device, bathing the city in wireless connectivity (Hamilton, 2006; Toronto Hydro Telecom, 2006).

While THT is a subsidiary of municipally-owned Toronto Hydro Corporation, its WiFi business model is unambiguously commercial and revenue-oriented. THT will offer its WiFi service free of charge for the first six months of operation, to be followed by the introduction of tiered service plans available on a prepaid or subscription basis at market competitive rates. THT plans to market the service to downtown businesses, workers, restaurant and hotel patrons, and university students (Toronto Hydro Telecom, 2006). Whether or not it will eventually target the broader residential broadband market in the city remains unclear.

Until THT’s WiFi network is deployed and its terms of service made public, it is difficult to comment on its privacy implications in detail. We know enough about its business model already, however, to raise some red flags. First, the THT system will require users to create accounts and authenticate. While this need not entail divulging personally identifying information, it certainly facilitates user data collection and session-to-session tracking, which could eventually be tied to personal information. Since THT also intends to sell the system on a subscription basis, it will most certainly collect and retain users’ banking and/or credit card information, thus enabling user data to be tied to individuals.

Secondly, THT has made it very clear that the main purpose of the system is to maximize revenue for its parent company, Toronto Hydro Corporation. With this in mind, THT will most certainly examine the revenue potential of the user data that it collects. Major web properties will no doubt line up to gain access to THT’s user data. Furthermore, THT may also be tempted by the prospect of generating additional revenue by selling ad space with its service; indeed location-sensitive advertising is a major component of many muni wifi business models, including San Francisco’s (Chester, 2006). Location-based advertising is dependent upon combining user data with location information in order to customize ads and services to a user’s geographic location. Such a combination can also be used to reveal an individual’s location, as well as patterns of movement through the network coverage area.

Finally, and alarmingly, THT’s Dobbin recently speculated on the feasibility of integrating CCTV surveillance cameras into the system, mounting camera units on city street light poles and transmitting images to police via the THT WiFi network (Granatstein, 2006).

What does THT’s plan mean for the privacy rights of Torontonians and visitors to the city, as thousands (if not more) flock to the service? Fortunately, we do not need to wait for THT to deploy its system fully in order to grasp the potential implications for user privacy. The experience of municipalities that are farther down the road to deployment is instructive.


Google’s San Francisco WiFi deployment

In the spring of 2006, a partnership between Google and Earthlink was awarded a contract to develop a WiFi network for the City of San Francisco, beating out four other bids. The Google/Earthlink plan involves providing tiered internet access services, including a free low-speed service provided by Google and a paid high-speed service provided by Earthlink. The provision of each service is to be supported by a different business model. The free, low-speed (300 Kbps) service offered by Google will be financially supported by online advertisements streamed to users of the network and tailored to their location, habits and preferences as tracked by Google. Earthlink’s premium, high-speed (1 Mbps) service will cost users approximately $20 per month, and be free of advertising.

San Francisco’s proposed WiFi network has been scrutinized by privacy advocates. EPIC, EFF and the ACLU recently prepared a privacy analysis of the five competing bids for the contract, looking at the provisions made in each for the collection, use and retention of user data (EPIC, 2006). Four out of five, including Google/Earthlink’s, were found to be privacy-invasive. Only the proposal submitted by SF Metro Connect, a non-profit community network, passed muster. Analysis of the Google/Earthlink bid showed that the collection, commercialization, and sharing of user data would be the default setting for the system. Google’s free service will be accessed via a location-aware captive portal page and user sign-in, thus allowing persistent tracking across sessions. Along with collecting user email addresses and usernames, Google intends to collect, analyze and commercialize user location information in order to customize the advertising and other location-based services that users will see and have access to. Google’s sole concession to privacy concerns is an “opt-out” provision for those who do not wish to receive location-specific advertising and services or have their information shared with third parties; collection and sharing thus remain the default. Additional concerns were raised about how Google will respond to requests for user information by law enforcement officials, including Google’s policy of not informing users when such requests have been made.

All told, the Google/Earthlink proposal was judged by the EPIC/EFF/ACLU study to be one of the most privacy-invasive of the 5 proposals for the San Francisco system. Google’s model for a free, ad-supported WiFi service has been the subject of intense scrutiny by the press and other municipalities, although rarely in relation to its privacy implications. Should it prove to be commercially viable, the Google model may well be replicated in hundreds of municipalities across the U.S., and possibly Canada, a prospect that should concern us.


Setting, applying, and advocating a standard for privacy-protective municipal WiFi systems

Part of the problem with the San Francisco deployment, according to the privacy advocates, is that the City set no minimum standards for privacy protection in its initial Request For Proposals. What might such a set of standards look like? The EPIC/EFF/ACLU privacy analysis document proposes a “Gold Standard” for privacy-protective municipal WiFi systems. The fundamental principle of a privacy-protective system is that “where information needs to be collected, it should only be used for operational purposes and deleted after it is no longer needed” (EPIC, 2006). Practically speaking, a “Gold Standard” muni WiFi system should:

• allow access without "signing in"; sign-in procedures often require personal information that enables tracking;
• offer a level of access that is free, since fee-based systems (e.g. subscription services) enable the identification of users through credit card or bank account information, unless provision for cash payment is made; and,
• forego targeted advertising and other customized electronic services based on user identity, location or surfing behaviour by default; such services may be offered, but only on an “opt-in” basis requiring the user’s explicit consent.

For more detailed information on the EPIC/EFF/ACLU “Gold Standard,” including recommendations for data storage and retention practices, see EPIC, 2006.

Applying this Gold Standard to THT’s WiFi model is difficult of course, given that the service has yet to be rolled out. Based on what we know so far, however, it is highly unlikely that it will meet the standard. THT’s insistence on the use of log-ins and paid subscriber accounts ensures the collection of information beyond what is minimally and technically necessary to operate and permit access to the system, and creates the conditions for the persistent tracking of user behaviour tied to personally identifying information. The latter will also allow THT to construct commercially valuable user data profiles that it will be tempted to exploit by selling them to third parties. Only the adoption of an explicit “opt-in” policy for the collection and sharing of such data would mitigate the privacy risks posed by such a move.

The fact that THT has yet to roll out its system presents an opportunity to intervene, however, just as it is developing its policies and terms of service. The need for intervention is urgent, given that many other municipalities in the country are watching to see if the THT model provides a viable blueprint for other deployments. Any influence that privacy advocates have in shaping the THT model may well have ripple effects across the country. But the points of leverage from which to influence THT – be they city politicians, City hall committees, or Toronto Hydro itself - need to be identified, and pressure brought to bear. The privacy risks of muni WiFi need to be identified and articulated, along with best privacy practices. And as we think about best practices, we would do well to recall and revive interest in Canada’s homegrown model of muni WiFi – the Fredericton e-Zone – which has been eclipsed by the recent hype associated with the deployments in Philadelphia, San Francisco and, now, Toronto. Fred e-Zone has been operating successfully as a free municipal WiFi service in the New Brunswick capital since 2003, and without using authentication procedures, log-ins or collecting personal information. As the muni WiFi wave begins to roll across this country, we would do well to study the “Fred e-Zone” experiment closely, to better understand what has enabled it to succeed despite its admirably minimalist approach to user data collection.

References:

Chester, Jeff (2006) “Google’s Wi-Fi Privacy Ploy,” The Nation, March 24,
(http://www.thenation.com/doc/20060410/chester).

EPIC (2006) A Privacy Analysis of the Six Proposals for San Francisco Municipal Broadband,
(http://www.epic.org/privacy/internet/sfan4306.html).

Granatstein, Rob (2006) “Network could be invitation to big brother,” Torontosun.com, July 15, 2006,
(http://torontosun.com/News/TorontoAndGTA/2006/07/15/1685800-sun.html).

Hamilton, Tyler (2006) “Downtown goes wireless,” Toronto Star, March 8, 2006.

Toronto Hydro Telecom (2006) “One Zone, no strings attached,” (www.thtelecom.ca).

__________

Graham Longford is a Postdoctoral Research Fellow in Community Informatics
Co-Investigator, Community Wireless Infrastructure Research Project (CWIRP)
Faculty of Information Studies
University of Toronto



RFIDs: Patents Pending

posted by:Carole Lucock // 09:02 AM // July 23, 2006 //

Today's Globe and Mail contains an informative article on current and projected uses of RFID tags. Although unlikely to be news to many who follow developments in the use of RFIDs, the rapidity of uptake and implementation by industry is troubling. All the more so because this is happening in a regulatory vacuum, where the status quo appears to be that if you want to chip it, then you can. The article refers to a fairly new Canadian RFID Centre, which ominously describes the consumer application of RFIDs as “monitor[ing] the movement of people; for example, shoplifting prevention, and electronic article surveillance.”



Little Brother – Electronic surveillance inside private organizations

posted by:Chris Young // 11:59 PM // July 18, 2006 // ID TRAIL MIX


Much is made of the potential pitfalls of overly broad government surveillance of civilian activities, and rightly so. Most will agree it is a good thing that our society still has at least some intuitive understanding that such powers in the hands of those who govern us can do more harm than good in the long term. However, there is a parallel realm of human activity where surveillance also occurs and which is discussed much less frequently: the world of private organizations and the (mostly) electronic surveillance they engage in over their own employees. Of course, this is more of a potential issue with large organizations that have the resources necessary to pay for the technical and human resource requirements this entails; moving forward, however, smaller and smaller organizations will be able to put in place the mechanisms necessary for employee surveillance, as out-sourcing services in this area will very likely become available. Further, of the two types of organization I am most familiar with, the university and the for-profit corporation, the latter is much more likely, in my view, to engage in employee surveillance, as universities still maintain a respect for researcher independence (among other factors). Corporations, on the other hand, slightly paranoid about anything that might affect their bottom line, will tend to jump reflexively to employee surveillance as just another good business practice. Before providing some thoughts on whether this should even be accepted as true, I will go over a few things that recently appeared in the news that shed a bit of light on what is actually going on in the corporate world.

CBC recently reported on the release of a Ryerson University report discussing the use of electronic eavesdropping on employees by private corporations in Canada. Apart from showing that the practice is widespread, one of the findings was that employers did not stop to think that this sort of activity might be a problem. Another report, surveying American and British corporations, found that close to 40% of these routinely eavesdropped on employee communications. To some extent this is warranted, as what employees communicate to the outside world, and how, is clearly company business. However, many employees use their company email accounts for private communications. Further, other electronic communications media are either coming into regular use or are becoming more networked, making them equally vulnerable to surveillance on the part of the organization. I have in mind here instant messaging, heavily used by those under thirty years of age, and the transition of phone services from analog networks (which in practice make eavesdropping rather difficult) to fully digital and integrated networks, the most obvious example being VoIP (Voice over Internet Protocol) telephony. For example, a private telephone call made over an IP phone on a corporate network to a government agency, during which the employee might communicate information such as their social insurance number, can not only be intercepted and heard by the IT department of that corporation, but can very easily be permanently stored on a corporate network. There are of course many other types of information of a private nature which individuals may prefer to keep to themselves and which are in no way corporate business. The international nature of contemporary businesses and electronic networks also means that such information is as likely to be stored in another country as in one’s physical place of work. If information resides on US networks, it may well be available to US government security agencies under local legislation. Someone trained in law might be better able to shed light on this aspect of the issue than I can.

My own response when working at a private company has been to limit my use of company email software and phones strictly to business. For personal communications I use my mobile phone and encrypted web-based email services. However, I happen to be both tech-savvy and aware of developments and common practices in electronic surveillance, which is not true of the population at large. Further, I have the financial resources to use a mobile phone during the day, which may not be true of some categories of workers. In passing, the importance of encryption becomes obvious in this context as a way for employees to protect their private information, and speaks to its value as a democratizing force in an electronic world.

One point made in the Ryerson report that might be overlooked by some is that very often the IT departments of large corporations implement electronic surveillance practices without the oversight, or even the knowledge, of human resources departments. It seems to me that human resources personnel should be an essential part of the teams creating electronic privacy policies within corporations. For one thing, they are trained in organizational theory and are well placed to judge what the best use of electronic surveillance over employees might be. It is not at all clear to me that pervasive surveillance of employee activity has the best outcome in terms of employee productivity and overall organizational efficiency. Secondly, human resources personnel often have at least some social science or liberal arts background, which (one would hope) gives them more insight into the appropriateness of using technology to peer into the private lives of employees.

Apart from the surveillance free-for-all that some IT departments engage in (at a large corporation I have recently worked at, the answer of one IT person to my question as to what they looked at in terms of web communications to the outside world was “everything”, or words to that effect), there is the added factor that the surveillance policies being implemented, whether planned or ad hoc, are rarely communicated to the employees. Many people may not realize that a private phone conversation related to family matters, for example, will possibly reside on company servers for the next several years or more (the conversation would likely be archived as part of the normal data storage activities of the company in question).

The surveillance of private communications in the corporate world, although less politically sensitive than government surveillance of the civilian population, does warrant some attention, as it will affect more and more people’s everyday lives in the coming years.



When Personal Space is Nothing but Trouble

posted by:Jeremy Hessing-Lewis // 11:52 AM // July 17, 2006 // General | Surveillance and social sorting

Microsoft has withdrawn a free program that would have allowed users to create password-protected folders. Private Window 1.0 would have let users create private areas within their user accounts to protect sensitive data.

Unfortunately, the tool was set to cause chaos for IT departments across the land. Companies don't like not being able to access parts of their own network. Moreover, the tool would have taken password-recovery requests to epic levels. The uproar caused Microsoft to retract the software within two days.

Although it's too bad that the tool will no longer be available to individuals, the episode serves as an excellent example of Microsoft trying to balance corporate enterprise economics with personal data security.

Read more on CNET here.



The Original Privacy Position

posted by:David Matheson // 11:50 PM // July 12, 2006 // Core Concepts: language and labels | Digital Democracy: law, policy and politics | Surveillance and social sorting

Thomas Nagel has pointed out that there is an analogy to be drawn between (what I’ll call) the problem of liberalism and the problem of privacy. The problem of liberalism concerns “how to join together individuals with conflicting interests and a plurality of values, under a common system of law that serves their collective interests equitably without destroying their autonomy.” (Nagel 1998, 4-5) The problem of privacy is that of “defining conventions of reticence and privacy that allow people to interact peacefully in public without exposing themselves in ways that would be emotionally traumatic or would inhibit the free operation of personal feeling, fantasy, imagination, and thought.” (Nagel 1998, 5)

One well-known attempt to deal with the problem of liberalism comes from John Rawls (1971). He asked us to imagine individuals in what he called the Original Position. Inhabitants of the Original Position are behind a “veil of ignorance” that cuts them off from any significant knowledge of their position in society: they don’t know whether they are rich or poor, powerful or disadvantaged, members of a social majority or minority, etc. Under such conditions of ignorance, they are faced with the task of determining the basic structures and rules whereby society is to be ordered. Whatever structures and rules they would agree upon, Rawls claimed, are the basic principles of justice (as fairness).

So what would the inhabitants of the Original Position agree upon? Rawls pointed to two fundamental principles. First, the liberty principle:

Liberty. Each individual is to have a maximal amount of basic liberty (including such things as the freedom to vote, the freedom to be considered for public office, freedom of speech, freedom of conscience, freedom of assembly, and freedom from arbitrary arrest and seizure) consistent with a similar liberty for everyone else.

Second, the difference principle:

Difference. Socio-economic inequalities are to be such that they bring the greatest benefit to the least advantaged members of society.

By thus using the decision procedure that consists of thinking about what inhabitants of the Original Position would agree upon, Rawls suggested, we can get clear about the basic principles of justice. These principles provide the general framework for understanding “how to join together individuals with conflicting interests and a plurality of values, under a common system of law that serves their collective interests equitably without destroying their autonomy.” Hence the use of the Original Position gives us one way of dealing with the problem of liberalism.

I wonder if there isn’t an analogous solution to the analogous problem, i.e. to the problem of privacy. Perhaps we can make use of a privacy version of the Original Position; call it the “Original Privacy Position.” Thus, as before, imagine a group of individuals behind a metaphorical veil of ignorance. Now, however, the veil only precludes them from knowing anything significant about their privacy position in society. Inhabitants of the Original Privacy Position, in other words, don’t know such things as whether their privacy is generally at serious risk, whether they attach a great deal of value to their privacy, whether they are in a position to make a lot of money through the diminishment of others’ privacy (or whether others are in such a position with respect to them), etc. And behind this veil of privacy ignorance they are given the task of deciding upon the basic norms of “reticence and privacy,” to use Nagel’s phrase, or norms of the “contextual integrity” of personal information, to use Helen Nissenbaum’s (1998, 2004) equally apt one. The idea would be that whatever basic norms inhabitants of the Original Privacy Position would agree upon, those are the basic privacy norms that any just society should respect.

Maybe they would agree upon norms quite analogous to Rawls’s two general principles of justice. First, there would be the privacy norm:

Privacy. Each member of society is to have a maximal amount of basic privacy consistent with a similar privacy for everyone else.

Then there would be something like the difference of privacy means norm:

Difference of privacy means. Inequalities with respect to individuals’ means of controlling their privacy (e.g. inequalities concerning access to technologies designed to protect their privacy, or to diminish that of others) are to be such that they bring the greatest benefit to the least privacy privileged members of society (i.e. to those members of society who are the least advantaged with respect to controlling their privacy).

Although I haven’t yet chatted with him about this, it seems to me that this Rawlsian approach to the problem of privacy might serve as a basis for justifying Steve Mann’s program of equiveillance. After all, a good case can be made that many of the surveillance structures in our actual society violate one or both of the just-mentioned privacy norms. (Compare Lucas Introna’s (2000) claim that workplace surveillance practices sit ill at ease with the Rawlsian approach to justice as fairness.)

Consider, for example, the surveillance structures built into digital rights management technologies. Those structures certainly yield inequalities when it comes to individuals’ means of controlling their privacy. And they arguably bring no (let alone the greatest) benefit to the least privacy privileged members of society. Steve’s insistence that we aim for equiveillance through sousveillance could perhaps be cast as the point that sousveillance is needed to bring us back to an appropriate respect for such privacy norms as Privacy and Difference of privacy means.

References

Introna, Lucas. (2000). “Workplace Surveillance, Privacy, and Distributive Justice.” Computers and Society 33: 33-9.

Nagel, Thomas. (1998). “Concealment and Exposure.” Philosophy & Public Affairs 27: 3-30.

Nissenbaum, Helen. (2004). “Privacy as Contextual Integrity.” Washington Law Review 79: 119-58.

Nissenbaum, Helen. (1998). “Protecting Privacy in an Information Age: The Problem of Privacy in Public.” Law and Philosophy 17: 559-96.

Rawls, John. (1971). A Theory of Justice. Cambridge, MA: Harvard University Press.



A Flickr of Web 2.0

posted by:Jeremy Hessing-Lewis // 11:59 PM // July 11, 2006 // ID TRAIL MIX


Welcome to Web 2.0, where your life is the content. Thanks to such upcoming venture capital stars as folksonomy, social networking, Wikis, and architectures of participation, this second wave is already upon us. Yet, just behind the jargon and Silicon Valley hype, lies a collection of legal and ethical issues mirroring and amplifying previous iterations of online participation. This ID Trail Mix will briefly survey some of these issues using Flickr photo sharing as a case study of Web 2.0.

Flickr as Web 2.0
The name Web 2.0, although still under debate (and litigation), is an umbrella term used by a series of conferences hosted by O’Reilly Media and MediaLive International. Without referring to any specific technological innovation, the name is used to describe a collection of web tools and standards that fit within broad themes such as usability, participation, standardization, remixability and convergence.

Flickr currently has an estimated 1.5 million users in the increasingly competitive digital photography after-market. Despite the abundance of innovation, the Flickr mission statement remains straightforward: 1) “We want to help people make their photos available to people who matter to them” and 2) “We want to enable new ways of organizing photos.” To accomplish these goals, Flickr has deployed an overwhelming number of tools. Pictures can be uploaded via email, cell phone, Flickr software, and of course the old-fashioned web browser. They can be annotated, blogged, bookmarked, printed on mugs or t-shirts, and published in coffee table books.

And then there is the public dimension, where “available to people who matter to them” seems to include just about anyone with an Internet connection. While pictures can be set to private, most users post publicly in order to avoid having to assign individual permissions to Uncle Hank and Cousin Sue.

Users are the New Bots
When Yahoo! acquired Flickr in March 2005 for an undisclosed amount, it was not immediately clear why it would invest in a small Vancouver company when it already had a far more popular photo sharing site in Yahoo! Photos. The answer is based in how Web 2.0 tools can be used to sort content. Not only are the photos submitted by users, but they can also be annotated and categorized by members of the community itself.

Photo by Open Door Exit under a Creative Commons Licence

Flickr organizes photos by way of folksonomy. In other words, content is identified in an open-ended system of collaboration. A taxonomy by folks. Meta-tags are added to each photo by the person posting the photo. Depending on the level of permissions, all Flickr users may be able to add additional tags. For example, I might include the tags “Birthday” and “Party” with the above photo. My photos would then be returned by searching for any of these tags. Another user might add “Jeremy Hessing-Lewis” at some later point. Some users even add GPS locations to situate photos in geographic context.

Unlike Google, which uses automated programs known as crawlers to locate and index content, a folksonomic system returns results as interpreted by humans. Still, the ultimate goal remains the same: enabling users to find the content they want.
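The tagging scheme described above amounts to an inverted index: each tag maps to the set of photos that carry it, and any user with sufficient permissions, not just the owner, may add tags. A minimal sketch in Python; the names (`TagIndex`, `tag`, `search`) are hypothetical and illustrative only, not Flickr's actual API:

```python
# A minimal sketch of folksonomic tagging as an inverted index.
# Tags are free-form strings; anyone permitted may attach one to a photo.
from collections import defaultdict

class TagIndex:
    def __init__(self):
        self._by_tag = defaultdict(set)   # tag -> set of photo ids

    def tag(self, photo_id, tag):
        """Attach a tag to a photo (owner or any permitted user)."""
        self._by_tag[tag.lower()].add(photo_id)

    def search(self, tag):
        """Return the ids of all photos labelled with the given tag."""
        return sorted(self._by_tag[tag.lower()])

index = TagIndex()
index.tag("p1", "Birthday")               # added by the photo's owner
index.tag("p1", "Party")
index.tag("p1", "Jeremy Hessing-Lewis")   # added later by another user
print(index.search("birthday"))           # prints ['p1']
```

The privacy point in the surrounding text falls out of the data structure: once a third party adds an identifying tag, the photo becomes retrievable by that tag, regardless of what the owner originally disclosed.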

It’s no surprise that Yahoo! also acquired the bookmarking site Del.icio.us in 2005 to complement its folksonomic goals. Del.icio.us allows all users to bookmark sites and add tags to the bookmarks in order to produce annotated lists of popular web content. The inevitable convergence allows Flickr users to add Del.icio.us bookmarks to photos, groups, and portfolios. This effect, where users add value to the network, is known as an “architecture of participation.”

Although the public may accurately classify content, folksonomy vastly complicates online privacy by eroding the content creator’s control over which identifying details are disclosed. While an infinite number of monkeys on an infinite number of laptops may correctly identify pictures of your birthday, do you really want an infinite number of monkeys browsing your birthday photos?

It’s not hard to imagine how folksonomic sorting could further impair privacy and anonymity. What if I choose to post pictures anonymously only to have others fill in the remaining details (as in the above example)? What about controlling the descriptors accompanying your image? The result is that although I may begin by disclosing my information in a certain way, inevitably “I am what you say I am.”

fabz.jpg

Photo by Fabz under a Creative Commons Licence

Social Networking
Since the US$580 million purchase of MySpace by the media behemoth NewsCorp, social networking has been the star of Web 2.0. While the idea of online social forums has been around since the earliest days of the web, with such legendary haunts as the WELL, the social element is now built into just about every possible site.

Flickr’s social networking features are, not surprisingly, based around photography. Users can create a “group” and so long as it is listed as public (the default), anybody can contribute photos. People may then discuss the photos and form their own online communities forged around certain themes. The “wedding” tag alone contains 2,544 groups. Each includes a collection of images and vast amounts of personal information. Although I didn’t attend Mark and Ruth’s Wedding, I feel like I was in the wedding party.

voteninjapartysm.jpg

Photo by Voteninjaparty under a Creative Commons Licence

By participating in Flickr groups, even voyeurism becomes a social activity. Relationships forged in this environment are vulnerable to exactly the same predation and web-stalking as conventional chat rooms. The only difference is that a predator doesn’t have to wait to ask for your picture. Instead, they start with your picture and lure from there.

byncsaSteveCrane.jpg
Photo by Steve Crane under a Creative Commons Licence

The Network is the Platform
The most threatening Web 2.0 feature that I foresee is an interface so usable and efficient that users no longer recognize that they are passing information over a network. When a computer user’s desktop becomes an extension of a website, users give up both privacy and proprietary control of their information.

For example, Flickr incorporates a fully web-based “Organizr” program. It is a simple browser-based tool for uploading and sorting pictures without having to install additional software. Users need only click and drag thumbnail images into a web-based desktop. This simple procedure poses two important privacy issues.

Firstly, superior ease-of-use will likely increase the number of photos that users share. As most computer users will attest, clicking and dragging is often done without considering the full implications of the action. The Organizr feature allows users to load personal pictures to a public, online repository with almost no consideration of the consequences.

An extension of this concern is that increased ease-of-use lowers barriers to participation by softening the technology. An early example of this behaviour would be the transition from command-line operating systems to the modern Windows or Mac desktop experience. In terms of Flickr, users who may not previously have shared their pictures online may find themselves posting personal images. This will increasingly be the case as ordering prints online becomes common practice. Such users may not fully understand the subtleties of access permissions, copyright law, or one and a half million voyeurs.

Secondly, when the network is the platform, all of the user’s information is permanently housed on the servers of the host company. When Yahoo! acquired Flickr, the company’s servers were moved to the US, where they are now governed by US federal law. As more and more web-based programs are developed (see, e.g., Google Calendar), the impacts on personal privacy will be significant. Having lawful access to telecommunications systems is one thing, but having access to an archive of any user’s content should certainly be enough to make law enforcement salivate.

As Web 2.0 continues to be developed, some of its drawbacks are becoming increasingly clear. Will folksonomy be the final death knell of online anonymity? Will society recognize the threats posed by increased social networking? Will privacy laws be able to cope with the tremendous increase in both the amount and variety of personal information being shared online?

While these details may not be resolved any time soon, you can already order prints of Mark and Ruth’s wedding from the nearest Target location…to be picked up within the hour.

| Comments (3) |


We Have the Technology

posted by:Jeremy Hessing-Lewis // 03:48 PM // July 10, 2006 // Commentary &/or random thoughts | General | TechLife | Walking On the Identity Trail

Said the Gramophone, a particularly good MP3 blog, has posted a copy of We Have the Technology by Pere Ubu. I'll leave the explaining to the Gramophone, but I believe that this song is perfectly relevant to the IDTrail project.

Enjoy.

| Comments (0) |


Profiling an ID Thief

posted by:Jeremy Hessing-Lewis // 09:34 AM // July 05, 2006 // Commentary &/or random thoughts | Digital Identity Management

The New York Times has posted an excellent profile of ID thief Shiva Brent Sharma. He was the first person charged under the New York State identity theft statute. The article draws attention to the relative ease with which ID theft is perpetrated and how this simplicity, combined with enormous payoffs, can seduce otherwise bright young people into criminality.

link (thanks Boing Boing)

| Comments (2) |


The New Paternalism, Technologies of Conformity, and Virtue by Default

posted by:David Matheson // 06:49 AM // July 04, 2006 // ID TRAIL MIX

trailmixbanner.gif

In his classic essay On Liberty, John Stuart Mill famously argued that state restrictions on an individual’s freedom are justifiable only to the extent that they are aimed at preventing harm to others. When it comes to the state’s limitation of a citizen’s liberty, the appeal to what is in her own best interest or for her own good, Mill insisted, “is not a sufficient warrant.”

With various qualifications, something like this principle of liberty is near and dear to the heart of every political liberal, and it is usually thought to stand at odds with state paternalism. Yet, according to a recent special report from The Economist (“The Avuncular State,” 8 April 2006), there is a growing endorsement in academic circles of a kind of paternalism thought to be consistent with the liberal premium on individual liberty. The idea of the new, “soft” paternalism is that citizens’ behavior can be given the right shape by the state – for the sake of their own good – with no significant restrictions on their liberty. A central way of accomplishing this is through a restructuring of the default frameworks for citizens’ behavior.

The Economist asks us to consider, by way of illustration,

[one] example of soft paternalism [that] has already attracted the interest of governments and the backing of this newspaper: employees should be signed up for company pension schemes by default. Such schemes, which typically attract tax breaks from governments and matching contributions from employers, are usually in the best interest of workers. You might say that joining is a ‘no-brainer’, except that what little brainwork and paperwork is required defeats a surprising number of people. A soft paternalist would presume that people want to join, leaving them free to opt out if they choose. In one case study […] changing the default rule in this way raised the enrolment rate from 49% to 86%.

Since the default policies and mechanisms favored by the new paternalism (which range far beyond restructured pension scheme defaults) include opt-out features, so the thought goes, they can’t be charged with forcing or compelling citizens to act in their own best interests. And this in turn means that the policies and mechanisms avoid the sorts of external restrictions that the principle of liberty proscribes.

Interestingly, the Economist article ends on a less than entirely enthusiastic note. In encouraging citizens to act in the right sorts of ways by default, it suggests, the new paternalism may end up discouraging them from developing the sorts of character traits and intellectual skills that we typically deem praiseworthy:

Reasoning, judgment, discrimination and self-control – all of these the soft paternalists see as burdens the state can and should lighten. Mill, by contrast, saw them as opportunities for citizens to exercise their humanity. Soft paternalism may improve people’s choices, rescuing them from their own worst tendencies, but it does nothing to improve those tendencies. The nephews of the avuncular state have no reason to grow up.

We might capture the force of the Economist’s skepticism here by considering a distinction drawn from Aristotle between the mere conformist to right behavior and the virtuous individual. The mere conformist acts in the right sorts of ways, but is not praiseworthy for so doing, because his actions are not properly motivated. The virtuous individual, by contrast, not only acts in the right sorts of ways but is further deserving of praise for her behavior because it is properly motivated. To illustrate with a rather low-key example, consider a new dog-owner who begins feeding his dog a type of food that is in fact optimally conducive to the dog’s health. The dog-owner didn’t choose the food for that reason, however. His motivation was one of convenience: he simply went to the nearest pet store and picked up a bag of whatever food happened to be the most well-stocked. Contrast this first dog-owner with a second, who ends up feeding her new canine companion the very same type of dog food, but does so because, having taken the time to look into the relative merits of different types of dog food, she decided that food was in fact the best for her dog. There’s an intuitively clear sense in which the second dog owner is praiseworthy in her dog-care behavior but the first is not: despite the fact that both dog owners end up doing the right thing vis-à-vis their dog’s nutritional needs, the second has the right motivation for doing it whereas the first does not. The first dog owner is a mere conformist when it comes to his dog’s nutritional care; the second is virtuous.

In effect, then, the point of the Economist’s skepticism is that even if the default policies and mechanisms of the new paternalism end up promoting conformity to right behavior, there is no reason to suppose that they will promote virtue, for those policies and mechanisms are quite consistent with citizens doing the right sorts of things without the right sorts of motivations. Worse, the new paternalism may end up demoting virtue by promoting conformity in the way it does: the more common it is that citizens do the right thing with the wrong motivation (or perhaps, depending on the nature of the default framework, with no particular motivation at all), the less common it will be that they do the right thing with the right motivation. And the less common it is that citizens do the right thing with the right motivation, the less likely it is that they will develop those stable traits of character and intellect that we call virtues (good reason, sound judgment, apt discrimination, and self-control, just to name a few). This is because, as Aristotle emphasized, the development of virtue requires the practice of virtue: in order to acquire the traits we call virtues, we must repeatedly do the right sorts of things with the right sorts of motivations. The danger of the new paternalism is that its efforts to promote conformity to right behavior by default threaten to undermine this practice condition on citizens’ development of virtue. Simply put, there is no such thing as virtue by default. And, arguably, a central mistake of the new paternalism is to assume that there is (or worse, that we really don’t need virtue at all).

My suspicion is that this very same mistake is made by advocates of new technologies of conformity, i.e. new technologies aimed at the automated short-circuiting of problematic user behavior. Consider, for example, the escalating movement toward implanted radio frequency identification microchips. Vendors are no doubt quite right to claim that users’ self-identification activities become more convenient and more reliable with chip in arm. But this merely speaks to conformity to right identity management behavior. Does it speak at all to identity management virtue? Might it not speak against?

Or consider recent battles about the use of digital rights management technologies, prominent participants of which include many of our own project’s gifted members. (See, for example, here and here.) The main objection to the use of these technologies is not of course that they fail effectively to protect against real copyright infringements. The objection is that they overprotect. And perhaps the overprotection, at least in its more ubiquitous forms, is bound to do something to users much more troubling than whatever its absence might do to copyright holders. As means of securing users’ copyright conformity, the DRM technologies may well incapacitate users’ copyright virtue. It seems pretty clear to me, at any rate, that they won’t secure that virtue by default.

| Comments (4) |





This is a SSHRC funded project:
Social Sciences and Humanities Research Council of Canada