understanding the importance and impact of anonymity and authentication in a networked society

Subjectright (S), a reciprocal to Copyright (C)

posted by: James Fung // 11:59 PM // March 28, 2006 // ID TRAIL MIX


Author(s): Steve Mann (stevemanncorp.com), James Fung, Kyle Amon Inc.

This article presents the argument that any debate about copyright is inherently unbalanced, because it preferentially considers the right of a source entity, without equal regard to the right of a destination entity. Accordingly, we propose the concept of Subjectright, i.e. recipient rights, as a reciprocal to copyright.

In contrast to the analogous mechanisms of Intellectual Property (Copyright, Trademark, Patent, etc.) that protect that which is offered through predominant volition of a “transmitient”, Subjectright also covers that which we give off without conscious thought or effort, as well as that which we are exposed to simply through our existence.

Subjectright includes our physical facsimile, as might be protected by the Humanistic Property License Agreement (HPLA), http://wearcam.org/clerks.htm, http://wearcam.org/hpla.htm, http://wearcam.org/hp_manifesto.htm as well as our spoken word, molted detritus and mental engrams.

In this paper, we expand upon the principle of Subjectright to include that which we receive through eminent volition, and, in particular, that which we receive as subject, thus have been SUBJECTed to, often without our consent and sometimes even against our will.

In order for information to propagate, five functions must exist: there must be a creator, a transmitter, a conduit, a receiver and a processor of information. All five may reside within the same entity or be distributed, singly or multiply, among various entities. If any one of these five functions is lacking, information propagation cannot occur.
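
The five-function model can be sketched as a simple check (an illustrative sketch only; the role names mirror those in the text, and the function is our own invention):

```python
from enum import Enum

class Role(Enum):
    """The five functions required for information to propagate."""
    CREATOR = "creator"
    TRANSMITTER = "transmitter"
    CONDUIT = "conduit"
    RECEIVER = "receiver"
    PROCESSOR = "processor"

def can_propagate(roles_present):
    """Information propagates only when all five functions are filled;
    they may be held by one entity or distributed among several."""
    return set(Role) <= set(roles_present)

# A lone diarist who writes, reads and reflects fills all five roles herself;
# remove any one role (say, the receiver) and propagation fails.
```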

Current Intellectual Property law and practice affords privileges only to the “transmitient” (the creator, transmitter and conduit functions of information propagation). While Copyright (C), for example, provides extensive powers to the creator, transmitter, and/or conduit of information (e.g. an author, publisher, broadcaster), Subjectright, recognizing that individuals are receivers (e.g. consumers) and processors (e.g. users) as well as creators (e.g. producers), transmitters and conduits of information, extends commensurate powers to them in those roles.

Since we hold it to be self-evident that all entities come into existence free, subject to none but their own mortality, and possess an inalienable right to maintain this freedom, we propose that a reciprocal set of privileges be recognized under Subjectright(s): the privileges that current Intellectual Property law affords to creators, transmitters and conduits of information, as instigators, should be extended equally to conduits, receivers and processors of information, as subjects. Furthermore, information instigators should be morally and legally bound by Subjectright(s), requiring them to respect the inherent, independent volition of all entities as free beings, and their right to maintain this freedom, so as to provide a means of redress when information instigators contaminate entities, as subjects, with unwanted information.

While Copyright is intended to protect the deliberate creation and transmission of information, Subjectright is intended to protect the primarily involuntary disclosure of information (e.g. physical facsimile, spoken word, molted detritus, etc.), as well as the often involuntary receipt of information (e.g. marketing and advertising, music, video, etc.) as mental engrams.

Note that in this sense of reciprocity, Copyleft (i.e. the GNU General Public License, GPL) is not really a reciprocal to Copyright, since both Copyright and Copyleft attempt to protect a transmitient, although in quite different ways. In particular, to the extent that fame and fortune are fungible, Copyright and Copyleft are two sides of the same coin, whether that coin be a coin of commerce or a coin of recognition and social status.

In view of the often involuntary nature of this exchange with regard to the recipient (e.g. the subject), it has been argued that Subjectright deserves stronger protection than Copyright. See, for example, First Monday, volume 5, number 7 (July 2000): http://firstmonday.org/issues/issue5_7/mann/index.html

A scholar’s right to cite sites

Legal development is sometimes said to be significantly more dilatory than technological development (notwithstanding our desire to state that “The trouble with law is that so many new laws are created so quickly that technology is having a hard time catching up.”). As society evolves, the original intent of old laws is often lost and they begin to be misapplied as a result. In some cases, after a significant amount of subtle, social evolution, the results can be egregious. It is therefore not very surprising that many Intellectual Property laws are now in conflict with the reasonable freedoms of scientific, scholarly, or academic pursuit.

Consider, for example, the Felten case, Felten v. RIAA: http://eff.org/sc/felten/

"Freedom of Speech should not be sacrificed in the recording industry's war to restrict the public from making copies of digital music.
When a team led by Princeton Professor Edward Felten accepted a public challenge by the Secure Digital Music Initiative (SDMI) to break new security systems, they did not give up their First Amendment right to teach others what they learned. Yet they have been threatened by SDMI and the Recording Industry Association of America (RIAA) to keep silent or face litigation under the Digital Millennium Copyright Act (DMCA). Professor Felten has a career teaching people about security, yet the recording industry has censored him for finding weaknesses in their security. USENIX regularly publishes scientific papers that describe the weaknesses of technologies, but they are chilled by RIAA litigation threats.
EFF is asking the court to affirm the right of these scientists to publicly present what they have learned and the right of USENIX to publish the scientists' paper in their conference proceedings. EFF has also asked the court to overturn the anti-distribution provisions of the DMCA as unconstitutional restraints on the freedom of expression.
"When scientists are intimidated from publishing their work, there is a clear First Amendment problem," said EFF's Legal Director Cindy Cohn. "We have long argued that unless properly limited, the anti-distribution provisions of the DMCA would interfere with science. Now they plainly have."
"Mathematics and code are not circumvention devices," explained Jim Tyre, an attorney on the legal team, "so why is the recording industry trying to prevent these researchers from publishing?"
USENIX Executive Director Ellie Young commented, "We cannot stand idly by as USENIX members are prevented from discussing and publishing the results of legitimate research.""

Another important case fighting the infringement of current Intellectual Property laws on the First Amendment is the 2600 case (http://www.2600.com/) and its appeal of a loss in a Motion Picture Association of America (MPAA) suit in August 2000: http://www.2600.com/news/display.shtml?id=211 The 2600 website says of this appeal,

"The case arises from 2600 Magazine's publication of and linking to a computer program called DeCSS in November, 1999 as part of its news coverage about DVD decryption software. DeCSS decrypts movies on DVDs that have been encrypted by a computer program called CSS. Decryption of DVD movies is necessary in order to make fair use of the movies as well as to play DVD movies on computers running the Linux operating system, among other uses. The Studios object to the publication of DeCSS because they claim that it can be used as part of a process to infringe copyrights on DVD movies.
Universal Studios, along with other members of the Motion Picture Association of America, filed suit against the magazine in January 2000 seeking an order that the magazine no longer publish the program. In the case, formally titled Universal v. Reimerdes, et al., the District Court granted a preliminary injunction against publication of DeCSS on January 20, 2000. By August 2000, after an abbreviated trial, the Court prohibited 2600 Magazine from even linking to DeCSS."

Scholarly discourse and academic research seek to spread new ideas, new discoveries, and in general new thoughts. The medium of thought conveyance is language, without which there can be no transmission of thought, and thoughts must remain privy to their creators alone. Language is thus the transmitter of thought, and its medium is the articulate symbol, manifested in speech or inscription and conveyed by an ever increasing number of media.

The articulate symbols of language were initially transmitted, and thought thus propagated, exclusively synchronously by phonetic utterance through the medium of air (i.e. speech). Asynchronous transmission, and thus mass propagation, of thought became possible with the advent of inscription, since the media of inscription were less mutable than the medium of air. It was then discovered that even speech could be inscribed on certain media and electrically reproduced, engendering an asynchronous manifestation of a type of thought transmission that was previously possible only synchronously. Ultimately, electromagnetic media were found to be extremely versatile, facilitating both synchronous and asynchronous transmission of all antecedent media necessary for the transmission and propagation of thought and, with the advent of the internet, with a fine degree of control.

The extreme versatility of electromagnetic media fostered their rapid proliferation as a multiply manifested thought transmission medium, second in prominence only to the medium of air in conveyance of the spoken word and pictorial symbol.

This prominence has resulted in a devolution toward the more mutable paradigm of television and away from the less mutable literary tradition of the book. This transformation, in concert with the expansion, misuse and abuse of intellectual property laws, threatens not only the right and ability of scholars to make enquiry and publish results, but also their ability to make the scholarly citations that others build upon in the tradition of science and scholarly thought.

For example, many web sites use CGI scripts that cause a single URL to reference multiple documents, making it impossible for scholars, critics, and scientists to cite and properly credit sources of reference and specific quotation. When uncitable material is left out, the work suffers and its aggregate social benefit is reduced; when uncitable material is included for the benefit of the work, and consequently of society, the author is exposed to intellectual property infringement liability. Moreover, complete web sites often vanish suddenly. For example, a scientific article referencing a January 22, 2001 article on Mediated Reality and EyeTap Technology, published on the about.com wearables site, http://wearables.about.com/library/weekly/aa012201a.htm, will no longer be found by scientists wishing to extend work based upon this article, since it is no longer maintained on the about.com site, presumably because it is no longer considered profitable (e.g. does not generate enough advertising revenue).

One possible solution is to backup or mirror sites when cited. For example, an article published on the eyetap.org site, making a scholarly reference to this article, could cite a mirror site: http://about.eyetap.org/library/weekly/aa012201a.shtml. Each article being written would then contain all of its references to at least one level of recursion. With increases in mass storage capability, it might even be reasonable to bundle articles to two levels, but certainly one level would be reasonable.
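
The mirror-when-citing practice could be sketched as follows (a hypothetical illustration in Python; the directory layout and URL-based naming scheme are our own assumptions, not part of any existing tool):

```python
import hashlib
import os
import urllib.request

def mirror_path(url, mirror_dir="mirrors"):
    """Map a cited URL to a stable local filename, so the same citation
    always resolves to the same mirror file."""
    name = hashlib.sha256(url.encode("utf-8")).hexdigest()[:16] + ".html"
    return os.path.join(mirror_dir, name)

def mirror_citation(url, mirror_dir="mirrors"):
    """Fetch a cited page once and keep a local copy, so the citation
    survives even if the original site later vanishes."""
    os.makedirs(mirror_dir, exist_ok=True)
    path = mirror_path(url, mirror_dir)
    if not os.path.exists(path):  # mirror once; later citations reuse the copy
        with urllib.request.urlopen(url) as response:
            data = response.read()
        with open(path, "wb") as f:
            f.write(data)
    return path
```

Running this once per cited URL at publication time would give each article its references to one level of recursion, as proposed above.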

While the creation of backup and mirror sites of scholarly citations helps ensure, in a technical sense, access to these works, current intellectual property law may criminalize those scholars who seek to preserve the works they reference. For instance, consider an academic journal that charges fees for access to its published articles. Such a journal is not responsible for ensuring long-term access to a published article. However, were a scholar to mirror the article to help ensure its availability, existing intellectual property law might expose the scholar to legal action for circumventing the access fees charged by the journal. Furthermore, recent laws favor commerce by revoking the legal concepts of fair use and scholarly backup.

Consider, for example, Bill C-32 - As passed by the House of Commons http://www.pch.gc.ca/wn-qdn/c32/c-32toce.html

With the advent of wearable computing (http://wearcam.org/ieeecomputer/r2025.htm, Computer, Vol. 30, No. 2, February 1997), it is now possible for a person to remember everything they take in. Thus we are at a pivotal era (or will soon witness such an era) when an individual can remember what they have been taught, and that individual can also teach others. When abilities we currently attribute to ‘digital’ media move within the realm of second-nature ‘personal abilities’ through such inventions, restrictions upon a person’s use of what they take in become akin to the notion of ‘thought police’.

In order to protect against such “Thought Police,” what is needed is a new kind of agreement that is binding on the Transmitter (not just upon the Receiver) of information.

It is suggested, therefore, that Subjects would apply this Subjectright philosophy to information received, and that persons not wishing to release information under Subjectright, refrain from exposing Subjects to said information.

This “right to teach” therefore becomes recursive under Subjectright. A person bound to Subjectright simply declares: “You have no right to teach me unless you grant rights for me to teach others,” or more formally: “By teaching me any new knowledge, you agree to be bound by the following Terms and Conditions: …”, one of which must permit re-teaching of what is taught.

Teaching is a form of brain damage, in the sense that once taught, we can never really forget. This brain damage is relatively permanent; e.g., the synaptic weights of the brain are permanently altered by advertising, by loud (sometimes unwanted) music that is inflicted upon us, and by a good joke one can never forget. Consider the song about trying to forget a song, which goes something like: “there’s a song going around and around, there’s a song going around in my head and I don’t want to hear it no more, no more…”

(This example underscores the difficulty in eradicating knowledge, and when that knowledge is unwanted, it causes a sort of pollution to one’s memory space…).

Thus there is a need for a concept such as subjectright that deals not only with the right to be free of unwanted violations of both privacy and solitude (such as being free of unwanted brain damage, unwanted insertion of material), but also to be free to provide scholarly discourse on what is learned.

Subjectright and Copyright

Even though under existing copyright laws works may be reproduced for scholarly dissemination or criticism, such protections are not afforded in many of the day-to-day situations people encounter, whether or not conscious efforts are made to obtain or disseminate media. For instance, company logos used in advertising conveniently deliver the “stamp of the transmitter”, which provides the subjected and inflicted with a clear target towards which to exercise their Subjectrights.

It has been suggested that a fee could be charged by an unwilling Subject (see http://firstmonday.org/issues/issue5_7/mann/index.html, which draws the cracker/hacker analogy of the brain as a computer deliberately compromised by malicious spammers, with real-world advertising as spam). The fee would be charged to the perpetrator of this pollution, or to those who benefit from the pollution (or both).

It would not be unreasonable to charge a fee for the reception of the unwanted information pollution, as well as for its storage and for any damage the pollution caused to the storage medium.

As if to add insult to brain injury, those bombarding us with unsolicited sounds, sights, and other forms of radiation pollution have the nerve to then try to charge us for remembering what we didn’t really want to learn. Such is the nature of Copyright that one can be unwittingly or unwillingly SUBJECTed to input, and then be prevented from legally reproducing this same detritus. Stallman’s article “Reevaluating Copyright: The Public Must Prevail” examines the origins of copyright, pointing out that at the advent of the printing press, copyright was instituted to encourage the creation of works by restricting the freedom of people to copy or redistribute those works. Such a system allowed the publisher to charge for access to the works. The article points out that, since individuals at the time could not distribute works without a printing press, which few could afford, the agreement mutually favoured the public, who gave up little, while allowing publishers to profit from their work.

Since that time, however, technology has made it possible for individuals to distribute and reproduce material. Furthermore, while in the days of the printing press, reproduction of works had some physical cost associated with it in the form of the cost of paper and ink and transportation, modern distribution techniques have no such costs associated with them other than the rather small cost of electricity and bandwidth.

The situation has thus placed many works under copyright into a freely reproducible and publicly sharable medium where many people can benefit from the works without loss of quality in reproduction of the original.

Attempting to license or charge individuals for access to publicly accessible or mass-marketed works with which said individuals are bombarded in an otherwise freely reproducible medium is THEFT from Subjects. Attempting to block the proliferation of reproducible, mass-marketed teachings to Subjects is THEFT against those Subjects.

Perpetrators of this THEFT are asked either to cease and desist from such bombardment of Subjects with such material, or at the very least to allow Subjects to reproduce that which they are bombarded with.

If such works require an individual to pay a licensing fee or to agree to unethical or unreasonable conditions (see for example, http://wearcam.org/seatsale/poster/poster_agree_terms.htm) this is THEFT in the sense that it violates the Terms and Conditions of the Subjectright Transmitient License Agreement.

In such cases the Subject (Recipient) is thus required (by Subjectrights) to charge the content provider a de-licensing fee, or “disservice fee.”

Teaching as brain damage

Teaching involves stimulating the brain in order to impart knowledge, teach a skill or condition a frame of mind. The brain, and consequently the individual, if affected by these stimuli, changes as a result. Teaching, a crucial component of human interaction and development, allows the exchange of ideas to take place. When neurological modification is undesired and nonconsensual, however, the individual’s state of mental development does not progress, grow or improve, but instead regresses. The degree of regress is proportionate to the amount of mental clutter absorbed, owing to the processing and filtering operations that must be performed in an attempt to reverse the undesired teaching effects (in returning to the state of mind before the change). The persistence of memory and the absorption of information and feelings into the subconscious mind (the nonconsensual nature of the teaching creates tension and conflict in the mind, bringing about negative emotions) ensure that neurological modification can never be entirely reversed. Is teaching brain damage? Perhaps we have a right to answer yes, if the teaching was unsolicited, and to argue that this residual mental detritus constitutes brain damage proportionate to the quantity and intensity of the nonconsensual teaching. Although the act of teaching is the same with or without consent, the consequences and resulting state of mind of the subject can differ substantially. A good analogy is sexual contact: there is a big difference between consensual and nonconsensual sexual contact. The physical activity is the same in both cases, but the result (a happy marriage versus criminal activity) can be quite different.

Crime scene documentation

If the Subject witnesses or documents evidence of attempts to stop the proliferation of Subjectright media, the Subject is compelled to take legal action against such criminal activity (e.g. the activity of causing brain damage with, or dependency upon, material that is not freely re-teachable).

Pirates are NOT Thieves (by whose law?)

When it is said that an act is legal or illegal, we must ask: whose law? Canadian law? American law? EXISTech Corporation’s law? Or Internic’s law?

Piracy did not originally pertain to software but rather described captains of vessels who were given permission by an issuing government to raid and plunder the ships of another government on the open seas. The issuing government, in return, guaranteed safe haven at its ports and allowed the pirates to profit from their plunder (through what was known as a letter of marque). The accumulation of private wealth by this method was called “privateering” [Petrie, Donald A., “The Prize Game: Lawful Looting on the High Seas in the Days of Fighting Sail”, Naval Institute Press, Annapolis, Maryland, 1999], and was not regarded as theft, since the pirates were acting legally within the domain of their own government.

Governments, at the time, made piracy and privateering not only legal, but also profitable. Thus pirates were the ones who were in fact government sponsored and supported. Privateering made trading and travel upon the otherwise open medium of the seas a dangerous proposition.

Today, “piracy” is commonly applied to the copying of software or music. However, considering the origins of piracy and privateering, we can re-examine the current trade in digital bits upon the otherwise open seas and determine who best fits the definition of a “pirate”.

There is also the notion of “fair use”. A well-established “fair use doctrine” exists in the scholarly and scientific community which must be continued, lest we enter a “new dark ages”. As is well known, the internet has its roots in the development of a method to share work between scholars. The development of copyable floppy disks, writeable CDs and widespread internet access allowed for ease of “trade” upon the high seas.

However, many service providers and copyright holders are trying to prevent such “fair use”. Attempts to conceal, obfuscate, and prevent proper copying, backup and the spread of Subjectright works could thus be labelled “privateering” (piracy). Many efforts to create pay systems and encryption to prevent copying within these media on behalf of the publishing companies would then be considered engaging in piracy. Certainly attempts to block or intercept the exchange of, or extract payment for, works exchanged between individuals (i.e. on the open seas) are also acts of “piracy”, supported through letters of marque issued by Copyright-holding publishers.

Putting works that we are Subjected to into a freely accessible, reproducible medium (to escape the plundering pirates), may then be regarded by some as a noble and publicly beneficial activity.

Some might even argue that one should extend this basic concept to include “ripping” CDs, scanning and copying books, de-encrypting DVDs, opening source-code, reverse-engineering software, and regarding these practices as a noble and publicly beneficial activity, to counteract the piracy caused by otherwise inflicting such material on Subjects, often without the consent of the Subjects.

Such piracy is often committed by those who seek to enforce copyright. For instance, in 1996 the American Society of Composers, Authors and Publishers (ASCAP) received much media attention when it applied a licensing fee to the American Camping Association (ACA) for the use of campfire songs. ASCAP was, and remains, in a position under existing copyright laws to levy fines and require licensing for summer camps to hold campfire sing-alongs that include songs such as “Puff the Magic Dragon” and “Happy Birthday”. Unfortunately, many people have been unwillingly exposed to such music in unlicensed situations and have developed, in a sense, a cultural addiction to these songs. A birthday would not be complete without a “Happy Birthday” song, and much would be lost at a silent campfire, or one where the singers sing in fear of litigation. Furthermore, no notice is given to listeners that these songs are subject to copyright and licensing, and thus listeners have no choice but to learn the music.

Such a situation could only have evolved under copyright laws where private performances are allowed and encouraged, thus teaching the dependency and placing it into the freely accessible and sharable medium of verbal tradition, while public performances must be licensed, so that profit may be extracted from a taught dependency. Within the Subjectright framework, however, by exposing individuals to songs, ASCAP must then allow Subjects to freely share and reproduce that which they have been involuntarily exposed to. ASCAP is still allowed to own copyrights to songs, but must find a more responsible way to market them, ensuring they are heard only by those who are truly willing to pay its fees. (See “When in Doubt, Do Without: Licensing Public Performances by Nonprofit Camping or Volunteer Service Organizations Under Federal Copyright Law”, Washington University Law Quarterly, Volume 75, Number 3, Fall 1997, http://ls.wustl.edu/WULQ/75-3/753-5.html; cite as 75 Wash. U. L.Q. 1277.)

“Privateering” might better describe acts committed by large corporations and their paid lawyers.


Subjectright attempts to provide a sense of balance to an otherwise one-sided (e.g. Transmitter-only) point of view. Subjectright looks at both the Transmitter and Receiver of information.

As we enter the cybernetic era (from software to softwear, to implantables), we will see a blurring of the distinction between thinking and computing.

SoftWARE embodies the idea of WARE:

Dictionary definition of “ware”

Main Entry: ³ware
Function: noun
Etymology: Middle English, from Old English waru; akin to Middle High German ware (ware) and probably to Sanskrit vasna (price) – more at VENAL
Date: before 12th century
1 a : manufactured articles, products of art or craft, or farm produce : GOODS – often used in combination b : an article of merchandise
2 : articles (as pottery or dishes) of fired clay
3 : an intangible item (as a service or ability) that is a marketable commodity

Main Entry: ve·nal
Pronunciation: \ˈvē-nəl\
Function: adjective
Etymology: Latin venalis, from venum (accusative) sale; akin to Greek ōneisthai (to buy), Sanskrit vasna (price)
Date: 1652
1 : capable of being bought or obtained for money or other valuable consideration : PURCHASABLE; especially : open to corrupt influence and especially bribery : MERCENARY (a venal legislator)
2 : originating in, characterized by, or associated with corrupt bribery (a venal arrangement with the police)
– ve·nal·i·ty \vi-ˈna-lə-tē\ noun
– ve·nal·ly \ˈvē-nəl-ē\ adverb
(C) 1997 by Merriam-Webster, Incorporated

Now, having taught those two new words, WARE and VENAL, we hopefully all have a right to use the English language without paying a word-usage fee.

We were required to attend public school, and we were exposed to these words against our will. We were forced to eat these words; now, at the very least, we should be free to use them.

Likewise, the teaching of software skills (e.g. teaching someone how to use a program) must carry with it the free use of that program, in order to avoid brain damage arising from learning something (very hard to unlearn) that the person will not have free access to. Accordingly, it is our duty as teachers to teach people only how to use programs that are freely available to them at a later point in time.

Teaching a dependency (e.g. getting persons addicted to a certain product they must then buy) is theft.


Copyright, left, and center tend to focus on protecting the interests of creators, producers, and distributors of information. We have presented a reciprocal concept, namely Subjectright, that considers the rights of those who are exposed to informatic content, whether by choice, by accident, or against their will.

We believe that, especially when people are subjected to informatic content against their will, they have every right to “rip, mix, burn” or do what they like with it. Moreover, we also believe that any discussion of copyright is inherently unbalanced if it does not also consider Subjectright.


Surveillance in Spheres of Mobility: Privacy, Technical Design and the Flow of Personal Information on the Transportation and Information Superhighways

posted by: Michael Zimmer // 11:59 PM // March 21, 2006 // ID TRAIL MIX


A recent Nassau County Supreme Court ruling held that data retrieved from a vehicle’s black box - a computer module that records a vehicle’s speed and telemetry data in the last five seconds before airbags deploy in a collision - could be admitted as evidence even though law enforcement officials did not have a search warrant. The court ruled that by driving the vehicle on a public highway, “the defendant knowingly exposed to the public the manner in which he operated his vehicle on public highways. ...What a person knowingly exposes to the public is not subject to Fourth Amendment protection.” A federal judge in upstate New York made a similar ruling, stating that police officers did not need a warrant to secretly attach a Global Positioning System device to a suspect’s vehicle. The judge said that a suspect traveling on a highway has no reasonable expectation of privacy.

In January 2006, the web search engine Google resisted requests from the U.S. Department of Justice to turn over a large amount of data, including records of all Google searches from any one-week period, partially on the grounds that it would violate their users’ privacy. This event generated widespread concern over the privacy of web search histories, and prompted many users to question the extent to which this component of their online intellectual activities might be shared with law enforcement agencies. (Indeed, it was later revealed that three other search engine providers – America Online, Yahoo and Microsoft – had previously complied with government subpoenas in the case, without public notice.) Similar concerns have arisen over commercial access to search engine histories as the vast databases of search histories held by these providers are increasingly matched up with individual searchers and demographic information from other search-related services in order to provide individually targeted search results and advertising.

The two technological systems described above - networked vehicle information systems and web search engines - represent important tools for the successful navigation of two vital spheres of mobility: physical space and cyberspace. However, they also share a reliance on the capturing and processing of personal information flows, and provide the platforms for surveillance of the person on the move. Networked vehicle information systems, which include GPS-based navigational tools, automated toll collection systems, automobile black boxes, and vehicle safety communication systems, rely on the transmission, collection and aggregation of a person’s location and vehicle telemetry data as she travels along the public highways. Similarly, web search engines, striving to provide personalized results and deliver contextually relevant advertising, depend on the monitoring and aggregation of a user’s online activities as she surfs the World Wide Web. Taken together, these two technical systems are compelling examples of the increased “everyday surveillance” (Staples, 2000) of individuals within their various spheres of mobility: networked vehicle systems constitute large-scale infrastructures enabling the widespread surveillance of drivers traveling on the public highways, while web search engines are part of a larger online information infrastructure which facilitates the monitoring and aggregation of one’s intellectual activities on the information superhighway.

The political and value implications of these infrastructures on individuals as they navigate through these spaces cannot be overstated, yet they generally remain unexplored. These implications include shifts in the contextual integrity of the norms of personal information flows, challenges to the expectation of privacy in public spaces, concerns over whether one’s online intellectual activities are shared with third parties, and the potential for the “panoptic sorting” (Gandy, 1993) of citizens into disciplinary categories. Taken together, these infrastructures of everyday surveillance increasingly threaten the privacy of one’s personal information, and contribute to a rapidly emerging “soft cage” (Parenti, 2003) of everyday surveillance, a growing environment of discipline and social control.

In his book Technopoly, Neil Postman warned that we tend to be “surrounded by the wondrous effects of machines and are encouraged to ignore the ideas embedded in them. Which means we become blind to the ideological meaning of our technologies” (1992, p. 94). As the ubiquity of networked vehicle systems and web search engines intensifies, it becomes increasingly difficult for users to recognize or question their political and value implications, and more tempting to simply take the design of such tools “at interface value” (Turkle, 1995, p. 103). It becomes vital, then, to heed Postman’s warning, remove the blinders, prevent the political and value implications of networked vehicle systems and web search engines from disappearing from public awareness, and to critically engage with the design communities to mitigate these unintended consequences.

To accomplish this, three things must happen:

1. Broaden conceptual understanding of privacy: Efforts must be made to broaden the conceptual understanding of privacy to fully appreciate how the introduction of these new technologies disrupts the norms of personal information flows in the contexts of their particular use. A starting point is embracing more contextually based theories of privacy, such as Helen Nissenbaum’s formulation of privacy as “contextual integrity.” Contextual integrity is a benchmark theory of privacy under which the privacy of one’s personal information is maintained only if certain norms of information flow remain undisturbed. Rather than aspiring to universal prescriptions for privacy, contextual integrity works from within the normative bounds of a particular context. If the introduction of a new technology into a particular context violates either the norms of information appropriateness or the norms of information distribution, the contextual integrity of the flow of one’s personal information has been violated.

The theory of privacy as contextual integrity is particularly well suited, then, to consider how the introduction of networked vehicle information systems and web search information infrastructures might impact the governing norms of the flow of personal information in the contexts of highway travel and online intellectual activities. (For a starting point in such an analysis, see my paper presented at the “Contours of Privacy” conference.)

2. Engage in value-sensitive design: The notion that the design and use of technical systems have certain political and value consequences suggests the possibility of achieving alternative technical designs that might help to resist or otherwise mitigate such threats prior to their final design and deployment. It becomes vital, then, to engage directly with these technical design communities to raise awareness of the political and value implications of their design decisions and to make the value of privacy a constitutive part of the technological design process.

The multi-disciplinary perspective known as value-sensitive design is well suited to guide this endeavor. Value-sensitive design has emerged to identify, understand, anticipate and address the ethical and value-laden concerns that arise from the rapid design and deployment of media and information technologies. Recognizing that technologies contain ethical and value biases, the primary goal of value-sensitive design is to shape the design of technology so that human values are taken into account during the conception and design process, not merely retrofitted after completion.

3. Foster critical technical practices: Recognizing that the choices designers make in shaping these systems are guided by their conceptual understandings of the values at play, work must be done to ensure technical designers possess the necessary conceptual tools to foster critical reflection on the hidden assumptions, ideologies and values underlying their design decisions. This is best accomplished by fostering “critical technical practices” within the design community. Formulated by Phil Agre, critical technical practice works to increase critical awareness and spark critical reflection among technical designers and engineers of the hidden assumptions, ideologies and values underlying their design processes and decisions. An example of critical technical practice in action is the Culturally Embedded Computing Group at Cornell University, which seeks to elucidate the ways in which technologies reflect and perpetuate cultural assumptions, as well as design new computing devices that reflect alternative possibilities. Their work provides a model for integrating critical technical practices into the technical design communities of networked vehicle information systems and web search information infrastructures.

At a moment when concern over government surveillance of its citizens is high, the prospect of a nationwide networked vehicle system infrastructure capable of monitoring vehicle location and activity gives pause. Similarly, general concerns over the privacy of web search histories are further aggravated by the possibility of the information being shared with government authorities. Broadening the conceptualizations of privacy to include approaches such as contextual integrity can help raise awareness of the political and value implications of these emerging information technologies. Further, embracing the pragmatic tools of “value-sensitive design” and “critical technical practice” will ensure that attention to political and ethical values becomes integral to the conception, design, and development of technologies, not merely considered after completion and deployment.

These prescriptions mark the first steps towards avoiding the ideological blindness Postman feared, engendering critical exploration of both the privacy threats of these emerging technologies and their potential to trigger widespread surveillance and social control within two vital spheres of mobility.

Michael Zimmer is a PhD student in the Department of Culture and Communication at New York University, and maintains a blog at www.michaelzimmer.org.


Escaping your history

posted by:James Muir // 11:59 PM // March 14, 2006 // ID TRAIL MIX


Imagine that every search phrase you have ever typed into Google from your home computer was recorded and stored in a user-profile on one of Google's servers. What would this profile say about you? No doubt you would consider some of this information private. It might alarm you when you realize that this information is now out of your control. Perhaps you trust Google not to divulge it, but there may be legal circumstances which would force them to do so.

You don't have to imagine this scenario -- Google does in fact keep a record of your search history, and they are currently under legal pressure to release a subset of this data to the U.S. government. Some surprising facts about Google's user-profiling are discussed in a recent CNET article (D. McCullagh, 3 Feb 2006). One of the questions that Google's data collection practices raise is the following: Is it possible for a user to use a search engine anonymously from their home computer? For instance, is it possible to do a Google search for "picking magic mushrooms" without having this tied to your identity and possibly used against you at a later date? There is a very brief discussion of this question in the CNET article. Two specific recommendations made are to 1) regularly delete any Cookies your browser collects, and 2) proxy your web browsing through an anonymizing service like Tor. In this note, we explain just what these two instructions mean and argue that they alone may not suffice to anonymize your Google searches.

We begin by recalling some basic facts about the Internet. Every computer connected to the Internet is identified by a unique number called its IP address. An IP (version 4) address is a sequence of four numbers in the range 0...255 separated by dots. Your home computer's IP address is obtained from your ISP, and they keep track of which IP addresses are assigned to which customers. If your ISP is subpoenaed, then they can be forced to match a customer's identity to a given IP address. When you surf the web normally, your IP address is submitted to the web sites you visit so that their content can be routed back to your computer and displayed in your browser. You can check what IP address you are advertising by visiting a web site that echoes it back to you.
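The dotted-quad format described above can be checked mechanically. Here is a minimal sketch in Python (the function name and example addresses are ours, chosen from the reserved documentation range, not taken from the article):

```python
def is_valid_ipv4(address: str) -> bool:
    """Return True if `address` is four dot-separated numbers, each 0-255."""
    parts = address.split(".")
    if len(parts) != 4:
        return False
    for part in parts:
        if not part.isdigit():   # rejects empty strings and non-numeric text
            return False
        if not 0 <= int(part) <= 255:
            return False
    return True

print(is_valid_ipv4("192.0.2.1"))   # True: four fields, all in range
print(is_valid_ipv4("256.1.1.1"))   # False: 256 is out of range
```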

Each time a user carries out a Google search, Google can record their IP address and their search phrase (as well as the current date and time). Thus, they can form a history of the search phrases which originate from a particular IP address. However, these IP address search histories are not necessarily the same as user search histories. There are two main reasons for this: 1) ISPs sometimes change the IP addresses of their customers; 2) the customers of some ISPs, like AOL, access the web through caching HTTP proxies which effectively results in many users advertising the same IP address to a web site. These issues can be overcome by using Cookies. A Cookie is a small data-file that a web site generates and stores in your browser. When you first visit Google, they set a Cookie in your browser which serves as a unique user-id. This Cookie can be subsequently read by Google each time you do a search through their web site and so it can be used to track your behaviour, even if your ISP happens to change your IP address.
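The tracking mechanism just described can be illustrated with a toy server-side log. This is our own sketch of the idea, not Google's actual implementation: each search is recorded against the cookie's user-id rather than the IP address, so the history survives a change of IP.

```python
import time
import uuid

# toy in-memory log: cookie user-id -> list of (timestamp, ip, query)
search_log = {}

def new_cookie_id():
    """A unique id issued on a user's first visit and stored in their browser."""
    return uuid.uuid4().hex

def record_search(cookie_id, ip, query):
    """Append one search event to the profile keyed by the cookie id."""
    search_log.setdefault(cookie_id, []).append((time.time(), ip, query))

# the same user searches from two different IP addresses
# (documentation-range addresses, invented for illustration):
uid = new_cookie_id()
record_search(uid, "203.0.113.5", "picking magic mushrooms")
record_search(uid, "203.0.113.99", "tor download")

# both searches land in one profile despite the IP change
print(len(search_log[uid]))  # 2
```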

Deleting Cookies regularly removes data that Google uses to track you and your web browser. Note that the Firefox browser can be set to delete its Cookies each time you close it. This explains the first recommendation. You may be wondering if there is a way to carry out a Google search while keeping your IP address hidden. This is where Tor fits in.

Tor is a network of 250+ Internet computers in various countries which run freely available software designed to facilitate low-latency anonymous communication. Tor has several interesting features, but what is most relevant to our discussion is that it can allow anyone to surf the web without revealing their IP address. To start using Tor, you simply download a client program and then configure your browser to send its traffic to the client. Once the client is activated, it negotiates an encrypted pathway through the Tor network which will carry your browser's traffic. The pathway consists of three Tor servers, and these are changed every minute or so. When your web traffic travels through the Tor network en route to Google, it appears to Google as though it originated from the last server in the pathway. In particular, the IP address recorded by Google will be the IP address of the last server in the pathway. So, if you use Tor, your search phrases will likely be bound to an IP address other than your own.
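The three-hop pathway can be pictured with a toy model. This is only a sketch of the concept; real Tor circuit selection weighs relay bandwidth, flags and other constraints, and the relay names and addresses below are invented (documentation-range addresses, not real Tor servers).

```python
import random

# hypothetical relay pool: name -> IP address
relays = {
    "relay-a": "198.51.100.10",
    "relay-b": "198.51.100.20",
    "relay-c": "198.51.100.30",
    "relay-d": "198.51.100.40",
}

def build_circuit(pool):
    """Pick three distinct relays to serve as entry, middle and exit."""
    return random.sample(sorted(pool), 3)

circuit = build_circuit(relays)
exit_relay = circuit[-1]

# the destination web site sees only the exit relay's address,
# never the user's own IP
print("address seen by destination:", relays[exit_relay])
```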

However, the story doesn't end there. Even if you disable Cookies and surf through Tor, it may still be possible to maintain a profile of your web searches. There are demonstration sites that will show you examples of the information that can be extracted about your browser and computer even when you have followed the two recommendations. For example, it is possible to learn what browser you are using, its version, what operating system you run, your preferred language, what timezone you are in, what plugins you have installed, and what the current settings of your display are. Google could compute a digest of this information and record it along with any search phrase you have submitted. It's not clear if this information would suffice to uniquely identify a user, but users of less common browsers and operating systems are more at risk of this.
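The digest idea mentioned above can be sketched as follows. The attribute names and values are invented examples of the kind of information a script can read from a browser; the hashing scheme is our illustration, not a description of what any search engine actually does.

```python
import hashlib
import json

# attributes of the kind JavaScript can report about a browser
# (values here are invented for illustration)
browser_info = {
    "user_agent": "Mozilla/5.0 (X11; Linux i686) Firefox/1.5",
    "language": "en-CA",
    "timezone_offset_minutes": -300,
    "screen": "1280x1024x24",
    "plugins": ["Shockwave Flash", "Java(TM) Plug-in"],
}

def fingerprint(info):
    """Hash a canonical serialization, so identical configurations
    always produce the same digest that can be stored with each search."""
    canonical = json.dumps(info, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

fp = fingerprint(browser_info)
print(fp[:16], "...")  # same browser setup -> same digest every visit
```

A searcher with a common configuration shares this digest with many other users, while an unusual browser/OS combination narrows the digest to few people, which is why less common setups carry more risk.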

Much of this additional information about your browser and computer is accessible only through JavaScript and Java. If you do not want this information to be collected, then you can disable these components in your browser. Unfortunately, many web sites will fail to work with JavaScript disabled, but if you want strong anonymity, this might be a reasonable trade-off.

James Muir is a Postdoctoral Fellow in the School of Computer Science at Carleton University.

Privacy Issues and Canada’s Faith Communities

posted by:Travis Dumsday // 11:41 PM // March 07, 2006 // ID TRAIL MIX


Broadly speaking, public policy issues have an unfortunate tendency to become ghettoised, with particular problems being championed by certain segments of society while being mostly ignored by other interest groups and society at large. Thus certain segments become associated in both public and official consciousness, rightly or wrongly, with certain issues. The aboriginal community for instance tends to be associated mostly with issues directly relevant to that community, such as the economic development of reservations, preservation of native languages, etc. I call this ghettoisation unfortunate partly because it can lead to an accompanying tendency on the part of government and media to ignore the community’s involvement and stake in other issues. In the aboriginal example this might include the environmental advocacy undertaken by some native groups. Worse, it can lead to insular thinking in the group itself; when government and media link a community with a particular, narrow set of interests and issues, a subtle yet compelling psychological pull can be created in which the community unconsciously conforms itself to that image and ignores problems which may be of vital interest to it.

With that in mind, if someone asked you to write down a list of the issues of interest to Canadian religious communities, what would be the first item to pop into your mind? I realize that ‘Canadian religious communities’ is an exceedingly broad designating phrase, but humour me for a moment. What comes up first? Gay marriage? Abortion? Government funding of religious schools? I suspect that one of these three will be uppermost in the minds of many readers. Poverty relief and advocacy, peace initiatives, interfaith dialogue, these will tend to take a mental backseat, despite the tremendous time and resources which Canadian religious communities devote to these issues. How about privacy? Would that enter anywhere on the radar screen? I suspect not. I further suspect that this would be the case for most of those who would consider themselves members of these communities. Privacy is not seen as a ‘religious’ issue. But faith groups in this country are going to have to address some difficult questions relating to privacy in the near future, if they are not embroiled in them already.

In this context I think especially of the position of Canada’s Islamic community. If CSIS were to send undercover agents to attend services at mosques and monitor sermons given by Canadian Imams, in the hopes of spotting nascent terrorist sympathies or recruiting tactics, would this be a privacy violation? Leave aside for a moment the question of whether, if a violation, it would be justified. Is this even a privacy issue? It may be. In the philosophical literature on privacy and privacy rights the question has been raised as to whether groups, and not merely individuals, can possess a right to privacy. I think it has been convincingly argued that they can. For example, if a member of the Freemasons or some other secret society reveals to a reporter the group’s inner workings and rituals, it is plausible to think that the privacy of the group has been violated. Or consider some sensitive corporate meetings, or for that matter the meetings of the Canadian cabinet, whose minutes are kept sealed for decades. For a member of these groups to reveal what went on in such meetings is to violate the group’s privacy. And this is not merely a question of a group member violating the group’s trust. If an intrepid reporter were to plant a bug in the cabinet meeting room, he would be violating its privacy.

But can government surveillance of religious gatherings be considered in this light? After all, aren’t religious services public? Then presumably for the government to monitor their proceedings could not be a violation of privacy. I think this is a plausible argument, but am not entirely happy with it. For although the services may be open to anyone, it can be argued that there is an implicit understanding present whereby those in attendance at a worship service are there for friendly or at least neutral reasons (curiosity, for instance). If someone attends the service for potentially hostile reasons, this understanding is breached. Yet how does this relate to privacy?

This is where some further conceptual analysis comes in handy. Philosophers have been arguing for several decades about the nature of privacy. I believe that the proper view of privacy is essentially informational. Person X has privacy with respect to fact P if and only if P is not known and is in some way sensitive information, i.e., information that if revealed might cause some harm to X. Now a loss of privacy occurs whenever such information is revealed, irrespective of to whom it is revealed. If someone reveals a fact to her priest in the confessional, she loses privacy with respect to that information and in regard to that person, the priest. But there has of course been no privacy violation. The information has been willingly relinquished. There are, however, times when information is willingly relinquished and yet privacy is still violated. If the woman confesses to someone she believes to be a priest, but who in fact is an imposter who gets a kick out of hearing people’s confessions, a gross violation of privacy has obviously taken place. Or consider a spy at a Freemason meeting, who is there only to gather information to release to the media. He too is violating privacy, in this case the privacy of the group.

But can this analysis be extended to public religious gatherings? Two questions arise here. One is whether any privacy violation can take place in the context of a public gathering. If this is possible, then it is possible of a public religious gathering. The other is whether, and perhaps to what extent, some religious gatherings, in this case services at a mosque, are truly public.

It is quite clear that violations of privacy can occur in a public setting. If Mrs. Jones stands up at a town hall meeting and tells of how her neighbour’s husband is having an affair, it is plausible to think that some sort of privacy violation has just occurred. So if private information is revealed in public, the fact that it is in a public setting does nothing to mitigate the violation; quite the opposite, in fact. But what about information which is revealed in a public setting which does not involve the violation of any individual’s privacy? Can a person violate privacy by virtue of his attendance at a public gathering? This may depend on what counts as ‘public.’ Here is another tricky conceptual problem. I think that sufficient conditions for a gathering to be public would be if it were held on public property and advertised as open to anyone with no explicit conditions of entry. A public town hall meeting, for example, or an organized and free gathering in a public park. But these are obviously not necessary conditions; a public gathering can be held on private property, for instance. A necessary condition is more difficult to come up with. But I think a plausible candidate would be that a gathering is public if it is open to anyone; more detailed specification is no doubt required here, but what I mean is something like a gathering in which no one is excluded on some specific grounds, whether explicit or implicit, such as being a woman, or of a certain race or political affiliation. Any meeting in which such exclusions are made cannot properly be termed ‘public.’

So is a worship service at a mosque a public event? Well, certainly no one is excluded on grounds of race or gender. But it is not unreasonable to think that someone would be excluded if it were known that he was there on behalf of CSIS to collect information for the government. You could say then that the gathering is restricted on grounds of employment, or perhaps motivation of the attendee. Thus the service is not a public gathering in the same sense as the town hall meeting would be, in which a CSIS agent presumably could not be excluded even if his presence were known, indeed even if he were there on behalf of CSIS, however uncomfortable it might make the other members of the public and the municipal officials.

So a mosque service is not a public event, or at least not fully public, if indeed it makes sense to speak of degrees of publicity. This being the case, someone might violate the privacy of those in attendance simply by virtue of his attendance, if it is understood that he is excluded from the event on some ground. If I sneak into a Freemason meeting and pretend to be a Mason, I am violating that group’s privacy. If I in bad faith and under misleading pretenses attend services at a mosque, I think it is reasonable to see this as a similar violation. This is the case even though the event is nowhere near as private as the Freemason gathering; it is still private to some extent, by virtue of the implicit exclusion of certain peoples, namely those of bad faith or inappropriate motives. This is an exclusion which would not apply in the context of more or fully public gatherings, such as the town hall meeting. Thus the surveillance of mosques by undercover CSIS agents can plausibly be thought of as a privacy issue.

Of course, so far as I know there is no evidence to indicate that such surveillance is going on. And again, it is quite possible that such surveillance would be justified in some cases, with interests of public safety overriding privacy concerns. But here we have an instance of a privacy issue which should no doubt be of concern to Canada’s religious communities. I think this illustrates that the stakeholders in privacy policy are much wider than one might think from a casual scan of the civil liberties groups one typically associates with the issue.

Travis Dumsday is a graduate student in philosophy at the University of Waterloo.



This is a SSHRC funded project:
Social Sciences and Humanities Research Council of Canada