Haste Makes Waste: Attending to the Possible Consequences of Genetic Testing
posted by:Kenna Miskelly // 11:59 PM // July 31, 2007 // ID TRAIL MIX
Technological advances are making genetic testing and screening easier and more accessible. My concern is that this ease and accessibility mask the fact that these are not straightforward decisions to be made quickly. Such decisions may include whether or not to terminate a pregnancy if your fetus has Down syndrome, whether to have prophylactic surgery if you test positive for breast cancer genes, whether to be tested for a late-onset disease that may have no treatment or cure, and whether or not to submit to genome testing without knowing what the future will hold in terms of discrimination and possible privacy threats. The reasons for genetic testing have real-world consequences that are often not spelled out before the testing takes place.
A recent article in the Globe and Mail discusses new recommendations that pregnant women over the age of 35, but under the age of 40, should no longer undergo routine amniocentesis. It has been standard practice that amniocentesis be available to women over the age of 35, because the probability of conceiving a child with a disability or genetic condition increases with maternal age. New non-invasive screening tests, such as maternal blood tests and the nuchal translucency test (a detailed ultrasound taken at 11-13 weeks’ gestation that measures the fluid level behind the fetus’s neck), can now indicate whether further testing is warranted or whether the risk of abnormalities is low. This development is very positive, as amniocentesis is invasive and carries with it a risk of miscarriage.
However, the article states, “40 is the new 35 when it comes to being labelled a high-risk pregnancy.” The implication, repeated several times throughout the article, is that pregnant women over 35 no longer face the same risks associated with that maternal age; it seems that somehow their risks have decreased, which is not true.
The article also quotes a physician as stating,
“Even if you’re over 40, your risk may be that of a 20-year-old. Screening is making you different from your age.” 
Obviously the screening tests are a positive medical advance. Yet, coupled with the misleading implication that risks have somehow decreased, we see here what is often the case: the language of genetic discoveries and genetic technologies seems to support a “wait and see” attitude – find out what the testing tells you, then decide what to do. It sometimes looks a bit like a lottery.
Francis Collins, director of the National Human Genome Research Institute, has suggested that genetic technologies are much like new drugs – we must see how people in general react to them once they are introduced. And many authors advocate addressing concerns as they appear, rather than limiting technological advances with unnecessary policies. The “wait and see” attitude of the researchers developing the technology should not be confused with the “wait and see” attitude of the doctor performing the testing, but the two seem to lie on a continuum.
Sonia Mateu Suter notes from her research as a genetic counsellor for prospective parents, “little emphasis is placed on the many emotional and psychological ramifications of undergoing such testing, leaving patients unprepared for certain choices and emotional reactions.”  She feels that this has “impoverished the informed consent process”.  Likewise, a “wait and see” attitude ultimately diminishes autonomy because we are not able to make choices we might have made if we had a comprehensive understanding of all the options and consequences.
Much is unclear as new technologies emerge. What we do know is that the vast majority of those individuals at risk for Huntington’s disease choose not to be tested for the HD gene. A child whose parent has had Huntington’s has a 50% chance of inheriting the gene and developing the disease. There are no cures or preventative measures. Yet at-risk individuals also have a 50% chance of not inheriting the gene and never developing Huntington’s disease. The choice not to be tested struck me as surprising until I read the stories of those at risk and those living with the knowledge that they are carriers. Some of the stories such as Katharine Moser’s (http://www.hdfoundation.org/news/NYTimes3-18-07.php) really put in perspective what it must be like to live with the end of your life before you. She had prepared herself with the requisite six months of counselling when she decided to be tested at age 23, yet admitted she never really believed the test result would be positive. Is it fair for certain people to live this way when no one’s future is certain?
Many would say that genetic testing for other conditions, such as Alzheimer’s disease or multiple sclerosis, which may become reality in the near future, is not on par with testing for the HD gene. Such testing will likely be expressed in probabilities rather than certainties, as with current testing for the breast cancer genes – a positive test translates into an increased risk of developing breast, uterine, and ovarian cancer, but does not mean a woman will certainly get any of these. Nor does it mean that a woman without these genes is immune to these illnesses. Most likely this difference is part of the reason that intensive counselling is often not part of the testing process, though many acknowledge that the system would be improved if it were. Yet I wonder what the idea of an “increased risk” will mean to people and their families, especially for diseases with no known cure. What will the consequences be for them? Will it be easily accepted as a “probability” – something to think about or watch out for – or will they feel that the die is cast and they cannot escape their fate? The outcome will likely depend on each situation and individual, which underlines the inappropriateness of the “wait and see” attitude.
As testing advances, home testing, where an individual sends a sample away and waits for results, may become more commonplace. Such scenarios have serious implications for privacy and ethics. I read a story of a man who did a home paternity test behind his wife’s back (this is actually encouraged on one paternity website as a way to gain initial information before proceeding with overt testing). The man confronted his wife with the test results, which showed he was not the biological father of their children. She flew into a rage and told him he would never see the kids again. While he still has rights as a father, even if not a biological one, he now has to battle for them in court. He confessed that he had never fully thought through the consequences of a negative result and deeply regretted doing the test. He was unsure what relationship to have with his kids now, how to think of them, whether he was really their “daddy”. My point here is not to begin a commentary on paternal rights – I mean merely to highlight that this man felt he had acted without fully considering how the test results would affect him.
As genetic testing becomes easier and more commonplace, emotional, psychological and privacy concerns may be so easily overlooked that they come to be seen as unimportant. Yet to promote autonomous choices we must attend to genetic decision-making in context and encourage individuals to think about what test results will mean to them, their families, and their future. This is not to decry genetic testing; it is to open a dialogue about choices before decisions need to be made. Let’s not “wait and see” what the future holds if diminished autonomy becomes an accepted part of our medical system.
 Pearce, Tralee. 2007, July 10. Amniocentesis: New guidelines. 40 is the new 35 for test. Globe and Mail, L1 and L3; p.L1.
 Ibid, at p.L3.
 Mateu Suter, Sonia. 2002. The routinization of prenatal testing. American Journal of Law & Medicine, 28: 233-270; p.234.
Collision Course? Privacy, Genetic Technologies and Fast-tracking Electronic Medical Information
posted by:Marsha Hanen // 11:59 PM // July 24, 2007 // ID TRAIL MIX
Andre Picard, writing in the Globe and Mail on June 14, made a poignant plea for speeding up the move to electronic health records for all Canadians. He says:
It’s not enough to create health records; it must be done right. That means including information on visits to physicians, hospital stays, prescription drugs, laboratory and radiology tests, immunization, allergies, family history and so on. It also means integrating all these records and making them compatible in every jurisdiction…
Picard points out that medical records should be accessible to all health professionals we consult, from the pharmacist close to home through the emergency room at the other end of the country. And then he adds, in parentheses: “With the requisite protection of privacy, of course.”
And there’s the rub. Just what is the requisite protection of privacy, and how should it be implemented? For example, in British Columbia a few years ago there was a huge and quite public to-do about the contracting-out of the Medical Services Plan databases to a U.S. company, and the need to protect the information from unwarranted access through the Patriot Act. The B.C. Privacy Commissioner, David Loukidelis, played a very visible role in helping to achieve a reasonable understanding of what would be appropriate in this case. But it turned out that, a year after the information collection and management had been contracted out to EDS Advanced Solutions, an employee of the company spent several months improperly and repeatedly surfing the files of sixty-four individuals, including the file of a woman whose ex-husband had claimed he could find out where she lived, despite her efforts to keep her location secret. The source of that information, apparently, was to be the employee who had been doing the surfing. As it happened, none of this had anything to do with access through the Patriot Act.
EDS performed an audit that revealed “some unexplained accesses”, and then claimed there had been no privacy violations because they found no evidence that the information had actually been disclosed to anyone! Furthermore, it took nine months before the woman who had complained received notification about what had actually happened and what lay behind her ex-husband’s claims that he could find her. Various safeguards were subsequently put in place, but one can’t help wondering how much “snooping” of electronic health records might take place without being detected, especially considering the access that vast numbers of employees of pharmacies, hospitals and physicians’ offices would have to such information.
Meanwhile, British Columbia has embarked on a major effort to digitize all medical records, including providing electronic medical records technology to groups of doctors’ offices, much along the lines advocated by Picard. Indeed, B.C. plans to be a Canadian leader in the move from paper records to electronic ones. It is clear that such a project could improve medical care enormously by integrating records so that each physician, nurse or pharmacist with whom we interact has access to an overview of our medical histories and records. Advantages may include the fact that tests need not be repeated endlessly, that many errors can be avoided, and that some diagnoses can be made without requiring patients to travel long distances. All good. But since many people are quite concerned about preserving their medical privacy, a worry remains about how we are to ensure the protection of that privacy within the system, along with the related autonomy and dignity of patients.
So the first questions are about who needs to have access to all this information, and how we can ensure that access is not granted beyond those groups, except under carefully monitored conditions. Secondly, we need to devise ways to ensure that the information is never used to the detriment of patients, that patients are fully informed at all stages, and that they are involved to whatever degree they wish to be in all decisions about their testing, their results and their treatment. All of these are standard issues in designing good medical care plans – it is just that some of them are more likely to lead to problems when medical records are computerized and networked.
The situation becomes more complicated when we add the more recent developments in genetic and genomic technologies, which will, if they haven’t already, expand not just the amount of information available about individuals, but also the kind of information that is gathered. Individuals who agree to the collection of information are usually assured that their privacy will be protected by secure coding of the information and other means. But to what extent are these measures monitored, and how easy or difficult is it for the codes to be cracked? Even if the coding is secure now, it may well be easy to decipher with new information technology methods.
To be sure, not everyone worries about the privacy implications of these technologies. There has been much discussion surrounding the sequencing of individual genomes, two of the most recent highly publicized examples being J. Craig Venter, former president of the Celera Corporation, and James D. Watson, one of the scientists who formulated the double helix model of DNA. Amidst the excitement about these developments, the likelihood increases that certain genetic information pertaining to individuals will become part of their medical records and that, in due course, so will their entire genomes. No doubt for some purposes this is all to the good, in the sense that more information about an individual may well make it possible to provide better care.
But what if making this information available leads to refusal of treatment for people with certain “genetic diseases” or various other forms of discrimination such as denial of insurance or employment? Or what if the individual simply wishes to keep certain matters about his genetic make-up private? Or what if he does not wish to know that he is at risk for a disease such as Alzheimer’s, which manifests itself later in life? Or what if someone’s records are retained and used at a later time in a non-secure environment? We must also remember that genetic information about a given individual tells us quite a bit about his or her family, which may expose many people to having their genetic information widely known, whether or not they have consented to such exposure.
In discussions about information technology and medicine, one commonly heard complaint is that privacy advocates are holding up progress by making it difficult to implement the obviously necessary computerization and integration of medical records. On the other side, one might argue that the focus on technology in this area carries with it the danger that privacy considerations will be relegated to the sidelines and may even come to be seen as insignificant. Unfortunately, a consequence of failing to respect privacy is that the dignity and autonomy of individuals are likely to be impaired. In that case, we will all pay the price.
"CITIZEN, PICK UP YOUR LITTER": CCTV evolves in Britain 
posted by:Meghan Murtha // 11:59 PM // July 17, 2007 // ID TRAIL MIX
Planning to litter, hang around looking intimidating, or just generally be a public nuisance in England? Careful where you do it.
This past spring, Britain, already host to more video surveillance cameras than any other country in the world, rolled out a new crime prevention measure: ‘Talking CCTV’ (closed-circuit television). Government officials describe the new development as “enhanced CCTV cameras with speaker systems [that] allow workers in control rooms to speak directly to people on the street.” The ‘Talking CCTV’ initiative is just one component of the British Home Office’s Respect Action Plan, a domestic program designed to tackle anti-social behaviour and its causes.
What this means in practice is that when staff, operating from an unseen central control room, observe an individual engaged in anti-social behaviour, they can publicly challenge the person using the speakers. At the moment the one-sided conversation is relatively unscripted, although workers are expected to be polite. The first time a member of the public is spoken to about her behaviour, she hears a polite request. If she complies, she is thanked. If not, she can expect to hear a command. If she still fails to correct her behaviour, she may find surveillance footage of her alleged infraction splashed across the evening news.
While ‘Talking CCTV’ may be novel, video surveillance is nothing new in Britain. It is estimated that a person living and working in London is photographed an average of 300 times a day.  One commonly quoted figure is that there is one surveillance camera for every 14 people in Britain.  This year the government is spending half a million pounds to set up ‘Talking CCTV’ in twenty communities and it is likely that the program will be expanded in future funding cycles.
Critics of the program argue that the money spent adding speakers to existing surveillance cameras is being wasted. The human rights organization Liberty contends that 78% of the national crime prevention budget in the past decade has been spent on CCTV equipment without proper studies conducted to assess whether or not the expenditure is effective. The organization argues that spending the same percentage of the budget to increase the number of law enforcement officers on patrol would go a lot further to improving public safety. 
‘Talking CCTV’ supporters, on the other hand, cite statistics that would please any elected official. In Middlesbrough, where the pilot program took place, officials claim that the system adds an “additional layer of security”:
“At the bottom end of the scale, we use the talking CCTV for littering offences, for which it’s proven to be absolutely a 100% success. Middlesbrough’s cleanliness has improved dramatically since the speakers were installed,” he said. “As you move up the scale a bit on public order offences - like drunkenness or fighting - we’re proving the speakers are coming into their own, and we’re recording about 65% to 70% success rate for those kinds of offences.”
But measured against what? In their 1999 study of CCTV in Britain, Clive Norris and Gary Armstrong demonstrated how government and law enforcement officials often present CCTV as a panacea without proving it provides the dramatic results attributed to it. Their review of the numbers suggested that, throughout the 1990s, publicly-quoted figures about the benefits of CCTV were often inaccurate or did not tell the whole story, yet they were used to convince taxpayers to buy into the surveillance system.  This is not to say that Middlesbrough is faking its numbers. It is quite likely that 100% of individuals exhibiting the anti-social behaviour of littering, who were publicly reprimanded when caught on camera, put their garbage in the bin as directed.
The ‘talking’ modification to the existing CCTV system is being sold to the public as a way to clean up the streets and create a safe, law-abiding community. The Home Secretary, John Reid, states that the new measure is aimed at “the tiny minority who make life a misery for the decent majority.” Safe, clean streets sound great but one academic has noted that public debate about CCTV tends to be shaped more by the government’s focus on how technology can improve law and order and far less on other, more complex, issues about the appropriateness of using the technology. 
Government employees now have a powerful tool to single out and shame an individual in public. The fact that “100%” of litterbugs in Middlesbrough obeyed the authoritative, disembodied voice ought not to be underestimated. They likely did so out of shame and embarrassment. Before signing on to such a program, it is worth noting that video surveillance operators, however well-intentioned, are human and bring their very human biases to their jobs. Norris and Armstrong’s 1999 study showed that the workers watching the monitors disproportionately targeted males, youths, and black people as surveillance subjects. Biases may change depending on the era and the community. The past few years, for example, have seen an aggressive crackdown on panhandling in Liverpool, along with laws designed to minimize youth loitering in urban shopping districts.
Will young people, the urban poor, and members of visible minority communities be disproportionately targeted by ‘Talking CCTV’? Officially, the answer is likely to be “no”, but it has been observed that:
Unequal relations between rich/poor, men/women, gay/straight and young/old are precisely relations that have been managed and negotiated through state activities via combinations of welfare, moral education, and censure and exclusion from public space. For some who inhabit our cities, their identity, through the eyes of a surveillance camera, is constructed in wholly negative terms and without the presence of negotiation and choice that middle class consumers may enjoy. 
Public shaming of individuals engaged in so-called anti-social behaviour may result in British cities ‘designing away’ social problems as those who are targeted too often by authorities will find other spaces in which to spend their time.  The rest of the community may find itself enjoying litter-free streets and ‘Talking CCTV’ will be given credit. But it will all have happened without the benefit of serious public debate about whose behaviour is anti-social behaviour and why that makes people uncomfortable. Britain has been trying to rid itself of anti-social behaviour for a long time now and it seems unlikely that a few talking cameras will get to the root of the problem.
 Clive Norris et al., “The Growth of CCTV: a global perspective on the international diffusion of video surveillance in publicly accessible space.” Surveillance & Society 2:2/3 (2004).
 Anti-social behaviour has been seen as such a problem in Britain for the past few decades that the Crime and Disorder Act 1998 gave it a legal definition and criminalized it. That was followed by the Anti-Social Behaviour Act 2003. Legally defining the problem doesn’t appear to have helped much, as the government continues to struggle with anti-social behaviour across Britain.
 Clive Norris and Gary Armstrong, The Maximum Surveillance Society: The Rise of CCTV (Oxford: Oxford University Press, 1999): 3. (Note that this was a 1999 study. While this continues to be the figure quoted it is possible the number has increased in the past eight years.)
 Clive Norris et al., “The Growth of CCTV”.
 Norris and Armstrong also quote the ‘78% of the budget’ figure in their 1999 work. It is unclear if this continues to be the expenditure or if Liberty is quoting their work. See Norris and Armstrong, The Maximum Surveillance Society: 54.
 Norris and Armstrong, The Maximum Surveillance Society, 60-7.
 William R. Webster, “The Diffusion, Regulation and Governance of Closed-Circuit Television in the UK,” Surveillance & Society 2:2/3 (2004): 237.
 Norris and Armstrong, The Maximum Surveillance Society: 109-10.
 Roy Coleman, “Reclaiming the Streets: Closed Circuit Television, Neoliberalism and the Mystification of Social Divisions in Liverpool, UK,” Surveillance & Society 2:2/3 (2004).
 Coleman, “Reclaiming the Streets”: 304.
 Bilge Yesil, “Watching Ourselves: Video surveillance, urban space and self-responsibilization,” Cultural Studies 20:4 (2006).
Calibrating Public Access to Personal Information in Legal Databases: Anonymity and 6 Degrees of Google Clicking
posted by:Alana Maurushat // 11:59 PM // July 10, 2007 // ID TRAIL MIX
Hi, I’m Alana. I’m a techno-luddite who confesses to rarely participating in (well, at least writing on) weblists, chatrooms or blogs. In the fall of 2006, however, I felt compelled to respond to a posting on the closed list server cyberprof. The posting in question concerned public access to personal information found in a legal database known as projectposner, developed by Tim Wu and Stuart Sierra, which contains many influential judgments of the American judge Richard A. Posner. One such judgment concerned a sexual harassment case in which the plaintiff was fired for allegedly refusing to have sex with her boss. The plaintiff (who shall remain anonymous) requested the removal of her name (or of the entire case) from projectposner. This request triggered a long debate among cyberprof colleagues about the scope of anonymity (and pseudonymity) in online public access to court records.
Privacy was seen as important, but absolute privacy was seen as neither desirable nor possible. Some argued that there was already an appropriate mechanism in place, namely a protective order removing all references to a party’s name during the course of litigation. The ability to remain anonymous in court proceedings is at the discretion of the judge presiding over the matter (at least it is in the United States). It was argued that anonymity is better handled as a matter of public policy by judges issuing protective orders than by ad hoc (or post hoc) disclosure decisions made by individual website owners. Some further argued that there was no objectively significant invasion of privacy in the case at hand. There were references to star chambers, decreasing access to case reports, and the social utility of online searching.
Others, including myself, expressed concerns about the personal, psychological and social effects of public access to sensitive personal information. We noted the lack of education regarding the accessibility of online judicial opinions and court files. We noted the lack of any legal obligations requiring website operators to edit and censor information. We even looked at the psychological motivations of those who might seek out and stalk former victims of sex crimes, as well as the motivations of employers wishing to investigate potential employees.
As lawyers we did a good job debating the legal and policy elements of the situation. As moral agents or ethicists we failed badly. We failed to consider those most vulnerable to the consequences of access to court records – women and children. We failed to consider the privacy invasion from a subjective perspective. And we failed to consider the consequences of 6 degrees of Google clicking.
This situation is not about appropriate court-issued protective orders and the ability to access court records online. It is about the ability, with a single “I’m Feeling Lucky” click, to gain unfettered and unnecessary access to personal information outside the scope of the originally intended search. It is about using Google ethically (I like “Googlethics”). It is about what I call 6 degrees of Google clicking.
Similar to our dilemma, consider the following hypotheticals:
1) You are a university student taking a literature course from Professor Woolengala. You wish to see a list of some of her publications and you are, in general, a bit of a nosy parker. In short, you Google your professor. The first result is a link to a legal database containing a judgment in which your professor was the plaintiff in a sexual harassment suit 12 years ago. Within two clicks, you have retrieved and are reading this personal and sensitive information.
2) You are a partner at the law firm McQuarey Nightrum. You wish to hire a new associate. You ask your assistant to conduct a personal background check of all candidates. This includes a search on Google. Your Google search indicates that a candidate was a plaintiff in a workplace harassment suit, as well as a plaintiff in an insurance suit to obtain additional refunds for radiology treatment (3 clicks). Based on this information, you do not shortlist the candidate.
There is an appalling lack of education among Google users and website owners about the extent of Google searchability. There are all too many online privacy blunders illustrating this point: sensitive information on corrupt Hong Kong police (many linked to organized crime) finding its way into subdirectories on the Internet; files from ongoing police investigations in Japan likewise ending up in subdirectories online. All searchable through Google. All avoidable through the use of FTP, or of the robots exclusion protocol, which tells Google’s web spiders not to retrieve information from a website – yet none of these measures was used, even by professional IT security experts.
What if FTP or the robots exclusion protocol had been used in projectposner? It would still be possible to retrieve the decision from the website itself, but the judgment would not be searchable with Google. This would, in theory, better limit the ability to find and use personal information in an unnecessary and unfettered manner (Google search/click for online legal databases, click on the selected database, type in a party name and click, click on the judgment(s) – at least 4 degrees of Google clicking). For this reason, many free online legal databases, such as those found on worldlii.org, are not searchable with Google. Of course, this also hinders legitimate and efficient searching; Google is popular because it works well. There is a middle ground. The same robots.txt file can be used to allow access to a website but not to a deep link. In other words, you might be directed to projectposner but then have to perform an internal search once within the website. More beneficial still would be the ability to adjust website ranking so that a result containing personal information would not appear on the first page of results. These small technical specifications could have reduced some of the ethical (and legal) dilemmas of online access to court information, though they could not, of course, have avoided all of the issues.
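To make the mechanism concrete, here is a minimal sketch of how the robots exclusion protocol works. The file contents, paths and site name are hypothetical illustrations, not taken from projectposner; in Python, the standard library’s urllib.robotparser shows how a compliant crawler would interpret such rules:

```python
# Sketch: a robots.txt that lets crawlers index a site's front page
# while excluding the (hypothetical) directory holding judgments.
from urllib import robotparser

rules = [
    "User-agent: *",      # applies to all crawlers, including Googlebot
    "Disallow: /cases/",  # do not index anything under /cases/
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# The home page remains fetchable, so the site is still findable...
print(rp.can_fetch("Googlebot", "http://example.org/index.html"))        # True
# ...but a compliant spider will not fetch a deep-linked judgment.
print(rp.can_fetch("Googlebot", "http://example.org/cases/smith.html"))  # False
```

A crawler honouring these rules could still list the database itself in search results, while individual judgments would only be reachable through an internal search on the site – roughly the middle ground between full Google searchability and complete invisibility.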
There is no quick answer to this issue, but I, for one, would like to see a policy of 6 degrees of Google clicking. In the game of six degrees, people try to link actors to movies starring Kevin Bacon; the object is to make the link in as few steps as possible, with a maximum of six. The reverse may be good policy for online searching of personal information in legal databases: requiring 6 degrees of Google clicking would still accommodate those with a genuine vested interest in obtaining personal information, while reducing unnecessary and unfettered access.
I haven’t nearly begun to explore the many important and deserving ethical issues presented in accessing online information in legal databases. It is an act requiring fine calibration. I invite your input.
Alana Maurushat, B.A. (University of Calgary), B.C.L.(McGill), LL.B. (McGill), LL.M. with Concentration in Law and Technology (University of Ottawa), PhD Candidate (University of New South Wales). The author is Acting Academic Director of the Cyberspace Law and Policy Centre, sessional lecturer, and PhD candidate at the Faculty of Law at the University of New South Wales, Australia. Prior to moving to Sydney, she was an Assistant Professor and Deputy Director of the LLM in Information Technology and Intellectual Property at the University of Hong Kong’s Faculty of Law. She has taught in summer programs for the University of Santa Clara, Duke University, and has been invited to teach at the Université de Nantes this coming year. Her current research is focused on technical, ethical and legal dimensions of computer malware building on past research projects which addressed the impact of surveillance technologies on free expression and privacy. She currently teaches Advanced Legal Research.
Home is Where the Heart is: Dignity, Privacy and Equality under the Charter
posted by:Daphne Gilbert // 11:59 PM // July 03, 2007 // ID TRAIL MIX
A country’s constitution can be described as a mirror of the national soul. A constitution is a foundational instrument, reflective certainly of its country as it exists, but also aspirational in nature. In countries like Canada, where the constitution protects individual rights and freedoms, citizens are empowered by the values that shape the legal guarantees. This is, at least, the hope behind Canada’s Charter of Rights and Freedoms. What, then, to make of the fact that an interest or value in ‘privacy’ is not expressly protected by our constitution?
The question of the role privacy plays as a foundational constitutional value has been addressed by the Supreme Court of Canada on numerous occasions. It is well-settled law that sections 7 and 8 of our Charter do contain protections for some aspects of a privacy interest. What is less clear is whether a robust concept of privacy, and privacy-related interests, are adequately and wholly protected in Canada’s Charter. Given the constraints of the privacy protections recognized in sections 7 and 8, finding another home for privacy in the Charter might open up new potential. In my view, it would be both helpful and appropriate to consider privacy in the context of the section 15 equality guarantee.
I stress here that I am proposing “another” and not a “new” home for constitutional recognition of privacy interests, because I agree that sections 7 and 8 offer important and necessary protections for certain privacy interests. These two sections are, however, limited in their scope. They appear in a part of the Charter labeled “Legal Rights”, a heading that has been interpreted as placing boundaries on the application of sections 7 and 8. In Gosselin v. Quebec (Attorney General), a majority of the Supreme Court of Canada affirmed that the guarantees under the “Legal Rights” section of the Charter are triggered by state action involving the administration of justice. In most situations, the “Legal Rights” guarantees are triggered in the criminal law context, though these protections can be used in administrative contexts too (as they were, for example, in the case of New Brunswick (Minister of Health and Community Services) v. G.(J.), involving challenges to child protection processes). While Gosselin left open the question of whether an adjudicative context was required for “Legal Rights” to apply, the majority insisted that it was appropriate to restrict the applicability of the “Legal Rights” protections to the administration of justice. In Gosselin, this meant the section 7 guarantee to life, liberty and security of the person was useless in challenging an inadequate welfare regime. If privacy protections are housed only in sections 7 and 8 of the Charter, the nature of the interests protected is necessarily limited. These limitations mean that only certain kinds of privacy interests are protected by the Charter, and that a “right” to privacy only comes into play in situations captured by section 7 and/or 8. In my view, this is an impoverished interpretation of what privacy could offer as a constitutional value.
Since the Canadian Charter does not recognize the same sort of “penumbral effects” as the Americans see in their Bill of Rights, we are required to locate our constitutional values within specific Charter guarantees. If there is potential for constitutional recognition of privacy outside of the “Legal Rights” context, privacy must find another resting place. In my view, section 15 offers significant hope and advantages as another home for privacy. Chief Justice McLachlin of the Supreme Court of Canada describes “equality” as perhaps the most difficult of the Charter rights to interpret and define, and indeed, section 15 has had a tumultuous history since it came into force in 1985. In the 1990s, the Court was particularly divided on the proper interpretive approach to section 15, until in 1999 the Court reached a tentative consensus on a “test” for equality violations in Law v. Canada (Minister of Employment and Immigration).  [Most section 15 scholars agree the Law test is problematic and that the Court has in any event fractured into differing views on equality rights in recent years, however, Law remains in theory and in practice at least, the prevailing structure for section 15.] In Law, the Supreme Court decided to make “human dignity” the central focus of the equality guarantee, explaining the purpose of section 15 as:
to prevent the violation of essential human dignity and freedom through the imposition of disadvantage, stereotyping, or political or social prejudice, and to promote a society in which all persons enjoy equal recognition at law as human beings or as members of Canadian society, equally capable and equally deserving of concern, respect and consideration. 
Section 15 claimants must show, as one of the three required steps in the Law test, that the legislative provision they contest violates or demeans their human dignity. Justice Iacobucci, writing for the Court in Law, outlined his version of “human dignity” in the equality context, intending his approach to be comprehensive but non-exhaustive:
What is human dignity? There can be different conceptions of what human dignity means… [T]he equality guarantee in s.15(1) is concerned with the realization of personal autonomy and self-determination. Human dignity means that an individual or group feels self-respect and self-worth. It is concerned with physical and psychological integrity and empowerment. Human dignity is harmed by unfair treatment premised upon personal traits or circumstances which do not relate to individual needs, capacities, or merits. It is enhanced by laws which are sensitive to the needs, capacities, and merits of different individuals, taking into account the context underlying their differences. Human dignity is harmed when individuals and groups are marginalized, ignored, or devalued, and is enhanced when laws recognize the full place of all individuals and groups within Canadian society. 
Connections between privacy and human dignity have long been acknowledged and explored by theorists, and the Supreme Court of Canada has declared, “a fair legal system requires respect at all times for the complainant’s personal dignity, and in particular his or her right to privacy, equality, and security of the person.” It seems almost natural, then, that privacy should find another home outside of the “Legal Rights” portion of the Charter, within human dignity as it is understood and protected under section 15.
There are many benefits to interpreting section 15 to include a privacy interest, broadly captured by two significant features. First, protecting privacy as part of the Charter’s equality guarantee provides an avenue for privacy-related claims that do not fall within the boundaries of the “Legal Rights” sections. A claimant whose privacy interests have been violated outside of the Legal Rights context (meaning sections 7 and 8 are not triggered) may now be able to bring the claim forward under section 15, expanding the Charter’s spectrum of privacy protections. For example, in contexts including (dis)ability discrimination, social welfare or employment regimes, access and funding for abortion or contraceptive services, poverty and homelessness, and government relationships with aboriginal peoples, as well as other pressing equality concerns, arguments around privacy interests might be helpful in unpacking and explaining the human dignity step of the Law framework.
Second, an understanding of privacy embedded within the Charter’s equality framework could open up more expansive possibilities for protecting a range of privacy interests beyond those that fall within sections 7 and 8. Section 8 has been interpreted as protecting three specific ‘classes’ of privacy interests: personal, territorial and informational privacy. Section 7’s protection for security of the person, which includes bodily integrity, includes decisional privacy interests. A number of theorists, however, including feminists Allen, Roberts, Gavison, McClain and others, have argued that a robust understanding of privacy includes more than simply protecting these manifestations of recognized privacy interests, and may include such features as positive obligations on the state to provide the conditions necessary for true private choice to be exercised. It is possible that interpreting privacy within section 15 could lead to the legal recognition of new or different ‘kinds’ of privacy, over and above those protected by sections 7 and 8.
Whatever the content of privacy is understood to include, there is general agreement in law and society that privacy is worth protecting, as a “core value of a civilized society,” and as a requirement both of “inviolate personality” and human dignity. Expanding the possibilities for protecting privacy by including it within the ambit of the section 15 equality guarantee would be a further and uniquely Canadian recognition of the foundational role that privacy plays in our society. Equality, and by necessity a constitutional right to equality, is at the heart of a compassionate democracy. While the Charter protects and advances many of our most cherished values, section 15 is at the heart of the Charter’s vision for Canada. Finding a home for a privacy interest in our understanding of human dignity not only promotes a fuller understanding of the many facets of privacy as a core value, but also opens up new equality arguments for vulnerable and marginalized groups.
 2002 SCC 84.
 [1999] 3 S.C.R. 46.
 Then Justice Arbour took a different and radical approach to section 7, and would have removed it from the limitations of its placement in the “Legal Rights” section of the Charter. She left the Court soon after the Gosselin decision and her views have not gained traction at the Court so far.
 [1999] 1 S.C.R. 497.
 Ibid. at para. 59.
 The first two steps in the Law test are that the claimant establish that he or she is a member of one of the enumerated or analogous grounds listed in section 15 and that the impugned legislative provision imposes a burden or denies a benefit to the claimant on the basis of the ground.
 Ibid. at para. 53.
 A number of philosophers have connected privacy to human dignity, and explained the relationship between the two as harmonious and even symbiotic in nature. Edward J. Bloustein reasoned:
The man [or woman] who is compelled to live every minute of his [or her] life among others and whose every need, thought, desire, fancy or gratification is subject to public scrutiny, has been deprived of his [or her] individuality and human dignity. Such an individual merges with the mass. His [or her] opinions, being public, tend never to be different; his [or her] aspirations, being known, tend always to be conventionally accepted ones; his [or her] feelings, being openly exhibited, tend to lose their quality of unique personal warmth and become the feelings of every man [or woman]. Such a being, although sentient, is fungible; he [or she] is not an individual.
See: Edward J. Bloustein, “Privacy as an Aspect of Human Dignity: An Answer to Dean Prosser” in Ferdinand Schoeman, ed., Philosophical Dimensions of Privacy: An Anthology (Cambridge: Cambridge University Press, 1984) at 188. See also: Jeffrey H. Reiman, “Privacy, Intimacy and Personhood” in ibid. at 305; Helen Nissenbaum, “Privacy as Contextual Integrity” (2004) 79 Wash. L. Rev. 119.
 R. v. O’Connor, [1995] 4 S.C.R. 411 at para. 154.
 See Olmstead v. United States, 277 U.S. 438 (1928) (Brandeis J., dissenting).
 Samuel D. Warren & Louis D. Brandeis, “The Right to Privacy” (1890) 4 Harv. L. Rev. 193 at 194.