
Here’s to the Stupid Users: Notes from the WSIS Working Group Meeting on Cybersecurity

posted by: Valerie Steeves // 08:34 PM // August 30, 2005 // ID TRAIL MIX

In July, I went to the World Summit on the Information Society meeting on cybersecurity in Geneva. It was a remarkable experience in many ways, not least because Deborah Hurley, who organized the meeting, seeded every panel with representatives from the developing world. Western demands for tightened security - including the routine authentication of online users - were put into wonderful context when the delegate from Tanzania pointed out that almost everyone in his country accessed the Net only through cybercafés - try authenticating them - and that, although cybersecurity is a priority, it’s less of a priority than things like clean water and electricity. Security takes on a different flavour in those circumstances.

But the thing that really stuck with me after the meeting was a comment made by one of the European delegates. As he bemoaned the sorry state of cybersecurity on the Net, he said, “It’s the stupid users. If we could just get them to use the technology properly, then we wouldn’t have a problem.”

Throughout the meeting, government and industry representatives - and many academics - talked about threats, attacks, counterattacks, command and control centres, arsenals and systems. Security was defined in terms of the network, not the people who use the network for their own purposes. The emphasis was put on creating a network that was controllable - that’s why the users are problematic, because they’re harder to control than routers and cables. At the very least, so the argument goes, users should identify themselves so the system can be protected from criminals.

This discussion bothered me for a number of reasons. Not least is my firm belief that language is important. Not only does the language we use structure how we define a problem, but it also structures the kinds of solutions we embrace. Early articulations of privacy rights in a world of databases were rooted in the European experience of World War II. Deep concerns about abuses of power and the gross denial of human rights led to the enactment of the Universal Declaration of Human Rights and the recognition in international legal instruments that privacy is an essential element of human dignity, autonomy and the democratic process. That’s a far cry from our current apparent consensus that we should strip away the anonymity of the stupid users who are screwing up the works because they pose a threat to the corporations and governments who use the Net to deliver goods and services to consumers.

I’m not arguing we don’t need to address problems like denial of service attacks and botnets. We do. But we should be more particular about the way we do it. Massive surveillance of “users” isn’t the answer because it creates its own insecurities. As Bruce Schneier points out, the automatic tracking of the numbers you call on your cell phone puts you, the person using the technology, at risk because that information becomes available to others. GPS functionalities create a similar problem. Identity theft is facilitated by the massive collection of your personal information by institutions that are then vulnerable to internal leaks, not by you forgetting to cover up your PIN at your local grocery store.

Blaming the user blinds us to the larger issues of corporate responsibility for these unintended security problems. It also predisposes us to accept solutions that are privacy invasive, because we no longer see the user as a person with fundamental rights or the law as a means to protect those rights. In fact, the law becomes the problem because it makes it difficult to protect the network.

For example, a South American police officer at the WSIS told a story about tracking a man who had apparently disappeared but then used a hotmail account to send an email to a friend. When he contacted the police in the US to get the IP address, he couldn’t get the information because of data protection laws. This led to a lengthy discussion of the ways in which data protection laws - although well-meaning - create insurmountable barriers to law enforcement in a networked environment by protecting the identity of criminals.

The thing is, the US doesn’t have data protection laws in place for IP addresses, so it’s hard to see how data protection could be at fault. And last time I checked, a person who leaves his family without telling them where he’s going isn’t a criminal.

Blaming the user is a dangerous ideology because it blurs the line between users and criminals. Especially in a global context, “criminals” can include human rights activists and political dissidents who use the Net to exercise their right to free expression and association. We’re doing them and us no favours when we build mandatory authentication and surveillance into the network.

Rather than worrying about controlling the stupid users, we should be worrying about the effects of weakening judicial supervision of police surveillance. We should also invest in privacy-respectful alternatives, like honeypot servers that attract attackers and provide early warning of pending attacks on the network - all without collecting personal information or invading anyone’s privacy. Because ironically, in a world information society, it’s the users that matter. The people who talk to their friends, carry on their businesses and surf through the vast labyrinth of information that resides on the Net are the society we’re seeking to protect.
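
For the technically inclined, here is a rough sketch of what a privacy-respectful honeypot in this spirit might look like, written in Python purely for illustration. The decoy port, alert threshold and time window below are hypothetical choices of mine, not anything discussed at the meeting: the listener simply counts connection attempts on an unused port and raises an early warning when probing spikes, without recording who connected or what they sent.

```python
# Illustrative sketch only: a minimal "privacy-respectful" honeypot listener.
# It opens a decoy port, counts connection attempts, and warns when activity
# spikes -- without logging source addresses or payload contents.
import socket
import threading
import time

DECOY_PORT = 2323          # hypothetical decoy port (e.g. mimicking telnet)
ALERT_THRESHOLD = 20       # attempts per window before an alert is raised
WINDOW_SECONDS = 60        # length of the monitoring window

attempts = 0
lock = threading.Lock()

def listen():
    """Accept and immediately drop connections on the decoy port."""
    global attempts
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", DECOY_PORT))
        srv.listen()
        while True:
            conn, _addr = srv.accept()   # the address is discarded, not logged
            conn.close()                 # nothing is read from the connection
            with lock:
                attempts += 1

def monitor():
    """Periodically check the attempt counter and emit an early warning."""
    global attempts
    while True:
        time.sleep(WINDOW_SECONDS)
        with lock:
            count, attempts = attempts, 0
        if count >= ALERT_THRESHOLD:
            print(f"[warning] {count} probes on decoy port {DECOY_PORT} "
                  f"in the last {WINDOW_SECONDS}s - possible scan under way")

if __name__ == "__main__":
    threading.Thread(target=listen, daemon=True).start()
    monitor()
```

The point of the design is that the alert rests on aggregate behaviour - how often an unused port is being probed - rather than on identifying any individual user.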

Comments

As odd as it may seem, serious privacy (and human rights) violations are still commonplace in some parts of the world in the wake of the WSIS, including its host country Tunisia. According to IFEX's Tunisia Monitoring Group (TMG), government censors routinely block access to at least 20 websites that provide independent news and analysis about human rights and political issues in Tunisia. They include kalimatunisie.com, tunezine.com, tunisnews.net and reveiltunisien.org.

The TMG says the credibility of the WSIS will be seriously compromised unless Tunisia takes immediate measures to improve free expression conditions.

For more visit http://www.ifex.org/en/content/view/full/68076/
http://www.ifex.org/en/content/view/full/85/

Posted by: Anonymous at August 30, 2005 01:18 PM

One has to remember that the meeting was a thematic one, one that just feeds comments in for consideration by the actual WSIS itself. The real work, the real drafting, will be taking place in a few weeks' time at the upcoming preparatory committee.

I'll be attending the meeting and will try to post messages on both my blog and podcast. The URLs for the feeds are as follows:

WSIS blog - http://www.privaterra.org/activities/wsis/blog/

WSIS podcast - http://www.privaterra.org/activities/wsis/podcast/rss.xml

Posted by: Robert Guerra at August 31, 2005 09:23 PM


