Jun 27, 2008
Inside / Outside Part III: Life in a Surveillance Society
In Part I of this look at how technology is changing our perceptions of privacy, I wrote about how the spread of new technologies is blurring the line between private space and the public sphere. In Part II, I wrote about unidirectional versus bidirectional flows of information, and specifically the ramifications of corporate monitoring of our actions online. Both relate to our ability to keep actions secret; even assuming the home is an unassailable fortress of privacy, there are still grey areas where actions within the public sphere may be protected as private (and where actions that should probably be protected as private are made public). For privacy in our day-to-day lives to extend beyond the four walls of our homes, a wall of anonymity is required, but technology is continually wearing that wall down.
A lot has been written about the rise of a surveillance society over the past few years. In Britain, for example, there is one surveillance camera for every 14 citizens, and earlier this week Ars Technica reported that surveillance microphones are in the works. Surveillance footage is also often analysed with facial recognition software, or even biometrics such as x-rays of your skeleton, in order to assign an identity to the image. Even outside the view of cameras, placing a cellphone call logs your general location, to the nearest cell tower. Every time you access a Wi-Fi network, your IP address and location can be recorded and traced to you. You leave a digital footprint every time you interact with networked technology, including just walking down the street, let alone surfing the net.
Some reports estimate that the average Londoner is photographed 300 times a day; it’s easy to imagine a not-too-distant day when, every time you go outside, your movements are plotted and recorded in a database. There’s nothing stopping the stores you shop in from recording your image, logging a scan of your face into a database, assigning you a number, and keeping track of your purchases from visit to visit, even if you pay with cash.
Today, this information is typically not shared between sources, so individual enterprises know only bits and pieces of your activity; no single body is able to assemble the complete picture of where you go, what you read, what you buy, what you type. However, this information is valuable, and, stripped of identifying data, it can be sold to anyone looking to collect intelligence for marketing purposes. Virtually all our activities are monitorable, and monetizable. Of course, I’m conflating two things: security cameras, and the other ways of monitoring behaviour that are used to gather consumer data. But both involve following our actions and recording them, and all of it is essentially unavoidable for people living in urban areas in the 21st century. Further, as David Lyon writes, as the processes become increasingly automated, the distinction becomes academic: “The neat theoretical distinctions — between government and commerce, between collecting data and supervising — do begin to blur when confronted with the realities of contemporary electronic surveillance. Increasingly, disciplinary networks do connect employment with civil status, or consumption with policing.” A good example of this is governments’ repeated attempts to subpoena search engines and ISPs (which collect surveillance data for consumer purposes) to obtain information to be used in policing or even lawmaking.
We are policed and we are monitored, but what’s also interesting is the power being transferred into the hands of the individual. The explosive publicity surrounding the beating of Rodney King in 1991 would have been inconceivable a decade earlier, but the tools of surveillance were, in that situation, in the hands of the public, not the state. More recently, we have seen stories like that of a Gatineau teacher caught on a cameraphone in the act of excoriating a student, in a video later posted online, subverting the usual structure of authority.
The question all of this raises: are we better or worse off for living in a society of constant surveillance, including surveillance by the private sector and other members of the public? We tend to see the invasion of our privacy as inherently evil on principle, but I think most people would find it difficult to explain exactly what it is about being monitored that they object to without resorting to fuzzy concepts like “invasion of privacy” and “loss of freedom.” But at a basic level, as I have argued, we really oughtn’t to make any claim to a right to privacy within the public sphere in the first place; and loss of freedom, while obviously bad, doesn’t follow from surveillance. Surveillance is simply the collection of data, and it’s only once those data are put to use that danger to personal freedoms arises. The potential benefits of living under surveillance are clear: safer neighbourhoods thanks to a deterrent to crime and a higher rate of conviction when crimes are committed, faster emergency response to things like housefires, and even more efficient marketing, as I’ve alluded to in earlier posts. Concrete examples of the detrimental effects of hypersurveillance are difficult to cite.
However, Lyon argues that ubiquitous surveillance does have destructive effects; they are not immediately apparent, because they do not cause direct harm, but the indirect harm is real and insidious. According to Lyon, the main issue with surveillance is not one of privacy but one of social justice. In both policing and consumer surveillance, data are collected (benign in and of itself), but the ways those data are used affect the ways people are treated. For example, in a casino (another place where one’s every move is captured by camera), the more money one spends at the tables, the greater the perks: a free room upgrade or show tickets here, a free dinner or bottle of champagne there. It may sound innocuous, but carried over into life in general, the sorting of people into categories has a deleterious effect on equality.
We’re all told as children not to be prejudiced, but that’s exactly what these systems are designed to do: they look at superficial data to draw broad generalisations about individuals, so that individuals can be treated differently from one another. Racial profiling and economic profiling are just two of the most obvious examples of how an automated surveillance system can categorise people, and then ghettoize or idolise them, based on an analysis of superficial data. This is equally likely whether we navigate the physical world or cyberspace. Wherever surveillance monitors our activities (or even our appearance), that information is filtered to allow for analysis, and that filtering is necessarily built around metrics that discriminate between people, usually in ways we would consider socially unjust, such as discrimination by race, gender, age, or class. To extend or withhold offers to people, or to subject them to or insulate them from increased levels of policing, based on surveillance data, is socially divisive along lines already dividing society.
Essentially, the question is not whether there is a danger from surveillance on- or offline. Collecting security and consumer data is harmless, even productive, on the face of it. While often couched in terms of privacy or even personal freedom, those two concepts really have no bearing on our actions in the public sphere; we should always assume our actions in the public sphere will not be kept private. It’s not our personal privacy or freedom that surveillance threatens; it’s our equality. Ideally, we would all have an equal opportunity to access goods and services online or in the physical world, and we would all have an equal opportunity to be free from harassment, persecution, or even excessive scrutiny. Prejudice and discrimination are things we all encounter daily, and as products of the individuals we meet, we can counter them only as individuals ourselves. But the threat of what will happen as we institutionalise prejudice and discrimination by systematically filtering data that categorise people into a social-sorting system is something we must weigh very carefully as we pursue our policing and marketing ends.