

Prepared Remarks of Jim Harper, Editor of Privacilla.org, to the American Bar Association Section of Science & Technology Law Panel Entitled “Biometrics: New Weapons in the War Against Terrorism or New Blow to Privacy?”

August 11, 2002

I did a panel this morning at 8:00 am and thought I had seen it all. Now, I come to find out that you people will even attend a panel discussion like this late on a hot Sunday afternoon. Now I really have seen it all.

I always learn a great deal by speaking on panels like this, and I think I'm definitely getting the best of you here. I know that I'm going to learn more than I share today.

As you heard, I am the Editor of Privacilla.org, which is a Web-based think-tank devoted to privacy. I am also an Adjunct Fellow at The Progress & Freedom Foundation. And, in addition, I have a lobbying and consulting firm called PolicyCounsel.Com. None of my clients has specific privacy issues, but privacy touches nearly every public policy issue in one way or another. None of the material on Privacilla and none of what I say to you represents the views of any client, but be aware of my potential for bias, as you would be with any privacy advocate.

At Privacilla, we try to sort out the issues that, in public debate, go under the name of “privacy.” We have put forward a definition of privacy so that policymakers can better address the issue directly and determine what interests they are pursuing with various proposals.

The word “privacy” has come to be used to describe just about every concern with the modern world. That’s fine for regular people, but when we as policymakers address these concerns, we need to be a little more precise.

At Privacilla, we have developed a working definition of privacy that we believe should form the basis of policy discussions on the topic: Privacy is a subjective condition that individuals enjoy when two factors are in place — legal ability to control information about oneself, and exercise of that control consistent with one's interests and values.

Most importantly, privacy is a personal, subjective condition. I know that in a roomful of lawyers, I don’t have to tell you what “subjective” means. It means that my sense of privacy is my own, and yours is yours. Legislators and regulators can’t pass laws to tell us we have privacy when we think we don’t. Those laws can only represent guesses about what privacy might look like.

The first factor I mentioned is the legal power to control information. This essentially asks whether people have been deprived of power to control information in some way. There are thousands of laws and regulations that deprive people of power over information about themselves. Let there be no mistake about the good intentions of these laws. Unfortunately, there is a nearly direct correlation between how helpful a law is intended to be and how privacy-invasive it is.

The second factor I mentioned is exercise of control consistent with our interests and values. Ultimately, the only thing that can deliver privacy on the terms consumers want it is consumer awareness and education. If you don’t know how information moves in the Information Economy, you can’t reject practices that you disapprove of. It is the actions of educated and aware consumers in the marketplace that determine whether the uses businesses wish to make of information are acceptable.

Along with giving definition to the term “privacy,” we are trying to move issues that are not properly regarded as privacy into other boxes. This is part of our effort to improve the work that policymakers do.

Identity fraud, for example, is widely perceived as a “privacy” problem. But it is better understood as a group of crimes that thrive on the use of personal identification and financial information. Because of this widespread misperception, the crimes that constitute identity fraud go poorly enforced while Congress and many states consider things like banning many uses of Social Security Numbers. My suggestion is that the cure for the problem of identity fraud is to start putting some bad people in jail.

Security is another example. Although it is sometimes treated as a twin of privacy, security has to do with all the steps a business or government takes to protect its operations, data, and possessions. Privacy is just one promise that governments and businesses make to citizens and consumers. Security allows privacy promises to be carried out, but the two are not the same.

Likewise, unwanted commercial e-mail, or "spam," is an offensive intrusion into electronic communications and a serious annoyance that is often called a "privacy" problem. Spam exists in large part because e-mail marketers know little or nothing about the interests of potential customers. It is difficult to reconcile spam — e-mails broadcast to unknown people nearly at random — with the heart of the privacy concept, which is too much personal information being available too widely.

Before I go too far into any details of biometrics and privacy, I thought I would just air out a problem I have with the idea that biometrics have a role in combating terrorism. It’s nearly a political fact that biometrics and national ID cards have some relationship to controlling terrorism. Well, political facts and actual facts sometimes have little relationship to one another. From my perspective, the only way to reduce terrorism is to develop good intelligence about terrorists and terrorist organizations.

Tracking and tracing the rest of us — the law-abiding non-terrorists — is equivalent to protecting the individual pieces of hay in a haystack from each other because there might be a needle present. Tracking all of us is essentially an incoherent response to terrorism. I'm well aware that the fact of its incoherence is not going to stop us from going down that path for quite a ways.

Terrorists are going to insinuate themselves into any security system we deploy for ourselves. They are going to hide their intentions and motivations. And it is intentions and motivations that make people terrorists. Finding these people before they act is what will prevent terrorism. Biometrics cannot do that. So I'm just unimpressed with biometrics as terrorism control.

Biometrics are an inert tool that appears to have a much better role in ordinary crime control, security, and convenience than anything so exotic as terrorism. I'm sure we can fill hours just discussing biometrics generically without getting lost in the notion that biometrics have much to do with preventing terrorism. But I've gotten ahead of myself.

The other panelists surely know more about biometrics, but I have done just a tiny bit of research to demystify them for myself. “Biometrics” literally means measurement of life, or I would say measurement of living things.

I have researched when biometrics first came into use, and I've only been able to come up with a rough estimate of when we started using them. It's about 250,000 years ago, when we as humans emerged as a distinct species. Earlier ancestors were certainly using biometrics prior to that, and animal species were using biometrics then and are still using them today.

You may have figured out by now that I'm adopting a very literal understanding: “biometrics” means taking the measure of other living things. We really have been using facial recognition and voice recognition essentially forever. When you see someone or hear them talk, you use that in your head to confirm their identity. Only in the very recent past have we started using signatures and fingerprints as additional biometrics.

Obviously, we’re here to talk about something more interesting than the idea that we can recognize people we’ve seen before, so let me think out loud about what’s new in biometrics.

One element to consider is that modern biometric techniques are allegedly able to differentiate people perfectly from one another. I don’t think this is much of an improvement over past methods like facial recognition and fingerprints. We’ve never thought about it much, but who has ever, for more than a moment, mistaken a person for someone close to them like their mother or best friend? If it is true that retinal scanning is an essentially perfect way of distinguishing people, we don’t need to worry that it is being used rather than fingerprints or other biometrics that we’re more familiar with.

A more important change in technology is standardization of biometric methods — that is, taking biometrics out of our heads and putting them into computers. The old way of doing biometrics has been to calculate in our heads whether a set of characteristics is similar enough to call it a match. We learned to do this from birth through trial and error; we don't think about it much, and we're all expert at it by the time we're four years old. Exporting this type of “thinking” to computers is not an easy task, and I don't envy the people who are trying to do so. Obviously, their ability to do it correctly determines whether a biometric method succeeds or fails.
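That computerized “similar enough to call it a match” decision can be sketched as a simple threshold comparison. This is a hypothetical illustration only — the feature vectors, the distance measure, and the threshold value are all invented for the example, not any vendor's actual matching algorithm:

```python
import math

def match_score(template, sample):
    """Euclidean distance between two biometric feature vectors.
    Smaller distance means the measurements are more alike."""
    return math.sqrt(sum((t - s) ** 2 for t, s in zip(template, sample)))

def is_match(template, sample, threshold=0.5):
    """Declare a match when the distance falls below a chosen threshold.
    Tightening the threshold trades false accepts for false rejects."""
    return match_score(template, sample) < threshold

enrolled = [0.12, 0.80, 0.33]  # features captured at enrollment (made up)
probe = [0.11, 0.79, 0.35]     # features captured at verification (made up)
print(is_match(enrolled, probe))  # nearly identical vectors -> True
```

The whole engineering difficulty the speaker alludes to lives in the two things this sketch takes for granted: extracting stable feature vectors from messy real-world measurements, and picking a threshold that balances false matches against false rejections.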

The most significant change, I think, is that biometric information about individuals can be collected and put into large, accessible databases. In an imaginary future, identifying biometric information and other information can be combined quickly and easily. This would absolutely devastate the ability to remain anonymous or pseudonymous that we take for granted today.

Anonymity and pseudonymity are two extremely important social customs that we enjoy today that we don’t want to give up lightly. There are good and appropriate reasons why people might want to interact with others without revealing their identities, or while providing false identity. For example, a woman in a bar might want to make sure she never hears from a particular guy again, so she may give him a false number. This has never happened to me, I swear.

I disapprove of hype, so I’m not going to say that anonymity or pseudonymity are going away. They’re not, because we’re going to make sure they don’t. That is the potential of the technology, though, and we need to be aware of its potential impacts so that we can consider them and mitigate those impacts. The way we do that, I think, is by parsing out the uses of biometrics carefully.

And I think the most important thing that you should consider in terms of biometrics is not biometrics themselves, but who is using them. On Privacilla, we brought some rather spectacular innovation to the privacy field — and it’s unfortunate that this was innovative thinking — when we started talking about privacy from government and privacy in the private sector as being very different things.

Like it or not, governments have the power to take and use personal information about us without our consent. All variety of programs require us to share personal information, law enforcement being only the best known. We intend no slight to the beneficent motives of public officials when we say that loss of privacy is a cost of government.

The legal structure that governs information after it has been collected by governments is also one of uncertainty. Under the federal Privacy Act, for example, an agency only has to declare a new “routine use” in the Federal Register in order to make a new use of information about citizens. More deeply, Congress can change the uses that can be made of information with the passage of any new law. Who do you turn to when the privacy invasion is being done by your own government? For this reason, we deem personal information in the hands of governments to be categorically unprivate.

Governments also have a poor system of incentives when it comes to collecting information. Because they have legal power to demand it, they will tend to collect more information than they need to. They don’t lose “customers” if they are too demanding.

The private sector, on the other hand, is subject to market forces. Consumer groups regularly expose information practices that consumers find objectionable and companies drop them immediately. I recall that Borders Books was testing facial recognition cameras in the UK a few years ago. Public outrage caused them to back away from that idea very quickly.

Private enterprise is also subject to law. You can enter into contracts that dictate what uses may be made of information handed over in a transaction or derived from it, and that contract cannot be changed unilaterally by either of the parties.

Some privacy advocates believe that the terms of the contracts we as consumers regularly enter into allow too much information sharing today, but that is an argument they have to take up with consumers. If they can convince consumers to avoid companies with “bad” information practices, then they have found an area where consumers disapprove of commercial information practices.

In addition, there is the tort law. When any entity in the United States holds sensitive personal information about people, they hold it with a responsibility, enshrined in the tort law, not to reveal it in an outrageous or humiliating way. This is baseline privacy protection that has been evolving in our state law for 100 years. There is an array of protections for privacy when you deal in the private sector that are not found in the government sector.

So the most important question about privacy and biometrics, I think, is: Who is using biometrics?

When it is governments, we have reason to be concerned and skeptical. They have unique powers that no other institutions have. And, again, I think it is the databasing of information that is the major concern.

I testified before Congress on the privacy implications of red-light cameras, and found that the cameras themselves currently have few legitimate privacy implications. There are nascent privacy problems, and I made a Fourth Amendment argument about it. When governments combine digital cameras, optical character recognition, and databases to track the movements of cars and people in the absence of some level of suspicion, this will offend basic notions of privacy and violate the Fourth Amendment.

I think the same applies to warehousing biometric information about non-suspects, or using biometric technologies to record the movements or actions of non-suspects. One of the most interesting questions in public records law today is data destruction, which has never had to be much of a priority before.

In the private sector, we have much more protection, and I am much more optimistic about the uses that consumers can make of biometrics — with their consent. In the not-too-distant future, we could have access to cash or credit wherever we are, with or without a card or a PIN. We could enter theaters or airports quickly and with greater security than ever before. We won't need to carry car keys or house keys. Imagine never losing your wallet or keys because you don't carry them anymore. Outstanding. Of course, those of us who want greater privacy should be able to refuse these options and suffer the modest inconvenience of doing so.

So, to summarize, biometrics are not so remarkable in terms of privacy — they are an inert technology that can be used for good or bad. If you are looking to protect privacy, the more important consideration is who is using the technology, government or the private sector.


©2000-2003 Privacilla.org. All content subject to the Privacilla Public License.