

Prepared Remarks of Jim Harper, Editor of Privacilla.org, to the Computers, Freedom, and Privacy Conference Panel Entitled "Government Profiling/Private Data"

April 23, 2004

I’m in an unusual position today because I’m not the one with the most extreme position. Bill Scannell seems to have taken that mantle, so I am left to sound like the voice of reason. I may even commit the mortal conference-speaker sin of being boring.

I am the Editor of Privacilla.org, which is a Web-based think-tank devoted to privacy as a public policy issue. On Privacilla, we try to cover the whole range of privacy issues, including privacy from government and privacy in the commercial sector: financial, medical, and online privacy. In addition to these areas, we work on “privacy basics,” asking what it is we’re really talking about when we talk about privacy.

To that end, I want to make a critical distinction about the major problem with a program like CAPPS II. The latest version of the CAPPS II Privacy Act notice is actually pretty good about privacy in terms of the information that will move between a company like Acxiom and the CAPPS II system. If Acxiom will only return a numerical score that verifies a person’s identity, that is very little personal information moving around. There is little privacy problem there, though a number of other privacy problems remain in the proposed system.

The heart of the problem with this system lies in its Due Process consequences. When private data is used for public decision-making, the rights that we have against government should extend to that private data. If Acxiom data is going to be used by the government to determine whether we fly or how much searching we undergo before we get on a plane, the protections of Due Process should and do require that data to be opened in ways that Acxiom might not like.

I am not a fan of the vaunted “Fair Information Practices,” which range in number from four to twenty-four, depending on who you talk to. But they are certainly much more appropriate in the government context than in the private context. Governments have more power and are not constrained by law and market incentives the way private actors are, so requirements like notice, choice, access, and security are more important to spell out and require in that context.

So I will make a point that may not be popular here, but I’ve made it before to Jennifer and to other people in her industry: Using private data in government decision-making will cause Fair Information Practice-style requirements to migrate across the divide into your data. Data holders may come to find that they are no longer allowed to use data in creative ways. Rather, they will become like public utilities. What Acxiom is doing is a threat to the whole industry.

But I want to take a step back and make a broader point about identity-based security of the kind that CAPPS II, and the data sharing Acxiom seeks, are part of.

The Markle Foundation recently released a report on the appropriate homeland security network, and one of the Appendices to that report started by saying that there is substantial evidence supporting the use of identity verification in homeland security. Now, I’m a careful reader and I compared that statement to what I know: Truth be told, I am not aware of real evidence that knowing people’s identity helps to prevent security incidents. I know that there are assumptions on that point and that there may even be a lot of consensus on that point, but I do not know of evidence.

To attack this assumption, I went and did some research into the value of identity. I found a statistic from the Violence Policy Center showing that in the first half of 2001, there were more than 200 incidents of murder-suicide in the United States. The people committing these acts were presumably known to the people they killed and to their communities, but that did nothing to protect their victims. When a person develops the state of mind leading them to kill others and themselves, identity is irrelevant.

This mindset — to kill others and kill themselves — is the mindset that the 9/11 terrorists had, though they got to it a different way. We didn't accurately know who they were. But that wasn't a proximate cause of the killing they did. If we are pursuing the prevention of the next terrorist incident, we must ask ourselves “What do we know when we know who someone is?”

We don’t know that a person lacks the motivation to kill others because we know them. With people killing others and themselves more than 200 times every six months, it seems clear that knowing someone is a poor protection against being killed by them. Identity-based systems seem a poor way to try to predict who poses a danger to security.

Rather, security that focuses on “tools and methods of attack” is much more likely to work. When you look at each of the vulnerabilities we have, it will be much easier to protect it against likely tools and methods of attack than to predict who will attack and interdict them in advance.

To bring this point home, let me use our common experience: I imagine you all live in homes and want to see them protected from burglary. To do so, are you researching all who might come to your residence and attempt to break in? Of course you aren’t. You are hardening your abode by placing locks on the doors and windows.

After 9/11, we all demanded to know “Who did this?” That is the appropriate question after harm has been done. But before harmful action has been taken, the appropriate question is “To what attacks are we vulnerable?” This should be our focus and we should dispense with the identity-based systems like the one we are discussing today.

Our privacy and civil liberties would be far more secure if this better approach to security were our top priority, and if the resources now devoted to identity-based security were directed to those purposes.
