This is a very long post containing technical information about how data security works, and how it interacts with a social contract. To read the whole thing, click under the cut.
This post has been edited to include an explanation of what metadata is.
If you’ve been watching my twitter stream, you’ve found yourself barraged by near-constant coverage of the ongoing NSA surveillance scandal. My opinion is pretty easy to divine: I’m appalled, and I’m relieved. Those of us who are into IT Security have known pieces of this story for years, and have cobbled together the rest based on a deep familiarity with how the technology works.
Ubiquitous surveillance is more powerful than nuclear weapons, and universal access by anyone means just that–if more than one person can get in, sooner or later anyone who wants to can. That’s how this game works. And, in my opinion, the people running the show don’t get that, and it’s a degree of malfeasance that can only be described as somewhere between criminal and treasonous.
That’s my opinion, which may be completely wrong. My position is somewhat different:
Free society only works when all concentrations of power can be checked by other concentrations of power. You and I have been left out of this loop for too long. It’s possible that, in the end, we’ll all decide we prefer a completely transparent society–but we must have the discussion, rather than letting it happen by default. Because it will change everything.
So, here are a few things you need to know about what’s going on.
What the Geeks Know, and Others Don’t
- The leaker is a low-level sysadmin
- Metadata is More Important Than Data
- Signals and Noise
- This is the Natural Result of Mission Creep
- Everyone is an Outsider
- There Is No Technological Solution
- My Opinion Doesn’t Matter
The Leaker is a Low-Level Sysadmin

This one is shocking everyone, because this is now the second case where some “nobody” was able to steal secrets that even most Senators don’t have access to. The situation sounds analogous to a groundskeeper at the Tower of London stealing the crown jewels.
But here’s the ugly truth: Networks must be administered. It’s a complex, difficult job, and anyone with physical access to the network switch can tap the line. Anyone with administrator’s access to a switch or a router can install listening software and siphon everything that goes through it (this is, in fact, what the NSA appears to have been doing), or crash a network, or disable a power grid or a communications grid or an air traffic control system. That’s your “data in motion,” and that’s how it gets stolen, modified, and attacked.
The same principle applies to “data at rest,” the stuff in databases, on hard drives, in RAM or storage anywhere in the world. The administrator of a system can read all the files on the system. That’s how it works. A user may employ tricks to hide their data (some very sophisticated tricks exist), or they may encrypt their data with a very secure scheme, and that will protect the data in most scenarios. But even then, the data will eventually get used, and when used, it can be captured. Someone with an administrator account will eventually have the power to pilfer just about anything they want.
(For my brothers and sisters in the security community, I know I’m grossly oversimplifying this. Bear with me. This is the 101-level class).
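Here’s a toy sketch of the distinction between “data at rest” and “data in use.” The cipher below is a trivial XOR stand-in, not a real encryption scheme, and the secret and key are made up–the point is only that to *use* encrypted data, a program must hold the plaintext somewhere, and that’s where an administrator can grab it:

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" -- a stand-in for real encryption, NOT secure.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

secret = b"meet at the usual place"
key = b"s3cr3t"

at_rest = xor_cipher(secret, key)   # what sits on disk: unreadable bytes
in_use = xor_cipher(at_rest, key)   # to use the data, it must be decrypted

# Encryption protects the data while it sits on disk...
assert at_rest != secret
# ...but the moment it is decrypted for use, the plaintext exists in
# memory, where anyone with administrator access can capture it.
assert in_use == secret
```

Swap in a real cipher and the logic is the same: the protection ends the moment the data is used.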
Snowden said in his interview that he could have shut the whole system down if he had wanted to. That’s not an idle boast. Someone with administrator’s access and a decent skillset (such as you’d need to do his job) has exactly that power. And there’s no way to change that situation.
And, just so you don’t have to read between the lines, that means that there are (and always will be) thousands of people (i.e. white-collar nobodies just like most of you) with catastrophic, classified access to any data the government acquires–and history tells us that many of them will not be as ethical as Snowden (see Aldrich Ames for a recent example).
Metadata is More Important Than Data

“Nobody is listening to your phone calls,” said the President. But here’s the thing: Nobody needs to hear your phone calls. Any good analyst can figure out your politics, sexual tastes, friends, business associates, education level, background, favorite driving/commute routes, vacation spots, hobbies, sports teams, religion, and drinks by watching your metadata–without ever reading your emails or listening to your phone calls.
Metadata is the information about your communications. Who did you call? When? From where (and from what elevation)? What camera did you take this picture with? Where was it snapped? When was it taken? Every time you generate data, the devices you use generate metadata to help it organize things so that you can find them later. The word “metadata” means “data that describes data.”
Metadata does not contain the words of your phone call, or the text of your email, but it does contain everything else: The serial number of your cell phone, the email program you composed on, the operating system you use, the time zone you’re in, the times you’re active, etc. etc. etc.
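To make that concrete, here’s a sketch of what an analyst can infer from metadata alone. The call records below are fabricated, and the fields are simplified, but the technique (counting patterns, never reading content) is the real one:

```python
from collections import Counter
from datetime import datetime

# Hypothetical call-detail records: no words, no content, only metadata.
calls = [
    {"to": "555-0101", "time": "2013-06-10T23:15", "cell_tower": "downtown"},
    {"to": "555-0101", "time": "2013-06-11T23:40", "cell_tower": "downtown"},
    {"to": "555-0199", "time": "2013-06-12T09:05", "cell_tower": "office-park"},
    {"to": "555-0101", "time": "2013-06-12T23:55", "cell_tower": "downtown"},
]

# Who matters most to this person? The most-dialed number.
top_contact, freq = Counter(c["to"] for c in calls).most_common(1)[0]

# When are they active? Count the late-night calls.
late_night = sum(1 for c in calls
                 if datetime.fromisoformat(c["time"]).hour >= 22)

print(top_contact, freq)  # one number dominates: 555-0101, called 3 times
print(late_night)         # 3 of 4 calls happen late at night, downtown
```

Three fields and four records already suggest a relationship, a schedule, and a location pattern. Scale that to a year of records across phone, email, and GPS, and no wiretap is necessary.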
Using metadata only, you can fall under suspicion based not on anything you’ve said or done, but based upon who you know and what you like. Metadata’s job in an analyst’s hand is to create suspicion which you may then be forced to dispel–which, as anyone who has been subject to a tax audit knows, is difficult bordering on impossible. Proving one’s innocence is a losing game–and the more innocent you are, the more difficult it is to prove.
Signals and Noise

Vast stores of data are useless unless you can extract meaningful information from them. Unfortunately, it’s a truism that the more data you have, the less you know. It’s not just because of the way human psychology works, it’s because of the chaotic nature of the world. The more factors you have to control for, the less certain you can be of your conclusions. That’s not a limitation of today’s technology, that’s an inescapable fact of mathematics (read up on Chaos Theory and Fractal Geometry for more background on this).
But if all this data is useless for solving crimes, you may ask, surely it’s not a threat?
Unfortunately, the opposite is true. A good data mining operation can generate thousands of promising-looking suspects and leads from a large surveillance database. The trouble is that there’s no good way to sort the “promising leads” from the “actual suspects” without engaging in police-state tactics–and there’s never a guarantee that while you’re spending massive resources on data acquisition and mining, you’re not missing some major operation happening right under your nose.
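A quick back-of-the-envelope calculation (the numbers here are hypothetical) shows why the leads swamp the suspects. Even a classifier that is 99% accurate drowns in false positives when the thing it hunts is rare. This is the classic base-rate problem:

```python
# Hypothetical numbers: a population of 300 million, 3,000 actual
# plotters, and a screening system that is 99% accurate both ways.
population = 300_000_000
actual_suspects = 3_000
true_positive_rate = 0.99   # real suspects it correctly flags
false_positive_rate = 0.01  # innocents it wrongly flags

flagged_guilty = actual_suspects * true_positive_rate
flagged_innocent = (population - actual_suspects) * false_positive_rate

# What fraction of the people on the "flagged" list are real suspects?
precision = flagged_guilty / (flagged_guilty + flagged_innocent)
print(round(precision, 5))  # ~0.00099
```

Fewer than one flagged person in a thousand would be a real suspect; the other three million are innocent people who now need investigating. Make the classifier ten times better and the list is still overwhelmingly false alarms, because the base rate is so low.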
Some recent terrorist attacks that happened right under the noses of the NSA, CIA, and FBI include: The attacks of September 11th, the Boston Bombing, the London transit bombings, and just about every other terrorist attack (attempted or accomplished) in the western world since the late 1990s. All of these attacks were, in theory and in retrospect, predictable based on the intelligence the government agencies had at hand. But none of them were prevented, because there is no good way (and never will be) to reliably sort the signal from the noise.
In that situation, the only solution (from the data analyst’s point of view) is to regard everyone who raises a little bit of interest as being “under suspicion.” And we’ve seen how well that works.
As far as we know, not a single crime has been prevented, and no single criminal has been caught, by the NSA dragnet so far. Regular police work, on the other hand, is ruthlessly efficient (even in very poorly-run departments). This is because police work is domain-specific, relies on a very small set of universal variables, and is based on procedures that have numerous checks and balances that help ensure the quality of the data.
This is the Natural Result of Mission Creep

Just like bureaucracies, IT is subject to mission creep. Whenever you turn a task over to IT geeks or technocrats (among whom I count myself, at least marginally), you submit your thinking to an endless arms race: “The bad guys can do X, we need to do Y to keep up, and tomorrow we’ll be able to do Z to counter the bad guys doing AA.”
Intelligence and security are not self-governing systems, because they operate in a sea of paranoia. A good security person is by definition a paranoid nutcase, and paranoia feeds upon itself (just look at Alex Jones, or the House Un-American Activities Committee in the 1950s and 60s).
The trouble is, an open society requires trust in order to function. That trust need not be blind–I can well imagine a completely open society where all details of everyone’s lives and dealings (including those in power) are freely available. But when the eye looks only one way, it creates distrust.
First, it is a manifestation of distrust. A government that does not trust its citizens is like a wounded tiger, prone to lashing out at friend and foe alike in order to keep all possible threats at bay, whether that possible threat is its own cubs, or the veterinarian who could heal it, or the mate that would otherwise guard it. Distrust makes governments irrational, and this is the first and most important reason why the paranoid intelligence community–which every government needs!–should always be subject to strict, stern oversight by civilians.
Second, a distrusting government quickly loses the trust of its citizens. This is much worse than it sounds, because it sets off a vicious cycle that is difficult to break. A government untrusted by its citizens invariably sees a spiraling set of problems. Tax revenues go down as people defect to the underground economy. People in the underground economy have no legal recourse for disputes, and violence eventually becomes the default method. In response to falling revenues and rising crime, the government raises new taxes and imposes new rules and intensifies law enforcement, all of which erode trust and drive more people underground. As citizens gradually lose trust in the social contract, they fracture and balkanize along class, ethnic, professional, and religious lines. They stop getting involved in government, and this creates a selection pressure on public officials, as the situation disproportionately rewards the corrupt, the paranoid, and the sociopathic. If this cycle continues long enough, the result is a failed state–like Iraq, or Afghanistan, or the former Yugoslavia in the 1990s.
This path isn’t inevitable–many states dance this dance for a while and then reform. We’ve done it here several times in our history. And a “failed state” isn’t a forever sentence–failed states almost always eventually rebuild, but why go through that misery needlessly?
Since the trust of the citizens is the only thing that keeps a civilization ticking over, this is a huge danger.
Everyone is an Outsider

The natural reaction to all of this is to say “I have nothing to hide, nobody will be interested in me.” If you were a Republican under Bush, you were much more likely to say that then than you are now. If you’re a Democrat, you are more likely to chafe under Bush’s abuses and hand-wave Obama’s, even on issues where the policy is identical.
It’s human nature to give your team a pass. To extend the benefit of the doubt, even when you’re troubled. But this kind of thinking can get you into trouble in politics and in security. Whether you’re handing a password to a sysadmin or an expanded power to a President, the principle is the same: Anything the current guy can do, the next guy can do (and the next guy will be able to do it better and more efficiently). We do not live in a monolithic culture. There are no small groups of outsiders who are easy to marginalize and forget about. Each and every one of us is a minority with opinions, values, views, and morals that frighten some other group of our fellow citizens.
In other words, when you hand someone a gun, they might not shoot you. But you can’t trust that the person THEY hand it to won’t.
Ah, but there’s that word again: Trust. Didn’t I just say we needed to re-create an environment of trust?
I did. And we do. But trust is not the same as blind trust. If you lend your car to a relative stranger (or even a good friend), you want to make sure they have insurance before they take it away. If they abuse your trust, you want recourse. You hope to God you never need it, but having it there makes it possible to trust in the first place.
Secret programs overseen by secret courts who entertain secret legal theories provide no recourse for those caught up in their net. Every sysadmin makes mistakes. Every single bureaucracy has its collateral damage, even when there’s not a hint of malice in the system (for a stunning example of this, see the case of Richard Jewell).
And, of course, malice does creep into every system. A good-hearted cop can acquire a paranoia about a certain class of people. Sometimes, the cop doesn’t start off with a good heart to begin with. Today the Tea Party and Occupy are national demons. When I was growing up, it was neopagans. A good system is fault-tolerant–it realizes such things are an inevitable problem, and it provides recourse.
Trust, but verify. Good trust is never blind. It is always verifiable.
There Is No Technological Solution

Technology will continue to get more powerful, and it will continue to create problems and benefits for us all. It is not possible to “innovate” your way out of this dilemma, because every new innovation will spark its own version of the same problem.
The only solution to a problem of this sort is cultural. We must decide how we will allocate the power such technology gives to governments, and how we will hold those governments accountable for the ways in which they use that power. There’s no dodging the issue, because this IS the central issue of civilization. Everything else–race, class, environment, freedom, business regulation, criminal justice, and the future shape of human society–will be predicated upon how we decide (or fail to decide) what happens next on this topic.
My Opinion Doesn’t Matter

My personal preference is for an open society with effective, but highly limited, government. I’m willing to risk a good deal of security to have the opportunities that such a society presents. Your preferences may be different.
The good news is, all of the above leaves room for all kinds of arrangements where security is concerned. You can have a transparent system that has high levels of regulation and social welfare and corporate welfare, you can have a radical libertarian system with almost no government involvement in day-to-day life, or you can have anything in between, and the system can still function…
…so long as that trust remains intact.
For the sake of our civilization, I hope you all bear that in mind in the months to come, when we will be debating how to govern the surveillance state, and whether to dismantle it, and how to keep our security and our freedom in whatever proportions we decide they’re important.
We all have a dog in this fight. Let’s make sure it’s one we take seriously.