The two bomb blasts near the Boston Marathon’s finish line last April were the most significant acts of terrorism on U.S. soil since Sept. 11, 2001. Other contemporary terror attacks claimed more lives than the three lost in Boston (the shooting by Nidal Malik Hasan at Fort Hood in 2009 killed 13), but the Marathon attack had a greater emotional impact. More than 250 were wounded, some horrifically, including many victims whose limbs had to be amputated. The around-the-clock coverage the Boston attack received is testament to its civic significance.
The bombings achieved a key goal of terrorist attacks: They scared many of us. In The National Journal, Ron Fournier wrote that the attack was notable “for its social significance,” for the fact that “death at the finish line in Boston makes every place (and everybody) less secure”—including malls, churches and schools. He feared that the attack might signal “a ‘new normal’ for America,” leaving “no place and nobody” feeling safe.
Fournier is correct that in an open society there is an almost limitless number of targets. But in times of tragedy, it is easy to perceive social vulnerabilities while overlooking social resiliency and strength. The fear that Boston might be only the beginning has—thus far—proved unfounded.
A year after the Boston attack, it is worth understanding the strengths that have prevented a spiral of more attacks. Post-Boston revelations about the scope and scale of U.S.-led electronic surveillance, particularly that conducted by the National Security Agency, also make it worthwhile to take stock of the difficult balancing act that counterterrorism policies have to maintain. We need an understanding of the real tradeoffs between security and openness, and a discourse on those tradeoffs that is less knee-jerk and binary.
One of the core strengths of the United States in grappling with terrorism is the fact that the allure of terrorist violence is dimmer than our adversaries would like. Al Qaeda has mustered its resources into an effort to rally American Muslims to its jihadist cause, but levels of terrorist violence remain lower than in the 1970s. As Brian Michael Jenkins of the RAND Corporation reports, in the 1970s there were “60 to 70 terrorist incidents, most of them bombings, on U.S. soil every year—a level of terrorist activity 15 to 20 times that seen in most of the years since 9/11.” From 1970 to 1978, terrorist incidents in the United States claimed 72 lives, greater than the loss of life inflicted by terrorism since 9/11.
Of course, 9/11 is the reason there has been greater concern about terrorism than in the 1970s, because those attacks showed that some terrorists have the desire and ability to carry out mass casualty attacks on U.S. soil. But there is a simple reason we have not seen terrorists striking one soft target after another on U.S. soil. They have not attracted enough people to their cause. And for the immediate future, it is unlikely that any violent terrorist cause will be able to mobilize American sympathizers to strike indiscriminately in the way Fournier feared.
But if terrorism has not become an epidemic, it remains a problem that the United States must defend against. In doing so, policymakers must balance competing considerations. One of these, the balance between security and privacy, came into sharp focus less than two months after the Boston bombings, when the British newspaper The Guardian published the first in a series of revelations about the scope of the N.S.A.’s domestic and international surveillance.
The revelations made by Edward J. Snowden—now a household name—are wide ranging, and some raise real concerns that illustrate the tension between security and privacy. The crucial revelation in this regard is the U.S. government’s retention of electronic metadata about Americans, including such information as where telephone calls originated, what numbers they connected to and their duration.
Digital Needles, Data Haystacks
This N.S.A. data collection obviously raises several concerns, chief among them the privacy ramifications of government retention of data like this, which includes telephone calls to suicide prevention hotlines, to drug or alcohol treatment centers and to phone numbers that provide information about medical conditions. But the government is not the only data collector, and the intense focus on government intrusions on privacy in public discussion obscures the bigger picture. Three factors distinct from the threat of terrorism have converged to erode electronic privacy: a legal framework that has remained static since the 1970s, changes in our use of technology and the tracking by commercial providers of our online habits.
The laws governing electronic privacy remain stuck in 1979. In that year, the Supreme Court decided the case of Smith v. Maryland, which addressed whether the State of Maryland needed a warrant to install a pen register (which would record telephone numbers called, but not the contents of those calls) on a suspect’s home phone. The court held in Maryland’s favor, finding that the Fourth Amendment protected contents of a call, but not information about the call, like the number dialed. The Fourth Amendment only applies when the government’s actions intrude upon a reasonable expectation of privacy, and the court found that no such expectation existed for the numbers a person dials. After all, phone users know they convey this information to a third party, “since it is through telephone company switching equipment that their calls are completed.”
In other words, when a third party knows what a person is doing in an electronic environment, no reasonable expectation of privacy exists. Based on Smith, it appears the N.S.A.’s metadata collection is legal since this information has already been transmitted to third-party providers. A more disturbing extension of Smith’s reasoning is that our use of the Internet enjoys little or no constitutional protection.
While Smith’s framework remained static, our use of technology evolved. The number of worldwide Internet users exploded from roughly 150 million in 1998 to one billion by 2005. Social media, which for all intents and purposes did not exist in 1998, started to become more common in this period. Social media postings have by now become second nature, and users divulge far more than they realize. Last year, for example, researchers found that by relying only on users’ Facebook “likes” they could discern which users were gay and even how users voted.
People were divulging intimate details about themselves without realizing it, and commercial providers’ capacity to track users’ digital lives grew. Providers track user activity in several ways. Some tracking is a technical necessity: a server has to authenticate a password in order to return a user’s requests. Cookies, small pieces of data a website places in the browser, store this information so that a Facebook user, for example, does not have to re-enter his or her password with every click to a different page. But cookies can also follow the user’s activity across the web, potentially recording information entered into different pages and building a profile of the user. And cookies are just one method of tracking. André Pomp, an academic based in Germany, notes that in a typical browsing session, a user will encounter “hundreds of different trackers.”
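The cross-site profiling described above can be illustrated with a toy sketch. All names here are hypothetical; this is not any real tracker’s code, only a minimal model of how one identifier, set once, can link a user’s visits across unrelated sites that embed the same third party.

```python
# Toy model of third-party cookie tracking (hypothetical names throughout).
import uuid

class ToyTracker:
    """A third-party server embedded on many sites via an ad or widget."""

    def __init__(self):
        self.profiles = {}  # cookie ID -> list of pages where the tracker was seen

    def handle_request(self, page_url, cookies):
        # Reuse the visitor's existing tracking cookie, or mint a new one.
        visitor_id = cookies.get("tracker_id") or str(uuid.uuid4())
        cookies["tracker_id"] = visitor_id
        # The browser sends the same ID back from every site that embeds the
        # tracker, so visits to unrelated sites accumulate into one profile.
        self.profiles.setdefault(visitor_id, []).append(page_url)
        return cookies

tracker = ToyTracker()
browser_cookies = {}  # the browser persists cookies between requests
for page in ["news.example/politics", "shop.example/shoes", "health.example/symptoms"]:
    browser_cookies = tracker.handle_request(page, browser_cookies)

# A single cookie ID now links activity on three unrelated sites.
profile = next(iter(tracker.profiles.values()))
```

Real trackers use the same basic mechanism, though with far more signals (browser fingerprints, pixel tags) than this sketch shows.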
There are advantages to treating personal data as a commodity. Companies can provide remarkable services at no cost to the user because they make money by getting to know their users’ interests, aspirations, likes and dislikes. But there are also disadvantages: When we consider the information we are disclosing and the methods of data analysis now available, we might grow uncomfortable with what these companies know about us.
Counterterrorism policies were built against this backdrop of encroaching web providers armed with progressively improving technologies. After 9/11, the N.S.A. was charged with sifting through electronic data to shake out potential threats. The agency wanted a lot of data as a result. As James M. Cole, deputy attorney general, said, “If you’re looking for the needle in a haystack, you have to have the haystack.” This is not to say that the N.S.A.’s programs should be accepted as they are, but the present debate has taken on a Manichaean quality in which the N.S.A. is often portrayed as rapacious. It is in fact aggressively pursuing the mission with which it was charged at a time when privacy itself was shedding its old meaning.
The Screener’s Dilemma
But privacy and security are not the only tradeoffs in a post-Boston, post-9/11 world. There is also the matter of the burden imposed by counterterrorism policies and their economic costs.
The agency that has been most emblematic of the burdens imposed by efforts to catch terrorists is the Transportation Security Administration. Its post-9/11 screening procedures almost seemed designed to be as burdensome as possible, though for a noble reason: the government wanted to ensure that no group felt unfairly singled out.
But we ended up with a system where even such a well-known figure as Al Gore received unnecessary scrutiny at the airport. Gore was twice singled out for extra screening during a trip to Wisconsin in 2002. In the same year, the 75-year-old congressman John Dingell was forced to strip to his underwear in Washington, D.C.’s Reagan National Airport to prove that his artificial hip, and not a weapon, had set off a metal detector. Singling out someone like Gore—a public figure if there ever was one—wastes policing resources. Gore’s adventures with T.S.A. screeners were indicative of a broader inefficiency that, aggregated over the system, caused airport security to be more resource-intensive and burdensome than necessary.
New T.S.A. procedures reported in October 2013 represent a change. Rather than maintain neutrality in its screening, T.S.A. has begun to aggressively differentiate perceived risks, with the screening process beginning before passengers even reach the airport. Under the new procedures, T.S.A. assesses the level of scrutiny that should be applied based in part on information from various databases. Though the information the agency will rely on was not disclosed, The New York Times noted that sources may include car registration and employment data, as well as a passenger’s “tax identification number, past travel itineraries, property records, physical characteristics and law enforcement or intelligence information.”
The civil liberties and privacy communities have been predictably displeased. But is the policy justifiable? Most Americans would probably think so. The erosion of privacy or civil liberties appears marginal, while these policies, by promoting more individualized risk assessment, may enhance safety while reducing the burden and expense of security procedures. It is best to focus limited resources on the most serious threats.
For example, assume T.S.A. needs to allocate resources between two passengers: a 35-year-old male with a prior conviction for making explosive devices, and a 90-year-old retiree who has accumulated no criminal record. Would an ideal system treat them the same? Though this is an extreme example, with 1.8 million passengers screened every day, there will be similarly large variations. The view that risk differentiation should never occur is blind to that fact. And risk differentiation does not always result in passengers facing more scrutiny; it can do the opposite. T.S.A.’s PreCheck, perhaps the best thing to happen to air travel in the past 13 years, allows select passengers to pass through a faster, less intrusive checkpoint.
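The allocation logic in the example above can be sketched as a toy scoring function. This is purely illustrative: T.S.A.’s actual methodology is not public, and the factors and thresholds below are invented for the sake of the example.

```python
# Hypothetical risk-differentiation sketch; not T.S.A.'s actual methodology.
def screening_level(age, prior_explosives_conviction, in_precheck):
    """Map a few illustrative factors to a screening tier."""
    score = 0
    if prior_explosives_conviction:
        score += 10   # serious derogatory information
    if age >= 75:
        score -= 2    # illustrative lower-risk cohort
    if in_precheck:
        score -= 5    # pre-vetted traveler
    if score >= 5:
        return "enhanced"   # extra scrutiny
    if score <= -2:
        return "expedited"  # faster, less intrusive lane
    return "standard"

print(screening_level(35, True, False))   # the 35-year-old with a conviction
print(screening_level(90, False, False))  # the 90-year-old retiree
```

Even this crude sketch shows the point of the example: differentiation pushes scrutiny toward the first passenger and away from the second, rather than treating both identically.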
One can think of the new system as focusing scrutiny on the unfortunate few or as wisely lightening the load on the lucky many. But absent new information that the T.S.A. is dealing with data in a way that may make us feel uncomfortable—such as, for example, accessing medical records or undertaking a wholly new collection of data on Americans—the new procedures are beneficial. The controversy surrounding them illustrates the difficulties involved in forging appropriate counterterrorism policies.
The Balancing Act
There are no perfect answers to the tradeoffs involved in pursuing security and efficiency in counterterrorism while upholding the values of privacy and civil liberties. But the fact that this balancing act is extraordinarily difficult points to the inadequacy of the debates about counterterrorism that were supercharged by the Snowden revelations. An ideal system would reduce the inconvenience that antiterrorism policing imposes while reviving the concept of privacy and providing an acceptable level of security. But a step toward one of these values may entail at least half a step backward with respect to another.
The T.S.A. pre-screening measures are an example of an innovation that aligns the goals of reducing public inconvenience and enhancing security. Policies that advance these competing values simultaneously are the most welcome, but they may not always dovetail so neatly. In those instances where they cannot, there is no ready answer as to which value should be given the prevailing priority. Indeed, declaring that one value should always have primacy risks dramatic overreaction in one direction or another.
Perhaps a good place to start toward a sustainable balance between security and privacy is by asking whether lawmakers should limit commercial entities’ ability to retain user data. There is, of course, good reason for these entities to track users. User data gives them a source of revenue, and nobody should feel bitter that companies are trying to make money. But is old data essential, or even relevant, to their business efforts? Do commercial entities need to know what websites you visited and to whom you sent instant messages eight or 10 years ago? The government could require these entities to purge user data (including messages sent, websites visited, records of individuals called and geolocations) that is more than, say, five or seven years old if a) the user has tried to get rid of it by, for example, deleting the information; and b) there is no independent reason, such as litigation or national security concerns, to retain it.
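The proposed retention rule amounts to a simple conditional, sketched below. The five-year threshold and field names are illustrative choices for the example, not a proposed statute.

```python
from datetime import datetime, timedelta

# Illustrative threshold: the article suggests "five or seven years."
RETENTION_LIMIT = timedelta(days=365 * 5)

def should_purge(record, now, user_requested_deletion, legal_hold):
    """Apply the proposed rule: purge data past the retention limit if
    (a) the user has tried to delete it, and
    (b) no independent reason (litigation, national security) requires
        keeping it."""
    is_old = now - record["created"] > RETENTION_LIMIT
    return is_old and user_requested_deletion and not legal_hold

# An eight-year-old message the user deleted, with no legal hold: purge it.
record = {"created": datetime(2006, 3, 1), "kind": "instant_message"}
print(should_purge(record, datetime(2014, 4, 1), True, False))
```

The interesting policy questions live in the parameters: how long the limit should be, what counts as a user’s attempt to delete, and who adjudicates an "independent reason" to retain.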
This would be a small step, but one that could open new avenues of discussion about privacy. It is a conversation that involves not only liberty and security, but also commerce rights, Internet users’ appetite for free and convenient services and the desire for privacy not only from one’s government but also one’s neighbors. The right kind of conversation would recognize this.
But given the way the surveillance debate has proceeded, we are not likely to get there. Our failed national discussion about privacy and security is emblematic of our broader inability to address the hard questions that dominate the counterterrorism sphere.