On June 6, the Guardian began publishing stories about how the U.S. National Security Agency intercepts as much as half of the world’s digital communications1, often between American citizens. That same day, Edward Snowden, the former intelligence contractor responsible for leaking details of the program, gave an interview to one of the paper’s journalists, Glenn Greenwald, in his hotel in Hong Kong, where he had fled to escape prosecution. The whistleblower had given up his high-paying job at the agency and the life he’d built with his girlfriend in Hawaii. Greenwald, like the rest of the world, wanted to know why he did it.
“When you’re in positions of privileged access like a systems administrator for the sort of intelligence community agencies,” Snowden told Greenwald, “you’re exposed to a lot more information on a broader scale than the average employee, and because of that you see things that may be disturbing but over the course of a normal person’s career you’d only see one or two of these instances.”
The whistleblower did not see himself as exposing the corrupt acts of any one NSA agent or program. Instead, Snowden was revealing that the once-small military outfit built for signals intelligence in the Korean War had since evolved into a domestic spying organization that today spends billions to fund thousands of agents inside a 60-building surveillance complex.2 The NSA exploits secret surveillance court warrants and encryption loopholes, and taps into fibre optic Internet cables in order to spy on millions of Americans. (And according to former NSA Senior Executive Thomas Drake, who blew the whistle on the agency in 2006, its internal goal for the past decade has been to “own the Internet.”)
To Snowden, the lesson was simple: Somewhere in its six-decade history, the agency had spun out of control. The question is, how did it all go wrong?
Before the NSA came to life on the eve of Dwight Eisenhower’s election, its job was done by a loose group of three independent intelligence outfits in the Army, Navy, and Air Force. The groups came into their own during World War II, as Washington began to see that signals intelligence, or SIGINT, could be invaluable in wartime3: On the Western front, British cryptographer Alan Turing’s work breaking the German Enigma cipher allowed the Allies to decode German movements ahead of the invasion of Normandy. On the Pacific front, U.S. intelligence became so crucial that Admiral Chester Nimitz said SIGINT deserved credit for the Allied victory during the Battle of Midway.4
But just as the American SIGINT program’s successes came into focus during the war, so did its weaknesses. The three groups, two of which were run out of separate, converted women’s schools, often viewed one another as competitors. At one point, the Army and Navy went so far as to divide up intelligence work based on whether the day of the month was odd or even. (NSA historian Thomas Johnson would describe this peculiar practice as a “Solomonic Solution.”) The British government—in many ways superior in those days in terms of intelligence gathering—would later liken dealing with the American intelligence community to dealing with the colonies after the Revolutionary War.
The effects of this disorder were operational, as well. When, a few years later, North Korea invaded the South, the intelligence community was taken by surprise. The SIGINT agencies might have been able to predict the threat, as they had intercepted messages from Korea in 1949, but they had no Korean translators, dictionaries, or typewriters. And throughout the Korean War, the intelligence infrastructure continued to erode, so much so that in June of 1952 an Army general complained of how “during the between-wars interim we have lost, through neglect, disinterest and possibly jealousy, much of the effectiveness in intelligence work that we acquired so painfully in World War II.”
Read the rest at The Daily Dot.
NSA and GCHQ: the flawed psychology of government mass surveillance
Research shows that indiscriminate monitoring fosters distrust, conformity and mediocrity
Recent disclosures about the scope of government surveillance are staggering.
We now know that the UK’s Tempora program records huge volumes of private communications, including – as standard – our emails, social networking activity, internet histories, and telephone calls. Much of this data is then shared with the US National Security Agency, which operates its own (formerly) clandestine surveillance operation. Similar programs are believed to operate in Russia, China, India, and several European countries.
While pundits have argued vigorously about the merits and drawbacks of such programs, the voice of science has remained relatively quiet. This is despite the fact that science alone can lay claim to a wealth of empirical evidence on the psychological effects of surveillance. Studying that evidence leads to a clear conclusion and a warning: indiscriminate intelligence-gathering presents a grave risk to our mental health, productivity, social cohesion, and ultimately our future.
Surveillance impairs mental health and performance
For more than 15 years we’ve known that surveillance leads to heightened levels of stress, fatigue and anxiety. In the workplace it also reduces performance and our sense of personal control. A government that engages in mass surveillance cannot claim to value the wellbeing or productivity of its citizens.
Surveillance promotes distrust between the public and the state
People will trust an authority to the extent that it is seen to behave in their interest and to trust them in return. Research suggests that people tolerate limited surveillance provided they believe their security is being bought with someone else’s liberty. The moment it becomes clear that they are in fact trading their own liberty, the social contract is broken. Violating this trust changes the definition of “us” and “them” in a way that can be dangerous for a democratic authority – suddenly, most of the population stands in opposition to its own government.