One month ago, in her FP article Think Again: Big Data, Kate Crawford wrote:
While many big-data providers do their best to de-identify individuals from human-subject data sets, the risk of re-identification is very real. Cell-phone data, en masse, may seem fairly anonymous, but a recent study on a data set of 1.5 million cell-phone users in Europe showed that just four points of reference were enough to individually identify 95 percent of people. There is a uniqueness to the way that people make their way through cities, the researchers observed, and given how much can be inferred by the large number of public data sets, this makes privacy a "growing concern." We already know, thanks to academics like Alessandro Acquisti, how to predict an individual's Social Security number simply by cross-analyzing publicly available data.
Today, some of the fears Crawford was talking about seem to have been realized with the news, broken by the Guardian's Glenn Greenwald, that the NSA has ordered Verizon to provide daily information on all telephone calls within its system for a three-month period ending on July 19. That could mean records on more than 98.2 million customers, and there are obviously unanswered questions about which other companies received similar orders.
It's only fairly recently that the technology for analysis has advanced to the point that a dataset of this size would be useful. As Greenwald wrote, the data doesn't include personal information or the content of calls, but "its collection would allow the NSA to build easily a comprehensive picture of who any individual contacted, how and when, and possibly from where, retrospectively."
As Shane Harris points out, the NSA's potential uses for this data could go beyond tracking individuals:
As I wrote in my book, The Watchers, the NSA has long been interested in trying to find unknown threats in very big data sets. You'll hear this called "data mining" or "pattern analysis." This is fundamentally a different kind of analysis than what I described above where the government takes a known suspect's phone number and looks for connections in the big metadatabase.
In pattern analysis, the NSA doesn't know who the bad guy is. Analysts look at that huge body of information and try to establish patterns of activity that are associated with terrorist plotting. Or that they think are associated with terrorist plotting.
The NSA spent years developing very complicated software to do this, and met with decidedly mixed results. One such invention was a graphing program that plotted thousands upon thousands of pieces of information and looked for relationships among them. Critics called the system the BAG, which stood for "the big ass graph." For data geeks, this was cutting edge stuff. But for investigators, or for intelligence officials who were trying to target terrorists overseas, it wasn't very useful. It produced lots of potentially interesting connections, but no definitive answers as to who were the bad guys. As one former high-level CIA officer involved in the agency's drone program told me, "I don't need [a big graph]. I just need to know whose ass to put a Hellfire missile on."
But of course, the technology to do this kind of pattern analysis has improved dramatically since the Bush years. As Emanuel Pastreich put it in an op-ed written several days before the latest news, "The dropping cost of computational power means that individuals can gather gigantic amounts of information and integrate it into meaningful intelligence about thousands, or millions, of individuals with minimal investment."
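To make that distinction a bit more concrete, here is a minimal, purely illustrative sketch in Python of the kind of contact-graph analysis that call metadata makes possible. The record format, the field names, and the choice to rank numbers by their count of distinct contacts are my own assumptions for illustration, not a description of the NSA's actual systems.

# Illustrative sketch only: build a contact graph from assumed call-metadata
# records of the form (caller, callee, timestamp) and list the numbers with
# the most distinct contacts. Nothing here reflects any real agency tooling.
from collections import defaultdict

def build_contact_graph(call_records):
    # Map each phone number to the set of numbers it has exchanged calls with.
    contacts = defaultdict(set)
    for caller, callee, _timestamp in call_records:
        contacts[caller].add(callee)
        contacts[callee].add(caller)
    return contacts

def most_connected(contacts, top_n=5):
    # Rank numbers by how many distinct numbers they have been in touch with.
    return sorted(contacts, key=lambda number: len(contacts[number]), reverse=True)[:top_n]

if __name__ == "__main__":
    # A handful of made-up records standing in for a metadata dump.
    sample = [
        ("555-0101", "555-0102", "2013-06-01T09:30"),
        ("555-0101", "555-0103", "2013-06-01T10:05"),
        ("555-0104", "555-0101", "2013-06-02T14:20"),
        ("555-0102", "555-0103", "2013-06-03T08:45"),
    ]
    graph = build_contact_graph(sample)
    print(most_connected(graph, top_n=3))

Even a toy version makes the scale problem apparent: applied to tens of millions of subscribers, nearly everyone ends up linked to nearly everyone else, which helps explain why, as Harris notes, the "big ass graph" produced lots of potentially interesting connections but no definitive answers.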
I've highlighted some exciting uses of this kind of computational power on this blog, but we're also seeing the birth of a kind of government surveillance that lawmakers and privacy advocates have never had to contend with before.
Update: Looks like there's more to come.
Barton Gellman and Laura Poitras of the Washington Post report that "the National Security Agency and the FBI are tapping directly into the central servers of nine leading U.S. Internet companies, extracting audio, video, photographs, e-mails, documents and connection logs that enable analysts to track a person’s movements and contacts over time." The companies involved are Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube, and Apple.
Gellman and Poitras write that the program, called PRISM, and others like it, show how "fundamentally surveillance law and practice have shifted away from individual suspicion in favor of systematic, mass collection techniques."
[Photo: Justin Sullivan/Getty Images]
For a study published in the journal Social Forces (ungated version), Ofer Sharone of MIT's Sloan School of Management interviewed white-collar job seekers in two centers of high-tech industry -- San Francisco and Tel Aviv -- and noticed some interesting differences in how the subjects described the reasons for their unemployment:
Israeli job seekers consistently attribute their difficulties to external factors, and most commonly, as in the case of Eldad, to the "system." The "system" -- by which Israelis are usually referring to private employment agencies and testing institutes described below -- is described as a "meat market" or a "conveyor belt." Job seekers' experience is that they are not evaluated on their true merits but screened-out using arbitrary proxies, or as Eldad put it, "buzzwords." Over time Israeli job seekers report feeling increasingly like they are "invisible" and "at a loss" vis-à-vis a blind and arbitrary system. In addition, Israeli job seekers also explain their difficulties as arising due to the tight labor market conditions for someone with their level of skills and experiences, and to the actions or inactions of the State, which is seen as standing behind both the market and the dominant labor market institutions that form the "system."
By contrast, American white-collar job seekers typically come to feel that there is something wrong with them. American expressions of self-blame vary with respect to what aspect of the self is to blame. Explanations of the most significant obstacle to their getting a job included: "lack of self-confidence," "low self esteem," an absence of "self-discipline," not being "good at interviews," being a "bad networker," or not knowing "what I really want to do." In some cases the self-blame is simply expressed as: "I didn't get the job so I must have done something wrong." Over time, the nature of the self-blame tends to become less focused on one's job searching capacities and more focused on one's inner self. American job seekers, after several months of job searching, often report believing that they are not finding work because they are somehow "flawed" or "defective." Becky, a thirty-six-year-old translator, expressed it thus: "I feel more like an orphan. No one wants me, and I don't want to impose myself on anyone."
Sharone writes that in contrast to Israel, where hiring is often done through the intermediary of private testing agencies and there's a greater emphasis on tangible skills, in the United States "getting a job requires establishing your 'fit' with a particular employer.... Ultimately what matters most are intangible inner qualities that come through in your presentation-of-self. The focus is on the person behind the skills." While Israelis compare job interviews to oral exams, U.S. workers tend to describe them as "first dates."
Sharone believes this culture of individual accountability is at the heart of the American self-help industry represented by books like What Color Is Your Parachute?, and is also the reason why there's comparatively little political organization around the issue of unemployment in the United States.
The finding fits with the common stereotype of Americans as inherently individualistic, even to a fault, but I'd be interested to know if there are other countries that look more like the United States in this regard.
Via National Affairs
[Photo: Spencer Platt/Getty Images]
As a writer, newly minted nominee for U.S. ambassador to the United Nations Samantha Power is still best known for her 2002 book A Problem From Hell, a blistering indictment of U.S. inaction to confront genocide in the 20th century. But the former reporter has also been a prolific contributor to publications including the New Yorker, the New York Review of Books, and Foreign Policy.
Looking over her past writing today, I came across an interesting article from 1999, published in the journal Daedalus -- Power was at Harvard's Carr Center for Human Rights Policy at the time -- discussing the use of Holocaust analogies in arguing for action to prevent mass atrocities.
Power writes that while the U.S. public was slower than is often realized today to confront the realities of the Holocaust -- she cites the theater adaptation of The Diary of Anne Frank from 1955 and the 1961 film Judgment at Nuremberg as playing a major role in waking Americans up to what had happened -- the Shoah came to occupy a singular place in the American consciousness, and not surprisingly was frequently invoked as an analogy for describing events in Cambodia, Bosnia, Rwanda, and other sites of genocide. She writes that, "In the face of genocide, supporters of humanitarian intervention have seized upon the Holocaust metaphor as if it might constitute a moral life preserver in a sea of interest-based callousness."
She also suggests that when American presidents start "Holocaustizing," you can start to expect the cavalry:
American policymakers are the most transparent in their aims. They often begin Holocaustizing as soon as they have decided to act militarily against a foreign foe. They assume that the analogy will add a moral veneer to their policy. In the buildup to the Gulf War, for example, George Bush transformed Saddam Hussein into American "Enemy Number One" less by portraying him as the man who seized Kuwaiti oil fields than by depicting him as "another Hitler" who killed Kuwaiti babies. Bush latched onto the Hitler analogy, first employing it on August 8, 1990, when he announced the dispatch of American troops to the Gulf. Upon deciding to bomb Yugoslavia in March 1999, President Clinton and his cabinet similarly likened the Serb outrages to those perpetrated by Hitler's henchmen.
For what it's worth, this has not been the case with Power's current boss. It's been over a year since President Obama visited the U.S. Holocaust Memorial Museum on Holocaust Remembrance Day, with America's most prominent Holocaust survivor, Elie Wiesel, at his side, to announce the creation of a new Atrocities Prevention Board, saying "it's one more step that we can take toward the day that we know will come -- the end of the Assad regime that has brutalized the Syrian people -- and allow the Syrian people to chart their own destiny."
If this was a prelude to military action, it's been an awfully long one.
In the end, Power is wary of Holocaustizing contemporary events, noting that it can often simply end in a war of analogies: the Holocaust for interventionists, Vietnams (and now Iraq) for anti-interventionists. There's also the problem of setting the standard too high:
Another drawback of the analogy is that, though we have committed ourselves to preventing the Holocaust from happening again, the Holocaust sets a grossly "high" bar for attention or action. Quantitatively, one would hardly want to wait until six million individuals were killed before we concluded that the necessary threshold had been crossed. Qualitatively, the scientific, meticulously plotted plan to kill every single Jew in Europe earns Hitler a special place in history. Rwanda constitutes the only case of genocide since 1945 in which a perpetrator has outlined his intentions as explicitly as Heinrich Himmler did when he declared that "all Jews without exception must die."
With American politicians resolutely opposed to intervening to stop these cases of genocide, it is likely that no amount of Holocaustizing would have generated meaningful action. The mass graves in Cambodia, Bosnia, and Rwanda offer testimony to the fact that the analogy has not succeeded in overcoming political opposition to intervention.
Some have suggested that the appointments of Power and Susan Rice signal an administration moving ever-so-slightly toward interventionism. If that's the case, it should be interesting to see how Power -- now an "American policymaker" herself -- builds her rhetorical case.
[Photo: JIM WATSON/AFP/Getty Images]
1. More than half of Americans now own smartphones, according to Pew.
2. The CDC has a gun control research agenda, but will anyone pay for it?
3. Mapping America's linguistic divides. (A "bubbler"? Explain yourself, Wisconsin.)
4. Software that can detect PTSD.
5. Chinese wine consumption "is a mere bottle per head a year, compared with Vatican topers, who lead the world table at 73 bottles annually."
Admittedly, I'm a little late to this one, but the world's richest book critic, Bill Gates, also took aim recently at development critic Dambisa Moyo, author of the controversial 2009 book Dead Aid. The Sydney Morning Herald reported on the comments Gates made at a televised forum in Australia last week:
Mr Gates said the book had not helped in his aim for governments to increase their foreign aid spend. He added that the author "didn't know much about aid and what aid was doing".
"I think that that book actually did damage generosity of rich world countries," Mr Gates said. "People have excused various [foreign aid] cutbacks because of it," he added.
Mr Gates said that anyone who looked objectively at what foreign aid had been able to achieve "would never accuse it of creating a dependency".
"Having children not die is not creating a dependency, having children not be so sick they can't go to school, not having enough nutrition so their brains don't develop. That is not a dependency. That's an evil thing and books like that - they're promoting evil," he said.
The economist responded in a blog post on her website:
- I wrote Dead Aid to contribute to a useful debate on why, over many decades, multi billions of dollars of aid has consistently failed to deliver sustainable economic growth and meaningfully reduce poverty. I also sought to explicitly explain how decades of government to government aid actually undermined economic growth and contributed to worsening living conditions across Africa. More than this, I clearly detailed better ways for African leaders, and governments across the world, to finance economic development. I have been under the impression that Mr. Gates and I want the same thing - for the livelihood of Africans to be meaningfully improved in a sustainable way. Thus, I have always thought there is significant scope for a mature debate about the efficacy and limitations of aid. To say that my book "promotes evil" or to allude to my corrupt value system is both inappropriate and disrespectful.
- Mr. Gates' claim that I "didn't know much about aid and what it was doing" is also unfortunate. I have dedicated many years to economic study up to the PhD level, to analyze and understand the inherent weaknesses of aid, and why aid policies have consistently failed to deliver on economic growth and poverty alleviation. To this, I add my experience working as a consultant at the World Bank, and being born and raised in Zambia, one of the poorest aid-recipients in the world. This first-hand knowledge and experience has highlighted for me the legacy of failures of aid, and provided me with a unique understanding of not only the failures of the aid system but also of the tools for what could bring African economic success.
It's a little surprising to me how much controversy Dead Aid continues to generate four years after its publication. The author recently got the full Jacob Weisberg interview treatment over at Slate, during which she -- perhaps not coincidentally -- had some suggestions for how Gates might better spend his money: endowing a university or building a Microsoft plant rather than funding humanitarian programs.
As Tom Murphy notes, Moyo doesn't have an awful lot of fans among fellow economists, even those who share some of her doubts about the effectiveness of aid programs. Edward Carr suggests that Moyo's prominence has actually made it easier for people like Gates to dismiss the objections of slightly less extreme critics.
Daniel Kahneman, the Nobel Prize-winning psychologist, author of the bestselling Thinking, Fast and Slow, and FP Global Thinker, says contemporary psychology research has a credibility problem. Specifically, in an open letter to colleagues published in Nature, he singles out experiments on social priming -- how subtle psychological cues affect our beliefs, opinions, and behavior. (In an example I remember participating in for course credit while taking Psych 101 as an undergrad, I was taken on a pleasant walk in the woods near campus and then given a questionnaire about my opinions on various topics.)
[R]ight or wrong, your field is now the poster child for doubts about the integrity of psychological research. Your problem is not with the few people who have actively challenged the validity of some priming results. It is with the much larger population of colleagues who in the past accepted your surprising results as facts when they were published. These people have now attached a question mark to the field, and it is your responsibility to remove it.
Nature provides a bit of background, noting several recent high-profile cases in which attention-grabbing priming results proved impossible to replicate:
In November 2011, Diederik Stapel, a social psychologist from Tilburg University in the Netherlands and a rising star in the field, was investigated for, and eventually confessed to, scientific fraud on a massive scale. Stapel had published a stream of sexy, attention-grabbing studies, showing for example that disordered environments, such as a messy train station, promote discrimination. But all the factors making replication difficult helped him to cover his tracks. The scientific committee that investigated his case wrote, "Whereas all these excessively neat findings should have provoked thought, they were embraced ... People accepted, if they even attempted to replicate the results for themselves, that they had failed because they lacked Mr Stapel's skill." It is now clear that Stapel manipulated and fabricated data in at least 30 publications.
Stapel's story mirrors those of psychologists Karen Ruggiero and Marc Hauser from Harvard University in Cambridge, Massachusetts, who published high-profile results on discrimination and morality, respectively. Ruggiero was found guilty of research fraud in 2001 and Hauser was found guilty of misconduct in 2010. Like Stapel, they were exposed by internal whistle-blowers. "If the field was truly self-correcting, why didn't we correct any single one of them?" asks Nosek.
Kahneman -- who isn't a skeptic of priming research in general and cites quite a bit of it in the discussion of "system 1" thinking in his recent book -- suggests researchers set up a replication daisy chain of separate labs that attempt to replicate robust-seeming priming results under identical conditions. He also believes researchers should pre-commit to publishing their results, whether or not significant priming effects are discovered.
It seems like psychologists have been taking it on the chin lately in debates over academic rigor and publication bias. In addition to the problems raised by Kahneman, there's been widespread mockery of some recent dubious papers published in the leading journal Psychological Science, particularly one purportedly showing that single women were more likely to vote for Barack Obama while menstruating.
Obviously, psychology research -- even social priming research -- isn't the only field that suffers from similar problems. I do wonder if the fact that surprising priming findings are popular subjects for the general-interest media -- including yours truly -- has earned these researchers more scrutiny than those in other fields.
Via Chris Blattman
[Photo: Sean Gallup/Getty Images for Burda Media]
1. What would John Stuart Mill make of the Internet?
2. I'm getting a little sick of "snow fall" web layouts, but the new science magazine Nautilus seems very cool.
3. The bleeding edge of computerized agriculture
4. French wine is actually Italian
5. Indiana Jones denied tenure: "Moreover, no one on the committee can identify who or what instilled Dr. Jones with the belief that an archaeologist’s tool kit should consist solely of a bullwhip and a revolver."
Ahsan Butt of George Mason University wonders if scholars of international politics should spend more time thinking about what war really looks like:
The other thing I would say about this is that war and violence is, for lack of a better word, highly sanitized - at least in the IR (and Comparative) scholarship I am most familiar with. There's just not that much blood and gore. The stuff I'm reading about Japanese conduct in Nanking in Beevor's book probably would not make it to very many mainstream IR courses.
Maybe this is a functionalist explanation, but I wonder if this has something to do with modern social science's aversion to moral considerations. At least in political science, moral questions seem to be consigned solely to political theorists (at least from my vantage point). I understand and accept the need for distance and analytic neutrality, but I do wonder if we've gone too far.
This idea has occurred to me as well in some of the research I've written about recently on the causes and characteristics of war. There's a lot to be gained from quantitative approaches to international affairs, but it's hard to get a sense of the horror of war from a dataset, no matter how detailed.
HT: Duck of Minerva
[Photo: MUJAHED MOHAMMED/AFP/Getty Images]
War of Ideas is a blog on the theory behind the practice of global politics. Foreign Policy associate editor Joshua E. Keating brings you the latest research, data, and intellectual debates from around the world.