Just how tough is peer review?

Psychologists Douglas Peters and Stephen Ceci decided to test the peer-review process at journals in their field by resubmitting already-published papers. Here's the abstract from their paper in Behavioral and Brain Sciences:

The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.

With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.” A number of possible interpretations of these data are reviewed and evaluated.

The "Tri-Valley Center for Human Potential" is a really nice touch. 

The issue of peer review came up for debate recently among economists in the aftermath of the Reinhart-Rogoff affair. That paper was not peer-reviewed, but if the process at economics journals works anything like the apparent process at psychology journals, it seems quite possible that referees wouldn't have caught the error anyway.

Update: I didn't realize on first read that this study is from 1982. Obviously, that makes it somewhat less relevant to current debates.