
Data Falsification: Lessons from a Case

A Column by Michael Seadle

The 2014 case under discussion here is somewhat different from earlier cases because the author of the retracted papers still insists on his innocence. Since the person’s name is irrelevant to the scholarly discussion, this column will refer to him only as JF. Anyone who really wants to learn his name need only look at the references.

The issue in the JF case involves datasets whose results are statistically too perfect. An unnamed whistleblower performed an analysis: “The chances of this happening were one in 508,000,000,000,000,000,000, he claimed” (Kolfschooten, 2014).
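To make a number like that concrete: accounts of the case describe condition means that fell almost perfectly on a straight line in experiment after experiment, and the whistleblower’s figure expresses how improbable such repeated regularity would be under honest sampling. The Python fragment below is a minimal sketch of that style of reasoning, not the whistleblower’s actual method; the null model and all of the numbers are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def linearity(means):
        """Deviation of the middle mean from the midpoint of the outer two.
        A value of 0.0 means the three condition means fall exactly on a line."""
        low, mid, high = means
        return abs(mid - (low + high) / 2.0)

    def p_as_linear(observed_means, n_per_group, sd, trials=100_000):
        """Estimate how often sampling noise alone yields three group means
        at least as linear as the observed ones, under a hypothetical null
        model in which the true means are evenly spaced."""
        obs = linearity(observed_means)
        true_means = np.linspace(observed_means[0], observed_means[2], 3)
        sims = rng.normal(true_means, sd / np.sqrt(n_per_group),
                          size=(trials, 3))
        dev = np.abs(sims[:, 1] - (sims[:, 0] + sims[:, 2]) / 2.0)
        return float(np.mean(dev <= obs))

    # Invented numbers, purely for illustration:
    p_one = p_as_linear([3.0, 4.005, 5.0], n_per_group=20, sd=1.0)
    print(f"one study:   p = {p_one:.4f}")
    # Across k independent studies the per-study probabilities multiply.
    print(f"ten studies: p = {p_one ** 10:.2e}")

On invented inputs like these, a single study’s near-perfect linearity is merely improbable; multiplied across ten independent studies, the combined probability already falls to the order of one in a quintillion, which is how a figure like the one quoted above can accumulate.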


The whistleblower is apparently known to the university and to the National Board for Research Integrity (LOWI) in the Netherlands (Kolfschooten, 2014). Maintaining the whistleblower’s anonymity seems legitimate as long as due process is followed and the accused has a reasonable chance to respond. Just how much opportunity JF had to respond is unclear from published sources. In an open letter to Retraction Watch, he implied that the opportunity was limited (Amarcus41, 2014):

“The rapid publication of the results of the LOWI and UvA [University of Amsterdam] case happened quite unexpectedly, the negative evaluation came unexpectedly, too. Note that we were all sworn to secrecy by the LOWI, so please understand that I have to write this letter in zero time. Because the LOWI, from my point of view, did not receive much more information than was available for the preliminary, UvA-evaluation, and because I did never did something even vaguely related to questionable research practices, I expected a verdict of not guilty. … I do feel like the victim of an incredible witch hunt directed at psychologists after the Stapel-affair.”


JF appears not to have kept the original data, only his summary of the results: a lesson to other scholars not to be too quick to clean out their files, since the original data may be needed later. Investigators also raised suspicions about the data in the thesis of one of JF’s doctoral students. The doctoral student was declared innocent of wrongdoing because the data came from JF. For JF the trouble did not stop: “A panel of statistical experts from UvA that embarked on a second, more comprehensive investigation found ‘strong evidence for low veracity’ of the results in all three papers, as well as in five others” (Kolfschooten, 2016).


JF also agreed to further retractions “as part of a settlement with the German Society for Psychology (DGPs)” (Palus, 2016). The weight of opinion ran so strongly against JF that he left the academic world for private practice (Stern, 2017).

In a sense the case is closed, but questions remain. Accusations of fraud tend to come in groups, perhaps because an initial case inspires people to look more carefully, and perhaps because opinion shifts away from a presumption of innocence. After the Stapel case, Uri Simonsohn built a statistical tool to detect the possibility of certain kinds of fraud in which the data patterns were too perfect to be believed (Enserink, 2012). There is no evidence that this tool was involved in JF’s case, but the principle appears to be the same: the data were just too perfect, not merely once, but in paper after paper. Of course, high-quality data are what scholars need to get publications, and the push to produce perfect data is strong.
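Simonsohn’s published tool is more elaborate, but the core idea can be sketched simply: when summary statistics reported for independent samples agree with one another more closely than sampling variation allows, that agreement itself can be tested by simulation. The Python sketch below illustrates the principle and is not Simonsohn’s actual method; the simplified null model and the numbers are invented.

    import numpy as np

    rng = np.random.default_rng(1)

    def p_sds_this_similar(reported_sds, n_per_group, trials=20_000):
        """Estimate how often independent normal samples would produce
        condition SDs as tightly clustered as those reported, under a
        simplified null model in which every condition shares one true SD."""
        obs_spread = np.std(reported_sds)
        k = len(reported_sds)
        true_sd = np.mean(reported_sds)
        samples = rng.normal(0.0, true_sd, size=(trials, k, n_per_group))
        sim_sds = samples.std(axis=2, ddof=1)   # SD of each simulated condition
        sim_spreads = sim_sds.std(axis=1)       # how much the k SDs disagree
        return float(np.mean(sim_spreads <= obs_spread))

    # Invented numbers, purely for illustration: six conditions whose
    # reported standard deviations are almost identical.
    p = p_sds_this_similar([1.20, 1.21, 1.19, 1.20, 1.21, 1.20], n_per_group=15)
    print(f"p(SDs this similar by chance) = {p:.4f}")

A result near zero says that honest, independent samples almost never agree that tightly, which is the statistical sense in which data can be “too perfect.”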

One should not forget how complex the creation of a research dataset is, or that experienced researchers learn how to get good results without necessarily faking or directly manipulating the data. Selecting participants is an art in a world where genuine random selection is often impossible. A highly successful scholar might unconsciously seek just the right subjects without obvious tampering, and might learn how to ask exactly the right questions in exactly the right way to elicit exactly the right responses without further manipulation. Perhaps this seems implausible, but highly successful researchers must do something different, or they would not be quite so atypical.

In any particular case, repeated perfect results must seem unlikely, but it may be less unlikely that factors other than outright fraud played a role. In the JF case, the investigation seems never to have considered such alternative explanations.

One of the lessons from this case, for researchers young and old, is to keep all of the experimental data for an extended period. The lack of original data was a factor that counted strongly against JF.

References

Amarcus41. 2014. “Social Psychologist Förster Denies Misconduct, Calls Charge ‘Terrible Misjudgment.’” Retraction Watch. Available online.

Enserink, Martin. 2012. “Fraud-Detection Tool Could Shake up Psychology.” Science 337 (6090): 21–22. Available online.

Kolfschooten, Frank van. 2014. “Scientific Integrity. Fresh Misconduct Charges Hit Dutch Social Psychology.” Science 344 (6184): 566–67. Available online.

Kolfschooten, Frank van. 2016. “No Tenure for German Social Psychologist Accused of Data Manipulation.” Science, July. Available online.

Palus, Shannon. 2016. “Psychologist Jens Förster Earns Second and Third Retractions as Part of Settlement.” Retraction Watch. Available online.

Stern, Victoria. 2017. “Psychologist under Fire Leaves University to Start Private Practice.” Retraction Watch, December 12. Available online.
