Ars Technica
by John Timmer - July 18, 2012
Envy those who succeed by making up their data? Here's how you can, too!
Aurich Lawson
Running scientific experiments is, frankly, a pain in the ass. Sure, it's incredibly satisfying when days or weeks of hard work produce a clean-looking result that's easy to interpret. But as often as not, experiments simply fail for no obvious reason. Even when they work, the results often leave you scratching your head, wondering "what in the world is that supposed to tell me?"
The simplest solution to these problems is obvious: don't do experiments. (Also, don't go out into the field to collect data, which adds the hazards of injury, sunburn, and exotic disease to the mix.) Unfortunately, data has somehow managed to become the foundation of modern science—so you're going to need to get some from somewhere if you want a career. A few brave souls have figured out a way to liberate data from the tyranny of experimentation: they simply make it up.
Dr. Yoshitaka Fujii
Fujii's improbably clean results drew skepticism early on: in 2000, fellow anesthesiologists published a comment in Anesthesia & Analgesia pointing out that his reported data were "incredibly nice," far more consistent than real clinical results tend to be. But you can't let such skepticism from your peers slow you down—and Fujii certainly didn't. Even after the comment was published, two different medical schools hired him as a faculty member. He continued to publish, generally using faked data, racking up an eventual record of 200+ bogus papers.
Nobody took any responsibility for investigating the prospect of fraud, despite requests from other researchers who suspected something was amiss. It took until 2011 for the editors of several journals that Fujii had victimized to band together and hire an outside investigator, who found extensive statistical evidence that the data reported by Fujii were unlikely to have resulted from actual experiments. That finally prompted Toho University, his employer at the time, to launch its own investigation (PDF). The conclusion: almost none of Fujii's publications were free of falsified data.
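To get a feel for how made-up numbers can betray themselves, consider one classic screen: digit-preference analysis. (This is a simplified, hypothetical illustration, not the actual method the journals' investigator applied to Fujii's papers.) The terminal digits of genuinely measured values tend to be close to uniformly distributed, while people inventing numbers over-use favorites like 0 and 5; a chi-square statistic against the uniform expectation flags suspicious batches.

```python
# Digit-preference screen: a simplified, hypothetical sketch of the kind of
# statistical check used to flag fabricated data.
from collections import Counter

def terminal_digit_chi2(values):
    """Chi-square statistic for uniformity of last digits (9 degrees of
    freedom). Results well above ~16.9, the 5% critical value, suggest the
    digits are not uniformly distributed -- a red flag for invented data."""
    counts = Counter(abs(v) % 10 for v in values)
    expected = len(values) / 10
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in range(10))

# Recorded measurements spread across all ten digits; a fabricator who
# rounds everything to 0s and 5s stands out immediately.
honest = [73, 81, 62, 94, 57, 66, 78, 89, 70, 85]
faked = [70, 75, 80, 85, 90, 95, 100, 105, 110, 115]
print(terminal_digit_chi2(honest))  # 0.0  -- perfectly uniform digits
print(terminal_digit_chi2(faked))   # 40.0 -- far above the critical value
```

Real forensic work on clinical trials compares reported means and standard deviations against the distributions random sampling would produce, but the underlying idea is the same: fake data is rarely as messy as the real thing.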
Decades of scientific fraud simply shouldn't be this easy. Yet Fujii, along with a few other serial fraudsters, has somehow managed it year after year. In tribute to his staggering success, Ars presents this handy guide on how to get away with faking your data, based on the most popular techniques used in the biggest cases of scientific fraud (so far). Hopefully, it will help answer one of the key questions looming over the Fujii story: In a world of hard data and peer review, just how was such a colossal fraud even possible?
The tao of the fraudster
- Fake data nobody ever expects to see. If you're going to make things up, you won't have any original data to produce when someone asks to see it. The simplest way to avoid this awkward situation is to make sure that nobody ever asks. You can do this in several ways, but the easiest is to work only with humans. Most institutions require a long and painful approval process before anyone gets to work directly with human subjects. To protect patient privacy, any records are usually completely anonymized, so no one can ever trace them back to individual patients. Adding to the potential for confusion, many medical studies are done double-blind and use patient populations spread across multiple research centers. All of these factors make it quite difficult for anyone to keep track of the original data, and they mean that most people will be satisfied with using a heavily processed version of your results. That makes faking the data much easier. The classic example here is Jon Sudbø, who apparently made up even the patients used in a number of studies. He was only caught because someone working for the Norwegian public health institute couldn't figure out where he had possibly obtained some of his results; the fake data itself never aroused suspicions prior to that point. If you still want to provide real—well, "real"—underlying data, one option is to use one-of-a-kind equipment, since nobody else will be likely to make sense out of the raw data, anyway. But really, the hassle and expected anonymity involved in working with human subjects is the most convenient data screen of all.
- Work with many collaborators. This has several advantages. For one, it helps ensure that your fake data is unlikely to be the centerpiece of a given report, and thus less likely to attract scrutiny. Second, your fraud will also bask in the borrowed credibility of all the solid research that surrounds it. Finally, it keeps other scientists in your field guessing. If you handle things well, even your own collaborators may not know who supplied which data from what experiments, making the whole deception more difficult to untangle. This technique worked well for Fujii, who found it so helpful that he started adding collaborators to his papers without bothering to tell them that he had done so. In the rare cases where journals actually required some form of acknowledgement from the collaborators, Fujii just... forged their signatures. Simple and elegant.
- Tell people what they already know. Isaac Asimov is credited with saying, "The most exciting phrase to hear in science, the one that heralds new discoveries, is not 'Eureka' but 'That’s funny...'" Since you don't want anyone excited about your work, due to the likelihood they will ask annoying questions, you need to avoid this reaction at all costs. Under no circumstances should your work cause anyone to raise an intrigued eyebrow. The easiest way to do this is to play to people's expectations, feeding them data that they respond to with phrases like "That's about what I was expecting." Take an uncontroversial model and support it. Find data that's consistent with what we knew decades ago. Whatever you do, don't rock the boat.
- Don't do research anyone cares about. This is a close relative of point 3, but with a subtle distinction—people may care about your work even if they don't find it strange. They may even want to repeat it with a larger population. They may want to try one of your techniques in a different context. They may think up novel ways to extend it to other areas. Although this might be exactly the sort of thing you'd want to see happen if you were performing actual research, it's a potential disaster for the aspiring fraudster. The last thing you want is for anyone to look over your work in enough detail to repeat some aspect of it. The trick is to keep producing papers that are just good enough to be published—but not good enough that anyone else will want to follow up on them. Fujii had this down (if you'll pardon the pun) to a science. With a few exceptions, the papers he wrote were cited fewer than a handful of times by other scientists. Despite an impressive research output and a fat C.V., most of his work made no impression on his fellow scientists. Which, for him, was a good thing.
- Don't publish in journals focused on your field. In general (see point 4), it's best to avoid publishing in high-profile journals altogether, since those will draw attention to your work. At the same time, you don't want to keep seeing your stuff published in the same journals, or those editors will start feeling a personal responsibility to make sure their star researcher is on the up-and-up. But there's one other aspect of journal choice that can make a big difference when committing fraud: pick a journal where your reviewers won't understand exactly what you're doing. This is easier than it sounds, as most papers actually touch on a number of topics. Take Fujii's field of anesthesiology as an example. From a medical point of view, anesthesia is used in dozens of types of surgeries. From a research perspective, it could also bring in biochemistry, physiology, neuroscience, and genetics. Journals in all those specialties were fair game for Fujii's research, and he sprinkled papers in a whole host of them: the Journal of Oral and Maxillofacial Surgery, the Journal of Applied Physiology, Current Therapeutic Research, etc. The best thing about this scattershot approach, from Fujii's perspective, is that the reviewers normally used by these journals are probably not that well versed in the details of anesthesiology. By sending his work to these journals, Fujii knew that problems were less likely to be spotted by peer reviewers.
- Distribute responsibility. If suspicions do crop up, someone will eventually have to perform an investigation. If you plan things carefully, it should be impossible for anyone to figure out who should be responsible for that investigation. Typically, three groups will feel some sense of responsibility: the journals, the research institutions, and the organizations that funded the research. If you're going to commit fraud, ensure there are plenty of each. (This is related to point 5, of course.) Publish in lots of different journals. Change jobs often enough to make certain that nobody's sure where the problematic work took place. Make sure you get grant money from several different agencies, so everybody thinks someone else paid for the bogus results. The unfortunate fact is that investigating scientific fraud is a slow and painful process. A false accusation can ruin someone's career, so these investigations tend to be careful, thorough, and done completely in private, in case the accusations are off base. Nobody really wants to be the one on the hook for running something like that. If you keep everybody thinking that you're someone else's problem, you give them all reasons to avoid dealing with it themselves. Classic buck-passing is your friend. You can see how this played out in the Fujii case. It took a long time for journals to get fed up enough with his work that they banded together to hire an outside statistician to look over the data. And, even with that in hand, it took a while for the university where Fujii was working to launch a formal investigation of its own.
- Don't plagiarize. To reiterate: the goal is to draw as little attention to your work as possible. Direct plagiarism, thanks to digital tools, is now laughably easy to detect and will cause a lot of people to ask, "What else is off about this paper?" Don't give them an excuse to start asking that question! If you've got time to make up all your own data, then you've got time to come up with the words needed to describe it. No one said fraud was for the lazy!
- Don't duplicate images. Ah, the pretty pictures. These are hands down the most common way that people get caught. For instance, Jan Hendrik Schön had actually won a number of prizes in physics before other researchers noticed the background noise in some of his plots was identical, suggesting that Schön was using the same graph in different papers. A number of biologists have been tripped up by similar goofs, including the person behind the chronic fatigue virus scare. Same thing with the researcher who first claimed to clone human stem cells. This is not to say that anyone who uses the same image to indicate two different things will be caught—just that a lot of the people who do get caught are making this rookie mistake. Don't become a statistic. Use different images if you want to misrepresent them.
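The crudest version of this duplicate-image screen is easy to automate. The sketch below is a hypothetical example using exact byte comparison—real image forensics of the kind that caught Schön also have to spot re-exported, cropped, or rescaled copies, which a byte hash would miss—but it shows the basic idea: fingerprint every figure and group the ones that are reused verbatim.

```python
# Duplicate-figure screen: a hypothetical sketch that catches only
# byte-identical reuse. Figure names and contents here are invented.
import hashlib

def duplicated_figures(figures):
    """Map each SHA-256 digest to the figure names sharing identical bytes,
    returning only the digests that appear more than once."""
    by_digest = {}
    for name, data in figures.items():
        digest = hashlib.sha256(data).hexdigest()
        by_digest.setdefault(digest, []).append(name)
    return {d: names for d, names in by_digest.items() if len(names) > 1}

figures = {
    "paper1_fig2": b"...noise trace A...",
    "paper2_fig3": b"...noise trace A...",  # same plot, different paper
    "paper1_fig4": b"...noise trace B...",
}
# The two byte-identical noise traces are grouped together; the unique
# figure is ignored.
print(duplicated_figures(figures))
```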
Off in a corner
Follow these tips, and you could potentially have a long and obscure career in science without having to deal with the messy and frustrating process of science. Long and illustrious, of course, isn't really an option, since you'll be trying to attract as little attention as possible.
If you do get caught, though, your career is likely over. You can write letters protesting your treatment, but they won't do much. As Toho University put it in a press release, "We organized a disciplinary committee and decided that a disciplinary dismissal was appropriate for Dr. Fujii.... Dr. Fujii has already been dismissed from Toho University."
John Timmer / John became Ars Technica's science editor in 2007 after spending 15 years doing biology research at places like Berkeley and Cornell.