Scientist Screwed Up? Send 'Em to Researcher Rehab

Jim DuBois is offering fallen scientists a shot at redemption. But not everyone thinks they deserve a second chance.

This January, a group of four scientists sat quietly in a windowless classroom in a two-story, brick block of a building on the campus of Washington University. The weather in St. Louis was grey, cold. Someone had arranged the rectangular desks into something approximating a circle. All eyes were on a black-haired, spectacled man at the front of the room.

“We encourage the use of first names only,” he said. “Let’s go around the circle and promise to hold everything we say in this workshop as strictly confidential. I’ll start: ‘My name is Jim...’”

Another voice echoed the oath. “My name is John,” said a man, a biochemist whose name is not in fact John. “And I promise to hold everything we say in this workshop as strictly confidential.” One by one, the other scientists in the circle repeated the mantra.

The man we’ll call John Smith and the other scientists hadn’t come to St. Louis to discuss the latest findings in their field, or be trained in a new lab technique. They came from very different fields and had little interest in each other’s work. Something else had brought them together: These scientists had each done something wrong. And Jim DuBois, the spectacled man swearing everyone to secrecy, was there to make sure they never did again.

DuBois’ program is sometimes described as “researcher rehab,” though he does not condone the nickname. The scientists who come to his three-day Professionalism and Integrity workshop have made a mistake: anything from incorrect paperwork for animal experiments to a falsified image, faked data, or plagiarism. Not fireable offenses, but serious enough—or frequent enough—that the researcher’s institution wants a change.

High-profile scientific deviants periodically make headlines: Haruko Obokata was hailed as a star when she claimed to have triggered stem cell-like behavior in normal cells, until an investigation concluded she’d manipulated her experiments and falsified results. Michael LaCour seemed to have finally figured out how to change people’s minds on gay marriage—but actually he’d faked his survey data. Thoracic surgeon Paolo Macchiarini developed a plastic tube to replace the trachea, operating on nine patients before it was revealed he’d never properly tested the device. Seven of his transplant recipients have since died.

In 1992, in response to several high-profile cases of fraud in federally funded research, the Department of Health and Human Services created the federal Office of Research Integrity to investigate the most serious allegations of research misconduct. The office receives about 200 reports a year, handled by a highly specialized team of investigators—experts in detecting image doctoring and recovering hidden data.

But only the most egregious cases make it to the office’s door, and only the ones that involve federally funded medical research. And while grand-scale, criminally minded deceptions are titillating, they're rare. The more common problems—just as damaging to science’s reputation, in the aggregate—are the pedestrian indiscretions: plagiarism, image doctoring, data a little too carefully groomed.

And so the majority of scientific misdeeds are handled in-house, as quietly as possible, by the scientist’s institution. Which is where DuBois comes in.

Introduction to Ethics

Researcher rehab begins with a belief: that behind any academic misdeed is a complex story of rights and wrongs, good intentions and extenuating circumstances. Science strives to iron out the wrinkles of life with methodical experimentation and repetition. But it’s still humans that have to do that work—so the wrinkles never entirely disappear.

DuBois, a professor in the department of medicine at Washington University, is a philosopher and psychologist by training, and he’s spent most of his career pondering the ethics of research and medicine. He co-wrote and edited a book of case studies in research ethics and founded a journal that collects patient stories about ethically sensitive issues, like gender reassignment surgery for intersex children.

Over more than a decade in the field, DuBois developed a reputation on campus as a research ethics authority, and administrators started pointing problematic scientists to his office for a talking-to. The story was always the same, DuBois remembers: “We've got this one researcher who is causing a bit of trouble and we've tried to work with them, but he keeps breaking the rules.” As part of an action plan, they’d ask DuBois to work with the researcher for a certain number of hours.

He turned them away every time.

Despite his expertise, DuBois couldn’t think of anything he could say to “fix” these scientists’ behavior. He knew from his research that integrity trainings—commonly used as preventative measures by academic institutions—have almost no impact on scientists’ future behavior.

He kept saying no, for more than a decade. But then, in 2011, DuBois saw an announcement for a grant from NIH to develop new research ethics training programs, and he began to think he might be missing an opportunity to innovate. There was obviously an unmet need. Maybe working with the very scientists he had been sending away—the people who had already broken the rules—was the best opportunity to create better scientists. After all, who would be more motivated to change than a scientist fresh out of a humiliating and time-consuming investigation?

Eighteen months later, in January of 2013, DuBois was meeting his inaugural class of scientific miscreants. He had worked with a team of psychologists and research integrity experts to create a program that would be more like group therapy than traffic school. He’s now held 12 sessions, training 52 researchers from 33 different institutions. Smith and his cohort were in group number 11.

Origin Stories

The bagels and fruit on the back table sat uneaten as Smith and the other scientists introduced themselves in turn. “This is not an ethics course,” DuBois told the four scientists in his barren classroom. “We will not teach the basic rules for the responsible conduct of research.” Instead, they would together try to identify the obstacles keeping each scientist from following the rules, and the skills that could guide better decision-making.

After everyone swore to keep the proceedings secret, DuBois posed a question to the group: “Why did you become a scientist?”

Smith spoke up. He had been a scientist for as long as he could remember, he said—doing math problems for fun, driving his parents crazy by taking apart anything he could get his hands on. He just wanted to understand how the world works. When he made his first significant scientific discovery—a new protein modification technique—as a biochemistry graduate student, it was thrilling. Now he headed his own lab, with a squadron of graduate students and staff making the discoveries for him.

Smith sat on his plastic chair and listened as the other offenders shared their scientific origin stories. For hours, DuBois gently steered the conversation, drawing out the unspoken rules, culture, and bias that had shaped each of the scientists’ labs. At one point, he pulled out a biofeedback device and passed it around, teaching everyone how to manage stress by monitoring their heart rate.

Around 4:30, the day’s programming was finished and the sun was starting to set outside, though no one in the windowless room had noticed. DuBois gave the group some homework: Write down an honest version of the events that landed you here.

Smith took an hour-long Uber ride to his discount motel, sat at the built-in particleboard desk, looked around at the '70s-style orange and brown décor, and silently told himself the story.

He was grateful to have his own lab, but over time, he had started feeling overwhelmed by his mountain of administrative duties. Smith’s research required human subjects, and experiments on humans are approved and monitored by an ethics board. The board evaluates every step of an experiment to make sure the people involved are treated safely and fairly. The process was sometimes more stressful than useful. He wasn’t always as diligent as he should have been.

Then, during one project, Smith mishandled a human protocol. He was studying two groups of people with two similar diseases, and published the results of both groups as one paper when they should have been kept separate. His oversight board discovered the discrepancy in an audit.

The next day, listening to the others' stories, Smith realized their mistakes could easily have been his. They were the kinds of slip-ups he imagined most scientists make at some point—and he wasn’t wrong. Questionable research practices are far more prevalent than the program’s small group of four would suggest. In a 2005 survey of over 3,000 scientists, more than a third of respondents admitted to at least one form of dubious conduct, including 6 percent of participants who self-reported “failing to present data that contradicts one’s own previous research.” Nearly 8 percent said they had circumvented “certain minor aspects of human-subject requirements.”

Together, DuBois’ group spent most of the second day processing their stories. And on the third day, DuBois helped the scientists come up with a plan. They could hold regular meetings with lab staff and students, or create a standard operating protocol for data management. “I try to ask everyone before they leave, ‘Is this going to happen again?’” DuBois says. “What if next time someone makes up data in your lab, it’s discovered because you caught it?”

When Smith went home, he started implementing new procedures so important details won’t slip by—or at least that’s what he and his institution hope. This is DuBois’ goal for all scientists: practical processes and cognitive skills to manage the complexity and intense pressures of research, so that the rules and norms governing science—IRBs, thoughtful data review—are easier to respect. Although that’s not the only upshot: The training is also an insurance policy for researchers and their universities, a way to demonstrate they're taking real action.

Nichelle Cobb, director of the Institutional Review Boards office at the University of Wisconsin, has referred three researchers to the program. To her, it functions as a wake-up call for errant scientists: “One of the things that the program does is encourage some self reflection,” she says. “A lot of researchers are very successful, very driven people, but they might not have necessarily ever been taught certain skills.” Spending a few days discussing their own weaknesses and getting the management skills to deal with them is usually a positive experience, and she’s seen it lead to better, less combative relationships between scientists and their oversight committees.

DuBois isn’t under any illusions that his students will never make a mistake again, or that Smith’s new strategies will be foolproof. But hopefully, next time—if there is a next time—Smith won’t see himself as a victim who mishandled an overly burdensome review process. He'll value the rules, and welcome the correction.

Bad Apples

Not everyone agrees scientists should get a second chance at integrity. DuBois’ program has been covered in publications aimed at scientists, including Nature, Science, and Retraction Watch. His main message to the scientific community in those stories is “This could easily happen to you.” That message is not always well received.

“The amount of hate mail they receive in response is shocking,” DuBois says. “When scientists think about scientists, they want their heroes and villains, and they don't want to think that the people who've screwed up are actually way more like they are.” The comments section gets heated: “Just eject them from the system altogether.” “Throw out the bad apples.” “Why spend even more money on cheaters?” “Boot the cheating slimebags.” Here, the misbehaving scientists are not like everybody else: They’re tainting science’s reputation and stealing ever-rarer grant money from their worthy peers. “Instead of spending money on people who know better and choose not to do the right thing, bar them from public funding for LONG terms,” read one typical response to a Retraction Watch post about the program.

In a strange way, scarce resources are the motivating factor for both those angry commenters and the scientists they rail against. In DuBois’ surveys of his workshop participants, 72 percent said the ultimate cause of their lapse was a failure to pay attention, often because they were overextended and understaffed. That points to one major criticism of DuBois’ approach: that he’s focusing on fixing individual scientists instead of the environment they operate in.

“Scientists operate in the context of their direct environment, and also in the context of broader science system,” says Coosje Veldkamp, a doctoral student at the Tilburg University Meta-Research Center who studies methods aimed at reducing human error in science. “Anything that happened from pressures or lots of stress, if those factors don’t change then I think it’s very hard for scientists to just change their own individual behavior.” Plus, she adds, there’s still more speculation than science about what leads to questionable research practices. She thinks it’s important to conduct that research before diving into rehab-like training programs.

Scientific culture can beget misconduct in other ways. Plagiarism—one of the most common forms of bad behavior—is often committed by graduate students from other countries who don’t even know it’s wrong, according to John Dahlberg, who recently retired as the deputy director of the Office of Research Integrity. Indeed, over half the scientists who have participated in DuBois’ program were born outside of the United States, roughly twice what would be expected based on the demographics of American scientists. Dahlberg generally agrees with DuBois’ philosophy that one bad decision doesn’t make someone a bad scientist—but not everyone in the ORI shares this position. Last year, six of the office’s eight investigators threatened to resign over conflicts with the new director, Kathy Partin, partly because she wanted to focus more on plagiarism.

Consider the case of Boris Kosharskyy. In 2015, the Ukrainian-born Kosharskyy had been working as an anesthesiologist at the Albert Einstein College of Medicine for about five years. He was a productive scientist, frequently publishing papers and book chapters. One day, his department chair forwarded him an email from the editor of a journal where Kosharskyy had published a paper almost 10 years earlier.

“It came to my attention a fraudulent paper was published in this journal….” the email read. Parts of Kosharskyy’s paper had been plagiarized, and the article would need to be retracted. Kosharskyy was as shocked by this as anyone—the paper was nearly a decade old, and he certainly didn’t remember misusing someone else’s work. But there it was: several paragraphs from another paper, no quotation marks. A junior high schooler could see it was plagiarism.

Kosharskyy thinks it was his co-author, not him, who slipped the offending paragraphs in. But the co-author was no longer in academia, so Kosharskyy took the blame. The Einstein investigation was a lonely experience. “I didn't have anyone to talk to about it, and no one could help me through the process,” Kosharskyy says. “It was a very painful year and a half.” His dean assured him that this wasn’t the first plagiarism case he’d dealt with, but none of Kosharskyy’s colleagues stepped forward to share their experience with him. He assumed they were too ashamed to even mention it.

“I was trying to find an attorney to represent me because I thought there would be some kind of hearing,” Kosharskyy says. “The attorney’s fee was $1,000 an hour, and I started thinking we should probably sell the house.” He soon learned there wouldn’t be a criminal trial, but he would need to fork over several thousand dollars and spend a few of his vacation days in St. Louis. His institution wanted him to go to rehab.

At first Kosharskyy resented the cost and the lost vacation time. But as soon as the actual workshop started and he could talk to other people about what he’d been through, those feelings were replaced with relief.

“No one was saying ‘Oh my god, why did you do this?’ That was just not on the table. It was more like, ‘Let's see if we can correct it, if we can make it better in the future,’” Kosharskyy says. “‘Let’s see if we can improve and change the culture.’” The program helped him come to terms with what happened, and he’s now one of the few participants who’s gone public with his experience.

After he came back from St. Louis, Kosharskyy excitedly planned a seminar on plagiarism, part of an early morning series that takes place at 7 am. Normally, “people are just sleeping,” Kosharskyy says. But this seminar was different. “People were actually taking pictures of my slides, and they were asking questions, and when it was done they were meeting me in the hallway to ask me more questions.” Now he’s organizing a panel to discuss plagiarism at the next meeting of the National Association of Anesthesiologists, to share what he’s learned. “I now use software to double check the end result, and give lectures to residents and medical students before we even start,” Kosharskyy says. “I'm trying to create a culture of responsibility.”

Survey Says

Kosharskyy is a model student, and a great spokesperson. But his story doesn’t prove that researcher rehab works. If his version of events is true, and he’s not the one who copied words into his publication, his proactive work to prevent plagiarism doesn’t tell you much about researchers who really need help. It doesn’t even tell you about the program’s effect on middle-of-the-road offenders like Smith, whose impropriety stems from lapsed attention to detail. Do DuBois’ students actually become more truthful, careful scientists?

DuBois recognizes that efforts to change scientists’ behavior have a history of failure—sometimes, they even make behavior worse. On the morning before his first workshop, he found himself replaying conversations with colleagues who helped him design the workshop. Should he have separated the researchers by seniority, so people who’d been “cheating” their whole career wouldn’t “corrupt” more innocent, younger scientists? What if he was just burnishing bad apples and throwing them back in the bin?

The truth was, DuBois wasn’t sure it was a good idea. He didn’t know much about the scientists he would be trying to re-program. What if they were bad apples? Sitting in his car, with just a few minutes left before the class would begin, he decided to try a stress-management breathing exercise—the same one he would later teach to Smith and his group. DuBois inhaled deeply and watched his heart rate start to slow on a biofeedback device he’d brought in for demonstration. Five minutes later, a little calmer, he got out of the car and went in to meet his pupils for the first time.

Since then, DuBois has come to believe that the program does have an impact on his students. Like any good scientist, DuBois won’t rely on anecdata. He and his co-instructors conduct follow-up surveys and phone calls to track whether their students report statistically significant behavioral changes (the results of these surveys will be published sometime next month). Also like any good scientist, DuBois is quick to point out the limitations of these results: no control group, and the data is self-reported.

But with or without solid evidence, he’s confident enough in his work’s effectiveness that he’s hoping to export the curriculum to other countries, and double the offerings in St. Louis to six times a year. He knows there are more scientists who could use his help.

Smith agrees. “The rules are there for a purpose,” he says, “and you need to appreciate the purpose is real.” Since he got back from St. Louis, he’s been trying to share his new perspective with the rest of the lab, even designating one member of his lab to be an internal compliance czar.

Recently, Smith was getting ready to start a new experiment with a colleague, and he found himself doing some teaching of his own. “Whenever we do something, we have to think about ‘What’s the worst case scenario that could happen?’” he said to his colleague. “‘Assume that happens, how would that affect the people who we're working with?’” Before DuBois’ workshop, “I would have just said, ‘Just make sure you follow the rules,’ and that would have been the end of it,” Smith says.

DuBois concluded the twelfth session of the workshop this month, sending another group of (maybe) reformed researchers out of St. Louis. He’ll greet the next class in September, ready to swear them to secrecy before they together battle the scientific method's undercurrent of human fallibility.