A scientific paper that raised concerns about the safety of the abortion pill mifepristone was retracted by its publisher this week. The study was cited three times by a federal judge who ruled against mifepristone last spring. That case, which could limit access to mifepristone throughout the country, will soon be heard in the Supreme Court.

The now-retracted study used Medicaid claims data to track E.R. visits by patients in the month after an abortion. It found a much higher rate of complications than similar studies that have examined abortion safety.

Sage, the publisher of the journal, retracted the study on Monday along with two other papers, explaining in a statement that “expert reviewers found that the studies demonstrate a lack of scientific rigor that invalidates or renders unreliable the authors’ conclusions.”

It also noted that most of the authors on the paper worked for the Charlotte Lozier Institute, the research arm of anti-abortion lobbying group Susan B. Anthony Pro-Life America, and that one of the original peer reviewers had also worked for the Lozier Institute.

Mary Ziegler, a law professor at U.C. Davis and an expert on the legal history of abortion, says: “We’ve already seen, when it comes to abortion, that the court has a propensity to look at the views of experts that support the results it wants.” The decision that overturned Roe v. Wade is an example, she says. “The majority [opinion] relied pretty much exclusively on scholars with some ties to pro-life activism and didn’t really cite anybody else, or really even acknowledge that there was a majority scholarly position or that there was meaningful disagreement on the subject.”

  • Blu · 10 months ago
    As someone who has peer reviewed papers and gotten familiar with the process: most reviewers do not take the time to seriously examine papers. I would compare my comments to other reviewers’ for the same paper, and holy shit, they barely read it. I would spot pretty blatant problems: bad methodology, incomplete sections that made a paper impossible to reproduce, poor-quality figures, the need for major revisions. The other reviewers would offer minor suggestions and leave it at that. And the chief editor would push it out the door with minor revisions that didn’t address any of the issues.

    I have seen some truly blatant shit get published. Like figures with made-up data, or figures that were straight up copied from the authors’ previous publication and presented as new. The for-profit publishing industry doesn’t give a fuck. Those issues might get caught 10 years down the road, like in this case, but it’s usually a slap on the wrist for tenured faculty unless it gets lots of attention.

    A prof in my department, back when I was a grad student, blatantly copied work from another researcher, and the only sanction he got was a moratorium on taking new grad students.

    • SatanicNotMessianic@lemmy.ml · 10 months ago
      Same. I’ve also had peer reviews that pointed out nothing more than that I spelled Erdős’ name incorrectly as Erdos. On another paper, I grew so irate over Reviewer #2’s critique of my lack of explanation that I turned a ten-page paper into a 53-pager, which was then accepted. I’ve also seen absolutely blatant inattention, and I’ve definitely been told to add coauthors because of their seniority or role, or their current lack of pubs.

      I’m completely with you on the academic publication industry. I sympathize with younger researchers now, who are in a far more pay-to-play environment than I ever was. We’d always build publication fees into our funding because we felt obligated to make all of our work open access (being government funded, but also just morally), but we were a big-money institution that had that kind of flexibility. $10k is nothing on a $5M grant. But now there are so many journals that exist only to churn out papers for the publish-or-perish culture, and no one seems to take seriously the fact that they go unread and just tick a check mark.

      99% of the time I’m sure it doesn’t matter. It’s just flotsam. But there should be a way of gauging a paper’s potential importance, both by journal ranking and maybe by topic. I’m really not going to call out some overseas researcher who is just trying to keep their job by publishing in a backwater journal, but it’s like that old saying that a lie can travel around the world while the truth is still putting on its shoes. Or that Ashkenazi story about the rabbi emptying a pillow full of feathers to illustrate how a damaging lie is impossible to recover from.