• Frenchy@aussie.zone
    9 months ago

    Well that’s… unfortunate. I’d like to know how the fuck that got past editors, typesetters, and peer reviewers. I hope this is some universally ignored, low-impact-factor, pay-to-print journal.

    • fossilesque@mander.xyzOPM

      We all know Elsevier only upholds the highest standards, after all why would they have such a large market share?

      • NegativeInf@lemmy.world

        That name. Being a hobbyist with niche interests has made me hate them so very much. Scihub forever.

    • gregorum@lemm.ee

      because they’re all as bad as most of us and only read the headline :(

            • BakerBagel@midwest.social

              “Dear We are very sorry for this error that occurred, as we provided an incorrect version when submitting the revised paper, as we did not use AI tools to write anything, but one of the authors did, and we removed this paragraph, as this entire paragraph does not add anything to our article.( It seems that I did not save the modifications). We wrote the conclusion ourselves, without resorting to artificial intelligence tools. We hope you understand what happened, and we are very sorry. Here you can find the conclusion that we wrote ourselves

              Conclusion

              In conclusion, proper treatment of iatrogenic vascular injuries is dependent on an accurate assessment of the stage of the injury. The injury should be recognized quickly. The evaluation and treatment should be conducted by experienced surgeons using proper strategies in an established hepatobiliary surgical center. Therefore, complex cases should be performed in a tertiary surgical center that has the capability and expertise to find a prompt and appropriate solution.”

              I understand that English probably isn’t his first language, but this reads like he used ChatGPT to write his apology.

    • GenEcon@lemm.ee

      Since the rest of the paper looks decent (I am no expert in this field), I have a guess: it went to review and came back as a ‘minor revision’ with the comment ‘please summarize XY at the end’.

      In low-impact journals, minor revisions are handled by trusting the scientists to address the changes themselves; the editor doesn’t re-check. Afterwards it goes to production, where some badly paid people – most of the time from India – put everything into format, send out a proof with a deadline of at most 2 days, and then it gets published.

      I don’t want to defend this practice, but that’s how something like this can get through.

  • PositiveControl@feddit.it

    It’s the second time in a few hours that I’ve seen a post about AI-written articles published in an Elsevier journal. Maybe I’m not super worried about these specific papers (since the journals are also kinda irrelevant), but I’m worried about all the ones we’re not seeing. And I fear that the situation is only going to get worse as AI improves, especially regarding images. The peer review system is not ready to handle any of this.

    • Pyr_Pressure@lemmy.ca

      There are so many different journals out there it’s hard to keep track of which ones are actually reputable anymore.

      We almost need some overarching scientific body that reviews and rates journals, so you’d know whether it’s even worth citing the information published in them.

      Like, Science and Nature would be S-tier, whereas this journal should apparently be F-tier, and people shouldn’t even be allowed to cite its articles in their own papers.

  • PatFusty@lemm.ee

    Dang that got published… I had to jump through fucking HOOPS to get my advisors to allow me to publish shit. This is ridiculous

    • RBG@discuss.tchncs.de

      Not sure you’d want to publish in Radiology Case Reports anyway. It has an impact factor of 0.8, and while I’m not saying impact factor is a good general quality metric, anything below 1 is probably not worth your time unless it’s a very, very new journal that just doesn’t have enough history yet.
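For context on that 0.8: the two-year impact factor is just a ratio – citations received this year to the journal’s articles from the previous two years, divided by the number of citable items it published in those two years. A minimal sketch (the figures below are made up for illustration, not this journal’s actual citation counts):

```python
def impact_factor(citations: int, citable_items: int) -> float:
    """Two-year impact factor: citations received this year to articles
    from the previous two years, divided by the number of citable items
    the journal published in those two years."""
    return citations / citable_items

# e.g. ~80 citations spread over ~100 recent articles gives the 0.8 above
print(impact_factor(80, 100))  # 0.8
```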

    • 52fighters

      So no peer review? Or did the peer just use a chatbot too?

  • Björn Tantau@swg-empire.de

    What’s so puzzling about this stuff is that I get why they’re using AI to write the text – writing is hard. But why don’t they at least read it once before submitting?

    • Risk@feddit.uk

      I work in healthcare. Doesn’t surprise me in the slightest.

    • RBG@discuss.tchncs.de

      This practice is a remnant of the print era. Papers would get accepted and then printed in a later issue. Once online publishing started, that delay wasn’t really necessary anymore, which led to online publication ahead of print – but somehow still using the print date for the article, because a lot of journals still produce physical issues.

      That said, I don’t know if this journal does that; if not, it’s simply stupid. They might do it because they cap “online” issues in size, like the printed ones. Which is idiotic if you don’t actually print anything.

    • Wirlocke@lemmy.blahaj.zone

      Come to think of it, I wonder if using ChatGPT violates HIPAA, since it sends the patient data to OpenAI?

      I smell a lawsuit.

        • Juviz@feddit.de

          You’re right about that, but other countries have similar protections. E.g. our board equivalent here in Germany would tear you a new one for that, and the GDPR would finish the job.

        • Wirlocke@lemmy.blahaj.zone

          Typically, for the AI to do anything useful you’d copy and paste the medical records into it, and those records are patient data.

          Technically you could expunge enough data to keep it in line with HIPAA, but if people are careless enough not to proofread their own paper, I doubt they would prep the data correctly either.
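A naive de-identification pass is easy to sketch, and it’s exactly the kind of thing a careless author would trust too much. This toy example (the patterns and the `redact` helper are my own illustration, not any certified HIPAA Safe Harbor implementation) masks a few obvious identifiers and shows how much still slips through:

```python
import re

# Toy redactor: masks a few obvious identifier patterns before text
# would be sent to an external service. Real de-identification under
# HIPAA Safe Harbor covers 18 identifier categories and needs far more.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSNs
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),   # dates
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # emails
]

def redact(text: str) -> str:
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

print(redact("Pt DOB 01/02/1965, contact jdoe@example.com"))
# Pt DOB [DATE], contact [EMAIL]
```

Note that names, addresses, and medical record numbers sail straight through – which is why “just scrub it first” is not a safe plan for someone who won’t even proofread their own conclusion.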

          • survivalmachine@beehaw.org

            ChatGPT has no burden to respect HIPAA in that scenario. The medical provider inputting your PHI into a cloud-based LLM is violating your HIPAA rights in that case.

            • Wirlocke@lemmy.blahaj.zone

              Just to clarify, I was implying that the medical provider would be the one sued. I didn’t think ChatGPT would be in the wrong.

              ChatGPT has just done a great job of revealing how lazy and careless people are, all over.