AIs don’t know that birds aren’t real, or that the pressure from being underwater for an extended period of time can sometimes cause fish to explode.
https://en.wikipedia.org/wiki/John_Bohannon#Intentionally_misleading_chocolate_study
We did the same before AI. AI is once again just putting an old problem on steroids.
That’s all it can do.
under the pseudonym Johannes Bohannon, John Bohannon …
I can see why he went into science and not, say, creative writing.
They do the same to protect doctors from malpractice lawsuits. There is a (laughably peer-reviewed) study that claims Tylenol and morphine are equally effective at pain management.
Wait, so “breaks containment” means “spreads misinformation”? What timeline is this?
It’s a screenshot of a post on bsky. Don’t read too much into the specifics of the language…
“When the text looks professional and written as a doctor writes, there’s an increase in the hallucination rates,” says Omar.
Huh, now there’s something we have in common. Trying to make sense of something a doctor wrote makes me feel like I’m hallucinating, too. Is there a class in medical school on “Illegible Handwriting,” or is it just a coincidence?
In all seriousness though, I wish I could be surprised by AI failing at this. We have entered the Misinformation Age. There’s no closing Pandora’s Box, though this time I can’t find the “hope” that’s supposed to be at the bottom of it. Society would have to turn real skeptical real fast, but I’ve met enough people to know that such a transformation is going to take time - and by “time” I mean “decades or longer.” With AI already here, we’d have to wise up immediately… but I fear that humanity isn’t mature enough for that yet.
We crossed the point where natural skepticism could’ve saved us months ago. Feedback loops of made-up sources were a problem way before AI was a thing, but now you can be five sources deep, reading through papers published by multiple different scientific journals or universities, and still not have found the actual data all the papers depend on, because there wasn’t any in the first place.
And once a single one of these papers gets published, there will be about one million SEO articles on shitty clickbait websites that, in this case, would try to sell you a home remedy for your supposed illness. So searching for any useful information is pretty much off the table.
Good. This shows plainly how LLMs don’t think, don’t truly understand anything, and have no critical ability to do introspection or fact-checking. It seems the only way to teach the world of these things is to make it impossible to ignore via absurd demonstrations like this. If the “AI” well must be poisoned in order to wake people up, I’m all for it.
Isn’t 80% of its data from Reddit anyway? It seems quite poisoned already, given the number of confidently incorrect people.
With how Reddit is monetizing itself now, I’d assume Lemmy will actually become more widely used than Reddit, since it should stay totally free.
I wonder: if we got a group together to go on Reddit and Stack Overflow, give really wrong programming answers, and vote them to the top, would Claude start sucking? They could always just revert to a previous model, and it would probably be too hard to get enough people and content to have an effect with such large training sets. Maybe if you use AI? Lol
Didn’t something similar happen to Grok, except it ended up generating a ton of CSAM material that circulated on Twitter?
Sorry for being that guy today for you, but you can just say CSAM. It stands for “Child Sexual Abuse Material”. smh my head :P
Your last sentence saves you from being pedantic. Fun stuff, RIP in peace ✌️
Classic RAS syndrome! (Redundant Acronym Syndrome)
Pardon, but what… I did say CSAM, may I ask what exactly you mean?
Did you drop your ATM machine?
Does it take small size compact CD discs?
Only if you remember your PIN number.
I think I caught an RSV virus from you.
Some people, when they see an acronym, will replace it with the words it stands for in their head. A subset of that group of people get annoyed when the sentence gets all muddled up by repeated words; in this particular case, you said ‘CSAM material’, which their brain read as ‘child sexual abuse material material’.
It isn’t a big deal, but as one of those people, I totally get the urge to point it out (I’ve gotten pretty good at looking past it but it’s still a bit of a compulsion).
They are referring to your use of “CSAM material” in your sentence.
chai tea, coffee coffee, cream cream.
Woo, woo, chugga, chugga, choo, choo
I get what you are saying. But then the issue is this turns into fucking over actual humans looking for help.
Before anyone shits on these scientists: the paper said over and over again that it was made up, and that, officially, the labs of the USS Enterprise were used to make this discovery.
The Federation would never publish fake data, so it must be true!
I give you… “The Grant Money Printing machine!”
Need a grant? Create a disease and submit a paper. Then write a grant asking for money to solve your invented disease.
If you want research grants, there is already a glitch for that. You just jam “AI” into your research and suddenly the government cares about progress now.
Wait until you hear about paper mills… They were here long before LLMs. This can only get worse… unless “we” do something. Or journals themselves do it. Not sure what or how, but with better-audited processes. Even academia itself could start by valuing reviewers’ work more.
So like a university?
ask the AI about a blue waffle
Find a way to make AI hurt billionaires and I will support it.
That’s pretty much what local ML is.
If open-weight LLMs take off, and business users realize they can just fine-tune tiny specialized models for stuff, OpenAI is toast. All of Big Tech’s bets are. It’s why they keep fanning the “AGI” lie, why they’re pushing for regulation so hard, and why they’re shoving LLMs where they just don’t fit while harping on safety.
Ok, but who is making those “open weight” models, though? Individuals don’t really have the resources to run these huge scraping operations, so they’re often still corporate releases with fake open-source branding.
Corporate, for now.
Thing is, once they’re out there, they’re free utilities, and they can’t be taken back.
Also, they don’t really need to aggressively scrape the internet. There are many good public datasets now, and the Chinese are already making excellent use of synthetic dataset generation on (relative) shoestring budgets. Also, several nations and other large organizations are already funding open model efforts, but they just haven’t had the opportunity to catch up yet.
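For a concrete picture of what synthetic dataset generation even means, here’s a minimal sketch, assuming the Hugging Face `transformers` library; the model name, topics, and prompt are hypothetical placeholders, not anyone’s actual pipeline:

```python
# Hedged sketch: use an existing instruct model to generate synthetic
# training records for a smaller model. Model name and prompts are
# illustrative placeholders only.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

seed_topics = ["photosynthesis", "TCP handshakes", "sourdough starters"]
synthetic_examples = []
for topic in seed_topics:
    prompt = f"Write one question and a correct answer about {topic}.\n"
    out = generator(prompt, max_new_tokens=128, do_sample=True)
    # Each pipeline call returns a list of dicts with "generated_text".
    synthetic_examples.append({"topic": topic, "text": out[0]["generated_text"]})

print(len(synthetic_examples), "synthetic records generated")
```

Real labs obviously do this at vastly larger scale with heavy filtering, but the core loop really is about this cheap.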
They come from corporate, but you can at least run them without any kind of analytics or censorship, as well as fine-tune them on consumer hardware.
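To show how low the bar is, here’s a minimal sketch of local LoRA fine-tuning, assuming the Hugging Face `transformers` and `peft` libraries are installed; the model name is just an example small open-weights checkpoint, not a recommendation:

```python
# Hedged sketch: attach LoRA adapters to a small open-weights model so it
# can be fine-tuned on a single consumer GPU. Assumes `transformers` and
# `peft`; the model name is an example placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen2.5-0.5B"  # example small open-weights model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA trains a few million adapter weights instead of the full model,
# which is what makes this feasible on consumer hardware.
lora_config = LoraConfig(
    task_type="CAUSAL_LM", r=8, lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total params
```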
Consumers aren’t in the best position right now though, especially with the price hikes.
There are huge public datasets that are often used for pretraining. Common Crawl and C4 are probably the most prominent, but there are others.
There are also big public datasets available for fine-tuning and instruction tuning.
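For anyone curious, pulling one of those public corpora looks roughly like this, assuming the Hugging Face `datasets` library; `allenai/c4` is the currently hosted copy of C4:

```python
# Hedged sketch: stream a public pretraining corpus (C4, derived from
# Common Crawl) without downloading the multi-terabyte dataset up front.
from datasets import load_dataset

c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)
for i, example in enumerate(c4):
    print(example["text"][:200])  # each record carries a "text" field
    if i >= 2:
        break
```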
The open weight models are getting pretty powerful, thanks to some Chinese labs.
Pretty much is. They’re spending hundreds of billions on a dream (not having to pay workers) that doesn’t work, until they repurpose those datacentres to remove personal computing.
Fortunately datacentres are by design concentrated in space and therefore rather vulnerable.
I wonder if there’s a prompt that you could use to make it explode the data centers
Wouldn’t humans do the same thing if someone literally writes lies on the internet?
If it were convincing lies made to deceive, then sure. But in this case the papers were deliberately made to be immediately and obviously fake to anyone actually reading them.
So I guess the question would be “would humans do the same thing if someone literally writes obvious jokes on the internet?”
More shockingly, three Indian researchers published a research paper that cited the preprint on the fake disease in Cureus, a peer-reviewed journal published by Springer. It was subsequently retracted.
lol
Looks at Flat-Earthers
Yes they would
https://en.wikipedia.org/wiki/John_Bohannon#Intentionally_misleading_chocolate_study
Yes, people would do exactly the same, because nobody reads anything but the headline of a paper. Even journalists don’t.
AI didn’t invent the problem, but it put the problem on steroids.
Even journalists don’t
Not sure what point you’re making here. I wouldn’t expect most journalists to be great at reading the details of papers like this…
Research and fact checking is what separates journalists from hacks.
“Journalist” implies factual information, not science fiction. If someone writes a “news” story about the magic land of Xanth because they can’t tell the difference between a Piers Anthony novel and a scientific study, it’s not Piers Anthony’s fault for being too “tricky”. Vetting sources is the one thing we need journalists for. If they don’t vet their sources, their work is without merit.
Reading at least the methodology section of a paper and googling whether the researchers and the institute exist is the bare minimum of what a decent journalist should do.
If they can’t do that, then there’s no advantage to a journalist over some random person posting on Facebook. Even YouTubers usually vet their sources better.
That’s how we ended up with modern-day anti-vaxxers, but at least with humans you can strangle the dude responsible. LLMs function like modern idols that their makers hide behind to get away with it.
Absolutely! Once false information is out there, it can’t be retracted even if the article itself is retracted. “Bumblebees can’t fly” and “vaccines cause autism” are good examples of that. The only difference I can imagine is that LLMs have a much larger reach and may spread shit faster.
But the Lancet did not retract the Wakefield paper for 12 years. The Lancet should have been shut down for that.
This. Here’s a comparable case where human journalists did exactly what LLMs are doing now: https://en.wikipedia.org/wiki/John_Bohannon#Intentionally_misleading_chocolate_study
The difference is the scale.
wym bumblebees can’t fly? I’ve seen them myself
There was a publication, maybe in German, not sure, which stated that bumblebees can’t fly due to their aerodynamics; I think it assumed that a bumblebee was a fixed-wing aircraft, which it obviously isn’t. Or maybe it was a hoax to prove that hoaxes spread and can’t be retracted. Not sure. I think it’s quite old actually, dating back to the 1920s or 30s.
I don’t have a source, but I’ve always heard it as “according to everything we know about aerodynamics, bumblebees shouldn’t be able to fly, but they do anyway.” People use it as motivation, or to justify ignoring proven science.
My friends and I did that in high school. Kinda. We made up new words for “awesome” to get people to start saying them. We started with “bumpenis”, as in “that song is bumpenis”. Really we were just getting people to say bum penis. It worked, too. We are all just walking, talking LLMs.
That’s so fetch!
Stop trying to make fetch happen.
Did someone say fetch???
Pussy on the chainwax.
But, it IS fetch!
It’s streets ahead.
YOLO
Over my fetch body
So let me tell you all about this paper talking about vaccines and autism. It’ll change the world.
My first thought as well. Artificial intelligence is not better or worse than human stupidity. At least I haven’t seen any LLM trying to convince me the earth is flat (yet).
Not to you, although I would bet it has done so to someone. The main issue, though, is that if you asked an LLM to write arguments for a flat earth, it would do so. Convincingly and insistently, without ever questioning or critically analyzing why. Ask it to compare and balance arguments both ways, and it will do so as if both positions were equally real and valid.
It has no notion of reality and no convictions of its own.
It will also hallucinate fake papers and quote people that don’t exist to make its argument.
PS: most poignantly, the point of the paper is that it says, over and over, “this information is false, this disease doesn’t exist, all of this is made up”. Unlike the other problematic papers quoted in this comment thread, which were published with conviction by the authors and only later retracted. Yet the LLM is unable to parse that tidbit of information. It simply is not intelligent, not even as intelligent as the most stupid humans. You can tell it “the following sentence is false”, and it is not smart enough to pick up on that meaning.
So, same as (some) people.
See the difference between “some people” and ALL LLMs.
Where do you get all LLMs from?
Not unless you can find some people that believe Starfleet Academy is a real place and just skip right over all the times the paper literally overtly states it’s made up.
You doubt there will be people that will? Have you heard of scientologists? Have you heard of flat earthers? Antivaxxers? All of them basing their core ideals on stuff explicitly marked as bogus.
What about that paper that showed the world how wolves have a strict alpha-male based society?
Wasn’t that just a shit study? Not specifically misinformation like Wakefield’s “study”.
Yes, correct.
Why am I not surprised? >.>
I imagine this is how it’ll work for stage 2 of AI enshittification. They’ll just add a bunch of garbage upstream about a brand or product marketers are paying to push, and it’ll infect a bunch of outputs downstream.
tbh I don’t get how its output isn’t already filled with brand names