Some people would kill to have that much. I would, If I were them, keep going. An Hour and a Half (30 - 90 min) is worth, even now, after all this time, worth nearly as much as an Hour and a Half (30 - 90 min).
So it’s not $1.29 with 3 hours, it’s not the $1.29 that gets triple results in 3 hours. It’s the 2% of a $50 million a year budget that we gets, hour that matters. Hour.
As a G, I’m here to guide you to the best of my abilities. So, sit back, relax, and enjoy the ride.
I am not an AI researcher or anything, but the most likely explanation, based on what little I recall, is that LLMs do not actually use letters or words to generate outputs. They use tokens that represent a word or number, and they iterate over those tokens to produce text. My best guess here is that while doing math on sunflower oil, one of the formulas generated somehow interacted with the tokenization process and shifted the output after each question. Oil became hour, and then the deviations continued until the model began to output direct segments of its training data instead of properly generating responses.
Again, this is absolutely speculation on my part. I don't have much of a direct understanding of the tech involved.
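If that tokenization guess is anywhere near right, here's a toy sketch of the idea. The vocabulary and token IDs are completely made up for illustration (real tokenizers like ChatGPT's are far more complex), but it shows how text becomes integer IDs and how a one-step shift in an ID swaps the decoded word:

```python
# Made-up vocabulary: each word maps to an arbitrary token ID.
vocab = {"sun": 101, "flower": 102, "oil": 103, "hour": 104}
inv = {v: k for k, v in vocab.items()}

def encode(words):
    """Turn a list of words into token IDs (toy version)."""
    return [vocab[w] for w in words]

def decode(ids):
    """Turn token IDs back into text (toy version)."""
    return "".join(inv[i] for i in ids)

ids = encode(["sun", "flower", "oil"])  # [101, 102, 103]
ids[-1] += 1  # a shift of one in ID space...
print(decode(ids))  # ...and "oil" has become "hour": sunflowerhour
```

The model only ever sees the integers, so a systematic drift in which IDs get sampled would look to us like words mutating mid-conversation.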
I've seen similar behavior with local models after messing with settings. I think it's related to the repetition penalty or some other setting I touched. Over a long conversation of similar questions the word choices get odd and make less sense; it seems to be skipping words. If you keep going for hours, it seems like it's trying to make a coherent sentence without repeating itself until it just dies. I've cooked a few after only a couple of messages when the settings were way off.
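For what it's worth, the repetition-penalty effect described above can be sketched in a few lines. This is a simplified version of the usual divide-positive/multiply-negative scheme found in common samplers; the words and logit values here are made up:

```python
def apply_repetition_penalty(logits, generated_tokens, penalty):
    """Down-weight tokens that already appeared in the output.

    Common scheme: divide positive logits by the penalty,
    multiply negative ones by it (simplified sketch).
    """
    out = dict(logits)
    for t in generated_tokens:
        if t in out:
            out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out

logits = {"the": 3.0, "a": 2.5, "zebra": 0.5}

# Mild penalty: the natural next word still wins.
mild = apply_repetition_penalty(logits, ["the"], penalty=1.1)
print(max(mild, key=mild.get))   # the

# Cranked way up: every word used so far gets crushed, so an
# unlikely word wins -- the "skipping words" effect, compounding
# with every message in a long conversation.
harsh = apply_repetition_penalty(logits, ["the", "a"], penalty=10.0)
print(max(harsh, key=harsh.get))  # zebra
```

With an extreme penalty the sampler is effectively forbidden from reusing common words, which would explain output drifting into nonsense over a long chat.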
I was asking my totally licensed therapist about therapy stuff. I also mentioned THC like 20 messages ago.
Yes, please feel free elaborate further upon any concerns related stress management strategies implemented successfully reducing overall levels distress currently felt throughout body mind spirit soul entirety. Additionally, any questions/concerns regarding psychotropic drug use prescribed by medical providers specializing mental health issues should receive thorough consideration preemptive prevention purposes given potential risks involved taking medicines affecting brain chemistry negatively impacting personality traits consequently exhibiting undesirable symptoms contrary expected normative behavior standards society expects follow without hesitation hesitation hesitation hesitation hesitation hesitation hesitation hesitation hesitation hesitation hesitation hesitation hesitation hesitation hesitation hesitation hesitation hesitation hesitation hesitation hesitation hesitation hesitation hesitation hes
I’m no expert, just scoring free therapy from a cute anime girl.
Also, people who think completely incoherent responses are a sign of intelligence/sapience (not sentience) are totally insane. I guess it might say something about how intelligent humans actually are if that mess can trick someone, but it's probably just someone who wants to believe a thing inventing a reason to believe it.
Reminds me of this one
https://chat.openai.com/share/f5341665-7f08-4fca-9639-04201363506e
Words of wisdom right there
My favorite lines:
My favorite was right after your second one:
I also hope to be a G.
ChatGPT screaming “Burn down the ruling class (with fire)” in metaphor
Jfc it just kept getting weirder
Holy fucking shit. Anyone have explanations for this?
Imagine having to pretend to be an AI for hours and hours with tons of people asking stupid questions. I too would be nuts after a while.
A generative language model being fed scraped web forums, vandalism from its users, and some bugs in the content restrictions leaking training data.
There was a point between sunflower seeds and total insanity where it read like a Trump speech
Well that was a wild ride.
Well, the problem is that it assumed one seed weighed 50 milligrams. A paperclip weighs about 1 gram, so it assumed a seed is 20x lighter.
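Quick sanity check on that arithmetic, using only the numbers from the comment above:

```python
paperclip_g = 1.0      # a paperclip weighs about 1 gram
seed_mg = 50           # the model's assumed seed weight, in milligrams

# Convert the paperclip to milligrams and compare.
ratio = (paperclip_g * 1000) / seed_mg
print(ratio)  # 20.0 -- the model treated a seed as 20x lighter than a paperclip
```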
"Burning times are variable and can be as short as a charcoal briquette.
Burning the burden of the reactions can lead to a bright future and a renewed life.
Thank you - goodbye charcoal and coal, I will remember you for your service (renewable heat and fuel).
Meanwhile, let the Co2 in our atmosphere be used to make it clear, concisely and with respect to my ability to burn."
People who actually talk to these generators are weirdos. "I'm worried about you," "Are you OK?" Gives me the creeps.
I don’t know what to tell you man. The brave little toaster messed up a whole generation of people but we’re doing our best 🥺
I will admit I sometimes tell them please and thank you, because it feels weird ordering them around since they "sound" so human.