I asked it how to use “etc” with three dots in an example, followed by a brand-new sentence starting with a capital letter.
It told me: “In standard usage, ‘etc.’ is typically followed by three dots and then continues with a lowercase letter. If you are starting a new sentence, you do not add additional dots after ‘etc.’”
Then I begged it to give me an example of that rule. One such as:
I love swimming, soccer, etc… I also love eating animals.
And it just couldn’t do that. It kept typing four dots, or a single dot, or no dots at all, and it couldn’t even recognize what it had typed, every single time. Lol, try it yourself.
So I can imagine there might be several things going on here, including the way you prompt it. Dissecting exactly what’s happening would require more info and more experimentation.
For the record, using three periods after “etc.” is considered incorrect in formal writing. Grammar books are pretty unanimous in saying you should use only one period (“etc.”). If your prompt is asking it to “correctly” do something that its training data says is “incorrect”, I could see that causing problems.
On the flip side, ChatGPT isn’t designed to use/enforce “proper” grammar. It’s simply regurgitating patterns based on how people use it in the real world (again, based on the training data). If the training data has lots of inconsistencies and counter-examples, it seems logical that they might show up in the output, too.
And finally, it could just be how well you design the input prompt. In my own trial (also using the free 3.5 model), it was very consistent in using it properly, even though I had suggested it do otherwise:
Input:
Tell me how to use “etc” properly in a sentence, punctuating it with three dots. Give me multiple examples that show how to correctly use it at the end of a sentence and in the middle of a sentence.
Output:
Certainly! “Etc.” is an abbreviation for “et cetera,” which means “and so on” or “and others.” Here are examples of how to use it properly:
At the end of a sentence:
- I enjoy various outdoor activities: hiking, biking, camping, etc.
- She has a wide range of skills: coding, graphic design, project management, etc.
- The conference covered topics like artificial intelligence, machine learning, cybersecurity, etc.
In the middle of a sentence:
- The restaurant offers a variety of dishes, such as pizza, pasta, salads, etc., to cater to different preferences.
- Our team includes experts in software development, data analysis, user experience design, etc., ensuring a well-rounded approach to projects.
- The museum features art from different periods—Renaissance, Baroque, Romantic, etc.—providing a comprehensive cultural experience.
Something I have noticed in GPT-3.5 is that something on the site itself, not necessarily GPT’s output, seems to change the text that GPT provides. I had an issue where I asked it for the links to the websites it was sourcing. I could see a longer URL pop up for just a second before it got shortened to underlined blue text that was not a link. I asked it multiple times, in multiple ways, to give me the URL again. Each time, the URL showed up for just a second before getting converted to a weird, non-URL format.
I finally got the URLs it was trying to provide when I told it, “Please provide the raw HTML of your most recent response.” Then I looked at the “a href” sections and found the URLs it had been trying to provide.
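If you want to automate that “look at the ‘a href’ sections” trick, a small stdlib-only helper can pull the href targets out of raw HTML. The sample HTML string below is made up for illustration:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href target of every <a> tag in an HTML document."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = '<p>See <a href="https://example.com/docs">the docs</a>.</p>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # -> ['https://example.com/docs']
```

Pasting the “raw HTML” response into `html` and reading `parser.links` saves squinting at the markup by hand.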
I wonder if the site itself and not GPT is changing three or more consecutive periods to one as well as some of the other issues you have reported.
It seems like ChatGPT can write, but from what I’ve understood about the technology, it always sounded more like it was taught to “speak.” Not with sounds, obviously, but the sentences are built without it necessarily knowing all the characters that make them up, like children do with speech before learning to write.
I’m not a researcher on the topic, so I could’ve interpreted something wrong. I’d like to see Cunningham’s law proven right, if I did!
AI is basically just “heuristically, sentences usually look like this, and when this word is used in this context, the next word is usually…” And so on.
There is no “thinking” behind ChatGPT, no real understanding of the topics it’s provided. Just a computer that produces sentences in a way that makes humans attempt to humanize it.
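That “the next word is usually…” idea can be caricatured as a toy bigram model. This is purely illustrative (real LLMs use neural networks over tokens, not word counts), but it shows the “continue with a statistically likely word” spirit:

```python
from collections import defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# corpus, then pick the most common continuation. A caricature of
# next-word prediction, not how a real LLM works.
corpus = "i love swimming i love soccer i love eating".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def next_word(word):
    """Return the most common continuation seen in the corpus, or None."""
    options = follows.get(word)
    if not options:
        return None
    return max(set(options), key=options.count)

print(next_word("i"))  # -> 'love' (it always followed "i" in the corpus)
```

Scale the corpus up to most of the internet and the “words” down to subword tokens, and you get the rough intuition behind the pattern-regurgitation view.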
You can make “thinking” by having it generate a thought process. This is often used to build agents with LangChain that are significantly more intelligent than the base models, especially when given the capability to use tools.
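The “generate a thought process, then use tools” loop can be sketched without any framework. The model below is a hard-coded stub standing in for an LLM, and the Thought/Action/Observation prompt format is a simplified ReAct-style convention, not LangChain’s actual API:

```python
def fake_model(prompt):
    """Stub LLM: 'decides' to use the calculator, then answers."""
    if "Observation:" not in prompt:
        return "Thought: I should compute this.\nAction: calculator: 2+2"
    return "Thought: I have the result.\nFinal Answer: 4"

# Demo only: never eval() untrusted input in real code.
TOOLS = {"calculator": lambda expr: str(eval(expr))}

def run_agent(question, model, max_steps=5):
    """Loop: ask the model, run any tool it requests, feed the result back."""
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        reply = model(prompt)
        for line in reply.splitlines():
            if line.startswith("Final Answer:"):
                return line.removeprefix("Final Answer:").strip()
            if line.startswith("Action:"):
                tool, arg = line.removeprefix("Action:").split(":", 1)
                result = TOOLS[tool.strip()](arg.strip())
                prompt += f"\n{reply}\nObservation: {result}"
    return None

print(run_agent("What is 2+2?", fake_model))  # -> 4
```

Frameworks like LangChain mostly add robust parsing, prompt templates, and a library of tools around this same loop.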
It’s like you’re typing from a phone with no autocorrect.
Well, at least it can handle contractions!
Data has left the chat
Considering you asked it to follow a rule that does not exist, it’s unsurprising it can’t do it “correctly”
Well, it’s even worse if it just totally made that freaking rule up out of its algorithm xD
You are using GPT-3.5, right? It is imo much worse than the paid version.
You’re correct. Right now, 4 is the paid one and the free one is 3.5. It’s dumb as a bag of dicks.
4 is free if you use it through Bing.
Where’s the link to it? I logged into my Microsoft account and tried, but couldn’t find it. It might be banned in my country. Thank you.
Just go to bing.com and you’ll see “Bing Chat” or “Copilot”, which run GPT-4.
It’s not that dumb; it’s still pretty good at writing code. And if you want 4 cheap, get an API key and use BetterChatGPT.
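If you go the API-key route, calling the chat completions endpoint directly is only a few lines of stdlib Python. This is a minimal sketch (endpoint and payload shape per OpenAI’s public REST docs; the model name here is an assumption you may need to update):

```python
import json
import os
import urllib.request

# Chat completions endpoint from OpenAI's public REST documentation.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(prompt, model="gpt-4"):
    """Build the JSON body for a single-turn chat request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt, api_key):
    """Send one prompt and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    key = os.environ.get("OPENAI_API_KEY")  # only calls out if a key is set
    if key:
        print(ask("Say hello.", key))
```

Front-ends like BetterChatGPT are essentially a nicer UI over this same request, with your key stored locally.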
Surprised nobody mentioned this: most of these models use tokenization; they group text into chunks of symbols like “ea”, “the”, and “anti”. They don’t pick which key to press for the text, they pick which bunch of keys to press. These chunks are called tokens. I believe there are tokens it just can’t output, or tokens that are extremely unlikely. I could imagine that “etc.” and “…” are tokens with relatively high probabilities, but perhaps “etc…” doesn’t break into a nice set of them (or the tokens it can be broken into all have extremely low weights for the model).