Yeah, kinda tired of it. We don’t even have AI yet, and here people are throwing the term around left and right and accusing everything under the sun of being generated by it.
AI isn’t magic; we’ve had AI for a looong time. AGI that surpasses humans? Not yet.
No we haven’t. We have the appearance of AI. Large language models and diffusion models are just machine learning: statistical engines built out of algorithms.
Nothing thinks, creates, cares, or knows the difference between correct and wrong.
I know enough about how LLMs work to gauge how intelligent they are. The reason I have a different opinion than you is not because you or I lack understanding of how LLMs or diffusion models work; it’s simply that my definition of AI is more “lenient” than yours.
EDIT: Arguing about which definition is more correct is pointless because it’s totally subjective. However, I think a more lenient definition of AI is more useful in this case, because with stricter definitions we will probably never have something that could be considered AI.
It’s not completely subjective. Think about it from an information theory perspective. We want a word that maximizes the amount of information conveyed, and there are many situations where you need a word that distinguishes AGI, LLMs, deep learning, reinforcement learning, pathfinding, decision trees and the like from the outputs of other computer science subfields. “AI” has historically been that word, so redefining it without a replacement means we don’t have a word for this thing we want to talk about anymore.
I refuse to replace a single commonly used word in my vocabulary with a full sentence. If anyone wants to see this changed, then offer an alternative.
There is no intelligence. There are only algorithms. Where we are now is nowhere near artificial intelligence; it is only buzzwords. If you know how this works, that should be clear. I think I was being very objective: we have statistical engines and diffusion formulas. No intelligence of any kind is being demonstrated. AI is a marketing term at this point. No original ideas, no real knowledge of past or future events, no ability to tell correct answers from false ones. Even the better models that basically try to watch the other models still aren’t that great beyond the basics of “what is the next most likely word here”.
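To make that last point concrete, here is a toy sketch of the “next most likely word” step. The vocabulary and probabilities are invented for the example; a real model computes scores like these from billions of learned weights.

```python
# Toy illustration of greedy next-token selection (not a real model).
# The vocabulary and probabilities below are invented for this example;
# an actual LLM derives them from billions of learned weights.
next_token_probs = {"cat": 0.10, "dog": 0.05, "mat": 0.70, "sat": 0.15}

def greedy_next_token(probs: dict[str, float]) -> str:
    """Return the single most likely next token (greedy decoding)."""
    return max(probs, key=probs.get)

print(greedy_next_token(next_token_probs))  # -> mat
```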
How do you define “intelligence,” precisely?
Is my dog intelligent? What about a horse or dolphin? Macaws or chimpanzees?
Human brains do a number of different things behind the scenes, and some of those things look an awful lot like AI. Do you consider each of them to be intelligence, or is part of intelligence not enough to call it intelligence?
If you don’t consider it sufficient to say that part of intelligence is itself “intelligence,” then can you at least understand that some people do apply metonymy when saying the word “intelligence?”
If I’ve convinced you to consider it, or if you already did, then can you clarify:
The thing with machine learning is that it is inexplicable, much like parts of the human brain are inexplicable. Algorithms can be explained and understood, but machine learning, and its efficacy as problem spaces get larger and it is fed more and more data, isn’t truly understood even by the people who work deeply with it. These capabilities let such models solve problems that are otherwise very difficult to solve algorithmically, similar to how we solve problems. Unless you think you have a deeper understanding than they do, how can you, as you claim, understand machine learning and its capabilities well enough to say that it is not at least similar to a part of intelligence?
Like I said, that’s where we disagree. I call the code controlling Creepers in Minecraft AI.
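Concretely, I mean decision logic along these lines. This is a made-up sketch of a finite-state machine in the spirit of a creeper, not Mojang’s actual code; the states and distance thresholds are invented.

```python
import math
from dataclasses import dataclass

# Hypothetical sketch of classic game AI: a tiny finite-state machine.
# NOT Mojang's actual creeper code; states and thresholds are invented.

@dataclass
class Mob:
    x: float
    z: float
    state: str = "WANDER"

def tick(mob: Mob, player_x: float, player_z: float) -> None:
    """One decision tick: pick a state from the distance to the player."""
    dist = math.hypot(player_x - mob.x, player_z - mob.z)
    if dist < 3.0:
        mob.state = "HISS"    # close enough: start the fuse
    elif dist < 16.0:
        mob.state = "CHASE"   # player spotted: move toward them
    else:
        mob.state = "WANDER"  # nothing nearby: amble around

creeper = Mob(x=0.0, z=0.0)
tick(creeper, player_x=10.0, player_z=0.0)
print(creeper.state)  # -> CHASE
```

That kind of hand-written rule-following is what “game AI” has meant for decades.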
While I think that is a different meaning than the current fad/bubble-driven meaning that marketing groups would have people believe AI is, it’s interesting how fast people have forgotten the old uses of the term.
Personally I try to avoid using it to describe those now, since AI in popular parlance has been extended to at least imply a lot more lately.
…then we will never have something considered AI. Making the definition more lenient doesn’t magically make something that isn’t AI into something that is.
Or we just use the definition that many people have used for ages and call the code controlling Minecraft creepers AI. It’s only recently that everyone has been getting upset about “AI” being used too much.
Machine learning is a subset of artificial intelligence, along with things like machine perception, reasoning, and planning. Like I said in a different thread, AI is a really, really broad term. It doesn’t need to actually be Jarvis to be AI. You’re thinking of general AI.
Your definition of intelligence sounds an awful lot like “a human”; stop being entityist.
AI is a broader term than you think; it goes back to the beginnings of modern computing with Alan Turing. You seem to be thinking of the movie definition of AI, not the academic one.
I am fully aware of Alan Turing’s work, and it is rather exceptional to read that formulas for diffusion models were being created in the late ’40s.
But I really don’t care that whoever wrote that Wikipedia page believes the hype. We are still in the statistical-algorithm stage. Even the wiki page lists being aware of its surroundings as a feature of AI. We do not have that.
Also, it appears that most people are still not fooled by “AI” as we have it today, meaning it does not pass even the most basic Turing test. And a lot of academics believe that is not even enough as a marker of AI; that test, too, was from the ’50s.
“Aware of its surroundings” is a pretty general phrase though. You, presumably a human, can only be aware as far as your senses enable you to be. We (humans) tend to assume that we have complete awareness of our surroundings, but how could we possibly know? If there were something out there we weren’t aware of, well, we wouldn’t be aware of it. What we know as our “surroundings” is a construct the brain invents to parse our own “raw sensor data”. An LLM, for its part, “senses” strings of tokens. That’s its whole environment; it’s all that it can comprehend. From its perspective, there’s nothing else. Basically, all I’m saying is that you seem to be taking awareness-of-surroundings to mean awareness-of-surroundings-like-a-human, when it’s much broader than that. Arguably uselessly broad, granted, but the intent of the phrase is to say that an AI should observe and react flexibly.
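To put the “its whole environment is tokens” point in concrete terms, here is a rough sketch with an invented word-level vocabulary (real tokenizers, e.g. BPE, learn subword pieces from data rather than using a fixed word list):

```python
# Loose sketch of "an LLM's whole environment is tokens".
# The vocabulary mapping here is invented; real tokenizers learn
# subword pieces from data rather than using a fixed word list.
toy_vocab = {"the": 0, "cat": 1, "sat": 2, "<unk>": 3}

def tokenize(text: str) -> list[int]:
    """Map words to integer IDs; anything unknown collapses to <unk>."""
    return [toy_vocab.get(word, toy_vocab["<unk>"]) for word in text.split()]

print(tokenize("the cat sat"))      # [0, 1, 2]
print(tokenize("the dolphin sat"))  # [0, 3, 2]; "dolphin" isn't in its world
```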
Really all “AI” is just a handwavy term for “the next step in flexible, reactive computing”. Today that happens to look like LLMs and diffusion models.