OpenAI spends about $700,000 a day just to keep ChatGPT running, and that figure does not include its other AI products like GPT-4 and DALL-E 2. Right now it is pulling through only because of Microsoft's $10 billion in funding.
Ah, yes.
Remind me again how that “revolution of human mobility”, the Segway, is doing now…
Or how wonderful every single one of the announcements of breakthroughs in fusion power generation has turned out to be…
Or how the safest Operating System ever, Windows 7, turned out in terms of security…
Or how Bitcoin has revolutionized how people pay each other for stuff…
Some of us have seen lots of hype trains go by over the years, always with the same format and almost all of them originating from exactly the same subset of people as the AI one, and we recognize the sales-speak from greedy fuckers, designed to excite ignorant, naive fanboys of such bullshit choo-choo trains when they pull into the station.
Rational people who are not driven by “personal profit maximization on the backs of suckers” will not use sales-speak and refer to anything brand new as “the most incredible creation of humanity” (it’s way too early to tell), or deem any and all criticism of it as “shitting on it”.
“Completely unrelated thing X didn’t live up to its hype, therefore thing Y must also suck” is not particularly sound logic for shitting on something.
Funny how, out of all the elements where it resonates with historical events (“people promoting it”, “bleeding edge tech”, “style of messaging”, “extraordinary claims without extraordinary proof” and more), you ended up making the kind of simplistic conclusion that a young child might make.
What are you looking for here? Do you want it to be self-aware, and anything less than that is hot garbage? The latest advances in AI have many uses. Sure, Bitcoin was overhyped and so is AI, but Bitcoin was always a solution in search of a problem. AI (as in AGI) literally offers a solution to all problems (or maybe the end of humans, but hopefully not, hah). The current tech, though, is widely useful. With GPT-4 and GitHub Copilot, I can write good working code at multiple times my normal speed. It’s not going to replace me as an engineer yet, but it can enhance my productivity by a huge amount. I’ve heard similar from many others in different jobs.
AI, even in its current state, is one of the most incredible creations of humanity.
If there were a Nobel Prize for math and computer science, the whole field would deserve one next year. It would probably go to a number of different people who contributed to the current methodologies.
You cannot compare NFTs to AI. Open Nature or Science (the scientific journals) right now and you’ll see how big the impact of AI is.
You can start your research here: https://www.deepmind.com/research/highlighted-research/alphafold . Another piece of Nobel Prize material.
I actually have some domain expertise, so excuse me if I don’t just eat up that overexcited, ignorant fanboy pap and pamphlet from one of the very companies trying to profit from such things.
AGI (Artificial General Intelligence, i.e. a “thinking machine”) would indeed be that “incredible creation of humanity”, but that’s not this shit. This shit is a pattern-matching and pattern-reassembly engine: a technologically evolved parrot capable of producing outputs that mimic what was present in its training sets to such a level that it even parrots associations that were present in those training sets (i.e. certain questions get certain answers, only the LLM doesn’t understand them as “questions” and “answers”, just as textual combinations).
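To make that concrete, here is a deliberately dumb sketch of “generating text purely from patterns in the training data”: a toy word-level bigram model. It is nowhere near how a real LLM is built, but it shows how statistically plausible “answers” can fall out of pattern statistics with no notion of meaning at all.

```python
# Toy illustration (not how a real LLM works internally): a word-level
# bigram model that "answers" purely by replaying statistical patterns
# from its training text, with no notion of questions or meaning.
import random
from collections import defaultdict

training_text = "what is the capital of france ? the capital of france is paris ."

# Count which word tends to follow which word in the training data.
counts = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    counts[current_word].append(next_word)

def generate(prompt_word, length=8):
    """Extend the prompt by repeatedly sampling a likely next word."""
    out = [prompt_word]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("capital"))  # e.g. "capital of france is paris ."
```

A real LLM replaces the lookup table with a transformer over billions of parameters, but the training objective, predicting the next token, is the same kind of pattern completion.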
Insufficiently intelligent people with no training in the hard sciences often confuse such perfect parroting of what intelligent beings previously produced with actually having intelligence, which is half hilarious and half sad.

Edit: that was actually unfair, so let me put things better: some reactions to the hype around this AI remind me of how my grandmother, an illiterate old lady from the countryside who had been very poor most of her life, used to get very confused when she saw the same actor in multiple soap operas. The whole concept of actors and acting was beyond her life experience, so when I was a kid and she had moved to live with us in the “big city”, she took what she saw on TV at face value. I suspect a lot of people who have no previous understanding of the domain are going down the same route of reasoning on AI as my nana did on soap operas, and end up confusing the LLM’s impeccable imitation of human language use with there actually being a human-like intelligence behind it, just like my nana confused the “living truthfully in imaginary circumstances” of good actors with the real living it imitated.
If you have domain expertise you would agree with us that, despite not being AGI, deep learning, reinforcement learning and generative AI as they exist now are an incredible creation of humanity that, among other things, is already capable of:
solving long-standing scientific challenges such as protein folding,
taking independent decisions and developing strategies that, on specific tasks, surpass human experts,
mapping human languages and artistic creations into high-dimensional vector spaces where concepts and relationships are retained as properties of the space, allowing mathematical and statistical inference and the generation of original images and text (something for which, a few decades ago, not many would have guessed such a manageable mathematical representation could even exist); a toy sketch of this is just below.
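On that last point, a minimal hand-made sketch of how a relationship like “king is to man as queen is to woman” becomes simple arithmetic in such a space. The 3-dimensional vectors here are made-up illustration numbers; real models learn hundreds of dimensions from data.

```python
# Hand-made 3-d "embeddings" (dimensions: royalty, masculinity, humanness).
# Real models learn hundreds of dimensions from data; these toy numbers are
# only meant to show how relationships become arithmetic in such a space.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.9, 1.0]),
    "queen": np.array([0.9, 0.1, 1.0]),
    "man":   np.array([0.1, 0.9, 1.0]),
    "woman": np.array([0.1, 0.1, 1.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The classic analogy: king - man + woman lands closest to queen.
target = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda w: cosine(emb[w], target))
print(best)  # queen
```

With real learned embeddings (word2vec, GloVe, or the internal representations of an LLM) the same arithmetic works across a vocabulary of tens of thousands of words, which is what the point above refers to.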
On top of this we take for granted all the already existing applications, such as image recognition, translation, text classification…
You would also agree with us that the potential of current AI methodologies in all fields of science and technology is already enormous, as demonstrated by AlphaFold for instance. We just need a few more years to see even more groundbreaking applications of the existing methodologies, while we wait for even more powerful techniques or, why stop dreaming, AGI in a few decades.
What it’s doing is just a natural extension of what was done with basic neural networks back in the 90s, when they started being used to recognize handwritten postal codes on mail envelopes.
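For a sense of scale, that kind of digit recognition is now a few lines with off-the-shelf tools. A rough modern stand-in (scikit-learn’s small built-in 8x8 digits dataset, not the original ZIP-code system) looks something like this:

```python
# A minimal modern stand-in for that 90s work: a small multilayer perceptron
# classifying 8x8 handwritten digits (scikit-learn's built-in dataset), not
# the original ZIP-code system, just the same basic idea at toy scale.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One small hidden layer is enough for high accuracy on this small, easy dataset.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```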
This is why I disagree that this specific moment in the development of AI is “an incredible creation of humanity”. Maybe the domain as a whole will turn out to be as groundbreaking as computers, but the idea that what’s being done right now is, by itself, that groundbreaking is ignorant, premature or both.
As for the rest, I actually studied Physics at degree level, and with it complex mathematics, and your point #3 is absolute total bollocks.
I was actually taking the time to share with you some very basic resources for you to learn something about basic stuff such as latent spaces, embeddings, attention mechanisms and Markov decision processes, but your attitude really made me change my mind.
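For anyone else reading along: the core of the attention mechanism mentioned above fits in a few lines. This is just the bare softmax(QK^T/sqrt(d))V operation, not a full transformer, and the random matrices below stand in for learned projections of real tokens.

```python
# Minimal scaled dot-product attention (the core of the "attention mechanism"):
# softmax(Q K^T / sqrt(d)) V, nothing more. A real transformer stacks many of
# these heads with learned projections around them.
import numpy as np

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # how much each query matches each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))  # 3 query tokens, dimension 4
K = rng.standard_normal((5, 4))  # 5 key tokens
V = rng.standard_normal((5, 4))  # one value vector per key
print(attention(Q, K, V).shape)  # (3, 4)
```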
It’s fine that you clearly don’t have the domain knowledge you claim, but your rudeness is really annoying. Enjoy your life with your achievement of complex math at degree level, and learn how to speak to people.
BTW, neural networks, even if a few decades old, are an incredible achievement of humanity: even knowing how to roughly simulate a biological neural network requires some understanding of the brain, non-linear math and the existence of computers, and each of those is an astonishing achievement of humanity in its own right.
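The non-linear math in question is not exotic: a single artificial neuron, the unit those networks stack by the millions, is just a weighted sum pushed through a non-linear function. The numbers below are arbitrary illustration values, not learned weights.

```python
# The basic unit being "roughly simulated": a single artificial neuron,
# i.e. a weighted sum of inputs passed through a non-linear function.
# This is a cartoon of a biological neuron, not a faithful model of one.
import numpy as np

def neuron(inputs, weights, bias):
    activation = np.dot(weights, inputs) + bias   # weighted sum of inputs
    return 1.0 / (1.0 + np.exp(-activation))      # non-linear "firing" response

x = np.array([0.2, 0.8, -0.5])       # example input signals
w = np.array([1.5, -2.0, 0.7])       # connection strengths
print(neuron(x, w, bias=0.1))        # output between 0 and 1
```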
I have in fact been following that stuff, thank you very much.