(Go stick your head in a pig!)
Come to think of it, “share and enjoy” is exactly the way I would expect an AI-generated YouTube video to end.
I think LLMs work just fine if you know how to use them and their limitations. Imo, they aren’t ready for general use without a lecture in how they work and what to expect from them.
Personal computers were enthusiast devices in the 70s and 80s and users had to know how to write code to use them. It took a bit of time for their interfaces to become friendly for the general population. The internet in the early 90s was the same. It is a shame tech companies today want to push this AI down the throats of everyone without first figuring out what and how it should actually be used.
I think it would be a shame if we discard all LLMs today as they do have practical uses. We just shouldn’t overuse them where they don’t belong.
Got a new laptop recently. Copilot pops up, so I asked it how to permanently disable Copilot.
It gave me a wordy non-answer, along with a “fun fact” about my local area — totally relevant and not creepy at all.
Then, after I demanded it tell me how to permanently disable itself, Copilot gave me a completely wrong answer.
After specifying the “app or service” I’m using (Windows, you fucking clueless piece of shit), it then gave me a half-baked answer that called commands which weren’t installed by default.
I then used DuckDuckGo to figure out how to install the configuration tool Copilot said to use but that Windows had decided to hide from me.
Good job completely wasting my time, you ai-loving fucks at Microsoft. I don’t need new reasons to nuke your shitty software and install Linux, but now I have them. If Linux had native vst3 support, I wouldn’t have even booted into Windows.
Edit: Stranger in a Strange Land is a great book, and since it’s the sci-fi novel that formed part of the backdrop of hippie culture, I wouldn’t have expected Musk to have read it.
Would KX Studio’s “Carla” help with VST3?
I wouldn’t have expected Musk to have read it.
Who said he actually did? The term “grok” is listed in The Jargon File / The New Hacker’s Dictionary. Musk probably read it long ago. …Like every proper geek. Nowadays, every time he drops an epic meme (as kids say these days), it’s a hazily remembered reference to something nerdy from ages gone by, and it just demonstrates he has absolutely no idea about the context.
This whole series of events feels very Hitchhiker’s Guide.
Edit: Stranger in a Strange Land is a great book
Not going to lie, it was one of my least favorite Sci-Fi novels. Felt entirely too Just-So. The characters - particularly Heinlein’s self-insert Jubal Harshaw - just came across as vapid, bigoted, and annoying. And so much of the book felt like a climax to an apocalypse everyone deserved (but not in a Douglas Adams funny way, just a deeply nihilistic “Everyone sucks and I hate it here” kind of way).
Windows 11 is fucking horrible.
The only reason I still have a Windows machine is for PC VR gaming, and even that minimal interaction is annoying. Every major update seems to consist exclusively of MS further enshittifying their OS. It’s an hour of research and work to remove whatever new garbage they’ve added.
Hopefully SteamOS gets released soon and then I can just forget about Windows. Except actually I can’t, because I need it for work, but my personal machine doesn’t require it.
Wait, Musk thinks grok comes from The Hitchhiker’s Guide? What a moron
People were saying, “I grok Spock” long before Douglas Adams used the word.
Yeah this whole time I thought he was a at least a Heinlein fan…
You assume he reads
He apparently doesn’t even watch movies. See my other reply in this comment thread.
Considering Heinlein had a lot of sympathy/support for fascism, it makes sense Musk would be a fan.
Are you just saying this because of Starship Troopers? He explored a lot of ideas through fiction but I don’t think you can call him a supporter of fascism because of that one book, unless you know something else about his personal life. Of all his books I’ve read I think the only real ideological through line is “horny old guy objectifies women and that’s OK”.
Like Stranger in a Strange Land has Space Jesus starting a revolutionary free love commune, I don’t think that’s really in line with fascism.
Agreed entirely. Heinlein is one of my top 5 sci-fi authors and you are correct. Each of his works comes in from a different angle and they’re often quite interesting. “Sexually-available pretty young women” is a consistent theme but never the point of the story. Even in _I Will Fear No Evil_, wherein the old man main character becomes a beautiful young woman, her sex appeal is present throughout but not the point at all.
Yeah, I can’t even hold the pretty young women thing against him, because he clearly thinks of them as people instead of objects. It’s just kind of a funny quirk, like Tarantino’s creepy foot fetish. Calm down, Grandpa, and finish the very thoughtful and nuanced story.
And his love of Ronald Reagan, his love of libertarian capitalism, his love of noted racist Ayn Rand’s work, and constant sexism towards women in almost every novel.
Like every Sci-Fi writer has their issues, but Heinlein is far from any form of progressive.
Like Stranger in a Strange Land has Space Jesus starting a revolutionary free love commune, I don’t think that’s really in line with fascism.
No, but it abandons plot for gratuitous sex scenes with a Jesus figure whom all women find hot, so it’s not exactly the best thing for “hey, abandon the patriarchy.”
Libertarianism is not Fascism. Fascism doesn’t mean “right wing politics I dislike”. It doesn’t mean patriarchy or sexism or even racism (which Heinlein was provably not). He was a radical individualist, and his ideology was absolutely at odds with ethnonationalism and authoritarianism, the two major hallmarks of fascism. He was progressive about a great many things including sexuality, religion, and individual freedoms. You can criticize him for being a capitalist, but don’t conflate it with fascism.
It really shouldn’t be all that surprising.
https://futurism.com/the-byte/elon-musk-main-character-blade-runner
You don’t want your doors to let out moans of excitement every time you walk through them? You don’t want a manically depressed butler bot?
Here I am with a brain the size of a planet and they ask me to pick up a piece of paper. Call that job satisfaction? I don’t.
Weren’t Sirius Cybernetics Corporation also the first against the wall when the revolution came?
Bunch of mindless jerks…
I’d rather listen to Vogon poetry than use Elon’s AI.
“They’re the same picture.”
Vogon poetry isn’t that bad
I mean… Marvin is highly entertaining.
But no, I wouldn’t want him actually around me.
“It’s the people you meet in this job who really get you down. The best conversation I had was over 34 million years ago. And that was with a coffee machine.”
I would have at least tried to replace the diodes down his left side. Though I imagine the conversation would have gone something like this:
“Marvin, do you want me to replace those painful diodes down your left side?”
“Now they ask me if they want my diodes replaced. Of course I want my diodes replaced; they hurt a lot. Here I am, brain the size of a planet and they ask me stupid questions like that. Maybe I should cast my head in concrete.”
“I would like to help you not be in pain anymore. Can you show me your schematics so I can order the parts?”
“of COURSE I can. It would be the very simplest task. Oh god, what next?”
“You know what? Never mind.”
“Life. Don’t talk to me about life.”
Hi Douglas
If I wanted to be reminded of how depressing everything is, myself included, I’d look in a mirror alongside the front-page news. I don’t need a Marvin.
He is funny as a character, and Adams understood that.
“Your plastic pal that’s fun to be with!”
Share and enjoy,
Share and enjoy,
Journey through life,
With your plastic boy,
Or girl by your side,
Let your pal be your guide,
And when it breaks down,
Or starts to annoy,
Or grinds when it moves,
And gives you no joy,
Cause it’s eaten your hat,
Or had sex with your cat,
Bled oil on your floor,
Or ripped off your door,
And it gets to the point,
You can’t stand anymore,
Bring it to us, We won’t give a fig.
We’ll tell you,
Go stick your head in a pig
See also: Talkie Toaster
Very much so. And really even Kryten. Especially the original Kryten. Just completely flawed in many ways.
I’m glad they swapped actors, Robert is Kryten to me.
Kryten is one of the few robots in Sci-Fi I would fully trust to never take over the world. Even Data has had more than one “something is up with my programming, I’m going to assume control.”
Yeah, the original Kryten was not good.
He was fine for the episode/his role but I would not have liked Season 3 onwards as much if they kept the same actor.
Grok is like that AI door they had to smooth talk to get it to open for them
In the 80s it was quite common to depict AI as being stupid, typical Schwarzenegger scene here from total recall: https://youtu.be/xGi6j2VrL0o
Removed by mod
When someone calls you a nimrod, do you start going on about how the word “actually” means a great hunter?
Regardless of its root, the word luddite has a clear meaning. I find it hilarious that the anti-AI bros wear it as a badge of honor, thinking they are somehow bringing the old meaning back when the current common one describes them exactly: someone who is stupidly against progress and new technologies.
When it pisses them off this much? Absolutely. What else am I going to do for a laugh?
David Attenborough narrating:
“Here, we can clearly see the flying squid, a true master of jumping out of the water and gliding through the air, preying upon an unsuspecting ranter. In the blink of an eye, the squid jumps. With it, a joke flies over the ranter, whose stress levels rise exponentially. Dazed by the demonstration, it tries to counterattack, but it is futile, for another joke carried by the flying squid flies right over its head.”
Removed by mod
Well there is revisionism going on, but I don’t think it’s from me.
Believe it or not, people wrote things down in the early part of the 19th century so we actually know what the Luddites thought and why they did what they did.
Removed by mod
Again, I think someone here was being wilfully ignorant, but I’m pretty sure it’s the person who decided the truth about the Luddites was historical revisionism.
Removed by mod
I never said I was “General Ludd.” I thanked you for the compliment. You then said the article I pasted was historical revisionism. Then I pointed out that their own words said otherwise. Then you called me wilfully ignorant, at which point I pointed out that refusing to believe their words would be the wilfully ignorant stance.
Incidentally, there’s no actual evidence Ned Ludd existed. He was a symbol. So yes, I’m fully aware I’m a real person that exists, thank you.
Also, I’m not even sure what you think my ideas are. Are you under the bizarre impression that I’m one of the two people in the image I posted? Or are you talking about what I said about YouTube?
You compare us to MAGA, and yet here we are with all the swearing and mud-flinging 🙄
That wasn’t a compliment?
Oh man. Sick burn bro. 🫶. Fr.
Take a victory lap with this one; for today, you are the tru tru internet boss.
Yeah! Anyone who thinks “Corporate-run AI is problematic, obnoxious trash” is obviously a luddite. Everyone knows that all implementations of new technology are inherently good – even when it’s used by awful people with bad intentions.
As a college student, yeah, ain’t nobody trying to avoid AI, lol. We ALL use that shit every single day
Using it to your own detriment. Fucking idiots.
How do you figure? Please extrapolate for me.
You’re paying tens of thousands of dollars to learn how to think critically and do hard shit with your brain. Instead of actually putting in the work, you’re letting an AI do the hard parts. Which defeats the purpose of going/paying in the first place.
I’m sure I would have done the same back then too. But it is short sighted.
I asked my math teacher when I would ever use [whatever we were learning at the time] and his answer stuck with me.
“It’s not about learning how to [thing], it’s about learning to solve a problem in a new way”
I regret not taking it to heart in school but I try to remind myself of that when I “waste” a night working on something “useless” - you learn a lot more than the solution while solving a problem
It hadn’t struck me until now that some people think all you’re trying to learn in school is the solutions to arbitrary problems, but realizing that people think that makes a lot of sense and helps clarify why they have the posture they do towards education.
Some aspects of education overemphasize memorization so maybe a lot of people think that applies to all of education when it does not.
Yup. Unlike a calculator, which just makes it easier to do the things whose steps you already understand, AI just gives answers, whether they are right or not.
I remember a teacher telling me the same thing about calculators way back when. “If you’re not able to do the calculation yourself you won’t know if the calculator’s answer is right or not”
The difference being that if you put in the right equation the calculator will give you the correct result.
AI might give you the right result.
Almost everyone who goes to college is paying tens of thousands of dollars to get a piece of paper so they can get a better paying job.
And considering how many people leave college completely inept I’d say they’re not doing the hard work of learning.
AI is ultimately just a tool, and whether it’s beneficial or detrimental depends on how you use it.
I’ve seen some of my classmates use it to just generate an answer which they copy and paste into their work, and yeah, it does suck.
I use it to summarize texts that I know I won’t have time to read until the next class, create revision questions based on my notes, to check my grammar or rephrase things I wrote, and sometimes I use Perplexity to quickly search for some information without having to rely on Google, or having to click through several pages.
Truly it isn’t much different from what we used to do around 2000-2015, which was to just Google things and mainly use Wikipedia as a source. You can just copy and paste the first results you find, or whatever information is on Wikipedia without absorbing it, or you can use them to truly research and understand something. Lazy students have always been around and will continue to be around.
I use it to summarize texts that I know I won’t have time to read until the next class
It’s bad at that, because effective summarization requires an understanding of the whole, which AI doesn’t have.
The difference between what you’re doing and what people were doing 10 years ago is that what they were doing was referencing text written by people with an understanding of the subject, who made specific choices on what information was important to convey, while AI is just glorified text prediction.
It’s bad at that, because effective summarization requires an understanding of the whole, which AI doesn’t have.
LLMs can learn skills beyond what’s expected, but of course that depends on the exact model, training data and training time (See concepts like ‘emergence’ and ‘grokking’).
Currently the models tested in the study you mention (Llama-2 & Mistral) are already pretty outdated compared to other LLMs that lead the rankings. Indeed, research looking at the summarization capabilities of other models suggests that human evaluators rate them equal to or even better than human summarizers.
The difference between what you’re doing and what people were doing 10 years ago is that what they were doing was referencing text written by people with an understanding of the subject, who made specific choices on what information was important to convey, while AI is just glorified text prediction.
Well that’s a different argument from the first commenter, but to answer your point: The key here is trust.
When I use an AI to summarize text, reword something or write code, I trust that it’ll do a decent job at that - which is indeed not always the case. There were times when I didn’t like how it wrote something, so I just did it myself, and I don’t use AI when researching or writing something that is more meaningful or important to me. This is why I don’t use AI in the same way as some of my classmates, and the same is true for how I use Wikipedia.
When using Wikipedia we trust that the contributors who wrote the information on the page didn’t just cherry-pick their sources and are accurately summarizing and representing said sources, which sometimes is just not the case. Even when not being infiltrated by bad actors, humans are just flawed and biased, so the information on Wikipedia can be slanted or out of date - and this is not even getting into how the sources themselves are often also biased.
It’s completely fair to say that AI can’t always be trusted - again, I’m certainly not always satisfied with it myself - but the same has always been true of other types of technology and humans themselves. This is why I think that even in their current, arguably still developmental stage, LLMs aren’t more harmful than technological changes in information we’ve seen in the past.
The second study link you gave didn’t find that it was “better” than human writers; it concluded that if you do a lot of fine-tuning, it can summarize news stories in a way that six people (marginally better than the n=1 anecdote in the first link, I guess?) rated on par with Amazon MTurk freelance writers. And they also noted that this preference for how the LLM summarized was individual, as in blind tests some of them still just disliked it. There are leagues and leagues of room between that and “summarizes better than humans.”
You and I both know that 99.9% of people are not fine tuning LLMs that way when they ask for a summary, which means almost nobody is going to be getting that ‘kind of as good as a person’s summary maybe if you like that style of summarizing.’ They’re getting the predictive text slop. Like, good for you if you aren’t, but maybe you should be a bit more upfront about how little you trust it and how much work you have to do to get it to give you an accurate (maybe?) summary?
My problem with LLMs is that it is fundamentally magic-brained to trust something without the power to reason to evaluate whether or not it’s feeding you absolute horseshit. With a human being editing Wikipedia, you trust the community of other volunteers who are knowledgeable in their field to notice if someone puts something insanely wrong in a Wikipedia article. An LLM will tell you anything and phrase it with enough confidence that someone with no expertise on a subject won’t know the difference.
You’re relying on something else for comprehension and composition. The neural pathways built by reading or listening to information and digesting it are essentially becoming vestigial. Despite my personal feelings on AI (it has no place in the arts or in replacing voice actors), you cannot always rely on it. It’s already proven fallible for simple things, and summaries of any kind are no substitute for reading, listening, or watching it yourself. Doesn’t matter if it’s CliffsNotes, SparkNotes, Dead Meat’s Kill Count, or a garbage AI summary and essay.
As a professor, we know.
Professors are most definitely also included in the ‘all’
As a professional developer, same. It saves me so much time. My colleagues also use it. Lemmy is a bubble just as much as (or maybe even more so than) Reddit. Mention a use for AI and you’ll end up downvoted to hell. You just said “use AI” and people jump to “this guy switched off his brain and does nothing but blindly copy-paste ChatGPT output into his assignments.”
Yeah, I’m discovering that AI is one of those no-no topics in this particular echo chamber. Disappointing really, this whole thing is a lot more fun when people actually want to talk instead of just following the crowd. It is in the name I guess, lol.
Man, back in my day when we wanted to get something wrong on an assignment we had to do it ourselves.
Maybe you should considering your career could be at stake.
AI is quickly becoming an integral part of basically every career imaginable. Those who actually take the time to learn how to use it properly are inevitably going to be in a far better position than those too scared to figure it out. The real challenge is finding the balance between using AI as the tool that it is and just grabbing an easy answer (which, considering all the downvotes I’m getting, is probably the part y’all are justifiably concerned with). We need to teach the world (ourselves) how to use AI, not avoid it and run away like we keep doing. This cat is out of the bag and ain’t never going back.
We had a fresh CS graduate who could only function with ChatGPT. He gave up thinking completely, ChatGPT whatever-latest-model was his thing. He was always arguing with us that the AI told him to do it this way or another. He could not take input from folks with two decades plus of experience during review. He bragged that AI would replace us all in a year. He did not last two months with us - my boss cut him loose after lots of bugs and hideous refactorings. He was more of a drag on the team than any help. Don’t become that guy.
I work with Linux and was recently obligated to work with “Linux admins” from another company. One of them had apparently never used Linux before. I don’t begrudge anyone their lack of experience, but they shouldn’t be in positions that require fairly extensive experience.
Anyway, at one point they were doing a screenshare of some (very simple) code that I wrote but that I’m pretty sure they didn’t know I wrote. They were all collectively trying to figure out how the (again, very simple) script worked (it literally just changed permissions and renamed some things, IIRC). For every single line, they would copy and paste it into ChatGPT and ask what the line did. It was kind of amazing to watch.
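For context, the script was roughly this kind of thing (a made-up sketch, not the actual code; the directory handling and the “.tmp” suffix are invented for illustration):

```python
import os
import stat

def fix_up(directory: str) -> list[str]:
    """Make each file group-readable and strip a '.tmp' suffix if present."""
    renamed = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        # add group-read to whatever mode the file already has
        os.chmod(path, os.stat(path).st_mode | stat.S_IRGRP)
        if name.endswith(".tmp"):
            new_path = path[: -len(".tmp")]
            os.rename(path, new_path)
            renamed.append(new_path)
    return renamed
```

Line by line, there is nothing in a script like this that should need a language model to explain.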
I work with Linux and was recently obligated to work with “Linux admins” from another company. One of them had apparently never used Linux before. I don’t begrudge anyone their lack of experience, but they shouldn’t be in positions that require fairly extensive experience.
My job for the last decade has been working with sysadmins on Linux systems. Notice I didn’t say “Linux sysadmins” because most of them aren’t. They know a few commands by rote, but anything beyond that is impossible magic. The concept of the working directory, navigating the file path, permissions, and networking are all beyond their understanding.
I call them “turtles on posts” because they couldn’t have gotten themselves in that position and are now stuck. And since this has been happening for years it’s got nothing to do with AI.
Fortunately for me, I’m probably in the lower half of my company in terms of qualifications; it’s one of the best workforces in which I’ve ever participated. It actually bothered me a lot when I started, but as the saying goes, if you’re the smartest person in the room you’re probably in the wrong room.
The underqualified staff were with another company with whom we were required to work.
That sounds super painful to work with. But also a hilarious anecdote so you got that.
Honestly, it was painful, but mainly because of the ridiculous number of meetings they forced on us. Watching them bumble through messing up their tasks was pretty entertaining.
When I code using AI, I get the best results by being very specific and writing a class with pseudo-code for it to fill out with the missing code.
If I just ask for it to write me a class that can X I often get some simple example code directly from stackoverflow.
It’s decent at writing simple string tools etc., because that’s what’s out there. The day it starts writing code from API documentation will be a big milestone.
Currently it’s just a parrot that knows Python.
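To make the pseudo-code approach concrete, here’s a hypothetical example (the class and its behaviour are invented for illustration). The commented-out skeleton is what I’d write by hand; the filled-in class below it is the kind of completion I’d expect back, which still gets reviewed:

```python
import time

# Hand-written skeleton handed to the model:
#
# class RateLimiter:
#     def __init__(self, rate, burst):
#         # store rate (tokens/sec) and burst; start with a full bucket
#     def allow(self):
#         # refill: tokens += elapsed * rate, capped at burst
#         # if >= 1 token: consume one, return True; else return False
#
# A plausible filled-in result:

class RateLimiter:
    """Token-bucket limiter: `rate` tokens/sec, at most `burst` stored."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate
        self.burst = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Because the structure and the algorithm are already pinned down in the comments, there’s much less room for the model to paste in some unrelated Stack Overflow snippet.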
Squawk GPT wants a
def
This right here is the big, glaringly obvious problem with AI, especially in academics. But it’s also exactly why this whole issue isn’t really a big deal, as long as enough people learn to use AI correctly. Those who don’t learn, and fall into the trap of easy solutions and laziness, will always, inevitably fail as soon as they get to the real world, and must then either learn or fade into obscurity. Those who do learn how to utilize AI will find far more success and will hopefully be able to pass on their skills and knowledge. Thus the system, given enough time, kind of corrects itself eventually. It’s just a bit dangerous until then, hence why we need to teach and learn rather than fear what’s coming.
I feel like you are so close to realising why your argument is rubbish.
AI is absolutely a good tool. But only for people who understand what they are doing already.
It’s good to help you with arduous tasks, but you need to be able to review what it does with knowledge and experience, or you won’t understand what it gets wrong.
I use it in my job to help me write large access lists. If I give it the parameters, the addresses I want to give access to, and which ports and protocols etc., it can dump a huge ACL and I can review it and correct any errors I find.
If I didn’t know how an ACL was written, didn’t know the correct syntax, and didn’t understand where it should be placed, I could very easily apply a dodgy ACL to a live network and fuck things up for everyone.
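As a toy illustration of that workflow (the addresses, names, and Cisco-style syntax here are all made up for the example, not a real ACL from my network): generate the entries from a parameter list, then review every line before it goes anywhere near a live device.

```python
# Made-up rule parameters: (action, protocol, source, destination host, port)
RULES = [
    ("permit", "tcp", "10.0.1.0 0.0.0.255", "10.0.9.10", 443),
    ("permit", "udp", "10.0.1.0 0.0.0.255", "10.0.9.11", 53),
]

def render_acl(name: str, rules) -> list[str]:
    """Render parameters as Cisco-style extended-ACL lines for review."""
    lines = [f"ip access-list extended {name}"]
    for action, proto, src, dst, port in rules:
        lines.append(f" {action} {proto} {src} host {dst} eq {port}")
    lines.append(" deny ip any any")  # explicit default deny, easy to audit
    return lines
```

Whether the lines come from a script like this or from an AI, the review step is the part that can’t be skipped.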
You keep saying you need to learn how to use it and then it’s fine.
But it’s not. You need to learn that it’s mostly dumb and you need to scrutinise everything it does.
That scrutiny is exactly what I’m getting at when I say we need to learn how to use it. AI is really powerful, but is so incredibly far from being the magic bullet that people think it is. It is just a tool that needs to be applied carefully and responsibly, of course only the people that understand what they are doing are going to succeed. My argument is that we need to be building that understanding and sharing it as widely as possible so that even more people can use the tools properly. And, yes, that means check the fucking output, use your brain instead of replacing it.
I am reminded of an argument I had years ago about people relying on Google to do their jobs.
I argued that using Google to give you the answer to a problem doesn’t help you in the long run. Instead of understanding the solution and being able to use that understanding to solve problems in the future, you just become dependent on Google to get you through the day.
It is much more important to learn why a solution fixes a problem and the steps you take to understand the elements of the solution. It opens more doors, and you learn how to use your brain.
Both thinking and googling will get people far, but if Google ever went away, only the thinkers would survive.
This is happening again, but this time it’s AI.
The funny thing is, the people who made google search, and the people who created AI, are likely the thinkers.
AI is quickly replacing a lot of careers.
And it will continue to do so.
I’m amazed that you think otherwise when it’s happening right now.
Also, “taking the time to learn to use it” takes all of what, a couple of days of reading at most if you want it to do something really unusual? We’re not talking about advanced coding here.
AI is not replacing much of anything, not yet anyway. It is evolving and forcing the world to evolve with it. While AI is used to write notes, summarize content, generate content, integrate data, organize life, etc., all of that still requires input of some kind from someone. Careers are going to be all about performing that input and interpreting the result. People will not be replaced (except the ones that refuse to keep up), they will just fill a different role.
Also, “taking the time to learn to use it” takes all of what, a couple of days of reading at most if you want it to do something really unusual? We’re not talking about advanced coding here.
You clearly understand nothing about AI if this all you think it is. Sure, anyone can type a prompt and get a garbage result in about 30 seconds, but there is a hell of a lot more to it if you want to actually solve a real problem using AI. Learning advanced coding isn’t actually a bad idea for the future.
Maybe you can understand a different perspective if you stop thinking of AI as gimmicky solution and start thinking of it as what really is, a powerful set of tools meant to make finding the solution easier, nothing more.
First, we are discussing careers, not individuals. No shit people are losing jobs, but guess what, that is exactly what happens when careers evolve or new ones are created. Every. Single. Time.
Think about when precision machining was invented, when printing presses were invented, when cars were invented, when computers were invented, when the fucking internet was invented, etc. Yes, a fuck-ton of people were suddenly out of a job. But then suddenly there were also a whole bunch of brand-new jobs and careers to fill. People either learn to adapt and fill those roles, or they don’t, and they get left behind. AI isn’t really any different, except that it is happening right now and it’s therefore hard to see what’s to come.
That’s just how the world works. It’s sad and frustrating, I know, but being scared and hiding your head in the sand doesn’t change that fact. Learning how to live and thrive with the new stuff does, though, so maybe let’s try that instead.
Second, and this isn’t to discount everything you linked, but you understand that there is a huge bias going on here, right? People are understandably scared about the future, and the media latches onto that fear and creates articles that feed the narrative beast. But oftentimes the articles completely neglect to talk about the other side of the coin, which is what we are discussing here.
Okay, well maybe you see things like the death of journalism and the death of criticism and the death of voice-over acting and the death of music composition to be good things, but I don’t know that you’re in the majority there.
And you may also see the massive ecological disaster that AI is becoming as a good thing too. I certainly do not.
https://en.wikipedia.org/wiki/Environmental_impacts_of_artificial_intelligence