Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can’t escape them, I would love to sneer at them.

  • BigMuffin69@awful.systems · 7 months ago

    This gem is from 25-year-old Avital Balwit, Chief of Staff at Anthropic and researcher of “transformative AI at Oxford’s Future of Humanity Institute”, discussing the end of labour as she knows it. She continues:

    "The general reaction to language models among knowledge workers is one of denial. They grasp at the ever diminishing number of places where such models still struggle, rather than noticing the ever-growing range of tasks where they have reached or passed human level. [wherein I define human level from my human level reasoning benchmark that I have overfitted my model to by feeding it the test set] Many will point out that AI systems are not yet writing award-winning books, let alone patenting inventions. But most of us also don’t do these things. "

    Ah yes, even though the synthetic text machine has failed to achieve a basic understanding of the world generation after generation, it has been able to produce ever larger volumes of synthetic text! The people who point out that it still fails basic arithmetic tasks are the ones who are in denial, the god machine is nigh!

    Bonus sneer:

    Ironically, the first job to go the way of the dodo was researcher at FHI, so I understand why she’s trying to get ahead of the fallout of losing her job as chief Dario Amodei wrangler at OpenAI 2: Electric Boogaloo.

    Idk, I’m still workshopping this one.

    🐍

    • Mii@awful.systems · 7 months ago

      Many will point out that AI systems are not yet writing award-winning books, […]

      Holy shit, these chucklefucks are so full of themselves. To them, art and expression and invention are really just menial tasks which ought to be automated away, aren’t they? They claim to be so smart but constantly demonstrate they’re too stupid to understand that literature is more than big words on a page, and that all their LLMs need to do to replace artists is to make their autocomplete soup pretentious enough that they can say: This is deep, bro.

      I can’t wait for the first AI-brained litbro trying to sell some LLM’s hallucinations as the Finnegans Wake of our age.

      • Sailor Sega Saturn@awful.systems · 7 months ago

        Many will point out that magic eight balls are not yet writing award-winning books, let alone patenting inventions. But most of us also don’t do these things.

    • Steve@awful.systems · 7 months ago

      I maintain that people not having to work is a worst-case scenario for Silicon Valley VCs, who rely on us being too fucking distracted by all their shit products to have time to think about whether we need their shit products.

    • counteractor@pawoo.net · 7 months ago

      “Many will point out that AI systems are not yet writing award-winning books, let alone patenting inventions. But most of us also don’t do these things.”

      And most of us never will.

      (🤔🤔🤔)

  • David Gerard@awful.systemsM · 7 months ago

    Well would you fucking believe it: it turns out that deplatforming your horrible arseholes fixes the problems! (archive)

    (well, until one of the problems scrapes together $44b and buys the platform)

    every fuckin time

    bsky comment:

    the number of times i have seen even a small private forum or discord server go from “daily fights” to “everyone gets along” by just banning one or two loud assholes is uncountable

    • self@awful.systems · 7 months ago

      one of the follies of the early non-research internet was that the folks in charge of the more influential communities (Something Awful comes to mind) all tended to be weird fucking libertarian assholes insisting that debating fascists in a “free and fair environment” (whatever that is) must be a good thing, we can’t just ban them on sight for some reason. generally these weird libertarian assholes were motivated by typical weird libertarian asshole things — greed, or being fascists themselves.

      and all of these horseshit policies around not banning fascists ended in complete disaster for those communities and those libertarian shitheads (SA again), but somehow they’re practically the only element of those early communities that carried over to the modern internet, likely because caring about community quality doesn’t make money, but pandering to nazi fuckwits does.

      it probably goes without saying, but I take the apparently radical stance that nobody needs to interact with fascists and assholes and I won’t give their bullshit transit on any system I control. it’s always been surprising to me how many sysops consider “kick out the fucking fascists” an unworkable policy when it’s very easy to implement — it only gets hard when you allow the worst fucking people you know to gain a foothold.

      • David Gerard@awful.systemsM · 7 months ago

        SA has been taken over by a non-shithead and now the moderation is “yeah, you can fuck off now” and it’s much nicer

        though tbf it’s greatly improved because we’re all old with bad backs now and aren’t putting up with dumb bullshit any more

        • self@awful.systems · 7 months ago

          that is true! usually these assholes set things up so the communities they founded collapse when they leave (and from what I’ve heard that almost happened to SA) so I’m glad they’re flourishing within their niche without said asshole parasocially positioned at the helm. I really should re-include the forums in the list of places I visit when I’m bored.

      • deborah@awful.systems · 7 months ago

        to be fair, a lot of the early Internet, including the early research Internet, was driven by libertarian tendencies (which always ignored the dilemma in combining libertarian tendencies with the fact that the entire early Internet was enabled by massive government funding). John Perry Barlow, the EFF, etc. It’s just that a lot of those people were libertarian utopians – and I will fully admit that in my youth it seemed very convincing. It felt like there was no space for bad actors because when the Internet was smaller, it was less obvious to idealists and the naïve that a larger internet would be incredibly useful for bad actors.

        As recently as gamergate the EFF was loudly insisting that all moderation by private companies was wrong, and in the intervening few years they have only grudgingly and rarely admitted that overly libertarian moderation policies can suppress speech massively. And yet I fully believe all the EFF people mean well.

      • Soyweiser@awful.systems · 7 months ago

        I’m not up to date on my SA banning-people lore, but hasn’t it also happened several times that after people finally got banned from there, they made stuff infinitely worse? Like 4chan came out of SA, but there was also a far-right racist forum for people banned from SA, the my posting career people for example (at least I heard they were banned SA people, which could be a lie; we know how much they love multiple blankets of irony. Also, their site is now dead, thankfully).

        • self@awful.systems · 7 months ago

          there’s a direct line from SA not explicitly banning ironic naziism to anime nazis being everywhere and I fucking hate it

          • Soyweiser@awful.systems · 7 months ago

            Not just SA, but everywhere. We should have listened to the barmen of old better. (I hope most of you are familiar with the story of the barman who kicks out the polite nazi.)

    • gerikson@awful.systems · 7 months ago

      You laugh, but people in incel circles are heralding the nascent arrival of better-than-real AI girlfriends.

      Now I am laughing harder.

      • Soyweiser@awful.systems · 7 months ago

        The incels have also been crying for a decade now about how the upcoming robot sexbots/AI will end women forever, so it isn’t anything new. (They prob are shouting about it more, but they have been obsessed with this idea for a while already.)

        • maol@awful.systems · 7 months ago

          All I can say is bring it on. These guys seem to assume women will be furious about being replaced in this hypothetical scenario. I imagine most women would be relieved to have more free time and greater opportunities. Losing your role is only scary if you have benefited from your social position.

          • Soyweiser@awful.systems · 7 months ago

            They prob think (at least that was the mindset I got last time I read up on how they seem to think; doubt much has changed) women are useless and massively protected by society, and as soon as that stops (because of robots) they will all starve or be enslaved or worse. It isn’t a rational group of people (even if they claim to be).

            And I just did a quick check: they don’t seem to mention AI shit at all. Guess even the incels can see through the (fin)techbro bs. Lots of sexism, racism and antisemitism though.

    • BlueMonday1984@awful.systems · 7 months ago

      I saw someone on LinkedIn yesterday talking about how you’ll be able to “chat with your database”.

      Like, what if I don’t want to be friends with a database? What next? A few beers with my fridge? Skinny dipping with my toaster?

      Both options would be horrible, of course, but they’d both be better than spending another five fucking minutes on LinkedIn.

      • earthquake@lemm.ee · 7 months ago

        I am so happy that we know with 100% certainty that Jack Dorsey saw this image and wasn’t mad. He was laughing, actually.

    • froztbyte@awful.systems · 7 months ago

      You know, I have no direct information as to what kind of background those people might’ve had for their trips, but 1337% it was wrong (or this person is just full of it). Set and setting. Always. And dear god if I came out of a trip with that? What a fucking abysmal prospect

      And yet holy fuck that’s quite possibly a strong driver in all this insanity isn’t it?

      You could guess my internal state right now, and you’d need zero psychedelics to do it

      • self@awful.systems · 7 months ago

        it’s a fucking tragedy what techbros, fascists, and gurus have done to the perception of what a trip even is. there’s this hilariously toxic idea that a trip must have utility or else you’ve wasted your time and material; these people will gatekeep and control how you process what should be a highly personal experience, because that gives them a subtle but powerful amount of influence over what you’ll take back from your trip. the Silicon Valley/TESCREAL crowd have even ritualized bad set and setting so much that they don’t need a guru personally present in order to have a bad fucking time.

        the damage these fucking fools have done is difficult to quantify, largely because psychedelic research is either banned or firmly in the hands of those very same fucking fools. it’s very important to them that you don’t use your temporarily jailbroken neocortex for your own purposes; that you never imagine a world where they don’t matter.

        • froztbyte@awful.systems · 7 months ago

          yep. a couple of years ago I read We Will Call It Pala and it hit pretty fucking on the nose

          a couple years later, with all the dipshittery i’ve seen the clowns give airtime, and I’m kinda afraid? reticent? to read it again

          • ebu@awful.systems · 7 months ago

            never read this one before. neat story, even if it is not much more than The Lorax, but psychedelic-flavored.

            unprompted personal review (spoilers)

            it makes sense that the point-of-view character is insulated / isolated from the harm they’re doing. my main gripe is that in doing so, the actual problems of the hypothetical psychedelic healthcare industry (manufactured addiction, orientalism and psychedelic colonization, inequality of access, in addition to all of the vile stuff the real healthcare industry already does) wind up left barely stated or only implied. i was waiting for the other shoe to drop; for Learie to, say, receive a letter from a family member of a patient who died on the bed due to being unattended to, a result of stretching too few staff too thin over too many patients, et cetera. something that would pop the bubble that she built around herself and tie the themes of the story together.

            instead it feels like she built the bubble and stays in the bubble. she’s sad her cool business idea outgrew her, that the fifty million dollars she got as a severance package doesn’t fill the hole in her heart she got by helping people directly. which is neat and all, but, like. what about all the uninsured and poor Black people who never got to even try to see if psychedelics could help? what about the Native Americans who watched their spiritual medicine, for which they were (and still are) punished heavily for using, get used to make Learie’s millions, for which they will never see a penny? what about your overworked staff, Learie!?

            from a persuasive and political perspective, to me it seems the non-sequitur ending leaves the entire story up for ideological grabs. think it sounds like capitalism is bad? sure, go for it. think the problem is that we need to do capitalism, But Better™? sure, go for it! hell, that’s basically the author’s own conclusion:

            But what we really need are psychedelic models for business - business that defines new standards for integrity, equity and ethics; business reimagined with a technicolor glow.

            sorry, but a can of glow-in-the-dark paint over the same old exploitative business practices is not a solution. it’s just more marketing. where is this even going?

            If you feel called to share a message with the world, consider taking the course to work with David, and gain structure, fellowship with changemakers, and accountability to breathe life into your story.

            a $3,000 value course for only $999! what a steal!! order now, seats are first-come first-serve!

            • froztbyte@awful.systems · 7 months ago

              yep to a lot of your points (I’ll try to reply in more detail tomorrow, majority of brain context atm is going to fucking android bullshit)

              as a bit more context, it was originally published on https://aurynproject.org/, which I think also says something about its origin/background. imo overly-narrow horizons in their optimism are a problem that plagues a lot of psychedelic-treatment evangelists (and I already view the argument favourably!); it’s something I’ve often felt irked by but also haven’t really been able to engage with in much depth to try to form any wide-consumption counterargument, because headspace and a lot of other stuff too

              • ebu@awful.systems · 7 months ago

                best of luck with android bullshit. i’m not familiar with either psychedelics themselves or their evangelists, but yeah, would love to hear thoughts

              • jonhendry@awful.systems · 3 months ago

                The Auryn project has a sub-project called North Star, founded by a VC partner and a Bain Capital veteran, among others.

                The whole thing gives me the ick.

  • blakestacey@awful.systemsOP · 7 months ago

    Check out the big brain on Yud!

    I think the pre-election prosecutions of Hillary and Trump were both bullshit. Is there anyone on Earth who holds that Hillary’s email server and Trump’s lawyer’s payment’s accounting were both Terribly Serious Crimes, and can document having held the former position earlier?

    • gerikson@awful.systems · 7 months ago

      Ah yes the famous trial where HRC was prosecuted and convicted, I remember that, very analogous to Trump’s case, yes.

      Big Yud has been huffing AI farts for so long he’s starting to hallucinate like one!

      edit: the replies once again prove that a blue check is an instant -10 points to credibility.

      • sc_griffith@awful.systems · 7 months ago

        my Absolute Fondness for Doing So remains… and I can’t emphasize this enough - unchanged. that’s it. that’s the post

    • swlabr@awful.systems · 7 months ago

      Yud: I am the sole arbiter of which crimes are punishable or not.

      Filthy peasant: ok which crimes

      Yud: criticising me or my uwu fwens :(

  • blakestacey@awful.systemsOP · 7 months ago

    Vitalik Buterin:

    A few months ago I was looking into Lojban and trying to figure out how I would translate “charge” (as in, “my laptop is charging”) and the best I could come up with is “pinxe lo dikca” (“drink electricity”)

    So… if you think LLMs don’t drink, that’s your imagination, not mine.

    My parents said that the car was “thirsty” if the gas tank was nearly empty, therefore gas cars are sentient and electric vehicles are murder, checkmate atheists

    That was in the replies to this, which Yud retweeted:

    Hats off to Isaac Asimov for correctly predicting exactly this 75 years ago in I, Robot: Some people won’t accept anything that doesn’t eat, drink, and eventually die as being sentient.

    Um, well, actually, mortality was a precondition of humanity, not of sentience, and that was in “The Bicentennial Man”, not I, Robot. It’s also presented textually as correct…

    In the I, Robot story collection, Stephen Byerley eats, drinks and dies, and none of this is proof that he was human and not a robot.

    • corbin@awful.systems · 7 months ago

      Why are techbros such shit at Lojban? It’s a recurring and silly pattern. Two minutes with a dictionary tells me that there is {seldikca} for being charged like a capacitor and {nenzengau} for charging like a rechargeable battery.

      • gerikson@awful.systems · 7 months ago

        Incredible Richard Stallman vibe in this picture (this is a compliment)

        Person replying to someone saying they are not a cult leader by comparing them to another person often seen as a cult leader.

      • zogwarg@awful.systems · 7 months ago

        But he didn’t include punctuation! This must mean it’s a joke and that obviously he’s a cult leader. The funny-hat thief (a very patriarch-like thing to have) should count himself lucky that EY is too humble to send the inquisition after him.

        Bless him, he didn’t even get angry.

      • Soyweiser@awful.systems · 7 months ago

        Nobody drills down to the heart of the matter: why would this prove you are not a cult leader, and what does this say about somebody like Charles Manson, who also dressed funny and was sometimes disrespected by his cult members (he was anti-drugs, and his followers, well, yeah, not so much)? Does this make him also not a cult leader?

        Lol at the “where can I join your cult?” in the replies. Also, literal text written on the walls, also not a very crazy-person-style look.

    • ebu@awful.systems · 7 months ago

      i really, really don’t get how so many people are making the leaps from “neural nets are effective at text prediction” to “the machine learns like a human does” to “we’re going to be intellectually outclassed by Microsoft Clippy in ten years”.

      like it’s multiple modes of failing to even understand the question happening at once. i’m no philosopher; i have no coherent definition of “intelligence”, but it’s also pretty obvious that all LLMs are doing is statistical extrapolation on language. i’m just baffled at how many so-called enthusiasts and skeptics alike just… completely fail at the first step of asking “so what exactly is the program doing?”

      • BigMuffin69@awful.systems · 7 months ago

        The y-axis is absolute eye bleach. Also implying that an “AI researcher” has the effective compute of 10^6 smart high schoolers. What the fuck are these chodes smoking?

      • froztbyte@awful.systems · 7 months ago

        this article/dynamic comes to mind for me in this, along with a toot I saw the other day but don’t currently have the link for. the toot detailed a story of some teacher somewhere speaking about ai hype, making a pencil or something personable with googly eyes and making it “speak”, then breaking it in half the moment people were even slightly “engaged” with the idea of a person’d pencil - the point of it was that people are remarkably good at seeing personhood/consciousness/etc in things where it just outright isn’t there

        (combined with a bit of en vogue hype wave fuckery, where genpop follows and uses this stuff, but they’re not quite the drivers of the itsintelligent.gif crowd)

          • blakestacey@awful.systemsOP · 7 months ago

            Transcript: a post by Greg Stolze on Bluesky.

            I heard some professor put googly eyes on a pencil and waved it at his class saying “Hi! I’m Tim the pencil! I love helping children with their homework but my favorite is drawing pictures!”

            Then, without warning, he snapped the pencil in half.

            When half his college students gasped, he said “THAT’S where all this AI hype comes from. We’re not good at programming consciousness. But we’re GREAT at imagining non-conscious things are people.”

          • froztbyte@awful.systems · 7 months ago

            yeah, was that. not sure it happened either, but it’s a good concise story for the point nonetheless :)

          • Soyweiser@awful.systems · 7 months ago

            Either sexy voice or the voice used in commercials for women and children. (I noticed a while back that they use the same tone of voice and that tone of voice now lowkey annoys me every time I hear it).

      • Soyweiser@awful.systems · 7 months ago

        Same with when they added some features to the UI of GPT with the GPT-4o chatbot thing. Don’t get me wrong, the tech to do real-time audio processing etc. is impressive (but it has nothing to do with LLMs, it was a different technique), yet it certainly is very much smoke and mirrors.

        I recall when they taught developers to be careful with small UI changes that aren’t backed by backend changes, because to non-insiders those feel like a massive change while the backend still needs a lot of work (so the client thinks you are 90% done while only 10% is done). But now half the tech people get tricked by the same problem.

        • ebu@awful.systems · 7 months ago

          i suppose there is something more “magical” about having the computer respond in realtime, and maybe it’s that “magical” feeling that’s getting so many people to just kinda shut off their brains when creators/fans start wildly speculating on what it can/will be able to do.

          how that manages to override people’s perceptions of their own experiences happening right in front of them still boggles my mind. they’ll watch a person point out that it gets basic facts wrong or speaks incoherently, and assume the fault lies with the person for not having the true vision or what have you.

          (and if i were to channel my inner 2010’s reddit atheist for just a moment it feels distinctly like the ways people talk about Christian Rapture, where flaws and issues you’re pointing out in the system get spun as personal flaws. you aren’t observing basic facts about the system making errors, you are actively in ego-preserving denial about the “inevitability of ai”)

          • Soyweiser@awful.systems · 7 months ago

            I’m just amazed that they hate lin charts so much that the Countdown to SIN - lin chart is missing.

            E: it does seem to work when I go directly to the image, but not on the page. No, human! You have a torch, look down, there is a cliff! Ignore the siren cries of NFTs at the bottom! (Also look behind you, that woman with her two monkey friends is about to stab you in the back for some reason.)

        • mountainriver@awful.systems · 7 months ago

          I can’t get over that the two axes are:

          Time to the next event.

          Time before present.

          And then they have plotted a bunch of things happening with less time between. I can’t even.

    • gerikson@awful.systems · 7 months ago

      Similar vibes in this crazy document

      EDIT: it’s the same dude who was retweeted

      https://situational-awareness.ai/

      AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years. Tracing trendlines in compute (~0.5 orders of magnitude or OOMs/year), algorithmic efficiencies (~0.5 OOMs/year), and “unhobbling” gains (from chatbot to agent), we should expect another preschooler-to-high-schooler-sized qualitative jump by 2027.

      Last I checked ChatGPT can’t even do math, which I believe is a prerequisite for being considered a smart high-schooler. But what do I know, I don’t have AI brain.

  • Sailor Sega Saturn@awful.systems · 7 months ago

    I just want to share this HN comment because it was the worst thing I’ve read all day.

    Wait until they have kids. The deafness gene will be passed along. Soon enough we’ll be like the cars with the hardware without the software or the locked features.

    A treatment for a type of genetic deafness sounds good and all; but think of the implications man. The no-longer-deaf people might hear some steamy music and then get frisky and in the mood to make deaf babies. And next thing you know bam! There’ll be activation keys for hearing.

    • o7___o7@awful.systems · 7 months ago

      Who is more evolutionarily fit: a deaf person who appreciates technological progress and has kids, or an unloved eugenics enjoyer posting on hacker news?

    • deborah@awful.systems · 7 months ago

      I expect SneerClub to provide my alibis when reading one of these finally makes me snap.

    • maol@awful.systems · 7 months ago

      Apparently deaf people never have kids. Hmm. Alright. Okay.

      This guy isn’t even the first techbro to suggest recently that if disabled people are allowed to have kids, eventually we will all be disabled. I know very very little about genetics, but I’m still pretty sure it doesn’t work that way.

    • maol@awful.systems · 7 months ago

      Always a good sign when your big plan is virtually identical to L. Ron Hubbard’s big plan in the 70s.

            • Soyweiser@awful.systems · 7 months ago

              Certainly, but I was specifically mentioning the woman who boarded the Scientology boat, who has never been seen or heard from since, and who Scientology claims is fine (stop asking about her, she is doing great, better even! Allegedly). Like specific boat-cult-related things, not just a drill to the head or locking down a hospital like the cryptocurrency deaths you hear about. (And even more indirectly than the hospital deaths: the slow cooking of the planet and the upcoming climate disaster. Welcome to the coolest summer of your life!)

  • BigMuffin69@awful.systems · 7 months ago

    Not a sneer, just a feelsbadman.jpg, b.c. I know peeps who have been sucked into this “it’s all Joever.png” mentality (myself included, for various we-live-in-hell reasons; honestly I never recovered after my cousin explained to me what nukes were while we were playing in the sandbox at 3).

    The sneerworthy content comes later:

    1st) Rats never fail to impress with the appeal-to-authority fallacy, but 2nd) the authority in question is Max “totally unbiased, not a member of the extinction cult, and definitely not pushing crank theories for decades” fuckin’ Tegmark, roflmaou

    • Mii@awful.systems · 7 months ago

      “You know, we just had a little baby, and I keep asking myself… how old is he even gonna get?”

      Tegmark, you absolute fucking wanker. If you actually believe your eschatological x-risk nonsense and still produced a child despite being convinced that he’s going to be paperclipped in a few years, you’re a sadistic egomaniacal piece of shit. And if you don’t believe it and just lie for the PR, knowingly leading people into depression and anxiety, you’re also a sadistic egomaniacal piece of shit.

      • BigMuffin69@awful.systems · 7 months ago

        Truly I say unto you, it is easier for a camel to pass through the eye of a needle than it is to convince a 57-year-old man who thinks he’s still pulling off that leather jacket to wear a condom. (Tegmark 19:24, KJ Version)

  • Mii@awful.systems · 7 months ago

    So apparently Mozilla has decided to jump on the bandwagon and add a roided Clippy to Firefox.

    I’m conflicted about this. On the one hand, the way they present it, accessibility does seem to be one of the very few non-shitty uses of LLMs I can think of, plus it’s not cloud-based. On the other hand, it’s still throwing resources onto a problem that can and should be solved elsewhere.

    At least they acknowledge the resource issue and claim that their small model is more environmentally friendly and carbon-efficient, but I can’t verify this and remain skeptical by default until someone can independently confirm it.

    • deborah@awful.systems · 7 months ago

      The accessibility community is pretty divided on AI hype in general and this feature is no exception. Making it easier to add alt is good. But even if the image recognition tech were good enough—and it’s not, yet—good alt is context dependent and must be human created.

      Even if it’s just OCR, folks are ambivalent. Many assistive techs have native OCR they’ll do automatically, and it’s usually better. But not all do, and many AT users don’t know how to access the text recognition when they have it.

      Personally I’d rather improve the ML functionality and UX on the assistive tech side, while improving the “create accessible content” user experiences on the authoring tool side. (I.e., improve the braille display and screen reader’s ability to describe the image by putting the ML tech there, but also make it much easier for humans to craft good alt text, video captions, etc.)

      • Steve@awful.systems · 7 months ago

        I deleted a tweet yesterday about twitter finally allowing alt descriptions on images in 2022, 25 years after they were added to the w3c spec (7 years before twitter existed). But I added the point that OCR recommendations for screenshots of text have kinda always been possible, as long as they reliably detect that it’s a screenshot of text. But thinking about the politics of that overwhelmed me, hence the delete.

        Like, I’m kinda sure they already OCR all the images uploaded for meta info, but the context problem would always be there from an accessibility POV.

        My perspective is that offering nothing to people unaware of accessibility issues with images beyond “would you like to add an alt description” leaves the politics of it all between the people using twitter. I don’t really like seeing people being berated for not adding alt text to their image, as if twitter is not the third party that cultivated a community for 17 years without alt descriptions and then suddenly threw them out there and let us deal with it amongst ourselves.

        Anyway… I will stick to what I know in future

        • Steve@awful.systems · 7 months ago

          read that back and it’s a bit of an unreadable brain-dump. Apologies if it’s nonsense

        • deborah@awful.systems · 7 months ago

          Yah, this makes sense. Community conventions can encourage good accessible content creation, and software can have affordances to do the same. Twitter, for many years, has been the opposite. Not only did it not allow alt, but the shorthand and memes and joke templates that grew up on short-form Twitter were an extremely visual language. Emoji-based ascii art, gifs, animated gifs, gifs framed in emoji captioned by zalgo glitch unicode characters… there’s HTML that can make all that accessible, in theory, but the problem is more structural than that.

    • Eiim@lemmy.blahaj.zone · 7 months ago

      As a rough rule of thumb, if it’s running locally on a CPU with acceptable performance, the environmental impact is going to be minimal, or at least within socially acceptable bounds.
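
      Purely as an illustration of why local CPU inference tends to land in rounding-error territory, here’s a back-of-envelope sketch; every number in it is an assumption made up for the example, not a measurement, and it deliberately ignores training/development costs (which the reply below gets at):

      # Back-of-envelope sketch of local inference energy use (Python).
      # Every figure below is an illustrative assumption, not a measurement,
      # and training/development costs are deliberately out of scope.

      cpu_draw_watts = 30.0      # assumed extra CPU power draw while the model runs
      seconds_per_task = 5.0     # assumed time to generate one alt-text suggestion
      tasks_per_day = 50         # assumed daily usage

      joules_per_day = cpu_draw_watts * seconds_per_task * tasks_per_day
      watt_hours_per_day = joules_per_day / 3600.0

      print(f"~{watt_hours_per_day:.1f} Wh/day")  # ~2.1 Wh/day under these assumptions

      Under those made-up numbers it comes to roughly a couple of watt-hours a day, on the order of a few seconds of running a microwave; the picture obviously changes if the model has to be retrained or updated often.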

      • froztbyte@awful.systems · 7 months ago

        This elides whatever resources were used in training/development, which in the case of ML models is quite often not minimal. Even the DIY things you can train yourself make a significant dent. And there’s likely to be an ongoing cost to this too, because of updates.

    • deborah@awful.systems · 7 months ago

      Trying to figure out if that comment is a bit or not, and at least one of the poster’s other comments on the video is this literary masterwork:

      some people have observable x chromosones and others have observable y chromosones, and those categories are good to make useful distinctions, just like I make a distinction between a chair and a sofa… I still don’t believe gender is real, and I can still observe sex. Sure, sex could be an illusion, but it doesn’t matter. The chair is most certainly an illusion… At what point does wood become a chair? Is a three legged chair a chair? A two legged? A one legged? A none legged? Is a seat and a chair the same thing? What if I break the seat in half? Is it still a chair?

  • sc_griffith@awful.systems · 7 months ago

    my friend started working for a company that does rlhf for openai et al, and every single day I wish I could post to you guys about the bizarre shit she sees this company do. they’re completely off the rails

  • froztbyte@awful.systems · 7 months ago

    Apparently MS is “making significant changes” to Recall, and there are murmurs that multiple people at MS are realmad about how this played out.

    Because yes, it’s the dirty users with their petty concerns that are the problem here, not the perfect product that they managed to bring to market oh so rapidly!

    I wonder how long they’ll leave this in the Windows build before finally making it a manual feature install (if not entirely killing it off, which will eventually happen too).

    • froztbyte@awful.systems · 7 months ago

      Also, much further up that thread there is another example of the same tired “if you’re not ready to accept AI…” shit that many corps have been pulling. I just lambasted Sentry about that recently too. These fucking people do not fucking understand consent, and they don’t seem to consider that people might not actually want this utter shite.