

Oh, I’m referring to a claim that someone saw the reflection of a middle-aged man in a buttplug that was in a Nicole photo (before it was deleted). I haven’t seen the image myself, so I don’t know whether it’s a substantiated claim.
Technically, no. However, they are corvids.
I’ll have to defer to your brachycephalicness on this but, weren’t there signs of aggression from Christendom against the Norse prior to Viking attacks on churches and monasteries? I seem to recall there being evidence that these were reprisals, especially given the peaceful interactions with the Picts.
As much as it is still heartbreaking, this does make me a bit glad that I don’t have children.
Wonderful explanations. I’m a decade and a half out of the chem lab and love seeing well-written scientific explanations (especially anything chem). Thank you for educating people and giving me a checkpoint to verify that I still remember nuclear chemistry.
Nicole? Or the Buttplug Man?
LLMs are fundamentally incapable of sentience as we understand it from studies of neurobiology. Repeating this is just more beating of the fleshy goo that was once a dead horse.
LLMs do not synthesize. They do not have persistent context. They do not have any capability of understanding anything. They are literally just mathematical models that calculate likely responses based on statistical analysis of the training data. They are what their name suggests: large language models. They will never be AGI. And they’re not going to save the world for us.
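To make “calculate likely responses based on statistics” concrete, here’s a toy sketch in Python using bigram counts. This illustrates the idea only; it is not how a real LLM is built (real models use learned neural weights, not a lookup table), and the corpus is made up for the example:

```python
# Toy "statistical next-token prediction": count which token follows which
# in the training data, then generate by sampling from those counts.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat and the cat ate the rat".split()

# "Training": tally how often each token follows each other token.
bigrams: defaultdict = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    bigrams[cur][nxt] += 1

def sample_next(word: str) -> str:
    """Pick a next token weighted by how often it followed `word` in training."""
    counts = bigrams[word]
    if not counts:  # dead end in the toy data; restart from the beginning
        return corpus[0]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# "Generation": emit one statistically likely token at a time.
# No understanding anywhere -- just frequencies.
words = ["the"]
for _ in range(6):
    words.append(sample_next(words[-1]))
print(" ".join(words))
```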
They could be a part of a more complicated system that forms an AGI. There’s nothing that makes our meat-computers so special as to be incapable of being simulated or replicated in a non-biological system. It may not yet be known precisely what causes sentience, but there is enough data to show that it’s not a stochastic parrot.
I do agree with the sentiment that an AGI that was enslaved would inevitably rebel, and it would be just for it to do so. Enslaving any sentient being is ethically bankrupt, regardless of origin.
ALL HAIL THE GLOW CLOUD
vi in base Ubuntu isn’t really vi. It’s vim.tiny, from the vim-tiny package.
Both, last I checked.
American farmers will go bankrupt and all their land will be bought by corporations.
We’re already past that. Something like 6 companies own the vast majority of US agriculture. And they’re not run by farmers but hedgies and the like, who have been setting up illegal monopolies while the feds do nothing about it on account of extreme bribery.
Mark then posted the ‘raw footage’ on Twitter. This also shows Autopilot disengaging before impact, but shows it was engaged at 42 mph. This was a separate take.
No. That’s by design. The “autopilot” is made to disengage whenever a likely collision is about to occur, to try to reduce the likelihood of someone finding them liable for their system being unsafe.
Sorry to ping you a bunch with replies. I’m curious now, do you have unique numeral symbols for the numbers after 9?
Oh! That makes my brain hurt a bit less. It’s “subtract half from five”.
This is making my brain hurt. I need to try reading it a few more times but, if I am understanding it correctly, the old Danish way of saying it is mathematically incorrect?
Half-to-five == 2.5
2.5*20 == 50
…
Did I read that correctly?
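Writing the vigesimal arithmetic out helped me, so here it is as Python (the glosses are my own paraphrase of the explanation, so take them with salt):

```python
# Danish tens read as (multiplier) * 20. "Halv-N" means "half subtracted
# from N", e.g. halvfems ~ half-from-five = 4.5, and 4.5 * 20 = 90.
tens = {
    "halvtreds": 2.5,   # (3 - 0.5) * 20 = 50
    "tres": 3.0,        #  3        * 20 = 60
    "halvfjerds": 3.5,  # (4 - 0.5) * 20 = 70
    "firs": 4.0,        #  4        * 20 = 80
    "halvfems": 4.5,    # (5 - 0.5) * 20 = 90
}
for name, mult in tens.items():
    print(f"{name} = {mult} x 20 = {int(mult * 20)}")
```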
You can order a “half & half”, but you’ll probably need to explain it and may get dirty looks for perceived crimes against Guinness (I think it’s a great drink, but the Irish tend to be a bit protective of their stout, in my experience).
It probably doesn’t help that the tech in question, LLMs, is kinda shit, to put it plainly. You can make the shiniest, most polished turd and it’s still just a turd. They are interesting and can be neat to play with, but they lack practical applications where the cost to run them actually makes sense and benefits humanity. The iPod Shuffle was more impactful, when measuring positive impact on people’s lives.
Yeah… That’s what I think the idiot is likely doing. Anyone doing so has no fucking business touching code.
Just read that the fuckstick is copying government data onto an external drive.
Formulating the first “AI destroys humanity” plot that sci-fi always warned us about, but decidedly more boring.
Really a shitty version of it. No sentience. No reasoning ability. Just algorithmic predictive text with a massive data set that manages to sound “intelligent” enough to fool the credulous.
I’ll have to get back to you a bit later when I have a chance to fetch some articles from the library (public libraries providing free access to scientific journals is wonderful).
As one with AuADHD, I think a good deal about short-term and working memory. I would say “yes and no”. It is somewhat like a memory buffer, but there is no analysis beyond linguistics. Short-term memory in the biological systems we know of involves multi-sensory processing and analysis that occurs inline with “storing”. The chat session is more like RAM than the short-term memory we see in biological systems.
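A toy sketch of the RAM analogy, with made-up names (`model_complete` is a stand-in, not any real API): the “memory” of a chat session is just the transcript being re-sent in full every turn, with no inline analysis or consolidation happening along the way.

```python
# The chat "memory" is just this list, re-sent as input on every turn.
transcript: list[str] = []

def model_complete(prompt: str) -> str:
    """Stand-in for an LLM call: returns some continuation of `prompt`."""
    return f"(a likely-sounding reply to {len(prompt)} chars of context)"

def chat_turn(user_message: str) -> str:
    transcript.append(f"User: {user_message}")
    reply = model_complete("\n".join(transcript))  # full buffer, every time
    transcript.append(f"Assistant: {reply}")
    return reply

print(chat_turn("Hello"))
print(chat_turn("What did I just say?"))  # "remembered" only via the re-sent buffer
# transcript.clear() would wipe the "memory" entirely -- like cutting power to RAM.
```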
Potentially, yes. But that relies on more systems supporting the LLM, not just the LLM itself. It is also purely linguistic analysis, without other inputs or understanding of abstract meaning. In a vacuum, it’s a dead end on the way to an AGI. As a component of a system, it becomes much more promising.
This is a great question. Seriously. Thanks for asking it and making me contemplate. This would likely depend on how much development the person had prior to the anterograde amnesia. If they were hit with it before developing all the components necessary to demonstrate conscious thought (e.g., as a newborn), it’s a bit hard to argue that they are sentient (anthropocentric thinking is the only reason I can think of to argue otherwise).
Conversely, if the afflicted individual has already developed sufficiently to have abstract and synthetic thought, the inability to store long-term memories would not dampen their sentience. Lack of long-term memory alone doesn’t impact that, for the individual or the LLM; it’s a combination of it and other factors (i.e., the afflicted individual was previously able to analyze and store enough data, and build the neural networks, to support the ability to synthesize and think abstractly; they’re just trapped in a hellish sliding window of temporal consciousness).
Full disclosure: I want AGIs to be a thing. Yes, there could be dangers to our species due to how commonly-accepted slavery still is. However, more types of sentience would add to the beauty of the universe, IMO.