• 2 Posts
  • 200 Comments
Joined 2 years ago
Cake day: June 13th, 2023



  • I see intelligence as filling regions of concept space within an eco-niche in a way that proves functional for acting within that space. I think we are increasingly discovering that “nature” has little commitment, and is just optimizing preparedness for the expected levels of entropy within that functional eco-niche.

    Most people haven’t even started paying attention to distributed systems that build shared enactive models, yet those systems are already capable of things that should be considered groundbreaking given the time and money that went into developing them.

    That being said, localized narrow generative models are just building large individual models of the predictive process, and they don’t, by default, actively update their information.

    People who attack AI for just being prediction machines really need to look into predictive processing, or learn how much we organics just guess and confabulate on top of vestigial social priors.

    But no, corpos are using it, so computer bad, human good, even though the main issue here is the humans who hold unlimited power and are encouraged into bad actions by flawed social posturing systems and the conflation of wealth with competence.



  • While I agree about the conflict of interest, I would largely say the same thing even without such a conflict. However, I see intelligence as a modular, many-dimensional concept. If it scales as anticipated, it will still need to be organized into different forms of informational or computational flow for anything resembling an actively intelligent system.

    On that note, the recent developments in active inference, like RxInfer, are astonishing given how little attention they are getting. Seeing how LLMs are being treated, I’m almost glad the field isn’t being absorbed into the hype and hate cycle.
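    To make that concrete, here is a minimal, purely illustrative sketch of the active inference loop itself: perceive by Bayesian belief updating, then act to minimize expected surprise relative to preferred observations. It is written from scratch in Python with made-up numbers for a hypothetical two-state world; it is not RxInfer’s actual API, which is a Julia package built around message passing on factor graphs.

      # Toy active-inference agent over a hypothetical two-state world ("safe"/"unsafe").
      # Illustrative only: hand-picked numbers, not RxInfer and not a real model.
      import numpy as np

      belief = np.array([0.5, 0.5])                  # prior over hidden states
      likelihood = np.array([[0.9, 0.1],             # P(obs | state): rows = state,
                             [0.2, 0.8]])            # cols = obs ("calm", "alarm")
      preferred_obs = np.array([0.99, 0.01])         # the agent "prefers" to see "calm"
      actions = {                                    # expected state transitions per action
          "stay":    np.array([[1.0, 0.0], [0.0, 1.0]]),
          "retreat": np.array([[1.0, 0.0], [0.7, 0.3]]),
      }

      def perceive(belief, obs_idx):
          # Perception: Bayes' rule turns the prior into a posterior over states.
          posterior = belief * likelihood[:, obs_idx]
          return posterior / posterior.sum()

      def expected_surprise(belief, action):
          # Action selection: KL divergence of predicted observations from preferences.
          predicted_states = belief @ actions[action]
          predicted_obs = predicted_states @ likelihood
          return np.sum(predicted_obs * (np.log(predicted_obs) - np.log(preferred_obs)))

      belief = perceive(belief, obs_idx=1)           # an "alarm" arrives
      choice = min(actions, key=lambda a: expected_surprise(belief, a))
      print(belief, choice)                          # belief shifts toward "unsafe"; agent retreats

    The point of the sketch is that perception and action fall out of the same objective: stay unsurprised relative to the observations you expect and prefer.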



  • As always, the problem is our economic system, which has funneled every gain and advance to the benefit of the few. The speed of this change will make it impossible to ignore the need for a new system. If it weren’t for AI, we would just boil the frog like always. But let’s remember the real issue.

    If a free food-generating machine is seen as evil for taking jobs, the machine wouldn’t be the issue. Stop protesting AI; start protesting the affluent. We would still be suffering under them even if we had destroyed the loom.


  • ChatGPT website*

    That statement is more of an echo of previous similar articles.

    Anyone who uses the API or a similar bot for their site, such as this one, should be responsible for doing the same. If they are using the API or bot without a similar warning, they also don’t understand basic use of the technology. It’s a failure on the human side more than the bot side, but that is not how it tends to be framed.

    My point is that it doesn’t matter how good the tools are if people just assume what they are capable of.

    It’s like seeing a bridge that says “600 pound weight limit” and deciding it can handle a couple of tons just because you saw another bridge hold that much.

    Imagine if this situation led to a bunch of people angry at bridges for being so useless.


  • It’s like nobody cares to gain even a base-level understanding of the tools they are using.

    Can we stop framing this as if LLMs have actual intent?

    This shouldn’t surprise me, given how many people think we have access to the literal word of God yet don’t even read the damned book they base their lives and social directives around.

    Or is it that “news” sources intentionally leave out basic details to ramp up the story?

    Ignore the note on the page you are using that says info might not be accurate. Blame the chat bot for your unprofessional ineptitude.

    You shouldn’t even be putting that level of blind trust in human beings, or even in Wikipedia, without checking sources.

    Guess what: when I use bots for info, I ask for the sources and check the originals. It’s really not difficult, and I’m not being paid half as much as the people I keep seeing in these news articles.

    Maybe this should make it more obvious how wealth is not accrued due to competence and ability.

    Or for having reliable news. I feel like I live in a world controlled by children.




  • Perhaps instead we could just restructure our epistemically confabulated reality in a way that doesn’t inevitably lead to unnecessary conflict due to diverging models that haven’t grown the priors necessary to peacefully allow both comprehension and the ability to exist simultaneously.

    breath

    We are finally coming to comprehend how our brains work, and how intelligent systems generally work at any scale, in any ecosystem. Subconsciously enacted social systems included.

    We’re seeing developments that make me extremely optimistic, even if everything else is currently on fire. We just need a few more years without self-focused turds blowing up the world.


  • Generative A.I - We Aren’t Ready. (Technology@lemmy.world · 10 months ago, edited)

    AI or no AI, the solution needs to be social restructuring. People underestimate how much society can actively change, because the current system is a self-sustaining set of bubbles that have naturally grown resilient to perturbations.

    The few people who actually care to solve the world’s problems are figuring out how our current systems inevitably fail, and how to avoid these outcomes.

    However, the best bet for restructuring would be a distributed intelligent agent system. I could get into recent papers on confirmation bias and the confabulatory nature of thought at the personal, group, and societal levels.

    Turns out we are too good at going with the flow, even when the structure we are standing on is built over highly entrenched vestigial confabulations that no longer help.

    Words, concepts, and meanings change heavily depending on the model interpreting them. The more divergent the models, the harder it is to bridge the communication gap.

    A distributed intelligent system could not only enable a complete social restructuring, with both autonomy and altruism guaranteed, but also provide an overarching connection between the different models at every scale, one capable of properly interpreting the different views and conveying them more accurately than we could ever have managed with model projection and the empathy barrier.




  • I feel like we need to design an obligatory habitat for parents. They would hate this idea, but hear me out.

    This could allow super-efficient distribution of resources specifically to help children, as well as a controlled environment to ensure the parent doesn’t spend those resources on alcohol while neglecting the child.

    You could tailor the local net to keep out bad actors from outside, while maintaining a degree of freedom and privacy for the children. Maybe hold parents to some standard of accountability, while providing an obvious escape route for children with abusive parents. A place for children where they don’t feel separated from, or lesser than, other children due to the circumstances of their birth. Maybe allow a healthier, safer environment.

    I would suggest a system that can be regulated and controlled without people putting cameras on the children.

    Adults suck. Children should not be forced to endure the hell that adults can create.

    Yeah, parents wouldn’t like it, but very few existing parents provide an actually healthy environment to grow up in. Maybe becoming a parent should carry some actual responsibility.




  • The main issue, though, is the economic system, not the technology.

    My hope is that it shakes things up fast enough that they can’t boil the frog, and something actually changes.

    Having capable AI is a more blatantly valid excuse to demand a change in economic balance and redistribution. The only alternative would be to destroy all technology and return to monkey. I’d rather we just fix the system so that technological advancements don’t seem negative just because the wealthy have hoarded the gains of every new technology for the past handful of decades.

    Such power is discreetly weaponized through propaganda, influence, and economic reorganization to ensure the equilibrium holds until the world is burned to ash, in sacrifice to the lifestyle of the confidently selfish.

    I mean, we could have just rejected the loom. I don’t think we’d actually be better off, but I believe some of that technological gain should have been less hoardable by the existing elite. It’s almost like they used their wealth to prevent any gains from slipping away to the poor. Fixing the issue before it got this bad was the proper answer. Now people don’t even want to consider that option, or they say it’s too difficult, so we should just destroy the loom.

    There is a Markov blanket around the self-perpetuating lifestyle of modern aristocrats, one obviously capable of surviving every perturbation. Every gain we have made as a society has made that more true, entirely because of where the new power gets distributed. People are afraid of AI turning into a paperclip maximizer, but that is already what happened to our abstracted social reality. When maximums get maximized and minimums get minimized in a complex, chaotic system of billions of people, power and wealth inevitably keep accumulating wherever they have already been gathered (a toy sketch of that dynamic is at the end of this comment). Unless we can dissolve the political and social barriers maintaining this trend, we will be stuck with our suffering regardless of whether we develop new technology or not.

    Although it doesn’t really matter where you are or what system you’re in right now. Odds are there is a set of rich assholes working as hard as possible to make sure you’re kept from any piece of the pie that would destabilize the status quo.

    I’m hoping AI is drastic enough that the actual problem isn’t ignored.
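    As the promised footnote to the accumulation point above: here is a deliberately crude, hypothetical toy simulation (no real data, made-up parameters) of what happens when gains arrive as a percentage of what you already hold. Everyone starts equal and faces identical odds every year, yet the share held by the top 1% keeps climbing on its own.

      # Hypothetical toy model: identical multiplicative odds for everyone,
      # yet wealth still concentrates. Illustration only, not real-world data.
      import numpy as np

      rng = np.random.default_rng(0)
      agents = 100_000
      wealth = np.ones(agents)                      # everyone starts with 1 unit

      def top_share(w, frac=0.01):
          # Fraction of total wealth held by the richest `frac` of agents.
          cutoff = int(len(w) * frac)
          return np.sort(w)[-cutoff:].sum() / w.sum()

      for year in range(101):
          if year % 25 == 0:
              print(f"year {year:3d}: top 1% hold {top_share(wealth):.1%}")
          # each year, everyone's wealth is multiplied by a random factor drawn
          # from the same distribution (i.e. gains proportional to holdings)
          wealth *= rng.lognormal(mean=0.0, sigma=0.3, size=agents)

    And that is with identical odds for everyone; add any systematic edge for the already-wealthy and the concentration only happens faster.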