• 0 Posts
  • 301 Comments
Joined 2 months ago
Cake day: June 8th, 2025

  • Generational wealth is an area where this money hoarding really, really goes wrong, because then it’s lifetime after lifetime of accumulating and hoarding wealth. So obviously one change I would strongly support is an extremely high tax on inheritance income.

    But we need to separate money from wealth, I think. Because if it takes all of us working together to generate that wealth in the first place, there is simply no possible excuse for not sharing that wealth equitably. As long as money=wealth, I’m just not sure we’ll ever really accomplish that, though.


  • Sorry, no LLM is ever going to spontaneously gain the ability to self-replicate. This is completely beyond the scope of generative AI.

    This whole hype around AI and LLMs is ridiculous, not to mention completely unjustified. The appearance of a vast leap forward in this field is an illusion. They’re just linking more and more processor cores together until a glorified chatbot can be made to appear intelligent. But this is stifling actual research and innovation in the field, instead turning the market into a costly, and destructive, arms race.

    The current algorithms will never “be good enough to copy themselves”. No matter what a conman like Altman says.


  • Eh, no. The ability to generate text that mimics human writing does not mean they are intelligent. And AI is a misnomer. It has been from the beginning. Now, from a technical perspective, sure, call ’em AI if you want. But using that as an excuse to skip right past the word “artificial” is disingenuous in the extreme.

    On the other hand, the way the term AI is generally used would, technically, be called AGI, or artificial general intelligence, which does not exist (and may or may not ever exist).

    Bottom line, a finely tuned statistical engine is not intelligent. And that’s all an LLM, or any other generative “AI”, is at the end of the day (see the sketch below for what I mean by that). The lack of actual intelligence is evidenced by the way they produce factually incorrect statements at such a high rate. So, if you use the most common definition of AI, no, LLMs absolutely are not AI.
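
    To make the “statistical engine” point concrete, here’s a minimal toy sketch. It is nothing like a real LLM, and the corpus and names in it are made up purely for illustration: a word-level bigram model that only learns which word tends to follow which, then samples accordingly. The output can look vaguely plausible with zero understanding behind it.

```python
# Toy illustration of a "statistical engine": learn next-word frequencies
# from a tiny corpus, then sample continuations. Not how any real LLM works;
# the corpus and function names below are invented for this example.
import random
from collections import defaultdict, Counter

def train_bigram(corpus: str) -> dict[str, Counter]:
    """Count how often each word follows each other word."""
    counts: dict[str, Counter] = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts: dict[str, Counter], start: str, length: int = 10) -> str:
    """Pick each next word in proportion to how often it followed the last one."""
    words = [start]
    for _ in range(length):
        followers = counts.get(words[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(generate(model, "the"))  # plausible-looking text, no understanding involved
```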