• Peanut · 11 months ago

    Funny, I don’t see much talk in this thread about François Chollet’s Abstraction and Reasoning Corpus, which is emphasized in the article. It’s a really neat take on how to understand and measure the capacity for thought.

    A couple of things stick out to me about GPT-4 and the like: the lack of understanding in realms that require multimodal interpretation, the inability to break down word- and letter-level relationships due to tokenization, the lack of true emotional capacity, and the similarity to the “leap before you look” aspect of our own subconscious ability to pull words out of our own ass. Imagine if you could only say the first thing that comes to mind, without ever thinking or correcting before letting the words out.
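    The tokenization point can be made concrete with a toy sketch (the vocabulary and greedy longest-match scheme below are hypothetical illustrations, not any real model’s tokenizer): once text is mapped to subword IDs, the model receives opaque integers, so letter-level questions like “how many r’s are in this word?” have no direct representation in its input.

    ```python
    # Toy subword tokenizer: maps text to token IDs via greedy longest match.
    # The vocabulary is made up for illustration; real models use learned
    # vocabularies (e.g. BPE) tens of thousands of entries large.
    vocab = {"straw": 0, "berry": 1, "s": 2, "t": 3, "r": 4, "a": 5,
             "w": 6, "b": 7, "e": 8, "y": 9}

    def tokenize(text: str) -> list[int]:
        """Greedy longest-match tokenization over the toy vocabulary."""
        ids = []
        i = 0
        while i < len(text):
            for j in range(len(text), i, -1):  # try the longest piece first
                if text[i:j] in vocab:
                    ids.append(vocab[text[i:j]])
                    i = j
                    break
            else:
                raise ValueError(f"no token for {text[i]!r}")
        return ids

    print(tokenize("strawberry"))  # [0, 1] -- two opaque IDs, letters hidden
    ```

    The whole word becomes just two IDs, so any reasoning about its individual characters has to be reconstructed indirectly rather than read off the input.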

    I’m curious what things will look like after those first couple of problems are solved, but there’s even more to figure out after that.

    Going by recent work I enjoy from Earl K. Miller, we seem to have oscillatory cycles of thought that are directed by brain waves in a higher-dimensional representational space. This might explain how we predict and react, as well as how we hold a thought in order to bridge certain concepts together.

    I wonder whether this aspect could be properly reconstructed in a model, or through functions built around concepts like those in the “Tree of Thoughts” paper.

    It’s really interesting comparing organic and artificial methods and abilities to process or create information.

    • tlf@feddit.de · 11 months ago

      I find it fascinating that AI development has provoked the question of how our own thoughts actually work, and I’m curiously awaiting the results.