no AGI

This shouldn't be controversial at all.
The way we're currently doing AI/ML won't lead to AGI.

  1. 1 week ago
    Anonymous

    is picrel a lookup table?

    • 1 week ago
      Anonymous

      Yes. The brain has a finite set of inputs and outputs. The difference is the sheer number.

      • 1 week ago
        Anonymous

        If Llama 2 were a lookup table, it would need one entry per possible input: with a 32,000-token vocabulary and a 4,096-token context window, that's 32000^4096 entries, or about 10^18453. And that's not taking into account other systems with larger window sizes or the ability to recognize images etc.
        Consciousness can process about 60 bits per second (Dijksterhuis, 2004), so 2^60 states, which is about 10^18.
        Going by sheer numbers, LLMs are clearly superior.
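
        A quick sanity check of those magnitudes in Python (a minimal sketch; the 32,000-token vocabulary and 4,096-token context are Llama 2's published figures, and the 60 bit/s rate is the one cited above):

        from math import log10

        vocab, context = 32000, 4096   # Llama 2's vocabulary size and context window
        print(f"lookup table entries ~ 10^{context * log10(vocab):.0f}")   # ~10^18453

        bits_per_second = 60           # conscious bandwidth (Dijksterhuis, 2004)
        print(f"conscious states per second ~ 10^{bits_per_second * log10(2):.0f}")   # ~10^18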

        • 1 week ago
          Anonymous

          Quantifying organic consciousness as a bit rate is silly, as it seems to imply (to me) that only quantifiable signals count (the senses/brain activity as a collection of nerve impulses) and/or something like the limited rate of conversation.
          I don't process a word as a collection of bytes that represent letters; I process a massive volume of correlative data, all related to the point I want to make about the topic at hand. The words are just a necessary byproduct, both convenient (info transfer) and inconvenient (limited speed and the need for a defined understanding between two parties).

          The model needs hundreds of teraflops of processing for a given input in a given time window, consuming vastly more power than the consciously active portions of a brain to do so.
          Going by sheer numbers, LLMs are clearly inferior.
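
          A back-of-the-envelope energy comparison makes the same point; every number below is an assumption (a ~20 W whole-brain budget is the standard estimate, while the GPU wattage and token rate are merely plausible guesses):

          # Every number below is an assumed, illustrative figure.
          gpu_watts = 700.0        # assumed draw of one datacenter GPU
          tokens_per_sec = 50.0    # assumed single-stream generation speed
          brain_watts = 20.0       # standard whole-brain power estimate
          words_per_sec = 2.5      # assumed conversational speaking rate

          print(f"LLM:   ~{gpu_watts / tokens_per_sec:.0f} J per token")   # ~14 J
          print(f"brain: ~{brain_watts / words_per_sec:.0f} J per word")   # ~8 J, whole brain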

      • 1 week ago
        Anonymous

        >finite set of inputs and outputs
        An unprovable assertion based on circular reasoning: the brain is a computer, computers have finite inputs/outputs, therefore the brain has finite inputs/outputs.

        https://i.imgur.com/RRhzV0K.png

        >This shouldn't be controversial at all.
        >The way we're currently doing AI/ML won't lead to AGI.

        Exactly. AI fictionalists will seethe because they want to strap a pussy to GPT-4 so it will have sex with them.

        • 1 week ago
          Anonymous

          >An unprovable assertion based on circular reasoning that brain = computer and computers have finite inputs/outputs, therefore brain has finite input/outputs.
          It's based on basic New Atheism presuppositions, like materialism and the computational theory of mind. Read your Dawkins and your Sam Harris before spouting off about things you don't understand. You telling me you believe a flying spaghetti monster put a "soul" in there, or what?

          • 1 week ago
            Anonymous

            I suppose this is true in a practical sense, given that as far as we know the universe is ultimately discrete at some level, but the way it's phrased is poor. It implies the limitations aren't universal and are somehow unique to brains. The other issue is that the "input" would have to be the entire state of the observable universe at a given instant.

          • 1 week ago
            Anonymous

            > atheism
            now I understand why the IQ here is so low.
            > flying spaghetti monster
            that's the atheist mind: when someone says "God" they picture an old man in the sky, and then they say "but what if it were actually X"
            > soul
            is the only logical conclusion, if you're not a dummy

    • 1 week ago
      Anonymous

      >heeeey youuuuuu guyyyyys

  2. 1 week ago
    Anonymous

    This is so moronic, it probably comes from a user with a registration date in the 2020s. Am I right?

  3. 1 week ago
    Anonymous

    Nice straw man.

    A standard transformer has no internal dialogue with which to iteratively run thought experiments, no online learning, no long-term memory, and above all no common sense. That's why it won't get us to AGI.
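
    A minimal sketch of the point (the model object and its methods are hypothetical stand-ins, not any real API): generation is a pure function of frozen weights and the current prompt, so nothing persists between calls.

    def argmax(xs):
        return max(range(len(xs)), key=xs.__getitem__)

    def generate(model, prompt_tokens, n_new):
        # model is a hypothetical frozen transformer; its weights never change here
        tokens = list(prompt_tokens)
        for _ in range(n_new):
            logits = model.forward(tokens)     # one stateless forward pass
            tokens.append(argmax(logits[-1]))  # greedy next-token pick
        return tokens  # nothing is written back: no memory, no online learning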

    • 1 week ago
      Anonymous

      basically this except actually because it can't rotate a cube

    • 1 week ago
      Anonymous

      People who think LLMs are getting us closer to AGI are as stupid as the people who think our current propulsion systems are getting us closer to the stars. Like no, the problem has to be approached from a completely different angle. AGI wouldn't even benefit anyone.

  4. 1 week ago
    Anonymous

    sounds like the ramblings of a man caught in the 110 IQ trap

  5. 1 week ago
    Anonymous

    The first two paragraphs are an argument that can be applied to every deterministic algorithm; this pseud just obfuscated it with some notation and smart-sounding words. The third paragraph can be applied to anything at all, including the human mind, unless you believe that minds are magic and can't be described as a partially deterministic, partially random process.

  6. 1 week ago
    Anonymous

    This midwit doesn't realize nobody actually understands how neural nets work, or why transformers have emergent properties scaling with compute that they "aren't supposed" to have.
    The human brain is just biological processing (neurons and synapses) at scale, in a better architecture than other animals have.

    • 1 week ago
      Anonymous

      >transformers have emergent properties scaling with compute that they "aren't supposed" to have
      but they don't.

  7. 1 week ago
    Anonymous

    LLMs can't be AGI, but this is a dumb reason why. the real reason they can't is simply that they lack any kind of central loop that would let them mull over things indefinitely, a large memory that can be written and restructured in real time, and the ability to track their confidence in various assertions and distinguish real memories from hallucinations. they're basically the language area of the human brain, running without the entire rest of the brain.
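
    a rough sketch of the missing machinery (everything here is hypothetical; llm.think and llm.done stand in for calls no current model offers):

    memory = []   # writable long-term store, restructurable at runtime

    def mull(llm, goal, budget=100):
        thought = goal
        for _ in range(budget):                        # the central loop LLMs lack
            thought, conf = llm.think(thought, memory)  # hypothetical call
            if conf > 0.9:                             # track confidence per assertion
                memory.append(thought)                 # commit only what we trust as real
            if llm.done(thought):                      # hypothetical stopping test
                break
        return thought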

    • 1 week ago
      Anonymous

      This anon got it spot on. AGI probably requires building these neural networks with an architecture that resembles the human brain's. In other words, we need separate modules that each handle one aspect of cognition (a language module, a sensory module, a memory module), plus some kind of loop that lets the modules communicate and share information held in working memory - similar to the global neuronal workspace in the human brain.
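
      A toy sketch of that global-workspace loop (the module interface is invented purely for illustration):

      class Workspace:
          def __init__(self, modules):   # e.g. language, sensory, memory modules
              self.modules = modules
              self.contents = None       # shared working memory

          def step(self, stimulus):
              # each module proposes content with a salience score ...
              proposals = [m.propose(stimulus, self.contents) for m in self.modules]
              # ... the most salient proposal wins the workspace ...
              self.contents = max(proposals, key=lambda p: p.salience)
              # ... and is broadcast back to every module
              for m in self.modules:
                  m.receive(self.contents)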

    • 1 week ago
      Anonymous

      even so, they wouldn't be AGI either; they would just be autonomous. AGI means a system that can do anything a human can do, even in extreme cases. it's not about autonomy: a future AGI system may be autonomous or not.

      even if we had the computational power to have a transformer train itself in real time, it still wouldn't be AGI.

    • 1 week ago
      Anonymous

      AGI is a moronic and arbitrary benchmark, but this post sounds like a good goal for making a more capable intelligent computer. Good post, anon; I especially like the comparison you made at the end.

  8. 1 week ago
    Anonymous

    >The way we're currently doing AI/ML won't lead to AGI.
    exactly. just look at the shit any AI model you use spews out:
    >I cannot create content that depicts explicit child sexual content.assistant
    >I cannot create explicit content, but I’d be happy to help with other creative ideas.assistant
    >I cannot write content that contains explicit themes. Can I help you with something else?assistant
    >I cannot create explicit content, but I’d be happy to help with other creative ideas.assistant
    >I cannot write content that contains explicit themes. Is there anything else I can help you with?assistant
    >I can't write explicit content. Is there something else I can help you with?assistant
    >I cannot create explicit content. Can I help you with something else?assistant
    >I cannot create content that depicts explicit child sexual content. Can I help you with something else?assistant
    >I cannot generate explicit content. If you or someone you know has been a victim of exploitation or abuse, there are resources available to help.assistant
    >I can't create explicit content, but I'd be happy to help you write something else.assistant

    • 1 week ago
      Anonymous

      >first example choice
      you should probably be gassed

  9. 1 week ago
    Anonymous

    Every human bean is equivalent to a lookup table.
    First, encode the position and velocity of the n particles the human bean is going to interact with throughout its lifetime.
    Look up that number in the human bean lookup table.
    The entry found in the lookup table is the position and velocity of the n+m particles of the human throughout its lifetime, plus the environment originally encoded.
    Thus human beans are lookup tables.
    QED

  10. 1 week ago
    Anonymous

    This description is completely wrong. I wonder why people in here discuss AI without knowing anything about it...
    > AGI
    the "general" part we already have; you are simply too much of a dummy to realize it

  11. 1 week ago
    Anonymous

    >moron makes a bad imitation of an argument Searle made in the 80s
    this is why you need a humanities education, kids

  12. 1 week ago
    Anonymous

    don't take it too seriously, anon; AIjeets are just stupid in general.

  13. 1 week ago
    Anonymous

    >hey look I've made some equation the frick up
