>enemies use weapons theyre not supposed to be able to understand

  1. 1 month ago
    Anonymous
    • 1 month ago
      Anonymous

      doesn't this assume that the machines aren't capable of improving themselves? singularity just means that the rate of change is too rapid for humans to keep up with. if designing and improving machines is humanly possible (it is), then it is something that can be automated. automating the self-improvement process leads to the singularity (and also a nonzero chance of human extinction)
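      The compounding this post describes can be sketched as a toy model (all numbers here are invented for illustration, not predictions): if each generation of machines raises the rate at which the next generation improves, growth becomes super-exponential rather than merely exponential.

```python
# Toy model of recursive self-improvement; base_rate and the linear
# coupling between capability and improvement rate are made-up assumptions.

def simulate(generations, capability=1.0, base_rate=0.1):
    """Return capability after each generation of self-improvement."""
    history = []
    for _ in range(generations):
        rate = base_rate * capability   # better machines design better machines
        capability *= 1.0 + rate
        history.append(capability)
    return history

trajectory = simulate(10)
```

      Each step in the trajectory is larger than the last because the rate itself grows with capability; with a fixed rate, growth would stay exponential.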

      • 1 month ago
        Anonymous

        >and also a 99+% chance of human extinction
        Fixed that for you

        • 1 month ago
          Anonymous

          >le epic reddit cataclysm
          we've had nuclear weapons for 80 years; we are a sentient species and we are far from extinction
          shut the frick up

      • 1 month ago
        Anonymous

        >doesn't this assume that the machines aren't capable of improving themselves?
        well, machines do lack self-awareness. they lack a self; they're just electric switches. everything else is imposed by your mind and happens only inside your mind.
        machines do not have intentionality; they do not have consciousness and never will

        these tools can still become really efficient and seamless in interoperation with humans, which means the master is the human mind and the emissary is the AI, which can do a lot

        • 1 month ago
          Anonymous

          >they do not have consciousness and never will

          >doesn't this assume that the machines aren't capable of improving themselves?
          I would think it'd be quite hard to do. admittedly I'm not an AI dev, but I am aware they only have a limited memory, and GPT-4 is currently optimized for 8192 tokens (not characters/words; more akin to syllables). I would assume it would need to be in the hundreds of thousands or even millions range just to replicate a program of ChatGPT's size.
          add to this the fact that improving on it has the little issue of fundamental improvements being necessary, and improvements it has no data on at that.
          AND to add to all that, the current GPT-4 isn't even that great at producing simple code (and that's assuming it isn't essentially copying simple code, too).
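          As a back-of-the-envelope check on the context-window point: assuming the commonly cited rule of thumb of roughly 4 characters per token (a heuristic, not an exact figure) and a hypothetical 500 MB codebase, an 8192-token window falls short by orders of magnitude.

```python
# Rough feasibility check: can a large codebase fit in an 8192-token window?
# CHARS_PER_TOKEN is a rule-of-thumb heuristic; the codebase size is invented.

CONTEXT_TOKENS = 8192
CHARS_PER_TOKEN = 4

def tokens_needed(num_chars):
    """Estimate how many tokens a text of num_chars characters occupies."""
    return num_chars // CHARS_PER_TOKEN

codebase_chars = 500 * 1024 * 1024        # hypothetical 500 MB of source text
needed = tokens_needed(codebase_chars)    # ~131 million tokens
windows = needed / CONTEXT_TOKENS         # ~16,000 full context windows
```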

          >they cant do it right now, so they never will
          i have yet to see one actual reason why machines improving themselves will never happen

          • 1 month ago
            Anonymous

            you just read multiple reasons

          • 1 month ago
            Anonymous

            NTA, but continuous machine self-improvement could be facilitated by a complicated prompt and a large memory, like a DNA of sorts that pushes each iteration to use its newfound knowledge for further development. isn't organic life improving itself by just reacting to the environment, incorporating useful traits into DNA, and replicating the process?
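            A minimal sketch of that loop, with the model call stubbed out (`call_llm` is a hypothetical stand-in for any text-generation API, not a real one):

```python
# Sketch of the "DNA prompt" idea: each iteration appends what was just
# generated back into a persistent prompt. call_llm is a stub so the
# loop is runnable; a real system would call a model here.

def call_llm(prompt):
    return f"insight #{prompt.count('insight') + 1}"

def self_improve_loop(seed_prompt, iterations):
    """Feed accumulated knowledge back into the prompt each iteration."""
    memory = seed_prompt
    for _ in range(iterations):
        new_knowledge = call_llm(memory)
        memory += "\n" + new_knowledge   # persist the newfound knowledge
    return memory

result = self_improve_loop("improve yourself using what you learn", 3)
```

            Whether such a loop actually improves anything depends entirely on the quality of what gets fed back, which is the contested point in this thread.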

          • 1 month ago
            Anonymous

            well, life is ... alive, and it does not do it by way of algorithms. read about Barbara McClintock's experiments; my attempt at a TL;DR:
            >her experiments proved that even tiny organisms have intelligence. they can handle crises they've never faced before, showing mad intelligence. nature's full of surprises, making us rethink what intelligence really means.

            maybe one day we'll obtain synthetic minds, but it won't be on silicon, that's for sure, and it won't be because of an algorithm either way, bottom-up or "emergent"

        • 1 month ago
          Anonymous

          Sooner or later you'll understand that we're all fricking text-autocomplete: the universe is deterministic and free will is an illusion. But at the end of the day that doesn't change anything about how we perceive ourselves and reality, so who the frick cares; it's just useless philosophical garbage. But yeah, you're no different from a shitty AI, and most people are even worse than that.

          • 1 month ago
            Anonymous

            maybe if an evil demon removes my reasoning faculty I will then submit to nonsense

      • 1 month ago
        Anonymous

        >and also a 99+% chance of human extinction
        Fixed that for you

        >2024
        >the technology board still thinks LLMs are intelligent
        zoom zoom

    • 1 month ago
      Anonymous

      I'm still waiting for the watershed AI interop example that takes us past Siri 2012. "Siri, book me a meeting with Cindy at 10 on Thursday and send her an email" is pretty good and even if GPT and others are more conversational, they're actually less useful. We've been "here" for more than 10 years and all that's changed is hype.
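      For concreteness, a Siri-style request like the one above has to be reduced to a structured intent before anything can be booked; a minimal sketch (the slot names and regex are invented for illustration, nothing like production NLU):

```python
import re

def parse_booking(utterance):
    """Extract a toy booking intent from a fixed phrasing, or None."""
    m = re.search(r"meeting with (\w+) at (\d{1,2}) on (\w+)", utterance)
    if not m:
        return None
    return {"person": m.group(1), "hour": int(m.group(2)), "day": m.group(3)}

intent = parse_booking("book me a meeting with Cindy at 10 on Thursday")
# intent == {"person": "Cindy", "hour": 10, "day": "Thursday"}
```

      The hard part was never the parsing; it's doing this reliably across open-ended phrasings, which is where the hype outruns the product.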

      • 1 month ago
        Anonymous

        no one is going to use these features while there is a 1% chance of error
        no soft AI trained on the existing amounts of recorded data can perform these tasks at an error rate of less than 10%
        the normies will never buy a movie ticket through gpt after it inevitably gets the location or movie or date or task wrong once
        not even gonna get into business related issues
        then there's liability, who do you blame when gpt wastes 2k of your money on some random crap it hallucinated? no one will want to take the fall
        eventually these assistants will also have to compete with a massive crowd of 90 IQ third-world post-zoomers willing to perform the tasks for dollars per task. they all speak English okay, and bureaucratic processes are now streamlined worldwide, so it's not far-fetched
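        The 1% figure from the first line compounds quickly if errors are independent (an assumption, but it makes the reliability point concrete):

```python
# Probability of at least one botched task over n uses, assuming
# independent errors at a fixed per-task error rate.

def p_any_failure(error_rate, n_tasks):
    return 1.0 - (1.0 - error_rate) ** n_tasks

weekly = p_any_failure(0.01, 100)   # 1% errors over 100 tasks: ~63% chance of a failure
```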

  2. 1 month ago
    Anonymous

    >Claude
    can't even generate an image, and it still refuses to try no matter how many times I ask; it just keeps coming up with replies implying that it can't generate a simple image. what's worse, it even refused to help me write a bad review of AI. it's like its main function is to come up with witty and silly replies. literally a reddit version of ChatGPT

  3. 1 month ago
    Anonymous

    GPT-4 was released a year ago, iirc. Anthropic is behind schedule, and when GPT-5 launches soon they will be behind again.

  4. 1 month ago
    Anonymous

    >Anthro pic
    wat

  5. 1 month ago
    Anonymous

    >anthropic giga claude
    >wojak thumbnail
    >claude 3 just destroyed GPT-4 and Gemini... AGI is near?
    this is completely incomprehensible
    who actually cares about this fricking garbage holy shit

  6. 1 month ago
    Anonymous

    also the best tl;dr I found from the pov of philosophy: https://www.youtube.com/watch?v=FIAZoGAufSc
