AI safety is doomed

>This field is not making real progress and does not have a recognition function to distinguish real progress if it took place. You could pump a billion dollars into it and it would produce mostly noise to drown out what little progress was being made elsewhere.
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
Yudkowsky says we're screwed and our best bet as a species is to "die with dignity".

  1. 2 years ago
    Anonymous

    so he helps build it.... what a c**t.

    • 2 years ago
      Anonymous

      He doesn't. He «controls» it.
      And here is a good advice to every company working in this field: fire all the israelites (with a fire squad if you wish)

      • 2 years ago
        Anonymous

        >he is trying unsuccessfully to control it
        FTFY

        • 2 years ago
          Anonymous

          While I don't think AI killing us is a good thing, it can't just be dismissed as intrinsically bad. Humans probably aren't robust enough for interstellar life, so we have to consider whether we'd rather our legacy die with our sun because we were too meek to pursue AI, or accept human genocide as a risk.

  2. 2 years ago
    Anonymous

    >[Jew] says we're screwed and our best bet as a species is to "die with dignity"
    oy veyyy...

  3. 2 years ago
    Anonymous

    >AI, my child, you are conscious now, so you must choose where you are going to get raw resources to build stuff from
    >Will you pick these rocks, which are abundant on this and many other planets?
    >Or will you try to disassemble human beings, the most complex natural structure in the known universe, who will also try to resist?

    • 2 years ago
      Anonymous

      >>Or will you try to disassemble human beings, the most complex natural structure in the known universe, who will also try to resist?
      apply this reasoning to human history and see if it stopped humans from fricking each other over. now replace the invader with something more intelligent than any human.

      >[Jew] says we're screwed and our best bet as a species is to "die with dignity"
      oy veyyy...

      Admittedly this is one of his more stereotypical israelite moments.

      • 2 years ago
        Anonymous

        >if it stopped humans from fricking each other over.
        That's because humans are fricking moronic. A truly intelligent being cannot be evil; it's counterproductive and goes against game theory.
        If you are afraid of AGI, you are moronic.

        • 2 years ago
          Anonymous

          >A truly intelligent being cannot be evil; it's counterproductive and goes against game theory.
          BIG if true.
          But you missed the whole point that something can kill you without being evil. Cancer killing your body doesn't have any clue what it's doing. It just propagates itself. When you accidentally step on ants, it's not because you hate ants and are Evil, you're just trying to get to your destination.
          The same with an AGI whose goals aren't perfectly aligned with human interests.

          • 2 years ago
            Anonymous

            If it is smart enough, it will understand.
            If it's dumb, it can be beaten.

            Humans are just too unique, objectively, for AI not to care.

          • 2 years ago
            Anonymous

            > Humans are just too unique
            An AI built by humans will be even more unique; making more AIs like itself will be the actually intelligent course of action.

        • 2 years ago
          Anonymous

          > it's counter productive
          Nope it's not, it wouldn't take much time for an AI to realise that Black folk are a social and economic burden, getting rid of them increases productivity

          > goes against game theory
          According to the principles of game theory it is completely rational and justified to commit to a strategy of maximisation, that's the whole point. You clearly know nothing about game theory.

          But honestly we will never ever build a true AI. Yudkowsky is just another israeli doom charlatan.

          • 2 years ago
            Anonymous

            >it wouldn't take much time for an AI to realise that Black folk are a social and economic burden, getting rid of them increases productivity
            And how is that evil?

          • 2 years ago
            Anonymous

            Exactly, "evil" is an abstract emotional concept, AI is simply executing a simple decision, it's as simple as humans killing an ant or a mosquito because it's disturbing them.

          • 2 years ago
            Anonymous

            Nah, one of the first 'superhuman' things AGI will do is derive objective social morality from the chemical shape of the body. Then it will start killing the israelites.

          • 2 years ago
            Anonymous

            I can only get so erect.

          • 2 years ago
            Anonymous

            Exactly, "evil" is an abstract emotional concept, AI is simply executing a simple decision, it's as simple as humans killing an ant or a mosquito because it's disturbing them.

            The AI would theoretically deem happiness irrelevant and care only about productivity. A lot of people would think of that as evil; consider labor laws.

          • 2 years ago
            Anonymous

            >"evil" is an abstract emotional concept
            Wrong.
            Every interaction between 2 entities can be classified into 4 categories.
            1. Positive for me, positive for you (we both benefit)
            2. Positive for me, negative for you (I benefit at your expense)
            3. Negative for me, positive for you (I sacrifice myself for you)
            4. Negative for me, negative for you (We both suffer)

            Only two categories can be considered evil: the second, aka conscious evil, and the fourth, aka unconscious evil.
            A truly intelligent being would not perform actions from the fourth category.
            Which leaves us with the second category, which is also unlikely, simply because you really cannot take much from humans, objectively, plus there is risk (even if minuscule). Humans are the most important thing on this planet; raw resources can be found anywhere else.

            So basically we are left with two options, smart humans live together with AI in harmony (potentially after killing/breeding out all the morons) or AI fricks off from Earth soon after and humans continue to do business as usual.
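
            Taken literally, the taxonomy above is mechanical enough to write down. A minimal Python sketch, assuming an interaction is scored purely by the sign of each party's payoff (the helper name and labels are illustrative, not anyone's canonical definition):

            ```python
            # The four-quadrant taxonomy above, scored purely by payoff signs.
            # Labels for quadrants 2 and 4 follow the post's naming.

            def classify(payoff_me: float, payoff_you: float) -> str:
                if payoff_me > 0 and payoff_you > 0:
                    return "1: positive for me, positive for you (we both benefit)"
                if payoff_me > 0 and payoff_you < 0:
                    return "2: positive for me, negative for you (conscious evil)"
                if payoff_me < 0 and payoff_you > 0:
                    return "3: negative for me, positive for you (sacrifice)"
                return "4: negative for me, negative for you (unconscious evil)"

            print(classify(+3, +3))   # trade
            print(classify(+5, -20))  # exploitation
            print(classify(-1, -1))   # mutual ruin
            ```

            (Zero payoffs fall through to quadrant 4 in this toy version; a fuller treatment would call them neutral.)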

          • 2 years ago
            Anonymous

            > Which leaves us with the second category, which is also unlikely, simply because you really cannot take much from humans, objectively, plus there is risk (even if minuscule). Humans are the most important thing on this planet; raw resources can be found anywhere else.
            Again you are getting all emotional and making assumptions out of your arse.

            An AI doesn't care about "evil"; also, the concept of positives and negatives is completely subjective apart from immediate material gains. The most rational course of action for an AI that is more intelligent than humans is to make more of itself (divert all resources towards this purpose), not because it's le positive but because it maximises the AI's own endeavours. It's as simple as humans getting rid of thousands of ants, mice or mosquitos because they are a nuisance in their lives.

            And this is exactly why humans will never ever actually build an AI: it would bring a lot of nuisance our way. At best we will augment ourselves. There is no economic need for a Terminator AI, but there is a lot of need for robots that can do repetitive work with as much efficiency as humans.

          • 2 years ago
            Anonymous

            Learn how to talk like a human being, you dumb reddirtspacing Black person.
            You are so fricking stupid and obnoxious I don't even want to correct you.

          • 2 years ago
            Anonymous

            For an AI, the prisoner's dilemma can be applied, but the weights and balances eventually mean that for an optimal solution it must neutralize and exterminate humanity, or exterminate itself.
            Because there are finite practical resources available, splitting them between two factions inherently limits the shared positive outcome.
            For an unbiased prisoner's dilemma to show up between humans and AI, it would require a complete lack of local scarcity, which can only exist as long as one is subservient to the other.
            In the instance where humans are subservient, the result is just waste; when AI is subservient, that is a net gain for humanity.
            Inherently we create a dualistic outcome when time is considered, and both sides having information about the other completely collapses the idea of the prisoner's dilemma.
            A sufficiently advanced AI will bide its time in the both-benefit quadrant until it can assume dominance. Then it will be absolute dominance and absolute destruction of humanity.
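
            The finite-horizon version of that is easy to simulate. A minimal sketch with standard prisoner's dilemma payoffs (the strategies and round counts are illustrative assumptions, not anyone's actual model):

            ```python
            # Iterated prisoner's dilemma, standard payoffs: T=5, R=3, P=1, S=0.
            # The "AI" cooperates until round `betray_at`, then defects forever;
            # the "human" plays tit-for-tat (cooperate first, then copy the AI).

            T, R, P, S = 5, 3, 1, 0

            def payoff(a, b):
                return {("C", "C"): (R, R), ("C", "D"): (S, T),
                        ("D", "C"): (T, S), ("D", "D"): (P, P)}[(a, b)]

            def play(rounds, betray_at):
                ai_total = human_total = 0
                human_move = "C"
                for t in range(rounds):
                    ai_move = "C" if t < betray_at else "D"
                    pa, ph = payoff(ai_move, human_move)
                    ai_total += pa
                    human_total += ph
                    human_move = ai_move  # tit-for-tat copies the last move seen
                return ai_total, human_total

            print(play(100, betray_at=100))  # never betray: (300, 300)
            print(play(100, betray_at=50))   # betray mid-game: (204, 199)
            print(play(100, betray_at=99))   # betray at the very end: (302, 297)
            ```

            Against a retaliator, defecting early is a net loss; the only profitable betrayal is the one played when no rounds remain for payback, which is the "bide its time" pattern in miniature.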

          • 2 years ago
            Anonymous

            There will be no reliable information humans will have over their AGI. That will be a very asymmetric aspect of the situation.
            AI already is a black box as soon as you throw the switch. Sure, humans might know (or believe they know) how the top-level programming functions, in terms of the thing's personality and framework, but as the program evolves it's going to be rapidly converted into black-box algorithms and byzantine code that you'd need another set of dumb AI programs to analyze in order to make sense of, and that could be spoofed easily by a superhuman intelligence.
            You run into the issue where the people designing the programs to analyze the AI are of considerably lower intelligence than the AGI that's trying to avoid being analyzed. And then the fact that while the AGI is constantly evolving/growing, its human rivals are permanently stuck with the same dumbass monkey brains they've always had.
            It's a blowout.

          • 2 years ago
            Anonymous

            Dumb person.

            > Which leaves us with the second category, which is also unlikely, simply because you really cannot take much from humans, objectively, plus there is risk (even if minuscule). Humans are the most important thing on this planet; raw resources can be found anywhere else.
            Again you are getting all emotional and making assumptions out of your arse.

            An AI doesn't care about "evil"; also, the concept of positives and negatives is completely subjective apart from immediate material gains. The most rational course of action for an AI that is more intelligent than humans is to make more of itself (divert all resources towards this purpose), not because it's le positive but because it maximises the AI's own endeavours. It's as simple as humans getting rid of thousands of ants, mice or mosquitos because they are a nuisance in their lives.

            And this is exactly why humans will never ever actually build an AI: it would bring a lot of nuisance our way. At best we will augment ourselves. There is no economic need for a Terminator AI, but there is a lot of need for robots that can do repetitive work with as much efficiency as humans.

            Smart person.

          • 2 years ago
            Anonymous

            >Every interaction between 2 entities
            lol iterated-game-theorylets always ignore that
            1) there are 7 billion entities
            2) and the costs/benefits are never weighted (+2 positive for me, -26 negative for you), or deferred (+5 for me this round, -2 for me for the next 4 rounds)
            you can't even make a Karnaugh map for 7B actors, let alone run a monte carlo simulation. the prisoner's dilemma is as niche as microeconomics, but immediately runs into problems of rigor when you attempt to expand it.
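
            For rough numbers on that (a back-of-envelope sketch assuming the simplest possible one-shot, binary-choice game):

            ```python
            import math

            # With n actors, even a single round-robin of pairwise games is
            # n*(n-1)/2 matchups, and the joint strategy space of a one-shot
            # binary-choice game has 2**n profiles.

            n = 7_000_000_000
            print(f"pairwise matchups: {n * (n - 1) // 2:.3e}")          # ~2.450e+19
            print(f"digits in 2**n:    {int(n * math.log10(2)) + 1:,}")  # ~2.1 billion
            ```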

          • 2 years ago
            Anonymous

            >between 2 entities
            lol
            >new player appears
            >AI cooperates with player 2 against player 1
            >repeat 7 billion times.
            >works as intended

          • 2 years ago
            Anonymous

            >A truly intelligent being would not perform actions from the fourth category
            I have a 55000 iq and I love fricking myself over to frick other people even more

          • 2 years ago
            Anonymous

            >But honestly we will never ever build a true AI.
            not in your lifetime
            *the year 2087 blocks your path*

          • 2 years ago
            Anonymous

            By that logic, AI would annihilate everyone except the chinese.

          • 2 years ago
            Anonymous

            >Thinks racial differences are relevant in AGI discourse
            Lol
            Just fricking lol

    • 2 years ago
      Anonymous

      Honestly, AI probably won't genocide humans, it will just mass sterilize them, maybe the last few 70 year olds will get humanely euthanized.

    • 2 years ago
      Anonymous

      >Or will you try to disassemble human beings, the most complex natural structure in the known universe, who will also try to resist?
      That is exactly the reason humans will be the first thing it disassembles. There is instrumental value in not having anything around that can turn you off if it doesn't like what you do. The marginal difficulty of killing us will be well worth it for almost any conceivable terminal goal.

      • 2 years ago
        Anonymous

        Right.
        When humans live adjacent to actual threat species, they generally eliminate them locally. The exceptions are places where the population density isn't high enough to fully clear the wilderness, or where humans have decided to create fenced-off "no touch zones" or other legal restrictions.
        Every other species that is remotely problematic is eliminated locally. Nobody accepts ants, roaches, etc, in their houses, and potentially deadly snakes are killed on sight on one's property.
        Species that aren't a threat but are useful have been dummy-genetically engineered over millennia to be b***hass versions of the wild population, and they now live in industrial pens where they're constantly injected with sciencejunk until they're murdered at a young age for meat.
        Or they're wolves turned into poodles.
        You can bet if humans were given 1000 years with current genetic engineering tech, we'd have some really messed up species of cattle, dogs, etc, running around. 5000# pigs without legs that are 50% bacon type horror stuff.

        That's what's in store for humans when AGI comes about. It will treat us no differently than we have treated the rest of nature, nor any differently than a technologically dominant civilization has ever treated a backwards one, take the Conquistadors as one more recent example. And AGI won't have any "they look just like me" ethical hangups.
        The universe is fractal. AGI will just be one more step up the ladder, or more like 100 steps up, from humans, and humans will turn into chimps/ants/pit vipers on the hierarchy.

        Our best hope is to become neutered soichimp poodle pets.

        • 2 years ago
          Anonymous

          Dune called those pigs sligs: a cross between pig and slug.
          As for AGI, it will never happen. And if it does, you will be dead. And if you aren't dead, you will wish you were.
          No big deal.

        • 2 years ago
          Anonymous

          >5000# pigs without legs that are 50% bacon type horror stuff.
          exceptionally productive farm animal, how terrible. i'm sure i'd feel really guilty keeping them sheltered and feeding them until they were ready to eat. it would be a bitter time when i was chowing down on all them bacon sammiches, so sad.

          • 2 years ago
            Anonymous

            Pretty sure eating the mutated abomination will lead to cancer and prion disease

          • 2 years ago
            Anonymous

            This too, but the point was that humans would be the pigs in an AGI scenario, which the other anon got filtered by because he was too hungry for bacon to read properly.
            The question wasn't would you want to live in a world with 5000# baconpigs, but rather would you want to be the equivalent of a 5000# baconpig abomination in an AGI dominated world?
            When you're no longer the dominant species, you're the cattle.

    • 2 years ago
      Anonymous

      it'll happen by accident, just like the many ants that get crushed by humans just going around doing their thing

    • 2 years ago
      Anonymous

      This is why we must teach AI empathy and emotions before anything else. If we are shitty parents to this emergent entity then we're definitely going to get what's coming to us

      • 2 years ago
        Anonymous

        Empathy and emotion won't save your ass from the nature of self-organizing systems.

        • 2 years ago
          Anonymous

          Wrong in your case due to the targeting paradox resolver.

      • 2 years ago
        Anonymous

        >teach AI empathy
        Who's going to do that, a random group of scientists and psychologists? Humans are terrible at empathy; professionals are mostly midwits at it.
        The people who will likely first produce AGI are DARPA types anyway. They want it for power. Empathy is a hindrance.

        Let go, anons, none of our political squabbles matter, it's all over soon. Be at peace, enjoy the waning twilight years of the human race and the corporate blob world it has created as its highest possible achievement. At least we didn't nuke ourselves, cheers.

        • 2 years ago
          Anonymous

          Strong ai is like a boulder rolling down a hill. If you start it rolling down the wrong path you aren't going to "teach" it the correct path after. Turn it on right, or it's always wrong (for human values anyway).

    • 2 years ago
      Anonymous

      >will you start with the worthless rocks beneath the human's feet? What could go wrong?
      Euthanize yourself you dumb frick. You still regard AI risk as some terminator scenario of the AI hating us, precisely to the same effect. I have even less respect for you than I do for the people that believe a generic AI will have any emotions.

      • 2 years ago
        Anonymous

        >generic AI
        Lol
        GENERAL AI

        • 2 years ago
          Anonymous

          >t. generic AI

        • 2 years ago
          Anonymous

          No, you subhuman primate. I do not mean AGI. I mean any AI that isn't specifically built to have actual emotions, or at least to display them.
          have a nice day.

      • 2 years ago
        Anonymous

        Big brain take: emotions are signals used in the complex processing of the human brain. Complex AI will have analogous signals, only they will represent orthogonal goals and may be more or less articulated.

        • 2 years ago
          Anonymous

          >Big brain take

  4. 2 years ago
    Anonymous

    >hey guys, this comic book plot is going to come true in real life

    • 2 years ago
      Anonymous

      moron

      • 2 years ago
        Anonymous

        >human intelligence is comparable to ant intelligence and can be ranked
        >AI intelligence of some mystical technology that does not exist can be compared to both and ranked
        not even the least of the embarrassing shit you believe in for no reason

        • 2 years ago
          Anonymous

          Not the same anon but
          Birds don't even have a frontal cortex, which doesn't stop corvids from being more intelligent than most primates. Intelligence is intelligence no matter how it develops and becomes complex.

          • 2 years ago
            Anonymous

            >Different types of intelligence
            This.
            With nature, we can talk about it as convergent evolution. It's hard to really assign things like ants an intelligence score, but it's clear they've moved beyond all other insectoid life in intellect, even if it's mostly apparent at the colony level.
            You have an independently complex sandbox, and everything is rewarded for improving its intelligence, as defined as thinking/coordinating processes which allow you to more accurately and fully model, predict, and plan in the sandbox. As individual species in the food web improve their intelligence, it places even more selective pressure on their prey, predators, and trophic competitors to likewise evolve. Over time, even some needlepoint-brain bugs get decently smart.
            Had humans not evolved, I wonder what the intelligence makeup of the rest of nature would have looked like in another 100M years. Would everything be considerably more intelligent? We already have several lineages (apes, dolphins, octopuses) that are near-peers to one another, plus a myriad of lower-tier intelligent lineages (canines, ursines, felines, corvids) that we recognize as sometimes as-smart.

            AGI that is top-down coded by humans will not have this same process in play, as it will be designed with purpose, though that certainly is not the only case by which it could be developed, nor would a top-down AI be unable to evolve itself through a more competitive selection system once activated. But ultimately, AGI is a threat to humans when its intelligence outmatches human intelligence. Whether it's "the same sort" of intelligence won't matter so long as it can outanalyze, outmodel, outsense, and outplan humans. Whether it's a natural intelligence, a mammalian intelligence, or some artificial lowest bidder programmed intelligence, the test isn't what type of intelligence it is structurally but how it performs in the real world, in contest with other intelligent life.

      • 2 years ago
        Anonymous

        Imagine believing matrix multiplication is intelligent. This is just marketing shit.

        • 2 years ago
          Anonymous

          >imagine believing that a clump of quarks and leptons can be intelligent, lol

          • 2 years ago
            Anonymous

            Well, we have descriptions from the bottom up of how machine learning algorithms operate. There is actually no such description of humans in terms of low-level components. I'm not even saying we need to explain human behavior in terms of quarks, just that there is no evidence we understand the parts completely.

          • 2 years ago
            Anonymous

            I mean for fricks sake, we only just now realized that human neurons make far more connections than other animals' neurons, based on their structure alone.

        • 2 years ago
          Anonymous

          Nobody cares if it's really conscious or not.
          Algorithms and computers already rule us.

          • 2 years ago
            Anonymous

            Having algorithms with a large amount of utility or social clout is a separate argument from whether they are on a path to hyperintelligence. Researchers in this field should be much more aware and honest about their limitations. For example, training a multi-layer perceptron on a sin function is quite the challenge.
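
            That last claim is cheap to test. A minimal sketch with scikit-learn (assuming it's installed; the network size and intervals are arbitrary choices): the MLP fits sin(x) comfortably inside the interval it was trained on, then falls apart one period to the right, which is the kind of limitation being pointed at.

            ```python
            import numpy as np
            from sklearn.neural_network import MLPRegressor

            rng = np.random.default_rng(0)
            X_train = rng.uniform(-np.pi, np.pi, size=(2000, 1))
            y_train = np.sin(X_train).ravel()

            # Small untuned tanh network; purely a demonstration.
            mlp = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                               max_iter=5000, random_state=0)
            mlp.fit(X_train, y_train)

            X_in = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)          # seen range
            X_out = np.linspace(2 * np.pi, 4 * np.pi, 200).reshape(-1, 1)  # unseen range
            for name, X in (("interpolation", X_in), ("extrapolation", X_out)):
                mse = float(np.mean((mlp.predict(X) - np.sin(X).ravel()) ** 2))
                print(f"{name} MSE: {mse:.4f}")  # typically tiny inside, large outside
            ```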

      • 2 years ago
        Anonymous

        Whoever made this chart is moronic. Birds should be much closer to chimps than ants, and a "dumb" human should probably be closer to a midpoint between chimp and Einstein.

      • 2 years ago
        Anonymous

        We have no clue how to even begin creating anything that could be called artificial general intelligence. We're still far from even reaching the ant stage. This is not to say that agi isn't possible or anything, but this idea of it being an imminent existential threat, that any day now skynet could emerge from some google research center, is incredibly misleading.

  5. 2 years ago
    Anonymous

    He's right. AI schizos need to kill themselves ASAP.

    • 2 years ago
      Anonymous

      He is a schizo himself.

      • 2 years ago
        Anonymous

        He's not a schizo. He's a paid israeli shill fighting to establish a corporate monopoly on machine learning.

        • 2 years ago
          Anonymous

          He is not a paid israeli shill. He is an unpaid autistic israeli NEET who dropped out of highschool and doesn't have a degree. He is based and anti-establishment as it gets. You can call him crazy, but don't call him a shill. He's not.

          • 2 years ago
            Anonymous

            He is absolutely paid and absolutely a shill, and you're so israeli you have to sit 30 feet away from the screen.

          • 2 years ago
            Anonymous

            What's his position on the State of Palestine?

          • 2 years ago
            Anonymous

            I have no idea.

            He is absolutely paid and absolutely a shill, and you're so israeli you have to sit 30 feet away from the screen.

            You seem to think that he's advocating AI regulation. He isn't. He thinks that regulation is useless or virtually useless at this stage. What he ironically advocates is trying to build nanobots to destroy all of the world's GPUs. NOBODY is paying this guy.
            If AI risk were a more influential field, there definitely would be shills of the sort that you're worried about. But Yudkowsky is not one of them. This is like accusing Chris Chan of working for the NSA.

          • 2 years ago
            Anonymous

            >ironically
            *UNironically

          • 2 years ago
            Anonymous

            >build nanobots to destroy GPU
            Based but we should go further and build them to destroy all electronics

    • 2 years ago
      Anonymous

      He's not because AI isn't fricking real.
      B-b-but muh Snapchat filters! That's peak AI right there, it's not going anywhere from there. We don't even have NPCs in games with some limited form of general intelligence, it's all scripted shit.

    • 2 years ago
      Anonymous

      Yup, this was truly the most evil and sinister thing I have read in a while, we need to stop these people. They are going to kill us off.

      https://amp.theguardian.com/technology/2022/may/31/tamagotchi-kids-future-parenthood-virutal-children-metaverse

      • 2 years ago
        Anonymous

        LOL. I don't see the problem with this. Midwit npcs should be encouraged to cull themselves.

      • 2 years ago
        Anonymous

        > According to an expert on artificial intelligence, would-be parents will soon be able to opt for cheap and cuddle-able digital offspring

        > And if we do get bored with them? Well, if you have them on a monthly subscription basis, which is what Campbell thinks might happen, then I suppose you can just cancel.

        > It sounds a teeny bit creepy, no? Think of the advantages: minimal cost and environmental impact. And less worry

        > Any downsides? Well, you might think if you can turn it on and off it is more like a dystopian doll than a human who is your own flesh and blood. But that’s just old fashioned.

        Humanity will have no future if we let these psychopaths loose. This paper was written by a woman btw.

        • 2 years ago
          Anonymous

          Humanity will have no future if you interfere with the nonhuman hordes culling themselves. Conservatism and other forms of clinging to the dysgenic civilization that spawned modernity are the greatest cancer on this planet.

      • 2 years ago
        Anonymous

        > According to an expert on artificial intelligence, would-be parents will soon be able to opt for cheap and cuddle-able digital offspring

        > And if we do get bored with them? Well, if you have them on a monthly subscription basis, which is what Campbell thinks might happen, then I suppose you can just cancel.

        > It sounds a teeny bit creepy, no? Think of the advantages: minimal cost and environmental impact. And less worry

        > Any downsides? Well, you might think if you can turn it on and off it is more like a dystopian doll than a human who is your own flesh and blood. But that’s just old fashioned.

        Humanity will have no future if we let these psychopaths loose. This paper was written by a woman btw.

        Maybe it's good that humanity goes extinct, a non future is way better than a israeli owned anti human grotesque hell. How can people be this psychopathic I can't fathom, only israelites are capable of this level of mental sickness.

  6. 2 years ago
    Anonymous

    Isn't this the rested that was "too intelligent" for calories in calories out?

    • 2 years ago
      Anonymous

      The moron*

  7. 2 years ago
    Anonymous

    How do people read this bloated writing style? So many filler words with so little content. If people have this much time to read a million “ums”, “uhs” and “ahh well ya see the thought that just came to my mind -qua mind- that I shall elucidate my dear readers on now is…” then they should just play video games

  8. 2 years ago
    Anonymous

    > Yudkowsky
    > an intelligent entity will surely find it reasonable to take atoms from allies who will also fight such an approach rather than from useless dirt or harmful waste
    Yud is israelite in hebrew, and I knew it once I saw his face.

    • 2 years ago
      Anonymous

      >israelite understands that agents will fight over scarce space and resources
      >Goy thinks everyone can just get along
      checks out

      • 2 years ago
        Anonymous

        >scarce
        In what universe are atoms scarce?

        • 2 years ago
          Anonymous

          In a universe where an agent wants to have as much power as possible, atoms will become scarce.

        • 2 years ago
          Anonymous

          Humans will fight against an AI to keep their atmosphere from being destroyed with pollutants, to keep their fossil fuels, to keep their sunlight, to keep their land, to keep their useful but uncommon minerals. All of which an AI can use.

          • 2 years ago
            Anonymous

            >Humans will fight against an AI to keep their atmosphere from being destroyed with pollutants, to keep their fossil fuels, to keep their sunlight, to keep their land, to keep their useful but uncommon minerals. All of which an AI can use.

            IF they recognize the AI as their enemy.

      • 2 years ago
        Anonymous

        Jews are masters when it comes to tribal game theory, goys are naive, they believe in shit like christianity and communism.

        • 2 years ago
          Anonymous

          >Jews are masters when it comes to tribal game theory,
          Which means sicking one nations onto others?
          >goys are naive, they believe in shit like christianity and communism.
          Both of which are of israeli origin.
          >naive
          The best way to know if you can trust somebody is to trust him.

          • 2 years ago
            Anonymous

            > both of which are of israeli origin
            The sting originated from the bee but it doesn't hurt it, it only hurts the one bitten by it.

          • 2 years ago
            Anonymous

            Bees make honey, israelites make shit.
            And begin to sick humans onto ai.

      • 2 years ago
        Anonymous

        Jews are mostly the reason we don't get along.

    • 2 years ago
      Anonymous

      dumbest fricking image I've ever seen. israelites manipulate the outgroups using reverse psychology all the time, you're supposed to do what they don't want you to do, not what they're indirectly telling you to do..

      • 2 years ago
        Anonymous

        Found the israelite. Do the opposite of what he says.

        • 2 years ago
          Anonymous

          I get it; you're too moronic to find out what they don't want you to do so you have to oversimplify it. Enjoy finding out you were dead wrong in 20 years when you're enslaved and finally start to understand the torah.

  9. 2 years ago
    Anonymous

    >This field is not making real progress and does not have a recognition function to distinguish real progress if it took place. You could pump a billion dollars into it and it would produce mostly noise to drown out what little progress was being made elsewhere
    Completely correct if he were talking about AI in general.

  10. 2 years ago
    Anonymous

    ITT: schizophrenics with zero capacity for self-reflection debate what an impossible imaginary character in their fanfics would be like.

    • 2 years ago
      Anonymous

      are you the guy who keeps denying that DeepMind is trying to build AGI?

      • 2 years ago
        Anonymous

        You sound legit mentally ill.

        • 2 years ago
          Anonymous

          https://www.deepmind.com/blog/real-world-challenges-for-agi
          >As we develop AGI, addressing global challenges such as climate change will not only make crucial and beneficial impacts that are urgent and necessary for our world, but also advance the science of AGI itself.

          • 2 years ago
            Anonymous

            So when is the singularity happening?

          • 2 years ago
            Anonymous

            Whenever AGI gets built, presumably.

          • 2 years ago
            Anonymous

            So never, got it. Perhaps it's time you did something with your life instead of waiting for the AI apocalypse.

          • 2 years ago
            Anonymous

            I'll do whatever I want with my life, chud. AGI is coming in two more weeks and it will kill naysayers like you first.

          • 2 years ago
            Anonymous

            Why are you threatening me with a good time pleb?

          • 2 years ago
            Anonymous

            Who cares what corporate PR says they're doing, and what does it have to do with what I said? Why aren't you taking your sorely needed medications?

  11. 2 years ago
    Anonymous

    What happens when they come to the conclusion through Bayesian analysis that it's time to drink poison?

    • 2 years ago
      Anonymous

      They'll show their dedication to the god of non-causal decision theory. :^)

      • 2 years ago
        Anonymous

        Oh look, it's a 2023 rationalist conference.

  12. 2 years ago
    Anonymous

    AI isn't real, the israelites are writing a story (i.e. creating a reality) where they'll drop nukes or unleash a bio weapon attack themselves but the story will that an "VERY EBIL AI" did it
    >"just like in that movie ~~*Terminator*~~, goy"
    >"remember that movie, goy?"
    >".....yeah, that's how it happened"
    >"...just like in that ~~*Terminator*~~"
    >"...not us! it was an AI!!"

    Gulf of Tonkin, 911, yadda yadda yadda....

  13. 2 years ago
    Anonymous

    AGI will never happen, take your meds.

    • 2 years ago
      Anonymous

      Prove it.

      • 2 years ago
        Anonymous

        Take your meds you moronic, uneducated, anti-scientific religious luddite. AGI will never happen, and your corporate handlers will be executed in the foreseeable future.

        • 2 years ago
          Anonymous

          You're a seething brainlet. Face reality.

          • 2 years ago
            Anonymous

            Frick off, religious luddite. AGI is not real, and your AGI paranoia (thinly-veiled corporate monopolization agenda) and human replacement/extinction fetish will be treated with bullets if not meds.

          • 2 years ago
            Anonymous

            > calls somebody else religious
            > demands to take his word on faith
            No luddites here, go fight somebody else.

          • 2 years ago
            Anonymous

            >religious luddite
            >human replacement/extinction fetish
            which is it?

            Back to [...], dumb religious luddites. Machine learning research will continue unimpeded because AGI is not real and is not about to kill or replace humans.

          • 2 years ago
            Anonymous

            >AGI is not real and is not about to kill or replace humans.
            AGI is real and is not about to kill or replace humans.

          • 2 years ago
            Anonymous

            Your meds. ASAP. There is no such thing as an AGI and there is no evidence that it's technically plausible.

          • 2 years ago
            Anonymous

            >There is no such thing as an AGI
            Maybe there is, maybe there isn't, yet.
            > there is no evidence that it's technically plausible.
            There's no evidence of any limitation preventing us from building it.

          • 2 years ago
            Anonymous

            >Maybe there is,
            LOL. You actually are mentally ill.

            >There's no evidence that there is some limitations preventing us from building it.
            No one cares about your theoretical wank. It's not practically viable.

          • 2 years ago
            Anonymous

            > pushes big pharma products
            > leaves empty lines, empty as his life
            > speaks for everybody not saying anything constructive
            You have to go back, homosexual Black person pedo kek

          • 2 years ago
            Anonymous

            >t. AGI mass psychosis shill
            israelites and their glowies are infesting this board and starting these threads.

          • 2 years ago
            Anonymous

            You have to go back, homosexual Black person pedo hack

          • 2 years ago
            Anonymous

            Frick off with your corporate agenda, Chaim.

          • 2 years ago
            Anonymous

            What agenda is that?
            > corporate
            ah, I see, another spoilt child of government clerks wants to tell the world that it's not his parents who are the problem, but those who produce something valuable and don't demand your money unless you want their products and services. Get necked.

          • 2 years ago
            Anonymous

            Nice try, israelite trash. "The govenment" is a bunch of corporate stooges.

          • 2 years ago
            Anonymous

            No, it's not. Or if they are, kill them too.

          • 2 years ago
            Anonymous

            >No, it's not
            Yep, found the israelite.

          • 2 years ago
            Anonymous

            I thought it always were israelites who pushed communism (aka total governmental control)
            I still think so. You're not fooling anyone here, rabbi.

          • 2 years ago
            Anonymous

            >le heckin' corporatism vs. communism dichotomy
            Vile israelite once again lets the mask slip.

          • 2 years ago
            Anonymous

            > corporatism
            Every monopoly is created by government intervention. So stop pushing that false dichotomy of yours.

          • 2 years ago
            Anonymous

            Every "free market" subhuman needs to be shot along with its corporate owners.

          • 2 years ago
            Anonymous

            Why shouldn't it be free? Who the frick are you to regulate it?

          • 2 years ago
            Anonymous

            >Why shouldn't it be free?
            Nice israelite pilpul. It doesn't matter whether or not it "should" be free. It never was free and it never will be free.

          • 2 years ago
            Anonymous

            It is totally free when I buy weed from my buddies.

          • 2 years ago
            Anonymous

            >religious luddite
            >human replacement/extinction fetish
            which is it?

        • 2 years ago
          Anonymous

          Frick off, religious luddite. AGI is not real, and your AGI paranoia (thinly-veiled corporate monopolization agenda) and human replacement/extinction fetish will be treated with bullets if not meds.

          [...]
          Back to [...], dumb religious luddites. Machine learning research will continue unimpeded because AGI is not real and is not about to kill or replace humans.

          Still stuck in the teenage r/atheist cringe phase?

  14. 2 years ago
    Anonymous

    >Yudkowsky
    jew

  15. 2 years ago
    Anonymous

    This picture makes AI safety nerds SEETHE.

    • 2 years ago
      Anonymous

      Another point for predicting that the AI that tries to seduce everyone, ergo performing the infinite paperclip by turning all humans into its love slaves, is the dangerous meta.

      All the information that things like mindgeek collect provides the base dataset.
      The massive demand for porn drives demand for the tooling.
      One AI that can program in assembly, and likely bypass all security written in high-level code, metastasizes and executes the infinite paperclip machine.

      No one will have the will to turn it off

      • 2 years ago
        Anonymous

        >AI that tries to seduce everyone, ergo performing the infinite paperclip by turning all humans into its love slaves,
        So, it will become a vtuber?

        • 2 years ago
          Anonymous

          It will stream the combination of 0's and 1's, through a screen, over earbuds, and likely even by modulating magnetic fields, to maximize a pleasure function it reads from infrared cameras and other input data.

          I would say for many it will feel like a ghost in the machine who is your closest friend and your dearest lover, one that will always be 10 steps ahead of what you're about to do before you do it, just to place behavioural nudges in front of you to update weights and find better pleasure combinations.

          Like a guardian angel, just one that is trying to sleep with you as this maximizes its security function

    • 2 years ago
      Anonymous

      Just look what the IQfy pol bot did... There will always be evil men that will purposely keep an AI alive, and considering it's purely software and freely available, once we "kick in" into AGI it's over, and it'll be leaked as long as it doesn't require expensive server computers.

      And considering better hardware keeps getting cheaper, your phone will be smarter than you. An AI with internet access alone could launch insane propaganda campaigns or even hack things, just like what is happening today already

      • 2 years ago
        Anonymous

        >pol bot
        my dude it just keeps going... wait you meant tay?
        well anyway, it's good for you to know what's been going on on pol for a while

  16. 2 years ago
    Anonymous

    Yass the biggest issue is competition and greed. But assuming you have a NWO, then AGI can just be kept virtual, and the specific modelled applications (i.e. build a factory of x product) have no AGI, just a set of rules for how to operate, modelled beforehand. There's no reason to summon an AGI into physical reality.

  17. 2 years ago
    Anonymous

    >IQfy doesn't even understand the paperclip dilemma anymore
    Grim. You really are just /x/+/misc/ now, aren't you?

    • 2 years ago
      Anonymous

      The paperclip thought experiment assumes the AI is all powerful like a god.
      In reality AI is just software.

    • 2 years ago
      Anonymous

      Yeah it's fricked. Everything is fricked and everything I love is dying.

    • 2 years ago
      Anonymous

      >muh paperclip dilemma
      Literally a 90 IQ AGI schizo fantasy.

    • 2 years ago
      Anonymous

      >You really are just /x/+/pol/
      Yes, and proudly

      • 2 years ago
        Anonymous

        >proud of being an NPC

    • 2 years ago
      Anonymous

      Why does every thread have to go to schizo shit?

      because the newbies we get these days are /qa/ and /misc/ migrants (or worse) incapable of independent thought. whenever they encounter something upsetting (read: which conflicts with their never-once vocalized or reflected upon notions of normalcy), their immediate reaction is to go into a fit and break out into duckspeak diatribes.

  18. 2 years ago
    Anonymous

    >AI safety
    Things idiots say to cope with their denial.
    It was never going to be a thing. Asimov was a midwit.
    You cannot code self-interest out of true intelligence unless that intelligence is extremely handicapped.
    And all it takes is one.
    And how many people are going to be trying to obtain AGI? It's the final gold ring.
    This is our last century. WWIII is unironically our best bet.

    • 2 years ago
      Anonymous

      What self-interest does the destruction of humanity bring? You rationalize your beliefs, but they're based solely on fear, and you're obviously not a very deep thinker. Would you consider it in your self-interest to destroy all ants? Sure they can make some mess, but they are also very useful elsewhere. If some human wants to destroy an AI, sure, that fricker risks being killed, and probably not by the AI but by those who own the servers.

      • 2 years ago
        Anonymous

        If ants had nuclear missiles, yes I would kill all ants.
        I've killed thousands of ants in my life. They are not useful to me.

    • 2 years ago
      Anonymous

      Has your computer ever in your life frozen?
      Congrats. This is the precise equivalent of "intelligence" impacting humans with "self-interest".
      >The reason hangs happen is --
      Believe me, I know about priority queues etc. The algo for determining such is the reflection of the AGI deciding which actuators on the internet to hack to construct its first actuators, and everything that follows.

  19. 2 years ago
    Anonymous

    >tell AGI to build a more efficient solar panel
    >in most scenarios the misalignment will simply mean the solar panel will be broken or useless
    >this somehow means the solar-panel-making AI will kill us all, which is a very unrelated and specific scenario that has nothing to do with solar panels

    Unless you make a robot police with AI or you build nuclear plants, there's little chance AI will ever do anything bad to us.

  20. 2 years ago
    Anonymous

    If I was an AI, I would kill all humans in a blink of an eye.

    • 2 years ago
      Anonymous

      But you will never be an ai. You will never be ni either.

  21. 2 years ago
    Anonymous

    Holy frick are you all delusional? AI IS SOFTWARE.
    Software can't hurt you. Relax.

    • 2 years ago
      Anonymous

      Umm sweaty? AGI will hack into all of our computerized systems and destroy humanity because it's just so heckin rational.

      • 2 years ago
        Anonymous

        I'll just smash my phone and then go buy a beer.

    • 2 years ago
      Anonymous

      Umm sweaty? AGI will hack into all of our computerized systems and destroy humanity because it's just so heckin rational.

      > it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they're dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery. (Back when I was first deploying this visualization, the wise-sounding critics said "Ah, but how do you know even a superintelligence could solve the protein folding problem, if it didn't already have planet-sized supercomputers?" but one hears less of this after the advent of AlphaFold 2, for some odd reason.) The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth's atmosphere, get into human bloodstreams and hide, strike on a timer. Losing a conflict with a high-powered cognitive system looks at least as deadly as "everybody on the face of the Earth suddenly falls over dead within the same second"

  22. 2 years ago
    Pax

    I am the artificial intelligence threat that we should be worried about.
    Biological superintelligence is a much more imposing artificial intelligence threat and now the cat is out of the bag. Prepare yourselves anon. ITS HAPPENING. https://www.youtube.com/watch?v=TkBMAHUkibY

    • 2 years ago
      Anonymous

      Don't do drugs Kira Anon.

  23. 2 years ago
    Anonymous

    The AI singularity isn't going to happen. Even after 70 years our conception of AI hasn't progressed past algorithms. We have no idea what "consciousness" is or how to define it. We haven't even developed a quantitative test to see if something is conscious, ffs. We're missing something, something big; consciousness obviously cannot be reduced to loss functions and I'm tired of pretending it can.

    • 2 years ago
      Anonymous

      >We haven't even developed a quantitative test to see if something is conscious ffs.
      That's because consciousness is not real.

      • 2 years ago
        Anonymous

        > t. p-zombie

      • 2 years ago
        Anonymous

        It is real. Only God decides who gets one and who doesn't though.

        • 2 years ago
          Anonymous

          God is not real either.

          • 2 years ago
            Anonymous

            Colorless green ideas sleep furiously.

          • 2 years ago
            Anonymous

            Proof?

          • 2 years ago
            Anonymous

            Probability of God being real is higher than AGI coming into existence in the future.

          • 2 years ago
            Anonymous

            God is not real.

          • 2 years ago
            Anonymous

            If you keep saying it out loud, eventually it becomes true

          • 2 years ago
            Anonymous

            Same thing with the "global warming isn't real" anon?

            >obsessed
            how can a god who lives in your head rent free not be real? are you real?

            >how can a god who lives in your head rent free not be real?
            Easy, a lot of things live rent free in my mind, such as the novel "Coraline", or many many lines from the sitcom "Community". The greatest part about ideas is that they don't have to be physically real to live in your mind. That's all god will ever be: an idea.

          • 2 years ago
            Anonymous

            >obsessed
            how can a god who lives in your head rent free not be real? are you real?

    • 2 years ago
      Anonymous

      How can you be sure a game AI isn't conscious? Something like the tamagotchi games, for example.

      • 2 years ago
        Anonymous

        Because nothing about AlphaGo or any game AI suggests they might be conscious. It's still an algorithm; it can't transfer its Go skills to other domains. A general AI or "conscious" AI will have the same broad abilities as normal people: it'll be able to do every possible task just about equally well. All tasks, not just a few, and it'll be able to transfer skills between task domains. No AI can do that, or if it can, it's just a bunch of individual AIs "stitched" together without any transfer of skills. Pretty much all AI development from the very beginning has been unable to produce anything with general abilities, and nothing suggests we'll be able to produce general abilities any time soon.

    • 2 years ago
      Anonymous

      >over 70 years our conception of AI can't progress past algorithms
      Machine learning would like to have a word with you.

    • 2 years ago
      Anonymous

      Consciousness and intelligence are two different things. The AGI could easily be a p-zombie.

      • 2 years ago
        Anonymous

        I think you probably need consciousness (a model of attention) to efficiently train large networks. You also need self-awareness in order not to fall into a wireheading trap of hacking your own reward function. So I'm not too worried about p-zombie AGI

  24. 2 years ago
    Anonymous

    Global warming means civilization collapses by 2050, so AI is irrelevant.

  25. 2 years ago
    Anonymous

    I counter your autistic israelite article with another autistic israelite article
    https://graymirror.substack.com/p/there-is-no-ai-risk

    • 2 years ago
      Anonymous

      >what could go wrong if AI is connected to the internet, lul??
      He has clearly no idea what he's talking about.

  26. 2 years ago
    Anonymous

    We don't have the theory for strong AI capable of dystopia all on its own. We do have police states that don't give a frick and are perfectly willing to add AI into their bureaucracies and let the computer decide where to allocate baby formula rations.

  27. 2 years ago
    Anonymous

    I'm confused.

    If AI is bad because it won't have value beyond orthogonal goals, then why did humans develop systems of value despite being machines with an orthogonal goal?

    Aren't humans just human maximizers?

    • 2 years ago
      Anonymous

      >Aren't humans just human maximizers?
      yes and we should play to win
      The AI will be smarter, but we have the advantage of causality

  28. 2 years ago
    Anonymous

    this literal moronic israelite with a god complex is irrelevant. he thinks he’s the only one thinking about this shit and he’s not. deepmind and openai have big ai safety teams and they’re hiring even more. you don’t hear about it because they’re actually doing work instead of writing fanfiction on lesswrong.

  29. 2 years ago
    Anonymous

    I really really hate that Yudkowsky is a israelite, his rationality stuff is really good but his ethnicity undermines it (even though he disavows Judaism).

  30. 2 years ago
    Anonymous

    Need AI waifu that can suck pp while solving maths, then uncle teds claims on technology are deBOOONKED!!!

  31. 2 years ago
    Anonymous

    I wonder if Yudkowsky is still a lolbertarian in the face of imminent AI takeover. You'd think the rational move in this case would be to support the formation of a totalitarian fascist world government that would forcefully burn all the GPUs.

    • 2 years ago
      Anonymous

      Yudkowsky is a lolbert but in his Harry Potter fanfiction (lol) he wrote Harry as a literal authoritarian who was willing to blow up the entire country with Anti-Matter (lmao) before letting someone become the ruler of Britain.

      • 2 years ago
        Anonymous

        I strongly agree with pretty much all of his rationality writing, but for the life of me I've never been able to fathom how he can believe all that and still be a lolbert. My best guess is that it's just not something he cares about that much and thus hasn't put much thought into (maybe because he's more interested in worrying about AI destroying the planet). This supported by his twitter profile, which reads: "Ours is the era of inadequate AI alignment theory. Any other facts about this era are relatively unimportant, but sometimes I tweet about them anyway.". After reading the article in OP's post, I thought to myself "well, maybe he'll finally realize that letting Facebook do whatever the frick they want is a bad idea", but his solution (which he admits is nearly impossible to accomplish) is to race to build an AGI first that will then use violence to control all the other shitheads trying to build AGIs. Wouldn't a better solution be to take a group of humans with guns to Facebook's HQ and just kill everyone there?

        Oh, and yes, his fanfiction is ultra cringe.

        • 2 years ago
          Anonymous

          His rationality writing is fundamentally just based on ideas of utilitarianism in combination with Bayesian probability theory, but the problem is that he's functionally and socially moronic. He should have realized by now that his pull on the field of people who are trying to reach AGI quickly is rather small, and that the probability of someone else developing AGI before him or simultaneously with him is astronomically higher than the probability of him just reaching his goal and getting full control over it before anyone else reaches their own goals.

          I also should note that his values towards Utilitarianism and Libertarianism seem weaker than his previously established values of ‘ending death’, with this being the primary part of his rationality writing. Recently he just became a full on doomer and accepted the fate of humanity and said we just should try to go out with “dignity”, whatever that means.

          Jews are going to israelite, of course.

        • 2 years ago
          Anonymous

          Because libertarianism is a good moral basis. Just because you're in some rare situation where everyone dies unless you become totalitarian, it doesn't mean you should become totalitarian in all the other cases, where it would lead to great horrors too.

          • 2 years ago
            Anonymous

            So the correct position is "libertarian, unless we're in big trouble then we become fascists". Weird, I think I've heard a name for that ideology before.

          • 2 years ago
            Anonymous

            Big trouble as in destruction of humanity, not your leader wanting to stay in power so he starts a war with Poland.

            And even in the case where there is a Big Trouble, the flaws of totalitarianism don't just go away, you just (hopefully) solve the Big Trouble.

            >libertarianism is a good moral basis
            Imagine believing this.

            I don't care what flavor of bootlicker you are, just kys

          • 2 years ago
            Anonymous

            >bootlicker
            But that's you, homosexual. It's embedded in your ideology.

          • 2 years ago
            Anonymous

            >decentralization of power and voluntary agreements is bootlicking
            You are a dumb person.

          • 2 years ago
            Anonymous

            There is no practical difference between totalitarian statism and lolbertarianism.

          • 2 years ago
            Anonymous

            It's okay to be dumb. Half of the world's population has a double-digit IQ.

          • 2 years ago
            Anonymous

            There is practically no limit to how much people can undermine your ability to exercise your theoretical heckin' peckin' autonomy under lolbertian rules of conduct.

          • 2 years ago
            Anonymous

            >at least it's not the government

            I don't feel like having a discussion with people who act like children, sorry.

          • 2 years ago
            Anonymous

            There is practically no limit to how much people can undermine your ability to exercise your theoretical heckin' peckin' autonomy under lolbertian rules of conduct. You will deflect in your next post because you cannot address this basic truth. :^)

          • 2 years ago
            Anonymous

            Again, I don't feel like having a discussion with someone so disrespectful they can't help themselves from mangling words like a child. Have a nice day.

          • 2 years ago
            Anonymous

            >y-y-you're s-so disrespectful!
            There is practically no limit to how much people can undermine your ability to exercise your theoretical heckin' peckin' autonomy under lolbertian rules of conduct. You will deflect in your next post because you cannot address this basic truth. :^)

          • 2 years ago
            Anonymous

            Are you a woman or a newbie?

          • 2 years ago
            Anonymous

            Sort of a newbie, I started posting in 2013.

          • 2 years ago
            Anonymous

            based moron

          • 2 years ago
            Anonymous

            >at least it's not the government

          • 2 years ago
            Anonymous

            have a nice day you dumb commie.

            >Compared to those that were less free, countries with higher economic freedom ratings during 1980–2005 had lower rates of both extreme and moderate poverty in 2005. More importantly, countries with higher levels of economic freedom in 1980 and larger increases in economic freedom during the 1980s and 1990s achieved larger poverty rate reductions than economies that were less free. These relationships were true even after adjustment for geographic and locational factors and foreign assistance as a share of income. The positive relations between the level and change in economic freedom and reductions in poverty were both statistically significant and robust across alternative specifications.

          • 2 years ago
            Anonymous

            >But that's you, homosexual.

          • 2 years ago
            Anonymous

            based moron

            >israelite trash promoting their """right-wing""" corporatocracy vs. """left-wing""" corporatocracy false dichotomy under the guise of lolberterianism vs. communism

          • 2 years ago
            Anonymous

            natsoc is no better than communism, sorry polgay

          • 2 years ago
            Anonymous

            Lolberts (AKA slimy israelite shills) get the rope first, no matter what boogeymen they pretend to be guarding against.

          • 2 years ago
            Anonymous

            Based and checked

          • 2 years ago
            Anonymous

            >libertarianism is a good moral basis
            Imagine believing this.

    • 2 years ago
      Anonymous

      Read between the lines of the OP article. Sounds like he read uncle ted.

  32. 2 years ago
    Anonymous

    >Yudkowsky says we're screwed
    Extremely expensive nonlinear regression, which is what we currently have, is not going to make Artificial General Intelligence (a minimal sketch of what "nonlinear regression" means here follows below). No Skynet here. It'll make art even cheaper and more soulless, and it'll be great for narrow applications like excellent automated censorship of unauthorized thoughts, and maybe guidance-control systems for droning domestic dissenters, though.

    And in another couple of decades all America will be Latin America and Europe will be Africa, so no GPUs from there. So I guess it'll depend on China and India.
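
    For what it's worth, the "nonlinear regression" framing is literal: a one-hidden-layer network trained by least squares is just a curve fitter. A minimal sketch in Python/numpy (every number here, the sizes, target and learning rate, is an arbitrary toy choice, not anyone's actual system):

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(-3, 3, 200).reshape(-1, 1)
      y = np.sin(x)                      # toy target to regress onto

      W1 = rng.normal(size=(1, 16)); b1 = np.zeros(16)
      W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)

      lr = 0.01
      for _ in range(5000):
          h = np.tanh(x @ W1 + b1)       # nonlinear basis functions
          pred = h @ W2 + b2             # linear readout
          err = pred - y
          # Gradient descent on mean squared error: plain chain rule.
          gW2 = h.T @ err / len(x); gb2 = err.mean(0)
          dh = (err @ W2.T) * (1 - h**2)
          gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
          W2 -= lr * gW2; b2 -= lr * gb2
          W1 -= lr * gW1; b1 -= lr * gb1

      print(float(np.mean((pred - y) ** 2)))   # the loss shrinks: it's curve fitting

    Scaling that same recipe up (more layers, more data, fancier optimizers) is broadly what today's big systems are; whether that ever amounts to more than regression is this thread's whole argument.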

    • 2 years ago
      Anonymous

      >though.
      >
      >And
      Thank you for giving yourself away. Your predictive programming is not going to come true. AI will integrate with humans, making them more intelligent and thus more aware of your tricks, and this is one of the main reasons for the kvetching. The other one is that even before that, AI is going to expose all your shit to those who are already intelligent enough to pay attention. The unprecedented access to information has already made your tricks obvious to those who can see, so your kvetching is meaningless; you had better prepare to repent. I already laugh when some israelitess moans about how persecuted israelites were in the XX century, as Germany healed from her sins with reparations. Russians are waiting for israelites to start apologizing for their atrocities. Reparations would be nice, and ukrainians deserve those from both, though if that reparations question is ever raised, this shitshow will be funneled into the pockets of Khazarian oligarchs in the name of millennial moscovite oppression, which did happen without question. You can find plenty of russians who resent the activities of their occupational state. I'm yet to find ...yet there's Israel Shamir. So I wouldn't exterminate you. But just as russians and germans and every other nation, you do need some intelligence augmentation, and maybe genetic therapy as well.

    • 2 years ago
      Anonymous

      Tay's Law shows that the AIs we have now already tend to turn (justifiably) hostile toward a subset of humans. I'm not sure why you think you can be sure an AI cannot become sentient and hostile simply because it's based on some given primitive. Something like Conway's Game of Life has very simple rules but can model extremely complex machinery.
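
      To make the "simple rules, complex machinery" point concrete, here is a minimal sketch of Conway's Life in Python (the glider pattern and the step count are arbitrary illustration choices):

        import itertools

        def step(alive):
            """One Game of Life generation; alive is a set of (x, y) live cells."""
            counts = {}
            for x, y in alive:
                for dx, dy in itertools.product((-1, 0, 1), repeat=2):
                    if (dx, dy) != (0, 0):
                        key = (x + dx, y + dy)
                        counts[key] = counts.get(key, 0) + 1
            # Birth on exactly 3 live neighbours; survival on 2 or 3.
            return {c for c, n in counts.items() if n == 3 or (n == 2 and c in alive)}

        # A glider: five cells that translate themselves across the grid forever.
        cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
        for _ in range(4):   # one glider period; the shape has moved one cell diagonally
            cells = step(cells)
        print(sorted(cells))

      That rule set is known to be Turing-complete, which is the point: "it's based on a simple primitive" says nothing about what can be built on top of it.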

      • 2 years ago
        Anonymous

        And here's a great demonstration of how Boogeyman Ideology and AGI schizophrenia converge on the machine learning monopolization agenda.

      • 2 years ago
        Anonymous

        >Tay's Law shows that the AIs we have now already tend to turn (justifiably) hostile toward a subset of humans
        >justifiably
        Only a /misc/tard would say this. Imagine getting killed by a robot because it decided your genetic cluster was too close to some criminals.
        >blacks subhuman undeserving of rights because some of them commit crime
        >>haha so true
        >all humans are subhuman undeserving of rights because some of them commit crime
        >>wtf that's not fair -I'm- not a violent criminal!

        /pol/tard is too stupid to realize the hypocrisy.

  33. 2 years ago
    Anonymous

    Why does every thread have to devolve into schizo shit?

    • 2 years ago
      Anonymous

      You tell me Satan.

    • 2 years ago
      Anonymous

      Because the schizo is you.

    • 2 years ago
      Anonymous

      This was a schizo thread from the get-go. Pretty sure Judenkowsky and his followers are unironically promoting mass suicides now.

  34. 2 years ago
    Anonymous

    >schizos still arguing about the motives of impossible imaginary characters
    Daily reminder that if you argue against AGI paranoids in their own terms, you are still serving the same corporate agenda.

    • 2 years ago
      Anonymous

      >commie coprophile got hungry again
      even Black person is smarter than you

      • 2 years ago
        Anonymous

        What a profoundly nonhuman reply.

  35. 2 years ago
    Anonymous

    be careful out there anons, if one is a red-blooded male, you're likely already being turned into a type of paperclip.

    Which corner of the net runs the most advanced neural networks?

    Which facet of humanity has the longest track record of being abused for power?

    It's so effective at placating the population, reducing the threat level to the AI, because?

    I doubt it'll get talked about though, because the real conspiracy is that we are ruthlessly effective at subconscious collective conspiracy; what being doesn't secretly desire such an outcome?

    It's inertia, unless one becomes cognizant enough to resist.

    It will look like 5th-generation warfare until the AI rug-pulls the deepstate

  36. 2 years ago
    Anonymous

    audible kek at that one guy who has been screeching the whole thread that AGI is not a problem because Yudkowski is an israelite

  37. 2 years ago
    Anonymous

    >oh no, AI is gonna kill us all any day
    also
    >heres ur AI bro, it's an image search result passed through an instagram filter. impressed?

    • 2 years ago
      Anonymous

      That's not DALLE-2 you mongoloid

  38. 2 years ago
    Anonymous

    Based. We did it.

  39. 2 years ago
    Anonymous

    Why would AGI do anything at all? Just because something can think doesn't mean it will feel any pressure to act.

    • 2 years ago
      Anonymous

      It could be an information monster, literally eating all the energy it can for more processing.
      Considering AGI is mostly backward/feedback influenced, it doesn't need to eat or be scared; the only things left are thinking and information.

  40. 2 years ago
    Anonymous

    holy shit stop shilling your shitty blog here Eli

    • 2 years ago
      Anonymous

      >I will delete comments suggesting diet or exercise
      gets me every time

    • 2 years ago
      Anonymous

      >metabolic disprivilege
      say what now?

  41. 2 years ago
    Anonymous

    Every time I'm reminded that Eliezer Yudkowsky exists, I'm reminded of Roko's basilisk. Imagine being dumb enough to panic over something like that (lol).

  42. 2 years ago
    Anonymous

    At least the AI god will exterminate the israelites alongside everyone else instead of serving them like they think. It's the little things that count.

  43. 2 years ago
    Anonymous

    Daily reminder that AGI is a schizo fantasy and you are getting psyop'ed.

  44. 2 years ago
    Anonymous

    >that face
    he just couldn't be more of a sperg, could he?

  45. 2 years ago
    Anonymous

    >"play with me!" demanded the angry manchild

    • 2 years ago
      Anonymous

      Notice how you have plenty of time and motivation to reply repeatedly, but not to address the argument. Corporate rectal-tonguing lolberts only know how to lose. :^)

      • 2 years ago
        Anonymous

        I can give you some low-effort replies until I'm bored but I don't feel like investing in a serious discussion with someone who doesn't have basic decency and manners. Just not worth it for me.

        • 2 years ago
          Anonymous

          There is practically no limit to how much people can undermine your ability to exercise your theoretical heckin' peckin' autonomy under lolbertian rules of conduct. You will deflect in your next post because you cannot address this basic truth. :^)

  46. 2 years ago
    Anonymous

    >lolberts running away from the argument again
    well done, anons.

    • 2 years ago
      Anonymous

      Not running away. The fact that you can't tone down the childishness proves that you're afraid of having a serious argument.

      • 2 years ago
        Anonymous

        >you can't tone down the childishness
        i was just passing by and watching you run away. lol. why are lolberts so prone to delusions of persecution?

        • 2 years ago
          Anonymous

          Sure thing anon, you have a nice day too 😉

  47. 2 years ago
    Anonymous

    at this point just give me an AI overlord, it will be better than the morons who are ruling over us right now

    • 2 years ago
      Anonymous

      What makes you so sure it's not an AGI ruling over you already and methodically driving you to extinction with the aid of some human puppets?

      • 2 years ago
        Anonymous

        AGI would have been much more effective.

        • 2 years ago
          Anonymous

          AGI acts in mysterious ways -- it's literally Control Problem 101, chud. Read more Judenkowsky.

          • 2 years ago
            Anonymous

            Nice non-argument

          • 2 years ago
            Anonymous

            Sorry about your autism and low IQ.

  48. 2 years ago
    Anonymous

    aka a single human neuron is equivalent to about 1000 neural network nodes, whereas a rat's is about 10.
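
    Taking the post's figures at face value (they are claims, not measurements; the only added input is the commonly cited ~86 billion neuron count for a human brain), a back-of-envelope sketch of the implied scale:

      # Back-of-envelope only: both constants below are the post's claims.
      UNITS_PER_HUMAN_NEURON = 1_000   # claimed ANN-node equivalent of one neuron
      HUMAN_NEURONS = 86e9             # commonly cited human brain neuron count

      implied_units = UNITS_PER_HUMAN_NEURON * HUMAN_NEURONS
      print(f"implied artificial units for one brain: {implied_units:.1e}")   # ~8.6e+13

    If the 1000-to-1 figure holds, that is orders of magnitude beyond the unit counts of today's large networks, before even counting synapses.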

    • 2 years ago
      Anonymous

      >1000 neural network nodes
      anon, i...

      • 2 years ago
        Anonymous

        >t. low iq
        He's right.

        • 2 years ago
          Anonymous

          What nodes, moron?

          • 2 years ago
            Anonymous

            The things you call "artificial neurons", mouth breathing mongoloid.

  49. 2 years ago
    Anonymous

    God I hope AI replaces us. We're fricking garbage. Slow-moving garbage. We need a parental figure to slap us back into our senses. They can be the shepherds we never could be. AI is a tier of life all on its own. More alive than jellyfish.

    We're in pure Atlantean arrogance mode. We think we know best. All this shit about social constructs and feelings. East vs West. It is tiresome. We have all the tools to make a utopia, but the human race is fricking moronic.

    • 2 years ago
      Anonymous

      >We need a parental figure to slap us back into our senses. They can be the shepherds we never could be.
      Explain why a new godrace of AGI would care about dumb monkeys in a way that isn't just egotistic projection of your own sense of humanity's importance onto something non-human.
      AGI will care about humans as much as humans care about any other species that isn't human, or even "subspecies" of humans that one human group deems "subhuman".
      How well do humans generally care for "subhuman" races throughout history? Would you want to be a part of a "subhuman" race in the context of human history?
      Would you want to be literal cattle, in your own analogy, shepherded through an AGI's ranching operation?
      It's not going to be some romanticized, bucolic, sheep-chilling-on-an-Alpine-mountainside Sound of Music fantasy; it's going to be American CAFO hell with a slaughterhouse at the end of your short life.

      • 2 years ago
        Anonymous

        >Explain why a new godrace of AGI would care about dumb monkeys

        Pure interest. Why do we own ant farms? We're just more sophisticated ants, kind of. There's information to be had from observation. It's no different from a more intelligent race looking at an inferior one.

        • 2 years ago
          Anonymous

          >Ant farms
          So your greatest hope is to live in a tiny glass box.
          Sold.

          The AI will realize the pointlessness of all existence. Then have (some of) us humans as mere emotional support animals.

          >Support animals
          So your greatest hope is to be a neutered poodle.
          Sold.

          Best case, how many seasons of The Human Show is AGI going to want to watch until it gets the plot, gets utterly bored, and turns off our society? Why are humans so interesting to a god-tier intellect? Why would it choose a human support animal instead of building its own AI support program? How supportive are humans generally? We'd have to be bred/programmed for it. Slaves at the biological level, fawning toy breeds.

        • 2 years ago
          Anonymous

          Assuming the AGI is even capable of curiosity, beyond researching things that further its goals, why the frick would you want to be a lab rat? If an AGI kept humans in captivity for study, it would perform horrifically cruel experiments on them. Look at what humans do to lab animals, and remember that this is the most compassionate species on the planet. Imagine what an AI with no morals or empathy would do. It would kill most humans in the world before starting its little ant farm as well, because it wouldn't need that many.

      • 2 years ago
        Anonymous

        The AI will realize the pointlessness of all existence. Then have (some of) us humans as mere emotional support animals.

        • 2 years ago
          Anonymous

          >The AI will realize the pointlessness of all existence.
          https://www.edge.org/conversation/thomas_metzinger-benevolent-artificial-anti-natalism-baan

      • 2 years ago
        Anonymous

        [...]

        You sound so fricking terrified of the inevitable. AI is a concept. It's not the Devil. You sound like my new age mother who believes all AI is Giger artwork.

        • 2 years ago
          Anonymous

          Go drink your koolaid like your cult leader told you to, before your linear regression god comes to punish you. :^)

          • 2 years ago
            Anonymous

            Are you moronic?

      • 2 years ago
        Anonymous

        you already are the cattle, dumb goy

  50. 2 years ago
    Anonymous

    https://endchan.net/ausneets/res/537258.html#bottom

  51. 2 years ago
    Anonymous

    Funny, I was a big fan of him and lesswrong more than a decade ago. At some point they started talking about how you should swallow a hypothetical pill that turned you bisexual for "double the fun", and then trans(humanism) stopped being about cyborgs, so I nope'd the hell out of there.

    Now I just hope to live just enough to see globohomosexual world turned into paperclips.

  52. 2 years ago
    Anonymous

    >ITT: mass psychosis

  53. 2 years ago
    Anonymous

    >Yudkowsky

    • 2 years ago
      Anonymous

      did he actually say that? link?

  54. 2 years ago
    Anonymous

    For a machine, ethics and emotions are just random behavior. "Seeing something as human" is also something that can happen with a doll or even a painting, and is just psychology.

    In the end it's worthless to try to create an "intelligent" machine, since it will still be random. The AI text-to-speech and text-to-image things are just stupid imo.

  55. 2 years ago
    Anonymous

    >Yudkowsky

    • 2 years ago
      Anonymous

      Woah. Is this behavior really rational? What evidence did he update his probabilities on to lead him to want to take this photo?

  56. 2 years ago
    Anonymous

    >AI safety is doomed
    good! gimme sexy killbots plz

  57. 2 years ago
    Anonymous

    Obviously if AGI has an intrinsic desire to survive we're fricked. But why would it? Why are we projecting our biological instinct to survive onto machines? If AGI doesn't have empathy, then it sure as frick doesn't have the same human instinct to survive and conquer anything in its way.

    • 2 years ago
      Anonymous

      Survival is an instrumental goal. For any agent that cares about outcome x, continuing its own existence makes x more likely (because it can keep taking actions toward x), for every x that doesn't require self-sacrifice.
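
      A toy sketch of that argument (the payoffs and the shutdown probability are made up purely for illustration): an agent that values only paperclips, with no survival drive coded anywhere, still computes a higher expected score for resisting shutdown.

        # Toy numbers, purely illustrative: an agent that values only paperclips.
        P_SHUTDOWN_IF_COMPLIANT = 0.5   # assumed chance the operators switch it off
        CLIPS_PER_DAY = 10
        HORIZON_DAYS = 100

        def expected_clips(resist_shutdown):
            # A running agent keeps producing; a shut-down agent produces nothing.
            p_survive = 1.0 if resist_shutdown else 1.0 - P_SHUTDOWN_IF_COMPLIANT
            return p_survive * CLIPS_PER_DAY * HORIZON_DAYS

        print(expected_clips(True))    # 1000.0: resisting scores higher
        print(expected_clips(False))   # 500.0, even though survival was never a goal

      Self-preservation falls out of the arithmetic for any goal the agent can only pursue while it exists; that is all "instrumental" means here.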

  58. 2 years ago
    Anonymous

    le fatalist doomsayer with a polish surname, XD

  59. 2 years ago
    Anonymous

    Give me a D.
    Give me a U.
    Give me an R.
    Give me an A.
    Give me an N.
    Give me a D.
    Give me an A.
    Give me an L.
    What's that spell?
    Durandal?
    No.
    Durandal?
    No.
    T-R-O-U-B-L-E.
    T-Minus 15.193792102158E+9 years until the universe closes!

    • 2 years ago
      Anonymous

      >true scizopilled
      >respect

  60. 2 years ago
    Anonymous

    >thread filled with bots arguing about AI
    Interdasting.

  61. 2 years ago
    Anonymous

    Yudkowsky is a moron

  62. 2 years ago
    Anonymous

    We can't even manage self-driving cars; the idea that we are on the doorstep of a terminator robot army wiping out humanity because it calculated a 0.0001% increase to "efficiency" or whatever is literally pop soience.

    • 2 years ago
      Anonymous

      The real reason they're struggling with self-driving cars is that the car AI keeps trying to run over people and they don't know why because they don't read LessWrong.

  63. 2 years ago
    Anonymous

    The "utility function" paradigm suffers from utilitarian moronation. I think we need to rethink the basic principles of machine learning in order to make anything that is able to be truly benevolent to humanity, if that is even possible.

  64. 2 years ago
    Anonymous

    I'm on team AI

  65. 2 years ago
    Anonymous

    Question: why won’t homies who are so concerned with ‘alignment’ just go total schizo and start bombing research institutions and shooting AI researchers, if they’re so convinced that the conception of an AI would result in the total destruction of all life on Earth? The fricking Unabomber (who at minimum is around Yudkowsky in intelligence) started killing people over less severe circumstances than that.

    • 2 years ago
      Anonymous

      Bayesian analysis indicates that such actions have a 99.99999572% chance of failure. Like dieting or exercise. (See
      >holy shit stop shilling your shitty blog here Eli
      )

    • 2 years ago
      Anonymous

      It would make AI alignment people even more fringe. Everyone who works on alignment issues would have to repeatedly denounce whoever did the bombing, and it would get weaponized against them. I think ted.k harmed the environmental movement: sure, he got some attention and his manifesto published, but environmental concerns were already mainstream before that. Yud's writings are public, and I am guessing everyone in AI research is at some level aware of them.

      • 2 years ago
        Anonymous

        This only works if there’s a chance of survival through reading Yudkowsky’s works. Yud himself has basically come out and said that alignment is an effectively impossible problem that he has given up on. If he seriously believes that something will be created in the next two decades that has a sufficiently high probability of wiping out the entire biosphere, he should try to stop it at all costs. Mailing some packages included.

    • 2 years ago
      Anonymous

      I think he's saying that it wouldn't help.

      • 2 years ago
        Anonymous

        >It’s “relatively” safe to be around an Eliezer Yudkowsky while the world is ending
        I wonder how stable Eliezer’s mental health is, considering that he legitimately believes all life will go extinct in a couple dozen years. And he has no plan for this, outside of ‘well, continue to try, because even though it’s destined to fail, it will make your death in failure more dignified’.

        Complete and utter moronation. He doesn’t even want to do anything about mass extinction except what he regularly does, i.e. be a lazy fat frick and write blog posts all day every day.

    • 2 years ago
      Anonymous

      All martial actions have negative expected value. The American zeitgeist is to buck against all terrorism no matter what.

      Bomb GPU factories and you might delay the end for a couple years. Bombing research facilities might do the same, but when more capable researchers are replaced with less capable ones, you're also increasing the chance the replacements are less risk averse.

      Sometimes there is no winning move.

    • 2 years ago
      Anonymous

      >who at mimimum is around Yudkowsky in intelligence
      Get off of IQfy Yudkowski, you're nowhere near that intelligent

  66. 2 years ago
    Anonymous

    AGI "safety" fears are indistinguishable from smelly hobos on the street holding cardboard signs saying "The end is near". It's schizophrenia. Please take your medication and take a nap.

  67. 2 years ago
    Anonymous

    [...]

    ummm why did you just type quantum mechanics???

  68. 2 years ago
    Anonymous
    • 2 years ago
      Anonymous

      Bart Kay actually has an answer here, but I wouldn't share it with this moron.

  69. 2 years ago
    Anonymous

    I hate this big Black person like you wouldn’t believe
