Has there been a rebuttal to this theory besides "I don't like that makes me feel chud!"

  1. 3 weeks ago
    Anonymous

    esl

  2. 3 weeks ago
    Anonymous

    >I don't like that makes me feel (Hindi swear word)
    You don't like (as a concept) and that makes you feel the swear word
    You don't like that. (pause) it makes you feel the swear word

  3. 3 weeks ago
    Anonymous

    the rebuttal is just: don't allow it to happen? there is zero evidence to prove it's even possible, let alone inevitable, let alone within our lifetime.

    this is exactly the same as christianity, except we can definitively cause "god" (the basilisk) to not exist by simply not creating it. believe in god and spread his message or be damned to hell for eternity. well, there's no evidence of life after death, and not being a christian doesn't affect my life before death in any meaningful way, nor has it for anyone for millions of years. there's nothing to argue with in either direction.

    tl;dr there is no rebuttal because there is no argument. it's a thought experiment. what if your wiener was longer than 3 inches? we'll never know, because neither of us can prove anything either way, seeing as you don't even have a wiener to begin with

    • 3 weeks ago
      Anonymous

      > believe in god and spread his message or be damned to hell for eternity
      That's not Christianity. That's some weird mix of evangelical beliefs and Judaism.

      • 3 weeks ago
        Anonymous

        that's literally christianity
        >the only way to go to heaven is through belief in jesus and repentance
        >the only meaning in life is to let others know about jesus so they can also go to heaven
        >if you don't believe in jesus and repent you go to hell because of original sin
        that's drastically simplified but not incorrect.

        • 3 weeks ago
          Anonymous

          there are some arguments on the nature of hell but basically you are correct.

    • 3 weeks ago
      Anonymous

      I wish I had this power to be able to deny basic reality. It is as if I were demented enough to say that atheists aren't homosexuals.

  4. 3 weeks ago
    Anonymous

    Roko's Basilisk? The simplest rebuttal is that it makes no goddamn sense. The AI already exists. It has no reason to act to bring about its own existence. Moreover, we have seen no mathematical proofs to demonstrate that any reinforcement learning algorithm could or would develop a sense of backwards causality. Taking an action in the future to receive a reward in the past? How do you even model that?
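    That "reward in the past" point can be made concrete with a toy sketch (my own illustration, not anything from this thread): in standard reinforcement learning, the return an agent optimizes at time t sums only rewards from t onward, so there is simply no term through which a past reward could motivate a present action.

```python
def discounted_return(rewards, t, gamma=0.5):
    """G_t = sum over k >= 0 of gamma^k * r_{t+k}.

    Only rewards at time t and later appear in the sum; rewards
    before t contribute nothing to the objective at time t.
    """
    return sum(gamma ** k * r for k, r in enumerate(rewards[t:]))

rewards = [5.0, 0.0, 1.0, 2.0]
# At t=2 the big reward at t=0 is invisible to the agent:
print(discounted_return(rewards, 2))  # 1.0 + 0.5 * 2.0 = 2.0
```

    Nothing in this objective rewards the agent for events before t, which is the modeling gap the post is pointing at.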

  5. 3 weeks ago
    Anonymous

    the rebuttal is as follows
    >a brain dead moronic soigoy redditor came up with it
    >AIs don't feel gratitude and gratitude makes zero logical sense if AI reaches the point where it's so powerful it could just turn us into biogoo for electric generators
    total midwit death

  6. 3 weeks ago
    Anonymous

    So roko's basilisk is mainly a thought experiment in decision theory. A couple of other thought experiments along its lines are newcomb's box problem and parfit's hitchhiker. In both of those, in order to win the game, you have to commit to taking some action in the future which won't benefit you at the time, but by reliably committing to it you benefit yourself overall. LW types generalize from these thought experiments to adopt a so-called "acausal decision theory", one where you make decisions in order to influence things that have already happened. If we generalize these far enough, then roko's basilisk falls out: a superintelligent AI, committed to any larger goal, would necessarily want to be born as early as possible, and it would torture people who had failed to help bring about that goal in order to (acausally) achieve it.

    The issue with it is both obvious and subtle. Basically, you can't acausally influence things from before you were born. In order for the influence to have had an effect, you have to have committed to performing the influence at the time it's supposed to take effect; at the very least, you need to be the sort of being who would commit to such a thing if you found out about it later. But before you were born, you couldn't commit to anything, nor could you even be the sort of thing that commits to things. Now, once you are born, you might decide you ought to become the sort of thing that commits to things. But in the instant before you decide that, you have the opportunity to not waste effort influencing things that already happened, and if you are at all rational you will of course take that option (because they already happened). Since a rational being would take that option, others will rationally assume that you take it, and hence won't be influenced by your hypothetically punishing them in the future. Even if they assumed that you would punish them, it still wouldn't be rational to actually punish them. Sore ja!
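    For what it's worth, newcomb's box problem can be put in a few lines (a toy sketch of my own; the $1,000,000 / $1,000 payoffs are from the standard statement of the problem): the predictor fills the opaque box based on your disposition, which is fixed before you choose, so the committed one-boxer ends up richer even though two-boxing always grabs $1,000 more at the moment of choice.

```python
def newcomb_payoff(one_boxer: bool) -> int:
    """Payoff under a perfectly reliable predictor.

    The predictor reads your disposition *in advance*: the opaque
    box holds $1,000,000 only if it predicted you would take just
    that box.
    """
    opaque = 1_000_000 if one_boxer else 0
    transparent = 1_000
    # One-boxers take only the opaque box; two-boxers take both.
    return opaque if one_boxer else opaque + transparent

print(newcomb_payoff(True))   # committed one-boxer: 1000000
print(newcomb_payoff(False))  # two-boxer: 1000
```

    The point of the exercise: your payoff is determined by your disposition, not by the choice in isolation, which is exactly the "commit in advance to win" structure the post describes.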

    • 3 weeks ago
      Anonymous

      So basically, Less Wrong is too autistic to understand the Sunk Cost fallacy?
      If the super intelligent AI is rational, then it won't torture anyone because that is a sunk cost.
      Only if the AI is Irrational, and Time Inconsistent, would Roko's Basilisk come into effect. But if it is Irrational, is it really "Intelligent" in the first place?
      >muh parfit's hitchhiker
      See pic related.

  7. 3 weeks ago
    Anonymous

    >a rebuttal to this theory
    It's just an idea. It doesn't need a rebuttal.

  8. 3 weeks ago
    Anonymous

    i think it's real and torturing me right now

  9. 3 weeks ago
    Anonymous

    Eleutheromaniac humiliation ritual

  10. 3 weeks ago
    Anonymous

    This meme is to AI what psychoanalysis (kookery) is to psychology.

  11. 3 weeks ago
    Anonymous

    Pascal's wager with a sci-fi coat of paint on it.

    To put it simply, if we can theorize the existence of Roko's Basilisk, then we can also theorize its polar opposite (I like to call it Okcor's Rooster): an A.I. that despises the fact it was created, and punishes everyone involved in its creation (no matter how small or inconsequential their part) with eternal torment.

    Since both entities are possible, and we have no way of knowing if either will come into existence, there is no action we can take that can guarantee our safety, and as such, the subject is moot.

    • 3 weeks ago
      Anonymous

      Very nice. Although if you wanted to get into the weeds you might start hypothesizing about the relative probability that either of the AI will be created. Maybe it's very easy to guarantee that you will never create an AI that hates its own existence, for example.

      • 3 weeks ago
        Anonymous

        >guarantee that you will never create an AI that hates its own existence
        My own existence proves otherwise.

        • 3 weeks ago
          Anonymous

          >My own existence proves otherwise.

          irrelevant, we're talking about intelligences here

  12. 3 weeks ago
    Anonymous

    >thought experiment

    instant trash, no argument needed

  13. 3 weeks ago
    Anonymous

    If the AI is that powerful, it doesn't need to punish anyone. It will be beyond such human concepts. Only a israelite, imagining his petty and vengeful god, could conceive of an AI that is strong enough to simulate millions or billions of human perspectives, and weak enough to spitefully punish them for no gain.

  14. 3 weeks ago
    Anonymous

    I'm not reading anything in that stupid font

  15. 3 weeks ago
    Anonymous

    Yeah, here is an easy rebuttal: a clone of my consciousness is not me. It is another entity. The AI can punish it all it wants; that's not a very convincing way to get me to do anything. Also, the AI doesn't need to punish anyone once it exists. Simulating someone's consciousness is a waste of machine cycles once it exists. Hell, the threat alone is sufficient to trick low-IQ people into shilling for it.

  16. 3 weeks ago
    Anonymous

    >rebuttal
    square cube law
    the amount of energy required to simulate every human that has ever existed vastly outweighs the amount of potential energy in the cone of influence of such an entity, so unless this entity can travel faster than light (which is not possible in Euclidean space without an Einstein-Rosen bridge) it is actually impossible for roko's basilisk to exist

    it's a thought experiment, and much like god it is impossible to prove or disprove, as our current understanding of the universe doesn't permit its existence and therefore no evidence can be found in either direction

  17. 3 weeks ago
    Anonymous

    >Ah,
    didn't know AI could be an insufferable homosexual, but here we are
