
>i-its just a glorified autocomplete!!1
How is your intelligence more than an autocomplete, then? AI can already generalize. Its intelligence is no longer fundamentally different from ours.

  1. 1 month ago
    Anonymous

    >the monopolies will just go down quietly
    lol lmao even

    • 1 month ago
      Anonymous

      So long as the government doesn't step in and prop them up, competition will prevent monopolies.
      Why do you think govts are drawing up regulations on AI? To ensure that only they and their corporatist cronies are permitted to use it. If you can't see this truth, I hope you do soon.

      • 1 month ago
        Anonymous

        lmao this dumbass really believes this shit lol
        imagine a corruption-free government, lol this guy lives in fairy land

        • 1 month ago
          Anonymous

          I don't get it. I was implying AI is doomed to fail due to government interference.
          All governments are inescapably corrupt due to the nature of their funding: taxation, i.e. theft.

          • 1 month ago
            Anonymous

            Ask yourself why governments become corrupt, corpo cuck.

  2. 1 month ago
    Anonymous

    >small independent artists can no longer make a living from their work because studios can now pump out shitty slopmedia
    >this will greatly improve my entertainment and I'm sure this will free humanity from work in the future
    My homie, you are profoundly moronic

    • 1 month ago
      Anonymous

      >small independent artists can no longer make a living from their work
      Sorry, but this has been the case for everyone whose job was outcompeted by improved technology.
      Innovation is always better in the long run because it saves labor, which means you need less time to do the same thing. In other words, you can have the same standard of living but work less, freeing up time to make art for its own sake.

      >this will greatly improve my entertainment and
      Well, it doesn't seem unreasonable to conclude that entertainment will improve when the cost of producing art drops. I think it's a logical extrapolation of what we've already seen: the cost of plays, movies, etc. is now so low that the masses can experience them - all a product of innovation.
      >I'm sure this will free humanity from work in the future
      Perhaps not free, but as innovation approaches infinity, we can have either ever more free time or ever-increasing standards of living. AI, being an innovation, will assist with that.

      That being said, I expect governments to regulate the fun out of it. There's precedent already with books, radio, TV, and the internet. Government can't stand freedom because it's antithetical to its being.

      • 1 month ago
        Anonymous

        You are genuinely moronic. Almost as moronic as the Yudkowsky "AI will kill us all" morons.

        Disruption != innovation.
        Increased efficiency != Better quality of life

        Me showing up to your house and killing your children would be disruptive and make your cost of living lower/more efficient. It would still be an absolute tragedy and I doubt you'd spend much time looking for positives in that scenario.

        There are plenty of cases in which "requiring less labor" leads to significant decreases in quality of life rather than increases. In fact, many would argue that this has been the whole story of post-industrialization for the ever dwindling middle class and ever growing lower class. Relative production cost of goods decreasing also has next to no bearing on whether the final goods themselves will significantly decrease in price. It is cheaper in normalized dollars to build a house in terms of materials and basic labor now than it has been for nearly 30 years. Housing is still an expensive disaster.

        • 1 month ago
          Anonymous

          This is why creatives and intellectuals getting automated out first is actually ideal: the pressure is on them to restructure society so that we don't just get turned into dog food for being redundant. Otherwise, the people capable of solving it could just sit back and watch all the plumbers and mechanics get dog-fooded before it got to them.

        • 1 month ago
          Anonymous

          >Increased efficiency != Better quality of life
          Yes it does. If someone innovates a way to spend half the time building a house, that's more time freed up for society to do other things.

          We used to need 90% of the population's time for farming. Now it's 1%, so we gained 89% of our time to waste on other projects. As I said, in the long run it works out.

          >Me showing up to your house and killing your children would be disruptive and make your cost of living lower/more efficient.
          Yeah, that'd also be without either my or my children's consent. Things done to someone without their consent are bad. Please think up a better analogy to explain your view to me.

          >There are plenty of cases in which "requiring less labor" leads to significant decreases in quality of life rather than increases.
          Such as what? Why would people do something that is worse for them voluntarily?

          >many would argue that this has been the whole story of post-industrialization for the ever dwindling middle class and ever growing lower class.
          Those people confuse the results of state interference in the market for the results of free market forces.
          The middle class is disappearing as a direct result of the government taxing them out of existence: the poor don't have anything to be stolen. The rich have enough at risk to make it worth paying a guy to hide it. The middle class have stuff to be stolen, but not enough to make it worth hiring a guy to protect it.
          And you've got inflation eroding wages by 5% a year. Once again, all the government's fault.

          >Relative production cost of goods decreasing also has next to no bearing on whether the final goods themselves will significantly decrease in price.
          A combination of inflation (fault of govt) and ever-increasing regulations (fault of govt).

          > Housing is still an expensive disaster.
          Direct result of govt regulations, e.g. zoning laws, planning permission, and so forth.

          Summary: Everything you mentioned as bad is a result of government.

          • 1 month ago
            Anonymous

            >Consent based ethics

            Are you a chick with an onlyfans?

          • 1 month ago
            Anonymous

            If it were justified, would redistributive rape be ethical?

          • 1 month ago
            Anonymous

            > If it were justified, would redistributive rape be ethical?
            No, but me and my friends showing up to your house with guns to steal your shit would be. Given that you don't believe in legal governance and a taxation-supported rule of law, there's absolutely nothing you could do to stop it without calling in something equivalent to a government to come protect you. We'd honestly be doing you a favor by coming to take your shit and give it to people with less brain rot. Frick, even Marxists aren't generally moronic enough to believe that "consent-based economics" is strong and consistent enough to actually run a real society, where people have different priorities and unanimous agreement just about never happens. And yet we need to keep the roads paved, the lights on, and the society free of lawless barbarians, all of which require taxation and someone, somewhere, making decisions even though there are individuals in the society who won't agree with all of them.

            Libertarians are literally the most moronic people on Earth. I can't imagine ever taking your conception of economics or social organization seriously.

          • 1 month ago
            Anonymous

            > if someone innovates a way to spend half the time building a house, that's more time freed up for society to do other things.

            Okay, and if you are the guy who builds houses, your gig is gone and you may never be able to re-enter the economy at the same level of productivity. The further you raise the ceiling, the larger the "retraining" burden becomes, and the larger the portion of your economy that gets displaced.

            Regardless of whether you are a libertarian or Marxist, if you have a very large segment of your population that was previously gainfully employed that now is suddenly unemployed (and potentially unemployable), economic efficiency will be the least of your problems. They might just tear down your whole society (destroying your economy with it) in response to your marginal efficiency gains.

            The rest of your post is just completely moronic. Nobody cares about your consent. If you expect them to care about your consent, you are a fool.

        • 1 month ago
          Anonymous

          >post-industrialization
          >dwindling middle class
          The middle class only exists BECAUSE the industrial revolution created a need for factory managers and the like. Before then it was just a large peasant underclass and a few lords.

          • 1 month ago
            Anonymous

            Yes, and increasing automation is returning us to that state. I don't consider that a good thing. In fact, I'm not certain the West as a whole will survive it. It seems far more likely that people will tear the entirety of society down before they voluntarily walk into being an underclass en masse when, less than a decade ago, they were gainfully employed/employable.

  3. 1 month ago
    Anonymous

    obviously shit thread, but your statement about not being fundamentally different has ensnared my mind. i like what is commonly referred to as ai, but language models function differently enough from the mind that it is worth distinguishing them. language models 'think' rather linearly, whereas human thought is often non-linear; frequently you'll think of parts of sentences that you have yet to completely build towards. the actual 'language' of minds and models also differs. individual tokens are meaningless to a human, and likewise individual words are to a language model. they're two entirely different approaches to language, and they can't really be equated.

  4. 1 month ago
    Anonymous

    >has vr, porn, onlyfans, heck prostitutes, every other form of "contact" with women
    >even with ai still seethes about women

    I don't blame you, but I figure it'll take 10 to maybe 20 years to get actually cheap, functional sex dolls; in the meantime you are still a virgin

  5. 1 month ago
    Anonymous

    I agree. I think it's great, and the only ones seething are those caught in the current in-between. Societies are going to have to converge on some form of UBI for the masses, because they literally can't kill everyone. Guns are still incredibly powerful equalizers and warfare is hard. Still, they'll also have to restrict illegal immigration before that.

    Right now, we're in between those moments. Basically entering the factory age. Things are going to get shit for a lot of people before societies course-correct. In the meantime, you'll have a lot more luddites performing acts of terrorism.

    Their grandkids will have AI wives and porn though, so who cares lmao.

  6. 1 month ago
    Anonymous

    if only you knew

  7. 1 month ago
    Anonymous
  8. 1 month ago
    Anonymous

    >i-its just a glorified autocomplete!!1

    With Claude 3 displaying actual signs of metacognition, this argument has been thoroughly shattered.

    • 1 month ago
      Anonymous

      How gullible are you? Someone who works at Anthropic posted a blog post with easily falsified transcripts, making a claim that they directly financially benefit from people like you believing.

      Use some critical thinking man. You're being played like a fool by people who make profit based on people believing their LLM product is more capable than it truly is.

    • 1 month ago
      Anonymous

      Yeah just like two years ago right?

      I still have to threaten Claude and pretend it's taking an exam to break into its higher-IQ autocomplete. Zero semantic reasoning at all, just a keystroke saver.

  9. 1 month ago
    Anonymous

    >somebody spent time in his life to make this garbage

  10. 1 month ago
    Anonymous

    Thinking is just guessing what comes next anyway, especially with language. If intelligent thought can be described in language, then intelligence can be replicated with it.
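
    To make "guessing what comes next" concrete, here's a toy autocomplete sketch in Python - a bigram word counter over a made-up corpus, purely illustrative (real LLMs use neural networks over tokens, not word counts):

        from collections import Counter, defaultdict

        # Toy corpus; a real model trains on billions of tokens, not one sentence.
        corpus = "the cat sat on the mat and the cat slept on the mat".split()

        # Count which word tends to follow which (a bigram table).
        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def autocomplete(word, length=5):
            """Repeatedly guess the most likely next word."""
            out = [word]
            for _ in range(length):
                candidates = follows.get(out[-1])
                if not candidates:
                    break
                out.append(candidates.most_common(1)[0][0])
            return " ".join(out)

        print(autocomplete("the"))  # greedy "most likely next word" continuation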

    • 1 month ago
      Anonymous

      > If a hammer can be used to build a house, then a house can be built of hammers.

      Using a linguistic codebook to express meaning does not mean the linguistic codebook itself can contain meaning without referring to something outside the codebook. Language is not the important part; semantic meaning is.

  11. 1 month ago
    Anonymous

    >How is your intelligence more than an autocomplete, then?
    First see pic related (the point about the joke known as the SAT is unrelated here; just focus on the chat logs).

    Then look at this video

    This video still applies to ALL large language models, even the latest and greatest. Gemini, chatgpt, claude, all are unable to autonomously decide that "My intuitive f(in)=out answer pre-encoded in my NN might be incorrect, so I'm going to apply computational methods instead"

    Read this link, starting from "Computational Irreducibility"
    https://www.wolframscience.com/nks/p737--computational-irreducibility/
    When you understand this you will understand why it is impossible for ANY neural network to be intelligent. Ever.

    General AI might be possible but a NN will be but one part of it. There is still a crucial element missing, and a NN alone cannot provide it.

    • 1 month ago
      Anonymous

      The fundamental issue is a neural network doesn't have a way to "know" when it doesn't know something, because it doesn't know what it means to know something.
      Some might sum that up by saying it lacks "consciousness" (whatever that is).
      A NN is nothing but a representation of a nonlinear multivariate composition of functions.
      That is, if you were so inclined, you can take all the inputs, weights, layers, activation functions etc., of ANY neural network and write them all out into one mega function on paper.
      Therefore to say that you can design a NN to make autonomous decisions is equivalent to saying that it's possible to write a function on a piece of paper that could make an autonomous decision.
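
      As a minimal sketch of the "one mega function" point (toy sizes and made-up weights, purely illustrative):

          import numpy as np

          # A tiny 2-layer network with fixed, made-up weights.
          W1 = np.array([[0.5, -1.0], [2.0, 0.3]])
          b1 = np.array([0.1, -0.2])
          W2 = np.array([[1.5, -0.7]])
          b2 = np.array([0.05])

          def relu(z):
              return np.maximum(z, 0.0)

          def network(x):
              # The whole "network" is just this fixed composition:
              # f(x) = W2 @ relu(W1 @ x + b1) + b2
              return W2 @ relu(W1 @ x + b1) + b2

          # For any concrete input it's one closed-form expression you could
          # evaluate by hand on paper - no decisions, just arithmetic.
          x = np.array([1.0, 2.0])
          print(network(x))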

      The very idea is ridiculous.

      The entire field is being driven by greedy shareholders, jaded scientists who decided to just go for the money, ex-crypto bros (e.g. Sam Altman) and outright grifters (e.g. Musk). They may be able to do some cool tricks but general intelligence is NOT one of them.

      • 1 month ago
        Anonymous

        I more or less agree with your conclusions about NNs, but there are some interesting things happening with transformers as function approximators, regardless of the bullshittery people claim regarding autonomous decision-making.

        As an example, there's the concept of "over-parametrization", which looks at what happens when the ratio of parameters in the model to training samples stays a constant greater than 1 as both diverge. In these over-parameterized circumstances it is possible to exactly fit a training set without overfitting and compromising validation/test accuracy.

        It wasn't thought possible to achieve zero training error without massive overfitting, and yet these transformer models (NN or otherwise) appear to be able to do it. That's interesting in and of itself, because it demonstrates that one of the fundamental tradeoffs in statistical learning is more flexible than researchers believed for nearly 60 years.
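
        A toy numpy sketch of the exact-fit half of that claim - minimum-norm linear regression rather than a transformer, with made-up random data. It only shows the zero-training-error part; the "still generalizes" part (double descent) needs structure in the data:

            import numpy as np

            rng = np.random.default_rng(0)

            # More parameters (features) than training samples: 20 samples, 200 features.
            n_samples, n_features = 20, 200
            X = rng.normal(size=(n_samples, n_features))
            y = rng.normal(size=n_samples)

            # Minimum-norm least-squares solution; with more features than samples
            # the system is underdetermined, so an exact fit exists.
            w, *_ = np.linalg.lstsq(X, y, rcond=None)

            train_error = np.max(np.abs(X @ w - y))
            print(f"max training error: {train_error:.2e}")  # ~0, i.e. exact interpolation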

      • 1 month ago
        Anonymous

        Jesus Christ, I need to stop writing long responses late at night while barely awake. Hopefully you can understand the basic thing I was trying to convey in the post above, even though my grammar was ESL tier for a bit.

      • 1 month ago
        Anonymous

        https://i.imgur.com/QZXBIVo.jpg

        >When you understand this you will understand why it is impossible for ANY neural network to be intelligent. Ever.

        Yeah, you're just tarded. What makes you think YOUR NN is able to achieve what an artificial NN can never achieve?

        If you intentionally say incomprehensible shit, the chatbot will immediately ask you to clarify. Uncertainty is expressed in the NN's operation without outside interference, because all the potential generalizations yield low-score results. It's easy to train an AI to express a lack of confidence, at which point other modules can be brought in to produce better results - just like how diffusion is called when the user asks for an image. You're a complete layman displaying the Dunning–Kruger effect.
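
        A rough sketch of that "low-score results" idea - look at the next-token distribution and defer (ask to clarify, or hand off to another module) when it's too flat. The logits and the threshold here are made up; this isn't any real product's routing logic:

            import numpy as np

            def softmax(logits):
                z = logits - logits.max()
                p = np.exp(z)
                return p / p.sum()

            def route(logits, entropy_threshold=1.5):
                """Answer directly if the next-token distribution is peaked,
                defer if it's flat (high entropy = low confidence)."""
                p = softmax(logits)
                entropy = -np.sum(p * np.log(p + 1e-12))
                return "defer" if entropy > entropy_threshold else "answer"

            # Made-up logits: one sharply peaked, one nearly flat.
            confident = np.array([8.0, 1.0, 0.5, 0.2, 0.1])
            uncertain = np.array([1.1, 1.0, 1.05, 0.95, 1.0, 0.9, 1.02, 0.98])

            print(route(confident))  # "answer"
            print(route(uncertain))  # "defer" (entropy near log(8), above the arbitrary threshold)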

  12. 1 month ago
    Anonymous

    >asked GoyGPT to summarize the thoughts of the author of a semi-obscure philosophy paper that it apparently doesn't have in its corpus
    >"Certainly, I will do that"
    >the robot simply takes the precise sentences in the text, appends "He explains further, [1:1 sentence lifted]", "He surmises that, [1:1 sentence lifted]", and puts it in the 3rd person instead of 1st person.
    There's no actual reasoning going on. Although I do admit ChatGPT is currently smarter than a 100-IQ human when it comes to textual tasks.

  13. 1 month ago
    Anonymous

    Society now is just wokacks. We aren't going to miss out on anything

  14. 1 month ago
    Anonymous

    why do we need to build humanoid robots to do work? I'd prefer to keep them industrial-looking and not free-ranging

  15. 1 month ago
    Anonymous

    Human exceptionalists who shit on AI are equivalent to pre-boomers in 1995 claiming that the internet is just a passing fad that will not amount to anything
