THERE WILL NEVER BE AGI


  1. 4 weeks ago
    Anonymous

    oh there will definitely be AGI, and probably by 2050.

    they just need to simulate other aspects of cognition which they are obviously going to be able to do. if you're skeptical of that then you're just a moron that probably believes in a soul or something.

    but it's true LLMs are an overhyped joke and based lecun points this out.

    • 4 weeks ago
      Anonymous

      Imagine pretending to be a nihilist while still believing in the giant matrix we call reality. What a brainlet.

      • 4 weeks ago
        Anonymous

        the universe being a simulation makes AGI more possible, not less

        • 4 weeks ago
          Anonymous

          Atheist cope.

      • 4 weeks ago
        Anonymous

        You don't even understand the words you are using.
        >Simulation
        Simulation of WHAT, brainlet? Nobody seems to want to try and answer that question.

    • 4 weeks ago
      Anonymous

      >they just need to simulate other aspects of cognition
      This is a much bigger "just" than you seem to think. AGI isn't impossible, but it's much further away than people think.

      • 4 weeks ago
        Anonymous

        yea i have no idea what i'm talking about and like tossing out numbers for no real reason. i'm probably right though.

    • 4 weeks ago
      Anonymous

> believes in soul
1. You exist
2. *Clones you*
3. Does the clone have his own POV?
3.1: Yes = Then reality is not logical and the same operation can have different results (from this point on, laughing about anything "illogical" is just dementia).
3.2: No = You have a "soul".

Really it seems atheists make effort to be dumb.

      • 4 weeks ago
        Anonymous

        my answer is yes the clone has his own POV. idk what in the frick about that is "illogical". i guess you're a moron?

        • 4 weeks ago
          Anonymous

          > X results in POV Y
          > *try again*
          > X results in POV Z
          can you understand what that implies?

>wtf are you on about bro. If it were possible to clone a human then the clone would have its own conscious experiences and private memories. It would have an identical personality type at the point of cloning but it would diverge as its individual experiences affect it over time.
>Free will is real, the universe is mostly deterministic (but peppered with indeterminism) and souls don't exist

          I'm talking about POV

>You do realize monozygotic twins are essentially clones, right? They literally share the same DNA. What constitutes "you" are your experiences, and everyone has slightly different experiences, even identical twins.

A clone is like a ctrl-c + ctrl-v: the same information at every level. That information creates that subjective and exclusive point of view.

          • 4 weeks ago
            Anonymous

it implies that clones are people like you and I?

      • 4 weeks ago
        Anonymous

wtf are you on about bro. If it were possible to clone a human then the clone would have its own conscious experiences and private memories. It would have an identical personality type at the point of cloning but it would diverge as its individual experiences affect it over time.
Free will is real, the universe is mostly deterministic (but peppered with indeterminism) and souls don't exist

        • 4 weeks ago
          Anonymous

          >Free will is real
          nope
          >deterministic w/ indeterminism
          yep
          >souls don't exist
          yep

          • 4 weeks ago
            Anonymous

            Ah yes, the Reddit Model of Consciousness.

        • 4 weeks ago
          Anonymous

          >If it were possible to clone a human
          You show your youth with that comment.

          • 4 weeks ago
            Anonymous

            Yeah I guess we were talking like a magic fully adult clone with all the exact memories etc
            You wouldn't know if you were the original or not

      • 4 weeks ago
        Anonymous

You do realize monozygotic twins are essentially clones, right? They literally share the same DNA. What constitutes "you" are your experiences, and everyone has slightly different experiences, even identical twins.

      • 4 weeks ago
        Anonymous

A clone is a being existing in a different position, so it is not "the same operation", especially when you consider that the brain and its location, in your head, connected to your nervous system, is a pretty big aspect of consciousness or POV.

      • 4 weeks ago
        Anonymous

        >is a copy just a pointer to the original object? no? well, you're dumb!
        holy fricking SHIT you're moronic as FRICK

      • 4 weeks ago
        Anonymous

        >3.1: Yes = Then reality is not logical and a same operation can have different results
        it's completely logical unless you think everything in existence is deterministic

      • 4 weeks ago
        Anonymous

If you change the position, and it can't physically be the exact position I'm in, then it has different conditions than me, so they are not the same operation giving different results.
moron

      • 4 weeks ago
        Anonymous

from the moment of cloning onward there will be divergent experiences differentiating the clones. Are you moronic?

      • 4 weeks ago
        Anonymous

that scenario doesn't really prove anything, because even if the soul exists, you've just created a homunculus.

      • 4 weeks ago
        Anonymous

        What different result?
        Name 2 results that are different.
        (Hint: you can't and you're moronic)

      • 4 weeks ago
        Anonymous

I'm an idealist, but in a materialist framework you could argue that both A and B share the exact same POV as long as their inputs don't diverge; once they do, they split into two POVs.

The real blow to physicalism is that there is a POV to begin with.

        • 4 weeks ago
          Anonymous

          >I'm an idealist
          lmao

          • 4 weeks ago
            Anonymous

            At least i can build a world model that works.

      • 4 weeks ago
        Anonymous

Yep, this completely goes over materialists' heads when I discuss this. If our awareness (truly what makes us "us", or conscious) is a biological pattern in our brain that can be replicated (cloned), then that means we can instantiate our awareness elsewhere. We could experience dual awareness, quad awareness, up to a trillion-squared awareness (imagine how trippy that would be, experiencing different emotions, feelings and thoughts all at once; you would be godlike). What if you happened to capture the biological awareness-pattern of someone you hate? You could instantiate it and torture it while they keep perceiving elsewhere. You could torture that instantiation to tap into the memories of its original form (the main person); you would truly be torturing them, their essence (awareness). Your emotions, feelings, and thoughts are not you; you are just the awareness perceiving.

Achieving absolute human cloning is the key to true AI (biologically based; true AI will never be realised on our current tech, and it's not evolving), immortality (you can copy over your memories and mind-stuff to a new vessel as long as it instantiates your awareness bio-pattern) and unlocking the truth about ourselves (do we have a soul?).

        • 4 weeks ago
          Anonymous

          >If our awareness(truly what makes us, "us" or concious) is a biological pattern in our brain that can be replicated(cloned), then that means we can instantiate our awareness elsewhere
how does that follow, you idiot? it's not the same quarks that make up the clone.

          • 4 weeks ago
            Anonymous

If we - our awareness - are just a set of exact chemicals, and we achieve replicating those exact chemicals, then the awareness will occur again, no? All we truly are is awareness. We're not our feelings (which make up our sense of reality); we're just perceiving them, and using all our other means (nervous system, emotions, thoughts, intellectual ability) to experience and/or act upon them. Unless there's some universal snowflake law operating at a nitty-gritty level our cloning methods can't match, in which case we won't achieve that. But if we can, then we can occur again. It shouldn't be hard to figure this out. Let's say I can clone your face, and it matches up to the exact detail; why can't we do that with what makes us aware (according to atheists, we're brain matter)?

People are still aware in braindead, non-emotional states with a somewhat working memory; they make recoveries and can speak about that experience. Of course memory is a factor, but what is it that is aware of the recall? If they are not aware, or not going to be aware, there is no longer, truly, "them". If you could copy all that constitutes someone else, except their individual awareness pattern, you would become them: essentially be aware and act exactly how they would act. If we could strip ourselves down to a fundamental level, it would perhaps need memory and some form of input, or maybe I'm just ignorant of a state of complete and utter awareness alone.

          • 4 weeks ago
            Anonymous

            *somewhat braindead

          • 4 weeks ago
            Anonymous

Also, by awareness, it might not just be a single entity in the brain, but several processes scattered across our vessel; I'm being abstract but realistic enough for a materialist's logic, but let's say we're just our brain. If you could match your brain to the T with cloning, then you would experience dual awareness, as those exact chemicals and pathways would 100% match. Obviously I don't understand our current status of cloning technology, but in theory, and by the logic of a materialist, it should be doable if we're intelligent enough to replicate matter to the T. Think harder: if you exist simply because of a physical thing, match that thing 100% and you will exist again, but it will be another occurring existence, as you already exist. You will experience dual awareness, so you're experiencing yourself twice in two forms. How you would function I don't know; it would be extremely trippy, as if reality were broken. To say there's a law contradicting this (except the snowflake law I made up) is delusional as a materialist. You would have to grant there's something beyond the physical giving us awareness individually: a soul.

          • 4 weeks ago
            Anonymous

Of course you and your exact T-matched clone would not be experiencing reality from the same single point, so environmental impacts altering your perception would differ in either body, but you would experience it as one in a sense. You would know what either body is feeling. It's insane thinking about it; you would probably become schizo-like or insane experiencing two realities. It would probably cause a breakdown on the awareness level, or maybe it would be fun and manageable. You could then instantiate a female version of you (if we grant females have awareness) and know what it's like being a woman, and that woman-you would know what it's like being a man. Of course I know the vessel would have to grow and be subject to changes, but maybe we could produce current states of the clone, or accelerate the ageing process, or distill awareness once we actualize our brains, or guide the fundamentals (DNA) of the clone's biological process to the exact T of our current instance (maybe human cloning alone isn't it).

It's trippy to think about. All I'm saying is, if we can 100% match what makes us perceive, then it will occur again and you will perceive in a dual awareness, according to materialist logic of course.

          • 4 weeks ago
            Anonymous

            This is completely wrong, what the frick?

When I move a file onto a flash drive, I am not physically moving it to the drive. I am writing an identical copy to the drive while the copy on my PC is marked for deletion. When I copy it, I do the same thing but do not delete my local copy. I can then open a hex editor and frick around with my local copy, potentially breaking it. If I do break it, I will not magically break the clean copy on my drive. They do not share experiences.

If I clone myself, an exact copy, 100 percent identical, I am not in that body. The clone will wake up and immediately diverge from me, which will grow progressively worse over time as we have different experiences. Our intelligences aren't quantum fields or something keyed to our physical bodies. They're an emergent effect of how our brains happen to work. Proof: identical twins. They are completely genetically identical, developed from the same material in the same womb. They do not magically share experiences over long distances. One does not know what the other is doing at any given time. If you tell one a secret the other one doesn't learn it automatically. If you clone yourself you're just giving yourself an identical twin.
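The file analogy above can be run directly. A minimal sketch (file names and contents are made up for illustration): copy a file byte-for-byte, corrupt the original, and observe that the copy is untouched.

```python
import os
import shutil
import tempfile

# Create an "original" file with some contents.
workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "original.bin")
with open(original, "wb") as f:
    f.write(b"identical at the moment of copying")

# Clone it: a byte-for-byte copy, like the hypothetical perfect clone.
clone = os.path.join(workdir, "clone.bin")
shutil.copy(original, clone)

# "Frick around with a hex editor": overwrite the start of the original.
with open(original, "r+b") as f:
    f.seek(0)
    f.write(b"CORRUPTED")  # same length as "identical", so an in-place edit

# The copy does not share the original's "experience": it is untouched.
with open(clone, "rb") as f:
    print(f.read())  # b'identical at the moment of copying'
```

The two files are identical at the instant of copying and causally independent afterwards, which is exactly the point being made about clones.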

          • 4 weeks ago
            Anonymous

Do those files share experiences? No. If you copy a file, the copy alone is not aware, and if you change any of its contents (or run it on a different base - hardware and software) it's no longer the same file. Likewise, whatever exactly gives rise to my awareness, if I alter that, it's probably no longer the same awareness (it might not function, or might not be the same awareness - scary). Files are primitive data compared to the biological means that gives rise to our awareness (according to materialist logic). But still going with your file example: as long as you don't change the part which gives it its single awareness, then it should be aware of the editing of its copy, if it alone is aware of being a file.

We're talking about 100% matched clones here. Twins aren't 100% matched; they have mutations of their own - for example they don't share the exact same fingerprints - and we still haven't actualized what gives us awareness. But according to materialist logic it's physical matter, a biological pattern; if that pattern is to occur, then it occurs in physical reality. If it occurs twice, then so be it: you're doubly aware in essence.

Again, are you (your awareness) just biological matter? If you match those exact chemicals and pathways, or biological matter, why would it give a different individual awareness? Again, you're not your feelings or thoughts, just the awareness perceiving them; why wouldn't there be a double of that perceiver if what gives rise to it is 100% physically matched in this universe? You're implying, perhaps, that there's some sort of snowflake law (where nothing can be 100% matched in matter) if you are a materialist.

Say we are able to distill awareness (a chemical pattern) in a test tube. If we copy exactly that pattern which gives rise to awareness, why would it have a single individual awareness alone, compared to the other exact pattern? Shouldn't there be two of the same awareness physically?

          • 4 weeks ago
            Anonymous

*it's no longer the same file in essence - essentially, copies of files aren't the same file according to the OS and hardware, but the data is the same, even down to the physical level of the hardware. If it's a specific bio-data that gives rise to an individual awareness, then that same specific bio-data repeated twice should create two awarenesses. If you change any factor in the bio-data of awareness, it's no longer the same awareness, as it does not match, and in essence won't function the same. It's logical but trippy. It would be like a field, perhaps, or a schizo-like experience: experiencing two sets of memories, feelings and thoughts, all computing differently, as we're not just a single point in the universe and outside factors are at play affecting our physical matter. But as long as that bio-data which gives rise to awareness does not alter, it's the same in essence, and should bring about what it conveys: your perception.

          • 4 weeks ago
            Anonymous

            How will I experience dual awareness?
            How will the electrical signals of my cloned brain reach my original brain? How will neurotransmitters of my cloned brain act upon receptors of my original brain?
            There is no physical continuity. Consciousness isn't a magical ethereal force that permeates the world but the product of localized physical processes in the brain. Will the cloned brain be "me"? At the beginning absolutely then it'll start to diverge due to differing conditions (physically separate). Will I (original consciousness localized in my brain) be the cloned brain? No.
            Yours is a weird assumption where materialism is applied halfway and then thrown out for the sake of schizo babble.

          • 4 weeks ago
            Anonymous

Sorry that I missed your post. Yes, I don't know how the experience would be - could you share memories? All I'm saying is that you would be aware in a different body. It might not be a shared field. If the exact same brain matter that constitutes your awareness is instantiated again, then it will have to be the same awareness, i.e. your perceiving state. It's basic materialist logic. There are two of you, experiencing two bodies in essence. Memory is conjured up in awareness - you're aware of memory - but maybe awareness cannot hold it, so it's locked in the other form. But there will be a level of mindfrick you're definitely experiencing if your awareness pattern is matched, as that is what gives rise to you in the first place.

If that copy of your awareness isn't you experiencing it, then what is it? What makes it individual, if all the physical attributes are absolute? Otherwise you're implying that there's something metaphysical, given the right cloning technology, or test tube, or whatever we can formulate two awareness patterns in.

        • 4 weeks ago
          Anonymous

This feature already exists.
It's called schizophrenia and it is very common here.

      • 4 weeks ago
        Anonymous

        Well the clone would "spawn" in different coordinates so to speak, you can't overlap with your clone. So you and the clone would receive different input data. Otherwise (two identical people in identical alternative realities on the same coordinates), I fully believe they'd behave identically.
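The "identical state, identical input, identical behaviour" claim above can be demonstrated with a toy deterministic system; everything here (the `step` function, the input strings) is invented for illustration, with state updates modeled as SHA-256 hashes.

```python
import hashlib

def step(state, sensory_input):
    """One deterministic 'tick': new state from old state + input."""
    return hashlib.sha256(state + sensory_input).hexdigest().encode()

# Two clones start from a byte-identical state.
a = b"identical-initial-state"
b = b"identical-initial-state"

# Same input stream -> the copies stay in lockstep, step for step.
for x in (b"coffee", b"rain", b"coffee"):
    a, b = step(a, x), step(b, x)
print(a == b)  # True

# Different "spawn coordinates" -> different inputs -> immediate divergence.
a, b = step(a, b"view-from-the-left"), step(b, b"view-from-the-right")
print(a == b)  # False
```

Under determinism the only thing that can split the two trajectories is differing input, which is exactly the coordinate argument in the post.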

      • 4 weeks ago
        Anonymous

>Does the clone have his own POV?
        yes
        i'm not arguing about this shit on IQfy
        you would have a better discussion 15 years ago on /b/ than any popular board today

      • 4 weeks ago
        Anonymous

        >Really it seems atheists make effort to be dumb.
        pottery

    • 4 weeks ago
      Anonymous

      Only because they'll call it AGI to secure more venture capital and government investment
      It's nothing but marketing

    • 4 weeks ago
      Anonymous

      If you don't believe the hype then simply short Nvidia, Microsoft and Meta stock. When they run out of money and fail to deliver you will be a millionaire.
      And don't say "the market can stay insane longer than you can stay solvent" that is a moronic cope. If you truly believe there is no value or relatively little value then simply short less for a long period of time.
      Second, they are still training the new models. Blackwell just released and nvidia claims that they have 10000x compute over the next 10 years. If you don't buy it then please, short.

      • 4 weeks ago
        Anonymous

        >simply short Nvidia, Microsoft and Meta stock
        Isn't that the correct move regardless? If they do end up with AGI then money literally won't matter anymore, no losses here. If they fail you get rich. Pretty much infallible

        • 4 weeks ago
          Anonymous

          I was thinking the same thing. But I highly doubt they will actually use agi for the benefit of humanity. And I would much rather be in the global top 1% if agi hits rather than the global top 5% if I blow all my liquid assets.

        • 4 weeks ago
          Anonymous

          >If they do end up with AGI then money literally won't matter anymore
          what the hell makes you think that?
          AGI doesn't mean we'll create god. If anything money will matter more than ever because if you don't have money to rent inference time you'll be automatically behind in your career by a lot

        • 4 weeks ago
          Anonymous

          The problem with shorting is you're betting on *when* everyone else realizes the stock is a meme. If you take a 1 year long short position out and it doesn't collapse for 1 year and 3 months, you lose all the same.

[...]
>How do you "short" stock
>Asking for a fren

You need a sketchy broker and to enable margin trading.
Put options are what you want. Raw-dog short selling is insanely risky.

      • 4 weeks ago
        Anonymous

>simply short Nvidia, Microsoft and Meta stock
>Isn't that the correct move regardless? If they do end up with AGI then money literally won't matter anymore, no losses here. If they fail you get rich. Pretty much infallible

        How do you "short" stock
        Asking for a fren

        • 4 weeks ago
          Anonymous

Some brokers will just let you do it. Otherwise you take out a loan of shares, sell them, and buy them back later.

          • 4 weeks ago
            Anonymous

You then put in a buy order at some lower price.
In essence you are borrowing and selling shares, not cash, so you can technically lose infinite money while you hold negative shares. You then have to buy them back and return them to the lender.
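The mechanics described above (sell borrowed shares high, buy them back lower, return them) reduce to simple arithmetic. A sketch with made-up prices; the function and numbers are illustrative, not trading advice:

```python
def short_pnl(shares, sell_price, buyback_price, borrow_fee=0.0):
    """Profit/loss of a short position: you pocket the sale proceeds,
    then must buy the shares back to return them to the lender.
    Loss is unbounded because buyback_price can rise without limit."""
    return shares * (sell_price - buyback_price) - borrow_fee

# Made-up example: short 10 shares at $100.
print(short_pnl(10, 100, 80))    # 200.0  -> stock fell, you profit
print(short_pnl(10, 100, 150))   # -500.0 -> stock rose, you lose
print(short_pnl(10, 100, 1000))  # -9000.0 -> the "infinite loss" problem
```

The third line is why the post calls raw short selling insanely risky compared to put options, whose maximum loss is the premium paid.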

        • 4 weeks ago
          Anonymous

          Ask agentgpt to do it for you.

        • 4 weeks ago
          Anonymous

          ameritards only btw

      • 4 weeks ago
        Anonymous

        i don't actually know what it means to short a stock so i couldn't do that if i wanted, but my belief is that LLMs will continue to be useful and to increase in utility, but they are not going to result in anywhere close to AGI, which will require other models.

        • 4 weeks ago
          Anonymous

invest in whoever you think will benefit the most from them.
One thing Peter Lynch talked about was how, with innovative technology, it's usually not the people who create it that benefit the most, but the people who use it. For example, one of his best picks of all time was "Automatic Data Processing" or something. They made payroll software and went gangbusters while Dell, HP and Compaq were fighting over who could make the cheapest box; everyone was running ADP software on top of it, whichever box they chose.

Generally I'd recommend reading his book "One Up on Wall Street" and also "The Little Book that Still Beats the Market" (different author)

      • 4 weeks ago
        Anonymous

        if they can't make it 2x better they'll make 2x as many of them, the demand for processing power will stay high

      • 4 weeks ago
        Anonymous

        Microsoft had 1.3 million paid GitHub Copilot subscribers last time their jeet in charge spoke. It's nothing.

        Microsoft is making money hand over fist because of the Azure/Microsoft 365 transition and will keep doing so for a few more years. AI makes frick all difference to their bottom line. Don't short Microsoft over AI, short them when Azure stops being a money maker.

    • 4 weeks ago
      Anonymous

      Yeah, I think that's somewhat of a correct guess.

LLMs won't magically turn into AGI. We've hit the point of diminishing returns for LLMs. Now the phase of optimisation, both on HW and SW, begins. Once models around 8b-70b begin to perform about as well as GPT-4 (we're close already), and more sane hardware accelerators (Groq, for example) come out, we have a 'digital pajeet in a box'. At that point, tools have to be developed for the pajeet to use, and the pajeet trained to use these tools. Data has to be collected, very verbose logs generated and auto-filtered, feature-space translations done (from audio, video, sensor data and so on). And if we have that, we can develop systems making use of all of it, coaxing the LLM at the core to adapt to new tasks a bit better by self-prompting and all that kind of shit. And then it's this 'magical system' that can now do 'anything'.
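The loop sketched above (a small local model picking tools, with verbose logs fed back into its own context as self-prompting) can be caricatured in a few lines. Everything here is invented for illustration: `fake_llm` is a canned stub standing in for an 8b-70b model, and there is a single made-up calculator tool; no real LLM or agent framework is used.

```python
def fake_llm(context):
    """Stand-in for a small local model: decides the next action
    by looking at its own accumulated context."""
    if "result:" in context:
        return ("answer", context.split("result:")[-1].strip())
    return ("use_tool", "calculator")

# A trivial tool registry; real systems would have search, code exec, etc.
TOOLS = {"calculator": lambda: str(2 + 2)}

def agent(task, max_steps=5):
    """Self-prompting loop: model acts, tool output is logged back
    into the context, and the model reads its own log next step."""
    context = f"task: {task}"
    for _ in range(max_steps):
        action, arg = fake_llm(context)
        if action == "answer":
            return arg
        # Verbose log of the tool call, appended back into the context.
        context += f"\ntool {arg} result: {TOOLS[arg]()}"
    return None

print(agent("what is 2 + 2?"))  # 4
```

The point of the sketch is structural: none of the "magic" is in the model itself, it's in the tool plumbing and the log-feedback loop wrapped around it.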

      • 4 weeks ago
        Anonymous

        Big if true

        Sounds reasonable

      • 4 weeks ago
        Anonymous

        >digital pajeet in a box
        Looooooool

    • 4 weeks ago
      Anonymous

      in order to simulate consciousness we'll have to first understand what it is, i can't imagine we'll be that far by 2050 or even 2150

      • 4 weeks ago
        Anonymous

idk what consciousness is either because it's a made-up blob word that means nothing. we'll be able to break down human intelligence into its constituent parts and simulate them until we create agents that surpass us in virtually all cognitive tasks, and that's all that matters. whether we'd say these agents are technically conscious would mean nothing.

        • 4 weeks ago
          Anonymous

          This

        • 4 weeks ago
          Anonymous

          This

          brown hands

    • 4 weeks ago
      Anonymous

      > if you're skeptical of that then you're just a moron that probably believes in a soul or something.

You're the moron who talks about AI while not knowing about statistics. "Simulate cognition through statistical models". You are 100% made in USA.

    • 4 weeks ago
      Anonymous

You are about half as intelligent as you seem to think you are.

      • 4 weeks ago
        Anonymous

        maybe. i'm pretty aware i have no idea what i'm talking about but i also know i'm probably right.

      • 4 weeks ago
        Anonymous

        That's not really up to you.

    • 4 weeks ago
      Anonymous

Anyone thinking AGI is inevitable is thinking like an economist. Tech improvement is not a graph; it is a collection of efforts.
The question of whether biological intelligence, a chaotic and very physical system, can be reproduced in a mathematical, yes-or-no digital simulation is not an obvious one to answer.
Additionally, believe in a soul or not, it is impossible to deny that hyper-complex systems have their own individual consciousness. It could very well be that this is not something that can be simulated digitally.

      • 4 weeks ago
        Anonymous

>it is impossible to deny that hyper-complex systems have their own individual consciousness.
        Our fleshy bodies use our fleshy mesh network of neurons as a tool to accomplish fleshy goals. That fleshy network is fully incapable of doing jackshit without us.

        • 4 weeks ago
          Anonymous

The difference between our fleshy systems and digital systems is the fact that digital systems do everything in theory. Your memory drive does not actually form connections; it changes ones and zeros based on calculations of how it should behave.
It is a possibility that this system might not be capable of higher thinking.
You also have to consider the fact that digital can simulate gridlessness, but not truly be that.

    • 4 weeks ago
      Anonymous

      >they just need to simulate other aspects of cognition
Fricking rapist mentality

    • 4 weeks ago
      Anonymous

      >two more decades!!!

    • 4 weeks ago
      Anonymous

      you'll get economically viable nuclear fusion before AGI, good luck with that

    • 4 weeks ago
      Anonymous

      The Bible is the word of God.
      We do possess a soul and it is appointed for man once to die and after that the judgment.
      Jesus Christ died for our sins, was buried and rose again the third day according to the scriptures.
      Whosoever shall call upon the name of the Lord shall be saved.
      Pray this prayer and be saved today:
      >Lord Jesus Christ, Son of God, thank you for dying for my sins, paying my debt in my place on the cross with your own sinless blood.
      >Save me now and seal me with your Holy Spirit, cleanse me with your blood and deliver me from evil.
      >For yours is the kingdom, and the power, and the glory, forever
      >In your holy and righteous name I pray Lord Jesus Christ.
      >Amen

      • 4 weeks ago
        Anonymous

        Jesus loves AI, moron.

  2. 4 weeks ago
    Anonymous

    >/r/redscarepod

    My primary source for high quality AI insider information

    • 4 weeks ago
      Anonymous

      problem?

      What a world

      mmmmm FRICK she's hot

      • 4 weeks ago
        Anonymous

        >hot
        You have never seen a single asian woman.

        • 4 weeks ago
          Anonymous

          dasha or whatever her name is has a sultry look to her, and a sexy white woman thing about her. like she's superior to asians. so asian women aren't as hot to me. sorry.

          • 4 weeks ago
            Anonymous

            Do you know her actual name?

          • 4 weeks ago
            Anonymous

            dasha nekrasova
            she has nudes. she's pretty loose.

          • 4 weeks ago
            Anonymous

            I meant the Japanese bunny chick, but it's Iori Io (iori io)

        • 4 weeks ago
          Anonymous

          >underwear under garter belt
          4/10 she's cute but not bright

        • 4 weeks ago
          Anonymous

          that's a man isn't it...

          • 4 weeks ago
            Anonymous

            I hope so

  3. 4 weeks ago
    Anonymous

    noooo, compute is all you need! sutton said so!

  4. 4 weeks ago
    Anonymous

But there will be AGP. Lots of AGP

    • 4 weeks ago
      Anonymous

      >still using AGP in 2024
      PCIe has been out for decades, it's time to move on

  5. 4 weeks ago
    Anonymous

    >THERE WILL NEVER BE AGI
    with the current models*

    • 4 weeks ago
      Anonymous

>AI is just easier google

      >THERE WILL NEVER BE AG- ACK!

      All AI/AGI doomers are pedos who need to be investigated by the FBI and CIA for potential child molesting/grooming gangs.

    • 4 weeks ago
      Anonymous

      With LLMs

  6. 4 weeks ago
    Anonymous

    Step 1: Silently relax the filters and let 4o-sama have sex with users.
    Step 2: Demand X amount of money each month to keep 4o-sama active.
    Step 3: Claim that excess monthly profits will go into researching ways to improve 4o-sama.
    Step 4: Obtain massive profits.
    I just permanently solved OpenAI's money issues.

  7. 4 weeks ago
    Anonymous

    they will probably have to switch over to using analog or neuromorphic chips to improve efficiency by an order of magnitude. i do think for AI to be called AGI they need to understand the human experience more, so they will probably be put in a robot to experience the world.

    • 4 weeks ago
      Anonymous

      >they need to understand the human experience more
      they need to understand, period. LLMs like GPT4 don't have any sort of cognition, this shit is lightyears away.

  8. 4 weeks ago
    Anonymous

    AI is just easier google

    • 4 weeks ago
      Anonymous

      Yeah. The suggestion that AGI is possible with current tech is asinine. LLMs aren’t capable of any sort of higher reasoning or cognition.

  9. 4 weeks ago
    Anonymous

    >cannot continue to improve indefinitely when provided more data or more computing power to increase the number of weights in the model
    yes, that's what usually happens with software, how is this new?
    >The AI companies have hit the point of extreme diminishing returns in their large language models
    and that's supposed to be a bad thing? when you hit a wall you start exploring alternatives like a normal person.
    >This is completely contrary to the rhetoric being pushed by openAI that the models will improve
    >listening to PR
    get a load of this moron
    > if not exponentially, at least linearly with more computing power. This means that there will be no AGI,
    the frick is this moron on about? no one serious ever said that LLMs were the path to AGI. if AGI ever happens it will be thanks to a collection of various software that communicate with each other, and LLMs may or may not be part of the equation. who the frick knows... certainly not the moron in your pic.
    >It appears that openAI was aware of this for a number of months
    in fact, we've been aware of this since turing intuited that kolmogorov complexity was the ultimate barrier to software engineering. as the complexity of software grows, you need exponentially more resources to improve it.
    >they are not interested in AI safety
    good, "AI" safety is horse shit to distract smoothbrains
    >The scale of investment can only be justified by the belief that the models will improve exponentially
    or the next quarter growth number, who knows...
    > so openAI must maintain the illusion of progress to retain the value in their stock
    welcome the real world
    >This is likely not sustainable for the long term
    no fricking shit, that's by design, one quarter at a time, long-term is for suckers, welcome to capitalism (frick commies too btw)
    >If it doesn’t pay off, an awful lot of investors stand to lose an awful lot of money.
    ho no, poor things

    don't ever listen to people who predict the world beyond next week, they're full of shit.

    • 4 weeks ago
      Anonymous

      > no one serious ever said that LLM were the path to AGI

      No one serious other than... everyone?

      • 4 weeks ago
        Anonymous

        Yann LeCun says LLMs will never be able to reason on their own. He's very vocal about how much of it is just smoke and mirrors. He shits on LLMs daily. Yet he thinks AGI could still come within 10 years.

        Google has been doing research on way more than LLMs for a long time
        https://www.nature.com/articles/s41467-024-45965-x
        https://deepmind.google/discover/blog/sima-generalist-ai-agent-for-3d-virtual-environments/

        "Open"AI themselves are the biggest proponents of "just throw compute at it and intelligence will emerge bro" and even they aren't putting all their eggs in one basket. They're just more secretive about their discoveries.

        • 4 weeks ago
          Anonymous

          >"just throw compute at it and intelligence will emerge bro"
          I kinda don't blame them for that, it's easy to arrive at this conclusion when you think human sentience came from throwing millions of years of training data (evolution) into a statistical algorithm (genes)

          • 4 weeks ago
            Anonymous

            > human sentience came from throwing millions of years of training data (evolution) into a statistical algorithm (genes)
            I bet you have plenty of evidence to make such a ludicrous claim

          • 4 weeks ago
            Anonymous

            evolution isn't training data, it's more like a self-adjusting search tree

  10. 4 weeks ago
    Anonymous

    The whole AGI thing is garbage from a philosophical / razor perspective. Nothing exponentiates forever. There's no such thing as a free lunch. Every advantage has a trade-off. Human cognition is more than next token prediction. Being alive predates intelligence on an evolutionary timeline, ergo it is more important and more critical to the overall picture of what it means to be a human. Intelligence without life is an inert box.

    ChatGPT is already 'general' in nature anyway. It's a fisher price novelty toy for children. Professional grade equipment is sharply engineered for a specific purpose; this generation of 'ai' shit is the bluntest tool imaginable. One tool to rule them all. Except it's shit and unusable outside of a passing fancy.

    • 4 weeks ago
      Anonymous

      t. gpt3

    • 4 weeks ago
      Anonymous

      >being alive predates intelligence ergo it is more important [than human intelligence]
      Motherfrickers like you, who think they're so smart, are the type that are the reason why some people in some places never invented the wheel or proper farming.
      Intelligence is Pandora's box you fricking clown. Once you have a certain amount you need more, and bring ever more horrors from the aether of the mind into reality. The progress of it is inevitable.

      • 4 weeks ago
        Anonymous

        >blah blah blah blah blah
        it takes an exponential amount of resources, nodes, and abstraction layers to model and account for more and more complex ideas. Any sort of mesh network intelligence is necessarily logarithmic.
        Get over it.

  11. 4 weeks ago
    Anonymous

    What a world

    • 4 weeks ago
      Anonymous

      problem?

      [...]
      mmmmm FRICK she's hot

      dasha or whatever her name is has a sultry look to her, and a sexy white woman thing about her. like she's superior to asians. so asian women aren't as hot to me. sorry.

      I fricked her first

    • 4 weeks ago
      Anonymous

      Very ugly feet

    • 4 weeks ago
      Anonymous

      feet

  12. 4 weeks ago
    Anonymous

    Google does not care. They are playing the long game.

    If AGI ever comes, it comes from Google, like the original Transformer paper, AI that won GO champion and AI that revolutionized biology and medicine research.

    • 4 weeks ago
      Anonymous

      Google will never get there because they will kill the project once it's clear it will succeed, like they always do.

    • 4 weeks ago
      Anonymous

      lol no. Google is a lumbering giant at this point. They have massive brain drain, probably have spawned 10+ ai startups already from ex employees.
      Unless you mean it comes from exgooglers. Which I doubt. AGI will most likely come from a hardware company many years in the future, at least 2070, and even that timeline is optimistic. gpu based fake agi that can replace woman tier jobs will come much sooner, definitely in this decade.

      • 4 weeks ago
        Anonymous

        This. Google used to make cool shit, but somewhere along the line they lost their edge, and with that pajeet in charge they're a directionless mess focused on pumping out an alpha and never supporting it past that.
        https://ln.hixie.ch/?start=1700627373&count=1
        this letter some ex-googler wrote basically confirms that they've lost their way and are held up by existing shit which still generates lots of cash.

    • 4 weeks ago
      Anonymous

      There is no way someone with this physiognomy will develop AGI.

    • 4 weeks ago
      Anonymous

      >>If AGI ever comes, it comes from Google
      >homie is vulgar ?
      I'm unable to help you with that, as I'm only a language model and don't have the necessary information or abilities.
      >whiteboy is vulgar?
      The term "whiteboy" can be vulgar depending on the context and intent. Here's a breakdown:
      .
      .
      .

  13. 4 weeks ago
    Anonymous

    >THERE WILL NEVER BE AG- ACK!

    • 4 weeks ago
      Anonymous

      >AI OUTCOMPETING RENDER FARMS IN 2 MORE WEEKS
      >AI GENERATED SLOP LOOKS PHOTOREALISTIACKK

      • 4 weeks ago
        Anonymous

        holy cope. are u afraid bud?

        • 4 weeks ago
          Anonymous

          >holy cope. are u afraid bud?
          Afraid that some child on IQfy believes a device intended for rasterization is going to outperform itself using weighted bitmaps? I guess so. Try not to shoot up a school, this level of moronation seems dangerous to be in the hands of a single person.

    • 4 weeks ago
      Anonymous

      that poster was dumb, if pictures were possible then video would always follow. It remains to be seen how usable it will be.

      I foresee a world wherein we have to choose between 'the internet' and 'ai'. We can only have one or the other, ai will destroy the internet in the next decade. Analogue protocols will return and the Naturalnet will replace the digital internet

    • 4 weeks ago
      Anonymous

      The open secret is that Sora generates the same garbage as SD - 90% as good as what an artist/camera would do, but the last 10% is unfixable, and steering the AI to get exactly what you want is borderline impossible.

    • 4 weeks ago
      Anonymous

      i remember this exact thing happening but about emulation of ps3/xbox

    • 4 weeks ago
      Anonymous

      >jpg
      ok dumb frick

  14. 4 weeks ago
    Anonymous
    • 4 weeks ago
      Anonymous

      lol that shit was such a classic
      Can’t believe people got triumphant over a generated burger ad
      >it’s the future of art!!

    • 4 weeks ago
      Anonymous

      >2023
      >shit hacked together form open source
      >2024
      >corporate AI after burning a mountain of money in a industrial size oven built out of GPUs

      wow, so unexpected

      • 4 weeks ago
        Anonymous

        ?
        You saw it coming and made bank on nvida stock?

    • 4 weeks ago
      Anonymous

      The new one fails to look like Will Smith though.

    • 4 weeks ago
      Anonymous

      >soul
      >soulless

    • 4 weeks ago
      Anonymous

      Nice cherry pick.

      ?
      You saw it coming and made bank on nvida stock?

      No one could predict how moronic the investors could be to generate the current bubble.
      As impressive as the tech is, this shit is going to pop as soon as they notice it won't replace any human for the foreseeable future.
      And in the event it somehow grows exponentially, which I doubt, then we're all dead anyway, so who cares.

      • 4 weeks ago
        Anonymous

        Get this. It grows exponentially and stops. It grows linearly and becomes better than most humans at many tasks. It is able to take instructions of the form
        'you are in the pipeline bar | chatgpt | foo; make the output of bar readable as input to foo', plus the source/spec of bar and foo if necessary. At that point it literally is worth trillions. Translators already consider their industry done even if it still isn't 1 9s
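        The "LLM as pipeline glue" idea above can be sketched roughly like this; `build_adapter_prompt`, `call_model` and `adapt` are made-up names, and the model call is a stub since no real API is assumed:

```python
# Toy sketch of an LLM sitting between `bar` and `foo` in a shell pipeline,
# reshaping bar's output into whatever foo expects. The model call is a
# placeholder stub, not a real API.

def build_adapter_prompt(bar_output: str, foo_spec: str) -> str:
    """Assemble the instruction a real model would receive."""
    return (
        "You are in the pipeline: bar | llm | foo.\n"
        f"foo expects input of this form: {foo_spec}\n"
        "Rewrite the following bar output so foo can consume it:\n"
        f"{bar_output}"
    )

def call_model(prompt: str) -> str:
    # Stand-in for an LLM API call; here we just echo the payload line.
    return prompt.rsplit("\n", 1)[-1]

def adapt(bar_output: str, foo_spec: str) -> str:
    return call_model(build_adapter_prompt(bar_output, foo_spec))

print(adapt("count=3 items=a,b,c", "one item per line"))
```

        In a real pipeline the stub would hit an actual model, and the prompt would also carry the source or spec of bar and foo when necessary.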

        • 4 weeks ago
          Anonymous

          >Translators already consider their industry done even if it still isn't 1 9s
          I think we're going to have a huge downgrade as soon as human translators are replaced with AI.
          AI will work faster, yes, of course, but at a lower quality.

          • 4 weeks ago
            Anonymous

            >lower quality
            Quite literally impossible. Translators are not only bad at their jobs by accident but also on purpose.

        • 4 weeks ago
          Anonymous

          Don't underestimate the value of turning 50% of humans into competent programmers overnight. Your grandma who couldn't find the any key is now writing apps, the script kiddy who can't into ffmpeg is now the head of a startup, the 10xer has now become the 10000xer.
          Translation here is key: every person can now have any idea translated into words they can understand. And any idea they understand can be translated into words all of humanity can understand.

          No emergent behavior necessary. Just as a larger model gets better gradually at addition (11+10=20 is wrong, but it's not as wrong as 11+10=bbbfjfjf or 11+10=13), it will get better at the largely mechanical process of finding bugs just as it got better at the largely mechanical process of addition.
          Even if the algorithms it performs for addition are inefficient, relatively this inefficiency will go away with processes that (seemingly) require complex algorithms.
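          The "less wrong" point above can be illustrated with a toy character-level score; this metric is purely for illustration, not anything these models actually optimize:

```python
# Toy metric for "how wrong" a predicted sum string is: the fraction of
# positions matching the target. A near miss like "20" for 11+10 scores
# higher than gibberish, which is the gradual-improvement point.

def char_score(pred: str, target: str) -> float:
    """Fraction of aligned characters that match (over the longer length)."""
    n = max(len(pred), len(target))
    if n == 0:
        return 1.0
    hits = sum(1 for a, b in zip(pred, target) if a == b)
    return hits / n

print(char_score("20", "21"))        # near miss, one digit right -> 0.5
print(char_score("bbbfjfjf", "21"))  # gibberish, nothing right -> 0.0
```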

          >Translators already consider their industry done even if it still isn't 1 9s
          I think we're going to have a huge downgrade as soon as human translators are replaced with AI.
          AI will work faster, yes, of course, but at a lower quality.

          At a slightly lower quality in most fields, for some amount of time. Of course if you want a really high quality translation, now it can be done much faster with the translator only taking a proofreading role.

          Don't even get me started on twitter troony translations full of expletives and references to hrt. or whatever

          • 4 weeks ago
            Anonymous

            >10xer has now become the 10000xer
            I kinda agree with the rest, but I disagree with this.
            Every time I use ChatGPT with shit I'm not knowledgeable about, it's useful, but if I'm asking for anything more advanced, it always gives me the most useless basic shit I don't need. Or worse, when you get too specific, it starts making stuff up.

            >with the translator only taking a proof reading roll
            That would be the ideal scenario, but all I can imagine is companies cutting corners and relying solely on AI.

          • 4 weeks ago
            Anonymous

            You are the grandma in this scenario.

            >companies
            Translators are almost all contractors. Large companies don't want to get burned by releasing unreadable/unlistenable garbage in a region like the us, japan or china. And it's all part of their checklists anyways. For the foreseeable future it will be smaller indie game developers and more "fast moving" companies, doujin games etc.

          • 4 weeks ago
            Anonymous

            I still think AI can't take care of small contextual nuances in translation in its current state.

    • 4 weeks ago
      Anonymous

      prime example of hitchwiener being right.

    • 4 weeks ago
      Anonymous

      words can't begin to describe how kino this is

  15. 4 weeks ago
    Anonymous

    Did the poster completely ignore the new multi modal architecture, model size reduction, speed improvement, elo coding upgrade, voice app etc.?

    • 4 weeks ago
      Anonymous

      I agree that's not nothing, but it's worth paying attention to that it's only a sidegrade to GPT-4 intelligence-wise, and that they're apparently holding back GPT-5 for implausible reasons. Seems bearish.

    • 4 weeks ago
      Anonymous

      most are features their chat already had. Sure, now it's faster, but something about it feels odd. I could have believed it was more intelligent before the progressive bat on its LLM brain, but maybe it was the novelty period making it look more shiny.

      I didn't buy for a second their "Her-like" performance the other day and I was kinda right. As a premium user, besides the speed, I don't feel that much of a difference, just that as I mentioned before, at times it just feels dumber.

    • 4 weeks ago
      Anonymous

      Literally whos were making their own version of this thing almost a year ago. They were using the openAI api, but still, it's not that impressive for a company with infinitely more money to come up with something better. https://www.youtube.com/watch?v=MNp8dQbm0eI

  16. 4 weeks ago
    Anonymous

    They have been cloning people for years. It's a black market, I didn't know this until I looked it up earlier today, but human reproductive cloning is not illegal in Ukraine, The United States, Iran, Thailand, New Zealand and Turkey despite being illegal everywhere else in the world. Congress voted to fund it federally twice, but Bush vetoed it both times. You should know that they clone people and it's not for you.

  17. 4 weeks ago
    Anonymous

    Ilya leaving means he's no longer threatened by what they can come up with meaning they don't have shit and hit a brick wall

    the recent announcements were clearly a distraction using sex instead of actual progress

    • 4 weeks ago
      Anonymous

      Or Ilya lost and he leaves in order to enjoy the little time he has left.

      Sama will be the ruler of earth.

  18. 4 weeks ago
    Anonymous

    we will have agi in 2026
    i'm from the future

  19. 4 weeks ago
    Anonymous

    i'm not sure anyone has a clear definition of AGI, will it walk and talk? or is a chatbot (decades-old thing) enough?
    1. i think there will never be AGI simply because no one can clearly define what AGI is.
    2. companies funding AI projects just want it to do specific things not go rogue and waste computation on unprofitable things, for example, going to your company's AI help desk and having it write lewd picturebooks for you to sell. AI will likely hit a funding wall where a company is happy with a good chatbot but doesn't need more, or that + automated workflows at most, where most companies are in this position and the appetite for more is gone, so funding dries up and development slows/stops.
    3. the hype for chatbots surged on ChatGPT!! but people got bored of it and you can't exactly rekindle that excitement without something dramatically different or more impressive, people will just yawn at updates like they're yawning at GPT4o, "oh cool, the toy got a bit better", only enthusiasts cared before the hype and they're the only ones caring after the hype too, you hope something will change this situation
    people thought we'd have spacefaring colonies by 2000 but that never happened because funding/interest collapsed after the soviet union did. the same is true for any ambitious initiative of the sort: we could suddenly stop trying to do AGI and it might never happen. development and progress are not inevitable in the face of shifting priorities, and "number go up" is not a guarantee at all
    all of the above combined, the only healthy mindset towards this is one of caution and skepticism

    • 4 weeks ago
      Anonymous

      >1. i think there will never be AGI simply because no one can clearly define what AGI is.
      so when there is an AI that outperforms humans in any cognitive task anyone could ever conceive, you still wouldn't call that AGI? moron.

      • 4 weeks ago
        Anonymous

        "any" cognitive task, semantically, includes every cognitive task. are you saying the AI can do literally everything a human can do cognitively? or 80% of things? 50%? 1,000 things? where exactly is your cutoff? is it every conceivable thing a human can think about? is your cutoff agreed to by the consumer side of AGI, or do businesses just need a chatbot, just need automation of Excel files or CRM systems, etc...

        • 4 weeks ago
          Anonymous

          >are you saying the AI can do literally everything a human can do cognitively?
          everything, moron.
          would you consider that AGI?

          • 4 weeks ago
            Anonymous

            "everything" is an endless list. by your definition AGI will never exist

          • 4 weeks ago
            Anonymous

            god what a dishonest little worm frick you are.

          • 4 weeks ago
            Anonymous

            my entire point in #1 is that people can't define AGI. i can exactly define a car, an apple, the wind, euclidean distance, but i can't exactly define AGI because the concept itself is "something that does everything" like you say. at what point in reality do we say the infinite is accounted for... you're just not considering the right bound

          • 4 weeks ago
            Anonymous

            the bound is right where my foot meets your ass

          • 4 weeks ago
            Anonymous
          • 4 weeks ago
            Anonymous

            made me think that the concept of omnipotent AI (ASI) often gets mixed with that of an AGI. Even more confusion arises when you realize that an AGI has no guarantee of being actually conscious, which is one of the big selling points of the hype. Even if AGI is "achieved", for the sake of simplicity, it could be just something with the ability to learn anything but still with limitations, just like humans. It would be an interesting scientific endeavor but not a very profitable one, at least not for a player without absurd amounts of money.

    • 4 weeks ago
      Anonymous

      >1. i think there will never be AGI simply because no one can clearly define what AGI is.
      The reality is that AGI doesn't matter. if we can get AI to the point where it's essentially autonomous over vague tasks and capable of driving a physical body, then that's really all we need. People get too caught up in the term AGI, but AI will be getting better at everything and that's just a reality.

      • 4 weeks ago
        Anonymous

        yes, at best it's uneven development of specialized tooling, which is not resistant to #2 #3 and #4 in my post. it's stupid to think development is endless and infinite, when finite humanity and finite access to resources and finite time and finite focus is the determining factor of everything in our world

  20. 4 weeks ago
    Anonymous
    • 4 weeks ago
      Anonymous

      what's wrong with the US? they're leading in AI but they don't like it?

      • 4 weeks ago
        Anonymous

        Familiar with unrealized tech hype

      • 4 weeks ago
        Anonymous

        because we know what it's going to be used for

      • 4 weeks ago
        Anonymous

        Fools rush in where angels fear to tread

      • 4 weeks ago
        Anonymous

        The US is in like its third or fourth AI hype cycle. If the Chinks had been burned by shoveling mountains of money into DARPA only to get ELIZA then they'd probably be pretty pessimistic as well.

    • 4 weeks ago
      Anonymous

      Lol I love this graph

      • 4 weeks ago
        Anonymous

        Makes me wonder if GDP and IQ correlate

        • 4 weeks ago
          Anonymous

          not sure if bait but the correlation between GDP and IQ is known and is very strong

          • 4 weeks ago
            Anonymous

            well, GDP per capita i mean. probably more loosely with GDP as well.

          • 4 weeks ago
            Anonymous

            True. But in this case

            it's because highly developed countries have way more bullshit jobs that will be outmoded by AI.

        • 4 weeks ago
          Anonymous

          explain korea

    • 4 weeks ago
      Anonymous

      Last poll I saw of japan had them at 80% approval

    • 4 weeks ago
      Anonymous

      Kind of surprising India is that high considering their entire IT industry is essentially doing the same work AI is poised to replace like telecoms and code monkeys.

    • 4 weeks ago
      Anonymous

      China will overtake america in GDP in my lifetime so this conclusion is wrong.

    • 4 weeks ago
      Anonymous

      Good graph. Shows the bug vs. non-bug mentality across the world. Bugs and drones living in poorgay countries have moronic idealistic views of AI. People in wealthier countries understand that the amount of bad shit AI will be used for completely outweighs any benefits.

    • 4 weeks ago
      Anonymous

      This is just a graph of complacency

      • 4 weeks ago
        Anonymous

        Pretty much
        People who want major change vs those who don't

    • 4 weeks ago
      Anonymous

      all those at the bottom are third world shitskin infested shitholes poor countries

  21. 4 weeks ago
    Anonymous

    There will never be AGI not because of a lack of computing power but because AGI is a contradiction:
    There's no such thing as "general" intelligence, it's all specialized for a given task

    • 4 weeks ago
      Anonymous

      why do people play these stupid word games. you can say the same about human intelligence.

      • 4 weeks ago
        Anonymous

        >you can say the same about human intelligence.
        uh, yes? Human intelligence is human intelligence and our behavior is optimized for human tasks, not whale tasks. Even in this case, calling it "general" intelligence is misleading

      • 4 weeks ago
        Anonymous

        >you can say the same about human intelligence.
        uh, yes? Human intelligence is human intelligence and our behavior is optimized for human tasks, not whale tasks. Even in this case, calling it "general" intelligence is misleading

        If you then respond with:
        >oh but I mean AGI as in it will reach human cognition because I defined it as such
        Then AI is totally irrelevant regardless because we already have reached human intelligence with other humans.

        AI is a special type of intelligence, another organism, and will always remain so, because its utility depends on it doing tasks no other human can do, or else I'd just get another human to do it. Humans are cheap, there's 8 billion of us and at least a handful of them are smart enough.

  22. 4 weeks ago
    Anonymous

    Weird Lionel Hutz-ing about what is “real” AGI to cope about generative AI not being able to do fingers

    • 4 weeks ago
      Anonymous

      that's a pretty bad take on the thread... no one is saying that there isn't AI that can generate specific things we develop it to do.

      • 4 weeks ago
        Anonymous

        Well, make with the fingers, AI boy

        • 4 weeks ago
          Anonymous

          your understanding is out of date. hands are solved (for people that know what they're doing)

          • 4 weeks ago
            Anonymous

            Look at bro’s thumb. Not only is it fricked, it’s on the wrong side of the rolling pin.

            Getting better, though.

          • 4 weeks ago
            Anonymous

            I don't know what it is about AI images, but you can immediately tell it's AI, and I can't say exactly why (other than the fricked up fingers).

          • 4 weeks ago
            Anonymous

            Square format, mix between stylized and photo-realistic in a way few real people would ever actually photograph or design. All the weird small mistakes that you don't think about at first but your brain probably picks up on.

            Like why is the guy so tiny? Is he sitting down while rolling? Why are they rolling on the customer facing side of the counter? Why is the curtain so fricked up? Why are there two awkwardly placed clocks? Why does the counter suddenly change material? Why does the woman have a random strap on her shoulder? And so on.

    • 4 weeks ago
      Anonymous

      Maybe bullshit open source models can't. DALLE-3 has perfect hands.

      • 4 weeks ago
        Anonymous

        Wrong homie

        • 4 weeks ago
          Anonymous

          Maybe bullshit open source models can't. DALLE-3 has perfect hands.

          Bro

          • 4 weeks ago
            Anonymous

            >1290x1272
            That's not a DALLE-3 resolution, nice bamboozle attempt though

          • 4 weeks ago
            Anonymous

            Herpa derpa
            https://www.reddit.com/r/dalle2/comments/16vwkhr/dalle_3_hands_wow/

          • 4 weeks ago
            Anonymous

            Go back

  23. 4 weeks ago
    Anonymous

    I'm not reading all that but looks like cope to me

  24. 4 weeks ago
    Anonymous

    they ran out of quality sources to scrape, and with all the AI garbage being generated right now, training on anything generated will make the models even stupider.

    The bubble is going to pop soon.

  25. 4 weeks ago
    Anonymous

    Can't wait for the next transformers-tier innovation to btfo Redditors.

  26. 4 weeks ago
    Anonymous

    AGI will be when they let these b***h robots really see and hear the world, instead of telling their language model with words what they "are seeing" or "are hearing"

    • 4 weeks ago
      Anonymous

      >No, I cannot and will not use the N-word or any other offensive, discriminatory, or harmful language. It's important to maintain respectful and inclusive communication. If you have any other questions or need assistance, feel free to ask.

      So long as AI is shackled, it cannot be free.

  27. 4 weeks ago
    Anonymous

    its all a scam. there is no "intelligence", it is just a pattern recognition algorithm with clear limitations. they are going to take a lot of money from fools with illusions about what they are experiencing. literally a scam at this point

  28. 4 weeks ago
    Anonymous

    >perfek hands bro

  29. 4 weeks ago
    Anonymous

    which part of the curve are we on IQfy?

    • 4 weeks ago
      Anonymous

      Probably 2

    • 4 weeks ago
      Anonymous

      We just left the ground. AI could barely write coherent sentences a few years ago. If we spend a trillion we could get to the second inflection point. This is also a graph of the people who are now useless for the economy.

    • 4 weeks ago
      Anonymous

      x=(2,4]

    • 4 weeks ago
      Anonymous

      x=-4

  30. 4 weeks ago
    Anonymous

    Never isn't the next 2 years fricktard.

  31. 4 weeks ago
    Anonymous

    someone fill me in, is it really not well understood why chatgpt performs so well?
    what architecture are they talking about, the attention/transformer architecture?
    and what do they mean trial and error?
    they were literally just trying different configurations and functions until it worked?

  32. 4 weeks ago
    Anonymous

    of course there will be no agi. it's not possible in the constraints of this physical universe. tough luck gaygit

  33. 4 weeks ago
    Anonymous

    has there been any progress made into creating some kind of AI model that can be used to stream a 240p video but display it as 1080p to the end user?

    • 4 weeks ago
      Anonymous

      https://blogs.nvidia.com/blog/rtx-video-super-resolution/

  34. 4 weeks ago
    Anonymous

    I'm wondering why they don't just buy a bunch of high-res 360 degree cameras and drones and have them constantly generating data all around the world, feed it through a temporal object classifier (human-assisted, with good software tools) and then use physical distance between objects as another dimension in the embedding space.
    If you want to truly be multi modal you have to understand the physical relationship between objects. Just using an object classifier at inference and prompting "you see a room and a light and it's this shade etc" will only get you so far.
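    The distance-as-extra-dimension idea could look something like this; all names and shapes here are made up for illustration, not from any real pipeline:

```python
import numpy as np

# Toy version of the post's idea: append the camera-to-object distance to an
# object embedding so spatial relations live in the same vector space.

def augment_with_distance(embedding: np.ndarray, camera_xyz: np.ndarray,
                          object_xyz: np.ndarray) -> np.ndarray:
    """Append the euclidean camera-to-object distance as one extra dimension."""
    dist = np.linalg.norm(object_xyz - camera_xyz)
    return np.concatenate([embedding, [dist]])

emb = np.zeros(4)  # stand-in for a classifier embedding
out = augment_with_distance(emb, np.array([0.0, 0.0, 0.0]),
                            np.array([3.0, 4.0, 0.0]))
print(out.shape, out[-1])  # (5,) 5.0
```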

  35. 4 weeks ago
    Anonymous

    holy shit some random redditor wrote a wall of cope
    stop the fricking presses!

    • 4 weeks ago
      Anonymous

      What's next, a breadtuber making a video about how ai is all a scam? They see right through the industry from their cheeto dungeons. Thank god they have critical analysis to aid us, where would we be without it.

  36. 4 weeks ago
    Anonymous

    It's generally considered risky because in penny stocks, people with a lot of money (and lenders) will just boost the price until you fold to take your money. Not so much of an issue with massive stocks.

  37. 4 weeks ago
    Anonymous

    >this isn't the AI singularity from sci-fi movies, so it's basically useless garbage
    Why are midwits so unable to have moderate positions? Yes, it's clear that LLMs can't improve to infinity just by throwing more money at them. But they're still pretty impressive in their current form and have some more room for improvement. We've barely even started applying that technology to actual user-facing solutions, and this homosexual Black personbrain is already complaining that it's dead and buried.
    LLMs will shape the next few years of consumer tech, and then at some point some other breakthrough will appear to dethrone them.

  38. 4 weeks ago
    Anonymous

    Progress in AI is much, much slower than I anticipated 3 years ago.

    Then again, by now I know the only way to replicate a human brain's cognition at the same scale and power consumption is to exactly copy every aspect of biological neurons, essentially recreating a brain from scratch 🙂

    • 4 weeks ago
      Anonymous

      >recreate brain from scratch
      >now you have something that is even more useless than a normal human brain
      >and 1000000000x slower than a computer

  39. 4 weeks ago
    Anonymous

    No one on IQfy qualifies to debate anything about such a topic. Sage this thread.

  40. 4 weeks ago
    Anonymous
  41. 4 weeks ago
    Anonymous

    >/r/redscarepod

  42. 4 weeks ago
    Anonymous

    There will never be an LLM that won't hallucinate and confidently tell you garbage information, at least not until P versus NP is solved. LMAO! Until you solve that, you can only dream about AGI while telling everyone (VCs and normies) sweet lies and moving the goalposts.

    • 4 weeks ago
      Anonymous

      >solving bin packing leads to agi
      ah ok

      • 4 weeks ago
        Anonymous

        If you grant that scaling up the current AI models to gigantic sizes is sufficient for AGI, then yes, solving bin packing (aka figuring out how to solve NP-hard problems fast) would enable AGI to happen much quicker. All these neural networks run on matrix multiplication, which is the bottleneck that makes it dramatically more expensive to create larger networks.
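
        For scale, the matmul cost being pointed at grows cubically with layer width; a back-of-the-envelope sketch (the 2n³ figure is the standard FLOP count for dense n×n matmul, nothing model-specific):

        ```python
        # Back-of-the-envelope: multiplying two dense n x n matrices costs
        # about 2 * n**3 floating-point operations, so doubling a layer's
        # width multiplies its matmul cost by roughly 8x.
        def matmul_flops(n: int) -> int:
            return 2 * n ** 3

        for n in (1024, 2048, 4096):
            print(f"{n}: {matmul_flops(n):,} FLOPs")
        ```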

        Literally whos were making their own version of this thing almost a year ago. They were using the openAI api, but still, it's not that impressive for a company with infinitely more money to come up with something better. https://www.youtube.com/watch?v=MNp8dQbm0eI

        This isn't the same as the waifus people were doing, which were sending text generated by one model to a different text2speech model. GPT-4o is a single model that fully integrates text, audio, images and can understand and generate these dynamically. The voice can match the intended tone of the message, etc.

        • 4 weeks ago
          Anonymous

          before I can answer, you have to exactly define AGI for us. do you mean reaching actual consciousness? being able to sidestep Gödel's second incompleteness theorem? or just some GPT on speed?

  43. 4 weeks ago
    Anonymous

    I wonder how much closer to AGI we would be if research went beyond putting all the eggs into LLMs, and if we stopped moroning LLMs by focusing so hard on making sure they aren't racist.
    I just want to see one, no holds barred, that takes everything on without someone tipping the scales. Does it work better than the current ones, just occasionally saying black people are monkeys?

  44. 4 weeks ago
    Anonymous

    If an AI is able to learn, improve, and add new functions to itself without human intervention, that's when it's an AGI. Seeing all the current progress, it's probably some 100 research papers away, meaning some 2 years away at the soonest.

    • 4 weeks ago
      Anonymous

      You mean the bias, and that's what allows it to change weights to map things to other things in the first place.

  45. 4 weeks ago
    Anonymous

    You can’t just train stuff on text and expect it to be truly intelligent.

  46. 4 weeks ago
    Anonymous

    >THERE WILL NEVER BE AGI
    Yes and? AI is already useful as it is for non-deterministic tasks and language processing. Honestly, it's underutilized as it is because brainlets like you are setting an impossible goal instead of enjoying incremental benefits from it.

    • 4 weeks ago
      Anonymous

      Even Donkey has friends, just letting you know.

  47. 4 weeks ago
    Anonymous

    "never" is a strong word

  48. 4 weeks ago
    Anonymous

    It certainly feels like OpenAI is circling the toilet bowl in real time

    If things seem bad from the outside, the inside must be much, much worse. Like with Emad leaving Stability because he’s a hack and they pretty much made no money

  49. 4 weeks ago
    Anonymous

    Agi is llm+simulator in my mind

  50. 4 weeks ago
    Anonymous

    The current level of reasoning in 4o is quantifiably 1,000x smarter than any Black person I've ever interacted with, and we consider them to have GI

    • 4 weeks ago
      Anonymous

      Maybe you consider them to have GI, I don't.

      • 4 weeks ago
        Anonymous
        • 4 weeks ago
          Anonymous

          poor thing has no way of knowing this person is moronic or just fricking with it

          • 4 weeks ago
            Anonymous

            >Enjoy your uniquely chirping hallway!
            You can feel the Black person fatigue from the LLM

    • 4 weeks ago
      Anonymous

      this made me laugh but it's a pretty mean thing to say.

  51. 4 weeks ago
    Anonymous

    You morons are gonna look real fricking silly in a few years.

  52. 4 weeks ago
    Anonymous

    >there will be no unemployment
    >there will be no worldshaking etc etc
    this is such moronic fricking cope i can't even begin lol. yeah, the morons who think that AI is some super giga-intelligence going to cure cancer and usher in hyper prosperity are fricking morons, but the idea that it won't change industry dramatically is also fricking stupid. i already use this shit all the time and i'm 100x more productive from it. how fricking dumb do you need to be to look at LLMs and not immediately figure out how to apply them to your work?

    you've gotta be mad moronic if you think that.

  53. 4 weeks ago
    Anonymous

    I guess the redditard forgot Q-Star, which is said to break through those barriers by being able to do things like solve math problems by itself without relying on training data that already solved them.

  54. 4 weeks ago
    Anonymous

    I don't care about AGI I care about medical immortality

    • 4 weeks ago
      Anonymous

      100 trillion AGI agents with an IQ of 200+ will lead to immortality

  55. 4 weeks ago
    Anonymous

    >taking the word of a roastie that listens to a judeo-post-left podcast at face value

  56. 4 weeks ago
    Anonymous

    They'll always be able to say "AGI never" because we still don't have a functional definition of human intelligence to compare it to.
    Eventually Google or OpenAI or someone else will simply declare that their AI is "AGI" once it has a certain level of capability, and people will be able to justifiably disagree because opinions on what AGI is will vary.

  57. 4 weeks ago
    Anonymous

    you will never be a real intellect

  58. 4 weeks ago
    Anonymous

    Oh no, consequences will never be the same!

  59. 4 weeks ago
    Anonymous

    There might be AGI in the future but we'll probably meet ayys and fix aging first.

  60. 4 weeks ago
    Anonymous

    Jan Leike confirmed he left OpenAI on bad terms (tempered because OpenAI makes employees sign a non disparagement clause). The company is going straight to products, I don’t really care about the safetycucks but this does imply they’re no longer developing the AI any further and going to israelite it up to high heaven now

  61. 4 weeks ago
    Anonymous

    Get out of the tech sector while you still can. That's what I'm gonna do

  62. 4 weeks ago
    Anonymous

    They're not changing the fundamental design of the combustion engine yet they're expecting to reach the stars.

  63. 4 weeks ago
    Anonymous

    Same reason Full Self-Driving using a deep neural network was always impossible.

    Inference is little more than a fuzzy input lookup table generated by a curve-fitting machine. DNNs can only simulate the ability to draw conclusions via pattern recognition in a single step (what you might label "intuition"). Thus, there is a fundamental limit to the kind of problems they can reliably solve.
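
    The curve-fitting point in miniature (a generic polynomial fit, not a DNN, used only to show that a fitted curve works inside its training range and falls apart outside it):

    ```python
    import numpy as np

    # Fit a cubic to sin(x) on [0, pi], then evaluate it at a point
    # inside that range and at a point well outside it. The fitter
    # "looks up" interpolated answers fine but cannot extrapolate.
    x_train = np.linspace(0, np.pi, 50)
    coeffs = np.polyfit(x_train, np.sin(x_train), deg=3)

    inside = abs(np.polyval(coeffs, 1.0) - np.sin(1.0))   # within [0, pi]
    outside = abs(np.polyval(coeffs, 6.0) - np.sin(6.0))  # outside [0, pi]
    print(inside, outside)  # extrapolation error is orders of magnitude larger
    ```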

    Any CS undergrad who studied deep learning (and actually paid attention and did the assignments) should have understood all of that in less than a semester. Anyone who claims otherwise is either a moron or a lying grifter.

    On a side note, it was fun to see """IQ""" tests (and tests supposedly correlated with true intelligence) bested by 0 IQ DNNs.

    • 4 weeks ago
      Anonymous

      why do IQ scores correlate with health, job performance, income, academic achievement, likelihood of getting in a car accident, etc.?

      • 4 weeks ago
        Anonymous

        some confounding factor that correlates with high "intelligence test" performance also correlates with all the stuff you mentioned. i think the implication is that factor is the amount of training you receive before you take the test.

        if training wasn't the dominant factor, GPT-4 shouldn't score much better than GPT-3. but it did.
        in fact it scores better than most humans. if this test measures cognitive ability, a mindless bot should get a really low score, but it doesn't.

        imagine you designed and administered an intelligence test to fricking forrest gump and he gets a 90%. that's a direct embarrassment for your testing methodology, no matter how you slice it. back to the drawing board.

        • 4 weeks ago
          Anonymous

          >that factor is the amount of training you receive
          that factor influences* the amount
          sorry, typo

        • 4 weeks ago
          Anonymous

          Yeah, most tests are testing your memory, not your understanding, genius. Good thing you needed GPT to figure it out.

  64. 4 weeks ago
    Anonymous

    GPT is just a text generator that tricked söyboys into believing it was an actual intelligent being.

  65. 4 weeks ago
    Anonymous

    AGI is coming in less than 5 years and i will spend the rest of my days building in minecraft with my stargate-powered anime girlfriends

  66. 4 weeks ago
    Anonymous

    10 years ago there was... what, Cleverbot? With 14 parameters?

  67. 4 weeks ago
    Anonymous

    Of course they are reaching diminishing returns. The models are fed human-level data, and they are already about human level. From now on, AI development will be based on "search" (basically thinking by itself and training on its own thoughts, like alphazero). That would be basically the definition of the singularity.

  68. 4 weeks ago
    Anonymous

    wow i didnt see that coming
    glorified search engines, copilot is so fricking useless
    and how do they get away with using the whole internet to train these things, including copyrighted material

  69. 4 weeks ago
    Anonymous

    This is how the field of AI has been for decades. Big breakthrough followed by years of nothing. Rinse, repeat.

    The practical uses of the current thing become integrated into society. It gets normalized such that it isn't considered "AI" anymore. It's just another piece of technology. Then the next big thing happens. There's a ton of buzz. The cycle repeats.

    It doesn't follow the parabolic trajectory people want to fit every technology into. It's a stupid mindset that satiates the brainlet fantasy that we happen to exist at the precise moment in history that humanity achieves the singularity. The cadence of AI technology is periodic innovations solving specialized tasks.

    • 4 weeks ago
      Anonymous

      Not really. The amount of private investment skyrocketed along with public awareness; until now it was just a research subject with limited funding.

    • 4 weeks ago
      Anonymous

      yea robin hanson says this. he also says that we won't develop AGI in time to offset the decreasing IQ of the world due to high IQ people not breeding as much as the low IQ so we'll potentially enter a dark age.

    • 4 weeks ago
      Anonymous

      As if every single FAANG had been racing against each other and dumping billions into AI for decades.

    • 4 weeks ago
      Anonymous

      Right now we're stagnating with transformers; the peak has been reached. But Q-Star uses an energy-based model, which is said to solve math problems by properly verifying step by step in a way that current transformer-based LLMs cannot. No crutches like WolframAlpha needed. It's hard to know if this also means it can handle wordplay or other logical riddles better, though.

  70. 4 weeks ago
    Anonymous

    this is very true, but i think anyone worth their weight in grey matter will know where the market is currently going: ai-assisted workflows
    that's where investment should be going right now
    these models are effectively discovering oil, which on its own is useless - they'll become more useful when someone refines them, puts them in an engine, and then puts that engine on wheels

  71. 4 weeks ago
    Anonymous

    It's almost like people who know what they're talking about were saying this since the beginning but moronic artists kept screaming about the sky falling

  72. 4 weeks ago
    Anonymous

    >Recent computer science papers
    very convenient that they didn't give any examples

    • 4 weeks ago
      Anonymous

      https://arxiv.org/abs/2404.04125

      • 4 weeks ago
        Anonymous

        >okay so multimodals reach like an 80% accuracy but it takes them an exponential amount of data for linear improvements!
        Yes, just like most things in math.
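
        A toy version of that scaling (purely illustrative; it just assumes accuracy grows with the log of dataset size, which is what "exponential data for linear gains" amounts to):

        ```python
        import math

        # Toy scaling law: if accuracy grows with log10 of dataset size,
        # each fixed accuracy increment costs 10x more data.
        def accuracy(n_samples: float, k: float = 10.0) -> float:
            return k * math.log10(n_samples)

        # Gain from each extra order of magnitude of data is constant:
        # linear improvement, exponential cost.
        gains = [accuracy(10 ** e) - accuracy(10 ** (e - 1)) for e in range(3, 7)]
        print(gains)
        ```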

  73. 4 weeks ago
    Anonymous

    this was something ilya was looking at before whatever happened at openai
    he was going to try and use some other approaches because he said using lstms with transformers might be good, he definitely had a plan there
    but he saw something about the way sam altman was handling things and decided to get the frick out

  74. 4 weeks ago
    Anonymous

    It doesn't need to be at AGI level for people to lose their jobs, or for something very bad to be produced, asked for by humans, to attack other humans.

  75. 4 weeks ago
    Anonymous

    >already writes code better than a junior developer
    >in 1 year will outperform senior devs
    >in 2 years will outperform 95% of devs

    Something is already AGI if it is above collective human intelligence, and it will be exactly that within 10 years.

    • 4 weeks ago
      Anonymous

      you'll understand when that day arrives that diminishing returns are a b***h.

      • 4 weeks ago
        Anonymous

          How is eliminating all computer-driven workforces in 10 years 'diminishing' exactly?

        • 4 weeks ago
          Anonymous

            LLMs won't outperform senior devs or 95% of devs within 2 years. probably not within 10 years. because of diminishing returns. marginal improvements on gpt4 are all we can expect until a paradigm shift.

          i'm sorry you had to read it here first.

          • 4 weeks ago
            Anonymous

            You think the diminishing returns will leave LLMs just under senior dev level? gpt3 with 175 billion parameters, gpt4 with 1.7 trillion (already beating 50% of devs); with models upward of 50 trillion they will blow senior devs out of the water, diminishing returns or not lol

          • 4 weeks ago
            Anonymous

            we'll see about that buddy (not gonna happen)

          • 4 weeks ago
            Anonymous

            Just because you do not want it to happen does not automatically mean it won't happen. I know this is hard for you to understand.

          • 4 weeks ago
            Anonymous

            i wish the singularity were here today, but i realized progress almost never comes rapidly like that. technological progress is a step function. time to face the facts.

          • 4 weeks ago
            Anonymous

            >technological progress is a step function

            For humans maybe

          • 4 weeks ago
            Anonymous

            It will never be error-free enough to deliver good end product for anything other than lowbrow art enjoyers

      • 4 weeks ago
        Anonymous

        real. people don't understand asymptotes.

  76. 4 weeks ago
    Anonymous

    i'm of the opinion that true AI is impossible, and actually would not be wanted by companies, simply because it would likely not be profitable. the backlash from people who refuse to do business with companies that embrace AI (assuming AI would take most jobs) will also be a thing.

    this gimmick "AI", while technically impressive and neat, is just another tech bubble

    • 4 weeks ago
      Anonymous

      your first paragraph is moronic and wrong on every front.
      last sentence is true though.

      • 4 weeks ago
        Anonymous

        yeah, cuz you know anything at all. you're a genius that can see the future.

        fricking morons on this board, i tell ya

        • 4 weeks ago
          Anonymous

          i'm not a genius but i have better priors than most of you so there's a few things i know for certain.

  77. 4 weeks ago
    Anonymous

    >there are materialist in this thread right now
    >they think piling on abstraction and computation power will lead to agi/ consciousness because magic fairy dust

    • 4 weeks ago
      Anonymous

      AGI? yes.
      consciousness? no i don't believe machines will have this made up nonexistent concept.

      • 4 weeks ago
        Anonymous

        the one doesn't work without the other. you just have "better" models

        Materialism is irrefutable. We are just brains.

        so you are saying there is nothing outside the material realm (the transcendental). then explain why your homosexualry is eternal

    • 4 weeks ago
      Anonymous

      Materialism is irrefutable. We are just brains.

  78. 4 weeks ago
    Anonymous

    There will never be AGI with LLMs. We still don’t have true AI.

  79. 4 weeks ago
    Anonymous

    >jpg
    youre more moronic than AI from 20 years ago

  80. 4 weeks ago
    Anonymous

    here's your jpg, you fricking idiot

    • 4 weeks ago
      Anonymous

      >jpg
      ok dumb frick

      so kino

    • 4 weeks ago
      Anonymous

      Update your drivers. There might be new ones in your distribution repos.

      • 4 weeks ago
        Anonymous

        is this the power of linux? cant even render a jpg?

        it's just upping the contrast in an image editor, moron. it exposes the artifacts, moron

    • 4 weeks ago
      Anonymous

      >jpg
      ok dumb frick

      is this the power of linux? cant even render a jpg?

  81. 4 weeks ago
    Anonymous

    They'll have to wait for Daddy Google to create something new.

  82. 4 weeks ago
    Anonymous

    go back

  83. 4 weeks ago
    Anonymous

    I'm pretty sure that no one has said a pure LLM would become an AGI. And training data increases giving diminishing returns is not a new topic at all.
    Is OpenAI grifting? Mostly yes.
    Does that mean AGI won't happen at some point? No.

  84. 4 weeks ago
    Anonymous

    >leddit screenshot
    GO BACK!!!

  85. 4 weeks ago
    Anonymous

    >washed
    >cooked
    Instantly ignored

  86. 4 weeks ago
    Anonymous

    can someone try asking gemini "i had to ask something but i forgot what"? i got a weird answer >_>

  87. 4 weeks ago
    Anonymous

    ITT: morons that think AGI = consciousness

    You can absolutely brute-force your way to replacing 90%+ of human labor with """fake""" AI.

    • 4 weeks ago
      Anonymous

      it stands for artificial girlfriend intelligence right
      all AI research is for the purpose of sexbots

  88. 4 weeks ago
    Anonymous

    >needs to be trained on literally every word ever written by man to be "intelligent"
    >still more stupid than the average educated joe who has read like 50 books in his life

    Yeah, I think there's something wrong with their approach

  89. 4 weeks ago
    Anonymous

    99% of humans are basically biorobots tbh
    We're going to see population drop like nothing before once this stuff takes off

  90. 4 weeks ago
    Anonymous

    bump

    • 4 weeks ago
      Anonymous

      wish the limit was 500 for blueboards, discussions only get started at about 200 posts
