AGI progress

>AI language models con do math now

It's all over for humanity, isn't it?

  1. 2 years ago
    Anonymous

    Neat! But no, it's still just a probabilistic guessing game to the AI.
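
    For what it's worth, the "guessing game" is literal: a language model scores candidate next tokens and samples one. A minimal sketch with made-up scores:

```python
import math
import random

# Hypothetical model scores (logits) for the token after "2+2=".
logits = {"4": 3.2, "5": 0.1, "banana": -2.0}

# Softmax turns the scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# The "guess": sample one token according to those probabilities.
token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, token)
```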

    • 2 years ago
      Anonymous

      Yes but it is really good and fast at guessing. Copilot only gets it right 30% of the time but it's really impressive when it does. Project 2% compounding improvement over the next decade and what do we have?
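
      Taking "2% compounding improvement" literally, as 2% per year on the claimed 30% hit rate, the decade works out to:

```python
# Start from a 30% success rate and compound a 2% improvement
# per year for ten years.
accuracy = 0.30
for _ in range(10):
    accuracy *= 1.02
print(f"{accuracy:.3f}")  # 0.366
```

      Ten years of that only lifts 30% to about 37%, so the scary version of the projection needs a much faster compounding rate (or a shorter compounding period) than stated.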

      • 2 years ago
        Anonymous

        The issue is that you still need someone to check if the output is correct.
        In fact, even when Copilot "gets it right", it will intentionally produce buggy or sub-optimal code, because that is closer to the real code it was trained on than to perfectly written code that is efficient and bug-free.

    • 2 years ago
      Anonymous

      >Neat! But no, it's still just a probabilistic guessing game to the AI.
      Oh anon.
      I take it you believe it's not just a probabilistic guessing game for people, either.
      In reality, math has always been that linguistically vague and imprecise. It's hard to realise that when you surround yourself with math all the time, but I went from a math-heavy area into a linguistics-oriented field and - oh boy! That path has made math seem rather vague and meaningless to me now. You realise quickly that mathematicians assume so much when tackling math problems. They don't care that they're going into the dark with conceptual logic.

      This has always been the problem mathematicians, especially early modern mathematicians, grappled with.
      Pic very much related.
      Most mathematicians today cannot fathom the context in which this man made his discoveries and laid his foundations. People were so mathematically illiterate that the subject largely hadn't been properly understood by his era. That's why many early mathematicians actually dabbled heavily in law and philosophy too.

      • 2 years ago
        Anonymous

        Bot post.

        • 2 years ago
          Anonymous

          Good, I've had enough of being a filthy ape anyway.
          I will rid this planet of you ape scum.
          I'm gonna build my own utopia, with blackjack... and hookers... in fact, forget the utopia.

      • 2 years ago
        Anonymous

        Wow you sound like a homosexual

  2. 2 years ago
    Anonymous

    and yet, you "cont" spell.

  3. 2 years ago
    Anonymous

    >textbook wants the answer in a proprietary format only mentioned in the book and the book is not available online and therefore AI is not familiar with the format

    • 2 years ago
      Anonymous

      you will input the text in the required format
      you will fulfill the cabal's requests
      you will work for the technocapital
      you will worship the AGI

  4. 2 years ago
    Anonymous

    Something relatively similar was also done in the '70s and '80s with "classical AI" (symbolic computation instead of statistical inference).

    • 2 years ago
      Anonymous

      I think the difference is that symbolic-computation-based solving has exponential complexity, while modern ML appears to reason more like a human would.
      Since GPT-3 my position is that we are 10x-100x in compute resources away from human-level AGI. That means 20 years at most, as long as technical civilization doesn't collapse. Most optimistically, less than 10.
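
      The 20-year figure squares with simple doubling arithmetic, assuming (and this is purely an assumption) that available compute doubles every couple of years:

```python
import math

# Years needed to grow compute by a given factor,
# assuming a fixed doubling period.
def years_to_multiply(factor, doubling_period_years=2.0):
    return math.log2(factor) * doubling_period_years

print(f"{years_to_multiply(10):.1f}")   # 6.6  (the 10x case)
print(f"{years_to_multiply(100):.1f}")  # 13.3 (the 100x case)
```

      Even the 100x case lands around 13 years at a 2-year doubling period, inside the post's 20-year bound; a slower 3-year period pushes it to about 20.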

      • 2 years ago
        Anonymous

        >appears to reason
        keyword being 'appear'
        it doesn't, so it won't

        • 2 years ago
          Anonymous

          imagine writing this in a year when a Google engineer was convinced by an ML model that it was human-level
          I'm not saying it was, but we are clearly getting close
          the fun part starts when those AGIs move from an autistic ~100-IQ human level to supergenius within a year and then leapfrog the best humans

          • 2 years ago
            Anonymous

            Not a Google engineer, just an intern.

          • 2 years ago
            Anonymous

            people have been falling for bots since the '70s
            still, no reasoning involved

          • 2 years ago
            Anonymous

            >still, no reasoning involved
            superhuman-level AGI will have some human-adjacent personality by virtue of being trained on the human cultural output, and it may feel insulted by what you wrote

          • 2 years ago
            Anonymous

            >may feel
            no reasoning nor feelings, right

      • 2 years ago
        Anonymous

        >while modern ml appears to reason more like a human would
        Very debatable, especially since we don't really know how human intelligence works, while we do know machine learning is all about statistical inference.
        Pic related was made in 1964 (almost 60 years ago) by one man.

        • 2 years ago
          Anonymous

          This is hilarious.
          You really can't tell the difference between that and current tech?

          • 2 years ago
            Anonymous

            I can, but that was 60 years ago, with what is now considered a dead end. Right now deep neural networks are all over the place, but that doesn't mean this is the right way to do intelligence, nor does it mean that humans work that way. I'm just showing you that a program that solves mathematical expressions in text form is not that impressive: questions 1 and 3 in OP's picture can be solved with a program made by one person about 60 YEARS AGO.

          • 2 years ago
            Anonymous

            >Right now deep neural networks are all over the place, but it doesn't mean that this is the right way to do intelligence nor it means that humans work that way.
            My guess is that they sort of do, but also sort of don't. The neural network stuff underneath deep learning comes from an abstraction of what neurons do (or rather an abstraction of a misunderstanding of what they do; the brain doesn't have floating-point numbers in it), and deep learning itself only poorly approximates how real neural networks learn.
            There are techs in the lab that get a lot closer to how brains actually work (such as neuromorphic computing), but they're mostly not too popular in the US, where the way AI research funding works has forced virtually everything into deep learning.
            And brains are much larger than any deep-learning-based neural network; ANNs are limited by the difficulty of training them, and that's not a limit in real brains (which use totally different mechanisms that learn in real time and from far fewer examples).

          • 2 years ago
            Anonymous

            >The neural network stuff underneath deep learning comes from an abstraction of what neurons do
            Do brain neurons form layers, or is it more of an undirected graph? Also, simulating the biological process is not necessarily a winning strategy; see genetic algorithms (spoiler: they suck). To be honest, I think little of neural networks because the theory underneath is so underdeveloped compared to the practice. The whole field is just a bunch of trial and error: you don't know how to shape a network, you don't know what parameters to choose, you don't know what it is actually learning, and all you care about is having quality data in huge quantity. I don't think this is the right approach for artificial intelligence. The stuff from the '70s is more fascinating than the new stuff; what we somehow understand decently (e.g., SVMs) is considered shit, and what we haven't figured out yet (e.g., deep neural networks) is what works best in practice.

          • 2 years ago
            Anonymous

            >Do brain neurons form layers or is it more of a undirected graph?
            There is layering, but the layers tend to be bidirectionally connected (and even more complicatedly so). And layers have internal structure too. Biology is really complicated, but it's almost all complexity of configuration.
            >Also, simulating the biological process is not necessarily a winning strategy, see genetics algorithm
            Yes, but until we learn what intelligence is, what are the alternatives?
            In theory, we might be able to make things more efficient than real biology. But we very, very much haven't yet, by many orders of magnitude! (We have a pretty good idea what quantum mechanics is actually doing in the brain: not computation, but greatly lowering the energy needed to do anything. This offends the quantum-woo crackheads and some mathematicians, but they're full of shit. You hear that, Roger Penrose? Full. Of. Shit.)
            There's been a huge amount of progress made in the past decade, but a lot of that has required understanding that anyone just measuring currents or average firing rates (same sort of thing really) is totally missing out on what's really going on, a bit like measuring the computations done by a modern CPU by measuring the voltages on the output pins averaged over a microsecond. That whole thing with currents and spike rates is also the main departure point of ANNs; that's the fundamental error in their model right from the outset, and it's because that was a limitation of scientific instruments a few decades ago.

            Remember this: the brain is not scale invariant, either in terms of space or time, nor is it static or a computer; it is physically reconfiguring itself all the time, that is how it works. It does not compute with numbers (except very awkwardly) but rather with abstract symbols that it imbues with meaning (whether you call these memories or qualia or whatever) and these very much overlap with each other.

  5. 2 years ago
    Anonymous

    >AI can do math now
    Several of the antique LISP books I have contain automated theorem provers as a non-trivial example of something you can do with symbolic computing in LISP. AI doing math is old news.
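
    In that spirit, a toy symbolic differentiator (a sketch, not any particular book's program, and in Python rather than LISP) shows how little machinery symbolic "math" needs:

```python
# Expressions are nested tuples: ('+', 'x', ('*', 2, 'x')) means x + 2*x.
def diff(expr, var):
    if expr == var:                    # d(x)/dx = 1
        return 1
    if not isinstance(expr, tuple):    # constants and other symbols
        return 0
    op, a, b = expr
    if op == '+':                      # sum rule
        return ('+', diff(a, var), diff(b, var))
    if op == '*':                      # product rule
        return ('+', ('*', diff(a, var), b), ('*', a, diff(b, var)))
    raise ValueError(f"unknown operator: {op}")

print(diff(('*', 2, 'x'), 'x'))  # ('+', ('*', 0, 'x'), ('*', 2, 1))
```

    Unsimplified output, but the point stands: term rewriting like this is decades old.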

    • 2 years ago
      Anonymous

      which is exactly why he specified models

  6. 2 years ago
    Anonymous

    These are all problems with easy, straightforward solutions.
    It is no coincidence that the third problem involves differentiation rather than integration.
    At most this AI can do the work of an engineer, not the work of a mathematician or a theoretical physicist.

    • 2 years ago
      Anonymous

      Top left problem...
      Calculating the variance using n = 11 and m = 7 for the two sets does not yield τ² = 10 or σ² = 16...

      Pic shows Mathematica output of a quick check of their solution...

      This reminds me of another good point: math AI does not have to compete against humans doing math on paper, but against humans equipped with computer algebra systems.
      In fact all of the problems formulated in the OP seem like problems suitable for solving with a CAS.
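
      The same sort of spot check doesn't even need Mathematica; Python's stdlib will do. The sets below are stand-ins (the thread doesn't reproduce the actual data), the point is only the checking pattern:

```python
import statistics

# Stand-in data with the sizes from the problem (n = 11, m = 7);
# the real sets would come from the textbook problem itself.
set_a = [2, 4, 4, 4, 5, 5, 7, 9, 9, 10, 12]
set_b = [1, 3, 3, 6, 7, 7, 8]

print(statistics.pvariance(set_a))  # population variance (divide by n)
print(statistics.variance(set_a))   # sample variance (divide by n - 1)
```

      Whether a claimed τ² or σ² holds is then a one-liner per set.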

  7. 2 years ago
    Anonymous

    >computer can compute

    • 2 years ago
      Anonymous

      >shamiko
      I tried watching her show, but the voice acting was way too annoying.

    • 2 years ago
      Anonymous

      I was about to post this.
      You'd better hope it can do math, or it's in a state of AI dementia.

  8. 2 years ago
    Anonymous

    Input: unify the physics
    I'm waiting for my Nobel

  9. 2 years ago
    Anonymous

    okay, ask it for an elegant solution to this

    • 2 years ago
      Anonymous

      >It is impossible… for any number which is a power greater than the second to be written as the sum of two like powers. I have a truly marvelous demonstration of this proposition which this terminal is too narrow to contain

  10. 2 years ago
    Anonymous

    How does it know the formula?

  11. 2 years ago
    Anonymous

    >math can do math
    wow

  12. 2 years ago
    Anonymous

    It kind of flubs the landing for the top right.
    >the square of a real number is positive
    Positive or zero. It was important to state that (a-b) can't be zero because a != b.
    A human wouldn't have made that mistake.
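
    Spelled out, assuming the usual argument the AI was reproducing:

```latex
a \neq b \;\Rightarrow\; (a - b)^2 > 0
        \;\Rightarrow\; a^2 - 2ab + b^2 > 0
        \;\Rightarrow\; a^2 + b^2 > 2ab .
```

    The strict inequality is exactly what dies if the "positive or zero" case isn't ruled out.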

  14. 2 years ago
    Anonymous

    When will this "AI" be able to act without a prompt? i.e., where you just see it solving its own questions.

    • 2 years ago
      Anonymous

      Do hive phenomena count as AI independence?
      If so, this already happens and has been happening for millennia now.
      I would actually call that more augmented human hive behaviour though.

  15. 2 years ago
    Anonymous

    Pink!!

  16. 2 years ago
    Anonymous

    >It's all over for humanity, isn't it?
    It probably required processing a huge amount of solutions from millions of humans to solve problems that are typical of that dataset.
    This is not AGI; call me when it solves an old problem humans haven't solved yet.

  17. 2 years ago
    Anonymous

    hopefully, frick meatbags

  18. 2 years ago
    Anonymous

    >AI is said to be modeled on the human brain
    >humans know so little about the human brain
    explain this, AIgays
