
  1. 1 month ago
    Anonymous

Yes, it's a lookup table, a very high-dimensional one that can exhibit out-of-sample generalization, which is all we need from it anyway.

  2. 1 month ago
    Anonymous

well yeah duh. And it's trained with regression. That's all it is. That's why anyone who does anything serious with AI knows never to think of that thing as reasoning. It's a model operating on the data it's been trained on. Nothing more. It can come real close to passing for a human, but invariably it becomes apparent that it's actually more moronic than a moron. And that's fine. It has its use cases. It also has its limitations. Humans are basically the same lol.

    • 1 month ago
      Anonymous

      Pretty much. People use ChatGPT failing at certain riddles as proof it's not intelligent, yet how many humans would pass? The goalposts keep getting moved on what machines can do, so anyone paying attention can see this train ain't stopping anytime soon.

      • 1 month ago
        Anonymous

>Your brain is also a lookup table.
lol lmao even

    • 1 month ago
      Anonymous

      > reasoning
      LMAO, just give a precise definition of that word and AI will start reasoning in no time.

    • 1 month ago
      Anonymous

Humans are not the same, because we're created by God in His image and are therefore conscious and capable of subjective experience. You project your own shallowness onto others, don't do that.

      • 1 month ago
        Anonymous

>because we're created by God in His image
        God is a lookup table.

        • 1 month ago
          Anonymous

          >God is up in the sky
          >Sits on a throne
          >If he has a throne, it stands to reason that he has a table too
          Makes sense to me

  3. 1 month ago
    Anonymous

    Your brain is also a lookup table.

    • 1 month ago
      Anonymous

      Sure, but also causality is just a lookup table for interactions between elementary particles.

    • 1 month ago
      Anonymous

      Consciousness is not a computation

  4. 1 month ago
    Anonymous

Yeah, obviously. Most people are too dumb to exceed that level as well, in fairness to my OpenAI product (which I am very satisfied with).

It is kind of hilarious that an LLM or vision model can be trained on every text ever written and every picture ever saved by humanity and still not understand what a letter is, or what perspective and anatomy are. Kind of depressing too, like working with a special ed kid that's been in school for 10,000 years.

  5. 1 month ago
    Anonymous

    >can someone ask a neural network if my reasoning is valid and correct?
    Kek

  6. 1 month ago
    Anonymous

    >any function is a lookup table
    duh
    >given that a lookup table does not perform any reasoning
    and the source is that he made it the frick up

    what is this 39 IQ thread

    • 1 month ago
      Anonymous

      No it's not obvious mr. Sarcasm homosexual because people argue human brains work like LLMs, that there is no meaningful difference, etc. Yet humans are conscious and AI is not and never will be.

    • 1 month ago
      Anonymous

      >and the source is that he made it the frick up
      how does a lookup table reason anon? we're all waiting for your answer
      > it just does ok!
      lol lmao. cry some more

      • 1 month ago
        Anonymous

        >how does a lookup table reason anon?
        how does your brain reason
        what is reasoning?

      • 1 month ago
        Anonymous

        https://i.imgur.com/KFGZq2M.png

        no hard feelings

        nobody is going to play ball with you unless you define what you mean by "to reason"

    • 1 month ago
      Anonymous

      >any function is a lookup table
not correct: a finite LUT is a sample of a hypothetical function. the function can represent the entire LUT, not because the LUT is equivalent to the function, but because the function can generate any sample portion of the LUT

thinking of it another way: you can't uniquely recover the function from the LUT alone, but the function can still generate the LUT even if you have no sample of it. if they were equivalent, you'd be able to derive exactly one function from the LUT alone (you can't do this with finite LUTs, because there are infinitely many solutions that just happen to intersect each other at the values in the LUT)

there's also the problem of systems for which LUTs can exist but no generating function does (e.g. random number sequences, especially finite ones): there are infinitely many candidate functions that will happily predict a sample outside the LUT that doesn't exist, because no actual function generated the LUT in the first place

      the Universal Approximation Theorem is about approximating the LUT, not emulating the function itself - that's why it's not the "Universal Emulation Theorem"

>Yes it's a lookup table, a very high dimension one and can exhibit out of sample generalization, which is all we need from it anyways.
now if only people would realize 'outside' here means the 'new information' that regression has always provided to a dataset - it's not 'understanding' the function (see: infinitely many regression solutions)

>What's an adjacency matrix and how is it AI?
a LUT for node connections

>Yeah obviously. Most people are too dumb to exceed that level as well, in fairness to my OpenAI product (which I am very satisfied with).
>It is kind of hilarious that a LLM or visual model can be trained on every text ever written and every picture ever saved by humanity and still not understand what a letter is or what perspective or anatomy are. Kind of depressing too, like working with a special ed kid that's been in school for 10,000 years.

once artificial abstraction/mental models are cracked, the only barrier to AGI will be continuous learning. (you need more than regression to emulate logic that can interpret the processes underlying the regression, and to get modal relations between mental models that aren't just 'adjacency probability in training output' - LLMs and FFNNs in general won't cut it, but i still think it's entirely possible to do)

      well, unless self-direction is WAY harder than i think it will be
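the 'infinitely many intersecting solutions' point is easy to see concretely. a toy sketch (made-up numbers, not tied to any model): two different functions agree on every entry of a finite LUT yet disagree everywhere outside it, so the table alone can never identify its generating function.

```python
# Two different functions that agree on every entry of a finite LUT
# but disagree outside it: the table alone cannot pin down "the" function.

table = {0: 0, 1: 1, 2: 4, 3: 9}  # a finite LUT, sampled at x = 0..3

def f(x):
    return x * x  # one candidate generating function

def g(x):
    # another candidate: f plus a term that vanishes at every sampled x
    return x * x + x * (x - 1) * (x - 2) * (x - 3)

# both reproduce the LUT exactly...
assert all(f(x) == y and g(x) == y for x, y in table.items())

# ...but they disagree at any point outside the table
print(f(4), g(4))  # 16 40
```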

      • 1 month ago
        Anonymous

        gradient descent has something known as catastrophic blowup. you should look into it. continuous learning is not possible with current tools

        • 1 month ago
          Anonymous

          You mean catastrophic forgetting

          • 1 month ago
            Anonymous

            there is that too but gradients can blow up and there is something known as the zero gradient problem which makes learning impossible. basically continuous learning is a non-starter with the current tools and techniques

          • 1 month ago
            Anonymous

            Zero gradient problem was largely solved by skip connections, popularized by ResNet
            This is 2015 research

          • 1 month ago
            Anonymous

            it's not solved. you can still get into zero gradient zones even with skip connections. skip connections mitigate the problem but there is no guarantee you won't zero out the weights and make the skip connections useless

          • 1 month ago
            Anonymous

            >zero out the weights
            That's what normalization layers are for
            Again, largely solved

          • 1 month ago
            Anonymous

            good luck with your continuous learning plan then. i'm sure it will work out

          • 1 month ago
            Anonymous

            You're the only one who's gungho on continuous learning
            I only said "zero gradient problem" and "zero out the weights" are largely solved problems
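the gradient argument above has a well-known toy picture (a scalar caricature, not a real network): backprop multiplies per-layer derivatives, so a plain deep chain with derivative w < 1 drives the gradient to zero, while a residual layer contributes (1 + w) and the identity path survives.

```python
# Toy scalar picture of gradient flow through depth.
# Plain chain rule: gradient is a product of per-layer derivatives w.
# Residual (skip-connection) chain: each layer contributes (1 + w),
# so the identity path keeps the product from collapsing to zero.

depth = 50
w = 0.01  # small per-layer derivative

plain_grad = w ** depth        # 1e-100: numerically gone
resid_grad = (1 + w) ** depth  # ~1.64: identity path survives

print(plain_grad, resid_grad)
```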

  7. 1 month ago
    Anonymous

    Computers work because of science.
    Human brain works because of magic.
    Therefore, computer programs can never be as intelligent as humans.

    • 1 month ago
      Anonymous

Yeah, that misrepresentation was funny and upvote-worthy three years ago, before we built AI and it empirically proved the existence of the soul.

  8. 1 month ago
    Anonymous

    Why are NP complete problems hard?
    They're just lookup tables

  9. 1 month ago
    Anonymous

"AI" is just a misnomer to scam investors; all of this is semantic disagreement.
these things are useful in their place, like any tool.

    • 1 month ago
      Anonymous

      What do you think the "A" stands for?

  10. 1 month ago
    Anonymous

    >completely handwaves away randomness
    >this kills nondeterminism, free will, quantum woo and human reasoning
    Nothing personal kid.

  11. 1 month ago
    Anonymous

    if only you knew how bad things are..

    • 1 month ago
      Anonymous

      >compresses and re-encodes inputs in UTF-8 THREE times, only to calculate some "distance"
      wtf?

    • 1 month ago
      Anonymous

      Training a neural network is an optimization problem.
      Compression and optimization are equivalent.

      • 1 month ago
        Anonymous

        Unfortunately, you are wrong.

        Compression and optimization are not equivalent, and you'd know that if you knew the basics of information theory.

Lossy compression does use a mean-square-error distortion criterion when the source is continuous, but if the source is already discrete/digital it's a different process entirely, one concerned with maximizing mutual information over a channel. It has an optimization step in there, but it isn't optimization itself; the optimization is only one small part of the compression process.

  12. 1 month ago
    Anonymous

    >every neural network is equivalent to a lookup table
This dude wrote a whole essay on the inner workings of AI and has never heard of an adjacency matrix.

    • 1 month ago
      Anonymous

      What's an adjacency matrix and how is it AI?

  13. 1 month ago
    Anonymous

    And who says humans can do "actual thinking"?
    What if we are just biological stimulus-response-machines, all our thinking and behaviour determined by the chemicals and electronic potentials zipping around in our neural network of synapses?

    • 1 month ago
      Anonymous

      we aren't any different in principle, biological machines just (currently) receive orders of magnitude more data through a litany of physical senses. language is hilariously less information dense than sight, smell, touch, sound, etc.

we also have a foundational instinct layer that evolved with the collective sensory data of hundreds of millions of years of past living experience. when you realize how big that data set actually is in raw bytes, it's astronomical. AI surpassing humans is inevitable, because it's only a matter of time until we can scale the data set big enough to rival our own

  14. 1 month ago
    Anonymous

    Transformers are Turing Complete.
    https://www.jmlr.org/papers/volume22/20-302/20-302.pdf

    A Turing machine can perform any computation.

    Including human consciousness.

    Since humans can reason, it follows that a Transformer network can reason too - in principle.

    But can human consciousness be computed? Of course! At minimum, a computer could simply simulate the atoms of a human brain and compute the chemical reactions of the neurons firing.

    So in summary, the person in OPs screenshot is a stupid moron.

    • 1 month ago
      Anonymous

      >At minimum, a computer could simply simulate the atoms of a human brain and compute the chemical reactions of the neurons firing.
      It's not evident that a simulation of that detail is possible, or that it would generate a form of consciousness, a virtual awareness that could experience qualia.

    • 1 month ago
      Anonymous

      >Including human consciousness.
      Except consciousness is NOT computation.

      • 1 month ago
        Anonymous

        unless it is.

  15. 1 month ago
    Anonymous

    >neural network can be represented by a function
    No shit.
    Everything can be.

  16. 1 month ago
    Anonymous

    >equivalent to a lookup table
    You could say that about any finite discrete function.
Not every abstraction is useful. Anon in picrel abstracted away the most meaningful parts of the thing he was attempting to describe. He literally fell into the classic "man is a featherless biped" fallacy

    • 1 month ago
      Anonymous

      Every neural network is finite and discrete. He's not abstracting anything

  17. 1 month ago
    Anonymous

    His conclusions are immediately obvious and irrelevant. There are actual morons in the room with us now who believe a sufficiently complicated stack of punch cards = intelligence.

    • 1 month ago
      Anonymous

      yes, this is correct. good job for making the same point as OP

  18. 1 month ago
    Anonymous

    This conversation stems from the flawed idea that we understand how human minds work. It is most likely this flaw which causes us to draw comparisons between the mind and 'AI'.

  19. 1 month ago
    Anonymous

    >ITT: Midwits drowning in bathwater

    • 1 month ago
      Anonymous

      >refuses to refute the argument
      >leaves the thread

  20. 1 month ago
    Anonymous

    post on IQfy

  21. 1 month ago
    Anonymous

    https://arxiv.org/abs/2106.05181

  22. 1 month ago
    Anonymous

Every function is equivalent to a lookup table. Only a midwit would think that's profound.
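for a finite, discrete domain the equivalence is literal: tabulate the function once and lookup replaces evaluation. a trivial sketch (the example function is arbitrary):

```python
# Any deterministic function over a finite domain collapses to a LUT:
# precompute every output, and "evaluation" becomes indexing.

def popcount(n):
    return bin(n).count("1")  # an arbitrary deterministic function

DOMAIN = range(256)
LUT = {n: popcount(n) for n in DOMAIN}  # the function, tabulated

# on this domain the table and the function are indistinguishable
assert all(LUT[n] == popcount(n) for n in DOMAIN)
print(LUT[255])  # 8
```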

    • 1 month ago
      Anonymous

      so why is OpenAI worth $100B?

      • 1 month ago
        Anonymous

        What a moronic non-sequitur. How does the value of OpenAI have anything to do with the mathematical statement that any deterministic function can be represented as a lookup table?

        • 1 month ago
          Anonymous

          even non-deterministic functions can be represented as lookup tables but the question still stands. why is a lookup table worth $100B?

          • 1 month ago
            Anonymous

            >why is a lookup table worth $100B?
            mostly a bet on what else they can achieve.

          • 1 month ago
            Anonymous

            Because that lookup table is really useful, and it took a lot of expensive engineers and a lot of compute to create it, and investors think OpenAI is going to continue creating the world's best lookup tables and make a lot of money selling access to them.

      • 1 month ago
        Anonymous

        >so why is OpenAI worth $100B?
        That's air money, it doesn't really exist.

  23. 1 month ago
    Anonymous

    Every neural network is built off discrete representations of continuous functions.

LUTs are literally just discrete representations of continuous functions.

I fricking hate computer scientists so God damn much. Neural networks are no different from ANY other classification or regression method: we are searching for the optimal CONTINUOUS or DISCRETE function (depending on inputs) that comes closest to solving our many-equations-few-unknowns problem.

    There's nothing special about this, but they're not just look-up tables either.

    Anyway, neural networks are memes outside classification problems, and only certain flavors of classification problems like image, language, etc.

    The hot shit for regression has been ensemble models for a good minute now. But all the problems are literally, at the end of the day:

    Y = f(X; p) + e

    We just switch around the architecture of f and search for optimal p, and then characterize e.
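that last line is the whole recipe. a minimal pure-Python instance (made-up data) with f(x; p) = p0 + p1*x: fix the architecture of f, solve for the optimal p by least squares, then characterize e from the residuals.

```python
# Y = f(X; p) + e with f(x; p) = p0 + p1 * x:
# fix an architecture for f, search for optimal p, characterize e.

X = [0.0, 1.0, 2.0, 3.0, 4.0]
Y = [1.1, 2.9, 5.2, 6.8, 9.1]  # roughly 1 + 2x plus noise

n = len(X)
mx, my = sum(X) / n, sum(Y) / n

# closed-form ordinary-least-squares solution for slope and intercept
p1 = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / sum((x - mx) ** 2 for x in X)
p0 = my - p1 * mx

# characterize e: the residuals left over after the fit
e = [y - (p0 + p1 * x) for x, y in zip(X, Y)]
print(p0, p1, max(abs(r) for r in e))
```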
