Elon open sourced Grok AI

https://academictorrents.com/details/5f96d43576e3d386c9ba65b883210a393b68210e

  1. 1 month ago
    Anonymous

    >+600GB of VRAM necessary
    oh lawrd

    • 1 month ago
      Anonymous

      >he fell for the 'less than 600GB ram' meme
      ngmi

    • 1 month ago
      Anonymous

      Just cpumaxx with dual socket and 1tb of ram

      • 1 month ago
        Anonymous

        VRAM, not RAM

        • 1 month ago
          Anonymous

          >he doesn't want to load on RAM to fasttrack his bootstrap to solutions using Grok ASAP
          Really? One week for a token is more than enough for humans. You won't make it with that attitude, I'm sorry.

    • 1 month ago
      Anonymous

      oh well, maybe i can run it in 2075

      • 1 month ago
        Anonymous

        Jim Keller will save us. Pray, brother.

  2. 1 month ago
    Anonymous

    BASED, great white hope

    • 1 month ago
      Anonymous

      Mistral is better and cheaper. The frogs won.

      • 1 month ago
        Anonymous

        Tpbp

      • 1 month ago
        Anonymous

        >only 100 GB
        so that's only 2x A100 aka 20k list price aka 40k street price
        a bargain

        • 1 month ago
          Anonymous

          So 3090s

          • 1 month ago
            Anonymous

            4 3090s*

        • 1 month ago
          Anonymous

          Better than the 8xH100 that Grok needs.

        • 1 month ago
          Anonymous

          >+600GB of VRAM necessary
          oh lawrd

          Jim Keller will save us.

  3. 1 month ago
    Anonymous

    can i run this at home? i have a 3080

    • 1 month ago
      Anonymous

      only if you have 60 of them in parallel
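Back-of-envelope on the "60 of them" figure, assuming ~314B fp16 parameters (2 bytes each) and ~10 GB of usable VRAM per 3080; real overhead (KV cache, activations, fragmentation) pushes it higher:

```python
import math

# Rough sizing, assuming Grok-1's ~314B parameters stored as fp16
# (2 bytes each) and ~10 GB of usable VRAM per RTX 3080. The weight
# values here are assumptions for illustration, not measured numbers.
PARAMS = 314e9
weights_gb = PARAMS * 2 / 1e9          # fp16 weight footprint in GB
cards = math.ceil(weights_gb / 10)     # 3080s needed just for weights
print(f"{weights_gb:.0f} GB of weights -> {cards} x 3080")
```

So "60 of them" is only a slight undercount: closer to 63 on paper, before any overhead.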

  4. 1 month ago
    Anonymous

    is this officially the biggest open source model in history?

  5. 1 month ago
    Anonymous

    whats min vram for this?

    • 1 month ago
      Anonymous

      >Chunky beast, needs 320GB VRAM likely 4 bit, likely is being run on 8 bit on 8 x 80GB GPUs
      >Emad
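The bit-widths in that quote are easy to sanity-check. A rough sketch, assuming ~314B parameters and ignoring per-layer scales and metadata (the torrent's ~318 GB download lines up with the int8 row):

```python
# Back-of-envelope VRAM for ~314B weights at common quantization widths.
# Parameter count is approximate; per-layer metadata is ignored.
PARAMS = 314e9
sizes_gb = {name: PARAMS * bits / 8 / 1e9
            for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]}
for name, gb in sizes_gb.items():
    print(f"{name}: {gb:.0f} GB")
```

By this arithmetic, 8 x 80 GB = 640 GB fits fp16 with almost nothing to spare, and int8 comfortably; the quoted "320GB likely 4 bit" figure matches int8, not 4-bit.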

    • 1 month ago
      Anonymous

      more than you can afford pal

      • 1 month ago
        Anonymous

        jej

  6. 1 month ago
    Anonymous

    can't it be quantized or some shit
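It can. Quantization just stores each weight in fewer bits. A toy sketch of symmetric int8 quantization in plain Python (the weight values are made up; real schemes like GPTQ, AWQ, or llama.cpp k-quants quantize per group and correct for rounding error):

```python
# Toy symmetric int8 quantization of one small "tensor": scale so the
# largest magnitude maps to 127, round to integers, then dequantize to
# inspect the error. Weights are invented for illustration.
weights = [0.42, -1.37, 0.08, 0.91, -0.55]
scale = max(abs(w) for w in weights) / 127
quantized = [round(w / scale) for w in weights]   # int8 range [-127, 127]
dequantized = [q * scale for q in quantized]
max_err = max(abs(a - b) for a, b in zip(weights, dequantized))
print(quantized, f"max error {max_err:.4f}")
```

At 8 bits the worst-case rounding error is half a quantization step; the fights start at 4 bits and below, where that step gets big enough to hurt quality.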

  7. 1 month ago
    Anonymous

    Elon dude fr tho, open source some hardware to run AI on, frick this 1000TB VRAM bullshit

  8. 1 month ago
    Anonymous

    Allowing the LLM's weights to be downloaded is not "open sourcing." That's like claiming a program is open source because you can download the binary.

    • 1 month ago
      Anonymous

      This. Especially since they've been quantized. It's like releasing the assembly, but not the original C of a program.

    • 1 month ago
      Anonymous

      The code and associated Grok-1 weights in this release are licensed under the Apache 2.0 license. The license only applies to the source files in this repository and the model weights of Grok-1.

  9. 1 month ago
    Anonymous

    don't care

  10. 1 month ago
    Anonymous

    good morning sers

    • 1 month ago
      Anonymous

      heh

      [...]

      I don't get it. What's the point here? Recruiters will check, so you can't get away with this

  11. 1 month ago
    Anonymous

    https://github.com/xai-org/grok-1/issues/9

    • 1 month ago
      Anonymous

      heh

  12. 1 month ago
    Anonymous

    [...]

    why are you putting spaces after your quote arrows?

  13. 1 month ago
    Anonymous

    [...]

    i'm trying to shame you for posting like a newbie. not sorry if that upsets you
    lurk six months before posting again

  14. 1 month ago
    Anonymous

    So how do I actually run it, assuming I don't have access to Ngreedia giga-GPUs?

  15. 1 month ago
    Anonymous

    Are they going to keep working on it? It's like 10x the size of Mistral's model but performs worse than one I can already run locally.

  16. 1 month ago
    Anonymous

    ugly homies

  17. 1 month ago
    Anonymous

    Someone screencap the twitter thread
    https://twitter.com/elonmusk/status/1769447665723470236

  18. 1 month ago
    Anonymous

    This seems like a very clever and veiled attempt at regulation. Once China uses it for something nefarious, the gov will 100% crack down on open models. Knowing Musk, something like this wouldn't be far-fetched.

    • 1 month ago
      Anonymous

      >actually open source AI is a scam designed to lead to centralized control of AI

      • 1 month ago
        Anonymous

        Makes sense. Musk never open sourced grok-0 and clearly doesn't plan on open sourcing any future models. Actual open source AI would be a huge threat to corporations like X that rely on spreading misinformation anyway. It would never benefit them in any way.

        • 1 month ago
          Anonymous

          More along the lines of Grok killing all the others.

    • 1 month ago
      Anonymous

      Grok is fricking shit though. China has made a better and much more efficient open model with Qwen already

    • 1 month ago
      Anonymous

      From someone:

      Yes, as if we don't have dozens of good open models already, somehow Musk's thing will be that. Yes maybe he would like some regulation, but at the same time this isn't enough at all.
      In practice I don't see this being more than him showing he's not hypocritical during his OpenAI lawsuit that is about getting GPT-4 weights to be open too.

      Typical for this idiot. How many more AI experts warning about the dangers of AI will it take for insane egoists like Musk and Zuckerberg to realize that AI should not be open source?

      I wish you doomers would disappear from this earth sooner. If Clippies killed us all, it'd be a relief if only because you guys would be gone.
      In all seriousness though, these models are harmless and you're an idiot sucking corpo dick, nobody but big companies benefit from keeping their moat.
      Even if the situation was like you said, I'd trust it more in any anon's hands than what the corpos are doing right now with their models, it's all purely SJW brainwashing. Is that what you really want? Better to have it in everyone's hands and keep a good equilibrium than be forever a slave to these companies.
      Seriously, frick off. In any case, all this doomer logic is pointless: neural networks can't be considerably more dangerous than humans, and that's fine, for reasons that are a bit long to go into here. Besides, doomers base their reasoning on GOFAIs, which aren't even proven to be possible, or to work as AGIs to begin with.
      All that you get with closed models is that you (and everyone else) gets cucked, that is all.

      Grok is fricking shit though. China has made a better and much more efficient open model with Qwen already

      It does seem mediocre. I guess it's the biggest MoE model. I'm not a big fan of them as they're not that useful for local, but they are useful for people with a lot of VRAM and GPUs who want the fastest speed they can get.
      Should still have some use for researchers and potentially even local with various hacks if people bother implementing them, but as others said, better models exist out there.
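For context on the MoE speed point: per-token compute scales with the *active* parameters, while VRAM scales with the *total*. A rough sketch using Grok-1's reported config (8 experts, 2 routed per token, roughly 86B active out of ~314B total; xAI's own note says ~25% of weights are active, so treat both figures as approximate):

```python
# Grok-1 is reported as an 8-expert MoE with 2 experts routed per token,
# roughly 86B active of ~314B total parameters (figures approximate).
# Per-token compute tracks active params; VRAM footprint tracks total.
TOTAL_B = 314
ACTIVE_B = 86
ratio = ACTIVE_B / TOTAL_B
print(f"per-token compute ~ {ratio:.0%} of an equally sized dense model")
```

Which is exactly the trade described here: fast if you can fit the whole thing, useless if you can't.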

      • 1 month ago
        Anonymous

        Makes sense. Musk never open sourced grok-0 and clearly doesn't plan on open sourcing any future models. Actual open source AI would be a huge threat to corporations like X that rely on spreading misinformation anyway. It would never benefit them in any way.

        That's the most worrying part. I think he's just doing this for brownie points in court, same as he did with open sourcing some car designs. If he was serious he'd have also given us the fp16 weights and the other versions too, but I think it's fine to have this. This is closer to GPT-4's design than existing models, but yes, it seems rather average.

        [...]

        Jim Keller will save us.

        I wished for this many years ago (since 2020). I think the guy has good intentions, but he's not executing well enough, although some of his side projects may lead in good directions that could help with this.
        I have more hopes for China at this point: they've been pushed hard by the US wanting to deny them AI and semiconductors and have finally managed to catch up. They have a good A100 clone by now, and in half a year to a year they'll have HBM. As long as they don't frick with Taiwan,
        we might be looking at sweet things like AI chip dumping that would obliterate nvidia. What a sight that would be; who wouldn't buy that! Not only that, it's a country that lacks doomer influencers in politics who'd want to DRM your GPUs or restrict open models. That truly is perfect.

  19. 1 month ago
    Anonymous

    Typical for this idiot. How many more AI experts warning about the dangers of AI will it take for insane egoists like Musk and Zuckerberg to realize that AI should not be open source?

    • 1 month ago
      Anonymous

      Good morning schlomo

    • 1 month ago
      Anonymous

      Better than only government glowie psychos having access to it.

  20. 1 month ago
    Anonymous

    >Download 318.24GB

  21. 1 month ago
    Anonymous

    i have 40 rtx a4000 at work, this will be magnificent

    • 1 month ago
      Anonymous

      have you tried it? is it censored?

  22. 1 month ago
    Anonymous

    [...]

    >Now they are deliberately using the term open source in a way that doesn't allow the peasants to build and modify the software.
    Huh?

    • 1 month ago
      Anonymous

      I bet you think binary blobs are "open source" too.

  23. 1 month ago
    Anonymous

    [...]

    It's not open source but you're still a flaming homosexual

  24. 1 month ago
    Anonymous

    It's not practical in its current form but I think this is still an Elon W.

  25. 1 month ago
    Anonymous

    who gives a frick whether it's open source or not, you homosexuals. The REAL question is: is it censored, or will we finally have an LLM that answers very specific questions related to the manufacturing of explosives?

    • 1 month ago
      Anonymous

      that wouldn't be a problem if it were actually open source, not just binary weights.
      you could just do your own alignment.

  26. 1 month ago
    Anonymous

    in 10 years we will be able to run this locally on our phones

    • 1 month ago
      Anonymous

      highly doubt it
      10 years ago phones weren't that much weaker than today's phones and I don't see any significant advancements in hardware in the near future
      it's more likely that there will be many ai services hosted in the cloud that anyone will be able to use from their phones

      • 1 month ago
        Anonymous

        well yeah
        https://nanoreview.net/en/soc/apple-a9
        https://nanoreview.net/en/soc/apple-a16-bionic
        transistor count increased only 8x, and transistor size shrunk 3.5x, in 10 years. we will see what 128B transistors at 1nm can do in the future.

    • 1 month ago
      Anonymous

      highly doubt it
      10 years ago phones weren't that much weaker than today's phones and I don't see any significant advancements in hardware in the near future
      it's more likely that there will be many ai services hosted in the cloud that anyone will be able to use from their phones

      For what it's worth, phones are now able to beat Go professionals, while AlphaGo was run on a supercomputer. It's not surprising that magic black box programs can get more efficient.

      • 1 month ago
        Anonymous

        Go professionals are now able to beat Alphago tho.
        >We attack the state-of-the-art Go-playing AI system KataGo by training adversarial policies against it, achieving a >97% win rate against KataGo running at superhuman settings. Our adversaries do not win by playing Go well. Instead, they trick KataGo into making serious blunders. Our attack transfers zero-shot to other superhuman Go-playing AIs, and is comprehensible to the extent that human experts can implement it without algorithmic assistance to consistently beat superhuman AIs. The core vulnerability uncovered by our attack persists even in KataGo agents adversarially trained to defend against our attack. Our results demonstrate that even superhuman AI systems may harbor surprising failure modes.
        https://arxiv.org/abs/2211.00241

  27. 1 month ago
    Anonymous

    [...]

    corporations don't mind when freetards give them open source software and drive down the cost of tech labor, on the contrary they benefit greatly from it and encourage it.

    • 1 month ago
      Anonymous

      you are delusional if you think you could train your own transformer

      • 1 month ago
        Anonymous

        >2022

  28. 1 month ago
    Anonymous

    They released the back-end code and data used to train Grok in late 2023.
    >Grok is known for its ability to provide sarcastic and edgy responses
    Freagin' trash.

  29. 1 month ago
    Anonymous
    • 1 month ago
      Anonymous

      >reliable safeguards against malicious use

      So it can't say anything that upsets israelites/trannies?

      • 1 month ago
        Anonymous

        why would you want to upset people? that's rude.

        • 1 month ago
          Anonymous

          It's not that my goal is to upset these people, it's that I don't want to be constrained by their sensibilities.

    • 1 month ago
      Anonymous

      Seems neat, honestly. There's no way I'll ever have the hardware to run it, but I've never actually thought of a use case for LLMs anyway.

    • 1 month ago
      Anonymous

      >rust
      grok confirmed troony

  30. 1 month ago
    Anonymous

    I heard that Grok is woke, unfortunately
