Hopper, AMD btfo

Nvidia now has the tech to fuse two dies so they act as a single one.

This is more advanced than what AMD is doing with chiplets.

Soon Nvidia will make a monster gaming GPU that is 4x faster than anything AMD can deliver.


  1. 2 months ago
    Anonymous

    >Soon Nvidia will make a monster gaming GPU that is 4x faster than anything AMD can deliver.
    AI accelerator chip that didn't make the cut for the datacenter that's now being sold as a "gaming GPU" for $4k*
    And people will buy it because AMD admitted they've given up trying to compete at the top end (again)

  2. 2 months ago
    Anonymous

    Nvidia GPU + AMD CPU master race reporting in.

    Frick Israentel.

    • 2 months ago
      Anonymous

      The WMAF of hardware.

    • 2 months ago
      Anonymous

      That bringup board looked insane.

      It's sad they're so set on pushing their own CPU. I was thinking that thing would be a beast with a 9684X. It looks as if Nvidia wants to eliminate all the server manufacturers, like they're trying to do with AIB partners, and just do everything themselves for maximum profit.

      >Biggest generational improvements ever and they're pulling it off on the same 4N process as H100. The 3nm and 2nm generations will be unbelievable.

      Yeah, I thought it was kinda strange. Seems like they're holding back. Same VRAM amount as a MI300X despite using HBM3e instead of HBM3.

    • 2 months ago
      Anonymous

      AMD CPU + Intel GPU = GOATED

  3. 2 months ago
    Anonymous

    Should I buy stocks?

    • 2 months ago
      Anonymous

      Yes

    • 2 months ago
      Anonymous

      If you can afford them

      • 2 months ago
        Anonymous

        One can buy fractional shares.

        • 2 months ago
          Anonymous

          it's even better if you buy a leveraged ETF
          10x is the minimum if you don't wanna miss the AI train

    • 2 months ago
      Anonymous

      It's been a safe option for a couple of years. You should have bought them yesterday.

      • 2 months ago
        Anonymous

        Every day I regret not slurping the dip in October 2022. Was too busy browsing /sdg/ threads ahahaha

  4. 2 months ago
    Anonymous

    Nvidia is not a foundry. They have access to services and IP provided by their foundry partners.

    • 2 months ago
      Anonymous

      What is this foundry meme? Apple owns no fabs and they're still worth trillions on the iPhone IP.

      • 2 months ago
        Anonymous

        iPhone / Apple services revenue might be a bit more solid over time than LMAO AGI IN TWO MORE WEEKS MORE COMPUTE NAO revenue

        • 2 months ago
          Anonymous

          You think Apple makes money on services and not their expensive line of brushed metal they don't manufacture? Shut up moron.

  5. 2 months ago
    Anonymous

    The real moat is in the switch chip. That's what AMD and Intel are lacking.

    • 2 months ago
      Anonymous

      I think AMD is working with Broadcom on a similar switch.

    • 2 months ago
      Anonymous

      MI300 has 10.8TB/s of bidirectional bandwidth between all 4 dies at full capacity, vs 10TB/s between the 2 dies here.
      It does have a caveat, as a single die can only access 2.7TB/s at most vs 5, but both chips are fully coherent, so it's all moot.
      Same tech, both via CoWoS-L.
      MI300 is the more advanced packaging though, as the compute dies are connected to the base dies via SoIC with a total bandwidth of over 17TB/s bidirectional across all 8 compute dies.
      All in all Blackwell feels like an evolution of Hopper: bigger numbers, moar Si, but no single killer feature to differentiate it.
      AMD can create a larger chip, and Si spam is what ultimately wins. AMD already did more primitive MCM with the MI250X, but that used less advanced packaging, so it presented as 2 devices on one package due to insufficient bandwidth.

      MI400 will integrate all the things, plus that xGMI switch Broadcom is developing with AMD. NV's overall system architecture advantage is neutralised next year, along with being leapfrogged in hardware again.
      Software will become basically even eventually as well, so it comes down to roadmap execution.
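The single-die figures in the comparison above fall out of an even split of the quoted package totals. A quick sketch, treating the TB/s numbers from the post as given rather than verified:

```python
# Even split of package-level bidirectional bandwidth across dies,
# using the TB/s totals quoted in the post as assumptions.
def per_die_bw(total_tbps: float, n_dies: int) -> float:
    """Bandwidth share a single die sees under an even split."""
    return total_tbps / n_dies

mi300_share = per_die_bw(10.8, 4)       # ~2.7 TB/s per die
blackwell_share = per_die_bw(10.0, 2)   # ~5.0 TB/s per die
print(mi300_share, blackwell_share)
```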

      >https://twitter.com/ctnzr/status/1769852326570037424
      >AYYMDEADPOORgayS BTFO
      >NVIDIA HAD CHIPLETS YEARS BEFORE AYYMDEAD

      The crossbar is no secret, and that is a monolithic die with 2 hemispheres, near side needing data from far side leads to a nasty latency hit.

      >Biggest generational improvements ever and they're pulling it off on the same 4N process as H100. The 3nm and 2nm generations will be unbelievable.

      The uArch gains are solid considering it's the same node family, so they couldn't rely on a full node shrink for big gains; instead they had to double the die size per device to hit their generational targets. PPW gains are solid, but the fact it can go to 1200W is not the greatest indicator for something that's supposed to excel at 700-1000W.

      >That bringup board looked insane.
      >It's sad they're so into pushing their CPU. [...]
      >Same VRAM amount as a MI300X despite using HBM3e instead of HBM3.

      Most sales will continue to be x86 boxes; NV is desperately trying to push the full stack above all else, but they'll ultimately hurt themselves and their partners through futility. N3B sucks, so they had to use N4 for TTM. More than 192GB requires more HBM stacks or 12-hi stacks, and neither is ready yet.

      • 2 months ago
        Anonymous

        The performance gains from using the whole NV hardware stack instead of x86 seem pretty big. There's nothing on the PCIe roadmap etc that can match NVLink. I would guess that if the AMD/Intel datacenter APU approach wins Nvidia will be able to copy it before the CUDA hegemony breaks.

        • 2 months ago
          Anonymous

          Even if Nvidia products offer better performance, you can stack up 80% cheaper alternatives for 90% of the performance.
          Nvidia's profit margins are a flash in the pan; it's not like it's that hard to design / make GPUs

          • 2 months ago
            Anonymous

            Hardware cost doesn't matter much for datacenter. 90% of their cost is power and infrastructure. Doesn't matter how cheap your hardware is if it doesn't win on density and efficiency. Putting a lot of bandwidth and ALU on a chip is easy, but making a whole stack to support it is hard. They sell MI300 cheap but it costs as much as B200 to manufacture and has worse AI performance. AMD takes lower margins because they wouldn't move any products otherwise. If a price war ever happens they will be the ones who are squeezed out. I doubt there will be one though. That's not how the sector works.
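The density/efficiency argument can be sketched with a toy calculation. All numbers below are hypothetical, chosen only to illustrate the shape of the trade-off, not actual prices, wattages, or performance figures:

```python
# Toy model of the density/efficiency argument: under a fixed facility
# power budget, perf-per-watt (not sticker price) bounds how much total
# throughput you can deploy. All numbers are hypothetical.
def deployable_perf(facility_kw: int, watts_per_gpu: int, perf_per_gpu: int) -> int:
    """Total throughput that fits inside the facility power envelope."""
    n_gpus = facility_kw * 1000 // watts_per_gpu
    return n_gpus * perf_per_gpu

# Same 1 MW hall: cheap, inefficient parts vs pricey, efficient ones.
cheap = deployable_perf(1000, 500, 40)      # 2000 GPUs deployed
pricey = deployable_perf(1000, 1000, 100)   # 1000 GPUs deployed
print(cheap, pricey)
```

Under a fixed power envelope the part with better perf-per-watt wins on total deployable throughput, no matter how cheap the alternative is per box.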

          • 2 months ago
            Anonymous

            For traditional supercomputing AMD has really good efficiency.

            It's just a tiny market now, compared to providing CUDA accelerators to all the companies copy pasting AI code.

          • 2 months ago
            Anonymous

            Well MI300 was built for HPC firstly, DoE requirements for El Capitan specified a lot of stuff that isn't useful for hyperscaler AI memes.
            It has a lot of PCIe, very good standard for HPC due to very low error rates which is mandatory for mission critical stuff.
            Sucks for AI, which needs as much bandwidth between chips as possible.
            MI400 will be built for the biggest TAM possible, so stripping out HPC stuff for AI for better area and power eff and using the savings for giga networking and data decompression.

          • 2 months ago
            Anonymous

            >it's not like it's that hard to design / make GPUs
            Supreme moronic euro

          • 2 months ago
            Anonymous

            >it's not like it's that hard to design / make GPUs
            If that were true, why are there only two relevant players in the desktop gpu space? And why isn't AMD/Qualcomm/Apple offering GPUs with similar performance?

        • 2 months ago
          Anonymous

          NV is saving its best configs for the full-stack solution; basically they're admitting they'll lose their stranglehold outside of large-scale training, so they're all in on retaining the highest-margin market almost entirely.
          PCIe/CXL is a far different standard from NVLink, which is a SerDes/networking interconnect standard a la AMD's xGMI or just good old Ethernet/InfiniBand.
          AMD/Broadcom are more than able to compete in this arena; Intel is pretty lost and will need to rely on external partnerships to stand any chance.

  6. 2 months ago
    Anonymous

    god i hate nvidia so much it is unreal

  7. 2 months ago
    Anonymous

    >Soon Nvidia will make a monster gaming GPU that is 4x faster than anything AMD can deliver.
    lol no it won't. They'll hold onto the technology and continue to release slight upgrades until AMD pushes them.

    • 2 months ago
      Anonymous

      If they want to kill the competition, yes they will; they'll monopolize everything (and more) in a few years.

  8. 2 months ago
    Anonymous

    https://twitter.com/ctnzr/status/1769852326570037424

    AYYMDEADPOORgayS BTFO

    NVIDIA HAD CHIPLETS YEARS BEFORE AYYMDEAD

  9. 2 months ago
    Anonymous

    A lot of fancy tech words and animations.
    Only thing I care about is them slapping at least 48GB on the 5090. Anything less isn't worth the price tag they're going to have on it.
    If it has less, then I'm not upgrading my 3080 and I'll wait until the 6000 series or whatever refresh might have that 48GB.

    • 2 months ago
      Anonymous

      This is not that kind of conference.

      • 2 months ago
        Anonymous

        >18.1B revenue
        >4.7B cost of revenue
        >13.4B gross profit
        Their products are fricking ~74% gross margin. And people honestly think they aren't price gouging. They're completely open about it here.
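For reference, the margin implied by those three quoted figures works out like so:

```python
# Gross margin from the quoted quarter (figures in $B, from the post).
revenue = 18.1
cost_of_revenue = 4.7
gross_profit = revenue - cost_of_revenue   # 13.4
gross_margin = gross_profit / revenue      # ~0.74
print(f"{gross_margin:.0%}")
```

So roughly 74 cents of every revenue dollar is gross profit, before R&D and operating costs are subtracted.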

        • 2 months ago
          Anonymous

          Blackwell is only a small price bump because competition is going to ramp up massively, so they're going to give up a bit of margin.
          They have plenty of room to fall; it just depends on how strong demand remains. If it starts to crater, the numbers will tank to hell.

    • 2 months ago
      Anonymous

      They're not putting 48GB on the 5090. It would completely eat the ML hardware's lunch

      • 2 months ago
        Anonymous

        I think it's a bit of a coin flip.
        They don't want to put 48GB on it, but at the same time they want to win all of the mindshare out there, because their stock is in the largest bubble in human history and they want to keep it going.
        If they create a halo product that's unimpressive, they will lose a lot more money in terms of lack of hype driven investment, than what they lose in cannibalizing a bit of the dedicated ML hardware.

        • 2 months ago
          Anonymous

          B100/B200 are around 192GB of VRAM. Even if they doubled the 5090 to 96GB it wouldn't be competing for the same market

    • 2 months ago
      Anonymous

      You are never getting anything over 24GB vram from nvidia ever again.

  10. 2 months ago
    Anonymous

    amd has mi300a too

  11. 2 months ago
    Anonymous

    Silicon interposers are fricking expensive. NVIDIA is just throwing money at making terrible python shit run a little faster.

    They might make some $4k GPUs for shits and giggles too, but that's hardly relevant.

    • 2 months ago
      Anonymous

      Very affordable for their ML/datacenter customers, which is the main reason it won't be utilized for their gayming stuff. The gayming stuff will be the die rejects that couldn't be used on interposers.

      • 2 months ago
        Anonymous

        >Very affordable for their ML/datacenter customers.
        Only because good AI programmers are unobtainium. If you need all that interconnect bandwidth your code is shit.

        • 2 months ago
          Anonymous

          PS. I mean on the GPU. NVLINK is fine, but the interposer-provided bandwidth is orders of magnitude more. Organic carriers are more than sufficient for NVLINK-level links.

    • 2 months ago
      Anonymous

      I was wondering this myself. The technological achievement here is just 2x the previous chip with a fast, but not latency-free, interconnect? And then they do that every 10 years? Will that work?

  12. 2 months ago
    Anonymous

    Not chiplets; it's more of the superscaling that has been part of GPU hardware development since the beginning.
    >Soon Nvidia will make a monster gaming GPU that is 4x faster than anything AMD can deliver.
    Not going to happen: a combination of economic reality, power consumption, and a lack of demand to justify the massive R&D costs.
    This tech is only going to be used for Nvidia's compute and ML line, which is more lucrative than silly gayming.

  13. 2 months ago
    Anonymous

    >Soon Nvidia will make a monster gaming GPU that is 4x faster than anything AMD can deliver.
    yeah and at 8 gigs of VRAM max kek frick off homosexual marketer

  14. 2 months ago
    Anonymous

    Biggest generational improvements ever and they're pulling it off on the same 4N process as H100. The 3nm and 2nm generations will be unbelievable.

  15. 2 months ago
    Anonymous

    if this doesnt mean GPU heatsinks will get smaller then I dont give a shit

  16. 2 months ago
    Anonymous

    >gaming GPU
    and yet they announced nothing
    they showed off slave robots but no gaming gpus

  17. 2 months ago
    Anonymous

    I wish, just so that freetards admit that Nvidia is better than AMD

    • 2 months ago
      Anonymous

      what is there to admit? nvidia is absolutely gigantic now, one of the most important and financially endowed companies on earth.
      amd is a distant second when it comes to ai chips and gpus, followed by pretty much nothing of significance

  18. 2 months ago
    Anonymous

    Betting there will be an edge inference product next year with a single salvaged b100 chiplet. Should be interesting.

  19. 2 months ago
    Anonymous

    if a single 4090 die draws 400 watts, two 4090 dies together would be 800 watts
    how the frick do you dissipate that heat?

    • 2 months ago
      Anonymous

      If you power limit it to half you still have 80% of the performance; that's how you get an RTX 5000 Ada
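The 50%-power-for-80%-performance claim corresponds to a strongly sublinear power curve. A toy model, where the exponent is fitted to the claim itself rather than to any measured data:

```python
import math

# perf ∝ power**alpha, with alpha chosen so that a 50% power limit
# gives exactly the 80% performance asserted above. Illustrative only.
alpha = math.log(0.8) / math.log(0.5)   # ≈ 0.32

def relative_perf(power_fraction: float) -> float:
    """Relative performance at a given fraction of full board power."""
    return power_fraction ** alpha

print(relative_perf(0.5))    # 0.8 by construction
print(relative_perf(0.75))   # ~0.91 under the same toy model
```

Real power-performance curves are part-specific, but the general shape (big power savings for small performance loss near the top of the curve) is why heavy power limits are viable for workstation variants.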

  20. 2 months ago
    Anonymous

    BBC (Big BLACKEDWELL Chip)

  21. 2 months ago
    Anonymous

    >Soon Nvidia will make a monster gaming GPU that is 4x faster than anything AMD can deliver.
    If only. You know ngreedia will only release something barely faster than what amd offers because people will buy it anyway.

    • 2 months ago
      Anonymous

      4090

  22. 2 months ago
    Anonymous

    A gpu that costs more than 10 grand is not a gaming gpu. Nvidia will never have a gaming gpu 4 times stronger than what amd can deliver.

  23. 2 months ago
    Anonymous

    Nvidia already has AMD btfo lol. I say this as someone who almost exclusively uses AMD.
