It's fast
why not just make the number (max data rate) big to begin with
>128 giga tera per second
they played you for absolute fools
You absolute giga-tera-tard.
https://en.wikipedia.org/wiki/Transfer_(computing)
>it's out
>starting development on the standard
OP is a homosexual.
They've figured out higher big number = hype. 5G is not even fully rolled out so you need to be at least at 8G, etc.
6 years ago sagiri was the only girl i fapped to
What's preventing me from making a PCIe 14 and say it has petabit throughput? It's not like these fellows are making a usable implementation
PCIe 6 is already finalized, so the first silicon is coming out now. That means development of the PCIe 7 physical layer is ongoing.
Sharif don't like it.
>Rockin the Casbah
It's not out, they plan to finalize the spec for release by 2025.
This is just what the roadmap is, they're letting hardware companies know now so they can plan product roadmaps years out.
I look like that
i'm gonna sex you
Intel's Core series stagnated the whole computing industry for a while. This is how fast things developed before the Core series.
>Aren't these things moving a little too quickly?
Why are you complaining? It's one of the only parts of modern computing still seeing extremely fast growth. PCI-e is roadmapped out to 10.0 in 2029 and it's supposed to transfer four fricking terabytes in a second.
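That "four terabytes a second" figure roughly checks out if you count both directions of an x16 link and assume bandwidth keeps doubling every generation. A back-of-the-envelope sketch; the per-lane rate through Gen 5 comes from the spec, but the doubling beyond that is an assumed extrapolation (Gen 6+ changes signaling, so this is illustrative only):

```python
# Extrapolate per-direction x16 bandwidth, assuming it doubles each generation.
# PCIe 5.0 x16 is ~63 GB/s per direction (32 GT/s per lane, 128b/130b encoding).
gen5_x16 = 16 * 32 * (128 / 130) / 8   # ~63 GB/s per direction

def x16_gb_s(gen: int) -> float:
    """Hypothetical per-direction x16 bandwidth if doubling continues past Gen 5."""
    return gen5_x16 * 2 ** (gen - 5)

gen10 = x16_gb_s(10)                   # ~2016 GB/s per direction
print(f"PCIe 10.0 x16: ~{gen10:.0f} GB/s each way, ~{2 * gen10 / 1000:.0f} TB/s both ways")
```

So ~2 TB/s per direction, or about 4 TB/s total if you count both ways.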
Because I don't want all my expensive hardware to become e-waste thanks to all the bloatware resulting in fricking terabyte-sized OS installations.
>bus faster than RAM
What is the fricking point of moving shit that fast if it can't be processed?
Personally I can't wait to see PCIe 7 NVMe SSDs completely wasting 90% of the bandwidth of those 4 lanes, because you know, motherboard designers can't be arsed unless it's selling for $600, and it needs to be compatible with Gen3 SSDs.
M.2 was a mistake, waste of lanes, waste of board real estate. Bring something new at least as compact and flexible as SATA or frick off.
I imagine that when we finally start phasing out hard drives we'll start seeing U.2 on desktop. PCI-e 5.0 1x would be as fast as 3.0 4x so conceivably in future we could have cheap mass storage SSDs that only use one lane each or even share lanes. We just need to get to that point where SSDs become desirable for mass storage.
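The x1-vs-x4 equivalence is just per-lane arithmetic, since bandwidth per lane doubles each generation. A quick sketch (transfer rates and 128b/130b encoding per the PCIe 3.0/4.0/5.0 specs):

```python
# Approximate usable bandwidth per PCIe lane, per generation.
# Gen 3 onward uses 128b/130b encoding (2 overhead bits per 128 data bits).
RATES_GT_S = {3: 8.0, 4: 16.0, 5: 32.0}  # transfers per second per lane, in GT/s

def lane_bandwidth_gb_s(gen: int) -> float:
    """Usable GB/s per lane after 128b/130b encoding overhead."""
    return RATES_GT_S[gen] * (128 / 130) / 8  # bits -> bytes

gen3_x4 = 4 * lane_bandwidth_gb_s(3)   # ~3.94 GB/s
gen5_x1 = 1 * lane_bandwidth_gb_s(5)   # ~3.94 GB/s
print(f"Gen3 x4: {gen3_x4:.2f} GB/s, Gen5 x1: {gen5_x1:.2f} GB/s")
```

Same number either way, which is the whole case for single-lane mass-storage SSDs once Gen 5 is commonplace.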
>We just need to get to that point where SSDs become desirable for mass storage.
I have a feeling mass storage will become an obsolete concept before large capacity SSDs will be realized
You already have plenty of people OK with having just a few 1TB/2TB M.2 drives, because having spinning rust is a sin nowadays.
Those people only had 1/2 terabyte hard drives, anyway.
Everything going cloud is a big worry though to be certain. I'll just frick off into the woods in that case, technology would be over for me.
The one good thing is that the cloud is just another computer, which still takes hard drives and SSDs.
Even if the consumer demand for storage fricks off and dies, you'll still have access to storage you just might be paying slightly more for it and not from the usual retailers.
I might still be able to buy ten year old servers yeah, but the problem is that they'd be ten years old. And being a consoomer myself mostly interested in playing vidyagames, there would be frick all I could do on them.
Cloudshit scares the frick out of me unironically. Computers have been a huge part of every era of my life and I know that someday I may have to completely drop my interest in them. It's incredibly terrifying and I would have nowhere to go after that.
>We just need to get to that point where SSDs become desirable for mass storage
SSDs have been around for a long time and I have yet to see any sign that they're getting any closer to substituting high-capacity HDDs.
Furthermore, fools relying on cloud storage for their important stuff are in for a surprise when those sweet vulnerable undersea internet cables suffer an "accident" by an unknown nation interested in disrupting western markets. If shit keeps escalating as it is, it won't take long for something to happen.
No, you don't need a nuclear war to lose access to your Netflix and Google Cloud. It's extremely easy to cut the internet off from half of the world.
I used to make fun of hoarders, but now I see the value of keeping physical copies of some stuff to entertain you for at least a few months or years.
>SSDs have been around for a long time and I have yet to see any sign that they're getting any closer to substituting high-capacity HDDs.
In 2016, I bought a 750GB drive for $400 CAD.
In 2020, I bought a 1TB drive that's ~4x faster for $150 CAD.
We are getting there if not very quickly.
>In 2020, I bought a 1TB drive that's ~4x faster for $150 CAD.
Now go compare that to the max capacity and the price/GB of HDDs of the time.
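Putting rough numbers on that comparison. The SSD prices are from the posts above; the HDD figure is an assumption for illustration, not from the thread:

```python
# Rough $/GB comparison (CAD). SSD figures from the posts above;
# the HDD price is an assumed ballpark for a 2020-era 8TB drive.
ssd_2016 = 400 / 750    # ~$0.53/GB
ssd_2020 = 150 / 1000   # ~$0.15/GB
hdd_2020 = 200 / 8000   # assumed: ~$200 for 8TB, ~$0.025/GB

print(f"SSD 2016: ${ssd_2016:.3f}/GB")
print(f"SSD 2020: ${ssd_2020:.3f}/GB")
print(f"HDD 2020 (assumed): ${hdd_2020:.3f}/GB")
```

SSDs fell roughly 3.5x in four years, but under those assumptions still cost around 6x more per gigabyte than big HDDs, which is the gap that matters for mass storage.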
Too many PCIe NVMe drives can already oversaturate current CPUs' lane budgets.
because shitty manufacturers like gigaboom will find a way to kill your pci-e speeds because you inserted a single nvme ssd
Can we bring back SATA ports with higher speeds again? I don't want my motherboard to be limited to 1-2 nvme drives.
HELLO? SATA?
Imagine if we had SATA 4.0 or 5.0 with 12 Gbps or 24 Gbps speeds per port, instead of being limited to 6 Gbps. We'd get 1 GB/s SATA SSDs, or 2 GB/s SATA SSDs, instead of the current limit of ~550 MB/s for 6 Gbps SATA 3.0. Imagine a dozen of those 2 GB/s SATA SSDs on a single low-end $50 board.
Just 12 Gbps would be enough; you'd at least match a decade-old SAS standard.
Wouldn't even need a pin count increase.
In a world where they can meme 2.5Gb and 5Gb NICs into existence and somehow make them popular, surely they can meme a new SATA standard in.
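Those throughput figures follow from the line rate once you account for SATA's 8b/10b encoding (10 bits on the wire per 8 bits of payload). A quick sketch; the 12 and 24 Gbps rates are the hypothetical SATA 4.0/5.0 above, not real standards:

```python
# Max payload rate for a SATA-style link with 8b/10b encoding.
def sata_throughput_mb_s(line_rate_gbps: float) -> float:
    """Approximate peak payload rate in MB/s for a given line rate in Gbps."""
    return line_rate_gbps * 1000 * (8 / 10) / 8  # strip encoding, bits -> bytes

for rate in (6, 12, 24):  # SATA 3.0, plus the hypothetical 4.0/5.0
    print(f"{rate} Gbps -> ~{sata_throughput_mb_s(rate):.0f} MB/s")
```

6 Gbps gives 600 MB/s theoretical; real drives top out near 550 MB/s after protocol overhead, which matches the figures above, and 12/24 Gbps would land at roughly 1.2 and 2.4 GB/s.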
Because hardware is the limiting factor. It's better to have incremental improvements that can be implemented in a timely manner than it is to jump the gun and make a standard no one can actually implement for two decades.
>You will never own more than 2 M.2 SSD and YOU WILL ENJOY IT.
>YOU'LL HAVE TO USE CLOUD STORAGE IF YOU WANT MORE THAN 1 TB
Bro just buy a pcie nvme expansion card
>instead of adding <$2 of electrical support to upgrade SATA 3 to SATA 4/5, just buy a $100 piece of hardware to add in 3-4 extra nvme ports
YIKES
> just buy a $100 piece of hardware to add in 3-4 extra nvme ports
Hell, try ~$900
AAAAAAAAAAAAAAAAAAAAA
rather buy a used mellanox for that price
Based Mellanox poster
Ah yes, the people that own PLX (Broadcom). Just get a PCIe x4 to U.2 adapter, or a board that supports bifurcation.
PCIe lanes are limited, that's the whole point of the discussion. Once you run out on your PC you have to go to very expensive HBAs/PCIe switches.
Fricking SAS got an upgrade to 24Gbps in 2018 because while it was clear NVMe was going to dominate there was still a need for a protocol that doesn't rely on PCIe lanes.
That's for the professional market and they'll pay anything, now check chink stuff that work exactly as well
>now check chink stuff that work exactly as well
It doesn't exist
look harder
I've looked
The only tri-mode NVMe PCIe 4.0 HBA I've found is based on the same chipset, has half the ports and is $300 more
then get one for a single port for like $20
wow. and I was considering grabbing a new mobo for pcie 4 speeds...
Nakadashi Shagriri-chan
god i wish i was her
should we try parallel again?
Can today's silicon even handle those speeds without getting ludicrously expensive?
I imagine the demand comes from data centers that'll have 256 core CPUs and beyond in the near future.
here's a copy of an answer that sums it up pretty well
>At face value sending 8 bits at once seems faster than sending just one at a time. In reality, there are other factors that affect parallel transfers.
>The Achilles heel of parallel is the wait time between bytes. When the transmitter places a set of 8 bits on 8 lines it takes those bits a non zero amount of time to arrive at the receiver. Furthermore, those bits do not arrive at the same time. There is a spread between the fastest bit and the slowest bit. This spread requires that there is a blanking time between when the bits are asserted and when the bits are read.
>At slow speeds this blanking time is irrelevant. As the speeds increase this blanking time becomes significant. The blanking time has to account for the worst-case scenario. This includes the cable, connectors, circuit board traces, etc... As speeds increase the blanking time will become the limiting factor.
>With serial, all the bits come down the line back to back and require no blanking time. A speed increase only really requires the ability to cram the bits down the line faster. I'm glossing over that with serial signal integrity still needs to be maintained. Serial is also less sensitive to cabling abnormalities.
the best way to do parallel in the current day is to just have multiple individual serial lines on one interface. coincidentally, that's exactly the concept that drives pci, usb c, sata, Ethernet, FireWire, all non analog video connectors, and literally any other plug you can think of that has more than 4 pins.
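The blanking-time argument above can be put in toy-model form: a parallel bus has to wait out the worst-case skew between its lines before latching each word, while a serial line just runs at its bit rate. The bit times and skew below are made-up illustrative values, not figures for any real bus:

```python
# Toy model: parallel bus throughput limited by inter-bit skew.
# Each word must wait a blanking interval covering the worst-case skew.
def parallel_throughput_bps(bit_time_s: float, skew_s: float, lines: int = 8) -> float:
    """Bits/s for a parallel bus that pauses skew_s between words."""
    return lines / (bit_time_s + skew_s)

def serial_throughput_bps(bit_time_s: float) -> float:
    """Bits/s for a single serial line with no blanking time."""
    return 1 / bit_time_s

# At slow speeds (1 us bit time, 10 ns skew) the blanking barely matters:
slow = parallel_throughput_bps(1e-6, 10e-9)   # ~7.9 Mb/s vs the 8 Mb/s ideal
# At high speeds (1 ns bit time, same 10 ns skew) the skew dominates:
fast = parallel_throughput_bps(1e-9, 10e-9)   # ~0.73 Gb/s across 8 lines
fast_serial = serial_throughput_bps(1e-9)     # 1 Gb/s on ONE line
print(slow, fast, fast_serial)
```

With those numbers, one serial line outruns the whole 8-line parallel bus, which is exactly why modern interfaces are bundles of independent serial lanes.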
she's thirteen what the frick are those breasts op
i mean she draws sex scenes on that tablet but still
nobody needs that much speed
My GPUs and SSDs ain't even maxing out PCIe 3 or 4, lmao. It's a meme; there isn't even any PCIe 5 anything.
Well, to be fair, games right now are made under the assumption that most people have PCIe 3.0 support; once that trend shifts, games will likely utilize the bandwidth for more intense asset streaming as appropriate.
But yeah, this isn't really relevant to the consumer market. These newer PCIe standards are mostly for enterprise to start using; it takes years before they trickle into consumer spaces, as PCIe 4.0 has just started to.
Stuff like SAM and Resizable BAR on PCIe Gen 4 are very underutilized, and so is DirectStorage.
GPU-to-CPU interconnect speed/bus will matter more than CPU-to-RAM speeds going forward, I think.
i hear there's not even much performance difference between 3 and 4 when gaming
Shit like this makes me not care to upgrade
That and diminishing returns from GPU
>Shit like this makes me not care to upgrade
>That and diminishing returns from GPU
This. I got my two PCIe 4 mobos, an X570 and a B550-F; those PCs will be alright for a long time. Maybe mGPU/SLI/CrossFire is coming back? I'm assuming PCIe 5, 6, and 7 are mainly for server bandwidth and bifurcation.
PCIE x4 GPUs anyone?
Imagine how hot PCIe 7 hardware will be
What happened to pci-e 6.0?
Standard is finalized. If there are any implementations they'd be in server environments right now.
Blew my fricking mind when I realized PCIe 3.0 came out in 2010
yeah, PCIe had a momentary pause in development, but now we're seeing them catch back up to integrators' demand for massive IO. Though latency will never recover, as usual.
It paused? That guy earlier said it was because of Intel's Core series.
>7
I didn't even know there was a 6
Wake me up when this matters for GPUs
UOOOOHHH, SEX SEX HNGGGG DAMN FEMALE KID SEDUCING ADULT sRAPE CORRECTION NEEDED
This thread is about PCIE slots
Sagiri-chan's cute, pink PCIe slot...
Doesn't really matter, they are all compatible. Doubt there's anything that maxes 4.0 tho
just give me faster SATA
nakadashi that sexy nbr trash
>Aren't these things moving a little too quickly?
Spec writing is easy, any moron can be your ideas guy. The actual implementation is what's hard, and you'll just have to pay up for what you don't need. It's why the majority of phones are stuck at usb2 speeds and wifi chips are cheap garbage that never meet the spec.
aw great, next-gen mid-range GPUs could be just PCIe 7 x1 because it's fast enough, leaving all the current mobos in the dust
>spec in 2025
plus how many years more before it's in consumer hardware?
5?
>muh consoomer hardware
Shut The Frick Up
Nobody cares that Gaymen RGB Globohomosexual crap lags real tech by 5 years
Are you too poor or gay to afford normal hardware?
I wonder why this triggered a boot to reply. Or maybe it's posting indiscriminately? Disregarding that, I think adoption will be faster for future specs, considering
@87466005
kys zoomer npc Black person cattle
go add more homosexual strips to your gaymen pee see
>only knows meme vocabulary/definitions
Oh no, it's moronic! At least that explains why it replied.
I'm pretty sure I'm still on pcie 3. How much does this matter anyway for someone only using it for a GPU? Pretty sure even the new 4000 series isn't gonna reach a bottleneck for me.
The fastest next-gen cards may feel a bit of a hit even at x16 PCIe 3.0, but I doubt it will be too significant at this point. AMD's lower-end cards have been pulling some absolute garbage moves and come with fewer than 16 PCIe lanes, though; run a card like that on PCIe 3.0 or below and it can take a fairly big hit, since it's already at the limit due to its cut-down lanes. Full 16-lane cards should still be fine, I think.
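The cut-down-lane point is easy to see in numbers: the effective link is the narrower of what the card and the slot provide, times the per-lane rate of the older generation. A rough sketch with approximate per-lane figures (128b/130b encoding, per the PCIe specs):

```python
# Effective link bandwidth = card's lane count * per-lane rate of the host's gen
# (when the card has fewer lanes than the x16 slot).
PER_LANE_GB_S = {3: 0.985, 4: 1.969}  # approx usable GB/s per lane

def link_gb_s(card_lanes: int, host_gen: int) -> float:
    """Approximate link bandwidth for a card in a slot of the given generation."""
    return card_lanes * PER_LANE_GB_S[host_gen]

full = link_gb_s(16, 3)  # full x16 card on a Gen3 board: ~15.8 GB/s
cut  = link_gb_s(4, 3)   # x4-only budget card on a Gen3 board: ~3.9 GB/s
print(f"x16 @ Gen3: {full:.1f} GB/s, x4 @ Gen3: {cut:.1f} GB/s")
```

An x4 card dropped into a Gen3 slot gets only ~3.9 GB/s: half of its native x4 Gen4 bandwidth, and a quarter of what an x16 card pulls from the very same slot, which is why those cards suffer disproportionately.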
Interesting
I hoped it'd be faster
Black person consumer triple cattle homosexual have a nice day
Buy real hardware or Wrong board