This. Get a processor from the last couple generations. E-waste tier stuff is cheaper in the short term, but given how much sooner you have to replace it, it ends up costing you more.
no dummy the real upgrade path would be to keep everything and just swap for an i7-2600. Now you have another 4 threads and are set for LIFE.
Just stay on vanilla bios and disable all mitigations. You dont do anything important anyway amirite.
Black person, you're using 16 lanes for the gpu, you still have lanes left, stfu
2 years ago
Anonymous
have fun with those 4 from the chipset. ever ran a nvme from them? try it.
2 years ago
Anonymous
Isn't SATA routed through the chipset as well?
2 years ago
Anonymous
yes but 600MBps max does not hurt
2 years ago
Anonymous
my sata in raid 0 breaches 900MB/s
prolly could get better results with better drives (mine cost 20 bucks each)
2 years ago
Anonymous
yes i meant 600 per sata port, if sata3.
2 years ago
Anonymous
What's the throughput of the chipset to cpu connection though? Even basic chipsets support half a dozen SATA and USB3 connections in addition to whatever PCIe lanes they carry. I'd expect PCIe x4 through the chipset to still have significant advantages over SATA.
2 years ago
Anonymous
depends heavily on chipset. Z has more than H or B.
2 years ago
Anonymous
lower latencies
iirc sata goes through the modern equivalent of southbridge
pcie has a direct line to the processor
2 years ago
Anonymous
That's only for PCIe lanes that connect directly to the CPU. Consumer grade chips often only have 16 lanes with direct connections for the gpu; extra pcie lanes are routed through the chipset. Extra latency is expected (not like a few extra nanoseconds matters for storage devices), but is bandwidth affected at all? Even if every other possible device running through the chipset were saturated?
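For the boards in this thread the chipset uplink is DMI 2.0, which is roughly a PCIe 2.0 x4 link. A back-of-envelope sketch of how oversubscribed it gets if everything behind it ran flat out at once; the port counts and throughput figures are hypothetical list values, not anyone's actual board:

```python
# DMI 2.0 is roughly a PCIe 2.0 x4 link: 4 lanes at ~500 MB/s each.
DMI2_LINK_MBPS = 4 * 500  # ~2000 MB/s shared by everything on the chipset

# Hypothetical worst case: every downstream device saturated at once.
sata = 6 * 600          # six SATA 6Gb/s ports at wire speed
usb3 = 4 * 500          # four USB 3.0 ports flat out
chipset_pcie = 4 * 500  # a PCIe 2.0 x4 slot hung off the chipset

demand = sata + usb3 + chipset_pcie
print(f"uplink {DMI2_LINK_MBPS} MB/s vs worst-case demand {demand} MB/s "
      f"({demand / DMI2_LINK_MBPS:.1f}x oversubscribed)")
```

In practice nothing runs all of that at once, which is why the sharing mostly goes unnoticed; but a single chipset-attached NVMe drive could fill the whole uplink by itself.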
2 years ago
Anonymous
>but is bandwidth affected at all?
i know it is. but i couldnt tell to what extent, what is the actual bottleneck 🙁
2 years ago
Anonymous
You'll likely find that add in cards connected to the chipset will be limited to 1x. On my mobo I have to disable the extra USB3 ports to get 4x to the chipset slot. Which I did because I hardly use USB on this machine but you'd be surprised at how little it matters in day to day use. Like I was getting 500MB/s on 1x and 1600MB/s on 4x and everything takes about the same amount of time to load. The overwhelming majority of games, professional applications, windows itself, everything doesn't just blat data from the drive straight into RAM and start using it. Everything has to be parsed, processed, assembled, sometimes even compiled before use and as a result your 10+ year old CPU is the bottleneck, not chipset PCIe.
2 years ago
Anonymous
>my sata in raid 0 breaches 900MB/s
>prolly could get better results with better drives (mine cost 20 bucks each)
what i mean is that raiding a couple drives is less hassle than coercing a mobo to support nvme
2 years ago
Anonymous
It's literally as simple as opening a bios from a Haswell board, copying the nvme driver, and pasting it into a sandybridge/ivybridge bios, then flashing.
2 years ago
Anonymous
>flashing
:/ always carries a risk
and still more hassle than pressing a key and following instructions
i dont say there isnt a use case either
starting with mere curiosity
theres also working with many small files
i would do it if i could on my rig. but then i have reasons for wanting to cut down latency in loading files as much as possible
but for most uses, i would go with raided satas
i run one on raid 0 for 5 years now(?)
not a single issue despite having problems with my machine randomly turning off (too little power, then thermal limits, then my 12v molex literally melting down)
2 years ago
Anonymous
I'm a brainlet and I managed to add nvme boot support to my z77 board on the first attempt. Probably took less time than it would to just read about the different RAID options available. My board has dual bios with a physical switch, so I felt comfortable with experimenting though.
2 years ago
Anonymous
there are in fact premium z77 boards that offered boot from pcie in late bios updates. i had one of those myself. asrock z77 extreme4. one of the finest boards i ever had.
2 years ago
Anonymous
nice.
the last time i "flashed" a mobo it consisted in physically removing the chip lmao
now i have dual bios too, but since i have pcie 2.0 nvme is a no go
no big deal tho, sata SSD is pretty fast, and if the latency is a problem i can code around
and the menu isnt confusing at all.
its press x then press y and voila!
you are the happy owner of raided sata
2 years ago
Anonymous
>no big deal tho, sata SDD is pretty fast
This is the truth. The benchmark numbers from NVMe drives don't seem to translate into real performance. Going from SATA to a 2500MB/s nvme drive reduced my windows boot time from like 12s to 10s. Heavily modded games that take several minutes to load also saw marginal gains. File transfer and copy speeds were also underwhelming and were much lower than the capabilities reported by the manufacturer or crystalmark tests.
2 years ago
Anonymous
bottlenecks.
bottlenecks all the way down
also advertised performance is obtained when tests are run in ideal conditions
2 years ago
Anonymous
When I run crystalmark on my nvme drive, it confirms the advertised speeds. Those speeds are never realized in real performance though. I get the discrepancy for loading programs, the cpu can get bottlenecked processing assets, but even just copying files from a SATA ssd drive to a PCIe NVMe drive never even comes close to saturating the SATA read speed. What gives?
2 years ago
Anonymous
>what gives
well first off, you are using an ssd, or a moron drive as I call them.
second off, i fricking hate ssd's
2 years ago
Anonymous
>second off, i fricking hate ssd's
that's because you're brain-dead
2 years ago
Anonymous
>second off, i fricking hate ssd's
And I thought 2500k threads were peak irony posting
2 years ago
Anonymous
2 years ago
Anonymous
>buying Seagate products in the current year
.... why?
2 years ago
Anonymous
where's your backup? i use shit like your spinning rust to keep clones of important files and i keep them OFF. you're just an imbecile
2 years ago
Anonymous
my backup is in your mothers vegana
2 years ago
Anonymous
Keeping them off has its own problems. When your drive is idle the firmware is programmed to verify the data. Magnetised regions don't hold their state indefinitely; it spreads around, weakens, gets damaged by environmental factors... While the drive is powered but not in use the drive will maintain itself. If you just copy new data then unmount and switch off, you risk the oldest data suffering rot. Typically this rot will be corrected by the ECC when you read it, but eventually the rot might get too much for that to handle. At the very least, if you insist on leaving them unpowered most of the time, periodically do a full read scan of the entire drive so it can at least catch the active problems.
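The periodic read scan can be as simple as this Python sketch. The device path is a placeholder (point it at whatever your backup disk enumerates as, and reading a raw device needs root); real tools like badblocks in read-only mode or a SMART long self-test do the same job more thoroughly.

```python
import sys

def full_read_scan(path, chunk_mb=4):
    """Read every byte of a drive (or file) so the firmware/ECC gets a
    chance to notice and rewrite weakening sectors before they rot."""
    chunk = chunk_mb * 1024 * 1024
    total, errors = 0, 0
    with open(path, "rb", buffering=0) as f:
        while True:
            try:
                data = f.read(chunk)
            except OSError:               # a genuinely bad sector surfaces here
                errors += 1
                f.seek(f.tell() + chunk)  # skip past it and keep scanning
                continue
            if not data:
                break
            total += len(data)
    return total, errors

if __name__ == "__main__":
    # e.g. python scan.py /dev/sdb  (device name is a placeholder)
    total, errors = full_read_scan(sys.argv[1])
    print(f"read {total} bytes, {errors} read errors")
```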
2 years ago
Anonymous
Black person, don't you teach me how to do backups
2 years ago
Anonymous
It's amazing how nobody ever experienced bitrot until SSDs were invented. Absolutely amazing.
2 years ago
Anonymous
jesus christ is this how morons actually view the internet? why is the image so warm?? Im fricking triggered right now
2 years ago
Anonymous
>super ultra wide monitor
>night light mode set thru windows 10
>zoomed 200% tomorrow mode IQfy
yup, its gamer time
2 years ago
Anonymous
well, you got latency when loading your files
you got the hardware-caused latency, but it can vary depending on your filesystem too.
then your sata has a buffer too. [dont know about nvme, didnt look into it yet (since i dont own one)]
so depending on how your data is arranged your mileage can vary.
and thats without even considering OS-caused latency or the hard limit in the amount of data that can go through your CPU or data bus
like i wrote, its bottlenecks all the way down
to get advertised speeds you would have to run a barebone system, on perfectly tuned hardware, with a workload tailored to the specs of your drives. which obviously wont ever happen IRL
2 years ago
Anonymous
Regardless of how bogged down and far from perfect tuning my current system is, my samsung sata ssd reports full 550MB/s read speeds in crystalmark benchmarks, and my WD Black NVME drive reports full 2500MB/s write speed, yet in practice copying a file starts at 200 MB/s and drops down to 50MB/s.
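One way to see where that drop happens is to time the copy in slices yourself instead of trusting the file manager's progress bar. A rough Python sketch (the paths are yours to fill in); the fsync per chunk is what keeps the OS write cache from hiding the drive's real speed:

```python
import os
import time

def timed_copy(src, dst, chunk=64 * 1024 * 1024):
    """Copy src to dst, printing per-chunk throughput, so you can watch
    the fast cache-absorbed start and the later drop to 'real' speed."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            t0 = time.perf_counter()
            buf = fin.read(chunk)
            if not buf:
                break
            fout.write(buf)
            fout.flush()
            os.fsync(fout.fileno())  # force the chunk to media, not just RAM
            dt = time.perf_counter() - t0
            print(f"{len(buf) / dt / 1e6:7.0f} MB/s")

# timed_copy("/mnt/sata/bigfile", "/mnt/nvme/copy")  # placeholder paths
```

If the first slice is fast and everything after it is slow, you're watching a cache (the drive's SLC/RAM cache or the OS page cache) run out, not the interface.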
2 years ago
Anonymous
same here
afaik its:
fragmented disk + multiple files + other stuff accessing your drive.
you might want to ask
>Does your SATA drive achieve saturation speeds on its own? This is where linux is more helpful, you can dd data around while minimising OS and filesystem overhead to narrow down what's causing the bottleneck.
>It might be something really stupid like your SATA cables. SATA connectors have a really low repeated connection limit in their design. They were intended to be connected up and left like that for years and so the design of the cables is not sturdy by design. And SATA the protocol is very stable so you may never notice that it's fricked. I have fixed so very very many weirdo RAID issues by replacing all the SATA cables that I now never assume the cables are fine just because SMART isn't reporting data transmission errors and windows is reporting 6Gb/s links.
that anon seems way more knowledgeable in the matter than i am
2 years ago
Anonymous
Is disk fragmentation even a thing on modern ssd's and operating systems? Windows handles defragmentation of hdd's in the background now, and ssd's have their own protocols for how to store data to memory cells and keep it refreshed.
At any rate, when the benchmarking software runs its tests, the hardware is capable of delivering the advertised results, but it never seems to be realized in practice.
If
>Does your SATA drive achieve saturation speeds on its own? This is where linux is more helpful, you can dd data around while minimising OS and filesystem overhead to narrow down what's causing the bottleneck.
>It might be something really stupid like your SATA cables. SATA connectors have a really low repeated connection limit in their design. They were intended to be connected up and left like that for years and so the design of the cables is not sturdy by design. And SATA the protocol is very stable so you may never notice that it's fricked. I have fixed so very very many weirdo RAID issues by replacing all the SATA cables that I now never assume the cables are fine just because SMART isn't reporting data transmission errors and windows is reporting 6Gb/s links.
can offer some explanation to what's taking place, it would be appreciated.
2 years ago
Anonymous
What happens if you run crystaldiskmark on your SSD *and* your NVMe at the same time?
One of the major issues that NVMe turned up is that storage drivers in windows and linux were just not intended to have so much available bandwidth. They were intended to issue a request then go away and do something else and wait for the interrupt that says "data is ready." NVMe is so fast that the kernel gets overwhelmed just trying to deal with the incoming data and it winds up massively impacting performance elsewhere.
Unfortunately trying to figure out what to do about it will involve a whole lot of trial and error and fiddling with settings. Ultimately the fault might be the specific SATA chipset you have that just can't handle buffer bloat, always assuming the drive will be the bottleneck.
To add, Wendel at Level1Techs ran into a problem where a massive NVMe array was saturating PCIe and everything related so hard that the drives were timing out trying to send data. What was happening was the drive was responding to the requests but the packet was being "lost" due to no available CPU to handle it, this led to massive performance drops while the drive waited for a response that would never come.
It could very well be that your NVMe is just too fast for your CPU. Sucks, but if Wendel couldn't find a solution, I doubt you or I can.
2 years ago
Anonymous
Wendel is based. I still don't get it though.
Pic rel is the benchmark, run simultaneously on the samsung sata ssd (left) and WD black nvme (right), with a 2GB test file size. I omitted secondary tests since we're looking at advertised performance under ideal circumstances, like transferring a single large file.
Then I copied a 1.5GB file from the samsung drive to the WD drive. It peaked at 250MB/s, then quickly dropped to ~50MB/s.
Is even a sandybridge era cpu so handicapped that it can't even handle transferring a file at more than 50MB/s? How tf does it manage conducting two simultaneous benchmarks without the same crippling performance issues?
2 years ago
Anonymous
At this point I'm afraid we're trying to debug the windows kernel.
Well, we can rule out bandwidth saturation since you were able to get full wire speed with the two benchmarks simultaneously. Do you get the same performance drop when you go the other way? From WD to Samsung and Samsung to WD?
I'm wondering if it's a Native Command Queueing bug. Are your drives on the latest firmware? It might also be the "cheat factor" of the drives. SSDs are cheap because they use QLC NAND and such but it's slow to write. So they often use large RAM caches AND a small SLC section that can absorb smaller writes temporarily to keep the performance up. But if you're writing large files with filesystem overhead you might have saturated that and are now getting the "real" speed of your QLC NAND...
There's just so many things to test. I would be inclined to boot from linux, try raw dd to /dev, try ext4, try ZFS, try frickin' BSD... Try another mobo, try another SATA controller card...
Sorry I can't be more help, it's just a fricker of a problem you've got there.
2 years ago
Anonymous
What happens if you run crystaldiskmark on your SSD *and* your NVMe at the same time?
One of the major issues that NVMe turned up is that storage drivers in windows and linux were just not intended to have so much available bandwidth. They were intended to issue a request then go away and do something else and wait for the interrupt that says "data is ready." NVMe is so fast that the kernel gets overwhelmed just trying to deal with the incoming data and it winds up massively impacting performance elsewhere.
Unfortunately trying to figure out what to do about it will involve a whole lot of trial and error and fiddling with settings. Ultimately the fault might be the specific SATA chipset you have that just can't handle buffer bloat, always assuming the drive will be the bottleneck.
2 years ago
Anonymous
Does your SATA drive achieve saturation speeds on its own? This is where linux is more helpful, you can dd data around while minimising OS and filesystem overhead to narrow down what's causing the bottleneck.
It might be something really stupid like your SATA cables. SATA connectors have a really low repeated connection limit in their design. They were intended to be connected up and left like that for years and so the design of the cables is not sturdy by design. And SATA the protocol is very stable so you may never notice that it's fricked. I have fixed so very very many weirdo RAID issues by replacing all the SATA cables that I now never assume the cables are fine just because SMART isn't reporting data transmission errors and windows is reporting 6Gb/s links.
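A concrete version of the dd approach, as a sketch with placeholder device and mount names (adjust for your own drives; the raw-device read needs root):

```shell
probe_raw_read() {   # raw sequential read off the drive itself, no filesystem
    # iflag=direct bypasses the page cache so you measure the drive, not RAM
    sudo dd if="$1" of=/dev/null bs=1M count=4096 iflag=direct
}
probe_fs_read() {    # the same read, but through the filesystem
    dd if="$1" of=/dev/null bs=1M
}
probe_write() {      # write test; conv=fdatasync makes dd wait for the media
    dd if=/dev/zero of="$1" bs=1M count="${2:-4096}" conv=fdatasync
}

# Usage (placeholder paths):
#   probe_raw_read /dev/sdb
#   probe_fs_read  /mnt/sata/bigfile
#   probe_write    /mnt/nvme/testfile
# If the raw read hits ~550MB/s but the filesystem read doesn't, suspect
# the filesystem; if the raw read is already slow, suspect cable or port.
```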
2 years ago
Anonymous
>there are in fact premium z77 boards that offered boot from pcie in late bios updates. i had one of those myself. asrock z77 extreme4. one of the finest boards i ever had.
Adding NVMe/PCIe boot to any BIOS takes literally 5 minutes doing it yourself and you'll have up to date boot firmware and not the beta shit. Look it up.
2 years ago
Anonymous
yes i agree. but all they did is what you describe. its just bios 2.90 with nvme boot added from asrock themself.
2 years ago
Anonymous
maybe dig a little deeper, moron, and come back with what you find
2 years ago
Anonymous
maybe not? learn something or try something, check the results then come back. nvmes didnt exist back then and are crippled in latency and speed through the chipset. above all its just pcie2. dumb frick.
2 years ago
Anonymous
Black person, you're a fricking Black person, there's no point wasting my time with you
2 years ago
Anonymous
thats the smartest thing you said. now go take your medication and leave for a while. a fresh breeze of air will give you the energy you need to keep calling everyone smarter than you a Black person when you are back.
>no need for such fancy things. you boot from ssd to the nvme on pcie, taking away from those precious pcie lanes accepting the bottleneck.
Waiting for the OS to load is the only time you can't alt-tab and do something else. Might as well not even bother with nvme if you're not even going to boot from it.
You ought to be fine with ivy 3770k + W10 until you're forced to use W11 for security updates, then you'll need a new computer.
3770k should be good until 2032 when support for the current W10 LTSC build ends, right?
2 years ago
Anonymous
Anon, how old are you? SSD on ivy bridge builds are adequate, sure you wait like 10 seconds to get to the login screen because you don't have nvme, but I doubt OP has the autism to be bothered by something like that. Those computers are more than enough to browse the internet or watch YouTube on.
Pcie lanes are better used for capture cards or USB 3.0 cards if the motherboard is quite old. I had a riser to test nvme on a very old computer and compared to a regular SSD you only get like 1-3% improvement in performance. If you use a riser on something really old, like pre 2010, then you get drastically higher speeds compared to using motherboard SSD headers, which are stuck on really old sata standards that will never reach the transfer speeds a pcie x8/16 slot from the same era can push, assuming you don't want to use a dedicated GPU and your motherboard has onboard video. It might be something desirable for a computer stuck using node locked software/peripherals.
My haswell cpu finally broke, I put it on another haswell computer same result. BSOD on windows and completely freeze on linux after a few seconds, wonky boot, bios losing settings, etc, etc. The rest of my computer is fine, I put the i3 on it until I get desperate enough to upgrade mobo+cpu+ram.
Same, forget about the bios part it was the battery but I swapped my 4690k with an i3 4130, both computers had the same issues with the 4690k. It was overclocked to 4.2 @ 1.2v for only 2 years.
I would presume the ivr's on haswell are the most likely thing to die. I recently had a 4670 die, which was the second cpu I've ever seen die in like 10 years of working with computers.
>i5 2500 chads, what's a sensible upgrade path? If I get a 3770k, I should be set for another few years right?
I never noticed a difference in performance between my 2700k and the 4960k that I had to replace it with after dumping an entire cup of water into my computer. And it wound up being replaced by a 3600 when I was getting bottlenecked in thread count playing VR.
Don't buy anything less than a 11900k or 5900x. You don't have to be extremely spendy for a good upgrade on current/last gen hardware. If you really want to chump the change for ddr5 then even the 12600k is really good, albeit on a not so great platform for the price and cooling requirements.
>Don't buy anything less than a 5800X or 5900X
That's basically what you're saying. I wouldn't recommend less than 12th gen for Intel (which can be used with DDR4 btw).
>I wouldn't recommend less than 12th gen for Intel (which can be used with DDR4 btw).
yeah but then you're buying an inferior board with memory that is often slower than zen 3. Unless you get a 12900k with its full l3 cache, you shouldn't be considering DDR4 and alderlake.
>yeah but then you're buying an inferior board with memory that is often slower than zen 3
So just buy Zen 3 then. Problem solved. I would never recommend Rocket Lake to anyone.
2 years ago
Anonymous
>I would never recommend Rocket Lake to anyone.
Someone on a budget can scoop up an 11900k and z590 for less than a 5800x and be at least 10% faster across the board. A cooler is required on both, if you didn't realize, so at that point the costs really tip in intels favor.
2 years ago
Anonymous
>Someone on a budget can scoop a 11900k and z590 for less than a 5800x
What the frick? Where? In Europe the 11900K costs almost as much as a 5900X. Are you talking about buying used?
>be at least 10% faster across the board
Maybe (in some workloads), but it will use ridiculous amounts of power to do so.
Former 3770k owner here. Delidding is definitely worthwhile due to the shitty thermal paste. It was already dried up and crusty when I did it half a decade ago. The heatspreader on my chip also was substandard, with a significant concavity. My thermals would get worse and worse over time until repasting. I suspect it was due to the thermal paste between the IHS and heatsink getting 'pumped' out over time through repeated expansion/contraction thermal cycles. Eventually ground the IHS down flat and got that sorted.
>4790k to 12700k
Nothing but headaches. So many issues that I wish I’d just kept waiting. Just keep using your 2500k, OP. You don’t need to upgrade yet.
>browsing IQfy for any other reason than getting your daily dose of seethe from troony morons shilling linux distros or venting your frustration by making new i5 2500 threads
ngmi
is bclk overclocking sensible? I've seen discussion about how it can cause corruption on pcie connected devices, like nvme drives. Is it viable because you're running an old system with no pcie storage devices?
>You'll likely find that add in cards connected to the chipset will be limited to 1x. On my mobo I have to disable the extra USB3 ports to get 4x to the chipset slot. Which I did because I hardly use USB on this machine but you'd be surprised at how little it matters in day to day use. Like I was getting 500MB/s on 1x and 1600MB/s on 4x and everything takes about the same amount of time to load. The overwhelming majority of games, professional applications, windows itself, everything doesn't just blat data from the drive straight into RAM and start using it. Everything has to be parsed, processed, assembled, sometimes even compiled before use and as a result your 10+ year old CPU is the bottleneck, not chipset PCIe.
>iirc defrag is not a thing with linux
That's true, but it's because A) multi-gigabyte-per-file pirated media (by far the most common type of file stored by Linux users) isn't really affected by fragmentation that much, and B) many Linux filesystems "reserve" up to 10% for their own odds and ends, so there's workspace for storing files contiguously. Windows users, on the other hand, would rightfully complain at length if they got "out of disk space" when there was still 50GB free on their drive.
>iirc ntfs is quite inefficient in fetching files
Nope. This myth is based upon the terrible NTFS drivers (such as ntfs-3g) FOSS has shat out over the decades, coupled with the fact that Linux doesn't have anything resembling FindFirstFile()/FindNextFile(), which is an order of magnitude faster than stat(). As a result, FOSS liquishit does everything the POSIX way (calling stat() on a loop), so freetards hurrdurr away with NTFS is slow/inefficient.
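Going back to the reserved-space point: on Linux that slice is visible directly. statvfs reports free blocks overall and free blocks available to ordinary users, and the gap between the two is the filesystem's reservation. A minimal sketch:

```python
import os

# f_bfree counts every free block; f_bavail is what a non-root user can
# actually allocate. The difference is the filesystem's reserved space.
st = os.statvfs("/")
free_gb = st.f_bfree * st.f_frsize / 1e9
avail_gb = st.f_bavail * st.f_frsize / 1e9
print(f"free: {free_gb:.1f} GB, usable: {avail_gb:.1f} GB, "
      f"reserved: {free_gb - avail_gb:.1f} GB")
```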
thanks for the update
i didnt look into these matters since i stopped overclocking (15 years ago)
> Windows users, on the other hand, would rightfully complain at length if they got "out of disk space" when there was still 50GB free on their drive.
Manually defragging used to be regular housekeeping on Windows, but that was like a decade ago. It's handled in the background now.
More like two decades - it's been an idle-time scheduled task since Windows XP to my recollection.
But, understand that the background defrag has always been a simple file cluster consolidation (because it's way way faster, so it can be cancelled fast if the machine stops being idle): free space remains fragmented, and therefore so will the next file you store.
Defragging made sense when drives were accessed via Cylinder, Track and Sector addressing. The physical layout of the blocks were directly controlled by the OS and so it just made sense for the files to be contiguous. But when we moved to LBA the drive firmware started to make "executive decisions" about how to lay out the data. Sure, originally it was just a mapping to the regular layout, then later it was silent remapping of bad clusters but eventually it evolved into total management of the entire blockmap such that you might defrag your file system and get *logically* contiguous files, but now the physical layout of the file is fragmented...
It wasn't uncommon to find performance dropped after a big defrag. But then again, if you never defragged the additional file system overhead from non-contiguous access would also be a killer... Basically, be thankful modern file systems are designed better. Your old FAT32 drives were broken by design.
LBA doesn't change the fact that hard disks still internally work by CHS, anon.
One could make an argument that the OS being aware of the physical location of a sector on a media could allow for a drive performance profile to be used (or even educated guesses) as to how the data should be arranged for maximum performance (things like track skew, sector interleave, all those tricks we did up until the early 1990s) - but the gains would be infinitesimal today, considering how intelligent a drive is now.
>3770k
is that the same socket as the 2500? If so, I don't see why not since the 3770k is like $45 on ebay. If it's not, good fricking luck finding a nice mobo for an old socket. That's why buying old CPUs is never sensible unless you already have the board.
Buy a Ryzen 5 3600 in order to alleviate your mental anguish.
This. Get a processor from the last couple generations. E-waste tier stuff is cheaper in the short term, but given how much sooner you have to replace it, it ends up costing you more.
Consider suicide
no dummy the real upgrade path would be to keep everything and just swap for an i7-2600. Now you have another 4 threads and are set for LIFE.
Just stay on vanilla bios and disable all mitigations. You dont do anything important anyway amirite.
>vanilla bios
real lga1155 chads use modified bios so you can boot from an nvme drive on a pcie aib.
no need for such fancy things. you boot from ssd to the nvme on pcie, taking away from those precious pcie lanes accepting the bottleneck.
yes in a few years. 2600 is blazing fast for now.
Black person, you're using 16 lanes for the gpu, you still have lanes left, stfu
have fun with those 4 from the chipset. ever ran a nvme from them? try it.
Isn't SATA routed through the chipset as well?
yes but 600MBps max does not hurt
my sata in raid 0 breaches 900MB/s
prolly could get better results with better drives (mine cost 20 bucks each)
yes i meant 600 per sata port, if sata3.
What's the throughput of the chipset to cpu connection though? Even basic chipsets support half a dozen SATA and USB3 connections in addition to whatever PCIe lanes they carry. I'd expect PCIe4x through the chipset to still have significant advantages over SATA.
depends heavily on chipset. Z has more than H or B.
lower latencies
iirc sata goes through the modern equivalent of southbridge
pcie has a direct line to the processor
For PCIe lanes that connect directly to the CPU. Consumer grade chips often only have 16 lanes with direct connections for the gpu, and extra pcie lanes are routed through the chipset. Extra latency is expected (not like a few extra nanoseconds matters for storage devices), but is bandwidth affected at all? Even if you had every other possible device running through the chipset saturated?
>but is bandwidth affected at all?
i know it is. but i couldnt tell to what extent, what is the actual bottleneck 🙁
You'll likely find that add in cards connected to the chipset will be limited to 1x. On my mobo I have to disable the extra USB3 ports to get 4x to the chipset slot. Which I did because I hardly use USB on this machine but you'd be surprised at how little it matters in day to day use. Like I was getting 500MB/s on 1x and 1600MB/s on 4x and everything takes about the same amount of time to load. The overwhelming majority of games, professional applications, windows itself, everything doesn't just blat data from the drive straight into RAM and start using it. Everything has to be parsed, processed, assembled, sometimes even compiled before use and as a result your 10+ year old CPU is the bottleneck, not chipset PCIe.
what i mean is that raiding a couple drives is less hassle than coercing a mobo to support nvme
It's literally as simple as opening a bios from a Haswell board, copying the nvme driver, and pasting it into a sandybridge/ivybridge bios, then flashing.
>flashing
:/ always carries a risk
and still more hassle than pressing a key and following instructions
i dont say there isnt a use case either
starting with mere curiosity
theres also working with many small files
i would do it if i could on my rig. but then i have reasons for wanting to cut down latency in loading files as much as possible
but for most uses, i would go with raided satas
i run one on raid 0 for 5years now(?)
not a single issue despite having problems with my machine randomly turning off (too little power, then thermal limits, then my 12v molex literally melting down)
I'm a brainlet and I managed to add nvme boot support to my z77 board on the first attempt. Probably took less time that it would to just read about the different RAID options available. My board has dual bios with a physical switch, so I felt comfortable with experimenting though.
there are in fact premium z77 boards that offered boot from pcie in late bios updates. i had one of those myself. asrock z77 extreme4. one of the finest boards i ever had.
nice.
the last time i "flashed" a mobo it consisted in physically removing the chip lmao
now i have dual boot too, but since i have pcie 2.0 nvme is a no go
no big deal tho, sata SDD is pretty fast, and if the latency is a problem i can code around
and the menu isnt confusing at all.
its press x then press y and voila!
you are the happy owner of raided sata
>no big deal tho, sata SDD is pretty fast
This is the truth. The benchmark numbers from NVMe drives don't seem to translate into real performance. Going from SATA to a 2500MB/s nvme drive reduced my windows boot time from like 12s to 10s. Heavily modded games that take several minutes to load also saw marginal gains. File transfer and copy speeds were also underwhelming and were much lower than the capabilities reported by the manufacturer or crystalmark tests.
bottlenecks.
bottlenecks all the way down
also advertised performance is obtained when tests are run in ideal conditions
When I run crystalmark on my nvme drive, it confirms the advertised speeds. Those speeds are never realized in real performance though. I get the discrepancy for loading programs, the cpu can get bottlenecked processing assets, but even just copying files from a SATA ssd drive to a PCIe NVMe drive never even comes close to saturating the SATA read speed. What gives?
>what gives
well first off, you are using an ssd, or a moron drive as I call them.
second off, i fricking hate ssd's
>second off, i fricking hate ssd's
that's because you're brain-dead
>second off, i fricking hate ssd's
And I thought 2500k threads were peak irony posting
>buying Seagate products in the current year
.... why?
where's your backup? i use shit like your spinning rust to keep clones of important files and i keep them OFF. you're just an imbecile
my backup is in your mothers vegana
Keeping them off has its own problems. When your drive is idle the firmware is programmed to verify the data. Magnetically charged particles don't hold their charge indefinitely, it spreads around, weakens, gets damaged by environmental factors... While the drive is powered but not in use the drive will maintain itself. If you just copy new data then unmount and switch off, you risk the oldest data suffering rot. Typically this rot will be corrected by the ECC when you read it, but eventually the rot might get too much for that to handle. At the very least if you insist on leaving them unpowered most of the time then periodically do a full read scan of the entire drive so it can catch the active problems at least.
Black person, don't you teach me how to do backups
It's amazing how nobody ever experienced bitrot until SSDs were invented. Absolutely amazing.
jesus christ is this how morons actually view the internet? why is the image so warm?? Im fricking triggered right now
>super ultra wide monitor
>night light mode set thru windows 10
>zoomed 200% tomorrow mode IQfy
yup, its gamer time
well, you got latency between loading your files
you got the hardware-caused latency, but it can vary depending on your filesystem too.
then your sata has a buffer too. [dont know about nvme, didnt look into it yet (since i dont own one)]
so depending on how your data is arranged your mileage can vary.
and thats without even considering OS-caused latency or the hard limit in the amount of data that can go through your CPU or data bus
like i wrote, its bottlenecks all the way down
to get advertised speeds you would have to run a barebone system, on perfectly tuned hardware, with a workload tailored to the specs of your drives. which obviously wont ever happen IRL
Regardless of how bogged down and far from perfect tuning my current system is, my samsung sata ssd reports full 550MB/s read speeds in crystalmark benchmarks, and my WD Black NVME drive reports full 2500MB/s write speed, yet in practice copying a file starts at 200 MB/s and drops down to 50MB/s.
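You can watch that drop happen live with a chunked copy loop instead of trusting the Explorer dialog. A rough sketch (the fsync per chunk is there so you measure the drive, not the OS page cache; chunk size is a guess, tune it):

```python
import os
import time

def copy_with_throughput(src, dst, chunk_size=64 * 1024 * 1024):
    """Copy src -> dst in big chunks, printing MB/s per chunk.

    A sudden drop partway through usually marks the point where a
    RAM/SLC write cache on the destination drive fills up and you
    start seeing the raw NAND program speed.
    """
    rates = []
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            t0 = time.perf_counter()
            chunk = fin.read(chunk_size)
            if not chunk:
                break
            fout.write(chunk)
            fout.flush()
            os.fsync(fout.fileno())  # force the chunk out to the drive
            dt = time.perf_counter() - t0
            rates.append(len(chunk) / dt / 1e6)
            print(f"{rates[-1]:.0f} MB/s")
    return rates
```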
same here
afaik its:
fragmented disk + multiple files + other stuff accessing your drive.
you might want to ask
that anon seems way more knowledgeable in the matter than i am
Is disk fragmentation even a thing on modern ssd's and operating systems? Windows handles defragmentation of hdd's in the background now, and ssd's have their own protocols for how to store data to memory cells and keep it refreshed.
At any rate, when the benchmarking software runs its tests, the hardware is capable of delivering the advertised results, but it never seems to be realized in practice.
If that anon can offer some explanation of what's taking place, it would be appreciated.
To add, Wendel at Level1Techs ran into a problem where a massive NVMe array was saturating PCIe and everything related so hard that the drives were timing out trying to send data. What was happening was the drive was responding to the requests but the packet was being "lost" due to no available CPU to handle it, this led to massive performance drops while the drive waited for a response that would never come.
It could very well be that your NVMe is just too fast for your CPU. Sucks, but if Wendel couldn't find a solution, I doubt you or I can.
Wendel is based. I still don't get it though.
Pic rel is the benchmark, run simultaneously on the samsung sata ssd (left) and WD black nvme (right), with a 2GB test file size. I omitted secondary tests since we're looking at advertised performance under ideal circumstances, like transferring a single large file.
Then I copied a 1.5GB file from the samsung drive to the WD drive. It peaked at 250MB/s, then quickly dropped to ~50MB/s.
Is even a sandybridge era cpu so handicapped that it can't transfer a file at more than 50MB/s? How tf does it manage two simultaneous benchmarks without the same crippling performance issues?
At this point I'm afraid we're trying to debug the windows kernel.
Well, we can rule out bandwidth saturation since you were able to get full wire speed with the two benchmarks simultaneously. Do you get the same performance drop when you go the other way? From WD to Samsung and Samsung to WD?
I'm wondering if it's a Native Command Queueing bug. Are your drives on the latest firmware? It might also be the "cheat factor" of the drives. SSDs are cheap because they use QLC NAND and such but it's slow to write. So they often use large RAM caches AND a small SLC section that can absorb smaller writes temporarily to keep the performance up. But if you're writing large files with filesystem overhead you might have saturated that and are now getting the "real" speed of your QLC NAND...
There's just so many things to test. I would be inclined to boot from linux, try raw dd to /dev, try ext4, try ZFS, try frickin' BSD... Try another mobo, try another SATA controller card...
Sorry I can't be more help, it's just a fricker of a problem you've got there.
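One more crude probe for the write-cache theory: just keep writing incompressible data and watch the per-chunk speed. A sketch (sizes are guesses; make total_mb comfortably bigger than whatever cache you suspect the drive has):

```python
import os
import time

def sustained_write_probe(path, total_mb=8192, chunk_mb=256):
    """Write incompressible (random) data, reporting MB/s per chunk.

    If the speed falls off a cliff partway through, you've likely
    filled the drive's SLC/DRAM write cache and are now looking at
    the native TLC/QLC program speed.
    """
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    rates = []
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            t0 = time.perf_counter()
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # measure the drive, not the page cache
            dt = time.perf_counter() - t0
            rates.append(len(chunk) / dt / 1e6)
            print(f"{rates[-1]:.0f} MB/s")
    return rates
```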
What happens if you run crystaldiskmark on your SSD *and* your NVMe at the same time?
One of the major issues that NVMe turned up is that storage drivers in windows and linux were just not intended to have so much available bandwidth. They were intended to issue a request then go away and do something else and wait for the interrupt that says "data is ready." NVMe is so fast that the kernel gets overwhelmed just trying to deal with the incoming data and it winds up massively impacting performance elsewhere.
Unfortunately trying to figure out what to do about it will involve a whole lot of trial and error and fiddling with settings. Ultimately the fault might be the specific SATA chipset you have that just can't handle buffer bloat, always assuming the drive will be the bottleneck.
Does your SATA drive achieve saturation speeds on its own? This is where linux is more helpful, you can dd data around while minimising OS and filesystem overhead to narrow down what's causing the bottleneck.
It might be something really stupid like your SATA cables. SATA connectors have a really low repeated connection limit in their design. They were intended to be connected up and left like that for years and so the design of the cables is not sturdy by design. And SATA the protocol is very stable so you may never notice that it's fricked. I have fixed so very very many weirdo RAID issues by replacing all the SATA cables that I now never assume the cables are fine just because SMART isn't reporting data transmission errors and windows is reporting 6Gb/s links.
Adding NVMe/PCIe boot to any BIOS takes literally 5 minutes doing it yourself and you'll have up to date boot firmware and not the beta shit. Look it up.
yes i agree. but all they did is what you describe. its just bios 2.90 with nvme boot added from asrock themself.
maybe dig a little deeper, moron, and come back with what you find
maybe not? learn something or try something, check the results then come back. nvmes didnt exist back then and are crippled in latency and speed through the chipset. above all its just pcie2. dumb frick.
Black person, you're a fricking Black person, there's no point wasting my time with you
thats the smartest thing you said. now go take your medication and leave for a while. a fresh breeze of air will give you the energy you need to keep calling everyone smarter than you a Black person when you are back.
>no need for such fancy things. you boot from ssd to the nvme on pcie, taking away from those precious pcie lanes accepting the bottleneck.
Waiting for the OS to load is the only time you can't alt-tab and do something else. Might as well not even bother with nvme if you're not even going to boot from it.
3770k should be good until 2032 when support for the current W10 LTSC build ends, right?
Anon, how old are you? SSD on ivy bridge builds are adequate, sure you wait like 10 seconds to get to the login screen because you don't have nvme, but I doubt OP has the autism to be bothered by something like that. Those computers are more than enough to browse the internet or watch YouTube on.
Pcie lanes are better used for capture cards or USB 3.0 cards if the motherboard is quite old. I had a riser to test nvme on a very old computer, and compared to a regular SSD you only get like a 1-3% improvement in performance. On something really old, like pre-2010, a riser does get you drastically higher speeds, because those boards' SATA headers use really old sata standards that will never reach what a pcie x8/x16 slot from the same era could push through, assuming you don't want to use a dedicated GPU and your motherboard has onboard video. It might be something desirable for a computer stuck using node locked software/peripherals.
and this still leaves a future upgrade path open for 2700k
genius!
xeon e3-1280 v2/ E3-1290 v2 or the ones with igpu if you need it
Can you just drop those into consumer lga1155 systems with regular RAM?
yes
You ought to be fine with ivy 3770k + W10 until you're forced to use W11 for security updates, then you'll need a new computer.
>forced to use w11
never going to happen
just use linux and wine to shit and emulate everything w10 and w11, ms can eat shit
>t. micro atx motherboardlet
>tfw stuffing three slot gtx 570 into little mobo
sounds like a very pleasurable experience
well, depending on whos the mobo actually
theres also riser cables tho
My haswell cpu finally broke. I put it in another haswell computer, same result: BSOD on windows and complete freezes on linux after a few seconds, wonky boot, bios losing settings, etc, etc. The rest of my computer is fine, so I put the i3 in it until I get desperate enough to upgrade mobo+cpu+ram.
intel cpus die? that's news to me
Same, though forget the bios part, that turned out to be the battery. I swapped my 4690k with an i3 4130 and both computers had the same issues with the 4690k. It was overclocked to 4.2 @ 1.2v for only 2 years.
I would presume the IVRs on haswell are the most likely thing to die. I recently had a 4670 die, which was the second cpu I've ever seen die in like 10 years of working with computers.
didn't Intel abandon the IVR with the Skylake microarchitecture?
yes, it was moved back to the motherboard
>i5 2500 chads, what's a sensible upgrade path? If I get a 3770k, I should be set for another few years right?
I never noticed a difference in performance between my 2700k and the 4960k that I had to replace it with after dumping an entire cup of water into my computer. And it wound up being replaced by a 3600 when I was getting bottlenecked in thread count playing VR.
Don't buy anything less than a 11900k or 5900x. You don't have to be extremely spendy for a good upgrade on current/last gen hardware. If you really want to chump the change for ddr5 then even the 12600k is really good, albeit on a not so great platform for the price and cooling requirements.
>Don't buy anything less than a 5800X or 5900X
That's basically what you're saying. I wouldn't recommend less than 12th gen for Intel (which can be used with DDR4 btw).
>I wouldn't recommend less than 12th gen for Intel (which can be used with DDR4 btw).
yeah but then you're buying an inferior board with memory that is often slower than zen 3. Unless you get a 12900k with its full l3 cache, you shouldn't be considering DDR4 and alderlake.
4u
>yeah but then you're buying an inferior board with memory that is often slower than zen 3
So just buy Zen 3 then. Problem solved. I would never recommend Rocket Lake to anyone.
>I would never recommend Rocket Lake to anyone.
Someone on a budget can scoop a 11900k and z590 for less than a 5800x and be at least 10% faster across the board. Cooler is required on both if you didn't realize so at that point the costs really tip in intels favor.
>Someone on a budget can scoop a 11900k and z590 for less than a 5800x
What the frick? Where? In Europe the 11900K costs almost as much as a 5900X. Are you talking about buying used?
>be at least 10% faster across the board
Maybe (in some workloads), but it will use ridiculous amounts of power to do so.
You can buy a Ryzen Zen1 for a few bucks
A Ryzen 1600 might be a few $, but you'd need a new mobo and ram. I bet a 3770k with a mild overclock outperforms it in stuff like games too.
>3770k
remember to delid it before overclocking
Former 3770k owner here. Delidding is definitely worthwhile due to the shitty thermal paste. It was already dried up and crusty when I did it half a decade ago. The heatspreader on my chip was also substandard, with a significant concavity. My thermals would get worse and worse over time until I repasted. I suspect the thermal paste between the IHS and heatsink was getting 'pumped' out through repeated expansion/contraction thermal cycles. Eventually I ground the IHS down flat and got that sorted.
Q9650 vs I5-2500k which one was better?
I was on a 2600k overclocked to 4.6ghz and upgraded to a xeon 1650 v2 6c 12t @4.5ghz.
I do see somewhat better 1% lows but that's it.
>4790k to 12700k
Nothing but headaches. So many issues that I wish I’d just kept waiting. Just keep using your 2500k, OP. You don’t need to upgrade yet.
>hourly poorgay cope thread
>full of screaming morons
Pottery
>browsing IQfy for any other reason than getting your daily dose of seethe from troony morons shilling linux distros or venting your frustration by making new i5 2500 threads
ngmi
Rocking a 1280v5 at bclk 104 here. Comfy.
did you coffeemod or what board are you using?
Supermicro workstation board for this machine.
Motherboard init is ass slow but that stuff is crazy stable.
too shit a processor, wouldn't bother with all of that. might as well go with e5 v6
is bclk overclocking sensible? I've seen discussion about how it can cause corruption on pcie connected devices, like nvme drives. Is it viable because you're running an old system with no pcie storage devices?
It's about a couple of MHz, no real performance gains unless a process uses all cores.
It can cause corruption and crash pcie devices if they can't handle it. Never had issues with 104, had mystic issues going higher.
Also forced memory to 2400 (the module supports 2666 according to Kingston, but the CPU doesn't handle it)
captcha S0Y0A
just get some chink xeon if you aren't overclocking, last time I checked they were cheap as frick
1285 v6 still above 500usd
>buy more expensive 2.5" 860 evo than 970 evo plus nvme because i'm moronic
was meant for you
at sandy bridge launch there still was ATI branding on gpus. damn.
Bought a 2500k back in 2011 and have buyer's remorse I didn't get a 2600k. Cheaping out at the time has cost me years of future use.
I don't understand. Upgrade? With my 2500k I don't think I'll ever need to
>iirc defrag is not a thing with linux
That's true, but it's because A) multi-gigabyte-per-file pirated media (by far the most common type of file stored by Linux users) isn't really affected by fragmentation that much, and B) many Linux filesystems "reserve" up to 10% for their own odds and ends, so there's workspace for storing files contiguously. Windows users, on the other hand, would rightfully complain at length if they got "out of disk space" when there was still 50GB free on their drive.
>iirc ntfs is quite inefficient in fetching files
Nope. This myth is based upon the terrible NTFS drivers (such as ntfs-3g) FOSS has shat out over the decades, coupled with the fact that Linux doesn't have anything resembling FindFirstFile()/FindNextFile(), which is an order of magnitude faster than stat(). As a result, FOSS liquishit does everything the POSIX way (calling stat() on a loop), so freetards hurrdurr away with NTFS is slow/inefficient.
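The gap is easy to see even from Python: `os.scandir` is built on FindFirstFile/FindNextFile on Windows (getdents on Linux), so per-entry metadata comes back with the listing itself, while the naive loop pays a separate stat() round trip for every file. A sketch:

```python
import os

def sizes_fast(path):
    """One directory enumeration; on Windows each entry's size comes
    back with the FindNextFile data, so entry.stat() usually needs no
    extra syscall (on Linux it still stats, but only once per entry)."""
    return {e.name: e.stat().st_size
            for e in os.scandir(path) if e.is_file()}

def sizes_slow(path):
    """The POSIX-style loop the post is complaining about:
    listdir, then a separate stat() syscall for every file."""
    out = {}
    for name in os.listdir(path):
        full = os.path.join(path, name)
        if os.path.isfile(full):
            out[name] = os.stat(full).st_size
    return out
```

Both return the same answer; on a directory with tens of thousands of files on NTFS the scandir version is the one that doesn't crawl.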
thanks for the update
i didnt look into these matters since i stopped overclocking (15 years ago)
why would someone use ntfs on linux when there is btrfs and ext4
"Because 90% of computers can't read either of those" would be the main reason.
beats me
its still a thing tho
> Windows users, on the other hand, would rightfully complain at length if they got "out of disk space" when there was still 50GB free on their drive.
Manually defragging used to be regular house keeping on Windows, but that was like a decade ago. It's handled in the background now.
>but that was like a decade ago.
yeah, im that old
i am old enough to say win 98SE was the best windows imo.
kek
It's all good friend. Not like anything worth getting excited about has happened in the last decade.
thanks friend.
would have been too busy to notice anyways
More like two decades - it's been a idle-time scheduled task since Windows XP to my recollection.
But, understand that the background defrag has always been a simple file cluster consolidation (because it's way way faster, so it can be cancelled fast if the machine stops being idle): free space remains fragmented, and therefore so will the next file you store.
Defragging made sense when drives were accessed via Cylinder, Track and Sector addressing. The physical layout of the blocks were directly controlled by the OS and so it just made sense for the files to be contiguous. But when we moved to LBA the drive firmware started to make "executive decisions" about how to lay out the data. Sure, originally it was just a mapping to the regular layout, then later it was silent remapping of bad clusters but eventually it evolved into total management of the entire blockmap such that you might defrag your file system and get *logically* contiguous files, but now the physical layout of the file is fragmented...
It wasn't uncommon to find performance dropped after a big defrag. But then again, if you never defragged the additional file system overhead from non-contiguous access would also be a killer... Basically, be thankful modern file systems are designed better. Your old FAT32 drives were broken by design.
LBA doesn't change the fact that hard disks still internally work by CHS, anon.
One could make an argument that the OS being aware of the physical location of a sector on a media could allow for a drive performance profile to be used (or even educated guesses) as to how the data should be arranged for maximum performance (things like track skew, sector interleave, all those tricks we did up until the early 1990s) - but the gains would be infinitesimal today, considering how intelligent a drive is now.
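For reference, the translation being talked about, for a (logical) geometry of HPC heads per cylinder and SPT sectors per track. Sectors are numbered from 1, hence the -1:

```python
def chs_to_lba(c, h, s, hpc, spt):
    """Classic CHS -> LBA mapping: LBA = (C*HPC + H)*SPT + (S - 1)."""
    return (c * hpc + h) * spt + (s - 1)

def lba_to_chs(lba, hpc, spt):
    """Inverse mapping; returns (cylinder, head, sector)."""
    c, rem = divmod(lba, hpc * spt)
    h, s0 = divmod(rem, spt)
    return c, h, s0 + 1
```

With the old BIOS limits of 1024 cylinders, 16 heads and 63 sectors, that mapping is also where the famous ~528MB barrier came from (1024 × 16 × 63 sectors of 512 bytes) until extended translation worked around it.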
i was thinking valves gaydeck low power apu is really not much more than the i7-2600 and would keep these ancient chips alive forever even for gaymers.
but its not half as popular as it seemed to be and no one cares.
>3770k
That is not an upgrade.
>double the amount of threads
>more performance
>better igpu
>3770k
is that the same socket as the 2500? If so, I don't see why not since the 3770k is like $45 on ebay. If it's not, good fricking luck finding a nice mobo for an old socket. That's why buying old CPUs is never sensible unless you already have the board.