Do I really need to use Windows 11 for e-cores? Can't I just use Windows 10?
sir efficient core is future or something sir x86 is totally not on its way out and desperately trying to become ARM. x86 still strong and efficient sir!!
so when are we actually going to get our fricking RISC-V processors? I don't want pozzed computing
XP is king
more like ree cores
No, you need Windows 15 + Enterprise Abo (subscription), an online account as well as a Quantum Computer, the soul of a firstborn child and the blood of a virgin
Average scrum meeting at M$.
>Enterprise Abo
e-cores aren't a bad idea if the OS and applications know how to manage them well (not very easy)
i just hate how intel dares to market "8 pcores + 16 ecores" as "24 cores" when in reality it's an 8-core processor with some haswell cores taped on
glad to be a xeon w enjoyer who gets pcores only
common xeon w
It got good up until Alder lake, then they just stagnated (again (again))
>Alder lake
>still ecucked
>non monolithic high latency arch
>atrocious input latency on top
>good
moron
works on my 12400f
>atrocious input latency
Source?
NTA,
>testing input latency on the desktop
lmao. you realize windows is applying all power savings here right?
>latencymon
testing for 1 minute is moronic as frick. also NVIDIA will spike if you don't have power states disabled which is what he is seeing. He doesn't show the processes or drivers tabs...
He banned me from his channel from this video btw. Was subbed for 10 years. Wrote a large comment talking about his testing methodology.
>He banned me from his channel from this video btw. Was subbed for 10 years. Wrote a large comment talking about his testing methodology.
why do gays do this?
they can basically make a whole new video built on talking about well thought out comments.
youtubers are fricking cucks. can't imagine actually watching any.
>He banned me from his channel from this video btw. Was subbed for 10 years. Wrote a large comment talking about his testing methodology.
You need to make a video exposing him then
Stay away from his NTLite ISO as well. He removed too much shit and creates problems. Can't believe people even recommend that.
it has atrocious core-to-core latency, but practically, you have to write some mega cursed code that causes a lot of cache thrashing before it becomes a problem, and your problem is probably not very parallelizable anyhow.
the "infinity fabric" AMD shit is much better in this case, but I don't know what gaymd trades for it. bing bus is pretty easy to understand from a CPU design perspective, but it does suck, but I can't fathom where you'd shove all the machinery to make it p2p like amd did.
where is your source?
And these are the multi-chiplet problems that AMD suffers from, which is why you want everything on 1 chiplet when playing vidya.
this is what I get on cuck cores.
Source?
my own computer using c2clat.
What is the setup like? Have you done anything to Windows, the CPU, or the power plan?
I don't use Windows and use the default Intel CPU Governor in Linux.
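if anyone wants to rule out governor/power-state noise before trusting numbers like these, a rough sketch (assumes the cpupower utility is installed; package name varies by distro):
# check what you're currently on
cpupower frequency-info | grep governor
# pin every core to the performance governor for the test
sudo cpupower frequency-set -g performance
./c2clat
# put it back afterwards (intel_pstate usually defaults to powersave)
sudo cpupower frequency-set -g powersave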
Lol linux is garbage
ok moron.
Why
so it's basically the same, but more cuck cores?
ring bus bros?
>core to core latency is worse within an E core cluster
what the frick is intel doing?
iirc the reason is that e-cores need to go through the L3 ring bus to communicate with each other, but they're clocked slower than p-cores and take a longer time to do it
Why is the color scale like this man
because that's what c2clat chose and I don't feel like amending it.
Can someone more knowledgeable tell us what are the best options currently regarding this latency problem?
I don't know about intel but for amd tightening your ram timings works well
This is about inter-core latency, your RAM timings have nothing to do with it.
for cross-CCX/CCD communication you have to go over the infinity fabric, and the fabric clock is tied to your memory clock, which is why ram tuning moves those latencies. this is fairly esoteric knowledge, I'm trying to find a source for you
Oh right, that is true. This shouldn't be an issue for a 5500 or most other Ryzens though, since zen 3 unified the CCXs on a CCD.
Yes, it's an issue on zen 2 (3600, 3800x), not an issue on zen 3 (5600, 5800x).
always an issue on dual ccd chips
I couldn't find the source, sorry. This was back in 2020 or 2021 when I was tuning my zen 2. There was a guide about using the SiSoftware Sandra inter-core test to optimise ram timings for best performance.
I'm so fricking tired of these fake-cores, neural-units, high latency, cpu bugs and exploits, fake-turbo-mode frequency bullshit, where is the actual good stuff?
Do I have to buy server CPUs or what?
+gay hyperthreading*
Arrow Lake will be ditching that, at least.
AMD has all the momentum. Arrow Lake won't be nearly good enough to make a dent.
Core to core latency is a bigger problem on server chips.
Just get a 10900k or 7800x3d then
There are still plenty of good options around and will be that way until amd starts shoving e-cores down our throats
I think the answer is: buy AMD and just use taskset to cut out any cross-core-complex communication, like the sketch below. the inter-core latency over the infinity fabric within the "complex" seems to be about half that of the ringbus, or better
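a minimal sketch, assuming a dual-CCD chip where logical CPUs 0-7 plus their SMT siblings 16-23 are CCD0 (verify with lscpu -e before copying; the game path and PID here are placeholders):
# confine a game to CCD0 so its threads never hop the fabric
taskset -c 0-7,16-23 ./mygame
# or re-pin something already running by PID
taskset -cp 0-7,16-23 12345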
Buy a 7800X3D and don't worry about it. If you need extra multi-core performance, buy a 7950X3D, flip the BIOS switch to tell it to always prioritize the cache cores so your games run there automatically and then don't worry about it.
see
see what? I'm on Alder Lake and it works fine. The cuck cores boost my cooooooompiling Gentoo a lot. I wish I had more cuck cores for compiling or a cuck core coprocessor if I could just offload gcc workloads to it.
nobody cares about shared memory
how do you do any parallelism then? you have to share something between tasks eventually.
Alder lake (and its raptor lake refresh) was the last monolithic arch, meteor/arrow lake and forward is tiled
>atrocious input latency on top
source: my ass
its real though
I've used alder and raptor lake since 2020 and they lag just from browsing through your files on an nvme, compared to the 9900k I used before
>anecdotal evidence
It's either you have a shitty SSD or it's already dying
Also where's the input latency on that? LMAO
this is pure unadulterated cope
I swapped back to the 9900k and its been smooth ever since
read again dumb dumb
>Raptor lake
>2020
Stop lying LMAO
>I've used alder and raptor lake since 2020 and they lag just from browsing through your files on an nvme, compared to the 9900k I used before
I literally haven't experienced this. Just use Linux and you won't have this problem.
When it comes to actual CPU design, it is a cheap trick. It doesn't take a lot of skill, and it can increase both efficiency and multicore performance quite a bit in benchmarks.
>OS and applications know how to manage them well (not very easy)
Yep. Which we on PC, so far, don't have. So they're not very useful for anything but all-core performance, while they hamper few-core performance.
There is also the fact that on phones and tablets, they have a semi-sleep state (screen off) where the user doesn't expect performance and it will last minutes to hours. It's very easy to manage power states then: when the screen is off, use only efficiency cores. For a computer, there is no such clear use case.
I guess it can be useful for file indexing.
Nailed it.
Efficiency cores are fine for background tasks that you aren’t waiting on, like Spotlight indexing and analyzing faces in your photos and running JavaScript in background tabs
Keeping the desktop cool means it doesn’t need to ramp up the fans and suck in more (slightly) dusty air that you’ll need to blow out with canned air later
>Efficiency cores are fine for background tasks
I agree for background file and network loading, but you don't need more than 2 e-cores.
The real reason they push this shit is only for synthetic benchmarks.
compiling, encoding, and other heavily multi-threaded tasks benefit from them
just because you don't do any serious work on your computer doesn't mean other people don't
>Xeon
You have less L3 cache with those, don't you?
the w7-2495X i run has 45MB of L3 cache, so a bit higher than a 14900K (36MB)
Why don't they just scrap the shitty cores, put like 500mb of L3 cache on an 8 core and see what happens? Is it because it doesn't look as good in benchmarks?
>500mb of L3 cache
that would be a frickhuge die. SRAM is frickhuge.
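napkin math, taking AMD's 7nm V-Cache as a density reference (~64MB of L3 on a die of roughly 36-41mm²): 500MB is about 500/64 ≈ 8 of those, call it 300mm²+ of pure SRAM, bigger than the entire ~257mm² raptor lake die before you've placed a single core. that's why nobody ships it.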
>he boughted post-10th gen intel
goyslop deluxe
I thought intel chips made a big performance jump right after 10th gen, correct me if I'm wrong.
12th gen was the first big performance uplift in a long while. Part of that came from introducing E cores but it was mainly Intel finally figuring out 10nm.
>>he boughted post-10th gen intel
did not though, for reasons (mainly hackintosh).. also, I boughted the 10th gen when 12th gen was already out, was a good discount, almost half what the 12th gen would have cost
e-cores more like reeeeee-cores
e-cores more like e-ck cores amirite
Apparently some of the kernel / scheduler has been backported but not all.
Personally I haven't noticed a difference between Win10 and Win11 but I don't run "balanced" power plan and have C-States disabled anyway.
Damnn, so many PC spergs ITT seething about e-cores for no good reason.
e-cores are good if you cooooooooooooooommmmmmmpile a lot of code since the cuck cores have all the fast functionality needed for compiler bullshit.
win 10 is starting to become a legacy OS. if you want the OS to support efficiency cores, try updating your system
haha what the frick is e-cores
-sent from windows 7
extra cores. your box probably has at most 8 old cores and no e-cores. now add the fact that there are power cores too, and your cpu is obsolete.
Cores with lower clock speed and no multi-threading. They can most likely outperform your current e-waste at lower TDP.
https://www.intel.com/content/www/us/en/support/articles/000091896/processors.html
IDGAF if my computer uses 10w or 100w, My i7 2600 lives another day (until I get around to replacing it lol, but it's good enough for now)
>IDGAF if my computer uses 10w or 100w
Your parents most likely pay the electricity bill, so it is obvious that you don't care.
>i7 2600
https://www.cpubenchmark.net/compare/1vs5157/Intel-i7-2600-vs-Intel-N100
a dirt-cheap beelink with an N100 is running circles around your e-waste while drawing almost 16 times less power. Not to mention your chip lacks modern CPU instructions and codecs.
geniunely moronic or falseflag, not sure at this point
>expects me to keep up with tech news (though clearly i don't even keep up with microshart releases)
>Six months from now
Okay guys here is why amd cuck cores are actually good and are a great innovation.
Unironically yes, gaming feels smoother on Win11.
you "need" Windows 11 for a lot of basic shit, like a compositor that isn't complete shit and properly unredirects for borderless windowed, remembering window positions on display geometry changes and more. Win 10 is shit, Windows 11 is wonky, but a mild improvement.
btw I use GNU + Linux so I don't have these wincuck problems. at least you gays are getting features GNOME 3 has had for a decade though. congratulations I guess.
use windows 11 to get advertisements fed directly into your system
Let me install sisoftware sandra in lutris real quick and I'll show you ass to ass latency on a monolithic laptop desktop cpu
why?
the shitsoft nonfree garbage will just be using wine-implemented primitives based on pthreads / c++ threads, just like c2clat on linux.
also c2clat is foss.
https://github.com/rigtorp/c2clat
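for anyone wanting to reproduce, roughly (it's a single C++ file; this follows the repo README, exact flags may differ):
git clone https://github.com/rigtorp/c2clat
cd c2clat
g++ -O3 -DNDEBUG c2clat.cpp -o c2clat -pthread
./c2clat                   # prints the core-to-core latency matrix in ns
./c2clat -p | gnuplot -p   # or pipe the plot-formatted output straight into gnuplot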
Thanks I'm moronic
in case you don't know how to use gnuplot, you might need to adjust the c2clat -p output.
e.g. to output to a file:
# terminal type is the raster format
# need to manually specify size or you'll get numbers badly spaced in the cells.
set terminal png size 800, 600
# filename here
set output "muh latency.png"
# this puts 0 at the top left corner, like the other graphs
set yrange [*:*] reverse
# gnuplot> help is your friend
also I'm too lazy to fix the palette.
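in practice that's something like this (hand-wavy, you're just prepending those set lines to the script c2clat emits):
./c2clat -p > lat.gp
# paste the set lines above the plot command in lat.gp, then:
gnuplot lat.gp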
I tried to fix yrange but I couldn't figure it out
It's a ryzen 5500
These are at least twice as bad as the anandtech review numbers. They say they use an "in-house test".
pthread and Linux semantics might not be the same as Windows. Windows synchronization primitives are like a confusing zoo though. Hell, Windows added Linux futex style ones too in like, Windows 8? 10? I can't remember. They only did it though because C++ Threads basically forced pthread semantics on everyone.
I want to add gayOS has futexes too because of C++ and I imagine the homosexuals at Applel still seethe about it because they don't expose it with a C ABI.
>These are at least twice as bad as anandtech
>twice as bad
as long as they're consistently "off", it probably means they're valid with some hidden cost, or an accidental discounting of cost, that's a constant factor.
picrel
we can't compare c2clat to anandtech numbers
e-cores are a good idea and i hope amd copies them.
They already did. You can go out and buy an amd cpu with C cores I think they call them
It's only on low end mobile chips and certain servers for now. Most next gen mobile chips will have them but no desktops.
I like them too, and AMD's implementation is better than Intel's.
>clickbait outrage farm youtube thumbnail
>moronic OP
>half the thread is filtered out by buzzword spam
>hurr durr why do new thing require modern os why 10 year old os not support new thing
I miss the time before reddit redefined the IQfy experience to be irony-poisoned toddlers being all like "haha wouldn't it be funny if the videogame board didn't play videogames? wouldn't it be funny if the technology board were full of technology haters and nitwits?"
10 year old OS can use new hardware if you make it so. And XP will always be the best os.
either Intel or Microsoft is unable to get their shit together with the scheduler.
This is bad news for future chiplet designs btw.
In 5 years we're back at 10900k-like CPUs with just big monolithic P-cores because software just can't get its shit together
Literally no one wants e-Cores or this biglittle bullshit except the marketing homosexuals at Intel who want to false advertise core counts.
>In 5 years we're back at 10900k-like CPUs with just big monolithic P-cores
Good, E-cores are fricking stupid and no one cares if they go away.
I want it and everyone is increasingly doing it. Cry more.
>literally nobody wants low power cores that system tasks and other lightweight applications can run on
>mommy pays the electric bill and won't complain when I leave the computer on high performance mode and mine shitcoins all day
>literally nobody wants low power cores that system tasks and other lightweight applications can run on
>t. marketing executive
P-cores do it better and do it without causing microstutters
>In 5 years we're back at 10900k-like CPUs with just big monolithic P-cores
I hope this is true. I don't care what anyone says: e-cores sound like a good idea, but they suck, and at the end of the day the scheduler is just making random guesses based on software that's proven unreliable.
so just use Loonigs?
I used to have random, well-timed stutter in DOOM Eternal due to the cuck cores, and after a while it disappeared and I didn't need taskset to lock it to only P-cores anymore. Linux seems to have it working now.
e-cores are like pajeets. it should be rebranded to Intel Jeet-Cores
they are bundled with the hard working cores and only do basic shit and lazy around while everyone else does the hard work for them. and they out number the actual good workers. sirs i cannot do this hard task you do for me sir please, please sir.
but when it comes to anything inclusive they pretend to be just as important as the real cores. much like intel marketing the e-cores as full on p-cores (8p + 16e = 24p!!) SIRS I AM PART OF THIS GROUP LIKE YOU SIRS DO NOT EXCLUDE
i for one will never redeem intel cores
>tfw STILL no 1P+4E+256EU laptop processor
Why does Intel hate money?
>256EU
Aren't those GPU cores? Why?
Because reasonably powerful integrated graphics is good for a laptop if you want good battery life and the ability to use it for more than web browsing at the same time.
I agree in principle, but I suppose we have different definitions of what reasonable is. Current intel and AMD offerings have plenty of iGP power and next gen AMD will bump it up significantly.
Current Intel and AMD offerings have an excessive amount of CPU power robbing die space and TDP from their IGPs. They treat core count as the only important feature and graphics as an afterthought, when the reality is that most people aren't doing desktop-level multitasking on their laptops, so all you really need is one powerful core for working on the current task and a handful of little cores for offloading the tasks from all of the preinstalled shitware so you can actually use the powerful core.
>Current Intel and AMD offerings have an excessive amount of CPU power
ok moron go doomscroll
Enjoy your 5 fps in a 20 year old game, or alternatively your 30-pound ""desktop replacement"" laptop.
Current AMD laptop chips are pretty well balanced in terms of CPU cores and iGP. Rembrandt was an underappreciated upgrade and Strix Halo seems to be another massive leap.
I'd be interested to see Intel improve their iGPs. Didn't they segment their chips by graphics separately in 10th and 11th gen?
I'm curious what you think "most people" need from a laptop chip, though.
>gayming
Ah right, you're a manchild.
>gaming is for kids
>I'm a real adult
>I do real adult things on the computer
>like take screenshots of a terminal and argue on IQfy
manchild site
why not just buy a 7800x3d and live like a king?
>zen5 25% ipc uplift hype
because 9800x3d will be out soon(tm)
The problem with e-cores is there are too many of them compared to P cores in most any CPU sold. The scheduler is constantly flopping around trying to figure out what application or resource needs the P cores and fricks the shit half the time.
They are fine for office work, shit for literally anything task intensive like Teams
The point of E cores is that there's a shitton of them compared to P cores. They're there for heavy multithreading.
Yeah I get that, however the ratio is clearly not balanced
I don't really agree, 4-8 P cores should be enough for applications which depend on single core perf. If the scheduler is putting foreground tasks on E cores that's a problem with the scheduler.
That's great anon, my i7-1225 has 2 fricking P cores and 8 e cores. I'm not just simply talking about the desktop market here, laptops are where the P/E core ratios get completely fricked. Luckily my new work PC is an i9 14900 with 8 P cores but for most end users they are stuck with some wonky 2P core 4-8e core setup
1255*
https://www.intel.com/content/www/us/en/products/sku/226259/intel-core-i71255u-processor-12m-cache-up-to-4-70-ghz/specifications.html
Ah, you're talking about laptop U chips. I see your point but Intel's P cores guzzle power so I can see why they leaned more on E cores. Previous i7s at that wattage capped out at a mere 4 cores.
>laptop
>form factor designed to reduce power usage so billy mchomosexualon and jane janeson don't whine about their laptop "only" lasting 5 hours at the coffee shop without a charger
>confused and angry about the greater reliance on e-cores
So buy a fricking desktop if you want more P cores. Dipshit. moron. Absolute troglodyte.
You're on a new architecture, so the i7 here doesn't mean shit, it's not the same meaning it had before they dropped the core nomenclature after 14th gen. You've got two hyperthreaded cores that can boost to 4.7GHz, and eight e-cores that can boost up to 3.5GHz, which is faster than the turbo speed of an actual high-end laptop processor from a decade ago that couldn't last more than 2 hours under any form of load that actually uses power. All that for 15-55 watts.
Anon doesn't even need a desktop, a P or H class laptop chip will offer plenty of cores.
But frankly if you're buying a laptop you want AMD instead, Intel's node disadvantage will always burn them.
I would have loved to go AMD but we needed some of the security/BIOS settings for encryption+wiping and out of band management for remote wipe only intel had. Not to mention intel laptops were cheaper by about 50 dollars a pop, spread over +200 devices you can easily understand why that would matter.
>So buy a fricking desktop if you want more P cores. Dipshit. moron. Absolute troglodyte.
Anon, the workforce is hybrid now at many companies; we order laptops because it's far easier to manage WFH staff, mobile staff, and in-office staff than desktops. Most VARs give you sizable discounts when you buy in bulk, and from a business standpoint it makes sense to order 200 during a HW refresh, image them all, and go to town.
I'm not talking about (you)r home use case on a scale of (you), for the workforce laptops often make sense due to them being the second monitor+mobility+cost being not that much more+battery for any power loss during bad weather. If you're ordering purely for home use great order specifically what you need, however when you are thinking company wide standardization the laptop P/E core ratios fricking suck still.
cool it with the insults schlomo, your kippah is showing
>y-you're israeli if you know anything about technology
This is why /misc/troons need a GR15.
>>y-you're israeli if you know anything about technology
probably true given israelites are high IQ.
2P+8E is ideal.
>The scheduler is constantly flopping around trying to figure out what application or resource needs the P cores and fricks the shit half the time
Bulldozer moment
I have a resource monitor visible on my 2nd monitor at all times. Not once has an e-core been chosen for any CPU intensive task. As far as I can tell, the e-cores don't start being used unless I'm doing something HIGHLY threaded, like video encoding.
THE SIMPLE FACT IS NO
NO YOUR WINDOWS 10 COMPUTER WONT BE ABLE TO RUN WINDOWS 11, JUST UPGRADE ALREADY SO YOU CAN RUN WINDOWS 11
what's the oldest and newest desktop intel cpu worth getting?
and
what's the oldest and newest laptop intel cpu worth getting?
>what's the oldest and newest desktop intel cpu worth getting?
4004 and 14900KS
>what's the oldest and newest laptop intel cpu worth getting?
8086 and 185H
why does my 5600X idle at 25W on the SoC? what is that little guy doin?
I'm running W10 22H2 with a 14900k, same shit as W11 WWH2 with a better interface. There's ZERO reason to use W11.
>WWH2
22H2 my bad, typo
E-cores are just there to look good in Cinebench.
e cores are actually awesome.
Think about what a GPU is. It's a large number of tiny cores, even smaller and less capable than e cores.
Have you noticed that GPUs are great for a lot of use cases?
What the frick does E stand for in "E-core"?
estrogen
https://letmegooglethat.com/?q=what+does+e+in+e+core+stand+for
why the frick do desktop cpus have cuck cores anyway
on laptops sure it makes sense but desktops?
>but muh power bill
just turn off your moronic meme overclocks and your shit will go down from 400w to 50w
it's getting really old repeating the same basic shit.
4 e cores take as much die space as a single p core. 4 e cores outperform a single e core in multi-threading
any other moronic questions idiot?
*4 e cores outperform a single p core in multi-threading
4 ecores = 1 2500k
I'm thinking ecores are based
>4 e cores take as much die space as a single p core
It's not quite that good. You can look at die shots to see that an E core cluster is slightly fatter than a P core. But they do have denser multithreading performance.
Yeah... 4x e-cores take up the space of 1.33 p-cores
They could've made a 13 p-core cpu with full avx512 support and a simple scheduler instead of the 8+16 housefire
its 1.2x the space, way to just make shit up
>They could've made a 13 p-core cpu
way to continue just making numbers up
why are you so full of shit
they could have made a 12 p-core, yeah. but then it would have worse multi-threading than the way they did it.
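napkin math with the ~1.2x figure: 16 e-cores = 4 clusters ≈ 4 x 1.2 ≈ 4.8 p-cores of area, so 8P+16E spends about 12.8 p-cores' worth of silicon, which is where both the 12 and the 13 p-core claims come from. but a 4 e-core cluster benches somewhere around 2x a single p-core in all-core throughput, so the hybrid layout gets you roughly 8 + 4x2 = 16 p-core equivalents of MT grunt (give or take) out of that 12.8 p-cores of area. that's the entire trade.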
you sound autistic ngl
not an argument
So is disabling ecores a meme or not for gayming?
I've got a 12600k
first of all, you're a fricking moron, but it's like a 20-30% speed boost on 11
if you don't give a shit then just use iot ltsc 2021, if you do, still use iot ltsc 2021 but wait for iot ltsc 2024 to officially release.
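if you want to A/B it without a BIOS trip, you can fake "e-cores off" with affinity. a sketch, assuming the usual 12600k numbering (6 P-cores with HT = logical CPUs 0-11, 4 e-cores = 12-15; confirm in Task Manager or lscpu first; game.exe is a placeholder):
:: windows cmd: run the game on P-cores only (hex mask FFF = CPUs 0-11)
start /affinity FFF game.exe
# linux equivalent
taskset -c 0-11 ./game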
the question is do they work on linux?