Modern languages try to solve real world problems, not being fast at fizzbuzz.
Lumping polymorphism with those other useful things is so moronic
Polymorphism is simply Black person-tier
Why is polymorphism bad?
cniles think type erasure is the only way to achieve polymorphism.
One objectively bad thing is that virtual function calls (which is how subtype polymorphism is usually implemented) are several times slower than static function calls and simple if checks.
Seems trivial, but if you build whole systems around that idea it quickly adds up.
The higher level reason for why polymorphism is bad is that abstractions are bad, because most of the problems you're trying to solve can't be neatly abstracted away behind a neat interface.
For one, it obfuscates what the program is actually doing, which can be immensely frustrating if that is what you're trying to figure out.
Secondly, trying to conform to general abstractions (like polymorphic abstract classes/interfaces/traits) often leads to stupid compromises, because your use case doesn't actually fit that abstraction exactly. It's like trying to fit a square peg into a round hole.
And thirdly, it prevents you from taking advantage of the details of your implementations, since the implementation details are hidden away from you.
The C equivalent to virtual functions is function pointers right? I’m writing games for fun and it’s getting increasingly difficult to avoid those
eh... not really.
Look up what vtables are and you should get how virtual function calls are implemented.
Hint: it involves function pointers. But that doesn't mean function pointers and virtual calls are the same thing.
If you truly do have an unmanageable number of different possibilities for a given behavior, then use a function pointer, or even polymorphism if plain pointers also become too unmanageable.
The thing is that often you only have a closed set of possibilities, which you could easily manage with just enums and switch statements.
But in the "use polymorphism for everything" mindset and the languages that came from it, you're supposed to use polymorphism, which requires that you extract common interfaces from all your possibilities, i.e. multiple levels of indirection and abstraction.
This gets especially bad when your different cases don't naturally have the same interface, since at that point you're supposed to model everything in hierarchies of abstraction, which is pure brainrot, or resort to other workarounds.
That doesn't mean there are no situations at all where it could be useful.
not that anon, but what's the threshold number of cases where a function pointer becomes better than a plain old switch? I get that switches can be super fast; some compilers lower a switch to a jump table (or support computed gotos), which afaik is the fastest approach for a virtual machine implementation, for example.
I think it's the hardware/language's responsibility to have hard and fast rules guiding devs on how to choose one way over the other, especially when there are a dozen ways to achieve the exact same thing; the compiler hiding the implementation detail is just bad. "Just benchmark it" is the easiest answer, but people are lazy and tend to over-engineer their solution. Function pointers/vtables are the most flexible way to implement generic interfaces, and can even allow loading a vtable from a dynamic library, for example. People just want the most frictionless solution even if it's slow or bad. That's how we got JavaScript for backends and Electron app bloat.
>polymorphism means dynamic
void* brainrotted cnile.
> Seems trivial but if you build whole systems around that idea it quickly adds up.
Never a problem, and the lower cost of maintenance makes it worth it.
I don't know what concepts are, but traits and generics are a form of polymorphism
concepts is a c++-ism for trait-constrained generics
>code runs slow as shit
>"noooo i solve real problems"
COPE
C is lower level than anything else that's popular, and if you ever studied the PDP-11 you might be surprised at how similar things are today.
Post code.
>>code runs slow as shit
it doesn't
>Modern languages try to solve real world problems
... which they created while trying to solve real world problems, which they created while...
protip: don't use recursive code unless you're really sure about what you're doing
The operative word here is “try”.
And it does, successfully. I understand arguing semantics is the last grasp at straws for a cnile, but I hate to say it: C is just as relevant as Ruby in 2024.
show zig
C has 50 years worth of optimizations built in, notice fortran is even older and runs just as fast.
Being old is not always an advantage, there is this thing called "bit rot" for a reason.
For instance:
>https://www.phoronix.com/news/GNU-Coreutils-9.5-Released
>why can't modern languages beat C?
They beat C in practical applications; C will always win in meme tests
in your image fortran, Nim, and V all beat C
>Nim
Nim is transpiled to C, not even compiled, transpiled, with all the additional runtime and extra garbage; it can never beat C. It's a shitty scripting language built on top of C.
Hello rustrannny
Dial 8
Post source
>Pascal slower than Rust
That's a joke table right? It's not even slower than C. I get
Pascal: 9.226 s
C: 12.238 s
using the very same code and build commands as those morons; the only difference would be the CPU, mine is like 5 years old.
Because the people that made it had to code it closer to the silicon.
C Is not a low-level language and your computer is not a fast PDP-11.
>Literal rounding error tier difference
>Uses GCC for C, when everything else listed is LLVM
>Doesn't try and keep it fair by using a C frontend that uses LLVM (clang)
All this test tells me, at best, is that gcc's optimizer is better.
So wait, V Lang really is the fastest language? Holy shit what the frick I thought everyone was saying it was a scamlang?
It is a scam language
Explain how it's faster than C then?
Scam magic?
Because bad benchmarks. One fun one had php at faster than c lmao.
>hand-written assembly slower than C
lmao m8
>over 50 years of collective hindsight and better automatic optimisation/vectorisation can beat whatever shitty assembly some smelly neet can write in a lifetime
bad assembly is slow, who would've guessed
>Total
>Time
>Time
This chart has been written by a brainlet and I bet his code is also utter garbage.
Buy an ad bell labs homosexual.
whatever happened to Nim? I love the syntax and it's compiled and fast. yet it doesn't seem like anything has come of it. just popularity contest hijinks like in highschool?
Their flagship web framework doesn't even have a website. Not even an API reference.
lmao
Also their gtk binding maintainer is unable to ship API reference and suggests using autocomplete lmfao
To be fair to them, the API is going to match one-to-one with the C one unless they're doing something to make GTK "more Nim".
Using the GTK devhelp tool is not a bad idea.
do you mean jester? is it just a lack of contribution then?
I had Karax in mind but yea jester too
Isn't Nim supposed to be productive? Why do Rust frameworks ship a website and reference for 0.0.1 frameworks but Nim can't?
>bajillion half assed garbage collectors
>each GC comes with its own bag of bugs
>STD library breaks the moment you change GC
>claims to be a system language yet the language will break on a fundamental level the moment you decide to turn the GC off
>cannot build a program and dynamic libraries with Nim because nimRTL is broken, the cause is, you guessed it, GC, so you have to use different languages for it to work (lmao)
>async is broken
>people are so tired of it they ditched it for odin and other languages
>most libraries are unmaintained and broken
>even if you want to update said libraries, they rely on so much macro magic it's basically written in arcane DSLs
>have fun debugging macros
It's not even death by a thousand cuts, it's born crippled, moronic, and lives 24/24h on life support.
the surprising thing here is how fast cython is
Source: https://github.com/drujensen/fib
This is really bad bait. Table is sorted by compile time + run time. I had a quick glance at C and Ada and they're not even the same implementation, C version uses unsigned ints (wrapping implied) and the Ada one uses signed ints which results in slightly different codegen. Didn't bother looking at any of the other snippets because the author is clearly moronic.
There's nothing wrong with using compile time + runtime, but it's mostly useful for comparing against dynamic langs.
It's just the way the table is sorted. The data is all there.
>C version uses unsigned ints (wrapping implied) and the Ada one uses signed ints which results in slightly different codegen
Maybe Ada is faster with signed? Fib is always positive so unsigned makes the most sense. It's FOSS so you're free to remix the bench yourself and try things out :^)
gnat uses the gcc backend and it produces identical codegen if you change either version to match the other.
ASM is the future.
I program using AI now.
I basically use AI to write hyper efficient versions of what I want in assembly.
There is literally no point in learning languages anymore.
Just tell the AI to program it in assembly for that specific task. As efficiently as possible.
If you are learning or making or improving languages in 2024, you are doing it wrong.
I would like to know what ai and programs you are writing and using that you get useful ASM.
LLMs are notoriously bad at anything remotely complex in ASM. I have yet to see an LLM not spit out garbage for any embedded AVR/ARM system, and x86 becomes an absolute shitshow beyond really simple pipelining or branching in babby's first ASM program.
Also
>As efficiently as possible.
Unless you are legitimately very talented and have been on the ASM grind for a long time, you are not writing better, faster ASM than compilers in 99% of cases, since if you figure out some shortcut or optimization, someone just needs to add it to the compiler and then it once again spits out optimal code. Obviously there are times when compilers are dumb, but it is very rare.
ASM is only a go-to language when you cannot cross-compile to your system, so prototyping with FPGAs only, and even then you can usually cross-compile; it may just be suboptimal for a time. And if you are actually engineering some new chip, you will build out libraries and APIs for it so humans can actually program on it and write useful code.
A lot of the time compilers do output suboptimal assembly. Even today plenty of crypto is done in assembly, since you need it constantly and everywhere, and assembly gives it a big speed increase.
if you're talking about the dedicated AES and other crypto instructions in x86, for example, those exist almost entirely so devs, who shouldn't ever be trusted with crypto, wouldn't frick it up, and so that CPUs could do crypto quickly, since the crypto instructions feed directly into fast vector circuits built for the job. But this is not the same thing as compilers spitting out suboptimal code; it's that suboptimal code is being written in the first place. It's like writing your own square root function from bitwise operators in C: of course it's going to be almost always worse than (or at best equal to) the x86 instruction.
Compilers rarely spit out suboptimal code for the high-level language they are given. At the end of the day they aren't magic: garbage in, garbage out; good code in almost always means good machine code out.
It's surprising to see Cython that up the list
Judging by your picture Nim and V are faster than C?
haskell beats C though
https://research.microsoft.com/en-us/um/people/simonpj/papers/ndp/haskell-beats-C.pdf
>fortran is faster
>but noooooooooo my coooompile time
I hate you lying homosexuals so much
>C shill thread
>expects any level of honesty
>nim compiles to C
>is faster than C
who was the moron who made this
C was done when people cared about computational resources and knew machine code.
Current languages care about abstractions. It is very difficult to visualize what the machine is doing from those. The C programmer inherently knows where the code is going to be inefficient and can improve it in time.
One of the best examples is string manipulation. It is a pointer to a character array for a reason in C.
constexpr C++ beats C
Why does it take 30 microseconds if it's constexpr? Shouldn't it be 0?
You need to enter the function, put the constant into rax or whatever, exit the function; and there's probably some timing jitter.
Interesting.
Would consteval make any difference here?
My question is, what the FRICK is wrong with COBOL?
It was designed by a woman.