why can't modern languages beat C?

  1. 2 months ago
    Anonymous

    Modern languages try to solve real-world problems, not to be fast at fizzbuzz.

    • 2 months ago
      Anonymous

      Lumping polymorphism in with those other useful things is so moronic.

      Polymorphism is simply garbage-tier.

      • 2 months ago
        Anonymous

        Why is polymorphism bad?

        • 2 months ago
          Anonymous

          cniles think type erasure is the only way to achieve polymorphism.

        • 2 months ago
          Anonymous

          One objectively bad thing is that virtual function calls (which are how subtype polymorphism is usually implemented) are several times slower than static function calls and simple if checks.
          Seems trivial, but if you build whole systems around that idea it quickly adds up.

          The higher-level reason polymorphism is bad is that abstractions are bad, because most of the problems you're trying to solve can't be neatly hidden away behind a tidy interface.
          For one, it obfuscates what the program is actually doing, which can be immensely frustrating if that is what you're trying to figure out.
          Secondly, trying to conform to general abstractions (like polymorphic abstract classes/interfaces/traits) often leads to stupid compromises, because your use case doesn't actually fit that abstraction exactly. It's like trying to fit a square peg into a round hole.
          And thirdly, it prevents you from taking advantage of the details of your implementations, since the implementation details are hidden away from you.
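
          For reference, here's a minimal C sketch of roughly what a virtual call lowers to, versus a direct call. All names (Shape, ShapeVTable, area) are invented for illustration:

          #include <stdio.h>

          typedef struct Shape Shape;

          typedef struct {
              double (*area)(const Shape *self);  /* one slot per virtual function */
          } ShapeVTable;

          struct Shape {
              const ShapeVTable *vtable;  /* every object carries a vtable pointer */
              double w, h;
          };

          static double rect_area(const Shape *s) { return s->w * s->h; }

          static const ShapeVTable rect_vtable = { rect_area };

          int main(void) {
              Shape r = { &rect_vtable, 3.0, 4.0 };
              /* "virtual" call: load the vtable pointer, load the function
                 pointer, then an indirect call the CPU has to predict */
              printf("%f\n", r.vtable->area(&r));
              /* "static" call: a direct call the compiler can inline */
              printf("%f\n", rect_area(&r));
              return 0;
          }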

          • 2 months ago
            Anonymous

            The C equivalent to virtual functions is function pointers, right? I’m writing games for fun and it’s getting increasingly difficult to avoid those

          • 2 months ago
            Anonymous

            eh... not really.
            Look up what vtables are and you should get how virtual function calls are implemented.
            Hint: it involves function pointers. But that doesn't mean function pointers are the same thing.
            If you truly do have an unmanageable number of different possibilities for a given behavior, then use a function pointer, or even polymorphism if the pointers also become too unmanageable.

            The thing is that often you only have a closed set of possibilities, which you could easily manage with just enums and switch statements.
            But in the "use polymorphism for everything" mindset and the languages that came from it, you're supposed to use polymorphism, which requires that you extract common interfaces from all your possibilities, i.e. multiple levels of indirection and abstraction.
            This gets especially bad when your different cases don't naturally have the same interface, since at that point you're supposed to model everything in hierarchies of abstraction, which is pure brainrot, or resort to other workarounds.
            That doesn't mean there are no situations at all where it could be useful.
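
            A minimal sketch of the closed-set version (invented names), where the full set of cases is visible to the compiler and no interface extraction is needed:

            #include <stdio.h>

            typedef enum { SHAPE_RECT, SHAPE_CIRCLE } ShapeKind;

            typedef struct {
                ShapeKind kind;
                double a, b;  /* rect: a=w, b=h; circle: a=radius */
            } Shape;

            static double area(const Shape *s) {
                switch (s->kind) {  /* compiler may emit a jump table, and can
                                       see and inline every arm */
                case SHAPE_RECT:   return s->a * s->b;
                case SHAPE_CIRCLE: return 3.14159265358979 * s->a * s->a;
                }
                return 0.0;
            }

            int main(void) {
                Shape shapes[] = { { SHAPE_RECT, 3.0, 4.0 }, { SHAPE_CIRCLE, 1.0, 0.0 } };
                for (int i = 0; i < 2; i++)
                    printf("%f\n", area(&shapes[i]));
                return 0;
            }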

          • 2 months ago
            Anonymous

            Not that anon, but what's the threshold number of cases where a function pointer becomes better than a plain old switch? I get that switches can be super fast; some compilers support computed gotos, which compile dispatch down to a jump table, which afaik is the fastest option for a virtual machine implementation, for example.

            I think it's the hardware/language's responsibility to have hard and fast rules to guide devs on how to choose one way over the other, especially when there are a dozen ways to achieve the exact same thing; the compiler hiding the implementation details is just bad. "Just benchmark it" is the easiest answer, but people are lazy and tend to over-engineer their solutions. Function pointers/vtables are the most flexible way to implement generic interfaces and can even allow loading a vtable from a dynamic library, for example. People just want the most frictionless solution, even if it's slow or bad. That's how we got JavaScript on the backend and Electron app bloat.
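
            Since computed gotos came up: a minimal sketch of that dispatch style for a toy bytecode loop. This uses the GCC/Clang "labels as values" extension (&&label), not standard C, and the opcodes are invented:

            #include <stdio.h>

            int main(void) {
                enum { OP_INC, OP_DEC, OP_HALT };
                static const unsigned char code[] = { OP_INC, OP_INC, OP_DEC, OP_HALT };
                static void *dispatch[] = { &&op_inc, &&op_dec, &&op_halt };
                const unsigned char *pc = code;
                int acc = 0;

                goto *dispatch[*pc++];  /* jump straight into the first handler */
            op_inc:
                acc++;
                goto *dispatch[*pc++];  /* one indirect jump per handler gives the
                                           branch predictor more context than a
                                           single shared switch jump */
            op_dec:
                acc--;
                goto *dispatch[*pc++];
            op_halt:
                printf("%d\n", acc);    /* prints 1 */
                return 0;
            }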

          • 2 months ago
            Anonymous

            >polymorphism means dynamic
            void* brainrotted cnile.
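
            For what it's worth, a minimal sketch of static (compile-time) polymorphism in plain C11 via _Generic: no void*, no vtables. The names (print_int, etc.) are invented for illustration:

            #include <stdio.h>

            static void print_int(int x)       { printf("int: %d\n", x); }
            static void print_double(double x) { printf("double: %f\n", x); }

            /* resolved entirely at compile time from the argument's type */
            #define print(x) _Generic((x), int: print_int, double: print_double)(x)

            int main(void) {
                print(42);    /* calls print_int: a direct, inlinable call */
                print(3.14);  /* calls print_double */
                return 0;
            }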

          • 2 months ago
            Anonymous

            > Seems trivial but if you build whole systems around that idea it quickly adds up.
            Never a problem, and the lower cost of maintenance makes it worth it.

      • 2 months ago
        Anonymous

        I don't know what concepts are, but traits and generics are a form of polymorphism

        • 2 months ago
          Anonymous

          concepts are a C++-ism for trait-constrained generics

    • 2 months ago
      Anonymous

      >code runs slow as shit
      >"noooo i solve real problems"
      COPE

      >C Is not a low-level language and your computer is not a fast PDP-11.

      C is lower level than anything else that's popular, and if you ever studied the PDP-11 you might be surprised at how similar things are today.

      • 2 months ago
        Anonymous

        Post code.

      • 2 months ago
        Anonymous

        >>code runs slow as shit
        it doesn't

    • 2 months ago
      Anonymous

      >Modern languages try to solve real world problems
      ... which they created while trying to solve real world problems, which they created while...
      protip: don't use recursive code unless you're really sure about what you're doing

    • 2 months ago
      Anonymous

      The operative word here is “try”.

      • 2 months ago
        Anonymous

        And they do, successfully. I understand arguing semantics is the last straw a cnile can grasp at, but I hate to say it: C is about as relevant as Ruby in 2024.

  2. 2 months ago
    Anonymous

    show zig

  3. 2 months ago
    Anonymous

    C has 50 years' worth of optimizations built in; notice Fortran is even older and runs just as fast.

    • 2 months ago
      Anonymous

      Being old is not always an advantage; there is this thing called "bit rot" for a reason.

      For instance:
      >https://www.phoronix.com/news/GNU-Coreutils-9.5-Released

  4. 2 months ago
    Anonymous

    >why can't modern languages beat C?
    They beat C in practical applications; C will always win in meme tests

  5. 2 months ago
    Anonymous

    in your image Fortran, Nim, and V all beat C

    • 2 months ago
      Anonymous

      >Nim
      Nim is transpiled to C, not even compiled, transpiled, with all the additional bloated runtime and extra garbage; it can never beat C. It's a shitty scripting language built on top of C.

      • 2 months ago
        Anonymous

        Hello, Rust shill.

  6. 2 months ago
    Anonymous

    Post source

  7. 2 months ago
    Anonymous

    >Pascal slower than Rust
    That's a joke table, right? It's not even slower than C. I get
    Pascal: 9.226 s
    C: 12.238 s
    using the very same code and build commands as those morons; the only difference would be the CPU, and mine is like 5 years old.

  8. 2 months ago
    Anonymous

    Because the people that made it had to code it closer to the silicon.

    • 2 months ago
      Anonymous

      C Is not a low-level language and your computer is not a fast PDP-11.

  9. 2 months ago
    Anonymous

    >Literal rounding-error-tier difference
    >Uses GCC for C, when everything else listed is LLVM
    >Doesn't try to keep it fair by using a C frontend that uses LLVM (clang)

    All this test tells me, at best, is that GCC's optimizer is better.

  10. 2 months ago
    Anonymous

    So wait, V Lang really is the fastest language? Holy shit what the frick I thought everyone was saying it was a scamlang?

    • 2 months ago
      Anonymous

      It is a scam language

      • 2 months ago
        Anonymous

        Explain how it's faster than C then.
        Scam magic?

        • 2 months ago
          Anonymous

          Because of bad benchmarks. One fun one had PHP as faster than C, lmao.

  11. 2 months ago
    Anonymous

    >hand-written assembly slower than C
    lmao m8

    • 2 months ago
      Anonymous

      >over 50 years of collective hindsight and better automatic optimisation/vectorisation can beat whatever shitty assembly some smelly neet can write in a lifetime
      bad assembly is slow, who would've guessed

  12. 2 months ago
    Anonymous

    >Total
    >Time
    >Time
    This chart has been written by a brainlet and I bet his code is also utter garbage.

    Buy an ad, Bell Labs shill.

  13. 2 months ago
    Anonymous

    Whatever happened to Nim? I love the syntax, and it's compiled and fast, yet it doesn't seem like anything has come of it. Just popularity-contest hijinks like in high school?

    • 2 months ago
      Anonymous

      Their flagship web framework doesn't even have a website. Not even an API reference.
      lmao

      • 2 months ago
        Anonymous

        Also their GTK binding maintainer is unable to ship an API reference and suggests using autocomplete, lmfao

        • 2 months ago
          Anonymous

          To be fair to them, the API is going to match one-to-one with the C one unless they're doing something to make GTK "more Nim".

          Using the GTK devhelp tool is not a bad idea.

      • 2 months ago
        Anonymous

        Do you mean Jester? Is it just a lack of contributions then?

        • 2 months ago
          Anonymous

          I had Karax in mind, but yeah, Jester too.
          Isn't Nim supposed to be productive? Why do Rust frameworks shit out a website and reference at version 0.0.1, but Nim can't?

    • 2 months ago
      Anonymous

      >a bajillion half-assed garbage collectors
      >each GC comes with its own bag of bugs
      >std library breaks the moment you change GC
      >claims to be a systems language, yet the language breaks on a fundamental level the moment you decide to turn the GC off
      >cannot build a program plus dynamic libraries with Nim because nimRTL is broken; the cause is, you guessed it, the GC, so you have to use different languages for it to work (lmao)
      >async is broken
      >people are so tired of it they ditched it for Odin and other languages
      >most libraries are unmaintained and broken
      >even if you want to update said libraries, they rely on so much macro magic they're basically written in arcane DSLs
      >have fun debugging macros
      It's not even death by a thousand cuts; it was born crippled and moronic and lives 24/7 on life support.

  14. 2 months ago
    Anonymous

    The surprising thing here is how fast Cython is.

  15. 2 months ago
    Anonymous

    Source: https://github.com/drujensen/fib
    This is really bad bait. The table is sorted by compile time + run time. I had a quick glance at C and Ada and they're not even the same implementation: the C version uses unsigned ints (wrapping implied) and the Ada one uses signed ints, which results in slightly different codegen. I didn't bother looking at any of the other snippets because the author is clearly moronic.
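
    For context, the benchmark in question is a naive doubly-recursive Fibonacci. A hedged sketch of the C side (the exact argument and types in the repo may differ):

    #include <stdio.h>
    #include <stdint.h>

    /* unsigned: wraparound is defined behavior, so the compiler must
       preserve it; with signed ints, overflow is undefined, which gives
       the optimizer more freedom and hence different codegen */
    static uint64_t fib(uint64_t n) {
        if (n <= 1) return n;
        return fib(n - 1) + fib(n - 2);
    }

    int main(void) {
        printf("%llu\n", (unsigned long long)fib(46));
        return 0;
    }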

    • 2 months ago
      Anonymous

      There's nothing wrong with using compile time + runtime, but it's mostly useful for comparing against dynamic langs.
      It's just the way the table is sorted. The data is all there.
      >C version uses unsigned ints (wrapping implied) and the Ada one uses signed ints which results in slightly different codegen
      Maybe Ada is faster with signed? Fib is always positive so unsigned makes the most sense. It's FOSS so you're free to remix the bench yourself and try things out :^)

      • 2 months ago
        Anonymous

        GNAT uses the GCC backend, and it produces identical codegen if you change either version to match the other.

  16. 2 months ago
    Anonymous

    ASM is the future.
    I program using AI now.
    I basically use AI to write hyper-efficient versions of what I want in assembly.

    There is literally no point in learning languages anymore.
    Just tell the AI to program it in assembly for that specific task, as efficiently as possible.

    If you are learning or making or improving languages in 2024, you are doing it wrong.

    • 2 months ago
      Anonymous

      I would like to know what AI and programs you are using that get you useful ASM.

      LLMs are notoriously bad at anything remotely complex in ASM; I have yet to see an LLM not spit out garbage for any embedded AVR/ARM system, and x86 becomes an absolute shitshow beyond really simple pipelining or branching in babby's first ASM program.

      Also
      >As efficiently as possible.
      Unless you are legitimately very talented and have been on the ASM grind for a long time, you are not writing better, faster ASM than compilers in 99% of cases, since if you figure out some shortcut or optimization, someone just needs to add it to the compiler and then it once again spits out optimal code. Obviously there are times when compilers are dumb, but it is very rare.

      ASM is only a go-to language when you cannot cross-compile to your system, so prototyping with FPGAs only, and even then you can usually cross-compile; it just may be suboptimal for a time. And if you are actually engineering some new chip, you will build out libraries and APIs for it so humans can actually program on it and write useful code.

      • 2 months ago
        Anonymous

        A lot of the time compilers do output suboptimal assembly. Even today plenty of crypto is done in assembly, since it runs constantly and everywhere, and assembly gives it a big speed increase.
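
        A standard example of the kind of code involved (not from this thread): a constant-time comparison. Written in C, an optimizer is in principle free to rewrite it into an early-exit loop and leak timing, which is one reason crypto libraries pin the exact instruction sequence down in assembly. ct_memcmp is an invented name:

        #include <stdio.h>
        #include <stddef.h>

        /* returns 0 iff equal; touches every byte no matter where a
           mismatch occurs, so the runtime doesn't leak the position */
        static int ct_memcmp(const unsigned char *a, const unsigned char *b, size_t n) {
            unsigned char diff = 0;
            for (size_t i = 0; i < n; i++)
                diff |= a[i] ^ b[i];  /* accumulate, no data-dependent branch */
            return diff;
        }

        int main(void) {
            unsigned char a[] = "secret", b[] = "secrex";
            printf("%d\n", ct_memcmp(a, b, 6) != 0);  /* prints 1: they differ */
            return 0;
        }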

        • 2 months ago
          Anonymous

          If you're talking about the bundled instruction pipelines for AES and other cryptography standards in x86, for example, that was almost entirely so devs, who shouldn't ever be trusted with crypto, wouldn't frick it up. Also so that CPUs could do crypto quickly, since the crypto instructions pipeline directly into the fast vector circuits built for them. But this is not the same thing as compilers spitting out suboptimal code; it's that suboptimal code is being written in the first place. It's like writing your own square-root function from bitwise operators in C: of course it's going to be almost always worse than (or at best equal to) the x86 instruction.

          Compilers rarely spit out suboptimal code for the high-level language they are given. At the end of the day they aren't magic: garbage in, garbage out; good code in, almost always good machine code out.

  17. 2 months ago
    Anonymous

    It's surprising to see Cython that far up the list

  18. 2 months ago
    Anonymous

    Judging by your picture, Nim and V are faster than C?

  19. 2 months ago
    Anonymous

    Haskell beats C though:
    https://research.microsoft.com/en-us/um/people/simonpj/papers/ndp/haskell-beats-C.pdf

  20. 2 months ago
    Anonymous

    >Fortran is faster
    >but noooooooooo my coooompile time
    I hate you lying shills so much

    • 2 months ago
      Anonymous

      >C shill thread
      >expects any level of honesty

  21. 2 months ago
    Anonymous

    >Nim compiles to C
    >is faster than C
    Who was the moron who made this?

  22. 2 months ago
    Anonymous

    C was made when people cared about computational resources and knew machine code.

    Current languages care about abstractions, and it is very difficult to visualize what the machine is doing from those. The C programmer inherently knows where the code is going to be inefficient and can improve it in time.

    One of the best examples is string manipulation. In C, a string is a pointer to a character array for a reason.
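
    A minimal sketch of what that buys you: with a char pointer, string manipulation is visibly just pointer arithmetic and byte stores, with no hidden allocations or copies (upcase is an invented name):

    #include <stdio.h>
    #include <ctype.h>

    static void upcase(char *s) {
        for (; *s; s++)                 /* walk the array to the NUL byte */
            *s = (char)toupper((unsigned char)*s);
    }

    int main(void) {
        char buf[] = "hello, world";    /* mutable array on the stack */
        upcase(buf);
        puts(buf);
        return 0;
    }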

  23. 2 months ago
    Anonymous

    constexpr C++ beats C

    • 2 months ago
      Anonymous

      Why does it take 30 microseconds if it's constexpr? Shouldn't it be 0?

      • 2 months ago
        Anonymous

        You need to enter the function, put the constant into rax or whatever, exit the function; and there's probably some timing jitter.
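
        A hedged sketch of that measurement floor: timing a call that returns a compile-time constant still reports clock overhead and jitter, never a true zero (the names here are invented):

        #include <stdio.h>
        #include <time.h>

        static int answer(void) { return 42; }  /* stand-in for a constexpr call */

        int main(void) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            volatile int x = answer();           /* volatile: keep the result live */
            clock_gettime(CLOCK_MONOTONIC, &t1);
            long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
                      + (t1.tv_nsec - t0.tv_nsec);
            /* the printed time is mostly clock overhead and jitter, not the call */
            printf("x=%d, measured %ld ns\n", x, ns);
            return 0;
        }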

        • 2 months ago
          Anonymous

          Interesting.
          Would consteval make any difference here?

  24. 2 months ago
    Anonymous

    My question is, what the FRICK is wrong with COBOL?

    • 2 months ago
      Anonymous

      It was designed by a woman.
