What went wrong,

and why C is the culprit.

Talk:


  1. 2 months ago
    Anonymous

    Great talk. I don't know how I didn't run into it before.
    I don't think C is the culprit.
    I don't like C that much, but seL4 is written in C.
    The mess we are in is the result of bad engineering and capitalism more than anything else.
    All the things mentioned in the talk were research projects; I don't think any of them were commercial products. None of them had to deal with tight deadlines, sales numbers, backward compatibility, bad actors, etc.

    • 2 months ago
      Anonymous

      (OP)
      I think C heavily promotes serial code. Its memory model didn't even support concurrency/parallelism until C11, and even that part isn't mandatory in the standard.
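      For a concrete illustration (a minimal sketch, not from the post): C11's <threads.h> is optional, and a conforming implementation can omit it entirely by defining __STDC_NO_THREADS__, so even the hello-world of portable C threading has to be guarded.

      /* Minimal C11 threads sketch; the whole API is optional in the standard. */
      #include <stdio.h>

      #ifdef __STDC_NO_THREADS__
      #error "this implementation does not provide <threads.h>"
      #else
      #include <threads.h>

      int worker(void *arg) {
          printf("hello from thread %d\n", *(int *)arg);
          return 0;
      }

      int main(void) {
          thrd_t t;
          int id = 1;
          if (thrd_create(&t, worker, &id) != thrd_success)
              return 1;
          thrd_join(t, NULL);   /* NULL: discard the worker's return value */
          return 0;
      }
      #endif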

      • 2 months ago
        Anonymous

        Also forgot to mention that most of the development tooling isn't a commercial product either.

      • 2 months ago
        Anonymous

        parallelism is unneeded by 99% of applications.

        • 2 months ago
          Anonymous

          >parallelism is unneeded by 99% of applications.

          Parallelism would probably make 60% of those 99% better. You're used to believing that it isn't "needed" because current tooling makes the process of making things multithreaded unintuitive.

          • 2 months ago
            Anonymous

            it likely would, but 1.6 * 0.01 is only 0.016. Most applications also don't need that performance boost, they're already bounded by things like I/O.

          • 2 months ago
            Anonymous

            You are aware that most high-performance applications that do I/O are concurrent (and in extreme cases also parallel) by default, right? Single-threaded I/O would be a LOT slower. It can be mitigated if the kernel exposes some kind of async I/O, but even then you would need to implement some kind of event loop in your application.

            It should be noted that single execution != single-threaded. E.g. JS on the server is single execution, but the runtime can still use more threads to do stuff like I/O.
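            For a rough idea (a minimal sketch, assuming POSIX): the skeleton of such an event loop, here multiplexing only stdin with poll(); a real one would also watch sockets, pipes, and timers.

            /* Bare-bones event loop: wait for readiness, then handle the event. */
            #include <poll.h>
            #include <stdio.h>
            #include <unistd.h>

            int main(void) {
                struct pollfd fds[1] = { { .fd = STDIN_FILENO, .events = POLLIN } };

                for (;;) {
                    /* Block until a watched descriptor is ready. */
                    if (poll(fds, 1, -1) < 0) { perror("poll"); return 1; }

                    if (fds[0].revents & POLLIN) {
                        char buf[256];
                        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
                        if (n <= 0) break;                    /* EOF or error */
                        write(STDOUT_FILENO, buf, (size_t)n); /* "handle" the event */
                    }
                }
                return 0;
            }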

          • 2 months ago
            Anonymous

            >Single-threaded I/O would be a LOT slower.
            It legitimately isn't. This kind of optimization is only needed by applications that are bounded by it. If you're operating on a byte stream and writing it to disk, chances are the disk will be your bottleneck rather than the number of threads, RAM speed, etc. Unless, of course, you have an extremely expensive algorithm like AES encryption, or need to operate on multiple files like find/grep.

        • 2 months ago
          Anonymous

          you do know that your OS is running things in parallel, right? or at least should...

          • 2 months ago
            sage

            Different anon. Multithreading != multiprocessing. The OS can run programs as separate processes, and those processes can communicate with each other. You can also do multithreading inside one program. But it's just unnecessary to do that unless you really need to reduce the overhead of IPC.

          • 2 months ago
            Anonymous

            OSs can do both multithreading AND multiprocessing, with or without parallel execution. One has nothing to do with the other.

          • 2 months ago
            sage

            >OSs can do both multithreading AND multiprocessing, with or without parallel execution. One has nothing to do with the other.
            Like I said, different anon, so I don't know what his reasoning was. And yes, OSs can do both. I never said they couldn't, and I never tried to imply that it was either one or the other. But in general you'll be further ahead picking the IPC solution unless you need lower latency or more throughput. Your OS comes with a lot of great features built into it (even winblows), and it doesn't make sense to re-implement those features unless you have a reason to. But it also depends on the project.
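            For example (a minimal sketch, assuming POSIX; not from the post): the "use the OS's IPC" route is about this much code, with the parent handing a message to a child process over a pipe instead of sharing memory between threads.

            /* Parent writes one message into a pipe; the child process reads it. */
            #include <stdio.h>
            #include <string.h>
            #include <sys/wait.h>
            #include <unistd.h>

            int main(void) {
                int fds[2];
                if (pipe(fds) == -1) { perror("pipe"); return 1; }

                pid_t pid = fork();
                if (pid == -1) { perror("fork"); return 1; }

                if (pid == 0) {                 /* child: read end */
                    close(fds[1]);
                    char buf[64];
                    ssize_t n = read(fds[0], buf, sizeof buf - 1);
                    if (n > 0) { buf[n] = '\0'; printf("child got: %s\n", buf); }
                    close(fds[0]);
                    return 0;
                }

                close(fds[0]);                  /* parent: write end */
                const char *msg = "hello from the parent";
                write(fds[1], msg, strlen(msg));
                close(fds[1]);
                waitpid(pid, NULL, 0);
                return 0;
            }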

    • 2 months ago
      Anonymous

      >The mess we are in is the result of bad engineering and capitalism more than anything else.
      You have to understand that magic thinking runs throughout this industry.
      - With the right magic language everything will be secure.
      - With the right magic language/frameworks/paradigms everyone will be able to code and min wage coders will be just as good as highly paid engineers now.
      - With the right boot camp we'll have plenty of low salary "engineers" who are just as good as our highly paid engineers with EE degrees.
      - With the right magic abstraction you'll never have to learn or think about the hardware.
      - With the right magic hardware, performance will never be an issue no matter how bloated the code is.
      - And now, with the right magic AI we won't even need devs! Imagine that! A bunch of fat, lazy, stupid, silver spoon billionaires producing revolutionary new apps for the app store to earn them even more billions, all without employees! (Never mind that nobody will be left who can afford the apps.)

      Much of this is driven by midwit investors and managers who think their jobs are special and irreplaceable but the engineers can and should be replaced for higher quarterly profits. And they never face any consequences for the flaws they create through piss poor hiring, mismanagement, and impossible demands/schedules, even when it kills people. (Boeing anyone?)

      And it's all bullshit. C is not responsible for security holes. Rust and Java are not going to save us. You want a secure system? You need high IQ engineers who are experts in the relevant domains. And even then you face the problem that our hardware is compromised by the same fricking class of midwits causing issues that plague software. Your caches are not secure (Spectre style attacks). Your memory is not secure (row hammer was never fully mitigated). Your computer has a black box "management engine" which I promise you, is not secure.

      It's all so tiring.

      • 2 months ago
        Anonymous

        come back to this post after you've watched this:

        • 2 months ago
          Anonymous

          >nooo i swear rust will save us!
          You guys are worse than Jehovah’s Witnesses.

        • 2 months ago
          Anonymous

          Kys, seriously. Save the Linux community, that is doing your part.

      • 2 months ago
        Anonymous

        Reasons like this are why I want to pick up a microcontroller at some point and try to write a basic OS on it.

        • 2 months ago
          Anonymous

          No, that is the hacker way and is the reason everything is shit. That is how Windows and Unix were written. In a very ad-hoc way.
          You didn't have a committee sit down and engineer the system and define a formal spec down to the last detail before it was built.
          A bunch of programmers just sat down and began writing code with just a rough outline and hoped for the best.
          The problem is deeper than just "hurr durr write in low level languages".
          If you want to learn how the current situation could be turned around, study first order logic.

          • 2 months ago
            Anonymous

            >You didn't have a committee sit down and engineer the system and define a formal spec down to the last detail before it was built.
            Such a committee would take 200 years to agree on a formal spec

          • 2 months ago
            Anonymous

            Well, you don't have to specify the behavior of the whole thing outright. You can specify and analyze after you develop and then correct the original implementation as you find issues, as long as the general guiding principles are sound.

          • 2 months ago
            Anonymous

            >specify and analyze after you develop and then correct the original implementation as you find issues
            Is the opposite of
            >sit down and engineer the system and define a formal spec down to the last detail before it was built
            And frankly sounds more similar to
            >A bunch of programmers just sat down and began writing code with just a rough outline and hoped for the best
            Which you were b***hing about earlier
            Enlighten me since I'm clearly not as smart as you

          • 2 months ago
            Anonymous

            Maybe it sounds similar, but it's an important distinction.
            seL4 is based on a system that wasn't formally specified at first, but eventually it was not only formally specified, it has also been formally verified; that is, there are formal proofs that the actual code meets the specification.
            I don't know if any OS, or really any big software project, has been fully specified first and only then implemented. CompCert probably has been, since having a verified compiler was the whole point from the beginning.
            As an example of a software project that is formally specified but not proven, you have Standard ML, but again that is based on stuff that was implemented before it was formally specified.
            Most examples probably come from mathematics, especially where algorithms were discovered before the invention of computers. The sieve of Eratosthenes was obviously discovered before computers existed. But the thing is, once you define what a prime number is, you have basically defined the functional spec for ALL prime-finding algorithms, no matter how complex. That is why logic and specifications are so important. Once you define a formal specification for a certain module of a software system, you should be able to take a program that uses a prime-finding algorithm and plug in any other prime-finding algorithm to make the existing program run faster, without even looking into how the rest of the program works.
            But yeah, this is a very niche approach to software development, so there aren't many examples.
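            To make the plug-in point concrete, a small C sketch (illustrative only; the function names are made up): the "spec" is just the definition of a prime, and any function that satisfies it can be swapped in without touching the rest of the program.

            #include <stdbool.h>
            #include <stdio.h>

            typedef bool (*is_prime_fn)(unsigned n);

            /* Implementation 1: trial division. Meets the spec, but slow for many queries. */
            static bool is_prime_trial(unsigned n) {
                if (n < 2) return false;
                for (unsigned d = 2; d * d <= n; d++)
                    if (n % d == 0) return false;
                return true;
            }

            /* Implementation 2: sieve of Eratosthenes up to a fixed bound. Same spec, faster. */
            #define LIMIT 1000000u
            static bool composite[LIMIT + 1];
            static bool sieve_ready = false;

            static bool is_prime_sieve(unsigned n) {
                if (!sieve_ready) {
                    for (unsigned i = 2; i * i <= LIMIT; i++)
                        if (!composite[i])
                            for (unsigned j = i * i; j <= LIMIT; j += i)
                                composite[j] = true;
                    sieve_ready = true;
                }
                return n >= 2 && n <= LIMIT && !composite[n];
            }

            /* The rest of the program depends only on the spec, not on the algorithm. */
            static unsigned count_primes_below(unsigned limit, is_prime_fn is_prime) {
                unsigned count = 0;
                for (unsigned n = 2; n < limit; n++)
                    if (is_prime(n)) count++;
                return count;
            }

            int main(void) {
                printf("%u\n", count_primes_below(100000, is_prime_trial));
                printf("%u\n", count_primes_below(100000, is_prime_sieve)); /* same answer, faster */
                return 0;
            }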

      • 2 months ago
        Anonymous

        Generally I agree with you, but sometimes you don't need software to be fast. If something is going to run as a batch job handling a low volume of data but with complicated business logic, then you don't give a shit about performance. But you still want correctness.
        In this case you should use a language that abstracts away the hardware as much as possible. Ideally you want a language that accepts a program that reads as closely as possible to the spec. Most of the time this is going to be a declarative language. Or sometimes you might not have a spec and you need to prototype to find out what solution fits the problem better. In that case you want something that is as dynamic and easy to manipulate as possible, while still being easy to reason about and to prove at least the correctness properties your program needs to meet.
        Even in the case of web pages. Let's be honest, the problem is not abstracting away the hardware. It's not reasonable to ask people to write their web pages in C. The problem is that we're not using the right abstractions.
        Ideally you want a language that lets people build their web pages in an intuitive way, and then is compiled down to fast code.
        If the web specs were simple and easy to implement and people didn't bloat their pages with megabytes of javascript, then you could build a web browser in Haskell and it would feel fast.

      • 2 months ago
        Anonymous

        high tier take anon, thanks for writing

      • 2 months ago
        Anonymous

        >Java
        >not going to save us
        laughs in log4j

      • 2 months ago
        Anonymous

        >And it's all bullshit. C is not responsible for security holes. Rust and Java are not going to save us. You want a secure system? You need high IQ engineers who are experts in the relevant domains.
        The problem is the language and the people. The kind of people who want to use C are not going to be able to solve the problem. They don't care about making things better. They think programming can't get any better than what Dennis Ritchie did on a PDP-11. They don't like Lisp machines or any other improved computer designs. They think hardware should be made for C, operating systems should be made for C, etc.

    • 2 months ago
      Anonymous

      >capitalism
      You mean israelites?

  2. 2 months ago
    Anonymous

    zoomers need a new language, os, de, window manager, and much more to feel relevant, special, and not like a goyim corporate drone despite being one body and soul
    c will never be replaced, a compiler flag will come that erases all of these "memory safety issues" and that will be that, rust and other languages will become obsolete overnight and every diversity hire illegal alien working on rust will be fired for fixing problems that were never problems

    • 2 months ago
      Anonymous

      You're completely missing the point. Computers in the past were single core, had no caches, and were simpler in a lot of other ways.
      C not only misrepresents computers nowadays (leading to worse performance), it also uses outdated programming environments (a text dump as input).

      Note that there isn't any language that solves the latter, except maybe LISP REPLs with nice GUI representation.

      • 2 months ago
        Anonymous

        >it also uses outdated programming environments (a text dump as input).
        How is this outdated, anyhow? What has replaced treating input as a series of bytes/characters?

        • 2 months ago
          Anonymous

          Check the OP's talk.

          Some examples are the speaker's own work:

          • 2 months ago
            Anonymous

            m8 just tell me, I'm not listening to the guy talk. If you can't describe it in your own words, then it's self-evidently too complicated to replace anything.

        • 2 months ago
          Anonymous

          Smalltalk-style IDEs, where objects are presented through a special interface that is not just a general-purpose text editor but is visually structured according to how the semantics of the language itself are structured.

        • 2 months ago
          Anonymous

          >C not only misrepresents computers nowadays (leading to worse performance), it also uses outdated programming environments (a text dump as input).
          Completely wrong.

          • 2 months ago
            Anonymous

            How is the CPU cache represented in C, then? It's one of the most important factors for performance nowadays.

          • 2 months ago
            Anonymous

            The whole point of CPU caches is that they're transparent. No language other than asm "represents" them.

          • 2 months ago
            Anonymous

            Not even asm represents them, and it would still be useful to manipulate cache lines on certain occasions. What people tend to do nowadays is try to "manipulate" caching behaviour indirectly through generic code.

          • 2 months ago
            Anonymous

            Plenty of architectures have a prefetch command. That's about it.
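            Roughly, this is as cache-aware as C gets, and only through a compiler extension (assumes GCC/Clang), not the language itself: a hint that the compiler lowers to the architecture's prefetch instruction, or drops if there isn't one.

            #include <stddef.h>

            /* Sum an array, hinting that data ~16 elements ahead should be pulled into cache. */
            long sum_with_prefetch(const long *a, size_t n) {
                long sum = 0;
                for (size_t i = 0; i < n; i++) {
                    if (i + 16 < n)
                        __builtin_prefetch(&a[i + 16], 0 /* read */, 1 /* low temporal locality */);
                    sum += a[i];
                }
                return sum;
            }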

          • 2 months ago
            Anonymous

            >How is the CPU cache represented in C, then? It's one of the most important factors for performance nowadays.

            >Not even asm represents them, and it would still be useful to manipulate cache lines on certain occasions. What people tend to do nowadays is try to "manipulate" caching behaviour indirectly through generic code.

            If not even asm represents the cache, then why would you make the claim "C leads to less performance"?
            Compared to what, exactly? No language represents the cache.
            1. Higher level languages all have a GC - there is no such thing as a hardware GC.
            A JVM machine does not exist. LISP machines were a failure.
            2. Higher level languages rely on a VIRTUAL MACHINE (including Rust btw) - again not representing hardware.
            What's worse, most of these VMs are stack machines.
            Only Lua and LLVM use register-based VMs.
            3. Forth targets stack machines directly, but most modern CPUs are register machines.
            This "hurr C doesn't represent hardware" myth comes from this misinformed article: https://queue.acm.org/detail.cfm?id=3212479
            Its main focus: concurrency.
            No, a GC doesn't represent hardware, neither do FP concurrency abstractions, nor virtual machines.
            Hardware concurrency is indeed designed as a fast PDP-11.
            Your computer is, in fact, a fast PDP-11.

          • 2 months ago
            Anonymous

            It's this kind of reasoning that doesn't allow the scene to progress.

            There simply isn't a language that does it right. So yeah, compared to nothing that exists yet.

            No one claimed those constructs (GC, stack machines, bytecode [well, bytecode is kinda realistic because opcodes are compiled to micro-ops, but it is just adding an extra layer]) are mapped to hardware ones.

            Would be nice if newer languages, besides the other points of pic rel, focused on having an up-to-date memory model.

          • 2 months ago
            Anonymous

            How can languages target a new memory model when the hardware does everything it can to conceal its actual inner workings from the programmer?

          • 2 months ago
            Anonymous

            Either:
            a) the hardware vendor adds more instructions, or
            b) the language compiler generates code that tries to guide the hardware to do so, even if it's never said explicitly.

      • 2 months ago
        Anonymous

        C has had threads for 30 of the 50 years that threading has existed.

        • 2 months ago
          Anonymous

          C11 has released 13 y ago.

          >m8 just tell me, I'm not listening to the guy talk. If you can't describe it in your own words, then it's self-evidently too complicated to replace anything.

          >Use text to describe the power of a video interface
          Deranged

          • 2 months ago
            Anonymous

            s/has/was

          • 2 months ago
            Anonymous

            How would you represent a video interface, then? Rather than claiming serial and parallel channels are deranged for sending raw bits, explain a better solution in detail.

            Thus far you just seem to be working off a misconception.

    • 2 months ago
      Anonymous

      >a compiler flag will come that erases all of these "memory safety issues" and that will be that
      2 more weeks friendo

  3. 2 months ago
    Anonymous

    Bret Victor posting in 2024

  4. 2 months ago
    Anonymous

    Are you telling me that my computer is not a fast PDP-11?

    • 2 months ago
      Anonymous

      Yes, but not only.

  5. 2 months ago
    Anonymous

    If you want performance, that kind of actor model in an array of computers with message passing is just NGMI.

    • 2 months ago
      Anonymous

      The actor model doesn't need an array of computers. Each actor can be a thread/green thread.
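      As a rough sketch of that (illustrative only, assuming POSIX threads; the mailbox type is made up): one actor is just a thread draining a message queue, no array of computers involved.

      #include <pthread.h>
      #include <stdio.h>

      #define MAILBOX_CAP 16

      /* A fixed-size mailbox protected by a mutex/condvar. */
      typedef struct {
          int slots[MAILBOX_CAP];
          int head, tail, count;
          pthread_mutex_t lock;
          pthread_cond_t not_empty;
      } mailbox;

      static void mailbox_send(mailbox *mb, int msg) {
          pthread_mutex_lock(&mb->lock);
          mb->slots[mb->tail] = msg;            /* sketch: assumes the caller never overflows it */
          mb->tail = (mb->tail + 1) % MAILBOX_CAP;
          mb->count++;
          pthread_cond_signal(&mb->not_empty);
          pthread_mutex_unlock(&mb->lock);
      }

      static int mailbox_receive(mailbox *mb) {
          pthread_mutex_lock(&mb->lock);
          while (mb->count == 0)
              pthread_cond_wait(&mb->not_empty, &mb->lock);
          int msg = mb->slots[mb->head];
          mb->head = (mb->head + 1) % MAILBOX_CAP;
          mb->count--;
          pthread_mutex_unlock(&mb->lock);
          return msg;
      }

      /* The actor: handles one message at a time; -1 means "stop". */
      static void *actor(void *arg) {
          mailbox *mb = arg;
          for (int msg; (msg = mailbox_receive(mb)) != -1; )
              printf("actor got %d\n", msg);
          return NULL;
      }

      int main(void) {
          mailbox mb = {0};
          pthread_mutex_init(&mb.lock, NULL);
          pthread_cond_init(&mb.not_empty, NULL);

          pthread_t t;
          pthread_create(&t, NULL, actor, &mb);
          mailbox_send(&mb, 1);
          mailbox_send(&mb, 2);
          mailbox_send(&mb, -1);
          pthread_join(t, NULL);
          return 0;
      }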

  6. 2 months ago
    Anonymous

    I don't know what a computer is.

  7. 2 months ago
    Sage

    There's nothing wrong with C. Or with saging. Or with namefields.

  8. 2 months ago
    Anonymous

    >and why C is the culprit.
    that's very flattering
    but we both know it's fpbp
