How would you improve UNIX?

What do you think is bad about UNIX / POSIX? How would you fix it? E.g.:
>POSIX locales are horrible because [...] and I'd fix them by [...]

  1. 3 weeks ago
    Anonymous

    rm -rf /boot && reboot

    • 2 weeks ago
      Anonymous

      POSIX is okay.
      I just want to make a more robust kernel, you know those self-modifying viruses that link themselves into other programs? Yeah, that. Self modifying code, self compiling code, indestructible, infectious kernel with a stable kernel module API and ABI and lots of in-built networking support. Botnet OS for my botnet.

      watching this https://www.youtube.com/watch?v=UMsscWnM67g
      >most of the OS runs as separate processes
      >the kernel's job is to handle the message passing
      Is that right? If you want to do open(), read() / write() / close(), you send a message to the file system/IO process instead of making system (aka function) calls? That seems slow.

      It is. Two complete task switches instead of one:
      >your program -> kernel (does the write) -> your program
      vs
      >your program -> kernel (does nothing) -> writer service (does the write) -> kernel -> your program
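
      Roughly what a write() looks like under that model - a minimal sketch where all the names (msg_send, msg_recv, fs_msg, the server port) are made up for illustration:

      #include <string.h>
      #include <sys/types.h>

      /* Hypothetical microkernel IPC. One write() = two full round trips
       * through the kernel: client -> FS server, then FS server -> client. */
      struct fs_msg {
          int    op;             /* FS_WRITE, FS_READ, ... */
          int    fd;
          size_t len;
          char   payload[4096];
      };
      enum { FS_WRITE = 2 };

      extern int msg_send(int port, struct fs_msg *m);  /* assumed kernel primitive */
      extern int msg_recv(int port, struct fs_msg *m);  /* blocks until reply */

      ssize_t mk_write(int fs_port, int fd, const void *buf, size_t len)
      {
          struct fs_msg m = { .op = FS_WRITE, .fd = fd };
          m.len = len < sizeof m.payload ? len : sizeof m.payload;
          memcpy(m.payload, buf, m.len);
          if (msg_send(fs_port, &m) < 0)   /* switch: client -> kernel -> server */
              return -1;
          if (msg_recv(fs_port, &m) < 0)   /* switch: server -> kernel -> client */
              return -1;
          return (ssize_t)m.len;           /* bytes the server accepted */
      }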

      posix threads is the worst fricking interface ever lmao, using raw syscalls is much easier.

      pthread is ok

  2. 3 weeks ago
    Worm

    :(){ :|:& };:

  3. 3 weeks ago
    Anonymous

    There are a lot of interesting improvements to UNIX in Plan9.

    • 3 weeks ago
      Anonymous

      Care to elaborate?

      • 3 weeks ago
        Anonymous

        No.

      • 3 weeks ago
        Anonymous

        I'm not nearly well versed enough in OS design, so I recommend doing your own homework, but as I understand it Unix was designed for the mainframe while Plan 9 was geared towards networks of computers, so its file system is better suited to dealing with remote resources.

        Another thing that's different is that processes use namespaces within the filesystem, so you can control what the process has visibility into. There's a root namespace, and you can make a namespace from a subset of files when spawning a process.
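
        A sketch from my memory of the Plan 9 C library (details may be off, and /usr/sandbox/bin and the worker binary are placeholders): rfork(RFNAMEG) gives the child its own copy of the namespace, which it can then restrict before exec'ing.

        #include <u.h>
        #include <libc.h>

        void
        spawn_sandboxed(void)
        {
            switch(rfork(RFPROC|RFFDG|RFNAMEG)){
            case -1:
                sysfatal("rfork: %r");
            case 0:
                /* child: replace /bin with a restricted directory, then exec */
                if(bind("/usr/sandbox/bin", "/bin", MREPL) < 0)
                    sysfatal("bind: %r");
                execl("/bin/worker", "worker", nil);
                sysfatal("exec: %r");
            }
            /* parent keeps its original namespace */
        }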

        And the last interesting point on it that I remember (sure there are a shit load more) is that the graphics stuff is built in, like NT, versus the Linux way of external display servers.

        • 3 weeks ago
          Anonymous

          >I'm not nearly well versed enough in OS design, so I recommend doing your own homework, but as I understand it Unix was designed for the mainframe while Plan 9 was geared towards networks of computers, so its file system is better suited to dealing with remote resources.
          I know you're just repeating a meme so I don't want to seem like I'm jumping on your shit, but I fricking hate the "designed for a mainframe" meme.
          - Modern PCs and smartphones are more powerful than those mainframes and minis were.
          - The set of problems that have to be solved by an OS do not fundamentally change because you're running on a PC or phone.
          - If anything, while the nature of the problems remains the same, the demands are higher on a modern PC or phone. Go ahead and look at the number of processes and threads running on your system. Most mainframes didn't have that kind of a load, only massive timeshare systems did.

          Plan 9 had some lofty ideas but failed to gain any traction because its unique features quite frankly were not critical to most users the way UNIX features were. "Everything is a file" is one of the critiques of UNIX and Plan 9 takes this to the extreme with "everything is a network file", which isn't the best idea in a world of hostile actors. Nor is it necessarily the most efficient way to implement various APIs. There are some interesting ideas but Plan 9 does not improve upon UNIX the way UNIX improved upon literally all the desktop OSes of the 1980s.

          • 3 weeks ago
            Anonymous

            >I know you're just repeating a meme so I don't want to seem like I'm jumping on your shit
            The most polite rebuttal in IQfy history lol

            I for sure see your point, but one qualifier I would add is that I didn't intend "designed for the mainframe" as in "designed for a system with a large pool of resources," more so "designed to operate on one computer system," while Plan 9 was designed for operating on a network of interconnected computer systems, so its primitives take that into account, versus something like NFS where you "trick" the system into viewing a remote filesystem as though it were local.

            Other points taken. I haven't really heard too much criticism of the "everything is a file" paradigm for the design of UNIX systems, care to elaborate on where that introduces pain points or brittle APIs?

          • 2 weeks ago
            Anonymous

            >I haven't really heard too much criticism of the "everything is a file" paradigm for the design of UNIX systems, care to elaborate on where that introduces pain points or brittle APIs?
            File I/O just isn't the best abstraction for everything. Often objects or relationships (in the SQL sense) are a better way to handle something. In a traditional OS an API call might return (or fill) an already composed structure or class instance. If everything is a file, you need to read the file, then parse it and compose it into a useful structure in RAM for your code to work with.

            On the flip side, when everything is a file then IPC across networks becomes rather trivial.
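
            To make that concrete on Linux: process info comes back as text you have to parse out of /proc, where an object-style API (get_proc_info below is invented) would just hand you a filled-in struct:

            #include <stdio.h>

            struct proc_info { int pid; char comm[64]; char state; };

            /* The "everything is a file" way: open, read, parse. */
            int read_proc_info(struct proc_info *out)
            {
                FILE *f = fopen("/proc/self/stat", "r");
                if (f == NULL)
                    return -1;
                int n = fscanf(f, "%d (%63[^)]) %c", &out->pid, out->comm, &out->state);
                fclose(f);
                return n == 3 ? 0 : -1;
            }

            /* The object way (hypothetical API): no parsing step at all. */
            extern int get_proc_info(int pid, struct proc_info *out);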

          • 2 weeks ago
            Anonymous

            >The set of problems that have to be solved by an OS do not fundamentally change because you're running on a PC or phone.
            sure they do, back then for example the question was how to isolate users from each other, but these days we want to isolate processes from each other and are struggling with the old permission system

          • 2 weeks ago
            Anonymous

            >- Modern PCs and smartphones are more powerful than those mainframes and minis were.
            Irrelevant
            >- The set of problems that have to be solved by an OS do not fundamentally change because you're running on a PC or phone.
            Couldn't be more wrong. One trivial example: the relative performance of components changes over time (e.g. network, disk, RAM, CPU cache).
            >- If anything, while the nature of the problems remains the same, the demands are higher on a modern PC or phone. Go ahead and look at the number of processes and threads running on your system. Most mainframes didn't have that kind of a load, only massive timeshare systems did.
            Demands are higher because of pajeets like you who shitpost about shit that's about a lightyear above your head.

          • 2 weeks ago
            Anonymous

            >>- Modern PCs and smartphones are more powerful than those mainframes and minis were.
            >Irrelevant
            It's relevant because that means modern systems are juggling more tasks, which is why "muh mainframe OS" features like robust multitasking, multithreading, and protected memory matter. Nobody needed that on an 8-bit micro with 64K of RAM. There wasn't enough compute power or RAM to run more than one process at a time. There wasn't even a "process" per se, just whatever code was loaded.

            >>- The set of problems that have to be solved by an OS do not fundamentally change because you're running on a PC or phone.
            >Couldn't be more wrong. One trivial example: the relative performance of components change over time (e.g. network, disk, ram, cpu cache)
            A trivial and irrelevant example. Relative performance does not change the fundamental problems solved by an OS. You need robust multitasking, protected memory/process separation, hardware abstraction, memory management/virtual memory, I/O management, storage management, and a UI whether you're running a single processor 25 MHz 68040 or an eight core 2.4 GHz i9.

            >Demands are higher because of pajeets like you who shitpost about shit that's about a lightyear above your head.
            Once again, the people (or person?) who are the most ignorant about these things, and state the most easily disproved and dismissed nonsense, are also the ones throwing childish temper tantrums.

  4. 3 weeks ago
    Anonymous

    >How would you fix it?
    Wouldn't. Let it burn.

  5. 3 weeks ago
    Anonymous

    I would add another 50 user defined signals. Nothing else.

  6. 3 weeks ago
    Anonymous

    encourage making fifos seekable LINE_MAX bytes backwards
    it'd be cool
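
    i.e. let something like this work instead of failing with ESPIPE the way lseek() on a fifo does today (a sketch of the proposed semantics, not anything real):

    #include <unistd.h>
    #include <limits.h>

    off_t peek_back(int fifo_fd)
    {
        /* today: returns -1 with errno == ESPIPE; the proposal: let a
         * reader step back into up to LINE_MAX bytes of already-read data */
        return lseek(fifo_fd, -(off_t)LINE_MAX, SEEK_CUR);
    }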

  7. 3 weeks ago
    Anonymous

    Many forms of operating systems have been tried, and will be tried in this world of sin and woe. No one pretends that POSIX is perfect or all-wise. Indeed it has been said that POSIX is the worst form of operating system except for all those other forms that have been tried from time to time.…

    • 3 weeks ago
      Anonymous

      >Many forms of operating systems have been tried
      genuine question, what elements in the design of UNIX are there to change? I have no clue as to what the OS design space looks like

      • 3 weeks ago
        Anonymous

        You can't really "change the design" of Unix in any meaningful way anyway. An operating system designed 60 years ago for minis with a mighty 256KB of memory had to make extreme compromises in (for example) code encapsulation: there simply wasn't enough room to isolate anything from anything else, so everything had to be interdependent to fit.

        This results in horrors that haunt Unix derivatives even today. OOM killers are the classic example: you have to overcommit on a system that has practically no RAM - and congratulations, now every program assumes malloc() will always succeed.

        No file locking is another: your OS doesn't support it from first principles (no room for it in 256KB of RAM), so it has to be bolted on as an afterthought later... but you also need to allow for code that has no idea about file locks, so massive amounts of Linux's file handling code is dedicated to getting around this.

        These form a perfect storm around virtual memory on Unix-alikes: simple wins that Windows can do, like treating an .EXE or .DLL as a mini-swapfile for every running program, can't be done on Unix-alikes when the binary might be unlinked at any second. Then there are the weird POSIXy assumptions that file systems have to make, which have made it so historically difficult to use filesystems that didn't make POSIXy assumptions on Unix-alikes.

        And so on and so forth - there are hundreds of examples like this, and tens of thousands of LOC in Linux are dedicated to pretending to be a brain-damaged, barely-above-a-monitor OS from 1969. The irony is how little actual program-level backcompat this gives your average Linux distro, whereas you can fire up a DOS binary from 1981 in Windows 10 (32-bit of course), and you have a 50/50 chance it will run unmodified.

        • 3 weeks ago
          Anonymous

          >now every program assumes malloc() will always succeed
          Buy a RAM upgrade.

          >No file locking
          >but you also need to allow for code that has no idea about file locks
          They don't know about my file locks. I wish I was at home, programming my flightless bird. My fingers hurt.

          File locks were one of the things I hated most about programming for Windows. File locks are the devil.

          >treating an .EXE or .DLL as a mini-swapfile for every running program
          No and I refuse to elaborate.

          • 3 weeks ago
            Anonymous

            You didn't understand a single word I said.
            You think you did, and you're going to chuck a spastic tantrum for me pointing this out to you - but you didn't.

          • 3 weeks ago
            Anonymous

            It's okay to say you don't like me and that you won't respond like an adult. I can respect honesty.

            Try again?

        • 3 weeks ago
          Anonymous

          Thank you for the effort post anon, I appreciate it.
          >processes and memory management: processes can allocate a seemingly infinite amount of memory, except that when that fails the error recovery mechanisms are basically nonexistent; the process crashes and the program's running state is lost (if there is a coredump it's only useful for reverse engineering purposes)
          >processes and concurrent access to the file system
          I also remember an anon saying that there are things that modern hardware can do that you can't access in user space, things related to an I/O chip and to asynchrony/events for informing of read/write operations' status.

          What's your take on Jon Blow saying that processes should not have access to the file system by default for security purposes?

          • 3 weeks ago
            Anonymous

            >processes and memory management: processes can allocate a seemingly infinite amount of memory, except that when that fails the error recovery mechanisms are basically nonexistent
            So you're not checking malloc for null? That sounds like a you problem. If a memory allocation fails then you have a chance to save program state and recover gracefully. If accessing a valid and unreleased pointer after the fact fails, then there's something worse going on under the hood which makes it unlikely you could safely continue to run in order to save program state.
            >inb4 you're using a high level language that blows up while you're creating a billion class instances or doing something like naively processing a 1GB string
            That's not a UNIX problem.
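
            A minimal sketch of that "save state and bail gracefully" path (xmalloc and the one-blob state dump are made up for illustration):

            #include <stdio.h>
            #include <stdlib.h>

            /* On allocation failure, dump what state we can and exit cleanly
             * instead of crashing later on a NULL dereference. */
            void *xmalloc(size_t n, const char *state_path,
                          const void *state, size_t state_len)
            {
                void *p = malloc(n);
                if (p != NULL)
                    return p;
                FILE *f = fopen(state_path, "wb");  /* best effort: may itself fail under OOM */
                if (f != NULL) {
                    fwrite(state, 1, state_len, f);
                    fclose(f);
                }
                fprintf(stderr, "out of memory requesting %zu bytes\n", n);
                exit(EXIT_FAILURE);
            }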

            >processes and concurrent access to the file system
            Parallel processing and concurrent access are inherently difficult. This is not a UNIX problem, and nobody has come up with a magic solution (yet). FP devs will jump up and say they have, but FP languages work in the exact opposite manner that the CPU does and are dog slow and inefficient as a result.

            >I also remember an anon saying that there are things that modern hardware can do that you can't access in user space, things related to an I/O chip and to asynchrony/events for informing of read/write operations' status.
            And you shouldn't have access to them.

          • 3 weeks ago
            Anonymous

            >So you're not checking malloc for null?
            of course I do
            >If a memory allocation fails then you have a chance to save program state and recover gracefully
            technically yes, but that's a lot of manual work that you would have to do for every single program, and maybe you'd have to organize your program in a way that lends itself well to this.
            What would be great is to have a system call that would allow you to save the entire process's memory onto disk and then be able to re-run it, and the process would continue where it left off just as if it was a context switch. (Yes: abort() and coredumps, fork() for copying the process and memory, but nothing for the 2nd part.)
            Another thing would be having more reliable signal callbacks and when you get a SEGFAULT, have the chance to compact the memory to stay running, maybe even recompiling some code for doing the program's intended work but using much more compact and even compressed data structures, or again be able to save the program state for later resumption.
            >If accessing a valid and unreleased pointer after the fact fails, then there's something worse going on under the hood which makes it unlikely you could safely continue to run in order to save program state.
            True but that's another issue, I'm talking about dealing with lack of memory problems.
            >That's not a UNIX problem.
            If the program's logic is sane, I don't see why the program should crash and lose its state just because there is no more memory. Imagine you're encoding a video with ffmpeg and there is a problem at 80%; it's moronic to have to start over from the beginning when you could have saved the process's state and restarted where you left off.
            >Parallel processing and concurrent access are inherently difficult.
            True but there is a lot to improve, locks shouldn't be advisory.
            >And you shouldn't have access to them.
            >you shouldn't use your own computer's hardware how you see fit
            frick off
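
            Roughly the shape of the calls being asked for - purely hypothetical, nothing like this exists in POSIX (on Linux, userspace CRIU is the closest thing, freezing and restoring whole process trees):

            #include <unistd.h>

            /* Hypothetical syscalls: save a process image, rebuild it later. */
            long proc_checkpoint(const char *image_path);  /* write self to disk, keep running */
            long proc_resume(const char *image_path);      /* recreate a process from an image */

            /* Usage in the ffmpeg-at-80% scenario: */
            void on_alloc_failure(void)
            {
                if (proc_checkpoint("/var/tmp/encode.img") == 0)
                    _exit(1);  /* later: proc_resume("/var/tmp/encode.img") continues at 80% */
            }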

          • 3 weeks ago
            Anonymous

            >What would be great is to have a system call that would allow you to save the entire process's memory onto disk
            I could go for on demand full dumps for debugging purposes. With the right language runtime support and tools you could then walk/inspect the entire dump to glean useful info.

            >and then be able to re-run it and the process would continue where it left off just as if it was a context switch
            I don't know that this would ever be safely doable.

            >have the chance to compact the memory to stay running
            Classic Mac OS used Handles (ptr to ptr in Toolbox terminology) and Handle locks so that the Memory Manager could try to radically compact memory in the event of an allocation failure. Kind of important in an OS that originated with 128K of RAM and no VM. But also kind of a pain in the ass to work with. And today running into memory issues means GBs on the heap would have to be compacted, which would kill performance if it kept happening, and would likely still at some point fail. If you're processing massive data sets you've got to think about this beforehand and design accordingly.

            >If the program's logic is sane, I don't see why the program should crash and lose its state just because there is no more memory.
            If you've run out of memory your logic is not sane.

            >Imagine you're encoding a video with ffmpeg and there is a problem at 80%; it's moronic to have to start over from the beginning when you could have saved the process's state and restarted where you left off.
            Your fundamental problem is that you ran out of memory. If you reloaded an "image" of your program's running state, you would still be out of memory. Again, the logic is not correct if you got there. It's not a UNIX problem.

            >>And you shouldn't have access to them.
            >>you shouldn't use your own computer's hardware how you see fit
            >frick off
            That shit is abstracted because otherwise it would be a major security/reliability issue. If you need it, write a driver.

          • 3 weeks ago
            Anonymous

            >things related to an I/O chip
            Well of course, hardware access almost always has to be done in kernel mode. That's why we have drivers in with the kernel. Windows uses APCs (Asynchronous Procedure Calls) to deliver kernel-initiated callbacks into user mode code, and I have no doubt Linux has a similar mechanism.

            >What's you take on Jon Blow
            He's probably right. We've been chugging along with the "processes inherit the privs of the user" paradigm for the best part of 50 years, and we're STILL plugging EoP bugs on a weekly basis. This, along with the fact that devices are increasingly going single-user (phones, laptops, etc.) suggests trying another paradigm - indeed, Microsoft has been trying it for well over a decade, first with filesystem virtualization starting in Vista (now you know what the LocalLow folder is for), and its various "Modern" app frameworks, where the privs are per-program instead of per-user.

        • 3 weeks ago
          Anonymous

          >You can't really "change the design" of Unix in any meaningful way anyway. An operating system designed 60 years ago for minis with a mighty 256KB of memory had to make extreme compromises
          Here come the memes from popular but wrong blog posts...

          >This results in horrors that haunt Unix derivatives even today. OOM killers are the classic example: you have to overcommit on a system that has practically no RAM - and congratulations, now every program assumes malloc() will always succeed.
          Programs assume malloc will succeed because every program has a massive virtual address space all to itself. This does not result in an overcommit of resources. The exact opposite in fact, as the OS can create backing pages on demand. Now this IS related to old computers being relatively memory starved. But it's efficient and successful, so why the hell would you change it? Outside of a mythical computer with 2^64 bytes of real physical RAM at its disposal, you would run into memory issues in any system. As we get more RAM, software developers gobble it up. And you would end up solving it the same way.

          >No file locking is another
          POSIX defines file locking APIs.
          >hurr they don't work well!
          The issues have nothing to do with old computers or limited RAM. Just like mutexes and semaphores they sound like a simple solution to a problem but they rapidly become too complex for most programmers to model and predict in their minds (highly g-loaded). Maybe there's a better way, but I bet if someone finds it, it could have been made to work on an old mainframe.
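
          For reference, the fcntl() advisory record locks that have been in POSIX for decades - and unlike whole-file flock(), they can lock byte ranges:

          #include <fcntl.h>
          #include <unistd.h>

          /* Advisory write lock on the whole file; blocks until granted.
           * Set l_start/l_len to lock just a byte range instead. */
          int lock_whole_file(int fd)
          {
              struct flock fl = {
                  .l_type   = F_WRLCK,
                  .l_whence = SEEK_SET,
                  .l_start  = 0,
                  .l_len    = 0,   /* 0 = through EOF */
              };
              return fcntl(fd, F_SETLKW, &fl);
          }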

          >rambling with no specific examples
          I honestly doubt you could come up with a real problem that is actually the way it is due to RAM on old mainframes, apart from VM which is not a problem but a blessing.

          • 3 weeks ago
            Anonymous

            Here we go, some 20-something who knows literally nothing about his OS, its history, its ancestry, or the capabilities of the machines it had to run on in 1969. Let's frick this chicken.

            >Programs assume malloc will succeed because every program has a massive virtual address space all to itself.
            >(four more lines of rambling about 21st century ISAs that have nothing to do with the 1960s machines Unix was born on snipped for its utter irrelevance to the discussion)
            Imagine thinking history began with 32-bit protected mode x86. Of course, in the real world, the PDP-11 had 256KB of RAM and no virtual memory in its most common configuration - that's what set the standard ("we have to overprovision memory allocations or we'll NEVER get a successful malloc()") that you're stuck with 60 years later, not the 386 offering a 4GB address space decades later.

            >b-but posix l-locks
            Again, history didn't begin at whatever point in the 2000s you became conscious. POSIX didn't come along until 1988, and I can't find a hard date on POSIX file locking, but it doesn't appear to have been part of the initial specification. What's more, Unixes have had about four common file locking APIs across history, all incompatible enough with each other to make writing a wrapper for your breed of Unix a royal pain. Isn't being beholden to an ancient, insufficiently-specified OS for 1960s minis wonderful? This means, hilariously enough, even MS-DOS had file locks before Unix/POSIX, let alone NT.

            >b-but you provided no specific examples!
            Glad I didn't waste my time - your view of Unix in general is so myopic that you would have said even more stupid shit.

            >If a memory allocation fails then
            Your system is hosed in a typical Unix - you can't guarantee you can even allocate I/O buffers to write your program state anywhere. In any case, it's likely that the OOM killer is coming for you - hope it isn't going to reap your security processes!

          • 3 weeks ago
            Anonymous

            >OOM killer is coming for you
            swapon /mnt/bigboi/swapfile

          • 3 weeks ago
            Anonymous

            This confuses and enrages the UNIX hater.

          • 3 weeks ago
            Anonymous

            >This confuses and enrages the UNIX hater.

            >these were the only words i understood in the whole post
            And there it is.

          • 3 weeks ago
            Anonymous

            Are you Dave Cutler? You sort of whine and fangirl over NT like him but I thought he would be too busy to shitpost. I don't know of anyone else who so vehemently hates UNIX and POSIX.

          • 3 weeks ago
            Anonymous

            >you got me i know nothing about the os family i simp for
            Why didn't you just say this? It's shorter and means exactly the same thing.

          • 3 weeks ago
            Anonymous

            >i'm mad because microsoft keeps me on a contractual leash and tells me when to eat and where to shit
            Why didn't you just say this? It's shorter and means the same thing.

            Make an updated Win2k ISO with a recent NT build for us, Dave.

          • 3 weeks ago
            Anonymous

            >continues to cry that his "born again virgin" waifu os was slighted
            Meds

          • 3 weeks ago
            Anonymous

            >continues to cry that his "born again virgin" waifu os was slighted
            >Meds
            Actual projection of genuine mental illness.

          • 3 weeks ago
            Anonymous

            >Here we go, some 20-something who knows literally nothing about his OS, its history, its ancestry,
            You gleaned that from my discussing facts about systems architecture?

            >>(four more lines of rambling about 21st century ISAs that have nothing to do with the 1960s machines Unix was born on
            Nice straw man, but if you have to lie about what I said, then you've lost the debate.

            >that set the standard ("we have to overprovision memory allocations or we'll NEVER get a successful malloc()")
            How do you overprovision a memory allocation when there's no VM? Do you even have the slightest clue how computer memory works? You think on a PDP-11 with no VM you asked for 64 KB and got 1 MB???

            >Again, history didn't begin at whatever point in the 2000s you became conscious.
            Stop projecting. It's painfully obvious at this point you're describing yourself.

            >What's more, Unixes have had about four common file locking APIs across history, all incompatible enough with each other to make writing a wrapper for your breed of Unix a royal pain.
            "Four different forms of file locking" != "no file locking." If file locking wasn't in the very first prototype of UNIX it's because they had not faced nor tried to solve that problem yet. Not because of RAM. What in the frick makes you think they would have come up with the perfect solution had the PDP-11 had 1 MB of RAM? Or even 4 GB?

            >This means, hilariously enough, even MS-DOS had file locks before Unix/POSIX, let alone NT.
            So then file locking is not a RAM issue. You've just conceded both points. What are you b***hing about again?

            >>b-but you provided no specific examples!
            >Glad I didn't waste my time
            No, you're wasting our time b***hing about "muh UNIX" with zero specific examples except "no file locks due to RAM" and you just disproved yourself.

            >you can't guarantee you can even allocate I/O buffers to write your program state anywhere.
            Funny I've never run into that in software processing TB's of data.

          • 3 weeks ago
            Anonymous

            >How do you overprovision a memory allocation when there's no VM?
            Windows 1.0 did it easily by not having a 1960s-tier shitbox memory manager. It worked like this - I'll write it in moronese so you have a hope of understanding it:
            >have (say) 1mb memory
            >load 512kb program - program is in 64kb segments (it's an x86 real mode environment), but for simplicity, say all 512kb is paged in
            >program says "hello windows, i'd like 640kb memory block"
            >windows goes "hmm, there's only 512kb free"
            >windows runs through the segments of the loaded program looking for the lru segments
            >throws out two segments (128kb) of the program - it can be reloaded from its exe easily (the "mini swapfile" part i pointed out earlier)
            >windows can now satisfy the 640kb malloc() (well, localalloc() or globalalloc())
            This was all done in real mode, without any sort of CPU/protection help - that's the difference between an OS designed in the 1960s, and one designed in the 1980s.

            >my mommy says im smarter than you cope seethe rage
            Don't care, NEET.

            >pretending to be smart by repeating something that i already said
            Want a participation medal?

            >didn't actually understand what was said
            Imagine my shock.

            >more seething and b***hing
            Again, don't care.

            >yet another "b-buh muh detail of 21st century architecture that didn't exist when the os was designed"
            Drop off kid, you clearly don't have the mental faculties, let alone the insight, to make any sort of coherent argument. Love your 60 year-old OS - I don't care, literally nobody does. But I'm not your daddy who split the moment your mother said "I'm preg...", and I won't put up with your screaming asspulled bullshit because someone knows more than you about how operating systems work.

          • 3 weeks ago
            Anonymous

            >I'll write it in moronese so you have a hope of understanding it:
            First of all...moron...I have no idea why you're describing this as "over provisioning." Mac OS classic did the same thing with CODE resources. I can only assume that you think "UNIX bad" because it didn't do this and therefore "over provisioned." But the way both early Windows and classic Mac OS did it was ad hoc and limited to certain types of data. It also often resulted in disk/memory thrashing, poor performance, and crashes.

            From the beginning UNIX supported virtual memory which is the correct, robust method of solving these types of memory problems. Not all implementations of UNIX had what we would call a modern MMU available for this task. So early on they had to use whatever hardware support was available to accomplish the same thing. But that was only for a short time because every ISA was adopting full, modern MMUs. It was universally agreed that demand paging was how you solved these issues, and a full MMU avoided the performance hit of doing it all on the CPU.

            >This was all done in real mode, without any sort of CPU/protection help - that's the difference between an OS designed in the 1960s, and one designed in the 1980s.
            Thank you for confirming your earlier projection. Proper VM is a decade older than UNIX, and there were IBM System/360 mainframes with legit, modern MMUs, among others. Early Windows and Mac OS classic rolled their own ad hoc solutions because they started on CPUs with no support for proper VM. Trying to do it without a MMU would have hampered performance. Once x86 and 68K got full MMUs, both adopted legit VM.

            I should charge you for the history lessons.

            >various lines of seething
            Frick off child.

            >>yet another "b-buh muh detail of 21st century architecture that didn't exist when the os was designed"
            Tell us please which modern CPU architecture feature did not exist any where when UNIX was designed. We're listening. Tell us how it has hurt you.

          • 3 weeks ago
            Anonymous

            >I have no idea
            That's obvious. It's because it's promising resources that may not be actually there - you know, the very dictionary definition of "overprovisioning".

            >b-but (drags in even more architectures irrelevant to the discussion)
            Unix was not designed on, and certainly not initially targeted to, S/360s or (obviously) systems that were the best part of a decade away. This is your fundamental misunderstanding - early OSs were very tightly coupled to the architectures they ran on. Portability came later, and it was rare that any particular flavor of Unix (as they were just ports, almost always by different people) ran on more than one architecture. This is why Unix has always orbited the "source compatible" concept, because there was nothing else you could even attempt to guarantee in the era.

            Further, none of this changes the fact that the memory manager behavior was pretty much set in stone at this point, based exclusively around how the cheapest model of PDP-11 worked.

            >proper VM is a decade older than UNIX
            Yet again irrelevant, because it wasn't developed or initially available on systems with VM. PDP-11s didn't support it (at least in the base model), and you had to wait for the VAX for it to become standard.

            >n-no u
            This is why you get bullied at school, anon.

            (continued)

          • 3 weeks ago
            Anonymous

            >That's obvious.
            "Over provisioning" is not the correct terminology. If you have 512K of code you need 512K of space + space for data. Any system which guesses at what you don't need and removes it to be loaded later is a primitive form of swap.

            >It's because it's promising resources that may not be actually there
            If malloc succeeded then the resources are available, it just might not all be RAM.

            >>(drags in even more architectures irrelevant to the discussion)
            It's relevant because you think VM was invented in the 1980s.

            >Unix was not designed on, and certainly not initially targeted to, S/360s
            They knew what was out there, what was being adopted. You don't roll an ad hoc solution tied to an ISA when you can prepare for a proper solution. They didn't expose or rely upon PDP-11 details in a way that would lock UNIX to that machine. They did the exact opposite of what you accuse them of.

            >or (obviously) systems that were the best part of a decade away.
            Demand paging and MMUs were solved problems being adopted industry wide. The PDP-11 series itself got MMUs.

            >early OSs were very tightly coupled to the architectures they ran on
            Early Windows and classic Mac OS exposed and relied upon their ISAs in ways which made it nearly impossible to port them. UNIX did not, which is why it ended up being ported to everything.

            >Further, none of this changes the fact that the memory manager behavior was pretty much set in stone at this point, based exclusively around how the cheapest model of PDP-11 worked.
            Wrong. The cheapest model PDP-11 did not have hardware support for proper VM, yet UNIX has proper VM.

            >This is why you get bullied at school, anon.
            If you ever grow up you'll learn how to have a conversation without being an insulting twat. You might also learn not to project your own issues and experiences.

          • 3 weeks ago
            Anonymous

            >i missed the childhood psychological developments required to understand that im not the center of the universe
            >and being autistically emotionally invested in my waifu os (so much so i know absolutely nothing about its history and early implementations)
            >im psychologically incapable of understanding that you may not be
            Anon, understand that some people are far more developed than you. It's why I keep calling you a moron - there are parts of you missing that were fully present in my own children by age 4.

            OP asked what was terrible about POSIX, and I answered: the memory manager, and its interplay with how its filesystems work. This triggered your fragile little fee-fees because you're an autist who thinks Unix is the perfect incarnation of blessed K&R (hallowed be their name etc. etc.), and anybody who disagrees with you must be deliberately affronting your True and Correct religious belief (for that's all it really is). It's not actually my problem.

            The "how do you fix it" part is simply "throw it away and create something that is based on less than 60 year-old CS concepts, or hardware of the same age."

            If you can't handle any of this, go hide in whatever safespace you need - Tumblr is that way, for example. I'm done with your infantile nonsense.

          • 3 weeks ago
            Anonymous

            >Anon, understand that some people are far more developed than you.
            Is that why you were wrong on the following points?
            - "No file locks" but they were there four decades ago (1984).
            - "Because not enough RAM" but DOS had file locks so it can't be because of RAM.
            - "Tied to the PDP-11/forces the PDP-11 on us" but UNIX was intended to be portable and did not expose/rely upon PDP-11 details in ways which would lock it to that ISA. Which is why you can't name any PDP-11 limitation that you think still applies today "because muh UNIX on muh PDP-11."

            And is that also why you keep breaking down into seething fits, hurling insults, and making accusations which quite frankly reveal far too much about your own life and childhood?

            >OP asked what was terrible about POSIX, and I answered: the memory manager, and its interplay with how its filesystems works.
            Yes but you can't give a single actual example of either. So far your examples are "no file locks" which is obviously false. And "I want to unload code segments" which would be stupid on 64-bit processors with full MMU supported VM. "Segments" of code no longer even exist.

            Now, can you provide a REAL example? Without seething, raging, hurling insults, or crying about your childhood?

            >lines and lines of seething
            Some funny ironies in this exchange.
            - You blame UNIX for imposing PDP-11 limitations on us when it was portable years before Mac/Windows existed. Your example of "superior" memory management was tied directly to the original x86 ISA. Likewise Mac OS CODE resources were tied to the 68K ISA and disappeared with PowerPC.
            - You tell me I'm triggered while you rage, seethe, insult, and weep.

    • 2 weeks ago
      Anonymous

      There is zero positive correlation between commercial success and quality. If anything, it's typically a strong negative correlation.

  8. 3 weeks ago
    Anonymous

    Linux needs a ton of improvement, mostly in kernel space. As for POSIX, I would say C is the problem; it should be decoupled from the standard, or at least modified for the purpose.

    • 3 weeks ago
      Anonymous

      >needs a ton of improvement
      Yet no examples are provided.

    • 3 weeks ago
      Anonymous

      >nooo not c
      C is awesome if you recognize its place and what it does best. The complaints leveled against C are mostly moronic and based on false claims/information.

      The #1 thing I would change in regard to C and UNIX:
      Given a pointer which a process has rights to access, said process should be able to query total size in bytes and remaining bytes at any time. You shouldn't have to have the original pointer to query total size; any pointer within the block should work. This would make it trivial to deal with memory safety issues without resorting to cycle-wasting ref counts, garbage collection, or annoying borrow checkers. The heap manager knows this information, so I'm at a loss as to why query functions aren't part of the standard. One caveat is that a similar API would have to be provided by the OS for any blocks shared between processes, or between a process and the kernel.
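
      Something like this is what I'm picturing (mem_query and struct mem_info are invented; glibc's malloc_usable_size() is a partial analogue but requires the original allocation pointer):

      #include <stdbool.h>
      #include <stddef.h>
      #include <string.h>

      /* Hypothetical allocator query: given ANY pointer into a live block,
       * report the block's bounds. */
      struct mem_info {
          void  *base;      /* start of the allocation */
          size_t total;     /* total size in bytes */
          size_t remaining; /* bytes from the queried pointer to end of block */
      };
      extern bool mem_query(const void *p, struct mem_info *out);

      /* A bounds-checked copy then becomes trivial: */
      bool safe_copy(void *dst, const void *src, size_t n)
      {
          struct mem_info d;
          if (!mem_query(dst, &d) || d.remaining < n)
              return false;  /* would overrun: refuse */
          memcpy(dst, src, n);
          return true;
      }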

      The #2 thing I would change?
      Duplicate every string function in the standard library with a version that does NOT use null termination. Take the string ptr and length in bytes to process, and return lengths where required. This is #2 so that these new versions could use remainingBytes to verify the input length and avoid buffer overruns. Now null termination can be phased out.
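
      E.g. a counted version of strcpy might look like this (names invented, not in any standard):

      #include <stddef.h>

      /* Counted-string copy: no null terminator read or written.
       * Returns the number of bytes actually copied. */
      size_t str_copy(char *dst, size_t dst_len, const char *src, size_t src_len)
      {
          size_t n = src_len < dst_len ? src_len : dst_len;
          for (size_t i = 0; i < n; i++)
              dst[i] = src[i];
          return n;
      }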

      • 2 weeks ago
        Anonymous

        You have to be genuinely braindead to believe C is awesome, no matter what qualifiers you would like to add. It was widely outclassed by virtually every last one of its contemporaries and has never caught up.

        • 2 weeks ago
          Anonymous

          In modern times, what would you use instead?

          • 2 weeks ago
            Anonymous

            You use C for what C is best at: tight, fast, efficient code. Especially when that code needs to directly manipulate bytes in memory. OS kernels, drivers, high performance computing libraries, libraries which accelerate tasks linked by higher level languages. Oh, and the runtimes of higher level languages. C is used all over the place to implement higher level languages that people ironically think should replace C.

            You can certainly have good performance without C. So if you don't need the fastest/most efficient portable code, then use something with more abstractions to help you get work done.

          • 2 weeks ago
            Anonymous

            >C is used all over the place to implement higher level languages that people ironically think should replace C.
            lol this

            >You can certainly have good performance without C
            For scripting I like Lua, and it binds pretty easily with C too.

        • 2 weeks ago
          Anonymous

          >allows faster solutions than any contemporary
          >"outclassed by virtually every single contemporary"
          Isn't it ironic that the people most easily proven wrong tend to also be the people throwing the most insults?

      • 2 weeks ago
        Anonymous

        >C
        >moronic build system
        >inconsistent syntax

        http://xahlee.info/comp/why_i_hate_C_lang.html
        Xah was right.

        • 2 weeks ago
          Anonymous

          >hurr programmers today no understand n-based numbers!
          LOL what shit south Pacific school did he go to where this was the case?
          >durr it C fault!
          Pretty sure it would be the school's fault.
          >students no understand digits in hexadecimal!!!
          Who the frick doesn't know hex includes 0-9?
          >why you use bitmask??? bitmask bad!!!
            Thank you for that link anon. Now I know to never consider any video or post he ever makes. Not that I would have anyway.
          >why pointer very bad reference bad too!!! YOU NO USE!!!
          Ahhh...a midwit who got filtered by bitmasks and pointers. Makes sense now.

          • 2 weeks ago
            Anonymous

            >midwit who got filtered by bitmasks and pointers. Makes sense now.

            Heavy projection. Flexing about pointers is as midwit as you can get. And you ignored the main point he makes about C having sloppy, random syntax.

          • 2 weeks ago
            Anonymous

            >not liking bad design = filtered
            ironic how cniles and webshits have so much in common

          • 2 weeks ago
            Anonymous

            And it's always "filtered" with something stupid like random useless syntax quirks. I swear these people must be pajeets
            >pointer syntax illogical
            >pajeet memorises it
            >pajeet claims people who don't like it don't understand pointers
            >pointers can be explained to a child in 2 min

  9. 3 weeks ago
    Anonymous

    POSIX existing is horrible because it exists and I'd fix it by installing Windows.

    • 3 weeks ago
      Anonymous

      >POSIX Subsystem for NT
      >Windows Services for UNIX
      >Windows Subsystem for Loonix
      >Windows Subsystem for Loonix 2: Virtualization Boogaloo
      [ You are here. ]
      >Windows 12 based on GNU core utils and MINIX3 microkernel

      • 3 weeks ago
        Anonymous

        >t-the next version of windows will be based on eunuchs honest guize its really happening this time!!!1!!1!!one!1!!1!eleven
        He actually did the meme. Fricking decades of being utterly humiliated, and he's still doing it.

        • 3 weeks ago
          Anonymous

          >the next version of windows will be based on eunuchs
          You sound upset. Also MINIX is not Linux or UNIX.

      • 3 weeks ago
        Anonymous

        you have to be over 18

        • 3 weeks ago
          Anonymous

          You have to be under 600lbs.

          • 3 weeks ago
            Anonymous

            >said the loontroon unironically

      • 3 weeks ago
        Anonymous

        >and MINIX3 microkernel
        >windows kernel is now running inside IME

      • 3 weeks ago
        Anonymous

        the nt kernel is the best part of windows by far, why the hell would they change it

        • 3 weeks ago
          Anonymous

          Legitimately the only good part of the OS.

        • 3 weeks ago
          Anonymous

          Not to mention you're throwing 30 years of drivers in the trash. You have half a chance of an NT 4.0 driver working on Windows 10 32-bit.
          Compare and contrast to how hardware support has been in the top two complaints about Linux for decades now. It's example #2^19937-1 of how freetards can't handle reality on any level.

          • 2 weeks ago
            Anonymous

            What kind of bizarro world are you from? Hardware support in linux has always improved over time and in the past decade, it's been the OS with the best driver support by far.
            Wangblows on the other hand makes it impossible to use your old hardware OR your new hardware. You can't even use non-pozzed CPUs from 2 years ago in wangblows 11.

          • 2 weeks ago
            Anonymous

            >2 years ago
            im not poor so generally i dont use hardware that isnt from current year

        • 3 weeks ago
          Anonymous

          >Legitimately the only good part of the OS.

          >the nt kernel is the best part of windows
          That's the joke but everyone here is too busy crying and pissing. NT is actually flexible enough to support entire userspace subsystems that have zero Windows programs, features, binary compat, etc. You could probably have a GNU/NT distro running X11 and *nix userspace with a bit of effort if MS ever open sourced it.

          >said the loontroon unironically

          >no u
          Day of the treadmill soon!

          >POSIX defines file locking APIs.
          The difference is that under Windows it's a very common (default?) behavior so if any other process has your file open it's locked. There are ways to debug that and find the process that has the lock but I never could find a straightforward method that made sense. I remember having to write down a list of steps and it included gentoo-tier CLI fiddling.

          • 3 weeks ago
            Anonymous

            >The difference is that under Windows it's a very common (default?) behavior so if any other process has your file open it's locked. There are ways to debug that and find the process that has the lock but I never could find a straightforward method that made sense. I remember having to write down a list of steps and it included gentoo-tier CLI fiddling.
            I'll reiterate that file locks in UNIX like OSes are not perfect. But it's also not because "muh mainframe OS" (which may not have even been your comment).

          • 3 weeks ago
            Anonymous

            >I'll reiterate that file locks in UNIX like OSes are not perfect.
            True. By the way do you know if it's possible to lock portions of files under POSIX like on Windows? I have no use case but I'm curious.

            >But it's also not because "muh mainframe OS" (which may not have even been your comment).
            Not me, I only program for *nix these days.

          • 3 weeks ago
            Anonymous

            >True. By the way do you know if it's possible to lock portions of files under POSIX like on Windows? I have no use case but I'm curious.
            I'd have to look it up anon. I don't know off hand.

          • 2 weeks ago
            Anonymous

            >NT is actually flexible enough to support entire userspace subsystems that have zero Windows programs, features, binary compat, etc.
            That's basically what WSL1 was. It worked, but it was really, really shitty to use.
            WSL2 was what they should have done to begin with. It's %CurrentCentury%, everyone and their mother has a CPU that supports hardware-accelerated VMs.

            >You can't improve on perfection, though loontroons will keep trying with their perfection-likes.

            The irony here is that not even Macgays (with jobs) use crap like Launchpad.

      • 3 weeks ago
        Anonymous

        GAHNOO+Winderz

  10. 3 weeks ago
    Anonymous

    rm -r /
    simple as

  11. 3 weeks ago
    Anonymous

    rewrite Plan 9
    WA LA, you now have the best OS you could wish for

  12. 3 weeks ago
    Anonymous

    >keyboard goes wonky
    unwonk the keyboard
    function reset_keyboard {
        id=$(xinput |
            grep SEMICO |
            grep -v "Consumer Control" |
            awk '{print $5}' | cut -c 4-)
        echo "id=$id"
        xinput disable "$id"
        xinput enable "$id"
    }

  13. 3 weeks ago
    Anonymous

    so there are some morons that think that only one guy believes unix is shit?
    what the hell man

    • 3 weeks ago
      Anonymous

      >so there are some morons that think that only one guy believes unix is shit?
      No, I think there's mainly one guy who would seethe this fricking hard over something as specific as UNIX vs NT file locks. Literally who else?

    • 3 weeks ago
      Anonymous

      People who b***h about UNIX or C typically have a list of bullshit complaints which originate with poor blog posts. Neither is perfect and there are valid complaints about both. But both are pretty damn good and the complaints offered are typically false ones.

      "muh file locks!" is a perfect example. Of course there are file locks. But we don't get a specific example of what's wrong with them or how it should be done. Only b***hing that it's the PDP-11's fault and if someone had installed 512K of RAM the first compiled copy of UNIX would have had the perfect file lock solution for all use cases ever.

      This anon
      >What would be great is to have a system call that would allow you to save the entire process's memory onto disk and then be able to re-run it, and the process would continue where it left off just as if it was a context switch.
      at least had some ideas. I understand where he's coming from. I just don't think it's realistic to expect the OS to make it possible to pause a process then resume it with newly compiled code so that you can sidestep a memory allocation failure. That would require a lot of man-hours of work, ABI changes, API changes, tight coupling between the OS and language runtimes, all for a recovery feature almost no one would ever use. It's on the programmer to understand their problem space and the memory requirements, and to plan accordingly.

      But, at least he did have a suggestion for an OS improvement. There are languages that let you literally change code, roll back, and continue in an interpreter, never leaving a running state. It's productive to work that way. But it's also really a language feature and requires an interpreter.

      Plan 9 also always comes up in these discussions. Plan 9 is an interesting thought experiment, but if it had superseded UNIX, people would be b***hing about it. It's far from perfect. A valid criticism of UNIX is the "everything is a file" philosophy. Well, Plan 9 turns that into "everything is a network file." Both are bad taken to an extreme.

      • 2 weeks ago
        Anonymous

        >requires an interpreter
        No it doesn't.

  14. 3 weeks ago
    Anonymous

    Do we have any OS projects out there trying to make these improvements over UNIX? I would hope the people smart enough to notice the problems have at least written some good papers about them that encouraged hobbyists to try out some solutions.

    • 3 weeks ago
      Anonymous

      Most R&D OSes are honestly "pie in the sky" abstractions that have their own problems, which is kind of how I feel about Plan 9. Microsoft did have a research OS that tried to eliminate virtual address spaces via tagged pointers: security wasn't enforced by virtual-memory process separation but by a block-level permission check on every memory access. This has a performance penalty (possible to mitigate in silicon), but also a performance gain, because you do away with traditional mode switching and simplify shared memory. Under this system a true microkernel OS architecture would make sense. I'm not convinced it would actually improve security, though; EoP would just occur at the level of an allocated pointer rather than a process.
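
      If it helps to picture the idea, here's a toy of it in C (emphatically not the actual Microsoft design, whose checks would be done by the hardware on every access, not by a function call; all names invented):

        /* Toy of pointer tagging for protection: each allocation records a
         * tag, each access checks it, so the protection boundary is the
         * pointer rather than a separate address space. */
        #include <stdio.h>
        #include <stdlib.h>

        struct tagged_ptr {
            void     *addr;
            unsigned  tag;       /* permission/capability tag */
        };

        static const unsigned block_tag = 0xBEEF;  /* assigned at allocation */

        static void *checked_deref(struct tagged_ptr p)
        {
            if (p.tag != block_tag) {   /* silicon would do this check */
                fprintf(stderr, "tag mismatch: access denied\n");
                exit(1);
            }
            return p.addr;
        }

        int main(void)
        {
            int *data = malloc(sizeof *data);
            struct tagged_ptr good = { data, 0xBEEF };
            *(int *)checked_deref(good) = 7;       /* allowed */
            printf("%d\n", *data);

            struct tagged_ptr forged = { data, 0x1234 };
            checked_deref(forged);                 /* denied: exits */
            free(data);                            /* not reached */
            return 0;
        }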

    • 2 weeks ago
      Anonymous

      ares/helios but it's not really going anywhere at the moment https://archive.fosdem.org/2023/schedule/event/heliosuk/

      • 2 weeks ago
        Anonymous

        >not really going anywhere at the moment
        That has been the main problem with new operating systems. Redox OS was launched with much fanfare but just isn't getting anywhere. Genode is interesting, but development is glacial. AtheOS ended. And now it seems ToaruOS has been terminated as well.

  15. 3 weeks ago
    Anonymous

    Rewrite it in Zig

    • 3 weeks ago
      Anonymous

      andrew is literally a midwit

  16. 3 weeks ago
    Anonymous

    throw the Wayland devs in Guantanamo

  17. 3 weeks ago
    Anonymous

    >nothing but lying, screaming cope
    I think I broke it.

  18. 3 weeks ago
    Anonymous

    A classic one is that the /usr directory was meant to be for user data (i.e. what /home is today). So why do we have /usr/bin, /usr/lib, etc.?
    http://lists.busybox.net/pipermail/busybox/2010-December/074114.html
    TL;DR: in the early days of Unix they ran out of space on the root disk, so they just shoved system files into /usr, which lived on a second disk. That has been the convention ever since. But it's not a clean design, since it literally exists because a disk filled up 50 years ago.

    The design change would be to fold /usr/bin, /usr/lib, and all that into the root hierarchy and put user folders in /usr. I think one of the Linux distros fixes this, the OCD one obsessed with Docker, forget the name.

  19. 3 weeks ago
    Anonymous

    End users love to do everything with one program, not combining small programs.

  20. 2 weeks ago
    Anonymous

    You can't improve on perfection, though loontroons will keep trying with their perfection-likes.

    • 2 weeks ago
      Anonymous

      There are too many fashion-oriented designers, that is true, but let us not forget that there are people who are genuinely concerned with old-fashioned concepts such as performance and reliability.
      Pic related.

    • 2 weeks ago
      Anonymous

      Before mac
      >thumbnails
      After mac
      >no thumbnails
      :thinking:

  21. 2 weeks ago
    Anonymous

    far2l console extensions

  22. 2 weeks ago
    Anonymous

    Everything about unix/posix is terribad by design: from shell-based interactions, to the garbage file-and-folder metaphor, to the ridiculous everything-is-a-file-but-not bullshit. It's both over- and under-specified, meaning everything is incompatible, but also everything is buggy as frick, because you need 50 layers of bugs to get a functional system. All of which hampers security efforts too.

    • 2 weeks ago
      Anonymous

      >From shell-based interactions to the garbage file-and-folder metaphor to the ridiculous everything-is-a-file-but-not bullshit.
      Got an outline for what it should be like instead?

      • 2 weeks ago
        Anonymous

        Of course he doesn't.

      • 2 weeks ago
        Anonymous

        Everything is an object. Instead of a hierarchical filesystem you have a tagged object store.
        You don't need OOP, just a hierarchical record type.

        • 2 weeks ago
          Anonymous

          So how do you ensure unique file identification? Tags can resolve to multiple files.

          • 2 weeks ago
            Anonymous

            In regards to programs referencing their resources: by direct (possibly weak) reference. This would make it trivial for multiple versions of a program to coexist with some shared and some unique data in the object store. In this case tags are entirely for the user's convenience.

            In regards to users finding files: they can be queried by combining tags with explicit or implicit operators (and, or, etc.) to narrow the search. This does allow multiple objects to be stored with the same name and tags, but the result is no worse than duplicating files under an identical but misordered hierarchy, which is possible under current file systems.
            Of course a program or script can query the object store analogously.

            Also, an object store does not preclude hierarchy; it should be entirely possible to have an object store as a child of another object store. Though I struggle to think of a situation where that would be required or would add clarity.

            It would also basically solve all the problems that regedit, dconf, etc. are trying to solve.
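
            To make the record type concrete, a toy in C (all names invented): objects keep a stable id for programs to reference directly, tags are plain strings for the user, and a query is an AND over tags that can legitimately match several objects.

              /* Toy tagged object store: stable ids for programs, tags for
               * users, AND-queries that may return multiple objects. */
              #include <stdio.h>
              #include <string.h>

              #define MAX_TAGS 8

              struct object {
                  unsigned long id;             /* stable handle for programs */
                  const char   *tags[MAX_TAGS]; /* user-facing, may repeat */
                  size_t        ntags;
                  /* payload omitted */
              };

              static int has_tag(const struct object *o, const char *tag)
              {
                  for (size_t i = 0; i < o->ntags; i++)
                      if (strcmp(o->tags[i], tag) == 0)
                          return 1;
                  return 0;
              }

              /* AND of all wanted tags; prints every match, since tags do
               * not have to resolve to a single object. */
              static void query(const struct object *store, size_t n,
                                const char **want, size_t nwant)
              {
                  for (size_t i = 0; i < n; i++) {
                      size_t hits = 0;
                      for (size_t j = 0; j < nwant; j++)
                          hits += has_tag(&store[i], want[j]);
                      if (hits == nwant)
                          printf("object %lu matches\n", store[i].id);
                  }
              }

              int main(void)
              {
                  struct object store[] = {
                      { 1, { "music", "flac" }, 2 },
                      { 2, { "music", "mp3"  }, 2 },
                      { 3, { "photo", "raw"  }, 2 },
                  };
                  const char *want[] = { "music", "flac" };
                  query(store, 3, want, 2);   /* prints: object 1 matches */
                  return 0;
              }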

  23. 2 weeks ago
    Anonymous

    I'd throw it in the garbage and adopt 9front as the rolling standard

  24. 2 weeks ago
    Anonymous

    The fix is abolishing it and adopting a radical microkernel approach.
    Cutler was right.

    • 2 weeks ago
      Anonymous

      So you want your core i9 to run like a 6502? Microkernels were never adopted because the mode switching murdered performance.

      • 2 weeks ago
        Anonymous

        The Nintendo 3DS and the Nintendo Switch run microkernels and they're doing just peachy. Fact of the matter is that we've figured this shit out a long time ago, but practitioners aren't paying attention because of these outdated stereotypes.

        • 2 weeks ago
          Anonymous

          >The Nintendo 3DS and the Nintendo Switch run microkernels and they're doing just peachy.
          Classic Mac OS also had a microkernel in OS 8 and 9. Here's the problem: the way they're using the word is not the way microkernel advocates are using the word. Those examples are still "monolithic" and avoid mode switching just above that microkernel. A true microkernel architecture would not do this. Every OS service would be small and a separate process. It's the UNIX philosophy of "do one thing and do it well" taken to the extreme. systemd would be broken up under this architecture (as an example).

          The problem is that we still use separate address spaces and ring privileges to try to protect processes and the OS. Today a user process calls an OS API and there's a pair of mode switches. Maybe a few pairs if there's some back and forth. In a true microkernel design there are a ton of mode switches as disparate components cooperate to service the API call.

          I mentioned a Microsoft R&D OS above that used pointer tagging instead of virtual address spaces for security. It eliminated a costly aspect of mode switching (assuming tag support in the CPU). In that system a true microkernel approach would make sense.

          >Fact of the matter is that we've figured this shit out a long time ago, but practitioners aren't paying attention because of these outdated stereotypes.
          They're not paying attention because of speed. NeXT hired a key engineer behind Mach, and even they stopped short of using the fully microkernel Mach+BSD design; they stuck with the last hybrid version before it, for performance reasons, in NeXTSTEP. That choice continues all the way through to the latest macOS/iOS.

  25. 2 weeks ago
    Anonymous

    what's a microkernel? what's the difference between whatever UNIX is and a microkernel?

    • 2 weeks ago
      Anonymous

      https://en.wikipedia.org/wiki/Microkernel

    • 2 weeks ago
      Anonymous

      Microkernel = do as much shit as possible in userspace. Windows NT and Linux are monolithic kernels; XNU is somewhere in between.
      The theory is that it makes operating systems more secure and modular. The practice is that it usually makes them slower and more limited.
      Microkernels are typically either very limited in scope (MINIX, Horizon on the 3DS and Switch) or stuck in eternal development hell (GNU Hurd).

      • 2 weeks ago
        Anonymous

        >Microkernel = do as much shit as possible in userspace
        OK, but what exactly? Context switching? Memory management? Drivers? It's got to be one of those three afaik.

        • 2 weeks ago
          Anonymous

          Kernel is only responsible for context switching, virtual memory, and basic IPC. Everything else from networking to device drivers is implemented as a userspace process.

          • 2 weeks ago
            Anonymous

            I see: the microkernel deals with process concurrency and memory, but drivers and everything else live in userspace.
            It's almost as if the kernel were an Erlang VM and not one bit more.

    • 2 weeks ago
      Anonymous

      >Here's the problem: the way they're using the word is not the way microkernel advocates are using the word.
      >Every OS service would be small and a separate process.
      Microkernel advocate here. It's even more extreme: under a "proper" microkernel architecture you would only have a tiny stub or core of a kernel sitting in the driver's seat where a monolithic kernel would normally go. It's a tiny control program or executive layer that can heal the system by restarting every other vital process, both inside and outside of what's considered kernel space in terms of permissions.
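
      The "healing" part is basically a supervisor loop. A toy of it with plain POSIX process calls (a real microkernel core would do this over its own IPC, not fork/exec, and "sleep 2" is just a stand-in for a driver or server):

        /* Toy executive: launch a vital service, block until it dies,
         * restart it. Loops forever, which is the point of a supervisor. */
        #include <stdio.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            for (;;) {
                pid_t pid = fork();
                if (pid == 0) {
                    /* stand-in for a driver or server process */
                    execlp("sleep", "sleep", "2", (char *)NULL);
                    _exit(127);              /* exec failed */
                }
                int status;
                waitpid(pid, &status, 0);    /* wait for the service to die */
                fprintf(stderr, "service %d exited, restarting\n", (int)pid);
            }
        }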

  26. 2 weeks ago
    Anonymous

    watching this https://www.youtube.com/watch?v=UMsscWnM67g
    >most of the OS runs as separate processes
    >the kernel's job is to handle the message passing
    Is that right? If you want to do open(), read(), write(), or close(), you send a message to the file system/IO process instead of making a system call (i.e. effectively a function call)? That seems slow.
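
    To make that concrete for myself, here's roughly what one such round trip looks like, with a socketpair() standing in for the kernel's message passing (all names invented): every hop is a context switch, which is presumably where the overhead comes from.

      /* Toy of the message-passing model: instead of write() trapping into
       * a kernel that does the work itself, the client sends a request to a
       * separate "file server" process and blocks waiting for the reply. */
      #include <string.h>
      #include <sys/socket.h>
      #include <sys/wait.h>
      #include <unistd.h>

      struct msg {
          int  op;                 /* 1 = write */
          char data[64];
      };

      int main(void)
      {
          int sv[2];
          socketpair(AF_UNIX, SOCK_STREAM, 0, sv);  /* stands in for kernel IPC */

          if (fork() == 0) {                        /* "file server" process */
              struct msg m;
              read(sv[1], &m, sizeof m);            /* receive the request */
              int ok = -1;
              if (m.op == 1)
                  ok = write(STDOUT_FILENO, m.data, strlen(m.data));
              write(sv[1], &ok, sizeof ok);         /* send the reply */
              _exit(0);
          }

          struct msg m = { 1, "hello via the file server\n" };
          write(sv[0], &m, sizeof m);               /* client -> server */
          int reply;
          read(sv[0], &reply, sizeof reply);        /* block for the reply */
          wait(NULL);
          return 0;
      }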

  27. 2 weeks ago
    Anonymous

    All the problems people have with monolithic kernels sound like skill issues. Monolithic and hybrid kernels are faster and optimized for real OS tasks, and people out there have maintained them for decades; if they're too complex for you, that's perhaps a you problem. Unless you can make a simple microkernel and then optimize its connections to drivers and graphics to the point that the difference from monolithic kernels is negligible, but that seems like something advocates don't want.

    Anyone know how Haiku, L4, 9front, Fuchsia, and Helios are doing in this regard? Fuchsia seems the most promising; I wouldn't be surprised to see Chromebooks running it within the decade.

  28. 2 weeks ago
    Anonymous

    Has Minix 3 development ended?

  29. 2 weeks ago
    Anonymous

    posix threads is the worst fricking interface ever lmao, using raw syscalls is much easier.
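
    for reference, this is the ceremony I mean at its absolute smallest (build with cc -pthread), and it only gets worse once attrs and cancellation show up:

      /* Smallest possible pthreads usage: create a thread, hand it an
       * argument, join it. */
      #include <pthread.h>
      #include <stdio.h>

      static void *worker(void *arg)
      {
          printf("worker got %d\n", *(int *)arg);
          return NULL;
      }

      int main(void)
      {
          pthread_t t;
          int n = 42;
          pthread_create(&t, NULL, worker, &n);  /* NULL attrs = defaults */
          pthread_join(t, NULL);                 /* wait for it to finish */
          return 0;
      }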

  30. 2 weeks ago
    Anonymous

    FYI:
    https://wiki.installgentoo.com/wiki//aosg/_Alternative_Operating_System_General

  31. 2 weeks ago
    Anonymous

    design a follow-up to POSIX that actually standardizes all the new glibc/Linux extensions

  32. 2 weeks ago
    Anonymous

    unix is already perfect and everybody should be moving to bsd.
