Is there a good reason to use this instead of ext4 as a casual desktop user?

  1. 2 years ago
    Anonymous

    Not an expert in any way, but just the fact that you can store more data and resize root partitions WHILE IN USE convinced me.

    • 2 years ago
      Anonymous

      >store more data
      elaborate please, anon
      >resize root partitions WHILE IN USE
      how often do you find yourself doing that?

      • 2 years ago
        Anonymous

        >store more data
        btrfs uses filesystem compression so theoretically you can store more than 100GB on a 100GB drive
        >how often
        As a person with one drive, it's very nice to be able to do it.
        I don't do it often because i'm not a distrohopper but it does come in handy from time to time.
        ext4 isn't as flexible with that. I'd need a live iso just to resize a partition.
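        for what it's worth, growing a mounted btrfs root is a one-liner, no live ISO needed (mount point is just an example):
          sudo btrfs filesystem resize max /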

        • 2 years ago
          Anonymous

          >btrfs uses filesystem compression so theoretically you can store more than 100GB on a 100GB drive
          that's pretty cool, thanks
          >I don't do it often because i'm not a distrohopper but it does come in handy from time to time.
          right on. some things are good to have just in case you need them, i suppose.

      • 2 years ago
        Anonymous

        >>how often do you find yourself doing that?
        It came in handy for me just the other day
        >boot a live usb to move my os to a bigger ssd
        >dd all my shit over
        >reboot
        >forgot to resize the partition too
        >no matter, i can just do it now

        I could also imagine it being useful if you're dual booting Windows and want to remove the Windows partition and then reclaim the space.

    • 2 years ago
      Anonymous

      >and resize root partitions WHILE IN USE convinced me
      If you use btrfs subvolumes instead of partitions you don't even need to bother with resizing.
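      rough sketch of that (assuming the top-level volume is mounted at /mnt and the device is /dev/sda2, both made-up names):
        sudo btrfs subvolume create /mnt/@
        sudo btrfs subvolume create /mnt/@home
        # mount a subvolume wherever you'd otherwise have a partition
        sudo mount -o subvol=@home /dev/sda2 /home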

    • 2 years ago
      Anonymous

      >resize root partitions WHILE IN USE
      You can do this with ext4 and most other filesystems just fine.

      • 2 years ago
        Anonymous

        Uh, no. You can usually expand. But you typically can't shrink while in use. Even Red Hat's favored child XFS can't do online shrinks.

        • 2 years ago
          Anonymous

          >You can usually expand
          Which is resizing, yes.
          >But you typically can't shrink while in use
          Can you shrink a btrfs partition while in use? I'm not aware of any way to safely do that other than removing the partition from the pool, destroying it, recreating it smaller, and then adding it back to the pool as fresh space. If it's the only partition in the pool then I'm pretty sure there's just no way to safely shrink it.

          • 2 years ago
            Anonymous

            >>You can usually expand
            >Which is resizing, yes.
            Resizing includes expanding AND shrinking. While technically correct, if a filesystem performs one and not the other or with special limitations then it's not "just as good" as you implied.
            >>But you typically can't shrink while in use
            >Can you shrink a btrfs partition while in use?
            You can shrink a btrfs filesystem while in use.
            You can't shrink an ext4 filesystem while in use.
            You can't shrink an XFS filesystem period (have to recreate it smaller and restore).
            >I'm not aware of any way to safely do that other than removing the partition from the pool, destroying it, recreating it smaller, and then adding it back to the pool as fresh space. If it's the only partition in the pool then I'm pretty sure there's just no way to safely shrink it.
            pic rel
            https://linuxhint.com/resize_a_btrfs_filesystem/
            btrfs isn't just a filesystem like ext4. It basically incorporates functionality of LVM and mdadm into the project itself. Because of this you can do things like shrinking a btrfs filesystem while in use. The DISK partition size itself doesn't change but that has nothing to do with the filesystem.
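            rough sketch of an online shrink (size and mount point are just examples):
              # shrink the mounted btrfs filesystem by 10GiB, no unmount needed
              sudo btrfs filesystem resize -10G /
              # the partition itself can then be shrunk separately with gdisk/parted if you care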

          • 2 years ago
            Anonymous

            >pic rel
            Neat

          • 2 years ago
            Anonymous

            >The DISK partition size itself doesn't change but that has nothing to do with the filesystem.
            You can just gdisk that after you resize your filesystem just fine.

        • 2 years ago
          Anonymous

          >Even Red Hat's favored child XFS can't do online shrinks.
          XFS is well known for not being able to shrink at all, regardless of the system state.

    • 2 years ago
      Anonymous

      >WHILE IN USE
      Something NTFS has been able to do for over twenty years. Is this the power of Linux?

      • 2 years ago
        Anonymous

        Please show me a demonstration of you shrinking your C partition online.

        • 2 years ago
          Anonymous

          Please record your reaction.

          • 2 years ago
            Anonymous

            How can I move my C partition to another physical disk in Windows? Hell, I'll even take an offline version, as long as it doesn't involve stuff like restoring backups of files after a reinstall. btrfs makes this trivial.

          • 2 years ago
            Anonymous

            Just use a program like Macrium Reflect. dd can do it too I think, via GOW. Pretty sure you could just dd your C: to a different partition.

          • 2 years ago
            Anonymous

            >Pretty sure you could just dd your C: to a different partition.
            Can I do that? I remember reading that this would render it unbootable and all guides I can find online just say to use system restore instead.

          • 2 years ago
            Anonymous

            If you clone the whole disk, it should be fine, because it'll take the EFI partition with it. And even if that gets borked, it's easy enough to restore.

          • 2 years ago
            Anonymous

            >it'll take the EFI partition with it
            Wait, can I not use the EFI partition on another disk? A part of what I want to do is to shunt Windows off to some secondary disk and continue using the same EFI (also /boot) partition as before.

          • 2 years ago
            Anonymous

            Sure, you'll just have to rewrite the EFI config via bootrec on your Windows USB so it gets pointed at the new partition.

  2. 2 years ago
    Anonymous

    if you have more than one drive and want them all set up in an array so that you logically only interface with them as one unit at the user level; alongside using features like redundancy or snapshotting
    if you have a single drive there's not much point

    • 2 years ago
      Anonymous

      Say I have two 500gb drives and stored less than 500gb on both combined. How hard is it to replace one of the drives with a larger one? Is there some command "remove drive A from the pool and migrate data to drive B until I add drive C to the pool"? I haven't worked with logical volumes before

      • 2 years ago
        Anonymous

        haven't had to do that myself yet but based on this stackoverflow answer my guess at balancing the array and then removing the drive seems sound enough
        https://serverfault.com/questions/476260/how-can-i-remove-a-drive-from-a-btrfs-raid-1-setup-to-make-it-just-a-single-btr
        should apply for similar setups, not just raid1 style

      • 2 years ago
        Anonymous

        >How hard is it to replace one of the drives with a larger one?
        Not hard. You just do a btrfs replace
        >Is there some command "remove drive A from the pool and migrate data to drive B until I add drive C to the pool"?
        Yeah if you don't want to do it all at once with a `replace`, you can do a `remove` (which will automatically migrate all data off the device you're removing and put it on any remaining devices it can) and then later do an `add` (which will make the new drive's space available to the pool but will not immediately move any data to it) and then you can do a `balance` to redistribute data among all the drives in the pool.

        >my guess at balancing the array and then removing the drive seems sound enough
        You don't need to balance before removing the drive.
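
        rough sketch of both routes (device names and mount point are made up):
          # one-shot: migrate everything from the old drive to the new one
          sudo btrfs replace start /dev/sdb /dev/sdc /mnt/pool
          # or piecemeal: remove (data migrates off automatically), add the new drive later, then rebalance
          sudo btrfs device remove /dev/sdb /mnt/pool
          sudo btrfs device add /dev/sdc /mnt/pool
          sudo btrfs balance start /mnt/pool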

        • 2 years ago
          Anonymous

          Sounds great. I am on a laptop with two ssds so that would be my use case. Have been using the other one for unused windows install since I haven't set up lvm and mounting it doesn't work as nicely. Which partitions are safe to use with btrfs? Home for sure, what about root and boot? Also, how does encryption work? Do I use luks or is something similar built-in?

          • 2 years ago
            Anonymous

            with btrfs you can generally just make subvolumes instead of separate partitions
            you can put luks under the entire btrfs filesystem for encryption

          • 2 years ago
            Anonymous

            You usually wouldn't bother to separate root and home into different partitions with btrfs. just make home a subvolume. Boot is a different story, since your bootloader needs to support btrfs if you want /boot to be on a btrfs volume. Might be easier to keep that separate.
            >Also, how does encryption work?
            Handled outside of btrfs. Generally you'd set up luks on the disks and then btrfs on top of that.
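            a minimal sketch of that stack (device and mapper names are placeholders):
              sudo cryptsetup luksFormat /dev/nvme0n1p2
              sudo cryptsetup open /dev/nvme0n1p2 cryptroot
              sudo mkfs.btrfs /dev/mapper/cryptroot
              sudo mount /dev/mapper/cryptroot /mnt
              sudo btrfs subvolume create /mnt/@
              sudo btrfs subvolume create /mnt/@home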

      • 2 years ago
        Anonymous

        just the one replace command, btrfs will move data off the drive being replaced onto the remaining drives automatically

  3. 2 years ago
    Anonymous

    There is no good reason to use btrfs instead of ext4 as a casual desktop user.

    • 2 years ago
      Anonymous

      >two good reasons stated before
      >comes along and posts literal fake information
      Nice one moron

      • 2 years ago
        Anonymous

        > two irrelevant reasons for a casual user stated before (literally only power users would care)
        > reads a correct factual post
        > loses their shit
        yeah, you're a fricking moron

        go back to fixing your computer all day instead of actually getting paid to use it. maybe you can go resize your root partition while it's in use and flex on the normies.

        • 2 years ago
          Anonymous

          what fixing do i need to do? btrfs just werks

          • 2 years ago
            Anonymous

            nice deflection, moron

            op you should use ext4 just so you have nothing in common with this moron

          • 2 years ago
            Anonymous

            > doesnt respond anything other than ad-hominems
            > hurr durr youre a moron

        • 2 years ago
          Anonymous

          normies don't know what a filesystem is

  4. 2 years ago
    Anonymous

    Yes, files are checksummed so you know when bitrot occurs
    it has CoW, so copies are faster and use less space
    transparent compression

    • 2 years ago
      Anonymous

      big whoop, ntfs on zvols has all that shit and more

  5. 2 years ago
    Anonymous

    Is bcachefs usable yet?

  6. 2 years ago
    Anonymous

    Instant copying of gigabytes of data is very nice.

  7. 2 years ago
    sega

    yeah but it's clear you're dumb enough to not gain anything from doing so

  8. 2 years ago
    Anonymous

    compression is the biggest selling point for me

    • 2 years ago
      Anonymous

      is there any performance impact when using compression?

      also, does btrfs use 64-bit inodes, 32-bit inodes, or does it let you choose? one unfortunate "advantage" of ext4 is that it still uses 32-bit inodes, which means it works better with some stubborn games that shit the bed on filesystems with 64-bit inodes.

      • 2 years ago
        Anonymous

        >inodes
        in particular, the extent counters

      • 2 years ago
        Anonymous

        >is there any performance impact when using compression?
        There can be. Depends a lot on the hardware and the compression flags. You'll probably get slower speeds if you're using a very fast SSD, since it takes time to compress/decompress the data. You'll also get slower speeds if you use a slow compression algorithm or a slow compression profile. In the general case though (HDDs, a reasonable compression scheme, and a non-shit CPU) you usually end up with a little bit *better* performance because there's a bit less disk IO and that outweighs the time taken to perform the compression/decompression.

        • 2 years ago
          Anonymous

          sounds like a feature i wouldn't want on my root partition, but now i'm seriously considering it for my NAS. would love to crank the compression flags up to the max and see how much i can cram in there.

          • 2 years ago
            Anonymous

            >sounds like a feature i wouldn't want on my root partition, but now i'm seriously considering it for my NAS
            That's how I use it too. No compression on root, suitable compression for NAS.
            Keep in mind though that it's not all or nothing. You can turn on compression for a single file, a directory tree, a subvolume, or an entire filesystem with btrfs. Also, if you have compression on and btrfs detects that a file wouldn't compress well (typically because it's an already compressed file), it just won't attempt to compress it further and will read/write to it like a normal file with no compression enabled.
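            e.g. per-path compression without touching mount options (path is made up):
              # new writes under this directory get zstd; existing data isn't rewritten
              sudo btrfs property set /srv/nas compression zstd
              btrfs property get /srv/nas compression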

          • 2 years ago
            Anonymous

            wtf, sorcery

          • 2 years ago
            Anonymous

            >btrfs detects that a file wouldn't compress well
            That detection is not perfect and using the compress-force flag actually improves compression ratio by another ~10%. In general, I don't think compression is much of a bottleneck. zstd compresses at >500MB/s per core and decompression is almost 2GB/s per core. btrfs also uses all available cores to do it so it's probably not going to be noticeable in most cases.

            >Anyone who has substantial history with btrfs is at least a little averse to it
            Not at all. I've been using it since the 2.6 days and I never lost any data or had bugs like this. Worst thing was the shitty space allocation code that would tell you the fs is full even though there's some free space left.

          • 2 years ago
            Anonymous

            >That detection is not perfect and using the compress-force flag actually improves compression ratio by another ~10%
            Obviously that improvement depends a lot on the kind of data you're storing and your chosen compression scheme. I really doubt most data would fail the compression heuristics often enough to see anywhere near an extra 10% compression with compress-force.

          • 2 years ago
            Anonymous

            It worked for my root partition (including /home) and on my bulk storage partition where I keep all the good stuff.
            Unless you're only storing compressed data and absolutely nothing else force-compress will give you measurably more space.

          • 2 years ago
            Anonymous

            >I really doubt most data would fail the compression heuristics often enough to see anywhere near an extra 10% compression with compress-force.
            If you use zstd, you should probably use compress-force anyway since zstd does its own check like that, so you gain very little from using the btrfs one.

          • 2 years ago
            Anonymous

            >If you use zstd
            zstd is the only sane choice in this case. gzip is slower without compressing better and so is lzo.
            I use compress-force=zstd:3.
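            in fstab that looks something like this (UUID is a placeholder):
              UUID=xxxx  /  btrfs  compress-force=zstd:3  0 0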

          • 2 years ago
            Anonymous

            >zstd:3
            >:3

          • 2 years ago
            Anonymous

            lol never noticed before

          • 2 years ago
            Anonymous

            Well, yeah. The point was that compress-force is fully useless and redundant with it, while it may be theoretically useful in some cases with the others.
            I like zstd compression level 6, since I don't write enough for the speed loss to be notable, but I get significantly denser compression.

          • 2 years ago
            Anonymous

            It's not useless, compress-force is still necessary if you want to force compression (or use zstd's detection algorithm or whatever).
            YMMV, but I think it's almost always a win to force compression even if you think it might not save any space.

          • 2 years ago
            Anonymous

            Oh, sorry, my bad. I meant to write that compress is useless and force is what you should use. We agree on that.

          • 2 years ago
            Anonymous

            >transparent compression
            is in the general case useless because the majority of data is already compressed, such as images, video, et al.
            the only data format that is not already compressed is sometimes executable files, which are marginal
            on the other hand your cpu will be constantly used in order to compress files in the background, which is noticeable and will kill high IO. use zstd if you want to actually do this since it uses the least cpu

            compress-force is braindead. you're going to use up your cpu for no reason to attempt to compress gigabytes of incompressible data. if you download videos it will be especially harmful there, since normally it would look at it and frick off but now it will process the entire file, check if the compressed size is smaller and then write the uncompressed data to your drive. moron.

          • 2 years ago
            Anonymous

            You are moronic and all you said is wrong. I have tested this personally. You are talking out of your ass.

          • 2 years ago
            Anonymous

            Everything I said is exactly how it works, dude. Are you denying that compression takes up CPU time?
            Most of the large data formats have built-in compression because we've been using computers before hard drives were as big as they are today, so compression was necessary.
            I use btrfs myself but transparent compression is not really a huge selling point.

          • 2 years ago
            Anonymous

            Stop doubling down on your moronation. zstd is so fast that it doesn't matter. You absolutely do get substantially less disk usage using compress-force. I measured ~10% less on both my root and storage fs over just compress. Another guy proved it:
            >Just use compress-force, moron.

            >compression is not really a huge selling point
            Then don't use compression at all.

            >In which case you would end up copying all the data anyways. For organization purposes, hardlinks are strictly better.
            It's copy on write, so no. Only changes are written to disk. Think of it like diff on write.

          • 2 years ago
            Anonymous

            >It's copy on write, so no. Only changes are written to disk. Think of it like diff on write.
            I'm pretty sure if you modify a reflink it will copy the file there and take up the space required to hold the entire file, separately from the file it linked to.

          • 2 years ago
            Anonymous

            >I'm pretty sure if you modify a reflink it will copy the file there and take up the space required to hold the entire file, separately from the file it linked to.
            Nope. It won't.

          • 2 years ago
            Anonymous

            >zstd does its own check like that
            It does? source?

          • 2 years ago
            Anonymous

            https://facebook.github.io/zstd/zstd_manual.html#Chapter23
            >When a block is considered not compressible enough […]
            https://github.com/facebook/zstd/blob/dev/lib/compress/huf_compress.c#L1285
            zstd should skip incompressible chunks significantly faster than the speed it'd try to actually compress them. This is active either way, so also using btrfs' naive compressibility tests can actually add more overhead to compression time.

          • 2 years ago
            Anonymous

            >This is active either way
            If btrfs decides not to compress a file, zstd's check isn't running on nearly the entire file, which is going to be faster still.

          • 2 years ago
            Anonymous

            >which is going to be faster still
            You assume that the btrfs test is actually faster than zstd's, at least on small files (on large ones, the btrfs test is more likely to be wrong anyway). I'm not sure if it is or not, but it's certainly not equivalent to compressing the entire file.

          • 2 years ago
            Anonymous

            Confusing but insightful. So if zstd thinks it is a wash to compress a file it won't? Why would the default btrfs case not be force compression then? Hmm I'll give it a shot.

          • 2 years ago
            Anonymous

            >So if zstd thinks it is a wash to compress a file it won't?
            No, it'll just fast-forward past the data that it thinks it can't compress, using a (probably) better algorithm than btrfs' tests. If a file has parts that are compressible and significant parts that aren't, it should compress the former and quickly go through the latter without compressing them.
            >Why would the default btrfs case not be force compression then?
            btrfs still uses zlib by default, where its naive check is somewhat useful. In practice, if you want compression, you basically always want compress-force=zstd:$level.

          • 2 years ago
            Anonymous

            >You assume that the btrfs test is actually faster than zstd's
            Do you know what the btrfs check is? It's just checking the performance of the underlying compression algorithm on the first chunk of the file. If it's a good compression ratio it keeps using it for the rest of the file (and doesn't keep running the check), otherwise it stops using compression and just writes the whole file normally. The check is nearly free and more importantly doesn't take time proportional to the size of the file. If you force the use of zstd anyway on an incompressible file, zstd has to unnecessarily process the entire rest of the file to marshal it into uncompressed zstd frames for no reason when it could otherwise be avoided and the file written to disk directly.
            There's another bit of btrfs logic involved that only writes compressed blocks if they're actually smaller than the uncompressed blocks, and this is applied even if you force compression. Uncompressed zstd frames are slightly larger than the source data because of frame overhead, so if you force compression on an incompressible file, the block will be passed to zstd, zstd will take the time to process the block into uncompressed zstd frames, and then btrfs will throw away the compressed block because it's not compressed. It's a ridiculous waste of processing.

          • 2 years ago
            Anonymous

            You realize zstd compresses at multiple GB/s on the average laptop CPU, right?

            >You can do this on any filesystem via hardlinks moron-kun

            Hardlinks aren't snapshots. If you change the file using one link it'll also change the file when accessed through the other link. Reflinks will give you two independent files for the price of one.

          • 2 years ago
            Anonymous

            In which case you would end up copying all the data anyways. For organization purposes, hardlinks are strictly better.

          • 2 years ago
            Anonymous

            >all the data
            Not necessarily. You may edit only a part of the file. Alternatively, you may be unsure if you're going to edit the file, but want to allow for the possibility.

          • 2 years ago
            Anonymous

            >You realize zstd compresses at multiple GB/s on the average laptop CPU, right?
            And you realize it's certainly a waste to be running zstd when you're going to throw away the entire result, right?

            >Do you?
            Yes
            >What you described is just one part of the test.
            Yeah, I left out the statistical heuristic because it's still not dependent on the size of the data and is irrelevant to what I'm describing.
            >This is also its main downside
            The btrfs heuristic is a tradeoff. It finds files that are very unlikely to be compressible, and doesn't bother running compression on them. Obviously sometimes the heuristic will be wrong and it'll decide to not try compressing something that could have benefitted from compression. But the vast majority of the time it will correctly identify incompressible files and efficiently skip compression processing for the file.
            >If you prefer speed over density, then you might as well just disable compression entirely
            Or you could use the reasonable middle ground that's provided by default, where CPU isn't being wasted shoving files through zstd just to discard the result, and only a very few files end up without compression that they could have slightly benefitted from.

            You can come up with some pathological case where the data fools the heuristic most of the time and causes it to do more harm than good, but most people do not have data like that and are better served by the default heuristic. It's ridiculous for you to be claiming that everyone should be forcing compression all the time. That would be the default behavior if it were actually beneficial the majority of the time.

          • 2 years ago
            Anonymous

            >But the vast majority of the time it will correctly identify
            Does it really? People regularly find that compress-force ends up with significantly denser data than merely compress.

          • 2 years ago
            Anonymous

            >Does it really?
            It does for me. Your pathological case may differ.

            >Just like how zstd is the default compression instead of zlib, the worst of the options, right?

            Changing the default compression algo would be a backwards incompatible change for everyone relying on the default. The compression heuristic is intentionally opaque so that it can be changed in the future, potentially including a change to make it always-compress, if that were generally beneficial. It's not, so it hasn't been changed.

          • 2 years ago
            Anonymous

            Oh, and
            >That would be the default behavior if it were actually beneficial the majority of the time.
            Just like how zstd is the default compression instead of zlib, the worst of the options, right?

          • 2 years ago
            Anonymous

            >Do you know what the btrfs check is?
            Do you? What you described is just one part of the test.
            https://github.com/torvalds/linux/blob/master/fs/btrfs/compression.c#L1805
            >more importantly doesn't take time proportional to the size of the file.
            This is also its main downside, since big files are more likely to have mixed compressible and incompressible data while small ones will be fully checked by this anyway.
            >It's a ridiculous waste of processing.
            It's not when you remember that btrfs' compressibility detection algorithm is not all that great. If you prefer speed over density, then you might as well just disable compression entirely.

      • 2 years ago
        Anonymous

        >does btrfs use 64-bit inodes, 32-bit inodes, or does it let you choose?
        Btrfs doesn't actually use inodes internally at all, and just fakes them for compatibility. By default, the faked inodes are 64-bit and are not unique across multiple subvolumes in the same filesystem. There are mount flags for controlling the size of the faked inodes and making inodes more likely to be unique between subvolumes, but you can still get collisions if your inode size is too small.

        • 2 years ago
          Anonymous

          cool, thank you anon

  9. 2 years ago
    Anonymous

    I use a ton of btrfs features and love them. My backup server uses btrfs and uses compression to save disk space, snapshots for incremental backups, and raid6 for redundancy (with workarounds for the write hole issue).
    I've changed the size or raid profile of a btrfs volume online a number of times, and that's pretty convenient. I've even fully replaced every single disk in a volume without ever having to take it offline.

  10. 2 years ago
    Anonymous

    some years ago I had the computer I was using as my router fail to boot because of a bad sector. yes, despite the fact that it was booting from an SSD. drive just coughed up I/O errors. It wasn't anywhere near its rated write endurance. That was a real pain in my ass, since it was my router, I couldn't just download an ISO and reinstall or anything. So now everything I care about boots off a Btrfs RAID1 filesystem.
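    for reference, a two-disk mirrored filesystem is just this (device names are examples):
      sudo mkfs.btrfs -m raid1 -d raid1 /dev/sda2 /dev/sdb2
      # and if one disk dies it can still be mounted degraded
      sudo mount -o degraded /dev/sda2 /mnt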

  11. 2 years ago
    Anonymous

    Snapshotting is underrated. You can set up hourly snapshots (see snapper) and be able to revert anything you accidentally do on the filesystem. It also goes nicely with send/receive for incremental backups.
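    the manual version looks roughly like this (paths are placeholders and assume /home is its own subvolume; snapper just automates the rotation):
      # read-only snapshot, instant and basically free
      sudo btrfs subvolume snapshot -r /home /.snapshots/home.1
      # ship it to another btrfs disk; pass -p <previous snapshot> later for incrementals
      sudo btrfs send /.snapshots/home.1 | sudo btrfs receive /mnt/backup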

    • 2 years ago
      Anonymous

      this, it rapes NTFS in almost every possible aspect while being just as fast
      kind of weird how freetards are so averse to btrfs when it's a legitimate reason to use desktop loonix

      • 2 years ago
        Anonymous

        >kind of weird how freetards are so averse to btrfs
        Anyone who has substantial history with btrfs is at least a little averse to it. I'm the anon who posted about the btrfs backup server and the no-compression-on-root, compression-on-NAS setup above, and as much as I love the features of btrfs, I don't ever put any data on my btrfs volumes that I'm not okay losing. It's just not stable enough yet. I've lost data from btrfs bugs at least three times now, with one of the incidents preventing the system from booting (even though the root partition was ext4 and btrfs was just on a media partition) because of a bug in the btrfs kernel module that would quickly allocate all available memory to its un-freeable kernel thread and cause everything in userspace to get oom-killed.

        • 2 years ago
          Anonymous

          >Anyone who has substantial history with btrfs is at least a little averse to it.
          I always hear stories like this and have been using btrfs for at least a decade now and I'm still waiting for the hammer to fall. I've had absolutely 0 issues with casual usage and only praise to sing it for my NAS box. RAID 10 across 10 drives, one day find out that one of my SATA controllers has been fritzing out and resetting every 30-60 seconds or so for the past week. btrfs didn't care, just kept going like a tank. Later a drive starts dying and I still haven't set up email alerts, I eventually notice when my mount goes read only because a second drive starts failing. 2nd drive isn't dead dead yet so decide to yank the 1st drive and put a new one in. Shit, don't have any matching size spare drives - good thing btrfs doesn't care. So put an 8tb in with my mix of 4tbs, run btrfs replace, golden. New drives show up from Amazon the next day, then do the same for the 2nd dying drive and emergency drive. Full scrub, 0 corruption.

          • 2 years ago
            Anonymous

            >I always hear stories like this and have been using btrfs for at least a decade now and I'm still waiting for the hammer to fall
            That's expected. If 10% of users experience data loss ever, that's insanely unreliable for a filesystem but you would still expect to see 90% of people never having a problem.
            A lot of it depends on what exactly you're doing and what features you're using, since some parts of the btrfs codebase are flexed more often than others. In my case, every time I can't figure out what the frick is happening, I drop by the btrfs irc channel and a dev eventually helps out. In every case they've confirmed it was a btrfs bug and that my data was probably gone.

          • 2 years ago
            Anonymous

            Agreed, but is it 10% or 1% or .01%? Hard to tell as an outsider if the issue is general end user moronery or confirmed bugs like your cases. Ultimately I would guess that the number is probably negligible if two big name distros (Fedora, OpenSUSE) make it the default.

            I do think it's probably a little dangerous for the "just above casual" user who might try to clone a drive using dd, or who fills up all the free space without knowing that actually becomes a problem on btrfs.

            For normies though, I think btrfs is fine because they don't care about filesystems anyway. Maybe slightly better because of CoW, transparent compression, and you won't ever get the forced fsck prompt when you reboot.

  12. 2 years ago
    Anonymous

    the biggest selling point should be the copy-paste though
    making a copy of a file does not use more space
    I can literally duplicate terabytes in an instant with no additional space requirement (except the file metadata)
    where is this useful? in organizing
    copy a massive amount of unorganized shit, and then you can do whatever you want, delete, rename, move, etc... without needing the free space

    • 2 years ago
      Anonymous

      >making a copy of a file does not use more space
      It usually does with most userspace copy tools. You have to use a specific flag with cp to make it do a CoW copy, otherwise it'll perform a normal copy where it duplicates the entire file on disk.
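      e.g. (filenames made up):
        cp --reflink=always big.iso copy.iso   # CoW clone, no new data blocks
        cp --reflink=never big.iso copy.iso    # plain copy, duplicates all the data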

      • 2 years ago
        Anonymous

        just use a file manager like a normal person then

        • 2 years ago
          Anonymous

          Those also create normal copies

          • 2 years ago
            Anonymous

            I don't think so, it has always worked
            I have 10TBs on my 6TB disk because of duplicates and I didn't have to change anything

          • 2 years ago
            Anonymous

            What file manager do you use?

          • 2 years ago
            Anonymous

            stock lubuntu so pcmanfm-qt

          • 2 years ago
            Anonymous

            Not him but Dolphin on OpenSUSE uses reflinks when you copy.

          • 2 years ago
            Anonymous

            Yeah it looks like most of the file managers I've checked have started using reflinks by default if available. Guess I was wrong to say they don't.

    • 2 years ago
      Anonymous

      You can do this on any filesystem via hardlinks moron-kun

  13. 2 years ago
    Anonymous

    Is it good to use? I hear it is slower than ext4.

  14. 2 years ago
    Anonymous

    stora/g/e gays shill zfs and bash btrfs for reasons I am too dumb to understand, red hat has lots of smart people on payroll and they say btrfs is cool and I should use it, fedora just works so btrfs it is

    • 2 years ago
      Anonymous

      >stora/g/e gays
      You mean the idiots that install proprietary corporate NAS distros meant to be moron-proof?

      • 2 years ago
        Anonymous

        no the ones who consolidate all their storage on homebuilt SANs running FreeBsd

  15. 2 years ago
    Anonymous

    Yes performance.. https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.14-File-Systems

    • 2 years ago
      Anonymous

      *You shouldn't use it if you care about performance

      • 2 years ago
        Anonymous

        any additional info with that statement sir?

    • 2 years ago
      Anonymous

      Useless and terrible benchmarks

    • 2 years ago
      Anonymous

      Meanwhile, the same (garbage) benchmarks run a few months before

    • 2 years ago
      Anonymous

      >Meanwhile, the same (garbage) benchmarks run a few months before

      Everyone knows btrfs doesn't do well in database workloads with CoW enabled. That's why you typically disable CoW for databases or VM disk images.
      Show me a benchmark that creates millions of 4K files, with renames, moves, deletions, etc.
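      fwiw the usual way to disable CoW is per directory, before the files exist (path is just an example):
        mkdir -p /var/lib/mysql
        chattr +C /var/lib/mysql   # only affects files created after this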

  16. 2 years ago
    Anonymous

    fat32 is more than enough for casuals.

    • 2 years ago
      Anonymous

      I use fat16 for /

  17. 2 years ago
    Anonymous

    Just use compress-force, moron.

  18. 2 years ago
    Anonymous

    - compression, zstd is practically free space, since it can do gigabytes per second on low end cpus
    - snapshots, set up something like snapper and never worry about running "rm -rf ~" commands again
    - checksums, even on a single disc it's nice to know if something has gotten corrupted so you can restore a snapshot/backup, check the disc's health, etc
    - can be turned into a multi-disc volume with a single command at any time
    just to name a few

  19. 2 years ago
    Anonymous

    If I am using -o compress now, how do I recompress all the stuff btrfs didn't? I'm going to try compress-force tomorrow morning. Do I need a rebalance or what?

    • 2 years ago
      Anonymous

      defragment with the appropriate options

    • 2 years ago
      Anonymous

      sudo btrfs fi de -rvczstd /

    • 2 years ago
      Anonymous

      sudo btrfs filesystem defragment -rv -czstd /

      >And you realize it's certainly a waste to be running zstd when you're going to throw away the entire result, right?
      That's not what's happening. You gain an additional 10% of compression.
      >The btrfs heuristic is a tradeoff. It finds files that are very unlikely to be compressible
      It's a shitty heuristic that's often wrong.
      >CPU isn't being wasted
      Stop saying moronic shit.
      >It's ridiculous for you to be claiming that everyone should be forcing compression all the time
      If you want compression, then yes. Use compress-force.
      >That would be the default behavior if it were actually beneficial the majority of the time.
      So you can't think for yourself, huh?

      • 2 years ago
        Anonymous

        Are you the same poster I was discussing this with? Why did you suddenly become moronic?

        • 2 years ago
          Anonymous

          No, they're someone else. I disagree with you on the tradeoff being meaningful for the standard user: the overhead of running zstd over incompressible data is rather trivial for most standard use cases, and the btrfs compression test is very underdeveloped and naive. But I'm not saying that you're outright wrong about the technical parts.

  20. 2 years ago
    Anonymous

    >muh compression
    Stuff that takes up space (media files) is not compressible. The few kb you get from compressing your text files is not worth it.

    • 2 years ago
      Anonymous

      my machine contains more than just videos and small text files
      my root volume for example goes from 20G > 8.9G
      things like games often compress well also, like i have a copy of the sims here (3.4G) which is using only 2.1G on disc

      >WHILE IN USE
      >Something NTFS has been able to do for over twenty years. Is this the power of Linux?

      can you replace your C: without rebooting? it's possible with btrfs

      >Please show me a demonstration of you shrinking your C partition online.

      pretty sure you can, but only down to the nearest unmovable file, not completely (has to be offline to move files which are open)

      • 2 years ago
        Anonymous

        >can you replace your C: without rebooting? it's possible with btrfs
        Do you mean that it'll be booted off the other partition without rebooting? If so, that's pretty cool. If not, yes.

        • 2 years ago
          Anonymous

          i mean you can transfer a running system to another drive, then remove the original drive all without interruption
          like you can upgrade a single hdd to a single ssd

      • 2 years ago
        Anonymous

        >20G > 8.9G
        That's as good as nothing if you have TB available.

  21. 2 years ago
    Anonymous

    for those talking about compress vs. compress-force
    i find the main problem with btrfs's heuristic is that it both bails out very easily and, more importantly, runs only once, not per write. this means that if it deems a file incompressible initially, it will never try to compress that file again
    i've found that the non-forced compression hardly ever compresses anything, only in extreme cases will it consider files compressible
    i can understand this for slow compression algorithms, but for something like zstd you're just throwing away a ton of potential savings
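    if you want to check what either option is actually getting you, the compsize tool (separate package) reports real on-disk vs uncompressed usage, e.g.:
      sudo compsize -x /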
