Every community I care about is dead

  • 0 Posts
  • 44 Comments
Joined 1 year ago
Cake day: June 12th, 2023




  • Yote.zip@pawb.social to Linux@lemmy.ml · File System Benefits · ↑6 · 1 year ago

    I like BTRFS’s checksumming and compression the most. BTRFS keeps a checksum for every block of data, and when you run a scrub it detects bitrot by re-reading everything against those checksums. If you want to actually heal the bitrot you’ll need redundancy, e.g. RAID1. RAID5/6 are not stable, so don’t use those. ZSTD:1 compression is basically free storage with no real downside, and it can massively speed up file operations if you’re using spinning rust.
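    The checksum-per-block idea is easy to sketch. Here’s a toy model in Python (not BTRFS’s actual on-disk CRC32C/xxhash format, just the concept) showing how a scrub catches silently flipped bits:

    ```python
    import hashlib

    BLOCK_SIZE = 4096  # BTRFS's default block size

    def write_blocks(data: bytes):
        """Split data into blocks and record a checksum per block (like the csum tree)."""
        blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
        checksums = [hashlib.sha256(b).hexdigest() for b in blocks]
        return blocks, checksums

    def scrub(blocks, checksums):
        """Re-read every block and return the indices that no longer match their checksum."""
        return [i for i, (b, c) in enumerate(zip(blocks, checksums))
                if hashlib.sha256(b).hexdigest() != c]

    blocks, sums = write_blocks(b"hello world" * 1000)
    assert scrub(blocks, sums) == []        # clean disk: nothing to report
    blocks[1] = b"\x00" + blocks[1][1:]     # simulate bitrot in block 1
    assert scrub(blocks, sums) == [1]       # scrub finds it; with RAID1 you could repair from the mirror
    ```
    
    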

    Personally I run BTRFS on any disk that only needs a single drive, like an OS disk or a games drive. My NAS runs a ZFS array for mass storage, which has basically the same feature set as BTRFS, except RAID actually works and everything is a tiny bit better. A ZFS NAS isn’t very good unless you pump a decent amount of money into it to get it going, so if you’re on a tight budget I’d recommend MergerFS+SnapRAID backed by BTRFS disks, which is very similar to Unraid in terms of storage paradigm, except it’s free.
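    For reference, a minimal sketch of what that MergerFS+SnapRAID layout looks like (all paths and disk names here are placeholders, not a real setup; check the mergerfs docs for which options fit your version):

    ```
    # /etc/snapraid.conf (sketch)
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid/snapraid.content
    content /mnt/disk1/.snapraid.content
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/
    exclude *.unrecoverable
    exclude /tmp/

    # /etc/fstab line pooling the BTRFS data disks into one mount
    /mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,category.create=mfs,moveonenospc=true 0 0
    ```

    You then run `snapraid sync` (e.g. nightly from a timer) to update parity, and `snapraid scrub` periodically to verify it.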




  • Yote.zip@pawb.social to Linux@lemmy.ml · *Permanently Deleted* · ↑4 · 1 year ago

    I agree with all this. I still think “Arch broke, but it’s not Arch’s fault” is valid in a lot of cases, because when you install Arch Linux you implicitly agree to be on the bleeding edge, and Arch Linux delivers that to you as requested. Arch is working as Arch is expected to work, and you probably shouldn’t be using Arch Linux if you don’t have a use case that necessitates this downside/risk. If Arch wanted to make things more stable it would end up looking like Tumbleweed. If Arch wanted to make things even more stable it would end up looking like Debian. Arch wants to be at the level of bleeding-edge that it is, and this is roughly what it looks like when you choose that.

    My only complaint with Tumbleweed is that its software repository is smaller than Arch’s or Debian’s. Other than that I think it’s a top-tier distro, and I especially like how much effort they put into making sure everything works properly via their automated testing (openQA). I agree that using distrobox or other methods is much safer than the AUR, and ideally the AUR shouldn’t really be used at all. Like I said before, I strongly believe that with the options we have today, true bleeding-edge distros like Arch Linux have become a small niche, as picking and choosing a couple dozen packages to be on the cutting/bleeding edge is a lot more stable than running everything fully bloody.


  • Yote.zip@pawb.social to Linux@lemmy.ml · *Permanently Deleted* · ↑36 ↓1 · edited · 1 year ago

    I used Arch for 4-5 years, and I’d say that Arch itself generally doesn’t break (shout out to the time they bricked everyone’s GRUB and then took days to make a news post about it), but user apps from the normal repos frequently had minor bugs because they’re bleeding edge. There’s a bit of a difference here, and I’d say it’s important.

    Ultimately, when you use Arch Linux you’re knowingly using bleeding edge software and that will always have the potential for bugs. Arch Linux manages this as best as it can, and it does it just about perfectly. If you want slightly more stability you probably want something closer to OpenSUSE Tumbleweed’s approach, with heavy automated testing.

    Nowadays with Flatpaks and other non-root package managers (Homebrew, Cargo, Nix, Distrobox, and even bin), I’d say the average user shouldn’t really be using bleeding-edge distros anymore. I switched to Debian Stable + Flatpaks/etc. and it’s basically the same experience as Arch Linux to me. The problem with Arch Linux is that you have to run your whole system as bleeding edge, and I don’t think that’s very sane for a lot of use cases.


  • “Overrated” is a very specific word here. For some distros he only talks about their users, not the distro itself; confusingly, he then ignores the users entirely for other distros. I went into this assuming it would be low-effort content, but it went even lower and ended up being just a “what comes to my mind when I think of this distro” list, which doesn’t seem very fair to some of the distros (near the top of the list, even!) that don’t have real complaints weighed against them.


  • Yeah, I really don’t trust GUI package managers yet. I feel like they shouldn’t be that hard to get working properly, but I always seem to hit quirky behavior when I try to use them. As for readability, apt is one of the worst tools IMO. I’ve been using nala lately and really like how it lays out its operations. Contrast that format with what Linus saw in his video.

    Maybe we could have a blacklist of packages/metapackages marked “important” that cause warnings, like xorg, pipewire, pulseaudio, kde-desktop, gnome-desktop, etc. If you’re uninstalling something like that you better hit confirm twice because that’s not typical behavior.
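    A hypothetical version of that guard is easy to sketch. Everything here is illustrative (the package names, the double-confirm flow); it’s not an existing apt feature:

    ```python
    # Hypothetical pre-removal guard: warn twice before removing anything "important".
    # The blacklist contents and prompt flow are illustrative, not an apt feature.
    PROTECTED = {"xorg", "pipewire", "pulseaudio", "kde-desktop", "gnome-desktop",
                 "network-manager", "sudo"}

    def flag_dangerous(removals: set[str]) -> set[str]:
        """Return the subset of a proposed removal list that touches protected packages."""
        return removals & PROTECTED

    def confirm_removals(removals: set[str], ask=input) -> bool:
        """Require two explicit confirmations when protected packages would be removed."""
        dangerous = flag_dangerous(removals)
        if not dangerous:
            return True
        print(f"WARNING: about to remove important packages: {sorted(dangerous)}")
        return (ask("Type YES to continue: ") == "YES"
                and ask("Really? Type YES again: ") == "YES")
    ```

    Something like this bolted onto the transaction summary would have stopped the Linus scenario cold, since removing a desktop metapackage would trip the blacklist.
    
    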




  • I think auto-upgrading Debian Stable is probably the one exception I’d make to “no blind upgrades”, though I still don’t feel comfortable recommending it due to dependency or apt problems that could still crop up. In the case of Debian Stable there are barely any package upgrades anyway, so I’d just do it manually once a week; it takes like 30 seconds to grab 4 packages. If you’re public-facing you might want a tighter system for notifying about security upgrades, or just auto-upgrade security patches.


  • I’m not a real sysadmin, so take it with a grain of salt, but in all reality this is probably why you would choose something like Debian for a server instead of a bleeding-edge distro. Debian quickly backports security updates and fixes but otherwise keeps everything else stable and extremely well-tested, which goes a very long way toward keeping serious bugs out of its Stable branch. You may still need to figure out an appropriate strategy for keeping your Mastodon container updated, but at least the rest of your system isn’t at risk of causing catastrophic errors like this. Also, Debian Stable does allow you to auto-upgrade security patches only, if you still want that functionality.
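    For the security-only route, Debian’s unattended-upgrades package handles it. Roughly, the two config files involved look like this (the exact Origins-Pattern string varies a bit by Debian release, so check the file your release ships):

    ```
    // /etc/apt/apt.conf.d/50unattended-upgrades — only pull from the security suite
    Unattended-Upgrade::Origins-Pattern {
            "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
    };

    // /etc/apt/apt.conf.d/20auto-upgrades — enable the daily run
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";
    ```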


  • Blind automatic upgrades are a bad idea even for casual home users. You could run into a Linus Tech Tips “do as I say” scenario where an upgrade uninstalls half your system due to a dependency issue. Or it could quietly uninstall part of your system without you noticing.

    I’m not sure how stable Gentoo’s default branch is, but I know that daily upgrades on Arch Linux are close to suicide - you have a higher chance of installing a buggy package before it’s fixed if you install every package version as it comes in.

    I’m surprised this strategy was approved for a public server - it’s playing with a loaded revolver, and it looks like you finally got shot.






  • Copy on write is likely to introduce significant performance decreases in cases where large or medium size files have a couple bytes changed. It’s usually recommended to turn CoW off on those files.

    Do you happen to have a source or benchmark for this? My understanding of CoW is that the size of the file does not matter, as BTRFS works with blocks and not files. When a block is changed, it’s written to a new location. All the old blocks that are not changed are not written again - this wouldn’t even make sense in the context of how BTRFS deduplicates blocks anyway.

    So:

    • 10 kB base file

    • modify 1 kB of the content

    • == 11 kB total “used” space, and 1 kB of newly written blocks

    That old 1 kB that is no longer part of the file will eventually be cleaned up if needed, but there’s no reason to delete it early.
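    That accounting is simple enough to model. A toy sketch, using 1 kB blocks to match the numbers above (real BTRFS works in extents and has its own cleanup, but the space math is the same):

    ```python
    # Toy model of CoW block accounting: blocks are never overwritten in place,
    # so modifying 1 kB of a 10 kB file allocates one new block and leaves the
    # other nine untouched. Block size is 1 kB here to mirror the example above.
    class CowFile:
        def __init__(self, size_kb: int):
            # Each entry is a block id; next_id counts blocks ever allocated.
            self.blocks = list(range(size_kb))
            self.next_id = size_kb

        def modify(self, index: int):
            """CoW write: point the file at a freshly allocated block, keep the old one."""
            self.blocks[index] = self.next_id
            self.next_id += 1

    f = CowFile(10)              # 10 kB file = 10 blocks
    f.modify(3)                  # change 1 kB in the middle
    assert f.next_id == 11       # 11 kB total "used" until the old block is reclaimed
    assert len(f.blocks) == 10   # the file itself is still 10 kB
    ```

    Note that only one 1 kB block was written, regardless of the file’s total size, which is why file size alone shouldn’t matter for CoW write cost.
    
    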