Ehhh all my important files are synced to my NAS. I have a script that just apt installs everything I normally use. Sometimes it’s just faster than troubleshooting. Usually if I’m about to do something wacky I clone my disk and use the clone. If it works, it’s my new primary; if it doesn’t, nothing lost.
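Something in this spirit is all it is; the package list and NAS path here are placeholders, not my actual script:

```bash
#!/usr/bin/env bash
# Hypothetical post-reinstall script: reinstall the usual packages,
# then pull personal files back down from the NAS.
set -euo pipefail

sudo apt update
sudo apt install -y git vim htop syncthing   # whatever you normally use

# NAS path is an example; adjust to your share
rsync -avh nas:/volume1/backups/home/ "$HOME"/
```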
Nah, reinstall goes brrrr whenever you run into an issue
idk but if my system becomes unbootable, I’d reinstall. Otherwise, set up a NAS for yourself, upload snapshots to it, and configure this process to be automatic. Especially if you like to tinker with your system and/or you use a rolling release
I literally had an official support person tell me to reinstall Ubuntu to get a specific app running.
That is idiotic, there is absolutely a reason to reinstall in some cases
And often the fastest option even lol
Not when you’re “stuck”, tho. You understand the problem, boot a live system, fix it, and learn from your mistakes. Like, my first reinstalls of Arch were due to not understanding that I could just chroot in or pacstrap the packages I forgot, for example.
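From the live ISO it’s roughly this (device names are just examples):

```bash
# Mount the installed system and chroot in, then grab whatever you forgot.
mount /dev/nvme0n1p2 /mnt          # root partition
mount /dev/nvme0n1p1 /mnt/boot     # EFI/boot partition
arch-chroot /mnt
pacman -S networkmanager           # e.g. the package you forgot

# Or skip the chroot and pacstrap it from outside instead:
# pacstrap /mnt networkmanager
```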
Sometimes, but not most of the time, the way it is with Windows. macOS is the same way thanks to its *nix underpinnings. I honestly can’t remember a time I ever reinstalled the system to fix a problem.
With the way most distros are structured, you should never need a reinstall, since reinstalling the packages will fix any issues with broken system files. Broken configuration wouldn’t be as easy to fix, but still something you should be able to fix.
The only reason to be reinstalling, in my eyes, is if you have a mess of packages and configuration you don’t remember, and want to get a clean slate to reconfigure instead of trying to figure out why everything was set up in a certain way.
As an IT guy who has worked professionally as a Linux sysadmin.
While you are correct, the factor you are missing is time.
There have been countless times I have reinstalled Linux machines because it is faster than troubleshooting the issue
Professionally on a non-recurring issue - absolutely.
With my stuff at home? Only if the wife suffers from the downtime. If you do it right you should be able to trigger a rebuild within about 20 min by kicking off the right automation.
Virtualization and containerization are your friends. Combine that with Ansible and you are rock solid.
Fair, but machines at work as sysadmin are a different thing - hopefully there you’re also dealing with fast deployment, prepared ahead of time. But if the issue is that you messed something up on your own computer, ignoring the issue in favor of reinstalling sounds likely to leave you oblivious to what the issue was, and likely to repeat your mistake.
That is fair, but it ignores compounding issues. If you install software packages over the years and forget about them, and one of them causes a problem long after you’ve forgotten it exists, then it is far easier to just reinstall.
Meh, snapper rollback
Unless the drive gets corrupted or infected with malware, you can just load a previous snapshot. That’s much faster and easier than reinstalling.
Snapshot as in a VM?
Most people run their OS on physical hardware.
Btrfs has snapshots. They can be created instantly and don’t use any extra space until the files are changed.
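Taking one is a single command; something like this, assuming your root is a btrfs subvolume and /.snapshots lives on the same filesystem (layouts vary by distro):

```bash
# Copy-on-write snapshot: instant, and free until files start to diverge.
sudo btrfs subvolume snapshot / /.snapshots/pre-upgrade
# Read-only variant (needed if you ever want to btrfs-send it somewhere):
sudo btrfs subvolume snapshot -r / /.snapshots/pre-upgrade-ro
```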
Ah, yeah, I have read about that. I do feel a bit hesitant to use BTRFS, so I didn’t think about that.
The Linux machines I have worked with all ran ext3/4 or xfs.
To be completely fair, I never gave BTRFS a proper chance, at first because it felt too new and unstable when I heard about it, and later I heard that it was developed by Facebook and let my distaste for that company color my perceptions of btrfs.
But I just checked the Wikipedia article and saw that plenty of reputable organizations have worked on btrfs, so I guess I’ll give it a go when I build a NAS…
Thanks for reminding me of it, I may get set in my ways from time to time but I do genuinely try to learn and change my way of thinking.
I wouldn’t use it for a NAS. You want ZFS for that.
Btrfs is good for small setups with either single or dual disks.
Just don’t use RAID 5 or 6, it’s still under development and not ready for use yet.
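For a NAS, a ZFS pool would look roughly like this; sketch only, the device IDs and dataset name are placeholders:

```bash
# Example: RAID-Z2 across six disks, plus a compressed dataset for shares.
sudo zpool create tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6
sudo zfs create -o compression=lz4 tank/media
```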
You can run your desktop inside of a VM with the GPU and USB PCIe devices passed through.
However, I think they are talking about btrfs
It’s a joke.
I thought jokes were supposed to be funny…
Tbf “funny” is, by nature, subjective. Something may be funny to others but not to you, just as you may like onions while I may not, or I may find Shakira attractive while you may not, or I may be into pokemon but you may not, etc.
So, jokes are supposed to be funny, to someone, but you’re not necessarily that “someone.”
Whaddya talking about I nuke my shit all the time.
Why learn nixOS, if not to reinstall every morning?
Yeah, that’s my issue, NixOS is so stable I never had to reinstall.
Me too 😁 I never have an important file that isn’t saved in the cloud. I can nuke any of my clients without a second thought
This saves so much time…
I think people do that even in Linux; sometimes problems are still very hard to solve and reinstalling is just faster. Maybe I’m the only one. On the other hand there is distro hopping ╮(︶▽︶)╭
Fucking up your computer so much you decide to “distro hop” by reinstalling a new os.
Isn’t that what everyone meant? Just me? Oh
My distro hops have been more like distro evacuations.
The whole point of doing a separate partition for your home directory is to do just that… The fuck is this even supposed to mean.
If you got a problem, reinstall and do the same stuff again, you’ll almost certainly get the same problem again. So, no, it’s only productive if you are in a fucked-up environment where changes bring more breakage than they fix.
It’s useful if you don’t plan to do the same thing again, though. So if you are just trying random stuff, yeah, go ahead.
If you got a problem, reinstall and do the same stuff again, you’ll almost certainly get the same problem again
Sure, but nobody’s likely to do that. If I wiped my system now, I doubt I could get it back to exactly the same state if I tried. There are way too many moving parts. There are changes I’ve forgotten I ever applied, or only applied accidentally. And there are things I’d do differently if I had the chance to start over (like installing something via a different one of the half-dozen-or-so methods of installing packages on my distro).
For example, I have Docker installed because I once thought a problem I had might have been Podman-specific. Turned out it was not. But I never did the surgery necessary to fully excise Docker. I probably won’t bother unless and until there is a practical reason to.
Try Root on ZFS.
If you run into an issue suddenly, you can restore to snapshot.
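The workflow is basically this; the dataset name is an example, `zfs list` will show what your root is actually called:

```bash
# Snapshot before doing something risky, roll back if it goes sideways.
sudo zfs snapshot rpool/ROOT/default@pre-tinkering
# ...break things...
# Rollback targets the most recent snapshot unless you pass -r.
sudo zfs rollback rpool/ROOT/default@pre-tinkering
```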
Good advice!
This is also available with BTRFS. Personally I am leveraging this feature via Snapper, simply because it was the default on OpenSuse and was good enough that I never bothered looking into alternatives. I’ve heard good things about Timeshift, too.
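Day to day it’s about this simple (the snapshot number below is just an example; take it from `snapper list`):

```bash
sudo snapper create --description "before nvidia driver"   # manual snapshot
sudo snapper list                                          # see what you have
sudo snapper rollback 42                                    # roll back to snapshot 42, then reboot
```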
This has saved my butt a couple times. I’ll never go back to a filesystem that doesn’t support snapshots.
I really liked ZFS when I used it many years ago, but eventually I decided to move to BTRFS since it has built-in kernel support. I miss RAIDZ, though. :(
BTRFS is a damn good option too. I’m happy to hear how easy it is to use. I haven’t used it (yet); I went with ZFS because of its flexible architecture. On a desktop level, BTRFS makes sense, but on a server? What is it like under a hypervisor?
I’m working on standing up a Cloudstack host as a Hypervisor. Now, I want this host to be able to run 5 kubernetes VMs, so it needs to have quick access to the disks. Now, I do not have a RAID card, only an HBA. In such a scenario, I would typically use a RAID 10. But a ZFS Raid 10 outperforms an mdraid 10 anyways (in terms of writing, not necessarily reading). So that is what I’ve decided. It may not be a good idea, it may not even be feasible. But I’m heckin willing to give it a shot.
I’m actually jealous that you automatically have built-in kernel support though. I am a little curious about BTRFS in terms of how (or if) it connects multiple disks; I’m simply uninformed.
Install Ubuntu 24.04 on ZFS RAID 10 - Github Repository
Edit: There are a few drawbacks to using ZFS, lousy Docker performance being one that I’ve heard about. I’m curious how this will be affected if I have Docker running inside a VM.
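For reference, the ZFS side of that “RAID 10” is just a pool of striped mirrors, roughly like this; device names and dataset options are placeholders, not a recommendation:

```bash
sudo zpool create vmpool \
  mirror /dev/disk/by-id/ata-A /dev/disk/by-id/ata-B \
  mirror /dev/disk/by-id/ata-C /dev/disk/by-id/ata-D \
  mirror /dev/disk/by-id/ata-E /dev/disk/by-id/ata-F
# Smaller recordsize and lz4 tend to suit VM images; tune for your workload.
sudo zfs create -o recordsize=64K -o compression=lz4 vmpool/vms
```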
BTRFS can work across multiple disks much like ZFS. It supports RAID 0/1/10 but I can’t tell you about performance relative to ZFS.
Just be sure you do NOT use BTRFS’s RAID5/6. It’s notoriously buggy and even the official docs warn that it is only for testing/development purposes. See https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices
Edit: Another interesting thing to note between the two file systems is deduplication. ZFS supports automatic deduplication (although it requires a lot of memory). BTRFS supports deduplication but does not have built-in automatic dedup. You can use external tools to perform either file-level or block-level deduplication on BTRFS volumes: https://btrfs.readthedocs.io/en/latest/Deduplication.html
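To make that concrete, a multi-disk btrfs filesystem plus out-of-band dedup looks roughly like this; device names are placeholders, and duperemove is just one of the external tools the docs mention:

```bash
# RAID 10 for both data and metadata across four disks.
sudo mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo mount /dev/sdb /mnt/pool    # mounting any member device mounts the filesystem

# Block-level dedup after the fact with an external tool:
sudo duperemove -dr /mnt/pool
```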
Thanks for the insight!
I may have to resort to using BTRFS for this host eventually if ZFS fails me. I do not expect a lot of duplication on a host, and even if there is some, who cares: I have 60 TB even with the RAID 10 layout. Having something with kernel support may be a better approach anyways.
It’s interesting to me that it struggles with RAID 5 and 6, though. I would have expected that to be easy to provide.
That’s neat
I treat all my data as ephemeral, no need for separate home partition.
Uh… I don’t have a separate partition for /home. I have a separate zfs filesystem for it though. If I run into issues, I can restore from snapshot and not affect it.
Same, but BTRFS
That’s fair. I chose ZFS because I’ve used it before. And understand it fairly well already. I know nothing about BTRFS, so perhaps you could educate me a little. I’m working on setting up a cloudstack host using ZFS RAID 10. Does BTRFS have a flexible architecture to where you could do something similar?
Edit: Perhaps you could also inform me of the speeds of BTRFS too. From what I understand, ZFS outperforms BTRFS on large datasets, but I don’t know where the cutoff is. For context, it would need to run twelve 10 TB HDDs.
Best would be to search up BTRFS vs ZFS, or listen to the entire back catalog of 2.5admins; they regularly discuss both. ZFS is probably what you want, I only went BTRFS because it is what I got introduced to via OpenSUSE
I reinstall at the drop of a hat. Pretty much any excuse to try another distro or configuration I was uncertain about.
One of the things I noticed when I first switched was the difference in advice on forums. Linux users would ask for error reports, pinpoint the problem, and give a fix. Windows forums would be wild, random, often unrelated guesses, ultimately leading to “just reformat”.
Until you find the answer on a Windows forum posted by some Indian dude performing unpaid labor.
Or the “don’t worry I fixed it” one time poster
“Just Google it,” they said. So, you Google it. You find one result. It’s a forum post. From you.
I’d like to make a law that anyone who says “just google it” and doesn’t also provide the very first link they found on google that solves the problem should be castrated.
There’s a special place in hell for those people.
If the issue doesn’t resolve itself, reinstall. That works for me as a catch-all solution because I use Linux like a Chromebook: web browsing.
I like immutable Linux for this reason. If you use almost exclusively containers and flatpaks you can rebuild easily.
Sometimes I love trawling through logs at speed and making magic happen because it reminds me of my heydays solving L3 support issues when the shit hit the fan.
Then I have to do it at work and it crushes me.
I use NixOS. It is immutable if you avoid flatpaks where possible (sometimes flatpaks work better).
However I broke even that… Had to reinstall.
As someone I’d still consider a noob, I did this less than a month after getting a new laptop last January. I probably broke something trying to get the headphone jack working on it, and then Bluetooth stopped working properly as well after installing Steam, so I started over. All I know for certain is that I ended up destroying a folder I shouldn’t have by accident, which pretty much bricked the system and made nothing launchable, terminal included. This was on MX, and I haven’t had issues since reinstalling.
Configure automatic snapshots and invest in a few terabytes of storage on a home cloud, like a NAS server
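One possible shape for that, if you’re on btrfs: a daily cron job that snapshots and ships to the NAS. Paths and hostname are placeholders; incremental sends (`btrfs send -p`) would be kinder on bandwidth:

```bash
#!/bin/sh
# e.g. dropped into /etc/cron.daily/
SNAP="/.snapshots/$(date +%F)"
btrfs subvolume snapshot -r / "$SNAP"
btrfs send "$SNAP" | ssh nas "btrfs receive /volume1/snapshots/"
```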
It’s kind of a moot point now, but I’ve definitely been keeping snapshots from Timeshift just in case I truly break something and can’t fix it, like the time I somehow nearly bricked Plasma by just trying to install VirtualBox.
I just rollback my snapper.
My brother reinstalled his Windows more often than I ran “zypper dup”
One of the big selling points of Linux to me was that I can automate my install from end to end. I haven’t bothered automating the installer, but once it boots I run a playbook to set everything up and restore most of my homedir from backup. Everything down to setting my custom keyboard shortcuts, extensions and wallpaper is covered.
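The bootstrap itself is nothing exotic; in spirit it’s roughly this (repo URL, playbook name, package manager and backup path are all placeholders, not my actual setup):

```bash
sudo apt install -y git ansible
git clone https://example.com/me/workstation-setup.git
cd workstation-setup
# Run the playbook against the local machine
ansible-playbook -K -i localhost, -c local setup.yml
# Then restore most of the home directory from backup
rsync -avh nas:/backups/homedir/ "$HOME"/
```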
These days I run Silverblue and I’m trying to find the time to put together my own build pipeline to build my own images on top of Silverblue’s.
Either way, I have no fear of reinstalls.