

I ain’t about to play headgames on what I have and haven’t salvaged already; I must keep track of what device stores what, what filename is what, and what dates are what.
This is precisely the headache I’m trying to save you from: micromanaging what you store for the purpose of saving storage space. Store it all, store every version of every file on the same filesystem, or throw it into the same backup system (one that supports block-level deduplication), and you won’t waste any space and you’ll get to keep your organized file structure.
Ultimately, what we’re talking about is storing files, right? And your goal now is to keep files from these old systems in some kind of unified modern system, right? Okay, then. Disks store files as blocks, and with block-level dedup, a common block of data that appears in multiple files only gets stored once; if you have more than one copy of a file, only the blocks that differ between the versions (if there are any) take up extra space. The stuff you said about filenames, modified dates and what ancient filesystem it was originally stored on… sorry, none of that is relevant.
When you browse your new, consolidated collection, you’ll see all the original folders and files. If two copies of a file happen to contain all the same data, the incremental storage needed for the second copy is ~0. If you have two copies of the same file, but one was stored by your friend and 10% of it got corrupted before they sent it back to you, storing that second copy only costs you ~10% in extra storage. If you have historical versions of a file that was modified in 1986, 1992 and 2005 and lived on a different OS each time, what it costs to store each copy is just the difference.
I must reiterate that block-level deduplication doesn’t care which files the common data resides in; if it’s on the same filesystem, it gets deduplicated. This means you can store all the files you have, keep them all in their original contexts (folder structure), without wasting space storing any common parts of any files more than once.
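The arithmetic above can be sketched in a few lines of Python. This is a toy model only: the fixed 4 KiB block size, the SHA-256 hashes, and the filenames are all made up for illustration, and real tools like Borg and Restic use variable-size, content-defined chunking rather than fixed blocks.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative; real dedup tools use content-defined chunk sizes

def dedup_store(files):
    """Store several files' contents with block-level deduplication.

    Returns (manifests, blocks): each file becomes an ordered list of
    block hashes, and each unique block is stored exactly once.
    """
    blocks = {}     # block hash -> block bytes (stored only once)
    manifests = {}  # filename -> ordered list of block hashes
    for name, data in files.items():
        hashes = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            h = hashlib.sha256(block).hexdigest()
            blocks.setdefault(h, block)  # a repeated block costs nothing extra
            hashes.append(h)
        manifests[name] = hashes
    return manifests, blocks

# Two copies of a hypothetical 40 KiB file: the second has its last
# block corrupted (the "friend corrupted 10% of it" scenario).
original = b"".join(bytes([i]) * BLOCK_SIZE for i in range(10))
corrupted = original[:9 * BLOCK_SIZE] + b"\xff" * BLOCK_SIZE

manifests, blocks = dedup_store({
    "tape/report.doc": original,    # hypothetical paths
    "cdrom/report.doc": corrupted,
})

# 20 logical blocks across both copies, but only 11 unique blocks stored:
# the second copy costs one extra block, i.e. ~10% of the file.
assert len(manifests["tape/report.doc"]) == 10
assert len(manifests["cdrom/report.doc"]) == 10
assert len(blocks) == 11
```

Both copies remain fully browsable by name; the savings happen entirely below the file level, which is why the original folder structure and filenames never have to change.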
Either I’m massively misunderstanding why it is you want to curate your backup by hand, or you’re missing the point of block-level deduplication. Shrug, either is possible.
across the different devices and media, the folder and file structure isn’t exactly consistent.
That’s the thing: it doesn’t need to be. If your backup software or filesystem supports block-level deduplication, all matching data only gets stored once, and filenames don’t matter. The files don’t even have to 100% match. You’ll still see all your files when browsing, but the system is transparently making sure to only store stuff once.
Some examples of popular backup software that does this are Borgbackup and Restic, while filesystems that can do this include BTRFS and ZFS.
Client feedback.
Wtf, it made your TrackPoint disappear.
Nailing this regex will save me hours.
says the shirt
That definitely makes sense. Also, the scripts in a .deb should be incredibly short and readable, if you choose to check them out.
It’s worth knowing that .deb files can contain setup scripts that get run as root when installed, so you should trust them too.
Great, Needs One More Extension
I also choose this rat’s answer.
The problem with this (I would imagine) is that thieves these days are probably savvy enough to look for and disable a phone ASAP, paired with phones being big enough that you can’t exactly hide one in a bag like you can with something AirTag-sized.
Welcome!
To show they’re fully GoaTSE 1.0 compliant.
I think a good followup question for this one would be “Were you able to answer the question from memory?”
I couldn’t remember, so I had to do some typing to see. And based on the amount of visible keycap wear, I’d say they get used equally.
My one and only purpose was to warn them that their “drawback” is more of a gator pit. It’s noble that you’re here defending rsync’s honor, but maybe let them know instead? My preferred backup tool has “don’t eat my data” mode on by default.
Sure, but that’s not in their answer.