Admiral Patrick

I’m surprisingly level-headed for being a walking knot of anxiety.

Ask me anything.

Special skills include: Knowing all the “na na na nah nah nah na” parts of the Three’s Company theme.

I also develop Tesseract UI for Lemmy/Sublinks

Avatar by @SatyrSack@feddit.org

  • 108 Posts
  • 815 Comments
Joined 3 years ago
Cake day: June 6th, 2023



  • I wouldn’t recommend it to anyone in real life. There are parts that are just way too jarring.

    Ugh, this. And I hate that it’s like that.

    Like, I used to have my instance open for anyone to sign up. My guiding principle was to have a place that wasn’t overrun with [parts that are just way too jarring]. Holy shit, was that an impossible goal to pull off alone, so I shuttered it up, and now it’s just a private instance / testbed for Tesseract.

    My friends knew I was active on Reddit, and that was fine. But I wouldn’t tell them I spend any amount of time here, because what they’d see on almost any random instance would almost certainly not look good on me by association, even though I’m nowhere near that.

    So if anyone shares this desire, I’m open to un-mothballing my instance, rebranding, taking on new admins, and re-opening to users who also want a place like that.





  • I haven’t been to Odysee for a good while, but is it still Rumble-lite?

    I only learned of Odysee because I saw a video linked to it here and went directly to the video. When I saw it had embed code, I added support in Tesseract UI so the videos would play from the post. Then I went to the main site, saw the front page full of right-wing nutjob rants and vaccine skepticism, and was like “nope”. Had I seen that beforehand, I wouldn’t have added embed support, but the work was already done so I left it in. That’s basically why I refuse to add embed support for Rumble.

    Wondering if ownership/leadership/policies have changed since about 2 years ago when I wrote the embed components for it and last interacted with it.





  • What do you expect? Video hosting at scale is expensive, and most instances aren’t set up or financially equipped to handle videos larger than clips of a few seconds in length (if even that).

    You occasionally see direct MP4 links there to catbox or imgur, but those don’t always work for everyone and are often slow or hugged to death. Recently I’ve noticed that a few users here use their own homelabs to host larger media uploads, and I like the thought of that. Those seem to just be private “repos” for them, though. I would NOT want to run something like catbox, open to everyone, on my own infrastructure. The thought of what I’d end up with gives me the heebie-jeebies. I don’t know how catbox deals with problematic uploads, but that’s far beyond my comfort zone.

    Ok, so what about Invidious? Invidious links are just shittier, slower YouTube links because they’re just proxying, so that’s not really a solution. And I’d rather see a YT link posted than an Invidious link, since my Lemmy client and/or browser plugin can rewrite YT links to my preferred Invidious server but can’t do that for the infinite number of random Inv links in the wild. Those force me to either manually massage the URL to point at my preferred server or click through to the slow, overloaded server on the other side of the world. Yuck to both.

    Wait, what was I talking about? lol

    Oh, yeah. There’s also the expectation that videos linked here should be universally accessible, so linking to a Netflix documentary or something on Paramount+ isn’t really going to go over well.

    There just aren’t that many open video platforms. Odysee is one (and I have support for it in Tesseract UI), but it’s kind of sketchy. Vimeo isn’t really an open/general-purpose video platform but does have quality content if you can find it (Tess also supports embeds there). However, Vimeo was recently acquired by a private equity firm and isn’t doing great.

    What’s left?

    Well, we have Loops. That’s kind of niche in that it’s for short-form videos but it has potential if that’s your thing.

    Peertube is also pretty great, but we’re back to the “video hosting at scale is expensive” problem. I recently set up Peertube for my instance and have been trying to share links to that instead of to elsewhere, so I’m at least trying to address what the meme is saying. Lemmy.WTF also has a Peertube, I believe.

    Did I miss anything?






  • I also run (well, ran) a local registry. It ended up being more trouble than it was worth.

    Would you have to docker load them all when rebuilding a host?

    Only if you want to ensure you bring the replacement stack back up with the exact same version of everything, or need to bring it up while you’re offline. I’m bad about using the :latest tag, so this is my way of version-controlling. I’ve had things break (cough Authelia cough) when I moved a stack to another server and it pulled a newer image that had breaking config changes.
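
    The “right” fix would be to pin versions in the compose file instead, something like this minimal sketch (the tag is a placeholder, not a real version):

        services:
          authelia:
            # Pinned tag (placeholder) instead of ':latest' so a re-deploy
            # pulls exactly the version that was running before.
            image: authelia/authelia:{tag}

    I never got into that habit, so saving the running images does the same job for me.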

    For me, it’s about having everything I need on hand in order to quickly move a service or restore it from a backup. It also depends on what your needs are and the challenges you’re trying to overcome. For example, when I started doing this style of deployment, I had slow, unreliable, and heavily data-capped internet. Even if my connection was up, pulling a bunch of images was time-consuming and ate away at my measly satellite internet data cap. Having the ability to rebuild stuff offline was a hard requirement when I started doing things this way. That’s no longer a limitation, but I like the way this works so I’ve stuck with it.

    Everything a service (or stack of services) needs is in my deploy directory, which looks like this:

    /apps/{app_name}/
        docker-compose.yml
        .env
        build/
            Dockerfile
            {build assets}
        data/
            {app_name}
            {app2_name}  # If there are multiple applications in the stack
            ...
        conf/                   # If separate from the app data
            {app_name}
            {app2_name}
            ...
        images/
            {app_name}-{tag}-{arch}.tar.gz
            {app2_name}-{tag}-{arch}.tar.gz
    

    When I run backups, I tar.gz the whole base {app_name} folder, which includes the deploy file, data, config, and dumps of its services’ images, and pipe that over SSH to my backup server (rsync also works for this). The only ones I do differently are ones with in-stack databases that need a consistent snapshot.
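
    Roughly, the backup boils down to something like this (hostname and paths are placeholders):

        # Stream the whole app folder to the backup host over SSH.
        # Stacks with in-stack databases get a consistent DB dump first.
        cd /apps
        tar czf - {app_name} | ssh backup.example.com "cat > /backups/{app_name}-$(date +%F).tar.gz"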

    When I pull new images to update the stack, I move the old image dumps aside and docker save the now-current ones. The old images get deleted after the update is considered successful (so usually within 3-5 days).
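
    Sketched out, the update dance looks something like this (tags and names are placeholders for whatever the stack actually runs):

        # Set the old dump aside, pull and save the new version,
        # then bring the stack up on the new images.
        cd /apps/{app_name}
        mkdir -p images/old
        mv images/{app_name}-{old_tag}-{arch}.tar.gz images/old/
        docker compose pull
        docker save {image}:{new_tag} | gzip -9 > images/{app_name}-{new_tag}-{arch}.tar.gz
        docker compose up -d
        # A few days later, once the update looks good:
        rm images/old/{app_name}-{old_tag}-{arch}.tar.gz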

    A local registry would work, but you’d have to re-tag all of the pre-made images for your registry (e.g. docker tag library/nginx docker.example.com/nginx) in order to push them to it. That makes updates more involved and was a frequent cause of me running 2+ year-old versions of some images.
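
    Spelled out, that per-image dance is something like this (docker.example.com being the stand-in registry name from above):

        # Every update means a pull, re-tag, and push for each image.
        docker pull nginx:{tag}
        docker tag nginx:{tag} docker.example.com/nginx:{tag}
        docker push docker.example.com/nginx:{tag}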

    Plus, you’d need the registry server and any infrastructure it depends on (DNS, file server, reverse proxy, etc.) up and running before you could bootstrap anything else. And if you’re deploying your stack to an environment outside your own, your registry server might not be reachable at all.

    Bottom line is I’m a big fan of using Docker to make my complex stacks easy to port around, back up, and restore. There are many ways to do that, but this is what works best for me.


  • Yep. I’ve got a bunch of apps that work offline, so I back up the currently deployed version of the image in case of hardware or other failure that requires me to re-deploy it. I also have quite a few custom-built images that take a while to build, so having a backup of the built image is convenient.

    I structure my Docker-based apps into dedicated folders with all of their config and data directories inside a main container directory so everything is kept together. I also make an images directory which holds backup dumps of the images for the stack.

    • Backup: docker save {image}:{tag} | gzip -9 > ./images/{image}-{tag}-{arch}.tar.gz
    • Restore: docker load < ./images/{image}-{tag}-{arch}.tar.gz

    It restores the image with the same name and tag used during the save step. The load step accepts a gzipped tar, so you don’t even need to decompress it first. My older stuff doesn’t have the architecture in the filename, but I’ve started adding it lately now that I have a mix of amd64 and arm64.
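
    If you want to dump everything a stack uses in one go, something like this should work (assuming Docker Compose v2, where docker compose config --images lists the stack’s images):

        # Dump every image referenced by the stack's docker-compose.yml.
        cd /apps/{app_name}
        for img in $(docker compose config --images); do
            # Flatten 'registry/name:tag' into a filesystem-safe filename.
            out="images/$(echo "$img" | tr '/:' '--').tar.gz"
            docker save "$img" | gzip -9 > "$out"
        done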







  • I use the web version rather than the app, but I want to say the app can store the library on the SD card if you have one of sufficient size lying around and if the Redmi has the slot for one. But as someone else said, there are smaller versions you can download if you can’t fit the full one.

    Not trying to push Kiwix on you, but I just can’t emphasize enough how handy it is to have offline Wikipedia always on hand.


    • Termux has lots of possibilities
    • Pair it with a Meshtastic node and make it a dedicated communicator
    • I run HomeAssistant and Emby and have several old smartphones to work with, so one lives in each room and acts as a remote for those
    • Set up Asterisk and make a VoIP system using old smartphones and SIP clients as handsets
    • Check if PostmarketOS supports it. I haven’t used it, but it basically turns your phone into a Linux machine if I understand correctly
    • Use it as your “ugh, I have to use an app for [THIS]?!” phone. Basically things that require an app for setup or one-off apps you can’t avoid using.
    • Make your own little portable Library of Alexandria. Install Kiwix and download a bunch of ZIMs from their library. If you’ve got at least 130 GB to work with, you can even fit the entire Wikipedia dump with images and have that locally.