A JavaScript VM in the kernel is inevitable.
not by any means modern, but I used to really like pal.
More than that, your editor doesn’t run with root permissions, which reduces the risk of accidentally overwriting something you didn’t mean to.
it feels to me like they’re less looking for new people to start doing this “work”, and more looking to connect with people who already happen to be enthusiastically going to events and showing off their laptops.
I really think that’s the secret end game behind all the AI stuff in both Windows and macOS. MS account required to use it. (anyone know if you need to be signed in to an Apple ID for Apple’s AI?) “on device” inference that sometimes will reach out to the cloud, when it feels like it. maybe sometimes the cloud will reach out to you and ask your cpu to help out with training.
that, and better local content analysis. “no we aren’t sending everything the microphone picks up to our servers, of course not. just the transcript that your local stt model made of it, you won’t even notice the bandwidth!”
Are you using PersistentVolumes? If your storage class supports it, it looks like there’s a volume snapshot concept you can use. Have you looked into that?
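for reference, the manifest for one of those is pretty small. a sketch, assuming a CSI driver that ships a VolumeSnapshotClass; the class and PVC names here are made up:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: db-pre-deploy
spec:
  volumeSnapshotClassName: csi-snapclass      # whatever class your driver provides
  source:
    persistentVolumeClaimName: my-db-data     # hypothetical PVC holding your data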
Not sure what you’re doing, but if we’re talking about a bog-standard service backed by a db, I don’t think having automated reverts of that data is the best idea. you might lose something! That said, triggering a snapshot of your db as a step before deployment is a pretty reasonable idea.
Reverting a service back to a previous version should be straightforward enough, and any dedicated ci/cd tool should have an API to get you information from the last successful deploy, whether that is the actual artifact you’re deploying, or a reference to a registry.
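to make that concrete on one stack (just a sketch, assuming Kubernetes and a Deployment with a made-up name):

kubectl rollout history deployment/my-service
kubectl rollout undo deployment/my-service --to-revision=3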
As you’re probably entirely unsurprised to hear, there are a ton of ways to skin this cat. you might consider investing in preventative measures: testing your data migration in a lower environment, splitting db change commits out from service logic commits, or doing some sort of blue/green or canary deployment.
I get fairly nerd-sniped when it comes to build pipelines so happy to talk more concretely if you’d like to provide some more details!
I use these two vim plugins for the same functionality without leaving $EDITOR:
I’ve also started dabbling with using fzf in scripts for the team to use. Don’t sleep on the --query and --select-1 flags!
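as a sketch of what I mean (the script and paths here are made up): a tiny “open a runbook” helper, where --select-1 skips the picker entirely if the query narrows things to a single match:

#!/bin/sh
# pick a doc to edit; $1 pre-fills the fzf query
file=$(ls docs/runbooks/*.md | fzf --query="$1" --select-1) || exit 1
exec "${EDITOR:-vim}" "$file"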
is that more or less cursed than cat image.img > /dev/whatever?
dd if=image.img of=/dev/disk/flashdrive is usually all you need
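in practice I’d throw a few extra flags at it (a sketch; /dev/sdX is a placeholder, triple-check it before you run anything):

sudo dd if=image.img of=/dev/sdX bs=4M status=progress conv=fsync

bs=4M speeds things up, status=progress gives you feedback, and conv=fsync makes dd flush everything to the device before it exits.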
Definitely not what you’re talking about, but still: https://www.destroyallsoftware.com/talks/a-whole-new-world
just to give you the term to search for, these types of applications are called snippet managers. for example, https://snibox.github.io/
there’s a ton of them around. I don’t have a particular one that I recommend, since it’s not something I use in my workflow.
grep -r exists, is even faster, and doesn’t require passing around file names.
grep -r --include='*.txt' 'somename' .
Better than that, git config supports conditional includes, based on a repo URL or path on disk. So you can have a gitconfig per organization or whatever, which specifies an sshCommand and thus an ssh key.
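roughly like this (paths, key, and email are just examples):

# ~/.gitconfig
[includeIf "gitdir:~/work/"]
    path = ~/.gitconfig-work

# ~/.gitconfig-work
[user]
    email = me@work.example
[core]
    sshCommand = ssh -i ~/.ssh/id_work

newer git (2.36+) can also match on remote URL instead of path, via [includeIf "hasconfig:remote.*.url:git@github.com:work-org/**"].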
yep. they’re still here. they got smaller, and we call them “tracking pixels” now.
it’s just an image; server side, you count the number of times it got loaded. easy to embed, and no js required.
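the whole mechanism, more or less (URL and log path made up):

<img src="https://tracker.example/pixel.gif?c=newsletter-42" width="1" height="1" alt="">

counting is then just tallying hits for that path in your access log:

grep -c 'pixel.gif?c=newsletter-42' access.log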
That’s interesting, okay. Is svn doing compression of those binaries for you?
Not to say “you’re holding it wrong”, but I’m curious about your workflow here. You clone these binaries every time you come back to a project?
I don’t get it, who in their right mind hosts development stuff on a Windows clunker?
Same question, but Subversion. Switch to git. Import your repos with git-svn.
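something like this gets you most of the way (a sketch; the URL is made up, and --stdlayout assumes the conventional trunk/branches/tags layout):

git svn clone --stdlayout https://svn.example.com/repos/myproject myproject

there’s also --authors-file=authors.txt if you want to map svn usernames to proper git identities.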
just to add a little more explanation to what the other posters are suggesting… a hard drive, from the perspective of your OS, is very, very simple. it’s a series of bytes. for the sake of this example, let’s say there are 1000 of them. they are just a series of numbers.
how do you tell apart which numbers belong to which partitions? well, there’s a convention: you decide that the first 10 of those numbers are a label indicating where partitions start and end. e.g. your efi starts at #11 and ends at #61, and root starts at #62 and ends at #800. the label doesn’t say anything about the bytes after that.
how do you know which bytes in the partitions make up files? similar sort of game with a file system within the bounds of that partition: you use some of the data as a label to find the file data. maybe bytes 71-78 indicate that you can find ~/.bash_history at bytes 732-790.
what happened when you shrank that root partition is that you changed that label at the beginning. your root partition, it says, now starts at byte #62 and goes to #300. any bytes after that are fair game for a new partition and filesystem to overwrite.
the point of all this is that so far, all you’ve done is change some labels. the bytes that make up your files are still on the disk, but perhaps not findable. however, because every process that writes to the disk will trust those labels, any operation you do on the disk, including mounting it, has a chance to overwrite the data that makes up your files.
this means: stop using that disk right now. don’t mount it, don’t let anything write to it, and take a full image of the raw device first, so every recovery attempt runs against a copy instead of your only copy.
ONLY after that is done, the first thing I’d try is setting that partition label back to what it used to say, 100gb… if you’re lucky, everything will just work. if you aren’t, tools like ‘photorec’ can crawl the raw bytes of the disk and try to output whatever files they find.
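concretely, something like this (a sketch; /dev/sdX is a placeholder, and you’ll need enough free space somewhere else for the image):

# take an image of the raw device first; all later experiments run on the copy
sudo ddrescue /dev/sdX disk.img disk.map
# testdisk can search for the old partition bounds and rewrite the label
testdisk disk.img
# photorec carves whatever files it can recognize out of the raw bytes
photorec disk.img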
good luck!
I think you can link a second WhatsApp app, similar to the web client. your primary one needs a camera to read the QR code though.