Companies all over the world already use computer algorithms to solve problems like this. I bet airlines already have teams of people using computer algorithms to figure out crew management, flight routing, cost optimization, etc.
The fact that they’re exploring quantum computers and non-classical algorithms just suggests that gate allocation is NP-hard. Sure, things already go wrong when computers fail. Look at Southwest’s or Delta’s recent meltdowns. But acting like this is a bad thing is just nonsense. This should be seen as a good thing that airlines are working on.
Why do you think this is going to replace air traffic control work? It’s picking which gate to park the plane at. That has always been done by airline and airport operations teams, not ATC. Imagine if you could automatically pick gates to reduce the time a plane spends taxiing and/or minimize the time passengers spend walking. That’s 100% a useful application for computer optimization algorithms. Humans aren’t going to do that better, and it’s not a safety function that tower or ground control needs to handle.
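To make that concrete, here’s a toy sketch (the flights, gates, and costs are all made up) of the kind of cost minimization involved. The real problem adds arrival/departure time windows, gate compatibility, and robustness to delays, which is where the hardness comes from.

```python
# Toy gate assignment: minimize taxi time + passenger walking time.
# All names and numbers are hypothetical; a real solver wouldn't brute-force this.
from itertools import permutations

flights = ["AA101", "DL202", "UA303"]
gates = ["A1", "A2", "B1"]

# cost[flight][gate] = taxi minutes + average passenger walking minutes
cost = {
    "AA101": {"A1": 9, "A2": 7, "B1": 12},
    "DL202": {"A1": 6, "A2": 8, "B1": 10},
    "UA303": {"A1": 11, "A2": 9, "B1": 5},
}

# Try every one-to-one assignment of flights to gates and keep the cheapest.
best = min(
    permutations(gates),
    key=lambda assignment: sum(cost[f][g] for f, g in zip(flights, assignment)),
)
print(dict(zip(flights, best)))  # {'AA101': 'A2', 'DL202': 'A1', 'UA303': 'B1'}
```

Brute force blows up factorially with the number of flights, which is exactly why airlines reach for fancier solvers once the real-world constraints are added.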
If you are port forwarding, I recommend not exposing it on the default port of 25565 and instead exposing it on a random port. Then, assuming you have a domain name, create an SRV record that points to your IP and port. This will cut down on the drive-by scanners that sweep common ports, but it won’t totally eliminate them. If you do use the SRV record, your friends won’t even notice there’s a different port.
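For example, assuming your server is reachable at mc.example.com and you picked port 25599, the records would look something like:

```
; hypothetical zone snippet: the Minecraft client looks up _minecraft._tcp.<hostname>
_minecraft._tcp.mc.example.com.  3600  IN  SRV  0  5  25599  mc.example.com.
mc.example.com.                  3600  IN  A    203.0.113.10
```

Your friends just type mc.example.com into the client and it connects to port 25599 automatically.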
The alternative is to let certain countries de facto claim a region because others are too afraid to call them on their BS
As a professional software dev, I worked with pretty much every OS daily. My personal computer was a Windows machine, my work laptop was a Mac, and I ran my code on Linux, so I was familiar with the things I liked and disliked about each. I also ran my own set of servers with my websites, mail servers, and various research projects to learn and grow.
Then I decided it was time to order a new laptop, and I didn’t want to go to Windows 11 because I felt Microsoft was pushing too hard on features I didn’t want: ads, more tracking, AI everywhere. Don’t get me wrong, I like AI, but it was too much about forcing me to use it to justify their stock valuation.
I was also working on reducing my usage of big tech, setting up self-hosted services like pi-hole and Home Assistant, and starting work on my own Mint alternative. It just felt natural to get a Framework laptop and try running Linux on it.
I still have a Windows desktop for games and other things, and I still use a Mac at work. I still like the Mac for its power efficiency and the fact that it doesn’t get as hot. Linux has some annoyances here and there, like dbus locking up, weird GNOME issues, my screen artifacting for a while until I set some kernel params, or my wifi card crashing until I replaced it with an Intel card, but I’ll stick with it.
There are two main ways of doing geo-based load balancing: DNS-based, where the authoritative DNS server hands out an IP near whichever resolver asked, and anycast, where the same IP is announced from multiple locations and routing takes you to the nearest one.
Of course, this doesn’t matter for companies that only have one data center.
Sorry, what do you mean by “route it directly”? Maybe I didn’t explain it well enough.
My DNS is routed over the VPN, but internet traffic is routed directly. The problem is that the load balancing is done based on where the DNS server is. Take Google: even though the traffic egresses directly to the internet, bypassing the VPN, it still goes to a Google DC near my home, because that’s where my DNS server is. Not all websites do this, so it’s not always an issue.
Yes, but if you hit a company doing DNS-based load balancing, DNS is going to return an IP that’s near your DNS server, which may not be near your device. That’s going to add to the latency.
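You can see the effect by asking your home resolver and a nearby public resolver for the same name and comparing the answers (the domain and LAN IP here are just placeholders):

```
# answer as seen from the pi-hole back home
dig +short cdn.example.com @192.168.1.53

# answer as seen from a public resolver near where you actually are
dig +short cdn.example.com @1.1.1.1
```

For CDN-backed sites the two will often return IPs in different regions.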
I have Wireguard and I forward DNS and my internal traffic from my phone over the VPN to my pi-hole at home. All other traffic goes directly over the Internet, not the VPN. So that means only DNS encounters higher latency.
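Roughly, the phone-side WireGuard config looks like this (keys, addresses, and the endpoint are placeholders; the important part is that AllowedIPs only covers the home LAN, so everything else stays off the tunnel):

```
[Interface]
PrivateKey = <phone-private-key>
Address = 10.8.0.2/32
DNS = 192.168.1.53          # the pi-hole at home

[Peer]
PublicKey = <server-public-key>
Endpoint = home.example.com:51820
# Only the home LAN (and therefore the pi-hole) is routed over the tunnel;
# all other traffic egresses directly to the internet.
AllowedIPs = 192.168.1.0/24
PersistentKeepalive = 25
```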
However, because a lot of companies do DNS-based geo load balancing, even if I’m on the East Coast all my traffic gets sent to the West Coast, because that’s where my DNS server is. That right there has the biggest impact on latency.
It’s tolerable on the same continent, but once I get to another continent it gets a bit slow.
Right, it’s a lot better to give somebody a better alternative first if you want the public on board. Build up public transit, build up regional and high-speed rail, and leave planes for the long distances that unfortunately aren’t well suited to trains and cars (e.g., international, cross-continental, etc.).
I think this is a problem with applications that have a privacy-focused user base. It becomes very black and white, where any type of information being sent anywhere is bad. I respect that some people have that opinion, and more power to them, but being pragmatic about this is important. I personally disabled this flag, and I recognize that this is edging into a risky area, but I also recognize that the Mozilla CTO is somewhat correct: if we have the option between a browser that blocks everything and one that is privacy-preserving (where users can still opt for the former), businesses are more likely to adopt the privacy-preserving standards, and that benefits the vast majority of users.
Privacy is a spectrum. I’m all on board with Firefox, I block tons of trackers and ads, and I’m even somebody who uses NoScript and suffers the ramifications for ideological reasons, but I also enable telemetry in Firefox because I trust that usage metrics will benefit the product.
Why is telemetry useful? Or why is it necessary to use pi-hole to block telemetry?
Telemetry is useful for knowing what features your customers use. While it’s great in theory to have product managers who dogfood the product and can act on everyone’s behalf, the reality is that telemetry is what ensures your favorite feature keeps being maintained. It helps ensure the bugs you see get triaged and root-caused.
Unfortunately, “telemetry” has grown to mean too many things to different people. It can refer to feature usage, bug tracking, advertising, or behavior tracking.
Is there evidence that even when you disable telemetry in Firefox it still reports telemetry? That seems like a strong claim for Firefox.
Accidentally typo your password and you get blocked. And if you’re tunneling over Tor, you’ve blocked 127.0.0.1, which means now nobody can log in.
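For what it’s worth, if the tool in question is fail2ban you can whitelist loopback in jail.local so a ban can’t take out 127.0.0.1, but over Tor that just means the jail effectively does nothing, which is kind of the point:

```
[DEFAULT]
# never ban loopback; Tor hidden services and local proxies all arrive as 127.0.0.1
ignoreip = 127.0.0.1/8 ::1
```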
Paperless does support defining a folder structure that you can use to organize documents within the Paperless media volume; however, you should treat it as read-only.
OP could use this as a way to keep their desired folder structure as much as possible, but it would have to be separate from the consumption folder.
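For reference, in paperless-ngx that structure is driven by the PAPERLESS_FILENAME_FORMAT setting; the layout below is just an example:

```
# docker-compose environment snippet; placeholders are filled in per document
PAPERLESS_FILENAME_FORMAT={created_year}/{correspondent}/{title}
```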
I don’t fully understand what you’re saying, but let’s break this down.
Since you say you get an NGINX page, what does your NGINX config look like? What exactly does the NGINX “login page” say? Is it an error or is it a directory listing or something else?
Then try something like this (adjust the server_name and upstream port to match your setup):
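```nginx
# Generic reverse proxy sketch; the hostname and upstream port are guesses,
# swap in whatever your service actually listens on.
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Also make sure the default site isn’t still enabled and catching the request before your server block does.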
Create a quantity unit of ml and a liter unit.
In your product use:
- Unit stock: bottle or liter
- Unit purchase: bottle
- Consume: ml
- Price unit: ml
Set a product-specific QU conversion from bottle to ml.
Weirdly, the quick consume unit is based on the stock unit, not the consume unit. That seems like a bug.
The problem with Grocy is that going too fine-grained means you’re unlikely to keep it up to date or accurate. I would not try to track your usage in ml. Just track it at the bottle level.
However, you can still track the price per ml, because Grocy lets you set units independently. Just define a mapping between bottle and ml.
If you’re running Docker for servers rather than development, you can make Hyper-V work. I used to do that before I got a separate Linux server, and it worked out.
Just set up an external virtual switch bridged to your Ethernet adapter, then create a VM that uses that switch. The Linux VM will appear as if it’s another computer on your LAN, and you can use Docker with host networking.
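A rough sketch of that from PowerShell (the switch, adapter, and VM names are placeholders):

```powershell
# Create an external virtual switch bridged to the physical NIC
New-VMSwitch -Name "LanBridge" -NetAdapterName "Ethernet" -AllowManagementOS $true

# Attach the Linux VM to it; it will then pull an IP from your LAN's DHCP
Connect-VMNetworkAdapter -VMName "docker-host" -SwitchName "LanBridge"
```

Inside the VM, `docker run --network host ...` then exposes containers directly on that LAN IP.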
They’re spoofing the GPS to make it look like it’s in the EU. Do we turn off all the EU terminals too?
I’m a little surprised they can’t identify spoofing by comparing the incoming signal to the claimed location. They already have antennas that can be steered using the phased array.