“Artificial intelligence” is for the marketing department’s benefit. At least mainly so. What people envision when they hear “AI” comes from preconceived notions based on science fiction, not from what the technology actually is.
“For the security” is starting to sound a lot like “for the children”. I hope this works out better than secure boot. When new ideas emerge that have, let’s call them, “side effects” like disabling ad blockers or preventing Linux from being installed, I get suspicious.
Use DNS filtering. I use NextDNS, which has a free tier that meets my needs. You can add popular filter lists, and your browser will never even see those ads, trackers, etc. Or you can use Vivaldi or Firefox, of course. But DNS filtering cuts it off before it even gets to your machine.
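If you want to see the mechanism for yourself, here’s a minimal Python sketch (assuming dnspython is installed via pip install dnspython; the resolver IPs and the test domain are just examples, swap in your own). It asks a filtering resolver and a plain resolver for the same ad-ish domain and compares the answers:

```python
# Minimal sketch: compare how a filtering resolver vs. a plain resolver answers
# for a domain that most ad/tracker lists block. Requires: pip install dnspython
import dns.resolver

FILTERING_DNS = "45.90.28.0"     # assumed: a NextDNS-style filtering resolver
PLAIN_DNS = "1.1.1.1"            # an unfiltered public resolver
TEST_DOMAIN = "doubleclick.net"  # assumed: a domain commonly found on filter lists

def lookup(nameserver: str, domain: str) -> str:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    try:
        answer = resolver.resolve(domain, "A")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return "blocked (no answer)"
    ips = [rr.to_text() for rr in answer]
    # Many filtering resolvers "block" by answering 0.0.0.0 instead of a real IP.
    return "blocked (0.0.0.0)" if ips == ["0.0.0.0"] else ", ".join(ips)

print("filtered:", lookup(FILTERING_DNS, TEST_DOMAIN))
print("plain:   ", lookup(PLAIN_DNS, TEST_DOMAIN))
```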
I like Proton and I guess this kind of makes sense for them, sort of, but it’s weird.
That is pretty interesting and thanks for posting it. I hear the words and it’s intriguing, but to be honest, I don’t really understand it. I’d have to give it some thought and read more about it. Do you have a place you’d suggest going to learn more?
I currently use ChatGPT-4o for learning Python and helping with grammar. I find it does great with grammar, but even with relatively simple Python questions it can produce some “creative” answers. It’s in the ballpark, but it’s not perfect, and for a learner, that’s learning the hard way. To be fair, I don’t use the assistant/code interpreter, which I have no idea about, but based on its name I assume it might be better. So that’s what I based my somewhat skeptical opinion of AI on.
From my understanding, AI is essentially a statistical method, so naturally it will use a confidence level. It’s hard for me to take the leap of faith that confidence level will correlate with accuracy. It seems to me that would depend more on the data set. If the data contains a commonly held belief that is incorrect, wouldn’t the model assign a high confidence level to an answer containing that incorrect info? If we use a highly authoritative data set, it will be very limited and we’d be back to more of a keyword system than an LLM. I’m sure that with time we’ll land in more of a middle ground where accuracy is better, but what will the error rate be? 5%? 3%? 10%?
I’ll freely admit I am not an expert in this at all.
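But here’s a crude toy illustration of what I’m getting at. It’s nothing like a real LLM, just counting answers in a made-up data set, but it shows why confidence tracks the data rather than the truth:

```python
# Toy "model": answer by picking the most common answer in the training data and
# report how common it was as the confidence. If the data mostly repeats a popular
# misconception, the confidence is high and the answer is still wrong.
from collections import Counter

# Hypothetical training data: 9 documents repeat a myth, 1 states the actual fact.
training_answers = ["humans use 10% of their brains"] * 9 + ["humans use virtually all of their brain"]

def answer_with_confidence(answers):
    counts = Counter(answers)
    best, n = counts.most_common(1)[0]
    return best, n / len(answers)

ans, conf = answer_with_confidence(training_answers)
print(f"answer: {ans!r}  confidence: {conf:.0%}")  # the myth, at 90% confidence
```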
That is so funny.
chatgpt: “Artificial Intelligence (AI) represents a transformative investment opportunity, characterized by robust growth potential and broad applicability across industries. The AI market, projected to exceed $190 billion by 2025, offers substantial upside in sectors such as healthcare, finance, automotive, and e-commerce. As businesses increasingly adopt AI to enhance efficiency and innovation, associated firms are poised for significant returns. Key investment areas include machine learning, natural language processing, robotics, and AI-driven analytics. Despite risks like regulatory challenges and ethical concerns, the strategic deployment of capital in AI technologies holds promise for long-term value creation. Diversification within this space is advisable to mitigate volatility.”
It won’t know it doesn’t know. At the current state of AI, it doesn’t seem to have much of any sense of what is right and wrong, or a way to validate that, even when you tell it it is wrong. Maybe there are systems that can, but I am not aware of them.
Putting aside the crypto aspect, this is a simple story of a lack of zoning and government regulation. I am sure it sucks for those who live near these places, but the problem is that they were allowed to be built near residential areas at all. There will always be noisy or polluting industry, but sensible planning puts these sorts of places away from where they will most harm people and disrupt their lives, and forces them to minimize the amount of noise and pollution they produce in the first place.
This is just one example among many of why we should be willing to put up with government regulation. Trust me, I know how annoying it can be, but we’re doomed without it. Now that the Supreme Court has defanged our institutions by overturning Chevron deference, you can expect a lot more of these sorts of problems, with less ability to fight them.
The types of crypto that web3 uses are proof-of-stake rather than proof-of-work chains, so the energy usage is not much different from any web-based service. People don’t do nuance: Bitcoin uses proof of work, which uses a lot of power, and that’s about as much as most people know. There are thousands of different blockchains, and almost all of them besides Bitcoin use proof of stake. So just from that point of view, it’s not significantly different from any other web project as far as the climate goes.
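Rough back-of-envelope on the scale of the difference; the numbers below are commonly cited ballpark estimates, not measurements, so treat them as assumptions:

```python
# Back-of-envelope comparison using commonly cited ballpark figures (assumptions,
# not measurements): Bitcoin (proof of work) is usually estimated at roughly
# 100-150 TWh/year, while a large proof-of-stake chain like post-merge Ethereum
# is estimated at a tiny fraction of a TWh/year.
BITCOIN_POW_TWH_PER_YEAR = 120.0   # assumed ballpark
ETHEREUM_POS_TWH_PER_YEAR = 0.01   # assumed ballpark

ratio = BITCOIN_POW_TWH_PER_YEAR / ETHEREUM_POS_TWH_PER_YEAR
print(f"Proof of work here uses roughly {ratio:,.0f}x the energy of the proof-of-stake chain")
```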
That said, I have no idea about the article posted above, and other concerns about crypto and AI still apply.
Nice, you are well on your way to forming a unicorn valuation start-up!
You are right, no one would invest in that. To be a real start-up plan, you need future projections built in. No one is going to invest in a static gold-pooping rate. If you can scale that production year over year, now you have an investable project.
I don’t know the source, so it’s hard for me to comment, but logically the problem as stated is plausible, i.e. legacy debt preventing the move to more efficient methods.
However, the conclusion (i.e. therefore replace humans with humanoid robots) does not follow. And tacking on unionization is just a different subject altogether. You can staff some aspects of a factory with robots, and the humans’ work shifts from production to maintenance. I’ve talked to automation people, and robots can be very problematic; I would imagine something “advanced” would be much more so.
Although the term isn’t recent, some referred to the robots as “Bobs”: blind one-arm builders. If very well calibrated and designed for a specific task, they can be okay, except when they go wrong. To think some “AI”-driven general-purpose robot is going to substantially replace human labor any time soon… I very seriously doubt that. Especially with that kook as leadership.
In my understanding, derivatives amplify the problems and risks. Underlying that are the money people who push on these systems as hard as they can and exploit every angle. Along the lines of pushing the boundaries, the practice of brokers “loaning” shares seems like another place that’s bound to cause issues at its limits. I really wish the government would step in and impose much stricter regulation. I’d like to trust that buying stock is investing in a company, rather than feeling like the stock market is a school of small fish swimming with sharks who cheat as much as they believe they can get away with. If the focus were on dividends vs. growth, I think we’d be better off. Maybe I am wrong, but that’s how I see it.
I think of it like network security. Anything you do not explicitly disallow will be tried, used, and used in ways you probably didn’t think of. It isn’t a matter of expecting people to do the right (or legal) thing; most will, but it’s a certainty that some will not. That’s normal, and it’s why security is a process and systems have to adapt over time in response.
It may do the job of a simple conveyor belt, but actually, it’s a multimillion-dollar AI-powered, um, robot.
The great thing about the stock market compared to other investments like crypto is that stocks are based on the inherent value of the business they represent. Stocks are based on financial fundamentals. You can believe in those investments because they are based on something real and not simply rampant speculation. For example:
Tesla. Worth more than most of the rest of the car market combined because… reasons?
PayPal. Lost 80% of its value over the year or so starting in July 2021 and never recovered. Because of terrible problems? Huge losses? Nope, because it “only” grew at 8-9%.
2008 US housing, rated as a “AAA” investment, i.e. “good as cash”, based on actual trash.
Calling LLMs “AI” is one of the most genius marketing moves I have ever seen. It’s also the reason for the problems you mention.
I am guessing that a lot of people are just thinking, “Well, AI is just not that smart… yet! It will learn more and get smarter and then, ah ha! Skynet!” That is a fundamental misunderstanding of what LLMs are doing. It may be a partial emulation of intelligence. Like humans, it uses its prior memory and experiences (data) to guess what an answer to a new question would look like. But unlike human intelligence, it has no idea what the things it says actually mean.
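A crude caricature of that “continue the pattern” idea, in a few lines of Python. Real LLMs are vastly more sophisticated, but the basic move is the same: predict what comes next based on what came next in the training data, with no notion of meaning:

```python
# Tiny next-word "model": predict the next word purely from how often word pairs
# appeared in the training text. It continues patterns; it understands nothing.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow each word in the training text.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    followers = next_word_counts[word]
    # Pick the most frequently seen follower; the model has no idea what any word means.
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat", only because that pairing was the most common
print(predict_next("cat"))  # "sat" or "ate", whichever was counted first on the tie
```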
The word gimp in disability circles once upon a time meant “generally impaired.”
Firefox or Vivaldi. I prefer Vivaldi with its built-in blocking. I also use NextDNS for DNS-level blocking; the free plan is good enough for my use.
Since they both still exist, only time will tell whether the promises of nuclear power and/or cryptocurrencies come to be.
AGI, and even (IMHO) AI, do not exist. Whatever product is being marketed as AI isn’t what I would consider AI. “AI” can have its uses, but I really do not think they will be what people expect, because it fundamentally lacks what I would consider crucial aspects of human intelligence.
AI makes for a very good grammar checker. It is good at producing filler content for SEO. And it is good at producing “stuff” that looks like it could be right. It will probably have some uses in creative work, since the output doesn’t have to be “correct”; as a tool to aid an artist, that seems pretty cool, and I’m sure that is already happening. It will have its uses, and a lot of companies will find out the hard way that it is not what they think. That’s my prediction.