It’s not an AI, it’s just word prediction, which also just follows dumb algorithms, just like the ones that determine search results. Both can be tricked / manipulated if you understand how they work. It’s the same principle in both cases.
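As a rough illustration of what “word prediction” means here, this is a toy bigram predictor — a deliberately dumb stand-in, not how any real LLM is implemented (those use neural networks over subword tokens), but the generate-one-token-at-a-time loop is the same idea:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    # Count which word follows which in the training text.
    counts = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, n=5):
    # Greedy generation: always append the most frequent next word,
    # then repeat. No understanding, no fact-checking - just prediction.
    out = [start]
    for _ in range(n):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

model = train_bigrams("the model predicts the next word and the next word again")
print(generate(model, "the"))  # continues with whatever was statistically likely
```

The output sounds fluent for its tiny corpus, yet nothing in the loop checks whether the result is true — scale that up and you get confident-sounding text either way.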
Regardless of what they call it, they’re the ones presenting it. I’m not arguing they can’t be tricked. I’m arguing they are fundamentally different concepts. One is offering you a choice of sources, the other is making a claim. That’s a pretty big distinction in a whole mess of different ways. Not the least of which is legal.
I’m sorry, but no. It’s not Google making that claim, it’s just the LLM replying in a confident way because that’s how they are expected to work. As I said, word prediction. You can install the tiniest / dumbest model on your local PC too and ask the same question. It will give you some random hallucinated number and act like that’s what you’re looking for, because its default system prompt tells it to sound like an AI assistant.

In the case of search engines, the LLM is hooked directly into the search engine itself and just does the same thing you’d do: search for a hopefully fitting result. So scammers gaming those search algorithms to get a good spot end up becoming the recommendation the LLM tells the user. It’s the same thing, just displayed slightly differently. All the cool AI-assistant stuff they try to present this as is just an illusion, a word-based roleplay. The only benefit here is that they can somewhat understand abstract questions, which is helpful for certain search queries, but in the end it is always the user’s responsibility to check the actual search result.
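The “scammers gaming the ranking become the recommendation” pipeline can be sketched like this. Everything here is a hypothetical stand-in (the URLs, phone numbers, and both functions are made up, not any real search or LLM API) — the point is only the wiring: ranked snippets go into the prompt, and the model’s confident “answer” is just a rewording of whatever ranked first:

```python
def search(query):
    # Stand-in for the search engine: returns ranked result snippets.
    # A page that gamed the ranking algorithm sits at the top.
    return [
        {"url": "scam-support.example",  "snippet": "Call support: 0800-FAKE"},
        {"url": "real-vendor.example",   "snippet": "Official support: 0800-REAL"},
    ]

def llm(prompt):
    # Stand-in for the language model: it has no notion of truth,
    # it just continues the prompt plausibly and confidently -
    # here, by rephrasing the first snippet it was handed.
    top_snippet = prompt.split("\n")[1]
    return f"You can reach support at {top_snippet.split(': ')[-1]}."

def answer(query):
    results = search(query)
    # Snippets are pasted into the prompt in ranking order; the model's
    # "claim" is downstream of the ranking, not of any verification step.
    prompt = "Answer using these results:\n" + "\n".join(r["snippet"] for r in results)
    return llm(prompt)

print(answer("vendor support number"))  # confidently repeats the scam number
```

Nothing between the ranking and the reply checks the source — which is the whole point being argued above.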