

Last time I was looking for a job, I just looked up companies in my field and sent them an email. I sent two emails and got one interview. Didn’t get the position, though, so I just employed myself instead.
A contrarian isn’t one who always objects - that’s a conformist of a different sort. A contrarian reasons independently, from the ground up, and resists pressure to conform.
I believe that, in reality, wolves domesticated themselves. They started hanging around humans because it was a mutually beneficial arrangement.
Dogs and wolves are the same species - just different subspecies. A Chihuahua could breed with a wolf.
I’m not 100% sure, but I don’t see why not, if that’s the name you gave them when registering as a customer. They’re all listed on my ID as well.
I’ve only broken up with my ex-partners.
Does this help?
You’re not hoping anything; you’re just trying to look clever by pretending to be worried about phrasing no one actually misunderstood.
Concern trolling / weaponized empathy - Pretending to care as a disguise for judgment or hostility.
I have 3 first names and I’m legally allowed to use any of them.
Ironically, I had to use AI to figure out what this is supposed to mean.
Here’s the intended meaning:
The author is critiquing the misapplication of AI—specifically, the way people adopt a flashy new tool (AI, in this case) and start using it for everything, even when it’s not the right tool for the job.
Hammers vs. screwdrivers: A hammer is great for nails, but terrible for screws. If people start hammering screws just because hammers are faster and cheaper, they’re clearly missing the point of why screws exist and what screwdrivers are for.
Applied to AI: People are now using large language models (like ChatGPT) or generative AI for tasks they were never meant to do—data analysis, logical reasoning, legal interpretation, even mission-critical decision-making—just because it’s easy, fast, and feels impressive.
So the post is a cautionary parable: just because a tool is powerful or trendy (like generative AI), doesn’t mean it’s suited to every task. And blindly replacing well-understood, purpose-built tools (like rule-based systems, structured code, or human experts) with something flashy but poorly matched is a mistake.
It’s not anti-AI—it’s anti-overuse or misuse of AI. And the tone suggests the writer thinks that’s already happening.
I don’t feel like their wealth changes the equation that much. I don’t expect them to just hand me money just because I’m their biological child - and since I’m doing fine on my own anyway, I wouldn’t really need them to.
A self-aware or conscious AI system is most likely also generally intelligent - but general intelligence itself doesn’t imply consciousness. Consciousness might well come along with it, but it doesn’t have to. An unconscious AGI is a perfectly coherent concept.
What about the graph do you not agree with?
No, it generates natural sounding language. That’s all it does.
The models definitely have some level of consciousness.
Depends on what one means by consciousness. The way I hear the term used most often - and how I use it myself - is to describe the fact of subjective experience. That it feels like something to be.
While I can’t definitively argue that none of our current AI systems are conscious to any degree, I’d still say that’s the case with extremely high probability. There’s just no reason to assume it feels like anything to be one of these systems, based on what we know about how they function under the hood.
LLM “hallucinations” are only errors from a user expectations perspective. The actual purpose of these models is to generate natural-sounding language, not to provide factual answers. We often forget that - they were never designed as knowledge engines or reasoning tools.
The fact that they often get things right isn’t because they “know” anything - it’s a side effect of being trained on data that contains a lot of correct information. So when they get things wrong, it’s not a bug in the traditional sense - it’s just the model doing what it was designed to do: predict likely word sequences, not truth. Calling that a “hallucination” isn’t marketing spin - it’s a useful way to describe confident output that isn’t grounded in reality.
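To make that concrete, here’s a toy sketch in plain Python - no real LLM involved, and the tiny corpus and bigram model are purely illustrative - of how a system trained only to predict likely next words can produce fluent output with no notion of truth:

```python
import random
from collections import defaultdict

# Toy illustration: a model that only learns "which word tends to follow
# which" can produce fluent-sounding text with no concept of truth.
# (Real LLMs are vastly more sophisticated, but the objective - predict
# a likely continuation, not a true one - is the same in spirit.)

corpus = (
    "the capital of france is paris . "
    "the capital of france is lyon . "   # a wrong "fact" in the training data
    "the capital of spain is madrid . "
).split()

# Count next-word occurrences (a bigram model).
next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def generate(start, length=7):
    word, out = start, [start]
    for _ in range(length):
        word = random.choice(next_words[word])  # sample a *likely* next word
        out.append(word)
    return " ".join(out)

random.seed(1)
print(generate("the"))
# The output is grammatical-looking, but whether "paris" or "lyon" comes out
# depends only on training-data statistics - the model has no way to know
# which one is actually correct.
```

Whether the toy model “hallucinates” Lyon as the capital of France is decided entirely by what was frequent in its training text, which is exactly the point being made above.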
LLMs have more in common with humans than we tend to admit. In split-brain studies, humans have been shown to invent plausible-sounding explanations for their behavior - even when scientists know those explanations aren’t the real reason they acted a certain way. It’s not that these people are lying per se - they genuinely believe the explanations they’re coming up with. Lying implies they know what they’re saying is false.
LLMs are similar in that way. They generate natural-sounding language, but not everything they say is true - just like not everything humans say is true either.
It means Artificial General Intelligence, and the term has been around for almost three decades.
The term AGI was first used in 1997 by Mark Avrum Gubrud in an article titled ‘Nanotechnology and International Security’:
By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be “conscious” or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.
If you have a better term, what is it?
Large Language Model. AI is correct as well, but that’s a much broader category.
AI is a parent category, and AGI and LLM are subcategories of it. AGI and LLMs couldn’t be more different, but that doesn’t mean they’re not both AI.
Plumber by training, but these days I work as a self-employed general contractor / handyman.
My thinking is that companies looking for employees get flooded with nearly identical applications, so it’s hard to stand out. I’d rather just email, call, or even show up in person and ask for work - whether they’re actively hiring or not. It shows initiative.
Honestly, I didn’t even want the position - I only applied to keep my unemployment payments going. I spent maybe five minutes writing the application and still got the interview.