My Dearest Sinophobes:
Your knee-jerk downvoting of anything that features any hint of Chinese content doesn’t hurt my feelings. It just makes me point and laugh, Nelson Muntz style, as you demonstrate time and again just how weak American snowflake culture really is.
Hugs & Kisses, 张殿李
To paraphrase someone far more clever than me: “If you can’t be bothered to actually write it, I can’t be bothered to read it.”
Woohoo! Finally a company I can boycott that’s not already in my boycott of all American companies!
Allow me to fix that for you:
computer scientists: We have invented virtual dumbasses who are constantly wrong: Tech CEOs.
No they won’t. Any place that uses an AI for doing any kind of interaction with me doesn’t get my business.
You are being a little bit pedantic. People talking about “AI” today are talking about “LLMs”, not the older tech that turned out not to actually be “AI”. (Rather like the current stuff isn’t actually “AI”.)
Using it at all, really. Given the environmental costs, the social costs, and the fraud it entails, using it at all is pretty much unethical.
My favourite example, though, was the lazy lawyer who used ChatGPT to write a legal brief for him.
Perplexity is the only one I would think of using seriously, and then only when I want it to, say, summarize something I already know.
After which I fact-check it like crazy and hammer at it until it gets things right.
One annoying habit it has: somewhere in the chain of software before or after the LLM, it looks for certain key topics it doesn’t want to talk about and either refuses outright (anything involving violence or crime) or serves up a visibly canned hot take that it repeats without variance, no matter what added information you provide or how much cajoling you try.
At other points it starts into the canned responses, but when you catch it, it will try again. For example, I frequently want song lyrics translated, and each time I supply some that it recognizes as such, it throws up a canned response about how it will not be a party to copyright infringement. Then, after a few rounds back and forth about how I’m clearly not doing this commercially and am just a fan who wants to understand a song better, it will begrudgingly give me the translation.
Then five minutes later in the SAME CONVERSATION it will run through that cycle all over again when I give it another song.
Lather. Rinse. Repeat.
It’s about the same in terms of what it does (which means it hallucinates just as strongly and can’t be trusted). It just takes less to do it. MUCH less.
I can’t. I share the same opinion.
If I see evidence of AI in any space on a page (aside, obviously, from one that is analyzing AI) I assume that the page has nothing worth reading.
I doubt I will miss anything of value by this assumption.
So I’m with you. Putting AI “art” on an article is just a sign of dishonesty and taints the writing as well.
Grand Theft Autocorrect has no feelings to hurt. Has no nervous system to signal pain. Spicy Madlibs is less self-aware than an ant.
Respect. The whole huqin family is fiendishly difficult to play.
I want to get a DIY kit to make a kalimba tuned to one of the Chinese pentatonic scales.
Wash an executive’s what now?
Kalimbas are the greatest little instrument for just plinking around and having fun musical explorations! You can sit down and learn hard … or you can just play around and see what cool little riffs you can come up with in an almost trance-like state.
Warning! You got a cheap kalimba. That’s how I started. #5 was … ah … significantly more expensive…
I’m not even a pro musician (though it was a career option I’d considered after high school). I dabble.
And I’ll be utterly and thoroughly railed by a rusty railroad spike before I let some soulless sociopath tell me that I don’t enjoy making music!
“As a black woman …”
You also seem nice.
Toward the end they also mention the bit about the writing being the work of bullshit generators, no?