Respect. The whole huqin family is fiendishly difficult to play.
My Dearest Sinophobes:
Your knee-jerk downvoting of anything that features any hint of Chinese content doesn’t hurt my feelings. It just makes me point and laugh, Nelson Muntz-style, as you demonstrate time and again just how weak American snowflake culture really is.
Hugs & Kisses, 张殿李
I want to get a DIY kit to make a kalimba tuned to one of the Chinese pentatonic scales.
Wash an executive’s what now?
Kalimbas are the greatest little instrument for just plinking around and having fun musical explorations! You can sit down and learn hard … or you can just play around and see what cool little riffs you can come up with in an almost trance-like state.
Warning! You got a cheap kalimba. That’s how I started. #5 was … ah … significantly more expensive…
I’m not even a pro musician (though it was a career option I’d considered after high school). I dabble.
And I’ll be utterly and thoroughly railed by a rusty railroad spike before I let some soulless sociopath tell me that I don’t enjoy making music!
“As a black woman …”
You also seem nice.
You seem nice.
Huh. I’d never have guessed that.
I thought art was making chthonic horrors of hands.
How is someone who was caught fooling the police who … caught them?
I’ve never been worried about AI. AI isn’t a problem because it doesn’t exist. (Hint: you can’t make an artificial version of something you can’t even define in ways that are broadly accepted.)
What I’ve always been worried about is the people pushing “AI”. They’re the source of all the trouble.
Google’s evil was more in the background; you had to be paying attention to find it until they decided that they were powerful enough to be openly evil.
LLMs don’t know anything. You’d have to have programs around the AI that look for that, and the number of things that can be done to disguise the statement so only a human can read it is uncountable.
##### # # ### #### ### ####
# # # # # # #
# ##### # ### # ###
# # # # # # #
# # # ### #### ### ####
#### # # # # #### # # ### #####
# # # # # # # # # # #
#### # # # # ### ##### # #
# # # # # # # # # # #
#### ### ##### ##### #### # # ### #
Like here’s one. Another would be to do the above one, but instead of using # cycle through the alphabet. Or write out words with capital letters where the # is.
Or use an image file.
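The banner trick above can be sketched in a few lines of Python. The glyph set here is a made-up stand-in for a full A–Z font, and `banner_cycled` is a hypothetical name for the cycle-through-the-alphabet variant:

```python
import itertools

# Toy 5-row glyphs; a real version would cover the whole alphabet.
GLYPHS = {
    "H": ["#  #", "#  #", "####", "#  #", "#  #"],
    "I": ["###", " # ", " # ", " # ", "###"],
    " ": ["  ", "  ", "  ", "  ", "  "],
}

def banner(msg, fill="#"):
    """Render msg as 5-row ASCII art, humans read it, tokenizers see noise."""
    return "\n".join(
        " ".join(GLYPHS[ch][row].replace("#", fill) for ch in msg)
        for row in range(5)
    )

def banner_cycled(msg):
    """Same banner, but each # becomes the next letter of the alphabet."""
    letters = itertools.cycle("abcdefghijklmnopqrstuvwxyz")
    return "\n".join(
        " ".join(
            "".join(next(letters) if c == "#" else c for c in GLYPHS[ch][row])
            for ch in msg
        )
        for row in range(5)
    )

print(banner("HI"))
print(banner_cycled("HI"))
```

Swapping the fill character, cycling letters, or rendering to an image are all the same move: keep the shape human-legible while shredding the token stream.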
That’s what I’m talking about. We use the Degenerative AI to create a whole pile of bullshit Tlön-style, then spread that around the Internet with a warning up front for human readers that what follows is stupid bullshit intended to just poison the AI well. We then wait for the next round of model updates in various LLMs and start to engage with the subject matter in the various chatbots. (Perplexity, for instance, says that while they do not keep user information, they do create a data set of amalgamated text from all the queries to help decide what to prioritize in the model.)
The ultimate goal is to have it, over time, hallucinate stuff into its model that is not just bullshit but well-known bullshit, so that Degenerative AI’s inability to actually think is highlighted even for the credulous.
Most people lack empathy for self-inflicted injury, and the problem is that the perception of what is self-inflicted varies from person to person.
See, personally, I think anybody on any corporate social media verges on the self-inflicted by now, given that it’s been common knowledge for years that Facebook and Google and and and are completely and utterly unhinged in their lack of humanity.
They have become a lot more convincing, not a lot better.
They’re still misinformation amplifiers with a feedback loop. There’s more misinformation on most topics out there (whether intentional, via simplification, or accidental) than there is information. LLMs, which have no model of reality and thus cannot really assess the credibility of sources, just hoover it all up and mix it all together to return it to you.
You (the generic you, not you in specific … necessarily) then take the LLM’s hallucinated garbage (which is increasingly subtle in its hallucinations) and post it. Which the LLMs hoover up in the next round of model updates and …
Imagine trying to use history to support your point of view while steadfastly ignoring the history of AI.
Just like the Spinning Jenny back then, AI is as bad as it’s ever going to be today. It’s only going to get better and jobs will be made redundant, I’ll put my money on that. It’s a real fear that many people have, whether they’ll admit it or not.
It’s also as good as it’s ever going to be today (or the near future).
Degenerative AI has already dropped off in usage to the point that major stakeholders in it are terrified. It’s going to go into the same winter that every previous “no really this time we’ve got it right” AI craze went into.
Grand Theft Autocorrect has no feelings to hurt. Has no nervous system to signal pain. Spicy Madlibs is less self-aware than an ant.