• 14 Posts
  • 22 Comments
Joined 3 years ago
Cake day: June 11th, 2023


  • memfree@beehaw.org to Memes@sopuli.xyz: Can anyone confirm?
    13 up · 1 down · 6 months ago

    Does not work for ANY phrase. It seems to presume that the person asking is referencing something real. Sample results copied here, ordered from the AI’s least speculative to its most.

    • horses before giraffes meaning

    “Horses before giraffes” has no scientific meaning because giraffes are not ancestors of horses…

    • put your horses before giraffes meaning

    “Put your horses before giraffes” is not a recognized English idiom. The similar and well-known idiom is “put the cart before the horse,” …

    • always put horses before giraffes meaning

    The phrase “always put horses before giraffes” is a variation of the well-known medical aphorism: “When you hear hoofbeats, think of horses, not zebras”…

    • titrated solutions beget relief meaning

    The phrase “titrated solutions beget relief” means that carefully adjusted or fine-tuned treatments can bring about an end to a problem…



  • memfree@beehaw.org to Science@mander.xyz: *Permanently Deleted*
    12 up · 6 months ago

    It has always been strange to me that anyone would think animals don’t have a wide range of emotions. I understand that a scientist can’t ask how an animal is feeling and must instead record avoidance/seeking behaviors, but it also seems vanishingly improbable that emotions aren’t part of a long and useful evolutionary strategy for getting to the next generation. Cows have friends. Sure, it took effort to prove, but why wouldn’t we expect that? We see mothers nurture their offspring, and we could easily call it love and concern. It is good to see we now have proof that it isn’t just the cuddly creatures that have emotions, but everything at least as far down the scale as fish.



  • I read that as including human interaction as part of the pain point. They already offer bounties, so they’re doing some money management as it is, but the human element becomes very different when you want up-front money from EVERYONE. When an actual human’s report is rejected, that human will resent getting ‘robbed’. It is much easier to get people to goof around for free than to charge THEM to do work for YOU. You might offer a refund on the charge later, but you’ll lose a ton of testers as soon as they have to pay.

    That said, the blog’s link to sample AI slop bugs immediately showed how much time humans are being forced to waste on bad reports. I’d burn out fast if I had to examine and reply about all those bogus reports.


  • These attacks do not have to be reliable to be successful. They only need to work often enough to be cost-effective, and the cost of LLM text generation is cheap and falling. Their sophistication will rise. Link-spam will be augmented by personal posts, images, video, and more subtle, influencer-style recommendations—“Oh my god, you guys, this new electro plug is incredible.” Networks of bots will positively interact with one another, throwing up chaff for moderators. I would not at all be surprised for LLM spambots to contest moderation decisions via email.

    I don’t know how to run a community forum in this future. I do not have the time or emotional energy to screen out regular attacks by Large Language Models, with the knowledge that making the wrong decision costs a real human being their connection to a niche community.

    Ouch. I’d never want to tell someone ‘Denied. I think you’re a bot.’ – but I really hate the number of bots already out there. I was fine with the occasional bots that would provide a wiki-link, and even the ones that would reply to movie quotes with their own quotes. Those were obvious, and you could easily opt to ignore/hide their accounts. As the article states, the particular bot here was also easy to spot once it got in the door, but the initial contact could easily have been human, and we can expect bots to seem increasingly human as AI improves.

    Bots are already driving policy decisions in government by promoting/demoting particular posts and writing their own comments that can redirect conversations. They make it look like there is broad consensus for the views they’re paid to promote, and at least some people will take that as a sign that the view is a valid option (ad populum).

    Sometimes it feels like the internet is a crowd of bots all shouting at one another and stifling the humans trying to get a word in. The tricky part is that I WANT actual unpaid humans to tell me what they actually like/hate/do/avoid. I WANT to hear actual stories from real humans. I don’t want to find out the ‘Am I the A-hole?’ story getting everyone so worked up was an ‘AI-hole’ experiment in manipulating emotions.

    I wish I could offer some means of reliably distinguishing human from generated content, but the only solutions I’ve come up with require revealing real-world identities to sites, and that feels as awful as having bots. Otherwise, I imagine that identifying bots will be an ever-escalating arms race akin to the Search Engine Optimization wars.



  • Amazon offered up “Treatments for High Cholesterol” along with a link for an Amazon One Medical consultation as well as links to prescription medications.

    That’s weird, because my doctor and my wife are the only people who know about my cholesterol numbers. They’re pretty good, too! But there are certainly data points, including my age, my food preferences, and my past purchases, maybe even news stories I’ve read elsewhere on the web, that might suggest I’d be a good candidate for a statin, the type of cholesterol-lowering medication Amazon recommended to me. And while I’m used to Amazon recommending books I might like or cleaning products I might want to buy again, it felt pretty creepy to push prescription drugs in my direction.

    What did the author expect? Is anyone surprised that a big business is pushing people to buy more product?

    HIPAA, the federal law that protects health privacy, is narrower than most people think. It only applies to health care providers, insurers, and companies that manage medical records. HIPAA requires those entities to protect your data as it moves between them, but it wouldn’t apply to your Amazon purchases, according to Suzanne Bernstein, a legal fellow at the Electronic Privacy Information Center (EPIC).

    HIPAA has always been a questionable law that does more for Pharma than for citizens. By signing a HIPAA form, patients basically allow their medical info to be distributed/sold to drug makers and other product/treatment vendors. I’m glad health information is legally considered private until you sign, but I’m not sure why the public is okay with signing away their privacy on every trip to a new doctor.

    should my Amazon purchases be associated with Amazon’s health care services at all?

    Well, Amazon isn’t going to restrict itself, so we – as the public – will have to make a fuss about it if we want anything to change.



  • I knew about the police getting access, but I missed that home insurance companies were checking properties with drones. I guess I don’t mind them spending their own money to send their own drones to verify properties they insure, but I agree that using MY camera that I bought to get info or sell MY data is at least unethical and ought to be illegal. It should be required that they get my explicit consent to that sort of thing for each instance of data collection or sale.