• 11 Posts
  • 18 Comments
Joined 2 years ago
Cake day: June 11th, 2023

  • These attacks do not have to be reliable to be successful. They only need to work often enough to be cost-effective, and the cost of LLM text generation is cheap and falling. Their sophistication will rise. Link-spam will be augmented by personal posts, images, video, and more subtle, influencer-style recommendations—“Oh my god, you guys, this new electro plug is incredible.” Networks of bots will positively interact with one another, throwing up chaff for moderators. I would not at all be surprised for LLM spambots to contest moderation decisions via email.

    I don’t know how to run a community forum in this future. I do not have the time or emotional energy to screen out regular attacks by Large Language Models, with the knowledge that making the wrong decision costs a real human being their connection to a niche community.

    Ouch. I’d never want to tell someone, ‘Denied. I think you’re a bot,’ but I really hate how many bots are already out there. I was fine with the occasional bots that would provide a wiki-link, and even the ones that would reply to movie quotes with their own quotes. Those were obvious, and you could easily opt to ignore/hide their accounts. As the article states, the particular bot here was also easy to spot once it got in the door, but the initial contact could easily have passed for human, and we can expect bots to seem increasingly human as AI improves.

    Bots are already driving policy decisions in government by promoting/demoting particular posts and writing their own comments that can redirect conversations. They make it look like there is broad consensus for the views they’re paid to promote, and at least some people will take that as a sign that the view is a valid option (ad populum).

    Sometimes it feels like the internet is a crowd of bots all shouting at one another and stifling the humans trying to get a word in. The tricky part is that I WANT actual unpaid humans to tell me what they actually like/hate/do/avoid. I WANT to hear actual stories from real humans. I don’t want to find out the ‘Am I the A-hole?’ story getting everyone so worked up was an ‘AI-hole’ experiment in manipulating emotions.

    I wish I could offer some means of reliably distinguishing human from generated content, but the only solutions I’ve come up with require revealing real-world identities to sites, and that feels as awful as having the bots. Otherwise, I imagine that identifying bots will be an ever-escalating war akin to the Search Engine Optimization wars.

  • Amazon offered up “Treatments for High Cholesterol” along with a link for an Amazon One Medical consultation as well as links to prescription medications.

    That’s weird, because my doctor and my wife are the only people who know about my cholesterol numbers. They’re pretty good, too! But there are certainly data points, including my age, my food preferences, and my past purchases, maybe even news stories I’ve read elsewhere on the web, that might suggest I’d be a good candidate for a statin, the type of cholesterol-lowering medication Amazon recommended to me. And while I’m used to Amazon recommending books I might like or cleaning products I might want to buy again, it felt pretty creepy to push prescription drugs in my direction.

    What did the author expect? Is anyone surprised that a big business is pushing people to buy more product?

    HIPAA, the federal law that protects health privacy, is narrower than most people think. It only applies to health care providers, insurers, and companies that manage medical records. HIPAA requires those entities to protect your data as it moves between them, but it wouldn’t apply to your Amazon purchases, according to Suzanne Bernstein, a legal fellow at the Electronic Privacy Information Center (EPIC).

    HIPAA has always been a questionable law that does more for Pharma than for citizens. By signing a HIPAA form, patients basically allow their medical info to be distributed/sold to drug makers and other product/treatment vendors. I’m glad health information is legally considered private until you sign, but I’m not sure why the public is okay with signing away their privacy on every trip to a new doctor.

    should my Amazon purchases be associated with Amazon’s health care services at all?

    Well, Amazon isn’t going to restrict itself, so we – as the public – will have to make a fuss about it if we want anything to change.

  • I knew about the police getting access, but I missed that home insurance companies were checking properties with drones. I guess I don’t mind them spending their own money to send their own drones to verify properties they insure, but I agree that using MY camera, which I bought, to gather or sell MY data is at least unethical and ought to be illegal. They should be required to get my explicit consent for each instance of data collection or sale.

  • The amazing thing is that almost ALL the staff signed a letter and threatened to quit, too! From: https://www.wired.com/story/openai-staff-walk-protest-sam-altman/

    “The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company,” the letter reads. “Your conduct has made it clear you did not have the competence to oversee OpenAI.”

    Remarkably, the letter’s signees include Ilya Sutskever, the company’s chief scientist and a member of its board, who has been blamed for coordinating the boardroom coup against Altman in the first place. By 5:10 pm ET on Monday, some 738 out of OpenAI’s around 770 employees, or about 95 percent of the company, had signed the letter.

    Supposedly, Microsoft has said it’ll hire the whole team… but I wonder if it’ll really play out that way, or if they’d just become short-term hires and then be let go once OpenAI collapses. Note that Microsoft has invested a lot of money in OpenAI.

    Vox also has a lengthy article with lots of details and consideration of what it all means, such as:

    … There is an argument that, because OpenAI’s board is supposed to run a nonprofit dedicated to AI safety, not a fast-growing for-profit business, it may have been justified in firing Altman. (Again, the board has yet to explain its reasoning in any detail.) You won’t hear many people defending the board out loud since it’s much safer to support Altman. But writer Eric Newcomer, in a post he published November 19, took a stab at it. He notes, for instance, that Altman has had fallouts with partners before — one of whom was Elon Musk — and reports that Altman was asked to leave his perch running Y Combinator.

    “Altman had been given a lot of power, the cloak of a nonprofit, and a glowing public profile that exceeds his more mixed private reputation,” Newcomer wrote. “He lost the trust of his board. We should take that seriously.”