• 3 Posts
  • 81 Comments
Joined 1 year ago
Cake day: June 5th, 2023

  • People developing local models generally have to know what they’re doing on some level, and I’d hope they understand what their model is and isn’t appropriate for by the time they have it up and running.

    Don’t get me wrong, I think LLMs can be useful in some scenarios, and can be a worthwhile jumping off point for someone who doesn’t know where to start. My concern is with the cultural issues and expectations/hype surrounding “AI”. With how the tech is marketed, it’s pretty clear that the end goal is for someone to use the product as a virtual assistant endpoint for as much information (and interaction) as it’s possible to shoehorn through.

    Addendum: local models can help with this issue, since they run on one’s own hardware, but they still need to be deployed and used with reasonable expectations: they are fallible aggregation tools, not to be taken as authorities in any way, shape, or form.


  • On the whole, maybe LLMs do make these subjects more accessible in a way that’s a net-positive, but there are a lot of monied interests that make positive, transparent design choices unlikely. The companies that create and tweak these generalized models want to make a return in the long run. Consequently, they have deliberately made their products speak in authoritative, neutral tones to make them seem more correct, unbiased and trustworthy to people.

    The problem is that LLMs ‘hallucinate’ details as an unavoidable consequence of their design. People can tell untruths as well, but if a person lies or misspeaks about a scientific study, they can be called out on it. An LLM cannot be held accountable in the same way, as it’s essentially a complex statistical prediction algorithm. Non-savvy users can easily be fed misinfo straight from the tap, and bad actors can easily generate correct-sounding misinformation to deliberately try and sway others.

    ChatGPT completely fabricating authors, titles, and even (fake) links to studies is a known problem. Far too often, unsuspecting users take its output at face value and believe it to be correct because it sounds correct. This is bad, and part of the issue is marketing these models as though they’re intelligent. They’re very good at generating plausible responses, but this should never be construed as them being good at generating correct ones.
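    To illustrate the “complex statistical prediction algorithm” point, here’s a toy sketch. This is nowhere near how a real LLM works internally (the corpus and words are made up), but even this tiny bigram model shows the core issue: it picks whatever continuation is statistically most common, with zero notion of whether the claim is true.

    ```python
    # Toy bigram "language model": predicts the next word purely from
    # how often it followed the previous word in a (made-up) corpus.
    from collections import Counter, defaultdict

    corpus = (
        "the study was published in nature "
        "the study was published in science "
        "the study was cited in nature"
    ).split()

    # Count which word follows which.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict(word):
        """Return the most frequent continuation seen in training."""
        return following[word].most_common(1)[0][0]

    # "published in nature" is the most common pattern, so the model
    # asserts it -- whether or not any such study actually exists.
    print(predict("in"))
    ```

    Scale that idea up by a few billion parameters and you get fluent, confident, correct-*sounding* output with the exact same blind spot.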







  • I believe the wording is “drinks which burn the throat” which naturally means:

    • ❌ Coffee
    • ❌ Alcoholic drinks
    • ❌ Coca Cola
    • ❌ Hot tea
    • ❌ Chai lattes
    • ✅ Sprite, other non-caffeinated soft drinks
    • ✅ Hot Chocolate
    • ✅ Kombucha
    • ✅ Energy drinks???
    • ✅ Herbal Tea (even while hot, but mostly if you’re sick as a home remedy)

    Most of the focus is interpreted as “contains caffeine and/or alcohol,” but the wording is vague enough that it leaves a lot of weird wiggle room that people try to argue over (usually based on convenience). It’s quite silly.





  • Driving is more fun when there are more viable alternatives. I don’t like driving, but it’s my only real choice where I live so I do it begrudgingly, and you have to share the road with me. Think of all the people who don’t want to drive (on account of it being dangerous, costly and/or mentally taxing) suddenly not being in cars, and how much traffic that would free up for you to zip around instead!

    Also, calling a public service “bankrupt” is really weird to me. How many tax dollars are we spending on public highways and freeways again? Do suburbs, which are designed to be car-dependent, provide a net gain or net cost in tax revenue to cities?








  • Y’know, that’s fair. I think I misspoke; I meant to say that the admins of your instance can see your IP, but not the admins of another (assuming you’re not self-hosting on your home PC without a VPN). I’m not 100% sure that’s true, though, because I’ve never looked at the protocol.

    If every interaction is already public on the backend/API level, then simply not showing the info to users is just a transparency issue.

    The more I think about this, the more I believe it’s a cultural/expectations thing. On websites like Tumblr, all of your reblogs and likes are public info, but it’s very up front about that. On social media like Facebook and IG, and on sites like Discord, it’s the same: you can look through the list of everyone who reacted.
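    And that “public on the backend/API level” part is concrete: federated servers exchange plain JSON activities. A minimal sketch (the actor and object URLs here are made up, not from any real instance) of what an ActivityStreams “Like” looks like on the wire:

    ```python
    import json

    # Hypothetical example of an ActivityStreams "Like" activity, shaped
    # like the JSON that ActivityPub servers federate to each other.
    raw = """
    {
      "@context": "https://www.w3.org/ns/activitystreams",
      "type": "Like",
      "actor": "https://example.instance/u/alice",
      "object": "https://other.instance/post/123"
    }
    """

    activity = json.loads(raw)
    # Every server that receives this sees exactly who liked what;
    # hiding it in the UI doesn't make it private on the wire.
    print(activity["actor"], "liked", activity["object"])
    ```

    So whether users can *see* the list of likers really is just a front-end choice, not a privacy boundary.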