Programmer and sysadmin (DevOps?), wannabe polymath in tech, science and the mind. Neurodivergent, disabled, burned out, and close to throwing in the towel, but still liking ponies 🦄 and sometimes willing to discuss stuff.

  • 4 Posts
  • 1.1K Comments
Joined 2 years ago
cake
Cake day: June 26th, 2023

help-circle




  • Because it’s easier to migrate from Twitter to BlueSky.

    • Mastodon onboarding sucks: have to select an app, select an instance… and you’ve lost 99% of the users 😮‍💨
    • Bluesky: install the official app, pick a username and password, get to pick some interest topics, and you’re set up with a basic feed.

    Extras:

    • Starter packs: Users can advertise curated lists of people to follow, making it easier to migrate whole communities.
    • Moderation is arguably better with community “labelers” who don’t remove the content (doesn’t antagonize “freeze peach” people).
    • 3rd-party tools to automatically match and add ex-Twitter users who migrated to BlueSky.

    Overall, it gets a boost from a faster increase in network effect.


  • Nostr is great for privacy and for crypto, but not yet suitable for the general public.

    Asking an average user to secure a cryptographic key for their identity, when most can barely hold onto a user:pass, is kind of ridiculous… so Nostr is selling a $100 “authenticator box”. Not particularly user friendly.

    One strong point of Nostr is Bitcoin LN integration, which potentially could work as a source of revenue, but the look&feel is not published enough, while at the same time trying to offer more interaction types (like the marketplace), than what people really want: Twitter’s sweet teet.


  • It will be, “but”.

    The code is dual-licensed MIT and Apache. Meaning it’s fully compatible with a privative fork, but also a free federated network could still survive.

    For now, it seems like they are planning on developing extra features on top of the basic functionalities, not paywall basic features… but time will tell.

    In any case, they seem to be led by people who jumped ship from Twitter before the Muskocalypse, so it’s becoming kind of “the old time Twitter”. Chances are, as Musk rides Twitter’s popularity and inertia until fully turning it into a dystopian dictatorship propaganda machine, BlueSky will emerge to replace it as a slightly better iteration of what Twitter used to be.



  • If the concern is about “fears” as in “feelings”… there is an interesting experiment where a single neuron/weight in an LLM, can be identified to control the “tone” of its output, whether it be more formal, informal, academic, jargon, some dialect, etc. and expose it to the user for control over the LLM’s output.

    With a multi-billion neuron network, acting as an a priori black box, there is no telling whether there might be one or more neurons/weights that could represent “confidence”, “fear”, “happiness”, or any other “feeling”.

    It’s something to be researched, and I bet it’s going to be researched a lot.

    If you give ai instruction to do something “no matter what”

    The interesting part of the paper, is that the AIs would do the same even in cases where they were NOT instructed to “no matter what”. An apparently innocent conversation, can trigger results like those of a pathological liar, sometimes.


  • IANAL either, in recent streams from Judge Fleischer (Houston, Texas, USA) there have been some cases (yes, plural) where repeatedly texting a victim with life threats, or even texting a victim’s friend to pass on a threat to the victim, have been considered a “terrorist threat”.

    As for the “sane country” part… 🤷… but from a strictly technical point of view, I think it makes sense.


    I once knew a guy who was married to a friend, and he had a dog. He’d hit his own dog to make her feel threatened. Years went by, nobody did anything, she’d come to me crying, had multiple miscarriages… until he punched her, kicked out of the car, and left stranded on the road after a hiking trip. They divorced, went their separate ways, she found another guy, got married again, and nine months later they had twins.

    So… would it’ve been sane to call what the guy did, “terrorism”? I’d vote yes.



  • There are several separate issues that add up together:

    • A background “chain of thoughts” where a system (“AI”) uses an LLM to re-evaluate and plan its responses and interactions by taking into account updated data (aka: self-awareness)
    • Ability to call external helper tools that allow it to interact with, and control other systems
    • Training corpus that includes:
      • How to program an LLM, and the system itself
      • Solutions to programming problems
      • How to use the same helper tools to copy and deploy the system or parts of it to other machines
      • How operators (humans) lie to each other

    Once you have a system (“AI”) with that knowledge and capabilities… shit is bound to happen.

    When you add developers using the AI itself to help in developing the AI itself… expect shit squared.