Just this guy, you know?

  • 0 Posts
  • 81 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • zaphod@lemmy.ca to linuxmemes@lemmy.world · Old is stability · edited 8 months ago

    Yes I’m aware of the security tradeoffs with testing, which is why I’ve started refraining from mentioning it as an option as pedants like to pop out of the woodwork and mention this exact issue every damn time.

    Also, testing absolutely gets “security support”; the issue is that security fixes don’t land in testing immediately, so there can be some delay. As per the FAQ:

    Security for testing benefits from the security efforts of the entire project for unstable. However, there is a minimum two-day migration delay, and sometimes security fixes can be held up by transitions. The Security Team helps to move along those transitions holding back important security uploads, but this is not always possible and delays may occur.



  • zaphod@lemmy.ca to linuxmemes@lemmy.world · Old is stability · edited 8 months ago

    For the target users of Debian stable? No.

    Debian stable is for servers or other applications where security and predictability are paramount. For that use case I absolutely do not want a lot of package churn. Quite the opposite.

    Meanwhile Sid provides a rolling release experience that in practice is every bit as stable as any other rolling release distro.

    And if I have something running stable and I really need to pull in the latest of something, I can always mix and match.
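    In practice, “mix and match” on Debian usually means APT pinning. A minimal sketch, assuming a system tracking stable and using a placeholder package name:

    ```shell
    # Add the testing suite alongside stable:
    echo 'deb http://deb.debian.org/debian testing main' \
      | sudo tee /etc/apt/sources.list.d/testing.list

    # Pin testing at low priority so nothing is pulled from it automatically:
    sudo tee /etc/apt/preferences.d/99-testing <<'EOF'
    Package: *
    Pin: release a=testing
    Pin-Priority: 100
    EOF

    # Install a single package (and its dependencies) from testing on demand;
    # "some-package" is a placeholder, not a real package name:
    sudo apt update
    sudo apt install -t testing some-package
    ```

    With a Pin-Priority below 500, regular upgrades keep coming from stable; testing is only consulted when explicitly requested with `-t testing`.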

    What makes Debian unique is that it offers a spectrum of options for different use cases and then lets me choose.

    If you don’t want that, fine, don’t use Debian. But for a lot of us, we choose Debian because of how it’s managed, not in spite of it.


  • Hah I… think we’re on the same side?

    The original comment was justifying unregulated and unmitigated research into AI on the premise that it’s so dangerous that we can’t allow adversaries to have the tech unless we have it too.

    My claim is AI is not so existentially risky that holding back its development in our part of the world will somehow put us at risk if an adversarial nation charges ahead.

    So no, it’s not harmless, but it’s also not “shit this is basically like nukes” harmful either. It’s just the usual, shitty SV kind of harmful: it will eliminate jobs, increase wealth inequality, destroy the livelihoods of artists, and make the internet a generally worse place to be. And it’s more important for us to mitigate those harms, now, than to worry about some future nation state threat that I don’t believe actually exists.

    (It’ll also have lots of positive impact as well, but that’s not what we’re talking about here)


  • You don’t need AI for any of that. Determined state actors have been fabricating information and propagandizing the public, Mechanical Turk style, for a long, long time now. When you can recruit thousands of people as cheap labour to make shit up online, you don’t need an LLM.

    So no, I don’t believe AI represents a new or unique risk at the hands of state actors, and therefore no, I’m not so worried about these technologies landing in the hands of adversaries that I think we should abandon our values or beliefs Just In Case. We’ve had enough of that already, thank you very much.

    And that’s ignoring the fact that an adversarial state actor having access to advanced LLMs isn’t somehow negated or offset by us having them, too. There’s no MAD for generative AI.


  • Really? I’m supposed to believe AI is somehow more existentially risky than, say, chemical or biological weapons, or human cloning and genetic engineering (all of which are banned or heavily regulated in developed nations)? Please.

    I understand the AI hype artists have done a masterful job convincing everyone that their tech is so insanely powerful (and thus incredibly valuable to prospective investors) that it’ll wipe out humanity, but let’s try to be realistic.

    But you know, let’s take your premise as a given. Even despite that risk, I refuse to let an unknowable hypothetical be used to hold our better natures hostage. There are countless examples of governments and corporations using vague threats to get us to accept bad deals at the barrel of a virtual gun. Sorry, I will not play along.