polite leftists make more leftists

☞ 🇨🇦 (it’s a bit of a fixer-upper eh) ☜

more leftists make revolution

  • 7 Posts
  • 348 Comments
Joined 2 years ago
Cake day: March 2nd, 2024

  • Personally, I have P(AGI within 10 years) around 15%. I think anyone who is saying definitely no or definitely yes to AGI within this time frame is vastly overconfident in their understanding of the technology, one way or the other. Or they vastly underestimate the utility of bullshit. Of course, I also have P(doom|AGI) probably around 40-70%.
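    A back-of-the-envelope way to combine those two estimates: the implied chance of an AGI-caused catastrophe within 10 years is P(AGI) × P(doom|AGI). A quick sketch (the 15% and 40–70% figures are the ones stated above; everything else is illustrative):

    ```python
    # P(doom within 10 years) = P(AGI within 10 years) * P(doom | AGI)
    p_agi = 0.15                     # stated estimate: AGI within 10 years
    p_doom_given_agi = (0.40, 0.70)  # stated conditional range

    low, high = (p_agi * p for p in p_doom_given_agi)
    print(f"Implied P(doom within 10 years): {low:.1%} to {high:.1%}")
    ```

    So even at only 15% on AGI itself, the stated conditional puts roughly a 6–10% chance on the worst case within the decade.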

  • It’s true, I fear AGI, not the current state of AI if it were to remain frozen and not improve at all. I am also not terribly afraid of climate change if the climate were to remain fixed at this point. Sure, we have lots of forest fires, and people are dying of heat, but it could get much worse.

    I think maybe the root of our disagreement is that we’re appraising the current state of AI differently. I’m looking at AI now vs AI five years ago and seeing an orders-of-magnitude increase in how powerful it is – still not as good as a human, but no longer negligible – but you’re looking at both of these and rounding them to zero, calling it snake oil. Perhaps, in the Gartner hype cycle, you’re in the trough of disillusionment?

  • I don’t want to be a shill for big AI here, but I reject the idea that AI in its current state is useless (though I’d agree it’s overhyped and probably detrimental to society overall). It can handle a lot of trivial labour that previously wasn’t automatable, including coding tasks and graphics. It can’t do this with great reliability, or anywhere near as well as a human expert, and it’s much worse in some areas than others (AI-written news articles are much worse than useless, for instance). Still, it’s turning out to be a productivity benefit (read: reduction in jobs) for those who know how to play to its strengths. I think the “snake oil” aspect comes in when laypeople expect it to be reliable or as good as a human – which is basically how big tech is pitching it.


  • I think we’re looking at this from completely different angles if improvement in AI is something you “hope” for.

    Also, you’re looking at AI completely wrong if you’re analyzing its performance on traditional CS problems in terms of time complexity. Nobody credible is hoping that AI will solve NP-hard problems just by having the problem fed into its context window like a quarter into a vending machine.

  • jsomae@lemmy.ml to 196@lemmy.blahaj.zone · rule · 4 months ago

    I guess I dislike this post because Zionists usually phrase Israel’s actions in terms of self-defense. So to people who are already anti-Zionist, this post is intelligible; but to Zionists it reads as nonsensical at best and hateful at worst. Either way – it’s polarizing.

    It just seems like exactly the way we shouldn’t be using social media.