polite leftists make more leftists

☞ 🇨🇦 (it’s a bit of a fixer-upper eh) ☜

more leftists make revolution

  • 9 Posts
  • 408 Comments
Joined 2 years ago
Cake day: March 2nd, 2024

  • Personally, I have P(AGI within 10 years) around 15%. I think anyone giving a definite yes or a definite no to AGI within that time frame is vastly overconfident in their understanding of the technology, one way or the other. Or they vastly underestimate the utility of bullshit. Of course, I also have P(doom|AGI) probably around 40–70%.
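    For what it's worth, multiplying those stated figures through gives the implied unconditional risk. A quick sketch, using only the numbers from the comment above:

    ```python
    # Implied unconditional P(doom) = P(AGI) * P(doom | AGI),
    # using the figures stated above: P(AGI) = 15%, P(doom|AGI) = 40-70%.
    p_agi = 0.15
    p_doom_given_agi = (0.40, 0.70)

    p_doom = tuple(p_agi * p for p in p_doom_given_agi)
    print(f"implied P(doom) between {p_doom[0]:.1%} and {p_doom[1]:.1%}")
    # prints: implied P(doom) between 6.0% and 10.5%
    ```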


  • That’s very pragmatic, but you can also flip it around – almonds are a luxury compared to other, more practical foods, whereas LLMs can help a coder earn an income if used properly. I don’t think you can justify almonds while claiming AI usage is unethical on purely environmental grounds. And dairy milk requires about twice as much water as almond milk, so if dairy is in your diet, cutting it out will do far more to reduce your water footprint than not using LLMs.

    Anyway, check out the third link for more info on the total water usage of data centers; it doesn’t really add up to much compared to much larger consumers like golf courses. I don’t get why anyone would use water usage as a reason to agitate against AI given that there are so many worse problems AI is causing.



  • It’s true, I fear AGI, not the current state of AI if it were frozen and never improved. By the same token, I’m not terribly afraid of climate change if the climate were to stay fixed exactly where it is now. Sure, we have lots of forest fires, and people are dying of heat, but it could get much worse.

    I think maybe the root of our disagreement is that we’re appraising the current state of AI differently. I’m looking at AI now vs AI five years ago and seeing an orders-of-magnitude increase in how powerful it is – still not as good as a human, but no longer negligible – but you’re looking at both of these and rounding them to zero, calling it snake oil. Perhaps, in the Gartner hype cycle, you’re in the trough of disillusionment?

    I don’t want to be a shill for big AI here, but I reject the idea that AI in its current state is useless (though I would agree it’s overhyped and probably detrimental to society overall). It’s capable of doing a lot of trivial labour that previously wasn’t automatable, including coding tasks and graphics. It can’t do that work with great reliability, or anywhere near as well as a human expert, and it’s much worse in some areas than others (AI-written news articles are far worse than useless, for instance), but it’s still turning out to be a productivity benefit (read: reduction in jobs) for those who know how to play to its strengths. I think the “snake oil” aspect comes in when laypeople expect it to be reliable or as good as a human – which is basically how big tech is pitching it.


  • I think we’re looking at this from completely different angles if you’re “hopeful” that AI will improve.

    Also, you’re looking at AI completely wrong if you’re analyzing its performance on traditional CS problems in terms of time complexity. Nobody credible expects AI to solve NP-hard problems just by feeding the problem into its context window like a quarter into a vending machine.