• 1 Post
  • 812 Comments
Joined 1 year ago
Cake day: March 22nd, 2024


  • Heartwarming: the worst person you know just outed themselves as a fucking moron

    Even the people who are disagreeing are still kinda sneerable though. Like this guy:

    Even in the worst case, DOGE firing too many people is not a particularly serious danger. Aside from Skynet, you should be worried about people using AI to help engineer deadly viruses or nuclear weapons, not firing government employees.

    That’s still assuming that the AI is a valuable tool for genetic engineering or nuclear weapons manufacturing or whatever in the first place! Like, the hard part of building a nuke is very much in acquiring the materials, engineering everything to go off at the right time, and actually building it without killing yourself. Very little of that is meaningfully assisted by LLMs even if they did work as advertised. And there are so many people in that very thread alone going into detail on how biological engineering is incredibly hard in ways that similarly aren’t bottlenecked by the kinds of things current AI architectures can do. The degree to which they comedically miss the point of the folks who keep trying to explain reality is off the charts.




  • I would be more inclined to agree if there were an actual better alternative waiting to fill the gap. Instead we’re probably going to see US soft power replaced by EU, Russian, and particularly Chinese soft power. I’m not sufficiently propagandized to say that’s strictly worse than being under US soft power, especially as practiced by the kinds of people that support EA. But it also isn’t really an improvement in terms of enabling autonomous development.


  • Yeah. I don’t think you need the full ideological framework and all its baggage to get to “medical interventions and direct cash transfers are consistently shown to have strong positive impacts relative to the resources invested.” That framework prevents you from adding on “they also avoid some of the negative impact that foreign aid can have on domestic institution-building processes,” which is a really important consideration. Of course, that assumes the goal is to mitigate and remediate the damage done by colonialism and imperialism rather than perpetuating the same structures in a way that the imperialists at the top can feel good about. And for a lot of the donor class that EA orgs are chasing, I don’t think that’s actually the case.


  • I also think that some of the long-termism criticisms are not so easily severable from the questions he does address about epistemology and listening to the local people receiving aid. The long-termist nutjobs aren’t an aberration of EA-type utilitarianism. They are its logical conclusion. Even if this chapter ends with common sense prevailing over sci-fi nonsense, it’s worth noting that this kind of absurdity can’t arise if you define effectiveness as listening to people and helping them get what they need, rather than creating your own metrics that may or may not correlate outside of the most extreme cases.


  • See, I would frame it as practitioners of some of the last few non-bullshit jobs (minimally bullshit jobs) - fields that by necessity require a kind of craft or art that is meaningful or rewarding - being routed around by economic forces that only ever wanted their work for bullshit results. Like, no matter how passionate you are about graphic design, you probably didn’t get into the field because shuffling the visuals every so often is X% better for customer engagement and conversion or whatever. But the businesses buying graphic design work are more interested in that than they ever were in making something beautiful or functional, and GenAI gives them the ability to get what they want more cheaply. As an unexpected benefit they also don’t have to see you roll your eyes when they tell you it needs to be “more blue,” and as an insignificant side effect it brings our culture one step closer to finally drowning the human soul in shit to advance the cause of glorious industry in its unceasing march toward An Even Bigger Number.


  • I mean, past a certain point LLMs are strictly worse tools than Stack Overflow was on its worst day. IDEs have a bunch of features to help manage complexity and offload memorization, but the fundamental task of understanding the code you’re writing is still yours. Stack Overflow and other forums are basically crowdsourced mentorship programs: someone out there knows the thing you need to know, and rather than cultivating a wide social network you can take advantage of mass communication. To use them well you still need to understand what’s happening, and if you don’t, you can at least trust that the information is out there somewhere for you to follow up on as needed. LLM assistants are designed to create output that looks plausible and to tell the user what they want to hear. If the user is an idiot, the LLM will do nothing to make them recognize that they’re doing something wrong, much less help them fix it.