• 2 Posts
  • 1.34K Comments
Joined 2 years ago
Cake day: March 22nd, 2024

  • So two thoughts:

    1. Per Saltman’s comments the improvised incendiary bounced off the side of the house rather than breaking and spreading the gas on the house proper. Apparently if you want the bottle to break the way you intend you gotta really just whang that thing because glass bottles are sturdier than you’d think.

    2. One thing I find ironic about his referencing the New Yorker article on him was that part of my takeaway from that article was how mundane he is, individually. Like, he’s a snake, but not in any way that isn’t pretty standard once you start getting that level of wealth and power. He credibly pretended to be a proper AI cultist for the critihype, and then as the rubber started hitting the road he pivoted towards the direction that gave him and the company more money, even if it meant sacrificing the values that it turns out a lot of other people really cared about (however dumb I might think they are). That’s shitty, but it’s shitty in the most boring way that so many things are in the rot economy, and it’s not like even if they had managed to kill Altman himself there wouldn’t be another bunch of enterprising sociopaths ready to move into the same position. That profile is one of the strongest pieces of evidence why even if you are a hardcore AI doom cultist you shouldn’t focus your ire on the man himself, because he’s just not that special.



  • True. I will say that the shitty infosec teams are probably being hit less hard than the SMEs they offloaded their jobs onto, because from their perspective it doesn’t actually matter whether it’s an F5 support engineer or a chatbot that tells them the answer; either way they’ve successfully offloaded the task of validating security onto another entity that can make up for their shortcomings with a combination of accuracy and authority. Nobody is going to get fired for not fixing a bug that the vendor SME effectively told them wasn’t actually an issue for them. And when the org has been pushing AI as hard as so many of them have, it’s pretty easy to throw the chatbot under the same bus and expect the bus to stop instead.


  • Yeah, they lost me at the middle managers bit too. In my experience your manager is probably the one pushing the metrics to show their team’s contributions to the knowledge base that’s feeding into the AI model that’s replacing them. They’re already creatures of the bureaucracy and are more likely to fight each other over the few remaining roles that will exist after the majority of their teams are replaced with the confabulatron than to worry about their own replacement. After all, their job stops existing because their team got downsized, but their time in that job may depend on their enthusiastic participation in the process that leads there.


  • I don’t disagree about the massive costs necessarily associated with this industry. Even the smaller and lighter models she mentions only exist because of the massive fuckers. At the same time, I think those arguments belong to the realm of public policy more than the individual choice to use chatbots or not. We’ve talked at length here over the last year or so about how the economics of the bubble are driven largely by a broken B2B SaaS pipeline that separates purchasing decisions from actually having to use the products, and by an investment capital sector desperately trying to recapture the glory days of the pre-2008 omnibubble and throwing obscene amounts of money at anything with the right narrative regardless of the numbers. I feel like that keeps happening regardless of how many individual users fall for the hype and make it part of their normal workflows.

    I feel like the analogy to the drug trade is still pretty relevant given the violence and predation that the black market pretty much inevitably attracts and sustains. Like, maybe you know a guy who has his own grow op or whatever, but cocaine and heroin money is going through the cartels at some point in the chain and they’re going to use some portion of it for bullets that end up in some journalist’s kids or something. The downstream harms are massive even if the drug industry could theoretically avoid them in ways the AI industry can’t, but any given individual user’s contribution to them is incredibly minor, and given the addictive and self-destructive nature of the product it’s both more humane and more effective to treat them as a victim of a broken world that (falsely) offered this as a step up. While I don’t think we should allow slop to infest every forum any more than addicts should be allowed to shoot up on every corner, I think that if shaming makes people less likely to acknowledge that they’re going down a dead-end road and reach out to their communities and support networks for help addressing the root of what drove them to these maladaptive antisolutions in the first place, then shaming is making things worse, not better.

    Also as the father of a small child I can unfortunately say from recent personal experience that shaming, be it public or private, is far less effective as a means of motivating behavioral change than we want it to be, even for things as basic as not shitting on the goddamn lawn.


  • Found an interesting take on YouTube, of all places. Her argument can be summarized (with high compression losses) as “AI companies and technologies are bad for basically all the reasons that non-cultist critics say, but trying to shame and argue people out of using them entirely is less effective than treating them as a normal tool with limitations and teaching people how to limit the harm.” She makes the analogy to drug policy.

    I think she makes a very compelling argument, and I’m still digesting it a bit because I definitely had the knee-jerk rejection of her as an insider shill, but especially towards the end, as she talks about how the AI industry targets low-literacy users as ideal customers (because the more you know about the technology, the less likely you are to actually use it), I found myself agreeing more than not. I do wish she had addressed the dangers of cognitive offloading more, since being mindful of which tasks you’re letting the computer do for you is a pretty significant part of minimizing those harms, especially for students and some professionals who face a strong incentive to just coast by on slop if they can get away with it.


  • I can’t validate any of the internal stuff, but the attitude of layering manual solutions and mitigation scripts on top of bad design choices and praying you could keep building the next bit of the bridge as the last one collapsed underneath you would explain a lot of experiences I had supporting systems running on Azure. The level of weird “Azure just does that sometimes” cases and their support’s inability to actually provide insight were incredibly frustrating. I think I probably ended up providing a couple of automatic recovery scripts for people to use inside their F5 guests because we never could find an actual explanation for the errors they were getting, and the node issues they describe could have explained the bursts of Azure cases that would come in some days.


  • XCancel link for those of us sick of being badgered to sign up/in

    On a more productive note, this feels likely to be tied in with the usual issues of AI sycophancy re: false positive rate. If you ask the model to tell you about security vulnerabilities, it’s never going to tell you there aren’t any, any more than existing scanners will. When I worked for F5 it was not uncommon to have to go down a list of vulnerabilities that someone’s scanner turned up and figure out whether each was actually something that needed mitigation that could be applied on our box, something that needed to be configured somewhere else in the network (usually on their actual servers), or (most commonly) a false positive, e.g. “your software version would be vulnerable here, which is why it flagged, but you don’t have the relevant module activated, and if an attacker is able to modify your system to enable it you’re already compromised to a far greater degree than this would allow.” That was with existing tools that weren’t trying to match a pattern and complete a prompt. Given that we’ve seen the shitshow that is Claude Code, I think it’s pretty clear they’re getting high on their own supply, and this announcement ought to be catnip for black hats.





  • Man, this one is a weird read. On one hand I think they’re entirely too credulous of the “AI Future” narrative at the heart of all of this. Especially in the opening they don’t highlight how the industry is increasingly facing criticism and questions about the bubble, and they only pay lip service to how ridiculous all the existential-risk AI safety talk sounds (should be “is”). And they don’t spend any ink discussing the actual problems with this technology that those concerns and that narrative help sweep under the rug. For all that they criticize and question Saltman himself, this is still, imo, standard industry critihype, and I’m deeply frustrated to see it still get the platform it does.

    But at the same time, I do think that it’s easy to lose sight of the rich variety of greedy assholes and sheltered narcissists that thrive at this level of wealth and power. Like, I wholly believe that Altman is less of a freak than some of his contemporaries while still being an absolute goddamn snake, and I hope that this is part of a sea change in how these people get talked about on a broader level, though I kinda doubt it.


  • I mean that’s just the classic realist security paradox, right? The Iranian regime feels, not without reason, like they need to have a lot of military options to keep themselves safe against both internal and external threats. Those options include missile forces, the nuclear program, the ability to close the Strait of Hormuz, and a variety of regional proxies that can act in their interest and keep their regional adversaries from stabilizing and forming a real threat. However, having all those different security apparatuses makes other nations that have to interact with them (either because they’re also in the region, or they rely on the Strait of Hormuz, or they would also die in a nuclear apocalypse) more likely to feel the need to increase their own security apparatus, which in turn increases the threat they can pose to Iran. Meanwhile the fact that all this investment is going into the military means that there are fewer resources available and less inclination to try and solve problems by other means, making it increasingly likely that any conflict is going to be resolved kinetically, which in turn further reinforces the need for all that military investment.


  • At best it’s the same shitty arguments we heard from crypto grifters and their suckers: take a process that’s complex and manual by design, to allow for independent validation and securing against fraud, and make it faster by cutting those parts out and throwing some high-tech nonsense at the problem that we can claim replaces all the verification and validation. (The fact that they called their system “trustless” in the face of this is deeply ironic.) Maybe it’s the cynicism talking, but I’m even less inclined to give anyone other than maybe the author of that Substack the benefit of the doubt that they actually believed it.

    The ideal customer for this service is the kind of “Visionary Leader” with the “Founder Mindset” and “Drive to Innovate” that lets them see that all those privacy, security, fraud prevention, anti-embezzlement, and whatever else those standards and their associated compliance mechanisms are meant to provide are just pointless obstacles on the path to making obscene amounts of money by burning the world behind you. Often the shit we talk about here makes me think the world has gone mad or stupid, but every so often I feel like I’m staring at the face of capital-E Evil and this is one of those times.




  • Anthropic is constrained in that some of the fixes which should be pushed to users are things which would have significant trade-off in the form of cost or context window, neither of which are palatable to them for reasons this community has discussed at length.

    I think I’m missing something somewhere. One of the most alarming patterns that Jonny found imo was the level of waste involved across unnecessary calls to the source model, unnecessary token churn through the context window from bad architecture, and generally a sense that when creating this neither they nor their pattern extruder had made any effort to optimize it in terms of token use. In other words, changing the design to push some of those calls onto the user would save tokens and thus reduce the user’s cost per prompt, presumably by a fair margin on some of the worst cases.