• 0 Posts
  • 290 Comments
Joined 3 years ago
Cake day: July 29th, 2023



  • Just seen a clip of aronofsky’s genai revolutionary war thing and it is incredibly bad. Just… every detail is shit. Ways in which I hadn’t previously imagined that the uncanny valley would intrude. Even if it weren’t for the simulated flesh golems, one of whom seems to be wearing anthony hopkins’ skin as a clumsy disguise, the framing and pacing just feel like the model was trained on endless adverts and corporate talking head videos, and either it was impossible to edit, or none of the crew have any idea what even mediocre films look like.

    I also hadn’t appreciated before that genai lip sync/dubbing was just embarrassing. I think I’ve only seen a couple of very short genai video clips before, and the most recent at least 6 months ago, but this just seems straight up broken. Have the people funding this stuff ever looked at what is being generated?

    https://bsky.app/profile/ethangach.bsky.social/post/3mdljt2wdcs2v







  • I have mixed feelings about this one: The Enclosure feedback loop (or how LLMs sabotage existing programming practices by privatizing a public good).

    The author is right that stack overflow has basically shrivelled up and died, and that llm vendors are trying to replace it with private sources of data they’ll never freely share with the rest of us, but I don’t think that chatbot dev sessions are in any way “high quality data”. The number of occasions when a chatbot-user actually introduces genuinely useful and novel information will be low, and the ability of chatbot companies to even detect that circumstance will be lower still. It isn’t so much enclosing a valuable commons as squirting sealant around all the doors so the automated fart-huffing system and its audience can’t get any fresh air.







  • So, there’s a kind of security investigation called “dorking”, where you use handy public search tools to find particularly careless software misconfigurations that get indexed by eg. google. One tool for that sort of searching is github code search.

    Turns out that a) claude chat logs get automatically saved to a file under .claude/logs and b) quite a lot of people don’t actually check what they’re adding to source control, and you can actually search github for that sort of thing with a path: code search query (though you probably need to be signed in to github first, it isn’t completely open).

    I didn’t find anything even remotely interesting (and watching people’s private project manager fantasy roleplay isn’t something I enjoy), but viss says they’ve found credentials, which is fun.

    https://mastodon.social/@Viss/115923109466960526
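    The flip side of that search is checking your own commits before they end up in someone else’s dork results. A minimal pre-commit sketch in Python — the .claude/logs location is the one described above, but the glob patterns and the hook wiring are illustrative assumptions, not anything Claude or github documents:

```python
# Hypothetical pre-commit guard: flag staged paths that look like Claude
# chat logs before they land in source control. The patterns below are
# illustrative guesses, not an official log layout.
import fnmatch
import subprocess

SUSPECT_PATTERNS = [
    ".claude/logs/*",      # logs at the repository root
    "*/.claude/logs/*",    # logs in any subdirectory
]

def suspicious_paths(paths):
    """Return the subset of paths matching any suspect pattern."""
    return [
        p for p in paths
        if any(fnmatch.fnmatch(p, pat) for pat in SUSPECT_PATTERNS)
    ]

def staged_paths():
    """List files staged for the next commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

# Wire it into .git/hooks/pre-commit along the lines of:
#   hits = suspicious_paths(staged_paths())
#   if hits:
#       sys.exit("refusing to commit: " + ", ".join(hits))
```

    Adding .claude/ to .gitignore does the same job with less ceremony, of course — the script just catches the case where nobody thought to.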




  • Armin Ronacher, who is an experienced software dev with a fair number of open and less-open source projects under his belt, was up until fairly recently a keen user of llm coding tools. (he’s also the founder of “earendil”, a pro-ai software pbc, and any company with a name from tolkien’s legendarium deserves suspicion these days)

    His faith in ai seems to have taken a bit of a knock lately: https://lucumr.pocoo.org/2026/1/18/agent-psychosis/

    He’s not using psychosis in the sense of people who have actually developed serious mental health issues as a result of chatbot use, but of software developers who seem to have lost touch with what they were originally trying to do and just kind of roll around in the slop, mistaking it for productivity.

    When Peter first got me hooked on Claude, I did not sleep. I spent two months excessively prompting the thing and wasting tokens. I ended up building and building and creating a ton of tools I did not end up using much. “You can just do things” was what was on my mind all the time but it took quite a bit longer to realize that just because you can, you might not want to. It became so easy to build something and in comparison it became much harder to actually use it or polish it. Quite a few of the tools I built I felt really great about, just to realize that I did not actually use them or they did not end up working as I thought they would.

    You feel productive, you feel like everything is amazing, and if you hang out just with people that are into that stuff too, without any checks, you go deeper and deeper into the belief that this all makes perfect sense. You can build entire projects without any real reality check. But it’s decoupled from any external validation. For as long as nobody looks under the hood, you’re good. But when an outsider first pokes at it, it looks pretty crazy.

    He’s still pro-ai, and seems to be vaguely hoping that improvements in tooling and dev culture will help stem the tide of worthless slop prs that are drowning every large open source project out there, but he has no actual idea if any of that can or will happen (which it won’t, of course, but faith takes a while to fade).

    As always though, the first step is to realise you have a problem.



  • This is fun: a zero-click android exploit that allows arbitrary code execution and privilege escalation. Y’know, the worst kind. How did we get here?

    Over the past few years, several AI-powered features have been added to mobile phones that allow users to better search and understand their messages. One effect of this change is increased 0-click attack surface, as efficient analysis often requires message media to be decoded before the message is opened by the user. One such feature is audio transcription. Incoming SMS and RCS audio attachments received by Google Messages are now automatically decoded with no user interaction. As a result, audio decoders are now in the 0-click attack surface of most Android phones.

    AI, making everything worse, even before it runs!

    https://projectzero.google/2026/01/pixel-0-click-part-1.html

    Every now and then, I think about going back to android, and then I read stuff like this. FWIW, iOS had a closely related bug, but compiled the offending code with bounds checks, so it wasn’t usefully exploitable (and required some user interaction, too).

    Anyway, if you do android, maybe check if automatic transcription is enabled.