• 9 Posts
  • 987 Comments
Joined 2 years ago
Cake day: January 16th, 2024

  • The trope of somebody going insane as the world ends does not appeal to me as an author, including in my role as the author of my own life. It seems obvious, cliché, predictable, and contrary to the ideals of writing intelligent characters. Nothing about it seems fresh or interesting. It doesn’t tempt me to write, and it doesn’t tempt me to be.

    When I read HPMOR, which was years ago, before I knew who tf Yud was and when I still thought Harry was intentionally written as a deeply flawed character and not a fucking self-insert, my favourite part was Hermione’s death. Harry then falls into grief he is unable to cope with, dissociating to such an insane degree that he stops viewing most other people as thinking and acting individuals. He quite literally goes insane as his world - his friend, and his illusion of being the smartest and always in control of the situation - ends.

    Of course, in hindsight I now know this is just me inventing a much better character and story, and that Yud is full of shit, but I find it funny that he inadvertently wrote a character behaving insanely and probably thought he’d written a turborational guy completely in control of his own feelings.

  • I mean, if you’ve ever toyed around with neural networks or similar ML models, you know it’s basically impossible to divine what the hell is going on inside just by looking at the weights, even if you try to plot them or visualise them in other ways (there’s a rough sketch of this at the end of this comment).

    There’s a whole branch of ML about explainable or white-box models, because it turns out you have to put in extra care and design the system around being explainable in the first place to be able to reason about its internals (the second sketch below contrasts the two). There’s no evidence OpenAI put any effort towards this, instead focusing on cool-looking outputs they can shove into a presser.

    In other words, “engineers don’t know how it works” can have two meanings - that they’re hitting computers with wrenches, hoping for the best with no rhyme or reason; or that they don’t have a good model of what makes the chatbot produce certain outputs, i.e. just by looking at an output it’s not really possible to figure out which specific training data it comes from or how to stop the model from producing it on a fundamental level.

    The former is demonstrably false and almost a strawman - I don’t know who believes that. A lot of the people who work at OpenAI are misguided but otherwise incredibly clever programmers and ML researchers, and the sheer fact that this thing hasn’t collapsed under its own weight is a great engineering feat, even if the externalities it produces are horrifying. The latter is, as far as I’m aware, largely true, or at least I haven’t seen any hints that would falsify it. If OpenAI had satisfyingly solved the explainability problem, it’d be a major achievement everyone would be talking about.
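
    Since this is the crux, here’s a minimal sketch of the first point, assuming PyTorch (the toy model and its sizes are made up for illustration): even for a tiny network, “looking at the weights” just gives you piles of unlabelled floats, and real models have billions of them.

    ```python
    # Minimal sketch (assumes PyTorch): this is all the introspection you get out of the box.
    import torch.nn as nn

    # A toy MLP standing in for "a neural network"; the sizes are arbitrary.
    model = nn.Sequential(
        nn.Linear(8, 16),
        nn.ReLU(),
        nn.Linear(16, 2),
    )

    # Names, shapes, and raw floats - nothing here says *why* any particular output gets produced.
    for name, param in model.named_parameters():
        print(name, tuple(param.shape))
        print(param.detach().flatten()[:5])  # first few values: just unlabelled numbers
    ```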
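
    And, for contrast with the explainability point above, a sketch of the white-box end of the spectrum, assuming scikit-learn (dataset and tree depth picked arbitrarily): the explanation falls out because the model class was chosen to be readable in the first place, not by staring at a trained black box afterwards.

    ```python
    # Minimal sketch (assumes scikit-learn): a model designed around being explainable.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

    # The learned decision rules can be printed back out verbatim.
    print(export_text(clf, feature_names=list(iris.feature_names)))
    ```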

  • My completely PIDOOMA take is that if you’re self-interested and manipulative, you’re already treating most if not all people as lesser - less savvy, less smart than you. So the mere fact that you can half-ass shit with a bot and declare yourself an expert in everything, without needing such things as “collaboration with other people”, ew, is like a shot of cocaine into your eyeball.

    LLMs’ tone is also very bootlicking, so if you’re already narcissistic and you get a tool that tells you yes, you are just the smartest boi, well… To quote a classic, it must be like being repeatedly kicked in the head by a horse.