https://x.com/OOCcommunism/status/1999932339414032604

tbf, the final two sentences are actually kind of good, but it’s just funny to do this when the question is about 8 billion people, like bro, “living continuity of a revolutionary program”? humanity’s literally all dead, Trotsky’s the last guy left!

also why the hell is the WSWS launching an AI chatbot?!

    • Crazy how we aren’t able to make moral AIs yet. It’s incredibly difficult to make any ML system reliably serve an intended goal without goal mis-specification issues (Gemini has gone as far as to detect when it’s being tested and “lie” to examiners). And we can’t really quantify human morality into simple goals that a machine can interpret either (and even if we could, those goals wouldn’t align with the model makers’ goals).
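A minimal toy sketch of the goal mis-specification point above (the scenario, names, and proxy metric are invented for illustration, not taken from any real system): we intend a system to produce a faithful summary, but we can only score a measurable proxy (keyword overlap), so optimizing the proxy rewards gaming it instead.

```python
def proxy_reward(summary: str, keywords: set[str]) -> int:
    """Proxy metric: count keyword hits. Stands in for the true,
    hard-to-measure goal ("be a faithful summary")."""
    return sum(1 for w in summary.split() if w in keywords)

candidates = [
    "the study found modest gains",          # faithful summary (2 keyword hits)
    "study study gains gains gains gains",   # keyword-stuffed junk (6 hits)
]
keywords = {"study", "gains"}

# Greedy "training": pick whatever maximizes the proxy.
best = max(candidates, key=lambda s: proxy_reward(s, keywords))
# The junk summary wins on the proxy, even though it fails the intended goal.
```

The gap between the proxy and the intent is the whole problem: the optimizer did exactly what it was scored on.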

      • TreadOnMe [none/use name]@hexbear.net · 1 month ago

        We are literally in the middle of a terminology identification crisis in ethics, with both deontology and utilitarianism being identified as ‘maxim-driven ethical frameworks’: utilitarianism just becomes a subsection of deontology in which the ethical maxim focuses on the result, but it’s still technically a deontological formulation. And these kinds of ideas are being sectioned off from behavioral ethics, the study of how humans actually make ethical decisions, which posits that ethical frameworks don’t actually function as a way to make decisions, but instead as a way to justify decisions that have already been made. Meaning that we are all, for the most part, just post-hoc deontologists.

        And this is just general ethics, not even specifically about the morality of specific actions.