It’s not always easy to distinguish between existentialism and a bad mood.

  • 18 Posts
  • 643 Comments
Joined 3 years ago
Cake day: July 2nd, 2023


  • Good find. This is never-take-me-seriously stupid, and it also does the beigeness thing of gradually working around an accepted definition in order to almost make a point at the last minute: since (we have apparently concluded, because of uh hypothetical brain surgery and stuff) accountability = improvability + punishability and nothing else, of course software can be held “accountable” in all the ways that matter.

    His big mistake is not doing it at novel length so it’s really obvious that he’s being willfully stupid about it.

  • I use AI sparingly to make sure the company-paid subscription is a net loss for the AI vendor.

    Hey, it could happen.

    Overall, I think it was a bit cookie-cutter for an article of this type, but maybe it’s just the preaching-to-the-choir effect. Even the fact that he ostensibly quit his job over this stuff doesn’t hit as hard as it should; it comes off as if he could have done so at any time, but this way he gets to grandstand about it.

    Also stuff like this:

    It wasn’t a bad job, not by most metrics. It ticked the boxes a job is supposed to tick: good pay. Health insurance. Remote work. Time off. Nice coworkers.

    sounds like it should be in a how do you do, fellow workers copypasta.

  • Their heart seems to be in the right place (police interrogation will be exploitative and brainwashy, with no real consequences for the interrogators), but they sure chose the dumbest possible way to make their point:

    Despite the claims of AI evangelists, chatbots aren’t people and haven’t achieved sentience. The differences between a chatbot and a real person, however, make Heaton’s ability to elicit a false confession more disturbing, not less.

    “ChatGPT lacks many of the vulnerabilities that make people more likely to falsely confess — like stress, fatigue, and sleep deprivation,” said Saul Kassin, a professor emeritus at John Jay College who wrote the book on false confessions. “If ChatGPT can be induced into a false confession, then who isn’t vulnerable?”

  • I checked it out because I was curious whether CEV was some international-relations initialism I’d never heard of; turns out it’s just My Guess About What He Wants in rationalese.

    Excerpt from the definition of Coherent Extrapolated Volition, or how to damage your optic nerve from too much eye-rolling:

    Extrapolated volition is the metaethical theory that when we ask “What is right?”, then insofar as we’re asking something meaningful, we’re asking “What would a counterfactual idealized version of myself want* if it knew all the facts, had considered all the arguments, and had perfect self-knowledge and self-control?” (As a metaethical theory, this would make “What is right?” a mixed logical and empirical question, a function over possible states of the world.)

    A very simple example of extrapolated volition might be to consider somebody who asks you to bring them orange juice from the refrigerator. You open the refrigerator and see no orange juice, but there’s lemonade. You imagine that your friend would want you to bring them lemonade if they knew everything you knew about the refrigerator, so you bring them lemonade instead. On an abstract level, we can say that you “extrapolated” your friend’s “volition”, in other words, you took your model of their mind and decision process, or your model of their “volition”, and you imagined a counterfactual version of their mind that had better information about the contents of your refrigerator, thereby “extrapolating” this volition.