It’s not always easy to distinguish between existentialism and a bad mood.

  • Maybe it’s just CEO dick measuring, so chads Nadella and Pichai can both claim a rock hard 20-30% while virgin Zuckerberg is exposed as not even knowing how to put the condom on.

    Microsoft CTO Kevin Scott previously said he expects 95% of all code to be AI-generated by 2030.

    Of course he did.

    The Microsoft CEO said the company was seeing mixed results in AI-generated code across different languages, with more progress in Python and less in C++.

    So the more permissive the language is at compile time, the better the AI comes out smelling? What a completely unanticipated twist of fate! (Toy sketch of the mechanism below.)
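
    To make that concrete, here’s a toy sketch of my own (an assumed example, nothing from the article): the exact same type confusion that a C++ compiler rejects before the program ever runs is accepted by Python, where it fails at runtime, or worse, doesn’t fail at all.

        # Toy example: nothing checks the types, so this "works".
        def total_cost(price, quantity):
            return price * quantity

        print(total_cost(9.99, 3))    # 29.97, looks great in the demo

        # Feed it a string from a web form and there is no error at all,
        # just a silently wrong answer (string repetition):
        print(total_cost("9.99", 3))  # '9.999.999.99'

        # The C++ equivalent, double total_cost(double, int), called with a
        # const char*, never gets past the compiler, so the failure is
        # counted before anyone can score the generated code as a success.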




  • Conversely, people who may not look or sound like a traditional expert, but are good at making predictions

    The weird rationalist assumption that being good at predictions is a standalone skill some people are just gifted with (see also the emphasis on superpredictors as a thing in themselves, just clamoring to come out of the woodwork but for the lack of sufficient monetary incentive) tends to come off as if an important part of the prediction market project were for rationalists to isolate the Muad’Dib gene.










  • Here’s a screenshot of a skeet of a screenshot of a tweet featuring an unusually shit take on WW2 by Moldbug:

    link

    transcript

    skeet by Joe Stieb: Another tweet that should have ended with the first sentence.

    Also, I guess I’m a “World War Two enjoyer”

    tweet by Curtis Yarvin: There is very very extensive evidence of the Holocaust.

    Unfortunately for WW2 enjoyers, the US and England did not go to war to stop the Holocaust. They went to war to stop the Axis plan for world conquest.

    There is no evidence of the Axis plan for world conquest.

    edit: hadn’t seen Yarvin’s Twitter feed before, that’s one high-octane shit show.









  • The first prompt programming libraries start to develop, along with the first bureaucracies.

    I went three layers deep into his references and his references’ references to find out what the hell prompt programming is supposed to be, and ended up in a gwern footnote:

    It’s the ideologized version of You’re Prompting It Wrong. Which I suspected but doubted, because why would they pretend that LLMs being finicky and undependable, unless you luck into very particular ways of asking for very specific things, is a sign that they’re doing well?

    gwern wrote:

    I like “prompt programming” as a description of writing GPT-3 prompts because ‘prompt’ (like ‘dynamic programming’) has almost purely positive connotations; it indicates that iteration is fast as the meta-learning avoids the need for training so you get feedback in seconds; it reminds us that GPT-3 is a “weird machine” which we have to have “mechanical sympathy” to understand effective use of (eg. how BPEs distort its understanding of text and how it is always trying to roleplay as random Internet people); implies that prompts are programs which need to be developed, tested, version-controlled, and which can be buggy & slow like any other programs, capable of great improvement (and of being hacked); that it’s an art you have to learn how to do and can do well or poorly; and cautions us against thoughtless essentializing of GPT-3 (any output is the joint outcome of the prompt, sampling processes, models, and human interpretation of said outputs).
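
    Strip out the ideology and what’s left is mundane: treat the prompt like source code. A minimal sketch of that reading, in Python, with a hypothetical llm_complete() stub standing in for whatever completion API you’d actually call:

        # The prompt as a versioned, tested artifact. llm_complete() is a
        # hypothetical stub, not a real library call.

        SUMMARIZE_PROMPT_V3 = (
            "Summarize the following text in one sentence.\n"
            "Text: {text}\n"
            "Summary:"
        )

        def llm_complete(prompt: str) -> str:
            # Stand-in for a real completion request.
            return "A stubbed one-sentence summary."

        def summarize(text: str) -> str:
            return llm_complete(SUMMARIZE_PROMPT_V3.format(text=text))

        def test_summary_is_one_sentence():
            # Prompts regress like code, so they get regression tests like code.
            out = summarize("GPT-3 is a language model released in 2020.")
            assert out.count(".") <= 1, "v3 regression: multi-sentence output"

        test_summary_is_one_sentence()

    Whether any of that actually makes the model dependable is, of course, the part being asserted rather than shown.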