• 14 Posts
  • 187 Comments
Joined 2 years ago
Cake day: July 19th, 2023

  • I’m gonna be polite, but your position is deeply sneerworthy; I don’t really respect folks who don’t read. The article has quite a few quotes from neuroscientist Anil Seth (not to be confused with AI booster Anil Dash) who says that consciousness can be explained via neuroscience as a sort of post-hoc rationalizing hallucination akin to the multiple-drafts model; his POV helps deflate the AI hype. Quote:

    There is a growing view among some thinkers that as AI becomes even more intelligent, the lights will suddenly turn on inside the machines and they will become conscious. Others, such as Prof Anil Seth who leads the Sussex University team, disagree, describing the view as “blindly optimistic and driven by human exceptionalism.” … “We associate consciousness with intelligence and language because they go together in humans. But just because they go together in us, it doesn’t mean they go together in general, for example in animals.”

    At the end of the article, another quote explains that Seth is broadly aligned with us about the dangers:

    In just a few years, we may well be living in a world populated by humanoid robots and deepfakes that seem conscious, according to Prof Seth. He worries that we won’t be able to resist believing that the AI has feelings and empathy, which could lead to new dangers. “It will mean that we trust these things more, share more data with them and be more open to persuasion.” But the greater risk from the illusion of consciousness is a “moral corrosion”, he says. “It will distort our moral priorities by making us devote more of our resources to caring for these systems at the expense of the real things in our lives” – meaning that we might have compassion for robots, but care less for other humans.

    A pseudoscience has an illusory object of study. For example, parapsychology studies non-existent energy fields outside the Standard Model, and criminology asserts that not only do minds exist but some minds are criminal and some are not. Robotics/cybernetics/artificial intelligence studies control loops and systems with feedback, which do actually exist; further, the study of robots directly leads to improved safety in workplaces where robots can crush employees, so it’s a useful science even if it turns out to be ill-founded. I think that your complaint would be better directed at specific AGI position papers published by techbros, but that would require reading. Still, I’ll try to salvage your position:
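Since the whole point is that control loops with feedback actually exist, here's a minimal sketch of the simplest one, a proportional controller; the names and gain are my own illustration, not from any particular robotics text:

```python
def p_controller(setpoint, reading, kp=0.5):
    """One step of a proportional controller: the correction fed back
    into the system is proportional to the current error."""
    return kp * (setpoint - reading)

# Simulate a heater driving a temperature toward 100 degrees.
# Each iteration measures the state and feeds the correction back in;
# with kp=0.5 the error halves every step.
temp = 20.0
for _ in range(50):
    temp += p_controller(100.0, temp)
```

Nothing illusory about the object of study: the loop, its gain, and its convergence are all right there to measure.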

    Any field of study which presupposes that a mind is a discrete isolated event in spacetime is a pseudoscience. That is, fields oriented around neurology are scientific, but fields oriented around psychology are pseudoscientific. This position has no open evidence against it (because it’s definitional!) and aligns with the expectations of Seth and others. It is compatible with definitions of mind given by Dennett and Hofstadter. It immediately forecloses the possibility that a computer can think or feel like humans; at best, maybe a computer could slowly poorly emulate a connectome.




  • Your understanding is correct. It’s worth knowing that the matrix-multiplication exponent actually controls multiple different algorithms. I stubbed a little list a while ago; important examples include several graph-theory algorithms as well as parsing for context-free languages. There’s also a variant of P vs NP for this specific problem, because we can verify that a matrix is a product in quadratic time.

    That Reddit discussion contains mostly idiots, though. We expect an iterative sequence of ever-more-complicated algorithms with ever-slightly-better exponents, approaching quadratic time in the infinite limit. We also expect that at some point a computer will be required to compute those iterates; personally I think Strassen’s approach only barely fits inside a brain and the larger approaches can’t be managed by humans alone.
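    The quadratic-time verification I mentioned is Freivalds’ algorithm; here’s a minimal sketch (function names are mine). The trick is to check `A @ B == C` against random vectors instead of recomputing the product:

```python
import random

def freivalds(A, B, C, k=20):
    """Randomized check that A @ B == C (Freivalds' algorithm).

    Each round multiplies by a random 0/1 vector, costing O(n^2)
    instead of the O(n^omega) cost of recomputing A @ B. A wrong C
    survives any single round with probability at most 1/2, so k
    rounds give an error probability of at most 2**-k.
    """
    n = len(A)
    for _ in range(k):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compute B r, then A (B r), and C r -- three O(n^2) products.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # definitely not the product
    return True  # the product, with probability >= 1 - 2**-k
```

    So verification is quadratic (up to log factors) while the best known multiplication exponent is strictly above 2, which is exactly the gap that makes the P-vs-NP-flavored question interesting here.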



  • Read it to the end and then re-read 2009’s The Gervais Principle. I hope Ed eventually comes back to Rao’s rant because they complement each other perfectly; Zitron’s Business Idiot is Rao’s Clueless! What Rao brings to the table is an understanding that Sociopaths exist and steer the Clueless, and also that the ratio of (visible) Clueless to Sociopaths is an indication of the overall health of an (individual) business; Zitron’s argument is then that we are currently in an environment (the “Rot Economy” in his writing) which is characterized by mostly Clueless business leaders.

    Then re-read Doctorow’s 2022 rant Social Quitting, which introduced “enshittification”, an alternate understanding of Rao’s process. To Rao, a business pivots from Sociopath to Clueless leadership by mere dilution, but for Doctorow, there’s a directed market pressure which eliminates (or M&As) any businesses not willing to give up some Sociopathy in favor of the more generally-accepted Clueless principles. Concretely relevant to this audience, note how Sociopathic approaches to cryptocurrency-oriented banking have failed against Clueless GAAP accounting, not just at the regulatory level but at the level of handshakes between small-business CEOs.

    Somebody could start a new flavor of Marxism here, one which (to quote an old toot of mine @corbin@defcon.social that I can’t find) starts by understanding that management is a failed paradigm of production, and which then reads all of these various managers (Galloway, Rao, and Zitron were all management bros at one point, as were their heroes Scott Adams and Mike Judge) as offering a modicum of insight cloaked in MBA-speak.




  • I adjusted her ESAS downward by 5 points for questioning me, but 10 points upward for doing it out of love.

    Oh, it’s a mockery all right. This is so fucking funny. It’s nothing less than the full application of SCP’s existing temporal narrative analysis to Big Yud’s philosophy. This is what they actually believe. For folks who don’t regularly read SCP, any article about reality-bending is usually a portrait of a narcissist, and the body horror is meant to give analogies for understanding the psychological torture they inflict on their surroundings; the article meanders and takes its time because there’s just so much worth mocking.

    This reminded me that SCP-2718 exists. 2718 is a Basilisk-class memetic cognitohazard; it will cause distress in folks who have been sensitized to Big Yud’s belief system, and you should not click if you can’t handle that. But it shows how these ideas weren’t confined to LW.








  • I’ve been giving professional advice about system administration directly to CEOs and CTOs of startups for over half a decade. They’ve all asked about AI one way or another. While some of my previous employers have had good reasons to use machine learning, none of the businesses I’ve worked with in the past half-decade have had any use for generative AI products, including startups whose entire existence was predicated on generative AI.

    Don’t sign up for a dick-measuring contest without measuring yourself first.




  • The books look alright, though I only read the samples. The testimonials from experts are positive. Maybe compare and contrast with Lox from Crafting Interpreters, whose author is not an ally but not known evil either. In terms of language design, there’s a lot of truth to the idea that Monkey is a boring ripoff of Tiger, which itself is also boring in order to be easier to teach. I’d say that Ball’s biggest mistake is using Go as the implementation language instead of explaining concepts in a language-neutral fashion; committing to one language makes sense when working on a big long-lived project, but not for a single-person exploration.

    Actually, it makes a lot of sense that somebody writing a lot of Go would think that an LLM is impressive. Also, I have to sneer at this:

    Each prompt I write is a line I cast into a model’s latent space. By changing this word here and this phrase there, I see myself as changing the line’s trajectory and its place amidst the numbers. Words need to be chosen with care, since they all have a specific meaning and end up in a specific place in latent space once they’ve been turned into numbers and multiplied with each other, and what I want, what I aim for when I cast, is for the line to end up in just the right spot, so that when I pull on it out of the model comes text that helps me program machines.

    Dude literally just discovered word choice and composition. Welcome to writing! I learned about this in public education when I was maybe 14.


  • I’m guessing that you’re too young to remember. Lucky 10000! In the 1990s, McDonald’s was under attack for a variety of anti-environmentalist practices, and by 2001 there was a class-action lawsuit against them for using beef tallow in fries from a coalition of vegetarians, vegans, and primarily Hindus who were deeply offended that they had been tricked into consuming what they consider to be a sacred animal. In a nutshell, it’s a very racist and revanchist move, not just an anti-environmentalist move.

    Unlike normal, I can’t link to good peer-reviewed articles on the topic. McDonald’s is one of the few groups who can successfully control their Internet presence, and they’ve washed away these controversies as best they can. I almost feel like linking to this summary of the case on Wikipedia is unhelpful, since it’s got so many apologetic caveats. They do this all over Wikipedia; McLibel or Liebeck are also heavily edited in favor of McDonald’s. You’ll have to explicitly add “hindu” or “indian” to search queries; for example, instead of “mcdonalds beef tallow”, try “mcdonalds beef tallow hindu indians”.