• 0 Posts
  • 172 Comments
Joined 2 years ago
cake
Cake day: July 29th, 2023

help-circle
  • In today’s torment nexus development news… you know how various cyberpunky type games let you hack into an enemy’s augmentations and blow them up? Perhaps you thought this was stupid and unrealistic, and you’d be right.

    Maybe that’s the wrong example. How about a cursed evil ring that when you put it on, you couldn’t take it off and it wracks you with pain? Who hasn’t wanted one of those?

    Happily, hard working torment nexus engineers have brought that dream one step closer, by having “smart rings”, powered by lithium polymer batteries. Y’know, the things that can go bad, and swell up and catch fire? And that you shouldn’t puncture, because that’s a fire risk too, meaning cutting the ring off is somewhat dangerous? Fun times abound!

    https://bsky.app/profile/emily.gorcen.ski/post/3m25263bs3c2g

    image description

    A pair of tweets, containing the text

    Daniel aka ZONEofTECH on x.com: “Ahhh…this is…not good. My Samsung Galaxy Ring’s battery started swelling. While it’s on my finger 😬. And while I’m about to board a flight 😬 Now I cannot take it off and this thing hurts. Any quick suggestions

    Update:

    • I was denied boarding due to this (been travelling for ~47h straight so this is really nice 🙃). Need to pay for a hotel for the night now and get back home tomorrow👌
    • was sent to the hospital, as an emergency
    • ring got removed

    You can see the battery all swollen. Won’t be wearing a smart ring ever again.




  • Oh hey, bay area techfash enthusing about AI and genocidal authoritarians? Must be a day ending in a Y. Today it is Vercel CEO and next.js dev Guillermo Rauch

    https://nitter.net/rauchg/status/1972669025525158031

    image description

    A screenshot of a tweet by Guillermo Rauch, the CEO of Vercel. There’s a photograph of him next to Netanyahu. The tweet reads:

    Enjoyed my discussion with PM Netanyahu on how Al education and literacy will keep our free societies ahead. We spoke about Al empowering everyone to build software and the importance of ensuring it serves quality and progress. Optimistic for peace, safety, and greatness for Israel and its neighbors.

    I also have strong opinions about not using next.js or vercel (and server-side javascript in general is a bit of a car crash) but even if you thought it was great you should probably have a look around for alternatives. Just not ruby on rails, perhaps.







  • Woke up to some hashtag spam this morning

    AI’s Biggest Security Threat May Be Quantum Decryption

    which appears to be over of those evolutionary “transitional forms” between grifts.

    The sad thing is the underlying point is almost sound (hoarding data puts you at risk of data breaches, and leaking sensitive data might be Very Bad Indeed) but it is wrapped up in so much overhyped nonsense it is barely visible. Naturally, the best and most obvious fix — don’t hoard all that shit in the first place — wasn’t suggested.

    (it also appears to be a month-old story, but I guess there’s no reason for mastodon hashtag spammers to be current 🫤)


  • One to watch from a safe distance: dafdef, an “ai browser” aimed at founders and “UCG creators”, named using the traditional amazon-keysmash naming technique and and following the ai-companies-must-have-a-logo-suggestive-of-an-anus style guide.

    Dafdef learns your browsing patterns and suggests what you’d do next After watching you fill out similar forms a few times, Dafdef starts autocompleting them. Apply with your startup to YC, HF0 and A16z without wasting your time.

    So… spicy autocomplete.

    But that’s not all! Tired of your chatbot being unable to control everything on your iphone, due to irksome security features implemented by those control freaks at apple? There’s a way around that!

    Introducing the “ai key”!

    A tiny USB-C key that turns your phone into a trusted AI assistant. It sees your screen, acts on your behalf, and remembers — all while staying under your control.

    I’m sure you can absolutely trust an ai browser connected to a tool that has nearly full control over your phone to not do anything bad, because prompt injection isn’t a thing, right?

    (I say nearly full, because I think Apple Pay requires physical interaction with a phone button or face id, but if dafdef can automate the boring and repetitive parts of using your banking app then having full control of the phone might not matter)

    h/t to ian coldwater



  • It isn’t clear to me at this point that such research will ever be funded in english-speaking places without a significant set of regime changes… no politician or administrator can resist outsourcing their own thinking to llm vendors in exchange for funding. I expect the US educational system will eventually provide a terrible warning to everyone (except the UK, whose government looks at the US and says “oh my god, that’s horrifying. How can we be more like that?”).

    I’m probably just feeling unreasonably pessimistic right now, though.



  • It is related, inasmuch as it’s all generated from the same prompt and the “answer” will be statistically likely to follow from the “reasoning” text. But it is only likely to follow, which is why you can sometimes see a lot of unrelated or incorrect guff in “reasoning” steps that’s misinterpreted as deliberate lying by ai doomers.

    I will confess that I don’t know what shapes the multiple “let me just check” or correction steps you sometimes see. It might just be a response stream that is shaped like self-checking. It is also possible that the response stream is fed through a separate llm session when then pushes its own responses into the context window before the response is finished and sent back to the questioner, but that would boil down to “neural networks pattern matching on each other’s outputs and generating plausible response token streams” rather than any sort of meaningful introspection.

    I would expect the actual systems used by the likes of openai to be far more full of hacks and bodges and work-arounds and let’s-pretend prompts that either you or I could imagine.


  • It’s just more llm output, in the style of “imagine you can reason about the question you’ve just been asked. Explain how you might have come about your answer.” It has no resemblance to how a neural network functions, nor to the output filters the service providers use.

    It’s how the ai doomers get themselves into a flap over “deceptive” models… “omg it lied about its train of thought!” because if course it didn’t lie, it just edited a stream of tokens that were statistically similar to something classified as reasoning during training.



  • I might be the only person here who thinks that the upcoming quantum bubble has the potential to deliver useful things (but boring useful things, and so harder to build hype on) but stuff like this particularly irritates me:

    https://quantumai.google/

    Quantum fucking ai? Motherfucker,

    • You don’t have ai, you have a chatbot
    • You don’t have a quantum computer, you have a tech demo for a single chip
    • Even if you had both of those things, you wouldn’t have “quantum ai”
    • if you have a very specialist and probably wallet-vaporisingly expensive quantum computer, why the hell would anyone want to glue an idiot chatbot to it, instead of putting it in the hands of competent experts who could actually do useful stuff with it?

    Best case scenario here is that this is how one department of Google get money out of the other bits of Google, because the internal bean counters cannot control their fiscal sphincters when someone says “ai” to them.