Programmer and sysadmin (DevOps?), wannabe polymath in tech, science and the mind. Neurodivergent, disabled, burned out, and close to throwing in the towel, but still liking ponies 🦄 and sometimes willing to discuss stuff.

  • 7 Posts
  • 2.67K Comments
Joined 2 years ago
Cake day: June 26th, 2023





  • I’m not in the US, and I tend to treat the far-right scene about the same as flerfers (flat-earthers) and antivaxxers: block and avoid… I wouldn’t be surprised if most people did the same.

    Apparently, this has been brewing steadily over time; the ideology can be traced back to… pretty much Plato’s “Republic”, 2400 years ago, which advocated “wise Kings” as the ideal rulers of city-states. Everything from there on is a rehash of the same old theme.

    Technocracy is a more modern take. It starts even before Von Braun and his Project Mars (with its ruler titled “Elon”), followed by VISA and Dee Hock’s attempt at establishing a digital currency, from there to PayPal and Elon Musk’s failure to turn it into one, a short-lived digital gold attempt, Bitcoin and its “mining”, and the whole crypto sphere with DAOs and NFTs, with Musk’s attempts to make Project Mars a reality in between. Add to the mix works like Orwell’s “1984”, Huxley’s “Brave New World”, Hubbard’s works and legacy, Neal Stephenson’s “Cryptonomicon”, and similar.

    Democrats used to see that line of thinking as a cautionary tale, with Cypherpunk movements sitting at an intersection, while Neoliberal Capitalists seem to have shaken hands with the far-right to expand and implement it as if it were a manual.



  • In 2024, the U.S. ran a trade deficit with Canada of about $55 billion. That same year, it ran a deficit with Vietnam of about $123 billion, more than twice as much, and with Thailand of about $46 billion.

    Definition of trade deficit: someone accepted more of your made-up money (AKA, credit-backed fiat) in exchange for actual goods and services.

    …and your leaders make you believe that’s somehow bad for you 🙄







  • Should, but won’t. The genie is out of the bag, there’s no putting it back… and it was a flimsy bag to begin with.

    Reminds me of “The Bicentennial Man”, where people decided to turn against humanoid robots. That won’t happen here: some people are already spending a fortune on humanoid silicone dolls. Humanoid robot slaves are a much more likely future, with all that entails.

    Even worse: if modulation were forced by law to make AIs sound robotic… scammers, who are already breaking the law anyway, would have a field day using non-modulated voices.





  • BEWARE THE SPONGE!!!

    Inkjets dump a ton of ink into a sponge “diaper” with every cleaning cycle; over time it gets saturated and becomes prone to leaks. You DO NOT want ink from it spilling all over wherever you put the printer.

    Other than that, it’s hard to tell. If you can get the nozzles clean, and it doesn’t have software/firmware issues, a general disassembly, cleaning, and reassembly can bring a printer back to life. It usually takes more time and effort than it’s worth… but if you have more spare time than spare change, it might be worth a shot.



  • The other way around: they started with Alibaba’s Qwen, then fine-tuned it to match the thinking process behind 1,000 hand-picked queries run on Google’s Gemini 2.0.

    That $50 price tag is kind of silly, but it’s like taking an old car and copying the mpg, seats, and paint job from a new one. It’s still an old car underneath; it just looks and behaves like a new one in some respects.

    I think it’s interesting that old models can be “upgraded” for such a low price. It points to something many have suspected for some time: LLMs are actually “too large”; they don’t need all that size to show some of the more interesting behaviors.
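    For the curious, here’s a minimal sketch of that kind of distillation fine-tune. The model choice, file name, and hyperparameters are my own illustrative assumptions, not the actual recipe: take a small open model and train it on prompt/reasoning-trace pairs sampled from a stronger model.

```python
# Minimal sketch of a distillation-style fine-tune, NOT the actual s1
# recipe: model choice, file name, and hyperparameters are assumptions.
import json
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

BASE = "Qwen/Qwen2.5-0.5B-Instruct"        # small stand-in base model
tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)

class TraceDataset(Dataset):
    """Each JSONL line: {"prompt": ..., "trace": ...}, where "trace" is
    the stronger model's full reasoning plus its final answer."""
    def __init__(self, path):
        self.rows = [json.loads(line) for line in open(path)]
    def __len__(self):
        return len(self.rows)
    def __getitem__(self, i):
        row = self.rows[i]
        enc = tokenizer(row["prompt"] + "\n" + row["trace"] + tokenizer.eos_token,
                        truncation=True, max_length=2048,
                        padding="max_length", return_tensors="pt")
        ids = enc.input_ids.squeeze(0)
        mask = enc.attention_mask.squeeze(0)
        labels = ids.clone()
        labels[mask == 0] = -100           # no loss on padding tokens
        return {"input_ids": ids, "attention_mask": mask, "labels": labels}

trainer = Trainer(
    model=model,
    train_dataset=TraceDataset("traces.jsonl"),  # ~1000 hand-picked examples
    args=TrainingArguments(output_dir="distilled", num_train_epochs=3,
                           per_device_train_batch_size=1, learning_rate=1e-5),
)
trainer.train()
```

    The surprising part is that a budget-sized training set like this is apparently enough, which supports the “they don’t need all that size” suspicion.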



  • There are several parts to the “spying” risk:

    Sending private data to a third-party server for the model to process… well, you just sent it; game over. Use local models, machines (hopefully) under your control, or ones you trust (AWS? Azure? GCP?.. maybe).

    All LLMs are black boxes; the only way to make an educated guess about their risk is to compare the training data and procedure against the evaluation data of the final model. There is still a risk of hallucination and deception, but it can be quantified to some degree.

    DeepSeek uses a “Mixture of Experts” approach to reduce computational load… which is great, as long as you trust the “Experts” it routes through. Since the LLM that was released for free is still a black box, and there is no way to verify how those “Experts” were trained, there is also no way to know whether some of them might be trained to behave maliciously under specific conditions. It could just as easily be a Trojan horse with little chance of being detected until it’s too late.
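    To make the “Experts” part concrete, here’s a toy sketch of a Mixture-of-Experts layer. Shapes, expert count, and top-k are illustrative, nothing like DeepSeek’s actual architecture: a learned router sends each token through only a few expert sub-networks, and from the outside you only ever see the blended output, which is exactly why individual “Experts” are so hard to audit.

```python
# Toy Mixture-of-Experts layer (illustrative shapes, not DeepSeek's
# architecture): a learned router picks top_k experts per token, and
# only those experts run on that token.
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    def __init__(self, dim=512, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)       # learned gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                          nn.Linear(4 * dim, dim))
            for _ in range(n_experts))
        self.top_k = top_k

    def forward(self, x):                             # x: (tokens, dim)
        scores = self.router(x).softmax(dim=-1)       # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = weights / weights.sum(-1, keepdim=True)  # renormalize top-k
        out = torch.zeros_like(x)
        for k in range(self.top_k):                   # only selected experts run
            for e in idx[:, k].unique().tolist():
                sel = idx[:, k] == e                  # tokens routed to expert e
                out[sel] += weights[sel, k, None] * self.experts[e](x[sel])
        return out

tokens = torch.randn(16, 512)     # a batch of 16 token embeddings
print(MoELayer()(tokens).shape)   # torch.Size([16, 512])
```

    The router and every expert are just learned blobs of numbers; nothing in the released weights tells you what any single expert was trained on, or what it might do on some rare input.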

    “it’s being trained on the output of other LLMs, which makes it much more cheap but, to me it seems, also even less trustworthy”

    The feedback degradation of an LLM happens when it gets fed its own output as part of the training data. We don’t know exactly what training data was used for DeepSeek, but as long as it was generated by some different LLM, there would be little risk of a feedback reinforcement loop.

    Generally speaking, I would run the DeepSeek LLM in an isolated environment, but not trust it integrated into any sort of non-sandboxed agent. The downloadable smartphone app is possibly “safe” as long as you restrict the hell out of it, don’t let it access anything on its own, and don’t feed it anything remotely sensitive.
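    As a sketch of what “isolated” can look like in practice, here’s a minimal example that queries a model served entirely on your own machine. I’m assuming an Ollama server on localhost and the deepseek-r1:7b model tag; both are illustrative.

```python
# Minimal sketch of the "keep it local" setup: query a model served on
# your own machine via Ollama's HTTP API. Model tag and prompt are
# illustrative; the point is that the prompt never leaves the box.
import json
import urllib.request

def ask_local(prompt, model="deepseek-r1:7b",
              url="http://localhost:11434/api/generate"):
    payload = json.dumps({"model": model, "prompt": prompt,
                          "stream": False}).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:   # request stays on localhost
        return json.loads(resp.read())["response"]

print(ask_local("Summarize this sensitive document: …"))
```

    Combine that with firewall rules that keep the serving process off the network, and the “you just sent it, game over” failure mode is off the table.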