This repository is a backup of that leaked source, and this README is a full breakdown of what’s in it, how the leak happened, and, most importantly, the things we now know that were never meant to be public.
I am not making this up.
Claude Code has a full Tamagotchi-style companion pet system called “Buddy”: a deterministic gacha system with species rarity, shiny variants, procedurally generated stats, and a soul description written by Claude on first hatch, like OpenClaw.


I was at a networking event once, and a person had an interesting POV: ever since the first abstraction from binary, we’ve always been trying to communicate with the minerals.
Did they mean “communicate to other humans using the minerals”? Because that I can get behind.
But if they meant “commune with the EM fields and the phonons vibrating around in the crystals by talking to an LLM”…
They unfortunately meant the latter. The idea being that it’s a new iteration of the same abstraction: going from low-level languages to higher ones, like from Assembly to C to C++ to C#.
The thing that has always made that argument sit funky with me is that LLMs are nondeterministic. As much as people claim that output is now cross-referenced and repeatable, my understanding is that there’s still a black-box issue. I work mainly in R, which is essentially a C wrapper for academics who need to make pretty charts and put asterisks after numbers in tables without darkening the doors of the CS department, and learning the ggplot syntax seems like a less painful lift than patiently explaining in plain English what I want my bar chart to look like (although I guess, from my boss’s perspective, I am basically a less tractable Claude in that sense).
I think the only stochastic piece of the usual LLM setup is the temperature-controlled sampling over the final layer’s output logits. Are there other sources of randomness? Agreed either way that they’re black boxes.
I’m not sure; I haven’t really dug in to see whether the generated code varies meaningfully across similarly worded requests. But my understanding is that if you write code, the compiler will compile it the same way each time, whereas if you treat an LLM like an IDE and the prompts as the “code,” there might be variations over repeated “compilations,” leading to drift if you’re working on something iteratively.
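To sketch where that randomness lives: temperature rescales the output logits before a token is sampled, and at temperature zero (greedy decoding) the choice collapses to a deterministic argmax, which is the compiler-like, repeatable case. This is a toy illustration under that assumption, not any real model’s decoder; `sample_token` and the example logits are made up:

```python
import math
import random

def sample_token(logits, temperature, rng=random):
    """Pick a token index from raw logits after temperature scaling.

    temperature == 0 means greedy argmax (fully deterministic);
    higher temperatures flatten the distribution (more random draws).
    """
    if temperature == 0:
        # Greedy decoding: no randomness at all.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the softmax distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]
print(sample_token(logits, temperature=0))  # → 0, same every run
# With temperature > 0, only a seeded RNG makes the draw repeatable.
print(sample_token(logits, temperature=1.0, rng=random.Random(42)))
```

The point of the seeded `random.Random(42)` line is that sampling is only nondeterministic when nobody pins the seed; run-to-run “drift” is a property of the decoding setup, not something inherent to the math.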