Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
TIL Musk has a Nobel Peace Prize nomination for this year
@swlabr @techtakes Anybody can nominate: the true sign that the simulation has been handed over to drunken frat boys will be if he *wins*.
but would it beat the nobel peace prize for kissinger?
I thought it might be that kind of deal. I learned of this when I saw a pair of op-eds, one saying a W is deserved and the other saying the nom was insane.
Kelsey Piper continues to bluecheck:
What would some good unifying demands be for a hostile takeover of the Democratic party by centrists/moderates?
As opposed to the spineless collaborators who run it now?
We should make acquiring ID documents free and incredibly easy and straightforward and then impose voter ID laws, paper ballots and ballot security improvements along with an expansion of polling places so everyone participates but we lay the ‘was it a fair election’ qs to rest.
Presuming that Republicans ever asked “was it a fair election?!” in good faith, like a true jabroni.
i know that it’s about conservative crackheadery re:allegations of election fraud, but it’s lowkey unhinged that americans don’t have national ID. i also know that republicans blocked it, because they don’t want problems solved, they want to stay mad about them. in poland for example, it’s a requirement to have ID, it’s valid for 10 years and it’s free of charge. passport costs $10 to get and it takes a month, sometimes less, from filing a form to getting one. there’s also a govt service where you can get some things done remotely, including govt supplied digital signature that you can use to sign files and is legally equivalent to regular signature https://en.wikipedia.org/wiki/EPUAP
I saw that yesterday. I was tempted to post it here but instead I’ve been trying very hard not to think of this eldritch fractal of wrongness. It’s too much, man.
This isn’t even skating towards where the puck is, it’s skating in a fucking swimming pool.
What would some good unifying demands be for a hostile takeover of the Democratic party by centrists/moderates?
me, taking this at face value, and understanding the political stances of the democrats, and going by my definition of centrist/moderate that is more correct than whatever the hell Kelsey Piper thinks it means: Oh, this would actually push the democrats left.
Anyway, jesus christ I regret clicking on that name and reading. How the fuck is anyone this stupid. Vox needs to be burned down.
unifying demands
hostile takeover
Pick one, you can’t have both.
Presuming that Republicans ever asked “was it a fair election?!” in good faith, like a true jabroni.
Imagine saying this after the birther movement persisted even when the birth certificate was shown. “Just admit you didn’t fuck pigs, and this pigfucking will be gone”.
those opinions should come with a whiplash warning, fucking hell
can’t wait to once again hear that someone is sure we’re “just overreacting” and that
~~star of david~~ ~~passbooks~~ voter ID laws will be totes fine. I’m sure it’ll be a really lovely conversation with a perfectly sensible and caring human. :|
Deep thinker asks why?
Thus spoketh the Yud: “The weird part is that DOGE is happening 0.5-2 years before the point where you actually could get an AGI cluster to go in and judge every molecule of government. Out of all the American generations, why is this happening now, that bare bit too early?”
Yud, you sweet naive smol uwu baby
esianboi, how gullible do you have to be to believe that a) t-minus 6 months to AGI kek (do people track these dog shit predictions?) b) the purpose of DOGE is just accountability and definitely not the weaponized manifestation of techno oligarchy ripping apart our society for the copper wiring in the walls?

An extreme Boss Baby tweet.
bahahahaha “judge every molecule.” I can’t believe I ever took this guy even slightly seriously.
The worst part is I can’t tell if that’s not meant to be taken literally or if it is.
Yud be like: "kek you absolute rubes. ofc I simply meant AI would be like a super accountant. I didn’t literally mean it would be able to analyze gov’t waste from studying the flow of matter at the molecular level… heh, I was just kidding… unless 🥺 ? "
He retweeted somebody saying this:
The cheat code to reading Yudkowsky- at least if you’re not doing death-of-the-author stuff- is that he believes the AI doom stuff with completely literal sincerity. To borrow Orwell’s formulation, he believes in it the way he believes in China.
That thread is quite something, going from “yud is extraordinarily thorough (much more thorough than i could possibly be) in examining the ground directly below a streetlamp, in his search for his keys”, that ‘he believes it like he believes in China’ to ‘honestly, i should be reading him. we have starkly different spiritual premises- and i smugly presume my spiritual premises are informed by better epistemology’
“judge every molecule” and “simulation hypothesis” probably have a bit of a fling going
“The AI is attuned to every molecular vibration and can reconstruct you by extrapolation from a piece of fairy cake” is a necessary premise of the Basilisk that they’ve spent all that time saying they don’t believe in.
Quantum computing will enable the AGI to entangle with all surrounding molecular vibrations! I saw another press release today
ah, the novel QC RSA attack: shaking the algorithm so much it gets annoyed and gives up the plaintext out of desperation
Interesting slides: Peter Gutmann - Why Quantum Cryptanalysis is Bollocks
Since quantum computers are far outside my expertise, I didn’t realize how far-fetched it currently is to factor large numbers with quantum computers. I already knew it’s not near-future stuff for practical attacks on e.g. real-world RSA keys, but I didn’t know it’s still that theoretical. (Although of course I lack the knowledge to assess whether that presentation is correct in its claims.)
But also, while reading it, I kept thinking how many of the broader points it makes also apply to the AI hype… (for example, the unfounded belief that game-changing breakthroughs will happen soon).
It’s been frustrating to watch Gutmann slowly slide. He hasn’t slid that far yet, I suppose. Don’t discount his voice, but don’t let him be the only resource for you to learn about quantum computing; fundamentally, post-quantum concerns are a sort of hard read in one direction, and Gutmann has decided to try a hard read in the opposite direction.
Page 19, complaining about lattice-based algorithms, is hypocritical; lattice-based approaches are roughly as well-studied as classical cryptography (Feistel networks, RSA) and elliptic curves. Yes, we haven’t proven that lattice-based algorithms have the properties that we want, but we haven’t proven them for classical circuits or over elliptic curves, either, and we nonetheless use those today for TLS and SSH.
Pages 28 and 29 are outright science denial and anti-intellectualism. By quoting Woit and Hossenfelder — who are sneerable in their own right for writing multiple anti-science books each — he is choosing anti-maths allies, which is not going to work for a subfield of maths like computer science or cryptography. In particular, p28 lies to the reader with a doubly-bogus analogy, claiming that both string theory and quantum computing are non-falsifiable and draw money away from other research. This sort of closing argument makes me doubt the entire premise.
Thanks for adding the extra context! As I said, I don’t have the necessary level of knowledge in physics (and also in cryptography) to have an informed opinion on these matters, so this is helpful. (I’ve wanted to get deeper in both topics for a long time, but life and everything has so far not allowed for it.)
About your last paragraph, do you by chance have any interesting links on “criticism of the criticism of string theory”? I wonder, because I have heard the argument “string theory is non-falsifiable and weird, but it’s pushed over competing theories by entrenched people” several times already over the years. Now I wonder, is that actually a serious position or just conspiracy/crank stuff?
Comparing quantum computing to time machines or faster-than-light travel is unfair. In order for the latter to exist, our understanding of physics would have to be wrong in a major way. Quantum computing presumes that our understanding of physics is correct. Making it work is “only” an engineering problem, in the sense that Newton’s laws say that a rocket can reach the Moon, so the Apollo program was “only” an engineering project. But breaking any ciphers with it is a long way off.
Comparing quantum computing to time machines or faster-than-light travel is unfair.
I didn’t interpret the slides as an attack on quantum computing per se, but rather an attack on over-enthusiastic assertions of its near-future implications. If the likelihood of near-future QC breaking real-world cryptography is so extremely low, it’s IMO okay to make a point by comparing it to things which are (probably) impossible. It’s an exaggeration of course, and as you point out the analogy isn’t correct in that way, but I still think it makes a good point.
What I find insightful about the comparison is that it puts the finger on a particular brain worm of the tech world: the unshakeable belief that every technical development will grow exponentially in its capabilities. So as soon as the most basic version of something is possible, it is believed that the most advanced forms of it will follow soon after. I think this belief was created because it’s what actually happened with semiconductors, and of course the bold (in its day) prediction that was Moore’s law, and then later again, the growth of the internet.
And now this thinking is applied to everything all the time, including quantum computers (and, as I pointed to in my earlier post, AI), driven by hype, by FOMO, by the fear of “this time I don’t want to be among those who didn’t recognize it early”. But there is no inherent reason why a development should necessarily follow such a trajectory. That doesn’t mean of course that it’s impossible or won’t get there eventually, just that it may take much more time.
So in that line of thought, I think it’s ok to say “hey look everyone, we have very real actual problems in cryptography that need solving right now, and on the other hand here’s the actual state and development of QC which you’re all worrying about, but that stuff is so far away you might just as well worry about time machines, so please let’s focus more on the actual problems of today.” (that’s at least how I interpret the presentation).
heh yup. I think the most recent one (somewhere in the last year) was something like 12-bit rsa? stupendously far off from being a meaningful thing
I’ll readily admit to being a cryptography mutt and a qc know-barely-anything, and even from my limited understanding of where people are at (with how many qubits they’ve managed to achieve in practical systems), everything is hilariously, woefully far off in terms of attacks
that doesn’t entirely invalidate pqc and such (since the notion there is not merely defending against today/soon but also a significant timeline)
one thing I am curious about (and which you might’ve seen or be able to talk about, blake): is there any kind of known correlation between qubits and viable attacks? I realize part of this quite strongly depends on the attack method as well, but off the cuff I have a guess (“intuition” is probably the wrong word) that it probably scales some weird way (as opposed to linear/log/exp)
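Not an expert answer, but for Shor’s algorithm specifically the rough scaling is at least known: factoring an n-bit RSA modulus needs on the order of 2n logical qubits and roughly n³ gates, and quantum error correction then multiplies the physical qubit count by a large overhead. A back-of-envelope sketch (the 2n+3 logical-qubit figure follows Beauregard’s circuit; the ~1,000 physical-per-logical overhead is a commonly cited surface-code ballpark — treat both numbers as assumptions, not gospel):

```python
def shor_ballpark(n_bits, phys_per_logical=1_000):
    """Very rough resource estimate for factoring an n-bit RSA modulus
    with Shor's algorithm. Order-of-magnitude only."""
    logical_qubits = 2 * n_bits + 3        # Beauregard-style circuit
    gate_count = n_bits ** 3               # ~O(n^3) gates, constants dropped
    physical_qubits = logical_qubits * phys_per_logical  # error-correction overhead
    return logical_qubits, physical_qubits, gate_count

for n in (12, 1024, 2048):
    lq, pq, g = shor_ballpark(n)
    print(f"RSA-{n}: ~{lq} logical qubits, ~{pq:,} physical qubits, ~{g:.0e} gates")
```

So the logical-qubit requirement is only linear in key size, but the practical gap is dominated by error-correction overhead and gate counts, which is why a few hundred noisy physical qubits today are nowhere near the millions plausibly needed for real-world RSA.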
I’ve been listening to faster and worse (see https://awful.systems/comment/6216748 ) and I like it so I wanted to give it ups.
(I think this and the memory palace are the only micro podcasts I’ve listened to. idk why it isn’t a more common format)
thanks! It might be uncommon because it’s a real pain in the ass to keep it short. Every time I make one I stress about how easily my point can be misunderstood because there are so few details. Good way to practice the art of moving on
if it’s any reassurance, i’ve understood all your points perfectly! you’re basically making an argument for all UI to be more apple-like
holy shit, I really don’t know if this is real or a joke
:)
EDIT: ok it was a joke
really, thanks for listening! It’s fun making them and nice to know they are being listened to
this is also why pivot to AI is mostly 200-250 words, not 1200 or 2000 or 8000
It’s probably more sensible for me to try writing short bits too, instead of faffing around with videos
apparently video is just huuuuge
ran into this earlier (via techmeme, I think?), and I just want to vent
“The biggest challenge the industry is facing is actually talent shortage. There is a gap. There is an aging workforce, where all of the experts are going to retire in the next five or six years. At the same time, the next generation is not coming in, because no one wants to work in manufacturing.”
“whole industries have fucked up on actually training people for a run going on decades, but no the magic sparkles will solve the problem!!!11~”
But when these new people do enter the space, he added, they will know less than the generation that came before, because they will be more interchangeable and responsible for more (due to there being fewer of them).
I forget where I read/saw it, but sometime in the last year I encountered someone talking about “the collapse of …” wrt things like “travel agent”, which is a thing that’s mostly disappeared (on account of various kinds of services enabling previously-impossible things, e.g. direct flights search, etc etc) but not been fully replaced. so now instead of popping a travel agent a loose set of plans and wants then getting back options, everyone just has to carry that burden themselves, badly
and that last paragraph reminds me of exactly that nonsense. and the weird “oh don’t worry, skilled repair engineers can readily multiclass” collapse equivalence really, really, really grates
sometimes I think these motherfuckers should be made to use only machines maintained under their bullshit processes, etc. after a very small handful of years they’ll come around. but as it stands now it’ll probably be a very “for me not for thee” setup
what pisses me off even more is that parts of the idea behind this are actually quite cool and worthwhile! just… the entire goddamn pitch. ew.
https://www.theverge.com/news/614883/humane-ai-hp-acquisition-pin-shutdown
lol, lmao
edit: I’ve ragged on John Gruber in the past, but he’s really lighting them up 🔥
116 million
There’s no way that what they’re buying is worth that much.
https://mastodon.gamedev.place/@lritter/114001505488538547
master: welcome to my Smart Home
student: wow. how is the light controlled?
master: with this on-off switch
student: i don’t see a motor to close the blinds
master: there is none
student: where is the server located?
master: it is not needed
student: excuse me but what is “Smart” about all of this?
master: everything.
in this moment, the student was enlightened
In other news, Brian Merchant’s going full-time on Blood in the Machine.
Did notice a passage in the announcement which caught my eye:
Meanwhile, the Valley has doubled down on a grow-at-all-costs approach to AI, sinking hundreds of billions into a technology that will automate millions of jobs if it works, might kneecap the economy if it doesn’t, and will coat the internet in slop and misinformation either way.
I’m not sure if it’s just me, but it strikes me as telling about how AI’s changed the cultural zeitgeist that Merchant’s happily presenting automation as a bad thing without getting backlash (at least in this context).
I mean, I love the idea of automation at the high level. Being able to do more stuff with less human time and energy spent is objectively great! But under our current economic system where most people rely on selling their time and energy in order to buy things like food and housing, any decrease in demand for that labor is going to have massive negative impacts on the quality of life for a massive share of humanity. I think the one upside of the current crop of generative AI is that it ~~threatens~~ claims to threaten actual white-collar workers in the developed world rather than further immiserating factory workers in whichever poor country has the most permissive labor laws. It’s been too easy to push the human costs of our modern technology-driven economy under the proverbial rug, but the middle management graphic design Chadleys of the US and EU are finding it harder to pretend they don’t exist because now it’s coming for them too.

automation is good because big tv. once, no big tv. now big tv
more seriously I can’t really criticize automation in complete generality. it’s way too broad a concept. I like having abundant food and talking to gay people on my phone. but we all know the kind of automation merchant is referencing does very little besides concentrate power with the ultra wealthy
Some years ago I read the memoirs of a railroad union boss. Interesting book in many aspects, but what I thought of here was a time before he became a union boss. He was working at the railroad, was trusted in the union and got the mission to make store keeping of supplies and spare parts more efficient.
This wasn’t the first time the railroad company had tried to make it more efficient. Due to earlier mergers there were lots of local supply stores and a confusing system for which part of the company was supplied from where. In short, it was inefficient and everyone knew that. Enter our protagonist, who travels around and talks to people. Finally he arrives back at HQ and reports that it can’t be done. Unless HQ wants to enact a program where everyone who is made redundant gets a better job, with the company footing the bill for any extra training or education needed. Then it could be done, because then it would be in the interest of the people whose knowledge and skills they needed.
This being in the post war era with full employment policies, labour was a scarce resource, so the company did as they were told and the system got more efficient.
It’s all about who benefits from the automation. The original Luddites targeted employers who automated, fired skilled workers and decreased wages. They were not opposed to automation, they were opposed to automation at their expense.
On a semi-related note, I suspect we’re gonna see a pushback against automation in general at some point, especially in places where “shitty automation” is rampant.
will automate millions of jobs if it works, might kneecap the economy
will kneecap the economy if it works, too. Because companies certainly aren’t going to keep people employed in those millions of jobs.
this article came to mind for something I was looking into, and then on rereading it I just stumbled across this again:
Late one afternoon, as they looked out the window, two airplanes flew past from opposite directions, leaving contrails that crossed in the sky like a giant X right above a set of mountain peaks. Punchy with excitement, they mused about what this might mean, before remembering that Google was headquartered in a place called Mountain View. “Does that mean we should join Google?” Hinton asked. “Or does it mean we shouldn’t?”
But Hinton didn’t want Yu to see his personal humidifying chamber, so every time Yu dropped in for a chat, Hinton turned to his two students, the only other people in his three-person company, and asked them to disassemble and hide the mattress and the ironing board and the wet towels. “This is what vice presidents do,” he told them.
so insanely fucking unserious
Has he never heard of a humidifier? Good lord.
I am willing to bet the upshot here is that he has certain very specific ideas about how humidifiers can be improved, and of course will accept nothing less
Amazon Prime pulling some AI bullshit, with a hint of transphobia considering the bank robbery in the movie was to pay for surgery for a trans woman (or more likely not a hint, just the full reason).
What if HPMOR but Harry is Charlie Manson?
–2025, apparently
FR that’s basically this anime in which MC is isekai’d and starts a cult
new zitron https://www.wheresyoured.at/longcon/
Deep Research is the AI slop of academia — low-quality research-slop built for people that don’t really care about quality or substance, and it’s not immediately obvious who it’s for.
it’s weird that Ed stops there, since the answer almost writes itself. ludic had a bit about how in companies bigger than three guys in a shed, people who sign software contracts don’t use that software in any normal way;
The idea of going into something knowing about it well enough to make sure the researcher didn’t fuck something up is kind of counter to the point of research itself.
conversely, if you have no idea what are you doing, you won’t be able to tell if machine generated noise is in any way relevant or true
The whole point of hiring a researcher is that you can rely on their research, that they’re doing work for you that would otherwise take you hours.
but but, this lying machine can output something in minutes so this bullshit generator obviously makes human researchers obsolete. this is not for academia because it’s utterly unsuitable and google scholar beats it badly anyway; this is not for wide adoption because it’s nowhere near free tier; this is for idea guys who have enough money to shell out $whatever monthly subscription and prefer to set a couple hundred of dollars on fire instead of hiring a researcher/scientist/contractor. especially keeping in mind that contractor might tell them something they don’t want to hear, but this lmgtfy x lying box (but worse, because it pulls lots of seo spam) won’t
OpenAI’s next big thing is the ability to generate a report that you would likely not be able to use in any meaningful way anywhere, because while it can browse the web and find things and write a report, it sources things based on what it thinks can confirm its arguments rather than making sure the source material is valid or respectable.
e: this is also insidious and potent ~~attack surface~~ marketing opportunity against clueless monied people who trust these slop machines for some reason. and it might be exploitable by tuning seo just right
found in the wild, The Tech Barons have a blueprint drawn in crayon
speaking of shillrinivan, anyone heard anything more about cult school after the news that no-one liked bryan’s shitty food packs?
wait that’s it? he wants to “replace” states with (vr) groupchats on blockchain? it can’t be this stupid, you must be explaining this wrong (i know, i know, saying it’s just that makes it look way more sane than it is)
The basic problem here is that Balaji is remarkably incurious about what states actually do and what they are for.
libertarians are like house cats etc etc
In practice, it’s a formula for letting all the wealthy elites within your territorial borders opt out of paying taxes and obeying laws. And he expects governments will be just fine with this because… innovation.
this is some sovereign citizen type shit
yeah shillrinivan’s ideas are extremely Statisism: Sims Edition
I’ve also seen essentially ~0 thinking from any of them on how to treat corner cases and all that weird messy human conflict shit. but code is law! rah!
(pretty sure that if his unearned timing-fortunes ever got threatened by some coin contract gap or whatever, he’d instantly be all over getting that shit blocked)
code is law, as in, who controls the code controls the law. the obvious thing would be that monied founders would control the entire thing, like in urbit. i still want to see how well cyber hornets defend against tank rounds, or who gets to get inside tank for that matter, or how do you put tank on a blockchain. or how real states make it so that you can have citizenship of only one state, maybe two. there’s nothing about it there
Non Fungible Tanks?
Some kind of Civ4-ass tech tree lets you get the Internet before replaceable parts or economics.
or how real states make it so that you can have citizenship of only one state, maybe two. there’s nothing about it there
come on we both know it’ll be github badges or something like that
Having read the whole book, I am now convinced that this omission is not because Srinivasan has a secret plan that the public would object to. The omission, rather, is because Balaji just isn’t bright enough to notice.
That’s basically the entire problem in a nutshell. We’ve seen what people will fill that void with and it’s “okay but I have power here now and I dare you to tell me I don’t” and you know who happens to have lots of power? That’s right, it’s Balaji’s billionaire bros! But this isn’t a sinister plan to take over society - that would at least entail some amount of doing what states are for.
Ed:
“Who is really powerful? The billionaire philanthropist, or the journalist who attacks him over his tweets?”
I’m not going to bother looking up which essay or what terrible point it was in service to, but Scooter Skeeter of all people made a much better version of this argument by acknowledging that the other axis of power wasn’t “can make someone feel bad through mean tweets” but was instead “can inflict grievous personal violence on the aged billionaires who pay them for protection”. I can buy some of these guys actually shooting someone, but the majority of these wannabe digital lordlings are going to end up following one of the many Roman Emperors of the 3rd century and get killed and replaced by their Praetorians.
I can buy some of these guys actually shooting someone, but the majority of these wannabe digital lordlings are going to end up following one of the many Roman Emperors of the 3rd century and get killed and replaced by their Praetorians.
i think it’ll turn out muchhh less dramatic. look up cryptobros: how many of them died at all, let alone this way? i only recall one, ruja ignatova, the bulgarian scammer whose disappearance might be connected to local mafia. but everyone else? mcafee committed suicide, but that might be after he did his brain’s own weight in bath salts. for some of them their motherfuckery caught up with them and they’re in prison (sbf, do kwon), but most of them walk freely and probably don’t want to attract too much attention. what might happen, i guess, is that some of them will cheat one another out of money, status, influence, what have you, and the scammed ones will just slide into irrelevance. you know, get a normal job, among normal people, and not raise suspicion
I’m probably being a bit hyperbolic, but I do want to clarify that the descent into violence and musical knife-chairs is what happens if they succeed at replacing or disempowering the State. The worst offenders going to prison and the rest quietly desisting is what happens when the State does something (literally anything, in fact. Tepid and halfhearted enforcement of existing laws was enough to meaningfully slow the rise of crypto) and they fail, but if they were to directly undermine that monopoly on violence I fully expect to see violence turned against them, probably at the hands of whatever agent they expected to use it on their behalf. In my mind this is the most dramatic possible conclusion of their complete lack of understanding of what they’re actually trying to do, though it is certainly less likely than my earlier comment implied.
the majority of these wannabe digital lordlings are going to end up following one of the many Roman Emperors of the 3rd century and get killed and replaced by their Praetorians.
this is a possibility lots of the prepper ultra rich are concerned with, yet I don’t recall that I’ve ever heard the tech scummies mention it. they don’t realize that their fantasized outcome is essentially identical to the prepper societal breakdown, because they don’t think of it primarily as a collapse.
more generally, they seem to consider every event in the most narcissistic terms: outcomes are either extensions of their power and luxury to ever more limitless forms or vicious and unjustified leash jerking. there’s a comedy of the idle rich aspect to the complacency and laziness of their dream making. imagine a boot stamping on a face, forever, between rounds at the 9th hole
That’s basically the entire problem in a nutshell.
I think a lot of these people are cunning, aka good at somewhat sociopathic short term plans and thinking, and they confuse this ability (and their survivorship-biased success) for being good at actual planning (or they just think planning is worthless; after all, move fast and break things (and never think about what you just said)). You don’t have to actually have good plans if people think you have charisma/a magical money making machine (which needs more and more rigging of the casino, putting money on a lot of risky bets in the hope that one big win pays for it all).
Doesn’t help that some of them seem to either be on a lot of drugs, or have undiagnosed adhd. Unrelated, Musk wants to go into Fort Knox all of a sudden, because he saw a post on twitter which has convinced him ‘they’ stole the gold (my point here is that there is no way he was thinking about Knox at all before he randomly came across the tweet, the plan is crayons).
Unrelated, Musk wants to go into Fort Knox all of a sudden
you know, one of better models of schizophrenia we have looks like this: take a rat and put them on a schedule of heroic doses of PCP. after some time, a pattern of symptoms that looks a lot like schizophrenia develops even when off PCP. unlike with amphetamine, this is not only positive symptoms (like delusions and hallucinations) but also negative and cognitive symptoms (like flat affect, lack of motivation, asociality, problems with memory and attention). PCP touches a lot of things, but ketamine touches at least some of the same things that matter in this case (NMDA receptor). this residual effect is easy to notice even by, and among, recreational users of this class of compounds
richest man in the world grows schizo brain as a hobby, pillages government, threatens to destroy Lithuania
I’m sorry ‘they’ did what? Everyone knows you can’t rob Fort Knox. You have to buy up a significant fraction of the rest of the gold and then detonate a dirty bomb in Fort Knox to reduce the supply and- oh my God bitcoiners learned economics from Goldfinger.
oh my God
Welcome to the horrible realization of the truth. All things the right understands come from entertainment media. That is also why satire doesn’t work: you need a deeper understanding of the world to understand the themes, else starship troopers is just about some hot people shooting bugs.
ok i watched Starship Troopers for the first time this year and i gotta say a whole lot of that movie is in fact hot people shooting bugs
Yeah, I reread the book last year (due to all the hot takes about the book in regard to Helldivers), and the movie is a lot better propaganda than the book. (The middle, where they try to justify their world, drags on and on and is filled with strawmen and really weird moments. Esp the part where the main character, who isn’t the sharpest tool in the shed, is told that he is smart enough to join the officers. You must be this short to enter.)
My life for super Earth 🫡
Prff, like you would be part of the 20% that survives basic training. I know I wouldn’t.
(So many people miss this little detail, or the detail that it is cheaper to send a human with a gun down to a planet to arm the nukes (sorry Hellbombs) than to put a remote detonator on the nukes, I assume you were not one of those people btw, it is just me gushing positively about the satire in the game (it is a good game) and sort of despairing about media literacy/attention spans).
see also: Yudkowsky has never consumed fiction targeted above middle school
@Soyweiser @YourNetworkIsHaunted
@cstross But it’s not only that: satire can’t reach the fash because they internalise and accept the monstrous. What is satirised to expose in ridicule its monstrosity and their aberrant values, for average, still decent and sane people, is for them “Yes, this is what we want.” It’s never “if you can’t tell it’s satire it’s bad satire”; you can be as “subtle” as a kick in the face and they won’t get it, because it <is> what they want.

Certainly, for a lot of them it is even worse. See how the neo-nazis love American History X. (How do we stop John Connor from becoming a nazi, seems oddly relevant.)