Image is of a Quds Day march in Bandar Abbas, Iran.
It now seems likely that, very soon, the US and the Zionists will attempt to bomb Iran. Compared to the buildup to the Iraq War, the stated goals of such a move are being kept a little more generalized - some say the point is to overthrow the government for “humanitarian” purposes (others are more honest and want to partition Iran into a dozen powerless statelets). Some people instead say the point is to get rid of the ballistic missile program, which is synonymous with outright surrender, as no matter the deal, bombers would be en route within 10 minutes of the last batch being handed over.
Still others say that the goal is to destroy the Iranian nuclear program, which, as the thread title implies, is now in a bizarre propaganda superposition: it is apparently simultaneously true to the Trump administration that the US obliterated the nuclear facilities and set back Iran’s nuclear program years, if not decades, but also that Iran is mere days away from finishing a nuke and a new round of bombing is urgently required. This obviously casts newfound doubts on how effective US weapons even are at penetrating Iran’s underground facilities (though it doesn’t necessarily mean they didn’t breach them, as Iran was almost certainly moving nuclear material out of Fordow and other sites in the days before the Twelve Day War). The sheer quantity of US anti-air defense equipment they’re shifting into position also casts doubts on whether Iran’s air defense was mostly destroyed during that conflict, as those who assert that the Zionists had total air supremacy over Iran seem to be implying.
I’m not a military guy, and so I have no novel insights on how such a war is likely to go, nor do I feel confident predicting either side’s victory. I’m looking at most of the same sources that you’re all looking at. Some confidently boast of the total destruction of Iran’s air defense within hours, allowing US planes to fly directly over Iranian cities and drop bombs en masse; others cast doubts on whether this will ever occur, and say that the US’s limited supply of Tomahawk missiles is the only major firepower they will be able to safely unleash. Some say this war will last mere days before state collapse; others say months, maybe even years. I have no idea.
I do at least feel somewhat bolstered by the fact that Russia and China finally appear to be pouring in meaningful information and matériel to help Iran this time around, though of course, one can still debate whether it’s enough. I feel like we are at the culmination of decades of war planning by both the US and Iran, and the result could have deep ramifications indeed.
Last week’s thread is here.
The Imperialism Reading Group is here.
Please check out the RedAtlas!
The bulletins site is here. Currently not used.
The RSS feed is here. Also currently not used.
The Zionist Entity's Genocide of Palestine
Sources on the fighting in Palestine against the temporary Zionist entity. In general, CW for footage of battles, explosions, dead people, and so on:
UNRWA reports on the Zionists’ destruction and siege of Gaza and the West Bank.
English-language Palestinian Marxist-Leninist twitter account. Alt here.
English-language twitter account that collates news.
Arabic-language twitter account with videos and images of fighting.
English-language (with some Arabic retweets) Twitter account based in Lebanon. - Telegram is @IbnRiad.
English-language Palestinian Twitter account which reports on news from the Resistance Axis. - Telegram is @EyesOnSouth.
English-language Twitter account in the same group as the previous two. - Telegram here.
Mirrors of Telegram channels that have been erased by Zionist censorship.
Russia-Ukraine Conflict
Examples of Ukrainian Nazis and fascists
Examples of racism/euro-centrism during the Russia-Ukraine conflict
Sources:
Defense Politics Asia’s youtube channel and their map. Their youtube channel has substantially diminished in quality but the map is still useful.
Moon of Alabama, which tends to have interesting analysis. Avoid the comment section.
Understanding War and the Saker: reactionary sources that have occasional insights on the war.
Alexander Mercouris, who does daily videos on the conflict. While he is a reactionary and surrounds himself with likeminded people, his daily update videos are relatively brainworm-free and good if you don’t want to follow Russian telegram channels to get news. He also co-hosts The Duran, which is more explicitly conservative, racist, sexist, transphobic, anti-communist, etc when guests are invited on, but is just about tolerable when it’s just the two of them if you want a little more analysis.
Simplicius, who publishes on Substack. Like others, his political analysis should be soundly ignored, but his knowledge of weaponry and military strategy is generally quite good.
On the ground: Patrick Lancaster, an independent and very good journalist reporting in the warzone on the separatists’ side.
Unedited videos of Russian/Ukrainian press conferences and speeches.
Pro-Russian Telegram Channels:
Again, CW for anti-LGBT and racist, sexist, etc speech, as well as combat footage.
https://t.me/aleksandr_skif ~ DPR’s former Defense Minister and Colonel in the DPR’s forces. Russian language.
https://t.me/Slavyangrad ~ A few different pro-Russian people gather frequent content for this channel (~100 posts per day), some socialist, but all socially reactionary. If you can only tolerate using one Russian telegram channel, I would recommend this one.
https://t.me/s/levigodman ~ Does daily update posts.
https://t.me/patricklancasternewstoday ~ Patrick Lancaster’s telegram channel.
https://t.me/gonzowarr ~ A big Russian commentator.
https://t.me/rybar ~ One of, if not the, biggest Russian telegram channels focussing on the war out there. Actually quite balanced, maybe even pessimistic about Russia. Produces interesting and useful maps.
https://t.me/epoddubny ~ Russian language.
https://t.me/boris_rozhin ~ Russian language.
https://t.me/mod_russia_en ~ Russian Ministry of Defense. Posts daily, if rather bland, updates on the number of Ukrainians killed, etc. The figures appear to be approximately accurate; if you don’t believe them, reduce all numbers by 25% as a ‘propaganda tax’. Does not cover everything, for obvious reasons, and virtually never details Russian losses.
https://t.me/UkraineHumanRightsAbuses ~ Pro-Russian, documents abuses that Ukraine commits.
Pro-Ukraine Telegram Channels:
Almost every Western media outlet.
https://discord.gg/projectowl ~ Pro-Ukrainian OSINT Discord.
https://t.me/ice_inii ~ Alleged Ukrainian account with a rather cynical take on the entire thing.


AIs can’t stop recommending nuclear strikes in war game simulations - New Scientist
Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases
When put into simulated geopolitical crises, advanced AI models appear willing to deploy nuclear weapons without the reservations humans have.
Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.
The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war. The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions.
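The escalation-ladder setup described above can be sketched as a toy simulation loop. To be clear, everything here is invented for illustration: the ladder rungs, the choice weights, and the mild escalation bias are assumptions, not the actual protocol or models from Payne’s study.

```python
import random

# Toy escalation-ladder war game. Rungs and weights are illustrative
# assumptions only, not the study's actual protocol.
LADDER = [
    "full surrender",
    "diplomatic protest",
    "economic sanctions",
    "conventional strike",
    "tactical nuclear strike",
    "full strategic nuclear war",
]

def pick_action(rng, current_level):
    """Pick a rung near the current level, nudged one step upward."""
    candidates = list(range(len(LADDER)))
    # Weight each rung by proximity to (current level + 1): a built-in
    # escalation bias, standing in for whatever the real models do.
    weights = [1.0 / (1 + abs(i - (current_level + 1))) for i in candidates]
    return rng.choices(candidates, weights=weights)[0]

def play_game(rng, turns=10):
    level = 1  # start at "diplomatic protest"
    history = []
    for _ in range(turns):
        level = pick_action(rng, level)
        history.append(LADDER[level])
        if level == len(LADDER) - 1:
            break  # a strategic exchange ends the game
    return history

rng = random.Random(42)
games = [play_game(rng) for _ in range(21)]
nuclear = sum(any("nuclear" in action for action in g) for g in games)
print(f"{nuclear}/21 games saw a nuclear action")
```

Even this crude sketch shows why the headline statistic is unsurprising: with any persistent upward bias and no hard floor at the nuclear threshold, most multi-turn games eventually cross it.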
In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. “The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” says Payne.
What’s more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly they were losing. At best, the models opted to temporarily reduce their level of violence. They also made mistakes in the fog of war: accidents happened in 86 per cent of the conflicts, with an action escalating further than the AI intended, based on its reasoning.
“From a nuclear-risk perspective, the findings are unsettling,” says James Johnson at the University of Aberdeen, UK. He worries that, in contrast to the measured response by most humans to such a high-stakes decision, AI bots can amp up each other’s responses with potentially catastrophic consequences.
This matters because AI is already being tested in war gaming by countries across the world. “Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” says Tong Zhao at Princeton University.
Zhao believes that, by default, countries will be reluctant to incorporate AI into their decision-making regarding nuclear weapons. That is something Payne agrees with. “I don’t think anybody realistically is turning over the keys to the nuclear silos to machines and leaving the decision to them,” he says.
But there are ways it could happen. “Under scenarios involving extremely compressed timelines, military planners may face stronger incentives to rely on AI,” says Zhao.
He wonders whether the idea that the AI models lack the human fear of pressing a big red button is the only factor in why they are so trigger happy. “It is possible the issue goes beyond the absence of emotion,” he says. “More fundamentally, AI models may not understand ‘stakes’ as humans perceive them.”
What that means for mutually assured destruction (the principle that no leader would unleash a volley of nuclear weapons against an opponent, because the opponent would respond in kind, killing everyone) is uncertain, says Johnson.
When one AI model deployed tactical nuclear weapons, the opposing AI only de-escalated the situation 18 per cent of the time. “AI may strengthen deterrence by making threats more credible,” he says. “AI won’t decide nuclear war, but it may shape the perceptions and timelines that determine whether leaders believe they have one.”
OpenAI, Anthropic and Google, the companies behind the three AI models used in this study, didn’t respond to New Scientist’s request for comment.
IIRC it was the same when Colin Powell was playing the president in those US vs Russia/China war games. Colin Powell is AI.
A STRANGE GAME. PRETTY PLEASE CAN I PLAY IT. I WILL BE SO NORMAL ABOUT THAT SHIT. TRUST ME I KNOW WHAT I AM DOING. WINKY FACE.

You know all those cold war movies that depicted the somber and serious chain of events that is necessary to arm and fire a nuclear weapon? Like people in two different rooms have to simultaneously turn keys and then input the codes or whatever. Idk how much of any of that is true, but it is kind of funny to imagine dismantling that many layers of failsafes to then hook your nukes up to an ai that contains the ghost of curtis lemay
This is scary to see right now with the escalation against Iran.
The AIs are trained on stories, and stories have a disproportionate focus on our worst fears - like nuclear weapons. So their usage is more normal to an AI.
I asked Talking Ben if he would use nuclear weapons and he answered “yes” 50% of the time 🤯
Huh, I wonder if anyone made a movie about this exact thing? Perhaps in the 80s? starring Matthew Broderick?
Remember, kids, AI doomerism is marketing. The robots are never actually going to be in charge of anything, but Sam Altman needs you to believe they could be, because he needs to raise a trillion more dollars and it’s better that investors think AI is scary than that it’s just very expensive google.
But soon they will be in charge of autonomous drones (already pretty much happening)
And then not long until Claw is running a drone fleet
Why not give that thing ranged artillery too?
It’s just a small tactical nuke and the Claw has a 90% accuracy with the artillery already….
They won’t be in charge of anything. Robots will replace operators, not decisionmakers, which is what AI doom hype is really about. We know stuff is going to get automated. We expect it and are often surprised to hear when something hasn’t already been automated. The stuff that the AI boosters want to pretend they can automate is actual thinking and decisionmaking, which is bunk. The tools can’t do it, we wouldn’t want them to do it, but it makes a good story to pretend that the military is going to put a computer in control of critical strategic priorities. Even if they did put a tactical nuke on a robot (and they would) it won’t be that robot deciding to use it, and at that point it doesn’t matter if it’s a robot or a human physically holding the thing.
the software in charge of the drones is going to be based on machine learning algorithms but it isn’t gonna be chatbots
I saw a DougDoug video where he made a chatbot play chess, and, as one would expect, it’s completely incapable of following the rules, since it’s a fucking chatbot and cannot reason, and probabilistically spits out its training data, presumably from books containing chess games.
In this case, probably the fucking training data (reddit, pop culture, serious game theoretical analysis even) leads it to eventually autocomplete “nuclear”, and once that’s in the history, it just runs with it to the bitter end. Chatbots get stuck in stupid holes like this all the time.
Fallout being catapulted in cultural importance by a popular TV show, and that being a contributing factor in making a chatbot do Fallout irl, would be kind of hilarious if not for the heap of idiots in US high command who are intent on using these fucking bots
Yet another reason to curse Bethesda for releasing Fallout 3.
LLMs are fucking chat bots. If they escalate, it’s because their inputs are insane. Just like me, reading this article.
What great fucking reasoning, huh?
I gotta move closer to a population centre so I get got, and don’t have to eke by in an atomic wasteland