In 2023, the US State Department outlined best practices for the military use of AI, focusing on the ethical and responsible deployment of AI tools within a human chain of command. The report stressed that humans should remain in control of “decisions concerning nuclear weapons employment” and should maintain the capability to “disengage or deactivate deployed systems that demonstrate unintended behavior.”
Since then, the military has shown interest in fielding AI technology for everything from automated targeting systems on drones to “improving situational awareness” via an OpenAI partnership with military contractor Anduril. In January 2024, OpenAI removed a blanket prohibition on “military and warfare” uses from its usage policies, while still barring customers from “develop[ing] or us[ing] weapons” via its models.
“If my enemies cannot predict the AI hallucinations I rely on for my decision-making, they cannot predict me…”


The AI battle plan
Didn’t Captain America: The Winter Soldier literally tell us that using AI for warfare is unethical? For fuck’s sake, you cannot trust a fucking robot to determine who is or isn’t an enemy combatant!! Let alone an LLM that hallucinates 80% of the time!! It’s like those people saw that movie and, instead of thinking “that’s evil,” went “this is based, actually.”
Peace Walker intensifies