In 2023, the US State Department outlined best practices for military use of AI, focused on the ethical and responsible deployment of AI tools under a human chain of command. The declaration stressed that humans should remain in control of “decisions concerning nuclear weapons employment” and should maintain the capability to “disengage or deactivate deployed systems that demonstrate unintended behavior.”
Since then, the US military has shown interest in fielding AI technology for everything from automated targeting systems on drones to “improving situational awareness” via an OpenAI partnership with military contractor Anduril. In January 2024, OpenAI removed its blanket prohibition on “military and warfare” uses from ChatGPT’s usage policies, while still barring customers from “develop[ing] or us[ing] weapons” via the LLM.

“If my enemies cannot predict the AI hallucinations I rely on for my decision-making, they cannot predict me…”