
Agentic AI: Does the Future of Warfare Look Autonomous?
Agentic warfare, a model in which autonomous and semi-autonomous systems take on roles in military planning, logistics, operations, and intelligence, is rapidly reshaping global defense strategies, particularly amid intensifying techno-military competition between the U.S. and China. Unlike traditional prompt-based AI, agentic systems can make decisions and adapt with minimal human input, and they are being integrated into advanced military applications such as intelligence fusion, supply chain optimization, and real-time targeting.

The U.S. has launched initiatives like the Replicator program and invested in companies such as Sandtable, while China's PLA is embedding AI in both combat and non-combat functions. Ongoing conflicts in Ukraine and Gaza illustrate real-world uses of agentic systems, from autonomous drones to controversial surveillance-and-targeting tools such as Israel's Lavender.

Human oversight technically remains in place, but its effectiveness varies, raising ethical concerns. Critics also warn of overhyped expectations and a drift toward premature deployment, especially if adversaries such as China adopt looser constraints. Ultimately, the future of agentic warfare will depend not only on AI capabilities but on how meaningfully humans remain involved in the use of these systems.
