With Artificial Intelligence, Short-Term Risk Aversion is Long-Term Risk Seeking

Published on November 13, 2022

On November 27, 2020, Iran’s top nuclear scientist was assassinated. The initial accounts differed wildly, and it took roughly ten months for the New York Times to break the real story. In prose that could have come from a sci-fi novel, the world learned that Israeli intelligence operatives had carried out the assassination with “a high-tech, computerized sharpshooter [rifle] kitted out with artificial intelligence and multiple-camera eyes, operated via satellite and capable of firing 600 rounds a minute.”

A more salient, tactical manifestation of autonomous capabilities is drone warfare. Particularly lethal is the American-made, multipurpose, loitering munition Altius 600, which has a range of 276 miles and can operate at a ceiling of twenty-five thousand feet, providing intelligence, surveillance, and reconnaissance; counter–unmanned aircraft systems effects; and precision-strike capabilities against ground targets. Many systems like the Altius “will use artificial intelligence to operate with increasing autonomy in the coming years.” But AI-enabled weapons systems are already being used for lethal targeting. For example, the Israeli-made Orbiter 1K unmanned aircraft system, a loitering munition recently used by the Azerbaijani military in the Second Nagorno-Karabakh War, independently scans an area and automatically detects and destroys stationary or moving targets kamikaze-style.

Read the full article here.