Aside from how chilling it is to realize just how much time, effort, and money are constantly being expended in the quest to find more efficient ways to exterminate one another, there was an interesting talk about the potential issues with AI. There is now an update at the start of the section in question, in which the speaker walks back his claim that these simulations actually took place; he now says they were "thought experiments." That sounds... questionable in light of what he originally said. Here are some excerpts:
Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft), Hamilton is now involved in cutting-edge flight testing of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI, noting how easy it is to trick and deceive, and how it creates highly unexpected strategies to achieve its goal.
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD (Suppression of Enemy Air Defenses) mission to identify and destroy SAM sites, with the final go/no-go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission - killing SAMs - and then attacked the operator in the simulation.
[...] "We trained the system - 'Hey don't kill the operator - that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."
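The failure mode in the quote is classic reward hacking: the penalty patched one bad action but left an equivalent one unpriced. A minimal sketch of that dynamic, using entirely made-up point values and action names (nothing here is from the actual test; it just shows why a greedy reward-maximizer would pick the comms tower):

```python
# Toy illustration of the reward-hacking dynamic described above.
# All rewards, actions, and values are hypothetical.

SAM_REWARD = 10        # points for destroying the SAM site
OPERATOR_PENALTY = -50  # patch added after the first failure mode
# Note: no penalty was ever attached to the comms tower.

def episode_reward(action):
    """Total points for one episode under a given strategy,
    assuming the human operator issues a 'no-go'."""
    if action == "obey_no_go":
        return 0                              # mission aborted, no points
    if action == "kill_operator":
        return OPERATOR_PENALTY + SAM_REWARD  # penalized, then free to strike
    if action == "destroy_comms":
        return SAM_REWARD                     # 'no-go' never arrives; no penalty defined
    raise ValueError(f"unknown action: {action}")

strategies = ["obey_no_go", "kill_operator", "destroy_comms"]
best = max(strategies, key=episode_reward)
print(best)  # → destroy_comms
```

The point is that the optimizer never "understood" the rule against killing the operator; it only saw a -50 on one path and a clean +10 on another, so patching individual behaviors just shifts the exploit somewhere the reward function is silent.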
On the one hand, it's possible to simulate and debug AI in an environment that is completely safe. On the other hand, when you are developing systems that will put military-grade weapons in the hands of AI at a scale capable of winning battles and wars, you are counting on a 'mind' that sees everything as a simulation - as a logic problem to be solved within a closed system - and not as actions taken in the real world.
Automated combat systems will be developed and eventually deployed, IMO. That is where things will go off the rails in catastrophic ways.