I think in that sense, the threat of AI is the one we face with all technology: poorly configured systems can behave in unpredictable, and potentially disastrous, ways. Even after a century of refinement, automobiles, boats, and planes are still susceptible to mistakes and glitches that cause injury and death. Humans make mistakes, and our technology can amplify our clumsiness. Chernobyl is a frightening example of this.
And then there are bad actors, who can take advantage of 'smart' devices. Always-connected devices with poor security configurations have allowed hackers to build massive "botnets" that swamp websites with fake connection requests and activity, making it almost impossible for real users to connect and use the sites.
Perhaps that is the real long-term risk of AI: scale. As more systems are automated, more of those systems are managed by software, and more of those clusters are linked together for efficiency, the potential for one bad actor (or one misconfigured device) to affect ever larger areas and populations makes a crisis almost inevitable. I'm less concerned that AI will decide that humans have to go. I'm more concerned that we will do it ourselves, using AI as an unwitting assistant.