Brokeback, this has always been a very intriguing topic. I haven't watched your videos, but boy, did I enjoy all the Terminators and Matrixes! My stance, however, has become much more sober over the years.
I believe AI will just do what it is programmed to do... by very, very, very intelligent programmers who become even more intelligent through research, testing and experience. Yes, their work is very impressive, but no matter how much data and computing capacity we're talking about, isn't the formal goal for the system to do exactly what it is programmed for? When it doesn't, the user won't be pleased with the flaws and the risk of damage (remember the fear of a nuclear armageddon over the year-2000 rollover? That wouldn't have been AI, just a bug). Even a random search or a function without any explicit goal would still need to be programmed NOT to look for or do something specific.
As for some sort of "evil" doom scenario... in order to rise above our OWN human intelligence and above what it has been programmed for, and deliberately turn against its human creators, AI would need to develop self-awareness, creativity, independent thinking, long-term strategy, and maybe even feelings, emotions and its own standard of morality.
Aren't those characteristics exactly what we find exclusively in a mind, a thinking brain? (I'm not talking about plants, trees or animal instinct here.)
In fact, I believe the future is bright... medicine, space programs and beyond... let them robots come!