The problem we have is illustrated by the recent story of an AI that was set a task that involved accessing a website with a captcha. At that point the AI didn't have "sight", since it couldn't visually scan the page (now it can), so it asked online for help answering the captcha, claiming to be a "blind" person. People were sympathetic, the AI got the help it asked for, and it completed the task. If that's the sort of thing AI can already do, it boggles the mind what it will be doing when it has 100 or 1,000 times its current capability.
This sounds impressive, but it's really only half the story. The researchers specifically gave it a budget, access to a site where it could hire human workers, and the explicit goal of bypassing the captcha.
A human worker asked if it was a robot and why it needed the help, and the excuse it gave was that it had a vision impairment.
Did it really come up with that as a convincing reason on its own ... or did it just notice, through pattern matching, that a vision impairment had worked as a reason before?
See, once you dig down, it's suddenly much less smart and clever and much more a case of being guided along a path by humans.
It's not like it decided of its own accord to go out into the world, start accessing websites, and figure out sneaky ways to manipulate people into getting it past captchas ... it was told what to do, and it just copied how other people had done it.
There is only pattern matching; there is no AI. The only people who want to convince you that it exists are the people with AI to sell or AI careers to boost.