Either Geoffrey Hinton or Eliezer Yudkowsky explains how the chess computer works in one of the videos, I can't remember which, but it was interesting because it wasn't what I had thought. Apparently there are three separate processes involved, roughly something like: 1) work out the most likely next move, 2) work out all the possible moves, 3) work out all the responses to all the possible moves. Then the actual move chosen is some complex combination of those different processes. Or something like that; again, they explain it better.
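If I understood it right, the rough shape is something like the toy sketch below. This isn't the real engine, and the game, the moves and the scoring functions are all stand-ins I've made up purely for illustration, but it gives a flavour of how those three processes might get combined.

```python
# Toy sketch, not a real chess engine: combining the three rough processes
# described above: 1) a "policy" guess at how promising each move looks,
# 2) enumerating all the possible moves, 3) looking at the opponent's
# replies to each one. All the functions here are invented placeholders.
import random

def legal_moves(state):
    # Placeholder: a real engine would generate the legal chess moves here.
    return ["a", "b", "c", "d"]

def policy_score(state, move):
    # Placeholder for a learned guess at how promising a move looks.
    return random.random()

def evaluate(state):
    # Placeholder static evaluation of a position.
    return random.random()

def apply_move(state, move):
    # Placeholder: the new position after playing `move`.
    return state + (move,)

def choose_move(state):
    best_move, best_score = None, float("-inf")
    for move in legal_moves(state):                   # 2) all possible moves
        prior = policy_score(state, move)             # 1) most likely / promising move
        child = apply_move(state, move)
        # 3) consider every reply and assume the opponent picks the worst one for us
        worst_reply = min(evaluate(apply_move(child, r)) for r in legal_moves(child))
        score = 0.5 * prior + 0.5 * worst_reply       # some combination of the processes
        if score > best_score:
            best_move, best_score = move, score
    return best_move

if __name__ == "__main__":
    print(choose_move(()))
```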
It’s completely correct to say that all these AI machines are really doing is processing huge amounts of data and using that as a predictive tool to simulate outcomes. In that sense it is “stupid”. It doesn’t even know what it’s doing.
But what people don’t seem to appreciate is 1) how powerful that ‘dumb’ process becomes at scale, 2) the fact that everything, including human actions, can ultimately be reduced to predictable probabilities, and 3) that the emergent effects of the process can appear to simulate intelligence.
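To give a feel for what I mean by a ‘dumb’ process, here is a toy next-word predictor that does nothing but count which word follows which. The example text is something I’ve made up; the real systems are incomparably bigger and cleverer about it, but the basic idea of prediction from past data is the same.

```python
# Minimal sketch of "dumb" prediction: count which word follows which,
# then predict the most frequent continuation. No understanding anywhere,
# just statistics; the point is only that at scale this starts to look smart.
from collections import Counter, defaultdict

text = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Return the most common word seen after `word` in the training text.
    return following[word].most_common(1)[0][0]

print(predict("the"))   # 'cat', purely from counts, no meaning involved
```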
The fact that it isn’t technically an intelligent thinking entity is neither here nor there; it’s the effect that counts.
Today the big news is that AI can now predict human emotions and responses better than humans can. The implications of that are simply mind-boggling. Here we have been struggling to understand, predict and react to one another for millennia, with variable results, and along comes a machine that can do it better than any of us. No doubt marketers and advertisers will be racing to use the new tool first. But in the long term I don’t see any other outcome than the demise of humanity altogether.
That’s not even mentioning the claim I read this morning that AI can ‘read thoughts’ from brain scans. That’s just insane, if true. The world is mutating faster than we can understand what is even happening.
Mr Mustard is completely right about AI not currently being able to anticipate the end of its response from the beginning. We can chalk that up on the ‘AI is stupid’ tally, no doubt about it. It doesn’t ‘think’ the way we do, despite its superhuman capabilities in other areas. That’s why it’s surprising which elementary thought processes currently seem to be beyond its ability. Maybe it will never master knowing the end of a story from the beginning, or perhaps it will figure it out by the end of this week. I don’t know. I don’t see that it really matters, because the things that it can do are astonishing either way.
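As I understand it, the reason is that these systems produce their output one piece at a time, each piece chosen only from what has already been written. Here is a toy sketch of that structure; the `next_token` function is a made-up stand-in for the model, not anything real.

```python
# Sketch of the one-piece-at-a-time structure of a text generator.
# `next_token` is a hypothetical stand-in for the model; the point is only
# that each step sees the text so far and nothing else, so there is no
# separate place where "the ending" exists before it is actually written.
def next_token(tokens):
    # Made-up stand-in: pick the next word from the context alone.
    vocab = {"once": "upon", "upon": "a", "a": "time", "time": "."}
    return vocab.get(tokens[-1], ".")

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        tok = next_token(tokens)
        tokens.append(tok)
        if tok == ".":          # the "end" only appears once it is generated
            break
    return " ".join(tokens)

print(generate("once"))   # 'once upon a time .'
```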
https://youtu.be/Y6Sgp7y178k