Geoffrey Hinton, one of the pioneers of modern artificial intelligence:
“If you take the existential risk seriously, as I now do—I used to think it was way off, but now I think it’s serious, and fairly close—it might be quite sensible to just stop developing these things any further, but I think it’s completely naïve to think that would happen. There’s no way to make that happen. If the US stops then the Chinese won’t.”
To be absolutely clear, by existential risk he means the death of all humans in the near future, probably years rather than decades, and he says he cannot think of anything we can do about it. I cannot recall any statement like that from anyone who was not a “crank” of some description. He came to this conclusion only recently, and reluctantly. He is not a crank, and nobody in a position to know what he’s talking about seems to be calling him one. There are relevant experts who argue that better outcomes are possible, and some even say they are probable, but vanishingly few seem to rule out existential risk.
I wonder, how should we respond to this? Should we take it seriously? What would “taking it seriously” even mean? Should we try to do something about it? Or accept we can’t do anything about it and make the best of the time left? Or just try to forget about it altogether? Does religion have anything relevant to say about this situation?