nicolaou: Imagine a future where anyone with a bad motive can easily utilise AI to produce the narrative and results they want.
I think we're already there. I understand that we have ways to determine whether a picture or video is fake, but there is an old saying that "a lie can circle the world while the truth is getting its shoes on." Even amateur deepfakes are pretty convincing, especially if you don't know they're fake and your guard is down. A talented enough individual (to say nothing of a well-funded group) should already be able to create content that would easily fool most of us.
We are entering a period where trust diminishes and misinformation really takes off. All it takes is one incident where people are fooled and the consequences are serious, and trust in all content will evaporate. What will we do when our ability to confirm anything via audio, video, and images is compromised? When governments aren't trying to solve the problem, but actively taking advantage of it?