There's a very interesting court case coming up in the USA. The NY Times is suing Microsoft and OpenAI (the creator of ChatGPT), claiming misuse of articles published by the NY Times.
It brings into the open an interesting aspect of AI. As the NY Times case points out, AI has to be fed data - which I guess is exactly what happens with humans through various forms of education and training, including, of course, religious ideas.
I have little understanding of how Microsoft and OpenAI may have "misused" the NY Times' copyrighted information, but perhaps it means that data fed to an AI program can be slanted - again, just as with the human mind.
A human mind fed with biased information can refuse to examine contrary ideas. Our experiences as JWs (or in any other religion, for that matter) illustrate the problem. Rubbish in, rubbish out, some call it.
Will AI be like that???
FYI, I read about this case on an Australian media service, the ABC, a government-owned (but supposedly independent) media group.
Check it at: https://www.abc.net.au/news/2023-12-28/new-york-times-sued-microsoft-bing-chatgpt-openai-chatbots/103269036