I can't agree with the jury analogy. Courtrooms are presided over by judges who do not permit incompetent opinion as testimony, who have it stricken from the transcript when it occurs, and who give jurors strict instructions on what they may and may not take into consideration. Current AI models have no such restrictions and will happily repeat back complete nonsense as fact.
I don't want to hijack Terry's thread, but being a "suspenders and belt" man, I'm going to flesh out my other observation with an actual example:
---------
About four years ago, I ditched Windows for Linux, but quickly found out that the OS is evolving so rapidly that articles just two years old can be partially or even completely wrong.
A good example is the free Foxit PDF reader. There was a native Linux version, but it was discontinued years ago. The last supported version of Ubuntu (and, by extension, Mint, Zorin, Kubuntu, and others) was version 16; we are now at version 24. (I found this out the hard way. It took me nearly an hour to clean up the mess caused by a failed install.)
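(Side note for anyone who hits the same wall: on a Debian-based distro, cleaning up a half-installed package generally comes down to something like the commands below. foxitreader here is just a placeholder name; run dpkg -l first to see what, if anything, actually got registered on your system.)

# find any half-installed Foxit package dpkg knows about
dpkg -l | grep -i foxit
# force removal of the broken package, then let apt repair any leftover dependencies
sudo dpkg --remove --force-remove-reinstreq foxitreader
sudo apt-get -f install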
With that in mind, let's ask AI if you can install the Foxit Reader on Linux:

Microsoft Copilot gives virtually the same wrong answer, including the same dead install links to downloads that Foxit took down years ago.
Grok3 qualifies the answer, which makes it substantially more accurate:
You can go through the 25 cited web pages, and the only source for the "important considerations" is a set of responses written by me, in which I paraphrase an email from Foxit's tech support.
So, yes, it is entirely possible to influence an AI's answers with your own input. It is entirely possible to have it quote your own words back to you, provided the question is esoteric enough. As Anony Mous points out, it doesn't happen overnight, but it can happen.
--And for the record, I do not have a "bias against AI." I'm the lead engineer at a manufacturing facility and use AI daily for mathematical formulas and CNC programs.