Using GROK 3 (Artificial Intelligence) to fact check UKRAINE theories/conspiracies, et al

by Terry 18 Replies latest jw friends

  • jhine
    jhine

    Blotty, why only over half?

    Jan from Tam

  • slimboyfat
    slimboyfat

    Nobody outside the West doubts that the US blew up Nordstream 2. Even a goat herder in Mongolia or somebody living in a hut in the Amazon knows that’s obvious. Inside the West the elites know it too; that’s why the Polish foreign minister thanked the US and why the Swedish government hid the results of its investigation. To believe anything else you need to be switched on to BBC, CNN, The Times, or other Western propaganda outlets. The propaganda now in the West is more formidable than anything in the USSR because the general population doesn’t even recognise it as such. At least people in the USSR knew the news was propaganda and treated it with due suspicion; people in the West are gullible by comparison.

    As Jeffrey Sachs said in the clip, in order not to know the obvious fact that the US destroyed Nordstream 2, you have to be listening to western media. It’s powerful testimony to the propaganda we are subjected to that even though the US threatened, celebrated, and failed to deny destroying Nordstream 2, the western media still classed it as a “conspiracy theory” and pushed the ridiculous idea that Russia blew up its own pipeline instead. It’s as if the propagandists are making fun of people at this point by testing what ridiculous stories can be accepted.

  • TD
    TD

    With respect to both of you (Cuz I like you both), you are using AI for that which it is least suited.

    None of the free AIs can separate human opinion and conjecture from verifiable fact.

    --Hell....I've found that I can influence the answer I will receive just by posting about the subject on Reddit.

  • Anony Mous
    Anony Mous

    @TD: AI does not ingest data that fast. Models are trained on data that is roughly six months to a year old or more, so they cannot ‘know’ what is going on ‘today’; they only make inferences based on the structure of your sentence. If you see the influence of your own Reddit posts in an AI response, it is because you (all humans) are predictable: you are subconsciously bringing a Reddit-like sentence structure into your question, and since the model is trained on Reddit data, when it sees that structure it will respond the way a Redditor would. Try using other language (e.g. the long sentences typical of science papers) and you will see it change ‘form’.

    All the AI (really they are langchains, not real intelligence) tells us is that humans are extremely predictable, our vocabulary is extremely small, and people are easily influenced, even subconsciously. The Grok engine can to some extent ingest ‘on the fly’ from the Twitter firehose, but really it is embedding the last few hours’ or days’ worth of data into a pre-trained model. Bing/OpenAI can do something similar with its most advanced model, basically embedding summaries of websites related to your query, but that feature is extremely expensive (for the next version they are quoting $75 per 150,000-word chunk).
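
    Anony Mous's description of "embedding summaries of websites related to your query" into a pre-trained model is essentially retrieval-augmented prompting. Here is a minimal sketch of the idea, with a toy bag-of-words similarity standing in for a real embedding model; the documents, function names, and scoring are all illustrative, not anything Grok or Bing actually uses:

```python
from collections import Counter
from math import sqrt

def vectorize(text):
    # Toy stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def build_prompt(query, documents, top_k=2):
    """Retrieve the documents most similar to the query and paste
    their text into the prompt handed to the pre-trained model."""
    qv = vectorize(query)
    ranked = sorted(documents, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Foxit discontinued its native Linux PDF reader years ago.",
    "Ubuntu 24.04 is the current long-term-support release.",
    "Chess engines search game trees with alpha-beta pruning.",
]
prompt = build_prompt("Can I install Foxit Reader on Linux?", docs, top_k=2)
```

    Real systems use learned vector embeddings and far larger context windows, but the shape is the same: retrieve whatever looks relevant, embed it into the prompt, and let the frozen pre-trained model do the answering.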

  • Terry
    Terry

    TD : "None of the free AIs can separate human opinion and conjecture from verifiable fact.

    --Hell....I've found that I can influence the answer I will receive just by posting about the subject on Reddit."
    __________________
    I think about the JURY system and the 12 (or fewer) ordinary humans exposed to testimony, opinion, and argument. Each has a personal, built-in bias going in and a POV tilting the balance. These ordinary persons decide the fate of an accused defendant. A "legal truth" is embodied in the VERDICT.
    Now compare that to an A.I. with access to all available articles, opinions, assertions, evidence, etc. The superiority of the A.I. process comes in at least two ways:
    1. More information. 2. Zero personal bias.
    If you read through the (above posted) A.I. analysis of the Nordstream query, the reasoning process is there in plain sight, both Pro and Con, and the decision is 'weighted' against the balance of the evidence.
    While no conclusive, 100% "Truth" emerges, there is obviously a considerably superior heuristic at play. Better/Worse? Well, a human with a bias against A.I. has already decided. A human who is very pro-technology: another direction. So, in the final analysis: WE IMPERFECT creatures tip the scales either way :)
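
    Terry's description of a decision 'weighted' against the balance of the evidence can be sketched as a toy scoring routine. The claims and weights below are invented for illustration and are not taken from the actual A.I. analysis:

```python
def weigh_evidence(items):
    """Sum signed weights: positive evidence supports the claim,
    negative evidence opposes it; the sign of the total is the verdict."""
    score = sum(weight for _claim, weight in items)
    if score > 0:
        return "leans pro"
    if score < 0:
        return "leans con"
    return "inconclusive"

# Hypothetical evidence items with arbitrary illustrative weights.
evidence = [
    ("supporting report", +0.6),
    ("contradicting report", -0.4),
    ("expert opinion", +0.2),
]
verdict = weigh_evidence(evidence)
```

    A real model does nothing so explicit, of course; the weighting emerges from training rather than from a hand-written tally, which is exactly why the "Zero personal bias" claim is contested elsewhere in this thread.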

  • slimboyfat
    slimboyfat

    I fully agree with TD that AI doesn’t deliver fact. I happen to agree with the summary it produced about Nordstream 2 in this instance, and I think the reasons it gave are pretty sound. It could equally have given a wrong view and poor reasons if it had happened to draw upon material offering those.

    To me it’s simply amazing that AI now produces something resembling language at all. Sometimes it even produces language that passes for meaningful responses, and at other times it makes weird mistakes that underline that it doesn’t really “understand” anything in the way that we understand the world. That having been said, it doesn’t need to “understand” anything in order to be frighteningly effective and dangerous. Just as AI can beat all humans at chess without understanding what chess even is, so it can (and probably will) eliminate all humans without any malice or grand scheme behind its actions.

  • TD
    TD

    I can't agree with the Jury analogy. Courtrooms are presided over by judges who do not permit incompetent opinion as testimony, have it stricken from the transcript when it occurs, and give jurors strict instructions on what they may and may not take into consideration. Current AI models have no such restrictions and will happily repeat back complete nonsense as fact.

    I don't want to hijack Terry's thread, but being a "suspenders and belt" man, I'm going to flesh out my other observation with an actual example:

    ---------

    About four years ago, I ditched Windows for Linux, but quickly found out that the OS is evolving so rapidly that articles just two years old can be partially or even completely wrong.

    A good example is the free Foxit pdf reader. There was a native Linux version but it was discontinued years ago. The last supported version of Ubuntu (...and by extension Mint, Zorin, Kubuntu and others) was version 16. We are now at version 24. (I found this out the hard way. It took me nearly an hour to clean up the mess caused by a failed install.)

    With that in mind, let's ask AI if you can install the Foxit Reader on Linux:


    Microsoft Copilot gives virtually the same wrong answer and the same dead install links that Foxit took down years ago.

    Grok3 qualifies the answer, which makes it substantially more accurate:

    You can go through the 25 cited web pages and the only source for the "important considerations" are responses given by me, where I paraphrase an email from Foxit's tech support.

    So, yes. It is entirely possible to influence the answer of an AI with your own input. It is entirely possible to have it quote your own words back to you, provided the question is esoteric enough. As Anony Mous points out, it doesn't happen overnight, but it can happen.

    --And for the record, I do not have a "bias against AI." I'm the lead engineer at a manufacturing facility and use it daily for mathematical formulas and CNC programs.

  • liam
    liam

    Is AI accurate in this case?

    https://www.youtube.com/shorts/QNigPU_nbgc

  • slimboyfat
    slimboyfat

    lol I believe it was probably along those lines, but it was particularly impressive that AI could lipread Trump even when Kamala Harris’s husband’s head was in the way 😆
