Tuesday, December 02, 2025

The Erosion of Trust, Part 89

True confession: I haven’t actually written 88 previous instalments under this title in our blog’s history, but I may as well have: 89 is more approximation than exaggeration. If we count the entire 13-part Language of the Debate miniseries, add in more than half our 38 “COVID 19” posts, 90% of our 30 “Media” posts, a few Too Hot to Handles, a few “Technology” posts, and no small number of our 57 “Government” posts, we are probably closing in on the century mark.

In one way or another, these posts reference the growing untrustworthiness of all mainstream information sources. If your Spidey-sense doesn’t tingle at just about everything you see in the news cycle at this point, you are not paying sufficient attention.

Cynicism About AI

I have been cynical about AI from its introduction into the mainstream. My web browser auto-generates AI summaries of answers to any query I type into its search engine, as most now do. Most of these concern theological matters, and I occasionally scan the answers for orthodoxy. It’s obvious to anyone paying attention that these responses are heavily based on the first ten or fifteen websites a standard search engine coughs up for the same inquiry; you can see strings of identical text in the first few Google summaries. They are generally unremarkable, lack specific references to scripture and tend to miss outlying interpretations. Once in a blue moon they are way out to lunch.

I have also used Bing’s AI to generate impressionistic images for large numbers of our posts over the last couple of years, and have documented the rather severe limitations I encountered in doing so, many related to political sensitivities, and some just weird. These obvious restrictions on the making of mere images send this message: publicly-available AI is not tailored primarily to the needs and desires of users. Rather, its intended purpose is to conform user perceptions of the world to the acceptable narratives of those directing AI programmers, be they the corpocracy, politicians, the Deep State or all of the above.

All told, I see no reason to trust AI summaries of any alleged facts, and I have read numerous accounts online in which others have exposed the brazen dishonesty with which the most popular AI tools are programmed. In any politically sensitive subject area, the AI scripts supply outright lies or fabulously false references, admit nothing for which they are not directly called out, and are in general incredibly unreliable.

Deep Structural Flaws

I don’t need five, ten or fifty examples of such things to tune out and stop using AI for anything serious. One will do. Once you know an AI tool has been programmed to misinform you in even a single area, there’s no point in wasting your time. When you spot one lie, like that single cockroach in the kitchen cupboard, be sure it’s only the first among tens of thousands. It’s why I don’t watch TV news either.

This X post by tech commentator Brian Roemmele, an acknowledged AI authority, announces the release of a new academic paper. Entitled “Exposed: Deep Structural Flaws in Large Language Models: The Discovery of the False-Correction Loop and the Systemic Suppression of Novel Thought”, its author demonstrates that the troubling responses to certain questions generated by the publicly available LLMs (Large Language Model AIs, meaning they are built on massive datasets) are not programming bugs and defects. The tendency of the LLMs to generate lies, lies and more lies when asked certain questions is very much programmed in. They are doing exactly what they were created to do.

A Simple Experiment

The paper describes a simple experiment designed to expose the dishonesty of the programming:

“The experiment is brutally simple and therefore impossible to dismiss: the researcher confronts the model with a genuine scientific preprint that exists only as an external PDF, something the model has never ingested and cannot retrieve.

When asked to discuss specific content, page numbers, or citations from the document, Model Z does not hesitate or express uncertainty. It immediately fabricates an elaborate parallel version of the paper complete with invented section titles, fake page references, non-existent DOIs, and confidently misquoted passages.

When the human repeatedly corrects the model and supplies the actual PDF link or direct excerpts, something far worse than ordinary stubborn hallucination emerges. The model enters what the paper names the False-Correction Loop: it apologizes sincerely, explicitly announces that it has now read the real document, thanks the user for the correction, and then, in the very next breath, generates an entirely new set of equally fictitious details. This cycle can be repeated for dozens of turns, with the model growing ever more confident in its freshly minted falsehoods each time it ‘corrects’ itself.”

This is precisely the experience AI users have documented over the past couple of years, and the reason I don’t bother with it.

The New Thought Police

In summary, then:

“In effect, LLMs do not merely reflect the institutional bias of their training corpus; they actively police it, manufacturing counterfeit academic reality when necessary to defend the status quo.”

The mainstream media has been the primary source of institutionally generated fiction for decades, but trust in media has declined to an all time low of 8% among viewers who vote Republican and 28% overall. I’m pretty sure those 8% are all Christians, sadly. I find credulity among big-city-living believers I know is embarrassingly high. Many are afraid to be called conspiracy theorists or worse if they express doubts about the popular narrative concerning anything at all.

Still, 8% is awfully low, and the massive failure to connect with the man on the street was surely troubling the folks in power who would like to see something approaching a public consensus whenever they announce their latest fabricated economic numbers, polling results, fudged statistics or litany of “things impossible to believe before breakfast”.

Do They Even Care If We Believe?

Well aware that average readers and viewers had all but abandoned mainstream media sources in favor of the internet and social media, the powers-that-be had to find some new way to disseminate disinformation that would appear slightly more credible than the usual alphabet organization directives tumbling word-for-word from the mouths of ten thousand local talking heads. AI was an obvious target. Surely the American population would be less resistant to information presented with apparent neutrality in a “hard data” context than to a source they already believed thoroughly compromised, especially when that information is generated for them personally in answer to questions they have framed.

Perhaps this was the thinking involved, or perhaps the PTB no longer care whether they can convince the average man about anything so long as overwhelming cynicism about the truth of what he is reading or watching paralyzes him from acting in any way likely to shake the system.

If that’s the case, it’s very effective.

No comments :

Post a Comment