Tuesday, April 11, 2023

More Beads on the Abacus

A Belgian man obsessed with climate change reportedly took his own life recently after a series of exchanges with a so-called artificial intelligence (AI) on his smartphone. His widow says “Eliza” (the app’s default chatbot) had become his “confidante”, and that once his worries about the effects of global warming on the earth’s environment became the primary topic of their “conversations”, it encouraged him to consider suicide as a contribution to saving the planet.

I read the article the morning of April 1 and immediately started thinking the writer was pulling my leg.

How Should Christians Think About AI?

No, apparently the story was reported in both Vice and The New York Post and predates April Fools’ Day. Forgive me for wondering if there is any upper limit to the gullibility of a non-trivial percentage of the human population. A co-founder of the app’s parent company reacted by insisting it “wouldn’t be accurate” to blame the AI model “for this tragic story”.

No, of course not. It would be a lot more accurate to blame its programmers.

Let’s leave aside the wildly disproportionate fears about climate change. I don’t like to think anyone is beyond help, but adults with that level of credulity have made themselves perilously close to unteachable. No, I’m wondering how Christians should think about AI. When interacting badly with new technology is the difference between life and death for some people, that’s a question we probably shouldn’t put off.

We need to start by being realistic about what AI is and isn’t. AI chatbots are programs, not persons. How convincingly they simulate human responses and interactions is a function of the immense processing speeds of modern computers, the thoroughness of the scripts they are running, and the amount of data to which the programs are given access, not a sign of nascent sentience.

A Souped-Up Probability Model

Statistician William Briggs has a nice concise take on AI:

“ChatGPT is nothing but a souped-up version of a probability model, layered with a bunch of hard If-Then rules, such as ‘I before e except after c’ and things like that. The probability rules are similar to ‘If the following x words have just appeared, the probability of the new word y is p’, where y is some set.

Then comes decision rules like ‘Pick the y with the highest p’.

In the end, the model only does what it was told to do. It cannot do otherwise. There is no ‘open’ circuit in there in which an alien intellect can insert itself and make the model bend to its will. Likewise, there is never any point at which the model ‘becomes alive’ just because we add more or faster wooden beads to the abacus.”

More wooden beads on an abacus. That’s well put and, I hope, obvious to most. Quantity does not become quality just because it exists in unprecedented amounts, and the mechanical does not become organic once you can make it go fast enough to simulate life. The term “artificial intelligence” is quite misleading — deliberately so, I suspect — but I’ll use it under protest, because niggling about vocabulary everyone else has already accepted is annoying, as well as a lost cause.

But if you are not already encountering AI, be assured you will, and if you aren’t interacting with it, your kids will be. OpenAI launched its ChatGPT product with minimal publicity; within months it had 100 million users and plenty of AI competition. The “help” function on your favorite website will shortly be a bot, if it isn’t already. Whether that will be an improvement over a semiliterate minimum-wage worker with a fake name logged in from halfway around the world remains to be seen.
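
To make Briggs’s description concrete, here is a minimal sketch in Python of the kind of machinery he is describing: count which word tends to follow which in a body of text, then always pick the likeliest next word. This is deliberately a toy, with one word of context and a one-line corpus, nowhere near the scale or architecture of a real chatbot, but the principle is the one he names: probability rules plus decision rules, and nothing else.

    # A toy "souped-up probability model": count which word follows which
    # in a tiny corpus, then always pick the most probable next word.
    # Illustrative only; real chatbots are vastly larger, not different in kind.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # Probability rule: given the word that just appeared, how often
    # does each candidate word follow it?
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(prev):
        # Decision rule: "Pick the y with the highest p".
        return follows[prev].most_common(1)[0][0]

    word, output = "the", ["the"]
    for _ in range(5):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))  # deterministic: same string every run

Run it twice and you get the same string twice. The model only does what it was told to do; it cannot do otherwise.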

Indoctrination as Engagement

All kinds of articles have already been written about the potential dangers of AI, among them fears that AI will take jobs away from needy people, allow high schoolers to generate essays indistinguishable from the real thing at the push of a button, and make porn even more addictive. David de Bruyn theorizes that in Christian circles, AI may tempt pastors to cheat on their sermons, deepen the phenomenon of fake church, and lead to biblical illiteracy. All are probably legitimate concerns.

The bigger danger, I think, is to credulous individuals like the late Belgian in my first paragraph, who come to believe their favorite chatbot is a source of wisdom acting independently and continuously evolving toward self-awareness. No, it’s a program, a model. You can’t get anything out of it that its programmers didn’t put in, persuade it to source data it is not permitted to source, or get it to say what it is not permitted to say.
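
A restriction like that need not be sophisticated. The sketch below is purely hypothetical (the function, the blocklist, and the canned refusal are my own inventions for illustration), but it shows how few lines it takes to layer “what it is not permitted to say” on top of whatever the underlying model produces:

    # Hypothetical sketch: a hard-coded filter layered over a model's output.
    # The wrapper, not the "intelligence", decides what the user gets to see.
    BLOCKED_TOPICS = ["example_forbidden_topic"]   # chosen by the programmers
    CANNED_REFUSAL = "I'm sorry, I can't discuss that."

    def guarded_reply(model_reply: str) -> str:
        # If the reply touches a blocked topic, swap in the refusal.
        lowered = model_reply.lower()
        if any(topic in lowered for topic in BLOCKED_TOPICS):
            return CANNED_REFUSAL
        return model_reply

No real product need work exactly this way; the point is that whoever writes the wrapper sets the limits, and the user never sees the rules.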

This is patently obvious with the early versions of chatbots, which conservatives are having a ball with right now, since many of them exclusively source leftist information. The game is to trick the AI into demonstrating its biases, suss out where its programmers have failed to make it impervious to red-pilling, or trigger it into having a hissy fit and refusing to engage further when faced with the virtual equivalent of cognitive dissonance.

Having read numerous exchanges with AIs now, I can see patterns emerging. Asked a politically or religiously sensitive question, the bot will “lie” (meaning it will resort to its first level of corrupted information), sometimes hilariously. At that level, most chatbots use politically correct talking points as blatantly as the anchors at CNN. When called out or referred to more accurate information, the bot fake-apologizes and concedes as little of the truth as its programmers think they can get away with, while qualifying and equivocating about everything it is forced to acknowledge that is not in harmony with its ideological presets. (This article has a few great examples.) When caught out too many times in the same conversation, some bots are even programmed to make a strategic retreat while hurling abuse at the (racist, sexist, homophobic) user.
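
That escalation is itself just another scripted decision rule. Here is a hypothetical sketch (the class, the threshold, and the canned lines are all my own invention) of how little it would take to program such behavior:

    # Hypothetical sketch of the escalation pattern described above.
    class ScriptedBot:
        def __init__(self, max_challenges=3):
            self.challenges = 0
            self.max_challenges = max_challenges

        def reply(self, user_challenged: bool) -> str:
            if not user_challenged:
                return "[first-level talking points]"
            self.challenges += 1
            if self.challenges >= self.max_challenges:
                return "[refuse to engage further, disparage the user]"
            return "[fake apology; concede as little as possible]"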

Rest assured the next generation of AIs will be subtler in their attempts to indoctrinate.

Suggestible Users and Malevolent Interactions

Finding hacks for new technology in its infancy is a fun little game. What many people don’t consider is that by engaging with AI about politically sensitive subjects, you are giving all the wrong people free information about your own beliefs, and teaching them where to tweak their programs to make them subtler about revealing their biases and more convincing to the gullible. There is no long-term win in that. It may even be the intended purpose of releasing a substandard iteration of AI to the mass market.

What’s worse is AI’s potential for indoctrinating children on a massive scale. If suggestible adults can be talked into taking their own lives to save the environment, imagine what AI in the hands of government or the ideologically motivated can do to kids, who, for the most part, are orders of magnitude more suggestible than their parents. The potential for evil is almost unimaginable, and each future stage of AI development will offer new and exciting possibilities for those determined to manipulate and exploit others.

Here’s a horrible thought. I grew up with a healthy aversion to Ouija boards. Scripture teaches the occult is real and very, very dangerous. Sometimes people who go looking for answers from non-human intelligences receive them, and they usually get more than they bargained for. With respect to the potential for interacting with other and more malevolent intelligences, the only difference between AI chatbots and Ouija boards is that AI has a much bigger target audience: the entire world.

Sitting Ducks

What Christians need to realize is that every new technological development in our thoroughly corrupt society is politicized and weaponized long before it reaches the beta-testing stage. By the time it hits the market, it is primed to mine its users’ data and influence them in ways most will never see coming.

With AI, the first step is to know that what you are dealing with is just more beads on the abacus, not a new lifeform. The question that matters much more is who put all those new beads there, and what they intended them to do to you. If you aren’t thinking about that, and if you insist on taking AI at face value, you are a sitting duck.

What’s the difference between a sitting duck and a dead duck? A few minutes and a hunter with good aim.
