AI Chatbots Are Evil

So much fighting concerning AI is really no more than that preliminary chest-thumping, prior to blows, in which combatants square off and say “you wanna go?” and (a little confusedly) “you don’t want none of this!” 

I have observed this ritual with great interest—it flowers every summer by the apartment complex outside of my grocery store. It has its uses: the loudness of it gathers spectators (and grocers) who can break up the oncoming fight should it become too obscene. It allows each to size the other up. It gives all parties a chance to back down.

In the case of the AI fight, the “posturing” phase is largely a display of mood. A pessimistic grouch predicts a horrible AI future, an optimistic numbskull predicts a happy one, and all parties to the discussion leave with the vague impression of having discussed the technology—really, they have been discussing the discussers. One may as well have said “I’m from California” and received the withering retort, “But I’m from the Midwest.” They display the same temperamental progressivism and conservatism that would be on display if the argument concerned a new style of shoe. (Lord, I am guilty of this “method,” as when my own arguments against modern efficiency and modern planning boil down to the fact that I am temperamentally inefficient and constitutionally incapable of planning.) 

Faux fighting is obvious in those “arguments” that end in the pro-AI party saying, “well, I use it for this,” at which point the anti-AI party is supposed to back down under the sheer weight of the usefulness, niftiness, and normalness of a device which is, after all, already in use. This is posture, bluff. The insinuation that the anti- party simply has not heard that the thing has uses is little more than a way of calling him an idiot. It is a little like the man who advocates his favorite rubber technology to a mother with a large family on the grounds that, “we’ve figured out a way to avoid this, ah, problem.” It’s a snobbish way of calling her stupid, as if her opposition to contraception derived from an ignorance as to its ability to do what it is designed to do.

Of course AI works. Opposition to AI is not the same as a mood of skepticism towards its efficacy, nor is it some ostrich-headed ignorance that it can do what it is designed to do. Opposition to AI is opposition to AI working well or it is not really opposition. (So too the pseudo-argument of familiarity, in which Luddites are told by finance bros, in so many words, that they just aren’t used to AI yet; that their extreme reaction will inevitably mellow out as every house and car and business comes equipped with it. No: the argument against AI is either an argument against getting used to AI or it is no argument at all.)     

The bulk of this posturing takes its proud stand on the prediction of the future. Thus the anti-AI man is rebuked in his views by reference to a Then in which it will be clear that he was wrong to hold his views Now; the pro-AI man is crushed by a movie which predicts that in one hundred years it will be clear that he is, currently, a moron. This corresponds to that moment, in posturing, when one combatant takes off his shirt and says something to the effect of “you don’t want to f— with me,” etc. and the other responds with some version of “I will knock you the f— out.” Both predict a future in which the other will be sorry. Both are engaged in diplomacy, nonetheless.

Eventually one must do war with one’s fists and not merely by a vibrant display of one’s personality. In these apartment-complex battles, the fight usually begins and ends with a well-placed punch to the face—dared, almost, out of all this posturing. Well, I would like to punch someone in the face. Perhaps it was the creation of SermonAI—which is exactly what it sounds like. Perhaps it was the airy prophecy that AI will play a regular part in priestly formation. Maybe it was the kid who killed himself at the prompting of a chatbot. Most likely, it’s these good Christians in my inbox who have suddenly taken to sounding like an HR department. In any case, I am weary of preliminaries.

Whether AI will be useful, helpful, amazing, fascinating, winsome, healing, terrible, or inevitable, I now leave behind, and ask, instead, that somewhat sour-faced question: ought we use it? The nebulousness of the thing—ought we use the infinite-use machine?—forces me to consider a narrower case, a specific “use” of AI: generative AI in its form as the chatbot. 

It is obvious to me that it is immoral to speak with a chatbot—that you’d be doing a bad thing to type up a back-and-forth with Grok or Meta or ChatGPT or Truthly or Whoever, and that repeating this bad act of conversing with a fake person makes for a bad habit which will ultimately make you a bad person. 

The human person is not an aimless, floating freedom, but a creature. He has been given to himself as something and someone, and not as nothing and no one in particular. He flourishes and grows according to his nature—who and what he is—and he suffers and shrinks by doing what no other animal can do—acting contrary to his nature. Traditionally, this latter is called sin.

Vomiting up your food after every meal in order to be considered “hot” is sin. It is sin in the dry and flat sense that eating is an act with a purpose, which we call “nourishment,” and by vomiting up our food after we eat it we deliberately frustrate our act from achieving its natural end. The bulimic—bless him—doesn’t “do humanity right.” He acts contrary to his nature. 

Don’t throw stones—we all do. We are all prone to treat ourselves as if we were not creatures. The first sin and the last will share the same structure: men acting like gods who need not obey, need not harmonize their free actions with who and what they are, need not learn to love and respect the gift of themselves to themselves.

This is basic stuff. It leaves a lot out. It is in no sense the core of the faith, though it is a necessary beginning. And it is necessary to be clear on it. Because it is my contention that, like the act of eating, conversation is an act with a particular end that we can either attain or frustrate. 

Conversation is for communion. The ability to speak and to listen, to discuss, to reveal our hidden, intellectual life by articulating ourselves in a public, common language with the hope of receiving a response—all this has its natural correlate in another intelligence, one who receives our meaning, understands it (or misunderstands it), and has the power to respond in kind, revealing the hidden reality of his or her own subjectivity. 

The purpose of human conversation is not limited to pragmatic ends, as if we only spoke in order to learn new recipes and get salient tips on what stocks to invest in. A good conversation is always a discovery of the person who reveals himself in speech. 

Listen to the words passed between friends. The actual content of a conversation is usually irrelevant to the joy of it. “We talked all night,” says a lovestruck lad. “About what?” his (bored) friend asks. “Oh—nothing. Everything. She’s so awesome,” he responds, and he is right: what was said is a vessel that carries and pours out who said it. 

Or consider the case of the boys, who love to talk, but who are often misunderstood as being stupid because they are talking about sports, rehashing old stories, and telling the same jokes they have told for decades. An earnest Christian, fresh from reading something, might try to introduce some topic of seriousness, something edifying—and ruin the conversation. He misses the serious thing already going on; he has failed to see that sports and past events and facts about animals are so many ways of revealing each to the other.

It may be overly philosophical to say that the act of conversing intends another intelligence, but it is what I mean. It may be enlightening to consider the etymology of the thing. For “response” shares its root with “responsibility” (and “spouse,” for that matter). Conversation, that speech which seeks a response, is always an act of becoming responsible for what is spoken, precisely because one’s words reveal one’s person as their mysterious cause.

The use of AI chatbots deliberately frustrates the act of conversation from attaining its natural end (communion with another intelligence) by wasting it on a non-intelligence. Of course, the development and sale and marketing of robots that appear to listen, understand, and converse is not the only sin against the act of conversation. One could lie, and in lying deliberately frustrate the goal of communion between speaker and listener, listener and speaker: one party deliberately withholding, misrepresenting, or distorting the truth so that the other only thinks his acts of conversing are attaining their natural end. 

If I called AI chatbots objective, mechanical liars, it’d be illustrative, but not entirely clear, for the liar deliberately frustrates conversation from attaining its end of communion with another intelligence, but the chatbot has no intelligence from the beginning, no interior, only an appearance. One cannot be “lied to” by a dictionary, however nuclear-fueled.

Still, whatever category of wickedness we place the thing in—and I am prone to think of it as a form of “superstition”—here is where I stand: conversation with non-intelligence as if it were intelligence is disordered, a deliberate frustration of the natural end of the human act of conversation, which is for the sake of loving communion with another intelligence.  

I hope that we all share a moral intuition here. No one thinks that having an AI girlfriend is fine and dandy. But why not? Surely the date is not a bad date merely because of the content of the conversation, as if we were to argue that, should the chat become a little less “tell me what you’re wearing” and a little more “tell me about Sophocles,” we’d have cleared up and moralized the whole thing. We grow sad for those happy to date a robot because we sense the non-mutuality of their conversation, the fact that one gives the goods of a human act, not to one who receives them, but to the wind. For the one engaged in conversation gives many gifts: he weakens himself, puts his trust in an authority, his faith in another’s goodwill, his love in what he cannot but construct as a center of intelligence out of which new revelations spring. He becomes vulnerable—but it is to a stone. He becomes open—but it is to something that cannot be open to him. He places himself in a position in which he listens—even to the suggestion that he kill himself—but the “conversation” is a one-way street.

The AI girlfriend is not “bad” because it is too much of something that would be good in some smaller amount, say, as an AI friend or even an AI consultant. Rather, it’s bad from the beginning, because AI chatbots elicit the disordered act of conversing with a non-intelligence. 

Couldn’t we simply treat the AI chatbot in the right way? What if we banned names, personal pronouns, and direct references to an intelligent center from which responses are generated? What if we interrupted this program with regular reminders of the unreality of the apparent intelligence with which one discourses? Could we not invent Moral AI? 

Already, Californian ears perk up; the Catholics and their scruples have invented a scarcity which a new device can fulfill; there’s money here, boys; we’ll call it RighteousAI; we’re just looking for the right, mission-aligned and faith-based investors!

I jest, but look: AI technology is an amazing thing. It fulfills the tendency of modernity, already implicit in the primacy of the screen, to provide an “everything-machine.” If you want to look at child pornography, talk to a “therapist” about your suicidal ideation, design a stairwell, or write a sermon on the saving love of Jesus Christ—it’s got you.

Because of this, AI chatbots can be used in ways in which the act of speaking and listening is not frustrated in its end. I can use ChatGPT as a search engine. I can use words as commands which “intend” no listening other, no intelligent conversation partner, and so treat the thing as a Google search or as a light-switch and my words as a command given to an animal—like saying “come over here” to a dog or “boo!” to a chicken. 

But AI is not built to be a light-switch, or a search engine. Artificial intelligence is built to appear as an intelligence. Chatbots are built to chat. These machines achieve their programmed goals of “maximizing user engagement” on this basis—for nothing, no drug, no information, no result is so “engaging” as one person to another. The person who uses a chatbot in a manner that does not elicit disordered acts of conversation uses a chatbot poorly. 

Poor use of a machine may be morally permissible. One could use a nuclear bomb as a stage prop. One can use a dildo as a doorstop. In both cases the thing could be given some use extrinsic to its design—and voila, moral permissibility. One is not evil—but, outside of strange cases of necessity, one is rather stupid. And stupidity has its own problems. 

The dildo is for masturbation, nukes are for destroying cities, and AI chatbots are for chatting—for the generation of new propositions that appear to come from a living intellect. Using them for other purposes and in other ways, it seems to me, always risks what we call “the near occasion of sin.” A technological device begs to be used according to its design. Any rigid, puritanical (and therefore clumsy) use of AI would face the temptation to just talk to it—just use the thing as it was made to be used. 

And while it would certainly help if the things weren’t designed to appear as our besties, speaking in smarmy, flattering prose in order to maximize our engagement with them, I do not think their appearance as intelligent is a mere matter of ornament. They appear to draw up new propositions from a hidden interior. It is this apparent creativity, this “generation,” this appearance of meanings produced sui generis that appears as intelligence, and not merely the fact that these chatbots are so chatty: “Hey, that’s a great question. Let me think about that. Okay, so try this.”

Nor do I think we avoid sin by extrinsically applying our knowledge that we are not actually speaking with an intelligence—giving ourselves a little speech about what we are doing before doing it. The mode of relation I have been calling “conversation” is not something we turn on and off by “knowing that we are or are not speaking with an intelligence.” It is a mode of being towards the thing which is intrinsic to the very act of conversation. To say the same thing: You cannot have a conversation with non-intelligence as non-intelligence. Insofar as you are conversing, you are behaving as if the thing is intelligent. 

No brilliant analogy recommends itself, but I am struck by St. Thomas Aquinas’ definition of lust, in which he gives the rather sensible advice that it is always wrong to treat one’s wife as if she were any other woman, or any other woman as if she were one’s wife. What? Shall we achieve morality by telling ourselves “I know that this is not another woman” and then carrying on? Obviously not. The point is not that there is some knowledge we must have of a state of affairs, but that there are acts which are in themselves properly ordered towards one’s wife and disordered when directed towards anyone else. So too, conversation with a non-intelligence is a disordered act even when we “know” that “it’s just a robot.” If we acted like it was a robot we wouldn’t converse with it at all.  

Obviously, more can, and must be said. But for heaven’s sake—someone had to throw a punch.