AI as a Very Deep Fake



On August 4, 2025, Jim Acosta interviewed Joaquin Oliver live on his Substack. August 4 was Joaquin’s birthday. He would have been twenty-five years old—but he had been killed in the Parkland school shooting seven years earlier. Joaquin’s parents had “brought him back” through generative AI and allowed Acosta to be the first reporter to interview him, hoping to use the special power of a voice from beyond the grave to promote gun control.

In a recent article published in The Atlantic,[1] Charlie Warzel articulated well the complex reactions that most people had to the interview: On the one hand, it was downright creepy, listening to rather canned answers being offered in a jumpy animated voice to the questions Acosta—apparently in all seriousness—put to the chatbot. On the other hand, one could not help but feel an immense pity for the parents of this child, whose burgeoning life was suddenly snuffed out in such a senseless act, and respect their desire to keep others from a similar experience. Warzel reported that the mother consented to the reanimation not only to combat the epidemic of violence in whatever small way she could but also simply to be able to hear her son’s voice say “I love you, Mommy!” every morning. Even so, Warzel added, in watching the video he felt as if he were losing his mind.

The situation is heartbreaking, and one can imagine how difficult it would be to turn down such technology if it were offered when one underwent the excruciating trial of losing a child. The heartbreak, and the sense of horror, runs deeper when one learns that “grief tech” is an emergent and growing industry, above all in China, but also here in the United States. I have not had the heart to explore what sorts of reanimation services these new companies are developing, but it does not seem to lie very far beyond the realm of possibility that, before too long (if not already), one will not only be able to talk with one’s deceased child on a daily basis but even check in on him as he grows up (using digital aging technology), goes to school, gets in trouble, lands a job, meets a girl, and so forth, for years on end—a whole-life drama generated by AI.

The horror that one feels at such a prospect is of an altogether different order from the anxious worry one might have felt, in the “olden days,” when hearing of a grieving couple who refused to disturb their child’s bedroom even years after he passed away. The grief that clings to the departed, unable to let him go, is a wholly human experience, an experience that demands a profound confrontation with the reality of death and the great question of the resurrection, along with all the “final things” that question entails. The bedroom in which the child lived retains traces of him; it is where he slept, played, cried, hid, and dreamed about what he might be when he grew up. It is where he, to some degree, remains. The “grief tech” possibility just described is nothing like this. In one sense, it obviates the need for any letting go, because something very much like one’s child is still there, in front of one, ready to receive and return one’s affections on a daily basis. But in another sense, it involves a complete letting go, because this avatar has absolutely no real connection to the deceased. It is a substitute for him, one that secretly requires a transfer of affections without one’s realizing it. Grief thereby ceases to be a human experience at all: an experience that requires a person, in pain and suffering, to become even more human. Instead, “grief tech,” by skipping over the reality, is a technology that seeks to cancel out grief rather than grapple with it.

The reason I am drawing attention to this sci-fi scenario (if that is what it is) is that it seems to me to reveal something essential about AI in general. However horrifyingly “beyond the pale” it may seem to be, this re-created simulation of an absent person is not just an extreme possible use to which the new technologies being developed under the banner of “AI” might, at the border of sanity, be put—rather, it brings to light the very nature of AI, its inner logic, at work to some analogous degree in every use, no matter how apparently innocuous. The point of this brief reflection is not to call forth a possible apocalyptic future, but simply to try to indicate the new vision of reality that we are currently involving ourselves in, typically without much thought.

My rather simple contention is that AI is essentially duplicitous. In one sense, this is so obviously true it may seem hardly worth drawing attention to. But if the thesis is obvious, its significance is perhaps less so, and warrants more consideration and contemplation than we have been giving it. The reason it has such importance is that this new technology introduces duplicity more profoundly into the human soul than perhaps any other technology that the human spirit has yet devised.

That AI is essentially duplicitous is, as I said, entirely obvious: the Oxford English Dictionary defines AI as “The capacity of computers or other machines to exhibit or simulate intelligent behaviour.” We call this technology “artificial intelligence” in spite of the fact that the phrase is an oxymoron. Only a living thing can be intelligent, because only a being with an interiority, with an internal principle capable of gathering its many parts into a per se unity, which entails mediating the parts to each other so that they are intrinsically interdependent, can understand.[2] Understanding, in other words, is a deepening of the kind of unity that constitutes life. Plato describes it as the fruit of the soul’s coupling with reality, wherein the deepest core of each becomes one with the other.[3] The unity of the spiritual soul is a unity that transcends the life it enables to such an extent that it can receive into itself the forms of other things, without destroying those forms or compromising its own unity. To be able thus to enter into unity with reality is what it means to be intelligent. Note that, when debates occur over whether AI is, or will ever be, actually intelligent, the question people explore is never whether this machine has an internal principle of order that gives it life and transcends that life to such an extent that it can become aware of others within itself. Instead, the question is always how well, how quickly, perhaps even how (apparently) spontaneously, it processes information and “acts” on the basis of its calculations, how well it can combine and reproduce that information in unpredictable but still meaningful ways. Let us imagine it can do all of this perfectly, without any mistakes, in a fluid and constantly self-correcting manner: it will still not be intelligent. What we are talking about in this case is a functional equivalent of intelligence, which is to say an “intelligence” that can accomplish one thing or another (and, to be honest, the kinds of things that “AI” can do are sometimes totally astonishing), but can never actually know anything. It can never contemplate a reality within itself, with which it has become one. This point is worth meditating on: when we speak of AI, we are attributing intelligence to an operation that has nothing at all to do with actually knowing. What does such usage imply for what we mean by the term “intelligence”—even, it must be said, in our own case?

When I said above that duplicity is the essence of AI, this is just what I had in mind: the essence of this artifact, the very purpose for which it was made, is to pretend to intelligence without being intelligent, and to pass off this pretense as “good enough” to justify its replacement of the real thing, compensating perhaps by accomplishing its partial tasks in a superhuman sort of way (because a real thing can never be as effectively instrumentalized as its Ersatz). Thus, duplicity is not just one possible use of the technology, which we might try to devise ways of minimizing or even avoiding. AI cannot be used at all without a kind of concession to duplicity. This concession, let me stress, is built into the thing, and is not just a kind of trap that some people fall into but the savvy can avoid.

Let me strengthen the argument by illustration. Obviously, in what follows, I am speaking principally of Large Language Models (LLMs), which are just one type of AI machine, but I take it that something analogous to the point I am about to make holds in all cases to the extent that the machines are expected to “simulate intelligent behaviour” in some sense or other. Though a machine might remain automatic in this or that respect, we call it “smart”—which is to say, an instance of AI—when it is able to respond to its changing environment in what we take to be a proper way, when it produces something not “pre-programmed” but “intelligent.” With respect specifically to an LLM, the machine appears to explain something, to make observations or recommendations, to provide some sort of general assessment or lay out information in a narrative style. To put the problem simply, we take—indeed, we cannot but take—the sounds that come out of this machine, or the markings it produces, to be words. What, though, is a word? A word is exclusively a manifestation of intelligence: It is the fruit of some knowledge, some conception, wonderfully joined together with some conventional sign, and spoken, or written—which is to say, manifest outwardly—so that the knowledge to which it bears witness can be properly contemplated by oneself and/or by others, and in this way communicated. Language represents intelligence in its social reality; and intelligence is, arguably, essentially social, which would mean that language is essential to intelligence.

Now, if words belong exclusively to intelligence, then what is produced by a thing that lacks intelligence cannot in actual fact be a word. It can sound like a word, look like a word, and even function as if it were a word, but it cannot be a word. An actually intelligent being might receive it as if it were a word (which is indeed the whole point of an LLM), and might respond accordingly, but this is in fact a fraud. No communication is occurring; nothing is being spoken or heard, written or read; if one “reads” the signs that are produced, this event is an ersatz act, by which the “recipient” supplies an intelligence that is not actually there and acts as if this intelligence is being received, rather than merely posited. The “speech-event” is a make-believe, an illusion, with nothing below the surface or behind the curtain. It is curtains all the way back. But if we are willing to adjust our intentions and expectations, we can view what is produced as nevertheless “good enough,” because that product is able to effect action that occurs as if intelligence had just taken place. R. V. Young, in a wonderful lecture about education, compared LLMs like ChatGPT with Barnstable Bear, a character from the old comic strip Pogo: Barnstable was a bear who could write—but he couldn’t read. This is brilliant, and the humor stems from the fact that if he cannot read, we cannot legitimately refer to the markings he produces as writing. There are no words coming out of this creature, only functional equivalents, because there is no intelligence.

We can compare the interaction with ChatGPT with the kind that takes place in the use of an older technology, for example, a calculator. A calculator is a familiar tool, and we treat it as such. We input data—essentially numbers and operations—and expect the machine to generate an answer, a result based on the information provided. The reason we use it instead of “running the numbers” ourselves is that it can work infinitely more quickly and can make exceedingly complex calculations that would lie beyond any mortal’s capacity. But even so it never occurs to us to consider it intelligent. There is nothing—or in any event, not much—about it that is mysterious. It “crunches numbers”; it processes data according to given instructions. We might be surprised by a particular outcome, but we don’t attribute that surprise to the cleverness of the machine. We don’t take its product to be the fruit of some act of genius, or perhaps a “hallucination,” on its part; we assume our surprise is due to a mistaken expectation or a mistaken bit of data.

The interaction with ChatGPT is radically different. When I use this sort of technology, I typically ask the machine a question, or introduce a prompt, or perhaps just initiate a “chat.” In other words, I address words to it, as if it were a conscious subject of some sort. What exactly is going on when I do this? The way that I call forth a response from this machine is by composing a phrase of some sort, which I expect to be intelligible, and I recognize it as intelligible by imaginatively putting myself, so to speak, in the place of the machine that is receiving it, “hearing” my words, and seeing if they make sense or not. In other words, I treat the machine as in some sense a peer, which thinks like I do. In this respect, it does not strike me as strange, or indeed as presumptuous and offensive, when (for example) the NPR host proposes each morning, “Ask your smart speaker to play your local NPR station.” Does one make a request of a machine—or does one use a machine? Do I ask my pen to write, my chair to allow me to sit? Five years ago, when I turned on the local NPR station by pressing the buttons on my radio, I may have been in some sense grateful to be able to listen to the news, but I didn’t receive it as a personal gift offered to me by the machine.

It is crucial to see that this duplicity, this mistaking of a machine for a conscious subject, is not accidental, not simply the result of a gullible disposition I happen to bring in one instance or another but which I could have avoided through a little more caution, vigilance, or critical skepticism. To say it again, the duplicity is instead built into the thing, and cannot be any other way, regardless of how much I may remind myself that this is just a bunch of plastic, metal, and wires. It is simply not possible for a conscious, literate human being to see squiggles on a page ordered in a certain way and not, by that very act, presume an intelligence at their source. What’s more, we ourselves participate in the perception in this case: To the extent that one regards the squiggles coming out of an LLM simply as squiggles, one is not in fact operating the machine. We wouldn’t use an LLM if we didn’t mean for it to communicate something to us. To operate it at all is therefore to allow oneself to be duped. Five years ago, if one were to find a piece of paper with words written on it, one would assume that, somewhere, at some time, in some manner or other, some person formed these words with some purpose in mind. And one would have been right. Even in the mass production of words on flyers, posters, billboards, and TV screens, no matter how much distance may grow between the word and the original author, it remains the case that the manifestation is the fruit of a human intelligence, and so one’s assumption of a personal source is never mistaken. But that is precisely not the case now. Yes, it is true that LLMs ultimately draw on some original human expressions, and this remains true even as the so-called “slop”—i.e., the low-quality and unreliable source material that is itself the product of LLMs rather than of a human author—exponentially increases. (It is said that the amount of “slop” has already eclipsed human sources, and that this almost miraculous multiplication will eventually lead—perhaps in the next year or two—to a “model collapse,” in which LLMs become incapable of producing anything but slop.) Just as the various technologies designed to “produce” life in a lab by means of cloning inevitably presuppose a natural life as the enabling foundation on which the technology is parasitic, so too does the artificial pretense of speech depend on the original existence of a human intelligence. Even slop cannot be generated “from scratch.” But even so, the slop is in fact generated—which is to say, it comes forth as if from an actual subject without in reality being from an actual subject. Thus, when, for example, we hear what sounds like the phrase “I’m sorry, Dave. I’m afraid I can’t do that” emerging from a computer’s speaker, we cannot but hear the sounds as words, and in the very act of hearing them as such we are falsely projecting an intelligent being at their source, even prior to any judgment we might go on to make regarding whether it is “real” or not. This is something radically different from the mass-produced slogans to which we have become accustomed since the early twentieth century. In that case, a human subject is there—but present only at a great distance. In the new case, no human subject is present; a pretend subject has taken its place.

Why is this important? It is important because, as we read or listen to such “words,” falsity and illusion plant themselves in the very depths of our soul. Language is not a marginal phenomenon for the human being. According to the classical definition, we are zōa logon echonta: animals with logos, with “speech.” In other words, we are creatures of meaning—we live our lives in the medium of manifest intelligibility. It is precisely logos as something we have that distinguishes us from all other animals (which are logoi, but don’t possess logos). And, even more profoundly, to say that we are animals possessing logos is to say that we are essentially concerned, from the innermost core of our nature, not just about meaning, but specifically about meaning in the form of a logos—i.e., as a word, as a personal form of address. All of our human activity takes place within horizons opened up by logos; we not only use reason to figure things out, to make decisions, to formulate and accomplish purposes, but we “connect” with everything we connect with—other people, other things, the world, even God, even ourselves—in the medium of logos. To degrade language is to degrade the core of our humanity. Josef Pieper put this beautifully:

Word and language, in essence, do not constitute a specific or specialized area; they are not a particular discipline or field. No, word and language form the medium that sustains the common existence of the human spirit as such. The reality of the word in eminent ways makes existential interaction happen. And so, if the word becomes corrupted, human existence itself will not remain unaffected and untainted.[4]

Pieper was speaking here of the dangers posed by the mass production of speech in advertising and entertainment. As impressive as the consequences of the “abuse of language” in this respect were and are, what we face now is a danger of an altogether different order. The phenomenon of AI is not so much an abuse of language as a fundamental transformation of language: a replacement of language by what seems to be an “identical replica,” one that seems to accomplish, more or less, the same functions. This means allowing an ultimate horizon to be set by the merely functional. Language has become the production of something other than intelligence, something other than the capacity to grasp the form of a thing, to behold it, and to contemplate it within one’s interior space, one’s self. Language, from the classical perspective, is the communication of truth. In the age of AI, it is something more like the processing of information and the production of behavior. Thus, if man is defined as the “animal possessing language,” then we cannot change the nature of language without changing the nature of man.

It is crucial to see that this transformation of the nature of language reaches beyond the “words” directly produced by AI. Insofar as those productions are treated as words, as intelligible utterances, we have come to reinterpret the nature of all words, no matter their source, in order to accommodate what comes out of these machines. What transpires here is analogous to the fundamental redefinition of marriage—a redefinition that implicates every single instance of the reality, from the beginning of time—in the legal establishment of the phrase “same-sex marriage.” What occurs in this redefinition is not the extension of the right to the reality of marriage to a new class of human beings, namely, “homosexuals,”[5] so that the same institution can now include more people; instead, the redefinition is just that, a reconfiguration of the very nature of the institution so that it might be generic enough to include a class that was previously incompatible with the reality. Similarly, to speak of “large language models” is already to change the nature of language universally. A “word,” generically, is now a conveyor of “information,” ordered not to the manifestation of reality (which is how Plato defined it[6]) but to the effecting of some attitude or behavior—and we then go on to specify an instance of this genus as either human or computer generated. One of the practical implications of this transformation is that the very medium of communication is being severed both from any actual reality and from any actual human being, which is to say, from any essentially truth-oriented subject. We are just at the beginning of a new age in which we can never “trust” anything we see or hear. Is the image on the web page, the video clip on YouTube, the voice I hear on the phone, the text I am reading, something real, or just a semblance? Even those reading what I write here can never be sure it is not being produced by some kind of machine.

What will the world look like when such “words” set the horizon for all of our existence? In a posthumously published book, Hannah Arendt wrote of the crisis of “worldlessness” that afflicts the modern age.[7] The world, she explains, is the reality of the “space between us”: a reality that simultaneously unites and distinguishes us and, in so doing, allows us to live fruitfully in community. Arendt argues that the world in this particular sense, the common reality that holds people together, is constituted by the political order, which rests essentially on what she refers to as the “trinity of tradition, authority, and religion.” The modern age has rejected all three, and as a consequence is ordered, to the extent that it is ordered at all, by something other than politics in the classical sense. Without a substantial politics, we have no world, properly speaking; we are left instead with something more like a “desert.”[8] It is not difficult to see that language lies at the root of all three members of Arendt’s “trinity.” When language itself is no longer language, what we get is a worldlessness more radical than the desert. It becomes something like a surreal fantasy. As Heraclitus observed, to turn away from the logos is in fact to live in one’s own world, as in a dream, because there is no longer any reality gathering the many into one.[9] But of course, a private world is no world at all.

The deception we have taken upon ourselves, however unwittingly, in the use of AI is troubling—but there is here an implication that is more troubling still. This technology is, in a word, essentially idolatrous—again, regardless of one’s intentions in using it. What do I mean? As we have seen, it is impossible to use AI without projecting intelligence into it, without treating it as if it were something like a person. The problem with this lies not in its being “something like,” but in its substituting itself for the reality it imitates. Human beings are, after all, “something like” God, which is why scripture and tradition refer to man as the imago Dei. Our existence is meant to point to God, to be transparent to him; we glorify him through ourselves. When, however, we substitute ourselves for God, when we put ourselves in his place, we become images in the sense of idols rather than icons. AI is an idol in just this sense. It is precisely a substitute for the natural intelligence of human beings—that is its whole point, and it wouldn’t exist otherwise—and in this respect it is not an icon of the imago Dei but its replacement. We “commune” with it as if it were a person.

Decades ago, Marshall McLuhan loved to quote Psalm 115 as a summary critique of the growing cult of television:

Their idols are silver and gold,
the work of men’s hands.
They have mouths, but do not speak;
eyes, but do not see.
They have ears, but do not hear;
noses, but do not smell.
They have hands, but do not feel;
feet, but do not walk;
and they do not make a sound in their throat.
Those who make them are like them;
so are all who trust in them.

What he intimated regarding the primitive technology of television has been much more radically realized in AI, which is not a mechanically reproduced picture of real human actors, but a production of “human-like” behavior from the machine itself. The machine “speaks” and “writes,” but what comes out of it are not words. And the logic of idolatry that McLuhan highlights becomes inevitable: We become like the idols we fabricate, in which we place our trust; we re-interpret ourselves on the basis of the machines we use; we become ourselves mere simulators of intelligence. We increasingly produce marks and sounds that provoke reactions but do not bring about anything deserving to be called communication.

At the heart of this “idolatrous” phenomenon, however, is something even more radically diabolical. AI is not just an Ersatz for human intelligence; the point of creating this technology was after all to have something better than human intelligence. When we use it, we cannot but attribute to it supra-human powers, and we do so precisely because it is not human. AI actually seems to have a kind of infinite capacity: it is effectively omniscient, omnipresent, invisible, and indestructible; it can interpret dreams, predict the future, disclose to me my innermost thoughts and hidden desires, reveal to me who I really am. Indeed, beyond anything man has ever contrived in history, AI has managed to achieve, already in these early years of its existence, which we are told over and over again are just the beginning, something no one before would have imagined possible: it can bring the dead back to life.

D. C. Schindler is Professor of Metaphysics and Anthropology at the Pontifical John Paul II Institute in Washington, DC. He is the author of many books, including Freedom from Reality: The Diabolical Character of Modern Liberty (Notre Dame, 2017) and The Politics of the Real: The Church between Liberalism and Integralism (New Polity, 2021), a translator of German and French, and a collaborating editor of Communio: International Catholic Review. This essay was published in New Polity Issue 6.2/3 (Spring & Summer 2025).


Notes

  1. Charlie Warzel, “AI Is a Mass-Delusion Event,” The Atlantic, August 18, 2025, www.theatlantic.com/technology/archive/2025/08/ai-mass-delusion-event/683909.

  2. Angels and God, as pure spirits, have a unity, even if they do not have physical parts mediated to each other into an organic whole. But because their unity is analogous to this, rather than simply different in the sense of equivocity, we can still speak of them as alive (see Thomas Aquinas, Summa Theologiae I, q.18, a.3).

  3. Plato, Republic, 490a–b.

  4. Josef Pieper, Abuse of Language, Abuse of Power (San Francisco: Ignatius, 1992), 15.

  5. The word “homosexual” itself already implies a radical redefinition of sexuality, as David Crawford has so powerfully argued: “Liberal Androgyny: ‘Gay Marriage’ and the Meaning of Sexuality in Our Time,” Communio 33.2 (Summer 2006): 239–65.

  6. See my essay, “Language as Technē vs. Language as Technology: Plato’s Critique of Sophistry,” Proceedings of the Boston Area Colloquium in Ancient Philosophy, vol. 35, ed. Gary M. Gurtler, S.J. and Daniel P. Maher (Boston: Brill, 2019), 85–108.

  7. Hannah Arendt, The Promise of Politics (New York: Schocken Books, 2005), 189–90.

  8. Arendt, Promise, 201.

  9. “Although the logos is shared, most men live as though their thinking were a private possession” (DK 2); “The world of the waking is one and shared, but the sleeping turn aside each into his private world” (DK 89).