The Vatican has issued an official document on AI. The Church is willing, in the face of an aggressively rising transhumanistic tide, to state the obvious: machines do not—and cannot—do what humans do or be who humans are.
There are many—myself included—who are tempted to give a knee-jerk reaction, one that goes something like this: “well, clearly AI can’t be human—it can’t feel the way humans do.” This is an appeal to the heart, and it is a sound appeal. Despite pop culture depictions of AI in love—think Her or Blade Runner 2049—any simulacrum of emotion produced by AI is just that: a simulacrum. In Antiqua et Nova (A&N), wisdom “Old and New,” the Dicastery for the Doctrine of the Faith argues that, “despite the use of anthropomorphic language, no AI application can genuinely experience empathy. Emotions cannot be reduced to facial expressions or phrases generated in response to prompts … AI … cannot replicate the eminently personal and relational nature of authentic empathy” (§61).
But why not? Paul Griffiths argues that we should acknowledge the fact that AI really does learn, look, sound, and behave a lot like humans appear to learn, look, sound, and behave. This is because writing and speaking really do produce a thing, an objective assemblage of words, detached from their author’s subjectivity, just “lying there,” as it were: “So taken, ‘writing’ refers to an act that artificial intelligences and human persons both perform, as women and men both also do.” Nothing is to be gained from denying this resemblance to human intelligence. (Neither should we deny that artificial intelligence is a proficient predictor, convincingly paralleling the results of humans who engage in prediction games.)
The Christian tradition—especially in the Patristic literature—identifies the imago Dei with intelligence, with rationality par excellence. If AI bears all these resemblances to human intelligence, how is AI’s intelligence different from human intelligence? A&N points out that artificial intelligence “operates by performing tasks, achieving goals, or making decisions based on quantitative data and computational logic”; therefore, “it remains fundamentally confined to a logical-mathematical framework, which imposes inherent limitations” (§30). Artificial intelligence, then, is limited to the domain of the quantifiable, measurable. Human intelligence, on the other hand, is able to transcend the quantifiable and verifiable:
“Describing the human person as a ‘rational’ being does not reduce the person to a specific mode of thought; rather … the ‘term “rational” encompasses all the capacities of the human person,’ including those related to ‘knowing and understanding, as well as those of willing, loving, choosing, and desiring; it also includes all corporeal functions closely related to these abilities’” (§15).
In other words: while artificial intelligence operates in the disembodied, crystalline realm of pure mathematics, human beings live in the world, kicking and screaming, loving and hating. And this informs what we know and so love: “A proper understanding of human intelligence, therefore, cannot be reduced to the mere acquisition of facts or the ability to perform specific tasks” (§29). It is because human intelligence is essentially different from artificial intelligence that “human intelligence is not primarily about completing functional tasks but about understanding and actively engaging with reality in all its dimensions” (§33).
A&N draws deeply from the Church’s intellectual tradition. It doesn’t exactly say anything ‘new.’ The Church is addressing a new topic—AI—but the stance taken here simply builds upon earlier pronouncements, like those made in Laborem Exercens, Laudato Si’, Dignitas Infinita, and the documents of Vatican II (especially Gaudium et Spes). All these documents presuppose man as an embodied reality, a being whose labor and bodily interaction with the world around him are intrinsic to his knowledge.
A&N builds upon Pope Francis’ encyclical, Dilexit Nos, which warns against imagining the heart, with its feeling for the value and importance of things, as something extrinsic to the head, “as if affectivity and practice were merely the effects of—and dependent upon—the data of knowledge.” (DN §24) Coming to know something, for man, is never merely a computational reality. Because the true and the good are intrinsically tied together, we have to say, with St. Augustine, that our knowing is always also an act of love.
“Dante, upon reaching the highest heaven in Paradiso, testifies that the culmination of this intellectual delight is found in the ‘light intellectual full of love, love of true good filled with joy, joy which transcends every sweetness.’ A proper understanding of human intelligence, therefore, cannot be reduced to the mere acquisition of facts or the ability to perform specific tasks” (§§28–29).
The problem for AI, then, according to A&N, is that it lacks this capacity for knowing-as-loving. To say that AI can feel emotion—that it can love—would be to reduce love to the algorithmically-driven, mechanically-governed, computational operations of a machine responding to input data. If you want to hold a desperately physicalist cosmology, this is, de facto, all that love is, and can ever be, for you: mere data.
Moreover, AI cannot ‘desire’—or even work toward—anything truly transcendent. For the Church, the human person is “both body and soul—deeply connected to this world and yet transcending it” (§13). Pascal, as Francis has already noted, perceived the “grandeur and misery of man.” Man is a weak, pathetic thing—yet created for a destiny beyond his dreams. Gaudium et Spes, though, tells us that Christ “fully reveals man to man himself and makes his supreme calling clear” (§22)—Christ demonstrates the transcendent telos of man as a being made for something more than this world.
What are the consequences if we lose this distinction between knowing-as-loving and knowing as the objectified results of knowing, that is, knowing as data? What happens if we reduce our own intelligence to mere calculation, devoid of morality?
First, though all tech ought to “invariably work to benefit the human person” (§48), it probably will not. Many have noted what A&N echoes: “due to the concentration of AI applications in the hands of a few corporations—only those large companies would benefit” from anything decent that gets produced (§64). While many AI companies tell us their tech will improve human life, “current approaches to the technology can paradoxically deskill workers” and “replace human workers rather than complement them” (§§67–68). The prophet Jacques Ellul was right: “Modern man believes he is free because he can choose between hundreds of products, yet he is enslaved by the very system that offers those choices.” (Not to mention the hidden energy costs of AI that put more power into the hands of energy producers.)
A&N also points out that, if you blur the line between AI and human intelligence—which is inseparable from its embodied self—you get health-care systems in which “AI is used not to enhance but to replace the relationship between patients and healthcare providers—leaving patients to interact with a machine rather than a human being” (§73). If a patient is seen as mere data, (s)he won’t get the care that is needed.
Regarding AI’s impact on education, A&N states the obvious: generative AI “merely provides answers instead of prompting students to arrive at answers themselves or write text for themselves” (§82). In other words, AI produces idiotic, short-sighted individuals. And, if human intelligence is reduced to strips of data, any naturally intelligent student is taught merely to manipulate nature. (S)he is taught “the ‘language of the head,’” not “the ‘language of the heart’”—that is, values (§81). C.S. Lewis saw this all the way back in the 1940s: we teach how, but not why (or why not); we “remove the organ”—the heart—“and demand the function”: love.
A&N could have given one question more attention: the relationship between AI and culture. For this, we can turn to Swiss theologian Hans Urs von Balthasar.
Balthasar called rapid technological development the spirit of the age: “Our age is primarily characterized by the successes of technology” (479).[1] He saw the socioeconomic impact of automation: “Ever since Genesis, man has been called to shape the earth after his own likeness, which is the likeness of God. … What, however, if the organization of human toil” is used to “increase a power that is exercised, not by workers, but by those who reap its fruits?” (482).
Humans have a divine imperative to cultivate nature—but, after the Fall, it seems like the ability to equitably distribute the goods of the earth is doomed from the get-go. Work is no longer simply a good thing. Now, “work, as such, aims at gaining power over nature, and the will-to-power increases with each success,” and the “best that can be achieved is a temporary balance … between a bearable poverty and a comfortable existence” (482). Tech-nique and tech-nology are tainted and cursed in man’s hands.
The culture in which humans grow up determines their attitudes towards tech—and, vice versa, the tech humans use shapes the culture in which they live. Balthasar acknowledges the goods of tech, but wonders if it “creates favorable conditions for a ‘cultural progress’ that ‘humanizes social life both in the family and in the whole civic community’” (479).[2] He believes that “the cultural goods that are now available to be foisted upon the poor (for which they themselves are striving) originate in that very realm of technology that is characterized by an instability and a mass culture (or nonculture!) that are destructive of the person” (483).
What does this mean? That the very tech we use to produce surplus tends to create monolithic, selfish, non-creative beings and so a decadent, boring culture. If generative AI is “eroding their [students’] ability to perform some skills independently” (§81), Balthasar contends that this is part and parcel of a “tendency to uniformity that characterizes the technologizing of the earth” (480). When men become decadent, “will they be moved to regard their manufactured cultural goods ‘as flowing from God’s hand,’ to be used and enjoyed ‘in a spirit of poverty and freedom?’” (480–481).[3]
We can answer just by looking around: no. What’s the solution, then? And how was Balthasar able to predict this homogenization of today’s culture?
For Balthasar, it’s a return to anthropology. Christ took on human flesh. In doing so, he reveals the human to him/herself: we are beings whose bodily existence has intrinsic worth. We are meant to know the world in and through the body. Any idea of the human that blurs the lines between artificial and human intelligences; any idea of the human that reduces its intelligence to mathematical data; any transhumanism that dreams of the ghost abandoning the machine—anything, in short, which deviates from the paradigm of Christ is insufficient. An incarnational anthropology which attends to the messy and gritty details of life (perhaps ironically) distinguishes human intelligence from artificial intelligence.
In WALL-E, the autopilot of the Axiom does everything it can to keep the humans away from organic life; in fact, it is a human’s physical contact with a plant that wakes humanity up and catalyzes their return to earth. We can hope for something similar, that attending to the embodied nature of human intelligence will spark a renewal of creative cultural energy, energy which will produce individuals who stand in “reverence and loving obedience before the Lord,” who “treats each one of us as a ‘Thou,’ always and forever.”[4]
Notes
1. Hans Urs von Balthasar, Theo-Drama: Theological Dramatic Theory, vol. 4: The Action, trans. Graham Harrison (Ignatius, 1994).
2. Gaudium et Spes §53.
3. Gaudium et Spes §37.
4. Dilexit Nos §§25, 27.