“What is going to be created will effectively be a god,” claims Anthony Levandowski, speaking about artificial intelligence (AI). Levandowski is founder of Way of the Future, a church for people who are “interested in the worship of a Godhead based on AI.” He is quick to clarify, “It’s not a god in the sense that it makes lightning or causes hurricanes. But if there is something a billion times smarter than the smartest human, what else are you going to call it?”1
What indeed.
And Levandowski is not alone. Many computer engineers, programmers, and scientists have proclaimed AI as a singular technological breakthrough that in due course will allow computers to rival or surpass the human mind. The standard lexicon of AI now includes cerebral terms like “deep learning,” “neural network,” and “cognitive computing,” and some AI disciples have described advanced AI models as sentient, self-aware, and even omniscient. They foresee a creature greater than its creator—an alarming aspiration that calls to mind the ancient cry of the Babylonians: “Come, let us make a city and a tower, the top whereof may reach to heaven” (Gen. 11:4).2
The temptation to overstate AI’s capabilities is powerful, and it appears to grow as each successive advancement is introduced—from IBM’s Deep Blue supercomputer, to Siri and Alexa on our cellphones, to ChatGPT’s generative AI model, to the newest quantum AI tools and agentic AI processes. These capability leaps have been truly astonishing, and given the rapid pace of technological growth, it would seem that the heavens are the limit for AI. And yet the prognosticators of its limitless potential keep running up against a hard truth they cannot overcome: AI’s outputs are entirely the result of how the model is coded and trained by humans.
Without doubt, AI programs have an astounding ability to perform systematic tasks, calculate mathematical results, and derive deductive and inductive answers at tremendous speed and scale by utilizing quantitative data and computational logic. But the fact remains that these models and their algorithms are ultimately composed of “0s and 1s” and provide answers based on how they are programmed and trained. In the 1983 film WarGames, Matthew Broderick’s character, David Lightman, makes this point when asked how NORAD’s supercomputer is able to pose questions. He replies matter-of-factly, “It’ll ask you whatever it’s programmed to ask you.” Exactly so, and what was true in 1983 is true today. Indeed, without proper coding and training, the outcomes of AI are fair-to-middling at best and dismal at worst. One recent study found that when ChatGPT attempted hard, modern coding problems it presumably had not been trained to handle, its success rate fell as low as 0.66%.3 Similarly, Google AI and ChatGPT often give laughable answers to easy questions, simply because their models lack the programming or training needed to respond to the user’s query correctly.
This observation, however, raises a bigger question: what if we were to give the computer the right code, the right data, and the right training? Might it then surpass the human mind? As a recent Vatican letter on AI explains, such a possibility rests on the faulty assumption that “the activities characteristic of the human mind can be broken down into digitized steps that machines can replicate.” As we learn from natural law and scholasticism, the human mind is not so simplistic, nor is it purely computational. Its operations are in some respects quantifiable, but in most they are intangible and unmeasurable. This is because the mind is not merely a complex function but a faculty of the human soul.4
This critical point allows us to see where AI falls short. To begin with, although AI can make decisions, it has no free will; unlike humans, it can only derive its decisions from inputted data and algorithmic calculations, and it cannot decide anything in a nondeterministic manner. AI also lacks understanding, which is a critical component of reasoning. Computers can calculate, but they cannot understand in the human sense. It is true that more advanced AI models, drawing on millions or billions of data points, have a higher probability of simulating human understanding by yielding answers similar to our own—but even then, the computer is still computing, not reasoning. Finally, AI is missing the metaphysical elements that we possess in the irrational part of our soul, including emotions and appetites. Computers cannot love, hate, or fear; they cannot discern or desire good and evil. Consequently, they lack the “abstraction, emotions, creativity, and the aesthetic, moral, and religious sensibilities” that all men have as part of their nature.5 Only a man, not a machine, can pursue happiness and seek the true, the good, and the beautiful.
The Vatican’s AI document sums up this last point well:
“Since AI lacks the richness of corporeality, relationality, and the openness of the human heart to truth and goodness, its capacities—though seemingly limitless—are incomparable with the human ability to grasp reality. So much can be learned from an illness, an embrace of reconciliation, and even a simple sunset; indeed, many experiences we have as humans open new horizons and offer the possibility of attaining new wisdom. No device, working solely with data, can measure up to these and countless other experiences present in our lives.”6
If we fail to acknowledge the limitations of AI, practical and ethical problems arise. First, we begin to permit computer programs to take the place of human judgment. At the most basic level, this path can lead to reckless decision-making when AI outputs are accepted as facts rather than as probabilities that may carry a large margin of error. Because both data and AI models are created by humans, and humans are fallible, AI outputs are ipso facto prone to error.7 Moreover, AI programs have drawn much scrutiny for filling voids in their logic chains by fabricating data and results, a phenomenon known as “hallucinating.” Thus, when using AI to aid decision-making, we must not forget that the “black box” is often wrong. And there is an even greater issue than accuracy at stake: relying on AI to replace human wisdom abdicates our responsibility to think critically about moral issues. There is danger in trusting an algorithmic data model, even a highly sophisticated one, to resolve ethical questions—how best to help the poor in our parish, which employees to let go when downsizing a business, what medical options to pursue or forgo when in danger of death, how to invest money in ethical mutual funds, or what targets weapons should strike in wartime. Human flourishing depends upon the reasoned judgment of man.
Second, an overreliance on AI quashes human ingenuity and invention. A common manifestation of this problem is the rampant use of generative AI by students to write essays and term papers, a practice that undercuts their own learning, self-discovery, and inventiveness. Others use AI in place of humans to produce poetry, music, and art. But because AI lacks a soul and has no emotions, feelings, or human experiences, it can never do these things well. AI poetry has been described as “doggerel of truly awesome inanity”;8 AI music and artwork appear as cheap imitations of the real. Still others use AI to create sacred music and art, an activity that is particularly troubling, for these works are meant to evoke the divine—something AI cannot know. As Steven Umbrello of the Institute for Ethics and Emerging Technologies explains, a religious composition, “as an expression of faith and devotion, cannot be fully understood or appreciated if it is stripped of the spiritual and experiential depth that human artists bring to their work.”9 Church polyphony, Gregorian chant, religious paintings, spiritual writing, and other sacred works of art should be the fruit of human devotion.
Third, a dependency on AI can begin to devalue the human person. To get the most accurate and relevant results from data models, AI companies frequently harvest personal information from users, often without their knowledge. Not only does this activity raise personal privacy concerns, it also violates the common good if done solely for the benefit of the business or the market without regard for the individual.10 Unconstrained AI also devalues personhood by subverting authentic relationships. In the world of AI, human communication is increasingly being replaced with texts from chatbots and messages from audio and visual simulacra, interactive processes that create an “appearance of nearness”11 which is, in fact, only imagined. Going still further, AI has now introduced anthropomorphized “personas” that users can turn to for friendship and romance—an innovation that bounds into the realm of the absurd. Like any technology, AI must never be embraced at the expense of human dignity.
In the 1940s, C.S. Lewis wrote, “I agree Technology is per se neutral: but a race devoted to the increase of its own power by technology with complete indifference to ethics does seem to me a cancer in the universe.”12 Endorsing AI unconditionally without considering its limitations and potential for misuse is imprudent. Ensuring its proper and ethical use is both necessary and wise. It begins with understanding the human intellect and the human soul—for in doing so, we come to recognize that sound reasoning and right judgment are capabilities that are found, not in a machine, but in man himself, who is created in the image and likeness of God.
1. Mark Harris, “Inside the First Church of Artificial Intelligence,” Wired, Nov 15, 2017, https://www.wired.com/story/anthony-levandowski-artificial-intelligence-religion/, accessed Apr 5, 2025.
2. Douay-Rheims Bible, drbo.org. Deacon Greg Lambert elaborates on this theme in “Artificial Omniscience and the Tower of Babel,” May 8, 2023, https://catholicstand.com/artificial-omniscience-tower-babel/, accessed Apr 5, 2025.
3. Zhijie Liu et al., “No Need to Lift a Finger Anymore? Assessing the Quality of Code Generation by ChatGPT,” IEEE, Apr 23, 2024, https://ieeexplore.ieee.org/document/10507163, accessed Apr 6, 2025.
4. Antiqua et Nova, Dicastery for the Doctrine of the Faith and Dicastery for Culture and Education, The Vatican, Jan 14, 2025, ¶10.
5. Antiqua et Nova, ¶11.
6. Antiqua et Nova, ¶33.
7. Saverio Perugini, “AI and Faith: The Relation Between Human Rationality and Computing,” The Catholic Theology Show, Jul 23, 2024, https://catholic-theology-show.simplecast.com/episodes/ai-and-faith-the-intersection-of-human-intelligence-and-computing-dr-saverio-perugini, accessed Apr 9, 2025.
8. Nikolas Prassas, “Large Language Poetry,” First Things, https://firstthings.com/large-language-poetry/, accessed Apr 12, 2025.
9. Steven Umbrello, “Sacred Art or Synthetic Imitation? The Catholic Challenge to AI-Driven Creations,” Word on Fire, Sep 10, 2024, https://www.wordonfire.org/articles/sacred-art-or-synthetic-imitation-the-catholic-challenge-to-ai-driven-creations/, accessed Apr 16, 2025. On music, see Aubrey Gulick, “Keep AI out of the Choir Loft,” Crisis Magazine, Nov 26, 2024, https://crisismagazine.com/opinion/keep-ai-out-of-the-choir-loft, accessed Apr 12, 2025.
10. Cf. Jacques Maritain, The Person and the Common Good, trans. John J. Fitzgerald, University of Notre Dame Press, 1994.
11. Antiqua et Nova, ¶65. On the depersonalizing effects of AI, see Elena Bezzubova, “Am I a Robot?” Psychology Today, Apr 14, 2025, https://www.psychologytoday.com/us/blog/the-search-for-self/202504/am-i-a-robot, accessed Apr 18, 2025.
12. C.S. Lewis, letter to Arthur C. Clarke, Dec 7, 1943, in Walter Hooper, ed., Collected Letters of C.S. Lewis, HarperCollins, 2004, pp. 593-594, cited in Bradley J. Birzer, “C.S. Lewis: Imaginative Conservative,” The Imaginative Conservative, May 20, 2015, https://theimaginativeconservative.org/2015/05/cs-lewis-imaginative-conservative.html, accessed Apr 18, 2025.