Yes, AI Is a Tool, but It Is So Much More
INSTRUMENTAL RATIONALITY & HUMAN LONGING
Pope Leo XIV recently asserted that artificial-intelligence technology (AI) is “above all else a tool.” Although such a phrase is often repeated and easily overlooked, Leo used it to clarify his stance opposing the many public exaggerations of AI’s potential for rivaling or exceeding humans in intelligence, personality, and agency. The Pope also took the opportunity to remind us that it is human persons who bear the moral responsibility for the appropriate design and use of technology.
There are other benefits to describing AI as just a tool. It underscores the fact that most applications of AI are meant to improve our effectiveness and efficiency at job-related or otherwise pragmatic tasks — not to realize grand projects like Sam Altman’s and Mark Zuckerberg’s dreams of attaining superintelligence. In an environment of anxiety-generating rhetoric about a post-human future, the notion that “AI is a tool” can keep us grounded in a more ordinary reality.
Focusing on AI as a tool that is intensely oriented toward efficiency also encourages a critical appraisal of the dangers of AI for society. For example, will workers who hand over tasks to highly capable AI systems lose interest in their labor or simply be displaced? Will constant reliance on AI cause individuals to lose important thinking and decision-making skills? Plenty of research studies indicate that it will — for example, the article “ChatGPT Decreases Idea Diversity in Brainstorming” in the journal Nature Human Behaviour (June 2025).
We nevertheless risk great controversy and confusion if we fail to look beyond the characterization of AI as primarily, or simply, a tool. One danger is that we may fall into the rhetorical trap of many corporate leaders and AI engineers who hope to avoid moral judgment about the wide-ranging effects of their creations. The notion of AI as just a tool encourages this evasion of responsibility. Most tools, of course, are rarely considered to be good or bad in themselves; we hardly evaluate the goodness of hammers, vacuum cleaners, or even many weapons, because these tools simply obey the moral direction of the persons wielding them. But the supposed neutrality of simple tools is often misapplied to complex, society-shaping technologies like AI. A popular quote from the roboticist Rodney Brooks, for example, declares that “artificial intelligence is a tool, not a threat,” implying — without justification — that there is a meaningful, consistent dichotomy between tools (neutral) and threats (bad).
And yet tools are not always neutral, for their very presence and use change the way people think and act. As Neil Postman famously explains in Technopoly: The Surrender of Culture to Technology (1993), “Embedded in every tool is an ideological bias, a predisposition to construct the world as one thing rather than another, to value one thing over another, to amplify one sense or skill or attitude more loudly than another.” Consider, for example, the effect on our behavior from the simple choice of what chair to use, a plush recliner or a wooden dining-room chair: In which are you more likely to slothfully pass an afternoon drinking beer and eating potato chips? More dramatically, consider the passionate debates about the intrinsic morality of nuclear weapons, which bear many similarities to the uncontrolled proliferation, competition, and potentially world-altering impact of AI. It is worth noting that, in a highly symbolic move, the Vatican gathered leaders of multiple religions in 2024 to sign its “Rome Call for AI Ethics” in Hiroshima, Japan — the site of a nuclear bomb detonation in 1945.
We ignore at our peril that AI is not a typical tool. It is not even a complex machine or device. The technology that makes its use possible is always in the background, a “black box” that is barely perceptible and poorly understood even by the technicians who design it. AI is nevertheless present throughout our lives. It’s in our computers and smartphone applications, our smart speakers like Alexa, our vehicle dashboards, our favorite social-media platforms, and our personalized online shopping algorithms; it’s in healthcare devices and diagnostics, police surveillance, and much more. We can no longer disengage from AI the way we do when setting down a hammer.
The uniqueness of AI lies in the enhanced independence of machines in analyzing outcomes, making predictions, and, often, selecting their own operations. Indeed, with every new innovation AI appears less like a tool and more like a conscious agent or coworker.
The most interesting, and concerning, aspects of AI might be the ones that differentiate it from other tools. For example, there is an unusually strong tendency among users of AI to anthropomorphize — that is, to act as if AI-governed machines, devices, or robots have human personalities and traits. This has led to psychological issues, such as the development of emotional attachments to conversational chatbots, as chronicled recently in The Atlantic (“The People Who Marry Chatbots,” Jan. 2), or even the formation of delusions about the mystical wisdom of such algorithm-driven applications. The risk that a person with mental-health difficulties might interact poorly with an AI-driven chatbot is a very real one; consider, for example, the tragic Reuters report about Stein-Erik Soelberg and his conspiracy-inspired murder of his mother (“OpenAI Sued for Allegedly Enabling Murder-Suicide,” Dec. 11, 2025). But it is not only the dramatic stories that matter, for nearly all of us carry emotional scars and yearnings that make us intellectually vulnerable to charismatic and authoritative machines, just as many of us can fall victim to widespread scams, fake news, and ideological distortions.
AI can also generate confusion over the special dignity of human persons when compared to machines that seem to have many of the same intellectual capabilities. The Vatican’s doctrinal note Antiqua et Nova (Jan. 2025) offers clarity on this, explaining that human intelligence is not just a logical or calculative procedure but a broad capacity of body and soul that “includes abstraction, emotions, creativity, and the aesthetic, moral, and religious sensibilities.” We need to understand the difference between machine and human intelligence, but in a society increasingly enraptured by calculative capabilities, it will be hard for many to appreciate that distinction.
It is in the difficulties besetting interactions with AI interfaces and applications that we can see most clearly the human craving for loving, fulfilling relationships. A 2025 survey by Common Sense Media indicates that a third of U.S. teens use AI chatbots “for social interaction and relationships, including conversation practice, emotional support, role-playing, friendship or romantic interactions.” Interactions with AI chatbots, robots, or other applications are, however, essentially relationships of stark utility. When the user seeks an empathetic or even a romantic AI companion, the AI-driven artifact is always engaged as a slave, expected to pander within rules-based parameters to the needs and wants of the user. This one-way interaction of indignity is not so surprising, given the increasing willingness of the public to imagine that personhood — even human personhood — reduces merely to an array of cognitive capabilities. Where is the intrinsic goodness of the person if it is determined entirely by how well he thinks and calculates? What is self-giving love in a culture of hollowed-out selves? The loneliness of our teens is finding its resolution in the mechanical smiles and mathematically generated words of servile robots. In such a culture, the sacrifice and redemption of Christ can seem more distant than they have at any time in the intervening two millennia. Yet it is precisely through intimate knowledge of our Creator and His loving grace that the instrumental rationality of AI is inverted.
AI can also promote the distortion of truth. As Pope Leo stated, AI “raises troubling questions on its possible repercussions on humanity’s openness to truth and beauty, on our distinctive ability to grasp and process reality.” The increased capacity of AI programs to generate writing that closely resembles a human product undermines the capacity of readers to trust the authenticity and sincerity of communications. AI photo and video “deepfakes,” false representations of real persons and situations, further erode our trust in, and wonder-filled enjoyment of, digital content that we once expected to resemble the real world. Teachers increasingly distrust students’ homework submissions, scientists doubt colleagues’ research, and systemic bias and errors in the data and algorithms of AI are rapidly turning the content of the Internet into a pool of semi-manufactured “slop.”
Yet there is a broader way in which the presence of AI in our lives can obscure truth. Truth is best understood as that which we desire to know. What human beings most essentially want to know is God, not as a bearer of factual characteristics or as a strategic end but as initiator, participant, and sustainer of a relationship of love. Pope St. John Paul II taught that what matters in our encounter with the world is not so much “what” as “who.” The instrumental, calculative, and empirical “truth” that comprises the entire world of AI cannot, therefore, be our Truth. We cannot hope for virtuous wisdom or prudence from AI-governed devices, whether we are asking for relationship advice, weighing hiring and firing decisions, or seeking guidance in military exercises. If we attempt to translate such wisdom into computer code, we are severely distorting the data into instrumentally efficient and effective categories while demeaning the dignity of human persons who should look rather to the sovereignty of natural reason, informed by revelation and divinely bestowed intuition.
I share the enthusiasm of many others for the extraordinary benefits AI does and will provide — they are too many to count. We need to be careful, however, that the essential orientation of AI toward increased productivity and efficiency does not transform our world into one of drudgery. In the workplace, where we would expect AI-enhanced capabilities to generate exciting new opportunities, its use may lead to a longer-term loss of a sense of control, significantly decreased intrinsic motivation, and feelings of boredom due to a lack of novelty and challenge. Even with Internet searches, the public’s apathetic reliance on easy-to-scan AI summaries of search results is leading to a 50 percent drop in attention to search links — the source material from which the AI summaries are drawn. The effect may be a drastic reduction in the diversity of knowledge and exposure to multiple perspectives. Most importantly, we might remember that tasks, productivity, and achievements do not define our spiritual destiny, as Jesus made clear to Martha (cf. Lk. 10:38-42).
The tendency of AI to encourage instrumental rationality — pragmatically giving priority to the selection of means for a limited end — can overwhelm the human psyche and lead us toward the sin of acedia. St. Thomas Aquinas defines acedia as a potentially mortal sin whereby a person descends from spiritual anxiety and sloth to consenting to “dislike, horror and detestation of the divine good” (Summa Theologiae II-II, q. 35, a. 2). Instrumental rationality directs a person toward efficient means and apparently feasible ends that have limited potential for enabling true joy. If instrumental rationality is the dominant mode of being for a person, he will feel alienated from an end — his true beatitude — that seems ever more difficult to understand and experience because, fundamentally, his true end is not something that can be acquired, produced, calculated, or controlled. The result may be acedia as sorrow, anxiety, and resigned or resistant sloth. In a world of AI-governed robots, we must not allow human beings to despair and resign themselves to a robotic existence of their own.
In essence, AI is a highly complex arrangement of the physical parts and processes that characterize machines and devices. Like any tool, it can be applied to particular uses that produce expected or even unexpected benefits. Unlike a common tool, it has profound, spiritually significant effects on the way we think about human nature. It can shape our social relations, our trust in truth, our experience of personal work and achievement, and our capacity for creating good or evil in our world in ways that are not only material but systematic and ideological.
As Christians, we will need to talk about AI in a way that opens the conversation to topics of virtue, sin, spiritual growth, and our relationship with our Creator. “AI as a tool” gets us part of the way. “AI as an ideology” gets us even closer to the important moral concerns.
“No generation has ever had such quick access to the amount of information now available through AI. But access to data — however extensive — must not be confused with intelligence, which necessarily involves the person’s openness to the ultimate questions of life and reflects an orientation toward the True and the Good…. Authentic wisdom has more to do with recognizing the true meaning of life than with the availability of data.” — Pope Leo XIV
©2026 New Oxford Review. All Rights Reserved.