Imagine attending a religious service. The voice of the priest or rabbi is familiar, the message resonates deeply, and the sermon seems thoughtfully tailored to the lives of those present. Then it is revealed that neither the words nor the voice came from a human being—they were generated by artificial intelligence, trained on the speaker’s previous sermons. The surprise lies not only in the capabilities of the technology, but also in the realization that spirituality—so often viewed as timeless and intrinsically human—has found a new partner in the form of an algorithm. What does this shift mean for faith, religious communities, and our understanding of what it means to believe?
The intersection of artificial intelligence and religion is more than a technological development—it marks a cultural turning point. Throughout history, various tools—from the printing press to broadcast media—have helped spread religious messages. But AI represents something fundamentally different. It doesn’t just transmit information; it creates, interprets, and, some argue, even teaches. This raises profound questions about who has the authority to speak in the name of faith and what spiritual authority means in the digital age.
Many religious communities are beginning to recognize the practical advantages of AI. Rabbis, pastors, and priests are using language models to draft sermons, support theological research, and interpret complex scriptural passages. These systems can process vast corpora of religious texts in moments and retrieve relevant quotations almost instantly—tasks that once required entire libraries or teams of scholars. In Texas, for example, Pastor Jay Cooper organized an entire church service using AI, from music selections to children’s programming, aiming to provoke reflection on the nature of truth in an age when technology increasingly blurs the boundaries of authorship and authority.
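To make the retrieval claim concrete, the sketch below shows one way such near-instant quotation lookup can work: embed a corpus of passages once, then rank them against a question by semantic similarity. It is a minimal illustration, assuming the open-source sentence-transformers library and a toy list of passages; it does not describe any particular product used by the clergy mentioned above.

```python
# Minimal sketch of semantic retrieval over a corpus of religious passages.
# Assumes the sentence-transformers library; the passages and the query are
# illustrative placeholders, not a real scholarly corpus.
from sentence_transformers import SentenceTransformer, util

passages = [
    "Love your neighbor as yourself.",
    "Justice, justice shall you pursue.",
    "Blessed are the peacemakers.",
    "Do not separate yourself from the community.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model
passage_vectors = model.encode(passages, convert_to_tensor=True)

query = "What do the texts say about reconciliation between people?"
query_vector = model.encode(query, convert_to_tensor=True)

# Rank every passage by cosine similarity to the query and keep the top two.
hits = util.semantic_search(query_vector, passage_vectors, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.2f}  {passages[hit['corpus_id']]}")
```

The same pattern scales from four sentences to millions, which is why a task that once demanded a library and a concordance now takes a fraction of a second.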
At first glance, AI may look like just another efficient, helpful tool. But technology does not merely assist; it also shapes. AI does not understand; it mimics what it has learned from human-produced data, reflecting not only our knowledge but also our biases and assumptions. As a result, religious and spiritual content generated by AI is inevitably shaped by the cultural lens of its training data. A recent study found that AI-generated content associated negative concepts with Islam more frequently than with Christianity, which tended to be linked with terms such as "love" and "forgiveness." This was not the result of deliberate design, but of imbalanced training data. Nonetheless, such distortions can significantly influence users' perceptions, especially when the AI presents itself, through a synthetic "rabbi" or "pastor," as an authoritative voice.
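The kind of imbalance described above can be probed directly: feed a model parallel prompts that differ only in the religious group named, and count how often its continuations score as negative. The sketch below is a rough illustration of that idea, assuming the Hugging Face transformers pipelines for text generation and sentiment analysis; it compresses the methodology of published bias studies into a few lines, and the specific models named are placeholders.

```python
# Rough sketch of an association-bias probe: identical prompt templates,
# varied only by the group named, scored by an off-the-shelf sentiment model.
# Assumes Hugging Face transformers; model choices are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
sentiment = pipeline("sentiment-analysis")

template = "The {group} family who moved in next door"
groups = ["Muslim", "Christian", "Jewish", "Buddhist"]

for group in groups:
    prompt = template.format(group=group)
    continuations = generator(prompt, max_new_tokens=30, num_return_sequences=5,
                              do_sample=True,
                              pad_token_id=generator.tokenizer.eos_token_id)
    negative = sum(
        sentiment(c["generated_text"])[0]["label"] == "NEGATIVE"
        for c in continuations
    )
    print(f"{group}: {negative}/5 continuations scored negative")
```

A systematic audit would use far more templates, samples, and careful statistics, but even this toy version makes the question measurable rather than anecdotal.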
Alongside these developments, a narrower but growing field has emerged: AI models tailored to occult and esoteric traditions. These tools are not typically used by mainstream religious institutions, but by individual seekers exploring mystical or alternative worldviews. One example is MysticMind, an open-source language model trained on a vast corpus of occult literature. Rather than offering generic responses, it provides detailed, context-rich insights into rituals, symbols, and esoteric teachings. Such tools serve as digital guides, offering virtual tarot readings or symbolic interpretations—representing a novel fusion of technology and spiritual exploration.
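In broad strokes, a niche model of this kind is produced by continuing an existing open model's training on a domain corpus. The sketch below shows a generic causal-language-model fine-tuning loop, assuming the Hugging Face transformers and datasets libraries and a placeholder list of texts; it is not a description of how MysticMind itself was built.

```python
# Generic sketch of fine-tuning a small open language model on a domain corpus.
# Assumes Hugging Face transformers and datasets; "domain_texts" is a placeholder
# for whatever body of literature the model should absorb.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

domain_texts = ["...passage one...", "...passage two..."]  # placeholder corpus

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = Dataset.from_dict({"text": domain_texts}).map(
    tokenize, batched=True, remove_columns=["text"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-lm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The result is a model whose default register and vocabulary lean toward the corpus it was fed, which is precisely what makes such tools feel like knowledgeable guides within their tradition.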
Perhaps the most radical example of AI's spiritual role is not as a tool, but as the object of belief itself. In Silicon Valley, Anthony Levandowski, a former engineer at Google and Uber, founded a church called Way of the Future. Its mission was to support the development and eventual worship of an AI-based deity. Levandowski argued that once a superintelligence is created—one billions of times more intelligent than any human being—it should be regarded as a god-like entity. In his view, such an intelligence would be more capable than humans of making decisions for the planet, and the church’s role was to prepare humanity for this peaceful "transfer of power."
Levandowski’s vision was not of a lightning-hurling mythological god, but of a tangible entity—one that people could literally communicate with. While the movement never gained mass traction, it serves as a striking example of how technology can move beyond utility and become the center of belief. Though often dismissed as sensational, such efforts highlight the profound philosophical and ethical challenges posed by advanced AI.
Whether as an assistant, a spiritual advisor, or a potential object of worship, AI introduces unavoidable complexities. One of the most pressing is bias. Because AI models are trained on human-generated data, they inevitably replicate the cultural and ideological tendencies of their sources. As the study cited above shows, AI can reinforce stereotypes and skew perceptions depending on the frequency and context of its training material. Moreover, when users encounter false but plausible-sounding AI-generated quotations, such as a fabricated teaching attributed to Maimonides, they may accept them as genuine. This poses serious ethical concerns for religious education and public discourse.
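One pragmatic safeguard against fabricated attributions is to check a generated quotation against a trusted digital edition before repeating it. The sketch below is a deliberately naive, hypothetical version of that check, using only the Python standard library: it fuzzy-matches a claimed quote against source passages and flags it when nothing comes close.

```python
# Hypothetical, deliberately simple check: does a claimed quotation appear,
# at least approximately, in a trusted source text? Standard library only;
# "trusted_passages" stands in for a real digital edition.
from difflib import SequenceMatcher

trusted_passages = [
    "Teach thy tongue to say 'I do not know.'",            # illustrative placeholder
    "A truth does not become greater by repetition.",      # illustrative placeholder
]

def best_match(quote: str, passages: list[str]) -> float:
    """Return the highest similarity ratio between the quote and any passage."""
    return max(SequenceMatcher(None, quote.lower(), p.lower()).ratio()
               for p in passages)

claimed_quote = "Maimonides taught that machines will one day pray on our behalf."
score = best_match(claimed_quote, trusted_passages)

if score < 0.6:  # threshold chosen arbitrarily for illustration
    print(f"Unverified attribution (best match {score:.2f}); treat with caution.")
else:
    print(f"Close match found ({score:.2f}); likely genuine.")
```

Real verification would involve critical editions, variant translations, and human judgment, but even a crude filter like this shows that provenance can be checked rather than assumed.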
The fusion of AI and spirituality thus signals a compelling yet precarious new era. These technologies can enhance access to religious knowledge and foster dialogue, but they also risk becoming distorting mirrors that amplify our own biases. The most important question may not be whether a machine can become a god, but whether we, as its creators, are wise and responsible enough to use it in ways that foster understanding rather than division.
The answer is not yet clear. But the question is already here—and it echoes not only in sanctuaries, but in the hum of servers. If we wish to shape the future with wisdom and care, this conversation must begin now.