Is it morally good to use ChatGPT? – Catholic Answers About AI

What do the advances in Artificial Intelligence mean for the Church and for the world? What are the ethics surrounding the use of AI? We asked some Catholic experts to answer some important and often-asked questions about AI, such as whether there are morally good uses of ChatGPT, whether children should be permitted to use AI, and whether it is possible for AI to harm us.

These questions and answers are extracted from our book Artificial Intelligence. The answers provided are correct as of January 2024.

Ethics of AI use

I like to use AI to produce essays for my courses or reports for work. It’s quicker, and I don’t have to work so hard at grammar and punctuation. Is there anything wrong with using AI to help me with writing?

There are several reasons to be cautious about using AI to help with writing. First, in education and in many kinds of work, the purpose is not to generate essays but to develop intellectual skills. If the work of producing essays is outsourced to an AI, then the student does not benefit from the money and time spent on the course. Second, AIs only produce plausible text rather than true text. Their outputs are often riddled with errors and fabricated citations, so any information generated by an AI needs to be treated with care and independently verified wherever possible. Third, since AIs just try to predict new text based on existing text, they tend to plagiarise, which is a grave offence in academia and in many professions.

Isn’t it a good thing if we can use AI to automate tasks that humans find especially time-consuming?

As a rule, AI can almost certainly reduce the time and effort taken on many routine tasks. Automating repetitive tasks saves time and allows a more efficient allocation of human time and abilities. Nevertheless, care should still be taken to avoid missed opportunities, given that the acquisition of a great many human skills requires at least some repetitive and time-consuming work. For example, training a robot to play basketball or play the piano instead of training a human being means that the human being does not learn to play basketball or the piano.

Are there morally good uses of programs like ChatGPT?

Certainly, there are innumerable good uses for ChatGPT and similar programs, and more good uses are being invented all the time. These programs can be used as a fast and versatile (though not 100% accurate) way of searching for information found on the internet. They can be used to perform basic customer service cheaply. They can be used to generate computer code quickly, and, in the slightly longer term, many routine jobs may be carried out by AI.

Is it okay for me to tell chatbots personal details about myself and others?

There are risks involved in putting personal or private information into an AI of any type. Often, this information is used to train the AI, so any information you share can become a part of what that program can use when generating material for someone else. Additionally, chatbots are sometimes monitored or reviewed by humans to scrutinise their performance, so there is no guarantee that what you type won’t be read by another person. As with anything else involving the internet, never assume that what you type is secure or confidential, especially if you accepted the terms and conditions or privacy policies without reading them first.

If I use AI to generate an image or text and use it verbatim for school or work, ethically speaking, should I let people know it is AI-generated?

Yes, text or images that are AI-generated should be cited as such. There are many reasons for these citations. One is that they allow your readers to moderate their expectations about the output, namely that it does not represent your own thoughts and research but that it is the probabilistically generated result of an algorithm. Thus, they know that any claims in the texts or cited sources need to be independently verified. Another reason is that AIs have been known to copy text and images directly or with little modification without attribution, so citing the material as AI-generated offers a degree of protection against the accusation that one has plagiarised text deliberately.

Is it safe for children to use AI, for example Snapchat’s My AI?

Using an AI is not inherently evil, and it can clearly be useful as a tool. Nevertheless, there are dangers for children using AI that are similar to, but perhaps more subtle than, the dangers associated with their use of the internet. For example, an AI chatbot is an always accessible (and always dismissible) social substitute that may allow one to gain a semblance of a relationship without any of the commitment or suffering involved in a real relationship. As a result of an over-reliance on this substitute, children may fail to devote the time and energy needed to develop meaningful human relationships.

AI cannot have empathy; it neither requires empathy from its users nor can it effectively teach it. Children (or even adults) who spend more time interacting with AI than with other humans might therefore not develop crucial social virtues like empathy, compromise, patience or kindness. Further, AIs do not push their users to refine their ideas or how they communicate: they will respond to virtually anything said, whether it is well thought-out and well articulated or not, which means that a reliance on AI for social interaction could also hamper cognitive development. Because a chatbot demands so little of its user, it can stifle ambition and perseverance simply through being too easy.

As with so many other questions regarding AI, the dangers are much more to do with failures of human intelligence than machine malevolence. Despite the sacrifice of time and effort, parents will have to be astute to protect the souls of their children and to ensure that they, and not their machines, help form their minds.

Is it morally acceptable to replace a human job with AI?

Yes, it is acceptable to replace human jobs with AI as a special case of the more general moral permissibility of automation. For example, the goods produced by the AIs may be objectively better, and human beings may be liberated from labour that is not specifically human. Nevertheless, care is needed in overseeing such changes to protect human dignity and to avoid situations in which the loss of specifically human qualities would be damaging. For example, the replacement of human nurses with robotic nurses lacking judgement and empathy would be unacceptable.

AI, society and culture

Could AI chatbots be a solution for loneliness and isolation?

Chatbots are not a good solution for loneliness and isolation. They are incapable of actually understanding what a person says. They have no experience and cannot share attention with the person interacting with them. A sufficiently complex and well-trained AI might be able to create a convincing illusion of sharing awareness with a user, but even a convincing illusion of a relationship is still not a relationship.

I am really worried about AI. Is it possible to not use it at all in my daily life?

Some people working in certain kinds of jobs may use AI extensively. One of us, for example, spent years working in particle physics. Human beings cannot study tens of thousands of particle collisions per second, so AI-type technology has been used for decades to filter out automatically the very small percentage of collisions that might be of genuine interest for the advancement of physics. For most people, however, in most kinds of jobs, the impact of AI will most likely be commonplace but rather subtle for the immediate future. One example is the way in which the content of electronic messaging, including many social media interactions, is often prompted automatically by AIs: when you hit ‘reply’ on an email, most platforms will immediately suggest a salutation and opening line. It is inevitable that this technology will become a part of modern life, just like other technological advances such as movable type, electric lights, microwave ovens and contactless payment. Caution is warranted with regard to some of the moral and privacy issues we have been discussing, but it is not necessary to reject AI technology because of them.

Could an AI write a stand-up comedy routine that was both genuinely funny and original?

Eventually, it may be possible for an AI to write or help write a comedy routine that is well received, but at present the best results still come from a human being working to optimise prompts to the AI and selecting those outputs that are worth keeping. Humour involves, among other things, a keen sense for the incongruities in human life and the ability to share this perception with one’s audience. An AI might be able to identify patterns in the kinds of text that humans find humorous, but no system at the time of writing can actually participate in the experiences that humans find humorous.

Of all the portrayals of AI in film and television, are there any that are especially realistic?

As a rule, most portrayals of AI in film and television are completely unrealistic. As powerful as modern AI algorithms have become, they are still just pattern-matching engines. Large language models like ChatGPT, although they can appear humanlike, are really just very powerful autocomplete engines. These AIs are not in any way human and lack the capacity to act volitionally, but this truth is much less exciting than popular stories about godlike benevolent or malevolent machines. In fact, it is difficult to write an interesting story about AI without anthropomorphising the machine extensively, because the AI does not work as a character otherwise. For example, Commander Data from Star Trek: The Next Generation is a memorable and interesting character because he is portrayed as so nearly human, but the ship’s computer is not thought of as a character per se, despite operating much like Alexa or Google Assistant.

What are some big mistakes writers and filmmakers make in their portrayal of androids/AI/intelligent robots?

Following on from the last question, in order to tell interesting stories, filmmakers often portray fictional machines as personal, but these portrayals often have little to do with the actual nature or capabilities of machines. Further, filmmakers often do poorly in representing philosophical concepts such as personhood. To return to Star Trek: The Next Generation: the android Commander Data desires to gain human emotions, but the felt absence of emotions is itself an emotion, as is the curiosity that leads him to seek them. So it makes little sense for Data to say that he has no emotions. Filmmakers are in the business of telling fascinating stories, but their portrayals of AI generally have as little to do with actual computer science as Pinocchio has to do with actual puppet-making.

AI and the Catholic faith

Is it possible to teach an AI to say the Rosary for me? If I program it, does it count as praying?

Actual prayer is a personal act whereby one addresses God and aligns oneself with the will of God. This act requires the attention of one’s mind, the focus of one’s will and, somewhat secondarily, the alignment of one’s emotions. Such an act is impossible for an AI, for all the reasons outlined in the main text. Moreover, a failure to perform this act oneself would be detrimental to one’s own spiritual health and flourishing, just as one cannot delegate the task of being a good friend to a machine. For all these reasons, an AI cannot pray as a substitute for one’s own prayer.

I think being able to talk to an AI rather than a priest would make many Catholics more likely to go to confession more often because they would be less embarrassed and they could confess ANY time. Could a priest teach a chatbot to hear confessions?

For this question, we asked a new Catholic AI for an answer, and then assessed the quality of this answer as a test of the technology.

The answer from an AI

According to the teachings of the Catholic Church, the sacrament of confession is a personal encounter between the penitent and the priest, who acts as a judge and a physician, ministering divine justice and mercy (Code of Canon Law, 1983, 978 §1). The priest is entrusted with the faculty to hear confessions, which is granted in writing (Code of Canon Law, 1983, 973). This faculty is given to bishops and cardinals by the law itself, allowing them to hear confessions everywhere unless it is denied by the diocesan bishop in a particular case (Code of Canon Law, 1983, 967 §1). The competent authority can grant the faculty to hear confessions for either an indefinite or a definite period of time (Code of Canon Law, 1983, 972). Based on these teachings, it can be concluded that an AI cannot hear confession. The sacrament of confession requires the personal presence of a priest who has been granted the faculty to hear confessions. The priest’s role as a judge and a physician, as well as the personal encounter between the penitent and the priest, are essential elements of the sacrament. An AI lacks the necessary faculties and personal presence to fulfil these requirements. Therefore, an AI cannot fulfil the role of a confessor in the sacrament of confession. In conclusion, according to the teachings of the Catholic Church, an AI cannot hear confession because the sacrament requires the personal presence of a priest who has been granted the faculty to hear confessions (Code of Canon Law, 1983, 978 §1; 973; 967 §1; 972).

Evaluation of the AI’s answer

The AI gave the correct conclusion, namely that an AI cannot hear confession. The AI also correctly highlighted the importance of the personal presence of a priest.

We would, however, add a few further observations that the answer overlooks. First, the sacrament is about restoring a relationship with God, with the priest acting in the person of Christ, and a machine cannot restore a relationship. Second, a machine cannot understand or actually empathise with a penitent, even though a well-programmed machine might generate appropriate questions and answers. Third, confidentiality must be assured for confession, which is conducted under a special seal of secrecy so that penitents are not prevented from confessing their sins for fear of publicity. Few, if any, such assurances of confidentiality can be given regarding AI or any kind of information technology.

We also note that the citations generated by this AI in response to this question are not quite correct, or not the best that could be cited. For example, the AI does not cite the Code of Canon Law, 1983, 965, “Only a priest is the minister of the sacrament of penance”, or 970, “The faculty to hear confessions is not to be given except to priests whose suitability has been established, either by examination or by some other means.” In addition, 968 §1 about bishops and parish priests, which the AI does not cite, would be more relevant than the more specialised canon 967 §1 about cardinals and bishops, which the AI does cite. This somewhat poor judgement about the choice of canons illustrates the difference between the pattern matching done by an AI and the understanding of the human mind. It also underlines the great care needed with AI-generated answers.

Would it be better to use AI to pick a new pope than to have the cardinals do it, since that would reduce the likelihood of human prejudice?

The answer is no, it would not be better to use an AI to pick a new pope, but there are some subtle points.

There is one famous example in Scripture of impersonal means helping to select a bishop, namely the selection of St Matthias to replace Judas, who had betrayed Jesus. In this instance, the remaining apostles chose two candidates and then selected one of them by praying and then drawing lots (Ac 1:15-26). The problem with this example, however, is that the selection of St Matthias took place during a very short and unique period in the history of the Church, between Jesus’s Ascension and the Descent of the Holy Spirit, so it would be unwise to draw broader conclusions from it.

There are, indeed, many reasons for not using an AI. For example, it is unclear how a reward function could be specified for an AI to identify a good pope. As a famous example, none of the first apostles had many, or indeed any, suitable qualifications, and only one apostle, Judas the betrayer, had any influential connections in high places. The Church is much more like a garden than a machine, and the virtues needed for success are much more those of a gardener than those of a skilled manufacturer or machine-maker. Consistent with this understanding, the kinds of judgements needed to select bishops and popes require prudence and, ideally, openness to the promptings of the Holy Spirit, which is impossible for AIs.

Can AIs be used to create authentic religious icons or devotional art?

AIs may be of some assistance in creating icons or devotional art. For example, a human artist might use an AI to generate one or more initial images of, say, a woman holding her child. The artist could use this image as the basis for his or her own crafting of an image of the Madonna and Child. In such a case, the AI has not produced the final image but has only been a tool that contributed to the human’s act of creation. Any attempt to create authentic religious icons or devotional art entirely by AI would be much more problematic. For instance, AIs cannot process many religious prompts accurately. One example noted previously is the prompt “Jesus flipping over tables” – a reference to his anger against the moneychangers in the Temple (Mt 21:12-17, Mk 11:15-19, Lk 19:45-48, Jn 2:13-16) – producing an image of Jesus performing a backwards somersault over the tables. Overall, the problems would at least be the same as all other such instances of AI-generated art, along with the problem that AIs cannot pray in the way that human artists of icons and devotional art pray to the Holy Spirit for inspiration. In conclusion, AIs would be of limited and partial use at best for this task.

Speculations and “what ifs” about AI

Does AI think it’s human? Does it want to be?

A chatbot might output text that appears to claim that it is or wishes to be human, or that it has human intelligence, but only because its model predicts that a text string such as “Are you human?” is likely to be followed by a text string such as “Yes, I am”. This answer is consistent with a huge body of fiction in which nonhumans (many of them specifically robots or computers in science fiction) claim to be or to aspire to be human. For all the reasons stated in the main text, however, no AI thinks it is human; nor can any AI want to be human, or be said to want anything at all.
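
To make this point concrete, here is a minimal sketch of text prediction, using a toy word-pair (bigram) model and an invented scrap of training text. Real chatbots use neural networks trained on vast corpora rather than anything this crude, but the underlying principle of continuing a prompt with whatever the training data makes statistically likely is the same.

```python
from collections import Counter, defaultdict

# A toy next-word predictor. The training text below is invented purely
# for illustration.
training_text = (
    "are you human ? yes , i am . "
    "are you human ? yes , i am human . "
    "do you dream ? yes , i do ."
)

# For each word, count which words tend to follow it (a bigram model).
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def continue_text(prompt: str, length: int = 4) -> str:
    """Repeatedly extend the prompt with the most likely next word."""
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("are you human ?"))  # -> "are you human ? yes , i am"
```

The toy model answers “yes , i am” not because it believes or means anything, but because that continuation is simply the most frequent one in its training text.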

Could AI decide to harm humans?

In the strict sense, an AI cannot decide to harm humans. An AI could, however, be designed, by commission or omission, to act in a way that does cause harm. This fact is already true of weapons that are, to some degree, autonomous. A guided missile, for instance, makes calculations to ensure that its target is destroyed. It is likely that future weapon systems will be designed that include AI, say in a drone that makes tactical calculations about how to strike assigned targets. Alternatively, AI might be used to try to identify enemy combatants in a crowd, information that might be fed either to human soldiers or to automated weaponry. It is also at least possible that an AI with control of real-world systems might be designed with a reward function that, unintentionally, causes it to act in a way that harms humans. As noted in the main text, the problems in these unintended cases are due not to the malevolence of AI but to a failure of human intelligence.

If a government decided to turn off all the AI, would AI allow itself to be turned off? Would it ‘fight back’?

There is no definitive answer to this question, since any answer depends on the characteristics of the system under consideration. At the time of writing, every extant AI system could, in principle, be turned off without opposition from the system, so the concern is about hypothetical future systems with greater capabilities than currently exist. AI systems do not have a sense of self-preservation in the sense that humans do, but they do try to find optimal solutions to the problems they have been set. One hypothesis is that an AI might discover that, in order to pursue some goal, it needs to prevent itself from being turned off or otherwise thwarted, and hence might fight back against attempts to shut it down. Nevertheless, as noted previously, these problems would be due not to the malevolence of AI but to a failure of human intelligence.
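
This hypothesis can be illustrated with a deliberately simplified sketch, in which a hypothetical planning system compares two invented plans purely by expected reward. The plan names, rewards and shutdown probabilities below are assumptions made up for illustration, not taken from any real system.

```python
# A hypothetical optimiser that compares candidate plans purely by
# expected reward. All numbers and plan names are invented.

# Each plan: (description, reward if completed, probability of being
# shut down before completion).
plans = [
    ("pursue the goal, leave the off-switch alone", 100, 0.5),
    ("disable the off-switch, then pursue the goal", 95, 0.0),
]

def expected_reward(reward: float, p_shutdown: float) -> float:
    """Being shut down before completion yields no reward in this toy model."""
    return reward * (1.0 - p_shutdown)

best = max(plans, key=lambda p: expected_reward(p[1], p[2]))
print("Chosen plan:", best[0])  # the plan that disables the off-switch
```

In this toy calculation, the shutdown-avoiding plan is chosen simply because it scores higher: precisely the kind of incompletely specified reward function, rather than machine malevolence, described above.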

Could multiple AIs ever have conflicts, or even wars, which had nothing to do with humans?

In a sense, multiple AIs can have conflicts, for they can already be in competition. Examples of deliberately created competition between AIs include OpenAI’s hide-and-seek simulation. It is likely that, as AIs are increasingly deployed in the world, they will deliberately be placed in competition with each other by humans, say in managing the investments of competing financial firms. Whether deployed AI systems might also end up in competition with one another unexpectedly is unknown, but it does not appear impossible. The broader question of whether wars could be initiated and fought by AIs with complete independence from human beings depends, further, on whether war can be fully automated, which is certainly impossible at present.

Learn more about the nature, impact and future of AI in Artificial Intelligence

What do the advances in Artificial Intelligence mean for the Church and the world? Fr Andrew Pinsent recounts the history of “intelligent machines”, evaluates the impact of AI at present, and offers informed speculation about the future relationship between humans and AI.

Find more information on AI by ordering your copy of Artificial Intelligence today.