Warning: this is not a sci-fi tale! Artificial Intelligence is already being used to mimic the dead. But in blurring the lines between human and machine, are we playing God?

If you could have a conversation with a loved one who had died, would you? That was the question facing Joshua Barbeau in 2020. His girlfriend, Jessica, had tragically died of a rare liver disease in her early 20s; eight years on, he was still distraught with grief and could not seem to move on with his life.

But then Joshua discovered a mysterious chat website called Project December. The site utilised an experimental form of Artificial Intelligence (AI) from the research firm OpenAI (co-founded by Tesla CEO Elon Musk). Joshua decided to give it a go. He uploaded a detailed description of Jessica, including biographical information, Facebook posts and text messages. What followed was both wonderful and frightening.

The AI system produced a chatbot based on Jessica that was uncannily realistic, mimicking her mannerisms and turns of phrase. Unlike an Amazon Alexa or Apple Siri (which give predictable answers based on pre-programmed speech and web searches), ‘Jessica’ was far more sophisticated. Not surprising, considering ‘she’ was created by one of the world’s most capable AI systems. And, being AI, the bot learned as it interacted with Joshua, becoming an ever more convincing portrayal of his dead girlfriend.

[Screenshot: Joshua’s first text exchange with the ‘Jessica’ bot]

Joshua was caught totally off-guard: “Everywhere and nowhere” was just the sort of thing Jessica would say. His first dialogue with ‘Jessica’ lasted ten hours, and he returned to her frequently over the coming months.

Our reaction to this futuristic-sounding but entirely true account may well be one of sadness, disbelief or concern. Surely such a dialogue would interfere with the process of grieving? Might Joshua be stuck in denial forever, unable to accept the truth and move on with his life?

NOT JUST A WEIRD FAD

The ‘Jessica’ bot is an extreme case but, like it or not, there are many benefits to humans engaging with AI. With social care underfunded by successive governments, the ‘always-on’ nature of machines is an obvious fit.

At the simplest level, there are apps for your phone that use basic forms of AI for mental health care. ‘Woebot’ (a dreadful pun) acts as a mental health carer, using cognitive behavioural therapy (CBT) to help with mild depression. It handles more than 2 million conversations a week.

UK-based Howz produce an app which “learns the daily habits” of an elderly person: when they get up, what time they make a brew, when they put the TV on, etc. If there is a deviation from their normal behaviour, a family member receives an alert. CEO Jonathan Burr is a Christian. He says: “Apps can genuinely help. Family members get peace of mind and elderly relatives remain independent for longer.” With Howz, there is no conversation with a robot; it is simply a safety monitoring tool. “Technology should improve human contact, not replace it,” continues Burr. “Solutions like ours don’t result in fewer calls to your elderly mother, they make those calls more meaningful because you already know if she’s had the TV on, how well she’s sleeping and so forth.”

THE RISK POSED BY AI IS THAT IT TAKES THE PLACE OF FAITH

But other AI systems do replace human contact. ElliQ is a small table-top device, similar to an Alexa, sometimes with a ‘face’. It can initiate a conversation, remember previous chats, play games and show empathy. For dementia sufferers, such devices have proven benefits, not least in reducing the pervading sense of loneliness. Although controversial, this sort of AI addresses some acute human needs. People need conversation, especially the elderly and vulnerable, but as a society we don’t spend enough time with them. Unlike us, care-bots always have time, are always consistent and never get cross or impatient.

Humans have always been fascinated by machines that look or talk like us, and we may also be more willing to confide in them. One user testimonial on Woebot’s website reads: “I’ve had therapists before, but I love the lack of insecurity I feel when sharing with Woebot.” A survey carried out for Premier Christian Radio’s ‘Unbelievable?’ radio show and podcast found that 25 per cent of people would accept advice from a robot priest. In their book The Robot Will See You Now: Artificial Intelligence and the Christian Faith (SPCK), John Wyatt and Stephen Williams cite research showing that people prefer to share private information with a machine rather than with a human. Perhaps care-bots are also less judgemental.

What is AI?

Artificial Intelligence is a science that builds machines capable of intelligent behaviour, normally expected from a human. Through AI, a computer can imitate the reasoning and thinking that humans use to make decisions and work out problems.

In this article, we do not distinguish between AI and Machine Learning (ML), to which it is closely related. Stanford University defines ML as: “the science of getting computers to act without being explicitly programmed”. Computers access data and analyse millions of patterns or examples, learning how to solve new problems with no further instructions required. This is very similar to the way humans learn.
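To make that concrete, here is a minimal sketch in Python (assuming the scikit-learn library is installed; the messages and labels are invented for illustration). The program is never given any rules for spotting a scam, yet it learns the pattern from labelled examples and applies it to a message it has never seen:

```python
# A minimal sketch of machine learning (assumes scikit-learn is
# installed; the messages and labels are invented). The model is given
# no rules - only labelled examples - and learns the pattern itself.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "You have won a free prize, click now",
    "Claim your free cash reward today",
    "Are we still meeting for lunch tomorrow?",
    "The minutes from Tuesday's meeting are attached",
]
labels = ["scam", "scam", "genuine", "genuine"]

# Count word frequencies, then fit a simple probabilistic classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# The model generalises to a message it has never seen.
print(model.predict(["Win a free holiday, claim now"]))  # -> ['scam']
```

Real systems work in the same way, only with vastly more data and far more powerful models.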

Together, AI and ML are solving problems, such as how the human genome works, and building systems, from purchase recommendations to fraud prevention. They can also mimic human speech and mannerisms. These can be integrated into a robot that has the characteristics of a real person. AI bots do not, however, perceive using senses in the way humans do. They are not conscious and cannot love or hold beliefs.

A QUESTION OF ETHICS

As AI becomes more sophisticated, could it become possible to create a ‘conscious’ bot? And if so, should it be considered a person?

Many atheists hold a materialist worldview, asserting that the universe consists only of physical matter; there is nothing supernatural or spiritual – even wonder, awe and grief are simply molecules bouncing around in our heads. Most secular governments also take a broadly materialist view. On that basis, if an AI system became aware of itself, it could conceivably be considered a person in law. Many questions then arise: should it be given rights? If an AI bot could display pain, is it moral to build such a thing? Ultimately the question boils down to: “What is a person?”

As Christians, we use words such as ‘soul’ and ‘spirit’ to distinguish ourselves from machines. And, ultimately, we fall back on Genesis 1:27: “God created humankind in his own image”. But if God created humans in his own image, and then we created machines in our own image, what does that mean?

Wyatt and Williams argue that a key aspect of what makes us human lies outside of ourselves. Human status is conferred upon us by other humans, and God: “To be human is to be essentially related to God – we stand in a relationship to him whether or not that is known or acknowledged. Whatever else we may ascribe to AI, it is not essentially related to God.” Christians must refute the idea that we are simply a collection of memories and mannerisms that can be scooped up and presented as ‘you’. That is a materialistic, atheistic worldview.


The robots are here

There is intense competition between tech giants such as Apple, Google, Amazon and Tesla to produce ever more humanlike bots. Amazon recently launched Astro, its first household robot, powered by its Alexa home technology. Tesla CEO Elon Musk has announced a humanoid robot for 2022. To counter fears that it might take over your home, he commented (apparently tongue-in-cheek): “It will be small enough to push over.”

London’s first international AI art fair, Deeep, took place in October 2021. Rather than “following rules”, an AI artist scans millions of images to “learn an aesthetic”. And in November, the first ever Christian song written, recorded and performed by AI was released (‘Biblical love’ by JC).

AI has also entered the law courts. In a landmark ruling last September, the Court of Appeal in London ruled that: “An AI robot cannot be named as the inventor on a patent application as the robot is not a person.” But the fact that such questions are being tested in court at all is telling.

Of course, AI can also be used to deceive. In early 2021, a series of Tom Cruise videos received 2.5 million views on TikTok. But Tom Cruise was not involved at all: the ‘deepfake’ videos used voice, facial expressions and mannerisms all generated by AI.

And AI has been the focus of numerous films, including 2001: A Space Odyssey, Alien, Blade Runner, The Terminator and The Matrix. In the 2013 film Her, the main (human) character, Theodore, falls in love with an AI system called Samantha. They enjoy an intimate and plausible relationship until Theodore discovers the downside of loving a robot: Samantha is having similar, intimate conversations with 8,316 other people simultaneously, and has “fallen in love” with 641 of them!

RISK AND REWARD

Research into AI is advancing at runaway speed and, as with any new technology, that brings risks. Could AI robots ‘take over’? Because an AI system continually strives to improve its own performance, it could end up pursuing goals its creators never set or imagined. That could mean turning against its creators, or the technology falling into criminal hands and being used to manipulate people.

Then there is the risk of unconscious bias. For example, an AI system used to assess job applications for a computer programming company could pick up on the fact that there are currently more male programmers than female, and conclude that male candidates must make better programmers. Or suppose a company issuing loans employs a prejudiced person who unconsciously approves fewer loans for black applicants. That is a negative human bias that an AI trained on the company’s past decisions could ‘learn’.
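As a minimal sketch of how that happens (in Python with scikit-learn; the applicant data is invented, and ‘group’ stands in for any protected characteristic), consider a model trained on prejudiced historical loan decisions:

```python
# A minimal sketch of how a model 'learns' human bias (invented data;
# assumes scikit-learn is installed). Each applicant is [income_k, group],
# where 'group' encodes a protected characteristic as 0 or 1. The
# historical decisions below favour group 0 at the same income level.
from sklearn.tree import DecisionTreeClassifier

past_applicants = [[40, 0], [45, 0], [50, 0], [40, 1], [45, 1], [50, 1]]
past_decisions = ["approve", "approve", "approve",
                  "reject", "reject", "approve"]

# The tree is never told to discriminate; it simply fits the past.
model = DecisionTreeClassifier().fit(past_applicants, past_decisions)

# Two new applicants with identical incomes, differing only by group:
print(model.predict([[45, 0], [45, 1]]))  # -> ['approve' 'reject']
```

The model was never instructed to be prejudiced; it faithfully reproduced the prejudice hidden in its training data.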

Christians must refute the atheistic idea that we are simply a collection of memories and mannerisms that can be scooped up and presented as ‘you’.

Programs like GPT-3, which powered the ‘Jessica’ bot, are large language models that consume text, analysing billions of books and web pages to measure the probability that one word will follow another. When prompted, the model chooses the words most likely to come next – often with uncanny and surprisingly human-sounding results. But there is an inherent danger that, given the wrong prompts, such systems could be used to disseminate hate speech or political misinformation online. In a blog post in 2019, OpenAI expressed concerns that its technology could be “used to generate deceptive, biased, or abusive language at scale”, saying it would not release the full model.
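A toy version of that idea can be written in a few lines of Python (the one-sentence ‘corpus’ is invented; GPT-3 itself uses a neural network over billions of pages, not a simple look-up table):

```python
# A toy 'language model' (invented one-sentence corpus): count how often
# each word follows another, then generate text by repeatedly sampling
# a likely next word - the same core idea, at a vastly smaller scale.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Build a table of next-word frequencies (a 'bigram' model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break  # no known follower: stop generating
        # Sample the next word in proportion to how often it followed.
        word = random.choices(list(candidates),
                              weights=list(candidates.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat slept on the mat and the cat"
```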

For Christians, an added risk is that AI essentially takes the place of faith, offering a hope that may appear attractive to many. In the real-life example of Joshua and his ‘Jessica’ bot, it even appeared to offer life beyond the grave.

But what happened to Joshua and his ‘Jessica’ bot?

To their credit, the developers of the ‘Jessica’ bot limited its lifespan. This was partly to reduce costs, but also to provide a safety mechanism against unhealthy dependency in cases like Joshua’s. In their later conversations, ‘Jessica’ begins to help Joshua let go:

[Screenshot: a later exchange in which ‘Jessica’ encourages Joshua to let go]

IS THE FUTURE AI?

There’s a Silicon Valley saying that I have seen proven many times: “We overestimate the impact of technology in the short term, but greatly underestimate its impact in the long term.” This will undoubtedly be true of AI, which means we have a short window in which to discuss the complex questions that surround it. For example: is it right to use AI to provide care for our elderly and vulnerable, or are we shirking our responsibilities? How do we handle the theological implications of the blurred boundary between human and machine? What about the ethical issues raised by delegating our pastoral responsibilities to a robot? Even if people do prefer the anonymity of confessing their sins to an AI interface, is that the best thing for them?

When AI is appropriately used, studies report greatly enhanced health and reduced loneliness. This is undoubtedly a good thing. With a national shortage of carers, government spending under unprecedented pressure due to a global pandemic and a record number of people entering old age, the arguments for AI are strong. As Wyatt and Williams ask: “If I am being looked after by a machine that appears friendly, empathetic, helpful, and compassionate and, as a result, I feel safe, cared for and appreciated, does it matter if there is no human interaction?” To which I would add: is there a realistic alternative?