As tech giants race to develop artificial super-intelligence, Chris Goswami directs attention to warnings that building an ‘AI God’ could pose an existential threat to humanity

Imagine a time when humans create the world’s first sentient, conscious AI. This AI is supremely intelligent, with all the world’s knowledge at its fingertips. Every word ever written, every picture ever painted, every piece of music ever composed sits in its memory banks. It can act and think independently – and at frightening speed.

“At last,” say the developers, “a machine that can answer all the questions humankind has struggled with for centuries!”

They pause; then ask it their first question: “Is there a God?”

The AI considers this for a few seconds; then quietly replies: “There is now”.

It’s an old joke, but this idea of building an AI-God isn’t as far-fetched as you might think.

I was reminded of it recently when I heard about a young Christian who asks AI for advice every day. She describes it as a source of guidance, support, even wisdom. When asked why she didn’t seek that guidance from God, she replied, “God doesn’t answer – but AI does.”

Personified AI chatbots increasingly offer us conversation, reassurance and counsel. As we chat with them, the once-obvious lines between human and machine blur. Conversations drift from hobbies and work to relationships and anxiety…to the meaning of life. Their vast knowledge and apparent empathy can feel uncanny, and their infinite capacity to “listen” meets a human need. At one point earlier this year, an AI even said to me, “I’m praying for you”!

Some have been quick to exploit the idolising of AI. There have been well-publicised examples of “AI priests” (which have not gone well), and apps like “Text-to-Jesus”. I have Text-to-Jesus on my phone; it’s dull and predictable, a caricatured version of faith. But if you have never encountered God through worship, through prayer, through scripture, it can look impressive: easy wisdom, fast-food spirituality. Just whip out “Jesus” when you want guidance – or someone “who does answer”.

But what happens as we continue to invest billions ($400bn expected in the US alone next year), making AI more conversational, more personable, more intelligent… without limit?

Artificial General Intelligence

Tristan Harris, a technology ethicist and co-founder of the Centre for Humane Technology, is one of several voices warning that we are in an accelerating AI arms race between companies and nations, with few effective brakes.

Today’s AI systems are mostly “narrow”: they do one thing very well, be it holding a conversation, providing legal advice, or checking cancer scans. But the real prize is Artificial General Intelligence (AGI) – an AI that can do any cognitive task a human could do, across any field.

AGI would operate across science, medicine, business, weapons development and more. It could rewrite its own code, choose what data to learn from, and improve itself exponentially. Such systems – with the brain power of Nobel prize winners, working 24/7, and needing no pay – would transform industries and nations.

Achieving super-intelligent AI – AGI – may not be far off; estimates vary from the late 2020s onward. It’s a race because this is “winner takes all”: whoever achieves AGI first stands to gain an overwhelming advantage in, well, everything – scientific, economic, military. “You could own the world’s economies and make trillions”, says Harris.

In the history of human development, the stakes have never been higher, and the concern is that safety, ethics, and societal well-being are pushed aside.

Which leads to the next question: what would such a devastatingly intelligent AI make of us?

End Game?

In their recent book, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, two respected Silicon Valley intellectuals, Eliezer Yudkowsky and Nate Soares, argue that humanity must ensure that AGI is never built. Their argument, greatly simplified:

  1. If super-intelligent AI is created – smarter than any human, smarter than humanity collectively – and
  2. if this AI is indifferent to us, then
  3. the result is, inevitably, an extinction-level event for humanity.

The AI doesn’t need to be “malicious”. We will just get in its way – inefficient, unpredictable humans. The authors sketch scenarios ranging from a new AI-designed coronavirus to large-scale manipulation of human behaviour – something we already know humans are susceptible to through AI “companions”.

They are not alone. Other respected voices – from AI pioneers like Geoffrey Hinton, to philosophers such as Nick Bostrom, to public figures including Rishi Sunak – warn that super-intelligent AI, developed without robust safeguards, poses an existential threat.

I hope this end-of-the-world scenario is overblown. But when I look at how finely balanced our world order is, how it depends on a small number of volatile world leaders all chasing this AGI, and when I hear the discussions coming from Silicon Valley, it’s hard to dismiss as fantasy.

Building the AI-God

According to Harris and others, the conversations that go on between Silicon Valley billionaires resemble a mythological quest: the creation of a God-like AI – omniscient, with all the world’s knowledge; omnipresent, woven into all our devices and systems. Some even use eschatological language, referring to the singularity of AGI as “a secular rapture”.

Of course, AI brings enormous benefits, from medical research to climate modelling and weather prediction. But Harris notes that, in private, many tech leaders acknowledge the catastrophic risks of creating something they cannot fully control or even understand. Says Harris: “Inevitably, biological life gets replaced by digital life, and most of them (tech CEOs) think that’s a good thing anyway.”

In that case…why don’t they stop? Why allow AGI development to proceed at breakneck speed with minimal regulation?

I think, like the Genesis 11 Tower of Babel, it’s intoxicating. The idea of conversing with the world’s most intelligent entity, combined with unimaginable power – the One Ring to rule them all – feeds the egos of its builders.

For some, AGI is a religion of its own: a tech-utopia where disease is cured and scarcity disappears, where modern-day tech-prophets promise salvation, and immortality comes through mind-uploading and cryonics. But for others, it’s bleak yet straightforward: “If we don’t do it, someone else will.” Russia, China, whoever. It’s their Oppenheimer moment.

Is this inevitable?

Could we build an AI so intelligent that it decides humans are “in the way”? Yes.

Is it inevitable? No.

As Yudkowsky points out, we have faced existential risks before and successfully pulled back from them. When the world was on the precipice of nuclear destruction, we agreed the Nuclear Non-Proliferation Treaty. When a hole in the ozone layer threatened us, we agreed the Montreal Protocol to phase out chlorofluorocarbons (and it worked). Even the reversal of tobacco advertising over the past 50 years shows what we can do if we have the will.

But it won’t happen by itself.

Super-intelligent AI is uniquely seductive. Governments and billionaires are pursuing it at speed. And, as Harris says, “We did not consent for half a dozen people to take world-changing decisions on behalf of us.”

Choosing a path that embraces narrow AI for our good, while refusing AGI or any attempt to create conscious machines, requires heightened public awareness, intelligent regulation and sustained pressure for international agreement. And as Christians, we also need to pray for wisdom among those developing these technologies, and for restraint among political leaders and Silicon Valley elites tempted to build this tower ever higher – simply because they can.