In the recent Hollywood blockbuster Blade Runner 2049, the film’s antagonist Niander Wallace declared: “Humanity cannot survive. Replicants are the future of the species.”
The original Blade Runner (1982) was a cult sci-fi movie that envisaged a dystopian Los Angeles (the future is always dystopian, isn’t it?) where bioengineered synthetic humans, called ‘replicants’, have broken free from their status as slaves to their human creators. Now 30 years later, LA has become an even-more-dystopian megalopolis, where new controls on replicants enforce their complete obedience.
Ryan Gosling plays Officer K, a blade runner tasked with brutally retiring the last remaining rogue replicants. Yet K, a replicant himself, finds himself confronting the boundary between machine and maker. There’s his romantic relationship with a hologram called Joi, the abuse he receives from co-workers for being an artificial ‘skin-job’, and the haunting claim that “only those who are ‘born’ have a soul”.
This conflict between what it means to be human and what it means to be a replicant plays out throughout the film as K’s character battles between his human instincts and his duty to be obedient to his creators. Could he, a replicant, have a soul?
The future is now
According to the Oxford English Dictionary, Artificial Intelligence (AI) is the study of "computer systems able to perform tasks normally requiring human intelligence – such as visual perception, speech recognition, decision-making, and translation between languages".
Most of us use AI without even thinking. From virtual personal assistants such as Apple’s Siri, Amazon’s Alexa and Microsoft’s Cortana, to the recommendations we receive while shopping online – these are all forms of the technology.
But the questions raised by AI are not just about the automation of processes that once required human thinking, but whether it is possible that advanced systems could one day count as self-aware conscious beings with their own emotions, desires and moral rights. And if so, how would they be any different from humans?
Blade Runner 2049 arrives at a time when AI has become prevalent in popular culture. A slew of TV shows including Black Mirror, Humans, Westworld and Electric Dreams have imagined the ways in which such technology could change the near future of our world. New Scientist magazine has featured the topic of AI on two consecutive covers, and earlier this year it emerged that Silicon Valley pioneer Anthony Levandowski had founded a new religion dedicated to the worship of an ‘AI God’ when the technology finally comes of age.
Toby Walsh, an artificial intelligence expert, has told The Times that AI will change our lives sooner than we realise. In addition to driverless cars, he predicts gadgets and apps will increasingly replace a visit to the GP, and that our computers will detect early signs of dementia. Walsh believes robots will rob banks, not by smashing a sledgehammer through the front door, but by using cyber-bots to hack accounts electronically. On the plus side, we’ll be able to star in our own movies alongside virtual reality copies of our favourite actors.
Tech-expert Nigel Cameron on the future of robotics
Why is robotics so important?
We have to face the future. The world may still be here in 1,000 years, or 1,000,000 years, and technology keeps moving. Robotics is the biggest thing humans have ever got into. The Church has utterly ignored it, like it has almost everything on the science or tech front. I have spent much of my life trying to change that, with little success.
Do you foresee a time when robots will be deemed to have the same rights as humans?
It all depends how special you think humans are. I think every Christian should watch Westworld - both the original Michael Crichton film and the TV series - and be made to think about it.
What would be your response to academics who believe it is possible to ‘live forever’ through downloading the storage in their brains and coupling that with a robot?
It's technically naive at the moment, but even if we could upload the data in our brain, we would not be the same. We would not exist in flesh and blood. So we would not be living forever even if our ideas kept going. This raises interesting questions about heaven, and about the resurrection body.
Nigel Cameron is President Emeritus at Washington based think tank The Center for Policy on Emerging Technologies and author of The Robots are Coming: Us, Them and God (Care).
As we increasingly rely on technology to make decisions for us, one of the biggest questions is how morality should be programmed into AI devices. The so-called ‘Trolley Problem’ has long been a conundrum of philosophical ethics. Imagine a driverless car speeding towards a group of children crossing the road, unable to stop in time. Should it swerve onto the pavement and kill a single pedestrian, or carry on regardless? While the advent of driverless cars would likely reduce road deaths significantly overall, such morbid decision-making must still be programmed into automated vehicles. Avoiding the question is not an option.
Professor Nigel Crook from Oxford Brookes University is a Christian who is an expert in robotics and Artificial Intelligence. He identifies two approaches to teaching morality to robots. “In trying to create a robot that has moral competence one can look at it from a top-down approach. This is a laws-based approach where you give the robot rules and you allow it to apply them. As it then encounters situations, it consults the rules to decide how to act.
“This is different to a bottom-up approach where it has to make moral decisions about how it acts, and you then give it feedback on whether that’s right or wrong. In this way, it learns from experience. This approach is less brittle than the top-down approach where, if the robot finds itself handling a situation which doesn’t match the rules, it won’t know what to do.”
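For readers who want to see the contrast concretely, the two approaches Prof Crook describes can be sketched in a few lines of code. This is a simplified illustration only – the class names, rules and feedback values below are invented for the example, not drawn from his research:

```python
# Top-down: the robot consults a fixed set of rules to decide how to act.
class TopDownAgent:
    def __init__(self, rules):
        self.rules = rules  # maps a situation to a prescribed action

    def act(self, situation):
        # Brittle: a situation not covered by the rules leaves no answer.
        return self.rules.get(situation, None)


# Bottom-up: the robot tries actions and learns from feedback on each one.
class BottomUpAgent:
    def __init__(self):
        self.scores = {}  # (situation, action) -> accumulated feedback

    def act(self, situation, actions):
        # Prefer the action with the best feedback received so far.
        known = {a: self.scores.get((situation, a), 0) for a in actions}
        return max(known, key=known.get)

    def learn(self, situation, action, feedback):
        # feedback: +1 for "right", -1 for "wrong"
        key = (situation, action)
        self.scores[key] = self.scores.get(key, 0) + feedback


rules = {"sees unattended phone": "leave it"}
top_down = TopDownAgent(rules)
print(top_down.act("sees unattended phone"))   # follows the rule: "leave it"
print(top_down.act("sees unattended wallet"))  # no matching rule: None

bottom_up = BottomUpAgent()
bottom_up.learn("sees unattended phone", "pick it up", -1)
bottom_up.learn("sees unattended phone", "leave it", +1)
print(bottom_up.act("sees unattended phone", ["pick it up", "leave it"]))
```

The top-down agent fails silently on the unlisted wallet situation, while the bottom-up agent, having been told once that taking the phone was wrong, generalises from its feedback – exactly the brittleness contrast Prof Crook draws.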
This is particularly relevant when considering honesty in robots: “If robots are going to be present in everyday situations, then there is a host of social norms which humans know about but which a robot won’t. The example I usually give is picking up somebody else’s mobile phone. We all know you don’t take someone’s phone, but robots are not equipped for such decision-making processes.”
Keeping control of our creations
Of course, you don’t need to be a robotics expert to understand the potential consequences of handing over the reins to automated machines.
In May 2016 Tesla Motors revealed that 40-year-old driver Joshua Brown had died after putting his Model S into Autopilot mode. The car’s sensor system failed to distinguish a large white truck and trailer crossing the motorway, and the car drove at full speed under the 18-wheeler.
Another looming question is whether robots can be held accountable for their actions.
Here Prof Crook urges caution: “My view is that robots would not be accountable and the analogy I give is that of a child. We have parental responsibility for our children and their behaviour, and we should have responsibility for robots.

“We ought to be in control of the design of them and the development process all the way through. Just as it would be irresponsible to develop a rocket which could launch itself without controls, we need to develop safety mechanisms within robots. There will always be a way of pulling the plug on the machine.”
At a wider level there are concerns about whether the prevalence of AI technology will result in a shortage of jobs once supplied by a manual labour workforce. Some analysts predict significant social unrest not dissimilar to the Luddite rebellion of the 19th century when textile workers destroyed factory machines they believed were threatening their jobs.
But it could be worse than that. Pre-eminent British scientist Prof Stephen Hawking has said that “The development of full artificial intelligence could spell the end of the human race”. Tesla electric car pioneer Elon Musk has also warned of the danger posed by efforts to create thinking machines.
Apparently, the kind of ‘robot takeover’ envisaged by films such as The Terminator is not just sci-fi fantasy.
AI IN THE MOVIES
2001: A Space Odyssey (1968)
Stanley Kubrick’s groundbreaking sci-fi film sees a group of astronauts investigate a mysterious object in space. But the spaceship’s sentient computer, ‘Hal’, has other ideas.
Blade Runner (1982)
Set in a future where humanity’s genetic engineering allows for a slave labour force of ‘replicants’ indistinguishable from humans. But they can live for only four years and aren’t allowed on Earth.
The Terminator (1984)
In 2029, machines have reached a ‘singularity’ of self-aware status and taken over the Earth. A cyborg assassin, the Terminator (Arnold Schwarzenegger), travels back in time to 1984 to kill the mother of the leader of the human rebellion.
A.I. Artificial Intelligence (2001)
Steven Spielberg’s film follows the story of a highly advanced robotic boy who longs to become a ‘real’ person in order to regain the love of his human mother.
Prometheus (2012)
Ridley Scott’s film focuses on the crew of spaceship Prometheus as it follows a star map in a bid to find humanity’s origins, with a slightly creepy AI humanoid assisting the crew.
Ex Machina (2014)
A software engineer wins a stay at the secretive mountain home of a maverick tech pioneer, who reveals Ava – a breakthrough in humanoid AI. But it doesn’t end well.
God, AI and the soul
Christians are increasingly responding from a biblical viewpoint to the questions posed by the rise of AI. Earlier this year, the Rt Rev Dr Steven Croft, Bishop of Oxford, became a member of the House of Lords’ Select Committee on Artificial Intelligence. He said at the time that he wanted to be “a voice of the Church in a very significant debate about what it means to be human”.
Films such as Blade Runner 2049 are already attempting to provide answers to some of the deepest questions. Are we just biological machines? Or are we made in the image of God, with a soul that transcends our physical nature?
One future possibility of AI is the concept of uploading the memory content of our brains onto a computer before we die, and thereby ‘living forever’ in what amounts to a digital version of heaven. Such questions about identity have been part of Prof Crook’s research for the past decade.
“I believe we are more than biological machines,” he says.
“Working in AI is placing me in a situation where these questions naturally come up. There is also a huge uplift in robotics and AI. The problem is that it also generates a wave of hype which is not helpful and we need to be very careful what we promise with it.”
Prof Crook is sceptical of the speculative futures (dystopian or otherwise) envisaged by sci-fi films where an AI ‘singularity’ occurs and machines become self-aware as they transcend the intelligence of their human creators. “In my own opinion this debate is fuelled by an overoptimism in what can be achieved with AI algorithms, as well as an underestimating of how intelligent people really are. It’s important to have Christians involved in developing the technology as there are certainly some very vocal atheists working in this field as well.
“In the Church there is a lack of clarity about what our identity is in terms of body, soul and spirit, and also of the conventional biblical understanding of being human and how that differentiates us from animals and even machines. That’s something the Church will need to address.”
The concerns also revolve around how far humans should become dependent on robots. From the manufacture of ‘sexbots’ to the use of robotics in warfare, there are all kinds of ethical questions that need to be addressed. Plans are already afoot to use robots to provide support in caring for the elderly, something Prof Crook finds concerning.
“You really want people looking after people, and obviously machines aren’t substitutes for people. In Japan, there is development in this area as there is a real problem with the ageing population and there not being enough young people to look after the older generation.”
Bishop Steven believes the Church needs to come up with answers: “In the 19th century and for much of the 20th century, science asked hard questions of faith. Christians did not always respond well to these and to the evidence of reason. But in the 21st century, faith needs to ask hard questions of science. As Christians we need to think seriously about these questions and engage in the debate.”
There is no doubt that AI has the capacity to radically change the way we engage with our world and each other. A generation ago few people could have imagined the pervasiveness of the internet revolution. Considering the pace of what has gone before, it’s easy to predict that the next 25 years will present us with even harder questions as humanity’s technological mastery progresses.
Christians have always been concerned with the future, but the kingdom of heaven promised by Jesus isn’t a dystopian one. We all need to step up and ask ourselves what kind of a future we want to live in.
Richard Woodall is a former newspaper journalist and writer. Follow him on Twitter @MrRWoodall