What does it mean to exist in complex relationships with machines? What insights can be offered to our understandings of these relationships by the theologically significant theme of ‘love’? What critical assessments can be made of our multiple uses of technologies in shaping our futures, by reflecting on our pasts?



Tuesday, 25 October 2016

Human Armed Forces and the Likelihood of Comradeship with their Military Robots

This is a guest post by Canon Albert Radcliffe, who is based at St Chad's Church in south Manchester. Albert is interested in the interaction between science and religion and regularly holds events encouraging a fruitful dialogue between the two, on topics ranging from the Higgs boson to robots and drones. In this blog, Albert explores advancing technologies and their application in military environments, specifically how they necessitate and problematise questions of human-robot relationships, and he does so through the theme of 'comradeship'.

 

Lethal Autonomous Weapons (LAWs)

There is growing interest among the armed forces in military robots, including LAWs, 'warbots', 'killer-bots' and the like. These have been described as the third revolution in warfare, after gunpowder and nuclear weapons. So far, fully autonomous killer robots exist only in science fiction. At the time of writing [October 2016], so far as is known, no armed service in the world has autonomous robot weapon systems in use, with the partial exceptions of Israel and South Korea, and even there such systems still have human operators.

The United Nations Convention on Certain Conventional Weapons (CCW), signed by 123 countries, came into force on 2nd December 1983 and limited or forbade the use of weapons considered indiscriminate or simply too dangerous. These included booby traps, incendiary weapons, landmines and lasers designed to blind.

The desire to ban or regulate military robots is based on the need to protect civilian lives, to keep decision-making in human hands, and to be able to hold those responsible to account if things go wrong.

Since 1983, the UN CCW has widened its remit, and on 11-15 April 2016 it held a third meeting to discuss lethal autonomous robots (LARs). The fear is that the use of military robots might start a dangerous arms race that would remove weapons from human control, so that no state could remain unaffected. So far no conclusions have been reached, and further meetings are planned.[1]

Although many in the military, together with their expert advisers, welcome the recruitment of autonomous robots, many others feel that this would have disastrous consequences, especially wherever human control was relinquished. On this view, mixing humans and autonomous machines, treating them as comrades and so raising the issue of comradeship, simply would not work. Opinion is divided, but new developments in electronics are constantly on offer from private corporations, and the fear is that a worldwide arms race could begin, giving the advantage to the first side to invest in the latest developments. Others fear that it might lead to a kind of perpetual war.

 

Imaging comradeship

According to theological anthropology, humans are made in the image of God, and now we are making technological creatures, such as LAWs, in our own image. This is usually predicated on an understanding of reason as a manifestation of that image, but an alternative is to explore the image in terms of building relationships and the capacity to love (imaging divine love, for example). If comradeship is a species of love, however, are robots excluded? Can love be simulated, and even if it can, does a simulation count theologically?

Comradeship is based on trust, or faith, often in extreme life-or-death situations. Even if this could be emulated, the emulation would arguably still be far from genuine comradeship, in which love can be defined as non-erotic affection: a bonding with someone who can return that feeling-based stance.

On the other hand, the grief experienced at the loss of something loved can act as a measure of our attachment to it. Grief at the loss of a robot, especially one that had shared profound experiences with us and perhaps even saved our lives, would therefore potentially act as a measure of love for it. Whether this can be true of warbots, however, remains to be seen, and the context is an ambiguous and difficult one in which to make theological reflections.

 

The importance and possibility of comradeship in battlefield situations

I recently had a conversation with a friend, an ex-brigadier, who noted that “at the heart of an effective fighting unit is the cohesion of the numerous small teams that make up the whole. Thus small-team dynamics are critical, and it is for this reason that so much is made in the military of unit morale: without good morale little can be properly achieved.” Because the morale of fighting units is so largely the result of the comradeship that animates them, Napoleon's dictum that 'morale is to material as three is to one' holds only for units where levels of comradeship are high. This poses the critical question of the extent to which comradeship between human and robot soldiers is even possible.

In general terms, the more human a robot is in appearance, the more easily strong emotional bonds like those of comradeship can be formed, up to the point, that is, at which the 'Uncanny Valley' is reached. This is the term coined by Masahiro Mori in 1970 to describe the capacity of the nearly-human to frighten many people off through the feelings of weirdness and eeriness it produces. At that point, the powerful and necessary bonds of emotional attachment we call comradeship may no longer be possible.

Nevertheless, there is ongoing research into the whole area of possible human emotional bonds with military robots, for which see Dr Julie Carpenter's 'Culture and Human-Robot Interaction in Militarized Spaces: A War Story'. In an interview in February 2016, Dr Carpenter said, 'the bottom line is that these human-AI/robot interactions are transactions and not reciprocal, and therefore probably not healthy[2] for most people to rely on as a long-term means for substituting two-way affectionate bonds, or as a surrogate for a human-human shared relationship.' However, she noted that some soldiers named their robots after celebrities, girlfriends and wives, and that when a robot was destroyed it was often given a funeral.

So far, the precedents for mixed combat teams are not encouraging. In April 2008 it was reported that in the previous year SWORDS[3], an armed combat robot developed from a bomb-disposal platform and deployed in Iraq, had turned its 5.56 mm M249 light machine gun on its human comrades. No one was hurt, and it was quickly rendered inoperable. The incident appears to have discouraged further experiments with mixed teams. However, unless military robots are banned by international law, many believe that their use in mixed combat teams, where human soldiers can monitor and command them, is inevitable.

The problem for human soldiers arises when AI develops to the point at which it exceeds human intelligence. In some domains this has already happened: in 1997 the computer program Deep Blue beat the world chess champion Garry Kasparov, and even more strikingly, in 2016 DeepMind's program AlphaGo beat the South Korean Go master Lee Sedol. Mixed teams of humans and robots could hardly function effectively with the robots subordinate to humans if the robots were intellectually superior to them. In many respects these trends suggest that the post-human world is already upon us, and however necessary it might be for humans to remain in charge, it is becoming obvious that this cannot always be the case.

 

Emulation and empathy

Can even the most sophisticated robots really be described as intelligent, or do they merely simulate human rationality? Without consciousness, it is difficult to see how robotic intelligence can be anything more than a simulation. Besides, human rationality always contains non-rational elements such as empathy and other feelings. This is because, as well as functioning in some ways like a computer, the human brain also uses feelings[4], intuition and the unconscious mind in its decision-making, and these are not available electronically to robots. This marks the limit of what they can emulate in human beings, and it also imposes a limit on the sort of relationships possible between people and their robots.

If military robots are kept under strict human control and are decidedly machine-like in appearance, there is little reason to suppose that any emotional attachments formed will differ much from a personal fondness for an old car or domestic appliance. However, some deeper, empathy-like relationships might just be possible if a robot engages us emotionally. Consider, by way of comparison, the case in 2014 of Rocco, a police dog stabbed in the line of duty, whose funeral was attended by 1,200 people. Would this ever translate to machines, though?

After I discussed this with a friend, a former Royal Navy commander, he told me that “the entire ethos of naval life is built on team-work, respect, and loyalty. Whilst a ‘good’ ship is an efficient ship, the morale of the crew is the single most important element in any warship. AI may enhance the capability of the ship in action, but without the ability to empathise with the ship’s crew or to engage in a non-rational decision-making process, it is very difficult to see how a relationship with these technocultural machines could ever be developed.” Comradeship, and the emotional, non-rational engagement that goes beyond but also undergirds effective military operations, therefore needs further exploration; and these are deeply theological issues in terms of how we think about what we image, what images of ourselves we create, and how these interrelate.


[1] NB also the Human Rights Watch and Harvard Law School's International Human Rights Clinic report, Shaking the Foundations, which proposes a pre-emptive prohibition on fully autonomous weapons.

[2] A survey of 746 people showed that 80% either 'liked' or 'loved' their military robot.

[3] An acronym for Special Weapons Observation Reconnaissance Detection System, developed by the weapons manufacturer Foster-Miller (owned by QinetiQ). In August 2016 the Baghdad Post carried an article about an Iraqi armed robot built to combat ISIS in the attack on Mosul.

[4] The chemical basis of human feelings has been comprehensively explored by neuroscientists for some time now, and is known to involve such substances as dopamine, endorphins, oxytocin and serotonin. This aspect of human behaviour, important in such experiences as comradeship, cannot be reproduced in robots.
