Do We Become Less Human as We Interact with Androids?

June 12, 2018 Rev. Bill Johnson


On May 25, 2018, Quantic Dream released its much-anticipated seventh game, Detroit: Become Human. (Warning: Link launches trailer, which contains swearing.) After spending a week or so with the game, I’ve found it has much to recommend it, even to the casual video game player. But, more importantly, Detroit continues a cultural conversation that’s only going to grow in coming years, and it’s one the Church would do well to get involved in. What makes us human, and what moral value do non-humans have? Is life found in the essence of a thing or in its behavior and appearance? Does it matter how we treat objects if they’re not alive? Detroit seeks to answer the question of whether androids are human, but I think the bigger question isn’t whether the android is human (it isn’t). It’s whether the android’s owner will remain human if he or she learns to behave in inhuman ways.

There’s no good way to discuss the game and the issues it raises without significant spoilers, so consider this your spoiler warning. I’ll be tackling the game from beginning to end, including the possible endings for the storyline. If you’re still playing, or think you might want to, stop reading and go play. Much of the joy in this game is in the discovery process.

Setting

Detroit is, obviously enough, set in the city of Detroit (complete with iconic landmarks) in the year 2038. The urban environment is believable, and it realistically models many of the ongoing challenges facing Detroit today. Technological developments are incremental, for the most part, and nothing in the setting felt impossible or even implausible. The line from our present reality to that in which the game is set is largely a straight one, and I wonder if the timeframe might even be a little too long. Many of these innovations are closer than twenty years away.

Characters and Story

The story focuses on three primary protagonists, each an android who becomes a “deviant” model, expressing humanlike emotions and making decisions that circumvent its programming. The first, Connor, plays a bit like your standard rookie cop. He’s paired with Hank, a burnt-out human cop who inexplicably hates androids, and tasked with solving the rising problem of android deviancy. The player’s choices determine whether Connor becomes a deviant or works to solve the case as programmed.

The second, Kara, is a companion android for a young girl, Alice. While Alice is later revealed to be an android herself, the game goes to great lengths to convince you that she’s a human girl living with an addicted, abusive father. Kara must break through her programming to protect Alice and eventually escape and seek shelter across the Canadian border.

The final story centers on Markus, the caretaker of an aging artist, who finds himself caught in a legal misunderstanding. (The exact nature of the misunderstanding depends on the player’s choices, but the end result is the same.) Consigned to the junkyard, Markus salvages the parts necessary to rebuild himself and makes his way to Jericho, the android sanctuary. In the optimal storyline, Markus works his way into leadership of Jericho and must decide whether to fight for android rights through violent or non-violent resistance, with non-violence offering the best possible outcomes for all characters.

All three storylines intertwine believably, and the dialogue and the story itself are well written. The player finds himself or herself engaging with the characters, and a misstep that results in consequences or death for a character will send you back to the checkpoint to explore happier endings. (Kara’s best ending took me five or six tries to get right, and some of the alternatives were gut-wrenching.) In the end, whether through peace or violence, Markus achieves his dream, and androids are on the path to gaining equal rights with their human creators.

Why Should We Care?

The questions Detroit raises are going to linger. We should be clear, though, that they’re not new questions. At the core of Detroit are the fundamental questions of what it is that makes us human and whether appearance is the same as essence. Asimov explored this decades ago in I, Robot (which bears no resemblance to the movie that borrowed its name . . .), and Detroit brings it from the science fiction novel to the living room.

Woven into the game’s core is the assumption that because the protagonists look human and experience human emotions they therefore are entitled to the rights and respect of humans. Those who disagree are ultimately the losers in the story, and the story does a magnificent job of encouraging the player to connect with the androids.

While it’s presently the stuff of science fiction, the gap between science fiction and science fact continues to narrow. Already we have weak AI agents running on many of our smart devices, and many homes have invested in standalone units running Google Assistant, Amazon Alexa, or Microsoft’s Cortana. Indeed, legal discussions of whether these agents have rights have already begun. The age of humanoid androids with true artificial intelligence is closer than Detroit’s twenty years.

Striking the Balance

Scripture is clear that humanity is created uniquely and is uniquely placed in creation. I tackled that in April and won’t rehearse the same ideas here, but it’s worth remembering that there’s a balance to be struck. On one side is the argument that because androids look and act the same way as humans, they are therefore human. This functional view of humanity disregards significant pieces of the way humans are created and of our place in creation. (And we need to start making that cultural argument now, before the functional view takes hold completely!) On the other side, however, is the idea that because androids are precisely not human, it becomes morally acceptable to treat them in inhuman ways. Here we must be mindful that our actions not only change the world around us; they also change us as humans and shape the ways we think, conditioning us to treat ourselves and others in similar ways. In short, an owner may not morally abuse an android not because the android isn’t property (clearly it is), but because such abuse would harm the owner.

This isn’t a new principle, though. We have long argued that knives should not be used for murder or self-harm, not because the knife itself bears rights or has feelings to be considered, but because that use is itself harmful. The same can be said of almost any possession that can be misused or abused. We have been given stewardship of great riches, and our use of those gifts should be viewed through that lens.

Ultimately, Detroit isn’t destined to be a cultural landmark that will change a generation. It’s good, but its audience is too narrow to have a wide effect. But it’s the first pebble of a coming landslide of cultural forces that will question the very nature of what it means to be human. Churches would do well to prepare for the coming questions by teaching their people about vocation and stewardship now, before the questions are asked.

Join the conversation with Bill about AI and the Christian faith as we live-chat with him!
Thursday, June 14 at 11:30 a.m. (CDT) on Facebook
