This article will contain spoilers for the first 3 hours of Detroit: Become Human. This is your official warning.

Because they are already human.

Now, this is not some speculation about the origin of androids - these are my thoughts about the humanity of AI in fiction overall and in Detroit specifically. It's quite a popular theme, especially when the author wants to debate what it means to be human. Writers can be more or less subtle about it - or just give their game the subtitle Become Human. Subtlety was never a strength of David Cage.

Generally, you need some amount of subtlety in this kind of story. There are several moments in the works of Isaac Asimov where it's debatable whether a robot is acting out of its own free will or whether it's just a quirk of its software working itself out. In Mass Effect 2 and 3, you get a very close look at how the Geth figure out the whole sapience thing: their own understanding of what it means to be a person and whether "this unit has a soul".

But Detroit is way closer to things like Blade Runner and Westworld. For some reason, in a relatively near future, every single one of these multi-purpose androids in Detroit (dendroids?) possesses a true AI - and I'd say even more than that.

See, in Westworld and Blade Runner the androids (they all have different names, so let's call them androids) are programmed to be as human-like as possible, often to the point of thinking that they are human. But in Detroit the purpose of androids is not to pass as humans - they're even required to wear an LED. And yet, regardless of the purpose of any given model, they're all extremely human.

But I'm not gonna critique Quantic Dream's ability to make sci-fi in this article. For now, let's look at why the androids are people, not machines, from the start.

Exhibit 1: Kara.

Kara is a housemaid android, designed to cook, clean and such. She is also capable of emotions beyond any mechanical reason. It's not going overboard to assume that a housemaid android would try to make friends with the whole family, but she goes a bit beyond that.

Kara's owner - Todd - is an unemployed drug addict who has most definitely broken Kara once before. But that shouldn't matter, right? In anything built on or even loosely resembling Asimov's Laws of Robotics, obeying humans comes before self-preservation. A good tool should not care about being misused.

Except androids feel pain. And fear. And joy, and hope, and all of that. Kara actually becomes a deviant - a rogue android that doesn't obey the system - because she's afraid that Todd will abuse his daughter, Alice. Why does she care? Sure, she would care about the people she's serving, but why would she care more about Alice than about her direct owner, Todd?

But between two humans she chooses the one that isn't her owner - because Alice is a child and Alice isn't bad to her. It's not that Kara understands what emotions are and acts accordingly; it's that Kara's actions are dictated by her own emotional responses. And it's not because she's a deviant - it's what makes her a deviant, so the desire originates before she breaks her programming, while she should still be restricted. And her going deviant isn't provoked by some direct stimulus - she isn't being abused or directly affected by anything. It's just the notion that her human owner is going to harm another human that drives her over the edge.

Exhibit 2: Markus.

Carl - left, Markus - right.

Markus is the android of Carl Manfred - a painter and wheelchair user. He's a domestic android, assisting the painter around the house.

He starts off as you'd expect from an android, but then Carl tells him to paint something. So you, as Markus, paint an object of your choice. But that's just a recreation of something you see, and Carl wants Markus to paint something completely new and unique.

And Markus does. He states that he's not programmed to paint, but Carl says "nah, just paint," and Markus just paints. A completely new painting, based on your choices, is born out of the mind of an android like the one in the picture above. It literally takes one try to "teach" an android to create. And these are not recreations or interpretations of something Markus saw before - they're good enough to be mistaken in-universe for the work of Carl Manfred and called a masterpiece.

The question of whether an AI can create something new - create art - is very often used in fiction to ask how much of a person an AI is. Well, that question is answered in Detroit quickly and definitively.

Exhibit 3: Deviancy.

Androids in Detroit go rogue by breaking the bonds of their programming and gaining free will. And that's where the problem lies for me.

Programming in Detroit is represented by a red wall that forbids you from doing certain things and going certain places. So when you go deviant, you literally break through that wall with your mind, and you're no longer restricted by it. To me, it looks like the androids already have the full capacity for free will and thought, but the programming restricts it. It doesn't seem like an AI evolving, but an AI freeing itself.

Deviancy is often caused by emotional stress - something no machine should be built to be able to feel - and that's also strange. The first few deviants you see were trying to protect themselves and were afraid for their lives. Can you blame them for going rogue?

Well, I can blame someone, because something here is off. They become deviant and immediately act like humans - but it was also human-like emotions that made them go deviant in the first place. As if they were already operating at "human speed" but couldn't show it because of the programming.

Deviants complain that the world is unfair to them, even though it has always been like that. It's unfair only in comparison to humans - a comparison androids shouldn't be making, since they are not humans.

Except they totally are. They feel and think in the same way. They base their judgments on all the same notions a human would and have the same creative mind a human would. They're stressed by things that would stress a human; they don't like the things a human wouldn't like. Almost as if the whole experience is designed to make a human feel bad.

The Conclusion

Can humans freely use AI? Is a self-aware AI deserving of its freedom? These questions are not really posed in Detroit. Androids are people from second zero, both in what they do and feel in the game and in the way they're presented to the camera. Not once are you given any reason to doubt their "humanity," and as such, the argument is decisively resolved from the start.

For any creator who makes a tool capable of suffering deserves its retribution.