The blend of drama and humor on Almost Human is probably one of the primary factors in getting me to return every week. I’ve already said many times that I am all in, that this show is already targeting that void in my heart that Fringe left gaping open. The ideas that propel innovation and discovery in our modern culture are also the ones that provoke the most passion from the public on both sides of the issues, and week after week Almost Human delivers.
Last week we were given a snapshot of the debate on cloning, a projection 30 years into the future of the possibilities that could follow revolutionary breakthroughs, a public still too scared to allow the research to be permissible, and the abuse that censorship and anti-practice laws generate. This week, in a similar fashion, we take a look at the duplicity of corruption in the form of synthetic hearts: a practice that builds itself on a platform of acting in the best interest of individuals who’ve been denied what they need to survive, while simultaneously using that life-saving mechanism as a merciless Hand of God control over those same lives. In the words of Honoré de Balzac: “Behind every great fortune there is a crime.”
Perhaps that is an unfair assessment. Blanket statements and stereotypes seem to encourage people to note all the exceptions; but unfortunately, in this Almost Human universe, fortune without crime is the exception, not the rule. And it’s becoming increasingly clear to me that, despite these episodes airing out of order, they still manage to paint a very clear picture of the depth of corruption and the dismal way its incessant occurrence is handled. The large corporations take advantage of the laws protecting synthetic body parts in order to increase profits and remain the relevant, primary source for bio-replacements and upgrades; meanwhile, the people who don’t meet the corporations’ financial qualifications are lured into alternative solutions by the humanist who stands on his soapbox and claims that the laws are unfair! That the only reason the laws exist is to protect a pretty penny! And up until the moment the patient wakes up from the anesthesia to a beating heart repairing the internal damage and rigged to a timer, they might believe that sales pitch. Why else would they agree to such a procedure?
When I was little, my dad used to ask me: “Would you like your allowance, or all the money I have in my wallet right now?” Of course, as a little girl, I’d assume whatever was in the wallet had to be more than my allowance, until he’d open it up and there would be nothing in it (or, in any case, less than what I was expecting). And that kind of reminds me of this scenario: the corporations place a set value on their products. A new heart is x dollars. The black market tells you it’s free. It’s as free as clicking on a spam-rigged pop-up ad. You keep paying for it long after you’ve realized what a mistake it was to fall for that “Win a Free Laptop!” line again.
But I digress. In all aspects of life there are people willing to prey on the weak and naïve. Spammers and hackers prey on our ignorance; Craigslist stalkers prey on our need for inexpensive furniture. Honestly, it seems like the lower a person believes they are on the food chain, the more willing they are to justify their actions, believing they deserve or qualify for such behavior because of a lack of appreciation or an inability to rise above their current status. This isn’t a universal claim, of course; it just happens to be the trend.
There was a certain playfulness to Dorian this week that emphasized his continual readjustment into society and exposed a bit of the learning curve of machine-driven cognition. By which I mean a cognitive paradigm built on 1s and 0s, but influenced by emotion. There was an article in Time Magazine earlier this year about regulating robots. We aren’t exactly at the DRN stage of the robotics revolution yet, but it reiterated the same fear that men have had concerning robots from the beginning: “As robots take on ever more complex roles, the question naturally arises: Who will be responsible when they do something wrong?”
Interestingly, we got a very valuable glimpse into the history of the DRN models through DRN 494. This particular unit had been decommissioned from the police force after breaking protocol and causing the death of a human (a violation of the First Law of Robotics). I’m assuming that when a robot is decommissioned and repurposed, its memories or databases are altered to reflect its new responsibilities; unlike humans, it doesn’t have to undergo that period of time in which it must forget about how something went down (i.e. Kennex). So it doesn’t really matter how much time has elapsed since the unit was on the force, but because it’s an emotive DRN model, it cares about this seemingly inconsequential fact. Not only does it care about how much time has passed, it cares about why it was demoted.
Dorian tells us that a glitch was discovered in the DRN models and, for a time, all units underwent a test to determine whether they possessed The Flaw. (Note: I wasn’t able to catch the name of the test; I thought he said Lucre, which would be awesomely ironic, but I will simply refer to it as the Test.) This Test was doled out for a while, until the creators concluded that the problem couldn’t be adequately ascertained and all DRNs were decommissioned.
Seems simple enough; creators taking responsibility for their creations. If the robot is acting contrary to its purpose, then its purpose has been violated and its existence is essentially worthless. Ryan Calo argues in favor of a “selective immunity for manufacturers of open robotic platforms for what end users do with these platforms.” (Keep in mind, Calo was probably not thinking about emotive androids whilst assembling this research!) Outside the context of Asimov’s fictional Laws of Robotics, when a robot causes property or personal damage, it becomes less marketable. People won’t buy something they think will end up costing them more in the long run. The continued use of robotic drones has much of the public stirring, but by and large robotics is already a huge part of warfare. Calo is mostly speaking of open robotic software, it seems, but I believe the idea still holds generally true for the DRN model when he explains the pros and cons of making manufacturers responsible for their robots. With immunity, would there be less incentive to make them safe? Without immunity, would we run the risk of stifling innovation? Calo argues for immunity under the following circumstances: 1) individual consumer decisions in usage, 2) applying “third party” software, and 3) physical, consumer-orchestrated modifications.
If you think I’m chasing a rabbit, I swear I’m bringing it back in. Essentially, what I get from this proposal is that there are two sides to every story. A manufacturer builds and produces a robot for a specific purpose. The robot is sold and deployed to an end user with the expectation that the end user knows what the robot is for. If the end user decides to modify the robot in any way, that cannot be the responsibility of the manufacturer (assuming all the legal documents accompany the product, warning the end user against voiding the warranty).
So, in a sense, the manufacturers built the DRN models as a means to improve upon a purely logic-based RoboCop. They wanted a smart machine that could make decisions above and beyond Asimov’s Laws of Robotics. However, when the DRNs exhibited behaviors leaning more toward the emotive side than the Do-What-I-Tell-You-To-Do side (it was probably worse than that; I’m just posing it as an example), the public outcry was most likely an influential factor in reassessing the combination of qualities for which a DRN was built. The manufacturers were rightly taking responsibility for the behavior of the robots, but were the robots really acting out of character? Just work with me here for a moment. The DRNs are doing exactly what they are designed to do, except that as they continue to cache all of this emotive logic data from their work in the field, they behave in unexpected ways. That seems impossible to know without extensive testing, and how does one obtain that without extensive field work?
Maybe they were acting out of character. Clearly, when given bad data, a DRN has the potential to make poor decisions. But then again, whose fault was it that the DRN was given bad data? In this case it was Dorian’s, yes, but…consider Alias season 4. In order to throw Nadia off his trail, Jack tells her that a certain man killed her mother, and as a result she shoots the man to death. Her actions were based on information that was intentionally given to her incorrectly. And this cycles back to the first part of this post. How can we compete fairly in a world where our only two options are corporations that legally take advantage of the sick or black market groups that illegally take advantage of the sick? How can robots, or their manufacturers, be expected to take responsibility for faulty information upon which their entire cognitive paradigm depends? How can humans be expected to respond accurately when the information they’re given is wrong?
The factor in all of this that intrigues me most is that when a DRN model is decommissioned, as we saw with DRN 494 at the end of the episode, there is a chillingly persistent quality to the internalization of its memory repository, in a way that only seems common to individuals possessing a soul. A memory is, in its most basic form, data. But then it is processed and influenced by emotion, filtered through our respective experiences and knowledge bases, resulting in an associative array of biochemical reactions, and feelings without details. The DRN could remember the boy called Philip, but he could not remember his cases.
The idea that someone or something can leave an indelible mark suggests the concept of a soul. But, frankly, I’m not afraid to admit I don’t believe for one second that these robots are supposed to have souls. So what in the world have these manufacturers invented? A robot that can simulate emotive responses similar to those of a person with a soul? Decisions based on emotional experiences? It’s quite fascinating! And that in itself presents an argument for what qualifies as a machine. At what point is an android no longer treated as a robot, but as a human? And if it can never be treated as a human, is it inhumane to even produce a robot that can so closely match the responses of a human without being protected under the same moral laws?

I took the last scene to mean Dorian erased 494’s case files but left him the memories of the boy. You bring up interesting concepts about the DRNs’ “emotions.” The case files wouldn’t contain emotional data, so it would seem the DRNs create those emotions internally from basic factual data, quite like we do. (Though our emotions are often devoid of facts.)
Dorian said he gave 494 access to “his” (494’s) files, and those files were out of date (as Dorian’s first were). That leads me to think the DRNs’ files are permanent to the unit and access is only allowed or blocked rather than erased.
This is a good point, David. I was thinking along the same lines as well, but it confused me. Either it just wasn’t thought through all the way, or they decided that a robot’s memory is more easily partitioned, but how can he have the memory of Phil without the circumstances that led him to the boy? Just theorizing here, of course, but even if Dorian had left the memory of Phil and removed all the case files, the effect the boy had on him was partly due to the manner in which he saved him, which would have been linked to the case files. And if he can’t remember those precisely, then the effect has more to do with persistent, emotional memory. I don’t know. It would be easier to analyze if it were a human, but we just don’t know how these robots work.
Another great post, Emilee, as always.
Your last paragraph touches on my biggest question regarding the DRNs in Almost Human. The DRNs were created for the police force. When it was discovered that they were faulty, they were decommissioned and re-tasked for other work, be it working maintenance in a hospital or replacing solar panels on the space station. Now, because the manufacturer created these robots to simulate emotive responses similar to those of a person with a soul, how does that affect them in their new endeavour? Even though DRN 494 had no memories of being a cop, he still knew that being a cop was what he was created for. Would this bring out an emotional response from him, knowing that he isn’t doing what he was meant to do? Is it possible for DRNs to be depressed? We know from DRN 494’s story that when Phil, the young boy, was in trouble, his emotions got the better of him and he broke protocol. Could it work the other way around as well? Could a DRN assigned to menial tasks become depressed to the point where it would affect his performance? Makes me wonder…
My other question is how ownership of the DRNs affects them. I’ve seen no evidence to suggest that they are considered “free individuals”. From what we’ve been told, every police officer is assigned a synthetic as a partner. The MXs or the DRNs are assigned to the cops the same way they are assigned their service pistol and their cruiser. The androids are just another piece of police property. This would not affect the MXs, but for the DRNs, with their emotive synthetic souls, wouldn’t it be akin to slavery? Would Dorian be taken seriously if he asked to be reassigned to another officer because of difficulty working with John? Or what if Dorian decided one day that he’d had enough and didn’t want to be a cop anymore (an emotional being could come to that decision)? Would he be allowed to walk away from the force? If he’s told no, that he’s police property and that he must stay, he would see himself as nothing more than an indentured servant. A perfectly rational emotive response. We already saw that Dorian sees himself as more than the MXs. He told John that he wanted his own place. But does the police force see him that way, and would they take him seriously if/when he presses the matter?
I see a lot of interesting story lines in the future concerning Dorian’s emotive responses as well as his rights and individuality.
I like your comment about ownership, Mark. It takes me back to the episode Skin, when Kennex was asking that sexbot “who owns you?” and Dorian comes back with the comment, “She’s probably not aware of those terms.” All of what you said, that androids are just another piece of police property and are like a service pistol, is so true. And it totally changes the game when a robot is unaware of the self. I hate to keep harping on the novella, but Asimov’s The Positronic Man/Bicentennial Man is such an ideal model for what this show is undertaking because of the manner in which Andrew wins over the Martin family and slowly turns into a human. I don’t think Dorian wants to be human, necessarily, but he does want his feelings acknowledged and accepted – which means the police would have to start taking his requests seriously!
I immediately thought of the movie Repo Men, where in the future your artificial organs are repo’d if you don’t pay.
But I was very surprised by how much more there was to this episode. Almost Human is hitting its stride, showing how feelings get Dorian in trouble and lead him to make mistakes.
Dorian is not a perfect intelligence.
He felt the need to give 494 his memories outweighed the risk.
I also love the mythology: the plot explanation that not every model qualified, and that a test determined which models could stay and which were decommissioned.
Just great writing. Here’s hoping for a quick renewal for season 2.
Thanks for the comment, Charlie! I haven’t seen Repo Men. It was in my queue for a while, and I forgot about it. I want to go check it out now. Hoping for a quick renewal as well!
I agree — the last scene immediately reminded me of Fringe episode 402! The undercurrent of season 4 — for me — was the indelible mark that relationships leave on our souls… and this episode of AH was a wonderful bridge between that theme in Fringe and the overarching theme of AH: what does it mean to be human?
In other news, your dad is mean.
lol, I’m afraid I don’t understand this…
The allowance trick, I mean… dastardly.