The blend of drama and humor on Almost Human is probably one of the primary factors in getting me to return every week. I’ve said many times that I am all in, that this show is already targeting the void in my heart that Fringe left gaping open. The ideas that propel innovation and discovery in our modern culture are also the ones that provoke the most passion from the public on both sides of the issues, and week after week Almost Human delivers.

Last week we were given a snapshot of the debate on cloning: a projection, 30 years into the future, of the possibilities that could follow revolutionary breakthroughs, of a public still too frightened to make the research permissible, and of the abuse that censorship and anti-practice laws generate. This week, in a similar fashion, we take a look at the duplicity of corruption in the form of synthetic hearts: a practice that builds itself on a platform of acting in the best interest of individuals who’ve been denied what they need to survive, while simultaneously using that life-saving mechanism as a merciless Hand of God control over those same lives. In the words of Honoré de Balzac: “Behind every great fortune there is a crime.”

Perhaps that is an unfair assessment. Blanket statements and stereotypes seem to encourage people to note all the exceptions; but unfortunately, in the Almost Human universe, fortune without crime is the exception, not the rule. And it’s becoming increasingly clear to me that, despite these episodes airing out of order, they still manage to paint a very clear picture of the depth of the corruption and the dismal outlook on its incessant recurrence. The large corporations take advantage of the laws protecting synthetic body parts to increase profits and remain the relevant, primary source for bio-replacements and upgrades; meanwhile, the people who don’t meet the corporations’ financial qualifications are lured into alternative solutions by the humanist who stands on his soapbox and claims that the laws are unfair! That the only reason the laws exist is to protect a pretty penny! And up until the moment the patient wakes up from the anesthesia to a beating heart repairing the internal damage and rigged to a timer, they might believe this sales pitch. Why else would they agree to such a procedure?

When I was little, my dad used to ask me: “Would you like your allowance, or all the money I have in my wallet right now?” Of course, as a little girl, I’d assume that whatever was in the wallet had to be more than my allowance, until he’d open it up and there would be nothing in it (or, in any case, less than what I was expecting). And that reminds me of this scenario: the corporations place a set value on their products. A new heart is x dollars. The black market tells you it’s free. It’s as free as clicking on a spam-rigged pop-up ad. You keep paying for it long after you’ve realized what a mistake it was to fall for that “Win a Free Laptop!” line again.

But I digress. In all aspects of life there are people willing to prey on the weak and naïve. Spammers and hackers prey on our ignorance; Craigslist stalkers prey on our need for inexpensive furniture. Honestly, it seems like the lower a person believes they are on the food chain, the more willing they are to justify their actions, believing they deserve or are entitled to such behavior because of a lack of appreciation or an inability to rise above their current status. This isn’t an absolute statement, of course, just that, once again, it happens to be the trend.

There was a certain playfulness to Dorian this week that emphasized his continual readjustment into society and exposed a bit of the learning curve of machine-driven cognition. By which I mean a cognitive paradigm built on 1s and 0s, but influenced by emotion. There was an article in Time Magazine earlier this year about regulating robots. We aren’t exactly at the DRN stage of the robotics revolution yet, but it reiterated the same fear that people have had concerning robots from the beginning: “As robots take on ever more complex roles, the question naturally arises: Who will be responsible when they do something wrong?”

Interestingly, we got a very valuable glimpse into the history of the DRN models through DRN494. This particular unit had been decommissioned from the police force after breaking protocol and causing the death of a human (a violation of the first Law of Robotics). I’m assuming that when a robot is decommissioned and repurposed, its memories or databases are altered to reflect its new responsibilities; unlike humans, it doesn’t have to undergo that long stretch of time it takes to get past how something went down (e.g. Kennex). So it shouldn’t really matter how much time has elapsed since the unit was on the force; but because it’s an emotive DRN model, it cares about this seemingly inconsequential fact. Not only does it care about how much time has passed, it cares about why it was demoted.

Dorian tells us that a glitch was discovered in the DRN models and, for a time, all units underwent a test to determine whether they possessed The Flaw. (Note: I wasn’t able to catch the name of the test; I thought he said Lucre, which would be awesomely ironic, but I will simply refer to it as the Test.) This Test was doled out for a while until the creators decided that the problem couldn’t be adequately ascertained, and all DRNs were decommissioned.

Seems simple enough: creators taking responsibility for their creations. If the robot is acting contrary to its purpose, then its purpose has been violated and its existence is essentially worthless. Ryan Calo argues in favor of a “selective immunity for manufacturers of open robotic platforms for what end users do with these platforms.” (Keep in mind, Calo was probably not thinking about emotive androids whilst assembling this research!) Outside the context of Asimov’s fictional Laws of Robotics, when robots cause property or personal damage, they become less marketable. People won’t buy something they think will end up costing them more in the long run. The continued use of robotic drones has much of the public stirring, but robotics at large is already a huge part of warfare. Calo is mostly speaking of open robotic platforms, it seems, but I believe the idea still holds generally true for the DRN model when he explains the pros and cons of making manufacturers responsible for their robots. With immunity, would there be less incentive to make them safe? Without immunity, would we run the risk of stifling innovation? Calo argues for immunity under the following circumstances: 1) individual consumer decisions in usage, 2) the application of third-party software, and 3) physical, consumer-orchestrated modifications.

If you think I’m chasing a rabbit, I swear I’m bringing it back in. Essentially, what I take from this proposal is that there are two sides to every story. A manufacturer builds and produces a robot for a specific purpose. The robot is sold and deployed to an end user with the expectation that the end user knows what the robot is for. If the end user decides to modify the robot in any way, that cannot be the responsibility of the manufacturer (assuming all the legal documents accompany the product, warning the end user about what voids the warranty).

So, in a sense, the manufacturers built the DRN models as a means to improve upon a purely logic-based RoboCop. They wanted a smart machine that could make decisions above and beyond Asimov’s Laws of Robotics. However, when the DRNs exhibited behaviors leaning more toward the emotive side than the Do-What-I-Tell-You-To-Do side (it was probably worse than that; I’m just posing it as an example), the public outcry was most likely an influential factor in reassessing the combination of qualities for which a DRN was built. The manufacturers were rightly taking responsibility for the behavior of the robots, but were the robots really acting out of character? Just work with me here for a moment. The DRNs were doing exactly what they were designed to do, except that as they continued to cache all of this emotive logic data from their work in the field, they began behaving in unexpected ways. That seems impossible to know without extensive testing, and how does one obtain that without extensive field work?

Maybe they were acting out of character. Clearly, when given bad data, a DRN has the potential to make poor decisions. But then again, whose fault was it that the DRN was given bad data? In this case it was Dorian’s, yes, but…consider Alias season 4. In order to throw Nadia off his trail, Jack tells her that a certain man killed her mother. As a result, she shoots the man to death. Her actions were based on information that was intentionally given to her incorrectly. And this cycles back to the first part of this post. How can we compete fairly in a world where our only two options are corporations that legally take advantage of the sick or black-market groups that illegally take advantage of the sick? How can robots, or their manufacturers, be expected to take responsibility for faulty information upon which their entire cognitive paradigm depends? How can humans be expected to respond accurately when the information is wrong?

The factor in all this that intrigues me most is that when a DRN model is decommissioned, as we saw with DRN494 at the end of the episode, there is a chillingly persistent quality to the internalization of its memory repository, in a way that only seems common to individuals possessing a soul. A memory is, in its most basic form, data. But then it is processed and influenced by emotion, filtered through our respective experiences and knowledge bases, resulting in an associative array of biochemical reactions, and feelings without details. The DRN could remember the boy called Philip, but he could not remember his cases.
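For the programmers in the room, here is a toy sketch of how I picture that kind of wipe: the factual fields get scrubbed, but the feeling attached to them doesn’t. This is purely my own illustration in Python; Philip’s name comes from the episode, but the record fields, the case details, and the “decommission” routine are all invented, not anything pulled from the show.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    details: dict = field(default_factory=dict)  # the raw "data": names, cases, facts
    feeling: str = ""                            # the affective tag layered on top of it

def decommission(memories: list[Memory]) -> list[Memory]:
    """Toy wipe: strip the case details but leave the feelings attached to them."""
    return [Memory(details={}, feeling=m.feeling) for m in memories]

service_record = [
    Memory(details={"person": "Philip"}, feeling="protectiveness"),   # invented example record
    Memory(details={"case": "protocol breach"}, feeling="guilt"),     # invented example record
]

for m in decommission(service_record):
    print(m)  # details are empty; the feelings-without-details remain
```

Wipe the dictionary and you still get output; the record knows it felt something even though it can no longer say about what. That, to me, is the eerie part.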

Remind you of anything?

The idea that someone or something can leave an indelible mark suggests the concept of a soul. But, frankly, I’m not afraid to admit I don’t believe for one second that these robots are supposed to have souls. So what in the world have these manufacturers invented? A robot that can simulate emotive responses similar to those of a person with a soul? Decisions based on emotional experiences? It’s quite fascinating! And that in itself presents an argument for what qualifies as a machine. At what point is an android no longer treated as a robot, but as a human? If it can never be treated as a human, is it inhumane to even produce a robot that can so closely match the responses of a human without being protected under the same moral laws?
