Journal 4: The Bicentennial Man

After learning about the various ethical theories in class this week, I have been drawn to Kant's theory, particularly his categorical imperatives. His first categorical imperative states that you should act only in ways that you would want to be universally applied. In other words, you have to imagine a specific decision applied on a large scale, as if this one case would set the precedent for every case to come. You must also be willing to have your moral decision applied to you. For example, if you want to lie to a friend about not having money so you can avoid helping them pay for a soda, you have to be okay with that same friend lying to you about having money when you need money to buy a soda. The second categorical imperative Kant explains is the idea that you should never treat people merely as a means to an end. Many people follow Kant's ethical theory because it guards against exploiting or discriminating against people.

While I was reading The Bicentennial Man, I immediately began to think about Kant's theory in relation to Andrew and his basic rights. At one point in the story, two men walk up to Andrew and order him to dismember himself. By this point I had grown very fond of Andrew, because I got the impression that he actually experienced emotion and was similar to humans. I was upset when the men ordered Andrew to dismember himself, because no human would ever be subjected to such a demand or taunting. In the programming of robots, three laws are always followed: first, a robot may never kill or cause harm to a human; second, a robot must always obey a human, unless obedience would break the first law; and third, a robot must protect its own existence, unless doing so would break the first two laws. After this scene, I began to reason with Kant's first categorical imperative as it applies to these three basic laws of robots' functioning. The protection these laws provide should be held universally for all created beings, not just humans. It should not be okay for these men to degrade and humiliate Andrew when the same act would never be done unto them. The second law should be revised to state: "A robot should be obedient to humans unless obedience would cause harm to humans, to itself, or to other robots."

The second categorical imperative is also called into question in this story. This imperative states that people should never knowingly use others merely for their own benefit. In the beginning of the story, George uses Andrew to make hundreds of thousands of dollars. At this point in the story, you assume Andrew to be a robot, so the imperative might not apply. However, as the story developed, I began to see Andrew in the likeness of a human. Once Andrew gained his "freedom" and owned himself, I no longer saw him as a robot; he was a functioning member of society. In Andrew's case, I think the second categorical imperative should then apply. People should not be able to use Andrew as a mere means to their ends. Neither George nor anyone else should be able to order Andrew around for personal gain.

Although I do think Andrew should have the same rights as a human based on Kantianism, I am utterly terrified by this idea. I do not think people will ever fully understand the capability of robots. Even though we create them, I think they can adapt far faster than a human can realize and understand. In other words, robots might be able to manipulate humans if we do not take them seriously. In a movie I watched called Ex Machina, an artificial intelligence fools her creator and tricks another man into thinking she loves him. The man tries to free her, and she ends up killing her creator, leaving her would-be rescuer trapped, and escaping into the real world, surrounded by people who might never know she is not a "human." In this scenario, the artificial intelligence is so advanced that people will assume she is human, and she will have the human rights that Andrew did not have in the story. This is scary to me because, in theory, she deserves the same rights as humans, which could be dangerous. People are creating these robots as a means to their own ends, but are making them so advanced that the roles could be reversed. As in Ex Machina, robots could end up using people as a means to their ends. I agree with the main principles of Kantianism; therefore, I think that if a robot were ever created to be just like a human, it would deserve the same rights as us. However, this is exactly why I am hesitant about the creation of such robots: the danger they might pose to our society.