Journal 5

The reading for this week, "Asimov's 'Three Laws of Robotics' and Machine Metaethics" by Susan Leigh Anderson, discusses the possibility of using robots as ethical advisors for humans. Asimov himself ultimately rejected his Three Laws of Robotics because he believed a robot like Andrew Martin should not be treated as a slave to human beings. Anderson writes, "Humans treat machines like slaves and this makes it difficult for them to be ethical paragons." Given this human weakness, it could be beneficial for machines to instruct humans on how to become more ethical.

In the story "The Bicentennial Man," Andrew Martin is bullied by a group of children simply because they are afraid of a being smarter and longer-lived than themselves. This scene suggests that the story is meant to remind us of the enslavement of African Americans in the United States, who were held in bondage and cruelly mistreated. Anderson notes, "Humans act irrational when their interests are threatened and they have to deal with a being different from themselves."

The concept of machine metaethics then comes into play. Its ultimate goal is to create a machine that follows an ideal ethical principle and is guided by that principle in the decisions it makes. The article also identifies the characteristics a being would need in order to have moral standing, including possession of the faculty of reason, the capacity to communicate, and self-consciousness. Toward the end of the article, Anderson concludes that Asimov's Three Laws are not a satisfactory basis for governing robots: robots should be created as ethical advisors to humans, not merely as autonomous machines.