Journal 5
I think people sometimes forget how large a part robots play in everyday life. As someone who hopes to go into the medical field and who has undergone a major surgery (an ACL repair), I am reminded of how much robots have revolutionized medicine every time I look at the scar on my knee or participate in shadowing opportunities when I am home. Robots have made medicine more specialized, more accurate, and less invasive. They continue to prove to many medical professionals that they are the best way to ensure the health of their patients and the success of their procedures, and those professionals who have not adopted them have largely been left in the dust.
However, when it comes to the development of humanoid robots that will be in charge of making ethical decisions, my stance differs sharply. As I expressed on Thursday, I believe that one of the greatest gifts of humanity, the primary way (that we know of right now) in which we differ from animals, is our ability to grapple with different ethical scenarios, hold a firm conception of justice, and form opinions that can be changed through the acquisition of knowledge. I do not believe there is, or ever will be, an ideal formula for ethical situations, for humans are fickle, and every decision we make takes into account not only our conscious thoughts but also our previously held unconscious judgments. Robots require programming, which means that at best they could calculate among whatever ethical considerations humans decided were the best theories to implement. An "objective" robot is programmed subjectively. How, then, can I trust it to make ethical decisions? There is far more to this world than what formulas can solve. One situation we continue to consider is the self-driving-car scenario, in which the car must decide whether to kill the driver, crash into a minivan full of kids, or kill a motorcyclist. A robot, even with a facial recognition system that may give it some insight into the pasts of the people involved, has no way to predict the future.
Journal 5 - Speciesism
This week, we read a piece by Susan Leigh Anderson called "Asimov's Three Laws of Robotics and Machine Metaethics." Anderson discusses the idea of robots as agents of ethical decision-making for our society, and she explains how the future of robotics might involve training robots to help humans make ethical decisions. I learned the term speciesism, which is essentially a preference for our own species. Speciesism is innate, and sometimes we do not realize how naturally our actions reflect this belief.
While all humans belong to the same species, we often divide our species further by race, ethnicity, religion, or gender. It has become a human habit to break the human race into subcategories that separate groups of people from one another. Over the course of our existence, this bias against people outside our own racial group has enabled slavery, segregation, and other forms of discrimination. When humans have to make ethical decisions, it is easy for this bias to take over our decision-making process. As we already know, all people are deserving of the same rights and consideration. This is one reason why robots could be good agents for ethical decision-making: a robot could be programmed to be fair and impartial with respect to race, gender, sexuality, and so on.
It is for this reason that I think it would be a good idea for robots to be programmed to follow principles close to Kant's. Kant laid out a few principles that people should follow. One is the principle of universalizability, which states that you should make an ethical decision only if you would be willing for it to be followed universally, including in your own actions. The second formulation sounds similar to the golden rule of treating others the way you would want to be treated. Kant's ethical theories are good at preventing discrimination. There are many other ethical theories a robot could be programmed with, but I think Kant's ideas are of the utmost importance.
Journal #5
In class on Thursday, we discussed three ways in which slaves attempted to gain recognition as humans rather than simply as slaves: through an owner's benevolence, by trying to look and act white, or through rebellion. We observed the "Am I Not a Man and a Brother?" image, which displays the concept of slaves being granted humanhood through the benevolence of their owners, a submissive view of slaves. Second, we looked at the picture of a slave holding the Bible, sitting in elegant clothing, and mimicking the iconic portraits of George Washington. By publishing a picture like this, slaves sought to gain their humanhood by mimicking the look of whites and the signs of affluence. Holding the Bible was a sign that slaves had begun to educate themselves and to find larger meaning in life through their divine rights. Lastly, we discussed the image of Toussaint Louverture, the leader of the Haitian slave revolt. This image showed the third way in which slaves sought humanhood: fighting for their rights. Upon further thought after class, I believe there is a relationship to draw between this discussion of slaves and the conversation we had earlier about the introduction of humanoids into society.
There are many ethical issues to discuss when it comes to flooding society with robots that are indistinguishable from humans. After taking the slave conversation into account, I can only imagine that robots could take the same measures in trying to blend into society. In order to be treated as humans and gain human rights, perhaps they will attempt one, if not all three, of the aforementioned methods. It is scary to imagine a world in which robots inhabit the earth and mimic the attitudes and looks of humans. It is safe to assume that many humans will not grant robots humanhood out of benevolence; therefore, robots would be forced to gain their rights in other, more forceful manners.
In the event that robots must resort to rebelling against people (assuming they are not programmed with Asimov's three laws) to gain their humanhood, massive destruction could be brought upon Earth and the human race. Knowing this, might it be in the best interest of the human race to develop robots right away and grant them humanhood upon creation, in order to avoid a delayed rebellion that could lead to massive destruction? Perhaps the inevitable arrival of humanoids should be expedited, and they should be accepted, in order to create peace between the species.
Journal 4
This past week in class, my brain has really been working itself thinking about artificial intelligence and the incredible advancements humans have been making with technology. But then I think: are these advancements really "incredible"? Do we want to be able to "invent" humans? If artificial intelligence could really be so similar to the real human brain and body, then why even have humans? With all this technology being invented, and with some of it actually being smarter than humans, are we not just defeating the purpose of human life? Why are we trying to invent robots that are equivalent to humans, and why do we want them to be smarter? Humans have been in this world for such a small fraction of the time it has existed, and already we are inventing replacements for ourselves. "The Bicentennial Man" really put this into perspective for me. As cool and interesting as the story was, why would we ever want to see a world like that? I just don't see the purpose in trying to make technology smarter than we are, because in a sense it never can be: if we invent something smarter than ourselves, aren't we technically still smarter than the robot, since we invented it to be smarter? We are the ones with the knowledge, the power, the tools, and the emotions; there is no need to try to teach technology all of our gifts. If we do, what are we good for?
Ashton Radvansky Journal #4
Our discussion of ethics in class on February 14th caused me to think about life in a much broader way. As we go through our daily lives, they can often become routine and monotonous. As things become more familiar, we complete each motion as though by instinct, and we do not give much thought to the greater implications of our actions. During class I was forced to venture outside the "Lewisburg bubble" and to think about potential real-world scenarios, but even then I was initially limited by my own common sense.
In class, Professor Perrone described a scenario in which an individual driving a car does not have enough time to stop before hitting someone. The driver must choose to stay straight and hit one person, or to quickly turn the wheel and run over two people. Immediately I thought, "Well, of course they should stay straight. Losing one life is better than losing two." After I had pondered this, Professor Perrone added that the one person in front of the driver is a woman eight months pregnant, while the two people to the side are a pair of criminals who have just robbed a bank. I felt a gut-wrenching feeling in my stomach, because while it would be unjust for an innocent pregnant woman to lose her life, it is just as unjust to run over two men as punishment for a crime they have committed.
I was born and raised in a religious household, and I believe it is morally wrong to take the lives of other human beings. We do not have the right to choose who lives or dies in any scenario, much less in an accidental car crash. My argument follows Divine Command Theory, which holds that "good actions follow the will of God and bad actions are contrary to the will of God." Although scripture does not address every moral issue, I believe that God is the ultimate authority and judge over human life.