Honestly, I hate ethics. There is always more than one answer, and no one answer is ever really better than the others. Consider the trolley example: most would say to switch tracks and let the one die instead of the five. Yet almost everyone would change their mind if the one was someone they loved. In ethics, there is no unbiased answer either. If you choose to kill the one instead of the many, it’s because it’s easier for you to defend and cope with, less blood on your hands as it were. The decision isn’t made using any of the ethical principles we talked about. In situations like that, there isn’t enough time.
This is where the debate over robots comes in. Even if we were able to create artificial intelligence, that wouldn’t mean we had created a solution for ethical and moral dilemmas. There are too many theories to sift through in order to make a decision, and we don’t know which method, if any, is the best.
Another thing is that we are not ready for artificial intelligence that takes a humanoid form. We have enough trouble dealing with the differences within our own species; do we really think we can handle the complications a new kind of being would create? Right off the bat, people would be fighting over whether these beings should be kept subservient or given rights. We will never create a robot with a human-like emotional capacity. There is no way to program grief, happiness, or sadness, only responses that may resemble such emotions and reactions. The human race would not be able to cope with another fairly sentient life form that could perform our tasks and outlive us without major fighting first.