
Human Robots

During class, when we watched the TV show "Humans," I thought the potential technology it exhibited was extraordinary and highly futuristic. However, later that day I was on YouTube and came across a video of a human-like robot. It drew my attention, and I continued to watch videos of the robot and its abilities. Surprisingly, the robot's abilities and looks were not far from the capabilities of the robots in "Humans." The robot, while not as realistic looking, resembled a life-size Barbie doll. Its lips moved when it spoke, it had teeth, and the computer it was connected to was able to hold a conversation. The engineers said that one of their goals was to give the robot a consciousness close to that of humans. The importance of this, they explained, was so the robots could function and, in turn, aid in our lives as much like we do as possible.

As seen through the precariousness of Anita in "Humans," a conscious robot is not necessarily something the human race wants. When robots become as smart as humans and develop feelings like ours, their role in our lives can begin to cause more harm than help. They can rationalize actions as good when those actions are really disadvantageous to what we are trying to accomplish. Hopefully, this complication will never develop.

Similarly, if self-driving cars develop this type of intelligence, the result can be just as harmful. Absent human rationality and holistic understanding, the "right" ethical decision made by the car can run counter to the intentions of the driver. For example, in an accident, the car could decide to kill one person who is close to the driver instead of two people the driver doesn't know. While this is a dilemma with no right answer, I would expect the driver to be more satisfied if the person close to them were saved. In "Humans," Anita acts similarly when she takes the daughter because she thinks it's for the greater good, when it is actually for the worse. It's these issues surrounding the consciousness of robots and computers that worry me. There is simply no other being that can make decisions at the level of the human mind.


Ashton Radvansky Journal #6

When I walked into our first class on Tuesday, January 17th, I had no idea what I was getting myself into, and I was unsure of the path that this class would take. As we began our discussion on technology, I was immediately thrown off guard when people began to argue the negative effects of technology on society. I had never looked at the argument from the other side, and I had never thought about the potential complications that technology has introduced into our everyday lives. I thought to myself, "How can technology be a negative when we think about how we use iPhones in our daily lives, or about the advancements that have been made in medical and surgical uses?" I maintained this thought process until this past Monday, February 27th.
As I walked to my 12 o’clock MATH 192 class, I had a vibrant attitude because the weather was nice and it was a beautiful day to be outside. As I looked around and enjoyed the scenic views of Bucknell’s campus, I also noticed something quite alarming. Everywhere I looked, I saw people walking around campus staring at their cell phones, ignoring all forms of communication with their peers and separating themselves from their present surroundings. Additionally, the first move people made upon leaving their classes and walking out of an academic building was to reach into their pockets and see what alerts they had missed during their 52-minute class period. I was shocked to see that this was everyone’s first action upon getting out of class; it was almost an instinctive reaction.
As we watched the first episode of Humans in class this past Tuesday, I thought about what I saw while walking to class on Monday, and I began to realize that as technology becomes a larger factor in our lives, we slowly forget about the day-to-day interactions with others that make us inherently human. Why should we go through the awkwardness of talking to someone about a difficult topic when instead we could just text them and avoid the interaction altogether? We have come to value the latest and greatest iPhone over human friendship, and that is unacceptable. During my grandparents’ childhood, all the kids in the neighborhood would be outside, playing games and having fun. Now children stay inside and play video games, most of the time alone and against a computer-generated player. Sure, children are still playing games, but it is completely different because they are missing out on what is necessary for their growth into successful adults: interactions with others. Sometimes it is necessary to take a step back and analyze how exactly our society has been impacted by external forces, and I believe that time is now. People need to truly realize how much time they are spending on their phones and other devices when they could instead be spending quality time with the people who matter to them most: their family and friends.


Caroline Kunkel Journal 6

One of the things I found most interesting this week, when we watched the first episode of Humans and during our following discussion, was the two ways robots were seen. The first way robots are represented, which we saw time and again throughout Humans as well as in Asimov’s The Bicentennial Man, to name a few, is as subhuman beings. In these two sources, the robots were treated as slaves and free labour, allowing humans to live in luxury. This treatment of robots seemed particularly odd during Humans, in the interactions between Anita and the mother, who insists on treating Anita as another conscious being rather than as a robot. In addition, the treatment of Andrew in The Bicentennial Man paralleled the treatment of slaves in the United States. What’s more, Andrew’s struggle to become human, while superficially different, paralleled the slaves’ struggle and fight for freedom and recognition as human beings by the US government.

In contrast to how the robots in Humans and The Bicentennial Man were treated, the Multivac in Asimov’s The Last Question and All the Troubles in the World is treated and presented to readers as an almost God-like being of authority. This is evident first in the way the Multivac is able to see the deepest secrets and workings of everyone over the age of 18, to the point that it can detect lies it is told. This deep understanding of everyone seems to mirror how some people see God as an all-knowing being from whom you can hide nothing. The idea is furthered by the fact that everyone on Earth is able to ask the Multivac anything they wish and accepts its answer without question. One thing I find rather interesting is that, like God, the Multivac knows about all of the troubles of the world, as the title of the story suggests, and by knowing all of this terrible information, the Multivac wishes to die. This raises the question of whether Asimov is implying that God, having all the knowledge that Multivac has, would wish to die as well. The parallels between God and the Multivac continue in The Last Question, when this machine, which evolves over time to finally become the cosmic AC, is linked to every person throughout the world, then throughout the galaxy, the universe, and finally the entire cosmos. In this story we see people time and again asking this being, which they have never seen or interacted with before, if there is a way to reverse entropy. This question, and the manner in which it continuously comes back throughout time, is reminiscent of how people ask the same questions of God time and again.

In addition to the seeming duality of robots throughout science fiction, there is a common commentary on women that occurs throughout the texts and television programmes mentioned above. In all of his texts, Asimov portrays women as hysterical, frail beings who are lesser than men. For example, in All the Troubles in the World, the only female character is Mrs. Manners, whom we see acting hysterically as her husband is being taken away. Beyond the way she acts, the fact that she is the only character not given a first name, being referred to only as Mrs. Manners, suggests that Asimov sees women as lesser than men. We see similar portrayals of women in Asimov’s The Last Question when, again, the only female characters are not only given silly names, perhaps so that they might not be taken as seriously, but are also depicted as having shrill, shrieking voices. This depiction of women as lesser, hysterical beings is also seen in the mother character in the television programme Humans. The mother of the house is portrayed as irrationally opposed to her family’s having a robot to help around the house, and when she sees the interactions between her youngest daughter and the robot Anita, her behaviour borders on hysterical as she forbids Anita to touch her daughter or take care of her. All the while, the father of the house not only sees his wife’s concerns as silly and irrational, but also matter-of-factly tells her that he needed Anita to take care of the house since the mother was not there, implying that he could not do the work himself because it is a woman’s job.


Journal 7

In class on Thursday, we discussed the Asimov short stories that we had to read, along with the investigative piece on our criminal justice system. The discussion of probability arose amongst the class. Probability is a value between 0 and 1 that designates the likelihood that an event will or will not occur. In my Management 102 class, which I took this past fall, a huge portion of the class involved computing probabilities of different events and then reporting them to a company to help its decision-making processes. Many times, if I computed an event to have a 0.1 probability of happening, we would convert that to a 10% chance (easier for humans to understand). Converting to a percentage chance allowed us to quickly advise on decisions based on my interpretation of the number. In this case, anytime I see a low number like 10%, I interpret it as: the event will not occur.
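As a rough illustration of that conversion and of the mental shortcut I just described (this is only my own sketch, not anything from the actual Management 102 assignments; the function names and the 0.5 cutoff are assumptions I made for the example):

```python
def to_percent(probability):
    """Convert a probability in [0, 1] to a percentage, e.g. 0.1 -> 10.0."""
    return probability * 100

def naive_interpretation(probability, cutoff=0.5):
    """The mental shortcut described above: anything below the cutoff reads
    as 'the event will not occur', anything at or above it as 'will occur'."""
    return "will occur" if probability >= cutoff else "will not occur"

p_rain = 0.1  # assumed 10% chance of rain, purely for illustration
print(f"{to_percent(p_rain):.0f}% chance -> the event {naive_interpretation(p_rain)}")
# Prints: 10% chance -> the event will not occur
# ...even though, over many such days, it still rains about 1 day in 10.
```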

To put this into perspective, say the event of it raining today has a 10% chance of happening. We all assume that it will not rain and will most likely choose not to bring a raincoat. If it ends up raining, most people would be shocked, because they interpreted the 10% in their minds as meaning it won’t rain, instead of interpreting it as a number that is not 0, meaning rain is always possible. This wrong interpretation of percentages happens all the time in the world around us. Recently, it was stated before Election Day that Hillary Clinton had a 65% chance of winning the presidential race. Because of this high percentage, and our human impulse to further simplify numbers, people assumed Hillary had it in the bag. However, when the election came to a close, Trump “surprised” voters and ended up winning the race. The real point of this situation is how the voters were surprised that Trump won. Hillary’s 65% made people automatically and faithfully assume a clear and easy win. They should not have been as shocked as they were, because the forecast never said that Trump’s chance of winning was 0, meaning it was still possible for him to win. Contrary to popular belief and speculation, the original reporting of the likelihood of Hillary winning was not wrong. What was wrong was people’s interpretation of what that percentage actually meant.
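To make the same point concrete, here is a quick simulation sketch (my own, not tied to any actual forecast model; the 35% figure and the number of trials are just assumed for illustration) showing how often the "unlikely" outcome still happens when the situation is repeated many times:

```python
import random

random.seed(0)

p_unlikely = 0.35   # assumed chance of the "unlikely" outcome (e.g. the 35% candidate)
trials = 10_000

# Count how many simulated repetitions end with the unlikely outcome occurring.
wins = sum(random.random() < p_unlikely for _ in range(trials))
print(f"The 'unlikely' outcome happened in {wins / trials:.1%} of {trials} runs")
# Roughly 35% of runs -- common enough that it should never be read as impossible.
```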

People’s need to interpret everything is both our strength and our flaw. Interpretation helps people get a better read on social situations and shortens the decision-making process. However, it is also our flaw because people are too quick to jump to conclusions before understanding the information at hand. This human flaw was exemplified in All the Troubles in the World by Isaac Asimov. The people working for Multivac were so accustomed to the accuracy and precision of the machine that they made a fundamental error. When Multivac told the government that Mr. Manners was planning to try to destroy Multivac, they automatically assumed it was Joseph Manners, because they interpreted the act as so radical that only an adult would be capable of it. They did not even check to see if there was a child still being reported under the name of Mr. Manners (a fundamental procedure of the system). Multivac chose this family on purpose because it knew that our flaw of interpretation would allow the plan to go through. I think a point Asimov was trying to make is that people are lazy and arrogant. After a while, they do not do the extra work to ensure they are right, because they assume they will be right. I think this arrogance is also shown in how the government was running Multivac. They intelligently created a robot and a system to almost completely eradicate crime. While this was a huge feat, they did not consider the possibility that, over time, Multivac could be filled with so much information that its intelligence would far surpass that of humans. This phenomenon is known as the singularity. If this were ever to happen, it would not make sense for a human to run such an advanced system, because they would have no idea what is going on. Humans did not catch on to Multivac’s plan until it was almost too late, and even then, they only caught on because of a flaw in Multivac.


Journal 4

In class this week we’ve been talking a lot about robot ethics and robots’ potential role in society. Personally, I can’t seem to grasp robot ethics. I guess it’s because we haven’t experienced anything like this in society yet; it’s just a part of science fiction. To me it’s very obvious that robots are not living beings, and therefore ethics shouldn’t necessarily apply. I’m all for animal ethics and extending more rights to animals, because I believe animals are sentient beings with feelings and beliefs of their own. At this current point in time, though, there are no robots with that ability: robots are not sentient, and they have not developed the ability to think on their own and make their own decisions, at least to my understanding. Therefore I can’t seem to picture what role robots will play in our society of the future. I understand very well that, with the current advancement of technology and how society has begun to revolve around robots, they will be a very prominent part of the future, but on what level? Robots have the potential to be incredibly smart, but do they have the potential to develop feelings? Feelings are something that comes with being sentient; if you can think and feel relative to the world around you, then I would consider that a sentient being. I’m sure robots eventually will be able to do calculations so fast that they are essentially thinking and adapting to everyday changes and situations, but will they be able to feel? In The Bicentennial Man, Andrew says at one point that his circuits feel a certain way, and he recognizes that as feeling; is that honestly something we can expect robots to have the ability to do? I see that more as science fiction, more of a “what if robots could have their own emotions?” In all reality, I’m sure robots will eventually play such a fundamental role in society that they will have to make autonomous decisions of their own, but there is a difference between a robot choosing the “right” ethical decision when confronted with a dilemma, which is absolutely feasible, and a robot that somehow develops to be sentient because of some special innate ability, which is more of a dream.