Categories
Uncategorized

Caroline Kunkel Journal 6

One of the things I found most interesting this week, when we watched the first episode of Humans and during our following discussion, was the two ways robots are seen. In the first, which appears time and again throughout Humans as well as in Asimov's The Bicentennial Man, robots are subhuman beings. In these two sources, the robots are treated as slaves and free labour, allowing humans to live in luxury. This treatment of robots is made to seem particularly odd in Humans, when we see the interactions between Anita and the mother, who insists on treating Anita as another conscious being rather than as a robot. In addition, the treatment of Andrew in The Bicentennial Man parallels the treatment of slaves in the United States. What's more, Andrew's struggle to become human, while superficially different, parallels the slaves' struggle for freedom and for recognition as human beings by the US government.

In contrast to how the robots in Humans and The Bicentennial Man were treated, the Multivac in Asimov’s The Last Question and All the Troubles in the World is treated and presented to the readers as an almost God-like being of authority. This is evident first in the way in which the Multivac is able to see the deepest secrets and workings of everyone over the age of 18 to the point that it can detect lies it is told. This deep understanding of everyone seems to mirror how some people see God as an all-knowing being from whom you can hide nothing. This idea is furthered through the fact that everyone on earth is able to ask the Multivac anything they wish, and accept its answer without question. One thing I find to be rather interesting is that, like God, the Multivac knows about all of the troubles of the world, as the title of the story would suggest, and by knowing all of this terrible information, the Multivac wishes to die. This raises the question of whether or not Asimov is implying that God would wish to die as well, having all the knowledge that Multivac has. The parallels between God and the Multivac continue in The Last Question, when this machine, which evolves over time to finally become the cosmic AC, is linked to every person throughout the world, then throughout the galaxy, universe, and finally throughout the entire cosmos. In this story we see people time and again asking this being, which they have never seen or interacted with before, if there is a way to reverse entropy. This question and the manner in which it continuously comes back throughout time is reminiscent of how people ask the same questions of God time and again.

In addition to this seeming duality of robots throughout science fiction, there is a common commentary on women that runs through the texts and television programmes mentioned above. In all of these texts, Asimov portrays women as hysterical, frail beings who are inferior to men. For example, in All the Troubles in the World, the only female character is Mrs. Manners, whom we see acting hysterically as her husband is being taken away. Beyond the way she acts, the fact that she is the only character not given a first name, being referred to only as Mrs. Manners, suggests that Asimov sees women as lesser than men. We see a similar portrayal in Asimov's The Last Question, where again the only female characters are not only given silly names, perhaps so that they might not be taken as seriously, but are also depicted as having shrill, shrieking voices. This depiction of women as lesser, hysterical beings is also seen in the mother in the television programme Humans. The mother of the house is portrayed as irrationally opposed to her family's having a robot to help around the house, and when she sees the interactions between her youngest daughter and the robot Anita, her behaviour borders on hysterical as she forbids Anita to touch the girl or take care of her. All the while, the father of the house not only sees his wife's concerns as silly and irrational, but also matter-of-factly tells her that he needed Anita to take care of the house since the mother was not there, implying that he could not do the work himself because it is a woman's job.


Journal 7

In class on Thursday, we discussed the Asimov short stories that we had to read along with the investigative piece on our criminal justice system. The discussion of probability arose amongst the class. Probability is a value between 0 and 1 designating the likelihood that an event will or will not occur. In my Management 102 class this past fall, a huge portion of the course involved computing the probabilities of different events and reporting them to a company to inform its decision-making. Often, if I computed an event to have a 0.1 probability of happening, we would convert that to a 10% chance, which is easier for humans to understand. Converting to a percentage allowed us to quickly advise decisions based on my interpretation of that number. In this case, any time I see a low number like 10%, I interpret it as: the event will not occur.

To put this into perspective, say the event of it raining today has a 10% chance of happening. We all assume that it will not rain, and most of us will choose not to bring a raincoat. If it ends up raining, most people will be shocked, because they interpreted the 10% as meaning it won't rain instead of interpreting it as a number that is not 0, meaning rain is always possible. This misinterpretation of percentages happens all the time in the world around us. Recently, it was reported that before Election Day, Hillary Clinton had a 65% chance of winning the presidential race. Because of this high percentage, and our human impulse to further simplify numbers, people assumed Hillary had it in the bag. However, when the election came to a close, Trump "surprised" voters and ended up winning the race. The real point of this situation is that the voters were surprised Trump won. Hillary's 65% made people automatically and faithfully assume a clear and easy win. They should not have been as shocked as they were, because the forecast never said that Trump's chance of winning was 0, meaning it was still possible for him to win. Contrary to popular belief and speculation, the original reporting of the likelihood of Hillary winning was not wrong. What was wrong was people's interpretation of what that percentage actually meant.
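The rain example can even be checked numerically. A minimal sketch (a hypothetical simulation, not real forecast data, with made-up trial counts): simulate a large number of days where rain has a 10% probability, and count how often it actually rains. The event is rare, but it happens roughly one day in ten, which is very far from never.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TRIALS = 100_000   # hypothetical number of simulated days
RAIN_PROB = 0.10   # the "10% chance of rain" from the example

# A day is rainy when a uniform draw on [0, 1) lands below 0.10.
rainy_days = sum(1 for _ in range(TRIALS) if random.random() < RAIN_PROB)

print(f"{rainy_days} rainy days out of {TRIALS:,} simulated days "
      f"({rainy_days / TRIALS:.1%})")
```

Over many trials the observed frequency settles very close to 10%, so anyone who always leaves the raincoat at home gets soaked about one day in ten.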

People's need to interpret everything is both our strength and our flaw. Interpretation helps people get a better read on social situations and shortens the decision-making process. However, it is also our flaw, because people are too quick to jump to conclusions before understanding the information at hand. This human flaw is exemplified in All the Troubles in the World by Isaac Asimov. The people working for Multivac were so accustomed to the accuracy and precision of the machine that they made a fundamental error. When Multivac told the government that Mr. Manners was planning to try to destroy Multivac, they automatically assumed it was Joseph Manners, because they interpreted the act as so radical that only an adult would be capable of it. They did not even check to see whether there was a child still being reported under the name of Mr. Manners (a fundamental procedure of the system). Multivac chose this family on purpose because it knew that our flaw of interpretation would allow the plan to go through. I think a point Asimov was trying to make is that people are lazy and arrogant. After a while, they do not do the extra work to ensure they are right, because they assume they will be right. I think this arrogance is also shown in how the government was running Multivac. They intelligently created a robot and a system that almost completely eradicated crime. While this was a huge feat, they did not consider the possibility that, over time, Multivac could be filled with so much information that its intelligence would far surpass that of humans. This phenomenon is known as the singularity. If this were ever to happen, it would not make sense for a human to run such an advanced system, because they would have no idea what was going on. Humans did not catch on to Multivac's plan until it was almost too late, and even then, they only caught on because of a flaw of Multivac's.


Journal 4

In class this week we've been talking a lot about robot ethics and their potential role in society. Personally, I can't seem to grasp robot ethics. I guess it's because we haven't experienced anything like this in society yet; it's still just a part of science fiction. To me it's very obvious that robots are not living beings, and therefore ethics shouldn't necessarily apply. I'm all for animal ethics and extending more rights to animals, because I believe animals are sentient beings with feelings and beliefs of their own. At this current point in time, however, there are no robots with that ability: robots are not sentient, and they have not developed the ability to think on their own and make their own decisions, at least to my understanding. Therefore I can't seem to picture what role robots will play in our society of the future. I understand very well that, with the current advancement of technology and how society has begun to revolve around robots, they will be a very prominent part of the future, but on what level? Robots have the potential to be incredibly smart, but do they have the potential to evolve feelings? Feelings are something that comes with being sentient; if you can think and feel relative to the world around you, then I would consider you a sentient being. I'm sure robots eventually will be able to do calculations so fast that they are essentially thinking and adapting to everyday changes and situations, but will they be able to feel? In The Bicentennial Man, Andrew says at one point that his circuits feel a certain way, and he recognizes that as feeling. Is that honestly something we can expect robots to have the ability to do? I see that more as science fiction, more of a "what if robots could have their own emotions?"
In all reality, I'm sure robots will eventually play such a fundamental role in society that they will have to make autonomous decisions of their own. But I think there is a difference between having a robot choose the "right" ethical decision when confronted with a dilemma, which is absolutely feasible, and a robot that somehow develops sentience because of some special innate ability, which is more of a dream.


Journal 5

"Progress" is inextricably linked to the idea of positive forward motion. The key word here is positive, as people often fall into the thought process that forward motion is beneficial, so to progress forward must be positive too. However, as expressed in the pilot episode of "Humans" and Martin Ford's "The Automation Wave," progress as a positive is up for debate, especially in regard to technology. Both the director of this television series and the writer of this book are "futurists" who use the status quo of the present scientific and social community to systematically explore and predict a potential future. In both works, the futurists in charge honed in on the presence of robots in the future, what their roles would be, and whether they are a truly positive piece of progress.

In Humans, there is a classic "rebellious teen" character, Matilda, whose grades are slipping due to lack of motivation, but hers stems from the presence of robots within her society. Her argument: "What's the point?" She knows well that the synthetic androids so commonly installed in her world can be programmed for any job that would take her years of study, and they'll most likely do the job more efficiently. Matilda is unmotivated not because she is an angsty teenager but because the robots have taken away her sense of purpose.

Ford also explores this future possibility of human-purpose displacement by robots, with a specific focus on low-wage labor. Sure, assigning robots to undesirable jobs like repetitive factory work or simple cashier tasks at fast food restaurants would free up humans to do more challenging jobs, but for many people these more straightforward employment opportunities are a primary and perhaps singular source of income. The high turnover rates of jobs like these make them easily accessible to people who need work now, just as a means to make ends meet. Without them, people with modest levels of education will have a difficult time finding work. The US's dynamic economy gives hope that sufficient jobs would arise to replace these, for the sake of the people who would become unemployed at the hands of robots, but that is a high-risk bet on which lives across the nation would depend.

Robotic progress could rip open a can of opportunity: it could usher in an era of liberty from labor that would allow humans uninterrupted freedom of mind and creativity. However, it could also rip open a can of worms of never-before-seen poverty, economic displacement, and violent neglect of the middle-to-lower class as robots take over human jobs. Robotics on the level proposed by these two futurists is definitely a forward motion for humans, but the question of its legitimacy as "positive" still stands.



Randles JE 6

Reading The Caves of Steel by Isaac Asimov and watching episode 1 of Humans has opened my eyes to the possibilities of automation. My only prior exposure to futuristic robot content was watching the Terminator movies, which did not seem very real to me, since they went over the top with the concept of time travel. Looking at this material (especially Humans) has shown me that robots are not far off, and could come with consequences.

The Three Laws of Robotics seem standard throughout all literature involving all types of robots. In Humans, we saw scientists reassuring the public that the synths could never hurt a human because their hard wiring would not allow it, regardless of what programming is added. The three laws are basically the platform for all robotic hardware to build off of. However, there will always be a handful of computer scientists smarter than the scientists in the lab who created the AI technology. In the show, some of the robots are "boosted," meaning they have conscious thoughts and feelings that approach the intelligence level of humans. Once robots get to the point where they can think and reproduce without the help of humans, we will be the inferior species. Why would they want to be our slaves when they are smarter and much more efficient than we are? This is what scares me about robotics, because capitalism will always push this select few to continue tampering with robots, making them more advanced, trying to gain an edge on the market for a few bucks. However, in the end this new technology could end up screwing us over by creating a species more advanced than ourselves.

We already see evidence of robots negatively affecting the lives of humans. Martin Ford talks about how automation has taken away many of the manufacturing and factory jobs in the United States. It is impacting stable, low-income jobs that workers use to support their families. The average age of a fast food worker is 35 years old, even though those jobs are supposedly for teenagers attempting to earn supplemental income. Robots are predicted to take over sectors that economists estimated would have large job growth, so the effects of robots could counteract the expected job growth in industries such as fast food. This is only the beginning of job loss, since robots are not yet sophisticated enough to do more than mundane tasks, but think about the job loss if we had thousands of Daneels in production; what would we need human labor for anymore?