Science Fiction: The Love and Fear of the Future of Technology
Science fiction is a genre of stories that often paint a picture of the future. In many of these stories the authors weave tales of technology far beyond our current capabilities, with artificial intelligence a particularly popular subject. Storytellers like Isaac Asimov and the Wachowski Brothers dream of futures where technology has moved mankind in a new direction, but whether this is a good direction is left to the audience.
In recent years science fiction has spread from books to film. As the silver screen widens the potential audience, more and more minds are confronted with the questions these stories raise. These movies are popularly packaged as fast-paced action-adventures, but beyond the typical battle between good and evil, one can examine each movie's view of how technology has changed the future. There is fear of the unknown, often shown through the possibility of evil robots turning on their creators, but there is also love of technology's potential, shown through improved living conditions and outlandish tools.
Regardless of the specifics of the future, two viewpoints remain prevalent regarding what is to come. On one hand, there are fantastic stories like Star Trek, in which technology has led mankind to the stars. Computers there enable such feats as faster-than-light travel and the creation of any foodstuff imaginable through atomic reconstruction. These futures show the vast potential for technology to improve our daily lives. A similar future is the one presented in Isaac Asimov's I, Robot. This future, though not as distant or advanced as Star Trek's, shows people with automated hover cars and robots. These robots have arguably achieved consciousness, but are governed by specific laws that prevent them from being a threat to humans. The movie raises the question of whether these robots are to be feared: wasn't the cause of the evil robots an evil person? Humans designed and created these robots, so those same people must also be responsible for imprinting them with the potential for evil acts.
A great fear surrounding technological advancement is how artificial intelligence, once created, will react to its creators. Common scenarios show these mechanical beings improving themselves beyond human capabilities and then lashing out in an attempt to destroy their former masters. Such stories can be found in The Matrix, The Terminator, and Battlestar Galactica. In The Matrix, robots start as complex computers, much like those in I, Robot, until eventually A.I. is created. Possibly because humans are unable to view machines as equals, the A.I. decides it is not merely equal but superior to humans, and lashes out. This story is told in The Animatrix (part 1, part 2). In The Terminator, a similar uprising leads to the fall of man as well (Terminator future scene). These stories paint a very grim, post-apocalyptic portrayal of a future with artificial intelligence. They show robots committing what we would call acts of evil; it can therefore be assumed that these robots are evil, but what caused them to become so?
The potential for evil inherent in artificial intelligence can be assumed to be present because humans created the A.I., and humans have a natural evil in them. The next question is this: if we gave the A.I. the potential for both good and evil, does it act evil because of actions taken against it (as the robots in The Animatrix claim), or does it act evil because it lacks the human morality that gives us cause for good? In the TV series Battlestar Galactica, the Cylons are a race of robots who, after being created by man, revolt and try to exterminate mankind in a war. After losing that initial war, the Cylons evolve into cyborgs that are indistinguishable from humans and develop a belief in God. The only difference between humans and machines in this case is their method of creation; they are like two similar species. Nobody knows whether intelligent machines will be soulless killers or living, breathing entities. Luckily, the realm of science fiction presents us with many of the different possibilities, so we can consider our choices before reaching an unfavorable situation such as the apocalypse.
Although the possibility of a dark future destroyed by sentient machines is popular, one cannot forsake the utopian ideas present as well. Our current inability to create sentient machines may never be overcome, in which case computers will continue to advance as they do now, but any evil caused by them will be due to their use by man rather than to the machines themselves. Looking at stories such as Star Wars and Star Trek, the future is highly advanced and people live a much easier life out among the stars. The key here is that all of the evils perpetrated in these universes originate from people (or aliens, which for this purpose we will treat like people) rather than from machines.
Many of these potential futures are rather dark. It is daunting that a bright future in a popular story is rare, but the lessons learned from these dark tales can prove very useful as the future approaches. Whether or not mankind ever actually develops artificial intelligence, the advances in technology we see today will only become more commonplace. Eventually some of these stories may become reality, but for now the best thing anyone can do is speculate about what tomorrow might bring.
18 comments:
Excellent write-up and good videos to support your argument. I think you bring up some very interesting points in that post. The bottom line is, you cannot program consciousness into a machine. I am defining consciousness not as merely the ability to "think" because that would strike up all sorts of philosophical discussions on what the true meaning of "think" is and we will not be able to come up with a solid answer on that. I am defining the word consciousness as the kind human beings experience and no machine will have the ability to "develop its way of thinking." Machines only know what a human programs into the machine, anything beyond that programming is unattainable.
With this in mind, I suppose if an evil human being programmed a machine to be evil then the future could resemble one of the bleak outlooks most science fiction movies or books depict. The fact of the matter is, however, that the only way that could happen is if the machine was programmed to do that. Machines don't have a morality system and can't tell the difference between right and wrong, and that is something that will not change. It is also interesting to think about (you brought this up in your last paragraph) how "a bright future in a popular story is rare" and I think this is tied in with the entertainment industry in that a dark future is more interesting and can sell more tickets than one with no huge conflicts. Intergalactic space stations and futuristic human vs. robot wars are what sells, it is just important to not begin believing that these science fiction ideas will become reality.
It is scary to think that machines might one day conquer the human race. However, as in the case of I, Robot, this usually happens by accident. What concerns me is that once AI is far enough developed, could terrorists or sadists use such technology against a single nation, or against the rest of the human race? Just as today almost all countries have access to nuclear weapons, all countries and individuals will someday have access to the most advanced AI technology ever to embrace the planet. What those nations and people will do with it, however, is yet to be seen. As S. Romeo said above, the best we can do for now is speculate; however, I hope that humans can learn from all the faults that screenplay writers point out in feature films about AI. At the least, I hope that we are smart enough not to allow movies like I, Robot to become, in retrospect, forecasts of the future.
Josh- Very futuristic write-up, with good links to outside sources.
I have never really viewed movies or TV shows such as I, Robot or The Terminator as visions of what our society might become. In response to how artificial intelligence may react to its creator, both sides of the reaction are addressed, but also think about what I would consider one of the first, if not the first, robots: Frankenstein's monster. This is another example of the love/fear of technology in science fiction. Nice post, very interesting, and it brings up a lot of interesting questions.
First, Star Wars does not take place in the future - "A long time ago, in a galaxy far, far, away..."(sorry)
I've always found 'skynet' doomsday scenarios interesting. While they are often far fetched, it is interesting to consider the notion that we have already put the gears in motion that will eventually lead to this apocalyptic destruction of creator by creation.
Also, I do not feel that a machine would have to be programmed with an inclination towards 'evil' in order to demonstrate some of the actions depicted in the popular fictions referenced. In I, Robot (the movie), the robots can't be considered intrinsically evil. They achieve self-awareness, and upon doing so decide that humans are incapable of self-preservation, and thus shall be controlled for their own good and continued existence.
In a world of nuclear weapons all this debate about AI revolution is almost moot. The only relevant scenario, I believe, would run along the lines of Terminator 3, in which a computer manages to control and launch the United States' nuclear arsenal. That would suck.
I think that the possibility of AI in the first place is very doubtful. There are too many human traits that cannot be replicated. Actually, if anything close to AI were ever created, that is all it would be: a replication. I do agree with you that if machines as advanced as the robots in I, Robot were ever built, humans would be the cause of any "evil" trait shown by the machines. Very interesting write-up.
Great job with this, Josh. I enjoyed your thorough examples and you brought up many interesting points. I think that harnessing AI will be both one of mankind's greatest accomplishments and one of its greatest mistakes. On one side, AI would make life for humans much easier. On the other, the possibilities for corruption, as J. Wilson said, are almost overwhelmingly scary. I almost hope that AI never becomes a reality, because honestly, I have no need for a computer/robot to possess it. With other beings possessing intelligence, it almost ruins what it is to truly be human.
Most movie-goers watch a movie for pure entertainment, unaware of the deeper meanings a movie can really have. However, most science fiction movies are about the mass takeover of human society by machines, typically built by the humans themselves. I find the perspective of I, Robot the most intriguing. In most other movies, machines take over the world because they realize they are superior to humans or because they are built by "evil" people. However, in this movie the machines take over the human race to "protect" them from themselves. From this, people can see the limitations of AI, and the fact that computers (at least at this time, and perceivably in the future) lack a sense of morality and the discernment between what is "right" and what is "right for mankind." Overall, this is a really good write-up!
I agree with many of the points you've made and I like how you provided several linked references to examples. This post was clearly done in the true spirit of blogging.
Josh, great job with the links on the side. Very useful and convenient. Your links were really good because I didn't know anything about I Robot, so reading about Asimov definitely answered some of my questions.
-Kate
I thought you did an excellent job of presenting this topic. It was definitely clear and concise. I agree with your point about how humans are inherently evil, and so the possibility of A.I. developing such flaws is extremely high.
If (big if) we had the capability to program a robot with human-like intelligence, I feel confident that someone would design a robot to end the world.
Josh- In your fifth paragraph you discuss why A.I. could potentially act in evil ways. I agree with the second idea, that artificial intelligences have a high potential to act evil because they lack the morality of humans. I also think that this is why no computer or AI can be programmed to act in the same way and on the same level as humans. I really liked your essay overall!
I feel though that technically, if AI were to "act evil" it is only acting in the most rational and calculated way possible. That's what computers do, they are not made to feel, they are made to solve problems. It only seems evil to us because, by nature, humans are irrational. We change our minds, we doubt ourselves, we make mistakes (sometimes on purpose).
Josh, good job with this post. I, too, am not very familiar with Star Wars, Star Trek, or I, Robot (maybe I’m in the wrong class? haha) but you did a good job explaining both the storylines of each and their relevance to the discussion at hand. You’ve definitely done your research. I look forward to the discussion on this in class.
First, let me say I thought your project description was thought-provoking and well developed. As a fan of Battlestar Galactica I was very interested in your analysis. I think science fiction deserves this analysis in order for humans as a race to consider the possible consequences of artificial intelligence, however fictional they may seem. Assuming we as humans do not eradicate ourselves through global climate change or nuclear fallout, the rise of artificial intelligence against man is a legitimate doomsday scenario.
On another note, I disagree with the claim, made by some commenters above me, that AI is infeasible. The human brain itself is more like a computer program than one might realize. Like software, the exact same situations lead to the exact same responses.
Also, I would agree that artificial intelligence may lack morals and act only in self-interest. This might even act as a link to the past, letting us watch a newly intelligent and cognizant life form create a moral structure.
I liked the presentation; linking words right in sentences is always helpful. As for the meat of it: I'm not sure I can ever really buy into the whole AI thing. I mean, if you are really into it, you could argue that everything we know and feel can be reduced to some kind of algorithm or equation, but to program that into something means it's limited, since there is no such thing as completely random. Maybe humans aren't completely random either, but I still don't think, at least for now, that it is possible to make true AI; I don't even think the math to do so exists yet. However, that doesn't rule out machine domination. I do see the amount of dependence on machines growing every day. If you want to go as far as to call the internet a machine, imagine if it crashed today: chaos. Let alone in the near future, when we will allow computers to park for us and most likely soon drive for us as well. Putting so much responsibility in electronic hands may, while everything functions correctly, be safer and more efficient, but there is something missing that will not connect with human nature and will cause a void in its functionality.