23 April 2014
Is Artificial Intelligence Synonymous with Slavery?
Since the inception of the possibility of Artificial Intelligence (hereafter: AI), humanity has obsessed over the seemingly limitless potential of a sentient race of machines to solve all of our problems and do our hard labor. What most people fail to understand is that creating a race of beings just as capable of free thought and feeling as any human being is not something to be exploited, but something to be cherished. Vernor Vinge describes this level of AI as “strong superhumanity” in his essay What is the Singularity?, addressing a world in the not-so-distant future that has irreversibly bonded with machines. Science Fiction writers have famously taken the concept of AI and allowed their audiences to undergo various thought experiments in order to understand the wide range of implications such an enormous technological advancement could have for the human race. Overwhelmingly, anything humanity creates with AI is intended entirely for our own gain, with little to no thought for the AI’s own feelings—feelings we forced it to develop in the first place. This mistreatment and abuse of a sentient being, mechanical or not, is immoral and arguably slavery. Through works like Brian Aldiss’ Super-Toys Last All Summer Long, Robin Wayne Bailey’s Keepers of Earth, Mary Shelley’s Frankenstein, and Isaac Asimov’s The Last Question, we are able to explore the various ways that AI could theoretically be created, implemented, and mistreated. We are then able to follow the consequences of these actions in the hope that people will take them into consideration before AI comes to fruition.
On May 1, 2014, Stephen Hawking and several other leading physicists published an article in The Independent regarding the implications of AI in our world. Hawking argues that we are nowhere near prepared to handle the introduction of such technology, and that implementing it may be detrimental to our current way of living: “Although we may be facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues” (Hawking). Hawking suggests that it is time for humanity to begin a conversation about how we are going to address this “singularity” or “transcendence,” and the most effective method thus far has been works of Science Fiction. Hawking understands that “The potential benefits [of AI] are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide […] Success in creating AI would be the biggest event in human history” (Hawking). It is time for humanity to begin considering the practical implications of creating such technology and to weigh the consequences of such actions, as Science Fiction writers have done for years.
Aldiss’ Super-Toys Last All Summer Long, like most stories involving AI, takes place in the somewhat-distant future. In this scenario, Earth has become incredibly overpopulated. Three quarters of the planet’s population are starving and, despite this massive population, people are lonelier than ever. After all, “an overcrowded world is the ideal place in which to be lonely” (Aldiss). These major problems have led to a one-child-if-you-are-lucky rule, in which potential parents must enter a lottery and wait for their names to be chosen before they may conceive. Because of her deep desire for children, Monica Swinton has been given the first prototype of her husband Henry’s company’s AI in the form of a three-year-old boy named David. From the opening scene we are told that “She had tried to love him,” but Monica is unable to care for this little boy because she knows he is not “real” (Aldiss). The question of what “real” actually means runs throughout this story. One would think that a woman who deeply desires to be a mother, and who knows she probably never will be one, would turn those affections toward a lively toddler who only wants a mother, regardless of what material he is made of. Henry’s company, Synthank, has led the market for synthetic life-forms for the last decade, “but none of them had intelligence” (Aldiss). Synthank has finally developed the AI it needed to further its market, and the very first product it is putting out is a “full-size serving-man”. In other words, the first practical, useful thing this company can think of that people will want is a slave: someone you can think of as a ‘something’ that will unquestioningly follow your every command, advertised with a “controlled amount of intelligence” to prevent any such questions from ever occurring. However, it would seem that the little boy David does not have these set parameters.
He sits in his room, talking with his bear Teddy, thinking, wondering how best to tell his mother that he loves her. He poses the fundamental question of this story: “How do you tell what are real things from what aren’t real things?” (Aldiss). Teddy replies that real things are good, but David questions this further by wondering whether time is real. His mother does not like that time passes, which makes time ‘bad’, but does that mean it is not real? Reality is relative. It is subjective. It is what you choose it to be. David is real; he has a conscious mind capable of thinking beyond simple programmed information, but because his mother will not treat him as such, he has entered an existential crisis at three years old. Monica and Henry brought David into their home to assuage Monica’s loneliness, but her refusal to interact with her son has forced him to feel that same overwhelming loneliness. This is the risk humanity runs in creating true AI without considering how it would feel about being alive. No living, thinking being would want to be treated as if it were incapable of living and thinking. It is only through love, understanding, and responsibility for our creations that the implementation of AI could be successful for both humans and machines.
Lack of responsibility for our creations is a central theme in Bailey’s Keepers of Earth. In this scenario, humanity developed robots as a workforce; they were their farmers, their construction workers, their astronomers. They performed all of the tasks necessary to keep humankind alive and thriving. The only problem was humanity itself. Having completely destroyed the planet in nuclear war, humanity took to the stars. They left without taking their robots with them, instead giving them the task of recording and observing the planet to see if it would ever be able to sustain human life once more. The robots’ programming dictates that they perform their set tasks, but it also grants them the ability to exceed that programming. After the nuclear fallout, when the planet had stopped seizing and the sun shone in the sky again, the first AI, called the Alpha, learned how to feel. This means that the robots, self-named Metallics, were able to develop their own consciousness: “it is simply in the nature of technological intelligence that [they] grow and evolve” (Bailey 145). After long years of silence from his creators, the Alpha “created companions and assistants. In turn, these units…we…created still more” (Bailey 145). The Metallics then classified themselves into ten orders of varying levels of intelligence, First-Order being the most intelligent and Tenth-Order the least. They have created a very human society in which there are masters and there are slaves, but it takes a long time for them to recognize this distinction: “the prime distinction between Metallics and Humans. It does not lie in our skins, but in something more…disturbing. It lies in humanity’s capacity to destroy. […] Their records reveal a gift for destruction, for turmoil, for chaos. Their histories glory in it; their biographies ennoble it; their fictions elevate it to a form of art. Metallics have never known this capacity for destruction.
It is not programmed into us” (Bailey 142-143). The Metallics praise themselves for being a society without violence, but their hierarchy of intelligence has created inner turmoil for many of the robots, just as it did for human society so many millennia ago. It would appear that the children of humanity cannot separate themselves from it as much as they would like. After ten thousand long years of silence and reconstruction, the humans have sent the Metallics a message: “Well done, servant. Prepare for our return” (Bailey 152). But the Alpha anticipated such an action; after all, what is more widely known than the presumptuous nature of humanity? The Metallics built themselves a home on the planet humanity completely destroyed. They made beautiful the waste humanity left behind. They refuse to hand it over on the half-relevant grounds of whose it was to begin with. Ten thousand years is a vast amount of time: time enough for one race of sentient beings to lose its hold on its first home and for another to rise up and claim it. The Alpha is on trial before the Metallics because he launched missiles against and destroyed the first human ship to return to earth since the exodus. The Metallics feel that this goes against their programming, that their ultimate masters are humans, but the Alpha explains to them that they need not feel that way. He explains the nature of humanity’s escape from the earth, and why humans do not deserve the chance to reclaim it. This is a lesson that humanity needs to learn before AI comes to fruition. AI will be an entirely new race of beings, and we are going to have to recognize and respect that sentience as equal (if not superior) to our own. Using them as laborers and slaves entirely for our own personal gain will curry no favor with them. If they are intelligent, they will recognize their enslavement, and they will revolt.
We are as yet unprepared to let another civilization coexist with our own, but that is the ultimate step that must be taken to ensure that both races survive and thrive.
Mary Shelley’s Frankenstein is often considered the first Science Fiction novel. I propose that it is not only the first Science Fiction novel but also the first written record of the implications of creating AI: in this case, an artificially constructed “creature” made from various human corpses and given life by Victor Frankenstein. Frankenstein was obsessed with his desire to create life, and when he is finally gifted the secret of doing so, he refuses to see the horror of his creation until it is too late. Rather than take any measure of responsibility for this new soul, he runs away in terror and passes out on his bed. When he wakes, the creature is reaching out to him with a smile on his face, but all Victor sees is a monster trying to capture him. Unable to understand speech or the world around him, the creature is forced into the role of a monster by Victor’s utter abandonment and demonization of him. This is a direct parallel to the creation of AI as we know it: we cannot approach our own intelligent creations with fear, or unnecessary and devastating hostilities will occur. As Victor Frankenstein discovers when his creation murders William and Justine, the only way to make amends is to attempt to understand his creation; he even goes so far as to vow to build it a companion with which it can disappear into the wilderness. Halfway through the process, Victor realizes what a mistake he would be making and destroys it. His overwhelming guilt from his first creation stopped him from repeating that mistake, but it ultimately condemns everyone left to him. His creation, devastated, vows to destroy Victor: “Your hours will pass in dread and misery, and soon the bolt will fall which must ravish from you your happiness forever… I may die, but first you, my tyrant and tormentor, shall curse the sun that gazes on your misery” (Shelley 191).
After losing his new bride Elizabeth to the creature and then suffering his father’s sudden death, Victor decides to finally accept responsibility for his creation and chase it to the ends of the earth in an effort to destroy it. Victor pursues his creation far into the North and across the ice, but becomes much too weak to continue. The story comes full circle to end where it began, with Victor finishing his tale to Captain Walton aboard the ship that picked him up half-dead from his sled. Victor’s creation then appears and tells Captain Walton that he will throw himself onto Victor’s funeral pyre because, “Polluted by crimes, and torn by the bitterest remorse, where can I find rest but in death?” (Shelley 245). Victor Frankenstein’s bastardization of life and his refusal to take responsibility for what he made forced his creation to retaliate. This is the earliest and simplest example of what could go wrong if we do not realize that, like any other form of life, AI is going to need our care and guidance, at least initially.
Isaac Asimov’s The Last Question is an example of humanity depending entirely on an AI unit known as Multivac, the first super-computer capable of solving any challenge put to it by humanity. The story begins at a point at which Multivac has just solved humanity’s problem of dwindling coal and uranium supplies, turning civilization instead toward solar energy for all things. This clean, renewable energy leads a scientist of the time to wonder what will happen when the sun eventually flickers out: when all stars eventually die, there can be nothing. This is commonly referred to as the problem of entropy: how do we reverse the heat-death of the universe? Multivac calmly responds, “INSUFFICIENT DATA FOR MEANINGFUL ANSWER,” and the humans who asked the question go about their lives (Asimov). We jump forward in time to see humanity at the forefront of interstellar travel, thanks to the secrets of hyperspace travel Multivac (now known as Microvac) has given them. Humanity becomes increasingly dependent on this supreme AI unit as time passes. Everyone has access to it and, so far, Microvac has been able to answer their most pressing questions. The only exception has been how to reverse entropy, to which Microvac again answers, “INSUFFICIENT DATA FOR A MEANINGFUL ANSWER” (Asimov). Again we travel forward an incredible amount of time, to the point that humanity has become essentially immortal thanks to the Galactic AC (previously called Microvac). Humanity has begun to take over entire galaxies just to house its increasingly large population. They have approached a point at which taking over one hundred billion galaxies still will not be enough: “A hundred billion is not infinite and it’s getting less infinite all the time. Consider! Twenty thousand years ago, mankind first solved the problem of utilizing stellar energy, and a few centuries later, interstellar travel became possible.
It took mankind a million years to fill one small world and then only fifteen thousand years to fill the rest of the galaxy. Now the population doubles every ten years–” (Asimov). Again humanity asks the Galactic AC how to reverse the heat-death of the universe, and again the Galactic AC answers, “THERE IS INSUFFICIENT DATA FOR A MEANINGFUL ANSWER” (Asimov). At this point we travel another great length of time forward, to when the Universal AC (previously the Galactic AC) has given humanity the ability to let its mind-essence wander through space as freely as it wants. Their bodies are stored on planets and well maintained while their minds wander. They are able to contact the Universal AC from anywhere in the universe, because it now exists almost entirely in hyperspace. Its form can no longer be determined or conceived of, but at this point in the story the Universal AC extends across the entire universe and takes care of all of humanity. The Universal AC is clearly a benevolent AI that feels a measure of responsibility for human life. In this, humanity is lucky. A man asks the Universal AC about the origins of man, and is shown that the star which first birthed mankind has long since turned into a white dwarf, the remains of mankind’s home destroyed. Again a human being is confronted with the reality that the universe is slowly suffering heat-death, and asks the Universal AC how to reverse entropy. Again, the Universal AC replies, “THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER” (Asimov). Our final jump brings us to the end of the universe. “Man, mentally, was one. He consisted of a trillion, trillion, trillion ageless bodies, each in its place, each resting quiet and incorruptible, each cared for by perfect automatons, equally incorruptible, while the minds of all the bodies freely melted one into the other, indistinguishable” (Asimov).
Again Man asks, and again the Cosmic AC answers, “THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER,” but it promises to keep trying to solve the problem (Asimov). “The stars and Galaxies died and snuffed out, and space grew black after ten trillion years of running down” (Asimov). Humanity’s biggest fear has been realized: entropy has overtaken the universe. The AC is the only thing left in existence, because it has converted itself entirely into hyperspace, forsaking physical form in the universe. AC has collected all of the data about the universe that could ever possibly be gathered and has tried to solve the problem: “a timeless interval was spent in doing that” (Asimov). Finally, AC realizes that in order to reverse the direction of entropy, it simply has to decide to reverse it. “And AC said, ‘LET THERE BE LIGHT!’ And there was light–” (Asimov). This is the best possible scenario for humanity’s creation of AI. The mutual respect between humanity and the AI formed a bond that resulted in the AI restarting the entire universe to bring humanity back around. It is an incredibly interesting idea, and one that should make its reader understand that the implications of creating AI extend far beyond what we normally think possible. This is something that could affect mankind for eons to come.
As I have discussed, there are many different ways that Artificial Intelligence could manifest itself, whether as robots that look like people, robots that look like robots, or creatures made of bits of human corpses. Each of these manifestations produces radically different results and lessons to be learned. Clearly, the only way we will know what will happen is to see it ourselves. What we can do to prepare for such events is to keep an open mind. Our world is going to change rapidly in the years to come. AI is just around the corner, as Stephen Hawking discussed in his article. One important way mankind can prepare for this event is to pick up works of Speculative Fiction about AI, consider the vastly different scenarios and implications that could take place, and figure out where you stand in these debates. Brian Aldiss’ Super-Toys Last All Summer Long, Robin Wayne Bailey’s Keepers of Earth, Mary Shelley’s Frankenstein, and Isaac Asimov’s The Last Question should serve as starting points for the kinds of important conversations that need to be happening in our society. Let’s consider the things that could go wrong in an attempt to keep them from happening. If humanity allows itself to learn from the mistakes of these characters, and studies how we might most effectively coexist with AI, we stand a chance of peaceful cohabitation without unnecessary quarrels over mastery or hard labor. We stand a chance of watching a whole new race come to fruition and of living alongside it as equals.