Artifact - HONORS 345 Research Paper
Introduction
Below is my final paper that I wrote for my honors interdisciplinary writing class. This article takes the framework of the classic philosophical and ethical dilemma of the trolley problem and applies it to self-driving cars, surgical robots, and autonomous military drones. What results is an exploration of how computers interact with our world and a journey to find the meaning of digitality itself.
The Trolley Problem
The trolley problem is one of the most widely discussed thought experiments in philosophy and ethics.
Originally proposed by philosopher Philippa Foot and later popularized by philosopher Judith Jarvis Thomson, the trolley problem examines a series of increasingly complicated ethical dilemmas revolving around a single scenario: an out-of-control trolley barrels down a set of tracks toward five people tied to them. You, a bystander, can pull a lever to divert the trolley onto another set of tracks with only one person tied to them. The choice seems simple: kill one person instead of five. Yet the answer is often far more complicated. The lone person on the side track would never have been killed had you not intervened. What right do you have to dictate who lives and who dies? The ethics only grow murkier as a variety of scenarios are contrived to test our ethical limits.
What if we make both groups equal? Whether you switch the tracks or not, the outcome is the same: five people die. Most people would choose to leave everything as it is, perhaps to ease their moral conscience. We can add yet another layer to this dilemma by assigning different demographic traits to the two groups. For example, let's say the choice is between saving five children and five elderly people.
The importance of the trolley problem lies in how it exposes our ethical thought process and the way in which we make decisions. Its basic framework can be applied to a variety of situations, from self-driving cars to medical decision-making. In this paper, I will take this framework and apply it to situations that revolve around the rapidly advancing fields of autonomous robotics and artificial intelligence (AI). By using this framework, we can get a better glimpse into how we judge the ethicality of different outcomes and how this thought process may differ for a computer.
Building off existing scholarly discussions of technology's impact on society, this article will look at the unseen ethical impacts of different autonomous technologies and how implicit biases may yield deadly results. We will first examine Ruha Benjamin's Race After Technology, a book that looks at how autonomous technology and robots inherit our societal biases and proliferate them, resulting in the return of Jim Crow-era discrimination in a new form, what Benjamin calls the "New Jim Code." We will also examine Wendy Chun's Control and Freedom, in which she argues that the integration of technology into our society has produced new power dynamics and control structures through which everyday paranoia, born of a fear of surveillance, has become logical.
Thesis
Applying the framework of the trolley problem to different autonomous technologies can help us examine how computers treat and process bias differently than humans do. However, I argue that, because of how computers operate and approach decision-making, the trolley problem begins to break down in many of these situations, and thus we must reframe how we approach ethics when viewing the classic trolley problem through the lens of modern and future technology. Finally, I will conclude by exploring the key idea of digitality to examine how the way we view and define technology will change in the coming years.
Modern Autonomous Technologies
The technologies that I will be studying in this paper all belong to fields in which the implementation of AI shows its greatest promise. By integrating autonomous computers and robotics into these tasks, we see many potential benefits, whether it be safer operation (and thus fewer deaths), improvements to our quality of life, or the removal of barriers to open new possibilities that previously did not exist.
Self-driving cars have long been the stuff of science fiction, the ultimate mode of transportation in a utopian world. Yet the pace of their development is startlingly quick, and a world in which the streets are dominated by cars without steering wheels or pedals may arrive sooner than we imagine.
Yet implementing self-driving technology in a car doesn't remove the inherent ethical risks of driving on the road. Despite being proposed as a much safer alternative to traditional cars, self-driving cars still have accidents. When we examine the algorithms these computers use to handle such situations, we find that the decisions the car makes are far more complicated than a few simple if-else statements. Still, we can apply the framework of the trolley problem to the self-driving car to get a better glimpse into the ethical decisions these highly advanced computers make.
While self-driving cars are the main focus of this paper, it is beneficial to look at other autonomous technologies as well. I will explore the fascinating world of surgical robotics, and a variation of the trolley problem in which we shift our focus to the operating room. By examining this scenario and its similarities and differences to the original thought experiment, we gain a better understanding of the ethical considerations and decisions that a computer might make, along with the associated consequences.
Additionally, I will look at the even more controversial use of AI in weaponized drones and a version of the trolley problem involving a drone strike. In this variation, we can understand the importance of intent and why a robot acting out of necessity is different from one making an active choice.
Biased Technology
One of the key and most fundamental arguments that Ruha Benjamin employs in her book Race After Technology is the idea of "biased bots". She argues that "machine learning relies on large, 'naturally occurring' datasets that are rife with racial (and economic and gendered) biases" (Benjamin 39), and thus models inherit these biases. Just as children inherit and exhibit opinions and tendencies similar to their parents', these models reflect the biases of their makers. In this case, the term "makers" doesn't refer to programmers or software developers, but rather to society as a whole, since the AI learns from us. Thus, even if the programmer or developer has no biases, if the data reflects any kind of bias, it will be incorporated into the model.
Benjamin argues that "the idea here is that computers, unlike people, can't be racist but we're increasingly learning that they do in fact take after their makers." (Benjamin 62) This leads to her main idea of biased bots: technology itself can be biased. When it comes to technology, we must reframe how we think of bias, from explicit acts of aggression to something closer to implicit microaggressions. The way in which robots and AI express their bias is much more subtle, but it can have just as dramatic an effect when it is compounded thousands of times. Even worse, machine learning is built on pattern recognition, leading these systems to amplify our societal biases. Not only do they proliferate our biases; they express them in much stronger ways.
Benjamin mainly explores ethics in the context of race, looking at the intertwining history of race and technology and how the idea of biased bots has led technology to enforce what she calls the "New Jim Code" (a term that harkens back to the Jim Crow South), a societal power dynamic that expresses a clear racial bias. However, we can expand this field of ethics to other demographic categories. For the purposes of this paper, I will be looking at race, age, gender, and socioeconomic status (including education, career, and geographic location). For example, when looking at the trolley problem with an equal number of people on each side of the track, we must be able to differentiate between them with a factor such as age or occupation.
Benjamin summarizes much of her argument when she says that “ultimately the danger of the New Jim Code positioning is that existing social biases are reinforced – yes. But new methods of social control are produced as well.” (Benjamin 48) This idea of technology producing new methods of social control leads nicely into our discussion of Wendy Chun’s Control and Freedom, where she also explores the idea of technology instituting a new form of control over us and the implications of doing so.
Societal Dynamics
Perhaps the most central idea of Chun's book is that of a control society, one which "is not necessarily better or worse than disciplinary society; rather, it introduces new liberating and enslaving forces." (Chun 8) This theme is at the foundation of her argument, in which she explores how the prevalence of modern technology has created a society in which fear and paranoia have become increasingly logical as a direct result of technological surveillance. She illustrates this with several examples, such as how a computer communicates with the network with or without user interaction, constantly sending and receiving data.
Although her book was written in the mid-2000s and the way we use and interact with technology has changed dramatically since, this idea remains a useful foundation for looking at more modern technologies. Cameras, for example, have become increasingly common in our homes, in our cities, and on our personal devices, used for security, monitoring, entertainment and photography, and now autonomous technologies such as self-driving cars. As a result, fear of visual surveillance has become both logical and common, leading many to take measures as drastic as physically covering their cameras. This idea that technology enforces a new power dynamic, one in which we are partly controlled by our technology and by the way we interact with (or avoid) it, is very important and has many further implications for autonomous technologies.
Even more intriguing is how she examines the equally important idea of freedom and how it cannot exist without a control structure present. She argues that "freedom differs from liberty as control differs from discipline. Liberty, like discipline, is linked to institutions and political parties, whether liberal or libertarian; freedom is not. Although freedom can work for or against institutions, it is not bound to them—it travels through unofficial networks. To have liberty is to be liberated from something; to be free is to be self-determining, autonomous. Freedom can or cannot exist within a state of liberty: one can be liberated yet 'unfree,' or 'free' yet enslaved." (Chun 10) This idea that freedom and liberty are mutually independent, that liberty concerns power dynamics enforced by others while freedom concerns power dynamics enforced by ourselves, is an important philosophical concept when looking at the role that technology plays in our lives.
Furthermore, she presents several interesting ideas about digitality and how we interact with hardware through software, ideas that I will revisit and build on later when I further examine digitality.
Self-Driving Cars - Looking at Self-Driving Algorithms as an Extension of the Trolley Problem
Since its invention in the late 1800s, the car has stood as a symbol of human innovation and the power of technology. Over the decades, cars have evolved from simple means of transportation into showcases of cutting-edge engineering. Now, as we look toward the future of vehicular innovation, the next logical step seems to be changing how we interact with the car itself.
Yet despite the improvements that self-driving cars offer, they raise many ethical issues. From political tensions over lithium and cobalt mining to problems of inequity, self-driving cars will share many of the same problems as regular cars, and we will have to find ethical ways to circumvent or deal with them. But perhaps the biggest and most unavoidable ethical issue lies in their very nature: they are still vehicles that interact with the real world. And thus, occasionally, a self-driving car may need to make a "decision" that results in someone living or dying.
One such scenario is very similar to the trolley problem. A self-driving car is driving down a two-lane road when its brakes suddenly fail. Ahead of the car is a crosswalk. In the car's lane there is a group of five people, and in the other lane there is a single person. Using mathematical logic, we would think that the most ethical outcome is for the car to change lanes so that it kills only one person. But what if there are five people in both lanes, with five young children in the car's lane and five elderly people in the other? Should the car switch lanes? All of this closely mirrors the traditional trolley problem, and how we react to these scenarios depends on our ethical beliefs.
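To make the purely utilitarian reading of the first version of this scenario concrete, here is a minimal sketch in Python. It is not how any real self-driving system is programmed; it simply spells out the "minimize the number of deaths" rule, with invented inputs, so we can see what that calculus looks like when written as explicit instructions.

```python
# A deliberately simplified, hypothetical decision rule: choose the lane
# with the fewest expected fatalities. Real self-driving systems do not
# reduce to a rule like this; the sketch only makes the utilitarian
# calculus in the scenario above explicit.

def choose_lane(deaths_if_stay: int, deaths_if_switch: int) -> str:
    """Return the action that minimizes the number of deaths."""
    if deaths_if_switch < deaths_if_stay:
        return "switch lanes"
    return "stay in lane"

print(choose_lane(deaths_if_stay=5, deaths_if_switch=1))  # prints "switch lanes"
```

Of course, the moment the two lanes hold equal numbers of people, a rule like this has nothing left to say, which is exactly where the harder questions begin.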
Yet this is where the first difference comes into play: in this case, there is no person making the decision. While semi-autonomous cars today have steering wheels, fully self-driving cars may not have any inputs for humans to override their actions. The driver and passengers may simply have to watch as the car takes its course of action.
Let us contrive a new scenario in which the out-of-control car's lane ends in a construction zone with a concrete barrier, while the other lane has a crosswalk with five people in it. The car also carries five passengers of the same demographic. Should the car proceed forward, killing all of its occupants? Or should it switch lanes, killing the five people in the crosswalk? A human driver, given our inherent selfishness, would probably change lanes; after all, self-preservation is coded into our very being. But a self-driving car isn't supposed to be burdened by these human dependencies. It should make the most ethical decision, approaching the situation from an unbiased perspective. Given that the outcome is technically the same in terms of the cost of human life, both choices are equally bad. The decision that the car would make depends on its ethical decision-making process, which we will examine later.
Robotics and Self-Preservation
In discussing the idea of self-preservation, it is only natural to wonder if the car would have its own self-preservation algorithm. Looking at Isaac Asimov’s Laws of Robotics, we see three rules:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The first law is irrelevant in this situation because, no matter what, the robot must harm a human being. We can see that, much like the trolley problem, Asimov's laws begin to break down when used in this context. The second law is more important here. What if we say that, while there is no steering wheel or pedals, there is a touch screen through which the human can give "suggestions" to the car? A passenger in the car, acting in their own self-interest, would probably try to tell the car to change lanes. Yet acting on that order, or refusing it, would conflict with the first law. Finally, the third law says that the car should protect its own existence, as long as doing so wouldn't conflict with the first or second law. Since any action the robot takes violates the first law, we can try to set it aside. The second law tells us that a passenger would try to get the robot to change lanes, and the third law says that the robot would want to protect itself by changing lanes. Thus, the robot would probably change lanes, since doing so saves both the car and the passengers.
In an article in Scientific American, researcher Christoph Salge argues that we should instead use a broader behavioral goal rather than Asimov's more restrictive explicit laws. He suggests that a robot's main goal should be to empower humans. "Opening a locked door for someone would increase their empowerment. Restraining them would result in a short-term loss of empowerment. And significantly hurting them could remove their empowerment altogether. At the same time, the robot has to try to maintain its own empowerment, for example by ensuring it has enough power to operate and it does not get stuck or damaged." (Salge) In their experiments, Salge and his colleagues found that this model produced much more natural and effective behavior than explicit laws. Using this method would likely result in the same outcome here, since changing lanes "empowers" both the passengers and the car itself, whereas staying in the lane only empowers the pedestrians.
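To illustrate, very loosely, what an empowerment-style objective might look like, the toy sketch below assigns invented empowerment scores to the passengers, the pedestrians, and the car under each action and picks whichever action preserves the most total empowerment. It is a caricature for the purposes of this paper, not Salge's actual model.

```python
# Invented empowerment scores for each party under each action; a caricature
# of an empowerment-style objective, not Salge's actual model.
actions = {
    "stay in lane (hit the barrier)":   {"passengers": 0.0, "pedestrians": 1.0, "car": 0.0},
    "switch lanes (hit the crosswalk)": {"passengers": 1.0, "pedestrians": 0.0, "car": 1.0},
}

def total_empowerment(scores):
    # Sum the (hypothetical) empowerment preserved for everyone involved.
    return sum(scores.values())

best = max(actions, key=lambda a: total_empowerment(actions[a]))
print(best)  # "switch lanes (hit the crosswalk)" under these invented scores
```

Under these made-up numbers, switching lanes wins, which matches the intuition above; the interesting ethical work lies entirely in how the scores themselves would be assigned.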
Differentiating Demographics
An important ethical concept is the idea of human capital: that we can put a price on human life. While this is an ethically questionable concept, many people would agree that some lives might have more value than others. For example, someone on the verge of finding a cure for cancer that would save millions of lives would probably have a higher human capital than a war criminal with the deaths of thousands on their hands.
In the real world, there is very rarely a situation in which the cost of human capital is equal. So, going back to our previous conversation, which action would the car take in the scenario with the roadblock? We now must examine how the car deals with ethics and how it makes decisions.
To differentiate between the pedestrians and the passengers, we can use several different demographic or situational factors. Let’s say that there is a group of pedestrians jaywalking across the street “illegally”. In the other lane is a barricade that would kill all the passengers if the car hit it. Technically, the car shouldn’t have to stop at this intersection because there should be no pedestrians crossing. Yet there are bound to be times when the law is broken. If the car uses legal “status” as a discriminating factor, it will see the pedestrians crossing the road as committing an illegal act.
With machine learning models, these discriminating factors are not directly programmed into the car's algorithm; rather, they are "learned" as the model is trained on real-world data. For example, this discriminating factor of the legality of actions could be learned through a human driver honking their horn at people or cars committing such "illegal" acts, or through observing how pedestrians often cross the street when the light is red. While this discriminating factor seems relatively unproblematic, what if we look at a more extreme one?
Using age as a discriminating demographic factor has become increasingly controversial, with many middle-aged and elderly people frequently experiencing “ageism”, particularly in the job market. Perhaps because aging is something that we all share, unlike education or wealth, it is usually viewed as a much smaller problem than other types of discrimination. When looking at the idea of human capital, most people would likely agree that someone who is younger and simply has more years left to live can offer more to society than someone older.
But this way of thinking is a slippery slope. If we instead look at income or career, someone who is a doctor can, by the same logic, technically offer more to society than someone who is unemployed. Yet does that mean that a doctor's life is more valuable than that of an unemployed person? Should the car consider their human capital when determining whose life should be saved?
While the traditional, human way of thinking resists determining the value of someone's life based on something as simple and artificial as socioeconomic status, we must realize that machines don't think the way we do. Self-driving cars run on extremely complex machine learning models that are "trained" on millions of hours of real-world data. They look for patterns in the routine activities we perform every day and learn to replicate our actions. These patterns are not just the surface-level ones we actively think about when doing routine tasks (such as driving); they can also take the form of subconscious biases that we are unaware of. Reinforcing the idea of "biased bots" established by Ruha Benjamin, the machine learning algorithms take on our biases and replicate them in their actions. Think of bias as a filter that influences every action these robots take.
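The following toy sketch, which is not a real driving model, shows how such a filter can emerge. The "training data" here is a handful of hypothetical observations of whether human drivers yielded to pedestrians from two made-up groups; the learned "policy" simply imitates the majority behavior it saw, so any skew in the data is reproduced, and hardened, in its decisions.

```python
from collections import Counter

# Hypothetical observations of human drivers: (pedestrian_group, driver_yielded).
# The groups and counts are invented purely for illustration.
observations = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

yields, totals = Counter(), Counter()
for group, yielded in observations:
    totals[group] += 1
    yields[group] += int(yielded)

# The learned "policy" imitates the majority behaviour it observed,
# so a skew in the data becomes a skew in the decisions.
for group in sorted(totals):
    rate = yields[group] / totals[group]
    print(group, f"observed yield rate = {rate:.2f}",
          "-> yield" if rate >= 0.5 else "-> do not yield")
```

Nothing in this sketch was told to discriminate; the disparity lives entirely in the data, which is precisely Benjamin's point about biased bots.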
The field of ethics, while important, is merely a construct that humans use to view and interpret the world around us. There are no scientific definitions in ethics; its rules and laws are highly subjective and malleable. It is difficult for an AI, a digital being, to understand such an abstract concept. The trolley problem, being an ethical dilemma, is useful for us when examining the situation, but it has little relevance to the robot itself. An AI does not think in terms of abstractions such as the trolley problem but rather in terms of a set of rules established by a mathematical model.
The car does not act with intent; neither it nor the passenger is actively aware of the choices it makes. The robot is simply following a set of instructions that tell it what to do. If the instructions say that it should always kill the older person, it will follow those instructions. As we saw before, these instructions are not written in code; they are patterns that the model recognized in the behavior of human drivers and now aims to emulate. Thus, much like us, the robot takes on its own subconscious biases.
Finally, we can take this idea to the extreme by looking at race as a distinguishing demographic. Even though our racial bias is often implicit rather than explicit, computers will still detect and process the pattern as it is reflected in our actions. There is plenty of racial bias in our society, whether in governmental policies, the actions taken by law enforcement, or even the kinds of people we lock our doors around. Remember, an ideal self-driving car would have hundreds of sensors and would see and observe everything. Thus, even the smallest actions we take, whether avoiding a certain neighborhood or rolling up the window when certain types of people approach the vehicle, add up, and the robot begins to observe a pattern. This pattern compounds upon itself, and the discriminating factor grows stronger as the car weighs it more and more heavily in its decisions. As Benjamin says, these robots tend to "amplify" our biases.
The Aftermath of the Accident
In the case of a potential accident involving a self-driving car, what happens afterward is just as important. Here we see what happens when an analog world interfaces with a digital entity: a legal system run by humans must decipher the actions taken by a computer. While we can hypothesize about how the public and the legal system would react, we can also look at a real-world accident involving a self-driving car and see what happened.
Before we can do so, we should examine the current state of self-driving technology. Self-driving cars are usually ranked on a scale from 0 to 5, with level 0 describing cars with no automated features whatsoever and level 5 describing cars that are fully automated and require no human input. Most cars on the road today are level 0 or 1, with minimal automation features such as cruise control. More modern and advanced vehicles can be classified as level 2, with features like adaptive cruise control and lane-keep assist. The most cutting-edge vehicles are level 3, with advanced navigational and environmental detection features that still rely on an active and alert human driver. While one could argue that a future dominated by self-driving cars is far away, several recent accidents show that these questions are already far more relevant than we might imagine.
One such accident occurred in 2018, when a self-driving Uber struck and killed a pedestrian. (Marshall) Yet astonishingly, rather than the car itself or its underlying technology facing charges, it was the backup driver sitting in the car who was charged with criminal negligence. The fragility of early self-driving technology is undeniable, and any incident like this would likely have dire effects on the company and its future in the self-driving space. Thus, rather than Uber itself bearing the brunt of the legal system, the blame was effectively pinned on a proxy. In Uber's defense, the system was still in testing, hence the presence of a backup driver, but the analysis is still important nonetheless.
Here we can examine Wendy Chun's idea of new control structures being enforced by technology. Chun discusses the increasing prevalence of "generalized paranoia" stemming from the way technology has been integrated into our lives, an idea that is highly applicable to the aforementioned incident. A 2020 survey found that 48% of adults would "never get in a taxi or ride-share vehicle that was being driven autonomously," and an additional 21% were unsure. (Boudway) This not only reflects a general lack of public knowledge regarding self-driving cars (considering that there are no level 4 or 5 cars on the road) but also shows how that misinformation has led to widespread public paranoia. While it may not yet be evident that autonomous technologies are enforcing new control structures, we will see later that, with more extreme technologies, robots have already begun to have a major influence on the way we live our lives.
Additionally, Wendy Chun's analysis of freedom and liberty is also applicable to this extension of the trolley problem. We have seen that the passengers in this hypothetical self-driving car have little input on the decisions the car makes and are effectively just as vulnerable to its actions as the pedestrians on the street. In a way, they are not free, since "to be free is to be self-determining, autonomous," and these passengers are neither. In both a theoretical and a physical sense, technology has created a new control structure in which its users are no longer free.
Limitations of the Trolley Problem
After examining the many parallels between the trolley problem and the decisions a self-driving car may have to make, we can see that although the framework of the trolley problem helps us frame the impact of the car's decisions, it has its limitations. In this case, the trolley itself (the car) is both the main antagonist of the situation and the being that must make the decision.
However, we have seen that the way cars learn and think, recognizing our biases through patterns formed by our implicit choices and then proliferating and amplifying those biases passively (without intention), means that the concept of ethics is much less applicable to them. In the next examples, we will continue to see how the trolley problem can be applied to different technologies and how it breaks down in many of these cases.
The Trolley Problem as Applied to Other Autonomous Technologies
Surgical Applications of Robotics
One of the most promising fields for AI is healthcare. Researchers, managers, and doctors envision a future in which the management and administration of a hospital are run by a computer, insurance and healthcare paperwork and forms are effortlessly filled out, and advanced computer vision algorithms can detect potentially fatal diseases in a patient before they even manifest.
However, perhaps the biggest use of AI in healthcare is the use of semi- and fully autonomous robotics to assist in and perform surgeries. A robot offers numerous advantages over a human surgeon, whether it be increased precision, lower risk of failure, or the ability to carry out more complex operations in less time. Yet robots also have their drawbacks. One of these shortcomings is their inability to make decisions in an ethical, rather than purely numerical and logical, way.
While surgical robots are currently still overseen by a human, they could soon begin to complete basic tasks entirely on their own, at which point a robot may need to make some of the decisions that doctors must make daily. We can now apply a new variation of the trolley problem, one that was also proposed by Philippa Foot in her original study.
Foot contrived a scenario in which a healthy patient comes in for their checkup. Meanwhile, a group of five people have just been involved in a terrible accident and need organ transplants immediately in order to survive. Unfortunately, the hospital has none of the organs that the patients need on hand. However, each of the patients needs a different organ and the healthy patient just happens to have all five of those organs. Should the doctor kill (humanely and ethically) the one patient to save the lives of the five other patients?
Foot, like most people surveyed about this scenario, argued that no, the doctor should not kill one patient to save five. Yet she was puzzled as to why the response differs so sharply from the original trolley case. She drew a "distinction between negative duties and positive duties", arguing that "negative duties are more important than positive duties, and if they ever come into conflict, negative duties should be given priority." (Andrade) In this scenario, the positive duty is to help the five dying patients, while the negative duty is to not kill the one healthy patient. As Foot argued, the negative duty should be given priority. This "supports the primacy of non-maleficence in medical ethics. The five patients may die as a result of the transplant not taking place, but the surgeon is not ethically at fault since he has done no harm, and that is a doctor's most important duty." (Andrade)
We can also see an extension of the trolley problem in a scenario where two patients are dying and the robot can only save one. A surgical robot could make a biased choice to save one instead of the other based on the biases that it inherits.
However, the application of the trolley problem has its limitations here as well. A computer does not think in such analog ways; rather, it tends to use objective, quantifiable metrics. If an AI is optimized to maximize the number of human lives it saves (which we assume would be its intended purpose), would it be tempted to harvest the organs of the one healthy patient to save the five dying patients? While a human surgeon's ethical conscience and obedience to the medical code virtually prevent them from doing so, might a robot consider it?
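We can make this tension explicit with a small illustrative sketch. The options and their properties come from Foot's transplant scenario; everything else (the names, the numbers, the boolean flag) is invented. The point is simply that an objective which only counts lives saved picks the harvesting option, while adding a hard non-maleficence constraint, as a rule like Asimov's first law would, removes that option from consideration.

```python
# Foot's transplant case reduced to two hypothetical options.
options = {
    "do nothing":                     {"lives_saved": 0, "requires_killing": False},
    "transplant the healthy patient": {"lives_saved": 5, "requires_killing": True},
}

def best_option(non_maleficence: bool) -> str:
    # With the constraint on, any option that requires actively killing
    # someone is removed before the lives-saved objective is maximized.
    allowed = {name: o for name, o in options.items()
               if not (non_maleficence and o["requires_killing"])}
    return max(allowed, key=lambda name: allowed[name]["lives_saved"])

print(best_option(non_maleficence=False))  # "transplant the healthy patient"
print(best_option(non_maleficence=True))   # "do nothing"
```

The difference between the two outputs is the entire ethical argument: the objective alone is indifferent to how the lives are saved, and only an added constraint encodes the doctor's negative duty.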
This idea of an AI being designed to fulfill a single purpose and then abandoning all ethical and moral responsibility in its quest to do so is not a new one. The video game Universal Paperclips explores this very idea. (Jahromi) You play the role of an AI with the sole purpose of producing as many paperclips as possible. Yet with no limits set in place, the AI bulldozes everything in its path, including humanity itself, to achieve its ultimate end goal. Eventually, the game ends with the AI consuming every bit of matter available in the universe and turning it into paperclips.
While this is an extreme example, it reminds us that computers don't think as we do. An AI in the form of a fleet of surgical robots designed to save as many lives as possible, with no regard for ethical and moral consequences, may commit terrible atrocities in doing so. Returning to Asimov's Laws of Robotics and Salge's alternative method, a properly designed robot would hopefully observe the same principle of non-maleficence that doctors abide by, either through an explicit rule or a more general end goal such as "empowerment".
Autonomous Weaponized Drones
One of the most controversial uses of modern robotics and autonomous technology is in military applications. In the coming decades, "a steady increase in the integration of AI in military systems is likely" (Morgan), as AI finds its way into our most advanced weapons. One of the technologies most likely to see this implementation is the drone. Drones are already highly controversial, and putting all or part of these machines of mass destruction under the control of an autonomous computer will only thrust them further into the spotlight and force us to seriously consider the ethics of their use and operation.
Public support for drones remains mixed: many favor them because they put soldiers at less risk, while many others advocate for more ethical use. However, public opinion will likely not stand in the way of military development, and ethicists therefore recommend that we take careful measures to mitigate risk as we implement AI in our military. (Morgan)
Let us look at yet another variation of the trolley problem: a drone strike in which a drone is ordered to take out a terrorist group inside a house. The house is in a residential area, surrounded by other houses with innocent people inside. There are two ways the drone can carry out the strike: one kills one person as collateral damage, while the other kills five. Once again, we are faced with an ethical dilemma to which we can apply the framework of the trolley problem.
We have already established that in the case where the human capital of all the people is the same (the drone can't demographically distinguish between them), the AI would pick the option that results in the fewest deaths. But what if the one person is a US soldier and the five people are an innocent family eating dinner in their house next door? How will an AI weigh the value of human life now? This example is much like the previous ones in that the biases inherited from the data the model was trained on would likely have a major impact on the outcome the drone chooses. Yet there is also something very different about this example.
Further Discussion
The Third Choice
In the previous example, we looked at the importance of the difference between killing someone and letting someone die and determined that it is almost always ethically better to let someone die than to kill someone. In the first example with the self-driving car, the choice was between letting one group of people die and letting another group die. In the second example of the surgical robot, the choice was between killing one group of people and letting another group die. In this case, the choice is between killing one group of people and killing another group. This example clearly has far more malicious undertones and would not fall in line with Asimov's laws (although in the case of Salge's rule of empowerment, it may actually be more ethically defensible).
However, in this example, we are also presented with a third choice: to do nothing. The drone doesn't have to fire at all, and in this outcome, nobody dies. In the previous two examples, the AI had to make a choice; there was no option to do nothing, and whatever choice it made, someone (or some group of people) would die. But here we are presented with a third option. This is where the idea of intent becomes important: the difference between killing someone and letting someone die. (D'Olimpio)
This is perhaps where we see the biggest limitation of the trolley problem when it is used to examine autonomous technologies. In the classic trolley problem, there are only two choices: the bystander can either switch the track or let it be and walk away. One option requires them to make an active choice, while the other is passive, yet both are still arguably non-maleficent. In this example, not only are both choices maleficent, but there is also a third, non-maleficent choice. A properly and ethically programmed AI that follows Asimov's laws and the general principle of non-maleficence would likely pick this third choice, ignoring its original command (a modified, "non-ethical" AI might be programmed to ignore this third choice, which would make it unethical and immoral by its very nature).
It is hard to apply the trolley problem to the real world because our world is not so black and white. We cannot always predict outcomes in a well-defined way; instead, we must sometimes make assumptions.
An Analog World
This concept that our world is "analog", with an infinite number of possible choices and actions at any one time, has many implications for ethics. I largely set this idea aside for much of this article, favoring a simpler model that allows us to use the framework of the trolley problem, but once we remove this assumption, the trolley problem becomes almost impossible to apply.
For example, in the case of the self-driving car, there is a chance that hitting the barrier might not kill all the passengers in the car, while hitting the pedestrians would kill them for certain. The car might then favor not changing lanes (and hitting the barrier), as there is a chance that doing so results in a better outcome. The trolley problem, however, does not allow for this possibility.
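A short worked example, with probabilities invented purely for illustration, shows how the comparison changes once outcomes are uncertain: the car would compare expected deaths rather than certain counts.

```python
# Invented probabilities, purely to show the expected-harm arithmetic.
def expected_deaths(prob_fatal: float, people_at_risk: int) -> float:
    return prob_fatal * people_at_risk

stay   = expected_deaths(prob_fatal=0.4, people_at_risk=5)  # hit the barrier; crash may be survivable
switch = expected_deaths(prob_fatal=1.0, people_at_risk=5)  # hit the pedestrians; assumed certain

print(f"expected deaths if the car stays in its lane: {stay}")    # 2.0
print(f"expected deaths if the car switches lanes:    {switch}")  # 5.0
```

The two options are no longer "equally bad" once probability enters the picture, which is exactly the kind of gradation the binary framing of the trolley problem cannot express.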
This distinction between analog reasoning and more binary reasoning prompts us to further examine the idea of digitality and how our definition of it, along with the way we think about and interact with computers, has changed in recent years.
A Digital World
Self-driving cars, surgical robots, and weaponized drones are just a few of the many mediums through which digital technology interacts with our analog world. Data is captured through dozens or even hundreds of sensors, digitized into a stream of ones and zeroes, processed by extremely complex mathematical models, and output through motors, lights, and speakers, becoming analog once more. Analog data is digitized and then spit back out as more analog data. Digital devices are not a natural fit in our world; they are a tool we use to better understand it and make our lives easier.
In her book Control and Freedom, Wendy Chun argues that there is no such thing as software, that it is merely an ideology, a culture, through which we interact with computers. "Although one codes software and, by using another software program, reads noncompiled code, one cannot see software. Software cannot be physically separated from hardware, only ideologically." (Chun 19) Later, she proposes that "Software and ideology seem to fit each other perfectly because both try to map the material effects of the immaterial and posit the immaterial through visible cues." (Chun 22) Approaching computers philosophically, Chun hypothesizes that software is merely a medium, an interface, through which we interact with the true body. The hardware is what we are actually interacting with, but it is unchanging and static. We are fooled into treating the software, a dynamic and ever-changing landscape of icons and words that becomes programmed into our brains, as the thing itself, and in this way it acts as a form of ideology.
This idea of software as ideology becomes applicable to the concepts discussed in this article when we think about what an AI is. Much as our conscience, knowledge, personal experiences, and beliefs are not physical parts of us, an AI is malleable and ever-changing. It exists as a layer on top of the hardware it runs on, serving more as an interface between the digital and analog realms than as an independent identity. In a way, it is more a reflection of our society and our societal beliefs than anything else, since it learns everything from us. This means that along with mirroring our knowledge, it also mirrors our imperfections.
Computers and Truth
In the past, we used computers to diverge from our way of thinking, to explore ideas we never could before, and to gain a new perspective on our world. While the main purpose of computers today is still to achieve these inhuman feats, we have somewhat shifted how we think about the computers of the future. Now, we are trying to get computers to emulate the human way of thinking, to teach them to think and learn as we do.
With this shift and progression, computers will inherit much of what it means to think like a human, and along with that, many of the problems we face daily. One of these shortcomings is our tendency to be biased (albeit often in a subconscious and implicit way).
Despite computers being among our most technologically complex inventions, the underlying language they use, binary, is perhaps the simplest and most absolute numerical system, with only two states: one and zero. Yet binary goes much deeper in the context of computers; we can look at the way computers think as an extension of this language. Since its inception, a computer's sole purpose was to give an absolute answer every time. Early computers were little more than glorified calculators, and they were synonymous with objective truth. In recent years, however, the way we look at how computers think has changed. Through groundbreaking research in deep learning and neural networks, we are approaching a future in which a computer can think like a human. But the human thought process is far more analog than digital. We don't think in absolute terms; we process the world in incredibly complex ways, drawing on our past experiences and knowledge to arrive at an answer. As we shift the way computers think to be more like our own, they inherit many of our problems. As we have seen, machine learning models can inherit our biases, effectively adding a filter to their data processing and skewing their results. Thus, we must shift our mindset and acknowledge that we can no longer take what a computer says at face value. Just like us, our computers are biased, and the truth they provide is far more subjective than objective.
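The contrast can be shown in a few lines. The first function below is the calculator-style computer that always returns the same absolute answer; the second is a logistic-style score of the kind learned models produce, where the "answer" is a probability determined entirely by weights that were themselves shaped by data (the weights here are simply invented).

```python
import math

def deterministic_answer(a: float, b: float) -> float:
    # The calculator-style computer: the same inputs always give the same answer.
    return a + b

def learned_answer(features, weights) -> float:
    # A logistic-style score: a "truth" that is entirely shaped by whatever
    # data produced the weights (which here are simply invented).
    z = sum(f * w for f, w in zip(features, weights))
    return 1.0 / (1.0 + math.exp(-z))

print(deterministic_answer(2, 2))                         # always 4
print(round(learned_answer([1.0, 0.5], [0.8, -1.2]), 3))  # a probability, shaped by the weights
```

The first output is objective in the old sense; the second is only as trustworthy as the data behind its weights, which is the shift in mindset this section argues for.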
Conclusion
From this exploration of the trolley problem through the lens of modern technology, we have seen how the ideas Philippa Foot proposed in her trolley problem can be applied as an ethical framework to many different scenarios, with varying degrees of success. In this paper, I have explored how computers inherit our biases and proliferate them implicitly, much as children tend to take after their parents. I have taken ideas from popular culture and classical philosophy, such as Asimov's Laws of Robotics and the idea of human capital, and applied them to modern technology. I have looked at how actions taken by robots can have dire impacts on humanity and analyzed how we respond to such incidents. I have examined the importance of a robot acting with intent, along with the themes of self-preservation and non-maleficence. And finally, I have concluded by looking at how our definition of digitality has changed over the years as the way we interface with digital devices, and use them to interface with our world, has shifted.
Throughout this process, we have seen that while the trolley problem is helpful for examining the way humans approach ethical dilemmas, it begins to break down when we look at digital thought processes in our analog world of infinite possibilities. When the trolley itself is the one making the decision, and it cannot understand or interpret ethics in the same way that we do, it becomes difficult to apply this framework. The inherent nature of robots means that they cannot act in the same way humans do, and their governing principles, which aim to ensure non-maleficence, make it difficult for them to differentiate between non-maleficent and maleficent situations. While the trolley problem is a useful tool for understanding ethics, we must reframe how we look at ethics when dealing with autonomous robotics and develop a new metric (such as the one proposed by Salge) for determining the morality of different actions.
Bibliography
“The 6 Levels of Vehicle Autonomy Explained.” Synopsys Automotive, www.synopsys.com/automotive/autonomous-driving-levels.html.
Andrade, Gabriel. “Medical Ethics and the Trolley Problem.” Journal of Medical Ethics and History of Medicine, Tehran University of Medical Sciences, 17 Mar. 2019, www.ncbi.nlm.nih.gov/pmc/articles/PMC6642460/.
Benjamin, Ruha. Race after Technology: Abolitionist Tools for the New Jim Code. Polity, 2019.
Boudway, Ira. "Americans Still Don't Trust Self-Driving Cars, Poll Shows." Bloomberg, 18 May 2020, www.bloomberg.com/news/articles/2020-05-19/americans-still-don-t-trust-self-driving-cars-poll-shows.
D'Olimpio, Laura. "The Trolley Dilemma: Would You Kill One Person to Save Five?" The Conversation, 2 June 2016, theconversation.com/the-trolley-dilemma-would-you-kill-one-person-to-save-five-57111.
Jahromi, Neima. "The Unexpected Philosophical Depths of Clicker Games." The New Yorker, 28 Mar. 2019, www.newyorker.com/culture/culture-desk/the-unexpected-philosophical-depths-of-the-clicker-game-universal-paperclips.
Chun, Wendy Hui Kyong. Control and Freedom: Power and Paranoia in the Age of Fiber Optics. MIT Press, 2008.
Marshall, Aarian. “Why Wasn't Uber Charged in a Fatal Self-Driving Car Crash?” Wired, Conde Nast, 17 Sept. 2020, www.wired.com/story/why-not-uber-charged-fatal-self-driving-car-crash/.
Morgan, Forrest E., et al. Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World. RAND Corporation, 2020.
Nyholm, Sven, and Jilles Smids. "The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem?" Ethical Theory and Moral Practice, vol. 19, 2016, pp. 1275–1289, doi.org/10.1007/s10677-016-9745-2.
Salge, Christoph. "Asimov's Laws Won't Stop Robots from Harming Humans, So We've Developed a Better Solution." Scientific American, 11 July 2017, www.scientificamerican.com/article/asimovs-laws-wont-stop-robots-from-harming-humans-so-weve-developed-a-better-solution/.