I expect that in a few years autonomous cars will be not only widely used but mandatory. The vast majority of road accidents are caused by driver error, and once we see how much death and injury driverless cars can prevent, we will rapidly decide that humans should no longer be left in charge.

This gives rise to an interesting philosophical challenge. Somewhere in Mountain View, programmers are grappling with the algorithms that will determine the behaviour of these cars: algorithms that will decide what the car does when the lives of its passengers, pedestrians and other road users are at risk.

In 1942, the science fiction author Isaac Asimov proposed Three Laws of Robotics. These are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

If the cars obey the Three Laws, then the algorithm cannot by action or inaction put the interests of the car above the interests of a human. But what if there are choices to be made between the interests of different people?

In 1967, the philosopher Philippa Foot posed what became known as “The Trolley Problem”. Suppose you are the driver of a runaway tram (or “trolley car”) and you can only steer from one narrow track on to another; five men are working on the track you are on, and there is one man on the other; anyone on the track that the tram enters is bound to be killed. Should you allow the tram to continue on its current track and plough into the five people, or do you deliberately steer the tram onto the other track, leading to the certain death of the other man?

Being a utilitarian, I find the trolley problem straightforward. It seems obvious to me that the driver should switch tracks, saving five lives at the cost of one. But many people do not share that intuition: for them, the fact that switching tracks requires an action by the driver makes it more reprehensible than allowing five deaths to happen through inaction.

If it were a robot in the driver’s cab, then Asimov’s Three Laws wouldn’t tell it what to do. Either way, humans will be harmed, whether by action (one man) or inaction (five men). So the First Law will inevitably be broken. What should the robot be programmed to do when it can’t obey the First Law?

This is no longer hypothetical: an equivalent situation could easily arise with a driverless car. Suppose a group of five children runs out into the road, and the car calculates that they can be avoided only by mounting the pavement and killing a single pedestrian walking there. How should the car be programmed to respond?

There are many variants on the Trolley Problem (analysed by Judith Jarvis Thomson), most of which will have to be reflected in the cars’ algorithms one way or another. For example, suppose a car finds on rounding a corner that it must either drive into an obstacle, leading to the certain death of its single passenger (the car owner), or swerve, leading to the death of an unknown pedestrian. Many human drivers would instinctively plough into the pedestrian to save themselves. Should the car mimic the driver and put the interests of its owner first? Or should it always protect the interests of the stranger? Or should it decide who dies at random? (Would you buy a car programmed to put the interests of strangers ahead of the passenger, other things being equal?)

One option is to let the market decide: I can buy a utilitarian car, while you might prefer the deontological model. Is it a matter of religious freedom to let people drive a car whose algorithm reflects their ethical choices?
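To make the point concrete, here is a minimal sketch, in Python, of what a selectable ‘ethics setting’ might look like; the manoeuvres, names and numbers are entirely hypothetical, and a real system would be far more complicated. The car evaluates the same candidate manoeuvres but scores them differently depending on the policy its owner has chosen.

```python
import random

# Hypothetical output of the car's risk model in one emergency:
# expected deaths inside and outside the car for each candidate manoeuvre.
candidates = [
    {"action": "brake in lane",      "passenger_deaths": 0.9, "other_deaths": 0.0},
    {"action": "swerve to pavement", "passenger_deaths": 0.0, "other_deaths": 1.0},
]

def utilitarian(c):
    # Minimise total expected deaths, whoever they are.
    return c["passenger_deaths"] + c["other_deaths"]

def owner_first(c):
    # Protect the passengers first; only then consider everyone else.
    return (c["passenger_deaths"], c["other_deaths"])

def choose(candidates, policy):
    if policy == "random":
        return random.choice(candidates)
    scoring = {"utilitarian": utilitarian, "owner_first": owner_first}[policy]
    return min(candidates, key=scoring)

print(choose(candidates, policy="utilitarian")["action"])  # brake in lane
print(choose(candidates, policy="owner_first")["action"])  # swerve to pavement
```

Even this toy example makes the point: someone has to decide which scoring function ships with the car.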

Perhaps the normal version of the car will be programmed with an algorithm that protects everyone equally and displays advertisements to the passengers, while wealthy people will be able to buy the ‘premium’ version that protects its owner at the expense of other road users. (This is not very different to choosing to drive an SUV, which protects the people inside the car at the expense of the people outside it.)

A related set of problems arises with the possible advent of autonomous drones for use in war, in which weapons are not only pilotless but deploy their munitions using algorithms rather than human intervention. I think it possible that autonomous drones will eventually make better decisions than soldiers – they are less likely to act in anger, for example – but the algorithms they use will also require careful scrutiny.

Asimov later added Law Zero to his Three Laws: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” This deals with one variant on the Trolley Problem (“Is it right to kill someone to save the rest of humanity?”).  But it doesn’t answer the basic Trolley Problem, in which humanity is not at stake.  I suggest a more general Law Zero, which is consistent with Asimov’s version but which provides answers to a wider range of problems: “A robot must by action or inaction do the greatest good to the greatest number of humans, treating all humans, present and future, equally”.  Other versions of Law Zero would produce different results.
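Taken literally, my version of Law Zero is an instruction to maximise a sum of expected welfare. A rough sketch of what it commits the car to, again in Python, with invented numbers and deliberately ignoring the hard question of how ‘good’ would actually be measured:

```python
# Hypothetical expected "good" (say, expected remaining life-years) for each
# person affected by each option; every human counts equally.
options = {
    "stay on course": [0.0, 0.0, 0.0, 0.0, 0.0, 80.0],    # five die, one lives
    "switch tracks":  [70.0, 65.0, 75.0, 60.0, 72.0, 0.0], # one dies, five live
}

def law_zero_choice(options):
    # "Greatest good to the greatest number": pick the option whose total
    # good, summed over everyone affected, is largest.
    return max(options, key=lambda name: sum(options[name]))

print(law_zero_choice(options))  # -> switch tracks
```

Everything contentious, of course, is hidden in how those numbers would be estimated.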

Whatever we decide, we will need to decide soon. Driverless cars are already on our streets.  The Trolley Problem is no longer purely hypothetical, and we can’t leave it to Google to decide. And perhaps getting our head around these questions about the algorithms for driverless cars will help establish some principles that will have wider application in public policy.


59 Responses to Google and the Trolley Problem

  • “This is not very different to choosing to drive an SUV, which protects the people inside the car at the expense of the people outside it”

    This is a common misconception, particularly (and tragically) on the part of SUV purchasers. Whilst, yes, they inflict more damage on a smaller vehicle than a normal vehicle would, there is a catch. About four-fifths of crashes are single-vehicle – people driving off the road and into trees, buildings, ditches or off cliffs. An SUV driver is twice as likely to lose control, owing to the higher centre of gravity, greater mass, taller tyres and softer suspension. When they do lose control they are twice as likely to roll over, and the extra weight means the roof pillars are more likely to collapse. In the remaining fifth of collisions, which do involve other vehicles, the SUV driver is increasingly likely to meet another SUV, which cancels out any advantage afforded by the extra mass.

    So putting your family inside an SUV puts them at greater risk of death than putting them in a normal sedan.

    • Cd2RPPm7cYj4

      Amazing stats in the comment about the non-safety of SUVs. I’d like to see some sources.

    • Then wouldn’t the choice NOT to buy an SUV make you more likely to be involved in an accident with an SUV? You’re less likely to lose control and crash on your own, so any accident you do have is more likely to be multi-vehicle, and in a multi-vehicle collision it is increasingly likely that the other vehicle is an SUV.

      So while choosing the SUV might not be a great option, isn’t not choosing the SUV a worse option by that logic?

      Not that I’m an SUV fan – I much prefer a “regular car”.

  • I was thinking recently about this point having watched this short film: http://io9.com/robot-bartender-struggles-with-asimovs-laws-in-this-ama-1575805995.

    It seems to me that one of the problems with Asimov’s First Law and Law Zero is that they sit squarely on the fence between deontological and consequentialist moral theories. So the robot is forced to compare the moral value of a rule (do not harm humans) against an outcome (do not allow humans to be harmed).

    Whilst this might reflect how humans do mix and match their ethical reasoning to suit their gut instinct, the two theories conflict regularly and it’s not surprising that Asimov’s Laws are rich pickings for interesting stories. (I’ve not actually read any Asimov but I can imagine that a lot of interest derives from the inherent dissonance in the laws.)

    I have to say that the idea of choosing the moral theory for your car is fascinating. And a market decision is probably a more realistic outcome than a single standard being applied.

    Having said that, the UN recently recommended a ban on Terminators…. http://www.un.org/apps/news/story.asp?NewsID=47794#.U5azGvldWSo

  • See
    Should Autonomous Cars Have Feelings About Crashes?
    http://www.technovelgy.com/ct/Science-Fiction-News.asp?NewsNum=4160

    Also
    Ethical Decision Making During Automated Vehicle Crashes (pdf)
    http://people.virginia.edu/~njg2q/ethics.pdf

    • Bill

      Very interesting. Of course, there is a difference between programming a robot to behave as if it is acting with feelings or ethics, and the (much simpler) task of a human creating an algorithm which will determine the actions of the car in particular circumstances.

      Owen

  • It’s been a long time since I’ve read Asimov. My recollection is that early robots crashed (in the computer sense) when they faced gray areas, and later robots were more ethically subtle (because it was in robot makers’ interests to keep their robots from crashing).

    Anyway. Ethical dilemmas are interesting, but how many of them really happen? More to the point, how many happen to drivers who are fully aware of their environments? If a driverless car sees a bunch of kids playing by the side of the road, it should slow down; it’s far less likely than a human to fail to notice them. Besides, a human driver is unlikely to have enough time to *decide* to avoid them via the sidewalk.

    This is why we have speed limits: they nullify many ethical dilemmas.

    Yes, these dilemmas are interesting. But we can maintain their science-fiction status. The simplest solution is to change the rules so they don’t come up.

    • Adam

      I agree that there will be many things we can do to avoid these dilemmas arising, and they should arise less often with driverless cars if they are better at anticipating and avoiding dangerous situations.

      But they cannot be avoided altogether, and the driverless cars will have to be programmed to respond somehow on the rare occasions when they do (even if the response is to do nothing, that is a response).

      Owen

      • Owen: I certainly agree with everything you’ve written. But I take issue with the hand-wringing itself.

        Questioning ourselves takes time, and time equals deaths. So it’s the Trolley Problem all over again. As fictional rulers of a fictional country (let’s call it USA), do we spend 100 extra days finding rules that will drop robot cars’ fatality rate from 10 fatalities per day to five? Or do we rush our homicidal robot cars onto the roads, because 10 fatalities per day is better than 110? Assume it’s just as easy to add rules _after_ we put cars on the road as it is before.

        Our fictional inaction (philosophizing) will cost 10,000 lives.

        I think the most ethical course of action is a policy one, taken step-by-step. First, get robot cars onto our roads. Then adjust rules to minimize the probability of dilemmas.

  • You are assuming that the car has not been programmed with contingency plans for such a situation. What if the people at Mountain View programmed the car to maintain an appropriate speed at all times, so that when any such hazardous situation arises the car can stop before hitting the obstacle? The foremost appeal of driverless cars is that they can account for far more variables than a human mind can, and remain alert at all times.
    I know my argument is not very philosophical, but what I’m trying to say is that maybe such situations as you have mentioned would not arise with a driverless car, because it is constantly monitoring its environment, accounting for all possible risks, and mitigating them.

    • Zeeshan – I agree. It will be a major advantage of driverless cars that they can be programmed to travel at safe speeds and anticipate most problems. But even with that improved safety, there will be unanticipated problems, and they will have to be programmed to deal with those too. Owen

      • “But even with that improved safety, there will be unanticipated problems, and they will have to be programmed to deal with those too.”
        The definition of “unanticipated” would lead one to think they will not be programmed for. Rare, unusual, extraordinary, possibly; but the genuinely unanticipated will need to be handled after-market.

        • Russell – I don’t agree. The point of the algorithm is to respond to different possible circumstances. To get Rumsfeldian, it is a known unknown.

  • Of course, that was Isaac Asimov’s very point – his whole body of robot-themed fiction (including the extended Foundation series) pivoted on the difficulties robots faced trying to obey the 3 Laws and the consequences of being unable to do so, as well as the moral divisions between humans and robots.

    Asimov would have solved the Trolley problem like this: the robotic car would change tracks, kill the single man to save the 5 men, and then go mad or commit suicide for being unable to respect its programming.

  • I have to partially agree with Adam. The best choice in this scenario is to change the laws of the road to accommodate the new technology. One of the main reasons people speed is to get to their destination faster. With an array of automated vehicles all handled by computers, one might assume that routes would be more efficient, so the speed needed to reach a destination might be lower.

    All of the benefits of automated travel sound great. Yet, being a paranoid human, I still want control. I would like an emergency manual mode installed in case I want to take responsibility for whose death is to occur: I still feel the need to control the outcome to the best of my abilities, and driving in a dangerous situation is no different. Perhaps a few generations down the road no one will think of it this way.

  • Something that we should consider: is it possible for a self-driving car to avoid ALL of the normal human mistakes that cause accidents? We’re used to thinking of accidents as just a normal part of human life and expect them to follow us into the world of self-driving cars, but computers can be told exactly how to behave at all times, which should limit, or even remove, the chance of the same types of accident that we normally think of.

    In the trolley scenario, a self-driving car could be programmed never to exceed a speed from which it can stop within the stretch of track it can actually see. If it has a long straight track visible in front it would go fast, and if it’s going around a corner it would slow down. So it would never be in a situation where it lacked the stopping distance to avoid hitting ANYONE. This solves the problem before the nasty ethics question arises (a rough sketch of the rule is at the end of this comment).

    Google is in fact doing this with their self-driving car: the video below shows the car’s view of the world, including a railroad crossing. The car refuses to enter the crossing until the other side is clear. This is exactly what everyone is taught when learning to drive, but so few of us actually do it. This one change would effectively eliminate car/train accidents.

    https://www.youtube.com/watch?v=dk3oc1Hr62g

    When the self-driving car is fully realized we should expect to see a dramatic drop in these types of human failure, but an increase in accidents due to software mistakes. The downside, and something people will need to get used to, is that these cars will drive as if commanded by the entire legal department of a major corporation, so they will be frustrating to the rider. Perhaps it’s a good thing the Google demo unit doesn’t have driver controls for the frustrated, impatient occupant to grab.
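    As a rough sketch of that speed rule in Python (the deceleration and latency figures are made up; a real car would use its own measured limits):

    ```python
    import math

    def max_safe_speed(visible_clear_distance_m, max_deceleration_ms2=6.0,
                       system_latency_s=0.2):
        # Highest speed (m/s) at which the car can still stop within the road
        # it can actually see, allowing for sensing/actuation latency.
        # Solves v*t + v**2 / (2*a) <= d for v.
        a = max_deceleration_ms2
        t = system_latency_s
        d = visible_clear_distance_m
        return a * (-t + math.sqrt(t * t + 2.0 * d / a))

    # Long straight road, 150 m visibly clear: go fast.
    print(max_safe_speed(150.0))   # ~41 m/s (about 92 mph)
    # Blind corner, only 20 m visibly clear: slow right down.
    print(max_safe_speed(20.0))    # ~14 m/s (about 32 mph)
    ```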

    • Mike – I agree with this, and think that the vast majority of avoidable accidents will be avoided when we take power away from humans. But cars will still have to be programmed to deal with the occasional unavoidable problem, won’t they?

  • “…must protect human life and future humans’ lives equally.”

    I feel bad for the group of sterile humans who will be run over to protect the few capable of creating life!

    • I hadn’t thought of this problem. Sorry Grandma! You have to die because you’re no longer capable of reproducing, whereas Jill over here could still pop out a bunch of kids. That’s a whole bunch of future humans whose interests need protecting.

  • “A robot must by action or inaction do the greatest good to the greatest number of humans, treating all humans, present and future, equally”.

    I had to chuckle a little at this proposed Law Zero, since it seemed like a line out of one of the Terminator movies. Granted, I let my mind wander a bit, but I was imagining a scenario where killing a whole mass of people with a couple of bad or even genius guys in it might be the greatest good to the greatest number of humans in the future (e.g. nuking a whole city to kill a scientist manufacturing a deadly virus might be the better bargain). Yeah, I know we’re talking about cars here :)…

    • “Greatest good to the greatest number” is a completely meaningless phrase. What’s the largest city that’s closest to New York? You need some method of weighing a little bit of good to a lot of people against a lot of good to a few people. No such arithmetic exists.

      • @Cervantes – I agree that this is problematic and undefined – though not “completely meaningless”. It reflects an intuition that many people agree with, even though it is not straightforward to implement in practice.

    • I’m sorry, but treating “future humans” as equivalent to actual, currently existing humans is extremely problematic. For one thing, as a practical matter, it leaves your car/robot with a problem that’s unsolvable, especially in a real-time situation such as driving a car. In the case of the trolley problem: not only do I have to decide whether to kill 1 worker or 5, I somehow have to know how many descendants would not come into existence for each group? How could that possibly be calculated?

      And from an ethics standpoint, I have a real problem with the idea, even if it were practical. To say that “future humans” are to be treated equivalently to actual humans is to say that not only are zygotes imbued with full human rights, but so are unfertilized eggs and sperm… because who can say that they won’t become “future humans”? Moreover, all the potential descendants of these cells would also suddenly have interests that have to be protected.

      That bit of language opens such a giant can of worms, both practically and ethically, that there’s no way it could ever be implemented.

      • Sean – Thanks. I agree that concern for future humans is problematic; but I also think that ignoring their interests is problematic too.

        My concern is not for potential future humans (your zygotes example) but for actual future humans. If I know for certain that a group of people will exist in 2050, then I should be concerned about how my actions today will affect those people. That is very different from saying that I have a moral obligation to bring those people into existence.

  • How about plain old personal responsibility?

    If your kid was dumb enough to walk into the road illegally, you, the guardian, should pay the owner of the autonomous vehicle for the damage the child caused to the vehicle.

    If we take into consideration that autonomous vehicles are constantly being streamed enough relevant information to safely navigate road systems +/- some free radicals allowing for an umbrella of contingency, then anything you the human do to violate this possibility of avoidance should be your responsibility, not the owner of the vehicle or the company which manufactured it (assuming their due diligence on the manufacturing and programming side)

    TLDR: If the car fits into a realm of ‘legal/regulatory perfection’ agreed upon by those who legislate, and you violate that legislation, then it’s your fault. If you disagree with the legislation, propose more until everything reaches a happy medium. Don’t go on the whirly wheel and try to bungee jump onto a flaming crocodile; just ride the whirly wheel.

    • Grant

      In some countries – including the one where I live – it is not illegal to jay-walk; nor is it illegal for a child to walk into the road.

      Even in countries in which jaywalking is illegal, it is not usually punishable by death. Are you suggesting that an autonomous car should be programmed to act in a way which leads to the certain death of a child who has run into the road rather than, for example, swerving to avoid the child and scratching the paintwork? Does the child’s illegal behaviour reduce to zero our concern for his or her life?

  • I would rule in favor of the person or people who are obeying the laws of traffic in this situation. If there are 5 school children in the road, where they presumably shouldn’t be, and killing an innocent bystander, presumably walking on the sidewalk in accordance with the law and common sense, is the only alternative, I’d say the innocent bystander should be spared at the expense of the kids.

    I feel that this situation is a corner case anyway, and rarely are human-operated cars in this kind of dilemma. Hopefully, the increased awareness and sensible driving protocols of autonomous vehicles will further reduce the occurrence of “the trolley problem”.

    • Carlin – I agree that we must all hope that these circumstances will be relatively rare. Even so, the car will have to be programmed to respond one way or another (there is no “do nothing” option).

      Unlike you, I would vote for killing one innocent bystander and saving the 5 children. So there is a real choice for society to make here.

  • When you buy and raise a guard dog, you expect it to value you and your family more than a stranger. Similarly, I would expect a Google car I buy to value my safety and my family’s safety more than the safety of a stranger on the road.

    Crash into a tree killing me vs crash into a pedestrian killing him? Option A, always.

    Turn away from a group of five children to avoid hurting them, while cognizant of the fact that you will run over a lone pedestrian on the sidewalk? Several more questions to be asked by my ideal car – is one of those children my kid? Is the pedestrian my wife? Wouldn’t mind crashing into one stranger to save the life of my kid, but would prefer crashing into five kids if that would allow my wife on the sidewalk to live.

    That is what JBS Haldane, one of the greats of biology, meant when he said he would not sacrifice himself for his brother, but would consider it for two brothers or eight cousins.

    This is the natural and common view of morality, which seems to get lost in the deontological / utilitarian debate. Because those two systems were devised by aspies who didn’t realize that human morality is composed of concentric circles of diminishing loyalty. You will always value your child more than a stranger’s child (irrespective of certain egalitarians’ protests to the contrary – watch what they do, not what they say. They allow African children to starve while their own son eats caviar, all while loudly agitating for more foreign aid).

    • Crepetio

      Your approach seems to resist making a distinction between “is” and “ought”. I don’t agree with you that they are the same.

      It isn’t a symptom of Asperger’s to think that humans can and should rise above their instincts and desires and act instead in a manner consistent with a set of moral principles.

  • “A robot must by action or inaction do the greatest good to the greatest number of humans, treating all humans, present and future, equally”.

    That sounds great except when, say, one of the equal humans is somebody you know and love. If I had to choose between running over a family member or a crowd of strangers, you’d better believe the crowd is going to get hit. Just being honest, and I think *most* people would make that choice too. So will the Google cars have facial recognition, so that you can assign priorities to people you know? With 7 billion people on the planet, this scenario is likely to happen to somebody somewhere.

    • Are you saying that you would not only make that decision in the heat of the moment, but you would also precommit to that decision? While I cannot tell you what I would do reflexively, I would find it an easy decision to precommit to killing a single family member I know and love rather than killing two demographically-similar members of the families of strangers.

    • Kevin

      I distinguish between what people would in fact do, and what I think they should do.

      Owen

  • The premise of “the Trolley Problem” is that these vehicles are not going to require drivers to be present and able to take control at any moment. The premise that is going to allow these vehicles to be used in our society is that these vehicles are going to require drivers to be present and able to take control at any moment.
    Do you see where I’m going with this?

    I wouldn’t even have commented though if it weren’t for that “This is no longer hypothetical” stuff. This is all hypothetical.

    • Did you watch the Google video? The cars they are making have a single on-off switch. There is no steering wheel and there are no pedals which enable drivers to take control at any moment. It isn’t hypothetical: these cars exist.

  • Shouldn’t the car avoid calculations about human life altogether, and instead focus on obstacle avoidance, with humans simply being in the class of the highest-priority obstacles to avoid?

    Innocence. No?

    • Continuing this line of reasoning: the presence of an obstacle is measurable, but the value of a human life is immeasurable. So while the sensors of the car may be able to detect the presence of a living obstacle, how could the car then use such metrics to make crash-avoidance decisions if, in the process of doing so, calculations of the immeasurable are involved?

  • There’s an assumption here that people would still need to OWN their own cars. Wouldn’t self-driving cars be the ultimate public transport solution? With my European-Socialist upbringing, that makes me think all cars should naturally be owned by the government, and we just get to use the nearest available one when we need one… all cars would be created equal…

    • I do think the idea of owning your own car may become obsolete: I can just summon a car when I need one. (Not sure if they would be publicly or privately owned).

      But couldn’t I choose whether to summon a utilitarian or a deontological car?

  • Hey Owen,

    Great article. A few thoughts.

    1. The good news is that increased automation will bring everyone back to studying philosophy and specifically, ethics. That’s all very necessary for this whole process to work.

    2. I’m inclined to think that, if we take Law Zero as the most fundamental, then we’ve got to design the algorithms to flip a coin and decide randomly, no? If all humans, present and future, are to be treated equally, then all humans alive today are to be treated equally as well. But that’s just me. There are a few Comcast executives and anti-net-neutrality folks, all with a lot of money, who would surely disagree with me.

    3. Other good news: it seems likely that these events, in which an automated driver kills a person, will happen at 1/10,000th (or less) of the rate we see with human drivers. They might have the same frequency as something like an OJ Simpson murder, in which case we all look at each other, talk about the event, and decide democratically how to change the algorithms.

    Thanks for posting.

    Ted

    • Owen – thanks for this.

      I think this is the most immediate version of what will become a more general problem: how to programme ethics into algorithms. When you have “narrow” AI, like the Google car example, the problem is pretty hard; when you have “general” AI and you have to start programming general ethical codes, it gets really very tough.

      Which is why I definitely agree with Ted and think that there is a fascinating potential for a resurgence in applied ethics focused on this field. It also has a very interesting impact on ethics: when you have to not only have an ethical theory but also be precise enough to turn it into code, you have to get pretty specific about what you mean. I also think it is important that philosophers and policy-makers (and ideally, at some level, the general public) get involved in these kinds of questions, so that it doesn’t just end up as a technical exercise hidden in the way the algorithms themselves are designed.

      The people that worry about this stuff most (Future of Humanity Institute and the like) are fascinating in having people who spend about half their time studying the philosophical issues and half their time studying the technical issues.

      As an aside, one of the solutions I have seen to the problem of turning the consequentialist versus deontological (versus virtue ethics?) debates into something usable by an algorithm is to line up a whole series of ethical codes as agreed by a group of reasonably well-informed people, and then to get those same people to assign probabilities to the different ethical systems being “correct”. These probabilities then give the computer the weightings to apply to each of the different ethical systems (62% utilitarian, 17% egalitarian, 16% deontological and 5% Nietzschean, or whatever) when calculating what it should do. Clearly this is a big fudge, but it is hard to work out any solution that isn’t either “solving ethics” or being a bit of a fudge.
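      A toy version of that weighting in Python (the candidate actions, the per-system scores and the credences are all made up; the 62/17/16/5 split just echoes the example above):

      ```python
      # Hypothetical credences that each ethical system is "correct",
      # as elicited from a panel of reasonably well-informed people.
      credences = {"utilitarian": 0.62, "egalitarian": 0.17,
                   "deontological": 0.16, "nietzschean": 0.05}

      # Each system scores each candidate action on its own terms (made-up numbers).
      scores = {
          "swerve": {"utilitarian": 0.9, "egalitarian": 0.6,
                     "deontological": 0.2, "nietzschean": 0.5},
          "stay":   {"utilitarian": 0.1, "egalitarian": 0.4,
                     "deontological": 0.8, "nietzschean": 0.5},
      }

      def weighted_choice(scores, credences):
          # Expected moral value of each action, given uncertainty about
          # which ethical system is right.
          def value(action):
              return sum(credences[s] * scores[action][s] for s in credences)
          return max(scores, key=value)

      print(weighted_choice(scores, credences))  # -> swerve
      ```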

  • Hi Owen,
    In the trolley / car case we have humans who can operate the emergency brake.
    Also, because of the internet of things, a drone (car) can be switched off by bystanders, e.g. by making its impulse become zero.
    Autonomous (war) drones are best compared to viruses; the issue is malicious use, not accidents.
    Best, Jaap

  • Another dilemma comes from the phrase “women and children first”. What if the choice is between a young child with an entire life in front of him and two old, diseased men? I personally believe that youth should be valued above age. Another possible dilemma is as follows: the choice is between the president and two other, ordinary people. Killing the president would cost only one human life but would harm many humans immensely, so should the car weigh that harm above a human life?

  • 1. Collective action problem: given that there will almost always be fewer people inside the car than out, rational individuals would choose what you call the ‘deontological car’ every time. Which would lead to worse collective outcomes.

    2. What kind of software are you using? I’m keen to adopt. I’m still encumbered with MS Office and, far worse, MS Outlook. And, to be frank, my daily experiences with these leave all talk of IT systems capable of confronting ethical dilemmas safely in the distant future.

  • Fabulous blog! Timely, provocative and clever all at once. Your stuff is the best!

    In fact it’s so provocative that one is provoked to feed back two thoughts…

    - Even those who are not religious might think that it was a matter of freedom of conscience to have the car with the algorithm they choose;

    - The blog chooses the classic Benthamite formulation of utilitarianism – greatest happiness of the greatest number. This formulation is catchy but it ends up being indeterminate – a car programmed by it would face undecidable choices.

    The difficulty, as mathematicians say, is that one can’t have two simultaneous maximands. If a soccer coach says he wants to get ‘the most goals from the most players’ he may well face a choice that the formulation leaves undecided: should he get more total goals from a smaller set of players, or should he get fewer total goals from a larger set of players?

    The post-Benthamite formulation (often attributed to JS Mill) is that the utilitarian goal is to produce maximum utility/happiness/well-being, etc. This takes the set of beings over which the maximization is performed as fixed, allowing the formulation to focus on the key variable: utility/happiness/well-being.

    Sorry to pour pedantry over your fantastic column!

    • Thanks “A Philosopher”. I’m flattered (especially as I know who you really are!)

      I agree that there is a question of freedom of conscience even for non-religious people.

      I agree that the Benthamite formula is problematic (and Cervantes says the same elsewhere in the comments). But the Mill formula restates the problem: it doesn’t solve it. Either way you need a social welfare function.

  • Maybe the car would also have to consider the Star Trek problem. What if in saving the 5 people vs. the 1, the car kills a pregnant woman who is carrying a child who will grow up to cure cancer?

  • The people at Less Wrong have been trying to make progress on the general form of this problem: how to algorithmically describe human ethics. Their focus is on how to make a generalized AI provably friendly – if an AI self-improves to somewhere beyond human intelligence, it becomes impossible for humans to meaningfully constrain it, so it’d be important to get friendliness correct.

    I see the need for it, though I haven’t the philosophical chops to evaluate it or contribute.

  • I guess I have a problem with driverless cars deciding what’s the “greatest good for the greatest number” of people. This hasn’t worked out so well when people were doing it, especially if they gained control of the levers of government. The Eugenics movement comes to mind – suddenly people were being forcibly sterilized to prevent them passing on their allegedly inferior genes, and even worse things were done with the same basic rationale in mind.

    About the same time that these cars hit the roads in quantity, there’s a good chance that everybody will be identifiable remotely, through the RFID chips they carry in their wallets, watches, phone-like devices or implanted under their skins (there already are scanners that can pull information from your credit cards as you pass by.) So the artificial intelligences embedded in these cars will be able to tell exactly who you are, and swerve accordingly. Given the mindsets of the corporations building these things, who do you think will be hit: a couple of unemployed artists strolling along together, or a single wealthy “job creator”?
