Legal and ethical responsibility
From a legal perspective, who is responsible for crashes caused by robots, and how should victims be compensated (if at all) when a vehicle controlled by an algorithm causes injury? If courts cannot resolve this problem, robot manufacturers may incur unexpected costs that would discourage investment. However, if victims are not properly compensated then autonomous vehicles are unlikely to be trusted or accepted by the public.
Robots will need to make judgement calls in conditions of uncertainty, or 'no win' situations. However, which ethical approach or theory should a robot be programmed to follow when there is no legal guidance? As Lin et al. explain, different approaches can generate different outcomes, including different numbers of crash fatalities.
Additionally, who should choose the ethics for the autonomous vehicle — drivers, consumers, passengers, manufacturers, politicians? Loh and Loh (2017) argue that responsibility should be shared among the engineers, the driver and the autonomous driving system itself.
However, Millar (2016) suggests that the user of the technology, in this case the passenger in the
self-driving car, should be able to decide what ethical or behavioural principles the robot ought to
follow. Using the example of doctors, who do not have the moral authority to make important
decisions on end-of-life care without the informed consent of their patients, he argues that there
would be a moral outcry if engineers designed cars without either asking the driver directly for their
input, or informing the user ahead of time how the car is programmed to behave in certain
situations.
Ethical dilemmas in development
In 2014, the Open Roboethics initiative (ORi 2014a, 2014b) conducted a poll asking people what they thought an autonomous car in which they were a passenger should do if a child stepped out in front of the vehicle in a tunnel. The car wouldn't have time to brake and spare the child, but could swerve into the walls of the tunnel, killing the passenger. This is a spin on the classic 'trolley dilemma', where one has the option to divert a runaway trolley from a path that would hurt several people onto a path that would hurt only one.
36 % of participants said that they would prefer the car to swerve into the wall, saving the child; however, the majority (64 %) said they would wish to save themselves, thus sacrificing the child. 44 % of participants thought that the passenger should be able to choose the car's course of action, while 33 % said that lawmakers should choose. Only 12 % said that the car's manufacturers should make the decision. These results suggest that people do not like the idea of engineers making moral decisions on their behalf.
Asking for the passenger's input in every situation would be impractical. However, Millar (2016) suggests a 'setup' procedure where people could choose their ethics settings after purchasing a new car. Nonetheless, choosing how the car reacts in advance could be seen as premeditated harm if, for example, a user programmed their vehicle to always avoid vehicle collisions by swerving into cyclists. This would increase the user's accountability and liability, whilst diverting responsibility away from manufacturers.
3.3.3 Case study: Warfare and weaponisation
Although partially autonomous and intelligent systems have been used in military technology since at least the Second World War, advances in machine learning and AI signify a turning point in the use of automation in warfare.
AI is already sufficiently advanced and sophisticated to be used in areas such as satellite imagery analysis and cyber defence, but the true scope of applications has yet to be fully realised. A recent report concludes that AI technology has the potential to transform warfare to the same, or perhaps even a greater, extent than the advent of nuclear weapons, aircraft, computers and biotechnology (Allen and Chan, 2017). Some key ways in which AI will impact militaries are outlined below.