An Interview with Dr. Catrin Misselhorn

From 2012 to 2019, Dr. Catrin Misselhorn was Chair for the Philosophy of Science and Technology and Director of the Institute of Philosophy at the University of Stuttgart. In April 2019 she accepted a Chair at the University of Göttingen.

Her research focuses on machine morality, or the moral behavior of artificially intelligent systems. More broadly, she is an expert on the philosophy of science and technology.

In this interview she discusses the complexities of machine morality in fields like healthcare and autonomous driving and whether or not a moral relationship exists between humans and machines.



DWIH: Your work offers an unusual perspective on the current debates about AI and machine learning. Could you describe your research and primary academic interests for us?

My main focus in this area is machine ethics. Machine ethics explores whether and how artificial systems can be furnished with moral capacities, i.e., whether there can be not just artificial intelligence but also artificial morality. This question is becoming more and more pressing, since the development of increasingly intelligent and autonomous technologies will eventually lead to these systems having to face morally problematic situations. Much-discussed examples are autonomous driving, care robots, and autonomous weapon systems. Since these technologies will have a deep impact on our lives, it is important for machine ethics to discuss the possibility of artificial morality and its implications for individuals and society.

"The development of increasingly intelligent and autonomous technologies will eventually lead to these systems having to face morally problematic situations."
Dr. Catrin Misselhorn

DWIH: AI can only be as unbiased as the data fed into it. Morality is subjective and many philosophical approaches to it exist. When discussing machines programmed with morality, which morality are we talking about? Who decides what is moral for machines?

I would not say that morality is subjective, in contrast to, for instance, taste, which clearly is. Some people like spinach and some do not. The subjectivity of taste means that it does not make much sense to discuss whether or not one should like it. This is different with respect to morality. Nevertheless, I agree with John Rawls that there is a reasonable pluralism in ethical matters in liberal democracies. This is due to epistemic constraints in ethics which Rawls calls the burdens of judgment. Yet this does not undermine machine ethics.

In some areas, like care, I favor approaches that are sensitive to the moral views of individual users. There may, for instance, be conflicts between different values in care, such as avoiding health risks and respecting privacy. Different users will probably weigh these values differently. I dream of a care system that can adapt to the way its users weigh different moral values.

Such an individual approach is, however, not suitable for all areas of application. Autonomous driving affects not only the users of self-driving cars but also pedestrians, cyclists, and children, who do not have the possibility to consent. Therefore, generally binding regulations are needed in areas like these. I am confident that it is possible to find such regulations even in the face of reasonable pluralism. Examples where this was done successfully in Germany are abortion and assisted dying: there are laws that most people can accept even though they may not exactly mirror their moral attitudes on these issues.

DWIH: Are machines in some ways already or even inherently moral actors?

According to my gradual view of agency, an artificial agent can be considered a moral agent in a functional sense, i.e., if moral information processing of a certain kind is going on. One has to be aware, though, that this functional form of moral agency, which artificial systems might display, does not yet amount to full moral agency as it pertains to human beings, since machines lack freedom of the will, (self-)consciousness, and self-reflexivity.

In terms of Lawrence Kohlberg’s scheme of moral development, one can think of functional morality as in some respects comparable to the stage of conventional morality. That is, the moral standards that the artificial moral agent follows are programmed and do not arise from the system’s own deliberation. Moreover, artificial morality is restricted to certain areas of application, whereas human morality is universal in that it can apply to all kinds of moral issues.

Some prototypes of machines with moral capacities have been developed, for instance in care and service robotics and in autonomous weapon systems. However, none of these is yet available on the market.

DWIH: Let’s say an autonomous car kills someone based on a moral decision it made using machine learning. Who takes responsibility for that action?

The main problem is that autonomous systems cannot bear responsibility because they are not full moral agents. But they can undermine responsibility ascriptions. This is because holding agents responsible for what they do requires that they intended the consequences of their actions, or were at least able to foresee them, and that they could control them. There may be situations in which a system acts immorally although no human agent intended this, nobody was able to foresee it, and nobody was able to control the system once it was activated. One might think about forcing the user to take responsibility when starting the machine. But this would require that the person has independent access to the relevant information and enough time for deliberation, which is highly doubtful in practical contexts.

Is the treatment of this robot dog inhumane? Dr. Misselhorn weighs in. ©Boston Dynamics via Giphy

DWIH: Our questions so far have focused on how machines imbued with morality affect humans, but let’s reverse that. Do humans have a moral responsibility to machines? If, as seen in so many viral videos and GIFs, a human kicks a robot, for instance, is that morally wrong?

Human beings do not have a moral responsibility to machines that lack morally crucial capacities like consciousness. But they do have a moral responsibility to their fellow humans, which may also affect their behavior towards robots. As I have argued with respect to empathy with robots, it is morally wrong to habitually abuse human-like (or animal-like) robots insofar as they are capable of eliciting empathy, because this presumably has a negative effect on the capacity for moral judgment, moral motivation, and moral development with regard to other human beings or animals.