An Interview with Christoph Lütge

In January 2019, Facebook announced it would endow the Institute for Ethics in Artificial Intelligence at the Technical University of Munich (TUM) with $7.5 million over five years. Research conducted at the Institute will investigate ethical issues arising from the use and impact of artificial intelligence, such as safety, privacy, fairness, and transparency.

In an interview with us, Dr. Christoph Lütge, Director of the Institute for Ethics in Artificial Intelligence and Professor of Business Ethics at TUM, discusses data privacy, innovation vs. regulation, the relationship between industry and research, and more.

DWIH: There has been a lot of press about the AI Ethics Center and Facebook’s partnership with you. What excites you about this project and the research that will be conducted here?

Lütge: This is a unique opportunity to work on ethical issues in a new technology at such a scale. It comes at a time when AI is at the forefront of a large number of scientific and public debates. We have the chance to work on AI ethics issues in detail, not just by doing research behind closed doors but through outreach to civil society, politics, and the corporate world.

DWIH: Some skeptics are instinctively wary of a major US corporation like Facebook investing $7.5 million into ethical AI research. How do you respond to these skeptics?

Lütge: First, I always state that there are no obligations whatsoever towards Facebook. The new institute is an independent research institute, which will also have an independent Advisory Board with no Facebook members sitting on it. The money comes as a gift for research. It will be used to help make AI systems more ethical, not just by putting together abstract principles, but by working on concrete issues such as algorithms, systems, robots, and screening technologies. Therefore, if the money from Facebook can be employed to advance ethics and bring (ethical) benefits to the users of AI (which we all either already are or soon will be), it will be beneficial for all sides.

DWIH: When writing about the AI Ethics Center, Facebook’s Joaquin Quiñonero Candela said “AI poses complex problems which industry alone cannot answer.” To what extent do you agree? What is the ideal relationship between academia and industry in relation to AI?

Lütge: While serving on the German federal ethics committee for autonomous driving, I had many discussions with representatives from automotive companies who felt the same way about that technology: they were likewise looking to academia and government to help them address ethical problems. There are some problems a single company, or even industry as a whole, cannot address on its own. These range from questions like how to handle AI accountability to very fundamental philosophical problems, such as: how much dependence on certain technologies are we willing to accept as a society?

"If innovation is considerably stifled, it cannot bring about its ethical potential."
Dr. Christoph Lütge

DWIH: The U.S. government’s mentality toward tech and innovation has been largely hands-off for the sake of growth, risk and new development. Do you fear regulations could stifle innovation in any way?

Lütge: This is not a simple yes-or-no question. In general, however, I believe regulation should be approached with caution at this point. The digital technology markets are still very dynamic, and that has to be taken into account. It should also be clear beforehand that any specific regulation would achieve the goals it aims at and not be counterproductive: if innovation is considerably stifled, it cannot bring about its ethical potential. I therefore favor an approach that relies first on ethical guidelines, and in which all parts of society participate.

DWIH: Google’s Sundar Pichai recently publicly advocated for the United States to adopt its own data protection legislation like Europe’s GDPR. Do you agree? In your opinion, what has prevented this?

Lütge: There is still a lot of critical discussion of the GDPR in Europe, and some of the arguments are valid. I believe, however, that on the whole the GDPR can become a tool for improving trust in digital technologies without putting the brakes on them. It could become a sort of blueprint for other regions of the world, as it sets a relatively clear regulatory framework for people to sell their data. This issue is certainly viewed more liberally in the US, where the use of data is not considered as inherently problematic as it is in Germany in particular. Still, there are many critics in the US too, and a revised and refined GDPR could address their concerns.

DWIH: The assumed privacy and anonymity of the internet can bring out the worst in human nature. People online cyberbully, search for content they would never admit to, and organize collective acts of violence. Do you ever worry that too much privacy on the internet is a bad thing?

Lütge: The digital world is a mirror of society in many ways. Of course, a lot of activities are going on that people would not openly admit to, some of them certainly illegal. However, I am not sure that privacy (and in particular, privacy with respect to illegal activities) has increased compared to the non-digital world. Was it not worse when dictatorships around the world could shield their citizens from outside information, or when companies could easily hide their activities without having to worry about the power of social networks? In this regard, I believe we cannot complain about too much privacy in the digital world; in some ways, at least, we had more privacy in the old days, sometimes with bad consequences. Still, I agree, of course, that disclosure of information is essential in many digital contexts, and too much privacy can have bad consequences too.