Bertelsmann Stiftung

Algorithms quietly influence our daily lives without our knowing it. They determine what appears in our newsfeeds, which jobs we are interviewed for and which loans we are eligible for. But what are the consequences of machines judging people?

In the project “Ethics of Algorithms,” four researchers at Germany’s Bertelsmann Stiftung investigate this question. They want to help design algorithmic systems that are more equitable and that facilitate greater social inclusion, reframing technology around not what is possible but what is socially meaningful.

The project has three focus areas:

  1. Raise awareness: the project informs the public of the opportunities, risks and, above all, the relevance of algorithmic processes.
  2. Structure the debate: the project provides input to foster a fact-based, solutions-driven discussion about algorithms and ethics and promotes the intersectoral and interdisciplinary exchange of ideas.
  3. Develop solutions: here, the project tests promising approaches to be applied at the intersection of technology and society.

With the support of Berlin’s iRights.Lab, Bertelsmann published the Algo.Rules, a set of guidelines intended for everyone involved in the development and use of algorithmic decision-making systems. The rules are intended to ensure that ethical norms make their way into algorithmic code.

Algo.Rules #s 1 and 2

When first conceived by Bertelsmann researchers, the Algo.Rules were to be an ethics code for programmers, much like the Hippocratic Oath for doctors. Because so many people from different backgrounds are involved in the development of algorithmic systems, however, the writers broadened their focus.

“Errors and human bias can come about through the data, the code itself, the underlying goals of the systems and also the way algorithmic outputs are interpreted by humans,” explained Carla Hustedt, Project Lead of Bertelsmann’s “Ethics of Algorithms.”

When discussing how to fight bias, many tech professionals advocate increasing diversity in the industry. To illustrate the importance of diversity, Hustedt pointed to early car safety tests.

“For a long time, women had a much higher chance of dying in a car accident than men because the crash-test dummies used for building the safety systems of the cars were modeled on male body sizes,” said Hustedt.

The same pattern can be seen in tech today, where facial recognition systems struggle more to identify Black women than white men. Still, Hustedt acknowledges that increasing diversity will not solve all AI bias problems.

“Oversight bodies and civil society watchdog organizations should be strengthened legally and financially, and there is an urgent need for competency building among policymakers, regulatory bodies and the public in general,” Hustedt said.

For more on algorithmic ethics and Bertelsmann’s research, visit their blog: