AI and Responsibility – Studying How to Make AI Fairer

In 2020, three projects based at the University of Mannheim were awarded funding under the Baden-Württemberg Stiftung’s “Fair AI” initiative. The projects wrapped up this year. What were the Mannheim researchers’ findings? And how could they help ensure AI is used more responsibly in the future?

AI is playing an increasingly important role in many areas – and that raises some fundamental legal and ethical questions. The Baden-Württemberg Stiftung launched the funding program “Fair AI” to address those questions, with a focus on research projects looking at interactions between AI-based technology and society. Funding recipients included three projects from the University of Mannheim. “Three of the ten funded projects are based at Mannheim, which is a big coup for us. It confirms Mannheim’s leading role in the fields of data science and AI,” Professor Heiko Paulheim, Chair of Data Science, who was responsible for two of the projects, said at the time. Three interdisciplinary teams, including staff from four Mannheim schools, spent three years working on the projects. What were their findings?

When AI sets prices
Websites like Amazon are increasingly using AI systems to monitor their competitors’ prices and set their own prices accordingly. But what if AI systems do not just use other companies’ prices as guides, but instead collaborate to fix prices? “From work in other areas, we already knew that algorithms can learn to coordinate and cooperate. That’s a particular problem when it comes to price setting, as competition law prohibits collusion,” explained Professor Paulheim. In the project Competition Law-Compliant AI (KarekoKI), he worked with Professor Thomas Fetzer, Chair of Public, Regulatory, and Tax Law, to develop a legal framework and strategies for technical prevention of AI-based price fixing. The researchers used market simulations to study how AI agents can learn to cooperate in these scenarios. The results are unambiguous: the commonly used reinforcement learning algorithms can learn to set quasi-monopolistic prices and so have the potential to undermine functional markets and their mechanisms. “For legal scholars, that means we need to critically examine existing competition law with a view to pricing algorithms. This project showed that current legal standards are strongly geared towards humans and can only partly be applied to algorithms,” said Fetzer. The project therefore also made proposals on how current competition law could be amended to account for AI agents.
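To see how such market simulations work in principle, here is a minimal sketch – not the KarekoKI setup itself – in which two Q-learning agents repeatedly set prices in a toy duopoly and learn only from their own profit feedback. The price grid, cost, and demand function are all illustrative assumptions.

```python
import random

# Illustrative toy duopoly (all values assumed, not from the project):
PRICES = [1.0, 1.5, 2.0, 2.5, 3.0]  # discrete price grid
COST = 1.0                           # unit production cost

def demand(own, rival):
    """Toy linear demand: buyers shift toward the cheaper firm."""
    share = 0.5 + 0.25 * (rival - own)
    return max(0.0, min(1.0, share))

def profit(own, rival):
    return (own - COST) * demand(own, rival)

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Two independent Q-learners; each observes only the rival's last price."""
    rng = random.Random(seed)
    # q[agent][state][action]; state = index of rival's previous price
    q = [[[0.0] * len(PRICES) for _ in PRICES] for _ in range(2)]
    state = [0, 0]
    for _ in range(episodes):
        acts = []
        for i in range(2):
            if rng.random() < eps:          # explore
                acts.append(rng.randrange(len(PRICES)))
            else:                           # exploit current estimate
                row = q[i][state[i]]
                acts.append(row.index(max(row)))
        for i in range(2):
            r = profit(PRICES[acts[i]], PRICES[acts[1 - i]])
            best_next = max(q[i][acts[1 - i]])
            q[i][state[i]][acts[i]] += alpha * (
                r + gamma * best_next - q[i][state[i]][acts[i]])
        state = [acts[1], acts[0]]
    return q, [PRICES[a] for a in acts]
```

Because each agent is rewarded only for its own profit, neither is explicitly programmed to collude – yet in repeated interaction, both can discover that high prices are mutually profitable, which is precisely the dynamic the project examined.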

Is AI dividing society?
Paulheim was also involved in the project Responsible News Recommender Systems (ReNewRS), alongside Dr. Philipp Müller (Institute for Media and Communication Studies, University of Mannheim), Professor Harald Sack (FIZ Karlsruhe), and Professor Christof Weinhardt (KIT Karlsruhe). The team studied whether news recommender systems such as Google News and Bing News are responsible for polarizing or radicalizing users. A series of experimental studies was conducted to investigate the creation of filter bubbles by news recommender systems and measure their impact on opinions and user polarization. “We found that news recommender systems are not ‘neutral’: By personalizing recommendations, they provide users with an unbalanced selection of news. But we also found that – at least during the experiments, which mostly only lasted a short time – user polarization was very low,” said Paulheim. The team developed proposals for how news recommender systems could provide a more “neutral” selection of news without compromising the user experience.
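The mechanism behind an “unbalanced selection of news” can be illustrated with a deliberately simplified sketch – a hypothetical example, not the ReNewRS system: a naive recommender that always favors the topic a user has read most, contrasted with one that balances across topics. The article list and topic labels are invented for illustration.

```python
from collections import Counter

# Invented example corpus: (topic, relevance score)
ARTICLES = [("politics", 0.9), ("politics", 0.8), ("economy", 0.6),
            ("sports", 0.4), ("culture", 0.3)]

def personalized(history, k=3):
    """Naive personalization: front-load the user's most-read topic."""
    top = Counter(history).most_common(1)[0][0]
    same = [a for a in ARTICLES if a[0] == top]
    rest = [a for a in ARTICLES if a[0] != top]
    return (same + rest)[:k]

def balanced(history, k=3):
    """Round-robin over topics so no single topic dominates the list."""
    by_topic = {}
    for a in ARTICLES:
        by_topic.setdefault(a[0], []).append(a)
    out = []
    while len(out) < k:
        for topic in list(by_topic):
            if by_topic[topic]:
                out.append(by_topic[topic].pop(0))
            if len(out) == k:
                break
    return out

def topic_diversity(recs):
    """Number of distinct topics in a recommendation list."""
    return len({topic for topic, _ in recs})
```

Even this toy example shows the trade-off the team studied: pure personalization narrows the topical range of what a user sees, while a balancing rule can widen it without discarding relevance entirely.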

Bureaucracy and digitalization
How fairly does AI treat people? A growing number of decisions that affect our day-to-day lives are made by machines. That includes decisions by government services. The project Fairness in Automated Decision-Making (Fair ADM), led by Dr. Ruben Bach and Dr. Christoph Kern from the University of Mannheim and Professor Frauke Kreuter from LMU Munich, looked at discrimination and fairness in algorithm-based decision-making processes in the German public sector. “ADM systems are intended to improve and speed up bureaucratic processes. But their use also raises new social and ethical questions,” explained Kern. One worry is that ADM might reinforce existing social discrimination. The researchers therefore developed an algorithmic model for predicting long-term unemployment risk in the German labor market and examined it for potential unfairness. “The project found that involving social science perspectives is crucial to identify prejudices and injustices and to reduce social inequality in ADM systems,” said Kern.
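One common way to check such a risk model for unfairness is to compare how often it flags people from different demographic groups. The following sketch is a hypothetical audit – not the Fair ADM model – computing the demographic parity gap, i.e., the largest difference in positive-prediction rates between groups.

```python
def positive_rate(predictions):
    """Share of cases the model flags as 'high risk' (1 = flagged)."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest difference in flagging rates between any two groups.

    preds_by_group maps a group label to that group's binary predictions.
    A gap of 0 means all groups are flagged at the same rate.
    """
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)
```

For example, if group A is flagged at 50% and group B at 25%, the gap is 0.25 – a signal worth investigating, though whether it constitutes unfair discrimination is exactly the kind of question that, as Kern notes, requires social science perspectives alongside the metric.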

All three projects clearly showed that whenever AI is tasked with making decisions, close scrutiny is needed. “Our task as researchers is very clear, in my view: We mustn’t just focus on technical development, we also need to precisely analyze and anticipate how AI behaves or will behave in different situations. AI development is very different from traditional software development, so we need specifically tailored ideas and rules, and we need to work more closely with researchers from other disciplines than we do in pure development. We are responsible not just for developing AI, but also for ensuring it is used fairly and responsibly,” said Paulheim. To shore up its expertise in this area, the School of Business Informatics and Mathematics has appointed a new junior professor, Dr. Philipp Kellmeyer, specializing in responsible data science.

Text: Jule Leger/December 2023