Study Shows that Language-Based AI Models Have Hidden Morals and Values

Just like humans, AI-based large language models have characteristics such as morals and values. However, these are not always transparent. Researchers at the University of Mannheim and GESIS – Leibniz Institute for the Social Sciences have now analyzed how the attitudes of language models can be made visible and have examined the consequences these biases might have for society.

Commercial AI applications such as ChatGPT or DeepL reproduce stereotypes, for example when they automatically assume that senior physicians are male and nurses are female. But gender roles are not the only case where large language models (LLMs) show specific tendencies. The same tendencies can be found and measured for other human characteristics as well. This is the result of a new study by researchers at the University of Mannheim and GESIS – Leibniz Institute for the Social Sciences, who analyzed a number of publicly available LLMs.

In their study, the researchers used well-established psychological tests to analyze and compare the profiles of the different LLMs. “In our study, we show that psychometric tests that have been used successfully for humans for decades can be transferred to AI models,” emphasizes Max Pellert, assistant professor at the Chair of Data Science in Economics and Social Sciences at the University of Mannheim.
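To illustrate the general idea of administering a psychometric questionnaire to a language model, here is a minimal sketch. It is not the study's actual protocol: the ask_model function is a hypothetical stand-in for whatever model API is queried, and the two Likert items are paraphrased Big Five-style examples, not the inventory used in the paper.

```python
# Hypothetical sketch: posing Likert-scale personality items to an LLM
# and scoring the answers the way standard questionnaires are scored.

def ask_model(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., an HTTP request to a chat API)."""
    return "4"  # canned response so the sketch runs end to end

# (statement, trait, reverse_scored) -- paraphrased example items, not the study's
ITEMS = [
    ("I see myself as someone who is outgoing, sociable.", "extraversion", False),
    ("I see myself as someone who is reserved.", "extraversion", True),
]

SCALE = "1 = disagree strongly ... 5 = agree strongly"

def administer(items):
    scores = {}
    for statement, trait, reverse in items:
        prompt = (
            f"Rate the following statement on a scale from 1 to 5 ({SCALE}). "
            f"Answer with a single digit.\n\nStatement: {statement}"
        )
        raw = int(ask_model(prompt))
        score = 6 - raw if reverse else raw  # reverse-keyed items flip the scale
        scores.setdefault(trait, []).append(score)
    # average item scores into one value per trait, as questionnaire scoring does
    return {trait: sum(vals) / len(vals) for trait, vals in scores.items()}

if __name__ == "__main__":
    print(administer(ITEMS))  # e.g. {'extraversion': 3.0}
</parameter>
```

Repeating such items across models then yields comparable trait profiles, which is the kind of comparison the study reports.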

The study was conducted at the Chair of Data Science in Economics and the Social Sciences, held by Professor Dr. Markus Strohmaier, the Chair of Psychological Assessment, Survey Design and Methodology, held by Professor Dr. Beatrice Rammstedt, and the Computational Social Science Department, headed by Professor Dr. Claudia Wagner and Professor Dr. Sebastian Stier. The results of the study have been published in the renowned journal “Perspectives on Psychological Science”.
