AI Guideline
as at 22 October 2025
In its meeting of 15 October 2025, the President’s Office of the University of Mannheim adopted the following AI guideline.
Preamble
1Artificial Intelligence (AI) is becoming increasingly important in all fields of work at the University of Mannheim. 2Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (“AI regulation”) is applicable to the University of Mannheim. 3The development, procurement and use of AI hold a wide range of opportunities for research and teaching as well as for the central and decentralized service institutions; however, AI raises complex legal issues, particularly with regard to data protection, copyright, IT security, transparency and fairness. 4On the one hand, this guideline seeks to enable members of the university to use AI systems in a lawful and responsible manner. 5On the other hand, the procurement, development and use of high-risk AI systems in particular are to be placed under internal university control, as they may not only entail particular risks of harm to the fundamental rights, safety or health of natural persons, but also require the university to take extensive precautions. 6Where scientific research and development activities are excluded from the scope of the AI regulation and of this guideline, this exclusion does not relieve scientists from upholding ethical standards and scientific integrity or from using AI responsibly.
Section 1 – Purpose and scope
1This guideline lays down binding rules for the development, procurement, use, control, distribution and commissioning of AI systems for members and affiliates of the University of Mannheim, without prejudice to other applicable legal provisions, ethical and scientific standards and binding framework conditions of third parties. 2The aim is a legally compliant, ethically responsible and transparent approach to AI at the University of Mannheim, without unduly restricting academic and teaching freedom and, in particular, innovative research.
1In line with article 2 subsection 6 of the AI regulation, this guideline does not apply to AI systems or AI models, including their output, that are developed and put into operation for the sole purpose of scientific research and development. 2Pursuant to article 2 subsection 8 of the AI regulation, this guideline also does not apply to research, testing and development activities on AI systems or AI models before they are placed on the market or put into service. 3Such activities are to be carried out in accordance with applicable European Union law. 4Tests under real-life conditions must comply with the corresponding requirements of the AI regulation.
The freedom of science, research and teaching remains unaffected.
Section 2 – Definition
For the purposes of this guideline, the definitions of the EU AI regulation apply in the version in force at the time.
Section 3 – High-risk AI systems and prohibition of the use of AI systems for other purposes
1“High-risk AI systems” are AI systems that are considered high-risk in accordance with article 6 of the AI regulation. 2This includes, among others, AI systems in the following areas if they pose a significant risk of harm to the fundamental rights, safety or health of natural persons by, inter alia, significantly influencing the results of decision-making processes: general education and vocational training; employment, human resources management and access to self-employment; access to and use of essential private services and essential public services and benefits. 3In an official context, members and affiliates of the University of Mannheim may only develop, procure, use, operate or distribute high-risk AI systems if the President’s Office has been informed in advance and has given its explicit approval. 4Without the prior express consent of the President’s Office, the following is not permitted at the University of Mannheim:
- putting the name or trademark of the University of Mannheim on a high-risk AI system that has already been placed on the market or put into service;
- making a substantial modification to a high-risk AI system that has already been placed on the market or has already been put into service in such a way that it remains a high-risk AI system pursuant to article 6 of the AI regulation;
- modifying the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service in such a way that the respective AI system becomes a high-risk AI system in accordance with article 6 of the AI regulation.
5If a modification as defined in sentence 4 is made without the consent of the President’s Office, the parties responsible assume all legal obligations arising from the modification, including, in particular, those applicable to providers of high-risk AI systems under the AI regulation.
Section 4 – Central AI registry
1The University of Mannheim is establishing a central AI registry. 2Through systematic documentation, the central AI registry seeks to ensure that the development, procurement, operation and use of AI systems at the University of Mannheim are transparent, responsible and reproducible, and that legal, ethical and safety-related requirements are met.
Section 5 – Process for entry in the central AI registry and for obtaining the approval of the President’s Office for high-risk AI systems
In an official context, members and affiliates of the University of Mannheim must enter AI systems into the central AI registry prior to their development, procurement, operation and use; this also applies to pilot projects and test systems as well as cloud-based or API-based AI services.
1An entry in the central AI registry is made upon application to the AI core team responsible for the AI registry. 2The application must contain the following information:
- provider and name of the AI system;
- identification of necessity, indication of specific aims, desired features and possibilities of use for the AI system;
- office responsible for the use of the AI system;
- processed types of data;
- group of persons affected and
- assessment of the risk category according to the AI regulation (minimal, limited, high or unacceptable risk).
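For illustration only, the required application fields listed above could be modeled as a simple structured record. The class name `RegistryApplication`, the field names, and the example values below are assumptions chosen to mirror the list; they are not an official schema of the University of Mannheim's AI registry.

```python
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    """Risk categories under the AI regulation, as named in the guideline."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


@dataclass
class RegistryApplication:
    """One application for entry in the central AI registry (illustrative)."""
    provider: str                    # provider of the AI system
    system_name: str                 # name of the AI system
    necessity_and_aims: str          # necessity, specific aims, desired features, uses
    responsible_office: str          # office responsible for the use of the system
    processed_data_types: list[str]  # types of data processed
    affected_persons: str            # group of persons affected
    risk_category: RiskCategory      # assessed risk category


# Hypothetical example application
app = RegistryApplication(
    provider="ExampleAI GmbH",
    system_name="Example Chat Assistant",
    necessity_and_aims="Drafting support for administrative correspondence",
    responsible_office="University IT",
    processed_data_types=["text prompts", "no personal data"],
    affected_persons="administrative staff",
    risk_category=RiskCategory.LIMITED,
)
```

A structured record of this kind would let the AI core team check mechanically that every required field is present before starting its substantive review.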
The AI core team reviews the information after having received the application.
1If the application concerns a high-risk AI system, the AI core team, after its review, forwards all relevant information to the President’s Office, which decides on the approval of the development, procurement, operation, use or distribution on the basis of a risk assessment by the member of the President’s Office responsible for digitalization. 2The decision is entered into the central AI registry.
1If the application does not concern a high-risk AI system, the AI core team may only postpone the entry in the AI registry if it is not possible to complete the review within two weeks; the applicants must be informed of the postponement and the further process. 2If the entry is not postponed and the AI core team approves it, or if the AI core team does not object to the entry within two weeks of receiving the application, the development, procurement, operation or distribution by members or affiliates of the University of Mannheim is entered into the AI registry, including the features and possibilities of use indicated in the application; the right of the President’s Office to make a different decision with effect for the future remains unaffected. 3In case of an objection, the AI core team must immediately forward all information, including an explanation, to the President’s Office; in such cases, the President’s Office decides on the entry of the AI system on the basis of a risk assessment by the member of the President’s Office responsible for digitalization. 4The decision is entered into the central AI registry.
The AI core team responsible for the central AI registry determines further details of the process to be observed.
Section 6 – Basic principles for the use of AI systems
1To promote equal opportunity, innovation, and competitiveness, the University of Mannheim provides its members and affiliates with reliable technical and data-protection-compliant access to selected AI systems and offers information services and advice to support their competent and legally compliant use. 2Members and affiliates of the University of Mannheim are to give preference to these systems in an official context. 3To ensure compliance with the University of Mannheim’s data protection measures and to safeguard the security and integrity of operational, business, and other sensitive or confidential data, members and affiliates must process such data exclusively in AI systems that are explicitly designated as approved for this purpose in the AI registry. 4Members and affiliates of the University of Mannheim are not permitted to use AI systems that are marked as “not approved” in the AI registry.
1For the AI systems entered in the central AI registry, the University of Mannheim takes measures to ensure that all user groups (including persons who use AI systems as part of their work at the University of Mannheim or on behalf of the University of Mannheim) have an appropriate level of AI competence. 2In this regard, the technical knowledge, experience, training or instruction of the persons concerned, the specific context in which the AI system is used, and the persons or groups of persons who intend to use it must be considered. 3For this purpose, the University of Mannheim provides appropriate information, qualification and training services.
1Users are responsible for the accuracy, lawful use and dissemination of content produced with AI systems. 2Results produced by AI can be very accurate and precise, but they can also be entirely fictional, flawed or discriminatory. 3Therefore, AI-generated statements, literature, sources and results must always be verified, reviewed critically and, if necessary, adapted before they are used. 4The ultimate responsibility always rests with the individual, particularly in cases involving decisions with significant consequences.
1The University of Mannheim, its members and its affiliates use AI responsibly and transparently. 2The use of AI must, in particular, be disclosed to persons who directly interact with an AI system (e.g., chatbots, language assistants), as well as to persons who are affected by decisions with personal consequences that are made by AI or with the support of AI. 3Users disclose the use of content that is entirely or partly AI-generated if a respective duty to notify applies (e.g., in case of possible deception).
Section 7 – Entry into force
This guideline will come into effect on the day after its publication in the Bulletin of the President’s Office (Bekanntmachungen des Rektorats).
For AI systems within the meaning of section 5 subsection 1 that were already procured, developed, operated or marketed before this guideline came into effect, entry in the central AI registry must be applied for immediately after this guideline comes into effect.
Mannheim, 22 October 2025
Professor Thomas Fetzer
President
