FAQs: AI at the University of Mannheim

How may AI be used at the University of Mannheim? What does “high-risk AI” mean—and where can the central AI registry be found? These FAQs on the AI guideline (KI-RiL) provide a quick overview of key terms, processes and regulations regarding the use of AI at the university.

As of 18 December 2025

The AI guideline was adopted by the President’s Office of the University of Mannheim on 15 October 2025. It came into effect on 4 November 2025.

I. General questions

  • Which principles guide the use of AI systems at the University of Mannheim?

    1) The University of Mannheim sees the developments in the field of artificial intelligence (AI) as a great opportunity for research, teaching and service institutions.

    2) People are responsible for their use of AI-generated content. Such content must always be reviewed critically before it can be used.

    3) In certain cases, it may be necessary to make transparent to users that they are interacting with AI. The use of AI systems must therefore be disclosed to the extent that an information obligation as set out in article 50 of the EU AI regulation applies.

    4) To comply with data protection provisions, confidential information and personal data may only be processed using AI systems with a corresponding approval in the central AI registry of the University of Mannheim (see section IV).

  • What does the AI guideline govern?

    The AI guideline describes how artificial intelligence is to be developed, used and controlled at the University of Mannheim in a legally compliant and responsible manner, without unduly restricting science, research and teaching (section 1 subsection 1 KI-RiL). It is in accordance with the EU AI regulation, implementing its requirements at the institutional level.

    In particular, the AI guideline takes into account the requirements set out in the EU AI regulation with respect to risk classification, transparency and documentation when using AI systems. It therefore applies in addition to other relevant regulations, in particular the EU AI regulation, data protection provisions and requirements of third parties (e.g., parties involved in research funding, publishers or contractual partners).

  • Who does the AI guideline apply to?

    The AI guideline applies to all members and affiliates of the University of Mannheim, i.e., in particular to employees in teaching, research and administration as well as students (section 1 subsection 1 sentence 1 KI-RiL).

  • Who can I turn to with questions regarding AI?

    If you have any questions or suggestions regarding AI at the University of Mannheim, please do not hesitate to contact the AI core team via ki.teammail-uni-mannheim.de. We will be happy to process your request directly or forward it to the persons responsible.

    For more information on the AI core team, please see our intranet page or, if you are a student, our Internet page.

II. Terms

  • What is “AI”?

    AI (artificial intelligence) means machine systems that can perform tasks that typically require human intelligence—for example, in the areas of learning, problem-solving, language processing, perception or decision-making.

    For further information, we recommend the AI training course offered by the Hochschulnetzwerk Digitalisierung Baden-Württemberg (HND), the network for digitalization in teaching at higher education institutions in Baden-Württemberg, which is available on ILIAS.

  • What is an “AI system”?

    An AI system is a machine system that independently generates results, based on data or defined rules, which can influence its environment and, in particular, human actions or decisions. The term “AI system” is defined in article 3 number 1 of the EU AI regulation. For guidance, the Bundesnetzagentur provides the KI-Compliance Kompass (available in German only), which you can use to check whether your system is likely to fall under the EU AI regulation (non-binding initial assessment).

  • What are “high-risk AI systems”?

    High-risk AI systems are systems that pose a significant risk of harm to the fundamental rights, safety or health of natural persons in certain areas of use specified in the EU AI regulation. Among other things, this means that they can significantly influence decisions regarding people’s opportunities, participation or rights (section 3 sentence 2 KI-RiL).

    Corresponding risks may, for example, arise in the following areas:

    • education and vocational training, e.g., with respect to examinations or admissions,
    • employment and human resources management, e.g., in application or selection processes.

    The classification and risk assessment of the relevant AI system is performed by the AI core team or the member of the President’s Office responsible for digitalization.

    Provisions regarding the classification of high-risk AI systems can be found in article 6 of the EU AI regulation.

III. Use of AI systems at the University of Mannheim

  • Does the AI guideline also apply to research?

    As a rule, the AI guideline and the EU AI regulation do not apply to scientific research and development activities, as these are protected by the freedom of science and research (section 1 subsections 2 and 3 KI-RiL). However, the freedom of science and research does not relieve anyone from the fundamental duty to use AI systems ethically and responsibly (preamble sentence 6 KI-RiL).

    The exception described includes all AI systems and AI models developed or used for the sole purpose of scientific research or development—regardless of whether they are used to conduct research on AI or, for example, to evaluate data in another discipline (section 1 subsection 2 sentence 1 KI-RiL).

    Example:
    A research team develops a neural language model to analyze how the German language has changed over time.
    → The AI guideline and the EU AI regulation do not apply, as the AI system is used exclusively for scientific purposes.

    Restrictions:

    The AI guideline does, however, apply as soon as AI is used for more than just research, for example:

    • if it is tested or used in real processes or applications (e.g., in teaching, administration, medicine, human resources) or
    • if it is developed on behalf of a third party or with a view to practical or commercial use.

    In these cases, the provisions of the AI guideline must be observed.

  • Can I use AI systems at the University of Mannheim?

    Generally speaking, yes. You may use AI systems available on the Internet, as long as you do not enter any confidential information or personal data. However, AI systems that are marked as “not approved” in the AI registry must not be used (section 6 subsection 1 sentence 4 KI-RiL). 

    Example: DeepL may be used for translations, provided that no confidential content or personal data is entered. 

    However, you should give preference to the AI systems provided by the University of Mannheim (section 6 subsection 1 sentence 2 KI-RiL), as these offer reliable, data-protection-compliant technical access to selected applications.

    However, if you intend to process confidential information or personal data, you may only use systems approved in the AI registry.

    If you develop an AI system or procure or operate one for the University of Mannheim (including pilot projects, test systems and cloud-based or API-based AI services) in an official context and the AI guideline applies, this system must be entered into the central AI registry in advance (section 6 subsection 1 KI-RiL).

    An overview of the AI systems and use cases included in the AI registry as well as an application form (DOCX) (in German only) can be found on the intranet.

  • If I want to use AI systems for research purposes, can I use systems other than those centrally provided by the University of Mannheim?

    Yes, you may use AI systems other than those provided by the university if you use them as part of your research. The AI guideline does not apply to AI systems developed or operated specifically for the sole purpose of scientific research and development (section 1 subsection 2 sentence 2 KI-RiL). However, the freedom of science and research reflected here does not relieve you from upholding ethical standards and scientific integrity, nor from adhering to other applicable provisions of European Union law, such as data protection law (preamble sentence 6, section 1 subsection 2 sentence 3 KI-RiL).

  • Can I use AI systems in teaching?

    When using AI in teaching, the restrictions set out in the AI guideline must be observed. For example, confidential information and personal data may only be processed using systems that have been explicitly approved for this purpose (section 6 subsection 1 sentence 3 KI-RiL). In addition, it must be ensured that users have a sufficient level of AI competence and that no prohibited systems are used (section 6 subsection 1 sentence 4 and subsection 2 KI-RiL, article 5 EU AI regulation). Applicable regulations, in particular those regarding legal aspects of examinations and data protection law, must be complied with.

    You may use AI systems in teaching if compliance with the requirements above can be ensured and one of the following applies:

    1) such use has been approved by the university,
    2) the systems are used on a voluntary basis and students who do not use them are not disadvantaged in their studies, or
    3) the systems are intended to be used in elective courses.

    In the last case, students can decide in advance whether they would like to attend the course under the given conditions (i.e., using the AI system). However, there must be a sufficient number of elective courses students can actually take that do not require the use of AI systems.

    Exception:
    In courses focusing on AI, the use of AI systems may be a mandatory component, for example in an “Introduction to AI” seminar in the bachelor’s program in Business Informatics. This applies regardless of whether the relevant course is elective or mandatory and regardless of which department offers it.

  • Which ways of using AI systems in teaching are not permissible?

    Application contexts in which not using an AI system puts students at a disadvantage in their studies are explicitly prohibited unless an equivalent alternative is offered. This includes disadvantages in exam preparation, coursework, etc.

    Example 1: Mandatory lecture
    You are holding a mandatory lecture for second-semester students. To support your students, you provide a link to an application from an AI provider that allows them to prepare for the examination using flashcards. However, the use of the AI system has not (yet) been contractually agreed between the provider and the University of Mannheim (such an agreement covers, in particular, the processing of confidential information and personal data).
    → In this case, use of the AI system is not approved, as not using it would put students at a disadvantage in their exam preparation. It would only be approved if students who do not wish to use the application were offered an equivalent alternative.

    Example 2: Elective course
    You are offering an elective project seminar where you would like to require students to use an AI system throughout the semester. Students have a sufficient number of other elective courses with no mandatory use of any AI systems to choose from.
    → In this case, use of the AI system is approved, since students are free to decide whether to participate in your course.

    High-risk AI systems (see “What are ‘high-risk AI systems’?”) may generally only be used with the prior explicit approval of the President’s Office and entry into the AI registry.

  • Can I use AI systems in my studies?

    That depends on the specific context. Generally speaking, you are allowed to use AI systems for your studies. However, we cannot make any general statements as to what exactly is or is not permitted, as this varies from case to case. This applies in particular to examinations: please always check with the teachers responsible for your examinations whether, and if so under what conditions, you are permitted to use AI systems. Failure to comply with this requirement may, in serious cases, result in you failing the final attempt of an examination.

    For further information, especially regarding ChatGPT, please see the web pages of the Teaching and Learning Center (ZLL).

  • Can staff members in the administration use AI systems?

    In the administration, you may use AI systems in accordance with the requirements set out under “Can I use AI systems at the University of Mannheim?”.

  • I intend to use a high-risk AI system. What do I have to take into account?

    High-risk AI systems may only be used with prior explicit approval of the President's Office (section 3 sentence 2 KI-RiL).

    The corresponding decision of the President’s Office is documented by way of an entry in the AI registry (section 5 subsection 4 sentence 2 KI-RiL).

  • What effect can using an AI system for other purposes have on whether it is classified as a high-risk AI system?

    An already approved AI system can become a high-risk AI system within the meaning of article 6 of the EU AI regulation if the system itself or its purpose is modified.

    An example from the area of teaching and examinations would be using a text improvement AI system to evaluate students’ work based on the number of AI-generated suggestions for improvement: if, for example, the number of suggestions were used as a basis for grading examinations (the more suggestions, the lower the grade), this would constitute a modification of purpose, which would in turn result in the AI system being classified as high-risk.

    You may only modify an AI system that has already been placed on the market or put into service in such a way that it becomes a high-risk AI system within the meaning of article 6 of the EU AI regulation if you have obtained the explicit approval of the President’s Office in advance (section 3 sentence 4 KI-RiL).

  • Does the University of Mannheim provide any AI systems?

    Yes, the University of Mannheim makes AI systems available to its members and affiliates after a thorough review. An overview of the AI systems provided by the university, which are to be given preference over others, and their use cases can be found on the pages of the AI core team.

  • Where can I get training regarding AI?

    The university offers information, qualification and continuing education services to promote safe and competent use of AI. Further details are published on the intranet on a regular basis. In addition, a basic training module is available on ILIAS.

  • Can I use AI-generated content on the university’s website?

    The AI guideline does not prohibit you from using AI-generated content (e.g., text and images) on the website. However, in cases where people interact directly with an AI system (e.g., chatbots, language assistants) or where their fundamental rights, safety or health may be significantly affected by decisions made using AI, you must take appropriate measures to make the fact that they are interacting with an AI system transparent to users (section 6 subsection 4 KI-RiL).

    Users of the relevant systems are themselves responsible for complying with any regulations that apply independently of the AI guideline (e.g., copyright provisions).

IV. Central AI registry of the University of Mannheim

  • What is the central AI registry?

    In the central AI registry, AI systems developed, procured or used at the University of Mannheim are documented systematically. It also includes information regarding which AI systems may be used for processing confidential information or personal data in connection with official tasks and which AI systems must not be used at all.

  • What has to be entered into the central AI registry?

    All AI systems and use cases intended for processing confidential information or personal data or involving the processing of such information or data must be entered into the AI registry (section 6 subsection 1 sentence 3 KI-RiL).

    The decision of the President’s Office on a high-risk AI system is documented by way of a corresponding entry in the AI registry. 

    In addition, all AI systems developed, procured, operated or distributed in an official context must be entered into the AI registry along with their functions and possible uses. This also applies to pilot projects and test systems as well as cloud-based or API-based AI services (section 5 subsection 1 sentence 3 KI-RiL).

    The requirement to enter a system into the AI registry does not apply to AI systems developed and put into operation for the sole purpose of scientific research and development (section 1 subsection 2 sentence 1 KI-RiL), nor to research, testing and development activities on AI systems or AI models before they are placed on the market or put into service (section 1 subsection 2 sentence 2 KI-RiL).

  • What is the process for entering an AI system into the central AI registry?

    To enter an AI system into the central AI registry, the following steps are required (see section 5 KI-RiL):

    1. You submit an application containing certain basic information (including provider, name, identification of necessity, aims, features, possibilities of use, types of data, risk assessment) to the AI core team.
    2. The AI core team reviews the information after having received the application.
    3. If the application concerns a high-risk AI system, the AI core team forwards your application to the President's Office, which decides on the approval. This decision is documented in the AI registry.
    4. If the application does not concern a high-risk AI system and it is not possible for the AI core team to complete the review within two weeks, the applicants must be informed of the postponement and the further process.

    The corresponding application form can be found on the intranet under this link (DOCX) (in German only).

    If the system is approved or if the AI core team does not object to the entry within two weeks of the application being submitted, the system is entered into the AI registry as indicated in the application.

    In case of an objection, the AI core team forwards the application to the President’s Office, which makes a final decision on it.

  • What are the consequences of an AI system being entered into the AI registry as “not approved”?

    If an AI system is marked as “not approved” in the AI registry, it may not be used in an official context (section 6 subsection 1 sentence 4 KI-RiL).

Entry into force and transitional provision