Safe and Secure Use of AI in Research Projects
AI tools increasingly support all stages of research projects. At the same time, their use creates or amplifies risks related to research ethics, research integrity, and research governance (including legal and regulatory requirements).
This presentation introduces these risks, presents a risk management framework adapted to research processes, and outlines practical guidelines for managing risks across the entire research lifecycle. Both unintentional harms (AI safety) and deliberate threats (AI security) are addressed.
Participants will learn how to use AI tools safely, securely, and in compliance with ethical, integrity, and governance expectations. A key practical message of this presentation is that research groups and research projects need their own AI policies and checklists, since national and institutional guidelines are not domain- or project-specific. Practical tips for developing such policies and checklists will also be provided.
This presentation is a condensed version of a 6-hour workshop whose slides, Jupyter Book, and workbook are archived at Zenodo: Shigapov, R. (2025). Safe and secure use of AI in research projects. Zenodo. https://doi.org/10.5281/zenodo.17940942. Jupyter Book: https://shigapov.github.io/safe_ai/.
