Meeting report: “Challenges Posed by AI for the Work of Research Ethics Committees”. Conference, 2024, Hannover, DE

Andreas Brenneis*, 1, Tara Burden1

* Corresponding author: andreas.brenneis@tu-darmstadt.de

1 Department of Philosophy, Technical University of Darmstadt, Darmstadt, DE

© 2025 by the authors; licensee oekom. This Open Access article is published under a Creative Commons Attribution 4.0 International Licence (CC BY).

TATuP 34/1 (2025): p. 70–71, https://doi.org/10.14512/tatup.7159

Published online: 21. 3. 2025

From August 7 to 9, 2024, the theme week on the ethics of science ‘Ethical Dimensions of AI Research and AI in Research’ took place at the Xplanatorium Herrenhausen in Hannover. As part of this event, the symposium ‘Challenges Posed by AI for the Work of Research Ethics Committees: Issues in Evaluation, Resilience of Professional Standards, and Procedural Questions (AI@ResearchEthics)’ was funded by the Volkswagen Foundation. The symposium was organized by Petra Gehring and Andreas Brenneis (both affiliated with TU Darmstadt and the Centre Responsible Digitality, ZEVEDI). Its aim was to discuss the working conditions and evaluation criteria of research ethics committees (RECs) in light of the rapidly evolving possibilities for employing artificial intelligence (AI) in research processes across nearly all disciplines.

Assessing ethical challenges for research ethics

The invited experts were all closely familiar with the practical realities of the current institutionalized landscape of research ethics and could therefore assess the challenges that the growing use of AI tools in academic research poses for research ethics. The primary goal was to exchange experiences and to establish an overview of the challenges RECs will face. This was seen as a necessary step toward an evidence-based foundation for developing measures to cope with these challenges. [1] This focus on the specific challenges ethics committees face in assessing AI research projects formed the basis of the participants’ discussions. About 25 members of RECs with various areas of expertise attended the symposium, including specialists in medical research, computer science, the humanities, social sciences, and engineering, as well as members of committees with an interdisciplinary orientation. In addition, experts from research funding agencies, political bodies, and the Leopoldina contributed to the symposium.

Close linkage of data and algorithms in the use of artificial intelligence

The symposium opened with Gabriele Gramelsberger’s (RWTH Aachen) insightful keynote ‘AI as a Research Actor – Ethical and Epistemic Challenges’, in which she outlined recent developments in research on and with AI and thereby established common ground for the subsequent discussions. The main part of the symposium was devoted to fifteen anonymized case studies contributed by participants from their respective ethics committees. The cases were presented in highly abstracted form, allowing the ethical dilemmas to be clearly identified and discussed while ensuring that they could not be traced back to specific instances. The critical analysis of these cases fostered an intense exchange of experiences, allowing practitioners from RECs and experts in science management and research ethics to share diverse perspectives and create a common space for addressing the challenges AI poses to the work of RECs. The cases illustrated various ways in which AI can give rise to ethical challenges within the research process.

Among the topics discussed were training data for stress detection in autonomous vehicles, the recording of highly sensitive data during psychiatric therapy sessions, the use of AI in screening for rare diseases, and AI applications in qualitative social research. A recurring theme was the close intertwining of data and algorithms in the use of AI, where reliance on data can give rise to ethical issues that extend well beyond data protection. The reasons are manifold, ranging from the use of proprietary software to the potential re-identification of anonymized data.

Established structures and new technologies

A frequently debated point was that research projects involving AI are often planned and executed even when established, less data-intensive methods exist, leading to redundant experiments. Another critical ethical issue was the use of software that can only be evaluated ex post, which becomes particularly problematic when informed consent from participants is required: participants are asked to consent without a complete understanding of the potential outcomes that AI might generate. In other cases, algorithms cannot be validated without being trained on data, which may conflict with individuals’ right not to know certain outcomes.

This discussion format, centered on use cases, proved highly effective, allowing participants to learn from each other’s challenges and gain new insights for their own work. It became evident that medicine, with its well-established medical ethics framework, has set many standards applicable to the ethical evaluation of new and potentially disruptive technologies such as AI. However, it was equally clear that AI poses significant challenges to the criteria and procedures developed in medical ethics thus far. One central question that emerged was how to integrate computer science expertise into these established structures.

Research ethics should play a central role as AI becomes more prominent in scientific research.

Artificial intelligence as research object and working tool

The concluding highlight of the symposium was a panel discussion, moderated by Jan-Martin Wiarda, in which the outcomes of the event were comprehensively reviewed. In a ‘fishbowl’ format, experts debated the conference’s key insights. One major focus was clarifying the term ‘AI’, stressing that it refers both to a tool and to a distinct field of research; the importance of clear definitions was emphasized to prevent misunderstandings in interdisciplinary discussions. Another issue raised was the underrepresentation of computer science expertise on ethics committees. A central concern for participants was the development of effective formats and structures for ethical review processes, as well as supporting committees without AI expertise by integrating the necessary knowledge. It was noted that AI primarily functions as an automation tool, which risks sidelining human involvement in decision-making. Most AI solutions currently in use are provided by commercial vendors, raising challenges regarding transparency and control, particularly where decision-making processes lack clarity. Moreover, many institutions either lack ethics committees altogether or are only in the early stages of establishing them.

All participants agreed that there is a clear need to drive change and make these bodies an integral part of the research infrastructure. It was also emphasized that RECs should be seen not primarily as moral authorities but as instruments for ensuring research quality. In the context of AI, rigorous quality assurance is crucial, especially given the rapid evolution of AI research, which in some cases pushes the limits of established research ethics practices. There was broad consensus that research ethics should play a central role as AI becomes more prominent in scientific research. However, the focus is not on the moral evaluation of research projects per se; rather, research ethics should critically assess the scientific quality and integrity of projects, particularly where other bodies fail to meet this responsibility. In doing so, research ethics enables science to reflect on itself: its conditions, objectives, and tools. By explicitly advocating for a stronger role of research ethics and its institutionalization in response to the societal and scientific challenges posed by AI, the symposium made a significant contribution to the ongoing debate on AI technology assessment. It became clear that AI, as a disruptive technology, impacts not only society but also the research landscape, highlighting the urgent need for critical reflection on scientific standards.

Practical implications

As a follow-up to the symposium, the ZEVEDI guideline supporting ethics committees in evaluating AI research projects will be revised and updated. This guideline, conceived as a flexible living document, will be refined based on the insights and discussions shared at the event. Particular attention will be given to disciplines that were underrepresented in the original policy paper, such as engineering. In addition, various approaches and potential solutions will be incorporated and critically evaluated, including concepts like ‘Ethical Source Licenses’ (OES 2024) or the idea of a ‘Nonviolent Public License’ (Thufie 2024). As with the first edition, the revised document will be made freely available online to ethics committees and the academic community via the website of the Centre Responsible Digitality. A recording of the panel discussion, along with a graphic recording of the symposium, is also available on the website of ZEVEDI’s project group ‘Regulatory Theories of Artificial Intelligence’ (ZEVEDI 2023b).

Footnotes

[1]   The organizers had already developed a guideline to assist RECs in evaluating AI research projects (ZEVEDI 2023a).

References

OES – Organization for Ethical Source (2024): Ethical source licenses. Available online at https://ethicalsource.dev/licenses/, last accessed on 24.10.2024.

Thufie (2024): Nonviolent public license. Available online at https://git.pixie.town/thufie/npl-builder/src/branch/main/nvpl.md, last accessed on 24.10.2024.

ZEVEDI – Zentrum verantwortungsbewusste Digitalisierung (2023a): Research ethics for AI research projects. Guidelines to support the work of ethics committees at universities. Available online at https://zevedi.de/wp-content/uploads/2023/02/ZEVED_AI-Research-Ethics_web_2023.pdf, last accessed on 24.10.2024.

ZEVEDI (2023b): Regulatory theories of Artificial Intelligence. Available online at https://zevedi.de/en/topics/regulatory-theories-of-ai/, last accessed on 24.10.2024.