<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article
  PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD with MathML3 v1.2 20190208//EN" "JATS-journalpublishing1-mathml3.rng">
<article xmlns:xlink="http://www.w3.org/1999/xlink"
         article-type="research-article"
         dtd-version="1.2"
         xml:lang="en"><?letex RNG_JATS-journalpublishing1-mathml3 ok?>
   <front>
      <journal-meta>
         <journal-id/>
         <journal-title-group>
            <journal-title>TATuP – Journal for Technology Assessment in Theory and Practice</journal-title>
         </journal-title-group>
         <issn pub-type="ppub">2568-020X</issn>
      </journal-meta>
      <article-meta>
         <article-id>7226</article-id>
         <article-id pub-id-type="doi">10.14512/tatup.7226</article-id>
         <article-categories>
            <subj-group>
               <subject>Research Article</subject>
            </subj-group>
            <subj-group>
               <subject/>
            </subj-group>
         </article-categories>
         <title-group>
            <article-title xml:lang="en">Human-centered design of artificial-intelligence-assisted work systems in healthcare</article-title>
            <subtitle xml:lang="en">Findings from multi-stakeholder dialogues</subtitle>
            <trans-title-group>
               <trans-title xml:lang="de">Menschengerechte Gestaltung KI-gestützter Arbeitssysteme im Gesundheitswesen</trans-title>
               <trans-subtitle xml:lang="de">Erkenntnisse aus Multi-Stakeholder-Dialogen</trans-subtitle>
            </trans-title-group>
         </title-group>
         <contrib-group>
            <contrib contrib-type="author"
                     corresp="yes"
                     id="Au1"
                     xlink:href="#Aff1 Aff2">
               <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-2170-386X</contrib-id>
               <name name-style="western">
                  <surname>Schlicht</surname>
                  <given-names>Larissa</given-names>
               </name>
               <address>
                  <email>larissa.schlicht@partner.kit.edu</email>
               </address>
               <bio>
                  <boxed-text id="FPar1" specific-use="Style1">
                     <caption>
                        <title>Larissa Schlicht</title>
                     </caption>
                     <p>holds a Master’s degree in Cognitive Science and a Bachelor’s degree in Philosophy &amp; Economics. She was a research assistant at the German Federal Institute for Occupational Safety and Health from 2019 to 2024. Currently, she is pursuing her doctorate (Dr. phil.) at the Karlsruhe Institute of Technology, focusing on the human-centered design of AI-assisted healthcare systems.</p>
                     <fig id="Figa">
                        <label/>
                        <caption>
                           <title/>
                        </caption>
                        <graphic specific-use="Print" xlink:href="910000_2025_7226_Figa_Print.eps"/>
                        <graphic specific-use="HTML" xlink:href="910000_2025_7226_Figa_HTML.png"/>
                     </fig>
                  </boxed-text>
               </bio>
               <aff id="Aff1">
                  <institution>Karlsruhe Institute of Technology</institution>
                  <institution content-type="dept">Faculty of Humanities and Social Sciences</institution>
                  <addr-line>
                     <city>Karlsruhe</city>
                     <country>Germany</country>
                  </addr-line>
               </aff>
            </contrib>
            <aff id="Aff2">
               <institution>Federal Institute for Occupational Safety and Health</institution>
               <addr-line>
                  <city>Dresden</city>
                  <country>Germany</country>
               </addr-line>
            </aff>
         </contrib-group>
         <pub-date date-type="pub">
            <day>15</day>
            <month>12</month>
            <year>2025</year>
         </pub-date>
         <fpage>59</fpage>
         <lpage>64</lpage>
         <permissions>
            <copyright-year>2025</copyright-year>
            <copyright-holder>by the author(s); licensee oekom</copyright-holder>
            <license>
<license-p>This Open Access article is published under a Creative Commons Attribution 4.0 International License (CC BY).</license-p>
            </license>
         </permissions>
         <abstract abstract-type="summary" id="Abs1" xml:lang="en">
            <title>Abstract</title>
            <p>Artificial intelligence (AI)-assisted technologies, such as decision support and monitoring systems, hold the potential to significantly improve efficiency and quality of care in the health sector. However, given the impact that such technologies can also have on work requirements and the moral agency of healthcare personnel, it is imperative – from an occupational safety and health perspective – to incorporate established criteria for human-centered work design and ethical design criteria into technology development throughout the entire life cycle. Existing AI guidelines and regulations such as the EU’s AI Act address this imperative; however, suitable approaches for effectively integrating corresponding criteria into risk assessment and compliance processes are still lacking. This article presents the methodological approach and key findings from two multi-stakeholder dialogues, which identify starting points for the human-centered development of AI-assisted healthcare technologies.</p>
         </abstract>
         <abstract abstract-type="summary" id="Abs2" xml:lang="de">
            <title>Zusammenfassung</title>
            <p>KI-gestützte Technologien wie Entscheidungsunterstützungs- oder Monitoringsysteme haben das Potenzial, die Effizienz und Versorgungsqualität im Gesundheitswesen deutlich zu verbessern. Da diese Technologien jedoch auch Einfluss auf die Arbeitsanforderungen und die moralische Entscheidungsfindung der Beschäftigten haben können, ist es – aus Perspektive des Arbeitsschutzes – geboten, Prinzipien der menschengerechten Arbeitsgestaltung und ethische Kriterien über den gesamten Lebenszyklus hinweg in die Technologieentwicklung einzubeziehen. Bestehende KI-Leitlinien und Regelwerke wie der EU AI Act greifen diesen Imperativ auf; bislang fehlen jedoch geeignete Ansätze, um entsprechende Anforderungen wirksam in Risikoanalyse- und Compliance-Prozesse zu integrieren. Der Artikel beschreibt das methodische Vorgehen und zentrale Ergebnisse zweier Multi-Stakeholder-Dialoge, die Ansatzpunkte für eine menschengerechte Entwicklung KI-assistierter Technologien im Gesundheitswesen aufzeigen.</p>
         </abstract>
         <kwd-group>
            <compound-kwd>
               <compound-kwd-part content-type="code">heading</compound-kwd-part>
               <compound-kwd-part content-type="text">Keywords</compound-kwd-part>
            </compound-kwd>
            <compound-kwd>
               <compound-kwd-part content-type="code"/>
               <compound-kwd-part content-type="text">artificial intelligence</compound-kwd-part>
            </compound-kwd>
            <compound-kwd>
               <compound-kwd-part content-type="code"/>
               <compound-kwd-part content-type="text">healthcare</compound-kwd-part>
            </compound-kwd>
            <compound-kwd>
               <compound-kwd-part content-type="code"/>
               <compound-kwd-part content-type="text">human-centered work design</compound-kwd-part>
            </compound-kwd>
            <compound-kwd>
               <compound-kwd-part content-type="code"/>
               <compound-kwd-part content-type="text">participatory technology assessment</compound-kwd-part>
            </compound-kwd>
         </kwd-group>
      </article-meta>
   </front>
   <body>
      <sec id="Sec1">
         <label>1</label>
         <title>Introduction</title>
         <p>Imagine the following scenario (Schlicht 2025)<fn id="Fn1">
               <p>This scenario is reproduced from the main text of the author’s dissertation.</p>
            </fn>: <italic>A university hospital adopts an artificial intelligence (AI)-assisted documentation system that integrates speech recognition and clinical decision support to reduce administrative burden and free up time for patient care. The system allows staff to dictate patient data, which is then transcribed and analyzed to generate treatment recommendations. To support work-integrated learning, the system provides explanatory feedback when care deviates from established guidelines. Initially seen as helpful, the system increasingly flags deviations, having learned that stricter feedback improves adherence. As a result, caregivers’ responsiveness to individual patients’ needs declines, with decision-making shifting from context-specific relational care to protocol adherence. Nurses report declining job control and growing moral distress, both of which contribute to greater psychological strain and job dissatisfaction.</italic>
         </p>
         <p>The situation described above is fictional but illustrates plausible occupational and ethical risks that may arise when AI tools are implemented to support healthcare personnel without having been designed in a manner that adequately considers human-centered design (HCD) criteria. Indeed, HCD criteria are currently not applied systematically throughout the development process of AI systems, raising concerns that the use of such technologies could weaken the relational and moral dimensions of healthcare by, for example, encouraging a shift from patient-centered care practices toward standardized interactions.</p>
         <p content-type="eyecatcher" specific-use="Style2">Healthcare has become a key sector for the advancement and integration of artificial intelligence.</p>
<p>Healthcare has become a key sector for the advancement and integration of AI (OECD <xref ref-type="bibr" rid="CR10">2019</xref>). AI-driven tools, including diagnostic, monitoring, and workflow optimization systems, are increasingly being integrated into clinical practice. Their implementation is intended to assist healthcare professionals (e.g., physicians, psychotherapists, physiotherapists, nurses) by automating routine tasks, enhancing diagnostic accuracy, streamlining treatment procedures, and enabling more personalized approaches to patient care (Sharma et al. <xref ref-type="bibr" rid="CR18">2022</xref>; Yakusheva et al. <xref ref-type="bibr" rid="CR25">2025</xref>). At the same time, they may also undermine the holistic approach to care that is fundamental to ethically responsible healthcare. While the healthcare sector is generally regarded as less amenable to automation, given that it is highly dependent on contextual information, AI integration may nonetheless impact a wide range of clinical practices, including those integral to patient-centered care (Yelne et al. <xref ref-type="bibr" rid="CR26">2023</xref>).</p>
<p>Against this backdrop, this article addresses the following research question: <italic>What strategies and instruments can facilitate the sustainable integration of HCD criteria into the development of AI systems in healthcare?</italic> To answer this, the article first outlines key challenges in integrating HCD criteria into AI development processes. It then introduces <italic>participatory technology assessment</italic> (pTA) as a methodological approach capable of addressing these challenges. Finally, the article presents the methodology behind and findings from two multi-stakeholder dialogues, identifying concrete pathways toward human-centered AI development in healthcare.</p>
      </sec>
      <sec id="Sec2">
         <label>2</label>
         <title>Current challenges in integrating human-centered design criteria into the design of artificial-intelligence-assisted healthcare systems</title>
         <p>While there is a growing consensus regarding the importance of considering occupational and ethical criteria during the design of AI systems, the practical incorporation of these criteria remains sporadic and limited (Tidjon and Khomh <xref ref-type="bibr" rid="CR20">2022</xref>; Vakkuri et al. <xref ref-type="bibr" rid="CR22">2020</xref>). Numerous frameworks, including international guidelines (Corrêa et al. <xref ref-type="bibr" rid="CR4">2023</xref>) as well as regulatory instruments (e.g., the EU Artificial Intelligence Act (Regulation (EU) <xref ref-type="bibr" rid="CR5">2024</xref>/1689)) and standards (e.g., ISO/IEC 42001:2023, ISO/IEC 38507:2022 or IEEE 7000:2021), underscore the importance of incorporating HCD criteria into AI system design. However, it remains largely uncertain what specific approaches could support the effective consideration of such non-technical design requirements – especially in complex sociotechnical domains like healthcare (WHO <xref ref-type="bibr" rid="CR24">2023</xref>).</p>
<p>A central difficulty lies in the typically generalized formulation of HCD criteria, which limits their applicability in concrete design and risk assessment processes (Sanderson et al. <xref ref-type="bibr" rid="CR14">2023</xref>). Another challenge stems from the <italic>adaptive nature of many AI systems,</italic> further complicating the criteria’s translation into practical technology development processes. As illustrated in the hypothetical scenario presented above, AI algorithms may evolve in response to dynamic data environments and user interactions, making it difficult – if not impossible – to fully anticipate the risks associated with their deployment during technology design. However, traditional risk management methodologies typically lack iterative mechanisms for the <italic>continuous</italic> assessment and mitigation of emerging risks, i.e., they do not systematically incorporate observations of threats and hazards encountered in deployment contexts back into ongoing design and refinement processes (Siedel et al. <xref ref-type="bibr" rid="CR19">2021</xref>). As a result, within the scope of conventional methodologies, HCD criteria risk being rendered ineffective by the dynamic and context-specific challenges posed by adaptive AI systems.</p>
         <p>In response to these limitations, the AI Act stipulates mandatory ongoing risk management processes to identify and minimize emerging risks – at least for AI systems classified as high-risk, such as patient triage systems and emotion-recognition technologies (Recital 65, Article 9, EU <xref ref-type="bibr" rid="CR5">2024</xref>).<fn id="Fn2">
               <p>In parallel, several standardization organizations have developed technical frameworks aimed at promoting a lifecycle-oriented approach to AI risk management (e.g., DIN SPEC 92001 series; IEEE 7000:2021; ISO/IEC 23894:2023; ISO/IEC 38507:2022; ISO/IEC 42001:2023).</p>
</fn> However, there is still a need to identify effective intervention points throughout the AI lifecycle at which the consideration of HCD criteria could be verified (Ortega-Bolaños et al. <xref ref-type="bibr" rid="CR11">2024</xref>; Prem <xref ref-type="bibr" rid="CR13">2023</xref>). In addition, although the AI Act implicitly promotes human-centered work design, it falls short of explicitly mandating such design criteria. Nevertheless, emerging regulatory approaches present an opportunity to integrate internationally established <italic>criteria for human-centered work design</italic> at the earliest AI lifecycle phases.</p>
      </sec>
      <sec id="Sec3">
         <label>3</label>
         <title>Participatory technology assessment as a methodological framework</title>
<p>The integration of HCD criteria, such as <italic>respect for autonomy</italic> or <italic>work-integrated learning</italic> (Beauchamp and Childress <xref ref-type="bibr" rid="CR3">2019</xref>; Ulich <xref ref-type="bibr" rid="CR21">2011</xref>), into verifiable measures that can be implemented and assessed throughout the development and implementation of AI systems faces distinct challenges. While existing risk assessment and mitigation strategies for technology development largely rely on quantifiable parameters, HCD criteria are difficult to express quantitatively (Poszler et al. <xref ref-type="bibr" rid="CR12">2024</xref>). Accordingly, there are to date only limited standardized procedures for evaluating the extent to which ‘soft’ criteria are effectively considered during (re)design processes. Moreover, the consideration of specifically ethical criteria requires risk management procedures capable of addressing the specific needs of individuals affected by AI systems (Mittelstadt <xref ref-type="bibr" rid="CR9">2019</xref>). Against this backdrop, it is clear that the integration of HCD criteria into verifiable measures cannot rely solely on technical expertise; properly considering these criteria requires expert knowledge from various disciplines, including human-computer interaction, occupational psychology, and applied technology ethics (Schmager et al. <xref ref-type="bibr" rid="CR16">2025</xref>).</p>
         <p content-type="eyecatcher" specific-use="Style2">Participatory technology assessment offers a particularly suitable framework for informing the development of approaches that consider human-centered design criteria.</p>
<p>pTA offers a particularly suitable framework for informing the development of approaches that effectively consider HCD criteria – as well as their inherent sociotechnical complexities – in technology (re)development processes. By engaging experts across multiple disciplines as well as direct stakeholders, pTA enables the use of contextual insights to enhance the accuracy of risk assessment and mitigation efforts (Grunwald <xref ref-type="bibr" rid="CR8">2018</xref>; Grobe <xref ref-type="bibr" rid="CR7">2021</xref>). Importantly, the pTA format employed in the multi-stakeholder dialogues was designed to create a reflective space in which diverse perspectives could be articulated, negotiated, and constructively engaged, thereby fostering co-learning, knowledge integration, and the joint development of various actionable strategies and instruments.</p>
      </sec>
      <sec id="Sec4">
         <label>4</label>
         <title>Procedure</title>
<p>To identify strategies for systematically assessing and mitigating the occupational and ethical risks posed by AI systems in the healthcare sector, we conducted two multi-stakeholder dialogues in March and May 2024.<fn id="Fn3">
               <p>A two-day format was employed for each dialogue to facilitate comprehensive exchange and iterative reflection.</p>
            </fn> The dialogues were carried out as part of the research project <italic>F2574: KI-basierte Systeme im Gesundheitswesen – Werkstattgespräche zur Entwicklung eines multidisziplinären und gemeinwohlorientierten Gestaltungsansatzes aus Perspektive des Arbeitsschutzes</italic> of the Federal Institute for Occupational Safety and Health (BAuA <xref ref-type="bibr" rid="CR2">n.d.</xref>).</p>
<p>Participants were selected through a targeted recruitment process to ensure the representation of a diverse array of stakeholder groups, disciplines, and institutional affiliations, supported by snowball sampling through existing professional networks. The final group of 25 participants included, among others, researchers (in the fields of applied machine learning, digital health engineering, occupational psychology, technology ethics, nursing science, technology assessment, and technology governance), policy officers from ministries and regulatory agencies, representatives of both employee and employer organizations, specialists from statutory insurance institutions, and managers from companies in the field of nursing software and digital healthcare consulting (see Table <xref ref-type="table" rid="Tab1">1</xref>).</p>
         <table-wrap id="Tab1">
            <label>Table 1</label>
            <caption xml:lang="en">
               <title>Overview of stakeholder groups and participants’ roles and affiliations. <italic>Source: author’s own compilation</italic>
               </title>
            </caption>
            <table>
               <colgroup>
                  <col width="14.35*"/>
                  <col width="18.92*"/>
                  <col width="66.74*"/>
               </colgroup>
               <thead>
                  <tr>
                     <td style="width:auto">
                        <p>Stakeholder group</p>
                     </td>
                     <td style="width:auto">
                        <p>Roles</p>
                     </td>
                     <td style="width:auto">
                        <p>Institutions</p>
                     </td>
                  </tr>
               </thead>
               <tbody>
                  <tr>
                     <td style="width:auto">
                        <p>Research and academia</p>
                     </td>
                     <td style="width:auto">
                        <p>Researchers (e.g., professors, senior scientists)</p>
                     </td>
                     <td style="width:auto">
                        <p>Federal Institute for Occupational Safety and Health; Fraunhofer HHI; Fraunhofer IESE; IFA – Institute for Occupational Safety and Health of the DGUV; Karlsruhe Institute of Technology; OTH Regensburg; TU Dresden; University of Osnabrück; University of Tübingen; WZB – Berlin Social Science Center</p>
                     </td>
                  </tr>
                  <tr>
                     <td style="width:auto">
                        <p>Policy and public administration</p>
                     </td>
                     <td style="width:auto">
                        <p>Policy officers/advisors</p>
                     </td>
                     <td style="width:auto">
                        <p>Federal Ministry of Health; Federal Network Agency; Saxon State Chancellery</p>
                     </td>
                  </tr>
                  <tr>
                     <td style="width:auto">
                        <p>Professional associations and social partners</p>
                     </td>
                     <td style="width:auto">
                        <p>Representatives of employer/employee organizations, policy officers</p>
                     </td>
                     <td style="width:auto">
                        <p>BGW – German Social Accident Insurance Institution for the Health and Welfare Services; German Hospital Federation; ver.di – United Services Trade Union</p>
                     </td>
                  </tr>
                  <tr>
                     <td style="width:auto">
                        <p>Health insurance organizations</p>
                     </td>
                     <td style="width:auto">
                        <p>Policy specialists</p>
                     </td>
                     <td style="width:auto">
                        <p>AOK Federal Association</p>
                     </td>
                  </tr>
                  <tr>
                     <td style="width:auto">
                        <p>Industry and consulting</p>
                     </td>
                     <td style="width:auto">
                        <p>General managers</p>
                     </td>
                     <td style="width:auto">
                        <p>Companies providing nursing software and digital healthcare consulting</p>
                     </td>
                  </tr>
               </tbody>
            </table>
         </table-wrap>
<p>The <italic>first dialogue</italic> was structured around the question: “At which phases of the AI lifecycle should action be taken to ensure the human-centered design of AI systems in healthcare?” Following a brief introduction of the project’s objectives, participants collaboratively mapped existing and emerging AI applications in healthcare, reflecting on their technical functionalities, anticipated implications for work processes, and ethical dynamics. Participants were subsequently introduced to a set of HCD criteria based on (i) established <italic>criteria for human-centered work design</italic> aimed at promoting personality development, health maintenance, and employee performance (e.g., ISO 2016, 2019, 2024), and (ii) the <italic>principles of biomedical ethics</italic> by Beauchamp and Childress (<xref ref-type="bibr" rid="CR3">2019</xref>), a widely recognized framework for ethical assessment within healthcare domains. The latter criteria also constitute the normative foundation of the High-Level Expert Group on AI’s “Ethics Guidelines for Trustworthy AI” (AI HLEG <xref ref-type="bibr" rid="CR1">2019</xref>), a central reference point for ethical AI design under the AI Act. Using selected examples of AI systems, participants explored how these criteria can be sustainably applied across the different stages of the AI lifecycle (i.e., from requirements specification to operational monitoring). In this context, the goal was not to develop isolated strategies for each criterion but to identify approaches that enable the general integration of HCD criteria into verifiable measures.</p>
<p>In small-group follow-up sessions, the initial insights were deepened and specified using structured digital collaboration formats. The <italic>second dialogue</italic>, building directly on the results of the first session, was guided by the following question: “What are particularly promising strategies, and how can they be implemented?” Drawing on key intervention points identified via finger voting, participants developed both individual and collective strategies for integrating HCD criteria into occupational safety and health assessments and broader AI governance mechanisms. Through iterative reflection formats, such as world cafés and listening circles, they then focused on implementation planning, clarifying necessary actions and responsible actors. In line with the participatory orientation of the overall approach, the design of these sessions deliberately avoided predefined outcome structures. Instead, the process was participant-driven, allowing the content and form of the results to emerge collaboratively. This openness also fostered joint ownership of the outcomes and encouraged participants to initiate follow-up activities within their own organizational and professional networks. All outcomes were recorded using Metaplan boards.</p>
      </sec>
      <sec id="Sec5">
         <label>5</label>
         <title>Results from multi-stakeholder dialogues</title>
<p>Participants developed a broad range of proposals for advancing human-centered AI risk management processes in healthcare settings. A central recommendation was the establishment of iterative feedback loops between AI system providers and developers – including for systems not classified as high-risk. The participants emphasized that such iterative risk management processes are closely linked to the twofold requirement to enhance transparency in system design and functioning and to ensure that system characteristics are communicated in an accessible manner for a wide range of stakeholders. On a similar note, special attention was paid to the post-deployment phase, where continuous monitoring and adaptive feedback mechanisms were considered to be essential to address emerging risks. The inclusion of both user and affected stakeholder perspectives was recognized as particularly important in this regard. As part of ongoing collaboration, the participants eventually also began to develop concrete instruments and to identify further intervention points. The key insights – along with the associated follow-up activities – are summarized in Table <xref ref-type="table" rid="Tab2">2</xref>.</p>
         <table-wrap id="Tab2">
            <label>Table 2</label>
            <caption xml:lang="en">
               <title>Summary of key insights from the workshops on strategies for the effective integration of HCD criteria throughout the AI development process, along with corresponding follow-up activities. <italic>Source: author’s own compilation</italic>
               </title>
            </caption>
            <table>
               <colgroup>
                  <col width="29.86*"/>
                  <col width="70.14*"/>
               </colgroup>
               <thead>
                  <tr>
                     <td style="width:auto">
                        <p>Key insights</p>
                     </td>
                     <td style="width:auto">
                        <p>Follow-up activities, e.g.</p>
                     </td>
                  </tr>
               </thead>
               <tbody>
                  <tr>
                     <td style="width:auto">
                        <p>Iterative feedback loops between AI system providers and developers – critical to managing the evolving risks associated with adaptive systems – require explainable AI systems.</p>
                     </td>
                     <td style="width:auto">
                        <p>Collaborative publication by Gilbert et al. (<xref ref-type="bibr" rid="CR6">2025</xref>) on the development of standardized model cards for healthcare AI applications, which detail aspects such as the intended use, target patient populations, functionalities, and known risks, to enhance transparency (as with pharmaceutical leaflets). The authors advocate for layered, verifiable information to prevent misleading claims. Additionally, they highlight the importance of integrating model cards with existing regulatory frameworks and ensuring that they are user-friendly for various stakeholders, including patients and healthcare providers.</p>
                     </td>
                  </tr>
                  <tr>
                     <td rowspan="3" style="width:auto">
<p>Developers should use iterative prototyping methodologies that allow for ongoing adjustments and include real-time auditability and feedback mechanisms that enable healthcare professionals, among others, to report adverse events and suggest improvements.</p>
                     </td>
                     <td style="width:auto">
                        <p>Collaborative publication by Schönfelder et al. (<xref ref-type="bibr" rid="CR17">2025</xref>) on a framework for the development of in-house AI systems in hospital settings through multi-stakeholder collaboration. By identifying key stakeholders, outlining their respective contributions, and highlighting professional strategies with which to build consensus, the proposed framework aims to ensure that potential barriers to aligning AI systems with HCD criteria are acknowledged early and addressed through joint problem-solving.</p>
                     </td>
                  </tr>
                  <tr>
                     <td style="width:auto">
                        <p>Development of a checklist to assess the impact of AI technology on workplace stressors and resources in healthcare settings, intended for use in psychological risk assessments (currently under development).</p>
                     </td>
                  </tr>
                  <tr>
                     <td style="width:auto">
                        <p>Initiation of a conference on ethical evaluation tools for AI with a focus on the suitability of existing instruments, such as MEESTAR (Weber <xref ref-type="bibr" rid="CR23">2015</xref>), and adaptability to AI-specific challenges. Individual symposia may be initiated by dialogue participants or by other stakeholders.</p>
                     </td>
                  </tr>
                  <tr>
                     <td style="width:auto">
                        <p>Ongoing exchange on AI regulation among stakeholders is essential for proactively contributing design perspectives and practical expertise, particularly in the upcoming process of translating the EU AI Act into national law.</p>
                     </td>
                     <td style="width:auto">
                        <p>Initiation of a regular exchange format on AI regulation to promote ongoing dialogue and knowledge sharing among the participants.</p>
                     </td>
                  </tr>
                  <tr>
                     <td style="width:auto">
                        <p>To ensure alignment between technical capabilities and existing workflows and job demands, it is essential to involve direct stakeholders early and consistently in the AI design process.</p>
                     </td>
                     <td style="width:auto">
                        <p>Initiation of roundtable discussions with healthcare professionals (e.g., nurses, physicians) to identify their needs and requirements early in the design process, thereby supporting the development of AI systems that are adapted to specific healthcare contexts.</p>
                     </td>
                  </tr>
               </tbody>
            </table>
         </table-wrap>
      </sec>
      <sec id="Sec6">
         <label>6</label>
         <title>Conclusion</title>
         <p>The results of these multi-stakeholder dialogues underscore the value offered by participatory frameworks like pTA in developing effective strategies for human-centered AI design in healthcare. While involving diverse stakeholders is a complex and resource-intensive task, it proved to be highly beneficial in our workshops. The discourse among technical, social science, regulatory, and practice-oriented perspectives facilitated the development of various integrative pathways through which to embed work-related and ethical considerations into AI design – all in a manner tailored to the specific requirements of the healthcare sector. Through the follow-up activities, the participants began to collaboratively develop solutions, including measures to improve system transparency and comprehensibility, iterative prototyping methodologies, and checklists for psychological risk assessments. Moreover, further exchange formats were initiated, including with potential users of AI systems in healthcare, to promote ongoing dialogue and knowledge sharing.</p>
         <p content-type="eyecatcher" specific-use="Style2">… future activities may place greater emphasis on involving stakeholders directly engaged as healthcare professionals.</p>
         <p>However, the developed tools have not yet been empirically tested in real-world development and deployment scenarios. Their practical viability and overall effectiveness therefore remain uncertain, particularly with regard to the organizational, technical, and ethical challenges that may arise in applied settings. Relatedly, although most participants brought expertise relevant to the healthcare sector, future activities may place greater emphasis on involving stakeholders directly engaged as healthcare professionals. Their inclusion would help to ensure that perspectives from clinical practice are more fully represented and that proposed measures are grounded in the realities of healthcare delivery.</p>
         <p>Furthermore, future initiatives should investigate how context-sensitive, participatory frameworks for risk management can be scaled and institutionalized more broadly, including with a view to the national rollout of the EU AI Act. Ultimately, establishing such locally grounded risk management mechanisms will become increasingly critical as multi-agent AI systems (Moritz et al. <xref ref-type="bibr" rid="CR99">2025</xref>) are more frequently deployed in administrative processes as well as in clinical decision making. These systems are characterized by the interaction of multiple (semi-)autonomous AI agents, often incorporating large language models (e.g., GPT‑4, Gemini). Since such systems have a heightened potential to adapt to changing environments, there is an ever-growing need for iterative governance mechanisms – ideally developed in close collaboration with relevant stakeholders – that can flexibly address emerging occupational and ethical risks across diverse application contexts in the healthcare sector.</p>
      </sec>
   </body>
   <back>
      <ack>
         <p>
            <boxed-text id="FPar2" specific-use="Style1">
               <caption>
                  <title>Funding</title>
               </caption>
               <p>This article received no funding.</p>
            </boxed-text>
         </p>
         <p>
            <boxed-text id="FPar3" specific-use="Style1">
               <caption>
                  <title>Competing interests</title>
               </caption>
               <p>The author declares no competing interests.</p>
            </boxed-text>
         </p>
         <p>
            <boxed-text id="FPar4" specific-use="Style1">
               <caption>
                  <title>Ethical oversight</title>
               </caption>
               <p>The author confirms that all procedures were performed in compliance with relevant laws and institutional guidelines.</p>
            </boxed-text>
         </p>
      </ack>
      <ref-list id="Bib1">
         <title>References</title>
         <ref specific-use="2" id="CR1">
            <mixed-citation>AI HLEG – High-level Expert Group on AI (2019): Ethics guidelines for trustworthy AI. Brussels: European Commission. Available online at <ext-link xlink:href="https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419">https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419</ext-link>, last accessed on 25.09.2025.</mixed-citation>
         </ref>
         <ref specific-use="2" id="CR2">
            <mixed-citation>BAuA – Bundesanstalt für Arbeitsschutz und Arbeitsmedizin (n.d.): KI-basierte Systeme im Gesundheitswesen. Werkstattgespräche zur Entwicklung eines multidisziplinären und gemeinwohlorientierten Gestaltungsansatzes aus Perspektive des Arbeitsschutzes. Available online at <ext-link xlink:href="https://baua.de/DE/Forschung/Forschungsprojekte/f2574">https://baua.de/DE/Forschung/Forschungsprojekte/f2574</ext-link>, last accessed on 25.09.2025.</mixed-citation>
         </ref>
         <ref id="CR3">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Beauchamp</surname>
                        <given-names>T</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Childress</surname>
                        <given-names>J</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2019</year>
                  </date>
                  <source content-type="BookTitle">Principles of biomedical ethics</source>
                  <publisher-name>Oxford University Press</publisher-name>
                  <publisher-loc>Oxford</publisher-loc>
               </element-citation>
               <mixed-citation>Beauchamp, Tom; Childress, James (2019): Principles of biomedical ethics. Oxford: Oxford University Press.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR4">
            <citation-alternatives>
               <element-citation publication-type="journal">
                  <name content-type="author">
                     <surname>Corrêa</surname>
                     <given-names>N</given-names>
                  </name>
                  <date>
                     <year>2023</year>
                  </date>
                  <article-title>Worldwide AI ethics. A review of 200 guidelines and recommendations for AI governance</article-title>
                  <issue>10</issue>
                  <page-range>100857</page-range>
                  <volume-id content-type="bibarticledoi">10.1016/j.patter.2023.100857</volume-id>
                  <source content-type="journal">Patterns</source>
                  <volume>4</volume>
               </element-citation>
               <mixed-citation>Corrêa, Nicholas et al. (2023): Worldwide AI ethics. A review of 200 guidelines and recommendations for AI governance. In: Patterns 4 (10), p. 100857. <ext-link xlink:href="https://doi.org/10.1016/j.patter.2023.100857">https://doi.org/10.1016/j.patter.2023.100857</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref specific-use="2" id="CR5">
            <mixed-citation>EU – European Union (2024): Regulation (EU) 2024/1689 of the European parliament and of the council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (artificial intelligence act). In: Official Journal of the European Union 2024/1689. Available online at <ext-link xlink:href="https://eur-lex.europa.eu/eli/reg/2024/1689">https://eur-lex.europa.eu/eli/reg/2024/1689</ext-link>, last accessed on 25.09.2025.</mixed-citation>
         </ref>
         <ref id="CR6">
            <citation-alternatives>
               <element-citation publication-type="journal">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Gilbert</surname>
                        <given-names>S</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Adler</surname>
                        <given-names>R</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Holoyad</surname>
                        <given-names>T</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Weicken</surname>
                        <given-names>E</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2025</year>
                  </date>
                  <article-title>Could transparent model cards with layered accessible information drive trust and safety in health AI?</article-title>
                  <issue>1</issue>
                  <page-range>124</page-range>
                  <volume-id content-type="bibarticledoi">10.1038/s41746-025-01482-9</volume-id>
                  <source content-type="journal">npj Digital Medicine</source>
                  <volume>8</volume>
               </element-citation>
               <mixed-citation>Gilbert, Stephen; Adler, Rasmus; Holoyad, Taras; Weicken, Eva (2025): Could transparent model cards with layered accessible information drive trust and safety in health AI? In: npj Digital Medicine 8 (1), p. 124. <ext-link xlink:href="https://doi.org/10.1038/s41746-025-01482-9">https://doi.org/10.1038/s41746-025-01482-9</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR7">
            <citation-alternatives>
               <element-citation publication-type="chapter">
                  <name content-type="author">
                     <surname>Grobe</surname>
                     <given-names>A</given-names>
                  </name>
                  <person-group person-group-type="editor">
                     <name content-type="editor">
                        <surname>Böschen</surname>
                        <given-names>S</given-names>
                     </name>
                     <name content-type="editor">
                        <surname>Grunwald</surname>
                        <given-names>A</given-names>
                     </name>
                     <name content-type="editor">
                        <surname>Krings</surname>
                        <given-names>B-J</given-names>
                     </name>
                     <name content-type="editor">
                        <surname>Rösch</surname>
                        <given-names>C</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2021</year>
                  </date>
                  <chapter-title>Partizipative TA in Transformationsprozessen. Analoge und digitale Ansätze inklusiver, prospektiver Verfahrungen der Beteiligung</chapter-title>
                  <page-range>352–373</page-range>
                  <volume-id content-type="bibchapterdoi">10.5771/9783748901990-352</volume-id>
                  <source content-type="BookTitle">Technikfolgenabschätzung. Handbuch für Wissenschaft und Praxis</source>
                  <publisher-name>Nomos</publisher-name>
                  <publisher-loc>Baden-Baden</publisher-loc>
               </element-citation>
               <mixed-citation>Grobe, Antje (2021): Partizipative TA in Transformationsprozessen. Analoge und digitale Ansätze inklusiver, prospektiver Verfahrungen der Beteiligung. In: Stefan Böschen, Armin Grunwald, Bettina-Johanna Krings and Christine Rösch (eds.): Technikfolgenabschätzung. Handbuch für Wissenschaft und Praxis. Baden-Baden: Nomos, pp. 352–373. <ext-link xlink:href="https://doi.org/10.5771/9783748901990-352">https://doi.org/10.5771/9783748901990-352</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR8">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <name content-type="author">
                     <surname>Grunwald</surname>
                     <given-names>A</given-names>
                  </name>
                  <date>
                     <year>2018</year>
                  </date>
                  <source content-type="BookTitle">Technology assessment in practice and theory</source>
                  <publisher-name>Routledge</publisher-name>
                  <publisher-loc>New York</publisher-loc>
                  <volume-id content-type="bibbookdoi">10.4324/9780429442643</volume-id>
               </element-citation>
               <mixed-citation>Grunwald, Armin (2018): Technology assessment in practice and theory. New York, NY: Routledge. <ext-link xlink:href="https://doi.org/10.4324/9780429442643">https://doi.org/10.4324/9780429442643</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR9">
            <citation-alternatives>
               <element-citation publication-type="journal">
                  <name content-type="author">
                     <surname>Mittelstadt</surname>
                     <given-names>B</given-names>
                  </name>
                  <date>
                     <year>2019</year>
                  </date>
                  <article-title>Principles alone cannot guarantee ethical AI</article-title>
                  <issue>11</issue>
                  <page-range>501–507</page-range>
                  <volume-id content-type="bibarticledoi">10.1038/s42256-019-0114-4</volume-id>
                  <source content-type="journal">Nature Machine Intelligence</source>
                  <volume>1</volume>
               </element-citation>
               <mixed-citation>Mittelstadt, Brent (2019): Principles alone cannot guarantee ethical AI. In: Nature Machine Intelligence 1 (11), pp. 501–507. <ext-link xlink:href="https://doi.org/10.1038/s42256-019-0114-4">https://doi.org/10.1038/s42256-019-0114-4</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref specific-use="2" id="CR99">
             <mixed-citation>Moritz, Michael; Topol, Eric; Rajpurkar, Pranav (2025): Coordinated AI agents for advancing healthcare. In: Nature Biomedical Engineering 9, pp. 432–438. <ext-link xlink:href="https://doi.org/10.1038/s41551-025-01363-2">https://doi.org/10.1038/s41551-025-01363-2</ext-link>
            </mixed-citation>
         </ref>
         <ref id="CR10">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <string-name>OECD – Organisation for Economic Co-operation and Development</string-name>
                  <date>
                     <year>2019</year>
                  </date>
                  <source content-type="BookTitle">Artificial intelligence in society</source>
                  <publisher-name>OECD Publishing</publisher-name>
                  <publisher-loc>Paris</publisher-loc>
                  <volume-id content-type="bibbookdoi">10.1787/eedfee77-en</volume-id>
               </element-citation>
               <mixed-citation>OECD – Organisation for Economic Co-operation and Development (2019): Artificial intelligence in society. Paris: OECD Publishing. <ext-link xlink:href="https://doi.org/10.1787/eedfee77-en">https://doi.org/10.1787/eedfee77-en</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR11">
            <citation-alternatives>
               <element-citation publication-type="journal">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Ortega-Bolaños</surname>
                        <given-names>R</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Bernal-Salcedo</surname>
                        <given-names>J</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Ortiz</surname>
                        <given-names>GM</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Galeano</surname>
                        <given-names>J</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Ruz</surname>
                        <given-names>G</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Tabares-Soto</surname>
                        <given-names>R</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2024</year>
                  </date>
                  <article-title>Applying the ethics of AI. A systematic review of tools for developing and assessing AI-based systems</article-title>
                  <issue>5</issue>
                  <page-range>110</page-range>
                  <volume-id content-type="bibarticledoi">10.1007/s10462-024-10740-3</volume-id>
                  <source content-type="journal">Artificial Intelligence Review</source>
                  <volume>57</volume>
               </element-citation>
               <mixed-citation>Ortega-Bolaños, Ricardo; Bernal-Salcedo, Joshua; Germán Ortiz, Mariana; Galeano, Julian; Ruz, Gonzalo; Tabares-Soto, Reinel (2024): Applying the ethics of AI. A systematic review of tools for developing and assessing AI-based systems. In: Artificial Intelligence Review 57 (5), p. 110. <ext-link xlink:href="https://doi.org/10.1007/s10462-024-10740-3">https://doi.org/10.1007/s10462-024-10740-3</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR12">
            <citation-alternatives>
               <element-citation publication-type="journal">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Poszler</surname>
                        <given-names>F</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Portmann</surname>
                        <given-names>E</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Lütge</surname>
                        <given-names>C</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2024</year>
                  </date>
                  <article-title>Formalizing ethical principles within AI systems. Experts’ opinions on why (not) and how to do it</article-title>
                  <issue>2</issue>
                  <page-range>937–965</page-range>
                  <volume-id content-type="bibarticledoi">10.1007/s43681-024-00425-6</volume-id>
                   <source content-type="journal">AI &amp; Ethics</source>
                  <volume>5</volume>
               </element-citation>
               <mixed-citation>Poszler, Franziska; Portmann, Edy; Lütge, Christoph (2024): Formalizing ethical principles within AI systems. Experts’ opinions on why (not) and how to do it. In: AI &amp; Ethics 5 (2), pp. 937–965. <ext-link xlink:href="https://doi.org/10.1007/s43681-024-00425-6">https://doi.org/10.1007/s43681-024-00425-6</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR13">
            <citation-alternatives>
               <element-citation publication-type="journal">
                  <name content-type="author">
                     <surname>Prem</surname>
                     <given-names>E</given-names>
                  </name>
                  <date>
                     <year>2023</year>
                  </date>
                  <article-title>From ethical AI frameworks to tools. A review of approaches</article-title>
                  <issue>3</issue>
                  <page-range>699–716</page-range>
                  <volume-id content-type="bibarticledoi">10.1007/s43681-023-00258-9</volume-id>
                  <source content-type="journal">AI &amp; Ethics</source>
                  <volume>3</volume>
               </element-citation>
                <mixed-citation>Prem, Erich (2023): From ethical AI frameworks to tools. A review of approaches. In: AI &amp; Ethics 3 (3), pp. 699–716. <ext-link xlink:href="https://doi.org/10.1007/s43681-023-00258-9">https://doi.org/10.1007/s43681-023-00258-9</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR14">
            <citation-alternatives>
               <element-citation publication-type="journal">
                  <name content-type="author">
                     <surname>Sanderson</surname>
                     <given-names>C</given-names>
                  </name>
                  <date>
                     <year>2023</year>
                  </date>
                  <article-title>AI ethics principles in practice. Perspectives of designers and developers</article-title>
                  <issue>2</issue>
                  <page-range>171–187</page-range>
                  <volume-id content-type="bibarticledoi">10.1109/TTS.2023.3257303</volume-id>
                  <source content-type="journal">IEEE Transactions on Technology and Society</source>
                  <volume>4</volume>
               </element-citation>
               <mixed-citation>Sanderson, Conrad et al. (2023): AI ethics principles in practice. Perspectives of designers and developers. In: IEEE Transactions on Technology and Society 4 (2), pp. 171–187. <ext-link xlink:href="https://doi.org/10.1109/TTS.2023.3257303">https://doi.org/10.1109/TTS.2023.3257303</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR16">
            <citation-alternatives>
               <element-citation publication-type="journal">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Schmager</surname>
                        <given-names>S</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Pappas</surname>
                        <given-names>I</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Vassilakopoulou</surname>
                        <given-names>P</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2025</year>
                  </date>
                  <article-title>Understanding human-centred AI. A review of its defining elements and a research agenda</article-title>
                  <issue>15</issue>
                  <page-range>3771–3810</page-range>
                  <volume-id content-type="bibarticledoi">10.1080/0144929X.2024.2448719</volume-id>
                  <source content-type="journal">Behaviour &amp; Information Technology</source>
                  <volume>44</volume>
               </element-citation>
               <mixed-citation>Schmager, Stefan; Pappas, Ilias; Vassilakopoulou, Polyxeni (2025): Understanding human-centred AI. A review of its defining elements and a research agenda. In: Behaviour &amp; Information Technology 44 (15), pp. 3771–3810. <ext-link xlink:href="https://doi.org/10.1080/0144929X.2024.2448719">https://doi.org/10.1080/0144929X.2024.2448719</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR17">
            <citation-alternatives>
               <element-citation publication-type="journal">
                  <name content-type="author">
                     <surname>Schönfelder</surname>
                     <given-names>A</given-names>
                  </name>
                  <date>
                     <year>2025</year>
                  </date>
                  <article-title>Collaborative and cooperative hospital digital transformation in the AI age. A framework compatible with European values</article-title>
                  <volume-id content-type="bibarticledoi">10.2196/preprints.80754</volume-id>
                  <source content-type="journal">JMIR Preprints</source>
               </element-citation>
               <mixed-citation>Schönfelder, Anett et al. (2025): Collaborative and cooperative hospital digital transformation in the AI age. A framework compatible with European values. In: JMIR Preprints. <ext-link xlink:href="https://doi.org/10.2196/preprints.80754">https://doi.org/10.2196/preprints.80754</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR18">
            <citation-alternatives>
               <element-citation publication-type="journal">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Sharma</surname>
                        <given-names>M</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Savage</surname>
                        <given-names>C</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Nair</surname>
                        <given-names>M</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Larsson</surname>
                        <given-names>I</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Svedberg</surname>
                        <given-names>P</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Nygren</surname>
                        <given-names>J</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2022</year>
                  </date>
                  <article-title>Artificial intelligence applications in health care practice. Scoping review</article-title>
                  <issue>10</issue>
                  <page-range>40238</page-range>
                  <volume-id content-type="bibarticledoi">10.2196/40238</volume-id>
                  <source content-type="journal">Journal of Medical Internet Research</source>
                  <volume>24</volume>
               </element-citation>
               <mixed-citation>Sharma, Malvika; Savage, Carl; Nair, Monika; Larsson, Ingrid; Svedberg, Petra; Nygren, Jens (2022): Artificial intelligence applications in health care practice. Scoping review. In: Journal of Medical Internet Research 24 (10), p. 40238. <ext-link xlink:href="https://doi.org/10.2196/40238">https://doi.org/10.2196/40238</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR19">
            <citation-alternatives>
               <element-citation publication-type="chapter">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Siedel</surname>
                        <given-names>G</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Voß</surname>
                        <given-names>S</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Vock</surname>
                        <given-names>S</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2021</year>
                  </date>
                  <chapter-title>An overview of the research landscape in the field of safe machine learning</chapter-title>
                  <volume-id content-type="bibchapterdoi">10.1115/IMECE2021-69390</volume-id>
                  <source content-type="BookTitle">Safety Engineering, Risk, and Reliability Analysis</source>
                  <series>Proceedings of the ASME 2021 International Mechanical Engineering Congress and Exposition</series>
                  <volume-series content-type="">13</volume-series>
                  <publisher-name>The American Society of Mechanical Engineers</publisher-name>
                  <publisher-loc>New York, NY</publisher-loc>
               </element-citation>
               <mixed-citation>Siedel, Georg; Voß, Stefan; Vock, Silvia (2021): An overview of the research landscape in the field of safe machine learning. In: Proceedings of the ASME 2021 International Mechanical Engineering Congress and Exposition. Volume 13: Safety Engineering, Risk, and Reliability Analysis. New York, NY: The American Society of Mechanical Engineers, p. V013T14A045. <ext-link xlink:href="https://doi.org/10.1115/IMECE2021-69390">https://doi.org/10.1115/IMECE2021-69390</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR20">
            <citation-alternatives>
               <element-citation publication-type="journal">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Tidjon</surname>
                        <given-names>L</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Khomh</surname>
                        <given-names>F</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2022</year>
                  </date>
                  <article-title>The different faces of AI ethics across the world. A principle-implementation gap analysis</article-title>
                  <volume-id content-type="bibarticledoi">10.48550/arXiv.2206.03225</volume-id>
                  <source content-type="journal">arXiv</source>
               </element-citation>
               <mixed-citation>Tidjon, Lionel; Khomh, Foutse (2022): The different faces of AI ethics across the world. A principle-implementation gap analysis. In: arXiv. <ext-link xlink:href="https://doi.org/10.48550/arXiv.2206.03225">https://doi.org/10.48550/arXiv.2206.03225</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR21">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <name content-type="author">
                     <surname>Ulich</surname>
                     <given-names>E</given-names>
                  </name>
                  <date>
                     <year>2011</year>
                  </date>
                  <source content-type="BookTitle">Arbeitspsychologie</source>
                  <publisher-name>Schäffer-Poeschel</publisher-name>
                  <publisher-loc>Stuttgart</publisher-loc>
               </element-citation>
               <mixed-citation>Ulich, Eberhard (2011): Arbeitspsychologie. Stuttgart: Schäffer-Poeschel.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR22">
            <citation-alternatives>
               <element-citation publication-type="journal">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Vakkuri</surname>
                        <given-names>V</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Kemell</surname>
                        <given-names>K-K</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Kultanen</surname>
                        <given-names>J</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Abrahamsson</surname>
                        <given-names>P</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2020</year>
                  </date>
                  <article-title>The current state of industrial practice in artificial intelligence ethics</article-title>
                  <issue>4</issue>
                  <page-range>50–57</page-range>
                  <volume-id content-type="bibarticledoi">10.1109/MS.2020.2985621</volume-id>
                  <source content-type="journal">IEEE Software</source>
                  <volume>37</volume>
               </element-citation>
               <mixed-citation>Vakkuri, Ville; Kemell, Kai-Kristian; Kultanen, Joni; Abrahamsson, Pekka (2020): The current state of industrial practice in artificial intelligence ethics. In: IEEE Software 37 (4), pp. 50–57. <ext-link xlink:href="https://doi.org/10.1109/MS.2020.2985621">https://doi.org/10.1109/MS.2020.2985621</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR23">
            <citation-alternatives>
               <element-citation publication-type="chapter">
                  <name content-type="author">
                     <surname>Weber</surname>
                     <given-names>K</given-names>
                  </name>
                  <person-group person-group-type="editor">
                     <name content-type="editor">
                        <surname>Weber</surname>
                        <given-names>K</given-names>
                     </name>
                     <name content-type="editor">
                        <surname>Frommeld</surname>
                        <given-names>D</given-names>
                     </name>
                     <name content-type="editor">
                        <surname>Manzeschke</surname>
                        <given-names>A</given-names>
                     </name>
                     <name content-type="editor">
                        <surname>Fangerau</surname>
                        <given-names>H</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2015</year>
                  </date>
                  <chapter-title>MEESTAR. Ein Modell zur ethischen Evaluierung sozio-technischer Arrangements in der Pflege- und Gesundheitsversorgung</chapter-title>
                  <page-range>247–262</page-range>
                  <source content-type="BookTitle">Technisierung des Alltags. Beitrag für ein gutes Leben?</source>
                  <publisher-name>Steiner</publisher-name>
                  <publisher-loc>Stuttgart</publisher-loc>
               </element-citation>
               <mixed-citation>Weber, Karsten (2015): MEESTAR. Ein Modell zur ethischen Evaluierung sozio-technischer Arrangements in der Pflege- und Gesundheitsversorgung. In: Karsten Weber, Debora Frommeld, Arne Manzeschke and Heiner Fangerau (eds.): Technisierung des Alltags. Beitrag für ein gutes Leben? Stuttgart: Steiner, pp. 247–262.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR24">
            <citation-alternatives>
               <element-citation publication-type="book">
                  <string-name>WHO – World Health Organization</string-name>
                  <date>
                     <year>2023</year>
                  </date>
                  <source content-type="BookTitle">Regulatory considerations on artificial intelligence for health</source>
                  <publisher-name>World Health Organization</publisher-name>
                  <publisher-loc>Geneva</publisher-loc>
               </element-citation>
               <mixed-citation>WHO – World Health Organization (2023): Regulatory considerations on artificial intelligence for health. Geneva: World Health Organization.</mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR25">
            <citation-alternatives>
               <element-citation publication-type="journal">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Yakusheva</surname>
                        <given-names>O</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Bouvier</surname>
                        <given-names>M</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Hagopian</surname>
                        <given-names>C</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2025</year>
                  </date>
                  <article-title>How artificial intelligence is altering the nursing workforce</article-title>
                  <issue>1</issue>
                  <page-range>102300</page-range>
                  <volume-id content-type="bibarticledoi">10.1016/j.outlook.2024.102300</volume-id>
                  <source content-type="journal">Nursing Outlook</source>
                  <volume>73</volume>
               </element-citation>
               <mixed-citation>Yakusheva, Olga; Bouvier, Monique; Hagopian, Chelsea (2025): How artificial intelligence is altering the nursing workforce. In: Nursing Outlook 73 (1), p. 102300. <ext-link xlink:href="https://doi.org/10.1016/j.outlook.2024.102300">https://doi.org/10.1016/j.outlook.2024.102300</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
         <ref id="CR26">
            <citation-alternatives>
               <element-citation publication-type="journal">
                  <person-group person-group-type="author">
                     <name content-type="author">
                        <surname>Yelne</surname>
                        <given-names>S</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Chaudhary</surname>
                        <given-names>M</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Dod</surname>
                        <given-names>K</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Sayyad</surname>
                        <given-names>A</given-names>
                     </name>
                     <name content-type="author">
                        <surname>Sharma</surname>
                        <given-names>R</given-names>
                     </name>
                  </person-group>
                  <date>
                     <year>2023</year>
                  </date>
                  <article-title>Harnessing the power of AI. A comprehensive review of its impact and challenges in nursing science and healthcare</article-title>
                  <issue>11</issue>
                  <page-range>e49252</page-range>
                  <volume-id content-type="bibarticledoi">10.7759/cureus.49252</volume-id>
                  <source content-type="journal">Cureus</source>
                  <volume>15</volume>
               </element-citation>
               <mixed-citation>Yelne, Seema; Chaudhary, Minakshi; Dod, Karishma; Sayyad, Akhtaribano; Sharma, Ranjana (2023): Harnessing the power of AI. A comprehensive review of its impact and challenges in nursing science and healthcare. In: Cureus 15 (11), p. e49252. <ext-link xlink:href="https://doi.org/10.7759/cureus.49252">https://doi.org/10.7759/cureus.49252</ext-link>
               </mixed-citation>
            </citation-alternatives>
         </ref>
      </ref-list>
   </back>
</article>
