TY - JOUR
T1 - What's next for responsible artificial intelligence: a way forward through responsible innovation
AU - Herrmann, Heinz
N1 - Funding Information:
The concentric circle in Fig. 1 zooms in around RI. It can be seen that AI has become a technology focus for RI in academia ahead of nanotechnology. The European Union (EU) recognized that RI provides a suitable framework for building public trust [23,24] and became the first government globally to release draft regulations for the development and use of ‘trustworthy AI’, based on RI principles [25]. The EU's Horizon 2020 program for funding research and innovation in the period 2014–2020, including its AI projects, therefore resides within this ‘inner circle’. For example, Horizon's Human Brain Project develops novel information and communication technology (ICT) architectures, based on convergence with neuroscience, and integrated governance for ethical and social issues [26]. Alternative terms often used for ‘trustworthy AI’ are ‘responsible AI (RAI)’, ‘beneficial AI’, or ‘ethical AI’ [24,27–30]. The term RAI is used in the remainder of this research. It is therefore not surprising that primary studies emphasize the importance of trust-building for the acceptance, adoption, and diffusion of AI in a socially responsible way [53–55]. A report on innovation by the US National Security Commission on AI [56] cautions that “if AI systems do not work as designed, or are unpredictable in ways that can have significant negative consequences, then leaders will not adopt them, operators will not use them, Congress will not fund them, and the American people will not support them” (p. 133).
Funding Information:
A notable 74% of RI publications in Scopus received research funding, with 26% of all publications being funded by the European Commission. On the Horizon 2020 program alone, more than €100 m was spent between 2014 and 2020 on RI and RAI projects [101]. This demonstrates the importance of Europe's agenda for both fields. By the same token, such Eurocentrism [refer to Fig. 11(b) and Fig. 12(b)] is a barrier to global RI/RAI acceptance, due to cultural and regulatory differences with the rest of the world and, indeed, even within the EU [96,169].
Publisher Copyright:
© 2023 The Author
PY - 2023/3
Y1 - 2023/3
AB - Industry is adopting artificial intelligence (AI) at a rapid pace, and a growing number of countries have declared national AI strategies. However, several spectacular AI failures have led to ethical concerns about responsibility in AI development and use, which gave rise to the emerging field of responsible AI (RAI). The field of responsible innovation (RI) has a longer history and has evolved toward a framework for the entire research, development, and innovation life cycle. However, this research demonstrates that the uptake of RI by RAI has been slow. RAI has been developing independently, with three times as many publications as RI. The objective and knowledge contribution of this research was to understand how RAI has been developing independently from RI and to show, in a causal loop diagram, how RI could be leveraged toward the progression of RAI. It is concluded that stakeholder engagement of citizens from diverse cultures across the Global North and South is a policy leverage point for moving the adoption of RI by RAI toward global best practice. A role-specific recommendation is made for policy makers to deploy modes of engaging with the Global South with more urgency, to avoid the risk of harming vulnerable populations. As an additional methodological contribution, this study employs a novel method, systematic science mapping, which combines systematic literature reviews with science mapping. This new method enabled the discovery of an emerging ‘axis of adoption’ of RI by RAI around the thematic areas of ethics, governance, stakeholder engagement, and sustainability. In total, 828 Scopus articles were mapped for RI and 2489 for RAI. The research presented here is by any measure the largest systematic literature review of both fields to date and the only cross-disciplinary review from a methodological perspective.
KW - responsible innovation
KW - RRI
KW - artificial intelligence
KW - ethics
KW - responsible artificial intelligence
KW - RAI
KW - systematic literature review
KW - systematic science mapping
UR - http://www.scopus.com/inward/record.url?scp=85150284066&partnerID=8YFLogxK
U2 - 10.1016/j.heliyon.2023.e14379
DO - 10.1016/j.heliyon.2023.e14379
M3 - Article
AN - SCOPUS:85150284066
SN - 2405-8440
VL - 9
SP - e14379
JO - Heliyon
JF - Heliyon
IS - 3
M1 - e14379
ER -