The adoption of artificial intelligence (AI) to enhance business processes is a pervasive trend across many industries. The rapid rise in popularity of large language models (LLMs), such as ChatGPT by OpenAI and Bard by Google, has sparked new initiatives and drawn increased focus on how AI can benefit organizations. However, organizations must make sound technological investments, including in AI, to move their business forward while balancing risk. With AI advancing this rapidly, regulation is imminent, and organizations must prepare by deploying responsible AI (RAI). RAI deployment is becoming increasingly important to many enterprises, but there is often a significant gap between an organization’s aspirations and its actions.1 Many organizations recognize the importance of deploying RAI, yet only a small percentage have implemented and matured their RAI programs. Some experts even believe that organizations will not implement controls to minimize AI risk until regulatory enforcement requires them to do so.2
There are consequences when organizations do not implement new technologies responsibly, including reputational damage, litigation, regulatory backlash, privacy concerns, a lack of defined accountability and diminished employee and customer trust.3 Given these consequences, it is essential for organizations to adopt holistic RAI programs, keeping the impending adoption of regulations such as the EU AI Act and other global regulatory initiatives in mind.4
Defining RAI
A common definition of RAI is developing and using AI that aligns with ethical principles and values, including tenets such as transparency, fairness, accountability, privacy, security, accuracy and oversight.5 Although these core tenets vary slightly depending on the chosen framework, RAI is not merely a specific method of handling AI but rather a term encompassing the complex practices required to identify, define and manage AI systems while ensuring appropriate oversight of, and accountability for, the deployed system.6 These technologies can be used to gain insight from data, but organizations must consider how these strategies create risk and the unique challenges they present. As organizations across sectors continue integrating AI into organizational processes, enterprises are starting to understand that this technology is a double-edged sword.7
Operationalizing Responsible AI
Governance influences the decisions, culture, controls and accountability within an organization, ensuring consistent application and alignment of decisions with organizational leadership. Organizations must establish governance as part of their RAI journeys. Education around shared terminology and core AI concepts is critical to ensuring that leadership understands the technologies in use as strategic decisions are made. Appropriate awareness of foundational AI topics will ensure that the implemented RAI program is relevant and that the implemented governance matches the organization’s goals.8 Without appropriate education, individuals within organizations can fail to recognize or understand the risk AI can introduce. Creating a baseline and educating all involved parties enables organizations to clearly understand a deployed system’s use and the unique risk it presents, enabling leadership to evaluate new threats and sources of harm that could result from the system. Addressing the core tenets of RAI is critical throughout this process. The risk of deployed systems evolves, and the risk identified today may look different in six months or a year. It is essential that organizations continuously evaluate and update their RAI programs as necessary.9
RAI is an ever-evolving pursuit that advances alongside an organization’s maturity. Leaders should consider how their RAI programs can advance business initiatives and impact stakeholders. A strategic approach to RAI, adapted from multiple sources and shown in figure 1, enables organizations to realize critical benefits as their programs scale. Establishing the organization’s RAI baseline, including standards, controls and tools, can help focus the RAI strategy and provide the ability to track the program’s evolution as maturity increases.
Starting an RAI program involves seven essential steps:
- Step 1: Set the tone for RAI—Organizations must set the tone at the executive leadership level to ensure that RAI priorities and ethical considerations are clearly understood and communicated. Leaders must define clear expectations for how and when AI will be used within the organization.10
- Step 2: Empower leadership—Owners within the organization should be appointed and empowered to set expectations, drive change and develop a comprehensive RAI program.11
- Step 3: Establish RAI culture—Organizations should create and fully embrace an RAI code of conduct containing principles that define how the RAI program aligns with the organization’s purpose, values and principles.12 Diverse perspectives and multistakeholder feedback should be incorporated throughout the program.13 In addition, the organization should promote an environment where concerns can be reported and addressed effectively, including soliciting open feedback throughout the implementation process and ensuring that all submissions and reported concerns are assessed appropriately.14
- Step 4: Conduct initial risk assessments and establish a baseline for maturity—Risk assessment is an ongoing, evolving process. Organizations must have strategies in place to assess the risk that could result from a deployed system. This is especially important when implemented systems can perpetuate harm at scale.15 Harms can be realized in many areas that leverage AI, including hiring processes, recidivism decisions, deepfakes, hallucinations (misinformation), privacy concerns, financial decisions, weapon system decision automation and other impacts related to biased data sets or model development. Organizations should identify current risk, evaluate its potential impact and determine remediation or mitigation strategies (a minimal risk-scoring sketch follows this list). It is also essential for organizations to understand the limitations of their data, models and systems. By understanding the inputs and outputs of the designed systems and the expected outcomes, organizations can ensure that the designed system delivers the expected results within its intended scope.
- Step 5: Use appropriate tools and processes—Organizations should strive to deploy algorithms that are unbiased, transparent and safe. Although no system that genuinely represents society can be entirely impartial or unbiased, the aim should be to balance technology implementation and understand the specific impacts that systems have. Processes should be in place to address potential biases introduced by data sets, engineers and the environment, ensuring that biases are considered and mitigated where appropriate within the RAI program (a simple bias-check sketch follows this list).16
- Step 6: Keep humans in the loop—Throughout the process, feedback should be gathered from stakeholders with varied backgrounds and perspectives. Keeping humans involved in processes, especially where there is significant potential for disparate impact on society, is essential.17
- Step 7: Monitor and audit the RAI program and AI deployment—Continuously evaluate identified risk to detect and understand how risk within the AI system evolves (a minimal monitoring sketch follows this list).18 The RAI program and deployed technologies can be assessed through internal audits and ongoing monitoring to ensure that unintended harm does not occur through induced biases, design or other controllable attributes. This enables an organization to identify areas for improvement and minimize unintended impacts on society.19 Independent audits of the RAI program ensure that appropriate controls are in place and aligned with external regulatory requirements.20
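To make step 4 concrete, the following is a minimal sketch of how an initial AI risk register might be scored using a simple likelihood-times-impact scale. The risks, 1-5 scales and rating thresholds are illustrative assumptions, not values prescribed by any particular framework:

```python
from dataclasses import dataclass

# Illustrative likelihood x impact scoring for an initial AI risk register.
# The risks, 1-5 scales and rating thresholds below are hypothetical
# examples, not values prescribed by any RAI framework.

@dataclass
class AIRisk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    @property
    def rating(self) -> str:
        if self.score >= 15:
            return "high"
        return "medium" if self.score >= 8 else "low"

register = [
    AIRisk("Biased training data in hiring model", likelihood=4, impact=5),
    AIRisk("LLM hallucination in customer support", likelihood=3, impact=3),
    AIRisk("Privacy leakage from training data", likelihood=2, impact=5),
]

# Surface the highest-rated risk first so remediation effort is prioritized.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: score={risk.score} ({risk.rating})")
```

Even a simple register like this gives leadership a shared, trackable view of AI risk that can be revisited as systems and threats evolve.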
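For step 5, one starting point for bias testing is comparing a model’s positive-outcome rate across groups. The sketch below, using hypothetical decision data, computes a disparate impact ratio and flags it against the four-fifths (0.8) threshold commonly cited in US hiring contexts; the threshold here is an illustrative screen, not a legal test:

```python
# Minimal demographic parity check: compare a model's positive-outcome rate
# across groups. The decision data and the 0.8 threshold (the "four-fifths
# rule" commonly cited in US hiring contexts) are illustrative assumptions.

def selection_rates(outcomes: list[tuple[str, int]]) -> dict[str, float]:
    """outcomes is a list of (group, decision) pairs, with decision in {0, 1}."""
    totals: dict[str, int] = {}
    positives: dict[str, int] = {}
    for group, decision in outcomes:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + decision
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions from a screening model.
decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 1), ("group_b", 0), ("group_b", 0)]

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(f"selection rates: {rates}, disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # illustrative flag, not a legal determination
    print("potential disparate impact; review the model and its data")
```

Purpose-built fairness toolkits offer far richer metrics, but even a check this simple can surface problems early enough to act on them.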
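And for step 7, ongoing monitoring can begin as simply as comparing production metrics against the baseline recorded at deployment and alerting when a tolerance band is exceeded. The metric names, baselines and tolerances below are hypothetical:

```python
# Sketch of ongoing monitoring: flag drift when a production metric moves
# beyond a tolerance band around the baseline recorded at deployment.
# Metric names, baseline values and tolerances are hypothetical.

BASELINES = {"accuracy": 0.91, "disparate_impact_ratio": 0.95}
TOLERANCES = {"accuracy": 0.05, "disparate_impact_ratio": 0.10}

def check_drift(current: dict[str, float]) -> list[str]:
    """Return alert messages for metrics missing or outside their band."""
    alerts = []
    for metric, baseline in BASELINES.items():
        observed = current.get(metric)
        if observed is None:
            alerts.append(f"{metric}: no data reported this period")
        elif abs(observed - baseline) > TOLERANCES[metric]:
            alerts.append(f"{metric}: {observed:.2f} vs baseline {baseline:.2f}")
    return alerts

# Hypothetical metrics from this period's production sample.
this_period = {"accuracy": 0.84, "disparate_impact_ratio": 0.93}
for alert in check_drift(this_period):
    print("ALERT:", alert)
```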
Responsible AI Maturity
Leveraging maturity models can provide a guide for assessing where an organization stands and what it needs to improve, allowing the organization to chart a path toward increased maturity for its RAI program. Maturity models such as those offered by Microsoft and IBM can guide an organization’s RAI program as the organization moves through its RAI journey. At the outset, an RAI program should be evaluated holistically. Figure 2 presents a general guide for assessing maturity at the program level.21
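As a rough illustration of a program-level self-assessment, the sketch below averages scores across a handful of maturity dimensions and highlights the weakest area. The dimensions and 1-5 levels are generic assumptions for illustration only; they do not reproduce the scale in figure 2 or the Microsoft or IBM models:

```python
# Illustrative program-level maturity self-assessment. The dimensions and
# 1-5 levels are a generic sketch, not the specific scale in figure 2 or
# in the Microsoft or IBM maturity models.

DIMENSIONS = {
    "governance and accountability": 3,
    "risk assessment processes": 2,
    "tooling and bias testing": 2,
    "monitoring and audit": 1,
    "training and culture": 3,
}

average = sum(DIMENSIONS.values()) / len(DIMENSIONS)
weakest = min(DIMENSIONS, key=DIMENSIONS.get)
print(f"overall maturity: {average:.1f} / 5")
print(f"focus area: {weakest} (level {DIMENSIONS[weakest]})")
```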
Conclusion
Designing and deploying AI technologies thoughtfully and responsibly can bring value, drive profit and build trust by preventing AI from negatively impacting society. Organizations must identify where gaps currently exist and leverage tools such as current AI frameworks and industry guidance to help fill them. Establishing an RAI program can help prevent AI systems from being used to perpetuate bias and discrimination; ensure that AI systems are explainable, that appropriate privacy and security controls are in place, and that AI systems are developed with the input of a diverse range of stakeholders; and hold accountable those responsible for any harm caused by AI systems.
Organizations should consider deploying RAI programs as they adopt new technologies and leverage a maturity matrix to understand where they are and which areas need focus. This is essential to building a future where AI is used for good and not for harm. Deploying RAI can promote better communication and collaboration and a heightened focus on products, resulting in improved quality, reduced errors and, ultimately, increased productivity. Organizations that are proactive and adopt RAI practices will help create a brighter, more sustainable future.
Endnotes
1 Theoto, T.; S. Küspert; K. Hefter; et al.; “Responsible AI for an Era of Tighter Regulations,” Boston Consulting Group, 10 October 2022, http://www.bcg.com/publications/2022/acting-responsibly-in-tight-ai-regulation-era
2 Sharief, B.; “Five Steps to Responsible AI,” Verta, 6 April 2023, http://www.verta.ai/blog/five-steps-to-responsible-ai; Scarpino, J. P.; “An Exploratory Study: Implications of Machine Learning and Artificial Intelligence in Risk Management,” Marymount University, Arlington, Virginia, USA, 2022; Gillis, A. S.; “Responsible AI,” TechTarget, http://www.techtarget.com/searchenterpriseai/definition/responsible-AI
3 Carroll, R.; “Identifying Risks in the Realm of Enterprise Risk Management,” Journal of Healthcare Risk Management, vol. 35, iss. 3, 2016, p. 24-30, http://doi.org/10.1002/jhrm.21206; Cheatham, B.; K. Javanmardian; H. Samandari; “Confronting the Risks of Artificial Intelligence,” McKinsey Quarterly, 26 April 2019, http://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/confronting-the-risks-of-artificial-intelligence; Cubric, M.; “Drivers, Barriers and Social Considerations for AI Adoption in Business and Management: A Tertiary Study,” Technology in Society, vol. 62, 2020, http://doi.org/10.1016/j.techsoc.2020.101257; Misselhorn, C.; “Artificial Morality. Concepts, Issues and Challenges,” Society, vol. 55, iss. 2, 2018, p. 161-169, http://doi.org/10.1007/s12115-018-0229-y; Op cit Scarpino
4 Op cit Theoto et al.
5 Op cit Gillis; Lawson, A.; “AI vs. Responsible AI: Why Is It Important?” RAI Institute, 24 January 2023, http://www.responsible.ai/post/ai-vs-responsible-ai-why-is-it-important; Rouse, M.; “Responsible AI,” Techopedia, 21 January 2023, http://www.techopedia.com/definition/34923/responsible-ai; Vassilakopoulou, P.; E. Parmiggiani; A. Shollo; M. Grisot; “Responsible AI: Concepts, Critical Perspectives and an Information Systems Research Agenda,” Scandinavian Journal of Information Systems, vol. 34, iss. 2, 2022, http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1875&context=sjis; Mills, S.; S. Singer; A. Gupta; F. Gravenhorst; F. Candelon; T. Porter; “Responsible AI Is About More Than Avoiding Risk,” Boston Consulting Group, 2022, http://www.bcg.com/publications/2022/a-responsible-ai-leader-does-more-than-just-avoiding-risk; Tabassi, E.; Artificial Intelligence Risk Management Framework (AI RMF 1.0), US National Institute of Standards and Technology, USA, 2023, http://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=936225; Op cit Scarpino
6 Op cit Theoto et al.
7 Op cit Cheatham
8 Op cit Theoto et al.
9 Op cit Scarpino; Op cit Mills et al.
10 Op cit Scarpino; Op cit Theoto et al.
11 Op cit Scarpino; Op cit Theoto et al.
12 West, D. M.; “Six Steps to Responsible AI in the Federal Government,” Brookings, 30 March 2022, http://www.brookings.edu/articles/six-steps-to-responsible-ai-in-the-federal-government/; Op cit Theoto et al.
13 Op cit West; Op cit Scarpino
14 Op cit West; Op cit Theoto et al.
15 Op cit Tabassi; Op cit Rouse
16 Op cit Scarpino; Op cit Rouse
17 Op cit Scarpino
18 Op cit Tabassi; Op cit Scarpino; Scarpino, J.; “Evaluating Ethical Challenges in AI and ML,” ISACA® Journal, vol. 4, 2022, p. 27-33
19 Op cit Scarpino, “An Exploratory Study: Implications of Machine Learning and Artificial Intelligence in Risk Management”; Op cit Rouse; Scarpino, J.; “Designing Ethical Systems By Auditing Ethics,” ISACA Journal, vol. 4, 2023, p. 29-33
20 Op cit Rouse
21 Ibid.; Mills, S.; S. Duranton; M. Santinelli; G. Hua; E. Baltassis; S. Thiel; O. Muehlstein; “Are You Overestimating Your Responsible AI Maturity?” BCG Global, 30 March 2021, http://www.bcg.com/publications/2021/the-four-stages-of-responsible-ai-maturity
JOSHUA SCARPINO | D.SC., CISM
Is the vice president of information security at TrustEngine. He leads the IT operations and security and compliance teams and is also responsible for developing and managing the responsible AI program. Scarpino is also the chief executive officer and founder of Assessed Intelligence LLC, a security and AI research consultancy firm. He has more than two decades of experience working in various US Department of Defense (DOD) roles, leading security operations for Fortune 500 companies, enhancing critical controls at financial and manufacturing organizations and leading, scaling and auditing security and compliance programs. His continued research focuses on ethical technologies and responsible AI, and he is passionate about educating the next generation of security and AI professionals. He is currently partnering with ForHumanity to develop frameworks for auditing AI systems, is a member of the US National Institute of Standards and Technology’s (NIST’s) Generative AI Public Working Group and is a member of the AI Risk and Vulnerability Alliance (ARVA).