While a great deal of attention is currently being focused on internal compliance with emerging AI regulations, Prevalent’s Alastair Parr argues that companies shouldn’t overlook a major external consideration: third parties.
Artificial intelligence (AI) is rapidly reshaping the modern world, and governments are rushing to build safeguards to ensure it is deployed responsibly. Businesses in nearly every industry vertical have embraced the technology for its productivity and efficiency gains, with the ultimate goal of improving the bottom line.
However, alongside these opportunities come significant responsibilities for companies to deploy AI ethically and within the bounds of the law. This responsibility should extend not only to their own practices but also to those of all third parties they engage with, including vendors and service providers.
Navigating the many moving parts that come with safe and responsible AI deployment will be particularly challenging for companies based in regions at the forefront of AI regulation, including the U.S., Canada, the EU and the UK.
Each of these regions is developing its own framework to regulate this fast-moving technology. Understanding and complying with those rules will be critical for businesses operating there, both to avoid legal repercussions and to maintain trust with stakeholders.
The road ahead
Regulatory bodies worldwide are deciding how to govern artificial intelligence, and businesses should pay close attention as proposals become binding laws. Though the details will vary country by country, most proposed rules focus on privacy, security and ESG concerns around how businesses can ethically and legally use AI.
For example, in the U.S., the NIST AI Risk Management Framework (AI RMF), released in January 2023, aims to “offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.” This voluntary framework offers comprehensive guidance for organizations developing an AI governance strategy.
Organizations should apply risk management principles to mitigate the potential negative impacts of AI systems, such as:
- Security vulnerabilities in AI applications: Without proper governance and safeguards, your organization could be exposed to system or data breaches.
- Lack of transparency in AI risk methodologies or measurements: Inadequate measurement and reporting practices can result in underestimating the impact of potential AI risks.
- Inconsistent AI security policies: When AI security policies do not align with existing risk management procedures, audits can become complicated and time-sensitive, potentially leading to negative legal or compliance outcomes.
All of the above relate not only to businesses but also to the partners, vendors and other third parties with whom they do business. Increasingly, companies should expect to be held liable for how their vendors, suppliers and other third-party partners use AI, especially in how those partners handle customer data.
The coming years will clarify how organizations worldwide need to adapt their AI strategies, and managing third-party risk will likely become an increasingly important part of the equation.
With the passage of new laws will come new realities for businesses in every industry. It’s time to begin preparing for these new realities, including establishing acceptable use policies for AI and communicating those policies to third parties.
Mitigate third-party AI risk
Regardless of location, a cautious approach and proactive engagement with vendors are essential strategies for managing these risks. Companies must recognize that responsible AI governance extends beyond their internal operations and encompasses the practices of all parties involved in their AI ecosystem.
Every business has unique objectives and challenges, meaning relationships with third-party partners will vary widely. But there are some fundamental steps any company can take to proactively mitigate the AI-related risks associated with third-party relationships:
- Identify which third-party partners use AI and how they use it. Conduct a thorough inventory to identify which of your third-party vendors and suppliers are utilizing AI and the extent of their usage. This process involves asking relevant questions to understand the inherent risks associated with their AI applications, including data privacy, bias and accountability.
- Develop a system to tier and score third parties’ AI usage. Update your tiering system for third-party partners based on their AI usage and associated risks. Consider factors such as the sensitivity of the data they handle, the impact of their AI applications on stakeholders and business processes, and their transparency and accountability in AI decision-making (a minimal scoring sketch follows this list).
- Assess the risks in detail. Move beyond surface-level assessments by conducting detailed analyses of third parties’ AI practices. This includes evaluating their governance structures, data security protocols, transparency in AI usage and the extent of human oversight and intervention in AI decision-making. Use established compliance frameworks and industry best practices, such as the NIST AI RMF, as a guide during due diligence.
- Wherever possible, recommend mitigation strategies. Based on what you discover from risk assessments and tiered scoring, recommend specific remediation measures to third-party partners. These measures may include enhancing data security protocols, implementing bias detection and mitigation strategies, ensuring transparency in AI decision-making and establishing contractual clauses to enforce ethical AI practices.
- Implement ongoing monitoring. Recognize that mitigating third-party risks is an ongoing process that requires continuous monitoring and evaluation. For this reason, develop mechanisms for ongoing monitoring of third parties’ AI practices, including regular audits, reviews of policy and control changes, and staying informed about emerging AI-related issues that may affect your business.
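To make the tiering and scoring step concrete, here is a minimal sketch in Python. The factor names, weights and tier thresholds are illustrative assumptions for this example only, not a prescribed methodology; a real model should reflect your own risk framework and the answers gathered in your vendor questionnaires.

```python
from dataclasses import dataclass

# Illustrative factor weights -- assumptions for this sketch, not a standard.
# Weights sum to 1.0 so the final score stays on the same 1-5 scale.
WEIGHTS = {
    "data_sensitivity": 0.35,    # how sensitive is the data the vendor's AI touches?
    "stakeholder_impact": 0.30,  # how much do its AI outputs affect people/processes?
    "transparency": 0.20,        # higher score = less transparent = riskier
    "human_oversight": 0.15,     # higher score = less human review = riskier
}

@dataclass
class VendorAIProfile:
    """Hypothetical vendor record; each factor is scored 1 (low risk) to 5 (high risk)."""
    name: str
    data_sensitivity: int
    stakeholder_impact: int
    transparency: int
    human_oversight: int

def risk_score(v: VendorAIProfile) -> float:
    """Weighted average of the factor scores."""
    return sum(weight * getattr(v, factor) for factor, weight in WEIGHTS.items())

def tier(score: float) -> str:
    """Map a score to a review tier; thresholds are illustrative."""
    if score >= 4.0:
        return "Tier 1: detailed assessment and contractual controls"
    if score >= 2.5:
        return "Tier 2: standard due diligence"
    return "Tier 3: periodic monitoring"

if __name__ == "__main__":
    vendor = VendorAIProfile(  # hypothetical vendor for demonstration
        name="ExampleCo",
        data_sensitivity=5, stakeholder_impact=4,
        transparency=2, human_oversight=3,
    )
    score = risk_score(vendor)
    print(f"{vendor.name}: score={score:.2f} -> {tier(score)}")
```

A simple weighted average like this is easy to explain to auditors and regulators; the thresholds then translate the score directly into how much due diligence a given vendor receives.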
As governments introduce new regulatory and legal frameworks around AI, businesses must increasingly treat their vendors and third-party partners as another source of risk to be mitigated and managed. Taking these steps requires expertise in AI governance, which is currently in high demand. Companies that lack dedicated AI risk management teams can seek external assistance from organizations that specialize in navigating this complex landscape.