Facebook has been a case study in how not to handle data. Compliance.ai’s Kayvan Alikhani discusses how companies can navigate the risks and complexities that lie ahead on the artificial intelligence front.
There are two schools of thought when it comes to regulating artificial intelligence as a new decade begins.
Thought #1: Businesses must figure out how to deploy artificial intelligence in a way that does not harm consumers, violate their privacy or otherwise run afoul of the law. If they fail to do so, the commercial AI landscape could be strewn with major lawsuits and regulatory penalties, with companies on the hook for billions of dollars in costs.
Thought #2: Businesses already know they have to act on AI governance; they’re following the lead of their customers. Case in point: A 2018 survey from the Center for the Governance of AI found that 84 percent of the American public believes AI is a technology that needs to be carefully managed.
What’s the best way forward for the regulation of AI inside companies and organizations? Increasingly, regulatory authorities – including the U.S. government – favor a “Goldilocks” model for AI regulatory compliance: not too aggressive and not too relaxed.
In January 2020, the White House issued some long-awaited guidance on AI regulation.
This from the White House report:
As stated in Executive Order 13859, “the policy of the United States Government [is] to sustain and enhance the scientific, technological and economic leadership position of the United States in AI.” The deployment of AI holds the promise to improve safety, fairness, welfare, transparency and other social goals, and America’s maintenance of its status as a global leader in AI development is vital to preserving our economic and national security. The importance of developing and deploying AI requires a regulatory approach that fosters innovation [and] growth and engenders trust, while protecting core American values, through both regulatory and nonregulatory actions and reducing unnecessary barriers to the development and deployment of AI. To that end, federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.
The U.S. government’s AI regulatory guidance comes from the Office of Science and Technology Policy (OSTP) and, unsurprisingly, mirrors the White House’s business-friendly approach to AI compliance, advising that agencies tasked with crafting AI rules should encourage qualities like “fairness, nondiscrimination, openness, transparency, safety and security.”
Additionally, any regulatory rollouts should wait until regulatory bodies conduct “risk assessment and cost-benefit analyses,” and should include “scientific evidence and feedback from the American public.”
A Delicate Balance: The Facebook Biometrics Saga
A light-touch model may be the federal government’s marching orders going forward, but the heavy hand of public regulatory bodies has already made life difficult for companies deploying robust AI tools.
“Exhibit A” is Facebook, which ran into a regulatory roadblock with its facial recognition software.
Certainly, facial recognition software has already made substantial commercial and cultural inroads. Consumers can use facial recognition to unlock their digital devices and to choose cosmetics suited to their skin tone and facial features, among other uses. Companies can leverage the technology to identify customers, to control employee access to the workplace and to authorize purchases by the buying public – and that’s just for starters.
Facebook ran afoul of AI regulators with a biometrics tool called Tag Suggestions, which scanned digital images uploaded by platform users and identified friends and family members in those photos, linking each face to an individual’s identity and other personal information.
That triggered a class action lawsuit alleging that the social media giant collected and stored biometric data without those individuals’ consent, in violation of the Illinois Biometric Information Privacy Act. Facebook settled the lawsuit for $550 million, but the issue of AI tools and consumer privacy was soon buzzing inside corporate boardrooms from Miami to Moscow.
The message to companies? When unleashing the power of AI in any commercial enterprise, corporate decision-makers must take care not to run afoul of the growing number of consumer privacy laws appearing on the global regulatory landscape.
Consumer protection statutes such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) already exist, and AI industry observers believe they will fuel more regulatory scrutiny, more allegations and more fines and penalties.
The Path Forward for AI Regulatory Compliance
What steps can companies take to better adhere to current AI regulatory mandates and plan for future ones?
For starters, seek clarity on the issue from the growing number of global entities committed to managing AI’s impact on consumer privacy and on society in general.
For instance, the emerging Global Partnership on AI (GPAI) is an alliance between France, Canada and the Organisation for Economic Co-Operation and Development (OECD). Its remit is to better prepare companies in member jurisdictions for AI regulations as compliance mandates expand globally.
Companies can also plug into AI regulatory frameworks like expert-in-the-loop (EITL), which merges human regulatory checkpoints into company decision-making workflows, offering an insurance policy of sorts against compliance allegations and helping companies build more robust consumer privacy protections.
By leveraging the tools and features that approaches like EITL provide (such as data error reduction, greater transparency and stronger risk mitigation), companies can adopt regulatory protection models that allow them to harness the vast power of artificial intelligence, and all the commercial rewards it provides, while staying safely within the compliance boundaries laid out by government regulators.
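To make the expert-in-the-loop idea concrete, here is a minimal sketch of what such a checkpoint might look like inside an automated decision workflow. It is an illustration under assumptions, not any vendor’s actual implementation: the Decision record, the confidence floor and the expert_review callable are all hypothetical names. Decisions that touch biometric data or fall below a confidence threshold are routed to a human reviewer, and every routing choice is written to an audit trail.

```python
# Minimal sketch of an expert-in-the-loop (EITL) checkpoint. All names and
# thresholds are hypothetical illustrations, not a specific EITL product or API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    subject_id: str          # who or what the decision affects
    outcome: str             # e.g., "approve" / "deny"
    confidence: float        # model confidence, 0.0 to 1.0
    uses_biometrics: bool    # flags data covered by laws like BIPA
    audit_log: list = field(default_factory=list)

CONFIDENCE_FLOOR = 0.90      # below this, a human expert must review

def eitl_checkpoint(decision: Decision, expert_review) -> Decision:
    """Route risky or low-confidence AI decisions to a human expert.

    `expert_review` is any callable that takes a Decision and returns a
    (possibly corrected) Decision: a review queue, a UI, an on-call reviewer.
    """
    needs_review = decision.uses_biometrics or decision.confidence < CONFIDENCE_FLOOR
    # Record the routing choice either way, so the audit trail shows which
    # decisions were automated and which were human-reviewed.
    decision.audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "checkpoint": "eitl",
        "routed_to_expert": needs_review,
    })
    return expert_review(decision) if needs_review else decision

if __name__ == "__main__":
    # Biometric decisions always stop at the checkpoint, even at high confidence.
    d = Decision("user-123", "approve", confidence=0.97, uses_biometrics=True)
    reviewed = eitl_checkpoint(d, expert_review=lambda dec: dec)  # stub reviewer
    print(reviewed.audit_log)
```

The design choice worth noting is that the audit entry is recorded whether or not a human steps in, which is what lets a company later demonstrate to a regulator which decisions were automated, which were reviewed and why.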