What’s growing faster, adoption of AI or the chorus of claims about how companies are using AI? While he doesn’t have an answer on that score, James Briggs, founder and CEO of AI Collaborator, believes companies have a responsibility to be open and clear about how they are using AI and related technologies.
Editor’s note: The author of this article, James Briggs, is founder and CEO of AI Collaborator, an AI marketplace.
In March, the SEC fined a pair of investment firms a combined $400,000, alleging they made false and misleading claims about their use of artificial intelligence (AI). As AI and machine learning have risen over the past couple of years, so have companies’ breathless declarations in sales and marketing materials regarding what AI can offer clients and customers.
Similar to greenwashing, in which companies make inflated claims about their ESG bona fides, AI washing is likely to increase as companies across industries attempt to capitalize on the AI hype by overstating the capabilities of certain products or services. In technology, finance, retail and beyond, the appeal of being perceived as an AI-driven organization has led many to make exaggerated claims.
Emerging AI companies, along with those looking to invest in or buy AI services, need to understand the risks and know which claims to be wary of as they navigate this rapidly evolving landscape.
How AI companies are misleading buyers
Without a commitment to responsible AI principles, enterprises may encounter several critical issues:
- Lack of transparency: If AI systems are not transparent, it becomes difficult for enterprises to understand how decisions are made. This lack of insight can lead to mistrust among stakeholders and customers, as well as potential legal and regulatory challenges.
- Bias and unfairness: AI systems that are not designed with fairness in mind can perpetuate and even exacerbate existing biases. This can result in discriminatory outcomes that harm certain groups of people, damage the enterprise’s reputation and lead to legal repercussions.
- Noncompliance with regulations: As regulatory frameworks around AI continue to evolve, enterprises that do not prioritize responsible AI may find themselves out of compliance. This can lead to hefty fines, legal battles and a loss of market standing.
- Security and privacy risks: Irresponsible AI can pose significant security and privacy risks. AI systems that are not properly governed can be vulnerable to cyber attacks, data breaches and misuse of sensitive information, compromising both the enterprise and its customers.
Even companies that observe those principles can still mislead consumers by claiming their products and services are powered by AI when, in reality, the technology plays only a minor role, or isn't present at all.
Many companies outsource tasks to apps such as ChatGPT, obscuring the true scope of their in-house capabilities. It is also becoming more common for companies to use terms associated with AI, such as machine learning or deep learning, without providing substantial AI-driven functionality. This can mislead stakeholders and consumers alike, which is particularly concerning because it undermines trust both in AI technologies and in the companies genuinely developing real AI applications.
Moreover, some companies have resorted to deceptive marketing tactics, exaggerating the capabilities of their AI solutions. This includes presenting pre-programmed responses as AI-driven, falsely claiming full automation when human intervention is significant or inflating the accuracy and performance metrics of their AI models. These practices not only mislead consumers but also create unrealistic expectations about what AI can achieve. As a result, when the actual performance of these solutions falls short, it can lead to disillusionment and skepticism toward AI as a whole.
How to confirm AI claims are legitimate
The most effective way to avoid AI washing is to conduct thorough due diligence on a company's team and product rather than relying solely on live demos or slide decks. Scalable AI applications are complex to develop and require skilled resources and robust data infrastructure. Companies that build these applications should be able to explain clearly how their systems are constructed and operated, providing detailed, transparent documentation and credible evidence of their technology's performance. Companies that overstate the capabilities of their AI products often lack such evidence, which can invite regulatory scrutiny and damage their reputation.