There’s no doubt that AI in hiring has the potential to make HR processes more efficient. But implementing this technology also carries significant risks that can nullify any workflow gains and create outsized downstream consequences.
Artificial intelligence (AI) technologies hold so much promise for the workplace. Technologies exist that can (at least in theory) improve productivity, reduce or eliminate human bias in salary setting and ensure the right workers are placed in the right roles. But there are significant legal risks involved with implementing AI in your workplace. This article will outline the pitfalls of AI in employment settings, as well as practical steps for mitigating legal compliance risk.
Pitfalls of Using Artificial Intelligence in the Workplace
1. AI technologies may increase the potential for discrimination during hiring.
AI technologies are now available across all stages of the hiring process. It is increasingly common for applicants to answer screening questions from an AI-powered chatbot at the initial stage of hiring. From there, they may participate in a recorded interview in which AI-driven software analyzes their voice and body language as they answer questions. The AI then produces a list of candidates who exhibit certain pre-selected traits.
How are AIs capable of discrimination?
As a computer program, AI holds the potential to offer completely unbiased decision-making in the hiring process. In practice, however, AI systems “learn” from the data they are fed, and if that data reflects historical bias, the model will reproduce it. This poses a significant risk of discrimination, as we saw when Amazon developed and subsequently scrapped an AI-driven hiring tool that taught itself a bias against female candidates.
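To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python. The scenario, feature names and numbers are all hypothetical; the point is simply that a model trained on skewed historical decisions picks the skew back up through a proxy feature:

```python
# Purely illustrative: a toy model that "learns" bias from skewed
# historical data. All features, numbers and the scenario are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical applicant features: years of experience, plus a proxy
# feature correlated with gender (think of resume terms like
# "women's chess club captain", the kind of signal that reportedly
# tripped up Amazon's scrapped tool).
experience = rng.normal(5, 2, n)
proxy_term = rng.integers(0, 2, n)  # 1 = resume contains the proxy term

# Historical labels reflect past human decisions that disfavored one
# group: equally experienced applicants with the proxy term were hired
# less often.
hired = (experience - 1.5 * proxy_term + rng.normal(0, 1, n)) > 4.5

X = np.column_stack([experience, proxy_term])
model = LogisticRegression().fit(X, hired)

# The model faithfully reproduces the historical bias: the proxy feature
# gets a large negative weight even though it says nothing about ability.
print(dict(zip(["experience", "proxy_term"], model.coef_[0].round(2))))
```

Notice that nothing in the code “intends” to discriminate; the bias arrives through the training labels, which is exactly why it is hard to spot without deliberate auditing.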
Preventing discrimination caused by AI decision-making.
Your employees need to remain involved in the hiring process and treat AI as just one tool available to them. Train the staff involved in hiring so they know why the company is using AI, what it is capable of and what its limitations are. They should also be aware of its potential for bias, and processes should be put in place to regularly confirm that the AI hasn’t “learned” any biases against protected workers. One simple form such a check can take is sketched below.
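One concrete form such a process can take is a recurring statistical audit. The sketch below, with hypothetical groups and counts, applies the EEOC’s “four-fifths” rule of thumb to the tool’s selection rates. It is a screening heuristic, not a legal test, so treat any flag as a prompt to investigate with counsel:

```python
# A minimal sketch of a recurring bias audit using the EEOC's
# "four-fifths" rule of thumb: if any group's selection rate falls below
# 80% of the highest group's rate, the tool deserves closer scrutiny.
# Group labels and counts below are hypothetical.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs from the AI tool."""
    applied, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        applied[group] += 1
        selected[group] += was_selected
    return {group: selected[group] / applied[group] for group in applied}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Return groups whose rate falls below `threshold` of the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items()
            if r / best < threshold}

# Hypothetical audit data: (group, 1 if advanced past AI screening else 0).
audit = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 35 + [("B", 0)] * 65
print(adverse_impact_flags(audit))  # {'B': 0.58} -> investigate before reuse
```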
Employers are also encouraged to work with their AI provider on the question of learned behavior and bias. The provider can help you ascertain where the potential for bias exists and what you can do to prevent or limit it. Then implement internal processes in line with the provider’s advice and best practices.
Finally, ensure your contracts with third-party providers offer robust protections that minimize the potential for discrimination, including provisions that allow you to immediately terminate the contract if biases are discovered.
2. AI surveillance is legal, but only where you have appropriate policies in place.
Generally speaking, employers have broad powers to observe employees during work hours, provided the employees know the surveillance is occurring. Those powers extend to AI-driven employee surveillance. Technologies that let employers assess how long employees spend away from their desks, how many private messages they send in a day and even their tone of voice during conversations are (usually) legal. As a result, many employers are using these technologies to gather volumes of data about employee productivity.
However, it is essential that companies take steps to ensure AI technologies don’t overstep the bounds of the law. Here are several areas where AI may pose compliance risks to your company:
AI recordings must be compliant.
Recording your employees is usually permitted in the workplace. However, there are exceptions you need to be mindful of.
For instance, most jurisdictions prohibit employers from recording employees in certain private areas, like bathrooms, lactation rooms and showers. Phone and video call recordings can also be problematic in states that require all parties to consent to, or at least be aware of, the recording. Since these laws vary by state, consult counsel before implementing AI that records employee activity.
Surveillance during non-work hours.
Employers are typically permitted to observe employee activity on work devices and in situations where an employee accesses the company network from a personal device. This includes situations where the employee uses company-provided equipment outside of work hours.
In any situation where you’re surveilling your employees, you need to ensure they’re aware of the surveillance. You should consult legal counsel to ensure your company policies are drafted in a manner that protects the company from compliance issues.
Be alert to potentially discriminatory policies.
You must not discriminate against members of your workforce. If you want to surveil employees, you must record everyone or no one. Targeting specific employees is fraught with compliance risk and should be avoided.
Consider privacy laws in your jurisdiction.
Privacy laws are developing at a rapid pace. Virginia has recently enacted its comprehensive privacy law, a federal privacy overhaul is widely anticipated and other states are working toward privacy regulations, too. You must consider the privacy law implications of surveilling your employees and the data you collect and store based on that surveillance. Again, it’s best to consult with counsel to ensure compliance with privacy laws, including the development of robust privacy policies.
3. Data from an AI must be carefully analyzed before being used as the basis for a performance evaluation.
Consider this situation:
An employer who runs a warehouse has started using AI to assess how long employees spend talking to other employees and how much time they spend in their designated workspace. One employee is flagged as spending more time than average talking to others and away from his work area. His productivity is also slightly lower than that of the average employee.
It may be tempting to assume, based on the data provided, that the employee is underperforming. However, there is no way to know exactly what he is doing while speaking with others or away from his workspace. He might be regularly sought out by colleagues for assistance, in which case he would likely be a strong candidate for a supervisory role. Or he might have a disability and need to use the bathroom more often than other workers, in which case you should make accommodations in line with the Americans with Disabilities Act rather than issuing a poor performance review.
To promote compliance when using AI to assess employee performance, alert staff to the fact that the AI provides data, but it is their responsibility to draw appropriate conclusions, preferably based on their own observations and investigations (a sketch of this division of labor appears below). You should also take steps to ensure you aren’t singling out hourly employees for AI monitoring.
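As a rough illustration of that division of labor, the hypothetical sketch below flags outliers in the AI’s data for a supervisor to investigate, but deliberately stops short of drawing any conclusion itself:

```python
# A minimal sketch of the "AI supplies data, humans draw conclusions"
# principle: the system only flags statistical outliers for a supervisor
# to investigate; it never assigns a rating itself. Field names and
# numbers are hypothetical.
from dataclasses import dataclass, field
from statistics import mean, stdev

@dataclass
class ActivityRecord:
    employee_id: str
    minutes_away: float              # raw metric from the monitoring tool
    review_notes: list = field(default_factory=list)

def flag_for_human_review(records, z_threshold=2.0):
    """Return records whose metric is unusually high, for review only."""
    values = [r.minutes_away for r in records]
    mu, sigma = mean(values), stdev(values)
    return [r for r in records if (r.minutes_away - mu) / sigma > z_threshold]

records = [ActivityRecord(f"E{i}", m)
           for i, m in enumerate([30, 35, 28, 33, 31, 90])]
for record in flag_for_human_review(records):
    # A flag prompts investigation, not a performance rating. The reviewer
    # might find mentoring, an ADA accommodation need, or a real issue.
    record.review_notes.append("Flagged: supervisor to gather context.")
    print(record.employee_id, record.review_notes)
```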
4. You must avoid discriminatory practices when AI technologies displace workers.
AI technologies are often faster, more accurate and cheaper than human workers, and as they’re rolled out, human workers are displaced. This trend looks set to continue, with the World Economic Forum estimating that 85 million jobs may be displaced by the shift toward automation by 2025. Employers must remember their legal obligations when deciding whether workers should be laid off following the rollout of an AI system and, if so, which workers.
To promote compliance, you should:
- Put processes in place to ensure you aren’t engaging in discriminatory practices when letting workers go. Even the appearance of discrimination can form a strong basis for a claim.
- Ensure compliance with termination laws, including the notice requirements for mass layoffs under the federal WARN Act and any state equivalents. Review your state labor department’s website and consult with legal counsel before letting any workers go.
5. You must ensure your third-party AI provider has adequate cybersecurity measures in place.
Your company can be held responsible for the loss and/or disclosure of personal data in the event of a data breach at a third-party provider. This is in addition to the reputational damage you may suffer and any productivity losses caused by the breach.
It is your responsibility to ensure your third-party AI provider has robust technical protections, as well as a strong culture of cybersecurity and compliance, before you implement AI in your workplace. Given that cybersecurity risk is constantly evolving, it is recommended that your written agreements with your third-party AI providers include:
- Details of any industry certifications you require the provider to hold and maintain.
- Mandatory minimum cybersecurity standards, including (but not limited to): multi-factor authentication, strong passwords, access controls, data minimization processes, secure data disposal mechanisms and encryption.
- Provisions that allow you to monitor their compliance with your cybersecurity requirements, as well as industry best practices.
- The ability to update your mandatory security requirements so you can respond quickly to emerging threats.
- Early notification requirements in the event of a breach.
Finally, you should consult with legal counsel before providing any personal data to a third-party vendor.
The Key to Compliance: Your One Takeaway
If you take only one thing away from this article, let it be this: Implementing artificial intelligence in the workplace does not mean robots are doing the legwork on legal compliance. An AI system performs the role it is built and trained to perform, nothing more. Highly trained humans need to be aware that any AI has the potential to be biased, and they need to take steps to ensure that bias doesn’t seep into the company’s operations.