Artificial intelligence (AI) now touches nearly every industry. In a study last year by the Society for Human Resource Management, almost 25% of companies reported using AI to make hiring decisions, and there are good reasons to do so.
Having AI sort through piles of applications and resumes, for example, can be more efficient and reduce labor costs. But managers who want to use AI for employment-related tasks need to proceed carefully, because it carries significant legal risk.
In one recent case, the EEOC filed a consent decree in federal court in New York on August 9, 2023, settling age discrimination claims against a company for $365,000. The company's application software had been configured to automatically reject female candidates over 55 and male candidates over 60.
The EEOC has been active on this issue. It has released guidance on the Americans with Disabilities Act and AI, as well as Title VII guidance on adverse impact discrimination caused by the use of AI in hiring.
Importantly, the EEOC has made clear that an employer sued for discrimination under Title VII or the ADA because of its use of AI cannot shift the blame to a third-party AI vendor. No matter who built the software, the employer that uses it can be held liable if that use results in discrimination.
The EEOC case discussed above appears to involve "disparate treatment," or intentional discrimination. The EEOC's most recent guidance, however, focuses on "disparate impact" discrimination, which occurs when an employer applies a facially neutral standard in hiring decisions that nonetheless disadvantages members of protected classes more than others.
The EEOC's guidance urges employers to assess how any AI tool they use affects different groups. If an employer knows its AI tool will cause a disparate impact, it should take steps to correct the problem or use a different tool; liability can rest on the fact that a less discriminatory alternative was available but was not used.
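One common rough screen for disparate impact is the "four-fifths rule": if a group's selection rate is less than 80% of the highest group's rate, the tool may warrant closer review. The sketch below illustrates that arithmetic with purely hypothetical group names and applicant counts (it is not the EEOC's official methodology, just the widely cited rule of thumb):

```python
# Four-fifths (80%) rule: a rough screen for adverse impact in selection rates.
# All group names and counts below are hypothetical, for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def impact_ratio(group_rate: float, highest_rate: float) -> float:
    """A group's selection rate divided by the highest group's rate."""
    return group_rate / highest_rate

# Hypothetical outcomes from an AI screening tool
groups = {
    "Group A": selection_rate(48, 80),  # 60% selected
    "Group B": selection_rate(12, 30),  # 40% selected
}

highest = max(groups.values())
for name, rate in groups.items():
    ratio = impact_ratio(rate, highest)
    flag = "potential adverse impact" if ratio < 0.8 else "passes 4/5 screen"
    print(f"{name}: rate={rate:.0%}, ratio={ratio:.2f} -> {flag}")
```

Here Group B's ratio is 0.40 / 0.60 ≈ 0.67, below the 0.8 threshold, so the tool would be flagged for further scrutiny even though the screening criterion itself looks neutral.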
Some jurisdictions, most notably New York, have enacted or proposed laws requiring independent audits of the effects of AI hiring software and requiring employers to disclose the audit results to their workers before putting the software to use.
In today's hybrid work world, where more and more companies hire across state lines, it is also important to know what each state's laws require.
The EEOC's guidance on the ADA and AI also flags other pitfalls. Does the software give a candidate or employee the chance to request a reasonable accommodation if they need one?
Does the algorithm's prediction process inadvertently violate the ADA's limits on disability-related inquiries and medical examinations? Does the tool screen out people with disabilities, intentionally or not, even though they could perform the job with a reasonable accommodation?
It is understandable that employers want to use AI to make hiring more efficient and less costly, but doing so carries both obvious and hidden risks. Employers must be careful and deliberate when deploying AI in order to avoid litigation.