EEOC’s Latest AI Guidance Sends Warning to Employers

May 23 - Posted at 9:00 AM

Employers using or considering artificial intelligence (AI) to aid with workplace tasks received another reminder from the federal government that the EEOC will closely scrutinize their actions for possible employment discrimination violations. The agency released a technical assistance document on Thursday warning employers that deploy AI for hiring or other employment-related actions that it will apply long-standing legal principles to today’s evolving technology as it looks for possible Title VII violations. Here are the five things you need to know about this latest development.

1. EEOC Confirms That Employers’ Use of AI Could Violate Workplace Law

The EEOC opened its technical assistance document by confirming its crystal-clear position: an improper application of AI could violate Title VII, the federal anti-discrimination law, when used for recruitment, hiring, retention, promotion, transfer, performance monitoring, demotion, or dismissal. The EEOC outlined five examples of tools – four used during the hiring process and one used during an ongoing employment relationship – that could trigger Title VII violations:

  • resume scanners that prioritize applications using certain keywords;
  • “virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements;
  • video interviewing software that evaluates candidates based on their facial expressions and speech patterns;
  • testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test; and
  • employee monitoring software that rates employees on the basis of their keystrokes or other factors.

The agency didn’t say that these are the only types of workplace-related AI methods that could come under fire – or that these types of tools are inherently improper or unlawful. It did say, however, that preexisting agency regulations (the Uniform Guidelines on Employee Selection Procedures) that have been around for over four decades can apply to situations where employers use AI-fueled selection procedures in employment settings.

The agency said this is especially true in “disparate impact” situations – where employers may not intend to discriminate against anyone but deploy any sort of facially neutral process that ends up having a statistically significant negative impact on a certain protected class of workers.   

2. “Four-Fifths Rule” Can Be Applied to AI Selections

The EEOC pointed out that employers can use the “four-fifths” rule as a general guideline to help determine whether an AI selection process is having a disparate impact (and we apologize in advance for the impending use of math). The test compares the selection rate for one group against the selection rate for the most favorably treated group. If the first rate is less than four-fifths (80%) of the second, the process might be subject to a disparate impact challenge. If that sounds confusing, here is the example provided by the EEOC, followed by a short calculation sketch.

Assume your company is using an algorithm to grade a personality test to determine which applicants make it past a job screening process.  

  • 80 White applicants and 40 Black applicants take the personality test.
  • 48 of the White applicants advance to the next round (equivalent to 60%).
  • 12 of the Black applicants advance to the next round (equivalent to 30%).
  • The ratio of the two rates is thus 30/60 (or 50%).
  • Because 30/60 (or 50%) is lower than 4/5 (or 80%), the four-fifths rule says that the selection rate for Black applicants is substantially different from the selection rate for White applicants – which could be evidence of discrimination against Black applicants.
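
For readers who prefer code to prose, here is a minimal sketch of that same calculation. The function and group names are illustrative, not from the EEOC guidance; the 0.8 threshold is the four-fifths rule itself.

```python
# A minimal sketch of the four-fifths rule of thumb described above.
# The function and group names are illustrative, not from the EEOC guidance.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who advanced past the screen."""
    return selected / applicants

def four_fifths_flags(rates: dict[str, float]) -> dict[str, bool]:
    """Flag any group whose rate is below 4/5 of the highest group's rate."""
    highest = max(rates.values())
    return {group: (rate / highest) < 0.8 for group, rate in rates.items()}

# The EEOC's example numbers: 48 of 80 White applicants advance (60%),
# and 12 of 40 Black applicants advance (30%).
rates = {
    "White": selection_rate(48, 80),  # 0.60
    "Black": selection_rate(12, 40),  # 0.30
}

print(four_fifths_flags(rates))
# {'White': False, 'Black': True} – 0.30 / 0.60 = 0.50, below the 0.80 line,
# so the Black applicant group is flagged for a closer look.
```

As the EEOC notes, a flag here is a prompt for further investigation, not a finding of a violation.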

Note, however, that the EEOC called this kind of analysis merely a rule of thumb – a rudimentary way to draw an initial inference about a selection process. Problematic numbers should prompt you to gather additional information about the procedure in question, according to the EEOC, and aren’t necessarily indicative of a definitive Title VII violation. Similarly, clearing the four-fifths hurdle doesn’t mean that a particular selection procedure is definitely lawful under Title VII. It can still be challenged by the agency or by a plaintiff in a charge of discrimination.

3. EEOC Encourages Proactive Self-Audits

In a statement accompanying the release of the technical assistance document, EEOC Chair Charlotte Burrows said that employers should test all employment-related AI tools early and often to make sure they aren’t causing legal harm. This doesn’t mean just using the four-fifths rule, but also using a thorough auditing process involving a variety of potential examination methods on all AI functions. “I encourage employers to conduct an ongoing self-analysis to determine whether they are using technology in a way that could result in discrimination,” she said.  

But not mentioned by the EEOC: a reminder that you should approach any self-audit with the help of legal counsel. Not only can experienced legal counsel help guide you about the best methodologies to use and assist in interpreting the results of any audit, but using counsel can help cloak your actions under attorney-client privilege, potentially shielding certain results from discovery. This can be especially beneficial if you identify changes that need to be made to improve your process to minimize any unintentional impacts.

4. You’re On the Hook For Problems Caused by Your AI Vendors

The agency also noted quite clearly that you can’t duck your responsibilities by using a third party to deploy AI methods and then blaming them for any resulting discriminatory results. It said that you may still be responsible if the AI procedure discriminates on a basis prohibited by Title VII even if the decision-making tool was developed by an outside vendor.

“In addition,” said the EEOC, “employers may be held responsible for the actions of their agents, which may include entities such as software vendors, if the employer has given them authority to act on the employer’s behalf.” This may include situations where you rely on the results of a selection procedure that an agent administers on your behalf.

The EEOC suggests that you specifically ask any vendor you are considering to develop or administer an algorithmic decision-making tool whether it has taken steps to evaluate whether the tool might cause an adverse disparate impact. It also recommends asking whether the vendor relied on the four-fifths rule of thumb or on a standard such as statistical significance, which courts often use when examining employer actions for potential Title VII violations. An illustrative sketch of the statistical-significance approach follows.
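
To make that distinction concrete, here is a sketch of one common statistical-significance test (a two-proportion z-test) applied to the EEOC’s example numbers from above. The choice of test and the function name are our assumptions; the guidance names statistical significance as a standard courts use but does not prescribe a particular method.

```python
# An illustrative two-proportion z-test on the EEOC's example numbers.
# The choice of test is our assumption – the guidance names statistical
# significance as a court-used standard but doesn't prescribe a method.
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(sel_a: int, n_a: int, sel_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two selection rates."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 48 of 80 White applicants vs. 12 of 40 Black applicants:
print(f"{two_proportion_p_value(48, 80, 12, 40):.4f}")
# ≈ 0.0019 – well below the conventional 0.05 threshold, so the gap is
# statistically significant as well as a four-fifths-rule failure.
```

Note that the two approaches can disagree: a small applicant pool can fail the four-fifths ratio without the gap being statistically significant, which is one reason the question is worth putting to your vendor.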

5. EEOC’s Guidance Is Part of a Bigger Trend

This technical assistance document is part of a bigger trend we’re seeing from federal agencies that are increasingly interested in the ways that AI may lead to employment law violations. Just last month, in fact, EEOC Chair Burrows teamed up with leaders from the Department of Justice, the Federal Trade Commission and the Consumer Financial Protection Bureau to announce that they would be scrutinizing potential employment-related biases that can arise from using AI and algorithms in the workplace.

And within the past year, the EEOC teamed up with the DOJ to release a pair of guidance documents warning that relying on AI to make staffing decisions might unintentionally lead to discriminatory employment practices, including disability bias. The White House followed with its “Blueprint for an AI Bill of Rights,” which aims to protect civil rights in the building, deployment, and governance of automated systems.

While none of these guidance documents create new legal standards or carry the force of law the way a statute or regulation does, they do carry weight: they signal where the agencies are focusing their enforcement efforts, and they can be cited by agencies and plaintiffs’ attorneys as best practices that employers should follow. States and localities have gotten in on the action too, with New York City’s law set to take effect in July and a new bill advancing toward the Governor in California. For those reasons, you should take this guidance seriously and adapt your employment practices as necessary to keep pace with the rapid changes unfolding before our eyes.

 

© 2024 Administrators Advisory Group, Inc. All Rights Reserved