
New EEOC Guidelines to Ensure AI Maintains Equality in the Workplace

Published on May 22, 2023

On May 18, 2023, the Equal Employment Opportunity Commission (EEOC) released new guidelines to help employers that use algorithms and artificial intelligence (AI) in hiring, firing, and promotion decisions remain compliant with Title VII of the Civil Rights Act of 1964. Congress has defined AI as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” National Artificial Intelligence Initiative Act of 2020 at section 5002(3).

Title VII prohibits discrimination on the basis of race, color, national origin, religion, and sex (which includes pregnancy, sexual orientation, and gender identity). While Title VII prohibits all forms of discrimination, including intentional discrimination (sometimes referred to as disparate treatment), the recent EEOC guidance targets only unintentional discrimination (sometimes referred to as disparate impact or adverse impact). Similarly, even though Title VII applies to all employment practices of covered employers, the most recent guidance addresses only whether the use of algorithmic decision-making tools for “selection procedures,” like hiring, firing, and promoting, has an adverse impact under Title VII.

In general, Title VII prohibits the use of seemingly neutral tests that disproportionately exclude individuals based on race, color, national origin, religion, or sex. For example, if an employer uses a physical agility test to screen applicants, does that test disproportionately screen out women applicants? If so, women are disparately/adversely impacted. Notably, a neutral test with a disproportionate impact based on race, color, national origin, religion, or sex is permissible in limited circumstances, namely when the test is job-related for the position in question and consistent with business necessity.

Algorithmic decision-making software is readily available to employers to aid in hiring, firing, promotion, and other employment decisions. Examples include automatic resume-screening software that filters on pre-defined keywords; virtual chatbots that ask potential hires about their qualifications and reject applicants who do not meet pre-determined requirements; and video software programmed to evaluate candidates based on facial expressions or speech patterns.

With new technology comes a new responsibility: ensuring that AI-driven algorithmic decision-making software complies with Title VII and does not disparately/adversely impact employees or applicants on the basis of race, color, national origin, religion, or sex. The recent EEOC guidelines make clear that if an algorithmic decision-making tool adversely impacts individuals on the basis of race, color, national origin, religion, sex, or a combination of those factors (e.g., a Black woman), using the tool will violate Title VII unless the practice is job-related and consistent with business necessity.

Employers Can Be Liable for Violating Title VII Even if a Third Party Administers the AI Tools
Employers can be held liable for the disparate/adverse impacts of algorithmic decision-making tools even if they do not design or administer the tools themselves. For example, if a third-party software vendor designs and administers the algorithmic decision-making tool, the employer may still be held liable for any adverse impacts of the tool on the employer’s employees or applicants. This is because a third-party software vendor may be considered an agent of the employer when the employer gives the vendor authority to act on its behalf, and employers are liable for Title VII violations committed by their agents. In short, an employer that relies on the results of a vendor-designed and vendor-administered tool can be liable under Title VII if the tool has a disparate/adverse impact on individuals based on their race, color, national origin, religion, or sex.

Accordingly, an employer that uses a third-party software vendor to develop or administer algorithmic decision-making tools is responsible for asking whether the tools create a disparate/adverse impact based on race, color, national origin, religion, or sex that is not justified by business necessity. Even if the vendor represents that its software has no such adverse impact, the employer can still be liable for violating Title VII if, in practice, the tool does.

How to Determine if AI Tools Violate Title VII
Employers may determine whether the algorithmic decision-making tools they use have an adverse impact on individuals based on race, color, national origin, religion, or sex by applying the baseline “four-fifths rule.” The EEOC guidelines provide the following example: if there are 80 White applicants and 40 Black applicants, and the algorithmic decision-making tool advances 48 of the 80 White applicants (60%) but only 12 of the 40 Black applicants (30%), the ratio of the two selection rates is 50% (30/60). Because 50% is well below the 80% (4/5) threshold the four-fifths rule requires, the selection rate for Black applicants is substantially lower than the selection rate for White applicants, and the tool’s results could be evidence of discrimination against Black applicants.
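For readers who want to check their own numbers, the arithmetic behind the EEOC’s example can be sketched in a few lines of Python. This is a minimal illustration of the selection-rate comparison only; the function and variable names are ours, not part of the guidance.

    def selection_rate(selected: int, applicants: int) -> float:
        """Fraction of applicants the tool advanced."""
        return selected / applicants

    # Counts from the EEOC's hypothetical example.
    white_rate = selection_rate(48, 80)   # 0.60 (60% advanced)
    black_rate = selection_rate(12, 40)   # 0.30 (30% advanced)

    # Divide the lower selection rate by the higher one.
    impact_ratio = black_rate / white_rate  # 0.50

    FOUR_FIFTHS = 0.8  # the four-fifths (80%) benchmark
    if impact_ratio < FOUR_FIFTHS:
        print(f"Impact ratio {impact_ratio:.0%} is below 80%: possible adverse impact")
    else:
        print(f"Impact ratio {impact_ratio:.0%} satisfies the four-fifths rule")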

However, compliance with the four-fifths rule does not automatically mean an employer has complied with Title VII. Courts treat the four-fifths rule as a general guideline and have acknowledged that it is not an appropriate metric for every situation. In addition to the four-fifths rule, courts also apply tests of statistical significance, which can produce different results than the four-fifths rule. As a best practice, employers should ask their third-party software vendors directly whether they relied on the four-fifths rule, statistical significance, or another standard when programming the AI used in the algorithmic decision-making tool.
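To illustrate how the two standards can be compared, the sketch below applies a standard two-proportion z-test, one common measure of statistical significance, to the same example numbers. This is our own illustration under conventional statistical assumptions (a two-sided 5% threshold, i.e., a z-value of 1.96), not a method prescribed by the EEOC or any court.

    from math import sqrt

    def two_proportion_z(selected_a, total_a, selected_b, total_b):
        """z-statistic for the difference between two selection rates."""
        rate_a = selected_a / total_a
        rate_b = selected_b / total_b
        pooled = (selected_a + selected_b) / (total_a + total_b)
        std_err = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
        return (rate_a - rate_b) / std_err

    # Same counts as the EEOC example above; z comes out to roughly 3.10.
    z = two_proportion_z(48, 80, 12, 40)
    print(f"z = {z:.2f}; |z| > 1.96 suggests the gap is statistically significant")

In this example the two standards agree, but with different applicant counts a gap can fail the four-fifths rule while lacking statistical significance, or vice versa, which is why the guidance and the case law treat neither measure as dispositive on its own.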

Once an employer becomes aware that an algorithmic decision-making tool has a disparate/adverse impact in violation of Title VII, the employer could be held liable for discrimination under Title VII if it does not stop using that tool or alter it in a way that removes the disparate/adverse impact.

Conduct Ongoing Review of AI Tools to Promote Equality and Avoid Title VII Violations
To continue ensuring equality in the workplace and to mitigate the risk of Title VII liability, the EEOC recommends employers conduct ongoing analysis of algorithmic decision-making tools to ensure the tools do not adversely impact individuals based on race, color, national origin, religion, or sex. If an employer discovers an algorithmic decision-making tool is adversely impacting individuals on the basis of race, color, national origin, religion, or sex, the employer should proactively adjust the tool as needed to remove the disparate/adverse impact moving forward.

For assistance in determining whether your selection procedures are compliant with Title VII, please contact Jennifer Craighead Carey, Tasha Stoltzfus Nankerville or any member of the Barley Snyder Employment Practice Group.

