The Buck Stops with You: Artificial Intelligence, Employment, & Title VII of the Civil Rights Act


"I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do." – HAL 9000, 2001: A Space Odyssey

The Rise of Artificial Intelligence in the Workplace

Computers are nothing short of marvelous, if not yet conscious. They represent the apogee of modern life, and perhaps more specifically, modern American life: the quest for the perfect labor-saving device. From washing machines to lawnmowers, from tractors to the smartphone that's probably within reach as you read this, devices that used to be considered luxuries have become necessities. So we shouldn’t be surprised that our continual hunt for methods, tools, and technology to make our lives easier and more productive has found its way into the workplace.

One such tool is known as "artificial intelligence" ("A.I."), which, like any tool, is a good thing … until it's not. And it's not when it causes employers to violate "fair employment practice" ("FEP") laws such as Title VII of the Civil Rights Act of 1964 ("Title VII"). Title VII is the federal statute intended to ensure that employers don't unlawfully discriminate among job applicants and employees based on any of several prohibited criteria.

Understanding Fair Employment Practice Laws

Title VII, which generally applies to employers with 15 or more employees, prohibits employers from making employment-related decisions that affect the terms or conditions of employment of applicants or employees on the basis of race, color, religion, sex (which now encompasses biological sex, pregnancy, sexual orientation, and gender identity), or national origin. The statute is enforced (at the administrative level) by the federal Equal Employment Opportunity Commission ("EEOC"). Similar FEP laws prohibit employment-related discrimination based on, for example, age (40 years or older), disability, and certain kinds of "genetic information."

But here's the thing: Title VII and other FEP laws can be (and often are) violated regardless of whether the employer intended to make a job-related distinction based on any of the prohibited criteria. That arresting notion stems from the fact that such laws prohibit both disparate treatment and disparate impact. Disparate treatment is intentional discrimination based on a prohibited criterion (think "Russians need not apply"). Disparate impact is discrimination that imposes a measurable effect on applicants and employees who share a common characteristic (such as a specified sex), and it stems from application of a criterion that tends to favor one group at the expense of another (think "employees must be able to lift at least 75 pounds," which may tend to screen out more women than men, regardless of whether lifting that much is among the essential functions of the job).

Disparate impact, in other words, can result from application of what appears to be a neutral practice or selection device that has a disproportionate impact on a protected group.

A.I. can be such a device.

The Power and Complexity of Artificial Intelligence

Few concepts are as poorly understood as A.I. That leads to confusion and skepticism as to what it is, how it works, and how to use it. Part of the problem is the lack of consensus about a definition. Simply put, A.I. is the ability of a computer or software to perform cognitive functions that are normally associated with humans, such as perceiving, reasoning, learning, interacting with an environment, problem-solving, and even creativity. You've probably interacted with A.I. without even knowing it. Amazon's Alexa, Apple's Siri and even some chatbots used for customer service are based on A.I. technology.

With the rise of generative A.I. models such as ChatGPT and DALL-E (which generates images), A.I. tools have become common household names. Businesses are also realizing that nearly all industries can benefit from A.I., which can help with, for example, workflow automation, cybersecurity (by continuously monitoring and analyzing network traffic), reduction of human error, elimination of repetitive tasks, research and development, customer service, and resource management.

Businesses are especially interested in a species of A.I. known as "machine learning," in which software processes input data, with minimal human intervention, to produce a new output value. But between input and output lie multiple hidden layers of processing, association, and categorization that the user cannot even perceive. Such opacity can easily obscure processing, association, and categorization that are (or may at least seem to be) biased in favor of, or prejudiced against, certain applicants and/or employees, and thus unlawfully discriminatory.
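To make that opacity concrete, consider a deliberately tiny, hypothetical sketch in Python. The resumes, words, and hiring outcomes below are invented, and real machine-learning systems are vastly more complex; but even this toy scorer, "trained" on past hiring outcomes, quietly assigns a negative weight to a word that acts as a proxy for sex, though no one programmed it to do so:

    # Hypothetical illustration only: a toy resume scorer that "learns" from
    # historical hiring data. All resumes, words, and outcomes are invented.
    from collections import defaultdict

    # Past resumes (as sets of words) and whether each applicant was hired.
    # The history reflects a workforce that skewed male.
    history = [
        ({"python", "rugby"}, True),
        ({"java", "chess"}, True),
        ({"python", "golf"}, True),
        ({"python", "women's", "chess"}, False),
        ({"java", "women's", "soccer"}, False),
    ]

    # "Training": each word's weight is how much more (or less) often resumes
    # containing it were hired, relative to the overall hire rate. No one
    # programs bias in; the model simply mirrors the historical pattern.
    overall = sum(hired for _, hired in history) / len(history)
    counts, hires = defaultdict(int), defaultdict(int)
    for words, hired in history:
        for word in words:
            counts[word] += 1
            hires[word] += hired
    weights = {word: hires[word] / counts[word] - overall for word in counts}

    def score(resume_words):
        return sum(weights.get(word, 0.0) for word in resume_words)

    # "women's" ends up with a negative weight, so an otherwise identical
    # resume scores lower if it mentions, say, a women's chess club.
    print(score({"python", "chess"}))             # about -0.03
    print(score({"python", "women's", "chess"}))  # about -0.63

Note that no rule anywhere says "prefer men"; the bias lives only in the learned weights. Multiply that by thousands of features and hidden layers, and it becomes apparent why such effects are hard to detect without deliberate auditing.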

A.I. in Employment Decision-Making

A.I. has emerged as a valuable tool to assist businesses in making employment decisions such as hiring, promotion, and dismissal of employees. Employers are increasingly relying, in the course of making such decisions, on software that incorporates algorithmic decision-making, such as resume scanners that recommend applications that include certain keywords; employee-monitoring software that rates employees based on various factors; virtual assistants or chatbots that ask job candidates about their qualifications and reject those who fail to meet certain requirements; video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and testing software that provides “job fit” scores for applicants or employees. All such software may use A.I., whether obviously or not.
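To illustrate just how mechanical such screening can be, here is a bare-bones, hypothetical sketch in Python of the first tool in that list, a keyword resume scanner (the keywords and resumes are invented):

    # Hypothetical, bare-bones keyword resume scanner of the kind described
    # above: it advances applications containing the required keywords and
    # rejects the rest, with no human review of individual resumes.
    import re

    REQUIRED_KEYWORDS = {"python", "sql"}  # invented job requirements

    def screen(resume_text: str) -> bool:
        words = set(re.findall(r"[a-z]+", resume_text.lower()))
        return REQUIRED_KEYWORDS.issubset(words)

    applicants = {
        "Applicant A": "Python and SQL developer with five years of experience",
        "Applicant B": "Data analyst, strong SQL and statistics, returning after a career break",
    }
    for name, resume in applicants.items():
        print(name, "->", "advance" if screen(resume) else "reject")

Even a scanner this simple can reject qualified people with no human review; the A.I.-driven tools described above make far subtler judgments, which makes their effects correspondingly harder to see.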

A.I. may seem miraculous, but it comes with a catch. Most employers presumably take measures to avoid unlawful disparate treatment and disparate impact, yet the use of A.I. in employment decisions raises a thorny question: How can employers monitor the effects of hidden layers of data-processing that may expedite time-sensitive personnel decisions but may also cause unintended disparate impact? That's not a hypothetical problem, as Amazon learned during trials it conducted with a potential A.I. tool.*

Amazon was exploring an A.I. tool, on a trial basis, that would learn from Amazon's past hires to review applicants' resumes and recommend the most promising candidates. However, the tool displayed a prejudice against female candidates. The A.I. engine was trained to vet applicants by observing patterns in resumes submitted to the company over the previous ten years, which, in the male-dominated tech industry, had come largely from … male candidates.

Amazon understandably sought promising traits, but the A.I. engine learned that male candidates were preferred. Fortunately, through human supervision of the tool, Amazon identified during its internal trials that the tool was not evaluating candidates in the manner intended, and it abandoned the tool before any roll-out beyond the trial phase. Its experience illustrates that machine learning can be unpredictable and can result in a disparate impact on applicants or employees on the basis of race, color, religion, sex, or national origin.

EEOC Guidance on Assessing Adverse Impact

The EEOC, in an attempt to help employers avoid violations of Title VII caused by A.I. tools, has released guidance for assessing adverse impact brought about by the use of software, algorithms, and A.I. in employment decision-making ("EEOC Guidance"). The EEOC Guidance focuses on how A.I. can give rise to disparate impact under Title VII and seeks to educate employers about the risks of using A.I. in employment decisions. It does not prohibit the use of A.I.; rather, it warns employers of the disparate impact under Title VII that can arise if A.I.-driven decisions and recommendations are not carefully monitored and assessed.
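Among other things, the EEOC Guidance discusses the long-standing "four-fifths rule" as a general rule of thumb for spotting adverse impact: if one group's selection rate is less than 80% of the most-favored group's rate, the disparity may indicate adverse impact. Here is a minimal sketch, using invented applicant counts, of how an employer might run that check on an A.I. tool's output:

    # Minimal sketch of the four-fifths rule of thumb: compare each group's
    # selection rate to the most-favored group's rate and flag ratios below
    # 80%. The applicant counts below are invented for illustration.

    def four_fifths_check(outcomes, threshold=0.80):
        """outcomes maps group -> (number selected, number of applicants)."""
        rates = {group: sel / total for group, (sel, total) in outcomes.items()}
        best = max(rates.values())
        return {group: (rate / best, rate / best < threshold)
                for group, rate in rates.items()}

    # Hypothetical output of an A.I. screening tool:
    outcomes = {"men": (48, 80), "women": (24, 60)}
    for group, (ratio, flagged) in four_fifths_check(outcomes).items():
        note = "  <-- possible adverse impact" if flagged else ""
        print(f"{group}: selection-rate ratio {ratio:.2f}{note}")

The Guidance cautions that the four-fifths rule is merely a rule of thumb, not a legal safe harbor, so even a passing ratio does not relieve an employer of the need to monitor its tools.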

This is not a hypothetical point. Just ask "iTutorGroup," three integrated companies providing English-language tutoring services to students in China, which the EEOC sued in May 2022 because the companies allegedly programmed their "tutor application software" to automatically reject female applicants age 55 or older and male applicants age 60 or older. The EEOC claimed that iTutorGroup rejected more than 200 qualified applicants based in the U.S. because of their age.

The employer denies the allegations, but it apparently entered into a voluntary settlement agreement with the EEOC last week in which it agreed to pay $365,000 to more than 200 rejected applicants. That's an expensive denial.

Using A.I. Responsibly: Advice for Employers

The lesson: A.I. is an amazing tool. But, like any complex tool, it must be used with caution. An employer that uses A.I. would be well advised to study the EEOC's guidance on the subject and, when needed, get good legal advice, lest the employer become not just A.I.'s user but its victim as well.

Ward and Smith labor and employment and technology attorneys are available to provide guidance for employers as they navigate the increasingly complex landscape of A.I.-related employment decisions. We understand that A.I. is emerging rapidly and can give employers advice on using it responsibly and avoiding potential pitfalls. If you have questions about the use of A.I., contact us today to learn more about how we can help you and your organization.

*Ed. Note: Amazon reached out to us to provide context around its trial use of A.I. The article has been amended to add that material. Updated 9/6/23

--
© 2024 Ward and Smith, P.A. For further information regarding the issues described above, please contact Grant B. Osborne or Mayukh Sircar, CIPP/US.

This article is not intended to give, and should not be relied upon for, legal advice in any particular circumstance or fact situation. No action should be taken in reliance upon the information contained in this article without obtaining the advice of an attorney.

