The Equal Employment Opportunity Commission (“EEOC”) recently released a technical assistance document titled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964” (the “Guidance”). The Guidance is part of an EEOC initiative to ensure that software and other technologies comply with federal civil rights laws,[1] and addresses the concern that AI tools used by employers can reproduce many of the same human biases they purport to avoid.
Disparate Impact Discrimination
Disparate impact discrimination occurs when an outwardly neutral policy or practice has the effect of disproportionately excluding persons based on a protected characteristic (e.g., race, color, religion, sex, national origin), even absent any discriminatory intent.
The Guidance urges employers to follow the Uniform Guidelines on Employee Selection Procedures to assess the potential disparate impact of AI tools when employers use them to “hire, promote, terminate, or take similar actions toward applicants or current employees.” While most employers are not intentionally configuring technology to select or exclude people based on protected characteristics, even facially neutral automated decision-making software can disproportionately exclude people of a particular race, sex, or other protected group. For example, minimum height requirements and physical strength tests can have a disparate impact on women, and an algorithm that sorts applicants by their zip code’s proximity to the employer’s offices could inadvertently exclude applicants of a particular race who disproportionately live in more distant neighborhoods. An employer’s use of the following tools can raise issues under Title VII (a sketch of the adverse-impact arithmetic follows the list below):
- Résumé screening software;
- Chatbots for hiring and workflow;
- Employee monitoring and analytics software; and
- Video interviewing software.
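Under the Uniform Guidelines, adverse impact is commonly screened with the “four-fifths rule”: if one group’s selection rate is less than 80% of the rate of the group with the highest rate, that is generally regarded as preliminary evidence of adverse impact (the Guidance treats this as a rule of thumb rather than a definitive test). The following Python sketch illustrates the underlying arithmetic; the group labels and applicant counts are hypothetical.

```python
def selection_rates(outcomes):
    """Map each group to its selection rate. outcomes: {group: (selected, total)}."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Compare each group's selection rate to that of the highest-rate group.
    Under the four-fifths rule of thumb, a ratio below 0.8 is generally
    regarded as evidence of adverse impact."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical resume-screening outcomes: (applicants advanced, total applicants)
    outcomes = {"Group A": (48, 80), "Group B": (12, 30)}
    for group, ratio in impact_ratios(outcomes).items():
        status = "potential adverse impact" if ratio < 0.8 else "within four-fifths"
        print(f"{group}: selection ratio {ratio:.2f} ({status})")
```

A ratio below 0.8 is a screening signal, not a legal conclusion; small sample sizes and other statistical considerations can change the analysis.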
Employer Liability
The Guidance confirms that employers typically will be responsible for any discriminatory results produced by software they license from third parties, regardless of who developed the AI. Accordingly, the EEOC specifically advises employers to ask vendors whether a tool has been tested for disparate impact and whether it has been validated as a reliable means of measuring job-related characteristics or performance.
The EEOC “encourages” employers to conduct “self-analyses on an ongoing basis” to “determine whether employment practices have a disproportionate negative effect on a basis prohibited under Title VII or treat protected groups differently.” The Guidance also warns that “[f]ailure to adopt a less discriminatory algorithm that was considered during the development process . . . may give rise to liability.”
Key Takeaways
- Employers who use or are considering using an automated decision-making tool should carefully evaluate the tool for accuracy and bias, and ensure its results do not violate Title VII or any other civil rights laws.
- Employers should understand how they use AI in employment decisions and how those uses implicate Title VII and a complicated web of related state and local laws.
- Employers should take stock of any AI tools used in employment decision-making and consider assessing those tools for adverse impact. If they identify an adverse impact and continue to use a tool, they should conduct validity studies demonstrating a legitimate, job-related basis for the selection procedure.
- Employers should continue to analyze their use of automated decision-making tools and understand applicable recordkeeping requirements.[2]
- Employers who license automated decision-making systems should review their contracts and diligence processes to ensure they understand how the AI developer tested and validated its tools for accuracy and bias, and how liability will be allocated if claims of alleged discrimination arise.
The Guidance suggests that the EEOC will effectively impose on employers a duty to seek out “less discriminatory” algorithms when selecting and developing these tools. Conducting these analyses with the assistance and advice of counsel may offer some protection from potential liability. Employers may wish to consult outside counsel for an independent bias audit of these tools and to discuss what other measures may be necessary in this evolving area. Should you have any questions about the use of AI in employment decisions, please feel free to contact any member of our Labor and Employment team.
The information contained in this document does not constitute legal advice.
[1] The agency previously issued similar guidance on complying with the Americans with Disabilities Act when using such tools.
[2] This is especially important for federal contractors and companies with 100+ employees.