On May 16, 2025, United States District Judge Rita Lin granted a motion for conditional certification of a collective action in Mobley v. Workday, Inc. The plaintiff alleges age, race, and disability discrimination arising from Workday's AI-driven human resources tools, which generate hiring recommendations. Similar allegations of bias embedded in AI-driven decisions have been made in the insurance industry, involving wrongful coverage denials and discrimination in underwriting and claims processing. At the core of these cases is whether reliance on AI-based decision making constitutes a common policy sufficient to render putative class members similarly situated for purposes of class certification.
AI Decision-Making and Discrimination Risk
Organizations routinely make decisions that affect prospective and current employees or customers. A recurring concern, and a frequent source of litigation, is disparate treatment of members of protected classes, defined by characteristics such as age, race, or gender, in decisions including admissions, underwriting, coverage, hiring, promotion, compensation, and termination. Reliance on AI-based algorithmic decision making is often intended to reduce subjective bias. Recent litigation, however, underscores that AI tools may not be bias-free and may incorporate, and potentially even accentuate, biases embedded in the historical data on which these models are trained.
In Mobley v. Workday, Inc., the plaintiff claims that Workday's AI-based screening algorithm disproportionately disqualified applicants over 40 years old from employment opportunities. Workday operates a two-sided platform through which job candidates submit applications and employers collect, process, and screen them. The plaintiff alleges that the algorithms were trained only on data from incumbent employees, a homogenous workforce that was not representative of the applicant pool, and as a result discriminated against applicants over 40. In granting preliminary certification, Judge Lin identified Workday's algorithmic decision-making tools as scoring applications in a discriminatory manner based on age. Judge Lin further concluded that members of the proposed class were similarly situated because all applicants were allegedly subject to the same common policy, in this case the same algorithmic decision-making tools, regardless of any disparate impact across claimants. Judge Lin cautioned, however, that this preliminary decision did not preclude the possibility that AI recommendations are in fact the result of individual employer preferences, in which case an AI-based common policy would not be identifiable.
Why AI Systems May Not Constitute a Common Policy
This qualification highlights a critical question: are outcomes of interest attributable to individual human actions or common AI rules? Several factors suggest that AI-based hiring systems might not result in uniform policies.
First, even though predictive statistical models and algorithms provide information about objective metrics of performance, in practice these objective metrics are often supplemented with human input to assess subjective aspects of an application, such as cultural fit or interpersonal skills. In such cases, final decisions may not conform to a common policy and are better understood as the product of both model-driven analysis and a layer of human oversight involving individualized evaluation of a person's attributes. Whether a common policy exists ultimately depends on how these screening systems are used. For example, if a recommendation system automatically rejects applicants above a certain age, with no human review, those applicants may plausibly be subject to a common policy.
Second, AI-based decision making may involve employer-specific customization, with parameters defined on a case-by-case basis, in which case the AI system does not operate in a uniform way across employers. For instance, screening models may be trained on employer-specific historical hiring data and tailored to each employer's criteria for a successful applicant. In such circumstances, allegations of disparate outcomes may not be susceptible to common proof.
Third, AI models evolve over time, which cuts against the existence of a single, static policy. A defining characteristic of AI models is their reliance on feedback loops as a basis to learn and improve decision making over time: performance data provides the basis for re-training and refining the algorithm. Feedback loops, however, do not necessarily self-correct. For example, if an algorithm is trained on historical data showing that a company often hires candidates under 40 years old, the algorithm may learn that bias and amplify it through successive, self-reinforcing recommendations in favor of younger hires, as illustrated in the sketch below.
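The simulation below is a minimal, purely hypothetical sketch of this dynamic. A stylized screening rule is re-trained each round on the age mix of its own prior hires, and an initial skew toward under-40 hires grows rather than corrects itself. The applicant pool, scoring rule, hiring rate, and feedback coefficient are all assumptions made for illustration and do not describe any actual system.

import numpy as np

rng = np.random.default_rng(0)

def simulate_rounds(n_rounds=5, n_applicants=10_000, hired_under40_share=0.60):
    # Assumption: the applicant pool is evenly split by age group, but
    # historical hires start out 60% under 40.
    shares = [hired_under40_share]
    for _ in range(n_rounds):
        under40 = rng.random(n_applicants) < 0.5
        # Stylized "model" learned from past hires: applicants who resemble the
        # (younger-skewed) historical hired population receive a score bonus.
        bonus = 0.3 * (hired_under40_share - 0.5)
        score = rng.random(n_applicants) + np.where(under40, bonus, -bonus)
        hired = score > np.quantile(score, 0.90)     # top 10% receive offers
        hired_under40_share = under40[hired].mean()  # next round re-trains on these hires
        shares.append(hired_under40_share)
    return shares

print([round(float(s), 2) for s in simulate_rounds()])
# Typical output: the under-40 share of hires drifts upward each round
# (roughly 0.60, 0.65, 0.72, and higher), a self-reinforcing loop rather than self-correction.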
Economic Evidence in AI-Based Discrimination Claims
For these reasons, AI systems do not necessarily impose common or uniform policies, and the existence of common impact and harm becomes an empirical question. Generally, economists assess allegations of discrimination by analyzing observed hiring recommendations and determining the impact of an applicant's characteristics on the likelihood of receiving a positive recommendation. A sharp difference in recommendation rates for applicants above versus below 40 years old, for example, may indicate age-based discrimination.
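As a simple illustration of this first-pass comparison, the sketch below uses hypothetical counts of applicants and positive recommendations by age group. It computes each group's recommendation rate, the ratio between the two rates (often compared against a conventional four-fifths benchmark for adverse impact), and a two-proportion z-test of whether the difference could plausibly be due to chance. All figures are invented for illustration.

import math

# Hypothetical counts, for illustration only.
n_over40, rec_over40 = 4_000, 240      # 6.0% receive a positive recommendation
n_under40, rec_under40 = 6_000, 600    # 10.0% receive a positive recommendation

p_over40 = rec_over40 / n_over40
p_under40 = rec_under40 / n_under40

# Selection-rate ratio; values below 0.8 are a conventional adverse-impact red flag.
ratio = p_over40 / p_under40

# Two-proportion z-test of the hypothesis that the recommendation rates are equal.
p_pool = (rec_over40 + rec_under40) / (n_over40 + n_under40)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_over40 + 1 / n_under40))
z = (p_over40 - p_under40) / se

print(f"rate 40+: {p_over40:.1%}  rate <40: {p_under40:.1%}  ratio: {ratio:.2f}  z: {z:.1f}")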
In the context of AI-based decision making, an economist may go beyond these conventional analyses of discrimination. For example, if an economist has access to the parameters and architecture of a machine learning model, they can simulate applicant profiles, vary a protected characteristic while holding everything else fixed, and observe how recommendations change. This method allows economists to interrogate the algorithm directly, rather than infer discrimination indirectly from outcomes.
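The sketch below illustrates this kind of counterfactual probing under stated assumptions. The scoring function score_model is a hypothetical stand-in used only to make the example runnable; in an actual matter it would be replaced by the produced model itself. Each simulated profile is scored twice, once as submitted and once with age set below 40, and the share of profiles whose recommendation flips is reported.

import numpy as np

rng = np.random.default_rng(1)

def score_model(years_experience, skills_match, age):
    # Hypothetical scoring rule used only to make the sketch runnable; in an
    # actual analysis this would be the model produced in discovery.
    return 0.05 * years_experience + 0.6 * skills_match - 0.01 * max(age - 40, 0)

def share_of_flips(profiles, threshold=0.75):
    flips = 0
    for p in profiles:
        base = score_model(p["years_experience"], p["skills_match"], p["age"])
        # Same profile, with age set below 40 and everything else held fixed.
        counterfactual = score_model(p["years_experience"], p["skills_match"], 35)
        flips += (base <= threshold) and (counterfactual > threshold)
    return flips / len(profiles)

# Simulated applicant profiles (all values are invented for illustration).
profiles = [{"years_experience": int(rng.integers(1, 21)),
             "skills_match": float(rng.random()),
             "age": int(rng.integers(22, 66))} for _ in range(10_000)]

print(f"Share of profiles whose recommendation flips when age is set to 35: "
      f"{share_of_flips(profiles):.1%}")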
Despite its advantages, this approach presents challenges. In the case of the Workday platform, if every employer has trained its own hiring recommendation system, and if these models are continually re-trained on new data, it may be impossible to reconstruct the precise version of the system that generated each recommendation. Moreover, producing such data places substantial burdens on defendants, who may resist disclosure of their proprietary algorithms.
Implications for Class-Wide AI Discrimination Claims
In summary, the use of AI in decision making does not necessarily imply a uniform policy, particularly where human oversight or individualized input is involved. Whether AI-driven outcomes reflect systemic bias remains an empirical question. As class-wide AI-related claims continue to emerge, economists have an expanded set of tools to determine whether there is evidence of systemic discrimination.
Dr. Stuart Gurrea is a Managing Director at Secretariat and has offered testimony in Federal Court about the statistical identification of discriminatory practices. Dr. Nicolas Suarez is an Economist at Secretariat and has developed machine learning models to quantify economic impacts.
If you would like to discuss the issues raised in this article, please reach out to Dr. Stuart Gurrea or Dr. Nicolas Suarez to explore potential implications for your organization.