Workers challenge ‘hidden’ AI hiring tools in class action with major regulatory stakes.

Workers are getting fed up with AI-based hiring practices.

A new class action lawsuit filed in California alleges that human candidates are being unfairly profiled by “hidden” AI hiring technologies that “lurk in the background” to collect “sensitive and often inaccurate” information about “unsuspecting” job applicants.

The suit specifically targets Eightfold AI, claiming that the company’s tools should be regulated in the same way credit reporting bureaus are under the Fair Credit Reporting Act (FCRA) and the state laws based on it.

The case could have far-reaching implications for the growing use of AI in hiring.

“This lawsuit is a pivot point,” said Sanchit Vir Gogia, chief analyst at Greyhound Research. “It tells us that AI isn’t just being scrutinized for what it does, but for how it does it and whether people even know it’s happening to them.”

Violating the 55-year-old FCRA

The suit was filed in the Superior Court of California by New York City-based law firm Outten & Golden LLP, on behalf of Erin Kistler and Sruti Bhaumik. The plaintiffs claim they were barred from employment on several occasions by companies using AI-based hiring tools.

The class action complaint asserts that Eightfold AI violated federal and state fair credit and consumer reporting acts and unfair competition laws by collecting data on applicants and selling reports to companies for use in employment decision-making. These practices “can have profound consequences” for job-seekers across the US, the lawsuit claims.

Eightfold markets itself as the “world’s largest, self-refreshing source of talent data” and incorporates more than 1.5 billion global data points, including job titles and worker profiles across “every job, profession, [and] industry.” It counts among its customers corporate giants including Microsoft, Morgan Stanley, Starbucks, BNY, PayPal, Chevron, and Bayer.

The suit claims the Santa Clara-based company’s proprietary large language model (LLM) and deep learning-based technology analyze data from public resources including career sites, job boards, and résumé databases such as LinkedIn and Crunchbase. It also culls information from social media profiles, applicant locations, and behind-the-scenes tracking tools. None of these personal data points is something candidates themselves include in their job applications.

AI algorithms then rank a candidate’s “suitability” on a numerical scale of 0 to 5, based on “conclusions, inferences, and assumptions” about their culture fit, projected future career trajectory, and other factors. This method is intended to create a profile of the candidate’s “behavior, attitudes, intelligence, aptitudes, and other characteristics,” according to the lawsuit.
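The complaint does not reveal how Eightfold’s model actually computes those scores. As a purely illustrative sketch of the kind of pipeline the suit describes, a hypothetical scorer might blend declared data with inferred attributes into a single 0-to-5 number; the field names, weights, and logic below are invented for illustration and do not come from Eightfold’s system.

```python
# Hypothetical illustration only: field names, weights, and logic are invented
# and do not describe Eightfold AI's actual model.
from dataclasses import dataclass


@dataclass
class CandidateProfile:
    resume_skills: set              # skills parsed from the application itself
    inferred_culture_fit: float     # 0.0-1.0, inferred rather than declared
    projected_trajectory: float     # 0.0-1.0, predicted future career growth
    location_match: float           # 0.0-1.0, derived from tracked location data


def suitability_report(profile: CandidateProfile, required_skills: set) -> dict:
    """Blend declared and inferred signals into a 0-5 suitability score."""
    skill_overlap = len(profile.resume_skills & required_skills) / max(len(required_skills), 1)
    raw = (0.4 * skill_overlap
           + 0.3 * profile.inferred_culture_fit
           + 0.2 * profile.projected_trajectory
           + 0.1 * profile.location_match)
    return {
        "score": round(raw * 5, 1),   # the 0-5 scale cited in the lawsuit
        "inputs": {                   # inferences the candidate never sees
            "skill_overlap": skill_overlap,
            "culture_fit": profile.inferred_culture_fit,
            "trajectory": profile.projected_trajectory,
            "location": profile.location_match,
        },
    }


# Example: score one candidate against a job's required skills
report = suitability_report(
    CandidateProfile({"python", "sql"}, 0.55, 0.7, 0.9),
    required_skills={"python", "sql", "spark"},
)
```

In the workflow the lawsuit describes, a report of this kind is then passed along to the hiring company rather than to the candidate.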

However, these reports are “unreviewable” and “largely invisible” to candidates, who have no opportunity to dispute their contents before they are passed on to hiring managers, the plaintiffs argue. “Lower-ranked candidates are often discarded before a human being ever looks at their application.”

This method of report creation violates longstanding FCRA requirements, and there is no stipulated exemption for AI use, according to the suit.

The FCRA broadly defines consumer reports as any written, oral, or other communication from a consumer reporting agency that includes information on a person and is used to determine their eligibility for credit or insurance, or for “employment purposes.” According to the lawsuit, this definition covers reports that contain information on “habits, morals, and life experiences.”

Plaintiffs argue that, while automated screening technology did not exist when the FCRA was established in 1970, lawmakers at the time expressed concern about growing accessibility to consumer information through computer and data-transmission techniques, and that “impersonal blips,” inaccurate data, and analysis by “stolid and unthinking machines” could unfairly bar people from employment.

Thus, the lawsuit argues, agencies like Eightfold must disclose their practices, obtain certifications, and give consumers a mechanism to review and correct reports. “Large-scale decision-making based on opaque information is exactly the kind of harm the statute was designed to address.”

Neither the lawyers for the plaintiffs nor those for the defendants responded to requests for comment. The Society for Human Resource Management (SHRM) also declined to comment.

Defensibility becomes the new bar

This lawsuit exposes a “governance failure” and “fundamental accountability gap,” noted Greyhound’s Gogia.

And it’s not the first, nor will it likely be the last; HR software maker Workday, for instance, is facing a lawsuit alleging that its AI-powered hiring tools make decisions based on race and discriminate against older and disabled applicants.

If courts agree that AI evaluations function like credit reports, hiring will be pushed into regulated territory, Gogia noted; this means CIOs must establish clarity and set rules around notification, transparency, audit rights, and contestability.

“If your hiring tools operate like decision engines, they need to be governed like decision infrastructure,” he said. And when those tools influence employment decisions, enterprises will have to prove they’ve done their homework: showing the logic behind a model, understanding data provenance, explaining why an applicant was rejected, and documenting the processes in place to correct bad calls.

“Defensibility will become the new bar,” said Gogia.

Where AI hiring helps, where it hurts

That’s not to say that AI can’t be valuable in hiring; many real-world examples have proven that it can. The Human Resources Professionals Association, for one, points to successful use of AI in initial talent sourcing, screening, and assessment, while AI scribes can quietly take notes, helping recruiters focus more intently on candidate discussions.

Gogia agreed that AI can filter and rank large applicant pools, automate repetitive HR tasks, and identify overlooked candidates within internal databases. This means hiring teams can move faster, hone their focus, be more consistent, and reduce friction.

“But the moment AI moves into judgment territory, things get messy,” he emphasized. Scoring personality traits, predicting future roles, or evaluating the quality of a candidate’s education are all “subjective inferences dressed up as mathematical objectivity.”

Gogia advises clients to insist on human-readable evidence from vendors, including logs, bias audits, and disclosures about model updates. They should ask questions like: What did the model evaluate? Why did it rank one candidate higher than another? What can the hiring manager say if asked to justify that outcome?

The answers to those questions can lead to process changes. One of Greyhound’s European manufacturing clients, for instance, redesigned its hiring pipeline so that managers had to log a rationale at every decision point, even if AI had already created a shortlist. The change improved the audit trail, helped catch errors, and taught the team to “treat AI as input, not verdict,” Gogia noted. Another client slowed its final screening process for senior hires because it couldn’t defend the decisions AI was influencing and realized the system wouldn’t survive scrutiny.
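For teams facing the same gap, a minimal sketch of that “AI as input, not verdict” pattern might look like the following: each decision point records the model’s output alongside a mandatory human rationale, so the outcome can later be explained and audited. The record fields and function names here are illustrative assumptions, not drawn from any vendor’s product or Greyhound’s client work.

```python
# Illustrative sketch: pair every AI-informed hiring decision with a required
# human rationale in an append-only audit log. Schema is hypothetical.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class HiringDecisionRecord:
    candidate_id: str
    stage: str             # e.g. "shortlist", "final screen"
    ai_score: float        # model output that informed the decision
    ai_inputs: dict        # provenance: which signals the model evaluated
    human_decision: str    # "advance" or "reject"
    human_rationale: str   # required free-text justification
    decided_by: str
    decided_at: str


def log_decision(record: HiringDecisionRecord, audit_log: list) -> None:
    """Refuse the entry unless a human rationale is present, then append it."""
    if not record.human_rationale.strip():
        raise ValueError("A human rationale is required at every decision point")
    audit_log.append(asdict(record))


audit_log: list = []
log_decision(HiringDecisionRecord(
    candidate_id="C-1042",
    stage="shortlist",
    ai_score=3.8,
    ai_inputs={"skill_overlap": 0.7, "culture_fit": 0.6},
    human_decision="advance",
    human_rationale="Strong platform experience; score undervalues recent role change.",
    decided_by="hiring-manager-17",
    decided_at=datetime.now(timezone.utc).isoformat(),
), audit_log)
print(json.dumps(audit_log, indent=2))
```

The point is not this particular schema but that the human sign-off and its reasoning are captured at the same moment the AI output is used, which is what makes the decision defensible later.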

“CIOs, CHROs, legal, risk — all need to co-own this now,” said Gogia. “That starts by restoring the human’s role as an accountable actor, not just a passive observer. The future of hiring tech is human with machine, governed from day one.”

This article originally appeared on Computerworld.