Some may have predicted that one day job seekers would apply for jobs with the click of a button. Others may have anticipated that employers would turn to automated tools to manage the resulting flood of applications. And for several years now, AI-driven hiring systems have faced growing legal scrutiny—particularly around discrimination and transparency. But in Kistler and Bhaumik v. Eightfold AI Inc., a putative class action filed in California state court, plaintiffs have taken a more direct step: attempting to frame an AI-generated “Match Score” used to rank job candidates as a “consumer report” under the Fair Credit Reporting Act (FCRA), a statute Congress enacted in 1970—long before algorithmic hiring tools were conceivable.
The FCRA: Accuracy and Fairness for Credit Reports
The FCRA governs how consumer credit information is collected, reported, and used. Intended to promote accuracy, fairness, and privacy in credit reporting, it applies to credit bureaus as well as companies like lenders, fintech platforms, employers, landlords, and background check providers that obtain or supply consumer reports.
At its core, the FCRA requires companies to treat credit information carefully and use it responsibly. That means making sure the information is accurate before relying on it, giving people a meaningful chance to review and correct errors, and not using a credit or background report to deny someone a job, loan, or housing without first telling them and explaining their rights. When those procedural safeguards are not followed, litigation risk arises.
The Eightfold Complaint
Plaintiffs allege that Eightfold, an AI-powered recruiting platform, offers tools that algorithmically generate a “Match Score” for each job applicant and then rank candidates by those scores. Employers can use the scores and rankings to assess candidates’ predicted fit and “likelihood of success” for a given position.
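For readers less familiar with how algorithmic scoring and ranking works mechanically, the sketch below illustrates the general pattern in simplified Python. It is purely hypothetical: the complaint does not describe Eightfold’s features, weights, or model, so every field name, weight, and formula here is an assumption for exposition only.

    # Hypothetical sketch only: the complaint does not disclose Eightfold's actual
    # features, weights, or scoring logic, so everything below is illustrative.
    from dataclasses import dataclass

    @dataclass
    class Applicant:
        name: str
        years_experience: float
        skill_overlap: float     # assumed 0-1 fraction of required skills matched
        title_similarity: float  # assumed 0-1 similarity to the posted role

    def match_score(a: Applicant) -> float:
        """Combine applicant features into a single 0-100 score."""
        # Arbitrary illustrative weights; a real system would learn these from data.
        raw = (0.5 * a.skill_overlap
               + 0.3 * a.title_similarity
               + 0.2 * min(a.years_experience / 10, 1.0))
        return round(100 * raw, 1)

    def rank_candidates(applicants: list[Applicant]) -> list[tuple[str, float]]:
        """Return applicants ordered from highest to lowest score."""
        scored = [(a.name, match_score(a)) for a in applicants]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

    if __name__ == "__main__":
        pool = [
            Applicant("A. Rivera", 6, 0.8, 0.7),
            Applicant("B. Chen", 2, 0.9, 0.5),
            Applicant("C. Okafor", 12, 0.6, 0.9),
        ]
        for name, score in rank_candidates(pool):
            print(f"{name}: {score}")

A production system would, of course, derive its weights from data and draw on far richer inputs; the point is only that a single numeric output is used to order applicants.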
Plaintiffs argue that Eightfold is a “consumer reporting agency” under the FCRA because it assembles and evaluates information about job applicants and delivers that analysis in “consumer reports”—the Match Score and rankings—to employers for use in hiring decisions. In their view, that makes the platform a business that furnishes “consumer reports” bearing on an applicant’s character or personal characteristics for employment purposes, triggering the statute’s compliance obligations.
They claim that Eightfold violated the FCRA by providing “consumer reports” for hiring decisions without complying with the statute’s gatekeeping requirements. Specifically, they allege Eightfold didn’t obtain the required certifications from employer clients confirming that those employers provided proper disclosures, obtained written authorization, and would observe a legally mandated pause before rejecting candidates based on a covered report. They also contend that Eightfold didn’t provide required notices and take other statutory steps designed to prevent misuse of reports for employment purposes.
The complaint looks to recent federal guidance on AI in hiring. It points to an FTC blog post warning that companies offering background screening products—and the employers that use them—must comply with the FCRA when those tools influence employment decisions. It also cites CFPB Circular 2024-06, which explains that so-called “background dossiers” and algorithmic scores used in hiring can qualify as consumer reports, potentially triggering the statute’s requirements.
Does This Actually Involve “Consumer Reports”?
It’s unclear whether Plaintiffs’ theory will hold: that generating and transmitting a Match Score based on job applicant data constitutes preparing a consumer report. Courts will likely examine whether platforms like Eightfold function more like employer-facing analytics tools than traditional consumer reporting agencies. If the inputs for Eightfold’s algorithm consist largely of employer-provided data, or if the system operates as an integrated part of the employer’s decision-making infrastructure, courts could conclude the technology is not materially different from internal scoring or decision-support tools. On the other hand, if the inputs are largely composed of applicant-provided data—not employer-generated data—job seekers could be deemed to be in control of what the algorithm screens to generate Match Scores.
Compliance Consequences If The Theory Gains Traction
If a court did accept the premise that AI hiring platforms qualify as consumer reporting agencies, the compliance implications would be significant. Vendors could be required to obtain employer certifications under 15 U.S.C. § 1681b(b)(1), implement reasonable procedures to assure maximum possible accuracy, and establish dispute and reinvestigation processes. Employers, in turn, would need to treat algorithmic scores as consumer reports, potentially triggering standalone disclosure and authorization requirements, as well as pre-adverse action and adverse action notice obligations.
Such a ruling would effectively extend FCRA compliance beyond traditional background screening to encompass predictive hiring technologies that many organizations have not historically treated as regulated consumer reporting activity. A handful of employment-specific state AI laws already aim to ensure fairness and transparency in hiring processes, but none imposes compliance obligations approaching what the FCRA would require. For organizations deploying AI-driven hiring tools, the operational lift could be substantial.
Predictive “Scores” and Statutory Fit
The complaint’s emphasis on a numerical “match score” predicting likelihood of success is also noteworthy. Courts have long held that investigative consumer reports, which may include interviews, opinions, and reputation-based information, fall squarely within FCRA coverage. And evaluative and predictive reports bearing on employment eligibility have, in some circumstances, fallen within the statutory definition of a consumer report. The fact that the output is generated algorithmically does not, by itself, remove it from the FCRA’s scope.
The key inquiry will likely be whether the score is used to determine eligibility for employment and whether it is furnished by a third party in a manner consistent with the statute’s structure. If so, courts may be asked to apply existing FCRA principles to a technologically updated form of evaluative reporting.
Litigation Strategy and Broader Trends
The lawsuit also reflects a broader litigation strategy: rather than relying on emerging AI-specific regulatory regimes, plaintiffs are testing established statutes against novel technologies. The FCRA offers defined statutory damages, fee shifting, and a mature body of case law, making it an attractive vehicle for challenging AI-driven employment practices. By framing algorithmic hiring tools as consumer reporting activity, plaintiffs can situate AI within a familiar compliance framework while leveraging remedies that may be more predictable than those under newer or developing regulatory schemes.
This approach suggests that even absent comprehensive AI legislation, existing consumer protection statutes may continue to serve as vehicles for scrutinizing automated decision-making systems.