As Artificial Intelligence (AI) and automated technologies play a growing role in employment decisions and consumer data processing, California has introduced two major regulatory frameworks that will significantly affect how businesses operate in these areas. In the past few weeks, both sets of regulations have been finalized (or are nearing final approval), with important implications for any organization using AI-driven tools in the workplace.
At TCWGlobal, we monitor these changes closely, so our clients don’t have to. As part of our comprehensive compliance support, we help you stay ahead of regulatory developments and ensure you’re not caught off guard.
On June 30, 2025, the California Civil Rights Council (CRC) finalized long-anticipated revisions to Title 2 of the California Code of Regulations. These updates govern how businesses use AI-driven and automated tools in employment decisions under the state’s Fair Employment and Housing Act (FEHA). The regulations take effect October 1, 2025, and apply to systems referred to as Automated Decision-Making Systems (ADS), defined as any computational process used to make or assist in making employment decisions, such as hiring, promotions, or terminations.
These rules are particularly focused on preventing discriminatory outcomes from AI use. Businesses using ADS must now treat these tools as legally significant components of the hiring process, subject to the same anti-discrimination and fairness standards as human-led decisions.
Notably, the regulations extend beyond predictive AI models, which use historical data to forecast outcomes (for example, assessing whether a candidate might be a good fit based on past employment history). They also cover generative AI systems, which produce content or analysis, such as interview questions or applicant summaries, in response to user inputs.
Businesses must conduct anti-bias testing on any ADS tool used in employment decisions. This testing should assess whether the tool causes a disproportionate impact on protected groups and must be conducted both prior to deployment and, ideally, on an ongoing basis.
The testing's quality, scope, outcomes, and how a business responds to concerning results will factor into compliance evaluations.
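The regulations do not prescribe a particular testing methodology. One common starting point in employment-selection analysis is the EEOC's "four-fifths rule," which flags any group whose selection rate falls below 80% of the highest group's rate. A minimal sketch in Python, assuming simple hired/total counts per group (the group labels and figures below are hypothetical, and a real audit would involve counsel and more rigorous statistics):

```python
def selection_rates(outcomes):
    """Selection rate per group: hired / total applicants.

    `outcomes` maps a group label to a (hired, total) tuple.
    """
    return {group: hired / total for group, (hired, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return, per group, whether its rate meets the four-fifths threshold.

    A group passes if its selection rate is at least `threshold` (80%)
    of the highest group's selection rate; False indicates possible
    disparate impact that warrants closer review.
    """
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: (rate / top) >= threshold for group, rate in rates.items()}

# Hypothetical example: group_b's rate (0.30) is only 60% of group_a's
# rate (0.50), so the check flags group_b for further review.
result = four_fifths_check({"group_a": (50, 100), "group_b": (30, 100)})
```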
Businesses are expected to demonstrate due diligence when using ADS tools by thoroughly understanding how these systems function. They must document all testing and risk mitigation efforts to ensure transparency and accountability.
Additionally, if any adverse impacts or biases are discovered, businesses are required to take appropriate steps to address and correct them. These efforts are not merely best practices: documented evidence of due diligence may serve as part of a business's defense if discriminatory outcomes are alleged.
While earlier drafts of the regulation would have imposed joint liability on third-party developers and vendors, the final version scales this back. Only those directly involved in the employment decision process, such as businesses or staffing agencies utilizing these AI tools, are liable.
This narrows exposure for tool creators and places the primary burden of compliance on those deploying the tools.
The definition of ADS now covers more than hiring algorithms. It encompasses tools that analyze tone of voice, facial expressions, reaction time, and other biometric or behavioral indicators, particularly when those tools are used to assess job fitness or performance. Businesses may also need to provide accommodations for individuals with disabilities when using such tools.
Businesses are required to retain ADS-related records for a period of four years. This recordkeeping obligation includes maintaining all inputs and outputs generated by the automated decision-making system, as well as any data used to design, develop, or customize the system specifically for that business. While this is less onerous than earlier drafts (which required retaining training data and full system logs), it still demands structured documentation processes.
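As a rough illustration of the four-year retention clock, a records system might compute when an ADS record becomes eligible for deletion. This is a simplified sketch only; when the retention period actually starts, and whether litigation holds or other obligations extend it, is a question for counsel:

```python
from datetime import date

RETENTION_YEARS = 4  # per the final CRC regulations as described above

def retention_end(record_date: date) -> date:
    """Earliest date a record could be eligible for deletion (simplified)."""
    try:
        return record_date.replace(year=record_date.year + RETENTION_YEARS)
    except ValueError:
        # record_date was Feb 29 and the target year is not a leap year
        return record_date.replace(year=record_date.year + RETENTION_YEARS, day=28)

def may_purge(record_date: date, today: date) -> bool:
    """True once the four-year retention window has elapsed."""
    return today >= retention_end(record_date)
```

A record created June 30, 2020 would, under this naive model, clear its retention window on June 30, 2024.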
Overall, these regulations send a clear message: businesses using AI in employment decisions must treat those tools as part of their legal responsibilities, not as neutral or exempt technologies.
Separately, on July 24, 2025, the California Privacy Protection Agency (CPPA) unanimously approved a second set of regulations under the California Consumer Privacy Act (CCPA), focused on Automated Decision-Making Technology (ADMT) and broader consumer data protection obligations.
ADMT is defined as any technology that processes personal information and uses computation to replace, or substantially replace, human decision-making. The regulations are not yet fully final: they are under review by the Office of Administrative Law, which has 30 working days to approve them for procedural compliance, and full approval is expected before the end of summer. These rules are intended to supplement, rather than replace, the regulations finalized by the CRC.
These regulations differ from the FEHA-based rules above but are no less impactful. They govern how businesses process personal information using automation, especially when such technology makes “significant decisions” about a person’s access to employment.
Resume screeners, productivity trackers, and AI scheduling tools may all qualify under this broad definition of ADMT.
The timelines for compliance with these requirements are staggered. Pre-use notices must be in place by January 1, 2027, for any existing ADMT systems. Risk assessments completed in 2026 or 2027 must be submitted by April 1, 2028. After that, they’re due by April 1 of the year following completion. Cybersecurity audits will be introduced gradually starting in 2028, with the timing of your first required audit based on your business’s gross annual revenue.
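The risk-assessment submission schedule described above can be sketched as a small helper, assuming the staggered rule as stated (assessments completed in 2026 or 2027 are due April 1, 2028; later ones are due April 1 of the year following completion):

```python
from datetime import date

def risk_assessment_deadline(completion_year: int) -> date:
    """Submission deadline for an ADMT risk assessment (sketch).

    Encodes the staggered schedule as described: the first batch
    (completed 2026-2027) is due April 1, 2028; thereafter,
    April 1 of the year after completion.
    """
    if completion_year <= 2027:
        return date(2028, 4, 1)
    return date(completion_year + 1, 4, 1)
```

For example, an assessment completed in 2026 and one completed in 2027 share the April 1, 2028 deadline, while one completed in 2030 would be due April 1, 2031.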
Both of these recently established AI regulations signal a shift in how California views businesses’ responsibility when using technology in the workplace. If you're using AI tools to screen applicants, evaluate performance, or process sensitive data, you need a plan to comply.
Even if your business isn’t directly impacted by California’s new AI laws, regulations in this area are emerging across the U.S. and globally. As AI tools become more advanced and widely used in employment decisions, lawmakers are moving quickly to regulate their impact. For U.S.-based companies, check out our comprehensive chart tracking AI laws across all 50 states.
At TCWGlobal, we stay ahead of evolving AI legislation and are here to help with our comprehensive contingent workforce management solutions. TCWGlobal remains committed to providing compliant staffing and Employer of Record solutions.
Need help managing your contingent workforce? Contact TCWGlobal today to learn more.
Whether you need expertise in Employer of Record (EOR) services, Managed Service Provider (MSP) solutions, or Vendor Management Systems (VMS), our team is equipped to support your business needs. We specialize in addressing worker misclassification, offering comprehensive payroll solutions, and managing global payroll intricacies.
From remote workforce management to workforce compliance, and from international hiring to employee benefits administration, TCWGlobal has the experience and resources to streamline your HR functions. Our services also include HR outsourcing, talent acquisition, freelancer management, and contractor compliance, ensuring seamless cross-border employment and adherence to labor laws.
We help you navigate employment contracts, tax compliance, workforce flexibility, and risk mitigation, all tailored to your unique business requirements. Contact us today at tcwglobal.com or email us at hello@tcwglobal.com to discover how we can help your organization thrive in today's dynamic work environment. Let TCWGlobal assist with all your payrolling needs!