A View From California: One Important Artificial Intelligence Bill Down, 17 Others Good To Go
Authors
Myriah V. Jaworski, Ali Bloom
As it did with data privacy, California seeks to lead the charge in regulating the use and deployment of Artificial Intelligence (AI) tools. Joining Colorado and Utah, the only other states with enacted AI laws, California has quickly become the center of state AI activity. But building the statewide consensus needed to pass such laws, including support from the Governor’s Office, can be challenging. This year saw the defeat of Connecticut’s comprehensive AI proposal after that state’s governor announced his intention to veto the bill and, as discussed below, a last-minute veto by California’s Governor of SB 1047, the most comprehensive of the state AI proposals.
In total, the California legislature considered almost 50 bills that touched on AI in some way and passed 18 of them. From advancing AI literacy in schools and election integrity at the ballot box, to prohibiting deepfakes and requiring AI watermarking, the AI bills passed are wide-ranging in scope and substance. Of the 18 bills passed by the legislature, one, SB 1047, was vetoed by Governor Newsom.
Here, we discuss the most significant California AI bills to pass and what the Newsom veto of SB 1047 may mean for future legislative sessions; the effective date of each law is noted in its status entry below. Then, we discuss how these laws stack up against the Colorado and Utah AI laws.
AI Transparency
Senate Bill 942: California AI Transparency Act
The CA AI Transparency Act mandates that “Covered Providers” (persons who create, code, or otherwise produce a generative AI system that is publicly accessible within California and has more than one million monthly visitors or users) implement comprehensive measures to disclose when content has been generated or modified by AI. The Act outlines requirements for AI detection tools and content disclosures and establishes licensing practices to ensure that only compliant AI systems are permitted for public use.
Key Obligations:
- AI Detection Tool: Providers must offer a free, publicly accessible tool for users to verify whether AI has generated or modified the content (including text, images, video, and audio), including system provenance data. While the detection tool must be publicly accessible, providers may impose reasonable limitations to prevent or respond to demonstrable risks to the security or integrity of their generative AI systems. Further, providers must collect user feedback related to the tool’s efficacy and incorporate relevant feedback into improvements.
- Manifest and Latent Disclosures: Providers must give users the option of a manifest disclosure that identifies content as AI-generated; it must be clear, conspicuous, appropriate to the medium of communication, understandable to a reasonable person, and, to the extent technically feasible, permanent or extraordinarily difficult to remove. Latent (embedded) disclosures must include the provider’s name, the generative AI system’s name and version number, the creation or alteration date, and a unique identifier; they must be detectable by the provider’s AI detection tool, consistent with industry standards, and likewise permanent or extraordinarily difficult to remove to the extent technically feasible.
- License Revocation: Providers of generative AI systems must contractually require licensees to maintain the system’s capability to include the mandated disclosures. If a provider discovers that a licensee has modified the system to remove required disclosures, the license must be revoked within 96 hours and the licensee must cease using the system immediately.
- Enforcement and Penalties: Covered providers that violate the Act are liable for a penalty of $5,000 per violation per day, enforceable through civil action by the CA Attorney General, city attorneys, or county counsel.
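The Act prescribes what a latent disclosure must convey but not its technical format. Purely as an illustration, where the function names, field names, and the provider “ExampleAI Inc.” are all hypothetical assumptions rather than anything SB 942 specifies, the mandated metadata and a minimal presence check a detection tool might perform could be sketched as:

```python
import uuid
from datetime import datetime, timezone

# Hypothetical latent-disclosure record; SB 942 does not prescribe a format.
REQUIRED_FIELDS = {"provider", "system_name", "system_version", "created_at", "content_id"}

def make_latent_disclosure(provider: str, system_name: str, system_version: str) -> dict:
    """Assemble the items SB 942 requires a latent disclosure to convey."""
    return {
        "provider": provider,
        "system_name": system_name,
        "system_version": system_version,
        "created_at": datetime.now(timezone.utc).isoformat(),  # creation/alteration date
        "content_id": str(uuid.uuid4()),  # unique identifier for the content
    }

def has_required_disclosure(metadata: dict) -> bool:
    """Minimal check a detection tool might run: all mandated fields present and non-empty."""
    return REQUIRED_FIELDS.issubset(k for k, v in metadata.items() if v)

record = make_latent_disclosure("ExampleAI Inc.", "ExampleGen", "2.1")
print(has_required_disclosure(record))  # True for a complete record
```

A production implementation would embed this metadata in the content itself (for example, via image metadata or watermarking) rather than alongside it; the sketch only shows the required fields.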
Status: Governor Newsom signed the act into law on Sept. 19, 2024. The act will enter into force on Jan. 1, 2026.
Assembly Bill 2013: Artificial Intelligence Training Data Transparency
The Artificial Intelligence Training Data Transparency Act requires “Developers” (anyone who “designs, codes, produces, or substantially modifies” an AI system for public use) of “generative artificial intelligence” systems (any AI system that can “generate derived synthetic content, such as text, images, video, and audio”) accessible to Californians to publicly disclose on their websites certain information about the data used to train their models. The law applies to every generative AI system made available to Californians that was released or “substantially modified” (updated in a way that materially changes its functionality or performance) on or after Jan. 1, 2022, including both free and paid services. The act explicitly specifies the information developers must provide.
Key Obligations:
- Summaries: Developers of any “generative artificial intelligence” system covered by this law must publicly post on their websites a “high-level summary” of the datasets used to train those systems. The summaries must contain:
- The sources or owners of the datasets
- A description of how the datasets further the intended purpose of the artificial intelligence system or service
- The number of data points included in the datasets, which may be in general ranges, and with estimated figures for dynamic datasets
- A description of the types of data points within the datasets. For purposes of this paragraph, the following definitions apply:
- As applied to datasets that include labels, “types of data points” means the types of labels used
- As applied to datasets without labeling, “types of data points” refers to the general characteristics
- Whether the datasets include any data protected by copyright, trademark, or patent, or whether the datasets are entirely in the public domain
- Whether the datasets were purchased or licensed by the developer
- Whether the datasets include personal information
- Whether the datasets include aggregate consumer information
- Whether there was any cleaning, processing, or other modification to the datasets by the developer, including the intended purpose of those efforts in relation to the artificial intelligence system or service
- The time period during which the data in the datasets were collected, including a notice if the data collection is ongoing
- The dates the datasets were first used during the development of the artificial intelligence system or service
- Whether the generative artificial intelligence system or service used or continuously uses synthetic data generation in its development.
- Enforcement and Penalties: AB 2013 does not specify a particular enforcement mechanism. However, the legislative commentary indicates the law will likely be enforced under California’s Unfair Competition Law. This law allows enforcement by the California Attorney General, district attorneys, and other government prosecutors. Additionally, it grants a private right of action, but only for plaintiffs who can demonstrate that they suffered injury and financial loss due to violations of the law.
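AB 2013 requires only that a high-level summary covering the items above be posted; it does not prescribe a machine-readable format. As a hypothetical sketch of how a developer might organize the enumerated items, where every field name and value below is invented for illustration:

```python
import json

# Illustrative only: AB 2013 mandates a posted "high-level summary," not a
# machine-readable schema; all field names and values here are assumptions.
training_data_summary = {
    "dataset_sources": ["Publicly available web text (hypothetical)"],
    "purpose_description": "General-purpose text generation",
    "data_point_count": "approximately 1-2 billion documents",  # ranges/estimates allowed
    "data_point_types": "unlabeled natural-language text",       # general characteristics
    "contains_ip_protected_data": True,   # copyright, trademark, or patent
    "purchased_or_licensed": False,
    "contains_personal_information": True,
    "contains_aggregate_consumer_information": False,
    "cleaning_or_processing": "Deduplication and quality filtering to improve outputs",
    "collection_period": {"start": "2019-01", "end": "2023-12", "ongoing": False},
    "first_used": "2024-03",
    "uses_synthetic_data": False,
}

# A developer might publish the summary as prose or as structured data like this.
print(json.dumps(training_data_summary, indent=2))
```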
Status: Governor Newsom signed the act into law on Sept. 28, 2024. The act will enter into force on Jan. 1, 2026.
AI Risk Management
Senate Bill 896: Generative Artificial Intelligence Accountability Act
The Generative Artificial Intelligence Accountability Act mandates that the California Office of Emergency Services conduct a risk analysis of potential threats that “generative artificial intelligence” (an artificial intelligence system that can generate derived synthetic content, including text, images, video, and audio that emulates the structure and characteristics of the system’s training data) poses to the state’s critical infrastructure, including scenarios that could result in mass casualty events. The office is required to provide a high-level summary of this analysis to the California legislature. Additionally, any state agency or department that uses generative AI to communicate with individuals about government services and benefits must ensure that such communications include a notice indicating that the message was generated by AI and information on how to contact a human employee of the department.
Status: Governor Newsom signed the act into law on Sept. 29, 2024. The act will enter into force Jan. 1, 2025.
AI and Healthcare
Assembly Bill 3030: Artificial Intelligence in Health Care Services
The Artificial Intelligence in Health Care Services bill requires health facilities, clinics, physician’s offices, and group practices that use generative AI to produce written or verbal communications about patient clinical information to include: (i) a disclaimer informing the patient that the communication was generated by AI, and (ii) clear instructions on how the patient can contact a human health care provider, employee, or other appropriate person. However, if the AI-generated communication is reviewed and approved by a licensed or certified healthcare provider, these disclosure requirements do not apply.
- Enforcement and Penalties: A physician who violates the provisions of Assembly Bill 3030 is subject to the jurisdiction of the Medical Board of California or the Osteopathic Medical Board of California.
Status: Governor Newsom signed the act into law on Sept. 28, 2024. The act will enter into force on Jan. 1, 2025.
Senate Bill 1120: Artificial Intelligence in Health Care Coverage
The Artificial Intelligence in Health Care Coverage bill mandates that health care service plans and disability insurers, including specialized health care service plans and insurers, ensure compliance with specific requirements when using AI, algorithms, or other software tools for utilization review or management functions, or when contracting with entities that do. These tools must base their determinations on designated information and be applied fairly and equitably. Specifically, they must consider the patient’s medical history, individual clinical circumstances as provided by the requesting provider, and other relevant information in the patient’s medical record. Additionally, these tools must not rely solely on group datasets, supplant healthcare provider decision-making, or discriminate against patients in violation of state or federal law.
Status: Governor Newsom signed the act into law on Sept. 28, 2024. The act will enter into force on Jan. 1, 2025.
AI and Personal Information
Assembly Bill 1008: Amendments to California Consumer Privacy Act
The California Consumer Privacy Act of 2018 (CCPA) has been amended to introduce new privacy obligations for AI systems trained on personal information. This includes: (i) expanding the definition of personal information to encompass outputs from AI systems, such as model weights and tokens derived from personal data, as well as biometric data like fingerprints and facial recognition collected without a consumer’s knowledge; and (ii) broadening the definition of sensitive personal information to include neural data, which refers to information generated from measuring the activity of a consumer’s central or peripheral nervous system.
Status: Governor Newsom signed the act into law on Sept. 28, 2024. The act will enter into force on Jan. 1, 2025.
AI and Automated Dialing Systems and Voice Messages
Assembly Bill 2905: Telecommunications: Automatic Dialing-Announcing Devices: Artificial Voices
The Telecommunications: Automatic Dialing-Announcing Devices: Artificial Voices Act mandates that any call made using an automatic dialing-announcing device must start with a natural, unrecorded voice that explains the purpose of the call, identifies the business or organization involved, and seeks the recipient’s consent to proceed with the prerecorded message. With the new bill, if the prerecorded message features a voice created or significantly altered by AI, the caller must inform the recipient of this fact at the beginning of the call. This legislation updates the regulations surrounding the use of automatic dialing-announcing devices in telecommunications.
Status: Governor Newsom signed the act into law on Sept. 20, 2024. The act will enter into force on Jan. 1, 2025.
AI and Education
Assembly Bill 2885: Artificial Intelligence
The Artificial Intelligence bill establishes a uniform definition of “Artificial Intelligence” (an engineered or machine-based system, with varying levels of autonomy, that can infer from the input it receives how to generate outputs that can influence physical or virtual environments) for use across various California laws.
Key provisions of existing legislation impacted by this definition include:
- Government Operations Agency: The Secretary of Government Operations is required to develop a plan to assess the effects of AI-generated or manipulated deepfakes on state government, businesses, and residents.
- Department of Technology: The Department of Technology must conduct a thorough inventory of high-risk automated decision systems utilized by state agencies that depend on machine learning and AI to assist or replace human decision-making.
- Local Agencies: Each local agency must publicly report on economic development subsidies and any job losses or replacements attributed to AI or automation.
- California Online Community College: This institution is tasked with using AI and related technologies to develop student support systems and industry-relevant online education programs.
- Social Media Companies: These companies are required to submit semiannual reports to the Attorney General outlining how content on their platforms is managed, including the role of AI in that management.
Status: Governor Newsom signed the act into law on Sept. 28, 2024. The act will enter into force on Jan. 1, 2025.
Vetoed: Senate Bill 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act aimed to prevent “critical harms” associated with AI, such as the development of chemical, biological, or nuclear weapons, as well as mass casualties and extensive damage from attacks on critical infrastructure. The law sought to strike a balance between safeguarding against these risks and fostering innovation. To achieve this, SB 1047 focused exclusively on large AI systems that surpassed specific thresholds for processing and training costs, which is reflected in the term “frontier models.” Developers of these models would have been required to implement various security measures.
Governor Newsom’s Veto of SB 1047
On Sept. 29, 2024, Governor Newsom vetoed SB 1047 and returned it to the California State Senate. Newsom expressed concern about the high quantitative thresholds outlined in the bill, arguing that basing safety measures solely on a model’s size is not an effective way to prevent harm. Instead, he suggested assessing a system’s actual risks by considering factors such as whether it is deployed in high-risk environments, involves critical decision-making, or uses sensitive data. Despite the veto, Newsom continued to position California as a leader in AI regulation by signing the various other AI laws and regulations discussed above.
What the Newsom Veto of SB 1047 May Mean for Future Legislative Sessions
Supporters of SB 1047 are likely to advance similar proposals in future legislative sessions. With the governor signing several other bills to regulate the AI industry, it’s clear that California will continue asserting its influence and leadership in AI regulation.
Colorado and Utah AI State Laws
Aside from California, two other states have enacted laws related to artificial intelligence: Colorado, with its Senate Bill 205 (the Colorado Artificial Intelligence Act), and Utah, with its Senate Bill 149 (the Artificial Intelligence Policy Act). A review of those laws follows.
Colorado Senate Bill 205: Colorado Artificial Intelligence Act (SB 205)
The Colorado Artificial Intelligence Act (SB 205) mandates that “developers” (persons doing business in Colorado who develop or intentionally and substantially modify an AI system) and “deployers” (persons or entities doing business in Colorado who use a “high-risk” AI system) of high-risk AI systems take reasonable care to prevent algorithmic discrimination. The act outlines several key obligations for developers and deployers to achieve this.
Key Obligations:
- High-Risk AI System: Under the act, “high-risk AI systems” are systems used as a “substantial factor” in making “consequential decisions,” that is, decisions with a “material legal or similarly significant effect” on the provision or denial to any consumer of, or the cost or terms of: employment, education, financial or lending services, essential government services, housing, insurance, legal services, or healthcare. To be a “substantial factor,” a factor must (1) assist in making the consequential decision; (2) be capable of altering the outcome of that decision; and (3) be generated by an AI system.
- Notice to Consumers: Under the act, deployers are required to inform consumers when using a high-risk AI system. This notice includes details about the system’s purpose, the nature of the consequential decision, how the system operates, and, when applicable, the consumer’s right to opt out of personal data processing for profiling purposes. Notably, the act requires that consumers who face an adverse consequential decision must be provided with an opportunity to appeal. However, if it is “obvious” that a consumer is interacting with an AI system, SB 205 does not require this disclosure.
- Impact Assessments: SB 205 requires deployers to conduct an impact assessment annually and within 90 days of any intentional and substantial modification to a high-risk AI system. The assessment must cover the system’s purpose, intended use cases, benefits, known limitations, deployment context, transparency measures taken, post-deployment monitoring and safeguards, and the categories of data used as inputs and produced as outputs.
- Algorithmic Discrimination: SB 205 specifically covers both developers and deployers of high-risk AI systems, requiring them to exercise reasonable care to protect consumers from any known or reasonably foreseeable instances of algorithmic discrimination. “Algorithmic discrimination” is defined as any situation where the use of an AI system leads to unlawful differential treatment or impact based on various protected classes under Colorado and federal law, such as race, disability, age, gender, religion, veteran status, and genetic information.
- Enforcement and Penalties: SB 205 grants exclusive enforcement authority to the Attorney General, allowing them to implement rules related to documentation, notice, impact assessments, risk management policies and programs, rebuttable presumptions, and affirmative defenses. Importantly, there is no private right of action under this bill.
Status: Colorado Governor Jared Polis signed the act into law on May 17, 2024. The Act will enter into force on Feb. 1, 2026. We expect rulemaking under the Act to commence in early 2025.
Utah Senate Bill 149: Utah Artificial Intelligence Policy Act (SB 149)
The Utah Artificial Intelligence Policy Act (SB 149) establishes a regulatory framework for the development, deployment, and use of artificial intelligence technologies within Utah. The act imposes disclosure requirements on entities using “Generative Artificial Intelligence” (an artificial system that: (i) is trained on data; (ii) interacts with a person using text, audio, or visual communication; and (iii) generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight) tools.
- Disclosure: Under SB 149, a business or person that uses Generative Artificial Intelligence (“GenAI”) to interact with an individual in connection with commercial activities regulated by Utah’s Division of Consumer Protection (Division) must clearly and conspicuously disclose that the individual is interacting with GenAI and not a human. This requirement applies only if the individual asks or prompts the GenAI to disclose whether they are interacting with a human.
- Regulated Occupations: The act imposes stricter disclosure obligations on individuals offering services in “regulated occupations” such as clinical mental health, dentistry, and medicine. When using GenAI to provide these services, these individuals must clearly inform clients that they are interacting with GenAI. This requirement applies even if the client does not inquire whether they are speaking with a human. Furthermore, for disclosures related to GenAI in regulated services, the AI Law mandates that the information be communicated verbally at the start of oral conversations and through electronic messages before any written communication. The act also explicitly forbids attempts to evade consumer protection or fraud liability by attributing issues to GenAI as an intervening factor.
- The Office of Artificial Intelligence Policy: The Artificial Intelligence Policy Act (AIPA) establishes an Office within the Department of Commerce. The AIPA outlines the Office’s responsibilities, including: (a) managing the AI Learning Laboratory Program (Learning Lab); (b) consulting with state businesses and stakeholders on regulatory proposals; (c) engaging in rulemaking related to application fees and procedures for participation, criteria for invitations, acceptance, removal, data usage limitations, cybersecurity standards, and consumer disclosures for Learning Lab participants; and (d) providing an annual report to the Business and Labor Interim Committee detailing the Learning Lab’s proposed agenda, outcomes, findings, and suggested legislation based on those findings.
- The AI Learning Laboratory Program: The Learning Lab focuses on researching and analyzing the risks, benefits, impacts, and policy implications of AI. It aims to produce findings and legislative recommendations to help shape Utah’s regulatory framework. The Lab also strives to promote the development of AI technology in Utah and work with AI companies to evaluate the effectiveness and feasibility of current and proposed AI legislation.
- Enforcement and Penalties: The state’s Division of Consumer Protection may impose administrative fines of up to $2,500 per violation. The Division can also pursue judicial remedies, including a declaration that a specific act or practice violates the Act, injunctive relief, and fines of up to $2,500 per violation in addition to any administrative penalties. The Division may also seek disgorgement of profits, with the funds paid to those harmed by the violation. The attorney general’s office may seek $5,000 in civil penalties for each violation of a prior administrative or court order.
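The two disclosure triggers above reduce to a simple decision rule. This is a deliberately simplified sketch (the function name and boolean framing are assumptions, and actual compliance turns on facts the statute defines in more detail):

```python
def utah_disclosure_required(regulated_occupation: bool, user_asked: bool) -> bool:
    """Simplified sketch of SB 149's two disclosure triggers.

    Providers of services in regulated occupations (e.g., medicine) must
    disclose GenAI use proactively; other covered commercial uses must
    disclose only when the individual asks whether they are interacting
    with a human.
    """
    if regulated_occupation:
        return True  # proactive disclosure, regardless of any inquiry
    return user_asked  # reactive disclosure, only upon inquiry

print(utah_disclosure_required(True, False))   # True: regulated occupation
print(utah_disclosure_required(False, False))  # False: no inquiry, not regulated
```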
Status: Utah Governor Spencer Cox signed the Act into law on March 13, 2024. The Act entered into force on May 1, 2024.
Takeaways
California’s proactive approach to AI regulation sets a significant precedent as the state continues to lead in the legislation of emerging technologies. Despite the veto of SB 1047, the passage of these multiple AI bills demonstrates California’s commitment to setting a standard for the regulation of AI in the state.
As other states like Colorado and Utah also enact their own AI regulations, the nationwide conversation about AI governance continues. We can expect further developments as lawmakers respond to the evolving challenges and opportunities presented by AI.
This publication is intended for general informational purposes only and does not constitute legal advice or a solicitation to provide legal services. The information in this publication is not intended to create, and receipt of it does not constitute, a lawyer-client relationship. Readers should not act upon this information without seeking professional legal counsel. The views and opinions expressed herein represent those of the individual author only and are not necessarily the views of Clark Hill PLC. Although we attempt to ensure that postings on our website are complete, accurate, and up to date, we assume no responsibility for their completeness, accuracy, or timeliness.