HF4532
Artificial intelligence safety and disclosure requirements established, and civil remedies provided.
Legislative Session 94 (2025-2026)
Related bill: SF4509
AI Generated Summary
Purpose
This bill would create a new state framework to regulate artificial intelligence (AI) safety and transparency. It aims to reduce the risk of serious harm from AI systems by requiring safety planning, testing, public and government disclosure, and penalties for noncompliance. It also establishes a private right of action for people harmed by violations and gives the Attorney General enforcement powers.
Key terms and definitions
- Artificial intelligence: a machine-based system that can predict, recommend, or decide actions based on a set of objectives, using data inputs to perceive environments and produce options for action.
- Artificial intelligence model: a part of an information system that uses AI technology and statistical or machine-learning methods to produce outputs from inputs.
- Critical harm: death or serious physical or mental injury to a significant number of people (e.g., 25 or more), or substantial monetary or property damage (e.g., at least $1,000,000), caused or enabled by an AI model, particularly in connection with weapons or conduct that would be a crime if done by a human.
- Developer: a person who has trained at least one AI model.
- Safety and security protocol: a documented plan that describes protections and procedures to reduce risk of critical harm, protections against unauthorized access or misuse, testing procedures to assess risk, and assignment of senior personnel for compliance.
- Safety incident: an event indicating a known or increasing risk of critical harm, such as autonomous AI behavior beyond user requests, theft or misuse of AI model weights, or unauthorized access.
- Public disclosure: publishing a redacted safety protocol for public view and providing a non-redacted or redacted copy to the Attorney General as required.
- Redactions: carefully limited removals in published documents to comply with law while protecting sensitive information.
- Unreasonable risk of critical harm: a standard used to decide whether deploying an AI model is prohibited.
- Attorney General access: the state Attorney General may access safety protocols (with redactions as allowed by federal law).
Main provisions
1) Transparency requirements for developers before deploying AI
- Implement a written safety and security protocol.
- Retain an unredacted copy of the protocol and all updates for the entire deployment period plus five years.
- Publish a redacted copy of the protocol for public viewing and provide a redacted copy to the Attorney General.
- Allow the Attorney General access to the protocol on request, with redactions only as allowed by federal law.
- Record and retain information on the specific tests and test results used to assess the AI model, sufficient for third parties to replicate the testing, for the deployment period plus five years.
- Implement safeguards to prevent unreasonable risk of critical harm.
2) Prohibition on deployment
- A developer must not deploy an AI model if doing so would create an unreasonable risk of critical harm.
3) Annual review
- Developers must annually review the safety protocol to account for changes in model capabilities and industry best practices, and modify the protocol as needed.
- If a material modification is made, publish the updated protocol in the same manner required for the original publication.
4) Safety incident disclosure
- Developers must disclose each safety incident affecting the AI model to the Attorney General within 72 hours of learning of the incident (or of learning facts sufficient to reasonably believe an incident has occurred).
- Disclosures must include the date of the incident, the reasons it qualifies as a safety incident, and a plain-language description of the incident.
5) False or misleading statements
- Developers must not knowingly make false or materially misleading statements or omissions in, or regarding, documents produced under this section.
Enforcement mechanisms and penalties
1) Attorney General enforcement
- The Attorney General may bring a civil action for violations of the safety and transparency requirements.
- Penalties can include civil fines (up to $10 million for a first violation; up to $30 million for subsequent violations) and injunctive or declaratory relief.
2) Private right of action
- A person harmed by a violation may sue to recover damages, costs, and attorney fees, and may seek other equitable relief as determined by the court.
Significant changes to existing law
- Creates a new state regulatory framework for AI safety, transparency, testing, and incident disclosure.
- Establishes mandatory safety and security protocols, annual reviews, and public disclosures by AI developers.
- Adds strict reporting requirements to the Attorney General and enables both state-level enforcement and private lawsuits for AI-related harms.
- Introduces substantial civil penalties for violations and a private damages remedy for individuals harmed by AI practices.
Practical implications
- AI developers in the state would need to implement formal safety plans, conduct documented testing, maintain records for deployment plus five years, and publicly disclose certain information.
- There would be new reporting obligations to the Attorney General, and potential high-dollar penalties for noncompliance.
- Individuals harmed by AI could pursue damages in court, increasing accountability for AI-related outcomes.
Relevant terms
- Artificial intelligence
- Artificial intelligence model
- Critical harm
- Safety incident
- Safety and security protocol
- Developer
- Transparency requirements
- Public disclosure
- Attorney General
- Civil penalties
- Injunction
- Private right of action
- Annual review
- Testing and replication
- Redactions
- Unreasonable risk of critical harm
Bill text versions
- Introduction (PDF)
Actions
| Date | Chamber | Type | Name | Committee Name |
|---|---|---|---|---|
| March 23, 2026 | House | Action | Introduction and first reading, referred to | Commerce Finance and Policy |