House Bill 2094 (HB 2094), titled the "High-Risk Artificial Intelligence Developer and Deployer Act," was passed by the Virginia General Assembly in February 2025. The bill establishes requirements for the development, deployment, and use of high-risk artificial intelligence (AI) systems, introducing civil penalties for noncompliance to be enforced by the Attorney General.
The act is scheduled to take effect on July 1, 2026.
Key Definitions:
- High-Risk AI System: An AI system specifically intended to autonomously make, or be a substantial factor in making, consequential decisions.
- Consequential Decision: A decision that materially affects a consumer with respect to education enrollment, employment, financial or lending services, health care services, housing, insurance, marital status, legal services, or parole and probation determinations.
- Developer: Any entity conducting business in Virginia that develops or intentionally and substantially modifies a high-risk AI system made available to deployers or consumers in the state.
- Deployer: Any entity conducting business in Virginia that deploys or uses a high-risk AI system to make consequential decisions within the state.
Requirements for Developers:
Developers of high-risk AI systems are mandated to exercise a reasonable duty of care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. Compliance with the following provisions establishes a rebuttable presumption of meeting this duty:
- Disclosure Obligations: Prior to providing a high-risk AI system to a deployer or another developer, the developer must furnish:
- A statement detailing the intended uses of the AI system.
- Documentation covering:
- Known limitations and reasonably foreseeable risks of algorithmic discrimination.
- The system's purpose, intended benefits, and uses.
- Summaries of performance evaluations conducted prior to making the system available.
- Measures implemented to mitigate foreseeable risks of algorithmic discrimination.
- Guidelines for proper use and monitoring of the system to detect and address potential discrimination.
- Risk Management Frameworks: Developers are encouraged to align with recognized frameworks such as the National Institute of Standards and Technology's (NIST) Artificial Intelligence Risk Management Framework or the International Organization for Standardization's ISO/IEC 42001 standard. Conformance with such a framework creates a rebuttable presumption of compliance with the act's requirements.
- Synthetic Content Identification: Developers of high-risk generative AI systems that produce synthetic content must ensure outputs are identifiable and detectable using industry-standard tools, without unduly impairing consumers' ability to access or use the content.
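The developer disclosure package described above lends itself to a structured, machine-checkable record. A minimal Python sketch follows; the class and field names are invented for illustration and are not prescribed by the act:

```python
from dataclasses import dataclass, asdict

@dataclass
class DeveloperDisclosure:
    """Hypothetical record of the pre-sale documentation HB 2094 expects
    a developer to furnish to a deployer. Field names are illustrative."""
    intended_uses: str
    known_limitations: list[str]
    discrimination_risks: list[str]
    performance_eval_summary: str
    mitigation_measures: list[str]
    monitoring_guidelines: str

    def is_complete(self) -> bool:
        # Every documented element must be present (non-empty) before
        # the system is made available to a deployer.
        return all(bool(v) for v in asdict(self).values())
```

A record like this makes it straightforward to verify, before release, that no required disclosure element was left blank.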
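For the synthetic-content requirement, one widely used approach is to attach provenance metadata to generated output (industry efforts such as C2PA content credentials take this form). The JSON sidecar schema below is invented purely for illustration and is not mandated by the act:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_manifest(content: bytes, generator: str) -> str:
    """Build a JSON manifest declaring the content AI-generated.
    The schema is illustrative only; real deployments would use an
    industry standard such as C2PA content credentials."""
    manifest = {
        "ai_generated": True,                                # explicit flag
        "generator": generator,                              # producing system
        "sha256": hashlib.sha256(content).hexdigest(),       # binds manifest to content
        "created": datetime.now(timezone.utc).isoformat(),   # generation timestamp
    }
    return json.dumps(manifest, indent=2)
```

The content hash ties the manifest to a specific output, so a detector can confirm both that content is labeled as synthetic and that the label belongs to it.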
Requirements for Deployers:
Deployers utilizing high-risk AI systems for consequential decisions are also required to exercise a reasonable duty of care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. Compliance with the following provisions establishes a rebuttable presumption of meeting this duty:
- Risk Management Policy: Before deploying such systems, deployers must design and implement a risk management policy and program to identify, mitigate, and document risks of algorithmic discrimination.
- Impact Assessments: Deployers are required to conduct impact assessments prior to the initial deployment of a high-risk AI system and before implementing significant updates. These assessments should include:
- The system's purpose, intended uses, and benefits.
- Potential risks of algorithmic discrimination and mitigation measures.
- Descriptions of data categories processed and outputs generated.
- Metrics for evaluating performance and limitations.
- Transparency measures and post-deployment monitoring protocols.
- Analyses of the system's validity and reliability.
- Deployers must retain assessment records and related data for a minimum of three years.
- Consumer Disclosures: Deployers must inform consumers when a high-risk AI system is used to make a consequential decision concerning them, providing details about:
- The system's purpose and nature.
- The nature of the consequential decision being made.
- Contact information for the deployer.
- A plain-language description of the system, including assessed personal characteristics, assessment methods, relevance to the decision, human involvement, and the role of automated components.
- For adverse decisions based on data beyond what the consumer provided, deployers must explain the principal reasons, including the AI system's contribution, data types processed, and data sources.
- Opportunities for consumers to correct inaccuracies in their personal data and appeal adverse decisions, with provisions for human review when feasible.
- Public Risk Management Statement: Deployers are required to make available a clear and accessible summary of how they manage reasonably foreseeable risks of algorithmic discrimination associated with their high-risk AI systems.
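The impact-assessment and three-year retention obligations above could be tracked programmatically. A hedged sketch follows, with hypothetical names and a simplified 365-day year for the retention window:

```python
from dataclasses import dataclass
from datetime import date, timedelta

RETENTION_YEARS = 3  # HB 2094: retain records for at least three years

@dataclass
class ImpactAssessment:
    """Hypothetical record of a deployer impact assessment; field names
    are illustrative, not statutory."""
    system_name: str
    completed_on: date
    purpose: str
    discrimination_risks: list[str]
    mitigations: list[str]

    def retention_deadline(self) -> date:
        # Approximates three years as 3 * 365 days for illustration;
        # a production system would follow counsel's interpretation.
        return self.completed_on + timedelta(days=365 * RETENTION_YEARS)

    def may_be_purged(self, today: date) -> bool:
        # Records may only be discarded after the retention window ends.
        return today >= self.retention_deadline()
```

Encoding the retention window this way lets a records-management job check, rather than guess, when assessment data becomes eligible for deletion.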
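The adverse-decision disclosure elements above could likewise be assembled into a plain-language notice. The function and wording below are illustrative only, not statutory text:

```python
def adverse_decision_notice(reasons: list[str],
                            data_categories: list[str],
                            data_sources: list[str],
                            contact: str) -> str:
    """Assemble a hypothetical plain-language notice covering the
    elements HB 2094 asks deployers to disclose after an adverse
    decision. Layout and phrasing are invented for illustration."""
    lines = [
        "An automated system contributed to this decision.",
        "Principal reasons: " + "; ".join(reasons),
        "Data categories processed: " + ", ".join(data_categories),
        "Data sources: " + ", ".join(data_sources),
        f"To correct your data or appeal this decision, contact {contact}.",
    ]
    return "\n".join(lines)
```

Generating the notice from the same fields used internally helps keep the consumer-facing explanation consistent with what the system actually processed.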
Exemptions:
The act provides certain exemptions, including:
- Trade Secrets and Confidential Information: Developers and deployers are not obligated to disclose trade secrets, information that could pose security risks, or other confidential or proprietary information protected by law.
- Specific Technologies: The act excludes certain narrowly scoped technologies from the definition of high-risk AI systems; consult the act's text for the enumerated exclusions.
Enforcement and Penalties:
Noncompliance with HB 2094's provisions may result in civil penalties enforced by the Virginia Attorney General; refer to the act's enforcement provisions for the applicable penalty amounts and procedures.
Effective Date:
The provisions of HB 2094 are set to become effective on July 1, 2026.
For the complete text and detailed provisions of HB 2094, refer to the official Virginia Legislative Information System.
#Virginia #HB2094 #AIRegulation #AICompliance #TechPolicy #ArtificialIntelligence #DataPrivacy #AITransparency #AlgorithmicBias #EthicalAI #AIethics #AIgovernance #AIInnovation #CyberSecurity #AIAct #DigitalPolicy #AIrisks #FairAI #AIstandards