HR 6461
READ AI Models Act
Creates new compliance requirements or restricts common AI uses. Action needed.
TL;DR
Representative Ted Lieu introduced the READ AI Models Act (HR 6461) to require companies developing powerful AI systems to run safety tests and share the results with the government. The bill specifically targets frontier AI models (think GPT-4 level and beyond) and would force developers to test for dangerous capabilities, such as enabling cyberattacks, assisting bioweapon design, or autonomous replication, before release.
How This Might Impact Your Business
AI companies developing models with over 10^26 computational operations (roughly GPT-4 scale) must conduct red team testing for risks including cybersecurity threats, CBRN weapons development, and self-replication capabilities
Developers must submit safety test results to NIST within 30 days of testing and before any public deployment
Companies must implement a written safety policy approved by senior leadership and conduct annual reviews
Violations carry civil penalties up to $50,000 per day of non-compliance
Cloud computing providers must report when customers use over 10^26 operations in a 120-day period
Exempts models trained purely on biological data for drug discovery and open-source models under certain conditions
What Should You Do
Assess whether your AI models or planned models exceed the 10^26 computational operations threshold (roughly $100 million in compute costs)
If developing frontier AI, establish a red team testing program focusing on the specific risks outlined: cyber, CBRN, deception, and self-replication
Create or update your AI safety policy to include board-level oversight and annual review requirements
Monitor committee progress; the bill is currently before the House Science, Space, and Technology Committee with no hearings scheduled yet
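For the first step above, a rough way to gauge whether a training run approaches the bill's 10^26-operation trigger is the widely used "6 x parameters x tokens" approximation for total training compute. This is a back-of-the-envelope sketch only; the model sizes in the example are illustrative assumptions, and the approximation is an industry rule of thumb, not a test defined by the bill.

```python
# Rough check against the 10^26-operation threshold cited in HR 6461.
# Uses the common "6 * N * D" approximation for total training operations
# (N = parameter count, D = training tokens). Illustrative only; consult
# counsel before relying on any threshold calculation for compliance.

THRESHOLD_OPS = 1e26  # computational-operations trigger cited in the bill


def estimated_training_ops(params: float, tokens: float) -> float:
    """Approximate total training operations as 6 * params * tokens."""
    return 6.0 * params * tokens


def exceeds_threshold(params: float, tokens: float) -> bool:
    """True if the estimated run size meets or exceeds the 10^26 trigger."""
    return estimated_training_ops(params, tokens) >= THRESHOLD_OPS


if __name__ == "__main__":
    # Hypothetical example: a 70B-parameter model trained on 2T tokens.
    ops = estimated_training_ops(70e9, 2e12)
    print(f"{ops:.2e} operations; exceeds threshold: "
          f"{exceeds_threshold(70e9, 2e12)}")
```

Under this approximation, a hypothetical 70B-parameter model trained on 2 trillion tokens lands around 8.4 x 10^23 operations, well below the trigger, while substantially larger runs can cross it; treat any result near the line as a question for counsel, not a conclusion.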
Status Timeline
In committee: Referred to the House Committee on Science, Space, and Technology (December 4, 2025)
AI-generated analysis for informational purposes only. Not legal advice. Always consult a qualified attorney for legal guidance.
Need help preparing your team for AI compliance?
Talk to LaunchReady about AI Training