Browse AI Bills
71 bills tracked across 2 jurisdictions
HR 8623
Rep. Blake Moore (R-UT) introduced HR 8623, which would force operators of AI chatbots such as ChatGPT, Character.AI, and Replika to verify users' ages and disclose key information about how their bots work. The bill aims to protect minors from AI chatbot harms and ensure users know when they're talking to AI rather than a human.
Last action: Apr 30, 2026
S 4476
Senator Mark Warner's bill creates a voluntary framework for AI developers and companies using AI to share data about how AI is affecting their workforce (think hiring, firing, task automation, and skill shifts). The Secretary of Labor would then compile and report this data to Congress and the public. Nothing here is mandatory; it's an opt-in disclosure program.
Last action: Apr 30, 2026
S 4407
Senator Ted Cruz (R-TX) introduced a bill requiring AI chatbot companies to create special family accounts for children under 13 and get verifiable parental consent for teens 13-17. Companies like ChatGPT, Claude, and Gemini would need to build parental control systems and age verification processes, similar to what social media platforms currently do under COPPA.
Last action: Apr 28, 2026
S 4402
Senator Adam Schiff (D-CA) introduced S 4402, which would require intelligence agencies to report how they use AI to analyze surveillance data collected under FISA (Foreign Intelligence Surveillance Act). The bill focuses on AI systems that access raw, unfiltered surveillance data before privacy protections are applied, requiring transparency about these tools without creating new restrictions on businesses.
Last action: Apr 27, 2026
HR 8526
Rep. David Schweikert (R-AZ) introduced a bill requiring mammography facilities that use AI systems to meet new FDA quality standards and undergo additional inspections. The bill would mandate that facilities using AI for breast cancer screening disclose this to patients and maintain specific documentation about their AI tools.
Last action: Apr 27, 2026
HR 8516
Rep. Ted Lieu (D-CA) introduced HR 8516, which would create a national AI licensing system requiring companies to get government permits before deploying high-risk AI systems. The bill establishes a new AI Safety Board with power to approve or deny AI deployments in healthcare, finance, employment, and other critical sectors, while also mandating bias audits and transparency reports.
Last action: Apr 27, 2026
HR 8488
Rep. LaMonica McIver (D-NJ) introduced a bill requiring companies to disclose detailed information before building AI data centers. Companies would need to report power consumption, water usage, environmental impacts, and community benefits to federal authorities before breaking ground on any facility primarily used for AI computing.
Last action: Apr 23, 2026
HR 8479
Rep. Valerie Foushee (D-NC) introduced a bill requiring companies that use AI to create or modify images, videos, or audio to add clear labels saying 'made with AI.' The bill tasks NIST with creating industry standards for AI-generated content detection and mandates disclosure requirements for any business using generative AI to create content.
Last action: Apr 23, 2026
HR 8382
Rep. Blake Moore (R-UT) introduced a bill that would ban AI chatbots in children's products. The bill makes it illegal to manufacture or sell toys, apps, or other products for kids under 13 that include AI chat features, with fines up to $5,000 per violation.
Last action: Apr 20, 2026
HR 8283
Rep. Bill Huizenga (R-MI) introduced legislation to prevent foreign adversaries, particularly China, from stealing or accessing advanced American AI models. The bill would block companies from exporting AI models above certain capability thresholds to China, Russia, Iran, and North Korea, and requires security reviews before sharing powerful AI systems with any foreign entity.
Last action: Apr 15, 2026
HR 8094
Rep. Don Beyer (D-VA) introduced legislation requiring companies that develop or deploy large AI models (like GPT-4 or Claude) to publicly disclose detailed information about their AI systems. Companies would need to report training data sources, model capabilities, safety testing results, and energy consumption to a new federal registry within 90 days of deployment.
Last action: Mar 26, 2026
S 4216
Senator Brian Schatz (D-HI) introduced a bill to repeal President Biden's Executive Order on AI, which currently requires federal agencies to develop AI safety standards and companies to share AI safety test results with the government. This would eliminate federal AI oversight requirements that the Executive Order put in place.
Last action: Mar 26, 2026
S 4214
Senator Bernie Sanders introduced a bill that would ban the construction of new AI data centers for two years. The moratorium would apply nationwide while Congress studies the environmental impact of massive data centers that power AI systems like ChatGPT and other large language models.
Last action: Mar 25, 2026
S 4199
Senator Markey (D-MA) introduced a bill that would ban companies from using AI to collect or process personal data from anyone under 17 without explicit consent. The Youth AI Privacy Act specifically targets AI systems that analyze biometric data, predict behavior, or make automated decisions about minors, requiring companies to delete collected data and conduct regular impact assessments.
Last action: Mar 25, 2026
SCONRES 30
This is a non-binding congressional resolution introduced by Senator Rick Scott (R-FL) that supports the 'Ratepayer Protection Pledge' announced March 4, 2026. It expresses Congress's view that electricity costs should be kept affordable as AI and data centers expand across the country. This resolution doesn't create any new laws or requirements; it's essentially Congress stating its opinion on energy policy related to AI growth.
Last action: Mar 25, 2026
S 4179
Senator Murkowski (R-AK) introduced a bill requiring states to involve tribal representatives when investigating child abuse cases involving Native American children. The bill mandates that state child protective services notify and coordinate with tribes within 24 hours when AI-powered risk assessment tools flag potential abuse cases involving Native children.
Last action: Mar 24, 2026
HR 8037
Rep. Baumgartner (R-WA) introduced a bill requiring companies to disclose when they use AI systems trained on data from China, Russia, Iran, or North Korea. Companies would face fines up to $5 million for failing to tell customers about these foreign data sources in their AI products.
Last action: Mar 24, 2026
HR 8048
Rep. Adelita Grijalva (D-AZ) introduced the AI/AN CAPTA bill, which despite the name appears to focus on American Indian and Alaska Native (AI/AN) provisions under the Child Abuse Prevention and Treatment Act, not artificial intelligence. The bill was referred to the House Committee on Education and Workforce and likely addresses child welfare services for tribal communities rather than AI technology regulation.
Last action: Mar 24, 2026
HR 8031
Representative Boebert introduced HR 8031 to repeal Biden's Executive Order on AI that established federal AI safety standards and oversight requirements. The bill would eliminate current federal AI governance frameworks, removing requirements for federal agencies to assess AI risks and for companies to report on their AI development activities.
Last action: Mar 20, 2026
HRES 1007
House Resolution 1007 is a non-binding resolution that expresses Congress's opinion on how AI should be used in banking, lending, and housing. It doesn't create any new laws or requirements; it just states that Congress thinks financial companies should use AI responsibly, avoid discrimination, and be transparent about their AI systems.
Last action: Mar 19, 2026
S 4154
Senator Roger Wicker (R-MS) introduced a bill requiring federal courts to create policies for how judges and court staff use AI tools. The Research and Oversight of AI in Courts Act would mandate each federal court to develop guidelines on AI use, track which AI systems they're using, and report annually on their AI practices to Congress.
Last action: Mar 19, 2026
HR 7997
Rep. Harriet Hageman (R-WY) introduced a bill requiring federal courts to study how AI is being used in the justice system. The bill would create a task force to examine AI use in everything from bail decisions to evidence analysis, then recommend guidelines for courts nationwide.
Last action: Mar 19, 2026
HR 7968
Rep. Suhas Subramanyam (D-VA) introduced this bill to help small businesses and startups access federal AI resources. It would create a new program at NIST (National Institute of Standards and Technology) that gives smaller companies access to government AI testing tools, datasets, and expertise that are currently only available to large corporations and research institutions.
Last action: Mar 17, 2026
S 4113
Senator Elissa Slotkin (D-MI) introduced the AI Guardrails Act to force federal agencies to set safety rules for AI systems before they can deploy them. The bill requires agencies to identify risks, establish testing procedures, and create ways to shut down AI systems that go wrong, with the Department of Defense and intelligence agencies mostly exempt.
Last action: Mar 17, 2026
S 4098
Senator Ted Budd (R-NC) introduced the Artificial Intelligence-Ready Data Act to create federal guidelines for how businesses prepare and manage data used in AI systems. The bill would establish new requirements for data quality, documentation, and transparency when companies use data to train or operate AI tools, affecting any business that develops or deploys AI systems.
Last action: Mar 16, 2026
S 4069
Senator Todd Young (R-IN) introduced a bill requiring NIST to create standardized formats for biological data used in AI systems. The bill focuses on making bio-data (like genomic sequences, protein structures, and clinical trial results) consistent and interoperable across different AI platforms, which would help pharmaceutical companies, biotech firms, and research institutions share data more easily for drug discovery and medical AI development.
Last action: Mar 12, 2026
HR 7907
Rep. Ro Khanna (D-CA) introduced a bill directing NIST to create standardized formats for biological data that AI systems can read and process. The bill focuses on making DNA sequences, protein structures, and other biological data work better across different AI platforms and research tools. It aims to accelerate biotech innovation by making it easier for AI companies to train models on biological datasets.
Last action: Mar 12, 2026
S 3982
Senator Harris introduced S 3982 to make companies criminally liable when their AI systems are used to commit fraud, even if the company didn't intend the fraud. The bill closes a legal loophole where businesses could claim their AI acted independently, forcing companies to take responsibility for fraudulent outcomes from their automated systems.
Last action: Mar 4, 2026
HR 7786
Representative Yvette Clarke introduced HR 7786 to make companies liable when their AI tools are used for fraud. If someone uses AI to create deepfakes, forge documents, or run scams, both the fraudster AND the AI company could face penalties unless the company took reasonable steps to prevent misuse.
Last action: Mar 4, 2026
HR 7783
Representatives Bilirakis and Matsui introduced HR 7783, which would require the FCC to study how well American telecom networks can handle AI workloads. The bill focuses on whether current broadband infrastructure has enough capacity, speed, and reliability to support growing AI applications across different sectors.
Last action: Mar 4, 2026
S 3952
Senator Peters introduced a bill that would create new compliance requirements for companies using AI in high-stakes decisions like hiring, lending, healthcare, and criminal justice. Companies would need to conduct annual bias audits, implement human oversight systems, and publicly disclose when AI makes decisions affecting people's lives.
Last action: Feb 26, 2026
HR 7697
This federal bill directs the State Department to develop a strategy for modernizing energy grids internationally to handle increased power demands from AI systems. Representative Obernolte introduced it to address the massive energy consumption of AI data centers, which could strain power grids worldwide. The bill requires a plan within 180 days but doesn't create any regulations for businesses.
Last action: Feb 25, 2026
HR 7696
Rep. Jackson Lee introduced HR 7696 to protect critical infrastructure from AI-powered cyberattacks. The bill would require companies operating power grids, water systems, and other essential services to implement specific AI security measures and conduct regular vulnerability assessments. It creates new federal oversight of AI systems used in critical infrastructure with mandatory reporting of AI-related security incidents.
Last action: Feb 25, 2026
HR 7576
Representatives Beyer and Obernolte introduced HR 7576 to create AI workforce training programs through tax credits. Companies that train workers in AI skills would get tax breaks, and the bill establishes government programs to help workers whose jobs are displaced by AI automation.
Last action: Feb 13, 2026
HR 7294
Rep. Robert Menendez (D-NJ) introduced the AI for Secure Networks Act to improve cybersecurity in critical infrastructure by using AI to detect and respond to threats. The bill would direct the Department of Homeland Security to develop AI tools for protecting power grids, water systems, and other essential services from cyber attacks.
Last action: Jan 30, 2026
HR 5764
Rep. Mark Alford (R-MO) introduced a bill requiring the Small Business Administration to create free AI training programs for small businesses. The bill would establish a nationwide network of AI resource centers at universities and community colleges, plus require SBA to publish guides on how small businesses can actually use AI tools to compete with larger companies.
Last action: Jan 26, 2026
HR 7058
Representative Jim Himes introduced HR 7058, which requires the State Department to create an office that evaluates AI risks from China, Russia, and other adversary nations. The bill doesn't regulate businesses directly but mandates government reports on foreign AI threats that could influence future regulations and federal AI procurement decisions.
Last action: Jan 14, 2026
HR 6996
The Full AI Stack Export Promotion Act (HR 6996) aims to boost US exports of AI technologies by streamlining export controls and creating new government programs to help American AI companies sell internationally. While the full text isn't available yet, the title suggests it covers the entire AI technology chain from chips to software, likely reducing barriers that currently make it hard for US companies to export AI products.
Last action: Jan 9, 2026
HB 1421
Indiana House Bill 1421 would completely ban employers from using automated decision systems (like AI hiring software, resume screening tools, or performance evaluation algorithms) to make employment decisions. The bill has just been introduced and sent to the Employment, Labor and Pensions Committee for review.
Last action: Jan 8, 2026
S 3586
Senator Todd Young (R-IN) introduced a bill to create a voluntary AI certification program specifically for small businesses. The bill would establish an 'AI Center of Excellence' at the Small Business Administration that helps small companies adopt AI responsibly through training, resources, and a certification process that could give them advantages in federal contracting.
Last action: Jan 7, 2026
HRES 963
Rep. Sara Jacobs (D-CA) introduced a resolution condemning antisemitism on AI platforms and urging tech companies to implement better safeguards. The resolution doesn't create any new laws or penalties; it's a formal statement from Congress expressing concern about AI-generated hate speech and calling for voluntary industry action.
Last action: Dec 18, 2025
HR 6875
Representatives McCaul and Krishnamoorthi introduced the AI OVERWATCH Act to monitor how foreign adversaries (specifically China, Russia, Iran, and North Korea) use AI for military purposes. The bill requires the State Department to create annual reports tracking these countries' AI capabilities and recommend ways to counter them. This focuses on national security rather than regulating domestic businesses.
Last action: Dec 18, 2025
HR 6529
Rep. Greg Landsman (D-OH) introduced HR 6529 to make AI data center operators pay for the energy grid upgrades their facilities require, instead of passing those costs to residential customers. The bill would prevent utility companies from charging families higher electricity rates to fund the massive power infrastructure needed by data centers running AI workloads.
Last action: Dec 9, 2025
HR 6461
Representative Ted Lieu introduced the READ AI Models Act (HR 6461) to require companies developing powerful AI systems to run safety tests and share the results with the government. The bill specifically targets frontier AI models (think GPT-4 level and beyond) and would force developers to test for dangerous capabilities like cyberattacks, bioweapon design, or autonomous replication before release.
Last action: Dec 4, 2025
HR 6356
Rep. Yvette Clarke (D-NY) introduced legislation requiring companies to audit their AI systems for bias and discrimination before using them to make decisions about people. The bill would give individuals the right to know when AI makes decisions about them and to appeal those decisions to a human.
Last action: Dec 2, 2025
HR 6304
Rep. Jennifer Kiggans (R-VA) introduced a bill to create a new AI research hub within the National Science Foundation that would fund academic research into AI safety and ethics. The bill would allocate $100 million annually for five years to universities studying how to make AI systems more transparent, secure, and aligned with American values.
Last action: Nov 25, 2025
S 3108
Senator Robert Casey Jr. introduced the AI-Related Job Impacts Clarity Act (S 3108), which would require companies to tell the government before using AI in ways that could affect jobs. Companies planning to deploy AI systems that might automate work or change employment would need to file advance notices with the Department of Labor, explaining how many workers could be affected and what support they'll provide.
Last action: Nov 5, 2025
S 2937
Senator Thom Tillis introduced the AI LEAD Act to regulate how federal agencies use AI systems. The bill requires agencies to tell Congress before buying or using AI, sets up testing requirements to catch problems before deployment, and creates new oversight rules with real penalties if agencies mess up their AI implementations.
Last action: Sep 29, 2025
S 2938
Senator Cantwell introduced the Artificial Intelligence Risk Evaluation Act, which would require companies developing AI systems to conduct safety evaluations before release and report critical failures to the government. The bill creates a new federal office to oversee AI safety and gives regulators power to investigate AI incidents, similar to how the NTSB investigates plane crashes.
Last action: Sep 29, 2025
HR 5511
Rep. Yvette Clarke (D-NY) introduced legislation requiring companies to conduct impact assessments before deploying AI systems that could affect consumers. The bill would give the FTC power to enforce these assessments and create a public registry of high-impact AI systems, marking the first major federal attempt to regulate commercial AI use across industries.
Last action: Sep 19, 2025
HR 4873
Rep. Jimmy Patronis (R-FL) introduced HR 4873 to codify Executive Order 14319 into permanent law. The order requires federal agencies to avoid using AI systems that show bias based on race, gender, or other protected characteristics, and the bill would make it illegal for any federal agency to use AI tools that discriminate in hiring, benefits distribution, or service delivery.
Last action: Aug 5, 2025
HR 4695
Representative Ted Lieu introduced HR 4695 to restrict how companies and government agencies use facial recognition technology. The bill would require businesses to get explicit consent before scanning faces, ban certain uses like emotion detection in hiring, and give people the right to opt out of facial recognition systems.
Last action: Jul 23, 2025
S 2367
Senator Durbin introduced S 2367, which would require companies using AI for important decisions (like hiring, lending, or healthcare) to explain how their AI works and prove it doesn't discriminate. Companies would need to conduct regular audits of their AI systems, tell people when AI makes decisions about them, and let people opt out of certain AI decisions.
Last action: Jul 21, 2025
S 2164
Senator Ron Wyden (D-OR) introduced the Algorithmic Accountability Act of 2025, which would require companies using AI and automated decision-making systems to conduct impact assessments and document how their algorithms work. The bill targets businesses using AI for critical decisions like hiring, lending, healthcare, and housing, forcing them to evaluate their systems for bias, discrimination, and privacy risks before deployment.
Last action: Jun 25, 2025
HR 3919
Rep. Darin LaHood (R-IL) introduced HR 3919 to require U.S. intelligence agencies to develop strategies for using AI to counter threats from China and Russia. The bill mandates the Director of National Intelligence to create a 5-year plan for deploying AI in intelligence operations, focusing on automation, data analysis, and threat detection.
Last action: Jun 11, 2025
S 1290
Senator Gary Peters (D-MI) introduced a bill requiring the National Institute of Standards and Technology (NIST) to create a standardized framework for AI workforce roles and skills. The bill would establish official job titles, required competencies, and career pathways for AI professionals across government and industry, similar to existing frameworks for cybersecurity roles.
Last action: Apr 3, 2025
HR 2385
The CREATE AI Act, introduced in the House of Representatives, would establish the National AI Research Resource (NAIRR) to give academic researchers and small businesses access to computing power and datasets for AI development. This federal program would level the playing field between Big Tech companies and smaller organizations by providing free access to expensive AI infrastructure that currently only major corporations can afford.
Last action: Mar 26, 2025
HB 1620
Indiana Representative King introduced HB 1620, requiring healthcare providers to tell patients when they use AI in medical decisions. If a doctor, hospital, or insurance company uses AI to diagnose you, recommend treatment, or decide coverage, they must disclose this to patients in writing.
Last action: Jan 21, 2025
HB 1296
Indiana HB 1296 would require state agencies to create inventories of all AI systems they use and develop policies for responsible AI deployment. The bill mandates transparency about how government uses AI but doesn't directly regulate private businesses.
Last action: Jan 13, 2025
SB 150
Indiana's SB 150, now signed into law, requires companies using AI in high-stakes decisions (like hiring, lending, or healthcare) to conduct regular bias audits and provide clear explanations when AI affects people's lives. The law creates new compliance requirements for businesses using AI tools, with penalties for companies that don't properly test their systems or notify customers about AI use.
Last action: Mar 13, 2024
SB 452
Indiana just passed SB 452 to regulate how banks and lenders use AI in credit decisions. The law requires financial institutions to explain AI-driven loan denials and conduct regular fairness audits of their automated credit scoring systems.
Last action: May 4, 2023
SB 468
Indiana has updated its commercial code to address AI and other automated systems in business transactions. The bill, signed into law, creates new rules for when AI systems can form contracts and make business decisions, and clarifies liability when AI systems malfunction or make errors.
Last action: May 4, 2023
SB 5
Indiana's SB 5 creates comprehensive consumer data privacy rules similar to California's CCPA and Europe's GDPR. The law gives Indiana residents rights to access, delete, and opt out of the sale of their personal data, while requiring businesses that collect data from Indiana residents to implement specific privacy practices and safeguards.
Last action: May 1, 2023
HB 1563
Indiana HB 1563 would regulate how businesses and government agencies can use facial recognition software. Its sponsors are pushing the bill through the Roads and Transportation Committee (an unusual committee assignment that may signal a focus on transportation-related uses). The bill would likely create new restrictions and requirements for any organization using facial recognition technology in Indiana.
Last action: Jan 19, 2023
HB 1554
HB 1554, introduced in Indiana, aims to protect consumer data privacy. The bill would likely create new requirements for businesses that collect and use personal data, similar to laws in other states like California and Virginia. Without the full bill text, specific requirements and scope remain unclear.
Last action: Jan 19, 2023
SB 358
Senator Freeman's SB 358 requires businesses to get explicit consent before using AI to analyze consumer data in Indiana. Companies would need to tell customers exactly how AI processes their information, let them opt out, and delete data on request. This brings GDPR-style data rights specifically to AI systems.
Last action: Feb 17, 2022
HB 1261
Indiana HB 1261 would create a comprehensive consumer data privacy law, giving residents rights to access, delete, and opt out of the sale of their personal data. The bill requires businesses that collect data on Indiana residents to provide privacy notices and honor consumer requests, similar to laws in California and other states.
Last action: Jan 10, 2022
SB 179
Indiana Senate Bill 179 requires election officials to implement cybersecurity measures for voting systems and creates mandatory incident reporting. The bill, now signed into law, establishes specific security protocols that election technology vendors and local election boards must follow, including annual security assessments and real-time breach notifications.
Last action: Mar 21, 2020
HB 1238
Indiana HB 1238 would require law enforcement agencies to get approval from local government bodies before buying or using surveillance technology like facial recognition, license plate readers, or predictive policing AI. The bill (its sponsor is not listed in the record) would force police departments to publicly disclose what surveillance tech they use and how they use it, giving communities a chance to weigh in before deployment.
Last action: Jan 7, 2020
SB 576
Indiana's SB 576 would ban employers from using AI systems that scan faces or voices during hiring unless they tell candidates first and get written consent. The bill, currently in committee, creates new rules for any company using AI-powered video interviews or voice analysis tools to screen job applicants.
Last action: Jan 14, 2019
HB 1540
Indiana HB 1540 creates new rules for healthcare professionals using AI to make medical decisions. The bill requires doctors, nurses, and other licensed healthcare providers to disclose when they use AI tools for diagnosis or treatment recommendations, and makes them legally responsible for any AI-generated medical advice they provide to patients.
Last action: Apr 26, 2017
AI-generated analysis for informational purposes only. Not legal advice.