Industry Impact
AI legislation affecting Indiana healthcare organizations
Indiana healthcare organizations face growing AI regulation at both the federal and state levels. Bills in Congress and the Indiana General Assembly address AI-assisted diagnostics, clinical decision support, patient data privacy, telehealth, pharmaceutical research, and medical device software. Whether you operate a hospital system in Indianapolis, a clinic in Fort Wayne, or a telehealth platform serving Indiana patients, these bills could create new compliance requirements.
35 Bills Affecting Indiana Healthcare
19 High Risk
Key Compliance Considerations
Indiana providers using AI-assisted diagnostic tools may need to disclose AI involvement to patients under proposed state and federal legislation
Clinical decision support systems used in Indiana hospitals would face new validation and testing requirements under multiple pending bills
Patient data used to train AI models would be subject to stricter consent and de-identification rules under both federal proposals and Indiana data governance bills
Telehealth AI serving Indiana patients would need to meet the same standard-of-care requirements as in-person care under proposed regulation
AI Bills Affecting Indiana Healthcare
HR 8094
Rep. Don Beyer (D-VA) introduced legislation requiring companies that develop or deploy large AI models (like GPT-4 or Claude) to publicly disclose detailed information about their AI systems. Companies would need to report training data sources, model capabilities, safety testing results, and energy consumption to a new federal registry within 90 days of deployment.
Last action: Mar 26, 2026
S 4199
Senator Markey (D-MA) introduced a bill that would ban companies from using AI to collect or process personal data from anyone under 17 without explicit consent. The Youth AI Privacy Act specifically targets AI systems that analyze biometric data, predict behavior, or make automated decisions about minors, requiring companies to delete collected data and conduct regular impact assessments.
Last action: Mar 25, 2026
S 4214
Senator Bernie Sanders wants to block all new data center construction in the US until Congress passes laws regulating AI safety. The bill would immediately halt permits and approvals for data centers (the facilities that power cloud computing and AI services) and create a presidential commission to study AI risks.
Last action: Mar 25, 2026
HR 8037
Rep. Baumgartner (R-WA) introduced a bill requiring companies to disclose when they use AI systems trained on data from China, Russia, Iran, or North Korea. Companies would face fines up to $5 million for failing to tell customers about these foreign data sources in their AI products.
Last action: Mar 24, 2026
S 3982
Senator Harris introduced S 3982 to make companies criminally liable when their AI systems are used to commit fraud, even if the company didn't intend the fraud. The bill closes a legal loophole where businesses could claim their AI acted independently, forcing companies to take responsibility for fraudulent outcomes from their automated systems.
Last action: Mar 4, 2026
S 3952
Senator Peters introduced a bill that would create new compliance requirements for companies using AI in high-stakes decisions like hiring, lending, healthcare, and criminal justice. Companies would need to conduct annual bias audits, implement human oversight systems, and publicly disclose when AI makes decisions affecting people's lives.
Last action: Feb 26, 2026
HR 7696
Rep. Jackson Lee introduced HR 7696 to protect critical infrastructure from AI-powered cyberattacks. The bill would require companies operating power grids, water systems, and other essential services to implement specific AI security measures and conduct regular vulnerability assessments. It creates new federal oversight of AI systems used in critical infrastructure with mandatory reporting of AI-related security incidents.
Last action: Feb 25, 2026
HB 1421
Indiana House Bill 1421 would completely ban employers from using automated decision systems (like AI hiring software, resume screening tools, or performance evaluation algorithms) to make employment decisions. The bill has just been introduced and sent to the Employment, Labor and Pensions Committee for review.
Last action: Jan 8, 2026
HR 6461
Representative Ted Lieu introduced the READ AI Models Act (HR 6461) to require companies developing powerful AI systems to run safety tests and share the results with the government. The bill specifically targets frontier AI models (think GPT-4 level and beyond) and would force developers to test for dangerous capabilities like cyberattacks, bioweapon design, or autonomous replication before release.
Last action: Dec 4, 2025
HR 6356
Rep. Yvette Clarke (D-NY) introduced legislation requiring companies to audit their AI systems for bias and discrimination before using them to make decisions about people. The bill would give individuals the right to know when AI makes decisions about them and to appeal those decisions to a human.
Last action: Dec 2, 2025
S 3108
Senator Robert Casey Jr. introduced the AI-Related Job Impacts Clarity Act (S 3108), which would require companies to tell the government before using AI in ways that could affect jobs. Companies planning to deploy AI systems that might automate work or change employment would need to file advance notices with the Department of Labor, explaining how many workers could be affected and what support they'll provide.
Last action: Nov 5, 2025
S 2938
Senator Cantwell introduced the Artificial Intelligence Risk Evaluation Act, which would require companies developing AI systems to conduct safety evaluations before release and report critical failures to the government. The bill creates a new federal office to oversee AI safety and gives regulators power to investigate AI incidents, similar to how the NTSB investigates plane crashes.
Last action: Sep 29, 2025
S 2367
Senator Durbin introduced S 2367, which would require companies using AI for important decisions (like hiring, lending, or healthcare) to explain how their AI works and prove it doesn't discriminate. Companies would need to conduct regular audits of their AI systems, tell people when AI makes decisions about them, and let people opt out of certain AI decisions.
Last action: Jul 21, 2025
SB 150
Indiana's SB 150, now signed into law, requires companies using AI in high-stakes decisions (like hiring, lending, or healthcare) to conduct regular bias audits and provide clear explanations when AI affects people's lives. The law creates new compliance requirements for businesses using AI tools, with penalties for companies that don't properly test their systems or notify customers about AI use.
Last action: Mar 13, 2024
SB 468
Indiana has updated its commercial code to address AI and other automated systems in business transactions. The bill, signed into law, creates new rules for when AI systems can form contracts and make business decisions, and clarifies liability when AI systems malfunction or make errors.
Last action: May 4, 2023
SB 5
Indiana's SB 5 creates comprehensive consumer data privacy rules similar to California's CCPA and Europe's GDPR. The law gives Indiana residents rights to access, delete, and opt out of the sale of their personal data, while requiring businesses that collect data from Indiana residents to implement specific privacy practices and safeguards.
Last action: May 1, 2023
SB 358
Senator Freeman's SB 358 would require businesses to get explicit consent before using AI to analyze consumer data in Indiana. Companies would need to tell customers exactly how AI processes their information, let them opt out, and delete data on request. The bill would bring GDPR-style data rights specifically to AI systems.
Last action: Feb 17, 2022
SB 576
Indiana's SB 576 would ban employers from using AI systems that scan faces or voices during hiring unless they tell candidates first and get written consent. The bill, which stalled in committee after its 2019 introduction, would create new rules for any company using AI-powered video interviews or voice analysis tools to screen job applicants.
Last action: Jan 14, 2019
HB 1540
Indiana HB 1540 would create new rules for healthcare professionals using AI to make medical decisions. The bill would require doctors, nurses, and other licensed healthcare providers to disclose when they use AI tools for diagnosis or treatment recommendations, and would make them legally responsible for any AI-generated medical advice they provide to patients.
Last action: Apr 26, 2017
HR 8031
Representative Boebert introduced HR 8031 to repeal Biden's Executive Order on AI that established federal AI safety standards and oversight requirements. The bill would eliminate current federal AI governance frameworks, removing requirements for federal agencies to assess AI risks and for companies to report on their AI development activities.
Last action: Mar 20, 2026
S 4113
Senator Elissa Slotkin (D-MI) introduced the AI Guardrails Act to force federal agencies to set safety rules for AI systems before they can deploy them. The bill requires agencies to identify risks, establish testing procedures, and create ways to shut down AI systems that go wrong, with the Department of Defense and intelligence agencies mostly exempt.
Last action: Mar 17, 2026
S 4098
Senator Ted Budd (R-NC) introduced the Artificial Intelligence-Ready Data Act to create federal guidelines for how businesses prepare and manage data used in AI systems. The bill would establish new requirements for data quality, documentation, and transparency when companies use data to train or operate AI tools, affecting any business that develops or deploys AI systems.
Last action: Mar 16, 2026
S 4069
Senator Todd Young (R-IN) introduced a bill requiring NIST to create standardized formats for biological data used in AI systems. The bill focuses on making bio-data (like genomic sequences, protein structures, and clinical trial results) consistent and interoperable across different AI platforms, which would help pharmaceutical companies, biotech firms, and research institutions share data more easily for drug discovery and medical AI development.
Last action: Mar 12, 2026
S 2937
Senator Thom Tillis introduced the AI LEAD Act to regulate how federal agencies use AI systems. The bill requires agencies to tell Congress before buying or using AI, sets up testing requirements to catch problems before deployment, and creates new oversight rules with penalties for agencies whose AI implementations fall short.
Last action: Sep 29, 2025
HB 1620
Indiana Representative King introduced HB 1620, requiring healthcare providers to tell patients when they use AI in medical decisions. If a doctor, hospital, or insurance company uses AI to diagnose a patient, recommend treatment, or decide coverage, it must disclose this to the patient in writing.
Last action: Jan 21, 2025
HB 1554
HB 1554, introduced in Indiana, aims to protect consumer data privacy. The bill would likely create new requirements for businesses that collect and use personal data, similar to laws in other states like California and Virginia. Without the full bill text, specific requirements and scope remain unclear.
Last action: Jan 19, 2023
S 4216
Senator Brian Schatz (D-HI) introduced a bill to repeal President Biden's Executive Order on AI, which currently requires federal agencies to develop AI safety standards and companies to share AI safety test results with the government. This would eliminate federal AI oversight requirements that the Executive Order put in place.
Last action: Mar 26, 2026
S 4179
Senator Murkowski (R-AK) introduced a bill requiring states to involve tribal representatives when investigating child abuse cases involving Native American children. The bill mandates that state child protective services notify and coordinate with tribes within 24 hours when AI-powered risk assessment tools flag potential abuse cases involving Native children.
Last action: Mar 24, 2026
HR 7968
Rep. Suhas Subramanyam (D-VA) introduced this bill to help small businesses and startups access federal AI resources. It would create a new program at NIST (National Institute of Standards and Technology) that gives smaller companies access to government AI testing tools, datasets, and expertise that are currently only available to large corporations and research institutions.
Last action: Mar 17, 2026
HR 7907
Rep. Ro Khanna (D-CA) introduced a bill directing NIST to create standardized formats for biological data that AI systems can read and process. The bill focuses on making DNA sequences, protein structures, and other biological data work better across different AI platforms and research tools. It aims to accelerate biotech innovation by making it easier for AI companies to train models on biological datasets.
Last action: Mar 12, 2026
HR 7294
Rep. Robert Menendez (D-NJ) introduced the AI for Secure Networks Act to improve cybersecurity in critical infrastructure by using AI to detect and respond to threats. The bill would direct the Department of Homeland Security to develop AI tools for protecting power grids, water systems, and other essential services from cyberattacks.
Last action: Jan 30, 2026
HR 7058
Representative Jim Himes introduced HR 7058, which requires the State Department to create an office that evaluates AI risks from China, Russia, and other adversary nations. The bill doesn't regulate businesses directly but mandates government reports on foreign AI threats that could influence future regulations and federal AI procurement decisions.
Last action: Jan 14, 2026
HR 6996
The Full AI Stack Export Promotion Act (HR 6996) aims to boost US exports of AI technologies by streamlining export controls and creating new government programs to help American AI companies sell internationally. While the full text isn't available yet, the title suggests it covers the entire AI technology chain from chips to software, likely reducing barriers that currently make it hard for US companies to export AI products.
Last action: Jan 9, 2026
S 3586
Senator Todd Young (R-IN) introduced a bill to create a voluntary AI certification program specifically for small businesses. The bill would establish an 'AI Center of Excellence' at the Small Business Administration that helps small companies adopt AI responsibly through training, resources, and a certification process that could give them advantages in federal contracting.
Last action: Jan 7, 2026
HR 2385
The CREATE AI Act, introduced in the House of Representatives, would establish the National AI Research Resource (NAIRR) to give academic researchers and small businesses access to computing power and datasets for AI development. This federal program would level the playing field between Big Tech companies and smaller organizations by providing free access to expensive AI infrastructure that currently only major corporations can afford.
Last action: Mar 26, 2025
Frequently Asked Questions
What AI laws affect healthcare organizations in Indiana?
Indiana healthcare organizations face both federal and state AI bills targeting clinical decision support systems, AI-assisted diagnostics, patient data privacy, and telehealth. Federal bills like the proposed AI accountability acts would require bias testing and validation for any AI used in clinical settings. Indiana-specific bills address data governance and automated decision-making in healthcare. AI Law Tracker monitors all active bills and classifies their risk level for healthcare providers.
Do Indiana hospitals need to disclose AI use to patients?
Several pending bills at both the federal and state level would require healthcare providers to disclose when AI is involved in diagnostic or treatment decisions. Indiana's SB 150, already signed into law, requires explanations when AI affects high-stakes decisions, including in healthcare, and the trend toward broader disclosure mandates is clear. Organizations should prepare patient disclosure processes now, especially for AI-assisted radiology, pathology, and clinical decision support tools.
How should healthcare companies prepare for AI regulation?
Start by inventorying all AI tools used in patient-facing workflows. Document how each tool factors into clinical decisions, what data it accesses, and whether a human reviews its outputs. Establish a validation process for AI diagnostic tools and create disclosure templates for patients. These steps align with requirements in most pending healthcare AI legislation.
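For teams that want to track this inventory systematically rather than in a spreadsheet, the record-keeping described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the field names, tool names, and flagging rule are invented for this example and are not prescribed by any statute or pending bill.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record for one AI tool in a patient-facing
# workflow. All field names are illustrative, not regulatory terms.
@dataclass
class AIToolRecord:
    name: str                            # vendor or internal tool name
    clinical_role: str                   # how the tool factors into decisions
    data_accessed: list[str] = field(default_factory=list)  # data categories
    human_review: bool = False           # does a clinician review outputs?
    patient_disclosure: bool = False     # is AI use disclosed to patients?

def needs_attention(record: AIToolRecord) -> bool:
    """Flag tools that lack human review or patient disclosure,
    the two controls most pending bills focus on."""
    return not (record.human_review and record.patient_disclosure)

# Example inventory (fictional tools)
inventory = [
    AIToolRecord("RadAssist", "flags suspected nodules on chest CTs",
                 ["imaging", "demographics"], human_review=True),
    AIToolRecord("TriageBot", "suggests ER triage priority",
                 ["vitals", "chief complaint"], human_review=True,
                 patient_disclosure=True),
]

for record in inventory:
    if needs_attention(record):
        print(f"Review needed: {record.name}")
```

Running this sketch flags "RadAssist" because no patient disclosure is recorded, which is exactly the kind of gap a compliance review would surface before disclosure requirements take effect.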
Need help with Indiana healthcare AI compliance?
Our team helps organizations build AI governance frameworks tailored to their industry and risk profile.
Talk to Our Team