Industry Impact
AI legislation affecting Indiana financial services and insurance
Indiana banks, credit unions, insurance companies, and fintech firms face some of the broadest AI regulation now under consideration. Federal bills and Indiana General Assembly proposals cover algorithmic lending, AI-driven underwriting, fraud detection systems, consumer credit decisions, insurance pricing models, and trading algorithms. Indiana's financial services sector, from community banks to large insurance carriers headquartered in Indianapolis, must prepare for explainability mandates and disparate impact testing.
37 Bills Affecting Indiana Financial Services
23 Rated High Risk
Key Compliance Considerations
Indiana lenders using AI-driven credit decisions must make them explainable to consumers under proposed federal rules
Insurance underwriting algorithms used by Indiana carriers face disparate impact testing requirements in pending legislation
Fraud detection systems at Indiana banks and credit unions may need human review processes for flagged transactions
Consumer-facing AI pricing and recommendation engines used by Indiana retailers and financial firms require transparency disclosures
AI Bills Affecting Indiana Financial Services
HR 8094
Rep. Don Beyer (D-VA) introduced legislation requiring companies that develop or deploy large AI models (like GPT-4 or Claude) to publicly disclose detailed information about their AI systems. Companies would need to report training data sources, model capabilities, safety testing results, and energy consumption to a new federal registry within 90 days of deployment.
Last action: Mar 26, 2026
S 4199
Senator Markey (D-MA) introduced a bill that would ban companies from using AI to collect or process personal data from anyone under 17 without explicit consent. The Youth AI Privacy Act specifically targets AI systems that analyze biometric data, predict behavior, or make automated decisions about minors, requiring companies to delete collected data and conduct regular impact assessments.
Last action: Mar 25, 2026
S 4214
Senator Bernie Sanders wants to block all new data center construction in the US until Congress passes laws regulating AI safety. The bill would immediately halt permits and approvals for data centers (the facilities that power cloud computing and AI services) and create a presidential commission to study AI risks.
Last action: Mar 25, 2026
HR 8037
Rep. Baumgartner (R-WA) introduced a bill requiring companies to disclose when they use AI systems trained on data from China, Russia, Iran, or North Korea. Companies would face fines up to $5 million for failing to tell customers about these foreign data sources in their AI products.
Last action: Mar 24, 2026
S 3982
Senator Harris introduced S 3982 to make companies criminally liable when their AI systems are used to commit fraud, even if the company didn't intend the fraud. The bill closes a legal loophole where businesses could claim their AI acted independently, forcing companies to take responsibility for fraudulent outcomes from their automated systems.
Last action: Mar 4, 2026
HR 7786
Representative Yvette Clarke introduced HR 7786 to make companies liable when their AI tools are used for fraud. If someone uses AI to create deepfakes, forge documents, or run scams, both the fraudster AND the AI company could face penalties unless the company took reasonable steps to prevent misuse.
Last action: Mar 4, 2026
S 3952
Senator Peters introduced a bill that would create new compliance requirements for companies using AI in high-stakes decisions like hiring, lending, healthcare, and criminal justice. Companies would need to conduct annual bias audits, implement human oversight systems, and publicly disclose when AI makes decisions affecting people's lives.
Last action: Feb 26, 2026
HB 1421
Indiana House Bill 1421 would completely ban employers from using automated decision systems (like AI hiring software, resume screening tools, or performance evaluation algorithms) to make employment decisions. The bill has just been introduced and sent to the Employment, Labor and Pensions Committee for review.
Last action: Jan 8, 2026
HR 6461
Representative Ted Lieu introduced the READ AI Models Act (HR 6461) to require companies developing powerful AI systems to run safety tests and share the results with the government. The bill specifically targets frontier AI models (think GPT-4 level and beyond) and would force developers to test for dangerous capabilities like cyberattacks, bioweapon design, or autonomous replication before release.
Last action: Dec 4, 2025
HR 6356
Rep. Yvette Clarke (D-NY) introduced legislation requiring companies to audit their AI systems for bias and discrimination before using them to make decisions about people. The bill would give individuals the right to know when AI makes decisions about them and to appeal those decisions to a human.
Last action: Dec 2, 2025
S 3108
Senator Robert Casey Jr. introduced the AI-Related Job Impacts Clarity Act (S 3108), which would require companies to tell the government before using AI in ways that could affect jobs. Companies planning to deploy AI systems that might automate work or change employment would need to file advance notices with the Department of Labor, explaining how many workers could be affected and what support they'll provide.
Last action: Nov 5, 2025
S 2938
Senator Cantwell introduced the Artificial Intelligence Risk Evaluation Act, which would require companies developing AI systems to conduct safety evaluations before release and report critical failures to the government. The bill creates a new federal office to oversee AI safety and gives regulators power to investigate AI incidents, similar to how the NTSB investigates plane crashes.
Last action: Sep 29, 2025
HR 4695
Representative Ted Lieu introduced HR 4695 to restrict how companies and government agencies use facial recognition technology. The bill would require businesses to get explicit consent before scanning faces, ban certain uses like emotion detection in hiring, and give people the right to opt out of facial recognition systems.
Last action: Jul 23, 2025
S 2367
Senator Durbin introduced S 2367, which would require companies using AI for important decisions (like hiring, lending, or healthcare) to explain how their AI works and prove it doesn't discriminate. Companies would need to conduct regular audits of their AI systems, tell people when AI makes decisions about them, and let people opt out of certain AI decisions.
Last action: Jul 21, 2025
SB 150
Indiana's SB 150, now signed into law, requires companies using AI in high-stakes decisions (like hiring, lending, or healthcare) to conduct regular bias audits and provide clear explanations when AI affects people's lives. The law creates new compliance requirements for businesses using AI tools, with penalties for companies that don't properly test their systems or notify customers about AI use.
Last action: Mar 13, 2024
SB 468
Indiana has updated its commercial code to address AI and other automated systems in business transactions. The bill, signed into law, creates new rules for when AI systems can form contracts and make business decisions, and clarifies liability when AI systems malfunction or make errors.
Last action: May 4, 2023
SB 452
Indiana just passed SB 452 to regulate how banks and lenders use AI in credit decisions. The law requires financial institutions to explain AI-driven loan denials and conduct regular fairness audits of their automated credit scoring systems.
Last action: May 4, 2023
SB 5
Indiana's SB 5 creates comprehensive consumer data privacy rules similar to California's CCPA and Europe's GDPR. The law gives Indiana residents rights to access, delete, and opt out of the sale of their personal data, while requiring businesses that collect data from Indiana residents to implement specific privacy practices and safeguards.
Last action: May 1, 2023
HB 1563
Indiana HB 1563 would regulate how businesses and government agencies can use facial recognition software. The bill is currently before the Roads and Transportation Committee, an unusual assignment that may signal a focus on transportation-related uses. It would likely create new restrictions and requirements for any organization using facial recognition technology in Indiana.
Last action: Jan 19, 2023
SB 358
Senator Freeman's SB 358 requires businesses to get explicit consent before using AI to analyze consumer data in Indiana. Companies would need to tell customers exactly how AI processes their information, let them opt out, and delete data on request. This brings GDPR-style data rights specifically to AI systems.
Last action: Feb 17, 2022
HB 1261
Indiana HB 1261 would create a comprehensive consumer data privacy law, giving residents rights to access, delete, and opt out of the sale of their personal data. The bill requires businesses that collect data on Indiana residents to provide privacy notices and honor consumer requests, similar to laws in California and other states.
Last action: Jan 10, 2022
SB 576
Indiana's SB 576 would ban employers from using AI systems that scan faces or voices during hiring unless they tell candidates first and get written consent. The bill, currently in committee, creates new rules for any company using AI-powered video interviews or voice analysis tools to screen job applicants.
Last action: Jan 14, 2019
HB 1540
Indiana HB 1540 creates new rules for healthcare professionals using AI to make medical decisions. The bill requires doctors, nurses, and other licensed healthcare providers to disclose when they use AI tools for diagnosis or treatment recommendations, and makes them legally responsible for any AI-generated medical advice they provide to patients.
Last action: Apr 26, 2017
HR 8031
Representative Boebert introduced HR 8031 to repeal Biden's Executive Order on AI that established federal AI safety standards and oversight requirements. The bill would eliminate current federal AI governance frameworks, removing requirements for federal agencies to assess AI risks and for companies to report on their AI development activities.
Last action: Mar 20, 2026
S 4113
Senator Elissa Slotkin (D-MI) introduced the AI Guardrails Act to force federal agencies to set safety rules for AI systems before they can deploy them. The bill requires agencies to identify risks, establish testing procedures, and create ways to shut down AI systems that go wrong, with the Department of Defense and intelligence agencies mostly exempt.
Last action: Mar 17, 2026
S 4098
Senator Ted Budd (R-NC) introduced the Artificial Intelligence-Ready Data Act to create federal guidelines for how businesses prepare and manage data used in AI systems. The bill would establish new requirements for data quality, documentation, and transparency when companies use data to train or operate AI tools, affecting any business that develops or deploys AI systems.
Last action: Mar 16, 2026
HR 7576
Representatives Beyer and Obernolte introduced HR 7576 to create AI workforce training programs through tax credits. Companies that train workers in AI skills would get tax breaks, and the bill establishes government programs to help workers whose jobs are displaced by AI automation.
Last action: Feb 13, 2026
S 2937
Senator Thom Tillis introduced the AI LEAD Act to regulate how federal agencies use AI systems. The bill requires agencies to tell Congress before buying or using AI, sets up testing requirements to catch problems before deployment, and creates new oversight rules with real penalties if agencies mess up their AI implementations.
Last action: Sep 29, 2025
HB 1620
Indiana Representative King introduced HB 1620, requiring healthcare providers to tell patients when they use AI in medical decisions. If a doctor, hospital, or insurance company uses AI to diagnose you, recommend treatment, or decide coverage, they must disclose this to patients in writing.
Last action: Jan 21, 2025
HB 1554
HB 1554, introduced in Indiana, aims to protect consumer data privacy. The bill would likely create new requirements for businesses that collect and use personal data, similar to laws in other states like California and Virginia. Without the full bill text, specific requirements and scope remain unclear.
Last action: Jan 19, 2023
S 4216
Senator Brian Schatz (D-HI) introduced a bill to repeal President Biden's Executive Order on AI, which currently requires federal agencies to develop AI safety standards and companies to share AI safety test results with the government. This would eliminate federal AI oversight requirements that the Executive Order put in place.
Last action: Mar 26, 2026
HRES 1007
House Resolution 1007 is a non-binding resolution that expresses Congress's opinion on how AI should be used in banking, lending, and housing. It doesn't create any new laws or requirements; it just states that Congress thinks financial companies should use AI responsibly, avoid discrimination, and be transparent about their AI systems.
Last action: Mar 19, 2026
HR 7294
Rep. Robert Menendez (D-NJ) introduced the AI for Secure Networks Act to improve cybersecurity in critical infrastructure by using AI to detect and respond to threats. The bill would direct the Department of Homeland Security to develop AI tools for protecting power grids, water systems, and other essential services from cyber attacks.
Last action: Jan 30, 2026
HR 7058
Representative Jim Himes introduced HR 7058, which requires the State Department to create an office that evaluates AI risks from China, Russia, and other adversary nations. The bill doesn't regulate businesses directly but mandates government reports on foreign AI threats that could influence future regulations and federal AI procurement decisions.
Last action: Jan 14, 2026
HR 6996
The Full AI Stack Export Promotion Act (HR 6996) aims to boost US exports of AI technologies by streamlining export controls and creating new government programs to help American AI companies sell internationally. While the full text isn't available yet, the title suggests it covers the entire AI technology chain from chips to software, likely reducing barriers that currently make it hard for US companies to export AI products.
Last action: Jan 9, 2026
S 3586
Senator Todd Young (R-IN) introduced a bill to create a voluntary AI certification program specifically for small businesses. The bill would establish an 'AI Center of Excellence' at the Small Business Administration that helps small companies adopt AI responsibly through training, resources, and a certification process that could give them advantages in federal contracting.
Last action: Jan 7, 2026
HR 2385
The CREATE AI Act, introduced in the House of Representatives, would establish the National AI Research Resource (NAIRR) to give academic researchers and small businesses access to computing power and datasets for AI development. This federal program would level the playing field between Big Tech companies and smaller organizations by providing free access to expensive AI infrastructure that currently only major corporations can afford.
Last action: Mar 26, 2025
Frequently Asked Questions
What AI laws affect Indiana banks and financial services companies?
Indiana financial institutions face AI regulation from multiple directions. Federal bills target AI-driven credit decisions, algorithmic lending, and fraud detection systems. Pending legislation would require explainability for any AI that denies credit, sets insurance rates, or flags fraud. The CFPB is also pursuing enforcement around AI in consumer finance. Financial services companies in Indiana should track both federal proposals and state-level consumer protection bills.
Can banks use AI for credit decisions in Indiana?
Currently, yes, but multiple bills would add requirements. Proposed legislation would require that AI credit decisions be explainable to consumers, tested for disparate impact across protected classes, and subject to human review when credit is denied. The Equal Credit Opportunity Act already prohibits discrimination, and regulators are increasingly scrutinizing whether AI models comply. Proactive testing and documentation are the safest path.
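As a rough illustration of what first-pass disparate impact testing can look like in practice, the sketch below applies the "four-fifths rule" commonly used as an initial fairness screen. The group labels, approval counts, and the 0.8 threshold are illustrative assumptions, not requirements drawn from any specific bill discussed above.

```python
# Illustrative four-fifths-rule check for AI credit approvals.
# All numbers and the 0.8 threshold are example assumptions.

def approval_rate(approved: int, applicants: int) -> float:
    """Share of applicants approved for a given group."""
    return approved / applicants if applicants else 0.0

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a protected group's approval rate to the reference group's."""
    return group_rate / reference_rate if reference_rate else 0.0

# Example: compare approval rates across two applicant groups.
reference = approval_rate(approved=720, applicants=1000)  # 0.72
protected = approval_rate(approved=540, applicants=1000)  # 0.54

ratio = adverse_impact_ratio(protected, reference)  # 0.75
flagged = ratio < 0.8  # below four-fifths: flag the model for review

print(f"adverse impact ratio: {ratio:.2f}, flagged: {flagged}")
```

A ratio below 0.8 does not by itself prove discrimination, but it is the kind of signal that triggers deeper statistical review and documentation in most fair-lending testing programs.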
How should financial services firms prepare for AI regulation?
Start with model governance: document every AI model used in customer-facing decisions, including credit scoring, fraud detection, and pricing. Implement disparate impact testing on a regular schedule. Ensure consumers can request a human review of any AI-driven denial. Build explainability into your models now, before legislation mandates it. These steps align with requirements in both current enforcement actions and pending bills.
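One concrete way to start the model-governance step above is a simple model inventory: a record per customer-facing model that tracks its purpose, owner, and audit schedule. The sketch below is a minimal illustration; the field names and quarterly cadence are assumptions, not prescribed by any statute or regulator.

```python
# Minimal sketch of a model-inventory record for AI governance.
# Field names and the 90-day audit cadence are illustrative assumptions.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ModelRecord:
    name: str                      # e.g. "consumer_credit_score_v3" (hypothetical)
    purpose: str                   # customer-facing decision it drives
    owner: str                     # accountable business owner
    last_bias_audit: date          # most recent disparate impact test
    audit_interval_days: int = 90  # assumed quarterly testing schedule
    human_review_available: bool = True  # consumers can appeal denials

    def audit_overdue(self, today: date) -> bool:
        """True if the next scheduled bias audit has been missed."""
        due = self.last_bias_audit + timedelta(days=self.audit_interval_days)
        return today > due

record = ModelRecord(
    name="consumer_credit_score_v3",
    purpose="credit approval and pricing",
    owner="retail lending risk team",
    last_bias_audit=date(2026, 1, 15),
)

print(record.audit_overdue(date(2026, 6, 1)))  # audit lapsed -> True
```

Even a lightweight inventory like this gives compliance teams a single place to answer the questions pending bills keep asking: what the model decides, who owns it, when it was last tested, and whether a human review path exists.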
Need help with Indiana financial services AI compliance?
Our team helps organizations build AI governance frameworks tailored to their industry and risk profile.
Talk to Our Team