AI Compliance Checklist: 8 Steps for Indiana Businesses

A practical, step-by-step checklist for Indiana businesses preparing for AI regulation. Covers AI inventory, risk assessment, data governance, vendor management, transparency, human oversight, incident response, and ongoing monitoring.

Last updated March 21, 2026

Step 1: AI Inventory

The foundation of AI compliance is knowing what AI tools your organization uses. This includes approved enterprise tools and shadow AI (tools employees use without formal approval).

Create a comprehensive catalog of every AI tool in use across your organization. For each tool, document: the vendor name and product, what business function it supports, what data it processes, who uses it, and whether it was formally approved by IT or leadership.
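
If you keep the catalog in a spreadsheet or a script, each entry reduces to a small structured record. Below is a minimal sketch in Python; the field names mirror the list above, and the example tools and values are illustrative, not a required schema.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in the AI inventory. Field names are illustrative."""
    vendor: str             # vendor name and product
    business_function: str  # what business function it supports
    data_processed: str     # what data it processes
    users: str              # who uses it
    approved: bool          # formally approved by IT or leadership?

inventory = [
    AIToolRecord("OpenAI ChatGPT", "drafting and research",
                 "prompts (may include customer names)",
                 "marketing team", approved=False),
    AIToolRecord("Microsoft Copilot", "document drafting",
                 "internal documents", "all staff", approved=True),
]

# Shadow AI surfaces immediately: the unapproved entries
print([tool.vendor for tool in inventory if not tool.approved])
```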

Common AI tools to look for include: ChatGPT and other large language models, AI features built into existing software (Microsoft Copilot, Salesforce Einstein, etc.), automated marketing platforms, AI-powered analytics tools, hiring and recruitment AI, customer service chatbots, and AI-driven decision-support systems.

Repeat the inventory at least quarterly: AI adoption moves fast, and new tools appear constantly. Assign an owner for the inventory and make updates part of your regular governance cycle.

Key Takeaway

You cannot comply with AI regulation if you do not know what AI tools your organization uses. Shadow AI (unapproved tools employees use on their own) is your biggest blind spot.

Step 2: Risk Assessment

Once you know what AI tools you use, assess the risk level of each one. Risk depends on what decisions the AI influences, what data it processes, and how much human oversight exists.

High-risk AI applications include: tools that make or influence employment decisions (hiring, firing, promotion), AI used in healthcare diagnosis or treatment, lending and insurance algorithms, AI that processes personal data of minors, and any AI system that makes autonomous decisions without human review.

Medium-risk applications include: marketing personalization, customer service chatbots, internal productivity tools, and AI-powered analytics that inform (but do not make) business decisions.

Low-risk applications include: AI spell-checkers, search algorithms, and basic automation tools that do not process personal data or make consequential decisions.
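
If your inventory is tracked in code, this tiering can be applied mechanically from a few yes/no questions per tool. A minimal sketch follows; the rules are a plain-English reading of the examples above, not criteria from any specific bill, so adapt them with counsel's guidance.

```python
def classify_risk(consequential_decisions: bool,
                  autonomous: bool,
                  personal_data: bool) -> str:
    """Three-tier classification mirroring the examples above."""
    # High: makes or influences consequential decisions (hiring,
    # lending, healthcare), or acts autonomously without human review
    if consequential_decisions or autonomous:
        return "high"
    # Medium: personalizes or informs decisions using personal data
    if personal_data:
        return "medium"
    # Low: no personal data, no consequential decisions
    return "low"

print(classify_risk(True, False, True))    # hiring AI -> high
print(classify_risk(False, False, True))   # marketing personalization -> medium
print(classify_risk(False, False, False))  # spell-checker -> low
```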

Use our free AI Risk Check tool to get a personalized risk assessment based on your industry, company size, AI usage, and location.

Step 3: Data Governance Review

AI systems are only as compliant as the data they use. Review how data flows into and out of every AI tool in your inventory.

Key questions to answer: What personal data does each AI tool collect or process? Is data being sent to third-party AI providers? Are employees inputting confidential business information or customer data into AI tools? Does your data retention policy cover AI-generated content? Do you have consent for the data being used?

Create a data flow map for each high-risk AI tool showing where data originates, how it enters the AI system, what processing occurs, where outputs go, and how long data is retained. This documentation will be required under several proposed bills and is essential for any data privacy compliance program.
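
The map itself can be as simple as one structured record per high-risk tool, covering the same five fields. A sketch follows; the tool and all values are hypothetical.

```python
# Data-flow map for one high-risk tool; tool and values are hypothetical
data_flow = {
    "tool":        "resume-screening service",
    "origin":      "applicant-submitted resumes (careers page)",
    "entry_point": "nightly export from the ATS to the vendor's API",
    "processing":  "ranking against job-description keywords",
    "outputs":     "recruiter dashboard; scores written back to ATS",
    "retention":   "vendor deletes raw resumes after 90 days (per contract)",
}

for item, value in data_flow.items():
    print(f"{item:>12}: {value}")
```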

Step 4: Vendor and Deployer Liability Review

Many proposed AI bills distinguish between AI developers (who build the tools) and deployers (who use them). As a deployer, your organization may be liable for harms caused by AI tools you purchase from vendors.

Review every AI vendor contract for: indemnification clauses covering AI-related claims, data usage rights (can the vendor use your data to train their models?), bias testing and audit commitments, transparency obligations (can the vendor explain how their AI works?), compliance representations (does the vendor commit to complying with applicable AI laws?), and incident notification requirements.
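
To make this review auditable, record each clause as a present/absent item per vendor contract, as in the sketch below. The vendor name and statuses are hypothetical; the clause list comes from the paragraph above.

```python
# Clause coverage per vendor contract; True = present and adequate.
# The vendor name and statuses are hypothetical.
vendor_review = {
    "ExampleVendor Inc.": {
        "indemnification for AI-related claims": True,
        "data usage rights (no training on our data)": False,
        "bias testing and audit commitments": False,
        "transparency obligations": True,
        "compliance representations": True,
        "incident notification requirements": True,
    },
}

for vendor, clauses in vendor_review.items():
    gaps = [clause for clause, ok in clauses.items() if not ok]
    print(f"{vendor}: {'renegotiate: ' + '; '.join(gaps) if gaps else 'adequate'}")
```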

If your vendor contracts do not address these areas, negotiate updated terms. Document your due diligence process so you can demonstrate responsible vendor selection if a compliance issue arises.

Key Takeaway

Many proposed AI bills make deployers (not just developers) liable for AI harms. Your vendor relationship terms matter. If your contracts do not address AI liability, negotiate updated terms now.

Step 5: Transparency and Disclosure Requirements

Transparency is the most common requirement across proposed AI legislation. Nearly every bill includes some form of disclosure obligation when AI is used in consequential decisions.

Audit your current disclosure practices: Do customers know when they are interacting with an AI chatbot? Are job candidates notified when AI evaluates their application? Do patients know when AI influences their diagnosis or treatment plan? Are borrowers informed when AI is used in credit decisions?

For each AI application, determine whether disclosure is currently provided, what form the disclosure takes, and whether it meets the standards proposed in pending legislation. Many bills require specific disclosures: not just that AI is used, but how it is used and what role it plays in the decision.
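
The audit can be captured as one row per AI application: whether notice is provided, what form it takes, and whether it explains the AI's role. A sketch with hypothetical rows:

```python
# One row per AI application; all rows are hypothetical examples.
# Columns: application, disclosure provided?, form, states AI's role?
disclosure_audit = [
    ("customer service chatbot", True,  "banner at chat start",  False),
    ("resume screening",         False, None,                    False),
    ("credit decisioning",       True,  "adverse-action notice", True),
]

for app, provided, form, states_role in disclosure_audit:
    if not provided:
        print(f"{app}: no disclosure -- add one")
    elif not states_role:
        print(f"{app}: disclosure exists ({form}) but does not explain the AI's role")
```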

Step 6: Human Oversight Policies

Multiple proposed bills require human oversight for high-risk AI decisions. This means a qualified person must review AI outputs before they result in consequential actions like hiring decisions, medical diagnoses, credit denials, or benefit determinations.

For each high-risk AI application, establish: who is responsible for reviewing AI outputs, what training they receive, what authority they have to override AI recommendations, how overrides are documented, and what escalation paths exist for edge cases.
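
Documenting overrides is the element most often skipped and the easiest to systematize. Below is a minimal sketch of one review-log entry; the fields, tool, and values are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ReviewLogEntry:
    """One human review of one AI output. Fields are illustrative."""
    tool: str
    ai_recommendation: str
    reviewer: str            # must have training and override authority
    decision: str            # "accepted", "overridden", or "escalated"
    rationale: str           # required, especially for overrides
    reviewed_at: datetime

entry = ReviewLogEntry(
    tool="resume-screening service",        # hypothetical tool
    ai_recommendation="reject candidate",
    reviewer="j.doe (HR, oversight training 2026-01)",
    decision="overridden",
    rationale="relevant experience the keyword match missed",
    reviewed_at=datetime.now(timezone.utc),
)
print(entry)
```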

Human oversight is not a rubber stamp. Reviewers must have the expertise, time, and authority to meaningfully evaluate AI outputs. If your human review process is perfunctory (approving every AI recommendation without genuine evaluation), it will not satisfy the intent of proposed legislation.

Key Takeaway

Human oversight is not a rubber stamp. Reviewers must have the expertise, time, and authority to meaningfully evaluate AI outputs. Perfunctory review will not satisfy proposed legislation.

Step 7: Incident Response Planning

AI systems can produce harmful, biased, or incorrect outputs. Your organization needs a plan for when (not if) an AI tool produces a bad result.

Your AI incident response plan should cover: how AI errors are identified and reported internally, who has authority to take an AI system offline, how affected individuals are notified, what remediation steps are taken, how incidents are documented for compliance purposes, and what changes are made to prevent recurrence.

Test your incident response plan with tabletop exercises. Consider scenarios like: an AI hiring tool is discovered to be screening out candidates from a protected class, a customer-facing chatbot provides inaccurate information that causes harm, an AI system exposes personal data through its outputs, or an AI decision-support tool produces consistently biased recommendations.
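
For the documentation element of the plan, a structured record per incident keeps both tabletop exercises and real events auditable. The sketch below walks the hiring-tool scenario through the plan elements; every value is hypothetical.

```python
# Minimal AI incident record; keys mirror the plan elements above,
# and every value is hypothetical.
incident = {
    "identified_by":    "HR analyst, via the internal AI-issues channel",
    "system":           "resume-screening service",
    "description":      "screening out candidates from a protected class",
    "taken_offline_by": "CIO (authority named in the response plan)",
    "notification":     "candidates affected during the incident window",
    "remediation":      "manual re-review of all affected applications",
    "prevention":       "quarterly bias testing added to vendor contract",
}

for element, detail in incident.items():
    print(f"{element:>16}: {detail}")
```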

Step 8: Ongoing Monitoring and Legislative Tracking

AI compliance is not a one-time project. New legislation is introduced regularly, AI tools evolve, and your organization's AI usage changes over time. Establish an ongoing monitoring process.

Assign a compliance owner (individual or team) responsible for: tracking AI legislation at the federal and state level, updating the AI inventory quarterly, reviewing vendor compliance annually, auditing disclosure practices, and reporting to leadership on AI risk and compliance status.
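
These recurring duties reduce to a simple cadence table the compliance owner can check each month. A sketch, with illustrative dates and intervals:

```python
from datetime import date, timedelta

# Task -> (interval in days, date last completed); all illustrative
cadence = {
    "update AI inventory":        (90,  date(2025, 11, 1)),
    "review vendor compliance":   (365, date(2025, 6, 1)),
    "audit disclosure practices": (180, date(2025, 12, 1)),
}

today = date.today()
for task, (interval, last_done) in cadence.items():
    due = last_done + timedelta(days=interval)
    status = "OVERDUE" if due < today else f"due {due.isoformat()}"
    print(f"{task}: {status}")
```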

Subscribe to the AI Law Tracker weekly newsletter for plain-English summaries of new bills, status changes, and compliance implications. Our bill tracker is updated daily so you always have the latest information on bills affecting Indiana businesses.

Frequently Asked Questions

What should be in an AI compliance policy?

A comprehensive AI compliance policy should cover: an inventory of all AI tools in use, risk classifications for each tool, data governance rules for AI systems, vendor management requirements, transparency and disclosure standards, human oversight requirements for high-risk applications, incident response procedures, and an ongoing monitoring and update process. The policy should be reviewed and updated at least annually as regulations evolve.

How do I audit my company's AI usage?

Start by surveying department heads and team leads to identify all AI tools in use, including shadow AI that may not have been formally approved. For each tool, document what it does, what data it processes, who uses it, and what decisions it influences. Classify each tool by risk level. Then review vendor contracts, disclosure practices, and human oversight processes for high-risk tools. Our free AI Risk Check provides a quick starting assessment based on your industry and AI usage patterns.

Do small businesses in Indiana need AI compliance?

Yes, though the scope may be smaller. Most proposed AI bills apply to businesses of all sizes, though some include thresholds that exempt very small organizations from certain requirements (such as annual bias audits). Even small Indiana businesses should conduct an AI inventory, review their data practices, and ensure transparency with customers and employees about AI use. Small businesses using AI in hiring, customer decisions, or data processing face the same core compliance obligations as larger organizations.

How often should I update my AI compliance program?

Review your AI inventory quarterly, update vendor contracts and conduct risk assessments annually, and monitor legislation continuously. The AI regulatory landscape is evolving rapidly, with new bills introduced in nearly every congressional and state legislative session. Subscribe to the AI Law Tracker newsletter for weekly updates on bills affecting Indiana businesses.

Need help preparing for AI compliance?

Our team helps Indiana organizations build AI governance frameworks tailored to their industry and risk profile.

Talk to Our Team
