Vendor Toolkit
AI Vendor Vetting Questions
12 questions to ask before signing. No jargon, no filler.
By Harrison Painter, AI Business Strategist · Last updated March 2026
You are evaluating an AI vendor. Maybe it is a chatbot, an analytics platform, or an AI-powered hiring tool. These 12 questions cut through marketing and get to what matters: security, compliance, liability, and transparency.
Tier 1
5-Minute Check
Ask these in every vendor meeting. They fit on a phone screen.
“Can you share your most recent SOC 2 Type II report or ISO 27001 certificate?”
Why it matters
This is the baseline. If they hesitate or say "we're working on it," that tells you where they are.
Good answer sounds like
"Yes, here's our Type II report dated [within last 12 months]."
Red flag
"We're SOC 2 certified" (certification doesn't exist for SOC 2).
“Where is my data stored, and who has access to it?”
Why it matters
Data residency and access controls are fundamental. AI tools often process data in unexpected locations.
Good answer sounds like
Specific data center locations, named access roles, encryption at rest and in transit.
Red flag
Vague "cloud-based" answer with no specifics.
“If your AI makes a wrong decision that harms my customer, who is liable?”
Why it matters
AI liability is a live legal question. The vendor's answer reveals their maturity.
Good answer sounds like
Clear contractual terms around liability, indemnification, and insurance.
Red flag
"That hasn't happened" or "our AI is very accurate."
“How do you handle bias testing and fairness monitoring?”
Why it matters
Bias is among the top regulatory concerns for AI. If you deploy a biased AI tool, you may share liability for its outcomes.
Good answer sounds like
Regular bias audits, specific metrics tracked, documented testing methodology.
Red flag
"Our training data is diverse" with no specifics.
“What happens to my data if I cancel the contract?”
Why it matters
Data portability and deletion are non-negotiable.
Good answer sounds like
Defined data retention and deletion timeline, export capability, written confirmation of deletion.
Red flag
No data deletion policy or long retention periods.
Tier 2
Deep Dive
For serious evaluations. Use these when you are shortlisting vendors.
“What compliance frameworks are you certified or attested against? Can I see the certificates or reports?”
Vendors love listing logos on their website. Ask to see the actual documents. A vendor with real compliance will hand them over under NDA without hesitation. A vendor that stalls or deflects probably does not have what it claims.
“How do you handle AI model updates? Do I get notified before changes that affect my use case?”
AI models change constantly, and updates can shift accuracy, bias profiles, and output quality overnight. You need to know their update cadence, testing process, and notification policy. A good vendor gives advance notice and documents what changed.
“What transparency documentation do you provide about how your AI makes decisions?”
Model cards, algorithmic impact assessments, or plain-language decision explanations are all acceptable. Vendors who say their model is proprietary and cannot be explained are asking you to trust a black box with your customers' outcomes.
“Do you have an incident response plan specific to AI failures? Can I see it?”
Traditional incident response covers data breaches and outages. AI failures are different: biased outputs, hallucinated data, model drift. A mature vendor has a separate plan for AI-specific incidents, including detection, triage, notification, and remediation steps.
“How does your product comply with [specific regulation, e.g., Illinois AI Hiring Law, Colorado AI Act]?”
Insert the regulation that applies to your industry and state. If the vendor does not know the law you are referencing, that tells you a lot about how closely they are tracking the regulatory landscape they operate in.
“Can I get a copy of your AI impact assessment or risk documentation?”
Multiple upcoming regulations require AI impact assessments. Forward-thinking vendors already produce them. If a vendor has never conducted one, they are likely not evaluating the downstream risks of their own product.
“What human oversight mechanisms are built into your AI system?”
Look for escalation paths, confidence thresholds that trigger human review, override capabilities, and audit logs. Any AI system making consequential decisions should have a human-in-the-loop option that is designed and documented, not merely theoretical.
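To make the concept concrete, here is a minimal sketch of what a confidence-threshold review gate can look like inside a vendor's product. Everything in it is illustrative: the threshold value, the `Decision` type, and the `route_decision` function are hypothetical names invented for this example, not any real vendor's API. The point is that escalation and audit logging are ordinary engineering features a vendor can demonstrate, not magic.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Assumption: predictions below this confidence are routed to a human reviewer.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Decision:
    input_id: str
    prediction: str
    confidence: float
    needs_human_review: bool

# In a real system this would be durable, append-only storage, not a list.
audit_log = []

def route_decision(input_id: str, prediction: str, confidence: float) -> Decision:
    """Flag low-confidence predictions for human review and log every decision."""
    decision = Decision(
        input_id=input_id,
        prediction=prediction,
        confidence=confidence,
        needs_human_review=confidence < CONFIDENCE_THRESHOLD,
    )
    # Audit log entry: who/what/when, plus whether a human was pulled in.
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_id": input_id,
        "prediction": prediction,
        "confidence": confidence,
        "routed_to_human": decision.needs_human_review,
    })
    return decision

# A confident prediction passes through; a borderline one is escalated.
auto = route_decision("app-1041", "approve", 0.97)
escalated = route_decision("app-1042", "reject", 0.61)
```

If a vendor's "human oversight" story cannot be described at roughly this level of specificity, which signals trigger review, where the log lives, who can override, treat it as theoretical.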
Warning Signs
Red Flag Checklist
If a vendor triggers any of these, slow down and dig deeper before moving forward.
- Claims "SOC 2 certified" (SOC 2 uses attestation, not certification)
- Won't share compliance reports under NDA
- Compliance report is more than 12 months old
- Certificate scope doesn't cover the services you're buying
- No written AI incident response plan
- "Our AI doesn't make decisions, it makes recommendations" (this is a dodge)
- No bias testing methodology documented
- Can't explain how their AI reaches its outputs
- Data deletion policy is vague or nonexistent
- No contractual AI liability terms
Need help evaluating AI vendors for your team?
LaunchReady helps organizations vet AI tools, build compliance strategies, and train teams to work with AI responsibly.
Book a Call
Related Resources
Framework Guide
SOC 2 Compliance
What it costs, who needs it, and what to watch for.
Framework Guide
ISO 27001
The international standard for information security management.
Assessment Tool
AI Risk Assessment
Evaluate your organization's AI compliance readiness.
Save this page or print it for your next vendor meeting. All 12 questions and the red flag checklist are formatted for easy reference.
Use your browser's print function (Ctrl+P or Cmd+P) to save as PDF.