Framework 1 · EU AI Act
EU AI Act Compliance
Most obligations become enforceable on 2 August 2026. The Act applies to any business operating in the EU or serving EU customers — US businesses with EU visitors are in scope. Violations carry fines of up to €35M or 7% of global annual turnover, whichever is higher.
Signal 01
AI Chatbot / Virtual Assistant — Disclosure Absent
Critical
Article 50 requires that any AI system interacting with humans must clearly disclose it is not human — at or before the first interaction. The disclosure must be explicit, not buried. Applies to chatbots, virtual assistants, automated chat widgets, and any conversational AI regardless of sophistication.
How Detected: Page source scan for chat widget scripts (Tidio, Drift, Intercom, ManyChat, Crisp, Freshchat, custom GPT embeds). If widget detected → scan visible text for disclosure language ("AI", "bot", "automated", "not a human"). Flag if widget present but disclosure language absent from widget UI text or nearby copy.
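The detection above can be sketched as a two-stage scan — widget script detection, then disclosure-language search. This is a minimal illustration, not AgentReady's actual implementation: the vendor host fragments and disclosure phrases below are an assumed subset, and a production check would scope the disclosure search to the widget UI and nearby copy rather than the whole page.

```python
import re

# Hypothetical subset of widget script hosts -- illustrative, not the full list.
WIDGET_SCRIPT_HINTS = [
    "tidio.co", "drift.com", "intercom.io", "manychat.com",
    "crisp.chat", "freshchat.com",
]
DISCLOSURE_PATTERNS = re.compile(
    r"\b(ai|bot|automated|virtual assistant|not a human)\b", re.IGNORECASE
)

def scan_chat_disclosure(html: str) -> dict:
    """Return widget-detected / disclosure-found flags for one page."""
    srcs = re.findall(r'<script[^>]+src="([^"]+)"', html)
    widget = any(hint in src for src in srcs for hint in WIDGET_SCRIPT_HINTS)
    # Strip tags so only visible text is searched for disclosure language.
    visible = re.sub(r"<[^>]+>", " ", html)
    disclosed = bool(DISCLOSURE_PATTERNS.search(visible))
    return {
        "widget_detected": widget,
        "disclosure_found": disclosed,
        "signal_01_flag": widget and not disclosed,
    }
```

The word-boundary regex avoids false positives such as the "ai" inside "emails"; a real scanner would also need to handle widgets injected dynamically after page load.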
Signal 02
High-Risk AI Use — Scoring / Ranking / Evaluating People
High
Article 6 and Annex III classify AI used to evaluate, score, or rank individuals in hiring, credit, education, or essential services as high-risk. High-risk systems require conformity assessments, transparency documentation, human oversight mechanisms, and registration in the EU database.
How Detected: Intake form answer (Q3 compliance pre-screen) + script scan for known AI hiring/scoring tools (HireVue, Workday AI, Pymetrics, etc.). If detected → immediate attorney referral flag. This cannot be self-remediated.
Signal 03
Emotion Recognition / Biometric Categorization — Disclosure Absent
High
Article 50 requires explicit notification before activating any AI system that recognizes emotions or infers personal characteristics from biometric data. Increasingly common in retail analytics, customer sentiment tools, and video-based service platforms.
How Detected: Script scan for emotion/sentiment analytics libraries and known platforms (Affectiva, Behaviorally, some Hotjar/FullStory configurations). Category check — flagged automatically for businesses in healthcare, retail, hospitality, and HR.
Framework 2 · FTC Guidelines
FTC Disclosure + Endorsement
FTC Guides apply to US businesses. Active enforcement 2024–present. AI-generated content, automated reviews, and undisclosed endorsements are specific targets. Penalties include injunctions and civil fines.
Signal 04
AI-Generated Content — No Disclosure
High
FTC Guides require disclosure when AI-generated content could mislead consumers — particularly in testimonials, reviews, product descriptions, and endorsements. Undisclosed AI-generated content that creates false impressions of human authenticity is a deceptive trade practice.
How Detected: Intake form answer (Q4) + page content analysis for disclosure language near testimonial/review sections. Flag if business confirmed using AI-generated content AND no disclosure language found ("AI-assisted," "generated with AI," etc.) near that content.
Signal 05
AI Use Policy — Absent
Medium
While not yet legally mandated for all US businesses, publishing an AI Use Policy is rapidly becoming an industry expectation and is required under emerging state-level AI transparency laws (Colorado, Texas, Illinois). Absence is a trust signal failure for AI-literate customers and a future compliance liability.
How Detected: Scan all linked pages for terms matching "AI policy," "artificial intelligence use," "how we use AI." Check footer links, Privacy Policy, Terms of Service. Flag if absent. Also check robots.txt for AI crawler permissions as proxy signal.
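The page-sweep portion of this check can be sketched as a phrase search across fetched pages. The phrase list mirrors the terms named above; the function signature and return shape are illustrative assumptions (the robots.txt proxy signal is handled separately under Signal 13).

```python
AI_POLICY_PHRASES = [
    "ai policy", "artificial intelligence use", "how we use ai",
]

def find_ai_policy(pages: dict) -> dict:
    """pages maps URL -> visible page text (footer links, Privacy Policy,
    and Terms of Service pages included in the crawl)."""
    matched = [
        url for url, text in pages.items()
        if any(phrase in text.lower() for phrase in AI_POLICY_PHRASES)
    ]
    return {
        "policy_found": bool(matched),
        "matched_pages": matched,
        "signal_05_flag": not matched,  # flag fires when no policy is found
    }
```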
Signal 06
Automated Review Solicitation — Undisclosed
Medium
Automated post-purchase or post-appointment review requests using AI-generated messaging must be disclosed if the timing, content, or targeting is AI-driven. Incentivized or selectively solicited reviews have additional FTC requirements.
How Detected: Scan for review automation scripts and third-party review request services (Birdeye, Podium, NiceJob, Grade.us). Cross-reference with Privacy Policy disclosure of automated marketing. Flag if automation detected but not disclosed.
Framework 3 · HIPAA + AI
HIPAA + AI Intersections
HIPAA applies to covered entities and business associates. The intersection with AI creates new exposure: AI processing of PHI, AI tools used by staff accessing patient data, and health-adjacent data collection that may constitute PHI in AI context.
Signal 07
Health-Adjacent Data + AI Processing — No BAA
Critical
If a business collects health-related information (symptoms, conditions, appointment reasons, medications) AND uses AI tools to process, route, or respond to that data — a Business Associate Agreement (BAA) is required with each AI vendor. Standard ChatGPT, Claude, and other AI API terms of service do NOT include a BAA by default. Healthcare-adjacent businesses (dental, mental health, aesthetics, fitness) are frequently exposed here without realizing it.
How Detected: Business category check (healthcare-adjacent) + intake form Q2 answer + scan of contact/intake forms for health-related field labels ("symptoms," "medical history," "medications," "reason for visit," "health concerns"). If health fields + AI processing confirmed → immediate attorney flag. BAA status cannot be verified by scan — attorney must confirm.
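The compound condition — health-related form fields plus a healthcare-adjacent category plus confirmed AI processing — can be expressed as a simple gate. The field-label terms mirror those listed above; the function shape is an illustrative assumption.

```python
import re

HEALTH_FIELD_TERMS = re.compile(
    r"symptoms|medical history|medications|reason for visit|health concerns",
    re.IGNORECASE,
)

def signal_07(form_labels: list, healthcare_adjacent: bool,
              ai_processing_confirmed: bool) -> dict:
    """Flag Signal 07 when health-related fields meet AI processing.
    BAA status itself is never scannable -- an attorney must confirm it."""
    health_fields = [lbl for lbl in form_labels
                     if HEALTH_FIELD_TERMS.search(lbl)]
    fires = bool(health_fields) and healthcare_adjacent and ai_processing_confirmed
    return {
        "health_fields": health_fields,
        "signal_07_flag": fires,
        "attorney_referral": fires,  # immediate referral when this fires
    }
```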
Signal 08
AI Chatbot Handling Health Inquiries — No HIPAA Safeguard
High
An AI chatbot on a healthcare-adjacent site that accepts, processes, or responds to health questions may constitute processing of PHI — even if the business doesn't consider itself a HIPAA covered entity. Chatbot conversations containing health information that are logged, trained on, or routed via third-party AI create significant exposure.
How Detected: Chat widget detected (Signal 01) + healthcare-adjacent category + intake Q2 = health data collected. Compound flag — both Signal 01 and Signal 08 fire simultaneously. Attorney referral bundled.
Signal 09
Privacy Policy — No AI Data Processing Disclosure
Medium
Existing Privacy Policies — even well-drafted ones — typically predate AI processing. If AI tools are used to process any customer data (including routing support emails, generating responses, or analyzing form submissions), the Privacy Policy must disclose this. HIPAA-adjacent businesses face heightened obligation.
How Detected: Fetch Privacy Policy page. Scan for AI/automated processing disclosure language. Flag if: (a) no Privacy Policy exists, (b) policy does not mention AI or automated processing, or (c) policy last-updated date is pre-2023 (likely predates AI tool adoption).
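The three flag conditions (a)–(c) can be sketched as a single check that returns the reasons the signal fires. The AI-disclosure regex and the 1 January 2023 cutoff are illustrative assumptions drawn from the description above.

```python
import re
from datetime import date

AI_TERMS = re.compile(r"\bai\b|artificial intelligence|automated processing",
                      re.IGNORECASE)

def signal_09(policy_text, last_updated) -> list:
    """Return the reasons Signal 09 fires; an empty list means no flag.
    policy_text is None when no Privacy Policy page could be fetched."""
    if policy_text is None:
        return ["no_privacy_policy"]                       # condition (a)
    reasons = []
    if not AI_TERMS.search(policy_text):
        reasons.append("no_ai_disclosure")                 # condition (b)
    if last_updated is not None and last_updated < date(2023, 1, 1):
        reasons.append("predates_ai_adoption")             # condition (c)
    return reasons
```

The word-boundary on `\bai\b` keeps incidental matches (e.g. "emails") from counting as disclosure.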
Framework 4 · Shadow AI
Shadow AI Inventory Risk
Untracked employee AI tool usage is the #1 underestimated compliance risk for SMBs. 66% of workers use AI tools not inventoried by their employer. Each untracked tool is a potential data breach, IP exposure, or regulatory violation.
Signal 10
Untracked Third-Party AI Scripts Detected in Page Code
High
Page source scan detects third-party scripts with AI processing capabilities that are not disclosed in the Privacy Policy, Terms of Service, or any AI Use Policy. These may represent marketing tools, analytics platforms, or employee-installed widgets that the business owner is unaware of.
How Detected: Full script inventory from page source. Cross-reference against known AI-processing vendor list (200+ vendors tracked including analytics, chat, advertising, and productivity tools). Flag any AI-capable script not mentioned in Privacy Policy. Count and categorize by risk level.
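The inventory-and-cross-reference step can be sketched as follows. The vendor table here is an illustrative three-entry slice (the real list tracks 200+ vendors), and the disclosure check — vendor name appearing in the Privacy Policy text — is a simplifying assumption.

```python
import re
from urllib.parse import urlparse

# Illustrative slice of the AI-processing vendor list: host -> (category, risk).
AI_VENDORS = {
    "tidio.co": ("chat", "High"),
    "hotjar.com": ("analytics", "Medium"),
    "intercom.io": ("chat", "High"),
}

def inventory_scripts(html: str, privacy_policy_text: str) -> list:
    """Flag AI-capable third-party scripts not mentioned in the Privacy Policy."""
    flags = []
    for src in re.findall(r'<script[^>]+src="([^"]+)"', html):
        host = urlparse(src).netloc
        for vendor, (category, risk) in AI_VENDORS.items():
            if host.endswith(vendor):
                # Crude disclosure test: vendor brand name in the policy text.
                disclosed = vendor.split(".")[0] in privacy_policy_text.lower()
                if not disclosed:
                    flags.append({"vendor": vendor, "category": category,
                                  "risk": risk})
    return flags
```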
Signal 11
No AI Tool Inventory — Intake Confirmed
Medium
Intake pre-screen answer confirms employees use AI tools but the business does not track which ones. Without inventory, the business cannot assess data flow, consent obligations, IP ownership of AI outputs, or regulatory exposure from AI-assisted decisions.
How Detected: Intake form Q5 answer = "Yes, untracked." Automatically flags with recommendation to conduct AI tool audit. AgentReady provides audit template as part of Compliance Pro package.
Framework 5 · ADA + Technical
ADA / Accessibility + Technical Trust
ADA Title III applies to websites of businesses open to the public. AI-powered features (chatbots, image recognition, content generation) introduce new accessibility obligations. Technical trust signals affect AI agent recommendation confidence.
Signal 12
AI Chatbot — Not Keyboard / Screen Reader Accessible
Medium
AI chat widgets embedded on public-facing websites must meet WCAG 2.1 AA accessibility standards to comply with ADA Title III. Many third-party chat widgets are not fully accessible — keyboard navigation fails, screen readers cannot interpret conversation flow, focus management is broken.
How Detected: If chat widget detected (Signal 01): automated WCAG scan of widget container using axe-core or equivalent. Check for: keyboard focus trap, ARIA labels on input/send button, role=dialog or role=log on conversation container, sufficient color contrast in widget UI.
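A crude static version of those checks can be sketched against the widget's rendered HTML. This is only a pre-filter: a real scan would run axe-core in a headless browser, which also covers keyboard focus traps and color contrast that static regexes cannot see. The check names below are assumptions.

```python
import re

def widget_a11y_checks(widget_html: str) -> dict:
    """Static spot-checks on a chat widget's container markup."""
    return {
        # Message input should carry an accessible name.
        "input_labeled": bool(
            re.search(r'<(input|textarea)[^>]*aria-label=', widget_html)),
        # Send button (often icon-only) should carry an accessible name.
        "send_labeled": bool(
            re.search(r'<button[^>]*aria-label=', widget_html)),
        # Conversation container should expose role="dialog" or role="log".
        "container_role": bool(
            re.search(r'role="(dialog|log)"', widget_html)),
    }
```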
Signal 13
AI Agent Crawler Access — Blocked or Restricted
Low
Robots.txt configurations that block AI crawlers (GPTBot, CCBot, Claude-Web, PerplexityBot) reduce the business's visibility to AI recommendation systems. While this is a business decision, unintentional blocking from outdated robots.txt configurations is common and directly depresses the AARI Discovery score.
How Detected: Fetch robots.txt. Parse Disallow rules for known AI crawler user agent strings. Flag unintentional blocks (wildcard Disallow: / with no AI-specific Allow). Distinguish between intentional opt-out and configuration error based on pattern analysis.
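The pattern analysis above can be sketched as a small robots.txt parser that separates named AI-crawler blocks (intentional opt-out) from a bare wildcard `Disallow: /` with no AI-specific `Allow` (likely configuration error). The heuristic and crawler list are taken from the description above; a production parser would follow RFC 9309 group semantics more carefully.

```python
AI_CRAWLERS = ["GPTBot", "CCBot", "Claude-Web", "PerplexityBot"]

def analyze_robots(robots_txt: str) -> dict:
    blocked, wildcard_block, ai_allow = set(), False, False
    agent = None
    for line in robots_txt.splitlines():
        line = line.split("#")[0].strip()      # drop comments
        if ":" not in line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            agent = value
        elif field == "disallow" and value == "/":
            if agent == "*":
                wildcard_block = True
            elif agent in AI_CRAWLERS:
                blocked.add(agent)
        elif field == "allow" and agent in AI_CRAWLERS:
            ai_allow = True
    return {
        "intentional_optout": sorted(blocked),
        "likely_config_error": wildcard_block and not ai_allow and not blocked,
    }
```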
Signal 14
SSL / HTTPS + Security Headers — Absent or Misconfigured
Low
AI agents and recommendation systems weight security signals when evaluating trustworthiness of business endpoints. Missing HTTPS, expired SSL certificates, and absent security headers (HSTS, CSP, X-Frame-Options) reduce trust score and can prevent AI booking agents from completing transactions via exposed endpoints.
How Detected: HTTPS check + SSL certificate validity and expiration date. HTTP header scan for HSTS, Content-Security-Policy, X-Content-Type-Options, X-Frame-Options. Flag missing or misconfigured headers. Grade each.
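The header grading can be sketched as below. The deduction formula is an illustrative assumption chosen to land inside the −5 to −15 range from the scoring table; certificate validity checking is omitted here since it requires a live TLS handshake.

```python
REQUIRED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def grade_headers(headers: dict, https: bool) -> dict:
    """Grade a response's security posture from its headers.
    headers maps header name -> value, as returned by the fetch."""
    present = {k.lower() for k in headers}
    missing = [h for h in REQUIRED_HEADERS if h.lower() not in present]
    # Assumed grading heuristic: no HTTPS is the worst case (-15);
    # otherwise -5 base plus -2 per additional missing header, capped at -15.
    if not https:
        deduction = 15
    elif missing:
        deduction = min(5 + 2 * (len(missing) - 1), 15)
    else:
        deduction = 0
    return {"missing": missing, "deduction": deduction}
```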
Compliance Score Calculation
Compliance Score starts at 100 and deductions are applied per signal. Attorney referral is automatically triggered when total deductions exceed 30 points OR any Critical signal fires.
| Signal | Framework | Severity | Point Deduction | Attorney Auto-Trigger |
|---|---|---|---|---|
| 01 — Chatbot Not Disclosed | EU AI Act | Critical | −25 | If EU customers |
| 02 — High-Risk AI (Scoring People) | EU AI Act | Critical | −30 | Always |
| 03 — Emotion Recognition | EU AI Act | High | −20 | Likely |
| 04 — AI Content Undisclosed | FTC | High | −15 | If testimonials |
| 05 — No AI Use Policy | FTC | Medium | −10 | No |
| 06 — Review Automation Undisclosed | FTC | Medium | −8 | Usually no |
| 07 — Health Data + No BAA | HIPAA | Critical | −30 | Always |
| 08 — Chatbot Handling Health Data | HIPAA | Critical | −25 | Always |
| 09 — Privacy Policy Missing AI Disclosure | HIPAA / GDPR | Medium | −12 | Healthcare yes |
| 10 — Untracked AI Scripts | Shadow AI | High | −5 per (max −20) | If 3+ or healthcare |
| 11 — No AI Tool Inventory | Shadow AI | Medium | −12 | No |
| 12 — Chatbot Not Accessible (ADA) | ADA | Medium | −10 | If demand received |
| 13 — AI Crawlers Blocked | Technical | Low | −8 | No |
| 14 — SSL / Security Headers Missing | Technical | Low | −5 to −15 | No |
| Score Range | Rating |
|---|---|
| 80–100 | Compliant · Grade A |
| 60–79 | Minor Gaps · Grade B/C |
| 40–59 | Exposed · Grade D |
| 0–39 | Critical Risk · Grade F |
Attorney Auto-Trigger Rules
Immediate referral: Any Critical signal fires (Signals 01, 02, 07, 08 with applicable conditions)
Referral recommended: Total deductions exceed 30 points OR 2+ High signals fire simultaneously
Referral offered: Healthcare-adjacent category regardless of score — HIPAA intersection risk is always present
No referral needed: Score ≥ 80 with no Critical or High signals. Template fixes sufficient.
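The scoring and trigger rules above can be sketched end to end. The deduction table mirrors the one in this section; Signal 10 is handled per-script with its −20 cap and Signal 14 as a graded deduction. The function shape is an illustrative assumption.

```python
# Deductions and severities, mirroring the scoring table in this section.
DEDUCTIONS = {
    "01": (25, "Critical"), "02": (30, "Critical"), "03": (20, "High"),
    "04": (15, "High"), "05": (10, "Medium"), "06": (8, "Medium"),
    "07": (30, "Critical"), "08": (25, "Critical"), "09": (12, "Medium"),
    "11": (12, "Medium"), "12": (10, "Medium"), "13": (8, "Low"),
}

def compliance_score(fired: list, untracked_scripts: int = 0,
                     ssl_deduction: int = 0) -> dict:
    """fired: list of signal IDs ("01".."13"); Signal 10 passed as a script
    count, Signal 14 as a pre-graded -5..-15 deduction."""
    total = sum(DEDUCTIONS[s][0] for s in fired if s in DEDUCTIONS)
    total += min(5 * untracked_scripts, 20)   # Signal 10: -5 each, cap -20
    total += ssl_deduction                    # Signal 14: -5 to -15, graded
    score = max(0, 100 - total)
    severities = [DEDUCTIONS[s][1] for s in fired if s in DEDUCTIONS]
    attorney = (total > 30                    # deductions exceed 30 points
                or "Critical" in severities   # any Critical fires
                or severities.count("High") >= 2)  # 2+ High simultaneously
    grade = ("A" if score >= 80 else "B/C" if score >= 60
             else "D" if score >= 40 else "F")
    return {"score": score, "grade": grade, "attorney_referral": attorney}
```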
⚖️ Important — Legal Disclaimer
AgentReady's 14-signal compliance scan is an automated risk-identification tool. It identifies patterns consistent with regulatory exposure — it does not constitute legal advice, legal opinion, or a determination of liability. Compliance determinations, remediation requirements, and legal obligations must be assessed by a licensed attorney with expertise in the applicable regulatory framework and jurisdiction. AgentReady is the diagnostic instrument. The attorney is the prescribing authority. This specification is for internal AgentReady operator use and should not be shared with clients as a legal determination.