AI Chatbot Compliance Guide: State Laws, Criminal Liability & Penalties (2026)

A practical 2026 guide to AI chatbot compliance: state disclosure laws, the new Florida AG criminal probe, FTC penalties, the EU AI Act countdown, a 5-point readiness scorecard, and a step-by-step path to compliant deployment.

Created: April 27, 2026

Key Takeaways

| Topic | Key Fact |
| --- | --- |
| Chatbot disclosure | 25 state AI laws passed in 2026 (1,561 bills introduced across 45 states); 6 specifically affect chatbot operators; Colorado AI Act effective June 30, 2026, $20,000/violation |
| Criminal liability | Florida AG launched a criminal investigation into OpenAI (Apr 21, 2026), the first criminal probe of an AI company by a state AG |
| Your biggest risk | The FTC Act applies to ALL AI chatbots nationwide, penalties up to $50,120 per violation |
| EU AI Act deadline | Article 50 enforcement begins August 2, 2026, applies to any business serving EU customers |
| California SB 243 | Companion bots only, expressly excludes customer service bots |
| Washington HB 2225 | Companion bots only, effective January 1, 2027 (not summer 2026) |
| Broadest state laws | Utah (all AI interactions) and Maine (consumer transactions), no CS exemptions |
| Compliance time | Most businesses can achieve full compliance in under 30 minutes using the 5-point scorecard |

Do I Need to Disclose My Chatbot Is AI?

Yes, if your chatbot uses AI, you should disclose it. While some state laws (like California's SB 243) specifically target companion bots and exclude customer service bots, the FTC considers non-disclosure potentially deceptive, and the EU AI Act makes it mandatory for anyone serving EU customers starting August 2, 2026.
The regulatory landscape for AI chatbots is accelerating fast. Law360 reported in April 2026 that 27 US states are now pursuing chatbot legislation. The Future of Privacy Forum tracked 98 bills across 34 states plus 3 federal proposals. And in roughly 103 days, the EU AI Act's Article 50 will make disclosure mandatory for any business serving European customers.
Regulatory acceleration (updated April 22, 2026): 25 state AI laws have passed in 2026 alone (19 in March), with 1,561 AI-related bills introduced across 45 states. An additional 27 bills have passed both chambers and await governor signatures. There is zero federal preemption. The White House published an AI framework on March 20, but Congress has not acted. The patchwork is accelerating, making compliance-readiness a competitive advantage for chatbot operators. Sources: Swept AI, Apr 14, 2026 | Troutman Pepper, Apr 13, 2026
NCSL update (April 22, 2026): The National Conference of State Legislatures AI Legislation Tracker shows 40+ AI-related bills enacted across 16 states in 2026 alone. Notable new entries include Kentucky S 4 (information protection), Maine H 1154a (consumer transaction transparency), Montana S 25 (deepfakes in elections), Maryland H 956 (AI implementation workgroup), and Minnesota H 2432 (public safety).
Of these 16 states with broad AI legislation, 6 have enacted chatbot-specific laws that directly affect your business: California, Washington, Utah, New Hampshire, Maine, and Nebraska. This two-tier framing matters. The regulatory wave is real, but what actually affects YOUR chatbot is a smaller, more manageable set. Our compliance scorecard (below) focuses on exactly those.
The good news: compliance is straightforward. Most businesses can implement proper AI disclosure in under 30 minutes. Use our Chatbot Compliance Readiness Scorecard (5 checkpoints) below to assess your status in 60 seconds.

What Are Chatbot Laws?

Chatbot laws are regulations requiring businesses to disclose when a customer is interacting with AI instead of a human. They span three levels: state disclosure statutes, federal consumer protection rules, and international AI governance frameworks.
2026 is the breakout year for chatbot regulation. California's SB 243 took effect January 1. Washington signed HB 2225 in March. And 27 states have active bills in their legislatures, according to Law360's April 2026 legislative tracker.

The Three Regulatory Buckets

1. State disclosure laws. California SB 243 and Washington HB 2225 target companion chatbots (AI that simulates human relationships or emotional connections) and expressly exclude customer service bots. Utah and Maine, however, apply their disclosure requirements broadly to all AI interactions.
2. Federal consumer protection. The FTC Act applies to all AI chatbots nationwide, regardless of state law. The FTC has maintained since 2018 that failing to disclose AI interaction may constitute deceptive business practices. This is the regulation most likely to affect your customer service chatbot.
3. International frameworks. The EU AI Act's Article 50 requires disclosure for all AI systems serving EU customers, effective August 2, 2026. No companion-bot exclusions. No customer-service exemptions. If a European user interacts with your chatbot, disclosure is mandatory.
Key insight: Even if your state hasn't passed a chatbot-specific law, the FTC Act provides federal-level enforcement authority across all 50 states. And if you serve any EU customers, the EU AI Act applies regardless of where your company is headquartered.

The 5 Key Chatbot Laws Every Business Owner Must Know

Five laws and proposed laws will define chatbot compliance in 2026 and beyond. Each serves a different scope and carries different penalties.

California SB 243

California SB 243 is a state law, effective January 1, 2026, that requires businesses to clearly disclose when users are interacting with an AI companion chatbot. It specifically targets companion bots: AI that simulates human relationships, exhibits anthropomorphic features, or sustains emotional connections.
What it requires: Clear, conspicuous disclosure at the start of AI companion chatbot conversations. The disclosure must appear before any meaningful interaction occurs.
Who it covers: Companion chatbots ONLY. The law expressly excludes "a bot used only for customer service." If your chatbot handles support tickets, answers product questions, or processes orders, SB 243 does not apply to you directly.
Penalties: $1,000 minimum per violation (private right of action, the greater of actual damages or $1,000). The California Attorney General can pursue civil penalties up to $2,500 per violation.
Sources: Gunderson Dettmer, October 2025 | Sheppard Mullin, December 2025
What this means for your customer service chatbot: Your CS bot is excluded from SB 243, but the FTC Act still applies. See the Chatbot Compliance Readiness Scorecard for the disclosure requirements that actually apply to customer service operations.

FTC Act: Federal Chatbot Disclosure Requirements

The FTC Act is the primary federal regulation affecting AI chatbot disclosure in the United States. Since 2018, the FTC has maintained that failing to disclose when a user is interacting with AI rather than a human may constitute a deceptive business practice under Section 5 of the Federal Trade Commission Act.
Why the FTC Act matters more than state laws for customer service bots: Unlike California SB 243 and Washington HB 2225, which are limited to companion bots, the FTC Act applies to all AI chatbots nationwide, including customer service bots, sales assistants, and support automation.
Enforcement: The FTC has not yet brought a specific enforcement action against an undisclosed chatbot, but its position papers and guidance are unambiguous. FTC penalties can reach $50,120 per violation, a figure adjusted for inflation and far exceeding state-level penalties for most businesses.
What "disclosure" means under FTC standards: Disclosure must be clear and conspicuous. Burying "powered by AI" in your terms of service or footer does not meet the threshold. The disclosure should appear at the start of the interaction, in plain language, before the chatbot collects any personal information.

EU AI Act Article 50

Article 50 of the EU AI Act requires businesses to inform users when they are interacting with an artificial intelligence system. Enforcement begins August 2, 2026, approximately 103 days from now.
Who it applies to: Any business serving EU customers, including US-based companies. If a single user in Germany, France, or any EU member state interacts with your chatbot, Article 50 applies to you.
Scope: Article 50 applies to all AI systems. No companion-bot exclusions, no customer-service exemptions, no small-business carve-outs. If it's AI and a human interacts with it, disclosure is mandatory.
Penalties: Up to €35 million or 7% of global annual turnover, whichever is higher. This is the most severe penalty regime of any chatbot regulation currently in force.
What you need to do: Add a clear disclosure message at the start of every chatbot conversation. The disclosure must identify that the user is interacting with an AI system. No ambiguity, no fine print.

Washington HB 2225

Washington HB 2225 is a state law signed March 24, 2026 that regulates AI companion chatbots. It takes effect January 1, 2027, not summer 2026 as many early reports suggested.
What it requires: AI companion chatbot disclosure plus safety protocols. The disclosure must reappear every 3 hours for adults and every 1 hour for minors.
Who it covers: Companion chatbots ONLY. Like California SB 243, Washington's law excludes bots "only used for a business's operational purposes, productivity, and analysis", meaning customer service bots are expressly excluded.
Penalties: No statutory damages. Private right of action for actual damages only, plus Attorney General enforcement authority.
Sources: Fisher Phillips, March 2026 | Washington Legislature Bill Report
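The re-disclosure cadence above can be sketched as a simple timer check. This is a minimal illustration: the 3-hour and 1-hour intervals come from the bill summary above, while the function and field names are our own, not anything defined in HB 2225.

```python
# Illustrative sketch of HB 2225's re-disclosure cadence for companion bots:
# every 3 hours for adults, every 1 hour for minors. Names are hypothetical.

from datetime import datetime, timedelta

REDISCLOSURE_INTERVAL = {
    "adult": timedelta(hours=3),
    "minor": timedelta(hours=1),
}

def redisclosure_due(last_disclosed: datetime, now: datetime, user_type: str) -> bool:
    """True when the AI disclosure must be shown again under the cadence above."""
    return now - last_disclosed >= REDISCLOSURE_INTERVAL[user_type]
```

In practice a companion-bot session loop would call a check like this before each response and re-issue the disclosure message when it returns true.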

The CHATBOT Act (Federal, Proposed)

The CHATBOT Act is a proposed federal law introduced March 19, 2026 that would regulate AI chatbot impersonation of licensed professionals: doctors, lawyers, therapists, and other credentialed practitioners.
Status: Proposed, not yet law. The bill has been referred to committee but has not passed either chamber of Congress.
What it signals: Even if the CHATBOT Act doesn't pass in its current form, it reflects a clear federal direction toward stricter AI chatbot regulation. The trajectory is toward more disclosure requirements, not fewer.
Why it matters for planning: If you're building a chatbot that could be perceived as providing professional advice (legal, medical, financial), this proposed law is a signal to implement robust disclaimers now rather than waiting for enforcement.

Chatbot Disclosure Laws by State

No comprehensive state-by-state chatbot compliance table exists anywhere else on the internet. This table is designed as a reference tool. Keep it bookmarked and check back quarterly for updates.

States With Enacted Chatbot Laws

| State | Law | Effective | Disclosure | Scope | Penalty | Private Right |
| --- | --- | --- | --- | --- | --- | --- |
| California | SB 243 | Jan 1, 2026 | Yes, at conversation start | Companion bots only (excludes customer service) | $1,000 min/violation (AG: up to $2,500) | Yes |
| Washington | HB 2225 | Jan 1, 2027 | Yes (every 3 hrs adults / 1 hr minors) | Companion bots only (excludes CS) | Actual damages only | Yes |
| Utah | AI Policy Act | 2024 to 2025 | Yes | All AI interactions | $2,500/violation (up to $5K repeat offenders) | No (AG only) |
| New Hampshire | HB 143 | Jan 1, 2026 | Yes | Child safety only | $1,000/violation | Yes |
| Maine | AI Transparency Act | 2025 | Yes | Consumer transactions | $1,000/violation | No (AG only) |
| Nebraska | LB 525 | Jul 1, 2027 | Yes | Minors' conversational AI safety | TBD (rulemaking pending) | TBD |
Sources: Gunderson Dettmer, Sheppard Mullin (CA); Fisher Phillips, WA Legislature (WA); Perkins Coie, Alston & Bird (UT); Wiley, NH Bulletin (NH); Verrill Law, CompliancePoint (ME); Nebraska Legislature, April 2026 (NE); Daily Nebraskan, Apr 20, 2026; Troutman Pepper, Apr 20, 2026
⚠️ Deadline approaching (June 30, 2026, ~10 weeks): Colorado AI Act SB 24-205 is the first comprehensive state AI law in the US. It covers 8 high-risk domains (employment, housing, financial services, healthcare, education, insurance, government services, legal services). Customer service chatbots MAY be covered if they influence "consequential decisions" (refunds, account access, service changes). Penalties: up to $20,000 per violation. Applies to ANY company whose AI affects Colorado consumers, regardless of HQ location. Companies with NIST AI RMF or ISO/IEC 42001 compliance qualify for statutory safe harbor. Sources: Troutman Pepper, Apr 13, 2026 | AI Compliance Documents, Mar 2026
Key takeaway: California and Washington specifically exclude customer service bots from their companion-bot laws. But Utah and Maine apply broadly, and the FTC Act applies to all AI chatbots nationwide. Don't assume you're exempt just because your state hasn't passed a chatbot-specific statute.

Proposed Federal Legislation

  • CHATBOT Act, introduced March 19, 2026. Targets AI impersonation of licensed professionals. Federal direction signal. Not yet enacted.
  • New York SB 7263, proposed (passed Internet & Technology Committee only). Targets chatbots impersonating licensed professionals. Not yet enacted.

States With Pending Legislation

27 states have active chatbot-related bills in their legislatures as of April 2026, according to Law360's legislative tracker. Additional states with enacted or advanced legislation worth monitoring:
| State | Bill Number | Status | Key Requirement | Expected Timeline |
| --- | --- | --- | --- | --- |
| California | AB 1609 | Passed committee (9-4, Apr 17) | Customer service chatbot disclosure | Needs full Assembly + Senate + Governor |
| Oregon | SB 1546 | Advanced | Chatbot safety standards | 2026 session |
| Idaho | SB 1297 | In committee | Conversational AI Safety Act | TBD |
| Colorado | AI Act | Enacted | AI impact assessments | Effective June 2026 |
| Iowa | SF 2417 | Passed both chambers | AI safety standards | Awaiting governor signature |
| Tennessee | SB 1700 | Passed Senate | AI regulation | TBD |
| California | AB 1988 | Unanimous committee approval | AI transparency | TBD |
| Oklahoma | SB 1521 + HB 3544 | Committee advances | AI regulation | TBD |
Why AB 1609 matters: California AB 1609 would be the first US law to specifically regulate customer service chatbot disclosure, unlike SB 243 which expressly excludes them. Passed the Assembly Privacy and Consumer Protection Committee 9-4 on April 17, 2026. If enacted, it would directly end the CS-bot exemption in California.

States With No Current Legislation

If your state isn't listed above, there is no state-level chatbot disclosure requirement, yet. However, the FTC Act provides federal-level enforcement across all 50 states, and regulatory momentum suggests most states will have some form of chatbot legislation by 2027. The Future of Privacy Forum's AI legislation tracker is the best source for monitoring new bills.

Chatbot Compliance Readiness Scorecard

Use this 5-point scorecard to assess whether your chatbot meets current and upcoming disclosure requirements. Each checkpoint maps to a specific regulatory framework.

1. Disclosure Mechanism

Does your chatbot clearly state it's AI at the start of every conversation?
  • Is the disclosure conspicuous, not buried in terms of service or fine print?
  • Does the disclosure appear before the chatbot collects any personal information?
  • Is the language plain and unambiguous ("I'm an AI assistant" not "enhanced automation")?
Why this matters: The FTC considers conspicuous disclosure the baseline for avoiding deceptive practice claims. The EU AI Act requires it. California and Utah codify it.

2. Opt-Out / Human Escalation

Can users request to speak with a human at any point during the conversation?
  • Is the escalation path clear and visible ("Type 'human' to speak with a person")?
  • Does the human handoff preserve conversation context so the user doesn't have to repeat themselves?
  • Is the escalation actually functional and tested regularly, not just a link to a contact form?
Why this matters: As Fisher Phillips noted in their April 2026 analysis of chatbot deployment mistakes, "Many businesses have no defined answer for when and how a conversation should be handed off to a human being." This is both a compliance risk and a customer experience failure.

3. Data Privacy Compliance

Does your chatbot comply with relevant privacy frameworks?
  • GDPR compliance for EU customer data (data processing agreement in place)
  • CCPA compliance for California resident data
  • Privacy notice accessible from within the chatbot conversation
  • Data processed and stored securely (SOC 2 Type II, ISO 27001, or equivalent)
Why this matters: Privacy compliance is the regulatory backbone that supports disclosure. A disclosed chatbot that mishandles data is still non-compliant.

4. Safety Protocols

Does your chatbot have guardrails for high-risk interactions?
  • Self-harm detection and escalation to crisis resources
  • Content filters for harmful or inappropriate outputs
  • Age-appropriate interaction modes if minors may use the chatbot
  • Emergency escalation pathway (e.g., 988 Suicide & Crisis Lifeline integration)
Why this matters: California SB 243 and Washington HB 2225 specifically require safety protocols for companion bots. Even for customer service bots, safety guardrails are becoming an industry standard and FTC expectation. As of April 21, 2026, Florida's Attorney General launched a criminal investigation into OpenAI over ChatGPT's safety protocols, the first criminal probe of an AI company by a state AG. Safety guardrails aren't just compliance anymore. They're evidence of good faith if regulators come knocking.

5. Documentation & Reporting

Can you demonstrate compliance if audited?
  • Disclosure messages are logged and timestamped
  • Escalation paths are documented and tested quarterly
  • A compliance officer or responsible party is identified internally
  • Quarterly compliance review is scheduled (regulations change fast)
Why this matters: Both state and federal regulators expect businesses to demonstrate compliance, not just achieve it. Documentation is your proof.

Your Score

| Score | Status | Action |
| --- | --- | --- |
| 5/5 | Fully compliant | Maintain and monitor for new legislation |
| 3 to 4/5 | Mostly compliant | Address gaps within 30 days |
| 0 to 2/5 | Non-compliant | Take action immediately, see implementation guide |
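The scoring tiers above map to a trivial lookup. This sketch is purely illustrative (the function name is ours, not part of any statute or product):

```python
# Map a 0-5 scorecard count to the status tiers in the table above.

def readiness_status(score: int) -> str:
    """Return the compliance tier for a 5-point scorecard result."""
    if not 0 <= score <= 5:
        raise ValueError("score must be between 0 and 5")
    if score == 5:
        return "Fully compliant"
    if score >= 3:
        return "Mostly compliant"
    return "Non-compliant"
```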
SiteGPT chatbots are compliant by default: Built-in disclosure messages, human escalation, SOC 2 Type II certification, HIPAA-eligible infrastructure, and GDPR-ready data processing. Start your free trial →

Penalties for Non-Compliance

Chatbot disclosure violations carry penalties ranging from $1,000 per violation at the state level to €35 million under the EU AI Act, and, as of April 2026, potential criminal liability for AI safety failures. Here is the consolidated penalty landscape as of April 2026.

Penalty Comparison Table

| State/Law | Penalty | Who Can Enforce | Scope | Notes |
| --- | --- | --- | --- | --- |
| Florida AG investigation | Criminal probe (not fines, potential criminal charges) | Florida AG (James Uthmeier) | AI safety + harm prevention | Subpoenas issued Apr 21, 2026, first criminal investigation of an AI company by a state AG |
| California SB 243 | $1,000 min/violation (AG up to $2,500) | Private right of action + AG | Companion bots only | CS bots excluded |
| Washington HB 2225 | Actual damages only (no statutory minimum) | Private right of action + AG | Companion bots only | Effective Jan 1, 2027 |
| Utah | $2,500/violation (up to $5K repeat) | AG only | All AI interactions | Broadest state scope, includes CS |
| New Hampshire HB 143 | $1,000/violation | Private right of action (child/parent) | Child safety only | Not general disclosure |
| Maine | $1,000/violation | AG only | Consumer transactions | Broad AI transparency |
| Nebraska | TBD (rulemaking pending) | TBD | Minors' conversational AI safety | Effective Jul 1, 2027 |
| FTC Act | Up to $50,120 per violation | FTC enforcement | ALL chatbots nationwide | The real risk for CS bots |
| EU AI Act | Up to €35M or 7% global turnover | National authorities | ALL AI systems | Applies to EU-serving companies |
| CHATBOT Act (proposed) | TBD | FTC | Licensed professional impersonation | Federal, if enacted |
⚠️ Breaking: criminal liability escalation (April 21, 2026). Florida Attorney General James Uthmeier launched a criminal investigation into OpenAI over claims that the accused gunman in the Florida State University shooting consulted ChatGPT before the attack. Subpoenas were issued April 21, seeking all user harm policies, safety training materials, and law enforcement cooperation records from March 2024 through April 2026. This is the first criminal investigation of an AI company by a state attorney general, signaling a shift from civil penalties ($1,000 to $50,120 per violation) to potential criminal liability for AI providers that fail to implement adequate safety guardrails.
What this means for chatbot operators: While this investigation targets the AI provider (OpenAI), not businesses using chatbots, it establishes a clear enforcement trajectory. State AGs are now treating AI safety failures as potentially criminal matters. Your chatbot's safety protocols (harm detection, escalation pathways, content filters) are no longer just best practice. They're evidence of good-faith compliance.
Sources: NPR, April 21, 2026 | CNN, April 21, 2026 | NBC News, April 21, 2026 | New York Times, April 21, 2026
Key Insight: 98 AI chatbot bills have been tracked across 34 states. Only 6 became law. Zero specifically target customer service bots. But the FTC Act applies to all of them, with penalties up to $50,120 per violation. And as of April 2026, the Florida AG has shown that AI enforcement can escalate beyond civil fines to criminal investigation.
For customer service chatbot operators, the FTC Act is your primary compliance concern, not state companion-bot laws. While California's SB 243 and Washington's HB 2225 specifically exclude customer service bots, Utah and Maine's laws apply broadly, and the EU AI Act covers all AI systems serving EU customers. The FTC can pursue enforcement against any undisclosed AI chatbot nationwide.

How to Make Your Chatbot Compliant: Step-by-Step

Nobody else publishes an implementation guide for chatbot compliance. This section gives you the exact steps to achieve compliance across all applicable regulations.

Step 1: Add Disclosure to Your Chatbot Greeting

Every chatbot conversation must begin with a clear statement that the user is interacting with AI. This disclosure must appear before any data collection or meaningful interaction occurs.
Compliant greeting examples:
"Hi! I'm an AI assistant here to help with your questions. If you'd like to speak with a human team member, just ask."
"This is an automated AI chatbot from [Company Name]. Type 'human' anytime to connect with a person."
"👋 You're chatting with our AI support assistant. It can answer most questions instantly, and transfer you to a human whenever you need one."
Non-compliant approaches to avoid:
  • Burying "powered by AI" in your website footer
  • Mentioning AI only in your terms of service
  • Using ambiguous language like "smart assistant" or "enhanced support"
  • Requiring users to click a link to discover they're talking to AI
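A compliant greeting can also be enforced programmatically. Below is a minimal, hypothetical sketch (all names are illustrative, not a real SiteGPT or vendor API) that builds a disclosure-first opening message and sanity-checks it against the vague labels listed above:

```python
# Hypothetical sketch: a greeting builder that always leads with an explicit
# AI disclosure, plus a crude heuristic check for conspicuousness.

DISCLOSURE = "You're chatting with an AI assistant."
ESCALATION_HINT = "Type 'human' anytime to reach a person."

def build_greeting(company: str) -> str:
    """Return an opening message that discloses AI before anything else."""
    return f"Hi! {DISCLOSURE} This is {company}'s automated support. {ESCALATION_HINT}"

# Ambiguous labels the FTC's clear-and-conspicuous standard would likely reject.
AMBIGUOUS_TERMS = ("smart assistant", "enhanced support", "enhanced automation")

def is_conspicuous(greeting: str) -> bool:
    """Heuristic: 'AI' appears as a plain word and no euphemism is used."""
    lowered = greeting.lower()
    return "ai" in lowered.split() and not any(t in lowered for t in AMBIGUOUS_TERMS)
```

A check like `is_conspicuous` is deliberately crude; its value is as a regression test so a copy edit never silently strips the disclosure from your greeting.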

Step 2: Implement Human Escalation

Every chatbot must offer a clear path to human support. This isn't just regulatory, it's also good business. As CMSWire reported in April 2026, "There is no coordination layer that determines which component, AI, knowledge system or human agent, should take control." The result: brittle systems where failure in one capability affects the entire customer interaction.
Requirements:
  • The escalation option must be visible within the chatbot interface at all times
  • The handoff must preserve conversation context: the human agent should see what the chatbot already discussed
  • The handoff must actually work. Test it regularly.
SiteGPT implementation: Every SiteGPT plan includes built-in human escalation. The chatbot automatically transfers to a human agent when the user requests it, and the agent receives the full conversation transcript. See SiteGPT features for details.
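A context-preserving handoff can be sketched in a few lines. Everything below is hypothetical (trigger words, dict shapes, function names); a real deployment would push the package into a helpdesk queue rather than return it.

```python
# Sketch: detect an escalation request and hand the full transcript to a
# human agent so the user never has to repeat themselves.

ESCALATION_TRIGGERS = {"human", "agent", "representative", "person"}

def wants_human(message: str) -> bool:
    """True when the user's message asks for a human."""
    return bool(ESCALATION_TRIGGERS & set(message.lower().split()))

def hand_off(transcript: list) -> dict:
    """Package the conversation context for the human agent."""
    return {
        "transcript": transcript,            # full history, not a blank slate
        "summary": transcript[-1]["text"],   # last user message for quick triage
        "requires_human": True,
    }
```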

Step 3: Configure Data Privacy Notices

Your chatbot must link to your privacy policy and comply with applicable data protection regulations.
Requirements:
  • Add a privacy policy link to the chatbot widget (not just the website footer)
  • Ensure GDPR compliance if you serve EU customers (data processing agreement, right to deletion)
  • Ensure CCPA compliance if you serve California residents
  • Verify data security: SOC 2 Type II certification, ISO 27001, or equivalent
SiteGPT implementation: GDPR DPA available for all plans. SOC 2 Type II certified. HIPAA-eligible for healthcare customers.

Step 4: Add Safety Guardrails

Even customer service chatbots should have basic safety protocols for edge-case interactions.
Requirements:
  • Self-harm detection and escalation to crisis resources (e.g., 988 Suicide & Crisis Lifeline)
  • Content filters that prevent harmful, dangerous, or illegal outputs
  • Age-appropriate interaction settings if minors may use the chatbot
Why this matters for compliance: While safety protocols are primarily required for companion bots under CA SB 243 and WA HB 2225, the FTC expects all AI systems to operate responsibly. Implementing safety guardrails is both a compliance best practice and a brand protection measure.
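A minimal guardrail might look like the sketch below. The keyword list is a deliberately crude placeholder, and production systems use trained safety classifiers rather than string matching; the 988 Lifeline is the resource referenced above.

```python
# Illustrative sketch only: route self-harm signals to crisis resources
# before any automated answer. Not a substitute for a real safety classifier.

from typing import Optional

CRISIS_KEYWORDS = ("hurt myself", "suicide", "end my life")
CRISIS_RESPONSE = (
    "If you're in crisis, please reach out to the 988 Suicide & Crisis "
    "Lifeline (call or text 988). Connecting you with a person now."
)

def safety_check(message: str) -> Optional[str]:
    """Return a crisis response when a message matches, else None."""
    lowered = message.lower()
    if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
        return CRISIS_RESPONSE
    return None
```

The design point is ordering: the safety check runs before the model generates a reply, so a crisis message is never answered by the normal support flow.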

Step 5: Document Your Compliance

Regulators expect you to demonstrate compliance, not just claim it.
Requirements:
  • Log all disclosure messages with timestamps
  • Document escalation paths and test them quarterly
  • Identify a compliance officer or responsible party
  • Schedule quarterly compliance reviews (the regulatory landscape changes fast)
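Timestamped disclosure logging, the first requirement above, can be as simple as the following sketch. The in-memory list is for illustration only; a real audit trail needs durable, tamper-evident storage.

```python
# Sketch of audit logging: record every disclosure event with a UTC timestamp
# so compliance can be demonstrated later. Field names are our own.

from datetime import datetime, timezone

audit_log: list = []

def log_disclosure(session_id: str, message: str) -> dict:
    """Append a timestamped record that the AI disclosure was shown."""
    entry = {
        "session_id": session_id,
        "event": "ai_disclosure_shown",
        "message": message,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(entry)
    return entry
```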

Step 6: Test and Monitor

Compliance is not a one-time task. Regulations change, new laws are enacted, and your chatbot may evolve.
Requirements:
  • Test disclosure on all chatbot entry points (website widget, mobile, integrations)
  • Update your chatbot greeting and privacy notices whenever new laws take effect
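Entry-point coverage can be checked automatically. The sketch below is hypothetical (entry-point names and greetings are made-up examples) and reuses the same plain-word heuristic: a greeting without the word "AI" is flagged.

```python
# Sketch: flag any conversation entry point whose greeting omits the
# AI disclosure. Entry points and greetings are illustrative examples.

ENTRY_POINT_GREETINGS = {
    "website_widget": "You're chatting with our AI assistant.",
    "mobile_app": "You're chatting with our AI assistant.",
    "help_center": "Welcome! How can we help?",  # missing disclosure
}

def uncovered_entry_points(greetings: dict) -> list:
    """Return entry points whose greeting never mentions AI as a plain word."""
    return [name for name, text in greetings.items()
            if "ai" not in text.lower().split()]
```

Run a check like this in CI or a quarterly review so a new chat surface can't ship without its disclosure.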
Compliance shortcut: Companies that implement NIST AI Risk Management Framework (AI RMF) or ISO/IEC 42001 qualify for statutory safe harbor under the Colorado AI Act. This means investing in a recognized AI governance framework now can shield you from the $20,000/violation penalties taking effect June 30, 2026.
SiteGPT makes compliance automatic: Disclosure messages, human escalation, data privacy, and compliance documentation are built into every plan. No configuration required, it works out of the box. Build your compliant chatbot, free trial

Does My Customer Service Chatbot Need to Disclose AI?

Your customer service chatbot is probably excluded from the two most prominent state laws, California SB 243 and Washington HB 2225, but you should disclose AI anyway. Here's why the honest answer is more useful than what most law firms will tell you.

The Good News: Your CS Bot Is Probably Excluded

California SB 243 expressly excludes "a bot used only for customer service." The law targets companion chatbots (AI that simulates human relationships, exhibits anthropomorphic features, or sustains emotional connections). If your chatbot answers product questions, processes returns, or handles support tickets, it falls outside SB 243's scope.
Washington HB 2225 similarly excludes bots used for "a business's operational purposes, productivity, and analysis." Your customer service chatbot is not what these laws are regulating.
These are companion-bot laws, not customer-service-bot laws.
⚠️ Update (April 21, 2026): California AB 1609, which would be the first US law to require disclosure specifically for customer service chatbots, passed the Assembly Privacy and Consumer Protection Committee on April 17 (9-4 vote). If enacted, this would end the CS-bot exemption in California. We'll update this guide as it advances.

What Actually Applies to Your Customer Service Chatbot

The FTC Act applies to all AI chatbots nationwide. Failing to disclose AI is considered potentially deceptive under Section 5. This is your number-one compliance concern, and it applies regardless of whether your state has passed its own chatbot law.
The EU AI Act Article 50 applies to all AI systems serving EU customers, effective August 2, 2026. No companion-bot exclusions. No customer-service exemptions.
Utah's AI Policy Act requires disclosure for all AI interactions. No customer service exemption.
Maine's AI Transparency Act applies to consumer transaction chatbots. Broad scope.

Why Disclose Anyway, Even If You're Technically Excluded

Research from ThunderBit's April 2026 AI customer service statistics found that 37% of consumers disengage if they discover they were talking to AI when they expected a human. Disclosure isn't just a legal checkbox, it's a conversion strategy.
Additionally, Ringly.io's self-service completion data shows that "fewer than one in five actually finish" self-service interactions, and "the ones who fail get angrier than if they'd called a human first." Undisclosed AI compounds this frustration.
Four reasons to disclose even when you don't legally have to:
  1. Trust is a competitive advantage. "We disclose even where we don't legally have to" beats "we found a compliance loophole" every time.
  2. Regulatory momentum is real. Laws are broadening. Being ahead of regulation is cheaper than catching up.
  3. Customer experience improves. When users know they're talking to AI, they set appropriate expectations, and satisfaction goes up.
  4. Criminal enforcement is now real. On April 21, 2026, Florida's AG launched a criminal investigation into OpenAI over AI safety failures. State AGs are escalating from civil fines to criminal probes. Being able to demonstrate robust safety protocols (harm detection, escalation pathways, content filters) isn't just best practice. It's evidence of good-faith compliance.
"California and Washington's chatbot laws specifically EXCLUDE customer service bots, and SiteGPT complies anyway."
Every SiteGPT chatbot includes configurable disclosure messages, human escalation, and compliance documentation by default, because best practice isn't about checking legal boxes, it's about building customer trust.

EU AI Act: Countdown to August 2, 2026

103 days until mandatory EU AI chatbot disclosure. Article 50 of the EU AI Act takes effect August 2, 2026, requiring every business serving EU customers to inform users when they are interacting with an AI system.

What Article 50 Requires

Article 50 mandates that providers of AI systems ensure users are informed when they are interacting with an AI system, unless this is obvious from the circumstances and the user already expects AI interaction.
For chatbot owners, this means:
  • A clear disclosure message at the start of every conversation
  • No ambiguity, the user must know they're talking to AI
  • Applies to all chatbot types: customer service, sales, onboarding, support, lead capture

Who This Applies To (Hint: Probably You)

Any business serving EU customers must comply, including US-based companies. If a single user in France, Germany, or any of the 27 EU member states interacts with your chatbot, Article 50 applies. The regulation is based on where the user is located, not where your company is headquartered.
Article 50 applies to all AI systems. No companion-bot exclusions. No customer-service exemptions. No small-business carve-outs. No de minimis thresholds.

Penalties

Up to €35 million or 7% of global annual turnover, whichever is higher. This is the most severe penalty regime in the chatbot regulatory landscape.

EU Readiness Checklist

Use this checklist to prepare for August 2, 2026:
  • [ ] Disclosure message added to all chatbot entry points
  • [ ] Privacy policy updated to reference AI data processing
  • [ ] GDPR data processing agreement in place (required regardless of Article 50)
  • [ ] Human escalation path available for EU users
  • [ ] Compliance documentation logged and accessible
  • [ ] Team briefed on new disclosure requirements
  • [ ] Quarterly review scheduled for ongoing compliance
SiteGPT is EU-ready today: Every plan includes GDPR DPA, configurable disclosure messages, and human escalation. No additional configuration needed for Article 50 compliance. Start your free trial →

Frequently Asked Questions

Basics

Do I need to disclose my chatbot is AI?
Yes. While some state laws (California, Washington) specifically target companion bots, the FTC considers non-disclosure of any AI chatbot potentially deceptive. Utah and Maine require disclosure broadly. The EU AI Act makes it mandatory starting August 2, 2026.
What states have chatbot disclosure laws?
As of April 2026: California (SB 243, companion bots), Washington (HB 2225, companion bots), Utah (all AI interactions), New Hampshire (HB 143, child safety), Maine (consumer transactions), and Nebraska (LB 525, minors' conversational AI safety). 27+ states have pending legislation.
What is California SB 243?
A California law effective January 1, 2026 requiring businesses to clearly disclose when users are interacting with an AI companion chatbot. It targets companion bots (AI that simulates human relationships) and expressly excludes customer service bots. The penalty is $1,000 minimum per violation.

Compliance & Enforcement

Is it illegal to not disclose an AI chatbot?
Under the FTC Act, possibly yes. The FTC considers non-disclosure potentially deceptive. Some state laws also mandate disclosure. EU enforcement begins August 2, 2026. The risk is real and growing.
What are the penalties for chatbot disclosure violations?
Penalties range from $1,000 to $2,500 per violation at the state level. FTC penalties can reach $50,120 per violation. EU AI Act penalties go up to €35 million or 7% of global turnover. The severity escalates with scale.
Can AI chatbot operators face criminal liability?
As of April 21, 2026, Florida Attorney General James Uthmeier launched a criminal investigation into OpenAI over ChatGPT's role in the Florida State University shooting. Subpoenas were issued seeking user harm policies, safety training materials, and law enforcement cooperation records. This is the first criminal investigation of an AI company by a state attorney general. While the investigation targets the AI provider (not businesses using chatbots), it signals that AI enforcement is escalating beyond civil penalties, and that safety protocols, harm-detection systems, and escalation pathways are now evidence of good-faith compliance.
Sources: NPR, CNN, NBC News, and the New York Times — all April 21, 2026 (links cited in the Penalties section above)
What is the FTC position on chatbot disclosure?
The FTC has stated since 2018 that failing to disclose AI chatbots may constitute deceptive business practices under Section 5 of the FTC Act. This applies to all chatbot types nationwide, no state-by-state variation, no companion-bot exemptions.
What is the CHATBOT Act?
A proposed federal law introduced March 19, 2026 that would regulate AI chatbot impersonation of licensed professionals like doctors, lawyers, and therapists. It has not been enacted but signals federal regulatory direction.

Practical Implementation

How do I make my chatbot compliant?
Add clear AI disclosure at the start of every conversation, implement human escalation, ensure data privacy compliance (GDPR, CCPA), add safety guardrails, and maintain documentation. See the step-by-step implementation guide for exact instructions.
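Those five checks can be tracked programmatically as a readiness scorecard. A minimal sketch in Python; the field names and scoring format are illustrative assumptions, not an official audit framework:

```python
# Minimal sketch of the five compliance checks described above.
# Field names are illustrative, not an official scorecard format.

from dataclasses import dataclass

@dataclass
class ChatbotConfig:
    discloses_ai: bool        # clear AI disclosure at conversation start
    human_escalation: bool    # path to a human agent
    privacy_compliant: bool   # GDPR / CCPA data handling in place
    safety_guardrails: bool   # harm detection and content filters
    documented: bool          # compliance decisions logged and accessible

def readiness_score(cfg: ChatbotConfig) -> int:
    """Count how many of the five checks pass (0 to 5)."""
    checks = [cfg.discloses_ai, cfg.human_escalation,
              cfg.privacy_compliant, cfg.safety_guardrails, cfg.documented]
    return sum(checks)

# Example: disclosure, escalation, and privacy done; guardrails
# and documentation still outstanding.
cfg = ChatbotConfig(True, True, True, False, False)
score = readiness_score(cfg)
```

A score below 5 simply flags which of the steps in the implementation guide remain; it is a self-check aid, not a legal determination of compliance.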
Do customer service chatbots need to disclose AI?
California and Washington's laws specifically exclude customer service bots. But the FTC Act applies to all AI chatbots, the EU AI Act applies to all AI systems, and Utah and Maine apply broadly. Best practice: disclose anyway. 37% of consumers disengage if they discover undisclosed AI.
Does the EU require AI chatbot disclosure?
Yes, starting August 2, 2026 under Article 50 of the EU AI Act. This applies to any company serving EU customers, including US-based businesses. No exceptions for chatbot type or company size.

About the Author

Bhanu Teja P is the co-founder of SiteGPT, an AI customer support platform used by thousands of businesses worldwide. He writes about AI regulation, chatbot compliance, and building customer trust through transparency. Follow him on Twitter/X (@pbteja1998).
Last updated: April 2026. Pricing and features verified from official vendor sources.

Give Your Customers The Experience That They Deserve

Create A Chatbot In Minutes, Today

Create Your Chatbot Now

Written by

Bhanu Teja P

Founder @ SiteGPT.ai & SourceSync.ai