LSM Agents


EU AI Act Article 50: Make your chatbot and voice agent compliant before 2 August 2026.

Article 50 of the EU AI Act applies from 2 August 2026. Any system that interacts with a person (chatbot, voice agent, AI receptionist) must disclose that it is AI. Deepfakes must carry a machine-readable watermark. Fines run up to €15M or 3% of global annual turnover, whichever is higher. We audit, remediate, and ship Article 50 disclosure scripts and audit logs by default.

TL;DR

  • Article 50 enforcement: 2 August 2026. The clock is real.
  • Three obligations: AI-system disclosure, deepfake watermarking, AI-generated content labeling. Each needs its own technical fix.
  • Maximum penalty: €15M or 3% of worldwide annual turnover, whichever is higher. For SMEs and startups, Art. 99(6) caps the fine at the lower of the two amounts.
  • Our delivery: AI inventory + gap report (€3,800) → fixed-price implementation (~€12,500) covering disclosure UI, audit logs, and provider/deployer documentation under Art. 53 and Art. 26.

Who has to comply

Article 50 applies to providers AND deployers of AI systems that interact with natural persons. 'Provider' = builds and ships the AI. 'Deployer' = uses it commercially. If you operate a chatbot on your own site, you are a deployer with full Article 50 obligations regardless of who built it. The exemption for purely text-to-text systems used for spam filtering does not extend to customer interaction. Voice agents are explicitly covered. Synthesised voice clones (ElevenLabs, custom) trigger the deepfake watermark obligation under Art. 50(2).

What Article 50 actually requires

Three separate obligations, each with its own technical requirement. Most operators we audit fail on at least two. The common gaps:

  • No persistent on-screen disclosure that the chat partner is AI ("This is an automated assistant")
  • Disclosure shown only at session start, not before the first AI response
  • Disclosure rendered in a way assistive tech cannot read (image-only, no aria-label)
  • Voice agent introducing itself as "Maria" without naming itself as AI
  • AI-generated content (images, audio, text) published without C2PA metadata or visible label
  • Deepfake content (synthetic voice cloning a real person) without machine-readable watermark
  • No audit log of AI-system version, system-prompt hash, or training-data class
  • No documentation of risk class (Annex III) or general-purpose model used
  • No human-oversight or escalation path documented under Art. 14
  • No data-protection impact assessment (DPIA) updated for the AI system
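The first three gaps on that list share one fix: a disclosure rendered as real text, shown before the first AI response, readable by assistive technology. As a sketch (the function name, class name, and wording are our illustration, not a published library):

```typescript
// Sketch of a plain-DOM disclosure snippet builder. All names here are
// illustrative assumptions, not part of any standard component library.
function buildDisclosureBanner(lang: "en" | "de" = "en"): string {
  const text =
    lang === "de"
      ? "Sie chatten mit einem automatisierten KI-Assistenten."
      : "This is an automated AI assistant.";
  // Real text, not an image, so screen readers can announce it;
  // role="status" + aria-live="polite" make it a live region that is
  // announced without stealing focus, before the first AI reply renders.
  return `<div class="ai-disclosure" role="status" aria-live="polite">${text}</div>`;
}
```

Inject the returned markup at the top of the chat widget, persistently, not only on session start.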

What enforcement actually looks like

BNetzA (Bundesnetzagentur) is the lead market-surveillance authority for Germany. Penalties under Art. 99: up to €15M or 3% of worldwide annual turnover for Article 50 violations specifically. The first cases will not be against multinationals; they have legal teams. The first cases will be SMBs running off-the-shelf chatbots without the disclosure layer. €15M is a ceiling, not a floor: smaller operators face proportionate fines, but the inspection visit and a public warning letter (Abmahnung) are damaging on their own.
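As back-of-envelope arithmetic only (the function is our illustration, not legal advice), the Art. 99 ceiling described above is the higher of €15M and 3% of worldwide annual turnover:

```typescript
// Illustrative only: Art. 99 fine ceiling for Article 50 violations,
// per the text above. Computed as (3 * x) / 100 rather than 0.03 * x
// to avoid binary floating-point drift on round euro amounts.
function art99CeilingEur(worldwideAnnualTurnoverEur: number): number {
  return Math.max(15_000_000, (3 * worldwideAnnualTurnoverEur) / 100);
}
```

A €2B-turnover group faces a €60M ceiling; a €100M-turnover SMB hits the €15M floor of the formula, and the actual fine would be set proportionately below it.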

How we run an AI Act project

  1. AI inventory (5 days): catalog every AI system in your stack: chatbots, voice agents, classifiers, embedding services, content generators. Output: a written inventory keyed to Art. 50 / 53 / 26 obligations.
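One inventory row per system can be sketched as a typed record; the field names and the sample vendor below are illustrative assumptions, not a normative schema:

```typescript
// Illustrative shape for one AI-inventory row; not a fixed schema.
type Role = "provider" | "deployer" | "both";

interface AiSystemRecord {
  name: string;                  // e.g. "website chatbot"
  vendor: string | null;         // null if built in-house
  role: Role;                    // provider/deployer mapping drives Art. 26/53 duties
  interactsWithPersons: boolean; // true => Art. 50(1) disclosure obligation
  generatesContent: boolean;     // true => Art. 50(2) labeling obligation
  obligations: string[];         // resolved list, e.g. ["Art. 50(1)", "Art. 26"]
}

// Hypothetical example row for a bought-in chatbot:
const chatbot: AiSystemRecord = {
  name: "website chatbot",
  vendor: "Acme Bots GmbH", // hypothetical vendor name
  role: "deployer",
  interactsWithPersons: true,
  generatesContent: true,
  obligations: ["Art. 50(1)", "Art. 26"],
};
```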

  2. Gap analysis: per-system risk-class assessment (Annex III), provider-vs-deployer mapping, missing disclosures, missing watermarks, missing audit logs.

  3. Remediation plan: fixed-price quote with phased rollout. Critical disclosures ship first (week 1); audit logs and DPIA follow (weeks 2–3).

  4. Build (10–14 days): we ship a disclosure-component library (React and plain-DOM versions), audit-log middleware, and the deployer documentation pack.
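The audit-log side of the build can be sketched as follows; field names are our illustration, not a standard. The key design choice: log a hash of the system prompt, not the prompt itself, so the log proves which prompt version was live without leaking its contents.

```typescript
import { createHash } from "node:crypto";

// Minimal sketch of an audit-log entry (illustrative field names).
interface AuditEntry {
  timestamp: string;          // ISO 8601, when the AI response was produced
  modelVersion: string;       // exact model/version identifier in production
  systemPromptSha256: string; // hash only: proves prompt version, leaks nothing
}

function makeAuditEntry(modelVersion: string, systemPrompt: string): AuditEntry {
  return {
    timestamp: new Date().toISOString(),
    modelVersion,
    systemPromptSha256: createHash("sha256")
      .update(systemPrompt, "utf8")
      .digest("hex"),
  };
}
```

Because the hash is deterministic, an auditor can verify that the logged entries match the prompt version checked into source control.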

  5. Verification: third-party review of the implementation. Output: a signed-off Article 50 readiness report.

  6. Monitoring: the Operator-tier retainer includes quarterly reviews against Commission guidance and Codes of Practice updates, which the EU AI Office publishes on a rolling basis.

EU AI Act Article 50 — frequently asked questions

Does Article 50 apply to my chatbot if I bought it from a vendor?

Yes. You are the deployer, even if the vendor is the provider. Article 50 applies to deployers — full disclosure obligation regardless of who built the system. The vendor is responsible for technical capability; you are responsible for the disclosure being live, visible, and accessible on your site.

What about AI Overviews from Google or Bing — am I responsible for those?

No. The provider (Google / Microsoft) bears the disclosure obligation for AI-generated answers within their search products. You are not the deployer of those.

Are voice agents subject to deepfake watermarking?

Only if the synthetic voice is cloned from a real, identifiable person. Generic synthesized voices (ElevenLabs default voices, OpenAI standard voices) are AI-generated content under Art. 50(2) and require labeling, but not watermarking. Cloned voices of real individuals trigger the full deepfake regime.
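The distinction in this answer reduces to a two-branch rule; the helper below is our illustration of it, not language from the Act:

```typescript
// Encodes the rule above: a voice cloned from a real, identifiable person
// triggers the full deepfake regime; a generic synthetic voice only needs
// labeling as AI-generated content. Type and function names are illustrative.
type VoiceKind = "cloned_from_real_person" | "generic_synthetic";
type Art50VoiceObligation =
  | "deepfake_machine_readable_watermark"
  | "ai_generated_content_label";

function voiceObligation(kind: VoiceKind): Art50VoiceObligation {
  return kind === "cloned_from_real_person"
    ? "deepfake_machine_readable_watermark"
    : "ai_generated_content_label";
}
```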

When does Article 50 enforcement start?

2 August 2026. The Act came into force on 1 August 2024, with a two-year transition for most obligations, including Article 50.

Is there a small-business exemption?

There is no SMB exemption from Article 50. Annex III high-risk classifications have a deployer registration carve-out for some SMBs, but transparency obligations apply universally. The AI Office is expected to publish proportionate-enforcement guidance in summer 2026.

How much does AI Act compliance cost?

Our AI inventory + gap analysis is €3,800 fixed price. Implementation typically €8,500–€18,000 depending on the number of AI systems and integration complexity. Configure exact pricing in the calculator.