Skip to main content

EU AI Act Compliance with TealTiger

The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive legal framework for artificial intelligence, establishing harmonized rules for the development, deployment, and use of AI systems in the European Union. It came into force on August 1, 2024, with a phased implementation timeline through 2027. TealTiger provides essential capabilities to help enterprises comply with EU AI Act requirements, particularly for high-risk AI systems that use Large Language Models (LLMs) and agentic AI.

EU AI Act Overview

Risk-Based Classification

The EU AI Act classifies AI systems into four risk categories:
Risk LevelExamplesRequirements
Unacceptable RiskSocial scoring, real-time biometric identificationProhibited
High RiskCritical infrastructure, employment, law enforcement, educationStrict compliance required
Limited RiskChatbots, emotion recognitionTransparency obligations
Minimal RiskAI-enabled video games, spam filtersNo obligations
Most enterprise LLM applications fall into High Risk or Limited Risk categories, requiring compliance with specific obligations.

High-Risk AI System Requirements

Article 9: Risk Management System

Requirement: Establish and maintain a risk management system throughout the AI system’s lifecycle. TealTiger Support (v1.0+):

TealGuard Policies

Define risk mitigation policies for PII exposure, prompt injection, content moderation, and budget overruns

Policy Modes

Use MONITOR mode to assess risks before enforcement, then transition to ENFORCE mode

Risk Scoring

Assign risk scores to policy violations for prioritization and escalation

Continuous Monitoring

Real-time detection and logging of policy violations across all LLM interactions
Example:
from tealtiger import TealOpenAI, TealEngine, PIIDetector, PromptInjectionDetector

# Risk management system for high-risk AI
engine = TealEngine(
    guardrails=[
        PIIDetector(
            action="block",
            entities=["email", "phone", "ssn", "credit_card"],
            risk_score=9  # High risk
        ),
        PromptInjectionDetector(
            action="block",
            risk_score=8  # High risk
        )
    ],
    mode="ENFORCE"  # Strict enforcement for high-risk systems
)

client = TealOpenAI(api_key="sk-xxx", engine=engine)
Future Enhancement (v1.2+):
  • Automated risk assessment scoring
  • Risk heat maps and dashboards
  • Integration with enterprise risk management systems

Article 10: Data and Data Governance

Requirement: Training, validation, and testing data must be relevant, representative, and free from errors and biases. TealTiger Support (v1.0+):

Input Validation

Validate and sanitize all inputs before sending to LLMs

Content Moderation

Detect and block harmful, biased, or inappropriate content

Audit Logging

Log all inputs, outputs, and policy decisions for data governance

Redaction

Automatically redact sensitive data from logs and audit trails
Example:
from tealtiger import TealOpenAI, TealEngine, ContentModerator

# Data governance for EU AI Act compliance
engine = TealEngine(
    guardrails=[
        ContentModerator(
            action="block",
            categories=["hate", "violence", "sexual", "self-harm"],
            threshold=0.7
        )
    ],
    audit_config={
        "enabled": True,
        "redact_pii": True,  # GDPR compliance
        "retention_days": 365  # EU AI Act record-keeping
    }
)

client = TealOpenAI(api_key="sk-xxx", engine=engine)
Future Enhancement (v1.3+):
  • Bias detection and mitigation
  • Data quality metrics and reporting
  • Automated data lineage tracking

Article 11: Technical Documentation

Requirement: Maintain detailed technical documentation describing the AI system’s design, development, and performance. TealTiger Support (v1.0+):

Policy Documentation

Document all guardrails, policies, and risk mitigation strategies

Decision Logs

Comprehensive logs of all policy decisions with reason codes

Audit Events

Structured audit events with versioned schema for compliance reporting

Configuration Export

Export TealEngine configuration for documentation and audits
Example:
# Export TealEngine configuration for technical documentation
config = engine.export_config()

# Save to file for EU AI Act documentation
with open("tealtiger-config-v1.0.json", "w") as f:
    json.dump(config, f, indent=2)

# Audit event schema for compliance reporting
audit_schema = engine.get_audit_schema()
Future Enhancement (v1.2+):
  • Automated technical documentation generation
  • Compliance report templates (EU AI Act format)
  • Integration with document management systems

Article 12: Record-Keeping

Requirement: Keep logs of AI system operations for at least 6 months (or longer for high-risk systems). TealTiger Support (v1.0+):

Audit Logging

Persistent logging of all LLM interactions and policy decisions

Structured Events

Versioned audit event schema for long-term storage and retrieval

Correlation IDs

Trace requests across distributed systems for compliance audits

Retention Policies

Configurable retention periods (6 months to 10 years)
Example:
# Configure record-keeping for EU AI Act compliance
engine = TealEngine(
    audit_config={
        "enabled": True,
        "retention_days": 730,  # 2 years for high-risk systems
        "storage_backend": "s3",  # Durable storage
        "bucket": "eu-ai-act-audit-logs",
        "encryption": "AES-256"  # GDPR requirement
    }
)

# Query audit logs for compliance reporting
from tealtiger.audit import AuditQuery

query = AuditQuery(
    start_date="2026-01-01",
    end_date="2026-12-31",
    event_types=["policy_violation", "guardrail_triggered"],
    risk_score_min=7
)

audit_records = engine.query_audit_logs(query)
Future Enhancement (v1.2+):
  • Automated compliance report generation
  • Integration with SIEM systems (Splunk, Datadog)
  • Long-term archival to AWS Glacier / Azure Archive

Article 13: Transparency and Information to Users

Requirement: Inform users that they are interacting with an AI system and provide clear information about its capabilities and limitations. TealTiger Support (v1.0+):

Decision Transparency

Expose policy decisions and reason codes to users

Guardrail Notifications

Inform users when guardrails are triggered

Cost Transparency

Show users the cost of LLM interactions

Provider Disclosure

Disclose which LLM provider is being used
Example:
# Transparent AI system with user notifications
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What is my account balance?"}]
)

# Check if guardrails were triggered
if response.tealtiger_decision.blocked:
    print(f"⚠️ AI Safety Notice: {response.tealtiger_decision.reason}")
    print(f"Reason Code: {response.tealtiger_decision.reason_code}")
    print(f"Risk Score: {response.tealtiger_decision.risk_score}")
    
# Show cost transparency
print(f"💰 Cost: ${response.tealtiger_decision.cost:.4f}")
Future Enhancement (v1.2+):
  • User-facing transparency dashboard
  • Automated AI disclosure banners
  • Multi-language transparency notices

Article 14: Human Oversight

Requirement: High-risk AI systems must be designed to enable effective human oversight. TealTiger Support (v1.0+):

Policy Modes

MONITOR mode allows human review before enforcement

Manual Override

Humans can override policy decisions when appropriate

Escalation Workflows

High-risk decisions can be escalated to human reviewers

Audit Review

Humans can review audit logs and policy violations
Example:
# Human-in-the-loop for high-risk decisions
engine = TealEngine(
    mode="MONITOR",  # Log violations but don't block
    escalation_config={
        "enabled": True,
        "risk_threshold": 8,  # Escalate high-risk decisions
        "webhook_url": "https://api.company.com/escalate"
    }
)

# Human reviewer can approve or reject
if response.tealtiger_decision.escalated:
    approval = await request_human_approval(response)
    if not approval.approved:
        raise PolicyViolationError("Human reviewer rejected request")
Future Enhancement (v1.2+):
  • Human review dashboard
  • Approval workflows with SLA tracking
  • Integration with ticketing systems (Jira, ServiceNow)

Article 15: Accuracy, Robustness, and Cybersecurity

Requirement: AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity. TealTiger Support (v1.0+):

Prompt Injection Defense

Detect and block prompt injection attacks

Input Validation

Validate all inputs for malicious content

Rate Limiting

Prevent abuse and DDoS attacks

Budget Controls

Prevent runaway costs from adversarial inputs
Example:
# Cybersecurity for EU AI Act compliance
engine = TealEngine(
    guardrails=[
        PromptInjectionDetector(action="block"),
        InputValidator(max_length=10000, allowed_chars="alphanumeric")
    ],
    rate_limit={
        "requests_per_minute": 60,
        "requests_per_hour": 1000
    },
    budget_limit=1000.00  # Prevent runaway costs
)
Future Enhancement (v1.3+):
  • Adversarial robustness testing
  • Model output validation
  • Automated security scanning

Limited-Risk AI System Requirements

Article 50: Transparency Obligations

Requirement: Users must be informed that they are interacting with an AI system. TealTiger Support (v1.0+): All TealTiger features for high-risk systems also apply to limited-risk systems, with simplified configuration:
# Minimal transparency for limited-risk AI (e.g., chatbots)
engine = TealEngine(
    mode="MONITOR",  # Log but don't block
    transparency={
        "enabled": True,
        "disclosure": "You are interacting with an AI assistant powered by OpenAI GPT-4."
    }
)

Compliance Roadmap

Current Capabilities (v1.0 - v1.1)

1

Risk Management

TealGuard policies, risk scoring, policy modes (MONITOR, ENFORCE)
2

Data Governance

Input validation, content moderation, audit logging, PII redaction
3

Technical Documentation

Policy documentation, decision logs, audit events, configuration export
4

Record-Keeping

Persistent audit logs, structured events, correlation IDs, retention policies
5

Transparency

Decision transparency, guardrail notifications, cost disclosure
6

Human Oversight

Policy modes, manual override, escalation workflows, audit review
7

Cybersecurity

Prompt injection defense, input validation, rate limiting, budget controls

Planned Enhancements (v1.2 - v1.3)

1

Automated Risk Assessment (v1.2)

Risk heat maps, automated scoring, integration with enterprise risk management
2

Bias Detection (v1.3)

Detect and mitigate bias in LLM outputs, fairness metrics
3

Compliance Reporting (v1.2)

Automated EU AI Act compliance reports, audit templates
4

Human Review Dashboard (v1.2)

Web-based dashboard for human oversight and approval workflows
5

Adversarial Testing (v1.3)

Automated robustness testing, red teaming capabilities

Future Platform Features (v2.0+)

1

Centralized Compliance Hub

Multi-tenant platform for managing EU AI Act compliance across organizations
2

Conformity Assessment

Tools for third-party conformity assessment and certification
3

Market Surveillance

Integration with EU market surveillance authorities
4

CE Marking Support

Documentation and processes for CE marking of high-risk AI systems

Implementation Timeline

The EU AI Act has a phased implementation timeline:
DateMilestoneTealTiger Readiness
Aug 2, 2024EU AI Act enters into force✅ Core capabilities available
Feb 2, 2025Prohibited AI practices banned✅ Content moderation ready
Aug 2, 2025General-purpose AI model rules apply✅ LLM guardrails ready
Aug 2, 2026High-risk AI system obligations apply✅ v1.1 compliance features
Aug 2, 2027Full enforcement for all AI systems🚧 v1.3 full compliance

Best Practices for EU AI Act Compliance

Determine if your LLM application is high-risk, limited-risk, or minimal-risk based on EU AI Act Annex III.High-risk examples:
  • Employment decisions (hiring, firing, promotions)
  • Credit scoring and loan approvals
  • Law enforcement applications
  • Critical infrastructure management
  • Educational assessments
Limited-risk examples:
  • Customer service chatbots
  • Content generation tools
  • Marketing assistants
Use TealGuard policies to identify, assess, and mitigate risks throughout the AI system lifecycle.
# Start with MONITOR mode to assess risks
engine = TealEngine(mode="MONITOR", guardrails=[...])

# Analyze audit logs for risk patterns
risks = analyze_audit_logs(engine.query_audit_logs())

# Transition to ENFORCE mode after risk assessment
engine.set_mode("ENFORCE")
Document all TealEngine configurations, policies, and risk mitigation strategies.
  • Export TealEngine configuration regularly
  • Document policy changes and rationale
  • Maintain audit logs for at least 2 years
  • Create compliance reports quarterly
Design workflows that allow humans to review and override AI decisions.
# High-risk decisions require human approval
if response.tealtiger_decision.risk_score >= 8:
    approval = await request_human_approval(response)
    if not approval.approved:
        raise PolicyViolationError("Human reviewer rejected")
Inform users that they are interacting with an AI system and provide clear information.
  • Display AI disclosure banners
  • Show guardrail notifications to users
  • Provide cost transparency
  • Explain policy decisions with reason codes
Regularly test your AI system for accuracy, robustness, and security.
  • Use TealTiger’s policy test harness (v1.1+)
  • Conduct adversarial testing
  • Monitor for prompt injection attacks
  • Review audit logs for anomalies

Compliance Checklist

Use this checklist to assess your EU AI Act compliance readiness:

High-Risk AI System Compliance Checklist

  • Risk Management System (Article 9)
    • TealGuard policies defined for all identified risks
    • Risk scores assigned to policy violations
    • Policy modes configured (MONITOR → ENFORCE)
    • Continuous monitoring enabled
  • Data Governance (Article 10)
    • Input validation implemented
    • Content moderation enabled
    • Audit logging configured
    • PII redaction enabled (GDPR compliance)
  • Technical Documentation (Article 11)
    • TealEngine configuration documented
    • Policy documentation maintained
    • Decision logs exported regularly
    • Audit event schema documented
  • Record-Keeping (Article 12)
    • Audit logs retained for ≥2 years
    • Logs stored in durable storage (S3, Azure Blob)
    • Logs encrypted at rest (AES-256)
    • Correlation IDs enabled for traceability
  • Transparency (Article 13)
    • AI disclosure banner displayed to users
    • Guardrail notifications shown to users
    • Cost transparency enabled
    • Reason codes exposed in UI
  • Human Oversight (Article 14)
    • MONITOR mode used for risk assessment
    • Manual override capability implemented
    • Escalation workflows configured
    • Human review dashboard deployed (v1.2+)
  • Accuracy & Cybersecurity (Article 15)
    • Prompt injection detection enabled
    • Input validation configured
    • Rate limiting implemented
    • Budget controls set

Getting Help


Additional Resources


Disclaimer: This documentation provides general guidance on how TealTiger can support EU AI Act compliance. It is not legal advice. Organizations should consult with legal counsel to ensure full compliance with the EU AI Act and other applicable regulations.

Last Updated: March 7, 2026
TealTiger Version: v1.0 - v1.1
EU AI Act Version: Regulation (EU) 2024/1689