
Langfuse Integration

Integrate TealTiger with Langfuse to combine governance decisions with LLM observability and tracing.

Why integrate TealTiger with Langfuse?

Langfuse provides LLM-specific observability. Adding TealTiger gives you:
  • Governance visibility - See policy decisions alongside LLM traces
  • Cost attribution - Link TealTiger costs to Langfuse traces
  • Debug workflows - Understand why requests were blocked
  • Compliance evidence - Audit trails in Langfuse

Quick start

Install both packages:
npm install tealtiger langfuse
# or
pip install tealtiger langfuse
Configure TealTiger to export to Langfuse:
import { TealTiger } from 'tealtiger';
import { Langfuse } from 'langfuse';

// Initialize Langfuse
const langfuse = new Langfuse({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  secretKey: process.env.LANGFUSE_SECRET_KEY
});

// Configure TealTiger to export to Langfuse
const teal = new TealTiger({
  policies: { /* ... */ },
  telemetry: {
    langfuse: {
      enabled: true,
      client: langfuse,
      exportDecisions: true,
      exportCosts: true
    }
  }
});

// Use TealTiger as usual - decisions are exported to Langfuse automatically.
// `request` is the action payload you want evaluated (see the complete example below).
const decision = await teal.evaluate(request);
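
Once a decision comes back, most applications gate execution on it. A minimal sketch of that guard, assuming the decision object exposes `action` and `reasonCodes` fields (the field names are inferred from the exported metadata shown below and may differ in your TealTiger version):

```typescript
// Hypothetical decision shape, inferred from the exported decision metadata
interface Decision {
  action: 'ALLOW' | 'DENY';
  reasonCodes?: string[];
}

// Throw a descriptive error when governance blocks the request,
// so callers can surface the reason codes in logs or to users.
function assertAllowed(decision: Decision): void {
  if (decision.action !== 'ALLOW') {
    const reasons = (decision.reasonCodes ?? []).join(', ') || 'unspecified';
    throw new Error(`Request blocked by policy: ${reasons}`);
  }
}
```

Call `assertAllowed(decision)` before executing the tool; a denial then fails fast with the policy's reason codes attached.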

What gets exported?

TealTiger exports governance data as Langfuse observations:

Decision observations

{
  "type": "event",
  "name": "tealtiger.decision",
  "metadata": {
    "action": "DENY",
    "reason_codes": ["TOOL_NOT_ALLOWED"],
    "risk_score": 85,
    "policy_id": "security.tool_access.v1",
    "mode": "ENFORCE"
  },
  "level": "WARNING"
}
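
The `level` field uses Langfuse's standard observation levels (`DEBUG`, `DEFAULT`, `WARNING`, `ERROR`). A plausible mapping from decision actions to levels, consistent with the example above - shown here as an illustration, not as TealTiger's documented behavior:

```typescript
// Langfuse observation levels
type ObservationLevel = 'DEBUG' | 'DEFAULT' | 'WARNING' | 'ERROR';

// Map a governance action to an observation level so denials
// stand out in the Langfuse trace timeline.
function levelForAction(action: 'ALLOW' | 'DENY'): ObservationLevel {
  return action === 'DENY' ? 'WARNING' : 'DEFAULT';
}
```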

Cost observations

{
  "type": "event",
  "name": "tealtiger.cost",
  "metadata": {
    "estimated_cost": 0.05,
    "tokens_input": 1000,
    "tokens_output": 500,
    "model": "gpt-4"
  }
}
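
Token-based cost estimates like the one above are typically computed from per-1K-token rates. A minimal sketch using placeholder rates (US$0.02 per 1K input tokens and US$0.06 per 1K output tokens are hypothetical values, not real model pricing):

```typescript
// Hypothetical per-1K-token rates; substitute your model's actual pricing.
interface Rates {
  inputPer1K: number;
  outputPer1K: number;
}

function estimateCost(tokensIn: number, tokensOut: number, rates: Rates): number {
  return (tokensIn / 1000) * rates.inputPer1K + (tokensOut / 1000) * rates.outputPer1K;
}

// 1000 input + 500 output tokens at the placeholder rates:
// 0.02 + 0.03 = 0.05
const cost = estimateCost(1000, 500, { inputPer1K: 0.02, outputPer1K: 0.06 });
```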

Complete example

import { TealTiger } from 'tealtiger';
import { Langfuse } from 'langfuse';
import { ChatOpenAI } from '@langchain/openai';

// Initialize Langfuse
const langfuse = new Langfuse({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  secretKey: process.env.LANGFUSE_SECRET_KEY
});

// Initialize TealTiger with Langfuse export
const teal = new TealTiger({
  policies: {
    tools: {
      web_search: { allowed: true },
      file_delete: { allowed: false }
    },
    budget: {
      maxCostPerRequest: 0.50
    }
  },
  telemetry: {
    langfuse: {
      enabled: true,
      client: langfuse,
      exportDecisions: true,
      exportCosts: true
    }
  }
});

// Create a trace in Langfuse
const trace = langfuse.trace({
  name: "agent-execution",
  userId: "user-123"
});

// Evaluate with TealTiger (exports to Langfuse)
const decision = await teal.evaluate({
  action: 'tool.execute',
  tool: 'web_search',
  context: {
    traceId: trace.id  // Link to Langfuse trace
  }
});

// Execute if allowed, recording the LLM call on the trace
if (decision.action === 'ALLOW') {
  const generation = trace.generation({
    name: "llm-call",
    model: "gpt-4",
    input: "Search for AI news"
  });

  const model = new ChatOpenAI({ modelName: 'gpt-4' });
  const response = await model.invoke('Search for AI news');

  generation.end({
    output: response.content
  });
}

// Flush pending events to Langfuse before the process exits
await langfuse.flushAsync();

Viewing in Langfuse

TealTiger decisions appear in Langfuse traces:
  1. Trace view - See governance decisions in the trace timeline
  2. Metadata - Policy details, reason codes, risk scores
  3. Cost tracking - TealTiger costs alongside LLM costs
  4. Filtering - Filter traces by decision action or policy

Best practices

  1. Link traces - Use traceId to connect TealTiger and Langfuse
  2. Export costs - Track governance overhead
  3. Use metadata - Add context to decisions
  4. Flush regularly - Ensure data is sent to Langfuse
  5. Monitor denials - Create Langfuse dashboards for blocked requests
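
Monitoring denials (practice 5) can start as simple client-side aggregation before you build a full dashboard. A sketch that counts DENY decisions per policy from exported decision metadata, using the field names from the decision-observation example earlier on this page:

```typescript
// Shape of exported decision metadata, matching the decision-observation example
interface DecisionMetadata {
  action: 'ALLOW' | 'DENY';
  policy_id: string;
  reason_codes?: string[];
}

// Count DENY decisions per policy, e.g. to spot which policies
// block the most traffic.
function denialsByPolicy(events: DecisionMetadata[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of events) {
    if (e.action === 'DENY') {
      counts.set(e.policy_id, (counts.get(e.policy_id) ?? 0) + 1);
    }
  }
  return counts;
}
```

The same grouping can be reproduced in a Langfuse dashboard by filtering events named `tealtiger.decision` on `metadata.action`.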

Next steps