Deploy TealTiger on managed PaaS platforms and skip infrastructure management entirely. This guide covers Heroku, Railway, Render, Fly.io, Google App Engine, Azure App Service, and Vercel.

Why PaaS?

Benefits:
  • Rapid deployment - Deploy in minutes with one command
  • Managed infrastructure - No servers, databases, or networking to manage
  • Built-in CI/CD - Automatic deployments from Git
  • Quick scaling - Scale with a slider or command
  • Developer-friendly - Focus on code, not infrastructure
Use Cases:
  • Rapid prototyping and MVPs
  • Startups and small teams
  • Side projects and demos
  • Development and staging environments
  • Low-traffic production applications

Heroku

One-Click Deploy

Use the Deploy to Heroku button in the template repository; Heroku reads the app.json manifest (shown below) to provision the app and prompt for environment variables.

Manual Deployment

# Install Heroku CLI
curl https://cli-assets.heroku.com/install.sh | sh

# Login to Heroku
heroku login

# Create app
heroku create tealtiger-app

# Set environment variables
heroku config:set OPENAI_API_KEY=sk-xxx
heroku config:set TEALTIGER_PROVIDER=openai
heroku config:set TEALTIGER_MODE=ENFORCE

# Deploy
git push heroku main

# Scale dynos
heroku ps:scale web=2

# View logs
heroku logs --tail

Procfile

# Procfile
web: gunicorn app:app --workers 4 --bind 0.0.0.0:$PORT
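A common rule of thumb for the `--workers` count is 2 × CPU cores + 1. A stdlib-only sketch of that heuristic (the helper name is illustrative, not part of gunicorn):

```python
import os

def gunicorn_workers(cpu_count=None):
    """Suggest a gunicorn worker count via the common 2 * cores + 1 heuristic."""
    cores = cpu_count if cpu_count is not None else (os.cpu_count() or 1)
    return 2 * cores + 1

print(gunicorn_workers(2))  # → 5
```

On small PaaS dynos with one shared CPU this lands at 3 workers; the 4 in the Procfile above is a reasonable fixed choice for a basic dyno.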

app.json (One-Click Deploy)

{
  "name": "TealTiger AI Security",
  "description": "AI agent security with TealTiger SDK",
  "repository": "https://github.com/agentguard-ai/tealtiger-heroku-template",
  "logo": "https://tealtiger.ai/logo.png",
  "keywords": ["ai", "security", "llm", "guardrails"],
  "env": {
    "OPENAI_API_KEY": {
      "description": "OpenAI API key",
      "required": true
    },
    "TEALTIGER_PROVIDER": {
      "description": "LLM provider (openai, anthropic, gemini, etc.)",
      "value": "openai",
      "required": true
    },
    "TEALTIGER_MODE": {
      "description": "Policy mode (ENFORCE, MONITOR, REPORT_ONLY)",
      "value": "ENFORCE",
      "required": true
    },
    "TEALTIGER_BUDGET_LIMIT": {
      "description": "Daily budget limit in USD",
      "value": "100",
      "required": false
    }
  },
  "formation": {
    "web": {
      "quantity": 1,
      "size": "basic"
    }
  },
  "addons": [
    {
      "plan": "papertrail:choklad",
      "as": "LOGGING"
    }
  ],
  "buildpacks": [
    {
      "url": "heroku/python"
    }
  ]
}
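Before publishing a one-click template, it is worth sanity-checking that the manifest is valid JSON and that every required variable carries a description. A stdlib-only sketch (the helper name is ours, not part of any Heroku tooling):

```python
import json

def check_app_json(text):
    """Return the names of required env vars in a Heroku-style app.json manifest."""
    manifest = json.loads(text)  # raises ValueError on invalid JSON
    required = []
    for name, spec in manifest.get("env", {}).items():
        if spec.get("required"):
            if not spec.get("description"):
                raise ValueError(f"{name} is required but has no description")
            required.append(name)
    return required

sample = '{"env": {"OPENAI_API_KEY": {"description": "key", "required": true}}}'
print(check_app_json(sample))  # → ['OPENAI_API_KEY']
```

Running this in CI catches a broken manifest before a user ever clicks the deploy button.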

Python Example

# app.py
from flask import Flask, request, jsonify
from tealtiger import TealOpenAI, PolicyMode
import os

app = Flask(__name__)

# Initialize TealTiger client from environment variables
client = TealOpenAI(
    api_key=os.environ['OPENAI_API_KEY'],
    mode=PolicyMode[os.environ.get('TEALTIGER_MODE', 'ENFORCE')],
    budget_limit=float(os.environ.get('TEALTIGER_BUDGET_LIMIT', '100'))
)

@app.route('/chat', methods=['POST'])
def chat():
    data = request.get_json(silent=True) or {}
    message = data.get('message')
    if not message:
        return jsonify({'error': 'Missing "message" field'}), 400
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": message}]
    )
    return jsonify({
        'response': response.choices[0].message.content
    })

if __name__ == '__main__':
    port = int(os.environ.get('PORT', 5000))
    app.run(host='0.0.0.0', port=port)
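Note that a lookup like `PolicyMode[os.environ.get('TEALTIGER_MODE', 'ENFORCE')]` raises a `KeyError` if the variable holds an unrecognized value. A defensive parsing pattern, sketched with a stand-in enum (`PolicyMode` here is our stub, not the SDK's class):

```python
from enum import Enum

class PolicyMode(Enum):  # stand-in for the SDK enum, for illustration only
    ENFORCE = "enforce"
    MONITOR = "monitor"
    REPORT_ONLY = "report_only"

def mode_from_env(env, default=PolicyMode.ENFORCE):
    """Parse TEALTIGER_MODE from an env mapping, falling back on unknown values."""
    name = env.get("TEALTIGER_MODE", default.name)
    try:
        return PolicyMode[name]
    except KeyError:
        return default

print(mode_from_env({"TEALTIGER_MODE": "MONITOR"}).name)  # → MONITOR
print(mode_from_env({"TEALTIGER_MODE": "bogus"}).name)    # → ENFORCE
```

Failing back to a safe default (or failing fast at startup) beats crashing on the first request because a dashboard typo set the mode to an invalid string.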

Railway

Deploy with Railway CLI

# Install Railway CLI
npm install -g @railway/cli

# Login
railway login

# Initialize project
railway init

# Set environment variables
railway variables set OPENAI_API_KEY=sk-xxx
railway variables set TEALTIGER_PROVIDER=openai
railway variables set TEALTIGER_MODE=ENFORCE

# Deploy
railway up

# View logs
railway logs

railway.json

{
  "$schema": "https://railway.app/railway.schema.json",
  "build": {
    "builder": "NIXPACKS"
  },
  "deploy": {
    "startCommand": "gunicorn app:app --workers 4 --bind 0.0.0.0:$PORT",
    "restartPolicyType": "ON_FAILURE",
    "restartPolicyMaxRetries": 10
  }
}

One-Click Deploy

Use the Deploy on Railway button in the template repository to provision the service with the configuration above.

Render

Deploy with render.yaml

# render.yaml
services:
  - type: web
    name: tealtiger-app
    env: python
    buildCommand: pip install -r requirements.txt
    startCommand: gunicorn app:app --workers 4 --bind 0.0.0.0:$PORT
    envVars:
      - key: OPENAI_API_KEY
        sync: false
      - key: TEALTIGER_PROVIDER
        value: openai
      - key: TEALTIGER_MODE
        value: ENFORCE
      - key: TEALTIGER_BUDGET_LIMIT
        value: "100"
    healthCheckPath: /health
    autoDeploy: true

Deploy from Dashboard

  1. Go to render.com
  2. Click “New +” → “Web Service”
  3. Connect your Git repository
  4. Configure:
    • Name: tealtiger-app
    • Environment: Python 3
    • Build Command: pip install -r requirements.txt
    • Start Command: gunicorn app:app --workers 4 --bind 0.0.0.0:$PORT
  5. Add environment variables
  6. Click “Create Web Service”

One-Click Deploy

Use the Deploy to Render button in the template repository; Render provisions the service from the render.yaml blueprint above.

Fly.io

Deploy with Fly CLI

# Install Fly CLI
curl -L https://fly.io/install.sh | sh

# Login
fly auth login

# Launch app
fly launch

# Set secrets
fly secrets set OPENAI_API_KEY=sk-xxx
fly secrets set TEALTIGER_PROVIDER=openai
fly secrets set TEALTIGER_MODE=ENFORCE

# Deploy
fly deploy

# Scale
fly scale count 2

# View logs
fly logs

fly.toml

# fly.toml
app = "tealtiger-app"
primary_region = "iad"

[build]
  builder = "paketobuildpacks/builder:base"

[env]
  PORT = "8080"
  TEALTIGER_PROVIDER = "openai"
  TEALTIGER_MODE = "ENFORCE"

[http_service]
  internal_port = 8080
  force_https = true
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 1
  processes = ["app"]

  [http_service.concurrency]
    type = "connections"
    hard_limit = 25
    soft_limit = 20

  [[http_service.checks]]
    interval = "10s"
    timeout = "2s"
    grace_period = "5s"
    method = "GET"
    path = "/health"

[[vm]]
  cpu_kind = "shared"
  cpus = 1
  memory_mb = 256

The [http_service] section already serves ports 80 and 443 with HTTPS redirection, so the legacy [[services]] block is unnecessary; defining both against the same internal port is redundant and a common source of confusion.

Google App Engine

Deploy with gcloud

# Install gcloud CLI
curl https://sdk.cloud.google.com | bash

# Login
gcloud auth login

# Set project
gcloud config set project YOUR_PROJECT_ID

# Deploy
gcloud app deploy

# View logs
gcloud app logs tail -s default

app.yaml

# app.yaml
runtime: python311
entrypoint: gunicorn -b :$PORT app:app

env_variables:
  TEALTIGER_PROVIDER: "openai"
  TEALTIGER_MODE: "ENFORCE"
  TEALTIGER_BUDGET_LIMIT: "100"

automatic_scaling:
  target_cpu_utilization: 0.65
  min_instances: 1
  max_instances: 10
  min_pending_latency: 30ms
  max_pending_latency: automatic
  max_concurrent_requests: 50

handlers:
- url: /.*
  script: auto
  secure: always
  redirect_http_response_code: 301

Azure App Service

Deploy with Azure CLI

# Install Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Login
az login

# Create resource group
az group create --name tealtiger-rg --location eastus

# Create App Service plan
az appservice plan create \
  --name tealtiger-plan \
  --resource-group tealtiger-rg \
  --sku B1 \
  --is-linux

# Create web app
az webapp create \
  --resource-group tealtiger-rg \
  --plan tealtiger-plan \
  --name tealtiger-app \
  --runtime "PYTHON:3.11"

# Set environment variables
az webapp config appsettings set \
  --resource-group tealtiger-rg \
  --name tealtiger-app \
  --settings \
    OPENAI_API_KEY=sk-xxx \
    TEALTIGER_PROVIDER=openai \
    TEALTIGER_MODE=ENFORCE

# Deploy from Git
az webapp deployment source config \
  --name tealtiger-app \
  --resource-group tealtiger-rg \
  --repo-url https://github.com/your-org/tealtiger-app \
  --branch main \
  --manual-integration

Vercel (Edge Functions)

Deploy with Vercel CLI

# Install Vercel CLI
npm install -g vercel

# Login
vercel login

# Deploy
vercel

# Set environment variables
vercel env add OPENAI_API_KEY
vercel env add TEALTIGER_PROVIDER
vercel env add TEALTIGER_MODE

# Deploy to production
vercel --prod

vercel.json

{
  "version": 2,
  "builds": [
    {
      "src": "api/**/*.py",
      "use": "@vercel/python"
    }
  ],
  "routes": [
    {
      "src": "/api/(.*)",
      "dest": "/api/$1"
    }
  ],
  "env": {
    "TEALTIGER_PROVIDER": "openai",
    "TEALTIGER_MODE": "ENFORCE"
  }
}

Comparison Table

| Platform          | Deployment Time | Free Tier            | Auto-Scaling | Custom Domains | Best For           |
|-------------------|-----------------|----------------------|--------------|----------------|--------------------|
| Heroku            | 2-3 min         | None (retired 2022)  | Yes          | Yes            | Rapid prototyping  |
| Railway           | 1-2 min         | $5 credit            | Yes          | Yes            | Modern apps        |
| Render            | 2-3 min         | 750 hrs/month        | Yes          | Yes            | Production apps    |
| Fly.io            | 1-2 min         | 3 VMs free           | Yes          | Yes            | Global edge        |
| App Engine        | 3-5 min         | $300 credit          | Yes          | Yes            | Google Cloud users |
| Azure App Service | 3-5 min         | 60 min/day           | Yes          | Yes            | Azure users        |
| Vercel            | 1 min           | 100 GB-hrs           | Yes          | Yes            | Edge functions     |

Cost Optimization

Tips for PaaS Deployments

  1. Use auto-scaling to scale down during low traffic
  2. Enable sleep mode for development environments
  3. Use free tiers for staging and testing
  4. Monitor usage with platform dashboards
  5. Set budget alerts to avoid surprises
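Tip 5 (budget alerts) can also be enforced inside the application itself, independent of any platform API. A hypothetical threshold check (function name and ratios are ours):

```python
def budget_status(spent, limit, warn_ratio=0.8):
    """Classify daily spend: 'ok', 'warning' past warn_ratio, 'exceeded' past the limit."""
    if spent >= limit:
        return "exceeded"
    if spent >= warn_ratio * limit:
        return "warning"
    return "ok"

print(budget_status(50, 100))   # → ok
print(budget_status(85, 100))   # → warning
print(budget_status(120, 100))  # → exceeded
```

A "warning" result is a good trigger for a Slack or email notification before the hard limit starts rejecting requests.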

TealTiger Cost Tracking

from tealtiger import TealOpenAI, CostTracker
import os

client = TealOpenAI(
    api_key=os.environ['OPENAI_API_KEY'],
    budget_limit=100,  # $100/day
    budget_window='daily'
)

# Get cost report
tracker = CostTracker(client)
report = tracker.get_daily_report()
print(f"Today's cost: ${report.total_cost:.2f}")
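Conceptually, a daily tracker just sums per-request costs and resets at day boundaries. A purely illustrative sketch (not the SDK's actual implementation):

```python
from datetime import date

class DailyCostTracker:
    """Accumulate per-request costs, resetting automatically at day boundaries."""
    def __init__(self):
        self._day = None
        self._total = 0.0

    def record(self, cost, today=None):
        """Add a request's cost; return the running total for the current day."""
        today = today or date.today()
        if today != self._day:  # first record of a new day: reset the total
            self._day, self._total = today, 0.0
        self._total += cost
        return self._total

tracker = DailyCostTracker()
tracker.record(1.25, today=date(2024, 1, 1))
print(tracker.record(0.75, today=date(2024, 1, 1)))  # → 2.0
print(tracker.record(0.50, today=date(2024, 1, 2)))  # → 0.5
```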

Monitoring

Health Check Endpoint

@app.route('/health')
def health():
    return jsonify({
        'status': 'healthy',
        'version': '1.1.0',
        'provider': os.environ.get('TEALTIGER_PROVIDER')
    })
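Platforms that use a healthCheckPath typically poll it several times before marking a deploy unhealthy. The same poll-with-retry logic can be sketched locally with an injected probe function (all names here are illustrative):

```python
import time

def wait_healthy(probe, attempts=5, delay=0.0):
    """Call probe() up to `attempts` times; return True on the first success."""
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False

# Simulated probe that fails twice, then succeeds
results = iter([False, False, True])
print(wait_healthy(lambda: next(results)))  # → True
```

In practice the probe would be an HTTP GET against /health; injecting it as a callable keeps the retry logic testable without a running server.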

Platform-Specific Monitoring

  • Heroku: Use Papertrail or Logentries add-ons
  • Railway: Built-in metrics dashboard
  • Render: Built-in metrics and logs
  • Fly.io: Built-in metrics and Prometheus integration
  • App Engine: Google Cloud Monitoring
  • Azure: Application Insights

Support