
Observability for OpenAI Agents

Gain complete visibility into your OpenAI-powered agentic workflows. The Anosys Platform integrates with the OpenAI Python SDK to automatically capture every API call, tool invocation, and model response — giving you the insights you need to optimize costs, debug failures, and ship faster.


Why Observability Matters for AI Agents

Agentic AI workflows are fundamentally different from traditional software — they are non-deterministic, multi-step, and often expensive to run. Without observability, you are flying blind:

Challenge | What Observability Gives You
Non-deterministic outputs | Trace every reasoning step so you can reproduce and compare runs
Runaway token usage | Real-time metrics on input/output tokens per session
Silent failures | Structured logs with error classification and alerting
Slow iterations | Latency breakdowns across tool calls and model invocations
Cost overruns | Per-session and per-project cost attribution dashboards

Getting Started with OpenAI Agents

The Anosys OpenAI integration uses a lightweight Python SDK that wraps the official OpenAI client. Once initialized, every call to the OpenAI API is automatically traced and exported — no manual instrumentation required.

To get started you need to:

  1. Create a pixel in the Anosys Console of type "Agentic AI".
  2. Install the Anosys logger packages via pip.
  3. Initialize the logger in your code before making OpenAI calls.

Setting Up Observability for OpenAI Agents

Step 1 — Create Your Anosys Pixel

Log in to the Anosys Console and create a new pixel of type Agentic AI. Copy the API key shown on the pixel configuration page — you'll need it in the next step.

Step 2 — Install the SDK

Install the required packages:

pip install openai
pip install traceAI-openai-agents
pip install anosys-logger-4-openai

Step 3 — Initialize and Run

Add the Anosys logger to your application before making any OpenAI API calls:

import os
from openai import OpenAI
from anosys_logger_4_openai import AnosysOpenAILogger

# Set your API keys
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
os.environ["ANOSYS_API_KEY"] = "YOUR_ANOSYS_API_KEY"

# Initialize the Anosys logger — this auto-instruments all OpenAI calls
AnosysOpenAILogger()

# Use the OpenAI client as usual
client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    input="What is the population of New York City?"
)

print(response.output_text)

That's it. Every OpenAI API call made through the client is now automatically traced.

Environment variables

For production deployments, set OPENAI_API_KEY and ANOSYS_API_KEY as environment variables rather than hardcoding them. The SDK reads both from os.environ automatically.
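One way to fail fast when a key is missing is a small startup check. This is a sketch, and `require_env` is our own helper, not part of either SDK:

```python
import os

def require_env(name: str) -> str:
    """Read a required environment variable, failing fast with a clear error."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

if __name__ == "__main__":
    # Validate configuration once at startup, before initializing the logger.
    openai_key = require_env("OPENAI_API_KEY")
    anosys_key = require_env("ANOSYS_API_KEY")
```

Running this check before `AnosysOpenAILogger()` turns a silent misconfiguration into an immediate, descriptive error.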


What the OpenAI SDK Captures

Because the Anosys logger wraps the OpenAI Python SDK directly, it captures additional data points beyond standard OTLP telemetry:

Data Point | Description
Model name & version | The exact model used for each call (e.g. gpt-5, gpt-4.1-mini)
Prompt & completion tokens | Precise input/output token counts per request
Request parameters | Temperature, top-p, max tokens, stop sequences, and other model settings
Tool / function calls | Names, arguments, and return values of any tool calls made by the agent
Streaming events | Per-chunk latency for streaming responses
Response metadata | Finish reason, system fingerprint, and response ID
Error details | HTTP status codes, rate-limit headers, and retry counts

What You'll See in Anosys

Once the logger is active, the Anosys Platform automatically processes your OpenAI data and surfaces:

  • Request traces — End-to-end visibility into every OpenAI API call, including multi-turn conversations and chained agent executions.
  • Token usage metrics — Input and output token counts per request, per model, and over time.
  • Model comparison — Side-by-side performance and cost breakdowns across different models (e.g. gpt-5 vs gpt-4.1-mini).
  • Latency analysis — Identify slow model calls and bottlenecks, including time-to-first-token for streaming responses.
  • Error tracking — Structured error logs with automatic classification, including rate-limit events and API errors.
  • Cost insights — Per-request and per-project cost estimates based on actual token usage and model pricing.
  • Anomaly detection — ML-powered baselines that alert on token usage spikes, latency regressions, and model quality degradation without manual threshold configuration.
  • Root cause analysis — Causal graphs that connect failures to upstream triggers across multi-step agent executions, tool calls, and model invocations.
  • Alerts — Context-aware notifications via Slack, email, PagerDuty, or webhooks when your agents hit errors, cost overruns, or performance regressions.
  • Custom dashboards — Build your own views or start with auto-generated dashboards for model health, agent reliability, and cost attribution.
  • Automated metric generation — Anosys automatically generates key metrics from your traces and logs so you get dashboards in minutes, not days.
  • Custom pipelines — Enrich, route, and transform your agent telemetry with automated remediation workflows.
  • Labeling — Tag and annotate agent sessions, models, or projects with custom labels for segmentation and drill-down analysis.
  • Natural language interface — Ask questions about your agent data in plain English and get answers backed by your telemetry.
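The cost insights above are derived from token counts and model pricing. As a rough illustration of the arithmetic only — the per-million-token prices below are placeholders, not actual OpenAI pricing:

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      price_in_per_m: float, price_out_per_m: float) -> float:
    """Estimate a request's cost from token counts and per-million-token prices."""
    return (input_tokens * price_in_per_m
            + output_tokens * price_out_per_m) / 1_000_000

# Example: 1,200 input tokens and 350 output tokens at placeholder prices
# of $2.50 (input) and $10.00 (output) per million tokens.
cost = estimate_cost_usd(1200, 350, 2.50, 10.00)
print(f"${cost:.4f}")  # → $0.0065
```

Anosys applies the same kind of calculation per request using the actual token counts it captures and current model pricing.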

Configuration Reference

Variable / Setting | Description
OPENAI_API_KEY | Your OpenAI API key
ANOSYS_API_KEY | Your Anosys pixel API key (from the Console)
AnosysOpenAILogger() | Initializes auto-instrumentation; call once at startup

Troubleshooting

I don't see any data in the Anosys Console
  • Verify that ANOSYS_API_KEY is set correctly and matches the key shown in your Anosys pixel.
  • Make sure AnosysOpenAILogger() is called before you create the OpenAI() client.
  • Confirm that your OpenAI calls are completing successfully (check for API errors).
  • Ensure outbound HTTPS traffic to api.anosys.ai is not blocked by a firewall or proxy.

Does this work with async and streaming calls?

Yes. The Anosys logger automatically instruments both synchronous and asynchronous OpenAI clients, including streaming responses via stream=True.
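For reference, a minimal sketch of consuming a streamed Responses API call. The event type name `response.output_text.delta` follows the OpenAI streaming event scheme; the per-chunk handling shown here is what the logger observes for per-chunk latency:

```python
def collect_output_text(events) -> str:
    """Accumulate the text deltas from a stream of Responses API events."""
    parts = []
    for event in events:
        # Streamed responses arrive as typed events; text chunks carry a delta.
        if getattr(event, "type", None) == "response.output_text.delta":
            parts.append(event.delta)
    return "".join(parts)

# With a live client, this would be driven by something like:
#   stream = client.responses.create(model="gpt-5", input="...", stream=True)
#   text = collect_output_text(stream)
```

No extra configuration is needed for the async client: initialize `AnosysOpenAILogger()` once, then use `AsyncOpenAI` as usual.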

Can I use this alongside other OTLP collectors?

Yes. The Anosys logger can coexist with other OpenTelemetry instrumentation. If you need to fan out to multiple backends, place an OpenTelemetry Collector in between and configure multiple exporters there.
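As an illustration, a Collector pipeline fanning out traces to two backends might look like the fragment below. The endpoint addresses and header key are placeholders; consult the Anosys ingest documentation for the real values:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlp/anosys:
    endpoint: api.anosys.ai:4317   # placeholder — use your Anosys ingest endpoint
    headers:
      api-key: ${ANOSYS_API_KEY}
  otlp/other:
    endpoint: other-backend:4317   # placeholder — your second backend

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/anosys, otlp/other]
```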

Which OpenAI SDK versions are supported?

The anosys-logger-4-openai package supports OpenAI Python SDK v1.x and later, including the Agents SDK (traceAI-openai-agents).


Next Steps