OpenTelemetry Integration

Anosys is a full-featured OpenTelemetry-native backend. Any system that speaks OTLP — AI agents, Kubernetes clusters, microservices, infrastructure, IoT — can ship traces, metrics, and logs directly to Anosys and get production-grade observability in minutes.

No proprietary agents. No vendor lock-in. Just standard OpenTelemetry.


Why Anosys for OpenTelemetry

| Capability | What It Means for You |
| --- | --- |
| Full OTLP support | Traces, metrics, and logs over OTLP/HTTP — one endpoint for all three signals |
| Instant time-to-value | Automated metric generation and pre-built dashboards get you started in minutes |
| Full customization | Build your own dashboards, pipelines, detection rules, and KPIs once the defaults outgrow your needs |
| Trace view | Visualize distributed traces end to end — span waterfalls, latency breakdowns, and dependency graphs |
| Error detection | Automatic error classification and structured error tracking across every signal |
| Root cause analysis | Causal paths across agents, models, and infrastructure to pinpoint failures fast |
| Alerts | Context-aware alerting via Slack, email, PagerDuty, and webhooks |
| Labeling | Tag and label your data for segmentation, drill-down, and further analysis |
| Custom pipelines | Enrich, route, and transform signals without glue code |
| Anomaly detection | ML-powered baselines that surface real anomalies, not just threshold breaches |
| Vendor-agnostic | Works with any language, framework, or cloud provider that supports OpenTelemetry |

Integration Methods

There are two ways to get OpenTelemetry data into Anosys:

| Method | Protocol | Best For |
| --- | --- | --- |
| Direct HTTP | OTLP/HTTP | Apps and services that export OTLP directly |
| OpenTelemetry Collector | OTLP/HTTP or OTLP/gRPC | Centralized aggregation, fan-out, protocol translation |

gRPC support

Direct OTLP/gRPC ingestion is coming soon. Today, gRPC is fully supported when you route through an OpenTelemetry Collector — the Collector accepts gRPC from your apps and forwards to Anosys over HTTP.


Option 1 — Direct OTLP/HTTP

Point your application's OTLP exporter directly at your Anosys endpoint. No intermediate infrastructure required.

Set the standard OTEL environment variables:

export OTEL_SERVICE_NAME="my-service"
export OTEL_TRACES_EXPORTER="otlp"
export OTEL_METRICS_EXPORTER="otlp"
export OTEL_LOGS_EXPORTER="otlp"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.anosys.ai/YOUR_UNIQUE_PATH"

Replace YOUR_UNIQUE_PATH with the OTLP path from your Anosys Console pixel.

Python example — traces, metrics, and logs:

import os, atexit, logging
from typing import Dict

# --- OpenTelemetry core ---
from opentelemetry.sdk.resources import Resource
from opentelemetry.semconv.resource import ResourceAttributes

# Traces
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Metrics
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter

# Logs
from opentelemetry._logs import set_logger_provider
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter


# ---------------------
# Config
# ---------------------
BACKEND_BASE = os.getenv(
    "OTLP_BASE",
    "https://api.anosys.ai/YOUR_UNIQUE_PATH"
)
SERVICE_NAME    = os.getenv("OTEL_SERVICE_NAME", "my-service")
SERVICE_VERSION = os.getenv("OTEL_SERVICE_VERSION", "1.0.0")
DEPLOY_ENV      = os.getenv("OTEL_ENV", "dev")
HEADERS: Dict[str, str] = {}   # e.g. {"Authorization": "Bearer <token>"}

TRACES_URL  = f"{BACKEND_BASE}/v1/traces"
METRICS_URL = f"{BACKEND_BASE}/v1/metrics"
LOGS_URL    = f"{BACKEND_BASE}/v1/logs"


# ---------------------
# Resource (shared)
# ---------------------
def build_resource() -> Resource:
    return Resource.create({
        ResourceAttributes.SERVICE_NAME: SERVICE_NAME,
        ResourceAttributes.SERVICE_VERSION: SERVICE_VERSION,
        "deployment.environment": DEPLOY_ENV,
    })


# ---------------------
# Traces
# ---------------------
def setup_tracing(resource: Resource) -> None:
    provider = TracerProvider(resource=resource)
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint=TRACES_URL, headers=HEADERS))
    )
    trace.set_tracer_provider(provider)
    atexit.register(provider.shutdown)


# ---------------------
# Metrics
# ---------------------
def setup_metrics(resource: Resource, export_interval_ms: int = 1000) -> None:
    reader = PeriodicExportingMetricReader(
        OTLPMetricExporter(endpoint=METRICS_URL, headers=HEADERS),
        export_interval_millis=export_interval_ms,
    )
    provider = MeterProvider(resource=resource, metric_readers=[reader])
    metrics.set_meter_provider(provider)
    atexit.register(provider.shutdown)


# ---------------------
# Logs
# ---------------------
def setup_logging(resource: Resource, level: int = logging.INFO) -> logging.Logger:
    logger_provider = LoggerProvider(resource=resource)
    set_logger_provider(logger_provider)
    logger_provider.add_log_record_processor(
        BatchLogRecordProcessor(OTLPLogExporter(endpoint=LOGS_URL, headers=HEADERS))
    )
    atexit.register(logger_provider.shutdown)

    app_logger = logging.getLogger(SERVICE_NAME)
    app_logger.setLevel(level)
    app_logger.handlers.clear()
    app_logger.addHandler(LoggingHandler(level=level, logger_provider=logger_provider))
    return app_logger


# ---------------------
# Initialize everything
# ---------------------
resource = build_resource()
setup_tracing(resource)
setup_metrics(resource, export_interval_ms=1000)
setup_logging(resource, level=logging.DEBUG)

Install the required packages:

pip install opentelemetry-sdk \
    opentelemetry-exporter-otlp-proto-http \
    opentelemetry-api

Option 2 — OpenTelemetry Collector

Deploy an OpenTelemetry Collector as a central aggregation point. This is ideal when you want to:

  • Accept gRPC from your apps (full direct gRPC support is coming soon to Anosys)
  • Fan out to multiple backends
  • Enrich, filter, or batch data before it reaches Anosys
  • Centralize collection from Kubernetes, infrastructure agents, or legacy systems

Collector configuration (otel-collector-config.yaml):

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"

processors:
  batch:
    timeout: 5s
    send_batch_size: 1024

exporters:
  otlphttp:
    endpoint: "https://api.anosys.ai/YOUR_UNIQUE_PATH"
    compression: gzip

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]

Replace YOUR_UNIQUE_PATH with the OTLP path from your Anosys Console pixel.

Run the Collector:

# Docker — publish the OTLP ports and point the Collector at the mounted config
docker run -p 4317:4317 -p 4318:4318 \
  -v ./otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml \
  otel/opentelemetry-collector-contrib:latest

# Binary
otelcol-contrib --config otel-collector-config.yaml

Then point your applications at the Collector (localhost:4317 for gRPC or localhost:4318 for HTTP) and the Collector forwards everything to Anosys.
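For an app on the same host, repointing is just two environment variables (values assume the default Collector ports shown above):

```shell
# Route telemetry through the local Collector instead of directly to Anosys.
# The Collector then forwards everything to Anosys over OTLP/HTTP.
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
```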


Use Case Examples

AI Applications

OpenTelemetry is rapidly becoming the standard for AI observability. Anosys supports OTEL telemetry from:

  • Claude Code — enable OTEL export in ~/.claude/settings.json and traces flow automatically. See the Claude Code guide.
  • OpenAI Agents — use our native SDK or standard OTEL instrumentation. See the OpenAI Agents guide.
  • LangChain, LlamaIndex, CrewAI, AutoGen — these frameworks emit OTEL traces natively. Point their exporter at Anosys. See Custom LLM Integrations.
  • Any LLM provider — Google Gemini, Meta Llama, Mistral, Cohere, AWS Bedrock, Azure OpenAI, and more.

Anosys automatically surfaces token usage, latency, model comparison, and cost metrics from OTEL trace attributes.

Kubernetes Monitoring

Deploy the OpenTelemetry Collector as a DaemonSet or sidecar in your Kubernetes cluster to collect:

  • Cluster metrics — node CPU, memory, disk, and network utilization via the kubeletstats receiver
  • Pod and container metrics — restart counts, resource limits, OOM kills
  • Application traces — distributed traces from your microservices, correlated with infrastructure metrics
  • Cluster events and logs — Kubernetes events, pod logs, and audit logs via the k8s_events and filelog receivers
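As a sketch, the kubeletstats receiver from the list above can be added to the Option 2 Collector config like this (the interval is illustrative, and K8S_NODE_NAME must be injected via the Downward API):

```yaml
receivers:
  kubeletstats:
    collection_interval: 20s
    auth_type: serviceAccount
    endpoint: "${env:K8S_NODE_NAME}:10250"

service:
  pipelines:
    metrics:
      receivers: [otlp, kubeletstats]
      processors: [batch]
      exporters: [otlphttp]
```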

Example Kubernetes DaemonSet snippet:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-collector
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib:latest
          args: ["--config=/etc/otelcol/config.yaml"]
          volumeMounts:
            - name: config
              mountPath: /etc/otelcol
      volumes:
        - name: config
          configMap:
            name: otel-collector-config

Infrastructure Monitoring

Anosys accepts OTEL signals from any infrastructure component:

  • Servers and VMs — CPU, memory, disk, and network via the OTEL Collector's hostmetrics receiver
  • Network devices — routers, switches, and firewalls via SNMP-to-OTEL bridges or the Collector's snmp receiver
  • Databases — query latency, connection pools, and replication lag via auto-instrumentation libraries
  • Message queues — Kafka, RabbitMQ, and Redis metrics via OTEL Collector receivers
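For servers and VMs, a hostmetrics receiver fragment for the Option 2 Collector config might look like the following (the scrapers shown are a subset of those available, and the interval is illustrative):

```yaml
receivers:
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu:
      memory:
      disk:
      network:

service:
  pipelines:
    metrics:
      receivers: [otlp, hostmetrics]
      processors: [batch]
      exporters: [otlphttp]
```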

For a comprehensive infrastructure monitoring guide, see Network & Infrastructure Observability.

Custom Applications

Any application instrumented with OpenTelemetry can send data to Anosys. This includes:

  • Microservices — distributed tracing across service boundaries with automatic context propagation
  • Batch jobs and cron tasks — trace execution time, error rates, and throughput
  • CI/CD pipelines — capture build times, test results, and deployment metrics
  • Mobile and web apps — Real User Monitoring (RUM) via the OTEL JavaScript and mobile SDKs

What Anosys Provides

Once your OpenTelemetry data is flowing, the Anosys Platform automatically delivers:

  • Trace view — End-to-end distributed trace visualization with span waterfalls, latency breakdowns, and service dependency maps.
  • Automated metric generation — Anosys automatically generates key metrics from your traces and logs so you get dashboards in minutes, not days.
  • Custom dashboards — Build your own views with time-series charts, histograms, tables, gauges, and heat maps — or start from the auto-generated defaults.
  • Error detection — Automatic error classification across traces and logs with structured error tracking.
  • Root cause analysis — Causal graphs connecting anomalies to upstream triggers across agents, models, and infrastructure.
  • Alerts — Context-aware alerting via Slack, email, PagerDuty, and webhooks with intelligent noise reduction.
  • Labeling — Tag and annotate your data with custom labels for segmentation, team ownership, and drill-down analysis.
  • Custom pipelines — Enrich, route, and transform signals with automated remediation workflows.
  • Anomaly detection — ML-powered baselines that learn normal behavior and surface real anomalies.
  • Natural language interface — Ask questions about your data in plain English and get answers backed by your telemetry.

Configuration Reference

Standard OpenTelemetry environment variables used to configure exporters:

| Variable | Description |
| --- | --- |
| OTEL_SERVICE_NAME | Logical name for your service (used in traces and dashboards) |
| OTEL_TRACES_EXPORTER | Exporter type for traces (use otlp) |
| OTEL_METRICS_EXPORTER | Exporter type for metrics (use otlp) |
| OTEL_LOGS_EXPORTER | Exporter type for logs (use otlp) |
| OTEL_EXPORTER_OTLP_PROTOCOL | Transport protocol — use http/protobuf for direct, or grpc via Collector |
| OTEL_EXPORTER_OTLP_ENDPOINT | Your Anosys OTLP ingestion endpoint URL |
| OTEL_RESOURCE_ATTRIBUTES | Additional resource attributes (e.g. deployment.environment=prod) |
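For instance, OTEL_RESOURCE_ATTRIBUTES takes comma-separated key=value pairs (the team attribute here is an illustrative example):

```shell
# Attach environment and team metadata to every exported signal.
export OTEL_RESOURCE_ATTRIBUTES="deployment.environment=prod,team=payments"
```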

Troubleshooting

I don't see any data in the Anosys Console
  • Verify that your OTEL_EXPORTER_OTLP_ENDPOINT matches the URL shown in your Anosys pixel configuration.
  • Confirm that your application is generating traces, metrics, or logs (check local OTEL debug output by setting OTEL_LOG_LEVEL=debug).
  • Ensure outbound HTTPS traffic to api.anosys.ai is not blocked by a firewall or proxy.
  • If using a Collector, verify that the Collector is running and its exporter endpoint is correct.

Can I use gRPC directly?

gRPC is fully supported when routing through an OpenTelemetry Collector. Direct gRPC ingestion to the Anosys endpoint is coming soon. In the meantime, deploy a Collector that accepts gRPC from your apps and exports to Anosys over HTTP.

Can I send data from multiple services to the same endpoint?

Yes. Use different OTEL_SERVICE_NAME values for each service. Anosys automatically groups and separates data by service name, making it easy to filter and compare.

Does Anosys support all OTLP signal types?

Yes. Anosys fully supports traces, metrics, and logs via OTLP/HTTP. All three signal types are correlated automatically in the platform.

Can I use Anosys alongside other OTLP backends?

Yes. Deploy an OpenTelemetry Collector and configure multiple exporters to fan out the same data to Anosys and any other OTLP-compatible backend simultaneously.

How do I get started quickly?

The fastest path is to set the environment variables shown above and restart your application. Anosys automatically generates dashboards and metrics from incoming data — you'll have visibility within minutes.


Next Steps