Data Ingestion
The Anosys Platform supports multiple ways to ingest your data. Choose the integration that fits your stack — from vendor-specific SDKs to vendor-agnostic protocols and lightweight tracking pixels.
Native OpenAI Integration
The Anosys OpenAI logger wraps the official OpenAI Python SDK. Once initialized, every API call is automatically traced — no manual instrumentation required. For a complete guide, see OpenAI Agents Observability.
Install the packages:
```shell
pip install openai
pip install traceAI-openai-agents
pip install anosys-logger-4-openai
```
Initialize and use:
```python
import os
from openai import OpenAI
from anosys_logger_4_openai import AnosysOpenAILogger

# Set your API keys
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY", "YOUR_OPENAI_KEY")
os.environ["ANOSYS_API_KEY"] = os.getenv("ANOSYS_API_KEY", "YOUR_ANOSYS_KEY")

# Initialize the Anosys logger — auto-instruments all OpenAI calls
AnosysOpenAILogger()

client = OpenAI()
response = client.responses.create(
    model="gpt-5",
    input="What is the population of New York City?",
)
print(response.output_text)
```
Anthropic / Claude Code
For Anthropic agents, no SDK is needed — observability is configured via ~/.claude/settings.json using OpenTelemetry. See the Anthropic Agents guide.
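As a rough sketch only, such a `~/.claude/settings.json` could enable telemetry and point the standard OpenTelemetry environment variables at your Anosys endpoint; the exact keys and the endpoint path below are placeholders, so confirm them against the Anthropic Agents guide:

```json
{
  "env": {
    "CLAUDE_CODE_ENABLE_TELEMETRY": "1",
    "OTEL_METRICS_EXPORTER": "otlp",
    "OTEL_LOGS_EXPORTER": "otlp",
    "OTEL_EXPORTER_OTLP_PROTOCOL": "http/protobuf",
    "OTEL_EXPORTER_OTLP_ENDPOINT": "https://api.anosys.ai/YOUR_UNIQUE_PATH"
  }
}
```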
Custom Logging with Decorators
For custom Python functions, use the @anosys_logger decorator to automatically capture inputs, outputs, and execution time for any function call.
```python
from anosys_logger import anosys_logger

@anosys_logger(source="My function call")
def myfunction(param1=None, param2=None):
    return f"-={param1}-{param2}=-"

print("->", myfunction("custom", "logging"))
```
Every decorated function call is logged as a trace span with the function name, arguments, return value, and duration.
OpenTelemetry Integration
Anosys natively supports OTLP/HTTP and OTLP/gRPC endpoints. If your application is already instrumented with OpenTelemetry, point your exporter at your Anosys OTLP endpoint and data will flow automatically.
The example below shows a complete Python setup for exporting traces, metrics, and logs via OTLP/HTTP.
```python
# OTEL setup for traces + metrics + logs (OTLP/HTTP)
import os
import time
import random
import atexit
import logging
from typing import Dict

# --- OpenTelemetry core ---
from opentelemetry.sdk.resources import Resource
from opentelemetry.semconv.resource import ResourceAttributes

# Traces
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Metrics
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter

# Logs
from opentelemetry._logs import set_logger_provider
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter

# --------------------------
# Config
# --------------------------
BACKEND_BASE = os.getenv(
    "OTLP_BASE",
    "https://api.anosys.ai/YOUR_UNIQUE_PATH",
)
SERVICE_NAME = os.getenv("OTEL_SERVICE_NAME", "my-service")
SERVICE_VERSION = os.getenv("OTEL_SERVICE_VERSION", "1.0.0")
DEPLOY_ENV = os.getenv("OTEL_ENV", "dev")
HEADERS: Dict[str, str] = {}  # e.g. {"Authorization": "Bearer <token>"}

# Convenience helpers for endpoints
TRACES_URL = f"{BACKEND_BASE}/v1/traces"
METRICS_URL = f"{BACKEND_BASE}/v1/metrics"
LOGS_URL = f"{BACKEND_BASE}/v1/logs"

# --------------------------
# Resource (shared by all signals)
# --------------------------
def build_resource() -> Resource:
    return Resource.create({
        ResourceAttributes.SERVICE_NAME: SERVICE_NAME,
        ResourceAttributes.SERVICE_VERSION: SERVICE_VERSION,
        "deployment.environment": DEPLOY_ENV,
    })

# --------------------------
# Traces
# --------------------------
def setup_tracing(resource: Resource) -> None:
    provider = TracerProvider(resource=resource)
    span_exporter = OTLPSpanExporter(endpoint=TRACES_URL, headers=HEADERS)
    provider.add_span_processor(BatchSpanProcessor(span_exporter))
    trace.set_tracer_provider(provider)
    atexit.register(provider.shutdown)

# --------------------------
# Metrics
# --------------------------
def setup_metrics(resource: Resource, export_interval_ms: int = 1000) -> None:
    metric_exporter = OTLPMetricExporter(endpoint=METRICS_URL, headers=HEADERS)
    reader = PeriodicExportingMetricReader(
        metric_exporter, export_interval_millis=export_interval_ms
    )
    provider = MeterProvider(resource=resource, metric_readers=[reader])
    metrics.set_meter_provider(provider)
    atexit.register(provider.shutdown)

# --------------------------
# Logs
# --------------------------
def setup_logging(resource: Resource, level: int = logging.INFO) -> logging.Logger:
    logger_provider = LoggerProvider(resource=resource)
    set_logger_provider(logger_provider)
    log_exporter = OTLPLogExporter(endpoint=LOGS_URL, headers=HEADERS)
    logger_provider.add_log_record_processor(BatchLogRecordProcessor(log_exporter))
    atexit.register(logger_provider.shutdown)

    # Bridge stdlib logging -> OTEL
    app_logger = logging.getLogger(SERVICE_NAME)
    app_logger.setLevel(level)
    app_logger.handlers.clear()
    app_logger.addHandler(LoggingHandler(level=level, logger_provider=logger_provider))
    return app_logger

# --------------------------
# Demo: generate some telemetry
# --------------------------
def demo_telemetry(iterations: int = 5, delay_sec: float = 1.0) -> None:
    tracer = trace.get_tracer(f"{SERVICE_NAME}.demo")
    meter = metrics.get_meter(f"{SERVICE_NAME}.demo")

    requests_counter = meter.create_counter(
        name="demo.requests",
        description="Number of demo requests processed",
        unit="{request}",
    )
    latency_hist = meter.create_histogram(
        name="demo.request_latency_ms",
        description="Simulated request latency in ms",
        unit="ms",
    )
    logger = logging.getLogger(SERVICE_NAME)

    for i in range(iterations):
        with tracer.start_as_current_span("demo-operation") as span:
            span.set_attribute("iteration", i)
            span.set_attribute("work.kind", "demo")

            simulated_latency_ms = random.randint(20, 200)
            time.sleep(delay_sec)

            requests_counter.add(1, {"route": "/demo", "status_code": 200})
            latency_hist.record(simulated_latency_ms, {"route": "/demo"})
            logger.info(
                "Processed demo request",
                extra={"iteration": i, "latency_ms": simulated_latency_ms},
            )
        print(f"sent: trace+metric+log iteration={i} latency_ms={simulated_latency_ms}")

def main() -> None:
    resource = build_resource()
    setup_tracing(resource)
    setup_metrics(resource, export_interval_ms=1000)
    setup_logging(resource, level=logging.DEBUG)
    demo_telemetry()

if __name__ == "__main__":
    main()
```
Replace YOUR_UNIQUE_PATH with the OTLP path from your Anosys Console pixel.
REST API
Send data to Anosys with a simple HTTP GET or POST request. This works from any language or platform — Python, cURL, Postman, or a cron job.
Python example:
```python
import requests

# Your unique Anosys ingestion path
url = "https://api.anosys.ai/ingestion/YOUR_UNIQUE_PATH"

# Query parameters — use s1, s2 for strings and n1, n2 for numbers
params = {
    "s1": "string_value",  # string parameter
    "n1": 123.45,          # numeric parameter
}

try:
    response = requests.get(url, params=params, timeout=10)
    response.raise_for_status()
    print("Success:", response.text)
except requests.exceptions.RequestException as e:
    print("Error:", e)
```
cURL example:
```shell
curl -G "https://api.anosys.ai/ingestion/YOUR_UNIQUE_PATH" \
  --data-urlencode "s1=string_value" \
  --data-urlencode "n1=123.45"
```
Parameter Reference
| Prefix | Type | Example | Description |
|---|---|---|---|
| `s1`, `s2`, … | String | `s1=user_login` | Custom string fields |
| `n1`, `n2`, … | Numeric | `n1=42.5` | Custom numeric fields |
| `b1`, `b2`, … | Boolean | `b1=true` | Custom boolean fields |
You can send as many custom fields as needed. All fields are indexed and queryable in the Anosys dashboards.
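To illustrate the prefix convention, here is a small hypothetical helper (not part of any Anosys SDK) that maps a Python dict of mixed values onto numbered `s`/`n`/`b` parameters before sending:

```python
def to_anosys_params(fields):
    """Map a dict of mixed Python values onto the s1/n1/b1 convention."""
    params = {}
    counters = {"s": 0, "n": 0, "b": 0}
    for value in fields.values():
        if isinstance(value, bool):  # check bool before int/float: bool is an int subclass
            prefix, value = "b", ("true" if value else "false")
        elif isinstance(value, (int, float)):
            prefix = "n"
        else:
            prefix = "s"
        counters[prefix] += 1
        params[f"{prefix}{counters[prefix]}"] = value
    return params

print(to_anosys_params({"event": "login", "score": 42.5, "premium": True}))
# → {'s1': 'login', 'n1': 42.5, 'b1': 'true'}
```

The resulting dict can be passed directly as `params=` to `requests.get` against your ingestion URL.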
JavaScript
Add the Anosys tracking script to your web application to automatically capture page views, user sessions, and custom events.
Basic Web Tracking
```html
<!-- Anosys Web Tracking -->
<script type="text/javascript">
  var anosys_project = "YOUR_PROJECT_ID";
</script>
<script async type="text/javascript"
        src="https://api.anosys.ai/webstats.js"></script>
<noscript>
  <img src="https://api.anosys.ai/ingestion/YOUR_UNIQUE_PATH"
       referrerPolicy="no-referrer-when-downgrade"
       width="0" height="0">
</noscript>
<!-- End of Anosys Code -->
```
Custom Variables
Pass custom string, numeric, and boolean variables using the anosys_cvs, anosys_cvn, and anosys_cvb prefixes:
```html
<!-- Anosys Custom Tracking -->
<script type="text/javascript">
  var anosys_project = "YOUR_PROJECT_ID";
  // Custom variables — replace with your actual values
  var anosys_cvs1 = getUserId();      // custom string variable 1
  var anosys_cvn1 = getTimestamp();   // custom numeric variable 1
  var anosys_cvb1 = isPremiumUser();  // custom boolean variable 1
</script>
<script async type="text/javascript"
        src="https://api.anosys.ai/customstats.js"></script>
<noscript>
  <img src="https://api.anosys.ai/ingestion/YOUR_UNIQUE_PATH"
       referrerPolicy="no-referrer-when-downgrade"
       width="0" height="0">
</noscript>
<!-- End of Anosys Code -->
```
| Variable Pattern | Type | Description |
|---|---|---|
| `anosys_cvs1`, `anosys_cvs2`, … | String | Custom string values (e.g. user ID, page name) |
| `anosys_cvn1`, `anosys_cvn2`, … | Numeric | Custom numeric values (e.g. timestamp, score) |
| `anosys_cvb1`, `anosys_cvb2`, … | Boolean | Custom boolean flags (e.g. is premium, is mobile) |
Image Pixels
Invisible 0×0 image pixels are the lightest way to track traffic from websites, mobile apps, or email campaigns. They work everywhere — including environments where JavaScript is not available.
```html
<img src="https://api.anosys.ai/ingestion/YOUR_UNIQUE_PATH/anosys.gif?s1=value&s2=value"
     width="0" height="0">
```

Note that the first query parameter is introduced with `?`; subsequent parameters are joined with `&`.
Custom variables can be appended as query parameters using the same s1, n1, b1 convention as the REST API.
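For server-side templating, a pixel URL can be assembled with standard URL encoding. The helper below is an illustrative sketch (`pixel_url` is not an Anosys function), and it assumes the `anosys.gif` path shown above:

```python
from urllib.parse import urlencode

def pixel_url(base, **params):
    """Build a tracking-pixel URL from an ingestion base and s/n/b parameters."""
    return f"{base}/anosys.gif?{urlencode(params)}"

url = pixel_url("https://api.anosys.ai/ingestion/YOUR_UNIQUE_PATH",
                s1="campaign_a", n1=7)
print(url)
# → https://api.anosys.ai/ingestion/YOUR_UNIQUE_PATH/anosys.gif?s1=campaign_a&n1=7
```

Because `urlencode` escapes reserved characters, string values may safely contain spaces or punctuation.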
Correlate with backend data
Augmenting your application with tracking pixels lets you correlate front-end engagement metrics (page views, clicks, conversions) with backend observability data (latency, errors, model performance) inside the same Anosys dashboards.
Next Steps