# Axion Tracing System
Simple observability for AI applications with automatic context management. Supports multiple backends including Logfire (OpenTelemetry), Langfuse, and Opik (Comet) for LLM-specific observability.
## Why Use Axion Tracing?
- Zero setup - Configure once, trace everywhere
- Automatic context - No manual tracer passing between functions
- AI-optimized - Built-in support for LLM, evaluation, and knowledge operations
- Production ready - NOOP mode for zero overhead when tracing is disabled
- Extensible - Registry pattern makes it easy to add custom tracer providers
- Multiple backends - Choose between Logfire, Langfuse, or create your own
## Quick Start

```python
from axion.tracing import init_tracer, trace


class MyService:
    def __init__(self):
        self.tracer = init_tracer('llm')

    @trace(name='internal_span', capture_result=True)
    async def process(self, data: dict):
        return 100

    @trace(name='span', capture_result=True)
    async def run(self):
        # Set a manual span for tracing
        async with self.tracer.async_span("manual_span") as span:
            # Set an attribute on the span
            span.set_attribute("output_status", "success")
            return await self.process({"key": "value"})


await MyService().run()
```
## Tracing Providers

Axion supports four built-in tracing providers, all managed through a unified registry system:

| Provider | Description | Use Case |
|---|---|---|
| `noop` | No-operation tracer with zero overhead | Testing, production without tracing |
| `logfire` | OpenTelemetry-based tracing via Logfire | General observability, performance monitoring |
| `langfuse` | LLM-specific observability platform | LLM cost tracking, prompt management, evaluations |
| `opik` | Comet's open-source LLM observability | LLM tracing, cost tracking, evaluations |
### Provider Comparison

```mermaid
graph TD
    A[TracerRegistry] --> B[NoOpTracer]
    A --> C[LogfireTracer]
    A --> D[LangfuseTracer]
    A --> E[OpikTracer]
    B --> F[Zero Overhead]
    C --> G[OpenTelemetry Backend]
    C --> H[Logfire Cloud/Local UI]
    D --> I[LLM Observability]
    D --> J[Cost Tracking]
    D --> K[Prompt Management]
    E --> L[Open Source LLM Tracing]
    E --> M[Comet Integration]
```
## Configuration

Tracing auto-configures from environment variables on first use. Just use `Tracer()` and it works.
### Environment Variables

Set `TRACING_MODE` to select a provider, or let it auto-detect from available credentials:

| Provider | Description | Auto-Detection |
|---|---|---|
| `noop` | Disables all tracing (zero overhead) | Default if no credentials found |
| `logfire` | OpenTelemetry via Logfire | `LOGFIRE_TOKEN` present |
| `otel` | Custom OpenTelemetry endpoint | `OTEL_EXPORTER_OTLP_ENDPOINT` present |
| `langfuse` | LLM observability via Langfuse | `LANGFUSE_SECRET_KEY` present |
| `opik` | LLM observability via Opik (Comet) | `OPIK_API_KEY` present |
**Auto-Detection Priority:** If `TRACING_MODE` is not set, the system checks for credentials in this order:

1. `LANGFUSE_SECRET_KEY` → uses `langfuse`
2. `OPIK_API_KEY` → uses `opik`
3. `LOGFIRE_TOKEN` → uses `logfire`
4. `OTEL_EXPORTER_OTLP_ENDPOINT` → uses `otel`
5. Default → uses `noop`
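The resolution order above can be sketched as a small helper. This is illustrative only: `resolve_provider` is a hypothetical name, not part of axion's API.

```python
# Illustrative sketch of the auto-detection order described above;
# resolve_provider is a hypothetical helper, not part of axion's API.
def resolve_provider(env: dict) -> str:
    explicit = env.get('TRACING_MODE')
    if explicit:
        return explicit          # Explicit mode always wins
    # Credential checks, in priority order
    if env.get('LANGFUSE_SECRET_KEY'):
        return 'langfuse'
    if env.get('OPIK_API_KEY'):
        return 'opik'
    if env.get('LOGFIRE_TOKEN'):
        return 'logfire'
    if env.get('OTEL_EXPORTER_OTLP_ENDPOINT'):
        return 'otel'
    return 'noop'                # Default when no credentials are found

print(resolve_provider({}))                                      # noop
print(resolve_provider({'OPIK_API_KEY': 'k', 'LOGFIRE_TOKEN': 't'}))  # opik
```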
### Logfire Configuration

```bash
# For Logfire cloud (recommended)
TRACING_MODE=logfire
LOGFIRE_TOKEN=your-logfire-token

# For a custom OpenTelemetry endpoint
TRACING_MODE=otel
OTEL_EXPORTER_OTLP_ENDPOINT=https://your-otel-endpoint
```

### Langfuse Configuration

```bash
TRACING_MODE=langfuse
LANGFUSE_PUBLIC_KEY=pk-lf-your-public-key
LANGFUSE_SECRET_KEY=sk-lf-your-secret-key
LANGFUSE_BASE_URL=https://cloud.langfuse.com  # EU region (default)
# or https://us.cloud.langfuse.com for US region
```

### Opik Configuration

```bash
TRACING_MODE=opik
OPIK_API_KEY=your-opik-api-key
OPIK_WORKSPACE=your-workspace-name  # Optional
OPIK_PROJECT_NAME=axion  # Optional, defaults to 'axion'
OPIK_URL_OVERRIDE=https://www.comet.com/opik/api  # Default (cloud)
# or http://localhost:5173/api for self-hosted
```
### Programmatic Configuration

Tracing auto-configures on first use of `Tracer()`. Only call `configure_tracing()` if you need to override auto-detection.

```python
from axion.tracing import configure_tracing, Tracer

# Zero-config - auto-detects from environment variables
tracer = Tracer('llm')

# Or explicitly configure a provider
configure_tracing(provider='langfuse')
tracer = Tracer('llm')

# List available providers
from axion.tracing import list_providers
print(list_providers())  # ['noop', 'logfire', 'otel', 'langfuse', 'opik']

# Reconfigure (e.g., for testing)
from axion.tracing import clear_tracing_config, is_tracing_configured
if is_tracing_configured():
    clear_tracing_config()
configure_tracing(provider='noop')
```
## TracerRegistry Architecture

The tracing system uses a decorator-based registry pattern, similar to `LLMRegistry`. This makes it easy to:

- Switch between providers at runtime
- Add custom tracer implementations
- Extend functionality without modifying core code
### How It Works

```python
from axion.tracing import TracerRegistry, BaseTracer

# List all registered providers
providers = TracerRegistry.list_providers()
print(providers)  # ['noop', 'logfire', 'langfuse', 'opik']

# Get a specific tracer class
TracerClass = TracerRegistry.get('langfuse')
tracer = TracerClass.create(metadata_type='llm')
```
### Creating Custom Tracers

You can create custom tracer implementations by subclassing `BaseTracer` and registering them with the `@TracerRegistry.register()` decorator:

```python
from contextlib import asynccontextmanager, contextmanager

from axion.tracing import BaseTracer, TracerRegistry, configure_tracing


@TracerRegistry.register('my_custom_tracer')
class MyCustomTracer(BaseTracer):
    """Custom tracer implementation."""

    def __init__(self, metadata_type: str = 'default', **kwargs):
        self.metadata_type = metadata_type
        # Initialize your tracing backend here

    @classmethod
    def create(cls, metadata_type: str = 'default', **kwargs):
        return cls(metadata_type=metadata_type, **kwargs)

    @contextmanager
    def span(self, operation_name: str, **attributes):
        # Implement span creation
        print(f"Starting span: {operation_name}")
        try:
            yield self
        finally:
            print(f"Ending span: {operation_name}")

    @asynccontextmanager
    async def async_span(self, operation_name: str, **attributes):
        # Implement async span creation
        print(f"Starting async span: {operation_name}")
        try:
            yield self
        finally:
            print(f"Ending async span: {operation_name}")

    def start(self, **attributes):
        pass

    def complete(self, output_data=None, **attributes):
        pass

    def fail(self, error: str, **attributes):
        pass

    def add_trace(self, event_type: str, message: str, metadata=None):
        pass


# Now you can use it
configure_tracing(provider='my_custom_tracer')
```
## Usage Patterns

### Decorator Tracing (Recommended)

The `tracer` attribute is required for the `@trace` decorator: the decorator automatically looks for a `tracer` attribute on the class instance (`self`) to create and manage spans.

```python
import asyncio

from axion.tracing import init_tracer, trace


class DecoratorService:
    def __init__(self):
        self.tracer = init_tracer('base')

    @trace(name="process_data", capture_args=True, capture_result=True)
    async def process(self, data: dict):
        await asyncio.sleep(0.1)
        return {"status": "processed", "items": len(data)}

    @trace  # Simple usage without arguments
    async def run(self):
        result = await self.process({"id": 123, "items": ["a", "b"]})
        return result


# Usage
service = DecoratorService()
await service.run()
```
### Context-Aware Function Tracing

Use `init_tracer` at the top level of a service or class to start a new trace context. Use `get_current_tracer` in downstream functions or services that you expect to be called within an existing trace; they can then add child spans without the tracer being passed in manually.

```python
from axion.tracing import get_current_tracer, init_tracer


class ServiceA:
    def __init__(self):
        self.tracer = init_tracer('llm')

    async def process(self):
        async with self.tracer.async_span("service_a_process"):
            # Context automatically propagates to ServiceB
            service_b = ServiceB()
            await service_b.process()


class ServiceB:
    async def process(self):
        # Get the tracer from context - no manual passing needed!
        tracer = get_current_tracer()
        async with tracer.async_span("service_b_process"):
            return "processed"


# Usage
service = ServiceA()
await service.process()
```
### Request-Scoped Tracers

By default, `init_tracer` reuses the tracer from the current context (or the global tracer). Use `force_new=True` to bypass this and always create a fresh tracer. This is useful when you need request-scoped configuration such as tags, environment, or a specific trace ID that should not leak to other callers sharing the same context.

Any extra keyword arguments are forwarded directly to `TracerClass.create()`:

```python
from axion.tracing import init_tracer

# Always creates a fresh tracer, even inside an existing trace context
tracer = init_tracer(
    'llm',
    force_new=True,
    tags=['request-abc123'],
    environment='production',
)

# Full actor → thread → turn lineage
tracer = init_tracer(
    'llm',
    force_new=True,
    user_id='user-42',
    session_id='chat-thread-123',
    tags=['copilot.stream'],
    environment='production',
)

# Without force_new (default) - reuses the context/global tracer if one exists
tracer = init_tracer('llm')
```

All traces created with the same `session_id` are grouped under a single Langfuse Session; `user_id` further attributes traces to a specific actor, enabling user-level replay, scoring, and incident triage.
| Argument | Default | Behaviour |
|---|---|---|
| `force_new=False` | ✓ | Reuse context tracer → global tracer → create new |
| `force_new=True` | | Always create a new tracer, skipping context/global lookup |
| `tracer=<instance>` | | Return that instance unchanged; `force_new` is ignored |
| `**create_kwargs` | | Forwarded to `TracerClass.create()` (e.g. `tags`, `environment`, `session_id`, `user_id`) |
## Langfuse-Specific Features

When using Langfuse, you get additional LLM-specific features:

```python
import os

os.environ['TRACING_MODE'] = 'langfuse'
os.environ['LANGFUSE_PUBLIC_KEY'] = 'pk-lf-...'
os.environ['LANGFUSE_SECRET_KEY'] = 'sk-lf-...'

from axion.tracing import configure_tracing, Tracer

configure_tracing()
tracer = Tracer('llm')

# Create spans that appear in Langfuse
with tracer.span('my-operation') as span:
    # Your code here
    span.set_attribute('custom_key', 'custom_value')

# Log LLM calls with token usage
tracer.log_llm_call(
    name='chat_completion',
    model='gpt-4',
    prompt='Hello, how are you?',
    response='I am doing well, thank you!',
    usage={
        'prompt_tokens': 10,
        'completion_tokens': 8,
        'total_tokens': 18,
    },
)

# Log evaluations as scores
tracer.log_evaluation(
    name='relevance_score',
    score=0.95,
    comment='Highly relevant response',
)

# Important: Flush traces before exiting
tracer.flush()
```
## Opik-Specific Features

Opik (by Comet) provides open-source LLM observability with similar features:

```python
import os

os.environ['TRACING_MODE'] = 'opik'
os.environ['OPIK_API_KEY'] = 'your-api-key'
os.environ['OPIK_WORKSPACE'] = 'your-workspace'

from axion.tracing import configure_tracing, Tracer

configure_tracing()
tracer = Tracer('llm')

# Create spans that appear in the Opik dashboard
with tracer.span('my-operation', model='gpt-4') as span:
    # Your code here
    span.set_input({'query': 'Hello, how are you?'})
    span.set_output({'response': 'I am doing well!'})
    span.set_usage(prompt_tokens=10, completion_tokens=8)

# Log LLM calls with token usage
tracer.log_llm_call(
    name='chat_completion',
    model='gpt-4',
    provider='openai',
    prompt='Hello, how are you?',
    response='I am doing well, thank you!',
    prompt_tokens=10,
    completion_tokens=8,
)

# Important: Flush traces before exiting
tracer.flush()
```

Key Opik features:

- Open-source and self-hostable
- LLM-specific span types (`'llm'`, `'tool'`, `'general'`)
- Token usage tracking via the `usage` attribute
- Integration with the Comet ML platform
## Input/Output Capture

All four provider spans implement the `BaseSpan` protocol, which guarantees `set_attribute`, `set_input`, and `set_output` on every span regardless of provider. You can import `BaseSpan` for type annotations:

```python
from axion.tracing import BaseSpan

def process_span(span: BaseSpan) -> None:
    span.set_input({'query': 'hello'})
    span.set_output({'answer': 'world'})
```

Spans support `set_input()` and `set_output()` for capturing data that appears in the Langfuse/Opik UI's Input and Output fields:

```python
async with tracer.async_span('my-operation') as span:
    # Capture the input data
    span.set_input({
        'query': 'How do I reset my password?',
        'context': ['doc1', 'doc2'],
    })
    result = await process_query(query)
    # Capture the output data
    span.set_output({
        'response': result.text,
        'score': result.confidence,
    })
```

The `@trace` decorator automatically captures input/output when enabled:

```python
@trace(name="process_data", capture_args=True, capture_result=True)
async def process(self, data: dict):
    # Input (args/kwargs) automatically captured via span.set_input()
    result = await do_work(data)
    # Result automatically captured via span.set_output()
    return result
```

Supported data types for serialization:

- Pydantic models (serialized via `model_dump()`)
- Dictionaries, lists, and primitive types
- Other objects, which are converted to their string representation
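As a rough sketch of those serialization rules (a hypothetical helper; the serializer axion actually uses may differ in detail):

```python
# Hypothetical sketch of the serialization rules listed above;
# axion's real serializer may differ.
from typing import Any

def serialize_for_span(value: Any) -> Any:
    if hasattr(value, 'model_dump'):       # Pydantic v2 models
        return value.model_dump()
    if isinstance(value, dict):
        return {k: serialize_for_span(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [serialize_for_span(v) for v in value]
    if value is None or isinstance(value, (str, int, float, bool)):
        return value                       # Primitives pass through
    return str(value)                      # Fallback: string representation

class Result:                              # No model_dump -> stringified
    def __str__(self):
        return 'Result(ok)'

print(serialize_for_span({'score': 0.95, 'obj': Result()}))
# {'score': 0.95, 'obj': 'Result(ok)'}
```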
## Custom Span Names

Axion components use meaningful span names for better observability.

**Evaluation Runner:** Uses `evaluation_name` as the trace name in Langfuse/Opik:

```python
results = evaluation_runner(
    evaluation_inputs=dataset,
    scoring_metrics=[AnswerRelevancy()],
    evaluation_name="My RAG Evaluation",  # This becomes the trace name
)
# Appears as "My RAG Evaluation" in Langfuse/Opik instead of "evaluation_runner"
```

**Trace Granularity:** Control how evaluation traces are organized:

```python
# Single trace (default) - all evaluations under one parent
results = evaluation_runner(
    evaluation_inputs=dataset,
    scoring_metrics=[AnswerRelevancy()],
    evaluation_name="My Evaluation",
    trace_granularity='single_trace',  # or 'single'
)

# Separate traces - each metric execution gets its own trace
results = evaluation_runner(
    evaluation_inputs=dataset,
    scoring_metrics=[AnswerRelevancy()],
    evaluation_name="My Evaluation",
    trace_granularity='separate',
)
```

**Evaluation Trace Hierarchy** (lean four-level structure):

```
My RAG Evaluation               # evaluation_name (root)
├─ AnswerRelevancy.execute      # Metric logic
│  └─ litellm_structured        # LLM formatting/parsing
│     └─ llm_call               # LLM API call (cost/tokens)
└─ Faithfulness.execute
   └─ litellm_structured
      └─ llm_call
```

**LLM Handlers:** Use `metadata.name` or the class name as the span name:

```python
from axion import LLMHandler

class SentimentAnalysisHandler(LLMHandler):
    # ... handler config ...
    pass

handler = SentimentAnalysisHandler()
# Appears as "SentimentAnalysisHandler" in Langfuse instead of "llm_handler"

# Or set a custom name via metadata
handler.metadata.name = "Sentiment Analysis"
# Now appears as "Sentiment Analysis" in Langfuse
```
## Complete Example

```python
import asyncio

from axion._core.metadata.schema import ToolMetadata
from axion.dataset import DatasetItem
from axion.metrics import AnswerRelevancy
from axion.tracing import init_tracer, trace


class MetricService:
    def __init__(self):
        # Attach context to spans via tool metadata
        tool_metadata = ToolMetadata(
            name="MetricService",
            description='My Service',
            version='1.0.1',
        )
        self.tracer = init_tracer(
            metadata_type='llm',
            tool_metadata=tool_metadata,
        )

    @trace(capture_result=True)
    async def run_metric(self):
        # AnswerRelevancy has its own tracer and is auto-captured as a child span
        metric = AnswerRelevancy()
        data_item = DatasetItem(
            query="How do I reset my password?",
            actual_output="To reset your password, click 'Forgot Password' on the login page and follow the email instructions.",
            expected_output="Navigate to login, click 'Forgot Password', and follow the reset link sent to your email.",
        )
        return await metric.execute(data_item)

    @trace(name="doing_work")
    async def do_some_work(self):
        await asyncio.sleep(0.5)
        return "Work done!"

    @trace(name='run_main_task')
    async def run_main_task(self):
        # You can also set manual spans
        async with self.tracer.async_span("metric_evaluation") as span:
            _ = await self.do_some_work()
            result = await self.run_metric()
            span.set_attribute("operation_status", "success")
            return result


# Usage
service = MetricService()
result = await service.run_main_task()
```
## Metadata Types

Choose the right type for automatic specialized handling:

- `'base'` - General operations
- `'llm'` - Language model calls (captures tokens, model info)
- `'knowledge'` - Search and retrieval (captures queries, results)
- `'database'` - Database operations (captures performance)
- `'evaluation'` - Evaluation metrics (captures scores)
## API Reference

### Core Functions

| Function | Description |
|---|---|
| `configure_tracing(provider)` | Configure the tracing provider (auto-configures if not called) |
| `is_tracing_configured()` | Check if tracing has been configured |
| `clear_tracing_config()` | Clear configuration (useful for testing/reconfiguration) |
| `list_providers()` | List available providers: `['noop', 'logfire', 'otel', 'langfuse', 'opik']` |
| `get_tracer()` | Get the configured tracer class |
| `init_tracer(metadata_type, tool_metadata, tracer, *, force_new, **kwargs)` | Initialize a tracer instance |
| `Tracer(metadata_type)` | Factory function for tracer instances (auto-configures) |
### Context Management

| Function | Description |
|---|---|
| `get_current_tracer()` | Get the active tracer from context |
| `set_current_tracer(tracer)` | Set the tracer in context |
| `reset_tracer_context(token)` | Reset the tracer context |
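Under the hood, this trio follows the standard `contextvars` token pattern. Here is a minimal standalone sketch of that pattern (illustrative only; axion's actual implementation may differ):

```python
# Minimal sketch of the contextvar token pattern behind these functions;
# not axion's actual implementation.
from contextvars import ContextVar

_current_tracer = ContextVar('current_tracer', default=None)

def set_current_tracer(tracer):
    # Returns a token that can later restore the previous value
    return _current_tracer.set(tracer)

def get_current_tracer():
    return _current_tracer.get()

def reset_tracer_context(token):
    # Restores whatever was current before the matching set_current_tracer()
    _current_tracer.reset(token)

outer = set_current_tracer('outer-tracer')
inner = set_current_tracer('inner-tracer')
print(get_current_tracer())   # inner-tracer
reset_tracer_context(inner)
print(get_current_tracer())   # outer-tracer
```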
### Span Methods

| Method | Description |
|---|---|
| `span.set_attribute(key, value)` | Set a single attribute on the span |
| `span.set_attributes(dict)` | Set multiple attributes at once |
| `span.set_input(data)` | Set input data (appears in the Langfuse Input field) |
| `span.set_output(data)` | Set output data (appears in the Langfuse Output field) |
| `span.add_trace(event_type, message, metadata)` | Add a trace event to the span |
### Registry

| Function | Description |
|---|---|
| `TracerRegistry.register(name)` | Decorator to register a tracer class |
| `TracerRegistry.get(name)` | Get a tracer class by name |
| `TracerRegistry.list_providers()` | List all registered providers |
| `TracerRegistry.is_registered(name)` | Check if a provider is registered |
### Decorator Options

| Option | Description |
|---|---|
| `name` | Custom span name (defaults to the function name) |
| `capture_args` | Capture function arguments as span input |
| `capture_result` | Capture the function result as span output |
### Types

| Type | Description |
|---|---|
| `TracingMode` | Enumeration of available tracer modes |
| `TracerRegistry` | Registry for tracer implementations |
| `BaseTracer` | Abstract base class for tracer implementations |
| `BaseSpan` | `@runtime_checkable` Protocol defining the standard span interface (`set_attribute`, `set_input`, `set_output`) |
## Integration

The tracing system automatically works with other Axion components:

- Evaluation metrics are automatically traced
- API calls include retry and performance data
- LLM operations capture token usage and model info

Just initialize your tracer and everything else traces automatically.
## Installation
The tracing providers are optional dependencies. From the project root: