Auto Instrumentation

Overview

Pay-i enables automatic tracking of your GenAI usage with minimal code changes. By simply initializing Pay-i instrumentation, you get immediate visibility into costs, performance, and usage patterns without writing any custom instrumentation code.

This automatic approach (or "auto-instrumentation") provides the fastest path to tracking your GenAI usage across providers and capabilities, including large language models (LLMs), image generation, audio processing, video analysis, vision, and Retrieval-Augmented Generation (RAG).

This guide shows how to configure supported providers for automatic tracking with just a few lines of code.

Looking for deeper insights? Once you have auto-instrumentation working, you can add custom Annotations to gain more detailed business context for your GenAI usage.

Capability Support

Pay-i's underlying REST Ingest API supports tracking a wide range of AI workloads:

  • Large Language Models (LLMs): Track usage of text-generation models
  • Image Generation & Vision: Track image creation and analysis
  • Audio Processing: Track speech-to-text, text-to-speech, and audio analysis
  • Video Analysis: Track video processing
  • Multimodal Models: Track models that handle multiple input and output types
  • RAG Systems: Track Retrieval-Augmented Generation for knowledge-intensive applications

The Python SDK's auto-instrumentation feature specifically provides automated tracking for:

  • OpenAI and Azure OpenAI
  • Anthropic
  • AWS Bedrock
  • LangChain (via callback handlers)

For workloads not covered by auto-instrumentation, you can use the Ingest API directly from any language, including Python.
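
To illustrate the shape of a direct Ingest call, here is a minimal sketch in Python. The endpoint path, authentication header, and payload fields shown are assumptions for illustration only; consult the Ingest API reference for the actual schema:

import os
import requests

# NOTE: the URL, header, and payload fields below are illustrative
# assumptions, not the documented contract.
payload = {
    "category": "system.openai",                       # provider category (assumed field)
    "resource": "gpt-4o-mini",                         # model name (assumed field)
    "units": {"text": {"input": 250, "output": 87}},   # token counts (assumed shape)
}

response = requests.post(
    "https://api.pay-i.com/api/v1/ingest/units",       # assumed endpoint
    headers={"Authorization": f"Bearer {os.getenv('PAYI_API_KEY')}"},  # assumed auth header
    json=payload,
    timeout=10,
)
response.raise_for_status()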

Client SDKs for Other Languages

If you need the OpenAPI specification for Pay-i's APIs so you can generate client SDKs for languages beyond Python, please contact [email protected].

The examples in this documentation use the Python SDK, which currently offers the most comprehensive support and helper functions.

Basic Setup

Setting up automatic instrumentation is straightforward with the payi_instrument() function:

import os
from openai import OpenAI
from payi.lib.instrument import payi_instrument

# Initialize Pay-i instrumentation
payi_instrument()

# Configure provider client normally for direct access
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
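
After this setup, requests made through the client are tracked automatically, with no further instrumentation code. For example (the model name below is just an illustration):

# Requests made through the instrumented client are tracked automatically
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; use any model you have access to
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)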

Important Note for Streaming: When working with streaming responses, the request is not fully instrumented until the stream has been read to the end. Pay-i needs the complete token information to accurately track usage and calculate costs.
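
For example, with the OpenAI client, exhaust the stream (here by iterating every chunk) so that Pay-i receives the final usage information:

# Stream a response; instrumentation completes only once the stream is exhausted
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # example model
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
# Once the loop completes, the full token usage has been captured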

Optional Proxy Configuration

If you want to use Block limits, which prevent requests from being sent to the provider once a budget is exceeded, configure Pay-i to route requests through its proxy:

# For proxy-specific features like Block limits
payi_instrument(config={"proxy": True})

See Pay-i Proxy Configuration for complete details on this approach.

Provider-Specific Configuration

Pay-i supports multiple GenAI providers with various capabilities. Each provider has specific setup requirements and configuration options.

For detailed information on configuring each supported provider, see our dedicated Provider Configuration documentation, which includes:

Auto-Instrumented Providers (enabled by payi_instrument):

  • OpenAI
  • Azure OpenAI
  • Anthropic
  • AWS Bedrock

Callback-Based Integration:

  • LangChain (uses a custom callback handler; see the sketch below)

Each provider page contains specific instructions for its SDK, required dependencies, and sample code for proper instrumentation.
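
As noted above, LangChain is integrated through LangChain's standard callback mechanism rather than auto-instrumentation. A minimal sketch follows; the handler class name and import path are assumptions for illustration, and the LangChain provider page documents the real ones:

from langchain_openai import ChatOpenAI

# Hypothetical handler name and import path for illustration only;
# see the LangChain provider page for the actual class.
from payi.lib.langchain import PayiHandler  # assumed import

handler = PayiHandler()

# Pass the handler through LangChain's standard callbacks parameter
llm = ChatOpenAI(model="gpt-4o-mini", callbacks=[handler])
result = llm.invoke("Hello!")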

Related Resources