Auto Instrumentation

Overview

Pay-i enables automatic tracking of your GenAI usage with minimal code changes. By simply initializing Pay-i instrumentation, you get immediate visibility into costs, performance, and usage patterns without writing any custom instrumentation code.

This automatic approach ("auto-instrumentation") provides the fastest path to tracking your GenAI usage across providers, including language models (LLMs), image generation, audio processing, video analysis, vision capabilities, and Retrieval-Augmented Generation (RAG).

This guide shows how to configure supported providers for automatic tracking with just a few lines of code.

Looking for deeper insights? Once you have auto-instrumentation working, you can add custom Annotations to gain more detailed business context for your GenAI usage.

SDK Support

Pay-i provides a Python SDK for seamless integration with various GenAI providers. The SDK supports not just Large Language Models (LLMs), but also:

  • Image Generation & Vision: Process and create images with vision-capable models
  • Audio Processing: Speech-to-text, text-to-speech, and audio analysis
  • Video Analysis: Process and analyze video content
  • Multimodal Models: Work with models that can handle multiple input and output types
  • RAG Systems: Implement Retrieval-Augmented Generation for knowledge-intensive applications

For other programming languages, you can use the OpenAPI specification to generate client SDKs. The examples in this documentation use the Python SDK, which offers the most comprehensive support and helper functions.

Basic Setup

Setting up automatic instrumentation is straightforward with the payi_instrument() function:

import os
from openai import OpenAI
from payi.lib.instrument import payi_instrument

# Initialize Pay-i instrumentation
payi_instrument()

# Configure provider client normally for direct access
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
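With instrumentation initialized, requests made through the client are tracked automatically and need no Pay-i-specific code on the call itself. The snippet below is a minimal sketch that continues from the setup above; the model name and prompt are placeholders, and it assumes a valid OPENAI_API_KEY is set in the environment.

# A normal request; Pay-i records usage and cost automatically
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)

print(response.choices[0].message.content)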

Important Note for Streaming: When working with streaming responses, the response is not fully instrumented until the stream has been read to the end. Pay-i needs the complete token information to accurately track usage and calculate costs.
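As a sketch of the streaming case, the snippet below continues from the client created above and simply iterates the stream to completion so the final usage data is available; the model name and prompt are placeholders.

# Streaming request: iterate the stream fully so Pay-i can capture complete usage
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Write a haiku about telemetry."}],
    stream=True,
)

for chunk in stream:
    # Print each token as it arrives; reaching the end of the loop
    # means the full response has been read and can be instrumented
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")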

Optional Proxy Configuration

To use Block limits, which prevent requests from being sent to providers once a budget is exceeded, configure Pay-i to route requests through its proxy:

# For proxy-specific features like Block limits
payi_instrument(config={"proxy": True})

See Pay-i Proxy Configuration for complete details on this approach.

Provider-Specific Configuration

Pay-i supports multiple GenAI providers with various capabilities. Each provider has specific setup requirements and configuration options.

For detailed information on configuring each supported provider, see our dedicated Provider Configuration documentation, which includes:

Auto-Instrumented Providers (directly patched by payi_instrument):

  • OpenAI
  • Azure OpenAI
  • Anthropic
  • AWS Bedrock

Callback-Based Integration:

  • LangChain (uses a custom callback handler)

Each provider page contains specific instructions for its SDK, required dependencies, and sample code for proper instrumentation.
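For instance, Anthropic follows the same pattern as OpenAI: initialize Pay-i first, then construct the provider client as usual. The sketch below assumes the anthropic package is installed and ANTHROPIC_API_KEY is set in the environment; the model name is a placeholder, and the provider page should be consulted for the authoritative setup.

import os
from anthropic import Anthropic
from payi.lib.instrument import payi_instrument

# Initialize Pay-i instrumentation before creating the provider client
payi_instrument()

# Construct the Anthropic client normally; calls are tracked automatically
client = Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=256,
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(message.content[0].text)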

Related Resources