Azure OpenAI Provider Configuration
Overview
This guide explains how to configure Azure OpenAI to work with Pay-i Instrumentation. Azure OpenAI is a supported Provider with several Resources that Pay-i can track.
Prerequisites
To use Pay-i with Azure OpenAI, you'll need:

- Azure OpenAI Service:
  - Azure OpenAI API key
  - Azure OpenAI endpoint URL
  - API version (e.g., `2023-05-15`)
  - Deployment name for the model you want to use (see Pay-i Managed Categories and Resources for supported models)
- Pay-i:
  - Pay-i API key
  - Pay-i Python SDK (`pip install payi`)
- Client Library:
  - OpenAI Python SDK (`pip install openai`)
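As a sketch, you might export these values as environment variables before running the examples; the variable names below match the `os.getenv()` calls used throughout this guide, while the values are placeholders you should replace with your own:

```shell
# Example environment setup (placeholder values — substitute your own).
# The variable names match the os.getenv() lookups in the code samples below.
export AZURE_OPENAI_API_KEY="your-azure-openai-key"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export PAYI_API_KEY="your-payi-key"
```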
SDK Support
The examples in this guide use the Pay-i Python SDK, which provides comprehensive support for Azure OpenAI integration. If you're using a different programming language, you can utilize the Pay-i OpenAPI specification to generate a client SDK for your language of choice. The core concepts remain the same, though the exact implementation details may vary depending on the language and client library used.
Basic Setup
Let's start with the common configuration variables needed for both standard and proxy approaches:
```python
import os

from openai import AzureOpenAI

from payi.lib.instrument import payi_instrument
from payi.lib.helpers import create_headers

# API keys and configuration
AZURE_OPENAI_API_KEY = os.getenv("AZURE_OPENAI_API_KEY", "YOUR_AZURE_OPENAI_API_KEY")
PAYI_API_KEY = os.getenv("PAYI_API_KEY", "YOUR_PAYI_API_KEY")

# Azure-specific configuration
API_VERSION = "2024-02-15-preview"
AZURE_ENDPOINT = os.getenv("AZURE_OPENAI_ENDPOINT", "YOUR_AZURE_OPENAI_ENDPOINT")
AZURE_MODEL = "YOUR_AZURE_OPENAI_MODEL"  # e.g., "gpt-4o-2024-05-13"
AZURE_DEPLOYMENT = "YOUR_AZURE_OPENAI_DEPLOYMENT"  # e.g., "test-4o"
```
Direct Provider Call with Telemetry (Default)
The standard approach makes API calls directly to Azure OpenAI while Pay-i tracks usage:
```python
# Initialize Pay-i instrumentation
payi_instrument()

# Configure direct Azure OpenAI client
azure_client = AzureOpenAI(
    api_key=AZURE_OPENAI_API_KEY,
    api_version=API_VERSION,
    azure_endpoint=AZURE_ENDPOINT,  # Direct to Azure
)

# Make a request with annotations
response = azure_client.chat.completions.create(
    model=AZURE_DEPLOYMENT,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"},
    ],
    extra_headers=create_headers(
        use_case_name="azure_example",
        user_id="example_user",
        limit_ids=["azure_limit"],
        request_tags=["azure-example"],
    ),
)

print(response.choices[0].message.content)
```
With this configuration, Pay-i automatically tracks API calls and calculates costs without adding any latency to your requests. You can also use decorators for tracking multiple related API calls - see Custom Instrumentation for more details.
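For instance, when several related calls share the same business context, you can keep one set of annotation defaults and merge per-call overrides before handing them to `create_headers`. This is only an illustrative pattern: `COMMON_ANNOTATIONS` and `with_annotations` below are names invented for this sketch, not part of the Pay-i SDK.

```python
# Sketch: share common Pay-i annotations across several related requests.
# COMMON_ANNOTATIONS and with_annotations are hypothetical helpers for this
# example; only create_headers (shown earlier) comes from the Pay-i SDK.

COMMON_ANNOTATIONS = {
    "use_case_name": "azure_example",
    "user_id": "example_user",
    "limit_ids": ["azure_limit"],
}

def with_annotations(**overrides):
    """Merge per-request overrides into the shared annotation defaults."""
    return {**COMMON_ANNOTATIONS, **overrides}

# Each related call then builds its headers from the merged annotations:
# extra_headers=create_headers(**with_annotations(request_tags=["step-1"]))
# extra_headers=create_headers(**with_annotations(request_tags=["step-2"]))
```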
Optional Proxy Routing (For Block Limits)
If you need to implement Block limits that prevent requests from being sent to the provider when a budget is exceeded, use Pay-i's proxy configuration. Here's what changes compared to the standard approach:
```python
import json

from payi.lib.helpers import payi_azure_openai_url

# 1. Initialize with proxy mode enabled
payi_instrument(config={"proxy": True})

# 2. Additional configuration for proxy mode
AZURE_DEPLOYMENT_TYPE = None  # Optional: "global", "datazone", or "region" depending on your Azure deployment type

# 3. Set up required proxy headers
payi_headers = {
    "xProxy-api-key": PAYI_API_KEY,
    "xProxy-Provider-BaseUri": AZURE_ENDPOINT,
    "xProxy-RouteAs-Resource": AZURE_MODEL,
}

# Add deployment type if specified (only needed for proxy configuration)
if AZURE_DEPLOYMENT_TYPE is not None:
    payi_headers["xProxy-Resource-Scope"] = AZURE_DEPLOYMENT_TYPE

# 4. Configure Azure OpenAI client to use Pay-i as a proxy
azure_client = AzureOpenAI(
    api_key=AZURE_OPENAI_API_KEY,
    api_version=API_VERSION,
    azure_endpoint=payi_azure_openai_url(),  # Use Pay-i's URL instead of the direct endpoint
    default_headers=payi_headers,  # Include Pay-i proxy headers
)

# Use the client normally
response = azure_client.chat.completions.create(
    model=AZURE_DEPLOYMENT,
    messages=[{"role": "user", "content": "Say 'this is a test'"}],
)

print(response.choices[0].message.content)

# With proxy mode, you get access to real-time cost information
xproxy_result = response.xproxy_result
print(json.dumps(xproxy_result, indent=4))
```
The key differences in the proxy configuration are:

- We use `payi_instrument(config={"proxy": True})`
- We specify the Azure deployment type (if needed)
- We use `payi_azure_openai_url()` instead of the direct Azure endpoint
- We include special proxy headers with the API key, base URI, model, and optionally deployment type
For detailed information about proxy configuration, including when to use it and how it works, see the Pay-i Proxy Configuration guide.
Advanced Configuration
Note: The following advanced configurations can be applied to both standard and proxy configurations. If you're using proxy configuration, remember to include the `azure_endpoint=payi_azure_openai_url()` and `default_headers` parameters as shown in the proxy configuration example above.
Connection Pooling
For high-throughput applications, you may want to configure connection pooling:
```python
import httpx

# Configure an httpx transport for connection pooling
transport = httpx.HTTPTransport(limits=httpx.Limits(max_connections=100))

# For standard instrumentation
azure_client = AzureOpenAI(
    api_key=AZURE_OPENAI_API_KEY,
    azure_endpoint=AZURE_ENDPOINT,
    api_version=API_VERSION,
    http_client=httpx.Client(transport=transport),
)

# For proxy configuration, you would instead use:
#   azure_endpoint=payi_azure_openai_url(),
#   default_headers=payi_headers,
```
Timeouts
For operations that might take longer, consider configuring appropriate timeouts:
```python
import httpx

# Configure longer timeouts for large requests
timeout = httpx.Timeout(timeout=60.0)  # 60 seconds

# For standard instrumentation
azure_client = AzureOpenAI(
    api_key=AZURE_OPENAI_API_KEY,
    azure_endpoint=AZURE_ENDPOINT,
    api_version=API_VERSION,
    http_client=httpx.Client(timeout=timeout),
)

# For proxy configuration, you would instead use:
#   azure_endpoint=payi_azure_openai_url(),
#   default_headers=payi_headers,
```
Related Resources
Quickstart Guides
- Azure OpenAI Quickstart - Single comprehensive example with instructions
- OpenAI Quickstart - Similar example for standard OpenAI
Conceptual Guides
- Auto-Instrumentation with GenAI Providers - Overview of all providers
- Custom Instrumentation - Adding business context to your tracking
- Azure OpenAI Custom Headers - Annotating individual Azure OpenAI requests with metadata
- Pay-i Proxy Configuration - For when you need Block limits
Example Repositories
- Azure OpenAI Example Collection - Multiple code examples
- All Pay-i Quickstarts - Examples for all providers