# Azure OpenAI Provider Configuration

## Overview

This guide explains how to configure Azure OpenAI to work with Pay-i Instrumentation. Azure OpenAI is a supported Provider with several Resources that Pay-i can track.
## Prerequisites

To use Pay-i with Azure OpenAI, you'll need:

- **Azure OpenAI Service:**
  - Azure OpenAI API key
  - Azure OpenAI endpoint URL
  - API version (e.g., `2023-05-15`)
  - Deployment name for the model you want to use (see Pay-i Managed Categories and Resources for supported models)
- **Pay-i:**
  - Pay-i API key
## SDK Support

The examples in this guide use the Pay-i Python SDK, which provides comprehensive support for Azure OpenAI integration. If you're using a different programming language, you can use the Pay-i OpenAPI specification to generate a client SDK for your language of choice. The core concepts remain the same, though the exact implementation details vary by language and client library.
## Basic Setup

First, install the required packages:

```shell
pip install payi openai python-dotenv
```
Let's start with the common configuration variables needed for both the standard and proxy approaches:

```python
import os

from openai import AzureOpenAI
from payi.lib.instrument import payi_instrument
from payi.lib.helpers import create_headers

# API keys and configuration
AZURE_OPENAI_API_KEY = os.getenv("AZURE_OPENAI_API_KEY", "YOUR_AZURE_OPENAI_API_KEY")
PAYI_API_KEY = os.getenv("PAYI_API_KEY", "YOUR_PAYI_API_KEY")

# Azure-specific configuration
API_VERSION = "2024-02-15-preview"
AZURE_ENDPOINT = os.getenv("AZURE_OPENAI_ENDPOINT", "YOUR_AZURE_OPENAI_ENDPOINT")
AZURE_MODEL = "YOUR_AZURE_OPENAI_MODEL"  # e.g., "gpt-4o-2024-05-13"
AZURE_DEPLOYMENT = "YOUR_AZURE_OPENAI_DEPLOYMENT"  # e.g., "test-4o"
```
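Since the snippet above uses `YOUR_*` placeholders as fallbacks, a quick sanity check can catch unset values before you make a call. This is a minimal, stdlib-only sketch; `check_config` is a hypothetical local helper, not part of the Pay-i SDK:

```python
# Quick sanity check for the configuration values defined above.
# check_config is a hypothetical local helper, not a Pay-i SDK function.
def check_config(values):
    """Return the names of values that are unset or still placeholders."""
    return [name for name, value in values.items()
            if not value or value.startswith("YOUR_")]

missing = check_config({
    "AZURE_OPENAI_API_KEY": "YOUR_AZURE_OPENAI_API_KEY",  # still a placeholder
    "PAYI_API_KEY": "pk-example-123",                     # looks set
})
print("Missing configuration:", missing)
```

Running a check like this at startup gives a clear error message instead of a confusing authentication failure from the provider.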
## Direct Provider Call with Telemetry (Default)

The standard approach makes API calls directly to Azure OpenAI while Pay-i tracks usage:

```python
# Initialize Pay-i instrumentation
payi_instrument()

# Configure a direct Azure OpenAI client
azure_client = AzureOpenAI(
    api_key=AZURE_OPENAI_API_KEY,
    api_version=API_VERSION,
    azure_endpoint=AZURE_ENDPOINT  # Direct to Azure
)

# Make a request with annotations
response = azure_client.chat.completions.create(
    model=AZURE_DEPLOYMENT,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ],
    extra_headers=create_headers(
        use_case_name="azure_example",
        user_id="example_user",
        limit_ids=["azure_limit"],
        request_tags=["azure-example"]
    )
)

print(response.choices[0].message.content)
```
With this configuration, Pay-i automatically tracks API calls and calculates costs without adding any latency to your requests. You can also use a decorator to track multiple related API calls; see Custom Instrumentation for details.
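One lightweight pattern for related calls is to build the annotation headers once and reuse them across each request. The sketch below uses a hypothetical `build_annotations` stand-in with illustrative header names; in real code you would call `create_headers` from `payi.lib.helpers` instead:

```python
# Reuse one annotation dict across several related requests.
# build_annotations is a hypothetical stand-in for payi's create_headers,
# and the "x-example-*" header names are illustrative, not Pay-i's real ones.
def build_annotations(use_case_name, user_id, request_tags=None):
    headers = {
        "x-example-use-case-name": use_case_name,
        "x-example-user-id": user_id,
    }
    if request_tags:
        headers["x-example-request-tags"] = ",".join(request_tags)
    return headers

shared_headers = build_annotations("azure_example", "example_user",
                                   request_tags=["azure-example"])

# Pass the same dict to each related call, e.g.:
#   azure_client.chat.completions.create(..., extra_headers=shared_headers)
print(shared_headers)
```

Building the headers once keeps the business context (use case, user, tags) consistent across every request in the workflow.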
## Optional Proxy Routing (For Block Limits)

If you need `Block` limits that prevent requests from reaching the provider once a budget is exceeded, use Pay-i's proxy configuration. Here's what changes compared to the standard approach:
```python
from payi.lib.helpers import payi_azure_openai_url, PayiHeaderNames
import json

# 1. Initialize with proxy mode enabled
payi_instrument(config={"proxy": True})

# 2. Additional configuration for proxy mode
AZURE_DEPLOYMENT_TYPE = None  # Optional: "global", "datazone", or "region", depending on your Azure deployment type

# 3. Set up the required proxy headers
payi_headers = {
    PayiHeaderNames.api_key: PAYI_API_KEY,
    PayiHeaderNames.provider_base_uri: AZURE_ENDPOINT,
    PayiHeaderNames.route_as_resource: AZURE_MODEL,
}

# Add the deployment type if specified (only needed for proxy configuration)
if AZURE_DEPLOYMENT_TYPE:
    payi_headers[PayiHeaderNames.resource_scope] = AZURE_DEPLOYMENT_TYPE

# 4. Configure the Azure OpenAI client to use Pay-i as a proxy
# payi_azure_openai_url() returns the Pay-i proxy URL with the path `/api/v1/proxy/azure.openai`
azure_client = AzureOpenAI(
    api_key=AZURE_OPENAI_API_KEY,
    api_version=API_VERSION,
    azure_endpoint=payi_azure_openai_url(),  # Use Pay-i's URL instead of the direct endpoint
    default_headers=payi_headers  # Include the Pay-i proxy headers
)

# Use the client normally
response = azure_client.chat.completions.create(
    model=AZURE_DEPLOYMENT,
    messages=[{"role": "user", "content": "Say 'this is a test'"}]
)

print(response.choices[0].message.content)

# With proxy mode, you get access to real-time cost information
xproxy_result = response.xproxy_result
print(json.dumps(xproxy_result, indent=4))
```
The key differences in the proxy configuration are:

- We call `payi_instrument(config={"proxy": True})`
- We specify the Azure deployment type (if needed)
- We use `payi_azure_openai_url()` instead of the direct Azure endpoint, which routes requests through the Pay-i proxy at `/api/v1/proxy/azure.openai`
- We include special proxy headers with the API key, base URI, model, and optionally the deployment type
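To make the routing concrete, here's a stdlib-only sketch of the URL shape that `payi_azure_openai_url()` produces. The base URL below is an assumed placeholder; in real code, call the helper rather than building the URL yourself:

```python
# Sketch of the proxy URL shape described above. The base URL default is an
# assumed placeholder; payi_azure_openai_url() resolves the real base from
# the SDK's configuration.
def proxy_url(payi_base="https://example-payi-base"):
    return payi_base.rstrip("/") + "/api/v1/proxy/azure.openai"

# Requests sent to this URL are forwarded to your Azure endpoint,
# which Pay-i reads from the provider_base_uri header.
print(proxy_url())
```

This is why the `provider_base_uri` header is required in proxy mode: the client no longer talks to Azure directly, so Pay-i needs to know where to forward each request.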
For detailed information about proxy configuration, including when to use it and how it works, see the Pay-i Proxy Configuration guide.
## Advanced Configuration

Note: The following advanced configurations can be applied to both the standard and proxy configurations. If you're using the proxy configuration, remember to include the `azure_endpoint=payi_azure_openai_url()` and `default_headers=payi_headers` parameters as shown in the proxy configuration example above.
### Additional Client Configuration Options
The Azure OpenAI client supports various configuration options for performance tuning, such as connection pooling and timeouts. For details on these options, refer to the OpenAI Python SDK documentation.
When using these options with Pay-i in proxy mode, ensure you include the Pay-i-specific configuration:
```python
import httpx

# Configure the Azure OpenAI client with the Pay-i proxy and additional options
azure_client = AzureOpenAI(
    # Standard configuration
    api_key=AZURE_OPENAI_API_KEY,
    api_version=API_VERSION,

    # Pay-i proxy configuration
    azure_endpoint=payi_azure_openai_url(),
    default_headers=payi_headers,

    # Additional configuration options as needed;
    # refer to the OpenAI SDK documentation for available options
    http_client=httpx.Client(
        timeout=60.0  # Example: request timeout in seconds
    )
)
```
## Related Resources
### Quickstart Guides
- [Azure OpenAI Quickstart](/docs/azureopenai-quickstart) - Single comprehensive example with instructions
- [OpenAI Quickstart](/docs/openai-quickstart) - Similar example for standard OpenAI
### Conceptual Guides
- [Auto-Instrumentation with GenAI Providers](/docs/auto-instrumentation) - Overview of all providers
- [Custom Instrumentation](/docs/custom-instrumentation) - Adding business context to your tracking
- [Azure OpenAI Custom Headers](/docs/azure-openai-custom-headers) - Annotating individual Azure OpenAI requests with metadata
- [Pay-i Proxy Configuration](/docs/proxy-configuration) - For when you need `Block` limits
### Example Repositories
- [Azure OpenAI Example Collection](https://github.com/Pay-i/Pay-i-Quickstarts/tree/main/quickstarts/azure.openai/) - Multiple code examples
- [All Pay-i Quickstarts](https://github.com/Pay-i/Pay-i-Quickstarts/) - Examples for all providers