# OpenAI Provider Configuration

## Overview

This guide explains how to configure OpenAI to work with Pay-i in both Proxy and Ingest modes. OpenAI is a supported Provider with several Resources that Pay-i can track.
## SDK Support

The examples in this guide use the Pay-i Python SDK, which provides comprehensive support for OpenAI integration. If you're using a different programming language, you can use the Pay-i OpenAPI specification to generate a client SDK for your language of choice. The core concepts remain the same, though the exact implementation details may vary depending on the language and client library used.
## Configuration Helper

Pay-i provides an OpenAI URL helper in `payi.lib.helpers` to make OpenAI setup easier:

| Helper Function | Description |
|---|---|
| `payi_openai_url()` | Generates the correct proxy URL for OpenAI |
## Using Pay-i as a Proxy

When using Pay-i as a Proxy, you'll configure the OpenAI client to route calls through Pay-i:

```python
import os

from openai import OpenAI
from payi.lib.helpers import payi_openai_url

# Read API keys from environment variables
payi_key = os.getenv("PAYI_API_KEY", "YOUR_PAYI_API_KEY")
openai_key = os.getenv("OPENAI_API_KEY", "YOUR_OPENAI_API_KEY")

# Configure the OpenAI client to use Pay-i as a proxy
client = OpenAI(
    api_key=openai_key,
    base_url=payi_openai_url(),  # Use Pay-i's URL as the base
    default_headers={"xProxy-api-key": payi_key},  # Authenticate with Pay-i
)

# Use the client normally
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
```
## Ingesting Metrics into Pay-i

In Ingest mode, configure your OpenAI client normally (calling the provider directly):

```python
import os

from openai import OpenAI
from payi.lib.helpers import create_headers
from payi.lib.instrument import ingest

# Configure a standard OpenAI client with direct access
openai_key = os.getenv("OPENAI_API_KEY", "YOUR_OPENAI_API_KEY")
client = OpenAI(api_key=openai_key)

# Use the decorator to track usage
@ingest(request_tags=["example"], use_case_name="hello_world")
def example_function():
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello, how are you?"}],
        extra_headers=create_headers(
            user_id="user123",
            limit_ids=["budget_1"],
        ),
    )
    return response.choices[0].message.content
```
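To see what the `@ingest` decorator is doing structurally, here is a conceptual sketch of the general Python decorator pattern it follows: wrap the function, let the call run, then record metadata about it. This is **not** Pay-i's actual implementation (the `track_usage` name and the stored fields are invented for illustration); it only shows why the decorated function can be called exactly as before.

```python
import functools

# Conceptual sketch only -- NOT the payi.lib.instrument implementation.
# It illustrates the decorator pattern @ingest uses: run the wrapped
# function unchanged, then record metadata about the call afterwards.
def track_usage(request_tags=None, use_case_name=None):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            # A real implementation would submit usage metrics here;
            # this sketch just stashes the metadata on the wrapper.
            wrapper.last_call = {
                "tags": request_tags,
                "use_case": use_case_name,
            }
            return result
        return wrapper
    return decorator

@track_usage(request_tags=["example"], use_case_name="hello_world")
def greet():
    return "hello"
```

Because the wrapper returns the original result untouched, `greet()` still returns `"hello"`; the tracking is a side effect.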
## Advanced Configuration

### Custom Base URLs

If you need to specify a custom base URL (for private deployments or enterprise configurations):

```python
from payi.lib.helpers import payi_openai_url

# Use a custom base URL (e.g., for a private deployment)
custom_url = payi_openai_url(payi_base_url="https://custom.payi.domain.com")
```
### Connection Pooling

For high-throughput applications, you may want to configure connection pooling:

```python
import httpx

from openai import OpenAI
from payi.lib.helpers import payi_openai_url

# Configure an httpx transport for connection pooling
transport = httpx.HTTPTransport(limits=httpx.Limits(max_connections=100))

client = OpenAI(
    api_key="your-key",
    base_url=payi_openai_url(),
    default_headers={"xProxy-api-key": "your-payi-key"},
    http_client=httpx.Client(transport=transport),
)
```
### Timeouts

For operations that might take longer, consider configuring appropriate timeouts:

```python
import httpx

from openai import OpenAI
from payi.lib.helpers import payi_openai_url

# Configure longer timeouts for large requests
timeout = httpx.Timeout(timeout=60.0)  # 60 seconds

client = OpenAI(
    api_key="your-key",
    base_url=payi_openai_url(),
    default_headers={"xProxy-api-key": "your-payi-key"},
    http_client=httpx.Client(timeout=timeout),
)
```