Decorators
Overview
The Pay-i Python SDK offers powerful function decorators that make it easy to organize and track your GenAI consumption. These decorators serve as an alternative to direct annotation with custom headers, providing parameter inheritance capabilities that are especially useful for sequences of related GenAI calls that share the same business context.
Looking for different annotations? If you need to add different tracking information to individual API calls, you might prefer using custom headers directly. Decorators are ideal when multiple API calls within a function should share the same business context.
Purpose of Inheritable Decorators
The primary purpose of these decorators is to annotate your functions with metadata such as:
- Use case names and IDs
- Limit IDs for budget tracking
- Request tags for organization
- User IDs for attribution
When used with Pay-i's instrumentation system (via `payi_instrument()`), these decorators help maintain consistent tracking across multiple API calls while reducing boilerplate code.
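As an illustrative sketch only (not the SDK's actual implementation), a decorator of this kind simply records tracking metadata alongside the function it wraps, so the instrumentation layer can report that metadata with every call made inside the function. The `annotate` name here is hypothetical:

```python
import functools

def annotate(**metadata):
    # Hypothetical stand-in for @ingest/@proxy: records tracking
    # metadata on the wrapped function so an instrumentation layer
    # could report it with every call.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        wrapper.payi_metadata = metadata
        return wrapper
    return decorator

@annotate(use_case_name="document_summary", request_tags=["summarization"])
def summarize(text):
    return f"summary of {text!r}"

print(summarize.payi_metadata["use_case_name"])  # document_summary
```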
Best Suited For
Decorators are particularly well-suited for:
- Function-level annotations that remain consistent across multiple API calls
- Hierarchical tracking structures where nested functions inherit parameters
- Consistent metadata like use case names and request tags
- Complex applications with many GenAI calls
For request-specific attributes that vary with each call (like user IDs and limit IDs), consider combining decorators with Custom Headers.
Choose an Operational Mode
Before using decorators, refer to the Operational Approaches documentation to understand the two different ways Pay-i can be integrated and which is best for your scenario.
Recommendation: Get basic API tracking working before adding decorators. This ensures your core integration is functioning properly and makes it easier to troubleshoot any issues.
IMPORTANT: Choose either Direct Provider Call with Telemetry or Proxy Routing for your application - do not mix them. Using `@ingest` when Pay-i is configured for proxy routing will cause double-counting of GenAI calls. Using `@proxy` when Pay-i is configured for direct provider calls can cause your GenAI Provider to fail requests due to unexpected custom headers.
Setup & Installation
Prerequisites
- Pay-i Python SDK installed (`pip install payi`)
- A valid Pay-i API key
- One or more supported GenAI Providers (OpenAI, Azure OpenAI, Anthropic, AWS Bedrock)
Initializing Pay-i Instrumentation
Before you can use either decorator, you must initialize `payi_instrument()`:
import os
from payi.lib.instrument import payi_instrument
# Method 1: Initialize with config dictionary (simplest approach)
# This automatically creates Pay-i clients internally using environment variables
payi_instrument(config={"proxy": False}) # False for Direct Provider Call with Telemetry
You can also explicitly create and pass a Pay-i client:
import os
from payi import Payi
from payi.lib.instrument import payi_instrument
# Read API key from environment variables (best practice)
payi_key = os.getenv("PAYI_API_KEY", "YOUR_PAYI_API_KEY")
# Method 2: Create and provide a Pay-i client
payi = Payi(api_key=payi_key)
payi_instrument(payi)
Once you've initialized the instrumentation, import the appropriate decorator for your chosen mode:
For using Pay-i as a proxy:
from payi.lib.instrument import proxy
For Direct Provider Call with Telemetry:
from payi.lib.instrument import ingest
GenAI Provider Client Configuration
For Direct Provider Call with Telemetry
When using Direct Provider Call with Telemetry, configure your GenAI provider client normally (direct to provider):
import os
from openai import OpenAI
# Configure a standard provider client with direct access
openai_key = os.getenv("OPENAI_API_KEY", "YOUR_OPENAI_API_KEY")
client = OpenAI(api_key=openai_key)
For Pay-i as a Proxy
When using Pay-i as a Proxy, configure your GenAI provider client (OpenAI, Azure OpenAI, etc.) to use Pay-i as a proxy:
import os
from openai import OpenAI # Can also be AzureOpenAI or other providers
from payi.lib.helpers import payi_openai_url
# Read API keys from environment variables
payi_key = os.getenv("PAYI_API_KEY", "YOUR_PAYI_API_KEY")
openai_key = os.getenv("OPENAI_API_KEY", "YOUR_OPENAI_API_KEY")
# Configure provider client to use Pay-i as a proxy
client = OpenAI(
    api_key=openai_key,
    base_url=payi_openai_url(),  # Use Pay-i's URL as the base
    default_headers={"xProxy-api-key": payi_key}  # Authenticate with Pay-i
)
Note:
- This is a basic example for OpenAI. For detailed configuration examples with other providers (Azure OpenAI, Anthropic, AWS Bedrock), refer to the Pay-i Auto-Instrumentation guide.
- With proxy setup alone, Pay-i will track all API calls but won't have function annotations. To add annotations, you must also initialize `payi_instrument()` and use the `@proxy` decorator in your code.
Using the Decorators
After initializing `payi_instrument()` and configuring your provider client, you can use the decorators to annotate your functions.
Using the @ingest Decorator (Direct Provider Call with Telemetry)
Use this decorator when your provider client is configured for direct access (not through Pay-i):
How `@ingest` works: The `@ingest` decorator first executes your decorated function (which makes a direct provider call), then automatically calls the Ingest API behind the scenes to submit the telemetry data to Pay-i. The `xproxy_result` object returned by the underlying Ingest API call is not currently returned or easily accessible by the code calling the `@ingest`-decorated function.
Important: When using `@ingest` with streaming responses, be sure to read the stream completely. Pay-i needs the complete token information to accurately track usage and calculate costs. If you don't read the entire stream, the data sent for ingestion will be incomplete.
from payi.lib.instrument import ingest
from payi.lib.helpers import create_headers
@ingest(request_tags=['summarization'], use_case_name='document_summary')
def summarize_document(client, document_text, limit_ids, user_id):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Summarize this: {document_text}"}],
        # Pass metadata to Pay-i using extra_headers
        # Note: limit_ids is always passed as a list, even with a single item,
        # e.g. limit_ids=['budget_1', 'team_limit']
        extra_headers=create_headers(user_id=user_id, limit_ids=limit_ids)
    )
    return response.choices[0].message.content
When using the `@ingest` decorator:
- The API call is made directly to the provider (e.g., OpenAI)
- Pay-i instruments the call to capture usage data
- After the call completes, data is automatically sent to Pay-i with the annotations from the decorator
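To illustrate the streaming note above with a hedged sketch (using a stand-in generator rather than a real provider stream), reading the stream to exhaustion before returning might look like this:

```python
def fake_stream():
    # Stand-in for a provider streaming response; complete token
    # usage is only known once every chunk has been read.
    yield from ["The ", "quick ", "brown ", "fox."]

def consume_fully(stream):
    # Iterate the stream to exhaustion so the instrumentation has
    # complete data before telemetry is submitted.
    return "".join(chunk for chunk in stream)

print(consume_fully(fake_stream()))  # The quick brown fox.
```

Breaking out of the loop early would leave the stream partially read, which is exactly the situation the warning above describes.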
Using the @proxy Decorator (Pay-i as a Proxy)
Use this decorator when your provider client is configured to use Pay-i as a proxy:
from payi.lib.instrument import proxy
from payi.lib.helpers import create_headers
@proxy(request_tags=['chat'], use_case_name='customer_support')
def answer_customer_question(client, question, user_id):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
        # You can track against multiple limits simultaneously
        extra_headers=create_headers(
            user_id=user_id,
            limit_ids=['personal_limit', 'department_budget', 'project_x']
        )
    )
    return response.choices[0].message.content
When using the `@proxy` decorator:
- The decorator's metadata (e.g., use_case_name, request_tags) is added to the context
- The API call is routed through Pay-i's proxy service
- Pay-i captures usage data with the annotations and returns the response with additional metadata
Complementary Approaches for Annotations
Pay-i offers two complementary approaches for adding metadata to your GenAI requests: Decorators (covered in this document) and Custom Headers. Each serves different purposes, and they can be used together effectively.
Usage Patterns
The two annotation approaches have different strengths:
- Decorators excel at function-level annotations that propagate through the entire call tree, providing parameter inheritance across all nested function calls
- Custom headers work well for request-specific attributes that vary with each individual API call
Decorators vs. Custom Headers
Example 1: Using Decorators for Function-Level Context
from payi.lib.instrument import proxy
@proxy(use_case_name="summarization", request_tags=["important"])
def get_summary(text):
    # All calls in this function inherit the same use_case_name and request_tags
    response1 = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Summarize: {text}"}]
    )
    response2 = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Give me key points from: {text}"}]
    )
    return response1.choices[0].message.content
Example 2: Using Custom Headers for Request-Specific Values
from payi.lib.helpers import create_headers
def get_summary_for_user(text, user_id, limit_id):
    # Each call needs to specify its own headers
    headers = create_headers(
        use_case_name="summarization",
        request_tags=["important"],
        user_id=user_id,  # Varies by request
        limit_ids=[limit_id]  # Varies by request
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
        extra_headers=headers
    )
    return response.choices[0].message.content
Recommended Hybrid Approach
The most effective approach is to combine both decorators and custom headers:
from payi.lib.instrument import proxy
from payi.lib.helpers import create_headers
# Use decorator for consistent function-level metadata
@proxy(use_case_name="customer_support", request_tags=["chat"])
def answer_customer_question(question, customer_id, customer_tier):
    # Use custom headers for request-specific values
    limit_id = f"{customer_tier}_tier_limit"
    headers = create_headers(
        user_id=customer_id,  # Specific to this request
        limit_ids=[limit_id, "global_limit"]  # Specific to this request
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
        extra_headers=headers  # Combines with the decorator's context
    )
    return response.choices[0].message.content
This hybrid approach gives you the best of both worlds:
- Consistent function-level context through decorators
- Request-specific values through custom headers
- Reduced code duplication and improved organization
For more details on custom headers, see the Custom Headers documentation.
Parameter Behavior
Both decorators accept the same parameters, which control how usage data is reported to Pay-i:
Available Parameters
Parameter | Type | Description
---|---|---
limit_ids | List[str] | List of limit IDs to associate with the request (always passed as an array, even for a single ID)
request_tags | List[str] | List of tags to associate with the request
use_case_name | str | Name of the use case for this request
use_case_id | str | ID of the use case for this request
user_id | str | User ID to associate with the request
Parameter Inheritance Rules
When decorators are nested, parameters are combined or inherited according to these rules:
Combining Parameters
- limit_ids: Values from all nested decorators are combined
- request_tags: Values from all nested decorators are combined
Inheriting Parameters
- use_case_name:
  - If not specified in the inner decorator, inherits from the outer decorator
  - If specified in the inner decorator, overrides the outer decorator's value
- use_case_id:
  - If not specified, but the name is the same as the outer decorator's, inherits the ID from the outer decorator
  - If not specified, but the name differs from the outer decorator's, generates a new UUID
  - If specified, uses the provided value
- user_id:
  - The inner decorator's value takes precedence over the outer decorator's
Here's how parameters are inherited in the execution context when decorators are nested:
# Using Direct Provider Call with Telemetry example (same pattern works for Proxy Routing)
@ingest(limit_ids=['limit1'], request_tags=['outer'], use_case_name='outer_usecase')
def outer_function():
    # Context: limit_ids=['limit1'], request_tags=['outer'], use_case_name='outer_usecase'
    @ingest(limit_ids=['limit2'], request_tags=['inner'])
    def inner_function():
        # Context: limit_ids=['limit1', 'limit2'], request_tags=['outer', 'inner'], use_case_name='outer_usecase'
        pass
    inner_function()
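The inheritance rules described above can be sketched as a plain-Python simulation. This mirrors only the documented rules, not the SDK's internal data structures, and the `merge_context` helper is hypothetical:

```python
import uuid

def merge_context(outer, inner):
    # Illustrative simulation of the documented inheritance rules
    # for nested decorator contexts.
    merged = {}
    # limit_ids and request_tags: combined across nesting levels
    merged["limit_ids"] = outer.get("limit_ids", []) + inner.get("limit_ids", [])
    merged["request_tags"] = outer.get("request_tags", []) + inner.get("request_tags", [])
    # use_case_name: inner overrides; otherwise inherited from outer
    merged["use_case_name"] = inner.get("use_case_name") or outer.get("use_case_name")
    # use_case_id: provided value wins; same name inherits the outer ID;
    # a different name gets a fresh UUID
    if inner.get("use_case_id"):
        merged["use_case_id"] = inner["use_case_id"]
    elif merged["use_case_name"] == outer.get("use_case_name"):
        merged["use_case_id"] = outer.get("use_case_id")
    else:
        merged["use_case_id"] = str(uuid.uuid4())
    # user_id: innermost value takes precedence
    merged["user_id"] = inner.get("user_id") or outer.get("user_id")
    return merged

outer = {"limit_ids": ["limit1"], "request_tags": ["outer"],
         "use_case_name": "outer_usecase", "use_case_id": "uc-1"}
inner = {"limit_ids": ["limit2"], "request_tags": ["inner"]}
ctx = merge_context(outer, inner)
# limit_ids and request_tags combine; use_case_name and its ID are inherited
```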
IMPORTANT: If you choose to use decorators, be consistent and use the same decorator throughout your application. Do not mix `@ingest` and `@proxy` decorators in the same application. For more information on using custom headers as an alternative or complementary approach, refer to Complementary Approaches for Annotations and the Custom Headers documentation.
Advanced Examples
Nested Decorators with Inheritance
This example demonstrates how parameters are inherited when decorators are nested:
from payi.lib.instrument import ingest # Use proxy instead for proxy mode
@ingest(request_tags=['app'], use_case_name='document_processing')
def process_document(document):
    # First process the document
    parsed_content = parse_document(document)
    # Then summarize it
    summary = summarize_content(parsed_content)
    return summary

@ingest(request_tags=['parsing'])
def parse_document(document):
    # This function inherits use_case_name='document_processing' from the parent
    # The combined request_tags are ['app', 'parsing']
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Parse this document and extract key information: {document}"}]
    )
    return response.choices[0].message.content

@ingest(request_tags=['summarization'], use_case_name='document_summary')
def summarize_content(content):
    # This function uses use_case_name='document_summary' (overriding the parent)
    # The combined request_tags are ['app', 'summarization']
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Summarize this content: {content}"}]
    )
    return response.choices[0].message.content
User ID Precedence Example
This example demonstrates how the innermost user_id
takes precedence:
from payi.lib.instrument import ingest
from payi.lib.helpers import create_headers
# This example demonstrates how user_id precedence works
@ingest(user_id='default_user', request_tags=['app'])
def process_user_request(actual_user_id, query):
    # This function's user_id is 'default_user'
    response = query_llm(actual_user_id, query)
    return response

@ingest(request_tags=['query'])
def query_llm(user_id, query):
    # This function's decorator doesn't specify user_id,
    # so it inherits 'default_user' from the parent.
    # But we override with the actual user ID in the API call.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": query}],
        extra_headers=create_headers(
            user_id=user_id,  # This takes precedence over the decorator's user_id
            limit_ids=['limit_a', 'limit_b']  # Note: limit_ids always requires an array
        )
    )
    return response.choices[0].message.content