Decorator
Overview
The Pay-i Python SDK offers a powerful function decorator, @track, that makes it easy to organize and track your GenAI consumption. This decorator serves as an alternative to direct annotation with custom headers, providing parameter inheritance capabilities that are especially useful for sequences of related GenAI calls that share the same business context.
Looking for different annotations? If you need to add different tracking information to individual API calls, you might prefer using custom headers directly. The @track decorator is ideal when multiple API calls within a function should share the same business context.
Purpose of the @track Decorator
The primary purpose of this decorator is to annotate your functions with metadata such as:
- Use case names and IDs
- Limit IDs for budget tracking
- User IDs for attribution
When used with Pay-i's instrumentation system (via payi_instrument()), the @track decorator helps maintain consistent tracking across multiple API calls while reducing boilerplate code.
Best Suited For
The @track decorator is particularly well-suited for:
- Function-level annotations that remain consistent across multiple API calls
- Hierarchical tracking structures where nested functions inherit parameters
- Consistent metadata like use case names, limit IDs, and user IDs
- Complex applications with many GenAI calls
For request-specific attributes that vary with each call (like user IDs and limit IDs), consider combining decorators with Custom Headers.
Choose an Operational Mode
Before using decorators, refer to the Operational Approaches documentation to understand the two different ways Pay-i can be integrated and which is best for your scenario.
Recommendation: Get basic API tracking working before adding decorators. This ensures your core integration is functioning properly and makes it easier to troubleshoot any issues.
Setup & Installation
Prerequisites
- Pay-i Python SDK installed (pip install payi)
- A valid Pay-i API key
- One or more supported GenAI Providers (OpenAI, Azure OpenAI, Anthropic, AWS Bedrock)
Initializing Pay-i Instrumentation
Before you can use the @track decorator, you must initialize payi_instrument():
import os
from payi.lib.instrument import payi_instrument
# Method 1: Initialize with config dictionary (simplest approach)
# This automatically creates Pay-i clients internally using environment variables
payi_instrument(config={"proxy": False}) # False for Direct Provider Call with Telemetry
You can also explicitly create and pass a Pay-i client:
import os
from payi import Payi  # Payi client class
from payi.lib.instrument import payi_instrument
# Read API key from environment variables (best practice)
payi_key = os.getenv("PAYI_API_KEY", "YOUR_PAYI_API_KEY")
# Method 2: Create and provide a Pay-i client
payi = Payi(api_key=payi_key)
payi_instrument(payi)
Once you've initialized the instrumentation, import the @track decorator:
from payi.lib.instrument import track
GenAI Provider Client Configuration
For Direct Provider Call with Telemetry
When using Direct Provider Call with Telemetry, configure your GenAI provider client normally (direct to provider):
import os
from openai import OpenAI
# Configure a standard provider client with direct access
openai_key = os.getenv("OPENAI_API_KEY", "YOUR_OPENAI_API_KEY")
client = OpenAI(api_key=openai_key)
For Pay-i as a Proxy
When using Pay-i as a Proxy, configure your GenAI provider client (OpenAI, Azure OpenAI, etc.) to use Pay-i as a proxy:
import os
from openai import OpenAI # Can also be AzureOpenAI or other providers
from payi.lib.helpers import payi_openai_url
# Read API keys from environment variables
payi_key = os.getenv("PAYI_API_KEY", "YOUR_PAYI_API_KEY")
openai_key = os.getenv("OPENAI_API_KEY", "YOUR_OPENAI_API_KEY")
# Configure provider client to use Pay-i as a proxy
client = OpenAI(
    api_key=openai_key,
    base_url=payi_openai_url(),  # Use Pay-i's URL as the base
    default_headers={"xProxy-api-key": payi_key}  # Authenticate with Pay-i
)
Note:
- This is a basic example for OpenAI. For detailed configuration examples with other providers (Azure OpenAI, Anthropic, AWS Bedrock), refer to the Pay-i Auto-Instrumentation guide.
- With proxy setup alone, Pay-i will track all API calls but won't have function annotations. To add annotations, you must also initialize payi_instrument() and use the @track decorator in your code.
Using the @track Decorator
After initializing payi_instrument() and configuring your provider client, you can use the @track decorator to annotate your functions.
How @track works: The @track decorator first executes your decorated function and then handles the GenAI calls based on your Pay-i configuration (Direct Provider Call with Telemetry or Proxy Routing).
Important: When working with streaming responses, be sure to read the stream completely. Pay-i needs the complete token information to accurately track usage and calculate costs. If you don't read the entire stream, you'll have incomplete data for ingestion.
from payi.lib.instrument import track
from payi.lib.helpers import create_headers
@track(use_case_name='document_summary')
def summarize_document(client, document_text, limit_ids, user_id):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Summarize this: {document_text}"}],
        # Pass metadata to Pay-i using extra_headers
        # Note: limit_ids is always passed as a list, even with a single item
        extra_headers=create_headers(user_id=user_id, limit_ids=limit_ids)  # limit_ids=['budget_1', 'team_limit']
    )
    return response.choices[0].message.content
When using the @track decorator:
- The decorator's metadata (e.g., use_case_name, limit_ids, user_id) is added to the context
- Based on your Pay-i configuration, the call is either made directly to the provider or routed through Pay-i's proxy
- Pay-i captures usage data with all the annotations you've provided
- When specifying use_case_name (without use_case_id), the decorator will create an ID automatically, and this same ID will be included on all data ingested within the scope of the decorated function
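As a rough mental model of that ID inheritance, here is a hypothetical toy decorator. Everything below (toy_track, fake_genai_call, the contextvars plumbing) is an illustrative invention, not the SDK's actual implementation:

```python
# Toy model of @track's ID behavior (NOT the real SDK implementation):
# a use_case_name given without a use_case_id gets one freshly generated ID,
# and every call made inside the decorated function sees that same ID.
import contextvars
import functools
import uuid

_use_case = contextvars.ContextVar("use_case", default=None)

def toy_track(use_case_name):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # Create the ID once per decorated call
            token = _use_case.set({"name": use_case_name, "id": str(uuid.uuid4())})
            try:
                return fn(*args, **kwargs)
            finally:
                _use_case.reset(token)
        return wrapper
    return decorator

def fake_genai_call():
    # Stands in for an instrumented provider call reading the current context
    return _use_case.get()["id"]

@toy_track("document_summary")
def summarize():
    return fake_genai_call(), fake_genai_call()

first_id, second_id = summarize()
print(first_id == second_id)  # both calls share the auto-created ID
```

Each invocation of the decorated function gets its own fresh ID; calls inside that invocation all share it.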
The @track decorator works for different types of GenAI applications:
# Example: Document summarization with use case tracking, limits, and user ID
from flask import g  # Application context example

@track(
    use_case_name='document_summary',
    limit_ids=['token_budget'],
    user_id=g.user.id  # User ID from application context
)
def summarize_document(client, document_text):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Summarize this: {document_text}"}],
        # For request-specific tags, use extra_headers
        extra_headers=create_headers(request_tags=['summarization'])
    )
    return response.choices[0].message.content
# Example: Customer support function with use case tracking, limits, and user ID
from django.contrib.auth import get_user  # Another application context example

@track(
    use_case_name='customer_support',
    limit_ids=['support_budget', 'quality_limit'],
    user_id=get_user().id  # User ID from another application context
)
def get_support_response(client, question):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
        # Request-specific tags via headers
        extra_headers=create_headers(request_tags=['chat'])
    )
    return response.choices[0].message.content
Complementary Approaches for Annotations
Pay-i offers four complementary approaches for adding metadata to your GenAI requests:
- Global defaults - Set during initialization with payi_instrument() for application-wide settings
- The @track decorator (covered in this document) - For consistent function-level annotations
- track_context() - For code block-level tracking with additional parameters
- Custom Headers - For individual request-specific annotations
These approaches can be used together effectively, with each suited for different levels of your application.
Parameters Supported by @track
The @track decorator supports these parameters:
Parameter | Description |
---|---|
use_case_name | Name of the use case for tracking purposes |
use_case_id | ID of the use case (generated automatically if not provided) |
use_case_version | Version number of the use case |
limit_ids | List of limit IDs to apply to the wrapped function |
user_id | ID of the user making the request (from application context) |
proxy | Control whether to use proxy mode (True) or ingest mode (False) |
For request-level parameters not supported by @track (like request_tags), refer to the documentation on track_context() and custom headers.
Usage Patterns
Choosing the Right Annotation Approach
Each annotation approach has different strengths:
- The @track decorator excels at function-level annotations that propagate through the entire call tree, providing parameter inheritance for limit_ids, use_case_name, use_case_id, use_case_version, and user_id
- The track_context() function works with arbitrary code blocks (not just functions) and supports additional parameters like request_tags
- Custom headers work well for request-specific attributes that vary with each individual API call
Important: Parameter precedence follows a "latest wins" strategy. When @track and track_context() are used together, whichever executes later in the code flow takes precedence, not because of an inherent hierarchy between methods. For complete details on parameter precedence, refer to the Parameter Precedence in Pay-i Instrumentation documentation.
Decorators vs. Custom Headers
Example 1: Using the @track Decorator for Function-Level Context
from payi.lib.instrument import track
from payi.lib.helpers import create_headers
@track(use_case_name="summarization")
def get_summary(text):
    # All calls in this function inherit the same use_case_name
    # Use extra_headers for request_tags
    response1 = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
        extra_headers=create_headers(request_tags=["important"])
    )
    response2 = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Give me key points from: {text}"}]
    )
    return response1.choices[0].message.content
Example 2: Using Custom Headers for Request-Specific Values
from payi.lib.helpers import create_headers
def get_summary_for_user(text, user_id, limit_id):
    # Each call needs to specify its own headers
    headers = create_headers(
        use_case_name="summarization",
        request_tags=["important"],
        user_id=user_id,  # Varies by request
        limit_ids=[limit_id]  # Varies by request
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
        extra_headers=headers
    )
    return response.choices[0].message.content
Example 3: Using track_context() for Code Block Context
from payi.lib.instrument import track_context
def get_combined_analysis(text):
    # Apply parameters to a specific code block
    with track_context(
        use_case_name="text_analysis",
        request_tags=["analysis", "important"],
        limit_ids=["daily_budget"]
    ):
        # Both API calls inherit the same parameters
        summary = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": f"Summarize: {text}"}]
        )
        entities = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": f"Extract entities from: {text}"}]
        )
    return {
        "summary": summary.choices[0].message.content,
        "entities": entities.choices[0].message.content
    }
Recommended Hybrid Approaches
Combining @track with Custom Headers
from payi.lib.instrument import track
from payi.lib.helpers import create_headers
# Use decorator for consistent function-level metadata
@track(use_case_name="customer_support")
def answer_customer_question(question, customer_id, customer_tier):
    # Use custom headers for request-specific values
    limit_id = f"{customer_tier}_tier_limit"
    headers = create_headers(
        user_id=customer_id,  # Specific to this request
        limit_ids=[limit_id, "global_limit"],  # Specific to this request
        request_tags=["chat"]  # Specific request tags
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
        extra_headers=headers  # Combines with the decorator's context
    )
    return response.choices[0].message.content
This hybrid approach gives you the best of both worlds:
- Consistent function-level context through the @track decorator
- Request-specific values through custom headers
- Reduced code duplication and improved organization
Combining @track with track_context()
When @track and track_context() are used together, the one that executes later takes precedence. Typically in code, this means:
- When a track_context() block exists inside a function decorated with @track, the context manager's values take precedence
- When a decorated function is called from within a track_context() block, the decorator's values take precedence (as they execute later)
from payi.lib.instrument import track, track_context

@track(user_id="default_user")
def process_complex_request(customer_data):
    # First part with inherited user_id
    analyze_data(customer_data)

    # Second part with additional parameters
    with track_context(
        use_case_name="content_generation",
        request_tags=["generation", "marketing"]
    ):
        # Context manager executes later, so its values take precedence here
        # - use_case_name: "content_generation" (from context manager)
        # - user_id: "default_user" (from decorator, not overridden)
        # - request_tags: ["generation", "marketing"] (from context manager)
        generate_content(customer_data)
This approach provides:
- Function-level user attribution
- Block-level use case attribution
- Support for additional parameters in specific blocks
For more details on custom headers, see the Custom Headers documentation.
Available Features and Parameters
The @track Decorator Parameters
The @track decorator accepts the following parameters:
Parameter | Type | Description |
---|---|---|
limit_ids | List[str] | List of limit IDs to associate with the request (always passed as an array, even for a single ID) |
use_case_name | str | Name of the use case for this request |
use_case_id | str | ID of the use case for this request |
use_case_version | int | Version number of the use case |
user_id | str | User ID to associate with the request |
proxy | bool | Control whether to use proxy mode (True) or ingest mode (False) |
The track_context() Function Parameters
The track_context() function supports additional parameters not available in the @track decorator:
Parameter | Type | Description |
---|---|---|
limit_ids | List[str] | List of limit IDs to associate with the request |
use_case_name | str | Name of the use case for this request |
use_case_id | str | ID of the use case for this request |
use_case_version | int | Version number of the use case |
user_id | str | User ID to associate with the request |
request_tags | List[str] | List of tags to associate with the request |
experience_name | str | Name of the experience for tracking |
experience_id | str | ID of the experience (generated automatically if not provided) |
route_as_resource | str | Indicates a specific resource to route as |
resource_scope | str | Specifies a scope for the resource |
proxy | bool | Controls operational mode (True for proxy mode, False for ingest mode) |
Parameter Precedence: Working with Overlapping Parameters
When using multiple instrumentation methods throughout your application, you'll encounter situations where the same parameter appears in multiple places. Pay-i follows a "latest wins" strategy for resolving these overlaps to help you build predictable applications:
- Whichever context is created later in the execution flow (whether from @track or track_context()) takes precedence
- Custom headers have final say at the individual request level
- List parameters (like limit_ids) have special combining behavior rather than simple override
Detailed Precedence Rules: For comprehensive documentation on parameter precedence in nested contexts, combinations of different methods, and special parameter behaviors, see the Parameter Precedence in Pay-i Instrumentation guide.
Nested Context Behavior
When contexts are nested (whether from @track decorators, track_context() blocks, or combinations), the fundamental "latest wins" rule applies, with some parameter-specific behaviors:
Business-level Attribution (use_case_name, use_case_id, use_case_version)
- Following "latest wins", the innermost context's explicit values override outer contexts
- Special handling: When the inner context specifies a name matching the outer context, it preserves the ID
- When the inner context specifies a different name, it gets a new ID
- This ensures consistent business categorization when function calls are related
User-level Attribution (user_id)
- Follows the standard "latest wins" rule - the latest context's value takes precedence
- When an inner context explicitly sets user_id, it overrides any inherited value
Resource Management (limit_ids)
- Special behavior for list parameters: values combine rather than strictly override
- Each new context adds its values to the inherited list (unless explicitly set to empty)
- Any duplicate values are included only once
- Empty lists ([]) explicitly override and clear all inherited values
Request Metadata (request_tags)
- While the @track decorator's signature doesn't include request_tags, these follow the same list parameter handling
- Request tags have the same combining behavior as limit_ids:
- New values combine with inherited values
- Empty lists clear inherited values
- Duplicates are automatically removed
Remember: The "latest wins" rule means that the execution order determines precedence, not the type of context. A track_context() inside a decorated function takes precedence, but a decorated function called inside a track_context() also takes precedence because it executes later.
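These merge rules can be sketched in plain Python. The merge_context helper below is a hypothetical simulation of the documented behavior (scalars follow "latest wins"; list parameters combine, deduplicate, and an explicit empty list clears inherited values), not SDK code:

```python
# Hypothetical simulation of the documented merge rules, NOT Pay-i SDK code.
LIST_PARAMS = {"limit_ids", "request_tags"}

def merge_context(inherited, new):
    """Merge a later (inner) context into an inherited (outer) one."""
    merged = dict(inherited)
    for key, value in new.items():
        if key in LIST_PARAMS and value != []:
            # List parameters combine with inherited values, deduplicated
            combined = merged.get(key, []) + value
            merged[key] = list(dict.fromkeys(combined))  # preserve order
        else:
            # Scalars follow "latest wins"; an explicit [] clears the list
            merged[key] = value
    return merged

outer = {"use_case_name": "customer_workflow",
         "limit_ids": ["project_budget"],
         "user_id": "default_service_account"}
inner = {"limit_ids": ["premium_features", "project_budget"],
         "user_id": "support_agent_124"}

resolved = merge_context(outer, inner)
print(resolved)  # limit_ids combine and deduplicate; user_id: latest wins

cleared = merge_context(outer, {"limit_ids": []})
print(cleared["limit_ids"])  # explicit empty list clears inherited limits
```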
Decorator vs. Headers: Parameter Priority
Following the "latest wins" strategy, custom headers in an API call take precedence over decorator values because they execute later in the code flow:
Business-level Attribution (use_case_name, use_case_id)
- When use_case_name is specified in headers:
- Header values take complete precedence
- Both name and ID values from the @track decorator are ignored entirely
- No mixing of values occurs in this scenario
- When only use_case_id is provided in headers:
- use_case_name from the @track decorator is paired with use_case_id from the headers
- This may create a mismatched name-ID pair if they don't correspond in the Pay-i portal
- The SDK doesn't validate this relationship - validation happens server-side
- Important: Always specify both name and ID together to ensure proper attribution
- When neither is specified in headers:
- The decorator values are used exclusively
User-level Attribution (user_id)
- Values in extra_headers always override @track decorator values
- This makes it easy to switch users at the individual request level
Resource Management (limit_ids)
- Values from both sources are combined
- This enables applying both function-level and request-specific limits
Request Metadata (request_tags)
- Tags from both sources are combined
- This lets you apply both function-level and request-specific metadata
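The header-vs-decorator rules above can be summarized in a small simulation. resolve_request below is a hypothetical sketch of the documented resolution behavior, not the SDK's actual logic:

```python
# Hypothetical sketch of the documented header-vs-decorator resolution rules,
# NOT Pay-i SDK code. Headers execute later in the flow, so they win, with
# the special use_case name/ID pairing behavior described above.
def resolve_request(decorator, headers):
    resolved = {}
    if "use_case_name" in headers:
        # Header name takes complete precedence; decorator name/ID ignored
        resolved["use_case_name"] = headers["use_case_name"]
        resolved["use_case_id"] = headers.get("use_case_id")
    elif "use_case_id" in headers:
        # Decorator name paired with header ID (may mismatch; validated server-side)
        resolved["use_case_name"] = decorator.get("use_case_name")
        resolved["use_case_id"] = headers["use_case_id"]
    else:
        # Neither in headers: decorator values used exclusively
        resolved["use_case_name"] = decorator.get("use_case_name")
        resolved["use_case_id"] = decorator.get("use_case_id")
    # user_id: header value always overrides the decorator value
    resolved["user_id"] = headers.get("user_id", decorator.get("user_id"))
    # limit_ids and request_tags: values from both sources are combined
    for key in ("limit_ids", "request_tags"):
        combined = decorator.get(key, []) + headers.get(key, [])
        resolved[key] = list(dict.fromkeys(combined))  # deduplicate, keep order
    return resolved

decorator = {"use_case_name": "customer_support", "user_id": "default_user",
             "limit_ids": ["support_budget"]}
headers = {"user_id": "customer_42", "limit_ids": ["rate_limit"],
           "request_tags": ["chat"]}
print(resolve_request(decorator, headers))
```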
Practical Example: Using Parameter Precedence
Here's a real-world example showing how these precedence rules work together:
# Outer workflow function with default settings
@track(
    # Business-level attribution
    use_case_name='customer_workflow',
    # Resource management
    limit_ids=['project_budget'],
    # User-level attribution
    user_id='default_service_account'
)
def handle_customer_request(request_data):
    # All parameters apply to this function and any nested functions
    processed_data = request_data  # placeholder for any pre-processing

    # Inner function with additional settings
    @track(
        # Additional resource management
        limit_ids=['premium_features'],
        # Override user attribution
        user_id='support_agent_124'
    )
    def generate_personalized_response(customer_data, customer_id):
        # Final parameter resolution:
        # - use_case_name: 'customer_workflow' (inherited from outer)
        # - limit_ids: ['project_budget', 'premium_features'] (combined)
        # - user_id: 'support_agent_124' (inner takes precedence)

        # This specific API call can further override with headers
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": customer_data}],
            extra_headers=create_headers(
                # Override user attribution at request level
                user_id=customer_id,
                # Add another limit
                limit_ids=['rate_limit']
            )
        )
        return response.choices[0].message.content

    return generate_personalized_response(processed_data, request_data['customer_id'])
This pattern gives you precise control over attribution, resources, and metadata at each level of your application.
IMPORTANT: For more information on using custom headers as an alternative or complementary approach, refer to Complementary Approaches for Annotations and the Custom Headers documentation.
Advanced Examples
Nested Decorators with Inheritance
This example demonstrates how parameters are inherited when decorators are nested:
from payi.lib.instrument import track
from payi.lib.helpers import create_headers

@track(use_case_name='document_processing')
def process_document(document):
    # First process the document
    parsed_content = parse_document(document)
    # Then summarize it
    summary = summarize_content(parsed_content)
    return summary

@track()  # Inherits use_case_name from parent context
def parse_document(document):
    # This function inherits use_case_name='document_processing' from the parent
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Parse this document and extract key information: {document}"}],
        extra_headers=create_headers(request_tags=['parsing'])
    )
    return response.choices[0].message.content

@track(use_case_name='document_summary')
def summarize_content(content):
    # This function uses use_case_name='document_summary' (overriding the parent)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Summarize this content: {content}"}],
        extra_headers=create_headers(request_tags=['summarization'])
    )
    return response.choices[0].message.content
User ID Precedence Example
This example demonstrates how the innermost user_id takes precedence:
from payi.lib.instrument import track
from payi.lib.helpers import create_headers

# This example demonstrates how user_id precedence works
@track(user_id='default_user')
def process_user_request(actual_user_id, query):
    # This function's user_id is 'default_user'
    response = query_llm(actual_user_id, query)
    return response

@track()  # Inherits user_id from parent
def query_llm(user_id, query):
    # This function's decorator doesn't specify user_id
    # So it inherits 'default_user' from the parent
    # But we override with the actual user ID in the API call
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": query}],
        extra_headers=create_headers(
            user_id=user_id,  # This takes precedence over the decorator's user_id
            limit_ids=['limit_a', 'limit_b'],  # Note: limit_ids always requires an array
            request_tags=['query']  # Add request tags via headers
        )
    )
    return response.choices[0].message.content