track_context()
Overview
The Pay-i Python SDK provides the track_context() function, which creates a context manager for applying annotations to specific code blocks. This approach complements the @track decorator but provides more flexibility and supports additional parameters.
Purpose of track_context()
The primary purpose of this context manager is to:
- Apply annotations to arbitrary code blocks without creating a function
- Support additional parameters, such as request_tags, that aren't available in the @track decorator
- Provide temporary parameter overrides within a larger function
- Create nested tracking contexts with precise scope control
Best Suited For
The track_context() function is particularly well-suited for:
- Code block-specific annotations where you don't want to create a separate function
- Additional parameters, such as request_tags, that the @track decorator does not support
- Temporary parameter overrides within a larger function
- Complex workflows with different tracking needs for different sections
- Dynamic parameters that are only available at runtime
Setup & Prerequisites
The same setup requirements apply as for the @track decorator:
- Pay-i Python SDK installed (pip install payi)
- Pay-i instrumentation initialized with payi_instrument()
- One or more supported GenAI provider clients configured
import os
from payi.lib.instrument import payi_instrument, track_context

# Initialize Pay-i instrumentation
payi_instrument()  # Defaults to Direct Provider Call with Telemetry
Basic Usage
The track_context() function creates a context manager that applies to all API calls within its scope:
from payi.lib.instrument import track_context

# Simple usage with a single parameter
with track_context(use_case_name="data_analysis"):
    # All API calls in this block use the "data_analysis" use case
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Analyze this data..."}]
    )
Examples
The context manager supports multiple parameters and can be used in various ways:
# Example 1: With multiple parameters including request_tags
with track_context(
    use_case_name="content_generation",
    user_id="user_123",
    limit_ids=["daily_budget", "quality_tier"],
    request_tags=["marketing", "blog_post"]
):
    # All API calls in this block share these parameters
    title_response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Generate a blog title about AI"}]
    )
    content_response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Write a blog post about AI based on this title: " + title_response.choices[0].message.content}]
    )
Request-Specific Parameters with Nested Contexts
You can nest track_context() calls to apply request-specific parameters:
# Outer context with shared parameters
with track_context(
    use_case_name="content_generation",
    limit_ids=["daily_budget"]
):
    # First API call with additional parameters
    with track_context(request_tags=["title_generation"]):
        title_response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": "Generate a blog title about AI"}]
        )

    # Second API call with different parameters
    with track_context(request_tags=["content_generation"]):
        content_response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": "Write a blog post about AI"}]
        )
This nested approach:
- Keeps all annotations in a consistent format
- Makes request context visible at the code structure level
- Leverages the parameter inheritance system
- Avoids mixing with the custom headers approach
Handling Dynamic Parameters
One key advantage of track_context() is that it can handle dynamic parameters that are only available at runtime:
def process_user_requests(client, users, queries):
    # Enumerate to get both index and values
    for i, (user_id, query) in enumerate(zip(users, queries)):
        # Each user gets their own tracking context
        with track_context(
            use_case_name="personalized_response",
            user_id=user_id,  # Dynamic user ID
            limit_ids=[f"user_{user_id}_limit", "global_limit"],  # Dynamic limit IDs
            request_tags=[f"query_{i}", "user_query"]  # Dynamic tags with index
        ):
            response = client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": query}]
            )
            process_response(response)
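When the dynamic parameters grow beyond a couple of values, it can help to assemble them in one place. A minimal sketch of that pattern follows; build_annotations is a hypothetical helper, not part of the Pay-i API, and simply mirrors the loop above:

```python
def build_annotations(user_id: str, index: int) -> dict:
    # Hypothetical helper: collects the dynamic track_context() keyword
    # arguments for one user query. Every name here is illustrative.
    return {
        "use_case_name": "personalized_response",
        "user_id": user_id,
        "limit_ids": [f"user_{user_id}_limit", "global_limit"],
        "request_tags": [f"query_{index}", "user_query"],
    }
```

The loop body then becomes `with track_context(**build_annotations(user_id, i)): ...`, keeping the annotation logic testable on its own.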
Nested Contexts
Context managers can be nested to create layers of instrumentation:
# Outer context with base parameters
with track_context(use_case_name="data_processing", limit_ids=["project_limit"]):
    # Process data in multiple stages

    # Stage 1: Data extraction
    with track_context(request_tags=["extraction"]):
        extraction_result = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": "Extract key data points from this text..."}]
        )

    # Stage 2: Data analysis
    with track_context(request_tags=["analysis"]):
        analysis_result = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": f"Analyze these data points: {extraction_result.choices[0].message.content}"}]
        )
Supported Parameters
The track_context() function supports these parameters:
Parameter | Type | Description |
---|---|---|
limit_ids | List[str] | List of limit IDs to associate with the request |
use_case_name | str | Name of the use case for this request |
use_case_id | str | ID of the use case for this request |
use_case_version | int | Version number of the use case |
user_id | str | User ID to associate with the request |
request_tags | List[str] | List of tags to associate with the request |
experience_name | str | Name of the experience for tracking |
experience_id | str | ID of the experience (generated automatically if not provided) |
route_as_resource | str | Indicates a specific resource to route as |
resource_scope | str | Specifies a scope for the resource |
proxy | bool | Controls operational mode (True for proxy mode, False for ingest mode) |
Parameter Inheritance
When used with other annotation methods, track_context() follows the same "latest wins" principle that applies to all Pay-i annotations:
- The context that executes later in the code flow takes precedence
- List parameters (like limit_ids and request_tags) combine values rather than replacing them
For comprehensive details on parameter behavior:
- Combined Annotations - For practical examples of combining different annotation approaches
- Parameter Precedence in Pay-i Instrumentation - For detailed technical reference on parameter inheritance
Common Use Cases
User Attribution in Web Applications
from flask import g, jsonify, request  # Flask request context and helpers

def handle_api_request():
    # Get user from authenticated session
    current_user = g.user

    with track_context(
        use_case_name="api_service",
        user_id=current_user.id,  # Attribution from application context
        limit_ids=[f"tier_{current_user.tier}", "global_api"]
    ):
        # Process the request using GenAI
        response = process_with_genai(request.data)

    return jsonify(response)
Multi-stage Processing
def multi_stage_document_processing(document):
    result = {}

    # Stage 1: Extract information
    with track_context(use_case_name="document_processing", request_tags=["extraction"]):
        result["extracted_data"] = extract_information(document)

    # Stage 2: Classify document
    with track_context(use_case_name="document_processing", request_tags=["classification"]):
        result["classification"] = classify_document(document, result["extracted_data"])

    # Stage 3: Generate summary
    with track_context(use_case_name="document_processing", request_tags=["summarization"]):
        result["summary"] = generate_summary(document, result["extracted_data"])

    return result
Temporarily Bypassing Limits
from payi.lib.helpers import create_headers

def process_admin_request(is_admin, query):
    if is_admin:
        # Admin requests bypass normal limits
        with track_context(limit_ids=[]):  # Empty list clears inherited limits
            response = client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": query}]
            )
    else:
        # Normal users have standard limits
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": query}],
            extra_headers=create_headers(limit_ids=["standard_user_limit"])
        )
    return response
Related Resources