Combined Annotation Approaches
Overview
Pay-i's four annotation methods (global defaults, the `@track` decorator, `track_context()`, and custom headers) can be used together to provide comprehensive instrumentation across all levels of your application. This document explains how to effectively combine these approaches and understand parameter precedence.
Parameter Precedence: The "Latest Wins" Strategy
When using multiple annotation methods together, Pay-i follows a fundamental "latest wins" strategy:
The annotation method that executes latest in the code flow takes precedence.
This is a critical concept to understand when combining different methods:
- Custom headers always have the final say because they execute at the individual API call level (latest in execution)
- When `track_context()` and `@track` are used together, whichever executes later takes precedence
- Global defaults (from `payi_instrument()`) provide base values but are overridden by any later method
This time-based precedence (rather than method-based) means both of these patterns are possible:
- `track_context()` within a decorated function (context manager overrides decorator; see the sketch below)
- A decorated function called within a `track_context()` block (decorator overrides context manager)
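For example, here is a minimal sketch of the first pattern. As in the other examples on this page, imports and client setup are omitted; the function, model, and `client` instance are hypothetical, with `client` assumed to be an OpenAI client already instrumented via `payi_instrument()`:

```python
@track(use_case_name="ticket_summary")  # Applied when the function is called
def summarize_ticket(ticket_text):
    # Annotated with use_case_name="ticket_summary" from the decorator
    client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Summarize: {ticket_text}"}],
    )

    # The context manager executes later than the decorator, so it wins
    with track_context(use_case_name="ticket_audit"):
        # Annotated with use_case_name="ticket_audit"
        client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": f"Audit: {ticket_text}"}],
        )
```

Flipping the nesting (calling a decorated function inside a `track_context()` block) gives the opposite result, because the decorator then executes later.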
Parameter-Specific Behaviors
While the "latest wins" rule applies broadly, different parameter types have specific behaviors:
String Parameters (use_case_name, user_id, etc.)
- Simple override: The latest value completely replaces earlier values
- Empty strings (`""`) explicitly clear inherited values
- `None` values inherit from the earlier context
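A minimal sketch of the difference between `None` and `""` (the function and parameter values are hypothetical):

```python
@track(use_case_name="reporting", user_id="system")
def build_report(data):
    with track_context(user_id=None):
        # user_id is None, so calls here inherit user_id="system" from the decorator
        ...

    with track_context(user_id=""):
        # The empty string explicitly clears the inherited value,
        # so calls here carry no user_id
        ...
```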
List Parameters (limit_ids, request_tags)
- Special combining behavior: New lists combine with inherited lists rather than replacing them
- Duplicate values are automatically removed from combined lists
- Empty lists (`[]`) explicitly override and clear all inherited values
- `None` values inherit the entire list from the earlier context
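A minimal sketch of the combining behavior (the function, limit IDs, and tags are hypothetical):

```python
@track(limit_ids=["team_budget"], request_tags=["nightly"])
def run_batch_job(job):
    with track_context(limit_ids=["job_budget"], request_tags=["nightly", "retry"]):
        # limit_ids combine: ["team_budget", "job_budget"]
        # request_tags combine with duplicates removed: ["nightly", "retry"]
        ...

    with track_context(limit_ids=[]):
        # The empty list explicitly clears every inherited limit_id for this block
        ...
```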
Name-ID Pairs (use_case_name/id, experience_name/id)
- Special relationship handling for related parameters
- When only a name is provided, a new ID is generated (unless preserving an existing name)
- When only an ID is provided, it pairs with the inherited name
- When both are provided, they completely override any inherited values
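A minimal sketch of the pairing behavior, assuming the SDK accepts a `use_case_id` parameter alongside `use_case_name` as the naming above implies (the ID values are hypothetical):

```python
@track(use_case_name="claims_intake", use_case_id="uc_intake_001")
def process_claim(claim):
    with track_context(use_case_name="claims_review"):
        # Only a name is provided, so a new ID is generated for it
        ...

    with track_context(use_case_id="uc_intake_002"):
        # Only an ID is provided, so it pairs with the inherited name "claims_intake"
        ...

    with track_context(use_case_name="claims_appeal", use_case_id="uc_appeal_001"):
        # Both are provided, so they completely override the inherited pair
        ...
```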
Common Combinations and Use Cases
Global Defaults + @track
This combination provides application-wide defaults that can be overridden at the function level:
```python
# At application startup
payi_instrument(config={
    "use_case_name": "my_application",
    "user_id": "system"
})

# In a specific module
@track(use_case_name="user_management")
def create_user(user_data):
    # Overrides the application-wide use_case_name for this function,
    # but inherits the global user_id="system"
    response = client.chat.completions.create(...)
    return process_response(response)
```
@track + Custom Headers
This combination allows function-level context with request-specific overrides:
@track(use_case_name="document_processing", limit_ids=["department_budget"])
def process_document(document, user_id=None):
# All calls have the same use_case_name and limit_ids
# But this specific call has request-specific parameters
response = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": "Process this document: " + document}],
extra_headers=create_headers(
user_id=user_id, # Override at request level
request_tags=["processing", "document_type_1"] # Request-specific tags
)
)
return response
track_context() + Custom Headers
Similar to the previous combination, but with code block scope instead of function scope:
```python
def process_multiple_documents(documents, department_id):
    for i, doc in enumerate(documents):
        # Each document gets processed in its own context
        with track_context(
            use_case_name="batch_processing",
            limit_ids=[f"department_{department_id}_budget"]
        ):
            # Document-specific parameters via headers
            response = client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": "Process: " + doc}],
                extra_headers=create_headers(
                    request_tags=[f"document_{i}", "batch_process"]
                )
            )
            process_result(response)
```
@track + track_context()
This combination provides layered contextualization, with precise control over parameter inheritance:
```python
@track(
    use_case_name="customer_support",
    limit_ids=["support_budget"],
    user_id="support_agent"
)
def handle_customer_ticket(ticket_data, priority):
    # Initial analysis in the function's context
    ticket_analysis = analyze_ticket(ticket_data)

    # For high-priority tickets, use a separate context
    if priority == "high":
        with track_context(
            limit_ids=[],  # Clear limits for high-priority tickets
            request_tags=["high_priority", "escalated"]
        ):
            # This context inherits use_case_name and user_id from the decorator,
            # but clears limit_ids and adds request_tags
            response = generate_emergency_response(ticket_data, ticket_analysis)
    else:
        # Normal tickets use the function's context
        response = generate_standard_response(ticket_data, ticket_analysis)

    return response
```
All Methods Together
For complex applications, you can use all four annotation methods together for comprehensive instrumentation:
```python
# At application startup - global defaults
payi_instrument(config={
    "use_case_name": "customer_service_app",
    "limit_ids": ["global_limit"]
})

# Function-level context with @track
@track(
    use_case_name="ticket_resolution",
    limit_ids=["department_budget"]
)
def resolve_ticket(ticket_id, agent_id):
    # The function uses the ticket_resolution use case
    # and combines global_limit and department_budget
    ticket_data = get_ticket(ticket_id)

    # Code block context with track_context()
    with track_context(
        user_id=agent_id,  # Assign to a specific agent
        request_tags=["resolution"]
    ):
        # This block:
        # - Inherits use_case_name from the decorator
        # - Inherits the combined limit_ids from the global config and the decorator
        # - Uses agent_id for user_id
        # - Adds the "resolution" tag

        # Individual request with custom headers
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": f"Resolve ticket: {ticket_data}"}],
            extra_headers=create_headers(
                request_tags=["automated", "first_response"],  # Additional tags
                limit_ids=[f"agent_{agent_id}_limit"]  # Agent-specific limit
            )
        )
        # This request:
        # - Inherits all context, combining it with request-specific values
        # - Final tags are ["resolution", "automated", "first_response"]
        # - Final limits are ["global_limit", "department_budget", "agent_X_limit"]

    return process_response(response)
```
Real-World Examples
Multi-tenant SaaS Application
```python
# Global defaults for the application
payi_instrument(config={
    "use_case_name": "saas_platform",
    "limit_ids": ["global_platform_limit"]
})

# Tenant-specific operation with track_context
def process_tenant_operation(tenant_id, user_id, operation_data):
    # Set up the tenant context
    with track_context(
        limit_ids=[f"tenant_{tenant_id}_limit"],
        user_id=f"tenant_{tenant_id}"
    ):
        # All operations for this tenant share these parameters

        # User-specific operation with function context
        @track(user_id=user_id)  # Override with the actual user
        def execute_user_operation(data):
            # This context:
            # - Inherits use_case_name="saas_platform" from the global defaults
            # - Combines limit_ids from the global defaults and the tenant context
            # - Overrides user_id with the specific user
            response = client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": f"Process: {data}"}],
                extra_headers=create_headers(
                    request_tags=[f"operation_{data['type']}"],
                    limit_ids=[f"user_{user_id}_limit"]
                )
            )
            return response

        # Call the operation in the tenant context
        result = execute_user_operation(operation_data)

    return result
```
Multi-stage Processing Pipeline
@track(use_case_name="data_pipeline")
def process_data_pipeline(input_data, user_id=None):
results = {}
# Stage 1: Data extraction
with track_context(request_tags=["extraction"]):
results["extraction"] = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": f"Extract data from: {input_data}"}],
extra_headers=create_headers(user_id=user_id)
)
# Stage 2: Data analysis
with track_context(request_tags=["analysis"]):
extracted_data = results["extraction"].choices[0].message.content
results["analysis"] = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": f"Analyze this data: {extracted_data}"}],
extra_headers=create_headers(user_id=user_id)
)
# Stage 3: Report generation
with track_context(request_tags=["report"]):
analysis = results["analysis"].choices[0].message.content
results["report"] = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": f"Generate report from analysis: {analysis}"}],
extra_headers=create_headers(user_id=user_id)
)
return results
Best Practices for Combined Annotations
- Use Each Method for Its Strengths:
  - Global defaults for application-wide settings
  - `@track` for function-level business context
  - `track_context()` for code blocks and additional parameters
  - Custom headers for request-specific values
- Think in Terms of Scope:
  - Wider scopes (global, function) for consistent parameters
  - Narrower scopes (block, request) for specific overrides
- Be Mindful of Parameter Precedence:
  - Remember the "latest wins" rule based on execution order
  - Understand the special combining behavior of list parameters
  - Use empty values (`""`, `[]`) explicitly to clear inherited values
- Organize by Business Domain:
  - Group related operations under the same use case
  - Use request_tags to differentiate specific operations
  - Use limit_ids consistently across related functions
- Document Your Approach:
  - Add comments explaining parameter inheritance in complex cases
  - Be consistent in how you combine annotation methods