Day 3: Tracking Users with track_context()
📋 Overview
track_context() allows you to organize and track GenAI consumption at a fine-grained level within a larger application. Unlike the @track decorator, you can use track_context() any number of times within a function, so you can attach metadata whose values are only known at runtime.
Purpose of the track_context() context manager
track_context() is used to annotate your functions with dynamic metadata such as the following, all of which appear in the sketch after this list:
- Use Case Steps
- User/Account IDs
- Custom Properties
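For instance, all three kinds of annotation can be attached in a single call. The snippet below is a minimal sketch rather than an official example: the step name, user ID, account name, and properties are placeholder values, and it assumes payi_instrument() is importable from payi.lib.instrument and auto-instruments the OpenAI client.

```python
from openai import OpenAI
from payi.lib.instrument import payi_instrument, track_context

payi_instrument()  # one-time initialization (see the note below)
client = OpenAI()

# Placeholder values; in practice these come from your application at runtime
with track_context(
    use_case_step="summarize_ticket",           # a named step within a larger workflow
    user_id="user-1234",                        # the user initiating the call
    account_name="acme-corp",                   # the account the user belongs to
    request_properties={"channel": "support"},  # custom key:value properties
):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Summarize this support ticket."}],
    )
```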
Best Suited For
The track_context() context manager is particularly well-suited for:
- Dynamic parameters that are only available at runtime
- Code block-specific annotations where you don't want to create a separate function
- Temporary parameter overrides within a larger function
- Complex workflows with different tracking needs for different sections (see the sketch after this list)
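To illustrate the last two points, the hypothetical sketch below opens several track_context() blocks inside a single function, each with its own annotations. The step names and properties are placeholders, and the payi_instrument() setup is assumed to match your environment.

```python
from openai import OpenAI
from payi.lib.instrument import payi_instrument, track_context

payi_instrument()
client = OpenAI()

def handle_ticket(ticket_text: str, user_id: str) -> str:
    # Section 1: classification, tracked as its own step
    with track_context(user_id=user_id, use_case_step="classify"):
        category = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": f"Classify this ticket: {ticket_text}"}],
        ).choices[0].message.content or "unknown"

    # Section 2: drafting a reply, tracked separately with block-scoped annotations
    with track_context(
        user_id=user_id,
        use_case_step="draft_reply",
        request_properties={"ticket_category": category},  # only applied to requests in this block
    ):
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": f"Draft a reply to: {ticket_text}"}],
        ).choices[0].message.content

    return reply or ""
```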
Basic Example: Tracking Users
The track_context() context manager creates a scope that applies to all GenAI calls made within its block. This behavior gives you precise control over context propagation.
Note that before you can use the track_context() context manager, you must initialize payi_instrument().
```python
from openai import OpenAI
from payi.lib.instrument import payi_instrument, track_context

# Initialize Pay-i instrumentation once, before making any tracked GenAI calls
payi_instrument()

client = OpenAI()

def my_function(prompt):
    # get_current_user_id() is an application-specific helper
    user_id = get_current_user_id()

    # Use track_context for values that are only known at runtime
    with track_context(user_id=user_id):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}]
        )
    return response.choices[0].message.content
```

Supported Parameters
The track_context() context manager supports the following parameters; several of them are combined in the sketch after the table. All parameters are optional.
| Parameter | Description | Type | 
|---|---|---|
| use_case_name | Name of the use case for tracking purposes | str | 
| use_case_id | A custom ID for the use case Instance. | str | 
| use_case_version | Version number of the use case | int | 
| use_case_step | Uniquely identifies an individual step within a larger agent for analytics and data pivoting. This is akin to giving a durable name to a commonly executed span in OpenTelemetry. As you change the implementation, you can see how the tracked characteristics of the step evolve over time. | str | 
| limit_ids | List of limit IDs to apply to all requests made in the context manager's block | list[str] | 
| user_id | ID of the user initiating the use case | str | 
| account_name | The account to which the user belongs, for grouping purposes | str | 
| request_properties | Any custom key:value properties to be added to all requests made in the context manager's block. | dict[str,str] | 
| use_case_properties | Any custom key:value properties to be added to all use case Instances made in the context manager's block. | dict[str,str] | 
| price_as_resource | Tells Pay-i which resource to use when calculating costs for requests made within the context manager. This is useful when the resource name is not immediately determinable (such as when leveraging Azure OpenAI deployments), or when using custom resources with custom pricing, so that costs can be tabulated correctly. Note that this may have unintended results if you call several different models within the context manager, since all of the calls will be priced as the chosen resource. | str | 
| price_as_category | Optionally used in conjunction with price_as_resource, this parameter also tells Pay-i which Category + Resource combination should be used when calculating costs. If price_as_resource is specified and price_as_category is not, the Pay-i SDK will automatically infer the category based on the provider SDK that is being instrumented. | str | 
| resource_scope | Used to determine the appropriate pricing in Azure AI Foundry for resources deployed at various scopes. For more details, see Resource Scopes. | str | 
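As an illustration of how several of these parameters combine, the hypothetical sketch below prices calls made through an Azure OpenAI deployment as a specific resource, applies a limit, and attaches custom properties to every request in the block. The deployment name, limit ID, resource name, and credentials are placeholders, not prescribed values.

```python
from openai import AzureOpenAI
from payi.lib.instrument import payi_instrument, track_context

payi_instrument()

# Placeholder Azure OpenAI configuration
client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",
    api_key="YOUR_API_KEY",
    api_version="2024-02-01",
)

with track_context(
    use_case_name="ticket_triage",          # name of the use case
    use_case_version=2,                     # version of the use case
    user_id="user-1234",                    # user initiating the use case
    limit_ids=["monthly-budget"],           # placeholder limit ID
    price_as_resource="gpt-4o-mini",        # price the deployment's calls as this resource
    request_properties={"env": "staging"},  # added to every request in the block
):
    response = client.chat.completions.create(
        model="my-deployment-name",  # Azure deployment name, not a model name
        messages=[{"role": "user", "content": "Hello"}],
    )
```

Because price_as_category is omitted here, the SDK would infer the category from the instrumented provider SDK, as described above.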