Day 3: Applying Use Cases with @track(use_case_name)
📋 Overview
Welcome to Day 3 of our 5-day learning path! In Day 2, you learned how to track individual users with the track_context() function. Today, we'll take another step forward by exploring how to assign specific Use Cases to your GenAI calls.
Today's Goal: Learn how to override the default Use Case and assign specific Use Cases to different functions using @track(use_case_name).
Recap: Annotations So Far
As we covered in Day 1, when you initialize Pay-i with payi_instrument(), all GenAI calls are automatically assigned to a default Use Case (named after your Python module). This is convenient for getting started, but not ideal for production applications with multiple AI-powered features.
In Day 2, we introduced the track_context() function to add user context at runtime. Today, we'll introduce the @track decorator, which is ideal for static annotations like use case names.
Why Use @track for Use Cases?
While track_context() is best for runtime values like user IDs that change per request, the @track decorator is perfect for static values like use case names that remain the same for every call to a particular function (a minimal sketch follows the list below):
- Use case names are typically defined once and remain constant
- They're associated with the function's purpose, not with request-specific data
- Using the decorator makes the code's intention clear at the function definition
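To make that division of labor concrete, here is a minimal structural sketch: the use case name lives on the decorator at the definition site, while the per-request user ID is supplied at call time with track_context(). The GenAI call itself is elided here; full, runnable examples appear later on this page.
```python
from payi.lib.instrument import payi_instrument, track, track_context

payi_instrument()

# Static: every call to this function is reported under the "chatbot" Use Case.
@track(use_case_name="chatbot")
def get_chatbot_response(prompt, user_id):
    # Dynamic: the user ID differs per request, so it is supplied at runtime.
    with track_context(user_id=user_id):
        ...  # the actual GenAI call goes here (see the full examples below)
```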
Why Use Custom Use Cases?
Assigning specific Use Cases to different functions in your application provides several important benefits:
- Cost allocation: Understand which features are driving your AI costs
- Usage analysis: Analyze usage patterns across different AI workflows
- Granular limits: Apply budget constraints to specific features
- Performance tracking: Monitor response times and token usage per feature
- KPI tracking: Define and track specific Key Performance Indicators for each Use Case
- Versioning: Track different versions of your AI implementations (varying models, prompts, or providers for the same functional purpose)
- A/B testing: Run controlled experiments between different AI approaches and measure their performance differences
🔍 Prerequisite: Creating Use Cases
Important: Before you can specify a Use Case in code with @track(use_case_name), that Use Case definition must exist in Pay-i. If you specify a Use Case name that doesn't exist, you might encounter errors depending on your operational mode. For specific error handling strategies, see the Handling Errors documentation.
For the code examples below to work, you need to create these exact Use Case names in the Pay-i Developer Portal:
1. Log in to developer.pay-i.com
2. Navigate to your application dashboard
3. Click on Use Cases in the left navigation menu
4. Click the New Use Case button in the top right
5. Enter the following information:
   - Name: chatbot (use lowercase with underscores)
   - Description: AI responses for our customer-facing chatbot. Tracks all conversational AI interactions with end users.
6. Make sure "Logging Enabled" is checked if you want Pay-i to also store prompts and completions from each GenAI call, not just metadata
7. Click Create
Important: Use Case names must be unique and may contain only alphanumeric characters, periods (.), hyphens (-), and underscores (_). Spaces are not allowed, and names are limited to a maximum of 64 characters. Pay-i's convention is to use lowercase with underscores (snake_case). Pay careful attention to the exact spelling and case, as Use Case names are case-sensitive when referenced in code. (A small local validation sketch of these rules appears just after these steps.)
- Repeat these steps to create a second Use Case with:
  - Name: document_summarizer (use lowercase with underscores)
  - Description: Summarization of documents and long-form text. Used for content compression and information extraction.
- You will see your two newly created use cases displayed in the Pay-i Portal with:
  - Their names (chatbot and document_summarizer)
  - Their descriptions visible in the middle column
  - Version numbers (starting at 1)
  - Green "logging" tags indicating that logging is enabled
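If you generate Use Case names in code, it can help to check them against the naming rules above before creating or referencing them. The following is a minimal local sketch of those rules only; the regular expression and helper function are our own illustration and are not part of the Pay-i SDK.
```python
import re

# Naming rules from the note above: alphanumeric characters, periods,
# hyphens, and underscores only; no spaces; at most 64 characters.
# This helper is illustrative only -- it is not part of the Pay-i SDK.
_USE_CASE_NAME_RE = re.compile(r"^[A-Za-z0-9._-]{1,64}$")

def is_valid_use_case_name(name: str) -> bool:
    """Return True if name satisfies the documented naming rules."""
    return bool(_USE_CASE_NAME_RE.match(name))

assert is_valid_use_case_name("chatbot")
assert is_valid_use_case_name("document_summarizer")
assert not is_valid_use_case_name("chat bot")   # spaces are not allowed
assert not is_valid_use_case_name("x" * 65)     # longer than 64 characters
```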
🔑 Core Concept: @track(use_case_name)
The @track decorator accepts a use_case_name parameter that lets you specify which Use Case should be associated with GenAI calls made within a function.
Note: The use_case_name parameter applies to all GenAI calls made within the decorated function, including any calls made in subfunctions that are called by your decorated function. This inheritance behavior allows you to easily categorize entire workflows with a single decorator (see the short nested-call sketch after the Basic Usage example below).
Basic Usage
Here's how to assign a specific Use Case to a function:
```python
import os
from openai import OpenAI
from payi.lib.instrument import payi_instrument, track

# Initialize Pay-i instrumentation
payi_instrument()

# Configure OpenAI client
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Apply the decorator with a specific Use Case
@track(use_case_name="chatbot")
def get_chatbot_response(prompt):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

# Apply a different Use Case to another function
@track(use_case_name="document_summarizer")
def summarize_document(text):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarize the following text concisely:"},
            {"role": "user", "content": text}
        ]
    )
    return response.choices[0].message.content

# Call the functions to see them in action
chat_response = get_chatbot_response("Tell me about Pay-i instrumentation")
print(f"Chatbot response: {chat_response[:50]}...\n")

long_text = """
Pay-i is a comprehensive instrumentation platform for GenAI applications.
It provides cost tracking, usage monitoring, and performance analytics for
various AI providers including OpenAI, Anthropic, Azure OpenAI, and AWS Bedrock.
With Pay-i, you can set budgets, track user-specific usage, and categorize
different AI workflows using Use Cases.
"""

summary = summarize_document(long_text)
print(f"Document summary: {summary[:50]}...")
```
In this example:
- All GenAI calls within get_chatbot_response() will be assigned to the chatbot Use Case
- All GenAI calls within summarize_document() will be assigned to the document_summarizer Use Case
- Any other GenAI calls in your application will still go to the default Use Case
Combining Use Cases and User IDs
You can combine @track for static use case names with track_context() for dynamic user IDs:
```python
from payi.lib.instrument import track, track_context

# Use @track for the static use case name
@track(use_case_name="chatbot")
def get_personalized_response(prompt, user_id):
    # Use track_context for the dynamic user ID
    with track_context(user_id=user_id):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message.content
```
This approach gives you the best of both worlds: you can analyze usage by both feature (Use Case) and user, while using the right tool for each type of information.
Use Cases in Web Applications
For web applications, here's a complete Flask example showing how to structure your code:
```python
# Flask example
from flask import Flask, request, g
import os
from openai import OpenAI
from payi.lib.instrument import payi_instrument, track, track_context

# Initialize Pay-i instrumentation
payi_instrument()

# Configure OpenAI client
client = OpenAI()  # Uses OPENAI_API_KEY environment variable

app = Flask(__name__)

@app.route('/chat')
def chat_endpoint():
    # User ID is available in the request context (see the note on g.user below)
    prompt = request.args.get('prompt')
    return get_chatbot_response(prompt)

# Properly combine @track for use case and track_context for user ID
@track(use_case_name="chatbot")  # Static use case name
def get_chatbot_response(prompt):
    # Dynamic user ID using track_context
    with track_context(user_id=g.user.id):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message.content

@app.route('/summarize')
def summarize_endpoint():
    document = request.args.get('document')
    return summarize_document(document)

@track(use_case_name="document_summarizer")  # Static use case name
def summarize_document(text):
    # Dynamic user ID using track_context
    with track_context(user_id=g.user.id):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "Summarize the following text concisely:"},
                {"role": "user", "content": text}
            ]
        )
        return response.choices[0].message.content

if __name__ == "__main__":
    app.run(debug=True)
```
This structure ensures that different features of your application are tracked separately, while still maintaining user context. The user ID is retrieved directly from Flask's global g object rather than being passed as a parameter.
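The example above assumes that something has already populated g.user for the current request. How you do that depends on your authentication setup; the snippet below is one hedged illustration using Flask's before_request hook, with a query-parameter lookup standing in for real auth logic.
```python
from types import SimpleNamespace

@app.before_request
def load_current_user():
    # Placeholder: in a real application, resolve the user from your session,
    # token, or auth middleware instead of a query parameter.
    user_id = request.args.get("user_id", "anonymous")
    g.user = SimpleNamespace(id=user_id)
```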
For a refresher on what Use Cases represent, see the Use Cases concept page.
✅ Verification: Checking Use Case Assignment
To verify that your Use Case assignments are working correctly:
- Run your application and make several AI calls through your decorated functions
- Log in to developer.pay-i.com
- Navigate to your application dashboard
- Click on Cost Drivers in the left sidebar
- Select the Use Cases tab at the top of the page
You should see your requests grouped by their respective Use Cases. This allows you to analyze:
- How many requests each feature is generating
- The average cost per feature
- Token usage patterns across different features
- Response times for different AI functionalities
You can also filter by Use Case to focus on specific features of your application.
➡️ Next Steps
Congratulations! You've learned how to categorize your GenAI calls by both user and Use Case, giving you much more granular visibility into your AI usage patterns.
Tomorrow in Day 4, we'll continue exploring the @track decorator by introducing another powerful capability: applying cost and usage Limits using @track(limit_ids=[...]).
💡 Additional Resources
- Use Cases concept page - Deeper dive into Use Cases
- Custom Instrumentation guide - Comprehensive coverage of instrumentation options
- Track Decorator reference - Details on the @track decorator and its parameters
- Track Context reference - How to use the track_context() function we learned in Day 2
- Handling Errors - Guide to handling errors in Pay-i