Python SDK KPI APIs
Overview
This guide explains how to work with Key Performance Indicators (KPIs) in Pay-i using the Python SDK. KPIs provide a way to quantify the effectiveness and business impact of your GenAI applications. For a conceptual understanding of KPIs and their role in the Pay-i platform, please refer to the Use Case KPIs page.
For a detailed API reference with complete parameter and return type information, see the Python SDK KPIs API Reference.
When to Use KPIs
You'll work with KPIs in the Pay-i Python SDK for several important purposes:
- Performance Tracking: Measure the technical effectiveness of your GenAI applications
- Application-Specific Metrics: Quantify the direct impact of your AI features
- Quality Monitoring: Track user satisfaction and content quality over time
- Improvement Validation: Compare metrics across different versions of a use case
- Goal Setting: Establish and track progress toward specific performance targets
Understanding the KPI Framework
Pay-i implements KPIs using a two-tiered approach:
- KPI Definitions: created at the use case type level (identified by `use_case_name`)
- KPI Values: recorded at the use case instance level (identified by `use_case_id`)
This important distinction means implementing KPIs is a two-step process:
1. Define what KPIs exist for a particular use case type (e.g., "Chat-Bot" or "Document-Processor")
2. Record actual values for those KPIs on specific instances of that use case type
This approach ensures consistent metrics across all instances of a given use case type, while allowing for instance-specific measurements.
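The two-tier relationship can be illustrated with plain Python data structures. This is a hypothetical in-memory model for illustration only; in practice the definitions and values live in the Pay-i service:

```python
from dataclasses import dataclass

# Hypothetical model of the two-tier KPI structure (illustration only)

@dataclass
class KpiDefinition:
    use_case_name: str  # definitions attach to a use case *type*
    kpi_name: str
    kpi_type: str
    goal: float

@dataclass
class KpiValue:
    use_case_id: str  # values attach to a use case *instance*
    kpi_name: str
    score: float

# One definition per (use case type, KPI name)...
definitions = {
    ("Chat-Bot", "Deflection Rate"):
        KpiDefinition("Chat-Bot", "Deflection Rate", "boolean", 0.25),
}

# ...but many values: one per instance that recorded the KPI
values = [
    KpiValue("uc_001", "Deflection Rate", 1.0),
    KpiValue("uc_002", "Deflection Rate", 0.0),
]
```

Every recorded value refers back, through its instance, to a single shared definition, which is what makes cross-instance aggregation possible.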
Common Workflows
Working with the Pay-i KPIs API involves several common patterns for defining, recording, and analyzing performance metrics. This section walks you through these workflows with practical code examples.
The examples below demonstrate how to:
- Define KPIs for use case types
- Record KPI values for specific use case instances
- Retrieve and analyze KPI data
- Use KPIs for various measurement purposes
Note: The examples in this guide use the Python SDK's client objects (`Payi` and `AsyncPayi`), which provide a resource-based interface to the Pay-i API. For details on client initialization and configuration, see the Pay-i Client Initialization guide.
Defining KPIs for a Use Case Type
The first step in implementing KPIs is to define what metrics you want to track for each use case type:
```python
from payi import Payi

# Initialize the Pay-i client
client = Payi()  # API key is loaded from the PAYI_API_KEY environment variable

# Step 1: Define KPIs for a Chat Bot use case type
# These definitions establish what metrics will be tracked for all Chat-Bot instances

# Define a deflection rate KPI (tracks when AI resolves issues without human help)
deflection_kpi = client.use_cases.definitions.kpis.create(
    use_case_name="Chat-Bot",    # The use case type this KPI belongs to
    kpi_name="Deflection Rate",  # Unique name for this KPI
    description="Measures when AI resolves issues without human intervention",
    kpi_type="boolean",          # Boolean KPI: True (1.0) = deflected, False (0.0) = not deflected
    goal=0.25                    # Target: 25% of queries resolved by AI
)
print(f"Created '{deflection_kpi.name}' KPI for Chat-Bot use case type")

# Define a satisfaction KPI (tracks user happiness with AI interactions)
satisfaction_kpi = client.use_cases.definitions.kpis.create(
    use_case_name="Chat-Bot",
    kpi_name="Customer Satisfaction",
    description="User satisfaction rating for AI interaction",
    kpi_type="likert5",  # Likert scale (1-5 rating)
    goal=4.0             # Target: 4.0/5.0 average rating
)
print(f"Created '{satisfaction_kpi.name}' KPI for Chat-Bot use case type")

# Define a resolution accuracy KPI (tracks correctness of AI responses)
accuracy_kpi = client.use_cases.definitions.kpis.create(
    use_case_name="Chat-Bot",
    kpi_name="Resolution Accuracy",
    description="Expert assessment of AI response quality",
    kpi_type="likert5",  # Likert scale (1-5 rating)
    goal=4.2             # Target: 4.2/5.0 average rating
)
print(f"Created '{accuracy_kpi.name}' KPI for Chat-Bot use case type")
```
Expected output:

```
Created 'Deflection Rate' KPI for Chat-Bot use case type
Created 'Customer Satisfaction' KPI for Chat-Bot use case type
Created 'Resolution Accuracy' KPI for Chat-Bot use case type
```
This step only needs to be performed once when setting up your application. After defining these KPIs, they'll be available for all instances of the Chat-Bot use case type.
Listing Defined KPIs for a Use Case Type
After defining KPIs, you can retrieve them to confirm their configuration:
```python
# List all KPIs defined for the Chat-Bot use case type
print("\nKPIs defined for Chat-Bot use case type:")
kpi_definitions = client.use_cases.definitions.kpis.list(use_case_name="Chat-Bot")

for kpi in kpi_definitions:
    print(f"  {kpi.name}: Type={kpi.kpi_type}, Goal={kpi.goal}")
    print(f"    Description: {kpi.description}")
```
Expected output:

```
KPIs defined for Chat-Bot use case type:
  Deflection Rate: Type=boolean, Goal=0.25
    Description: Measures when AI resolves issues without human intervention
  Customer Satisfaction: Type=likert5, Goal=4.0
    Description: User satisfaction rating for AI interaction
  Resolution Accuracy: Type=likert5, Goal=4.2
    Description: Expert assessment of AI response quality
```
Creating a Use Case Instance
Before recording KPI values, you need to create a use case instance each time your application executes the use case:
```python
# Step 2: Create a use case instance when a chat session begins
# In a real application, this would be done at the start of each chat conversation
chat_session = client.use_cases.create(
    use_case_name="Chat-Bot",  # Must match the use case type with defined KPIs
    user_id="user_12345",      # Optional user attribution
    properties={               # Optional context properties
        "channel": "website",
        "browser": "chrome",
        "locale": "en-US"
    }
)

# Store the use case ID for recording KPI values later
chat_id = chat_session.use_case_id
print(f"Created Chat-Bot instance with ID: {chat_id}")
```
Expected output:

```
Created Chat-Bot instance with ID: uc_abcdef123456
```
Recording KPI Values for a Use Case Instance
After the user interaction completes, record KPI values for the relevant metrics:
```python
# Step 3: Record KPI values based on the chat session outcome
# In this example, we'll simulate a successful AI-handled conversation

# Record deflection (AI successfully handled without human intervention)
client.use_cases.kpis.create(
    kpi_name="Deflection Rate",  # Must match a defined KPI name
    use_case_id=chat_id,         # The instance ID from step 2
    score=1.0                    # Boolean: 1.0 = True (deflected)
)
print("Recorded Deflection Rate: 1.0 (AI handled successfully)")

# Record customer satisfaction based on post-chat survey
client.use_cases.kpis.create(
    kpi_name="Customer Satisfaction",
    use_case_id=chat_id,
    score=4.5  # Likert5: 4.5/5.0 rating
)
print("Recorded Customer Satisfaction: 4.5/5.0")

# Record resolution accuracy based on expert review
client.use_cases.kpis.create(
    kpi_name="Resolution Accuracy",
    use_case_id=chat_id,
    score=4.0  # Likert5: 4.0/5.0 rating
)
print("Recorded Resolution Accuracy: 4.0/5.0")
```
Expected output:

```
Recorded Deflection Rate: 1.0 (AI handled successfully)
Recorded Customer Satisfaction: 4.5/5.0
Recorded Resolution Accuracy: 4.0/5.0
```
Retrieving KPI Values for Analysis
After recording KPI values, you can retrieve them for analysis:
```python
# Step 4: List all KPIs recorded for this chat session
print(f"\nKPI values for chat session {chat_id}:")
kpi_values = client.use_cases.kpis.list(use_case_id=chat_id)

for kpi in kpi_values:
    print(f"  {kpi.kpi_name}: {kpi.score}")
    print(f"    Created: {kpi.create_timestamp}")
    print(f"    Updated: {kpi.update_timestamp}")
```
Expected output:

```
KPI values for chat session uc_abcdef123456:
  Deflection Rate: 1.0
    Created: 2025-05-20T16:45:12.345678Z
    Updated: 2025-05-20T16:45:12.345678Z
  Customer Satisfaction: 4.5
    Created: 2025-05-20T16:45:13.456789Z
    Updated: 2025-05-20T16:45:13.456789Z
  Resolution Accuracy: 4.0
    Created: 2025-05-20T16:45:14.567890Z
    Updated: 2025-05-20T16:45:14.567890Z
```
Updating KPI Values
If you need to update a KPI value after it's been recorded:
```python
# Step 5: Update a KPI value (e.g., if additional feedback is received)
client.use_cases.kpis.update(
    kpi_name="Customer Satisfaction",
    use_case_id=chat_id,
    score=5.0  # Updated score: perfect satisfaction
)
print("Updated Customer Satisfaction to: 5.0/5.0")

# Verify the update
kpi_values = client.use_cases.kpis.list(
    use_case_id=chat_id,
    kpi_name="Customer Satisfaction"  # Filter to just this KPI
)

for kpi in kpi_values:
    print(f"  {kpi.kpi_name}: {kpi.score}")
    print(f"    Updated: {kpi.update_timestamp}")
```
Expected output:

```
Updated Customer Satisfaction to: 5.0/5.0
  Customer Satisfaction: 5.0
    Updated: 2025-05-20T16:46:25.123456Z
```
Real-World KPI Implementation Examples
Let's explore comprehensive examples of implementing KPIs for different types of AI applications.
Example 1: Content Generation KPIs
For AI-powered content generation applications, typical KPIs include engagement metrics, content quality measures, and effectiveness indicators:
```python
from payi import Payi

client = Payi()

# Step 1: Define KPIs for the content generation use case
def setup_content_generator_kpis():
    # Engagement metrics
    client.use_cases.definitions.kpis.create(
        use_case_name="Content-Generator",
        kpi_name="Clickthrough Rate",
        description="Percentage of users who click on generated content",
        kpi_type="boolean",
        goal=0.05  # 5% target
    )

    # Conversion metrics
    client.use_cases.definitions.kpis.create(
        use_case_name="Content-Generator",
        kpi_name="Conversion Rate",
        description="Percentage of users who complete desired action",
        kpi_type="boolean",
        goal=0.01  # 1% target
    )

    # Quality metrics
    client.use_cases.definitions.kpis.create(
        use_case_name="Content-Generator",
        kpi_name="Editor Rating",
        description="Human editor quality assessment",
        kpi_type="likert5",
        goal=4.0  # 4/5 target
    )

    print("Content Generator KPIs configured successfully")

# Step 2: Function to track content generation outcomes
def track_content_performance(content_id, clicked, converted, quality_rating):
    # Create a use case instance for this content piece
    content = client.use_cases.create(
        use_case_name="Content-Generator",
        properties={"content_id": content_id}
    )
    use_case_id = content.use_case_id

    # Record all KPIs for this content
    client.use_cases.kpis.create(
        kpi_name="Clickthrough Rate",
        use_case_id=use_case_id,
        score=1.0 if clicked else 0.0
    )
    client.use_cases.kpis.create(
        kpi_name="Conversion Rate",
        use_case_id=use_case_id,
        score=1.0 if converted else 0.0
    )
    client.use_cases.kpis.create(
        kpi_name="Editor Rating",
        use_case_id=use_case_id,
        score=quality_rating
    )

    print(f"Recorded performance metrics for content ID: {content_id}")
    return use_case_id

# Set up the KPIs (only needs to be done once)
setup_content_generator_kpis()

# Example usage: track several pieces of content
content_data = [
    {"id": "blog-post-123", "clicked": True, "converted": False, "rating": 4.5},
    {"id": "product-desc-456", "clicked": True, "converted": True, "rating": 4.0},
    {"id": "email-campaign-789", "clicked": False, "converted": False, "rating": 3.0}
]

for content in content_data:
    track_content_performance(
        content_id=content["id"],
        clicked=content["clicked"],
        converted=content["converted"],
        quality_rating=content["rating"]
    )
```
Example 2: Document Processing KPIs
For AI-powered document processing applications, KPIs often focus on accuracy and efficiency:
```python
# Step 1: Define KPIs for the document processing use case
def setup_document_processor_kpis():
    # Accuracy metrics
    client.use_cases.definitions.kpis.create(
        use_case_name="Document-Processor",
        kpi_name="Extraction Accuracy",
        description="Percentage of correctly extracted fields",
        kpi_type="number",  # Stored as a decimal (0.0-1.0)
        goal=0.98           # 98% accuracy target
    )

    # Efficiency metrics
    client.use_cases.definitions.kpis.create(
        use_case_name="Document-Processor",
        kpi_name="Processing Time",
        description="Seconds to process document",
        kpi_type="number",
        goal=3.0  # 3 seconds target
    )

    # Completeness metrics
    client.use_cases.definitions.kpis.create(
        use_case_name="Document-Processor",
        kpi_name="Extraction Completeness",
        description="Percentage of required fields extracted",
        kpi_type="number",
        goal=0.95  # 95% completeness target
    )

    # Quality assessment
    client.use_cases.definitions.kpis.create(
        use_case_name="Document-Processor",
        kpi_name="Quality Score",
        description="Overall quality assessment by reviewer",
        kpi_type="likert5",
        goal=4.0  # 4.0/5.0 target
    )

    print("Document Processor KPIs configured successfully")

# Step 2: Function to track document processing outcomes
def track_document_processing(document_id, accuracy, processing_time, completeness, quality_score):
    # Create a use case instance for this document
    document = client.use_cases.create(
        use_case_name="Document-Processor",
        properties={"document_id": document_id}
    )
    use_case_id = document.use_case_id

    # Record all KPIs for this document
    client.use_cases.kpis.create(
        kpi_name="Extraction Accuracy",
        use_case_id=use_case_id,
        score=accuracy
    )
    client.use_cases.kpis.create(
        kpi_name="Processing Time",
        use_case_id=use_case_id,
        score=processing_time
    )
    client.use_cases.kpis.create(
        kpi_name="Extraction Completeness",
        use_case_id=use_case_id,
        score=completeness
    )
    client.use_cases.kpis.create(
        kpi_name="Quality Score",
        use_case_id=use_case_id,
        score=quality_score
    )

    print(f"Recorded processing metrics for document ID: {document_id}")
    return use_case_id
```
Example 3: Tracking Funnel Conversion KPIs
For applications with multi-step user journeys, implementing funnel conversion KPIs helps track progression through each stage:
```python
# Step 1: Define KPIs for the recommendation funnel
def setup_recommendation_funnel_kpis():
    # Define a KPI for each stage of the recommendation funnel
    funnel_stages = [
        ("Recommendation Viewed", "User viewed AI-generated recommendations", 0.90),
        ("Recommendation Clicked", "User clicked on at least one recommendation", 0.20),
        ("Added to Cart", "User added recommended item to cart", 0.05),
        ("Purchased", "User completed purchase of recommended item", 0.01)
    ]

    for kpi_name, description, goal in funnel_stages:
        client.use_cases.definitions.kpis.create(
            use_case_name="Product-Recommender",
            kpi_name=kpi_name,
            description=description,
            kpi_type="boolean",
            goal=goal
        )

    print("Recommendation funnel KPIs configured successfully")

# Step 2: Function to track a user's journey through the funnel
def track_recommendation_funnel(recommendation_id, stages_completed):
    """
    Track a user's progression through the recommendation funnel.

    Args:
        recommendation_id: Identifier for the recommendation session
        stages_completed: Dictionary with funnel stages as keys and boolean values
    """
    # Create a use case instance for this recommendation session
    recommender = client.use_cases.create(
        use_case_name="Product-Recommender",
        properties={"recommendation_id": recommendation_id}
    )
    use_case_id = recommender.use_case_id

    # Record which stages were completed
    for stage, completed in stages_completed.items():
        client.use_cases.kpis.create(
            kpi_name=stage,
            use_case_id=use_case_id,
            score=1.0 if completed else 0.0
        )

    print(f"Recorded funnel progression for recommendation ID: {recommendation_id}")
    return use_case_id

# Set up the funnel KPIs (only needs to be done once)
setup_recommendation_funnel_kpis()

# Example usage: track several user journeys through the funnel
funnel_data = [
    {
        "id": "rec-123",
        "stages": {
            "Recommendation Viewed": True,
            "Recommendation Clicked": True,
            "Added to Cart": False,
            "Purchased": False
        }
    },
    {
        "id": "rec-456",
        "stages": {
            "Recommendation Viewed": True,
            "Recommendation Clicked": True,
            "Added to Cart": True,
            "Purchased": True
        }
    },
    {
        "id": "rec-789",
        "stages": {
            "Recommendation Viewed": True,
            "Recommendation Clicked": False,
            "Added to Cart": False,
            "Purchased": False
        }
    }
]

for journey in funnel_data:
    track_recommendation_funnel(
        recommendation_id=journey["id"],
        stages_completed=journey["stages"]
    )
```
KPI Best Practices
When implementing KPIs in your Pay-i instrumented applications, consider these best practices:
1. Define KPIs During Application Design
Define your KPIs as part of your initial use case design:
- Identify what success looks like for your use case
- Determine how to measure it quantitatively
- Select appropriate KPI types for each metric
2. Use Consistent KPI Names Across Versions
When you create new versions of a use case type, maintain the same KPI names to enable proper comparison:
```python
# Define the same KPIs for both versions of a chat bot
for version in ["Chat-Bot-v1", "Chat-Bot-v2"]:
    client.use_cases.definitions.kpis.create(
        use_case_name=version,
        kpi_name="Deflection Rate",
        description="Measures when AI resolves issues without human intervention",
        kpi_type="boolean",
        goal=0.25
    )
```
This approach allows you to compare performance between versions in the Pay-i dashboard.
3. Balance Technical and Application-Specific KPIs
Include both technical performance metrics and application-specific indicators:
- Technical KPIs: accuracy, response time, error rates
- Application-specific KPIs: engagement rates, user satisfaction, quality assessments
4. Record KPIs at Appropriate Points
Integrate KPI recording at logical points in your application flow:
- Immediate metrics (like processing time) can be recorded right after the AI operation
- Lagging indicators (like conversions) might need to be recorded asynchronously later
- User feedback metrics should be recorded when feedback is collected
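The deferred pattern for lagging indicators can be sketched as follows. The stub client below stands in for a real `Payi` instance (which exposes the same `use_cases.kpis.create` call shown throughout this guide) so the flow can run without network access; the handler names and the pending-conversions store are hypothetical:

```python
# Sketch of deferred KPI recording: capture the use_case_id when the
# recommendation is served, then record the lagging "Purchased" KPI only
# when the conversion event arrives (e.g., via a purchase webhook).

class _StubKpis:
    """Stand-in for client.use_cases.kpis; a real app would use Payi()."""
    def __init__(self):
        self.recorded = []

    def create(self, kpi_name, use_case_id, score):
        self.recorded.append((kpi_name, use_case_id, score))

class _StubUseCases:
    def __init__(self):
        self.kpis = _StubKpis()

class StubClient:
    def __init__(self):
        self.use_cases = _StubUseCases()

# Maps use_case_id -> recommendation id; persist this in a real application
pending_conversions = {}

def on_recommendation_served(use_case_id, recommendation_id):
    # Immediate step: remember the instance so we can attribute the
    # conversion to it later
    pending_conversions[use_case_id] = recommendation_id

def on_purchase_webhook(client, use_case_id):
    # Lagging step: record the KPI only when the conversion actually happens
    if use_case_id in pending_conversions:
        client.use_cases.kpis.create(
            kpi_name="Purchased",
            use_case_id=use_case_id,
            score=1.0,
        )

client = StubClient()
on_recommendation_served("uc_123", "rec-123")
on_purchase_webhook(client, "uc_123")
```

The key point is that the `use_case_id` acts as the durable link between the original AI operation and any metric that arrives minutes or days later.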
5. Use KPI Types Appropriately
Choose the right KPI type for each metric you want to track:
- boolean: For binary outcomes (success/failure, clicked/not clicked)
- number: For continuous measurements (time, counts)
- percentage: For ratio measurements from 0-100%
- likert5/7/10: For satisfaction or quality ratings on a scale
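A small client-side guard can catch scores that do not match their declared KPI type before they are submitted. This helper is hypothetical (not part of the Pay-i SDK) and assumes the type ranges listed above:

```python
# Hypothetical validation helper: check a score against its KPI type
# before calling client.use_cases.kpis.create(...)

def validate_score(kpi_type: str, score: float) -> float:
    if kpi_type == "boolean":
        if score not in (0.0, 1.0):
            raise ValueError("boolean KPIs take 0.0 or 1.0")
    elif kpi_type == "percentage":
        if not 0.0 <= score <= 100.0:
            raise ValueError("percentage KPIs take values from 0 to 100")
    elif kpi_type.startswith("likert"):
        top = float(kpi_type[len("likert"):])  # "likert5" -> 5.0, "likert7" -> 7.0
        if not 1.0 <= score <= top:
            raise ValueError(f"{kpi_type} KPIs take values from 1 to {top:g}")
    # "number" KPIs accept any float
    return score
```

Rejecting an out-of-range score at the call site is cheaper than discovering skewed averages in the dashboard later.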
6. Understanding KPI Value Types
All KPI values are submitted as floating-point numbers in the Pay-i SDK, regardless of the KPI type:
```python
# All KPI scores are passed as float values
client.use_cases.kpis.create(
    kpi_name="Clickthrough Rate",  # A boolean KPI type
    use_case_id=use_case_id,
    score=1.0  # Value provided as float (1.0 = true for boolean KPIs)
)
```
For boolean KPIs, use these floating-point values:
- 1.0 represents true/success outcomes
- 0.0 represents false/failure outcomes
This is why you'll see patterns like this in the examples:
```python
# Converting a Python boolean to the expected float value
client.use_cases.kpis.create(
    kpi_name="Clickthrough Rate",
    use_case_id=use_case_id,
    score=1.0 if clicked else 0.0
)
```
The Pay-i system handles interpreting these values based on the KPI type definition, allowing it to calculate meaningful statistics such as "5.23% True" for boolean metrics.
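To make the aggregation concrete, here is how a "% True" figure can be derived from a set of boolean KPI scores. This is an illustration only; Pay-i computes these statistics server-side:

```python
# Deriving a "% True" aggregate from boolean KPI scores (illustrative)

def percent_true(scores):
    # Boolean KPI scores are 1.0 (true) or 0.0 (false), so their mean,
    # scaled to 100, is the share of true outcomes
    return 100.0 * sum(scores) / len(scores)

scores = [1.0, 0.0, 0.0, 1.0]  # four recorded boolean KPI values
print(f"{percent_true(scores):.2f}% True")  # -> 50.00% True
```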
Relationship to Value Metrics
While KPIs measure specific application performance aspects, Pay-i also provides Value metrics as a separate system for broader business impact measurement:
- KPIs focus on application-specific performance metrics that you define and record via the SDK
- Value metrics use Business Equations (configurable in the Pay-i UI) that can incorporate standard metrics and your KPIs to calculate broader business impact
For comprehensive ROI measurement, you can combine your application-specific KPIs with the Value metrics system. For more information on Value metrics, see Value Metrics.
API Reference
For detailed information on all the methods, parameters, and response types provided by the KPIs resources, please refer to the Python SDK KPIs API Reference.
The reference documentation includes:
- Complete method signatures
- Parameter descriptions and types
- Response object structures
- Error handling information