Python SDK KPI APIs
Overview
This guide explains how to work with Key Performance Indicators (KPIs) in Pay-i using the Python SDK. KPIs provide a way to quantify the effectiveness and business impact of your GenAI applications. For a conceptual understanding of KPIs and their role in the Pay-i platform, please refer to the Use Case KPIs page.
For a detailed API reference with complete parameter and return type information, see the Python SDK KPIs API Reference.
When to Use KPIs
You'll work with KPIs in the Pay-i Python SDK for several important purposes:
- Performance Tracking: Measure the technical effectiveness of your GenAI applications
- Business Value Demonstration: Quantify the business impact and ROI of your AI features
- Quality Monitoring: Track user satisfaction and content quality over time
- Improvement Validation: Compare metrics across different versions of a use case
- Goal Setting: Establish and track progress toward specific performance targets
Understanding the KPI Framework
Pay-i implements KPIs using a two-tiered approach:
- KPI Definitions: Created at the use case type level (identified by use_case_name)
- KPI Values: Recorded at the use case instance level (identified by use_case_id)
This important distinction means implementing KPIs is a two-step process:
- First, you define what KPIs exist for a particular use case type (e.g., "Chat-Bot" or "Document-Processor")
- Then, you record actual values for those KPIs on specific instances of that use case type
This approach ensures consistent metrics across all instances of a given use case type, while allowing for instance-specific measurements.
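As a quick preview of how the two tiers map onto the SDK, here is the flow in miniature. Treat this as a sketch rather than a complete setup; the same calls are walked through step by step in the workflows below:
from payi import Payi

client = Payi()

# Tier 1: define a KPI once per use case *type* (keyed by use_case_name)
client.use_cases.definitions.kpis.create(
    use_case_name="Chat-Bot",
    kpi_name="Deflection Rate",
    description="Measures when AI resolves issues without human intervention",
    kpi_type="boolean",
    goal=0.25
)

# Tier 2: record a value per use case *instance* (keyed by use_case_id)
chat_session = client.use_cases.create(use_case_name="Chat-Bot")
client.use_cases.kpis.create(
    kpi_name="Deflection Rate",
    use_case_id=chat_session.use_case_id,
    score=1.0
)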
Common Workflows
Working with the Pay-i KPIs API involves several common patterns for defining, recording, and analyzing performance metrics. This section walks you through these workflows with practical code examples.
The examples below demonstrate how to:
- Define KPIs for use case types
- Record KPI values for specific use case instances
- Retrieve and analyze KPI data
- Use KPIs for various measurement purposes
Note: The examples in this guide use the Python SDK's client objects (Payi and AsyncPayi), which provide a resource-based interface to the Pay-i API. For details on client initialization and configuration, see the Pay-i Client Initialization guide.
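If your application is asynchronous, the workflows below translate directly to the asynchronous client. A minimal sketch, assuming AsyncPayi exposes the same use_cases resources as Payi with awaitable methods:
import asyncio
from payi import AsyncPayi

async def record_deflection(chat_id: str) -> None:
    client = AsyncPayi()  # API key loaded from PAYI_API_KEY, as with Payi
    # Same resource path as the synchronous client, awaited instead of called directly
    await client.use_cases.kpis.create(
        kpi_name="Deflection Rate",
        use_case_id=chat_id,
        score=1.0
    )

asyncio.run(record_deflection("uc_abcdef123456"))  # illustrative instance ID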
Defining KPIs for a Use Case Type
The first step in implementing KPIs is to define what metrics you want to track for each use case type:
from payi import Payi
# Initialize the Pay-i client
client = Payi() # API key will be loaded from PAYI_API_KEY environment variable
# Step 1: Define KPIs for a Chat Bot use case type
# These definitions establish what metrics will be tracked for all Chat Bot instances
# Define a deflection rate KPI (tracks when AI resolves issues without human help)
deflection_kpi = client.use_cases.definitions.kpis.create(
use_case_name="Chat-Bot", # The use case type this KPI belongs to
kpi_name="Deflection Rate", # Unique name for this KPI
description="Measures when AI resolves issues without human intervention",
kpi_type="boolean", # Boolean KPI: True (1.0) = deflected, False (0.0) = not deflected
goal=0.25 # Target: 25% of queries resolved by AI
)
print(f"Created '{deflection_kpi.name}' KPI for Chat-Bot use case type")
# Define a satisfaction KPI (tracks user happiness with AI interactions)
satisfaction_kpi = client.use_cases.definitions.kpis.create(
use_case_name="Chat-Bot",
kpi_name="Customer Satisfaction",
description="User satisfaction rating for AI interaction",
kpi_type="likert5", # Likert scale (1-5 rating)
goal=4.0 # Target: 4.0/5.0 average rating
)
print(f"Created '{satisfaction_kpi.name}' KPI for Chat-Bot use case type")
# Define a time savings KPI (tracks operational efficiency gains)
time_saved_kpi = client.use_cases.definitions.kpis.create(
use_case_name="Chat-Bot",
kpi_name="Time Saved",
description="Minutes saved compared to human handling",
kpi_type="number", # Numeric KPI (measured in minutes)
goal=2.0 # Target: 2 minutes saved per interaction
)
print(f"Created '{time_saved_kpi.name}' KPI for Chat-Bot use case type")
Expected output:
Created 'Deflection Rate' KPI for Chat-Bot use case type
Created 'Customer Satisfaction' KPI for Chat-Bot use case type
Created 'Time Saved' KPI for Chat-Bot use case type
This step only needs to be performed once when setting up your application. After defining these KPIs, they'll be available for all instances of the Chat-Bot use case type.
Listing Defined KPIs for a Use Case Type
After defining KPIs, you can retrieve them to confirm their configuration:
# List all KPIs defined for the Chat-Bot use case type
print("\nKPIs defined for Chat-Bot use case type:")
kpi_definitions = client.use_cases.definitions.kpis.list(use_case_name="Chat-Bot")
for kpi in kpi_definitions:
print(f" {kpi.name}: Type={kpi.kpi_type}, Goal={kpi.goal}")
print(f" Description: {kpi.description}")
Expected output:
KPIs defined for Chat-Bot use case type:
Deflection Rate: Type=boolean, Goal=0.25
Description: Measures when AI resolves issues without human intervention
Customer Satisfaction: Type=likert5, Goal=4.0
Description: User satisfaction rating for AI interaction
Time Saved: Type=number, Goal=2.0
Description: Minutes saved compared to human handling
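Because the definition step only needs to run once, the same list call can also guard your setup code so that re-running it does not attempt to redefine existing KPIs. One possible pattern (a sketch that simply skips names that are already defined):
# Skip KPI definitions that already exist for this use case type
existing_names = {kpi.name for kpi in client.use_cases.definitions.kpis.list(use_case_name="Chat-Bot")}
if "Deflection Rate" not in existing_names:
    client.use_cases.definitions.kpis.create(
        use_case_name="Chat-Bot",
        kpi_name="Deflection Rate",
        description="Measures when AI resolves issues without human intervention",
        kpi_type="boolean",
        goal=0.25
    )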
Creating a Use Case Instance
Before recording KPI values, you need to create a use case instance each time your application executes the use case:
# Step 2: Create a use case instance when a chat session begins
# In a real application, this would be done at the start of each chat conversation
chat_session = client.use_cases.create(
use_case_name="Chat-Bot", # Must match the use case type with defined KPIs
user_id="user_12345", # Optional user attribution
properties={ # Optional context properties
"channel": "website",
"browser": "chrome",
"locale": "en-US"
}
)
# Store the use case ID for later recording KPI values
chat_id = chat_session.use_case_id
print(f"Created Chat-Bot instance with ID: {chat_id}")
Expected output:
Created Chat-Bot instance with ID: uc_abcdef123456
Recording KPI Values for a Use Case Instance
After the user interaction completes, record KPI values for the relevant metrics:
# Step 3: Record KPI values based on the chat session outcome
# In this example, we'll simulate a successful AI-handled conversation
# Record deflection (AI successfully handled without human intervention)
client.use_cases.kpis.create(
kpi_name="Deflection Rate", # Must match a defined KPI name
use_case_id=chat_id, # The instance ID from step 2
score=1.0 # Boolean: 1.0 = True (deflected)
)
print(f"Recorded Deflection Rate: 1.0 (AI handled successfully)")
# Record customer satisfaction based on post-chat survey
client.use_cases.kpis.create(
kpi_name="Customer Satisfaction",
use_case_id=chat_id,
score=4.5 # Likert5: 4.5/5.0 rating
)
print(f"Recorded Customer Satisfaction: 4.5/5.0")
# Record time saved based on average handling time difference
client.use_cases.kpis.create(
kpi_name="Time Saved",
use_case_id=chat_id,
score=3.2 # Number: 3.2 minutes saved
)
print(f"Recorded Time Saved: 3.2 minutes")
Expected output:
Recorded Deflection Rate: 1.0 (AI handled successfully)
Recorded Customer Satisfaction: 4.5/5.0
Recorded Time Saved: 3.2 minutes
Retrieving KPI Values for Analysis
After recording KPI values, you can retrieve them for analysis:
# Step 4: List all KPIs recorded for this chat session
print(f"\nKPI values for chat session {chat_id}:")
kpi_values = client.use_cases.kpis.list(use_case_id=chat_id)
for kpi in kpi_values:
print(f" {kpi.kpi_name}: {kpi.score}")
print(f" Created: {kpi.create_timestamp}")
print(f" Updated: {kpi.update_timestamp}")
Expected output:
KPI values for chat session uc_abcdef123456:
Deflection Rate: 1.0
Created: 2025-05-20T16:45:12.345678Z
Updated: 2025-05-20T16:45:12.345678Z
Customer Satisfaction: 4.5
Created: 2025-05-20T16:45:13.456789Z
Updated: 2025-05-20T16:45:13.456789Z
Time Saved: 3.2
Created: 2025-05-20T16:45:14.567890Z
Updated: 2025-05-20T16:45:14.567890Z
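To turn these values into a quick instance-level health check, you can join them with the KPI definitions for the use case type and compare each score against its goal. A small sketch combining the two list calls shown above; it assumes a higher score is better for every KPI, which holds for these examples but would need to be inverted for something like a latency metric:
# Compare recorded values against the goals defined for the use case type
definitions = {d.name: d for d in client.use_cases.definitions.kpis.list(use_case_name="Chat-Bot")}
for kpi in client.use_cases.kpis.list(use_case_id=chat_id):
    definition = definitions.get(kpi.kpi_name)
    if definition is None:
        continue  # recorded value has no matching definition; nothing to compare against
    status = "goal met" if kpi.score >= definition.goal else "below goal"
    print(f"{kpi.kpi_name}: {kpi.score} vs goal {definition.goal} ({status})")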
Updating KPI Values
If you need to update a KPI value after it's been recorded:
# Step 5: Update a KPI value (e.g., if additional feedback is received)
client.use_cases.kpis.update(
kpi_name="Customer Satisfaction",
use_case_id=chat_id,
score=5.0 # Updated score: perfect satisfaction
)
print(f"Updated Customer Satisfaction to: 5.0/5.0")
# Verify the update
kpi_values = client.use_cases.kpis.list(
use_case_id=chat_id,
kpi_name="Customer Satisfaction" # Filter to just this KPI
)
for kpi in kpi_values:
print(f" {kpi.kpi_name}: {kpi.score}")
print(f" Updated: {kpi.update_timestamp}")
Expected output:
Updated Customer Satisfaction to: 5.0/5.0
Customer Satisfaction: 5.0
Updated: 2025-05-20T16:46:25.123456Z
Real-World KPI Implementation Examples
Let's explore comprehensive examples of implementing KPIs for different types of AI applications.
Example 1: Content Generation KPIs
For AI-powered content generation applications, typical KPIs include engagement metrics, content quality measures, and business impact indicators:
from payi import Payi
client = Payi()
# Step 1: Define KPIs for content generation use case
def setup_content_generator_kpis():
# Engagement metrics
client.use_cases.definitions.kpis.create(
use_case_name="Content-Generator",
kpi_name="Clickthrough Rate",
description="Percentage of users who click on generated content",
kpi_type="boolean",
goal=0.05 # 5% target
)
# Conversion metrics
client.use_cases.definitions.kpis.create(
use_case_name="Content-Generator",
kpi_name="Conversion Rate",
description="Percentage of users who complete desired action",
kpi_type="boolean",
goal=0.01 # 1% target
)
# Revenue metrics
client.use_cases.definitions.kpis.create(
use_case_name="Content-Generator",
kpi_name="Revenue Generated",
description="Direct revenue attributed to content",
kpi_type="number",
goal=10.0 # $10 per content piece
)
# Quality metrics
client.use_cases.definitions.kpis.create(
use_case_name="Content-Generator",
kpi_name="Editor Rating",
description="Human editor quality assessment",
kpi_type="likert5",
goal=4.0 # 4/5 target
)
print("Content Generator KPIs configured successfully")
# Step 2: Function to track content generation outcomes
def track_content_performance(content_id, clicked, converted, revenue, quality_rating):
# Create a use case instance for this content piece
content = client.use_cases.create(
use_case_name="Content-Generator",
properties={"content_id": content_id}
)
use_case_id = content.use_case_id
# Record all KPIs for this content
client.use_cases.kpis.create(
kpi_name="Clickthrough Rate",
use_case_id=use_case_id,
score=1.0 if clicked else 0.0
)
client.use_cases.kpis.create(
kpi_name="Conversion Rate",
use_case_id=use_case_id,
score=1.0 if converted else 0.0
)
client.use_cases.kpis.create(
kpi_name="Revenue Generated",
use_case_id=use_case_id,
score=revenue
)
client.use_cases.kpis.create(
kpi_name="Editor Rating",
use_case_id=use_case_id,
score=quality_rating
)
print(f"Recorded performance metrics for content ID: {content_id}")
return use_case_id
# Setup the KPIs (only needs to be done once)
setup_content_generator_kpis()
# Example usage: Track several pieces of content
content_data = [
{"id": "blog-post-123", "clicked": True, "converted": False, "revenue": 0, "rating": 4.5},
{"id": "product-desc-456", "clicked": True, "converted": True, "revenue": 25.75, "rating": 4.0},
{"id": "email-campaign-789", "clicked": False, "converted": False, "revenue": 0, "rating": 3.0}
]
for content in content_data:
track_content_performance(
content_id=content["id"],
clicked=content["clicked"],
converted=content["converted"],
revenue=content["revenue"],
quality_rating=content["rating"]
)
Example 2: Document Processing KPIs
For AI-powered document processing applications, KPIs often focus on accuracy, efficiency, and cost savings:
# Step 1: Define KPIs for document processing use case
def setup_document_processor_kpis():
# Accuracy metrics
client.use_cases.definitions.kpis.create(
use_case_name="Document-Processor",
kpi_name="Extraction Accuracy",
description="Percentage of correctly extracted fields",
kpi_type="number", # Stored as decimal (0.0-1.0)
goal=0.98 # 98% accuracy target
)
# Efficiency metrics
client.use_cases.definitions.kpis.create(
use_case_name="Document-Processor",
kpi_name="Processing Time",
description="Seconds to process document",
kpi_type="number",
goal=3.0 # 3 seconds target
)
# Business value metrics
client.use_cases.definitions.kpis.create(
use_case_name="Document-Processor",
kpi_name="Time Saved",
description="Minutes saved vs. manual processing",
kpi_type="number",
goal=5.0 # 5 minutes target
)
client.use_cases.definitions.kpis.create(
use_case_name="Document-Processor",
kpi_name="Cost Saved",
description="Dollar value of time saved",
kpi_type="number",
goal=2.50 # $2.50 target
)
print("Document Processor KPIs configured successfully")
# Step 2: Function to track document processing outcomes
def track_document_processing(document_id, accuracy, processing_time, time_saved, cost_saved):
# Create a use case instance for this document
document = client.use_cases.create(
use_case_name="Document-Processor",
properties={"document_id": document_id}
)
use_case_id = document.use_case_id
# Record all KPIs for this document
client.use_cases.kpis.create(
kpi_name="Extraction Accuracy",
use_case_id=use_case_id,
score=accuracy
)
client.use_cases.kpis.create(
kpi_name="Processing Time",
use_case_id=use_case_id,
score=processing_time
)
client.use_cases.kpis.create(
kpi_name="Time Saved",
use_case_id=use_case_id,
score=time_saved
)
client.use_cases.kpis.create(
kpi_name="Cost Saved",
use_case_id=use_case_id,
score=cost_saved
)
print(f"Recorded processing metrics for document ID: {document_id}")
return use_case_id
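As in Example 1, a hypothetical usage of these helpers might look like the following; the document IDs and metric values are illustrative only:
# Setup the KPIs (only needs to be done once)
setup_document_processor_kpis()

# Example usage: Track a few processed documents
document_data = [
    {"id": "invoice-001", "accuracy": 0.99, "processing_time": 2.4, "time_saved": 6.0, "cost_saved": 3.00},
    {"id": "contract-002", "accuracy": 0.95, "processing_time": 4.1, "time_saved": 12.5, "cost_saved": 6.25}
]

for document in document_data:
    track_document_processing(
        document_id=document["id"],
        accuracy=document["accuracy"],
        processing_time=document["processing_time"],
        time_saved=document["time_saved"],
        cost_saved=document["cost_saved"]
    )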
KPI Best Practices
When implementing KPIs in your Pay-i instrumented applications, consider these best practices:
1. Define KPIs During Application Design
Define your KPIs as part of your initial use case design:
- Identify what success looks like for your use case
- Determine how to measure it quantitatively
- Select appropriate KPI types for each metric
2. Use Consistent KPI Names Across Versions
When you create new versions of a use case type, maintain the same KPI names to enable proper comparison:
# Define the same KPIs for both versions of a chat bot
for version in ["Chat-Bot-v1", "Chat-Bot-v2"]:
client.use_cases.definitions.kpis.create(
use_case_name=version,
kpi_name="Deflection Rate",
description="Measures when AI resolves issues without human intervention",
kpi_type="boolean",
goal=0.25
)
This approach allows you to compare performance between versions in the Pay-i dashboard.
3. Balance Technical and Business KPIs
Include both technical performance metrics and business impact indicators, as sketched after this list:
- Technical KPIs: accuracy, response time, error rates
- Business KPIs: conversion rates, revenue generated, time/cost savings
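For example, a single use case type can carry one of each. A short sketch on the Chat-Bot type; the "Response Time" and "Revenue per Session" KPIs are illustrative and were not defined earlier in this guide:
# Technical KPI: how quickly the assistant responds
client.use_cases.definitions.kpis.create(
    use_case_name="Chat-Bot",
    kpi_name="Response Time",
    description="Seconds until the assistant's first response",
    kpi_type="number",
    goal=2.0
)

# Business KPI: revenue attributed to the conversation
client.use_cases.definitions.kpis.create(
    use_case_name="Chat-Bot",
    kpi_name="Revenue per Session",
    description="Revenue attributed to the chat session",
    kpi_type="number",
    goal=1.50
)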
4. Record KPIs at Appropriate Points
Integrate KPI recording at logical points in your application flow, as shown in the sketch after this list:
- Immediate metrics (like processing time) can be recorded right after the AI operation
- Lagging indicators (like conversions) might need to be recorded asynchronously later
- User feedback metrics should be recorded when feedback is collected
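In code, deferring a lagging indicator usually means persisting the use_case_id alongside your own records so the value can be attached when the downstream event arrives. A sketch of that pattern, reusing the Content-Generator KPIs from Example 1; the pending dictionary and on_conversion hook are hypothetical stand-ins for your own storage and event handling:
# Create the instance when the content is generated
content = client.use_cases.create(use_case_name="Content-Generator")

# Feedback metric: recorded as soon as the editor reviews the draft
client.use_cases.kpis.create(
    kpi_name="Editor Rating",
    use_case_id=content.use_case_id,
    score=4.0
)

# Lagging indicator: keep the instance ID with your own records and attach the value later
pending = {"campaign-789": content.use_case_id}  # in-memory stand-in for your datastore

def on_conversion(campaign_id: str) -> None:
    # Hypothetical hook, e.g. called from a webhook or a nightly batch job
    client.use_cases.kpis.create(
        kpi_name="Conversion Rate",
        use_case_id=pending.pop(campaign_id),
        score=1.0
    )

on_conversion("campaign-789")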
5. Use KPI Types Appropriately
Choose the right KPI type for each metric you want to track (a percentage example follows this list):
- boolean: For binary outcomes (success/failure, clicked/not clicked)
- number: For continuous measurements (time, money, counts)
- percentage: For ratio measurements from 0-100%
- likert5/7/10: For satisfaction or quality ratings on a scale
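The percentage type is the only one not used earlier in this guide. A minimal sketch, assuming (per the 0-100% description above) that goals and scores are expressed on a 0-100 scale; the "Field Coverage" KPI is illustrative:
# Define a percentage KPI for the Document-Processor use case type
client.use_cases.definitions.kpis.create(
    use_case_name="Document-Processor",
    kpi_name="Field Coverage",
    description="Percentage of expected fields present in the extraction output",
    kpi_type="percentage",
    goal=95.0  # assumes goals use the same 0-100 scale as scores
)

# Record a value for a specific instance
document = client.use_cases.create(use_case_name="Document-Processor")
client.use_cases.kpis.create(
    kpi_name="Field Coverage",
    use_case_id=document.use_case_id,
    score=92.5
)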
API Reference
For detailed information on all the methods, parameters, and response types provided by the KPIs resources, please refer to the Python SDK KPIs API Reference.
The reference documentation includes:
- Complete method signatures with all parameters
- Return type structures for all response types
- Detailed explanations of parameter behavior
- REST API endpoint mappings
- Examples for each method
This separate reference guide complements the workflow examples provided in this document, offering a more technical and comprehensive view of the KPIs API.