Python SDK Categories and Resources API
Overview
This guide explains how to manage Categories and Resources using the Pay-i Python SDK. For a conceptual understanding of what Categories and Resources are and how they fit into the Pay-i platform, please refer to the Categories and Resources Concepts page.
When to Use Categories and Resources
You'll work with categories and resources in the Pay-i Python SDK for several important purposes:
- Exploring Available Models: Discover which AI models Pay-i currently supports and their pricing
- Creating Custom Resources: Track usage of custom AI models or services not natively supported by Pay-i
- Setting Up Pricing: Define pricing for custom or enterprise-negotiated services
- Cost Management: View and understand the details of resources you're using in your applications
Common Workflows
Discovering Available AI Models and Resources
When you're planning your AI implementation, you'll often want to explore what models are available and understand their pricing structure:
```python
from payi import Payi

# Initialize the Pay-i client
client = Payi()  # API key will be loaded from the PAYI_API_KEY environment variable

# Step 1: List available categories
print("Available Categories:")
categories = client.categories.list()
for category in categories:
    print(f"Category: {category.category}")

# Step 2: For a specific category, list all resources
# IMPORTANT: Use client.categories.list_resources() to list all resources in a category,
# NOT client.categories.resources.list(), which lists all versions of a specific resource
print("\nListing all resources in the OpenAI category:")
openai_resources = client.categories.list_resources(category="system.openai")
for resource in openai_resources:
    print(f"  Resource: {resource.resource}")

# Step 3: Find and print details for a specific resource
print("\nChecking details for OpenAI 'gpt-4o' resource:")
gpt4o_resource = next((r for r in openai_resources if r.resource == "gpt-4o"), None)
if gpt4o_resource:
    print(f"  Resource: {gpt4o_resource.resource}")
    # Step 4: Print pricing details
    print(f"  Input Price: ${gpt4o_resource.units['text'].input_price} per token")
    print(f"  Output Price: ${gpt4o_resource.units['text'].output_price} per token")
    print(f"  Max Input Tokens: {gpt4o_resource.max_input_units}")
else:
    print("  Resource 'gpt-4o' not found in the OpenAI category.")

# Note: If you need to list all versions of a specific resource instead,
# use client.categories.resources.list():
# resource_versions = client.categories.resources.list(
#     resource="gpt-4o",
#     category="system.openai"
# )
```
Implementing Custom Resource Tracking
If you're using AI models or services that aren't managed by Pay-i (like custom-trained models, vector databases, or enterprise services), you'll need to create custom resources to track their usage:
```python
from datetime import datetime, timezone

from payi import Payi

client = Payi()

# Step 1: Create a custom category and resource for a vector database service
# Note: In production, you might want to check whether the resource exists first,
# or handle the exception raised if the resource already exists.
response = client.categories.resources.create(
    category="vector-db",          # Your custom category
    resource="qdrant-embeddings",  # Your custom resource
    units={
        "text": {
            "input_price": 0.0000025,  # Price per embedding token
            "output_price": 0.0000000  # No output cost for vector search
        }
    },
    max_input_units=10000,  # Max tokens per request
    max_output_units=0,
    start_timestamp=datetime.now(timezone.utc)  # Pricing effective from now (in UTC)
)
print(f"Created resource with ID: {response.resource_id}")

# Step 2: Now track usage of this custom resource in your application
def track_vector_db_usage(query_text_length):
    response = client.ingest.units(
        category="vector-db",
        resource="qdrant-embeddings",
        units={
            "text": {
                "input": query_text_length,  # Number of tokens processed
                "output": 0
            }
        },
        http_status_code=200,
        end_to_end_latency_ms=125,
        user_id="customer_abc123"  # Optional user attribution
    )
    return response

# Example usage in your application
track_vector_db_usage(query_text_length=450)
```
Setting Up Enterprise-Negotiated Pricing
If your organization has negotiated special pricing with AI providers, you can create custom resources with your negotiated rates:
```python
from datetime import datetime, timezone

from payi import Payi

client = Payi()

# Set up your enterprise-negotiated pricing for OpenAI models
# This is useful when your pricing differs from Pay-i's standard rates
enterprise_pricing = client.categories.resources.create(
    category="enterprise-openai",  # Custom category for your enterprise
    resource="gpt-4-enterprise",   # Custom resource name
    units={
        "text": {
            "input_price": 0.000005,  # Your negotiated rates
            "output_price": 0.000015
        }
    },
    max_input_units=128000,
    max_output_units=128000,
    start_timestamp=datetime.now(timezone.utc)  # Always use UTC for timestamps
)
```
Managing Resource Lifecycle
As your AI infrastructure evolves, you may need to update or remove resources that are no longer needed:
```python
from payi import Payi

client = Payi()

# Scenario: Removing deprecated or unused custom resources
deprecated_models = ["old-model-v1", "deprecated-embedding-model"]

# Best practice: use keyword arguments consistently; it keeps the calls readable
# and avoids parameter-ordering mistakes.

# Method 1: Remove resources one at a time, with error handling
for model in deprecated_models:
    try:
        client.categories.delete_resource(
            resource=model,
            category="my-custom-models"
        )
        print(f"Successfully removed {model}")
    except Exception as e:
        print(f"Could not remove {model}: {e}")

# Method 2: Delete an entire category and all its resources at once
# This is more efficient when you need to remove many resources in the same category
try:
    client.categories.delete(category="my-custom-models")
    print("Successfully removed the entire category and all its resources")
except Exception as e:
    print(f"Could not remove category: {e}")
```
Real-World Example: Building a Multi-Model AI Application
Let's walk through a comprehensive example of setting up and tracking usage for a complete AI application that uses multiple services:
```python
from datetime import datetime, timezone

from payi import Payi

# Initialize the Pay-i client
client = Payi()

# Step 1: Set up custom resources for all AI components in our application
def setup_resources():
    # Create embeddings model resource
    client.categories.resources.create(
        category="company-ai-stack",
        resource="text-embeddings",
        units={"text": {"input_price": 0.0000010, "output_price": 0.0}},
        max_input_units=8000,
        start_timestamp=datetime.now(timezone.utc)
    )

    # Create vector database resource
    client.categories.resources.create(
        category="company-ai-stack",
        resource="vector-search",
        units={"text": {"input_price": 0.00000025, "output_price": 0.0}},
        max_input_units=10000,
        start_timestamp=datetime.now(timezone.utc)
    )

    # Create custom LLM resource
    client.categories.resources.create(
        category="company-ai-stack",
        resource="rag-completion-model",
        units={"text": {"input_price": 0.000006, "output_price": 0.000012}},
        max_input_units=16000,
        max_output_units=4000,
        start_timestamp=datetime.now(timezone.utc)
    )

    print("AI resource stack configured successfully")

# Step 2: Create a function to track RAG pipeline usage
def track_rag_query(query_text, context_chunks, response_text, user_id):
    # Track embeddings usage
    client.ingest.units(
        category="company-ai-stack",
        resource="text-embeddings",
        units={"text": {"input": len(query_text) // 4, "output": 0}},  # Approximate tokens
        user_id=user_id
    )

    # Track vector search usage
    client.ingest.units(
        category="company-ai-stack",
        resource="vector-search",
        units={"text": {"input": len(query_text) // 4, "output": 0}},
        user_id=user_id
    )

    # Track LLM usage
    context_tokens = sum(len(chunk) // 4 for chunk in context_chunks)
    query_tokens = len(query_text) // 4
    response_tokens = len(response_text) // 4
    client.ingest.units(
        category="company-ai-stack",
        resource="rag-completion-model",
        units={"text": {
            "input": query_tokens + context_tokens,  # Query + context
            "output": response_tokens                # Generated response
        }},
        user_id=user_id
    )

    print(f"Tracked complete RAG pipeline usage for user {user_id}")

# Usage in application
def main():
    # Set up resources. In production, check whether the resources already exist
    # (or handle the resulting error) rather than calling create() on every run.
    setup_resources()

    # Example RAG query tracking
    query = "What are the key features of quantum computing?"
    context_chunks = [
        "Quantum computing uses quantum bits or qubits which can exist in multiple states simultaneously.",
        "Unlike classical bits that can be either 0 or 1, qubits leverage superposition and entanglement.",
        "This allows quantum computers to process complex calculations exponentially faster than classical computers for certain problems."
    ]
    response = (
        "Quantum computing's key features include qubits (which leverage superposition to exist in "
        "multiple states simultaneously), entanglement (allowing qubits to be correlated with each other), "
        "and quantum interference. These properties enable quantum computers to solve certain complex "
        "problems exponentially faster than classical computers, particularly in areas like cryptography, "
        "optimization, simulation of quantum systems, and certain types of machine learning tasks."
    )

    track_rag_query(query, context_chunks, response, user_id="user_12345")

if __name__ == "__main__":
    main()
```
Best Practices
When working with categories and resources in the Pay-i Python SDK, consider these best practices:
- Check Before Creating: Always check whether a resource exists before attempting to create it, to avoid errors (see the sketch after this list).
- Use Descriptive Names: Choose meaningful category and resource names that clearly indicate their purpose.
- Set Accurate Prices: For custom resources, set prices that accurately reflect your actual costs so budget tracking is correct.
- Track Complete Workflows: When using multiple AI services together (as in RAG), track every component of the workflow to get a complete picture of costs.
- Version Resources: When updating pricing or specifications, consider creating new versioned resources rather than modifying existing ones, to preserve historical data.
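The first practice can be implemented with the methods covered in this guide: list the resources in a category and only call create when the name is missing. The sketch below is illustrative rather than part of the SDK; the `ensure_resource` helper is hypothetical, and it reuses the `vector-db` / `qdrant-embeddings` names from the custom resource example above.

```python
from datetime import datetime, timezone

from payi import Payi

client = Payi()

def ensure_resource(category: str, resource: str, units: dict, **kwargs):
    """Create a custom resource only if it does not already exist (hypothetical helper)."""
    try:
        existing = client.categories.list_resources(category=category)
        if any(r.resource == resource for r in existing):
            print(f"Resource '{resource}' already exists in '{category}', skipping create")
            return None
    except Exception:
        # Assumption: listing a category that does not exist yet may raise;
        # in that case there is nothing to skip and we fall through to create.
        pass

    return client.categories.resources.create(
        category=category,
        resource=resource,
        units=units,
        start_timestamp=datetime.now(timezone.utc),
        **kwargs
    )

# Reuses the vector database example from earlier in this guide
ensure_resource(
    category="vector-db",
    resource="qdrant-embeddings",
    units={"text": {"input_price": 0.0000025, "output_price": 0.0}},
    max_input_units=10000,
    max_output_units=0,
)
```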
Pagination in List Methods
When using any `list()` method in the SDK (such as `client.categories.list()` or `client.categories.list_resources()`), pagination is handled automatically for you:
```python
# Automatically iterates through all pages of results
for category in client.categories.list():
    print(f"Category: {category.category}")

# The same automatic pagination works for resources
for resource in client.categories.list_resources(category="system.openai"):
    print(f"Resource: {resource.resource}")
```
You don't need to handle pagination or cursor management yourself; the SDK makes additional API calls as needed while you iterate through the results, so you can work with large collections of categories and resources without writing any pagination code.
For advanced usage, including cursor and sorting parameters, see the Pagination Parameters documentation.
Reference
Method Reference
The tables below give a complete list of the methods and parameters related to Categories and Resources in the Pay-i Python SDK.
Categories Methods (`client.categories`)

| Method | Description | API Reference | REST API Endpoint |
|---|---|---|---|
| `list()` | List all available categories | Get Categories | `GET /api/v1/categories` |
| `list_resources(category)` | List all resources in a category | Get Resources List | `GET /api/v1/categories/{category}/resources` |
| `delete_resource(resource, *, category)` | Delete all versions of a specific resource | Delete Category Resource | `DELETE /api/v1/categories/{category}/resources/{resource}` |
| `delete(category)` | Delete a category and all its resources | Delete Category | `DELETE /api/v1/categories/{category}` |
Resources Methods (`client.categories.resources`)

| Method | Description | API Reference | REST API Endpoint |
|---|---|---|---|
| `create()` | Create a custom resource | Create Resource | `POST /api/v1/categories/{category}/resources/{resource}` |
| `retrieve(resource_id, category, resource)` | Get details of a specific resource version | Get Resource by ID | `GET /api/v1/categories/{category}/resources/{resource}/{resource_id}` |
| `list(resource, category)` | List all versions of a specific resource | Get Resources by Name | `GET /api/v1/categories/{category}/resources/{resource}` |
| `delete(resource_id, category, resource)` | Delete a specific resource version | Delete Resource | `DELETE /api/v1/categories/{category}/resources/{resource}/{resource_id}` |
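The version-level methods operate on the `resource_id` returned when a resource version is created. The following is a minimal sketch, not a definitive implementation: it reuses the `vector-db` / `qdrant-embeddings` names from earlier, follows the signatures in the table above, and assumes the returned objects expose fields matching the `create()` parameters (such as `max_input_units`), which you should verify against your SDK version.

```python
from payi import Payi

client = Payi()

# List all versions of a custom resource, then inspect each one by ID.
versions = client.categories.resources.list(
    resource="qdrant-embeddings",
    category="vector-db"
)

for version in versions:
    details = client.categories.resources.retrieve(
        version.resource_id,
        category="vector-db",
        resource="qdrant-embeddings"
    )
    # Assumption: the returned object mirrors the create() parameters.
    print(f"Version {details.resource_id}: max input units = {details.max_input_units}")

# Deleting a single version removes only that pricing entry, whereas
# client.categories.delete_resource() removes every version of the resource.
# client.categories.resources.delete(
#     version.resource_id,
#     category="vector-db",
#     resource="qdrant-embeddings"
# )
```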
For more information on tracking usage of resources, see the Python SDK Ingest documentation.