Configuring payi_instrument()
Overview
The payi_instrument() function is the entry point for Pay-i's automatic instrumentation. Call it once at application startup to intercept GenAI provider calls and route telemetry to Pay-i. This page is a configuration reference — for usage patterns and getting started, see Day 1: Basic Instrumentation.
Function Signature
```python
from payi.lib.instrument import payi_instrument

payi_instrument(
    payi=None,
    instruments=None,
    log_prompt_and_response=True,
    config=None,
    logger=None,
)
```

All parameters are keyword-only. Calling payi_instrument() more than once is a no-op; only the first call takes effect.
Top-Level Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
payi | Payi, AsyncPayi, or list of both | None | Pay-i client instance(s) to use for ingesting telemetry. When None, clients are created automatically using environment variables (PAYI_API_KEY, PAYI_BASE_URL). Pass a list when you need both sync and async clients. |
instruments | set[str] | None | Restricts which provider SDKs are instrumented. None or {"*"} instruments all supported providers. See Instrument Values below. |
log_prompt_and_response | bool | True | When True, full prompt and response content is sent to Pay-i. Set to False to send only cost and token metadata. The application in Pay-i must also be configured to store prompts and responses for the values to be retained. |
config | PayiInstrumentConfig | None | A dictionary of configuration options that control instrumentation behavior and set global context defaults. See PayiInstrumentConfig below. |
logger | logging.Logger | None | Custom logger for SDK diagnostic messages. Defaults to logging.getLogger("payi.instrument"). |
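For instance, the sketch below passes explicitly constructed sync and async clients, disables prompt/response capture, and supplies a custom logger. The Payi and AsyncPayi constructor arguments shown are assumptions; by default, clients are created automatically from PAYI_API_KEY and PAYI_BASE_URL.

```python
import logging
import os

from payi import AsyncPayi, Payi
from payi.lib.instrument import payi_instrument

# Explicit clients (constructor arguments are an assumption; by default the
# SDK builds clients from PAYI_API_KEY / PAYI_BASE_URL automatically).
sync_client = Payi(api_key=os.environ["PAYI_API_KEY"])
async_client = AsyncPayi(api_key=os.environ["PAYI_API_KEY"])

payi_instrument(
    payi=[sync_client, async_client],          # list supplies both sync and async clients
    log_prompt_and_response=False,             # send only cost and token metadata
    logger=logging.getLogger("my_app.payi"),   # custom diagnostic logger
)
```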
Instrument Values
When you pass a set of strings to instruments, only the matching providers are instrumented. Use the PayiCategories class for readable, typo-proof values:
| PayiCategories value | Provider |
|---|---|
PayiCategories.openai | OpenAI |
PayiCategories.azure | Azure-hosted models (OpenAI + Anthropic) |
PayiCategories.anthropic | Anthropic |
PayiCategories.aws_bedrock | AWS Bedrock |
PayiCategories.google_vertex | Google Vertex AI / Google GenAI |
PayiCategories.databricks_azure | Databricks (Azure) |
PayiCategories.databricks_aws | Databricks (AWS) |
PayiCategories.databricks_google | Databricks (Google) |
```python
from payi.lib.helpers import PayiCategories

# Instrument only OpenAI and Anthropic
payi_instrument(instruments={PayiCategories.openai, PayiCategories.anthropic})
```

PayiInstrumentConfig
The config parameter accepts a PayiInstrumentConfig dictionary. Settings fall into two groups: behavior settings that control how instrumentation works, and global context defaults that set initial values for the context.
Behavior Settings
| Setting | Type | Default | Description |
|---|---|---|---|
proxy | bool | False | When False, requests are instrumented in the library and ingested into Pay-i. When True, requests are routed through the Pay-i proxy. |
global_instrumentation | bool | True | When True, the payi library maintains the payi context state across all threads and tasks. Set to False when you want to control tracking exclusively through @track decorators or track_context. |
instrument_inline_data | bool | False | When True, base64-encoded inline data (e.g. images) in requests is included in telemetry. Disable to reduce payload size. |
connection_error_logging_window | int | 60 | Minimum seconds between repeated connection-error log messages. Prevents log flooding when the Pay-i service is temporarily unreachable. Must be non-negative. |
host_mappings | dict | None | Maps hostnames to Pay-i categories for correct pricing when a client SDK communicates with a host other than its native provider, for example using the OpenAI client to call open-weight or Databricks-hosted models. See Host Mappings. |
offline_instrumentation | dict | None | Writes telemetry to a local JSON file instead of sending it to Pay-i. See Offline Instrumentation. |
ingest_retry | dict | None | Controls retry behavior for failed ingest calls. See Ingest Retry. |
openai_config | dict | None | OpenAI-specific configuration. See OpenAI Config. |
anthropic_config | dict | None | Anthropic-specific configuration. See Anthropic Config. |
aws_config | dict | None | AWS Bedrock-specific configuration. See AWS Bedrock Config. |
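As an illustration, a config that adjusts only the behavior settings (leaving the global context defaults untouched) might look like the sketch below; the specific values are placeholders.

```python
payi_instrument(
    config={
        "proxy": False,                          # instrument-and-ingest mode (default)
        "global_instrumentation": True,          # maintain context across threads and tasks
        "instrument_inline_data": False,         # skip base64 inline data in telemetry
        "connection_error_logging_window": 120,  # at most one connection-error log per 2 minutes
    }
)
```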
Global Context Defaults
These values populate the initial context frame when global_instrumentation = True. They can be overridden per-request by @track, track_context, or extra_headers.
| Setting | Type | Default | Description |
|---|---|---|---|
use_case_name | str | Calling filename | Identifies the Use Case. Defaults to the name of the Python file that called payi_instrument(). |
use_case_id | str | None | Specific Use Case Instance ID. When omitted, a new instance is created automatically. |
use_case_version | int | None | Version number for the use case instance. |
use_case_properties | dict[str, str] | None | Custom key-value properties attached to the use case. |
limit_ids | list[str] | None | Limit IDs to check on every request. |
user_id | str | None | Identifies the end user for attribution and per-user limits. |
account_name | str | None | Account name for multi-tenant attribution. |
request_properties | dict[str, str] | None | Custom key-value properties attached to every request. |
```python
payi_instrument(
    config={
        "use_case_name": "customer_support_bot",
        "user_id": "user5",
        "limit_ids": ["global_budget"],
        "request_properties": {"environment": "production"},
    }
)
```

Host Mappings
Maps hostnames to Pay-i categories for correct pricing. This is needed when you use a provider's client SDK to send requests to a host other than the provider's own, for example using the OpenAI client to communicate with open-weight models hosted on Databricks or a custom inference endpoint. Without a host mapping, Pay-i falls back to the default category associated with the client, which results in the request being tracked as unknown_model.
The key is a hostname string or httpx.URL. The value is a PayiInstrumentHostMappingConfig dictionary.
| Setting | Type | Default | Description |
|---|---|---|---|
price_as_category | str | None | The Pay-i category to use for pricing requests to this host (e.g. "system.databricks.azure"). |
```python
payi_instrument(
    config={
        "host_mappings": {
            "my-workspace.cloud.databricks.com": {"price_as_category": "system.databricks.azure"},
        }
    }
)
```

Offline Instrumentation
Writes telemetry packets to a local JSON file instead of sending them to Pay-i. The file is written when the process exits. This is useful for testing, debugging, or air-gapped environments.
| Setting | Type | Default | Description |
|---|---|---|---|
file_name | str | payi_instrumentation_<timestamp>.json | Output file path. The timestamp format is YYYY-MM-DD-HH-MM-SS. |
```python
payi_instrument(
    config={
        "offline_instrumentation": {"file_name": "telemetry_capture.json"}
    }
)
```

When offline instrumentation is enabled, no Pay-i client is required.
Ingest Retry
Controls retry behavior when ingest calls to the Pay-i service fail due to transient connection errors. There are two retry layers: inline retries that happen synchronously before the call returns, and a background retry queue that re-delivers failed packets asynchronously.
| Setting | Type | Default | Description |
|---|---|---|---|
max_inline_retries | int | 0 | Maximum number of synchronous retries per ingest call. Each retry doubles the delay (exponential backoff). Must be non-negative. |
inline_retry_initial_delay | float | 0.5 | Initial delay in seconds before the first inline retry. Doubles on each subsequent retry. Must be non-negative. |
queue_enabled | bool | True | Enables the background retry queue. Failed ingest calls that exhaust inline retries are enqueued for later delivery. NOTE: Setting to False can result in dropped ingest calls and data loss. |
queue_max_size | int | 0 | Maximum number of items in the retry queue. 0 means unlimited. Must be non-negative. |
queue_interval | float | 5.0 | Seconds between background retry queue drain cycles. Must be positive. |
```python
payi_instrument(
    config={
        "ingest_retry": {
            "max_inline_retries": 2,
            "inline_retry_initial_delay": 1.0,
            "queue_enabled": True,
            "queue_max_size": 100,
        }
    }
)
```

OpenAI Config
Configuration for the OpenAI provider instrumentation.
| Setting | Type | Default | Description |
|---|---|---|---|
model_mappings | list[ModelMapping] | None | Maps model names to Pay-i categories/resources for pricing. See Model Mappings. |
```python
payi_instrument(
    config={
        "openai_config": {
            "model_mappings": [
                {"model": "my-gpt4-deployment", "price_as_resource": "gpt-4"},
            ]
        }
    }
)
```

Anthropic Config
Configuration for the Anthropic provider instrumentation.
| Setting | Type | Default | Description |
|---|---|---|---|
model_mappings | list[ModelMapping] | None | Maps model names to Pay-i categories/resources for pricing. See Model Mappings. |
```python
payi_instrument(
    config={
        "anthropic_config": {
            "model_mappings": [
                {"model": "my-claude-deployment", "price_as_resource": "claude-sonnet-4-20250514"},
            ]
        }
    }
)
```

AWS Bedrock Config
Configuration for the AWS Bedrock provider instrumentation.
| Setting | Type | Default | Description |
|---|---|---|---|
guardrail_trace | bool | True | Automatically enables guardrail tracing on Bedrock requests that include a guardrail ID and version. Note: if set to False, Pay-i cannot track guardrail usage and cost. |
add_streaming_xproxy_result | bool | False | When True, attaches the xproxy_result object to streaming response events. Enable this when your code needs to inspect Pay-i results during stream iteration. |
model_mappings | list[ModelMapping] | [] | Maps model identifiers to Pay-i categories/resources. See Model Mappings. |
model_config | dict[str, ModelConfig] | {} | Per-model configuration keyed by model name. See Model Config. |
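A minimal sketch combining these Bedrock-specific settings (the values shown are placeholder choices, not defaults):

```python
payi_instrument(
    config={
        "aws_config": {
            "guardrail_trace": True,              # keep tracking guardrail usage and cost
            "add_streaming_xproxy_result": True,  # expose xproxy_result on streaming response events
        }
    }
)
```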
Model Mappings
Model mappings tell Pay-i how to price requests when the model name or deployment name doesn't directly correspond to a known Pay-i resource. Each mapping is a dictionary with these fields:
| Setting | Type | Default | Description |
|---|---|---|---|
model | str | (required) | The model name or deployment name to match. |
host | str or httpx.URL | None | Restrict this mapping to requests sent to a specific host. |
price_as_category | str | None | The Pay-i category to use (e.g. "system.openai"). |
price_as_resource | str | None | The Pay-i resource name to use for pricing (e.g. "gpt-4"). |
resource_scope | str | None | Pricing scope. One of "global", "datazone", "region", or "region.<region_name>". |
At least one setting (price_as_category, price_as_resource, or resource_scope) must be specified.
```python
payi_instrument(
    config={
        "aws_config": {
            "model_mappings": [
                {
                    "model": "arn:aws:bedrock:us-east-1::foundation-model/my-custom-model",
                    "price_as_category": "system.aws.bedrock",
                    "price_as_resource": "anthropic.claude-3-sonnet",
                    "resource_scope": "region.us-east-1",
                },
            ]
        }
    }
)
```

Model Config
Per-model configuration, keyed by model name.
| Setting | Type | Default | Description |
|---|---|---|---|
tokenizer_path | str | None | Path to a custom tokenizer file for token counting with the tokenizers library. Required when an instrumented model does not self-report token usage and Pay-i must count tokens itself. The tokenizers library must be installed in addition to providing the tokenizer file. |
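For example, a Bedrock model that does not self-report token usage could be pointed at a local tokenizer file as in the sketch below. The model name and file path are placeholders, and the tokenizers package must be installed separately.

```python
payi_instrument(
    config={
        "aws_config": {
            "model_config": {
                # Key is the model name; tokenizer_path points at a tokenizers-format file.
                "my-custom-model": {"tokenizer_path": "/path/to/tokenizer.json"},
            }
        }
    }
)
```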