Configuring payi_instrument()

Overview

The payi_instrument() function is the entry point for Pay-i's automatic instrumentation. Call it once at application startup to intercept GenAI provider calls and route telemetry to Pay-i. This page is a configuration reference — for usage patterns and getting started, see Day 1: Basic Instrumentation.

Function Signature

from payi.lib.instrument import payi_instrument

payi_instrument(
    payi=None,
    instruments=None,
    log_prompt_and_response=True,
    config=None,
    logger=None,
)

All parameters are keyword-only. Calling payi_instrument() more than once is a no-op — only the first call takes effect.

Top-Level Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| payi | Payi, AsyncPayi, or a list of both | None | Pay-i client instance(s) used to ingest telemetry. When None, clients are created automatically from the PAYI_API_KEY and PAYI_BASE_URL environment variables. Pass a list when you need both sync and async clients. |
| instruments | set[str] | None | Restricts which provider SDKs are instrumented. None or {"*"} instruments all supported providers. See Instrument Values below. |
| log_prompt_and_response | bool | True | When True, full prompt and response content is sent to Pay-i. Set to False to send only cost and token metadata. The application in Pay-i must also be configured to store prompts and responses for the values to be persisted. |
| config | PayiInstrumentConfig | None | A dictionary of configuration options that control instrumentation behavior and set global context defaults. See PayiInstrumentConfig below. |
| logger | logging.Logger | None | Custom logger for SDK diagnostic messages. Defaults to logging.getLogger("payi.instrument"). |
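
For example, to pass explicit clients rather than rely on environment variables. This is a sketch: the Payi and AsyncPayi import path is assumed from the Pay-i SDK, and the placeholder API key must be replaced with a real one.

```python
from payi import Payi, AsyncPayi  # assumed import path for the client classes
from payi.lib.instrument import payi_instrument

# Pass both a sync and an async client explicitly instead of relying on
# PAYI_API_KEY / PAYI_BASE_URL, and send only cost and token metadata.
payi_instrument(
    payi=[Payi(api_key="YOUR_API_KEY"), AsyncPayi(api_key="YOUR_API_KEY")],
    log_prompt_and_response=False,
)
```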

Instrument Values

When you pass a set of strings to instruments, only the matching providers are instrumented. Use the PayiCategories class for readable, typo-proof values:

| PayiCategories value | Provider |
| --- | --- |
| PayiCategories.openai | OpenAI |
| PayiCategories.azure | Azure-hosted models (OpenAI + Anthropic) |
| PayiCategories.anthropic | Anthropic |
| PayiCategories.aws_bedrock | AWS Bedrock |
| PayiCategories.google_vertex | Google Vertex AI / Google GenAI |
| PayiCategories.databricks_azure | Databricks (Azure) |
| PayiCategories.databricks_aws | Databricks (AWS) |
| PayiCategories.databricks_google | Databricks (Google) |

from payi.lib.helpers import PayiCategories

# Instrument only OpenAI and Anthropic
payi_instrument(instruments={PayiCategories.openai, PayiCategories.anthropic})

PayiInstrumentConfig

The config parameter accepts a PayiInstrumentConfig dictionary. Settings fall into two groups: behavior settings that control how instrumentation works, and global context defaults that set initial values for the context.

Behavior Settings

| Setting | Type | Default | Description |
| --- | --- | --- | --- |
| proxy | bool | False | When False, requests are instrumented in the library and ingested into Pay-i. When True, requests are routed through the Pay-i proxy. |
| global_instrumentation | bool | True | When True, the payi library maintains the Pay-i context state across all threads and tasks. Set to False when you want to control tracking exclusively through @track decorators or track_context. |
| instrument_inline_data | bool | False | When True, base64-encoded inline data (e.g. images) in requests is included in telemetry. Disable to reduce payload size. |
| connection_error_logging_window | int | 60 | Minimum seconds between repeated connection-error log messages. Prevents log flooding when the Pay-i service is temporarily unreachable. Must be non-negative. |
| host_mappings | dict | None | Maps hostnames to Pay-i categories for correct pricing when a client SDK communicates with a host other than its native provider (for example, using the OpenAI client to call open-weight or Databricks-hosted models). See Host Mappings. |
| offline_instrumentation | dict | None | Writes telemetry to a local JSON file instead of sending it to Pay-i. See Offline Instrumentation. |
| ingest_retry | dict | None | Controls retry behavior for failed ingest calls. See Ingest Retry. |
| openai_config | dict | None | OpenAI-specific configuration. See OpenAI Config. |
| anthropic_config | dict | None | Anthropic-specific configuration. See Anthropic Config. |
| aws_config | dict | None | AWS Bedrock-specific configuration. See AWS Bedrock Config. |
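
A sketch combining several behavior settings in one call; the values are illustrative, not recommendations:

```python
from payi.lib.instrument import payi_instrument

payi_instrument(
    config={
        "proxy": False,                          # instrument locally, no proxy
        "global_instrumentation": True,          # maintain context across threads and tasks
        "instrument_inline_data": False,         # omit base64 inline data from telemetry
        "connection_error_logging_window": 120,  # at most one connection-error log per 2 minutes
    }
)
```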

Global Context Defaults

These values populate the initial context frame when global_instrumentation = True. They can be overridden per-request by @track, track_context, or extra_headers.

| Setting | Type | Default | Description |
| --- | --- | --- | --- |
| use_case_name | str | Calling filename | Identifies the Use Case. Defaults to the name of the Python file that called payi_instrument(). |
| use_case_id | str | None | Specific Use Case Instance ID. When omitted, a new instance is created automatically. |
| use_case_version | int | None | Version number for the use case instance. |
| use_case_properties | dict[str, str] | None | Custom key-value properties attached to the use case. |
| limit_ids | list[str] | None | Limit IDs to check on every request. |
| user_id | str | None | Identifies the end user for attribution and per-user limits. |
| account_name | str | None | Account name for multi-tenant attribution. |
| request_properties | dict[str, str] | None | Custom key-value properties attached to every request. |

payi_instrument(
    config={
        "use_case_name": "customer_support_bot",
        "user_id": "user5",
        "limit_ids": ["global_budget"],
        "request_properties": {"environment": "production"},
    }
)

Host Mappings

Maps hostnames to Pay-i categories for correct pricing. This is needed when you use a provider's client SDK to send requests to a host other than the provider's own, for example, using the OpenAI client to communicate with open-weight models hosted on Databricks or a custom inference endpoint. Without a host mapping, Pay-i falls back to the default category associated with the client, which results in the request being priced as unknown_model.

The key is a hostname string or httpx.URL. The value is a PayiInstrumentHostMappingConfig dictionary.

| Setting | Type | Default | Description |
| --- | --- | --- | --- |
| price_as_category | str | None | The Pay-i category to use for pricing requests to this host (e.g. "system.databricks.azure"). |

payi_instrument(
    config={
        "host_mappings": {
            "my-workspace.cloud.databricks.com": {"price_as_category": "system.databricks.azure"},
        }
    }
)

Offline Instrumentation

Writes telemetry packets to a local JSON file instead of sending them to Pay-i. The file is written when the process exits. This is useful for testing, debugging, or air-gapped environments.

| Setting | Type | Default | Description |
| --- | --- | --- | --- |
| file_name | str | payi_instrumentation_&lt;timestamp&gt;.json | Output file path. The timestamp format is YYYY-MM-DD-HH-MM-SS. |

payi_instrument(
    config={
        "offline_instrumentation": {"file_name": "telemetry_capture.json"}
    }
)

When offline instrumentation is enabled, no Pay-i client is required.
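
Because the capture file is plain JSON, it can be inspected with the standard library. The helper below is hypothetical and assumes only that the top level of the file is a JSON array of packets; the exact schema is not documented here.

```python
import json
from pathlib import Path

def count_packets(path: str) -> int:
    """Count packets in an offline capture file, or 0 if it doesn't exist yet.

    Assumes the file's top level is a JSON array; adjust if your capture
    uses a different layout.
    """
    p = Path(path)
    if not p.exists():
        return 0
    return len(json.loads(p.read_text()))
```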

Ingest Retry

Controls retry behavior when ingest calls to the Pay-i service fail due to transient connection errors. There are two retry layers: inline retries that happen synchronously before the call returns, and a background retry queue that re-delivers failed packets asynchronously.

| Setting | Type | Default | Description |
| --- | --- | --- | --- |
| max_inline_retries | int | 0 | Maximum number of synchronous retries per ingest call. Each retry doubles the delay (exponential backoff). Must be non-negative. |
| inline_retry_initial_delay | float | 0.5 | Initial delay in seconds before the first inline retry. Doubles on each subsequent retry. Must be non-negative. |
| queue_enabled | bool | True | Enables the background retry queue. Failed ingest calls that exhaust inline retries are enqueued for later delivery. Note: setting this to False can result in dropped ingest calls and data loss. |
| queue_max_size | int | 0 | Maximum number of items in the retry queue. 0 means unlimited. Must be non-negative. |
| queue_interval | float | 5.0 | Seconds between background retry queue drain cycles. Must be positive. |

payi_instrument(
    config={
        "ingest_retry": {
            "max_inline_retries": 2,
            "inline_retry_initial_delay": 1.0,
            "queue_enabled": True,
            "queue_max_size": 100,
        }
    }
)

OpenAI Config

Configuration for the OpenAI provider instrumentation.

| Setting | Type | Default | Description |
| --- | --- | --- | --- |
| model_mappings | list[ModelMapping] | None | Maps model names to Pay-i categories/resources for pricing. See Model Mappings. |

payi_instrument(
    config={
        "openai_config": {
            "model_mappings": [
                {"model": "my-gpt4-deployment", "price_as_resource": "gpt-4"},
            ]
        }
    }
)

Anthropic Config

Configuration for the Anthropic provider instrumentation.

| Setting | Type | Default | Description |
| --- | --- | --- | --- |
| model_mappings | list[ModelMapping] | None | Maps model names to Pay-i categories/resources for pricing. See Model Mappings. |

payi_instrument(
    config={
        "anthropic_config": {
            "model_mappings": [
                {"model": "my-claude-deployment", "price_as_resource": "claude-sonnet-4-20250514"},
            ]
        }
    }
)

AWS Bedrock Config

Configuration for the AWS Bedrock provider instrumentation.

| Setting | Type | Default | Description |
| --- | --- | --- | --- |
| guardrail_trace | bool | True | Automatically enables guardrail tracing on Bedrock requests that include a guardrail ID and version. Note: if set to False, Pay-i cannot track guardrail usage or cost. |
| add_streaming_xproxy_result | bool | False | When True, attaches the xproxy_result object to streaming response events. Enable this when your code needs to inspect Pay-i results during stream iteration. |
| model_mappings | list[ModelMapping] | [] | Maps model identifiers to Pay-i categories/resources. See Model Mappings. |
| model_config | dict[str, ModelConfig] | {} | Per-model configuration keyed by model name. See Model Config. |
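
A sketch combining the two Bedrock behavior settings; both names come from the table above, and the values are illustrative:

```python
from payi.lib.instrument import payi_instrument

payi_instrument(
    config={
        "aws_config": {
            "guardrail_trace": True,              # track guardrail usage and cost
            "add_streaming_xproxy_result": True,  # expose xproxy_result on stream events
        }
    }
)
```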

Model Mappings

Model mappings tell Pay-i how to price requests when the model name or deployment name doesn't directly correspond to a known Pay-i resource. Each mapping is a dictionary with these fields:

| Setting | Type | Default | Description |
| --- | --- | --- | --- |
| model | str | (required) | The model name or deployment name to match. |
| host | str or httpx.URL | None | Restricts this mapping to requests sent to a specific host. |
| price_as_category | str | None | The Pay-i category to use (e.g. "system.openai"). |
| price_as_resource | str | None | The Pay-i resource name to use for pricing (e.g. "gpt-4"). |
| resource_scope | str | None | Pricing scope. One of "global", "datazone", "region", or "region.&lt;region_name&gt;". |

At least one setting (price_as_category, price_as_resource, or resource_scope) must be specified.

payi_instrument(
    config={
        "aws_config": {
            "model_mappings": [
                {
                    "model": "arn:aws:bedrock:us-east-1::foundation-model/my-custom-model",
                    "price_as_category": "system.aws.bedrock",
                    "price_as_resource": "anthropic.claude-3-sonnet",
                    "resource_scope": "region.us-east-1",
                },
            ]
        }
    }
)

Model Config

Per-model configuration, keyed by model name.

| Setting | Type | Default | Description |
| --- | --- | --- | --- |
| tokenizer_path | str | None | Path to a custom tokenizer file for token counting with the tokenizers library. Required when an instrumented model does not self-report token usage and Pay-i must count tokens itself. The client must install the tokenizers library in addition to providing the tokenizer file. |
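
A sketch of a per-model tokenizer override, shown under aws_config since that section references Model Config; the model name and tokenizer file path are hypothetical:

```python
from payi.lib.instrument import payi_instrument

payi_instrument(
    config={
        "aws_config": {
            "model_config": {
                # Hypothetical model name and tokenizer file path; the model
                # does not self-report token usage, so Pay-i counts tokens
                # using this tokenizers-format file.
                "my-open-weights-model": {
                    "tokenizer_path": "/opt/tokenizers/my_model.json",
                },
            }
        }
    }
)
```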