
Fetch Prompts via SDK/API

This guide shows you how to fetch prompt configurations from variants or environments using the Agenta SDK.

Fetching a Prompt Configuration

You can fetch a configuration using either a variant reference (app_slug, variant_slug, variant_version) or an environment reference (app_slug, environment_slug, environment_version). If you omit the version parameter, the SDK fetches the latest version of the specified variant or environment. If you provide neither a variant nor an environment reference, the SDK defaults to the latest configuration deployed to the production environment.

tip

Check the reference section for more details on the data format used for prompts.

Default Behavior when fetching

If you don't provide either variant or environment identifiers, the SDK fetches the latest configuration deployed to the production environment.

import agenta as ag

ag.init()

# No variant or environment reference: defaults to the production environment
config = ag.ConfigManager.get_from_registry(
    app_slug="my-app-slug"
)

print("Fetched configuration from production:")
print(config)

Example Output:

{
  "prompt": {
    "messages": [
      {
        "role": "system",
        "content": "You are an assistant that provides concise answers"
      },
      {
        "role": "user",
        "content": "Explain {{topic}} in simple terms"
      }
    ],
    "llm_config": {
      "model": "gpt-3.5-turbo",
      "top_p": 1.0,
      "max_tokens": 150,
      "temperature": 0.7,
      "presence_penalty": 0.0,
      "frequency_penalty": 0.0
    },
    "template_format": "curly"
  }
}
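The returned value is a plain Python dict, so individual fields can be read directly. A minimal sketch, using literal values that mirror the example output above:

```python
# Example configuration mirroring the output shown above
config = {
    "prompt": {
        "messages": [
            {"role": "system", "content": "You are an assistant that provides concise answers"},
            {"role": "user", "content": "Explain {{topic}} in simple terms"},
        ],
        "llm_config": {"model": "gpt-3.5-turbo", "temperature": 0.7, "max_tokens": 150},
        "template_format": "curly",
    }
}

# Read individual fields from the nested dict
model = config["prompt"]["llm_config"]["model"]
system_message = config["prompt"]["messages"][0]["content"]
print(model)           # gpt-3.5-turbo
print(system_message)  # You are an assistant that provides concise answers
```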
tip

Agenta provides a helper class, PromptTemplate, for formatting the fetched configuration and turning it into provider-ready arguments.

import agenta as ag
from openai import OpenAI
from agenta.sdk.types import PromptTemplate

# Fetch configuration
config = ag.ConfigManager.get_from_registry(
    app_slug="my-app-slug"
)

# Format the prompt with variables
prompt = PromptTemplate(**config['prompt']).format(topic="AI")

# Use with OpenAI
client = OpenAI()
response = client.chat.completions.create(
    **prompt.to_openai_kwargs()
)

print(response.choices[0].message.content)
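If you prefer not to use the helper class, curly-style placeholders ({{name}}) can also be filled with a small substitution function. A minimal sketch, not part of the SDK:

```python
import re

def render_curly(template: str, **variables: str) -> str:
    """Replace {{name}} placeholders with the given variable values.

    Unmatched placeholders are left untouched.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: variables.get(m.group(1), m.group(0)),
        template,
    )

print(render_curly("Explain {{topic}} in simple terms", topic="AI"))
# Explain AI in simple terms
```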

Fetching by Variant Reference

# Fetch configuration by variant
config = ag.ConfigManager.get_from_registry(
    app_slug="my-app-slug",
    variant_slug="my-variant-slug",
    variant_version=2  # Optional: if not provided, fetches the latest version
)

print("Fetched configuration:")
print(config)

Fetching by Environment Reference

# Fetch the latest configuration from the staging environment
config = ag.ConfigManager.get_from_registry(
    app_slug="my-app",
    environment_slug="staging",
    environment_version=1  # Optional: if not provided, fetches the latest version
)

print("Fetched configuration from staging:")
print(config)

Response Format

The API response contains your prompt configuration under params:

{
  "params": {
    "prompt": {
      "messages": [
        {
          "role": "system",
          "content": "You are an assistant that provides concise answers"
        },
        {
          "role": "user",
          "content": "Explain {{topic}} in simple terms"
        }
      ],
      "llm_config": {
        "model": "gpt-3.5-turbo",
        "max_tokens": 150,
        "temperature": 0.7,
        "top_p": 1.0,
        "frequency_penalty": 0.0,
        "presence_penalty": 0.0
      },
      "template_format": "curly"
    }
  },
  "url": "https://cloud.agenta.ai/services/completion",
  "application_ref": {
    "slug": "my-app-slug",
    "version": null,
    "id": "..."
  },
  "variant_ref": {
    "slug": "my-variant-slug",
    "version": 2,
    "id": "..."
  },
  "environment_ref": {
    "slug": "production",
    "version": 1,
    "id": "..."
  }
}
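Note that the SDK returns the prompt configuration directly, while the raw API response wraps it under params. When calling the HTTP API yourself, read the prompt out of the params key of the parsed JSON. A minimal sketch, using an abbreviated version of the response above:

```python
# Abbreviated API response mirroring the structure shown above
response = {
    "params": {
        "prompt": {
            "messages": [
                {"role": "user", "content": "Explain {{topic}} in simple terms"},
            ],
            "llm_config": {"model": "gpt-3.5-turbo"},
            "template_format": "curly",
        }
    },
    "variant_ref": {"slug": "my-variant-slug", "version": 2, "id": "..."},
}

# The prompt configuration lives under "params"
prompt = response["params"]["prompt"]
print(prompt["llm_config"]["model"])       # gpt-3.5-turbo
print(response["variant_ref"]["version"])  # 2
```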
Asynchronous Operations in Python SDK

All SDK methods have async counterparts prefixed with "a" (for example, aget_from_registry instead of get_from_registry):

async def async_operations():
    # Fetch configuration asynchronously
    config = await ag.ConfigManager.aget_from_registry(...)