Quick Start
In this tutorial, you'll learn how to build a custom workflow with two prompts. By the end, you'll have an interactive playground to run and evaluate your chain of prompts.
The complete code for this tutorial is available here.
Custom workflows in Agenta
Custom workflows are Python applications connected to Agenta via API endpoints. After linking your app, you can use Agenta's playground to:
- Experiment with configurations
- Run evaluations
- Deploy the workflow
- Monitor its performance
1. Creating the application
We will build an app that summarizes a blog post and generates a tweet based on the summary. The highlighted lines indicate integration with Agenta.
```python
from openai import OpenAI
from pydantic import BaseModel, Field
import agenta as ag

ag.init()
client = OpenAI()

prompt1 = "Summarize the following blog post: {blog_post}"
prompt2 = "Write a tweet based on this: {output_1}"


class CoPConfig(BaseModel):
    prompt1: str = Field(default=prompt1)
    prompt2: str = Field(default=prompt2)


@ag.route("/", config_schema=CoPConfig)
def generate(blog_post: str):
    config = ag.ConfigManager.get_from_route(schema=CoPConfig)
    formatted_prompt1 = config.prompt1.format(blog_post=blog_post)
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": formatted_prompt1}],
    )
    output_1 = completion.choices[0].message.content
    formatted_prompt2 = config.prompt2.format(output_1=output_1)
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": formatted_prompt2}],
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    import uvicorn

    uvicorn.run("agenta.sdk.decorators.routing:app", host="0.0.0.0", port=8000, reload=True)
```
Let's explore each section:
Initialization
```python
import agenta as ag

ag.init()
```

Initialize Agenta using `ag.init()`. Agenta automatically provides the necessary environment variables (`AGENTA_API_KEY` and `AGENTA_HOST`) when serving your application.
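Under the hood this is the ordinary environment-variable pattern; a minimal sketch with placeholder values (the key and host below are illustrative, not real credentials or guaranteed defaults, and in an Agenta-served app they are injected for you):

```python
import os

# Placeholder values for illustration only; when Agenta serves your
# application, these variables are already set before ag.init() runs.
os.environ["AGENTA_API_KEY"] = "your-api-key"
os.environ["AGENTA_HOST"] = "https://your-agenta-host"

api_key = os.environ["AGENTA_API_KEY"]
host = os.environ["AGENTA_HOST"]
```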
Workflow configuration
```python
class CoPConfig(BaseModel):
    prompt1: str = Field(default=prompt1)
    prompt2: str = Field(default=prompt2)
```
Workflows have configurable parameters you can edit and version directly in the playground. Agenta uses Pydantic models to define these configurations. Each field must have a default value. String fields appear as text areas; you can also include integers, floats, or booleans, which appear as sliders or checkboxes.
This tutorial uses a simple configuration. In practice, you can add more parameters, such as model choice, temperature, or other advanced settings.
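As a sketch of what such an extended configuration could look like (the extra field names, defaults, and bounds here are hypothetical choices, not anything Agenta requires):

```python
from pydantic import BaseModel, Field


class CoPConfigExtended(BaseModel):
    # Text fields render as text areas in the playground.
    prompt1: str = Field(default="Summarize the following blog post: {blog_post}")
    prompt2: str = Field(default="Write a tweet based on this: {output_1}")
    model: str = Field(default="gpt-3.5-turbo")
    # Numeric fields render as sliders; booleans as checkboxes.
    temperature: float = Field(default=0.7, ge=0.0, le=2.0)
    include_hashtags: bool = Field(default=False)
```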
Defining entry points
```python
@ag.route("/", config_schema=CoPConfig)
def generate(blog_post: str):
    ...
```

Agenta communicates with your workflow through defined entry points, creating an HTTP API for each one. The `config_schema` parameter specifies the expected configuration model for this entry point.
Accessing configuration in your code
```python
config = ag.ConfigManager.get_from_route(schema=CoPConfig)
```
Access the configuration passed from the playground or evaluations using `ag.ConfigManager.get_from_route()`. This allows dynamic configuration changes without modifying your code.
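The data flow through the two prompts is plain string templating followed by chained calls; here is a framework-free sketch with the LLM call stubbed out (`fake_llm` is a stand-in for `client.chat.completions.create`, not part of any library):

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for the real OpenAI call, so the chaining is easy to trace.
    return f"<response to: {prompt}>"

prompt1 = "Summarize the following blog post: {blog_post}"
prompt2 = "Write a tweet based on this: {output_1}"

def generate(blog_post: str) -> str:
    # Step 1: fill the first template and "call the model".
    output_1 = fake_llm(prompt1.format(blog_post=blog_post))
    # Step 2: feed the first output into the second template.
    return fake_llm(prompt2.format(output_1=output_1))
```

Swapping `config.prompt1` in for the module-level `prompt1` is all the real workflow adds: the prompt text becomes editable from the playground while the chaining logic stays the same.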
Running the server
Use Uvicorn to serve your application:
```python
if __name__ == "__main__":
    import uvicorn

    uvicorn.run("agenta.sdk.decorators.routing:app", host="0.0.0.0", port=8000, reload=True)
```

`agenta.sdk.decorators.routing:app` refers to the FastAPI application that handles all requests from Agenta.
The application server (`http://localhost:8000`) must be accessible from the internet when running evaluations. Consider using ngrok for tunneling during tests.
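Assuming ngrok is installed and the server above is running locally, a tunnel can be opened like this (the printed forwarding URL is what you later paste into Agenta; the port must match the one passed to `uvicorn.run`):

```shell
# Expose local port 8000 to the internet; ngrok prints a public
# https forwarding URL that proxies to your workflow server.
ngrok http 8000
```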
The server includes middleware for authentication using Agenta's API key.
Linking your application to Agenta
To connect your app:
- Click "Create Custom Workflow".
- Enter the URL and application name.
You will be redirected to the playground.
Adding observability (optional)
Initially, your workflow isn't automatically instrumented. To enable observability and debugging, instrument your application as follows:
- Add the `opentelemetry.instrumentation.openai` package to your `requirements.txt`.
- Update your code:
```python
from openai import OpenAI
import agenta as ag
from pydantic import BaseModel, Field
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

ag.init()
client = OpenAI()

prompt1 = "Summarize the following blog post: {blog_post}"
prompt2 = "Write a tweet based on this: {output_1}"

OpenAIInstrumentor().instrument()


class CoPConfig(BaseModel):
    prompt1: str = Field(default=prompt1)
    prompt2: str = Field(default=prompt2)


@ag.route("/", config_schema=CoPConfig)
@ag.instrument()
def generate(blog_post: str):
    config = ag.ConfigManager.get_from_route(schema=CoPConfig)
    formatted_prompt1 = config.prompt1.format(blog_post=blog_post)
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": formatted_prompt1}],
    )
    output_1 = completion.choices[0].message.content
    formatted_prompt2 = config.prompt2.format(output_1=output_1)
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": formatted_prompt2}],
    )
    return completion.choices[0].message.content
```
Ensure the `@ag.instrument()` decorator is always placed after `@ag.route()`.
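The ordering matters because Python applies stacked decorators bottom-up, so the lower decorator wraps the function first. A toy sketch of that mechanism (`route` and `instrument` here are trivial stand-ins, not Agenta's actual decorators):

```python
def route(fn):
    # Toy stand-in: applied second, so it sees the already-instrumented function.
    fn.routed = True
    return fn

def instrument(fn):
    # Toy stand-in: applied first, wrapping the raw function.
    fn.instrumented = True
    return fn

@route        # outer decorator, applied last
@instrument   # inner decorator, applied first
def generate():
    return "ok"
```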
You can now view traces in the playground for easier debugging.