Prompt Management
Version-controlled prompt templates with variables, labels, and built-in testing: manage your AI prompts like code.
How It Works
You create prompt templates with {{variable}} placeholders, commit immutable versions, and use labels like production or staging to control which version your application resolves at runtime. When you're ready, test prompts against any model directly from the API.
The workflow is: create prompt → commit versions → assign labels → resolve at runtime.
Two prompt types
Backbone supports Text prompts (a single template string) and Chat prompts (structured messages with roles). The type is set at creation and cannot be changed.
Prompt Types
Text Prompts
A plain text template with {{variable}} placeholders:
```text
You are an expert at {{domain}}. Analyze the following text and provide
a summary focused on {{focus_area}}.

Text: {{input_text}}
```
Chat Prompts
A structured array of messages, each with a role and content. Variables work inside message content:
```json
[
  {
    "role": "system",
    "content": "You are a {{role}} assistant specialized in {{domain}}."
  },
  {
    "role": "user",
    "content": "{{user_input}}"
  }
]
```
Chat prompts also support placeholder messages, named slots that can be filled programmatically:
```json
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "type": "placeholder", "name": "context" },
  { "role": "user", "content": "{{question}}" }
]
```
Creating a Prompt
POST /api/v1/projects/{projectId}/prompts
Request
```bash
curl -X POST https://backbone.manfred-kunze.dev/api/v1/projects/{projectId}/prompts \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk_your_api_key" \
  -d '{
    "name": "summarize",
    "description": "Summarize text with a focus area",
    "type": "TEXT"
  }'
```
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Unique within the project |
| description | string | No | What this prompt does |
| type | string | Yes | TEXT or CHAT; immutable after creation |
Version Control
Every prompt change creates a new immutable version. Versions can't be edited after creation — this gives you a full audit trail of every change.
Committing a Version
POST /api/v1/projects/{projectId}/prompts/{promptId}/versions
```json
{
  "content": "You are an expert at {{domain}}. Summarize: {{input_text}}",
  "config": {
    "temperature": 0.7,
    "maxTokens": 500
  },
  "changeDescription": "Added domain variable for specialization"
}
```
| Field | Type | Required | Description |
|---|---|---|---|
| content | string or array | Yes | Template string for TEXT, message array for CHAT |
| config | object | No | Model config (temperature, maxTokens, etc.) |
| changeDescription | string | No | What changed in this version |
Each new version auto-increments the version number and updates the latest label.
Version Operations
You can:
- List versions: browse the full history with pagination
- View any version: inspect historical content (read-only)
- Re-activate a historical version: creates a new version as a copy, preserving the audit trail
- Deactivate a version: soft-delete versions you don't want used
Re-activation creates a copy
Re-activating version 3 doesn't revert — it creates version N+1 with the same content. The original stays untouched.
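The copy-not-revert semantics can be sketched in a few lines. The data model here (a list of version records) is an assumption for illustration, not Backbone's actual storage:

```python
versions = [
    {"number": 1, "content": "v1 text", "active": True},
    {"number": 2, "content": "v2 text", "active": True},
    {"number": 3, "content": "v3 text", "active": True},
]

def reactivate(versions, number):
    """Re-activate an old version by appending a fresh copy of it.
    History is append-only: nothing is mutated or deleted."""
    source = next(v for v in versions if v["number"] == number)
    new_version = {
        "number": versions[-1]["number"] + 1,  # auto-increment, never reuse
        "content": source["content"],          # copy content, don't move it
        "active": True,
    }
    versions.append(new_version)
    return new_version

copy = reactivate(versions, 1)  # "revert" to v1 by creating v4
```

Because every record survives, the audit trail shows both the original version and the moment it was brought back.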
Labels
Labels are named pointers to specific versions. They decouple your application code from version numbers, enabling controlled rollouts.
How Labels Work
Every prompt gets a latest label automatically, updated on each new version. You create custom labels for deployment stages:
| Label | Points to | Purpose |
|---|---|---|
| latest | v5 (auto) | Always the newest version |
| production | v3 | What your users see |
| staging | v5 | What you're testing |
| canary | v4 | Gradual rollout |
Your application resolves prompts by name and label; when you're ready to promote, just move the label pointer.
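The pointer idea is small enough to sketch directly. This is a conceptual model of labels, not the service's internals:

```python
# Labels are named pointers from label name to version number.
labels = {"latest": 5, "production": 3, "staging": 5}

def resolve(labels, label="latest"):
    """Application code asks for a label, never a version number."""
    return labels[label]

def promote(labels, label, version):
    """Deployment is just repointing the label; no code change needed."""
    labels[label] = version

assert resolve(labels, "production") == 3
promote(labels, "production", 5)  # staging has been validated; ship it
```

Rollback is the same operation in reverse: point `production` back at the previous version number.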
Managing Labels
POST /api/v1/projects/{projectId}/prompts/{promptId}/labels
Create a label:
```json
{
  "name": "production",
  "promptVersionId": "version-uuid"
}
```
Update a label to point to a different version:
PUT /api/v1/projects/{projectId}/prompts/{promptId}/labels/{labelName}
```json
{
  "promptVersionId": "new-version-uuid"
}
```
Label names must be lowercase alphanumeric with hyphens (e.g., production, staging, v2-rollback). The system latest label cannot be deleted.
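The naming rule can be checked client-side before calling the API. The pattern below is an assumption from the description above; the server's exact validation may be stricter (for example, rejecting leading or trailing hyphens):

```python
import re

# Assumed pattern for "lowercase alphanumeric with hyphens".
LABEL_RE = re.compile(r"^[a-z0-9-]+$")

def is_valid_label(name):
    """Return True if the name matches the documented label format."""
    return bool(LABEL_RE.match(name))

for candidate in ["production", "staging", "v2-rollback"]:
    assert is_valid_label(candidate)
```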
Variables
Variables use the {{variableName}} syntax and are automatically detected from prompt content.
How Variables Work
When you compile or test a prompt, Backbone scans the content for {{...}} patterns and returns the list of detected variables. You provide values as a key-value map:
```json
{
  "variables": {
    "domain": "financial analysis",
    "input_text": "Q4 revenue grew 15% year-over-year...",
    "focus_area": "key metrics and trends"
  }
}
```
Variables that aren't provided keep their {{placeholder}} syntax in the output, enabling partial compilation.
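Partial compilation is easy to reason about with a small sketch of the substitution behavior described above (an illustration, not Backbone's implementation):

```python
import re

def compile_template(template, variables):
    """Substitute {{name}} placeholders; unknown names are left intact,
    which is what makes partial compilation possible."""
    def repl(match):
        name = match.group(1)
        return str(variables[name]) if name in variables else match.group(0)
    return re.sub(r"\{\{(\w+)\}\}", repl, template)

template = "You are an expert at {{domain}}. Summarize: {{input_text}}"
partial = compile_template(template, {"domain": "financial analysis"})
# {{input_text}} survives the first pass, ready for a second compilation
```

A common pattern is to bake in stable values (domain, persona) early and fill per-request values (user input) at call time.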
Prompt References
Prompts can reference other prompts using the @@@prompt:...@@@ syntax:
```text
@@@prompt:name=system-context|label=production@@@
Now analyze: {{input_text}}
```
References are resolved recursively (up to 5 levels deep) with circular reference detection.
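The resolution behavior (recursive expansion with a depth cap and cycle detection) can be sketched as follows. The in-memory store keyed by (name, label) is an assumption for illustration:

```python
import re

REF = re.compile(r"@@@prompt:name=([\w-]+)\|label=([\w-]+)@@@")

def resolve_refs(content, store, depth=0, seen=None):
    """Expand @@@prompt:...@@@ references from a {(name, label): content}
    store, with a depth cap and circular-reference detection."""
    if depth > 5:
        raise ValueError("reference nesting too deep")
    seen = seen or set()
    def repl(match):
        key = (match.group(1), match.group(2))
        if key in seen:
            raise ValueError(f"circular reference: {key}")
        # Expand the referenced prompt, which may itself contain references.
        return resolve_refs(store[key], store, depth + 1, seen | {key})
    return REF.sub(repl, content)

store = {("system-context", "production"): "You are a precise analyst."}
text = "@@@prompt:name=system-context|label=production@@@\nNow analyze: {{input_text}}"
expanded = resolve_refs(text, store)
```

Note that variable placeholders pass through untouched; reference expansion and variable substitution are separate steps.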
Resolving Prompts
GET /api/v1/projects/{projectId}/prompts/{promptId}/resolve
This is the main runtime endpoint. Resolve a prompt by ID and optional label to get the compiled content ready for use:
Request
```bash
curl "https://backbone.manfred-kunze.dev/api/v1/projects/{projectId}/prompts/{promptId}/resolve?label=production" \
  -H "Authorization: Bearer sk_your_api_key"
```
| Parameter | Type | Required | Description |
|---|---|---|---|
| label | string | No | Label to resolve (defaults to latest) |
Response:
```json
{
  "promptId": "uuid",
  "promptName": "summarize",
  "type": "TEXT",
  "versionId": "uuid",
  "versionNumber": 3,
  "content": "You are an expert at {{domain}}. Summarize: {{input_text}}",
  "config": { "temperature": 0.7 },
  "label": "production",
  "variables": ["domain", "input_text"]
}
```
Compiling Prompts
POST /api/v1/projects/{projectId}/prompts/{promptId}/compile
Compile substitutes variables and resolves prompt references without executing against a model:
```json
{
  "label": "production",
  "variables": {
    "domain": "financial analysis",
    "input_text": "Q4 revenue grew 15%..."
  }
}
```
Response:
```json
{
  "compiledContent": "You are an expert at financial analysis. Summarize: Q4 revenue grew 15%...",
  "variables": ["domain", "input_text"],
  "versionNumber": 3,
  "versionId": "uuid"
}
```
Testing Prompts
POST /api/v1/projects/{projectId}/prompts/{promptId}/test
Run a compiled prompt against any AI model to validate output before deploying:
Request
```bash
curl -X POST https://backbone.manfred-kunze.dev/api/v1/projects/{projectId}/prompts/{promptId}/test \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk_your_api_key" \
  -d '{
    "model": "gpt-4o",
    "label": "staging",
    "variables": {
      "domain": "financial analysis",
      "input_text": "Q4 revenue grew 15% year-over-year"
    }
  }'
```
| Field | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model to test with (e.g., gpt-4o or openai/gpt-4o) |
| versionId | string | No | Specific version to test |
| label | string | No | Label to resolve (defaults to latest) |
| variables | object | No | Key-value map of variable values |
Response:
```json
{
  "success": true,
  "response": "Based on the financial data, Q4 showed strong growth...",
  "inputTokens": 120,
  "outputTokens": 85,
  "processingDurationMs": 1250,
  "modelUsed": "gpt-4o"
}
```
The response includes token counts and processing time so you can estimate costs before deploying to production.
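A back-of-envelope cost check from those token counts might look like this. The per-1K-token prices here are made-up placeholders; substitute your provider's current rates:

```python
# Hypothetical USD rates per 1,000 tokens; NOT real pricing.
PRICE_PER_1K = {"input": 0.0025, "output": 0.01}

def estimate_cost(input_tokens, output_tokens, prices=PRICE_PER_1K):
    """Estimate the cost of one call from its token counts."""
    return (input_tokens / 1000) * prices["input"] + \
           (output_tokens / 1000) * prices["output"]

# Token counts from the sample test response above.
cost = estimate_cost(120, 85)
```

Multiplying the per-call estimate by expected request volume gives a rough monthly figure before you move the production label.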
Use Cases
- System prompts: version-controlled system instructions for chatbots and agents
- Extraction templates: reusable prompts for structured data extraction
- Multi-language: maintain translated prompt variants with labels (en, de, fr)
- A/B testing: use labels to route traffic between prompt versions
- Audit compliance: immutable version history for regulated industries
- Prompt chaining: compose complex workflows with prompt references