# API Server Usage Guide
This guide explains how to use the aipartnerupflow API server for remote task execution and integration.
## Overview

The API server provides:

- A2A Protocol Server: standard agent-to-agent communication protocol
- HTTP API: RESTful endpoints for task management
- Real-time Streaming: progress updates via SSE/WebSocket
- Multi-user Support: user isolation and authentication
## Starting the API Server

### Basic Startup

```bash
# Start the server on the default port (8000)
aipartnerupflow serve

# Or use the dedicated server command
aipartnerupflow-server

# Or run the Python module directly
python -m aipartnerupflow.api.main
```
### Advanced Options

```bash
# Custom host and port
aipartnerupflow serve --host 0.0.0.0 --port 8080

# Enable auto-reload (development)
aipartnerupflow serve --reload

# Multiple workers (production)
aipartnerupflow serve --workers 4

# Custom configuration
aipartnerupflow serve --config config.yaml
```
## API Endpoints

### A2A Protocol Endpoints

The API server implements the A2A (Agent-to-Agent) Protocol standard.

#### Get Agent Card

Returns the agent's capabilities and available skills.
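A minimal request, assuming the server exposes the card at the A2A standard well-known path:

```bash
curl http://localhost:8000/.well-known/agent.json
```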
#### Execute Task Tree (A2A Protocol)

```bash
curl -X POST http://localhost:8000/ \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tasks.execute",
    "params": {
      "tasks": [
        {
          "id": "task1",
          "name": "my_task",
          "user_id": "user123",
          "inputs": {"key": "value"}
        }
      ]
    },
    "id": "request-123"
  }'
```
**Note:** The method `execute_task_tree` is still supported for backward compatibility, but `tasks.execute` is the recommended standard method name.
### Task Management via A2A Protocol

All task management operations are now fully supported through the A2A Protocol `/` route:

- Task Execution: `tasks.execute` (or `execute_task_tree` for backward compatibility)
- Task CRUD: `tasks.create`, `tasks.get`, `tasks.update`, `tasks.delete`
- Task Query: `tasks.detail`, `tasks.tree`, `tasks.list`, `tasks.children`
- Running Tasks: `tasks.running.list`, `tasks.running.status`, `tasks.running.count`
- Task Control: `tasks.cancel`, `tasks.copy`
All methods follow the same A2A Protocol JSON-RPC format and return A2A Protocol Task objects with real-time status updates.
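As a rough sketch, a successful response wraps the Task object in the standard JSON-RPC envelope (the exact Task fields follow the A2A spec and may vary by version):

```json
{
  "jsonrpc": "2.0",
  "id": "request-123",
  "result": {
    "id": "task1",
    "status": {"state": "completed"},
    "artifacts": []
  }
}
```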
### Task Management Endpoints (Legacy JSON-RPC)

#### Create Tasks

```bash
curl -X POST http://localhost:8000/tasks \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tasks.create",
    "params": {
      "tasks": [...]
    },
    "id": "request-123"
  }'
```
#### Get Task Status
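A hedged example using the `tasks.get` method listed above; the exact `params` shape is an assumption based on the other examples in this guide:

```bash
curl -X POST http://localhost:8000/tasks \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "method": "tasks.get", "params": {"task_id": "task-123"}, "id": 1}'
```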
#### List Tasks
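Similarly, a sketch using the documented `tasks.list` method (the `user_id` filter parameter is an assumption):

```bash
curl -X POST http://localhost:8000/tasks \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "method": "tasks.list", "params": {"user_id": "user123"}, "id": 1}'
```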
## Streaming Support

### Server-Sent Events (SSE)

Use `tasks.execute` with `use_streaming=true` to receive real-time updates via SSE:

```bash
curl -N -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "method": "tasks.execute", "params": {"task_id": "task-123", "use_streaming": true}, "id": 1}' \
  http://localhost:8000/tasks
```
The response will be a Server-Sent Events stream with real-time progress updates.
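If you are consuming the stream from Python, here is a minimal sketch using httpx; the event payload shape is an assumption, so adapt the parsing to what your server actually emits:

```python
import asyncio

import httpx


async def stream_updates():
    payload = {
        "jsonrpc": "2.0",
        "method": "tasks.execute",
        "params": {"task_id": "task-123", "use_streaming": True},
        "id": 1,
    }
    async with httpx.AsyncClient(timeout=None) as client:
        # Stream the response instead of buffering the whole body
        async with client.stream("POST", "http://localhost:8000/tasks", json=payload) as response:
            async for line in response.aiter_lines():
                # SSE frames arrive as "data: {...}" lines separated by blank lines
                if line.startswith("data:"):
                    print(line[len("data:"):].strip())


asyncio.run(stream_updates())
```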
### Copy Before Execution

Use `tasks.execute` with `copy_execution=true` to copy the task before execution, preserving the original task's execution history:

```bash
curl -X POST http://localhost:8000/tasks \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "method": "tasks.execute", "params": {"task_id": "task-123", "copy_execution": true}, "id": 1}'
```

To also copy child tasks with their dependencies:

```bash
curl -X POST http://localhost:8000/tasks \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "method": "tasks.execute", "params": {"task_id": "task-123", "copy_execution": true, "copy_children": true}, "id": 1}'
```
The response will include both `task_id` (the copied task) and `original_task_id` (the original task).
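For example, a response might look like this (illustrative only; the surrounding fields follow the JSON-RPC envelope):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "task_id": "task-456",
    "original_task_id": "task-123"
  }
}
```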
### WebSocket

Connect via WebSocket for bidirectional communication:

```javascript
const ws = new WebSocket('ws://localhost:8000/ws');

ws.onmessage = (event) => {
  const update = JSON.parse(event.data);
  console.log('Progress:', update.progress);
};
```
## Client Integration

### Python Client Example

```python
import asyncio

import httpx


async def main():
    # Execute a task via the API
    async with httpx.AsyncClient() as client:
        response = await client.post(
            "http://localhost:8000/",
            json={
                "jsonrpc": "2.0",
                "method": "tasks.execute",
                "params": {
                    "tasks": [
                        {
                            "id": "task1",
                            "name": "my_task",
                            "user_id": "user123",
                            "inputs": {"key": "value"},
                        }
                    ]
                },
                "id": "request-123",
            },
        )
        result = response.json()
        print(result)


asyncio.run(main())
```
### JavaScript Client Example

```javascript
// Execute a task via the API
const response = await fetch('http://localhost:8000/', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    jsonrpc: '2.0',
    method: 'tasks.execute',
    params: {
      tasks: [
        {
          id: 'task1',
          name: 'my_task',
          user_id: 'user123',
          inputs: { key: 'value' }
        }
      ]
    },
    id: 'request-123'
  })
});

const result = await response.json();
console.log(result);
```
## Authentication

### JWT Authentication (Optional)

If authentication is enabled:

```bash
# Get a token
curl -X POST http://localhost:8000/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username": "user", "password": "pass"}'

# Use the token
curl -X POST http://localhost:8000/ \
  -H "Authorization: Bearer {token}" \
  -H "Content-Type: application/json" \
  -d '{...}'
```
## LLM API Key Management

The API server supports dynamic LLM API key injection for CrewAI tasks. Keys can be provided via request headers or user configuration.

### Request Header (Demo/One-time Usage)

For demo or one-time usage, you can provide LLM API keys via the `X-LLM-API-KEY` header:
```bash
# Simple format (auto-detects provider from the model name)
curl -X POST http://localhost:8000/tasks \
  -H "Content-Type: application/json" \
  -H "X-LLM-API-KEY: sk-your-openai-key" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tasks.create",
    "params": {
      "tasks": [{
        "id": "task1",
        "name": "CrewAI Task",
        "schemas": {"method": "crewai_executor"},
        "params": {
          "works": {
            "agents": {
              "researcher": {
                "role": "Research Analyst",
                "llm": "openai/gpt-4"
              }
            }
          }
        }
      }]
    },
    "id": "request-123"
  }'

# Provider-specific format (explicit provider)
curl -X POST http://localhost:8000/tasks \
  -H "Content-Type: application/json" \
  -H "X-LLM-API-KEY: openai:sk-your-openai-key" \
  -d '{...}'

# Anthropic example
curl -X POST http://localhost:8000/tasks \
  -H "Content-Type: application/json" \
  -H "X-LLM-API-KEY: anthropic:sk-ant-your-key" \
  -d '{...}'
```
Header Format:

- Simple: `X-LLM-API-KEY: <api-key>` (provider auto-detected from the model name)
- Provider-specific: `X-LLM-API-KEY: <provider>:<api-key>` (e.g., `openai:sk-xxx`, `anthropic:sk-ant-xxx`)

Supported Providers:

- `openai` - OpenAI (GPT models)
- `anthropic` - Anthropic (Claude models)
- `google` / `gemini` - Google (Gemini models)
- `mistral` - Mistral AI
- `groq` - Groq
- And more (see the LLM Key Injector documentation)

Priority:

1. Request header (`X-LLM-API-KEY`) - highest priority
2. User config (if the llm-key-config extension is installed)
3. Environment variables (automatically read by CrewAI/LiteLLM)
### User Configuration (Multi-user Scenarios)

For production multi-user scenarios, install the llm-key-config extension, then configure keys programmatically:

```python
from aipartnerupflow.extensions.llm_key_config import LLMKeyConfigManager

config_manager = LLMKeyConfigManager()

# Set provider-specific keys for a user
config_manager.set_key(user_id="user123", api_key="sk-xxx", provider="openai")
config_manager.set_key(user_id="user123", api_key="sk-ant-xxx", provider="anthropic")
```
**Note:** Keys are stored in memory (not in the database). For production multi-server scenarios, consider using Redis.
### Environment Variables (Fallback)

If no header or user config is provided, CrewAI/LiteLLM will automatically use provider-specific environment variables:
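For example (standard LiteLLM variable names; which ones you need depends on the providers in use):

```bash
# Standard provider environment variables read by LiteLLM
export OPENAI_API_KEY=sk-your-openai-key
export ANTHROPIC_API_KEY=sk-ant-your-key
export GEMINI_API_KEY=your-gemini-key
export MISTRAL_API_KEY=your-mistral-key
export GROQ_API_KEY=your-groq-key
```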
## Examples Auto-initialization

When the API server starts, it automatically initializes example tasks if the database is empty. This helps beginners get started quickly.

To manually initialize examples:

```bash
# Via the CLI
aipartnerupflow examples init

# Or force re-initialization
aipartnerupflow examples init --force
```
## Configuration

### Environment Variables

```bash
# Server configuration
export AIPARTNERUPFLOW_HOST=0.0.0.0
export AIPARTNERUPFLOW_PORT=8000

# Database configuration
export AIPARTNERUPFLOW_DATABASE_URL=postgresql://user:pass@localhost/db

# Authentication
export AIPARTNERUPFLOW_JWT_SECRET=your-secret-key
```
### Configuration File

Create `config.yaml`:

```yaml
server:
  host: 0.0.0.0
  port: 8000
  workers: 4

database:
  url: postgresql://user:pass@localhost/db

auth:
  enabled: true
  jwt_secret: your-secret-key
```
## Production Deployment

### Using Uvicorn Directly
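A typical invocation, assuming the FastAPI application object is exposed as `app` in `aipartnerupflow.api.main` (the module shown in Basic Startup; verify the attribute name for your version):

```bash
uvicorn aipartnerupflow.api.main:app --host 0.0.0.0 --port 8000 --workers 4
```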
### Using Docker

```dockerfile
FROM python:3.12

WORKDIR /app
COPY . .

RUN pip install aipartnerupflow[a2a]

CMD ["aipartnerupflow-server", "--host", "0.0.0.0", "--port", "8000"]
```
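Build and run the image (the image tag is just an example):

```bash
docker build -t aipartnerupflow .
docker run -p 8000:8000 aipartnerupflow
```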
### Using Systemd

Create `/etc/systemd/system/aipartnerupflow.service`:

```ini
[Unit]
Description=aipartnerupflow API Server
After=network.target

[Service]
Type=simple
User=apflow
WorkingDirectory=/opt/aipartnerupflow
ExecStart=/usr/local/bin/aipartnerupflow-server --host 0.0.0.0 --port 8000
Restart=always

[Install]
WantedBy=multi-user.target
```
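Then reload systemd and enable the service:

```bash
sudo systemctl daemon-reload
sudo systemctl enable --now aipartnerupflow
```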
## Monitoring

### Health Check
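Assuming a conventional health endpoint (the path is an assumption; adjust to your deployment):

```bash
curl http://localhost:8000/health
```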
### Metrics
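If your deployment exposes a Prometheus-style metrics endpoint (the path below is an assumption, not confirmed by this guide):

```bash
curl http://localhost:8000/metrics
```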
## Troubleshooting

### Server Won't Start

- Check whether the port is already in use
- Verify the database connection
- Check the logs for errors

### Tasks Not Executing

- Verify the executor is registered
- Check that the task name matches the executor ID
- Review the server logs

### Connection Issues

- Verify firewall settings
- Check network connectivity
- Ensure the server is accessible
## Next Steps

- See the HTTP API Reference for complete endpoint documentation
- Check the Examples for integration examples