# CLI Commands Reference
Complete reference of all CLI commands for executing and managing tasks.
## Task Execution

### apflow run

Execute a task or batch of tasks.

Options:

- `--tasks <json>` - Task definition in JSON format (required)
- `--inputs <json>` - Task inputs as JSON (optional)
- `--tags <tag1,tag2>` - Task tags (optional)
- `--priority <priority>` - Task priority: `low`, `normal`, `high` (default: `normal`)
- `--user-id <id>` - User ID (optional)
- `--timeout <seconds>` - Execution timeout (default: 300)
- `--retry-count <count>` - Retry failed tasks (default: 0)
- `--retry-delay <seconds>` - Delay between retries (default: 1)
- `--parallel-count <count>` - Run multiple tasks in parallel (default: 1)
- `--demo-mode` - Run in demo mode (for testing)
Examples:
Basic execution:

```shell
apflow run batch-001 --tasks '[
  {
    "id": "task1",
    "name": "Check CPU",
    "schemas": {"method": "system_info_executor"},
    "inputs": {"resource": "cpu"}
  }
]'
```
With inputs and tags:

```shell
apflow run batch-002 --tasks '[{...}]' \
  --inputs '{"config": "value"}' \
  --tags production,monitoring \
  --priority high
```
Parallel execution:
With retry:
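A sketch of these two invocations, built from the `--parallel-count`, `--retry-count`, and `--retry-delay` flags documented above (batch IDs and the elided task JSON are placeholders):

```shell
# Run up to 4 tasks concurrently
apflow run batch-003 --tasks '[{...}]' --parallel-count 4

# Retry failed tasks up to 3 times, waiting 5 seconds between attempts
apflow run batch-004 --tasks '[{...}]' --retry-count 3 --retry-delay 5
```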
## Task Querying

### apflow tasks list

List all tasks in the database.

Options:

- `--user-id <id>` - Filter by user ID
- `--status <status>` - Filter by status: `pending`, `running`, `completed`, `failed`, `cancelled`
- `--batch-id <id>` - Filter by batch ID
- `--limit <count>` - Limit results (default: 100)
- `--offset <count>` - Skip N results (default: 0)
- `--sort <field>` - Sort by field: `created_at`, `updated_at`, `status`
- `--reverse` - Reverse sort order
Examples:
List all tasks:
List by user:
List failed tasks:
List with pagination:
List and sort by date:
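Sketches of the listing examples above, assembled from the documented flags (user IDs and counts are placeholders):

```shell
# List all tasks
apflow tasks list

# List by user
apflow tasks list --user-id alice

# List failed tasks
apflow tasks list --status failed

# List with pagination (page 3 of 20 results per page)
apflow tasks list --limit 20 --offset 40

# List and sort by date, newest first
apflow tasks list --sort created_at --reverse
```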
### apflow tasks status

Get status of a specific task.

Options:

- `--include-details` - Include full task details
- `--watch` - Watch for changes (exit with Ctrl+C)
- `--watch-interval <seconds>` - Polling interval when watching (default: 1)
Examples:
Check status:
With full details:
Watch for completion:
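Sketches of these invocations, assuming the task ID is passed as a positional argument (`task-123` is a placeholder):

```shell
# Check status
apflow tasks status task-123

# With full details
apflow tasks status task-123 --include-details

# Watch for completion (exit with Ctrl+C)
apflow tasks status task-123 --watch
```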
### apflow tasks watch

Monitor task execution in real-time:

Options:

- `--task-id <id>` - Watch specific task
- `--all` - Watch all running tasks
- `--batch-id <id>` - Watch tasks in batch
- `--user-id <id>` - Watch user's tasks
- `--interval <seconds>` - Polling interval (default: 1)
Examples:
Watch single task:
Watch all running tasks:
Watch batch:
Watch with slower polling:
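Sketches of these invocations using the documented flags (task and batch IDs are placeholders):

```shell
# Watch a single task
apflow tasks watch --task-id task-123

# Watch all running tasks
apflow tasks watch --all

# Watch a batch
apflow tasks watch --batch-id batch-001

# Watch with slower polling (every 5 seconds)
apflow tasks watch --all --interval 5
```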
### apflow tasks history

View task execution history:

Options:

- `--user-id <id>` - Filter by user
- `--days <n>` - Show last N days (default: 7)
- `--limit <count>` - Limit results (default: 100)
Examples:
View task history:
View recent history:
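Sketches of these invocations using the documented flags (the day and limit values are placeholders):

```shell
# View task history (defaults: last 7 days, up to 100 entries)
apflow tasks history

# View recent history: last 3 days, at most 20 entries
apflow tasks history --days 3 --limit 20
```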
## Task Cancellation

### apflow tasks cancel

Cancel a running task:

Options:

- `--force` - Force cancellation even if stuck
- `--reason <text>` - Cancellation reason
- `--wait` - Wait for cancellation to complete (default: 5 seconds)
Examples:
Cancel task:
Force cancel:
Cancel with reason:
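Sketches of these invocations, assuming the task ID is positional (`task-123` and the reason text are placeholders):

```shell
# Cancel a task
apflow tasks cancel task-123

# Force cancel a stuck task
apflow tasks cancel task-123 --force

# Cancel with a reason
apflow tasks cancel task-123 --reason "superseded by batch-002"
```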
## Task Management

### apflow tasks create

Create a task without executing:

Options:

- `--name <name>` - Task name (required)
- `--method <method>` - Executor method (required)
- `--inputs <json>` - Task inputs as JSON
- `--tags <tags>` - Task tags
- `--description <text>` - Task description
- `--batch-id <id>` - Batch ID
Examples:
Create task:

```shell
apflow tasks create --name "CPU Check" --method system_info_executor \
  --inputs '{"resource": "cpu"}'
```
With batch and tags:

```shell
apflow tasks create --name "Memory Check" --method system_info_executor \
  --batch-id batch-001 --tags monitoring,system
```
### apflow tasks update

Update task configuration:

Options:

- `--name <name>` - Update task name
- `--inputs <json>` - Update task inputs
- `--tags <tags>` - Update task tags
- `--status <status>` - Update task status
- `--description <text>` - Update description
- `--validate` - Validate changes before applying
Examples:
Update task name:
Update inputs:
Update with validation:
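Sketches of these invocations, assuming the task ID is positional (`task-123` and the values are placeholders):

```shell
# Update task name
apflow tasks update task-123 --name "CPU Check (hourly)"

# Update inputs
apflow tasks update task-123 --inputs '{"resource": "memory"}'

# Update with validation before applying
apflow tasks update task-123 --inputs '{"resource": "memory"}' --validate
```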
### apflow tasks delete

Delete a task:

Options:

- `--force` - Delete without confirmation
- `--reason <text>` - Deletion reason
- `--keep-logs` - Keep execution logs after deletion
Examples:
Delete task:
Force delete:
Delete and keep logs:
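Sketches of these invocations, assuming the task ID is positional (`task-123` is a placeholder):

```shell
# Delete a task (prompts for confirmation)
apflow tasks delete task-123

# Force delete without confirmation
apflow tasks delete task-123 --force

# Delete but keep execution logs
apflow tasks delete task-123 --keep-logs
```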
### apflow tasks copy

Copy a task:

Options:

- `--task-id <id>` - New task ID (auto-generated if not provided)
- `--batch-id <id>` - Target batch
- `--increment-name` - Append " (copy)" to name
- `--clear-status` - Start with pending status
Examples:
Copy task:
Copy to new batch:
Copy with incremented name:
Copy and reset status:
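Sketches of these invocations, assuming the source task ID is positional (`task-123` and `batch-002` are placeholders):

```shell
# Copy a task (new ID auto-generated)
apflow tasks copy task-123

# Copy to a new batch
apflow tasks copy task-123 --batch-id batch-002

# Copy with " (copy)" appended to the name
apflow tasks copy task-123 --increment-name

# Copy and reset status to pending
apflow tasks copy task-123 --clear-status
```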
## Flow Management

### apflow flow run

Execute a flow (batch of tasks):

Options:

- `--tasks <json>` - Task definitions (required)
- `--inputs <json>` - Flow inputs
- `--parallel-count <count>` - Tasks to run in parallel
- `--skip-failed` - Continue even if task fails
- `--timeout <seconds>` - Flow timeout
Examples:
Run flow:
Run with parallelism:
Skip failed tasks:
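Sketches of these invocations built from the documented flags, assuming a positional flow ID (`flow-001` and the elided task JSON are placeholders):

```shell
# Run a flow
apflow flow run flow-001 --tasks '[{...}]'

# Run with parallelism
apflow flow run flow-001 --tasks '[{...}]' --parallel-count 4

# Continue even if a task fails
apflow flow run flow-001 --tasks '[{...}]' --skip-failed
```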
### apflow flow status

Get flow execution status:
Examples:
Check flow status:
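A sketch of this invocation, assuming a positional flow ID (`flow-001` is a placeholder):

```shell
# Check flow status
apflow flow status flow-001
```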
### apflow flow cancel

Cancel a flow:

Options:

- `--force` - Force cancellation
- `--cancel-tasks` - Cancel remaining tasks in flow
Examples:
Cancel flow:
Cancel with tasks:
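Sketches of these invocations, assuming a positional flow ID (`flow-001` is a placeholder):

```shell
# Cancel a flow
apflow flow cancel flow-001

# Force cancel and also cancel the remaining tasks in the flow
apflow flow cancel flow-001 --force --cancel-tasks
```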
## Query and Filtering

### Common Filter Patterns

Filter by status:

```shell
apflow tasks list --status running
apflow tasks list --status completed
apflow tasks list --status failed
apflow tasks list --status cancelled
```
Filter by user:
Filter by batch:
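Sketches of the user and batch filters above, using the documented `--user-id` and `--batch-id` flags (the IDs are placeholders):

```shell
# Filter by user
apflow tasks list --user-id alice

# Filter by batch
apflow tasks list --batch-id batch-001
```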
Combine filters:

```shell
apflow tasks list --batch-id batch-001 --status failed
apflow tasks list --user-id alice --status running
```
## Executor Methods

Common executor methods available:

### system_info_executor

Get system information:

```shell
apflow run batch --tasks '[{
  "id": "t1",
  "name": "CPU Info",
  "schemas": {"method": "system_info_executor"},
  "inputs": {"resource": "cpu"}
}]'
```

Inputs:

- `resource`: `cpu`, `memory`, `disk`, `network`
### http_executor

Make HTTP requests:

```shell
apflow run batch --tasks '[{
  "id": "t1",
  "name": "API Call",
  "schemas": {"method": "http_executor"},
  "inputs": {
    "url": "https://api.example.com/data",
    "method": "GET"
  }
}]'
```

Inputs:

- `url`: Target URL
- `method`: GET, POST, PUT, DELETE
- `headers`: HTTP headers (optional)
- `body`: Request body (optional)
### command_executor

Execute shell commands:

```shell
apflow run batch --tasks '[{
  "id": "t1",
  "name": "Run Script",
  "schemas": {"method": "command_executor"},
  "inputs": {
    "command": "ls -la",
    "timeout": 30
  }
}]'
```

Inputs:

- `command`: Shell command to execute
- `timeout`: Timeout in seconds
### custom_executor

Custom business logic:

Implement custom executors by extending the executor framework. See the Extending Guide for details.
## Task Input Format

All task inputs are JSON:

```json
{
  "id": "unique-task-id",
  "name": "Human Readable Name",
  "schemas": {
    "method": "executor_method_name"
  },
  "inputs": {
    "param1": "value1",
    "param2": "value2"
  },
  "tags": ["tag1", "tag2"],
  "priority": "high"
}
```

Fields:

- `id`: Unique task identifier
- `name`: Human-readable task name
- `schemas.method`: Executor method to use
- `inputs`: Method-specific parameters (object)
- `tags`: Optional tags for organization
- `priority`: `low`, `normal`, `high` (optional)
## Output Format

Task output depends on the executor method. Examples:
system_info_executor output:
http_executor output:
command_executor output:
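The output payloads are not specified in this reference; as a purely illustrative sketch (every field name here is an assumption, not taken from the source), a `command_executor` result might be shaped like:

```json
{
  "exit_code": 0,
  "stdout": "...",
  "stderr": ""
}
```

A `system_info_executor` result would analogously carry the requested resource metrics, and an `http_executor` result the response status code, headers, and body.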
## Command Aliases

Shorter versions of common commands:

- `apflow tasks list` → `apflow tasks ls`
- `apflow tasks status` → `apflow tasks st`
- `apflow tasks cancel` → `apflow tasks c`
- `apflow tasks watch` → `apflow tasks w`
- `apflow flow run` → `apflow f run`
## Error Handling

### Common Errors

Error: "Task not found"
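A reasonable first check (a suggestion, not from the source) is to confirm the task ID actually exists:

```shell
# List known tasks and verify the ID you passed appears among them
apflow tasks list --limit 100
```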
Error: "Database connection error"

```shell
# Check database configuration
echo $DATABASE_URL
# Or check DuckDB file
ls ~/.aipartnerup/data/apflow.duckdb
```
Error: "Invalid task format"

```shell
# Validate JSON
echo '[{"id":"t1","name":"Task","schemas":{"method":"system_info_executor"},"inputs":{}}]' | python -m json.tool
```
Error: "Executor method not found"
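A plausible check (a suggestion, not from the source): verify that `schemas.method` in your task JSON exactly matches one of the built-in executors documented above:

```shell
# Built-in executors documented above:
#   system_info_executor, http_executor, command_executor
# Inspect schemas.method in your task definition for typos:
echo '[{"id":"t1","name":"Check","schemas":{"method":"system_info_executor"},"inputs":{"resource":"cpu"}}]' \
  | python -m json.tool
```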
### Debugging

#### Debug mode

```shell
# Enable verbose logging (preferred with APFLOW_ prefix)
export APFLOW_LOG_LEVEL=DEBUG
apflow run batch --tasks '[...]'

# Or use generic LOG_LEVEL (fallback)
export LOG_LEVEL=DEBUG
apflow run batch --tasks '[...]'
```
#### Check task details

#### Monitor execution
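Sketches of these two debugging steps, built from the commands documented above (`task-123` is a placeholder):

```shell
# Check task details
apflow tasks status task-123 --include-details

# Monitor execution in real time
apflow tasks watch --task-id task-123
```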
## Summary

- ✅ Execute tasks: `apflow run` with JSON task definitions
- ✅ Query tasks: `apflow tasks list`, `status`, `watch`
- ✅ Manage tasks: `create`, `update`, `delete`, `copy`
- ✅ Cancel tasks: `apflow tasks cancel` with force option
- ✅ Monitor flows: `apflow flow` commands
- ✅ Debug issues: Enable debug mode and check logs
All commands support JSON input/output for integration with other tools.