The terminal-native orchestration framework. Use the CLI or TUI to route tasks, explore codebases, execute file edits, and generate training data.
A local-first orchestration layer for your AI tools.
Multiple ways to run your AI agents.
The simplest way to interact. Send a prompt to any agent and get a response. Auto-routing selects the best agent for your task.
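To make the idea concrete, here is a minimal sketch of what keyword-based auto-routing could look like. The agent names, keyword sets, and scoring heuristic are illustrative assumptions, not PuzldAI's actual routing logic:

```python
# Illustrative sketch of auto-routing; agent names and keywords
# are assumptions for the example, not PuzldAI internals.
AGENT_KEYWORDS = {
    "coder": {"refactor", "bug", "function", "implement"},
    "researcher": {"compare", "summarize", "explain", "research"},
    "writer": {"draft", "rewrite", "blog", "email"},
}

def route(prompt: str, default: str = "coder") -> str:
    """Pick the agent whose keyword set best overlaps the prompt."""
    words = set(prompt.lower().split())
    scores = {name: len(words & kws) for name, kws in AGENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default

print(route("summarize and compare these two papers"))  # researcher
```

A real router would likely score against embeddings or model metadata rather than raw keywords, but the shape is the same: prompt in, agent name out.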
Give LLMs safe access to your filesystem. Agents can explore directory structures, read files, and propose edits. You maintain full control with permission prompts for every tool execution.
Run the same prompt across multiple models simultaneously. Compare reasoning, code quality, and speed side-by-side in your terminal.
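The fan-out pattern behind this can be sketched with a thread pool; the agent callables below are stand-ins for real model CLIs, not PuzldAI's API:

```python
# Sketch of fanning one prompt out to several agents concurrently.
# The agent callables are placeholders for real model backends.
from concurrent.futures import ThreadPoolExecutor

def fan_out(prompt, agents):
    """Run every agent on the same prompt in parallel; return name -> reply."""
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in agents.items()}
        return {name: f.result() for name, f in futures.items()}

agents = {
    "model-a": lambda p: f"A says: {p.upper()}",
    "model-b": lambda p: f"B says: {p[::-1]}",
}
print(fan_out("hi", agents))
```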
Describe the goal; the AI builds the plan. The planner agent analyzes your task and generates a multi-step execution plan with the right agents for each step.
Chain agents together for complex workflows. Pipe the output of one agent as context for the next. Perfect for research-then-write tasks.
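The chaining described above reduces to a simple fold, where each step receives the previous step's output as context. This is a conceptual sketch, not the actual pipe implementation:

```python
# Sketch of chaining agents: each step gets the prior output as context.
def pipe(prompt, steps):
    """steps: list of (name, fn); fn takes (prompt, context), returns text."""
    context = ""
    for name, fn in steps:
        context = fn(prompt, context)
    return context

steps = [
    ("research", lambda p, ctx: f"facts about {p}"),
    ("write", lambda p, ctx: f"article using: {ctx}"),
]
print(pipe("rust", steps))  # article using: facts about rust
```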
Orchestrate multi-agent interactions. Have one agent review another's code, or set up a debate to find the best architectural decision.
The infrastructure powering your agents.
Persistent session history and retrieval-augmented generation. Context flows seamlessly between execution sessions.
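Persistence like this can be as simple as an append-only history file that later sessions reload. The storage path and schema below are assumptions for illustration:

```python
# Sketch of persisting session history so later runs can retrieve it.
# Path and record schema are illustrative assumptions.
import json
import os
import tempfile

class SessionStore:
    def __init__(self, path):
        self.path = path

    def load(self):
        if not os.path.exists(self.path):
            return []
        with open(self.path) as f:
            return json.load(f)

    def append(self, role, text):
        history = self.load()
        history.append({"role": role, "text": text})
        with open(self.path, "w") as f:
            json.dump(history, f)

path = os.path.join(tempfile.mkdtemp(), "session.json")
store = SessionStore(path)
store.append("user", "explain borrow checking")
store.append("agent", "it tracks ownership at compile time")
print(len(store.load()))  # 2
```

A retrieval-augmented setup would additionally embed and rank these records before injecting them into a new session's context.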
Parse your codebase into searchable semantic structures. Search across functions, classes, and dependencies.
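For Python sources, the standard `ast` module is enough to sketch this kind of symbol index; a real indexer would cover more languages and track dependencies too:

```python
# Sketch of indexing source code into searchable symbols via the ast module.
import ast

SOURCE = """
def parse_config(path): ...
class Router:
    def route(self, prompt): ...
"""

def index_symbols(source):
    """Return (kind, name) pairs for every function and class definition."""
    symbols = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            symbols.append(("function", node.name))
        elif isinstance(node, ast.ClassDef):
            symbols.append(("class", node.name))
    return symbols

print(index_symbols(SOURCE))
```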
Capture every interaction for training data. Export preference pairs and trajectories for model fine-tuning.
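Preference pairs are commonly exported as JSONL with `prompt`/`chosen`/`rejected` fields for DPO-style fine-tuning; the exact schema here is an assumption, not PuzldAI's documented format:

```python
# Sketch of exporting captured interactions as JSONL preference pairs.
# Field names follow common DPO-style conventions, assumed for illustration.
import io
import json

def export_pairs(records, out):
    """records: dicts with prompt, chosen, rejected. One JSON object per line."""
    for r in records:
        out.write(json.dumps({
            "prompt": r["prompt"],
            "chosen": r["chosen"],
            "rejected": r["rejected"],
        }) + "\n")

buf = io.StringIO()
export_pairs(
    [{"prompt": "fix the bug", "chosen": "patch A", "rejected": "patch B"}],
    buf,
)
print(buf.getvalue().strip())
```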
Safe system access with permission gates. Give agents the tools they need to read, write, and execute.
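The gate pattern is simple: every tool call passes through an approval check before it executes. In this sketch the `approve` callback stands in for the interactive terminal prompt, and the policy shown is purely illustrative:

```python
# Sketch of a permission gate: each tool call is approved before running.
def gated_call(tool_name, fn, args, approve):
    """Run fn(*args) only if approve(tool_name, args) returns True."""
    if not approve(tool_name, args):
        return ("denied", None)
    return ("ok", fn(*args))

# Illustrative policy: auto-approve reads, deny everything else.
policy = lambda name, args: name == "read_file"

print(gated_call("read_file", lambda p: f"<contents of {p}>", ("notes.txt",), policy))
print(gated_call("delete_file", lambda p: None, ("notes.txt",), policy))
```

Note that the denied call never invokes its function at all, which is the property that makes the gate safe.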
Intelligent context window handling. Summarize history and translate context between different models.
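One common approach is to collapse older turns into a summary entry once the history outgrows the window. This sketch budgets in turns rather than tokens to stay simple, and the summarizer is a placeholder:

```python
# Sketch of context compaction: summarize old turns, keep recent ones.
# Budgeting by turn count (not tokens) is a simplification for the example.
def compact(history, max_turns):
    """Keep the last max_turns messages; fold the rest into one summary."""
    if len(history) <= max_turns:
        return history
    dropped = history[:-max_turns]
    summary = f"[summary of {len(dropped)} earlier messages]"
    return [{"role": "system", "text": summary}] + history[-max_turns:]

history = [{"role": "user", "text": f"msg {i}"} for i in range(5)]
print(compact(history, 2))
```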
Full support for the Model Context Protocol. Expose Puzld agents as MCP servers to other tools.
PuzldAI acts as a local orchestrator and does not handle your API keys directly. It wraps the official CLI tools you already have installed and authenticated.
~/.puzldai/config.json
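As an illustration only, a config at this path might look like the following; every field name here is an assumption, so consult the project's own documentation for the real schema:

```json
{
  "default_agent": "claude",
  "agents": {
    "claude": { "command": "claude" },
    "gemini": { "command": "gemini" }
  }
}
```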