# Quickstart
This guide walks through scaffolding a project, running a workflow, and reading the output. It takes about two minutes.
## Prerequisites
Kerf calls Claude Code under the hood, so make sure it is installed and authenticated:
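A quick way to check, assuming the `claude` binary is on your PATH:

```shell
claude --version
```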
If that fails, install Claude Code first, then authenticate:
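Claude Code ships as an npm package; the standard global install (Node.js required) looks like this, after which running `claude` once walks you through login:

```shell
npm install -g @anthropic-ai/claude-code
claude   # first run prompts for authentication
```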
If `kerf run` later fails with `Claude CLI not found on PATH`, Claude Code is either not installed or not visible to your shell.
## Create a project
This creates the standard project structure with example workflows. See Project Structure for details.
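The exact scaffolding command isn't shown here; assuming kerf follows the common `init` convention (check `kerf --help` for the real subcommand name), it would look like:

```shell
# hypothetical subcommand name -- verify with `kerf --help`
kerf init my-project
cd my-project
```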
## Run the example workflows
The scaffolded project includes `summarize`, `classify`, `extract`, `clean`, and `digest` workflows:
```shell
kerf run summarize "The quarterly earnings report showed revenue growth of 15%, driven by enterprise expansion and strong retention."
```

```json
{
  "summary": "Quarterly revenue grew 15% YoY, driven by enterprise expansion and strong retention."
}
```
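The `extract` workflow pulls structured contact fields out of free text. The input sentence below is illustrative, reconstructed to match the output that follows; any text mentioning a name, email, company, and role will do:

```shell
kerf run extract "Hi, I'm Sarah Chen, VP of Engineering at Acme Corp. You can reach me at sarah@acme.co."
```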
```json
{
  "name": "Sarah Chen",
  "email": "sarah@acme.co",
  "company": "Acme Corp",
  "role": "VP of Engineering"
}
```
The `digest` workflow chains multiple tools before calling the LLM: it strips HTML, normalizes whitespace, and truncates long input, then summarizes:
```shell
kerf run digest "<div><h2>Mantis Shrimp</h2><p>Mantis shrimp are carnivorous marine crustaceans of the order <em>Stomatopoda</em>, branching off from other crustaceans around <strong>400 million years ago</strong>. Their raptorial claws can accelerate at a rate comparable to a .22 caliber bullet, delivering around 1,500 newtons of force per strike.</p><p>They are thought to have the most complex eyes in the animal kingdom, perceiving wavelengths from deep ultraviolet to far-red with up to 16 distinct photoreceptors.</p></div>"
```

```json
{
  "summary": "Mantis shrimp are 400-million-year-old marine crustaceans with bullet-fast claws delivering 1,500 newtons of force and the most complex eyes in the animal kingdom, with 16 photoreceptors spanning ultraviolet to far-red."
}
```
You can also pipe input from stdin:
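Assuming kerf reads stdin when no input argument is given, piping looks like:

```shell
cat article.html | kerf run digest
```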
## Check the logs
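Assuming each log is a standalone JSON file in `logs/`, you can inspect the most recent run like so:

```shell
# print the newest log file (filenames are UUIDs)
cat "logs/$(ls -t logs | head -n 1)"
```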
```json
{
  "workflow": "summarize",
  "timestamp": "2025-01-15T14:32:01+00:00",
  "input_preview": "The quarterly earnings report showed...",
  "task_type": "summarization",
  "tool_chain": ["normalize_text"],
  "fallback_policy": "retry",
  "fallback_triggered": false,
  "preprocessed": "the quarterly earnings report showed...",
  "result": { "summary": "..." }
}
```
Every execution is logged to a UUID-named file in `logs/`. The user-facing output is just the `result`; the log captures the full pipeline breakdown for debugging and pattern extraction.
## What's next
- Writing Workflows: create your own pipelines
- Writing Tools: add custom preprocessing steps
- Reading Logs: audit execution results and extract patterns
- Using the API Server: run workflows over HTTP