/v1/messages/batches endpoint, zero /v1/batches endpoint, zero MessageBatch / BatchedRequest / BatchedResult / BatchProcessingStatus / BatchRequestCounts typed taxonomy across rust/crates/api/src/types.rs (zero hits for batches, MessageBatch, BatchedRequest, custom_id, processing_status), zero submit_batch / retrieve_batch / retrieve_batch_results / cancel_batch / list_batches methods on the Provider trait at rust/crates/api/src/providers/mod.rs:17-30 (only send_message and stream_message exist, both per-request synchronous), zero batch dispatch on ProviderClient enum at rust/crates/api/src/client.rs:8-14 (three variants Anthropic/Xai/OpenAi all closed under sync send_message + stream_message), zero BatchSubmittedEvent / BatchInProgressEvent / BatchEndedEvent typed events on the runtime telemetry sink, zero claw batch / claw batches CLI subcommand surface at rust/crates/rusty-claude-cli/src/main.rs, zero /batch slash command in SlashCommandSpec table at rust/crates/commands/src/lib.rs, zero pending_batches field in claw status --json output, zero is_batch_request flag on pricing_for_model cost estimator (so even if Batch API were wired, cost would over-charge by 2x), zero batch_input_tokens_per_million_usd / batch_output_tokens_per_million_usd fields in the Pricing struct — the API has been GA on Anthropic since 2024-10-08 (18 months ago at filing time, with explicit 'anthropic-beta: message-batches-2024-09-24' opt-in header documented) and on OpenAI since 2024-04-15 (24 months ago at filing time), uniformly offers 50% input-and-output token discount, accepts up to 100,000 requests per batch with 256MB total payload (Anthropic) or unlimited via Files API (OpenAI), 24-hour completion SLO; combining with #219's also-missing prompt-caching opt-in (90% input savings) gives a compounded ~95% input-cost asymmetry on bulk ingest scenarios — the single largest cost-reduction lever in the entire API parity audit, missing at the endpoint-family level rather than the 
per-field level (Jobdori cycle #373 / extends #168c emission-routing audit / sibling-shape cluster grows to twenty: #201/#202/#203/#206/#207/#208/#209/#210/#211/#212/#213/#214/#215/#216/#217/#218/#219/#220/#221 / wire-format-parity cluster grows to eleven: #211+#212+#213+#214+#215+#216+#217+#218+#219+#220+#221 / capability-parity cluster grows to three: #218+#220+#221 / cost-parity cluster grows to eight: #204+#207+#209+#210+#213+#216+#219+#221 — #221 compounds with #219 to ~95% bulk-ingest cost asymmetry, the largest cost gap in the cluster / seven-layer-endpoint-family-absence shape (endpoint-URL + data-model-taxonomy + Provider-trait-method + ProviderClient-enum-dispatch + Worker-registry-status-enum + CLI-subcommand-surface + pricing-tier-flag) is the largest single capability absence catalogued, exceeding #220's five-layer-feature-absence / endpoint-family-level absence shape is novel — applies to follow-on candidates 'Files API typed taxonomy is absent' (the OpenAI batch path's prerequisite endpoint, also absent), 'Embeddings API typed taxonomy is absent' (/v1/embeddings cross-cutting), 'Models list endpoint typed taxonomy is absent' (/v1/models / Anthropic Models API) / external validation: Anthropic Message Batches API reference at https://docs.anthropic.com/en/api/messages-batches documenting five operations on /v1/messages/batches + GA 2024-10-08 + 50% discount + 100k-requests-per-batch + 256MB-total-payload + 24-hour-SLO + custom_id correlation field, Anthropic launch announcement at anthropic.com/news/message-batches-api documenting '50% off both input and output tokens' positioning, Anthropic Pricing page documenting Batch API column with 50% across Sonnet 3.5/4/4.5/4.6 + Opus 3/4/4.6 + Haiku 3.5, Anthropic Python SDK client.messages.batches.create(requests=[...]) first-class typed surface, Anthropic TypeScript SDK parallel surface, AWS Bedrock InvokeModelBatch / batch-inference docs (Bedrock-anthropic-relay path), OpenAI Batch API reference at 
platform.openai.com/docs/api-reference/batch documenting GA 2024-04-15 + 50% discount + JSONL-via-Files-API + completion_window:'24h', OpenAI launch announcement at openai.com/index/openai-introduces-batch-api documenting 'process batches asynchronously and receive results within 24 hours at a 50% discount', DeepSeek/Moonshot/Alibaba-DashScope/xAI batch-inference parallel surfaces, OpenRouter batch passthrough, simonw/llm --batch flag, Vercel AI SDK generateBatch + provider-specific batch passthrough, LangChain Runnable.batch() + Runnable.abatch() first-class Python+TypeScript parity, LangSmith batch-aware tracing, llmindset.co.uk independent cost-calculus validation, Medium 'process 10,000 queries without breaking the bank' tutorial, Steve Kinney's Anthropic-Batch-with-Temporal workflow-orchestration article, ai.moda Anthropic-Batch+Caching 95%-compounded-savings analysis (proves #219+#221 together close the largest cost gap), VentureBeat industry-press coverage, Reddit r/ClaudeAI launch thread, zed-industries/zed#19945 (peer ecosystem with same gap), RooCodeInc/Roo-Code#8667 (peer ecosystem with same gap), n8n Anthropic-batch-processing workflow, startground.com batch-deals tracker, silicondata.com 2026-pricing per-model batch breakdown, Hacker News batch-mechanics discussions, OpenTelemetry GenAI semconv gen_ai.request.batch_id + gen_ai.batch.processing_status + gen_ai.batch.request_counts documented attributes, IANA application/x-ndjson + application/jsonl MIME-type registrations / claw-code is the sole client/agent/CLI in the surveyed coding-agent ecosystem with zero batch-dispatch capability despite the API being GA on both major providers for 18+ months — parity floor against every other CLI/SDK/coding-agent in 2024-2025, the largest single cost-reduction lever in the entire emission-routing audit, and the largest endpoint-family-level capability gap catalogued so far)
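The compounded-savings claim in the audit above is simple arithmetic: a 50% batch discount (pay half) multiplied by cached-input pricing at roughly 10% of base (pay a tenth) leaves about 5% of the original input cost, i.e. ~95% savings on cache-hit bulk traffic. A quick sketch of that multiplication; the 0.5 and 0.1 multipliers come from the discount figures cited above, and real per-model pricing varies:

```sh
# ~95% compounded input-cost reduction on bulk ingest:
# batch discount (pay 50%) x cached-input pricing (pay ~10%) = pay ~5%
awk 'BEGIN { printf "remaining input cost: %.2f of base\n", 0.5 * 0.1 }'
```

This prints `remaining input cost: 0.05 of base`, i.e. the ~95% asymmetry the audit describes for the #219 + #221 combination.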
# Claw Code
Claw Code is the public Rust implementation of the `claw` CLI agent harness.
The canonical implementation lives in `rust/`, and the current source of truth for this repository is ultraworkers/claw-code.
> [!IMPORTANT]
> Start with `USAGE.md` for build, auth, CLI, session, and parity-harness workflows. Make `claw doctor` your first health check after building, use `rust/README.md` for crate-level details, read `PARITY.md` for the current Rust-port checkpoint, and see `docs/container.md` for the container-first workflow.
>
> **ACP / Zed status:** `claw-code` does not ship an ACP/Zed daemon entrypoint yet. Run `claw acp` (or `claw --acp`) for the current status instead of guessing from the source layout; `claw acp serve` is currently a discoverability alias only, and real ACP support remains tracked separately in `ROADMAP.md`.
## Current repository shape

- `rust/` — canonical Rust workspace and the `claw` CLI binary
- `USAGE.md` — task-oriented usage guide for the current product surface
- `ERROR_HANDLING.md` — unified error-handling pattern for orchestration code
- `PARITY.md` — Rust-port parity status and migration notes
- `ROADMAP.md` — active roadmap and cleanup backlog
- `PHILOSOPHY.md` — project intent and system-design framing
- `SCHEMAS.md` — JSON protocol contract (Python harness reference)
- `src/` + `tests/` — companion Python/reference workspace and audit helpers; not the primary runtime surface
## Quick start

> [!WARNING]
> `cargo install claw-code` installs the wrong thing. The `claw-code` crate on crates.io is a deprecated stub that places `claw-code-deprecated.exe` — not `claw`. Running it only prints `"claw-code has been renamed to agent-code"`. Do not use `cargo install claw-code`. Either build from source (this repo) or install the upstream binary:
>
> ```sh
> cargo install agent-code   # upstream binary — installs 'agent.exe' (Windows) / 'agent' (Unix), NOT 'agent-code'
> ```
>
> This repo (`ultraworkers/claw-code`) is build-from-source only — follow the steps below.
```sh
# 1. Clone and build
git clone https://github.com/ultraworkers/claw-code
cd claw-code/rust
cargo build --workspace

# 2. Set your API key (Anthropic API key — not a Claude subscription)
export ANTHROPIC_API_KEY="sk-ant-..."

# 3. Verify everything is wired correctly
./target/debug/claw doctor

# 4. Run a prompt
./target/debug/claw prompt "say hello"
```
> [!NOTE]
> Windows (PowerShell): the binary is `claw.exe`, not `claw`. Use `.\target\debug\claw.exe` or run `cargo run -- prompt "say hello"` to skip the path lookup.
## Windows setup

PowerShell is a supported Windows path. Use whichever shell works for you. The common onboarding issues on Windows are:

- **Install Rust first** — download from https://rustup.rs/ and run the installer. Close and reopen your terminal when it finishes.
- **Verify Rust is on PATH:**

  ```sh
  cargo --version
  ```

  If this fails, reopen your terminal or run the PATH setup from the Rust installer output, then retry.
- **Clone and build** (works in PowerShell, Git Bash, or WSL):

  ```sh
  git clone https://github.com/ultraworkers/claw-code
  cd claw-code/rust
  cargo build --workspace
  ```
- **Run** (PowerShell — note `.exe` and backslash):

  ```powershell
  $env:ANTHROPIC_API_KEY = "sk-ant-..."
  .\target\debug\claw.exe prompt "say hello"
  ```

Git Bash / WSL are optional alternatives, not requirements. If you prefer bash-style paths (`/c/Users/you/...` instead of `C:\Users\you\...`), Git Bash (ships with Git for Windows) works well. In Git Bash, the MINGW64 prompt is expected and normal — not a broken install.
## Post-build: locate the binary and verify

After running `cargo build --workspace`, the `claw` binary is built but not automatically installed to your system. Here's where to find it and how to verify the build succeeded.
### Binary location

After `cargo build --workspace` in `claw-code/rust/`:

**Debug build** (default, faster compile):

- macOS/Linux: `rust/target/debug/claw`
- Windows: `rust/target/debug/claw.exe`

**Release build** (optimized, slower compile):

- macOS/Linux: `rust/target/release/claw`
- Windows: `rust/target/release/claw.exe`

If you ran `cargo build` without `--release`, the binary is in the `debug/` folder.
### Verify the build succeeded

Test the binary directly using its path:

```sh
# macOS/Linux (debug build)
./rust/target/debug/claw --help
./rust/target/debug/claw doctor
```

```powershell
# Windows PowerShell (debug build)
.\rust\target\debug\claw.exe --help
.\rust\target\debug\claw.exe doctor
```

If these commands succeed, the build is working. `claw doctor` is your first health check — it validates your API key, model access, and tool configuration.
### Optional: Add to PATH

If you want to run `claw` from any directory without the full path, choose one of these approaches:

**Option 1: Symlink (macOS/Linux)**

```sh
ln -s "$(pwd)/rust/target/debug/claw" /usr/local/bin/claw
```

Then reload your shell and test:

```sh
claw --help
```

**Option 2: Use `cargo install` (all platforms)**

Build and install to Cargo's default location (`~/.cargo/bin/`, which is usually on PATH):

```sh
# From the claw-code/rust/ directory
cargo install --path . --force

# Then from anywhere
claw --help
```

**Option 3: Update shell profile (bash/zsh)**

Add this line to `~/.bashrc` or `~/.zshrc`, using the absolute path to your clone (a literal `$(pwd)` would re-expand to whatever directory each new shell starts in, which is almost never the repo):

```sh
export PATH="/absolute/path/to/claw-code/rust/target/debug:$PATH"
```

Reload your shell:

```sh
source ~/.bashrc   # or: source ~/.zshrc
claw --help
```
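Whichever option you choose, `command -v` reports which executable the shell will actually run for a given name, which is a quick way to confirm a PATH change took effect. A minimal sketch, shown with `sh` only because it exists on any POSIX system; substitute `claw` once it is installed:

```sh
# Print the path the shell resolves for a command name.
# Replace `sh` with `claw` to check your own install.
command -v sh
```

If it prints nothing (and exits non-zero), the name is not on PATH and the shell change has not taken effect yet.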
## Troubleshooting

- **"command not found: claw"** — the binary is in `rust/target/debug/claw`, but it's not on your PATH. Use the full path `./rust/target/debug/claw` or symlink/install as above.
- **"permission denied"** — on macOS/Linux, you may need `chmod +x rust/target/debug/claw` if the executable bit isn't set (rare).
- **Debug vs. release** — if the build is slow, you're in debug mode (default). Add `--release` to `cargo build` for faster runtime, but the build itself will take 5–10 minutes.
> [!NOTE]
> Auth: `claw` requires an API key (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, etc.) — Claude subscription login is not a supported auth path.
Run the workspace test suite after verifying the binary works:

```sh
cd rust
cargo test --workspace
```
## Documentation map

- `USAGE.md` — quick commands, auth, sessions, config, parity harness
- `rust/README.md` — crate map, CLI surface, features, workspace layout
- `PARITY.md` — parity status for the Rust port
- `rust/MOCK_PARITY_HARNESS.md` — deterministic mock-service harness details
- `ROADMAP.md` — active roadmap and open cleanup work
- `PHILOSOPHY.md` — why the project exists and how it is operated
## Ecosystem

Claw Code is built in the open alongside the broader UltraWorkers toolchain.
## Ownership / affiliation disclaimer
- This repository does not claim ownership of the original Claude Code source material.
- This repository is not affiliated with, endorsed by, or maintained by Anthropic.
