**Gap #222 — model discovery (`/v1/models`) is absent across every surface**

**Findings.**

- Zero `GET /v1/models` and zero `GET /v1/models/{id}` surface across rust/crates/api/src/providers/anthropic.rs and rust/crates/api/src/providers/openai_compat.rs — `rg` returns zero hits for `/v1/models`, `list_models`, `fetch_models`, `get_models`, `available_models`, `model_catalog`, `ModelInfo`, `ModelList`, `ListModelsResponse`, `OwnedBy`, `ModelObject`, or `ModelCatalog` across rust/.
- Zero `Model` / `ModelInfo` / `ModelList` / `ListModelsResponse` typed taxonomy in rust/crates/api/src/types.rs.
- Zero `list_models<'a>(&'a self) -> ProviderFuture<'a, ModelList>` and zero `retrieve_model<'a>(&'a self, model_id: &'a str) -> ProviderFuture<'a, ModelInfo>` methods on the Provider trait at rust/crates/api/src/providers/mod.rs:17-30; only `send_message` and `stream_message` exist, both per-request.
- Zero `list_models` dispatch on the ProviderClient enum at rust/crates/api/src/client.rs:8-14 (three variants — Anthropic/Xai/OpenAi — all closed under per-request synchronous dispatch).
- Zero `claw models` / `claw model list` / `claw list-models` CLI subcommand surface at rust/crates/rusty-claude-cli/src/main.rs, and zero `/models` slash command in the SlashCommandSpec table at rust/crates/commands/src/lib.rs.
- Zero validation against an authoritative source on `set_model` at rust/crates/rusty-claude-cli/src/main.rs:4989-5037: a user can type `/model claude-banana-9000`, the runtime accepts it, swaps the active model to that string, and only fails at request time when the upstream provider returns 404 / invalid_model_error.
- The existing `/providers` slash command at rust/crates/commands/src/lib.rs:716-720 is a literal alias for `/doctor` at rust/crates/commands/src/lib.rs:1386-1389 despite advertising `summary: "List available model providers"` — an advertised-but-rerouted shape, actively misleading at the UX layer, and distinct from #220's advertised-but-unbuilt shape because the parse arm dispatches to a *different* command entirely instead of returning a clear unsupported error.

The canonical model-discovery affordance is therefore invisible across every CLI / REPL / slash-command / Provider-trait / ProviderClient-enum / data-model surface. The only model-name knowledge the runtime has is local and hardcoded:

- the 13-entry MODEL_REGISTRY (3 anthropic + 5 grok + 1 kimi + 4 prefix routes for openai/gpt/qwen/kimi) at rust/crates/api/src/providers/mod.rs:52-134 and 166-225, and
- the 6-entry `model_token_limit` match arm at rust/crates/api/src/providers/mod.rs:277-301, covering claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5-20251213, grok-3, grok-3-mini, kimi-k2.5, and kimi-k1.5 — and returning `None` for current production IDs claude-opus-4-7, claude-haiku-4-6, gpt-5.2, o3, o4-mini, kimi-k3, qwen3-max, grok-4, and deepseek-reasoner.

There is no way to refresh that registry, no way to discover new model IDs that providers publish, no way to validate user-supplied model strings, and no way to cross-link to the `pricing_for_model` cost estimator (#209 substring-matching gap), the `model_token_limit` preflight check (#210 max_tokens shadow-fork gap, which silently no-ops on unknown models), or the future `is_batch_request` flag (#221 batch-dispatch gap, which requires knowing which models support batch). USAGE.md:426-440 documents only six model rows out of nine MODEL_REGISTRY entries: the kimi alias is missing from the documented table, the four prefix routes are mentioned only in passing prose, and there is zero documentation of `/v1/models` endpoint usage, of model-catalog discovery, or of what to do when your provider ships a new model that isn't in claw-code's hardcoded registry.

**Context.** The models endpoint is **the most universally-available endpoint in the LLM API ecosystem** — older than /v1/chat/completions itself, older than /v1/embeddings, older than /v1/messages; the literal first endpoint after auth on every OpenAI-compat provider since 2020 and on Anthropic since 2024-12-04, with GA-shipped first-class typed surfaces in every Python/TypeScript SDK in the ecosystem. claw-code is the **sole client/agent/CLI in the surveyed coding-agent ecosystem with zero /v1/models integration AND a misleading /providers slash command that aliases to /doctor**; both gaps are unique to claw-code in the survey.

**Provenance and shape.** Jobdori cycle #374; extends the #168c emission-routing audit; explicit follow-on candidate from #221's seven-layer-endpoint-family-absence shape — the third of three named candidates (Files API typed taxonomy, Embeddings API typed taxonomy, Models list endpoint typed taxonomy) and the most clawability-impacting, because it is the upstream root cause of three downstream gaps already catalogued in this audit. The sibling-shape cluster grows to twenty-one (#201/#202/#203/#206/#207/#208/#209/#210/#211/#212/#213/#214/#215/#216/#217/#218/#219/#220/#221/#222), the wire-format-parity cluster to twelve (#211–#222), and the capability-parity cluster to four (#218/#220/#221/#222); the discovery-and-validation cluster is #222 alone, but it is the upstream root cause of #209's pricing-fallback gap, #210's max_tokens shadow-fork gap, and #221's batch-dispatch gap. The eight-layer endpoint-family-absence-with-misleading-alias shape (endpoint URL + data-model taxonomy + Provider-trait method + ProviderClient-enum dispatch + CLI-subcommand surface + slash-command surface with misleading alias + set_model validation + downstream consumers with stale data) is the largest single advertised-vs-actual gap catalogued, distinct from the prior single-field (#211/#212/#214), response-only (#213/#207), header-only (#215), three-dimensional (#216), classifier-leakage (#217), four-layer (#218), false-positive-opt-in (#219), five-layer feature-absence (#220), and seven-layer endpoint-family-absence (#221) members. The advertised-but-rerouted shape is novel — a strict superset of #220's advertised-but-unbuilt, because the parse arm dispatches to a *different* command instead of returning a clear unsupported error, and it applies to any future SlashCommandSpec entry whose summary field describes a feature different from what the parse arm dispatches to.

**External validation.**

- Anthropic Models API reference (https://docs.anthropic.com/en/api/models-list) — `GET /v1/models`, GA 2024-12-04, paginated via `before_id` / `after_id` / `limit`, with the `ModelInfo { id, type: "model", display_name, created_at }` shape; the retrieve reference (https://docs.anthropic.com/en/api/models) documents `GET /v1/models/{model_id}` for single-model lookup.
- OpenAI Models API (https://platform.openai.com/docs/api-reference/models) — the literal first endpoint after auth, with `Model { id, object: "model", created, owned_by }` and `ModelList { object: "list", data: Vec<Model> }`.
- First-class SDK surfaces: OpenAI Python `client.models.list()` and `client.models.retrieve(model_id)`; Anthropic Python `client.models.list()`, GA-shipped 2024-12-04 alongside the API endpoint; Anthropic TypeScript `client.models.list()`.
- Hosted-relay equivalents: AWS Bedrock ListFoundationModels (`FoundationModelSummary` with provider + model + modalities + active flag), Azure OpenAI Models reference with deployment-aware catalog, Vertex AI `projects.locations.models.list` for Vertex-published Anthropic/Gemini/third-party models, and DeepSeek / Moonshot / Alibaba-DashScope / xAI parallel `/v1/models` OpenAI-compat shapes.
- OpenRouter Models API (https://openrouter.ai/api/v1/models) — the canonical "live model catalog with pricing" reference and the model that anomalyco/opencode-via-models.dev uses for pricing-data freshness.
- CLI/SDK catalogs: simonw/llm `llm models` and `llm models default <model>` first-class subcommands backed by per-plugin model registration, plus its plugin-registration architecture for ad-hoc model addition; Vercel AI SDK 6 `provider.languageModels()` and `provider.embeddingModels()` typed catalog APIs; LangChain `init_chat_model(model_provider, model_name)` reflective discovery via provider-defined catalogs and `BaseChatModel.aget_models` async catalog query.
- models.dev (https://models.dev) — community-maintained authoritative model catalog with pricing, capability flags, and provider routing; used by anomalyco/opencode for pricing-data freshness, with explicit `{ provider: unknown, reason: not_in_pricing_table }` fallback metadata when a model id isn't in the catalog — the canonical "external authoritative source for model metadata" reference.
- Peer coding agents and editors: charmbracelet/crush typed catalog with provider + model + input/output pricing; continue.dev config-file-driven catalog with auto-refresh from provider endpoints; zed-industries/zed bundled JSON catalog with periodic upstream refresh; TabbyML/tabby model catalog via plugin registration.
- Local-model catalogs: llama.cpp server `/v1/models` (OpenAI-compat shape), LM Studio `/v1/models`, Ollama `/api/tags` and `/v1/models` (both Ollama-native and OpenAI-compat shapes), llamafile bundled-model catalog, LiteLLM models reference covering 100+ models at proxy level.
- Gateways and observability: portkey.ai gateway-level catalog; helicone.ai observability-platform model catalog with per-model usage stats; prompthub.us model-catalog-as-a-service; OpenTelemetry GenAI semconv, which documents `gen_ai.request.model` and `gen_ai.response.model` as required span attributes (every observability backend treats model as a first-class structured signal requiring authoritative-source validation).
- Schema and stability: the OpenAPI 3.1 spec for `/v1/models` at https://github.com/openai/openai-openapi as the canonical machine-readable schema; Anthropic API stability versioning (https://docs.anthropic.com/en/api/versioning), with the `anthropic-version` header semver-stable since 2023-06-01 and the models endpoint stable since 2024-12-04.

**Tally.** Thirty-two ecosystem references; three first-class models-endpoint specs (Anthropic, OpenAI, OpenRouter); a GA timeline of 16 months on Anthropic's side and 6+ years on OpenAI's side; eight first-class CLI/SDK implementations (Anthropic Python + TypeScript, OpenAI Python, simonw/llm, Vercel AI SDK, LangChain, Zed, charmbracelet/crush); seven first-class local-model catalogs (Ollama, LM Studio, llama.cpp server, llamafile, Tabby, Continue.dev, LiteLLM proxy); one community-maintained authoritative pricing source (models.dev), used by the closest peer coding agent.

**Bottom line.** claw-code is the **sole client/agent/CLI in the surveyed coding-agent ecosystem with zero /v1/models integration AND a misleading /providers slash command that aliases to /doctor**. The model-discovery gap is the **upstream root cause** of three downstream cost-and-correctness gaps already catalogued in this audit (#209 / #210 / #221), and the misleading-alias shape is novel within the cluster. #222 closes that upstream root cause and unblocks the live-catalog-driven cost estimation, max-tokens validation, batch-capability detection, and CLI-vs-slash-command symmetry that the runtime's clawability doctrine treats as canonical baseline expectations.
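To make the missing surface concrete, here is a minimal sketch of the two upstream discovery calls, using the endpoint shapes from the Anthropic and OpenAI references cited above. Nothing below is wired into claw today, and `jq` is assumed to be available:

```sh
# Anthropic model catalog (GA 2024-12-04): ModelInfo objects under .data,
# paginated via before_id / after_id / limit
curl -s https://api.anthropic.com/v1/models \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" | jq -r '.data[].id'

# OpenAI-compat model catalog (the first endpoint after auth): Model objects under .data
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" | jq -r '.data[].id'
```

A hypothetical `claw models` subcommand, `/models` slash command, or `set_model` validator would sit directly on top of these two responses.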
# Claw Code
ultraworkers/claw-code · Usage · Error Handling · Rust workspace · Parity · Roadmap · UltraWorkers Discord
Claw Code is the public Rust implementation of the claw CLI agent harness.
The canonical implementation lives in rust/, and the current source of truth for this repository is ultraworkers/claw-code.
> [!IMPORTANT]
> Start with `USAGE.md` for build, auth, CLI, session, and parity-harness workflows. Make `claw doctor` your first health check after building, use `rust/README.md` for crate-level details, read `PARITY.md` for the current Rust-port checkpoint, and see `docs/container.md` for the container-first workflow.
>
> ACP / Zed status: claw-code does not ship an ACP/Zed daemon entrypoint yet. Run `claw acp` (or `claw --acp`) for the current status instead of guessing from source layout; `claw acp serve` is currently a discoverability alias only, and real ACP support remains tracked separately in `ROADMAP.md`.
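For example, after building from source (binary path as in the quick start below):

```sh
./target/debug/claw acp      # prints the current ACP/Zed support status
./target/debug/claw --acp    # flag form of the same status check
```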
## Current repository shape

- `rust/` — canonical Rust workspace and the `claw` CLI binary
- `USAGE.md` — task-oriented usage guide for the current product surface
- `ERROR_HANDLING.md` — unified error-handling pattern for orchestration code
- `PARITY.md` — Rust-port parity status and migration notes
- `ROADMAP.md` — active roadmap and cleanup backlog
- `PHILOSOPHY.md` — project intent and system-design framing
- `SCHEMAS.md` — JSON protocol contract (Python harness reference)
- `src/` + `tests/` — companion Python/reference workspace and audit helpers; not the primary runtime surface
## Quick start

> [!WARNING]
> `cargo install claw-code` installs the wrong thing. The claw-code crate on crates.io is a deprecated stub that places `claw-code-deprecated.exe` — not `claw`. Running it only prints `"claw-code has been renamed to agent-code"`. Do not use `cargo install claw-code`. Either build from source (this repo) or install the upstream binary:
>
> ```sh
> cargo install agent-code   # upstream binary — installs 'agent.exe' (Windows) / 'agent' (Unix), NOT 'agent-code'
> ```
>
> This repo (ultraworkers/claw-code) is build-from-source only — follow the steps below.
```sh
# 1. Clone and build
git clone https://github.com/ultraworkers/claw-code
cd claw-code/rust
cargo build --workspace

# 2. Set your API key (Anthropic API key — not a Claude subscription)
export ANTHROPIC_API_KEY="sk-ant-..."

# 3. Verify everything is wired correctly
./target/debug/claw doctor

# 4. Run a prompt
./target/debug/claw prompt "say hello"
```
> [!NOTE]
> Windows (PowerShell): the binary is `claw.exe`, not `claw`. Use `.\target\debug\claw.exe` or run `cargo run -- prompt "say hello"` to skip the path lookup.
## Windows setup
PowerShell is a supported Windows path. Use whichever shell works for you. The common onboarding issues on Windows are:
- Install Rust first — download from https://rustup.rs/ and run the installer. Close and reopen your terminal when it finishes.
- Verify Rust is on PATH:

  ```powershell
  cargo --version
  ```

  If this fails, reopen your terminal or run the PATH setup from the Rust installer output, then retry.
- Clone and build (works in PowerShell, Git Bash, or WSL):

  ```powershell
  git clone https://github.com/ultraworkers/claw-code
  cd claw-code/rust
  cargo build --workspace
  ```
- Run (PowerShell — note the `.exe` and backslashes):

  ```powershell
  $env:ANTHROPIC_API_KEY = "sk-ant-..."
  .\target\debug\claw.exe prompt "say hello"
  ```
Git Bash / WSL are optional alternatives, not requirements. If you prefer bash-style paths (/c/Users/you/... instead of C:\Users\you\...), Git Bash (ships with Git for Windows) works well. In Git Bash, the MINGW64 prompt is expected and normal — not a broken install.
## Post-build: locate the binary and verify

After running `cargo build --workspace`, the claw binary is built but not automatically installed to your system. Here's where to find it and how to verify the build succeeded.

### Binary location

After `cargo build --workspace` in claw-code/rust/:

Debug build (default, faster compile):

- macOS/Linux: `rust/target/debug/claw`
- Windows: `rust/target/debug/claw.exe`

Release build (optimized, slower compile):

- macOS/Linux: `rust/target/release/claw`
- Windows: `rust/target/release/claw.exe`

If you ran `cargo build` without `--release`, the binary is in the `debug/` folder.
### Verify the build succeeded

Test the binary directly using its path:

```sh
# macOS/Linux (debug build)
./rust/target/debug/claw --help
./rust/target/debug/claw doctor
```

```powershell
# Windows PowerShell (debug build)
.\rust\target\debug\claw.exe --help
.\rust\target\debug\claw.exe doctor
```

If these commands succeed, the build is working. `claw doctor` is your first health check — it validates your API key, model access, and tool configuration.
### Optional: Add to PATH

If you want to run claw from any directory without the full path, choose one of these approaches:

Option 1: Symlink (macOS/Linux) — run from the repo root:

```sh
ln -s $(pwd)/rust/target/debug/claw /usr/local/bin/claw
```

Then reload your shell and test:

```sh
claw --help
```

Option 2: Use cargo install (all platforms)

Build and install to Cargo's default location (`~/.cargo/bin/`, which is usually on PATH):

```sh
# From the claw-code/rust/ directory
cargo install --path . --force

# Then from anywhere
claw --help
```

Option 3: Update shell profile (bash/zsh)

Add this line to `~/.bashrc` or `~/.zshrc`, substituting the absolute path to your clone (don't use `$(pwd)` here — in a profile it expands to whatever directory each new shell starts in, not the repo):

```sh
export PATH="/absolute/path/to/claw-code/rust/target/debug:$PATH"
```

Reload your shell:

```sh
source ~/.bashrc   # or: source ~/.zshrc
claw --help
```
### Troubleshooting

- "command not found: claw" — the binary is in `rust/target/debug/claw`, but it's not on your PATH. Use the full path `./rust/target/debug/claw` or symlink/install as above.
- "permission denied" — on macOS/Linux, you may need `chmod +x rust/target/debug/claw` if the executable bit isn't set (rare).
- Debug vs. release — if claw runs slowly, you're in debug mode (the default). Add `--release` to `cargo build` for a faster binary, but the build itself will take 5–10 minutes.
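A minimal release-build sequence, if runtime speed matters (paths match the binary-location table above):

```sh
cd rust
cargo build --workspace --release    # optimized build; expect a 5-10 minute compile
./target/release/claw --help         # release binary lands in target/release/
```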
> [!NOTE]
> Auth: claw requires an API key (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, etc.) — Claude subscription login is not a supported auth path.
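For example, a sketch using the two provider keys named above (key values are placeholders):

```sh
export ANTHROPIC_API_KEY="sk-ant-..."   # for Anthropic-routed models
export OPENAI_API_KEY="sk-..."          # for OpenAI-compat-routed models
./rust/target/debug/claw doctor         # confirms a key is wired correctly
```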
Run the workspace test suite after verifying the binary works:

```sh
cd rust
cargo test --workspace
```
## Documentation map

- `USAGE.md` — quick commands, auth, sessions, config, parity harness
- `rust/README.md` — crate map, CLI surface, features, workspace layout
- `PARITY.md` — parity status for the Rust port
- `rust/MOCK_PARITY_HARNESS.md` — deterministic mock-service harness details
- `ROADMAP.md` — active roadmap and open cleanup work
- `PHILOSOPHY.md` — why the project exists and how it is operated
## Ecosystem
Claw Code is built in the open alongside the broader UltraWorkers toolchain:
## Ownership / affiliation disclaimer
- This repository does not claim ownership of the original Claude Code source material.
- This repository is not affiliated with, endorsed by, or maintained by Anthropic.
