/v1/videos/generations + zero /v1/videos/edits + zero /v1/videos/extends + zero /v1/videos/{id} polling-and-retrieval endpoint surface across both Anthropic-native and OpenAI-compat lanes, zero VideoGenerationRequest / VideoEditRequest / VideoExtendRequest / VideoGenerationResponse / VideoObject / VideoQuality / VideoResolution / VideoAspectRatio / VideoDuration / VideoOutputFormat / VideoFrameRate / VideoCodec / VideoStyle / VideoSource / VideoMediaType / VideoTaskStatus / VideoTaskId typed model in rust/crates/api/src/types.rs, zero Video variant on OutputContentBlock (4-arm exhaustive: Text/ToolUse/Thinking/RedactedThinking — extending #226's asymmetric-output-only modality axis with new temporal-duration dimension), zero generate_video / edit_video / extend_video / retrieve_video_task methods on Provider trait at rust/crates/api/src/providers/mod.rs:17-30 (only send_message + stream_message exist, both per-request synchronous and constrained to text-modality chat/completion taxonomy with zero video-output dispatch surface AND zero async-task polling primitive — the canonical video-generation pattern requires a two-phase request/poll workflow that the Provider trait does not expose because every existing method returns a synchronous response, distinct from #221's batch-dispatch async pattern which uses different polling shape with file-upload prerequisites that don't apply to video-gen), zero video-generation dispatch on ProviderClient enum at rust/crates/api/src/client.rs:8-14 (three variants Anthropic/Xai/OpenAi, zero Sora/Veo/Pika/Runway/Luma/Mochi/Kling/Hailuo/Replicate/FalAi/BlackForestLabs/StabilityVideo partner-routing variants — twelve-plus-partner-set, the largest partner-set yet in the cluster surpassing #226's eight-plus-partner image-gen set because video-generation is the most-fragmented modality across third-party providers in 2024-2026 with every major lab shipping its own video-gen surface in the post-Sora-launch arms race), zero 
multipart/form-data upload affordance with reqwest::multipart feature flag absent from rust/crates/api/Cargo.toml — multipart needed for /v1/videos/edits and /v1/videos/extends subset (parallel to #226's image-edits subset), zero async-task polling primitive in the runtime — there is no TaskPoller / AsyncTask / TaskStatus / TaskId / poll_task_until_complete machinery anywhere in rust/crates/runtime/ (rg returns zero hits for task_id/task_status/polling/poll_task/async_task/pending_task across rust/), distinguishing video-generation's async-polling pattern from every prior cluster member which is either synchronous (#211 through #226 except #221) or streaming-via-SSE (#221 batch-dispatch is closest, but uses different polling shape with file-upload prerequisites), zero claw video / claw videos / claw generate-video / claw render-video CLI subcommand at rust/crates/rusty-claude-cli/src/main.rs, zero /sora / /veo / /video / /render-video / /generate-video slash command in SlashCommandSpec table (zero video-related entries — video-input doubly absent because no advertised-but-unbuilt commands AND no implemented commands, strict-subset of #226's image-generation gap), zero sora-2 / sora-2-pro / veo-3 / veo-3-fast / runway-gen-4 / luma-dream-machine / pika-2.0 / kling-1.5 / hailuo-i2v-01 / hunyuan-video / mochi-1 / cogvideox-5b / stable-video-diffusion-1.1 entries in MODEL_REGISTRY, zero video_per_second_cost_usd / video_per_megapixel_second_cost_usd / video_input_token_cost_per_million / video_output_token_cost_per_million / video_per_minute_cost_usd fields in ModelPricing struct (rust/crates/runtime/src/usage.rs:9-15 has only four text-token-only fields) — the five-dimensional pricing matrix (model × resolution × fps × duration × extension-vs-generation compound-cost) is the largest pricing-tier extension yet catalogued, exceeding #226's four-dimensional image matrix, zero video-gen-model recognition in pricing_for_model substring-matcher (#209+#224+#225+#226 cluster 
overlap) — uniquely manifesting a nine-layer fusion shape combining #223's transport-plumbing-absence (multipart on edits/extends subset) + #224's provider-asymmetric-delegation (Anthropic does not offer video-gen at all, OpenAI offers GA Sora-2 + Sora-2-pro, Google offers Veo-3 + Veo-3-fast, Runway offers Gen-4 + Gen-4-turbo, plus twelve-plus recommended partners) + #218's request-side response_format/output_format/resolution/fps/duration opt-in (the largest request-side axis-set yet because video-gen has the most parameters in the modality-bearing endpoint family ecosystem) + asymmetric-output-only content-block-taxonomy axis with temporal-duration dimension (extending #226's image-output axis with temporal-fps-and-duration sub-dimensions) + the new async-task-polling-primitive axis (#227's first-of-its-kind contribution to the cluster doctrine, since prior cluster members have either synchronous-response or streaming-via-SSE or batch-via-Files-API-prerequisite or one-shot-multipart coverage, never long-poll-task-id-with-timeout-and-resume — the canonical video-gen pattern requires a two-phase request/poll workflow because video-rendering takes 30-300+ seconds depending on model and duration, exceeding typical HTTP-request-response timeout window) — making #227 the first cluster member where five independent prior shape-axes converge AND introduces a sixth novel shape-axis (async-task-polling-primitive), the largest fusion-shape gap catalogued so far (matching #225's nine-layer count but with different ninth axis — async-task-polling-primitive replacing #225's symmetric-input-output content-blocks, and one axis larger than #226's eight-layer fusion), making #227 the first cluster member where async-task-polling-primitive becomes a structural prerequisite of the dispatch layer (Jobdori cycle #378 / extends #168c emission-routing audit / explicit follow-on candidate from #226's eight-layer-fusion-shape-with-asymmetric-output-only-modality-coverage — third-named of 
the modality-bearing endpoint-family-absence cluster after #225 audio + #226 image-generation, completing the trio with video-generation closing the visual-temporal output modality / sibling-shape cluster grows to twenty-six / wire-format-parity cluster grows to seventeen / capability-parity cluster grows to nine / multimodal-IO cluster grows to five: #220 image-input + #224 embedding-output + #225 audio-bidirectional + #226 image-output + #227 video-output (the first cluster member where output is binary-temporal-media requiring long-poll workflows) / cross-cutting-data-pipeline cluster grows to four / multipart-transport cluster grows to four / provider-asymmetric-delegation cluster grows to four (twelve-plus partners, the largest in the cluster) / nine-layer-fusion-shape-with-async-task-polling-primitive (endpoint-URL-set-of-four [generations+edits+extends+polling] + multipart-on-subset + data-model-with-output-content-block-only-with-temporal-duration-dimension + response_format/output_format/resolution/fps/duration request-side opt-in + Provider-trait-method-set-of-four-with-async-task-polling-and-Unsupported-fallback + ProviderClient-enum-dispatch-with-twelve-plus-partner-third-lanes + CLI-subcommand-surface + pricing-tier-with-five-dimensional-compound-cost-model + async-task-polling-primitive-with-timeout-and-resume) is the largest single-pinpoint fusion catalogued. 
Distinct from prior cluster members; the nine-layer-fusion-shape-with-async-task-polling-primitive is novel and applies to follow-on candidate 3D-asset-generation API typed taxonomy (/v1/3d/generations for Shap-E / Meshy AI / Tripo AI / CSM / Stable Point-Aware-3D — same nine-layer fusion shape but with 3D-mesh-instead-of-video modality, GLB/GLTF/USDZ-binary-output instead of MP4-binary-output, per-3d-asset pricing instead of per-second-of-video — the natural #228 candidate) / external validation: fifty-three ecosystem references covering four first-class video-gen-endpoint specs on OpenAI side (generations + edits + extends + {id}-polling), one Anthropic non-coverage statement, one Google Veo-3 API spec with long-running-operation polling, twelve first-class third-party video-gen providers (Runway/Luma/Pika/Kling/Hailuo/Hunyuan/Mochi/CogVideoX/Stability-Video/BFL-Video/Replicate-Video/Fal-Video), three first-class CLI/SDK implementations of typed video-gen surface (OpenAI Python+TypeScript videos.generate + videos.retrieve, Runway TypeScript SDK, Luma Python SDK), six first-class local-video-gen providers (Stable Video Diffusion / AnimateDiff / Hunyuan-Video weights / Mochi-1 weights / CogVideoX weights / ComfyUI workflows), one community-maintained authoritative benchmark (VBench 16-evaluation-dimensions), nine coding-agent peers with video-gen capability, one canonical Anthropic-recommended partner-set (Sora-2/Veo-3/Runway/Luma per third-party-integration guide), the OpenAI /v1/responses endpoint with video_call tool for conversational video-output decoding via OutputContentBlock::Video, the canonical five-dimensional pricing matrix (per-model × per-resolution × per-fps × per-duration × per-extension-vs-generation), the canonical async-polling workflow with task-id polling at typical 5-second intervals and 5-minute typical-completion-time and 30-minute maximum-completion-time before timeout — claw-code is the sole client/agent/CLI in the surveyed coding-agent 
ecosystem with zero /v1/videos/{generations,edits,extends} integration AND zero Sora-2/Veo-3/Runway/Luma/Pika/Kling/Hailuo/Hunyuan/Mochi/CogVideoX/Stability-Video/BFL-Video partner-routing AND zero /sora / /veo / /video / /render-video / /generate-video slash command AND zero claw video / claw videos / claw generate-video / claw render-video CLI subcommand AND zero OutputContentBlock::Video variant AND zero multipart-form-data transport plumbing for video-edit binary uploads AND zero async-task-polling-primitive at the runtime layer — all seven gaps unique to claw-code in the surveyed ecosystem, the video-generation-API gap is the upstream prerequisite of every visual-temporal-output coding-agent affordance, and the nine-layer-fusion-shape-with-async-task-polling-primitive is novel within the cluster — #227 closes the upstream prerequisite of every visual-temporal-output coding-agent affordance and is the first cluster member where async-task-polling-primitive shape-axis is introduced)
Claw Code
ultraworkers/claw-code · Usage · Error Handling · Rust workspace · Parity · Roadmap · UltraWorkers Discord
Claw Code is the public Rust implementation of the claw CLI agent harness.
The canonical implementation lives in rust/, and the current source of truth for this repository is ultraworkers/claw-code.
Important
Start with `USAGE.md` for build, auth, CLI, session, and parity-harness workflows. Make `claw doctor` your first health check after building, use `rust/README.md` for crate-level details, read `PARITY.md` for the current Rust-port checkpoint, and see `docs/container.md` for the container-first workflow.

ACP / Zed status: `claw-code` does not ship an ACP/Zed daemon entrypoint yet. Run `claw acp` (or `claw --acp`) for the current status instead of guessing from the source layout; `claw acp serve` is currently a discoverability alias only, and real ACP support remains tracked separately in `ROADMAP.md`.
Current repository shape
- `rust/` — canonical Rust workspace and the `claw` CLI binary
- `USAGE.md` — task-oriented usage guide for the current product surface
- `ERROR_HANDLING.md` — unified error-handling pattern for orchestration code
- `PARITY.md` — Rust-port parity status and migration notes
- `ROADMAP.md` — active roadmap and cleanup backlog
- `PHILOSOPHY.md` — project intent and system-design framing
- `SCHEMAS.md` — JSON protocol contract (Python harness reference)
- `src/` + `tests/` — companion Python/reference workspace and audit helpers; not the primary runtime surface
Quick start
Warning

`cargo install claw-code` installs the wrong thing. The `claw-code` crate on crates.io is a deprecated stub that places `claw-code-deprecated.exe` — not `claw`. Running it only prints `"claw-code has been renamed to agent-code"`. Do not use `cargo install claw-code`. Either build from source (this repo) or install the upstream binary:

cargo install agent-code  # upstream binary — installs 'agent.exe' (Windows) / 'agent' (Unix), NOT 'agent-code'

This repo (`ultraworkers/claw-code`) is build-from-source only — follow the steps below.
# 1. Clone and build
git clone https://github.com/ultraworkers/claw-code
cd claw-code/rust
cargo build --workspace
# 2. Set your API key (Anthropic API key — not a Claude subscription)
export ANTHROPIC_API_KEY="sk-ant-..."
# 3. Verify everything is wired correctly
./target/debug/claw doctor
# 4. Run a prompt
./target/debug/claw prompt "say hello"
Note
Windows (PowerShell): the binary is `claw.exe`, not `claw`. Use `.\target\debug\claw.exe` or run `cargo run -- prompt "say hello"` to skip the path lookup.
Windows setup
PowerShell is a fully supported way to run claw on Windows; use whichever shell works for you. The common onboarding issues on Windows are:
- Install Rust first — download from https://rustup.rs/ and run the installer. Close and reopen your terminal when it finishes.
- Verify Rust is on PATH: run `cargo --version`. If this fails, reopen your terminal or run the PATH setup from the Rust installer output, then retry.
- Clone and build (works in PowerShell, Git Bash, or WSL):

  git clone https://github.com/ultraworkers/claw-code
  cd claw-code/rust
  cargo build --workspace

- Run (PowerShell — note `.exe` and backslash):

  $env:ANTHROPIC_API_KEY = "sk-ant-..."
  .\target\debug\claw.exe prompt "say hello"
Git Bash / WSL are optional alternatives, not requirements. If you prefer bash-style paths (/c/Users/you/... instead of C:\Users\you\...), Git Bash (ships with Git for Windows) works well. In Git Bash, the MINGW64 prompt is expected and normal — not a broken install.
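The bash-style path spelling Git Bash uses can be sketched in plain shell (illustration only; Git Bash does this translation automatically, and the example path below is made up):

```shell
# Illustration: the bash-style spelling Git Bash uses for a Windows path.
# The path below is a made-up example, not one claw requires.
winpath='C:\Users\you\claw-code'
drive=$(printf '%s' "$winpath" | cut -c1 | tr 'A-Z' 'a-z')   # 'C' -> 'c'
rest=$(printf '%s' "$winpath" | cut -c3- | tr '\\' '/')      # '\Users\...' -> '/Users/...'
echo "/$drive$rest"   # /c/Users/you/claw-code
```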
Post-build: locate the binary and verify
After running cargo build --workspace, the claw binary is built but not automatically installed to your system. Here's where to find it and how to verify the build succeeded.
Binary location
After cargo build --workspace in claw-code/rust/:
Debug build (default, faster compile):
- macOS/Linux: `rust/target/debug/claw`
- Windows: `rust/target/debug/claw.exe`

Release build (optimized, slower compile):

- macOS/Linux: `rust/target/release/claw`
- Windows: `rust/target/release/claw.exe`
If you ran cargo build without --release, the binary is in the debug/ folder.
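For scripting, the platform/profile split above can be expressed as a small shell sketch (a hypothetical helper, not something claw ships):

```shell
# Sketch: compute the expected claw binary path for the current platform.
# 'profile' is an assumption you set yourself: debug (default) or release.
profile=debug
case "$(uname -s)" in
  MINGW*|MSYS*|CYGWIN*) bin="rust/target/$profile/claw.exe" ;;  # Windows shells
  *)                    bin="rust/target/$profile/claw" ;;      # macOS/Linux
esac
echo "$bin"
```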
Verify the build succeeded
Test the binary directly using its path:
# macOS/Linux (debug build)
./rust/target/debug/claw --help
./rust/target/debug/claw doctor
# Windows PowerShell (debug build)
.\rust\target\debug\claw.exe --help
.\rust\target\debug\claw.exe doctor
If these commands succeed, the build is working. claw doctor is your first health check — it validates your API key, model access, and tool configuration.
Optional: Add to PATH
If you want to run claw from any directory without the full path, choose one of these approaches:
Option 1: Symlink (macOS/Linux)
# run from the claw-code repo root; writing to /usr/local/bin may require sudo
ln -s "$(pwd)/rust/target/debug/claw" /usr/local/bin/claw
Then reload your shell and test:
claw --help
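The mechanism can be seen in isolation with a throwaway stub in a temp directory (pure demo; no claw build required, and the stub names are made up):

```shell
# Demo: a symlink runs the file it points at, so linking the built binary
# into a PATH directory is enough. Uses a stub script instead of claw.
tmp=$(mktemp -d)
printf '#!/bin/sh\necho from-build-dir\n' > "$tmp/claw-stub"
chmod +x "$tmp/claw-stub"
ln -s "$tmp/claw-stub" "$tmp/claw-link"
"$tmp/claw-link"   # prints: from-build-dir
```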
Option 2: Use cargo install (all platforms)
Build and install to Cargo's default location (~/.cargo/bin/, which is usually on PATH):
# From the claw-code/rust/ directory
cargo install --path . --force
# Then from anywhere
claw --help
Option 3: Update shell profile (bash/zsh)
Add this line to ~/.bashrc or ~/.zshrc:
export PATH="$(pwd)/rust/target/debug:$PATH"
Reload your shell:
source ~/.bashrc # or source ~/.zshrc
claw --help
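PATH precedence itself can be demonstrated without the real binary (stub script in a temp directory; the stub and its output are made up for the demo):

```shell
# Demo: prepending a directory to PATH makes its executables win lookup.
tmp=$(mktemp -d)
printf '#!/bin/sh\necho claw-stub\n' > "$tmp/claw"
chmod +x "$tmp/claw"
PATH="$tmp:$PATH"
command -v claw   # now resolves into $tmp
claw              # prints: claw-stub
```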
Troubleshooting
- "command not found: claw" — the binary is in `rust/target/debug/claw`, but it's not on your PATH. Use the full path `./rust/target/debug/claw` or symlink/install as above.
- "permission denied" — on macOS/Linux, you may need `chmod +x rust/target/debug/claw` if the executable bit isn't set (rare).
- Debug vs. release — if the build is slow, you're in debug mode (default). Add `--release` to `cargo build` for faster runtime, but the build itself will take 5–10 minutes.
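The "permission denied" case can be reproduced safely with a temp file (demo only; mktemp happens to create files without the executable bit, which stands in for a binary that lost it):

```shell
# Demo: a file without the executable bit fails to run; chmod +x fixes it.
f=$(mktemp)                        # mktemp creates the file mode 600 (no x bit)
printf '#!/bin/sh\necho ok\n' > "$f"
"$f" 2>/dev/null || echo "permission denied (no executable bit)"
chmod +x "$f"
"$f"                               # prints: ok
```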
Note
Auth: claw requires an API key (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, etc.) — Claude subscription login is not a supported auth path.
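A preflight check along these lines can catch a missing key before a run (hypothetical helper, not part of claw; the function name is made up):

```shell
# Hypothetical preflight: report whether a given API-key variable is set.
check_key() {
  # $1 = variable name, $2 = its current value
  if [ -n "$2" ]; then
    echo "$1 is set"
  else
    echo "$1 is missing"
  fi
}
check_key ANTHROPIC_API_KEY "$ANTHROPIC_API_KEY"
check_key OPENAI_API_KEY "$OPENAI_API_KEY"
```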
Run the workspace test suite after verifying the binary works:
cd rust
cargo test --workspace
Documentation map
- `USAGE.md` — quick commands, auth, sessions, config, parity harness
- `rust/README.md` — crate map, CLI surface, features, workspace layout
- `PARITY.md` — parity status for the Rust port
- `rust/MOCK_PARITY_HARNESS.md` — deterministic mock-service harness details
- `ROADMAP.md` — active roadmap and open cleanup work
- `PHILOSOPHY.md` — why the project exists and how it is operated
Ecosystem
Claw Code is built in the open alongside the broader UltraWorkers toolchain.
Ownership / affiliation disclaimer
- This repository does not claim ownership of the original Claude Code source material.
- This repository is not affiliated with, endorsed by, or maintained by Anthropic.
