YeonGyu-Kim ceb092abd7 roadmap: #216 filed — neither MessageRequest nor MessageResponse has any service_tier field; build_chat_completion_request (openai_compat.rs:845) writes thirteen optional fields (model, max_tokens/max_completion_tokens, messages, stream, stream_options, tools, tool_choice, temperature, top_p, frequency_penalty, presence_penalty, stop, reasoning_effort) and does not write service_tier; AnthropicClient::send_raw_request (anthropic.rs:466) renders the same MessageRequest struct via AnthropicRequestProfile::render_json_body (telemetry/lib.rs:107) which has no field for it either, only a per-client extra_body escape hatch (asymmetric — openai_compat path has zero hits for extra_body); ChatCompletionResponse / ChatCompletionChunk / OpenAiUsage all deserialize four fields each, dropping the upstream-echoed service_tier confirmation and the system_fingerprint reproducibility marker that OpenAI documents as the canonical "what backend served you" signal; claw cannot opt into OpenAI flex (~50% cheaper async batch — developers.openai.com/api/docs/guides/flex-processing), cannot opt into OpenAI priority (~1.5-2x premium SLA latency — developers.openai.com/api/docs/guides/priority-processing), cannot opt into Anthropic priority (auto/standard_only — platform.claude.com/docs/en/api/service-tiers), and cannot detect at the response layer whether a request was flex-served or silently upgraded to priority by a project-level default override (Jobdori cycle #368 / extends #168c emission-routing audit / sibling-shape cluster grows to fifteen: #201/#202/#203/#206/#207/#208/#209/#210/#211/#212/#213/#214/#215/#216 / wire-format-parity cluster grows to six: #211+#212+#213+#214+#215+#216 / cost-parity cluster grows to six: #204+#207+#209+#210+#213+#216 / three-dimensional-structural-absence shape: request-side write + response-side read + reproducibility marker, distinct from prior request-only #211#212 / response-only #207#213#214 / header-only #215 members / 
external validation: OpenAI flex/priority/scale-tier guides, OpenAI advanced-usage system_fingerprint guide, Anthropic service-tiers reference, OpenTelemetry GenAI semconv gen_ai.openai.request.service_tier + gen_ai.openai.response.service_tier + gen_ai.openai.response.system_fingerprint, anomalyco/opencode#12297, Vercel AI SDK serviceTier provider option, LangChain ChatOpenAI service_tier ctor param, LiteLLM service_tier pass-through, semantic-kernel OpenAIPromptExecutionSettings.ServiceTier, openai-python SDK client.chat.completions.create(service_tier=...) first-class kwarg, MiniMax/DeepSeek Anthropic-compat layer notes, badlogic/pi-mono#1381)
2026-04-25 23:12:25 +09:00
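At the wire level, the gap described in the entry above is narrow; a minimal sketch of the missing request-side field and the response-side echo (tier values per OpenAI's service-tier docs; the model name, response id, and fingerprint below are illustrative placeholders, not real traffic):

```shell
# Sketch of #216: the request-side field build_chat_completion_request never
# writes, and the response-side echo the deserializers currently drop.
# Tier values per OpenAI docs: "auto" | "default" | "flex" | "priority".
request='{"model":"o4-mini","service_tier":"flex","messages":[{"role":"user","content":"hello"}]}'

# A compliant response echoes the tier that actually served the request,
# plus the system_fingerprint reproducibility marker (values illustrative):
response='{"id":"chatcmpl-123","service_tier":"flex","system_fingerprint":"fp_abc"}'

printf '%s\n' "$request"  | grep -o '"service_tier":"flex"'
printf '%s\n' "$response" | grep -o '"system_fingerprint":"[^"]*"'
```

Today neither side of this exchange is representable: the field cannot be written on the way out, and the echoed confirmation is silently discarded on the way back.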

ultraworkers/claw-code · Usage · Error Handling · Rust workspace · Parity · Roadmap · UltraWorkers Discord

Claw Code

Claw Code is the public Rust implementation of the claw CLI agent harness. The canonical implementation lives in rust/, and the current source of truth for this repository is ultraworkers/claw-code.

Important

Start with USAGE.md for build, auth, CLI, session, and parity-harness workflows. Make claw doctor your first health check after building, use rust/README.md for crate-level details, read PARITY.md for the current Rust-port checkpoint, and see docs/container.md for the container-first workflow.

ACP / Zed status: claw-code does not ship an ACP/Zed daemon entrypoint yet. Run claw acp (or claw --acp) for the current status instead of guessing from source layout; claw acp serve is currently a discoverability alias only, and real ACP support remains tracked separately in ROADMAP.md.

Current repository shape

  • rust/ — canonical Rust workspace and the claw CLI binary
  • USAGE.md — task-oriented usage guide for the current product surface
  • ERROR_HANDLING.md — unified error-handling pattern for orchestration code
  • PARITY.md — Rust-port parity status and migration notes
  • ROADMAP.md — active roadmap and cleanup backlog
  • PHILOSOPHY.md — project intent and system-design framing
  • SCHEMAS.md — JSON protocol contract (Python harness reference)
  • src/ + tests/ — companion Python/reference workspace and audit helpers; not the primary runtime surface

Quick start

Warning

cargo install claw-code installs the wrong thing. The claw-code crate on crates.io is a deprecated stub that installs claw-code-deprecated.exe — not claw. Running it only prints "claw-code has been renamed to agent-code". Do not use cargo install claw-code. Either build from source (this repo) or install the upstream binary:

cargo install agent-code   # upstream binary — installs 'agent.exe' (Windows) / 'agent' (Unix), NOT 'agent-code'

This repo (ultraworkers/claw-code) is build-from-source only — follow the steps below.

# 1. Clone and build
git clone https://github.com/ultraworkers/claw-code
cd claw-code/rust
cargo build --workspace

# 2. Set your API key (Anthropic API key — not a Claude subscription)
export ANTHROPIC_API_KEY="sk-ant-..."

# 3. Verify everything is wired correctly
./target/debug/claw doctor

# 4. Run a prompt
./target/debug/claw prompt "say hello"

Note

Windows (PowerShell): the binary is claw.exe, not claw. Use .\target\debug\claw.exe or run cargo run -- prompt "say hello" to skip the path lookup.

Windows setup

PowerShell is a fully supported Windows shell; use whichever shell works for you. The common onboarding steps on Windows are:

  1. Install Rust first — download from https://rustup.rs/ and run the installer. Close and reopen your terminal when it finishes.
  2. Verify Rust is on PATH:
    cargo --version
    
    If this fails, reopen your terminal or run the PATH setup from the Rust installer output, then retry.
  3. Clone and build (works in PowerShell, Git Bash, or WSL):
    git clone https://github.com/ultraworkers/claw-code
    cd claw-code/rust
    cargo build --workspace
    
  4. Run (PowerShell — note .exe and backslash):
    $env:ANTHROPIC_API_KEY = "sk-ant-..."
    .\target\debug\claw.exe prompt "say hello"
    

Git Bash / WSL are optional alternatives, not requirements. If you prefer bash-style paths (/c/Users/you/... instead of C:\Users\you\...), Git Bash (ships with Git for Windows) works well. In Git Bash, the MINGW64 prompt is expected and normal — not a broken install.

Post-build: locate the binary and verify

After running cargo build --workspace, the claw binary is built but not automatically installed to your system. Here's where to find it and how to verify the build succeeded.

Binary location

After cargo build --workspace in claw-code/rust/:

Debug build (default, faster compile):

  • macOS/Linux: rust/target/debug/claw
  • Windows: rust/target/debug/claw.exe

Release build (optimized, slower compile):

  • macOS/Linux: rust/target/release/claw
  • Windows: rust/target/release/claw.exe

If you ran cargo build without --release, the binary is in the debug/ folder.
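A small sketch to check which profile directories actually exist after a build (run from the repo root; prints nothing if you haven't built yet):

```shell
# List whichever build-profile output directories exist under rust/target/.
for profile in debug release; do
  if [ -d "rust/target/$profile" ]; then
    echo "found: rust/target/$profile"
  fi
done
```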

Verify the build succeeded

Test the binary directly using its path:

# macOS/Linux (debug build)
./rust/target/debug/claw --help
./rust/target/debug/claw doctor

# Windows PowerShell (debug build)
.\rust\target\debug\claw.exe --help
.\rust\target\debug\claw.exe doctor

If these commands succeed, the build is working. claw doctor is your first health check — it validates your API key, model access, and tool configuration.

Optional: Add to PATH

If you want to run claw from any directory without the full path, choose one of these approaches:

Option 1: Symlink (macOS/Linux)

ln -s "$(pwd)/rust/target/debug/claw" /usr/local/bin/claw   # may require sudo

Then test from any directory:

claw --help

Option 2: Use cargo install (all platforms)

Build and install to Cargo's default location (~/.cargo/bin/, which is usually on PATH):

# From the claw-code/rust/ directory
cargo install --path . --force

# Then from anywhere
claw --help

Option 3: Update shell profile (bash/zsh)

Add this line to ~/.bashrc or ~/.zshrc, substituting the absolute path to your checkout (a literal $(pwd) would expand to whatever directory each new shell starts in, not the repo):

export PATH="/path/to/claw-code/rust/target/debug:$PATH"

Reload your shell:

source ~/.bashrc  # or source ~/.zshrc
claw --help

Troubleshooting

  • "command not found: claw" — The binary is in rust/target/debug/claw, but it's not on your PATH. Use the full path ./rust/target/debug/claw or symlink/install as above.
  • "permission denied" — On macOS/Linux, you may need chmod +x rust/target/debug/claw if the executable bit isn't set (rare).
  • Debug vs. release — If claw itself runs slowly, you're using a debug build (the default). Add --release to cargo build for an optimized binary; the release build itself takes longer (roughly 5-10 minutes).
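The first two items above can be checked mechanically; a quick diagnosis sketch (paths assume you are in the repo root):

```shell
# Diagnose "command not found": is the binary built, and is its
# directory on PATH?
BIN=rust/target/debug/claw
if [ -x "$BIN" ]; then
  echo "binary present: $BIN"
else
  echo "binary missing: run 'cd rust && cargo build --workspace' first"
fi
case ":$PATH:" in
  *":$PWD/rust/target/debug:"*) echo "debug dir is on PATH" ;;
  *) echo "debug dir not on PATH: use the full path, or see the PATH options above" ;;
esac
```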

Note

Auth: claw requires an API key (ANTHROPIC_API_KEY, OPENAI_API_KEY, etc.) — Claude subscription login is not a supported auth path.
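A minimal sketch, assuming the standard variable names from the note above (the values are placeholders, not real keys):

```shell
# Export the key for whichever provider you use before running claw.
export ANTHROPIC_API_KEY="sk-ant-..."   # Anthropic API key (not subscription login)
export OPENAI_API_KEY="sk-..."          # OpenAI, if you use an OpenAI-compatible model

# Confirm the variables are set without printing the secrets:
env | grep -oE '^(ANTHROPIC|OPENAI)_API_KEY' | sort
```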

Run the workspace test suite after verifying the binary works:

cd rust
cargo test --workspace

Documentation map

Ecosystem

Claw Code is built in the open alongside the broader UltraWorkers toolchain:

Ownership / affiliation disclaimer

  • This repository does not claim ownership of the original Claude Code source material.
  • This repository is not affiliated with, endorsed by, or maintained by Anthropic.