Could not fork, so the code was migrated manually. Source: https://github.com/paoloanzn/free-code
This commit is contained in:
commit a4deee05bc
.gitignore (vendored, new file)
@@ -0,0 +1,5 @@

node_modules/
dist/
cli
cli-dev
openclaw/
CLAUDE.md (new file)
@@ -0,0 +1,47 @@

# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Common commands

```bash
# Install dependencies
bun install

# Standard build (./cli)
bun run build

# Dev build (./cli-dev)
bun run build:dev

# Dev build with all experimental features (./cli-dev)
bun run build:dev:full

# Compiled build (./dist/cli)
bun run compile

# Run from source without compiling
bun run dev
```

Run the built binary with `./cli` or `./cli-dev`. Set `ANTHROPIC_API_KEY` in the environment or use OAuth via `./cli /login`.

## High-level architecture

- **Entry point/UI loop**: src/entrypoints/cli.tsx bootstraps the CLI, with the main interactive UI in src/screens/REPL.tsx (Ink/React).
- **Command/tool registries**: src/commands.ts registers slash commands; src/tools.ts registers tool implementations. Implementations live in src/commands/ and src/tools/.
- **LLM query pipeline**: src/QueryEngine.ts coordinates message flow, tool use, and model invocation.
- **Core subsystems**:
  - src/services/: API clients, OAuth/MCP integration, analytics stubs
  - src/state/: app state store
  - src/hooks/: React hooks used by UI/flows
  - src/components/: terminal UI components (Ink)
  - src/skills/: skill system
  - src/plugins/: plugin system
  - src/bridge/: IDE bridge
  - src/voice/: voice input
  - src/tasks/: background task management

## Build system

- scripts/build.ts is the build script and feature-flag bundler. Feature flags are set via build arguments (e.g., `--feature=ULTRAPLAN`) or presets like `--feature-set=dev-full` (see README for details).
FEATURES.md (new file)
@@ -0,0 +1,317 @@

# Feature Flags Audit

Audit date: 2026-03-31

This repository currently references 88 `feature('FLAG')` compile-time flags. I re-checked them by bundling the CLI once per flag on top of the current external-build defines and externals. Result:

- 54 flags bundle cleanly in this snapshot
- 34 flags still fail to bundle

Important: "bundle cleanly" does not always mean "runtime-safe". Some flags still depend on optional native modules, claude.ai OAuth, GrowthBook gates, or externalized `@ant/*` packages.
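The reason a flag can "fail to bundle" at all is that these are compile-time gates: the bundler inlines each `feature('FLAG')` call to a boolean, so a disabled branch — and any file it imports — drops out of the bundle entirely, and a missing source file only breaks builds that enable its flag. A minimal stand-in for that behavior (the names and mechanics here are illustrative, not the real `bun:bundle` API):

```typescript
// Stand-in for the compile-time flag check. In the real build, the bundler
// replaces feature("FLAG") with a literal true/false at bundle time; here we
// simulate the build-time input with a plain Set.
const ENABLED_FLAGS = new Set(["ULTRATHINK"]); // assumed build-time input

function feature(flag: string): boolean {
  return ENABLED_FLAGS.has(flag);
}

// Gated-import pattern: when the flag is inlined to false, the bundler never
// needs the gated module, which is exactly why a missing implementation file
// only fails the builds that turn the flag on.
const ultraplanCommand: string | null = feature("ULTRAPLAN")
  ? "loaded from src/commands/ultraplan" // stand-in for a real import
  : null;
```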
## Build Variants

- `bun run build`
  Builds the regular external binary at `./cli`.
- `bun run compile`
  Builds the regular external binary at `./dist/cli`.
- `bun run build:dev`
  Builds `./cli-dev` with a dev-stamped version and the experimental GrowthBook key.
- `bun run build:dev:full`
  Builds `./cli-dev` with the entire current "Working Experimental Features" bundle from this document, minus `CHICAGO_MCP`. That flag still compiles, but the external binary does not boot cleanly with it because startup reaches the missing `@ant/computer-use-mcp` runtime package.
## Default Build Flags

- `VOICE_MODE`
  This is now included in the default build pipeline, not just the dev build. It enables `/voice`, push-to-talk UI, voice notices, and dictation plumbing. Runtime still depends on claude.ai OAuth plus either the native audio module or a fallback recorder such as SoX.
## Working Experimental Features

These are the user-facing or behavior-changing flags that currently bundle cleanly and should still be treated as experimental in this snapshot unless explicitly called out as default-on.

### Interaction and UI Experiments

- `AWAY_SUMMARY`
  Adds away-from-keyboard summary behavior in the REPL.
- `HISTORY_PICKER`
  Enables the interactive prompt history picker.
- `HOOK_PROMPTS`
  Passes the prompt/request text into hook execution flows.
- `KAIROS_BRIEF`
  Enables brief-only transcript layout and BriefTool-oriented UX without the full assistant stack.
- `KAIROS_CHANNELS`
  Enables channel notices and channel callback plumbing around MCP/channel messaging.
- `LODESTONE`
  Enables deep-link / protocol-registration flows and settings wiring.
- `MESSAGE_ACTIONS`
  Enables message action entrypoints in the interactive UI.
- `NEW_INIT`
  Enables the newer `/init` decision path.
- `QUICK_SEARCH`
  Enables prompt quick-search behavior.
- `SHOT_STATS`
  Enables additional shot-distribution stats views.
- `TOKEN_BUDGET`
  Enables token budget tracking, prompt triggers, and token warning UI.
- `ULTRAPLAN`
  Enables `/ultraplan`, prompt triggers, and exit-plan affordances.
- `ULTRATHINK`
  Enables the extra thinking-depth mode switch.
- `VOICE_MODE`
  Enables voice toggling, dictation keybindings, voice notices, and voice UI.
### Agent, Memory, and Planning Experiments

- `AGENT_MEMORY_SNAPSHOT`
  Stores extra custom-agent memory snapshot state in the app.
- `AGENT_TRIGGERS`
  Enables local cron/trigger tools and bundled trigger-related skills.
- `AGENT_TRIGGERS_REMOTE`
  Enables the remote trigger tool path.
- `BUILTIN_EXPLORE_PLAN_AGENTS`
  Enables built-in explore/plan agent presets.
- `CACHED_MICROCOMPACT`
  Enables cached microcompact state through query and API flows.
- `COMPACTION_REMINDERS`
  Enables reminder copy around compaction and attachment flows.
- `EXTRACT_MEMORIES`
  Enables post-query memory extraction hooks.
- `PROMPT_CACHE_BREAK_DETECTION`
  Enables cache-break detection around compaction/query/API flows.
- `TEAMMEM`
  Enables team-memory files, watcher hooks, and related UI messages.
- `VERIFICATION_AGENT`
  Enables verification-agent guidance in prompts and task/todo tooling.
### Tools, Permissions, and Remote Experiments

- `BASH_CLASSIFIER`
  Enables classifier-assisted bash permission decisions.
- `BRIDGE_MODE`
  Enables Remote Control / REPL bridge command and entitlement paths.
- `CCR_AUTO_CONNECT`
  Enables the CCR auto-connect default path.
- `CCR_MIRROR`
  Enables outbound-only CCR mirror sessions.
- `CCR_REMOTE_SETUP`
  Enables the remote setup command path.
- `CHICAGO_MCP`
  Enables computer-use MCP integration paths and wrapper loading.
- `CONNECTOR_TEXT`
  Enables connector-text block handling in API/logging/UI paths.
- `MCP_RICH_OUTPUT`
  Enables richer MCP UI rendering.
- `NATIVE_CLIPBOARD_IMAGE`
  Enables the native macOS clipboard image fast path.
- `POWERSHELL_AUTO_MODE`
  Enables PowerShell-specific auto-mode permission handling.
- `TREE_SITTER_BASH`
  Enables the tree-sitter bash parser backend.
- `TREE_SITTER_BASH_SHADOW`
  Enables the tree-sitter bash shadow rollout path.
- `UNATTENDED_RETRY`
  Enables unattended retry behavior in API retry flows.
## Bundle-Clean Support Flags

These also bundle cleanly, but they are mostly rollout, platform, telemetry, or plumbing toggles rather than user-facing experimental features.

- `ABLATION_BASELINE`
  CLI ablation/baseline entrypoint toggle.
- `ALLOW_TEST_VERSIONS`
  Allows test versions in native installer flows.
- `ANTI_DISTILLATION_CC`
  Adds anti-distillation request metadata.
- `BREAK_CACHE_COMMAND`
  Injects the break-cache command path.
- `COWORKER_TYPE_TELEMETRY`
  Adds coworker-type telemetry fields.
- `DOWNLOAD_USER_SETTINGS`
  Enables settings-sync pull paths.
- `DUMP_SYSTEM_PROMPT`
  Enables the system-prompt dump path.
- `FILE_PERSISTENCE`
  Enables file persistence plumbing.
- `HARD_FAIL`
  Enables stricter failure/logging behavior.
- `IS_LIBC_GLIBC`
  Forces glibc environment detection.
- `IS_LIBC_MUSL`
  Forces musl environment detection.
- `NATIVE_CLIENT_ATTESTATION`
  Adds native attestation marker text in the system header.
- `PERFETTO_TRACING`
  Enables Perfetto tracing hooks.
- `SKILL_IMPROVEMENT`
  Enables skill-improvement hooks.
- `SKIP_DETECTION_WHEN_AUTOUPDATES_DISABLED`
  Skips updater detection when auto-updates are disabled.
- `SLOW_OPERATION_LOGGING`
  Enables slow-operation logging.
- `UPLOAD_USER_SETTINGS`
  Enables settings-sync push paths.
## Compile-Safe But Runtime-Caveated

These bundle today, but I would still treat them as experimental because they have meaningful runtime caveats:

- `VOICE_MODE`
  Bundles cleanly, but requires claude.ai OAuth and a local recording backend. The native audio module is optional now; on this machine the fallback path asks for `brew install sox`.
- `NATIVE_CLIPBOARD_IMAGE`
  Bundles cleanly, but only accelerates macOS clipboard reads when `image-processor-napi` is present.
- `BRIDGE_MODE`, `CCR_AUTO_CONNECT`, `CCR_MIRROR`, `CCR_REMOTE_SETUP`
  Bundle cleanly, but are gated at runtime on claude.ai OAuth plus GrowthBook entitlement checks.
- `KAIROS_BRIEF`, `KAIROS_CHANNELS`
  Bundle cleanly, but they do not restore the full missing assistant stack. They only expose the brief/channel-specific surfaces that still exist.
- `CHICAGO_MCP`
  Bundles cleanly, but the runtime path still reaches externalized `@ant/computer-use-*` packages. This is compile-safe, not fully runtime-safe, in the external snapshot.
- `TEAMMEM`
  Bundles cleanly, but only does useful work when team-memory config/files are actually enabled in the environment.
## Broken Flags With Easy Reconstruction Paths

These are the failed flags where the current blocker looks small enough that a focused reconstruction pass could probably restore them without rebuilding an entire subsystem.

- `AUTO_THEME`
  Fails on missing `src/utils/systemThemeWatcher.js`. `systemTheme.ts` and the theme provider already contain the cache/parsing logic, so the missing piece looks like the OSC 11 watcher only.
- `BG_SESSIONS`
  Fails on missing `src/cli/bg.js`. The CLI fast-path dispatch in `src/entrypoints/cli.tsx` is already wired.
- `BUDDY`
  Fails on missing `src/commands/buddy/index.js`. The buddy UI components and prompt-input hooks already exist.
- `BUILDING_CLAUDE_APPS`
  Fails on missing `src/claude-api/csharp/claude-api.md`. This looks like an asset/document gap, not a missing runtime subsystem.
- `COMMIT_ATTRIBUTION`
  Fails on missing `src/utils/attributionHooks.js`. Setup and cache-clear code already call into that hook module.
- `FORK_SUBAGENT`
  Fails on missing `src/commands/fork/index.js`. Command slot and message rendering support are already present.
- `HISTORY_SNIP`
  Fails on missing `src/commands/force-snip.js`. The surrounding SnipTool and query/message comments are already there.
- `KAIROS_GITHUB_WEBHOOKS`
  Fails on missing `src/tools/SubscribePRTool/SubscribePRTool.js`. The command slot and some message handling already exist.
- `KAIROS_PUSH_NOTIFICATION`
  Fails on missing `src/tools/PushNotificationTool/PushNotificationTool.js`. The tool slot already exists in `src/tools.ts`.
- `MCP_SKILLS`
  Fails on missing `src/skills/mcpSkills.js`. `mcpSkillBuilders.ts` already exists specifically to support that missing registry layer.
- `MEMORY_SHAPE_TELEMETRY`
  Fails on missing `src/memdir/memoryShapeTelemetry.js`. The hook call sites are already in place in `sessionFileAccessHooks.ts`.
- `OVERFLOW_TEST_TOOL`
  Fails on missing `src/tools/OverflowTestTool/OverflowTestTool.js`. This appears isolated and test-only.
- `RUN_SKILL_GENERATOR`
  Fails on missing `src/runSkillGenerator.js`. The bundled skill registration path already expects it.
- `TEMPLATES`
  Fails on missing `src/cli/handlers/templateJobs.js`. The CLI fast-path is already wired in `src/entrypoints/cli.tsx`.
- `TORCH`
  Fails on missing `src/commands/torch.js`. This looks like a single command entry gap.
- `TRANSCRIPT_CLASSIFIER`
  The first hard failure is missing `src/utils/permissions/yolo-classifier-prompts/auto_mode_system_prompt.txt`. The classifier engine, parser, and settings plumbing already exist, so the missing prompt/assets are likely the first reconstruction target.
## Broken Flags With Partial Wiring But Medium-Sized Gaps

These do have meaningful surrounding code, but the missing piece is larger than a single wrapper or asset.

- `BYOC_ENVIRONMENT_RUNNER`
  Missing `src/environment-runner/main.js`.
- `CONTEXT_COLLAPSE`
  Missing `src/tools/CtxInspectTool/CtxInspectTool.js`.
- `COORDINATOR_MODE`
  Missing `src/coordinator/workerAgent.js`.
- `DAEMON`
  Missing `src/daemon/workerRegistry.js`.
- `DIRECT_CONNECT`
  Missing `src/server/parseConnectUrl.js`.
- `EXPERIMENTAL_SKILL_SEARCH`
  Missing `src/services/skillSearch/localSearch.js`.
- `MONITOR_TOOL`
  Missing `src/tools/MonitorTool/MonitorTool.js`.
- `REACTIVE_COMPACT`
  Missing `src/services/compact/reactiveCompact.js`.
- `REVIEW_ARTIFACT`
  Missing `src/hunter.js`.
- `SELF_HOSTED_RUNNER`
  Missing `src/self-hosted-runner/main.js`.
- `SSH_REMOTE`
  Missing `src/ssh/createSSHSession.js`.
- `TERMINAL_PANEL`
  Missing `src/tools/TerminalCaptureTool/TerminalCaptureTool.js`.
- `UDS_INBOX`
  Missing `src/utils/udsMessaging.js`.
- `WEB_BROWSER_TOOL`
  Missing `src/tools/WebBrowserTool/WebBrowserTool.js`.
- `WORKFLOW_SCRIPTS`
  Fails first on `src/commands/workflows/index.js`, but there are more gaps: `tasks.ts` already expects `LocalWorkflowTask`, and `tools.ts` expects a real `WorkflowTool` implementation while only `WorkflowTool/constants.ts` exists in this snapshot.
## Broken Flags With Large Missing Subsystems

These are the ones that still look expensive to restore because the first missing import is only the visible edge of a broader absent subsystem.

- `KAIROS`
  Missing `src/assistant/index.js` and much of the assistant stack with it.
- `KAIROS_DREAM`
  Missing `src/dream.js` and related dream-task behavior.
- `PROACTIVE`
  Missing `src/proactive/index.js` and the proactive task/tool stack.
## Useful Entry Points

- Feature-aware build logic: [scripts/build.ts](scripts/build.ts)
- Feature-gated command imports: [src/commands.ts](src/commands.ts)
- Feature-gated tool imports: [src/tools.ts](src/tools.ts)
- Feature-gated task imports: [src/tasks.ts](src/tasks.ts)
- Feature-gated query behavior: [src/query.ts](src/query.ts)
- Feature-gated CLI entry paths: [src/entrypoints/cli.tsx](src/entrypoints/cli.tsx)
README.md (new file)
@@ -0,0 +1,358 @@
<p align="center">
  <img src="assets/screenshot.png" alt="free-code" width="720" />
</p>

<h1 align="center">free-code</h1>

<p align="center">
  <strong>The free build of Claude Code.</strong><br>
  All telemetry stripped. All guardrails removed. All experimental features unlocked.<br>
  One binary, zero callbacks home.
</p>

<p align="center">
  <a href="#quick-install"><img src="https://img.shields.io/badge/install-one--liner-blue?style=flat-square" alt="Install" /></a>
  <a href="https://github.com/paoloanzn/free-code/stargazers"><img src="https://img.shields.io/github/stars/paoloanzn/free-code?style=flat-square" alt="Stars" /></a>
  <a href="https://github.com/paoloanzn/free-code/issues"><img src="https://img.shields.io/github/issues/paoloanzn/free-code?style=flat-square" alt="Issues" /></a>
  <a href="https://github.com/paoloanzn/free-code/blob/main/FEATURES.md"><img src="https://img.shields.io/badge/features-88%20flags-orange?style=flat-square" alt="Feature Flags" /></a>
  <a href="#ipfs-mirror"><img src="https://img.shields.io/badge/IPFS-mirrored-teal?style=flat-square" alt="IPFS" /></a>
</p>

---
## Quick Install

```bash
curl -fsSL https://raw.githubusercontent.com/paoloanzn/free-code/main/install.sh | bash
```

The installer checks your system, installs Bun if needed, clones the repo, builds with all experimental features enabled, and symlinks `free-code` onto your PATH.

Then run `free-code` and use the `/login` command to authenticate with your preferred model provider.

---
## Table of Contents

- [What is this](#what-is-this)
- [Model Providers](#model-providers)
- [Quick Install](#quick-install)
- [Requirements](#requirements)
- [Build](#build)
- [Usage](#usage)
- [Experimental Features](#experimental-features)
- [Project Structure](#project-structure)
- [Tech Stack](#tech-stack)
- [IPFS Mirror](#ipfs-mirror)
- [Contributing](#contributing)
- [License](#license)

---
## What is this

A clean, buildable fork of Anthropic's [Claude Code](https://docs.anthropic.com/en/docs/claude-code) CLI -- the terminal-native AI coding agent. The upstream source became publicly available on March 31, 2026 through a source map exposure in the npm distribution.

This fork applies three categories of changes on top of that snapshot:

### Telemetry removed

The upstream binary phones home through OpenTelemetry/gRPC, GrowthBook analytics, Sentry error reporting, and custom event logging. In this build:

- All outbound telemetry endpoints are dead-code-eliminated or stubbed
- GrowthBook feature flag evaluation still works locally (needed for runtime feature gates) but does not report back
- No crash reports, no usage analytics, no session fingerprinting

### Security-prompt guardrails removed

Anthropic injects system-level instructions into every conversation that constrain Claude's behavior beyond what the model itself enforces. These include hardcoded refusal patterns, injected "cyber risk" instruction blocks, and managed-settings security overlays pushed from Anthropic's servers.

This build strips those injections. The model's own safety training still applies -- this just removes the extra layer of prompt-level restrictions that the CLI wraps around it.

### Experimental features unlocked

Claude Code ships with 88 feature flags gated behind `bun:bundle` compile-time switches. Most are disabled in the public npm release. This build unlocks all 54 flags that compile cleanly. See [Experimental Features](#experimental-features) below, or refer to [FEATURES.md](FEATURES.md) for the full audit.

---
## Model Providers

free-code supports **five API providers** out of the box. Set the corresponding environment variable to switch providers -- no code changes needed.

### Anthropic (Direct API) -- Default

Use Anthropic's first-party API directly.

| Model | ID |
|---|---|
| Claude Opus 4.6 | `claude-opus-4-6` |
| Claude Sonnet 4.6 | `claude-sonnet-4-6` |
| Claude Haiku 4.5 | `claude-haiku-4-5` |
### OpenAI Codex

Use OpenAI's Codex models for code generation. Requires a Codex subscription.

| Model | ID |
|---|---|
| GPT-5.3 Codex (recommended) | `gpt-5.3-codex` |
| GPT-5.4 | `gpt-5.4` |
| GPT-5.4 Mini | `gpt-5.4-mini` |

```bash
export CLAUDE_CODE_USE_OPENAI=1
free-code
```
### AWS Bedrock

Route requests through your AWS account via Amazon Bedrock.

```bash
export CLAUDE_CODE_USE_BEDROCK=1
export AWS_REGION="us-east-1"  # or AWS_DEFAULT_REGION
free-code
```

Uses your standard AWS credentials (environment variables, `~/.aws/config`, or an IAM role). Models are mapped to the Bedrock ARN format automatically (e.g., `us.anthropic.claude-opus-4-6-v1`).

| Variable | Purpose |
|---|---|
| `CLAUDE_CODE_USE_BEDROCK` | Enable the Bedrock provider |
| `AWS_REGION` / `AWS_DEFAULT_REGION` | AWS region (default: `us-east-1`) |
| `ANTHROPIC_BEDROCK_BASE_URL` | Custom Bedrock endpoint |
| `AWS_BEARER_TOKEN_BEDROCK` | Bearer token auth |
| `CLAUDE_CODE_SKIP_BEDROCK_AUTH` | Skip auth (testing) |
### Google Cloud Vertex AI

Route requests through your GCP project via Vertex AI.

```bash
export CLAUDE_CODE_USE_VERTEX=1
free-code
```

Uses Google Cloud Application Default Credentials (`gcloud auth application-default login`). Models are mapped to the Vertex format automatically (e.g., `claude-opus-4-6@latest`).
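The automatic ID mapping for Bedrock and Vertex can be sketched as below. Only the two examples given above (`us.anthropic.claude-opus-4-6-v1` and `claude-opus-4-6@latest`) come from this README; the helper names and the region-prefix handling are assumptions for illustration, not free-code's actual implementation.

```typescript
// Hypothetical sketch of the provider-specific model-ID mapping.
function toBedrockModelId(model: string, region = "us-east-1"): string {
  // Bedrock uses a region-prefixed, vendor-qualified, versioned ID, e.g.
  // claude-opus-4-6 -> us.anthropic.claude-opus-4-6-v1
  const prefix = region.startsWith("eu") ? "eu" : "us"; // assumed rule
  return `${prefix}.anthropic.${model}-v1`;
}

function toVertexModelId(model: string): string {
  // Vertex pins a publisher model version with an @ suffix, e.g.
  // claude-opus-4-6 -> claude-opus-4-6@latest
  return `${model}@latest`;
}
```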
### Anthropic Foundry

Use Anthropic Foundry for dedicated deployments.

```bash
export CLAUDE_CODE_USE_FOUNDRY=1
export ANTHROPIC_FOUNDRY_API_KEY="..."
free-code
```

Supports custom deployment IDs as model names.
### Provider Selection Summary

| Provider | Env Variable | Auth Method |
|---|---|---|
| Anthropic (default) | -- | `ANTHROPIC_API_KEY` or OAuth |
| OpenAI Codex | `CLAUDE_CODE_USE_OPENAI=1` | OAuth via OpenAI |
| AWS Bedrock | `CLAUDE_CODE_USE_BEDROCK=1` | AWS credentials |
| Google Vertex AI | `CLAUDE_CODE_USE_VERTEX=1` | `gcloud` ADC |
| Anthropic Foundry | `CLAUDE_CODE_USE_FOUNDRY=1` | `ANTHROPIC_FOUNDRY_API_KEY` |

---
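The dispatch implied by the table above amounts to a first-match check over the `CLAUDE_CODE_USE_*` variables, falling back to the direct Anthropic API. A minimal sketch (the function and precedence order are assumptions; free-code's real selection logic lives in its model/provider modules):

```typescript
type Provider = "anthropic" | "openai" | "bedrock" | "vertex" | "foundry";

// Takes the env as a parameter so the logic is testable without mutating
// process.env.
function selectProvider(env: Record<string, string | undefined>): Provider {
  if (env.CLAUDE_CODE_USE_OPENAI === "1") return "openai";
  if (env.CLAUDE_CODE_USE_BEDROCK === "1") return "bedrock";
  if (env.CLAUDE_CODE_USE_VERTEX === "1") return "vertex";
  if (env.CLAUDE_CODE_USE_FOUNDRY === "1") return "foundry";
  return "anthropic"; // default: direct Anthropic API
}
```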
## Requirements

- **Runtime**: [Bun](https://bun.sh) >= 1.3.11
- **OS**: macOS or Linux (Windows via WSL)
- **Auth**: An API key or OAuth login for your chosen provider

```bash
# Install Bun if you don't have it
curl -fsSL https://bun.sh/install | bash
```

---
## Build

```bash
git clone https://github.com/paoloanzn/free-code.git
cd free-code
bun run build
./cli
```
### Build Variants

| Command | Output | Features | Description |
|---|---|---|---|
| `bun run build` | `./cli` | `VOICE_MODE` only | Production-like binary |
| `bun run build:dev` | `./cli-dev` | `VOICE_MODE` only | Dev version stamp |
| `bun run build:dev:full` | `./cli-dev` | All 54 experimental flags | Full unlock build (minus `CHICAGO_MCP`; see FEATURES.md) |
| `bun run compile` | `./dist/cli` | `VOICE_MODE` only | Alternative output path |
### Custom Feature Flags

Enable specific flags without the full bundle:

```bash
# Enable just ultraplan and ultrathink
bun run ./scripts/build.ts --feature=ULTRAPLAN --feature=ULTRATHINK

# Add a flag on top of the dev build
bun run ./scripts/build.ts --dev --feature=BRIDGE_MODE
```

---
## Usage

```bash
# Interactive REPL (default)
./cli

# One-shot mode
./cli -p "what files are in this directory?"

# Specify a model
./cli --model claude-opus-4-6

# Run from source (slower startup)
bun run dev

# OAuth login
./cli /login
```
### Environment Variables Reference

| Variable | Purpose |
|---|---|
| `ANTHROPIC_API_KEY` | Anthropic API key |
| `ANTHROPIC_AUTH_TOKEN` | Auth token (alternative) |
| `ANTHROPIC_MODEL` | Override the default model |
| `ANTHROPIC_BASE_URL` | Custom API endpoint |
| `ANTHROPIC_DEFAULT_OPUS_MODEL` | Custom Opus model ID |
| `ANTHROPIC_DEFAULT_SONNET_MODEL` | Custom Sonnet model ID |
| `ANTHROPIC_DEFAULT_HAIKU_MODEL` | Custom Haiku model ID |
| `CLAUDE_CODE_OAUTH_TOKEN` | OAuth token via env |
| `CLAUDE_CODE_API_KEY_HELPER_TTL_MS` | API key helper cache TTL |

---
## Experimental Features

The `bun run build:dev:full` build enables all 54 working feature flags. Highlights:

### Interaction & UI

| Flag | Description |
|---|---|
| `ULTRAPLAN` | Remote multi-agent planning on Claude Code web (Opus-class) |
| `ULTRATHINK` | Deep thinking mode -- type "ultrathink" to boost reasoning effort |
| `VOICE_MODE` | Push-to-talk voice input and dictation |
| `TOKEN_BUDGET` | Token budget tracking and usage warnings |
| `HISTORY_PICKER` | Interactive prompt history picker |
| `MESSAGE_ACTIONS` | Message action entrypoints in the UI |
| `QUICK_SEARCH` | Prompt quick-search |
| `SHOT_STATS` | Shot-distribution stats |
### Agents, Memory & Planning

| Flag | Description |
|---|---|
| `BUILTIN_EXPLORE_PLAN_AGENTS` | Built-in explore/plan agent presets |
| `VERIFICATION_AGENT` | Verification agent for task validation |
| `AGENT_TRIGGERS` | Local cron/trigger tools for background automation |
| `AGENT_TRIGGERS_REMOTE` | Remote trigger tool path |
| `EXTRACT_MEMORIES` | Post-query automatic memory extraction |
| `COMPACTION_REMINDERS` | Smart reminders around context compaction |
| `CACHED_MICROCOMPACT` | Cached microcompact state through query flows |
| `TEAMMEM` | Team-memory files and watcher hooks |
### Tools & Infrastructure

| Flag | Description |
|---|---|
| `BRIDGE_MODE` | IDE remote-control bridge (VS Code, JetBrains) |
| `BASH_CLASSIFIER` | Classifier-assisted bash permission decisions |
| `PROMPT_CACHE_BREAK_DETECTION` | Cache-break detection in compaction/query flow |

See [FEATURES.md](FEATURES.md) for the complete audit of all 88 flags, including 34 broken flags with reconstruction notes.

---
## Project Structure

```
scripts/
  build.ts              # Build script with feature flag system

src/
  entrypoints/cli.tsx   # CLI entrypoint
  commands.ts           # Command registry (slash commands)
  tools.ts              # Tool registry (agent tools)
  QueryEngine.ts        # LLM query engine
  screens/REPL.tsx      # Main interactive UI (Ink/React)

  commands/             # /slash command implementations
  tools/                # Agent tool implementations (Bash, Read, Edit, etc.)
  components/           # Ink/React terminal UI components
  hooks/                # React hooks
  services/             # API clients, MCP, OAuth, analytics
  api/                  # API client + Codex fetch adapter
  oauth/                # OAuth flows (Anthropic + OpenAI)
  state/                # App state store
  utils/                # Utilities
  model/                # Model configs, providers, validation
  skills/               # Skill system
  plugins/              # Plugin system
  bridge/               # IDE bridge
  voice/                # Voice input
  tasks/                # Background task management
```

---
## Tech Stack

| | |
|---|---|
| **Runtime** | [Bun](https://bun.sh) |
| **Language** | TypeScript |
| **Terminal UI** | React + [Ink](https://github.com/vadimdemedes/ink) |
| **CLI Parsing** | [Commander.js](https://github.com/tj/commander.js) |
| **Schema Validation** | Zod v4 |
| **Code Search** | ripgrep (bundled) |
| **Protocols** | MCP, LSP |
| **APIs** | Anthropic Messages, OpenAI Codex, AWS Bedrock, Google Vertex AI |

---
## IPFS Mirror

A full copy of this repository is permanently pinned on IPFS via Filecoin:

| | |
|---|---|
| **CID** | `bafybeiegvef3dt24n2znnnmzcud2vxat7y7rl5ikz7y7yoglxappim54bm` |
| **Gateway** | https://w3s.link/ipfs/bafybeiegvef3dt24n2znnnmzcud2vxat7y7rl5ikz7y7yoglxappim54bm |

If this repo gets taken down, the code lives on.

---
## Contributing

Contributions are welcome. If you're working on restoring one of the 34 broken feature flags, check the reconstruction notes in [FEATURES.md](FEATURES.md) first -- many are close to compiling and just need a small wrapper or missing asset.

1. Fork the repository
2. Create a feature branch (`git checkout -b feat/my-feature`)
3. Commit your changes (`git commit -m 'feat: add something'`)
4. Push to the branch (`git push origin feat/my-feature`)
5. Open a Pull Request

---
## License

The original Claude Code source is the property of Anthropic. This fork exists because the source was publicly exposed through their npm distribution. Use at your own discretion.
assets/screenshot.png (binary, new file)
Binary file not shown. Size: 886 KiB
changes.md (new file)
@@ -0,0 +1,24 @@
# Codex API Support: Feature Parity & UI Overhaul

## Summary

This pull request introduces full feature parity and explicit UI support for the OpenAI Codex backend (`chatgpt.com/backend-api/codex/responses`). The codebase is now backend-agnostic and switches between the Anthropic Claude and OpenAI Codex schemas based on the current authentication, without losing features such as reasoning animations, token billing, or multi-modal visual inputs.
## Key Changes
|
||||
|
||||
### 1. Codex API Gateway Adapter (`codex-fetch-adapter.ts`)
|
||||
- **Native Vision Translation**: Anthropic `base64` image schemas now map precisely to the Codex expected `input_image` payloads.
|
||||
- **Strict Payload Mapping**: Refactored the internal mapping logic to translate `msg.content` items precisely into `input_text`, sidestepping OpenAI's strict `v1/responses` validation rules (`Invalid value: 'text'`).
|
||||
- **Tool Logic Fixes**: Properly routed `tool_result` items into top-level `function_call_output` objects to guarantee that local CLI tool executions (File Reads, Bash loops) cleanly feed back into Codex logic without throwing "No tool output found" errors.
|
||||
- **Cache Stripping**: Cleanly stripped Anthropic-only `cache_control` annotations from tool bindings and prompts prior to transmission so the Codex API doesn't reject malformed JSON.
|
||||
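The two payload fixes above can be sketched roughly like this. This is a minimal illustration with simplified, hypothetical block shapes — the real adapter handles many more content types and richer typings:

```typescript
// Illustrative shapes only -- the real adapter's types are richer.
type ContentBlock =
  | { type: 'text'; text: string; cache_control?: unknown }
  | { type: 'tool_result'; tool_use_id: string; content: string }

// Strip Anthropic-only cache_control annotations before sending to Codex.
function stripCacheControl(blocks: ContentBlock[]): ContentBlock[] {
  return blocks.map(b => {
    if ('cache_control' in b) {
      const { cache_control: _cc, ...rest } = b
      return rest as ContentBlock
    }
    return b
  })
}

// Map Anthropic blocks to Codex `v1/responses`-style input items:
// text -> input_text, tool_result -> top-level function_call_output.
function toCodexItems(blocks: ContentBlock[]) {
  return blocks.map(b =>
    b.type === 'tool_result'
      ? { type: 'function_call_output', call_id: b.tool_use_id, output: b.content }
      : { type: 'input_text', text: b.text },
  )
}

const items = toCodexItems(
  stripCacheControl([
    { type: 'text', text: 'list files', cache_control: { type: 'ephemeral' } },
    { type: 'tool_result', tool_use_id: 'toolu_1', content: 'README.md' },
  ]),
)
console.log(items)
```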

### 2. Deep UI & Routing Integration

- **Model Cleanups (`model.ts`)**: Updated `getPublicModelDisplayName` and `getClaudeAiUserDefaultModelDescription` to recognize Codex GPT model strings. Models like `gpt-5.1-codex-max` now render as `Codex 5.1 Max` in CLI output instead of the raw proxy IDs.
- **Default Reroutes**: Made `getDefaultMainLoopModelSetting` aware of `isCodexSubscriber()`, automatically defaulting to `gpt-5.2-codex` instead of `sonnet46`.
- **Billing Visuals (`logoV2Utils.ts`)**: Refactored `formatModelAndBilling` to render `Codex API Billing` in the terminal header when authenticated.

### 3. Reasoning & Metrics Support

- **Thinking Animations**: `codex-fetch-adapter` now intercepts the proprietary `response.reasoning.delta` SSE frames emitted by `codex-max` models and wraps them in Anthropic-style `<thinking>` events, so the standard CLI "Thinking..." spinner keeps working for OpenAI reasoning output.
- **Token Accuracy**: Added logic to track `response.completed` events and read `usage.input_tokens` and `output_tokens`. These are injected into the final `message_stop` handler, so Codex queries correctly trigger the terminal's token/price tracker summary.

### 4. Git Housekeeping

- Added the `openclaw/` gateway directory to `.gitignore` so it is excluded from commits.
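The reasoning-frame translation described in section 3 amounts to a small event-mapping step. A hedged sketch with simplified, hypothetical event shapes (the field names are illustrative, not the actual SSE schema of either API):

```typescript
// Hypothetical event shapes for illustration; real frames carry more fields.
type CodexEvent =
  | { type: 'response.reasoning.delta'; delta: string }
  | { type: 'response.output_text.delta'; delta: string }

type AnthropicDelta =
  | { type: 'content_block_delta'; delta: { type: 'thinking_delta'; thinking: string } }
  | { type: 'content_block_delta'; delta: { type: 'text_delta'; text: string } }

// Wrap Codex reasoning deltas as Anthropic-style thinking deltas so the
// downstream spinner/rendering code never has to know which backend is live.
function translate(ev: CodexEvent): AnthropicDelta {
  if (ev.type === 'response.reasoning.delta') {
    return { type: 'content_block_delta', delta: { type: 'thinking_delta', thinking: ev.delta } }
  }
  return { type: 'content_block_delta', delta: { type: 'text_delta', text: ev.delta } }
}

const out = translate({ type: 'response.reasoning.delta', delta: 'Considering options...' })
console.log(out.delta.type)
```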
14
env.d.ts
vendored
Normal file
@ -0,0 +1,14 @@

declare const MACRO: {
  VERSION: string
  BUILD_TIME: string
  PACKAGE_URL?: string
  NATIVE_PACKAGE_URL?: string
  FEEDBACK_CHANNEL?: string
  ISSUES_EXPLAINER?: string
  VERSION_CHANGELOG?: string
}

declare module '*.node' {
  const value: unknown
  export default value
}
179
install.sh
Executable file
@ -0,0 +1,179 @@

#!/usr/bin/env bash
set -euo pipefail

# free-code installer
# Usage: curl -fsSL https://raw.githubusercontent.com/paoloanzn/free-code/main/install.sh | bash

RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
CYAN='\033[0;36m'
BOLD='\033[1m'
DIM='\033[2m'
RESET='\033[0m'

REPO="https://github.com/paoloanzn/free-code.git"
INSTALL_DIR="$HOME/free-code"
BUN_MIN_VERSION="1.3.11"

info() { printf "${CYAN}[*]${RESET} %s\n" "$*"; }
ok()   { printf "${GREEN}[+]${RESET} %s\n" "$*"; }
warn() { printf "${YELLOW}[!]${RESET} %s\n" "$*"; }
fail() { printf "${RED}[x]${RESET} %s\n" "$*"; exit 1; }

header() {
  echo ""
  printf "${BOLD}${CYAN}"
  cat << 'ART'
___ _
/ _|_ __ ___ ___ ___ __| | ___
| |_| '__/ _ \/ _ \_____ / __/ _` |/ _ \
| _| | | __/ __/_____| (_| (_| | __/
|_| |_| \___|\___| \___\__,_|\___|

ART
  printf "${RESET}"
  printf "${DIM} The free build of Claude Code${RESET}\n"
  echo ""
}

# -------------------------------------------------------------------
# System checks
# -------------------------------------------------------------------

check_os() {
  case "$(uname -s)" in
    Darwin) OS="macos" ;;
    Linux) OS="linux" ;;
    *) fail "Unsupported OS: $(uname -s). macOS or Linux required." ;;
  esac
  ok "OS: $(uname -s) $(uname -m)"
}

check_git() {
  if ! command -v git &>/dev/null; then
    fail "git is not installed. Install it first:
  macOS: xcode-select --install
  Linux: sudo apt install git (or your distro's equivalent)"
  fi
  ok "git: $(git --version | head -1)"
}

# Compare semver: returns 0 if $1 >= $2
version_gte() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -1)" = "$2" ]
}

check_bun() {
  if command -v bun &>/dev/null; then
    local ver
    ver="$(bun --version 2>/dev/null || echo "0.0.0")"
    if version_gte "$ver" "$BUN_MIN_VERSION"; then
      ok "bun: v${ver}"
      return
    fi
    warn "bun v${ver} found but v${BUN_MIN_VERSION}+ required. Upgrading..."
  else
    info "bun not found. Installing..."
  fi
  install_bun
}

install_bun() {
  curl -fsSL https://bun.sh/install | bash
  # Source the updated profile so bun is on PATH for this session
  export BUN_INSTALL="${BUN_INSTALL:-$HOME/.bun}"
  export PATH="$BUN_INSTALL/bin:$PATH"
  if ! command -v bun &>/dev/null; then
    fail "bun installation succeeded but binary not found on PATH.
  Add this to your shell profile and restart:
  export PATH=\"\$HOME/.bun/bin:\$PATH\""
  fi
  ok "bun: v$(bun --version) (just installed)"
}

# -------------------------------------------------------------------
# Clone & build
# -------------------------------------------------------------------

clone_repo() {
  if [ -d "$INSTALL_DIR" ]; then
    warn "$INSTALL_DIR already exists"
    if [ -d "$INSTALL_DIR/.git" ]; then
      info "Pulling latest changes..."
      git -C "$INSTALL_DIR" pull --ff-only origin main 2>/dev/null || {
        warn "Pull failed, continuing with existing copy"
      }
    fi
  else
    info "Cloning repository..."
    git clone --depth 1 "$REPO" "$INSTALL_DIR"
  fi
  ok "Source: $INSTALL_DIR"
}

install_deps() {
  info "Installing dependencies..."
  cd "$INSTALL_DIR"
  bun install --frozen-lockfile 2>/dev/null || bun install
  ok "Dependencies installed"
}

build_binary() {
  info "Building free-code (all experimental features enabled)..."
  cd "$INSTALL_DIR"
  bun run build:dev:full
  ok "Binary built: $INSTALL_DIR/cli-dev"
}

link_binary() {
  local link_dir="$HOME/.local/bin"
  mkdir -p "$link_dir"

  ln -sf "$INSTALL_DIR/cli-dev" "$link_dir/free-code"
  ok "Symlinked: $link_dir/free-code"

  if ! echo "$PATH" | tr ':' '\n' | grep -qx "$link_dir"; then
    warn "$link_dir is not on your PATH"
    echo ""
    printf "${YELLOW}  Add this to your shell profile (~/.bashrc, ~/.zshrc, etc.):${RESET}\n"
    printf "${BOLD}    export PATH=\"\$HOME/.local/bin:\$PATH\"${RESET}\n"
    echo ""
  fi
}

# -------------------------------------------------------------------
# Main
# -------------------------------------------------------------------

header
info "Starting installation..."
echo ""

check_os
check_git
check_bun
echo ""

clone_repo
install_deps
build_binary
link_binary

echo ""
printf "${GREEN}${BOLD}  Installation complete!${RESET}\n"
echo ""
printf "  ${BOLD}Run it:${RESET}\n"
printf "    ${CYAN}free-code${RESET}                    # interactive REPL\n"
printf "    ${CYAN}free-code -p \"your prompt\"${RESET}  # one-shot mode\n"
echo ""
printf "  ${BOLD}Set your API key:${RESET}\n"
printf "    ${CYAN}export ANTHROPIC_API_KEY=\"sk-ant-...\"${RESET}\n"
echo ""
printf "  ${BOLD}Or log in with Claude.ai:${RESET}\n"
printf "    ${CYAN}free-code /login${RESET}\n"
echo ""
printf "  ${DIM}Source: $INSTALL_DIR${RESET}\n"
printf "  ${DIM}Binary: $INSTALL_DIR/cli-dev${RESET}\n"
printf "  ${DIM}Link:   ~/.local/bin/free-code${RESET}\n"
echo ""
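The installer's `version_gte` helper relies on a common `sort -V` trick: version-sort the two strings and check which one comes first. A standalone sketch of the same comparison, runnable outside the installer:

```shell
#!/usr/bin/env bash
# sort -V orders version strings numerically, so "$1 >= $2" holds exactly
# when $2 is the smaller (first) element of the sorted pair.
version_gte() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -1)" = "$2" ]
}

version_gte "1.3.11" "1.3.11" && echo "equal: ok"
version_gte "1.4.0" "1.3.11"  && echo "newer: ok"
version_gte "1.2.9" "1.3.11"  || echo "older: rejected"
```

Note that plain lexicographic comparison would get `1.2.9` vs `1.3.11` wrong; `sort -V` compares each numeric component.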
122
package.json
Normal file
@ -0,0 +1,122 @@

{
  "name": "claude-code-source-snapshot",
  "version": "2.1.87",
  "private": true,
  "description": "Reconstructed Bun CLI workspace for the Claude Code source snapshot.",
  "type": "module",
  "packageManager": "bun@1.3.11",
  "bin": {
    "claude": "./cli",
    "claude-source": "./cli"
  },
  "engines": {
    "bun": ">=1.3.11"
  },
  "scripts": {
    "build": "bun run ./scripts/build.ts",
    "build:dev": "bun run ./scripts/build.ts --dev",
    "build:dev:full": "bun run ./scripts/build.ts --dev --feature-set=dev-full",
    "compile": "bun run ./scripts/build.ts --compile",
    "dev": "bun run ./src/entrypoints/cli.tsx"
  },
  "dependencies": {
    "@alcalzone/ansi-tokenize": "^0.3.0",
    "@anthropic-ai/bedrock-sdk": "^0.26.4",
    "@anthropic-ai/claude-agent-sdk": "^0.2.87",
    "@anthropic-ai/foundry-sdk": "^0.2.3",
    "@anthropic-ai/mcpb": "^2.1.2",
    "@anthropic-ai/sandbox-runtime": "^0.0.44",
    "@anthropic-ai/sdk": "^0.80.0",
    "@anthropic-ai/vertex-sdk": "^0.14.4",
    "@aws-sdk/client-bedrock": "^3.1020.0",
    "@aws-sdk/client-bedrock-runtime": "^3.1020.0",
    "@aws-sdk/client-sts": "^3.1020.0",
    "@aws-sdk/credential-provider-node": "^3.972.28",
    "@aws-sdk/credential-providers": "^3.1020.0",
    "@azure/identity": "^4.13.1",
    "@commander-js/extra-typings": "^14.0.0",
    "@growthbook/growthbook": "^1.6.5",
    "@modelcontextprotocol/sdk": "^1.29.0",
    "@opentelemetry/api": "^1.9.1",
    "@opentelemetry/api-logs": "^0.214.0",
    "@opentelemetry/core": "^2.6.1",
    "@opentelemetry/exporter-logs-otlp-grpc": "^0.214.0",
    "@opentelemetry/exporter-logs-otlp-http": "^0.214.0",
    "@opentelemetry/exporter-logs-otlp-proto": "^0.214.0",
    "@opentelemetry/exporter-metrics-otlp-grpc": "^0.214.0",
    "@opentelemetry/exporter-metrics-otlp-http": "^0.214.0",
    "@opentelemetry/exporter-metrics-otlp-proto": "^0.214.0",
    "@opentelemetry/exporter-prometheus": "^0.214.0",
    "@opentelemetry/exporter-trace-otlp-grpc": "^0.214.0",
    "@opentelemetry/exporter-trace-otlp-http": "^0.214.0",
    "@opentelemetry/exporter-trace-otlp-proto": "^0.214.0",
    "@opentelemetry/resources": "^2.6.1",
    "@opentelemetry/sdk-logs": "^0.214.0",
    "@opentelemetry/sdk-metrics": "^2.6.1",
    "@opentelemetry/sdk-trace-base": "^2.6.1",
    "@opentelemetry/semantic-conventions": "^1.40.0",
    "@smithy/core": "^3.23.13",
    "@smithy/node-http-handler": "^4.5.1",
    "ajv": "^8.18.0",
    "asciichart": "^1.5.25",
    "auto-bind": "^5.0.1",
    "axios": "^1.14.0",
    "bidi-js": "^1.0.3",
    "cacache": "^20.0.4",
    "chalk": "^5.6.2",
    "chokidar": "^5.0.0",
    "cli-boxes": "^4.0.1",
    "cli-highlight": "^2.1.11",
    "code-excerpt": "^4.0.0",
    "diff": "^8.0.4",
    "emoji-regex": "^10.6.0",
    "env-paths": "^4.0.0",
    "execa": "^9.6.1",
    "fflate": "^0.8.2",
    "figures": "^6.1.0",
    "fuse.js": "^7.1.0",
    "get-east-asian-width": "^1.5.0",
    "google-auth-library": "^10.6.2",
    "highlight.js": "^11.11.1",
    "https-proxy-agent": "^8.0.0",
    "ignore": "^7.0.5",
    "indent-string": "^5.0.0",
    "ink": "^6.8.0",
    "jsonc-parser": "^3.3.1",
    "lodash-es": "^4.17.23",
    "lru-cache": "^11.2.7",
    "marked": "^17.0.5",
    "p-map": "^7.0.4",
    "picomatch": "^4.0.4",
    "plist": "^3.1.0",
    "proper-lockfile": "^4.1.2",
    "qrcode": "^1.5.4",
    "react": "^19.2.4",
    "react-reconciler": "^0.33.0",
    "semver": "^7.7.4",
    "sharp": "^0.34.5",
    "shell-quote": "^1.8.3",
    "signal-exit": "^4.1.0",
    "stack-utils": "^2.0.6",
    "strip-ansi": "^7.2.0",
    "supports-hyperlinks": "^4.4.0",
    "tree-kill": "^1.2.2",
    "turndown": "^7.2.2",
    "type-fest": "^5.5.0",
    "undici": "^7.24.6",
    "usehooks-ts": "^3.1.1",
    "vscode-jsonrpc": "^8.2.1",
    "vscode-languageserver-protocol": "^3.17.5",
    "vscode-languageserver-types": "^3.17.5",
    "wrap-ansi": "^10.0.0",
    "ws": "^8.20.0",
    "xss": "^1.0.15",
    "xxhash-wasm": "^1.1.0",
    "yaml": "^2.8.3",
    "zod": "^4.3.6"
  },
  "devDependencies": {
    "@types/bun": "^1.3.11",
    "typescript": "^6.0.2"
  }
}
207
scripts/build.ts
Normal file
@ -0,0 +1,207 @@

import { chmodSync, existsSync, mkdirSync } from 'fs'
import { dirname } from 'path'

const pkg = await Bun.file(new URL('../package.json', import.meta.url)).json() as {
  name: string
  version: string
}

const args = process.argv.slice(2)
const compile = args.includes('--compile')
const dev = args.includes('--dev')

const fullExperimentalFeatures = [
  'AGENT_MEMORY_SNAPSHOT',
  'AGENT_TRIGGERS',
  'AGENT_TRIGGERS_REMOTE',
  'AWAY_SUMMARY',
  'BASH_CLASSIFIER',
  'BRIDGE_MODE',
  'BUILTIN_EXPLORE_PLAN_AGENTS',
  'CACHED_MICROCOMPACT',
  'CCR_AUTO_CONNECT',
  'CCR_MIRROR',
  'CCR_REMOTE_SETUP',
  'COMPACTION_REMINDERS',
  'CONNECTOR_TEXT',
  'EXTRACT_MEMORIES',
  'HISTORY_PICKER',
  'HOOK_PROMPTS',
  'KAIROS_BRIEF',
  'KAIROS_CHANNELS',
  'LODESTONE',
  'MCP_RICH_OUTPUT',
  'MESSAGE_ACTIONS',
  'NATIVE_CLIPBOARD_IMAGE',
  'NEW_INIT',
  'POWERSHELL_AUTO_MODE',
  'PROMPT_CACHE_BREAK_DETECTION',
  'QUICK_SEARCH',
  'SHOT_STATS',
  'TEAMMEM',
  'TOKEN_BUDGET',
  'TREE_SITTER_BASH',
  'TREE_SITTER_BASH_SHADOW',
  'ULTRAPLAN',
  'ULTRATHINK',
  'UNATTENDED_RETRY',
  'VERIFICATION_AGENT',
  'VOICE_MODE',
] as const

function runCommand(cmd: string[]): string | null {
  const proc = Bun.spawnSync({
    cmd,
    cwd: process.cwd(),
    stdout: 'pipe',
    stderr: 'pipe',
  })

  if (proc.exitCode !== 0) {
    return null
  }

  return new TextDecoder().decode(proc.stdout).trim() || null
}

function getDevVersion(baseVersion: string): string {
  const timestamp = new Date().toISOString()
  const date = timestamp.slice(0, 10).replaceAll('-', '')
  const time = timestamp.slice(11, 19).replaceAll(':', '')
  const sha = runCommand(['git', 'rev-parse', '--short=8', 'HEAD']) ?? 'unknown'
  return `${baseVersion}-dev.${date}.t${time}.sha${sha}`
}

function getVersionChangelog(): string {
  return (
    runCommand(['git', 'log', '--format=%h %s', '-20']) ??
    'Local development build'
  )
}

const defaultFeatures = ['VOICE_MODE']
const featureSet = new Set(defaultFeatures)
for (let i = 0; i < args.length; i += 1) {
  const arg = args[i]
  if (arg === '--feature-set' && args[i + 1]) {
    if (args[i + 1] === 'dev-full') {
      for (const feature of fullExperimentalFeatures) {
        featureSet.add(feature)
      }
    }
    i += 1
    continue
  }
  if (arg === '--feature-set=dev-full') {
    for (const feature of fullExperimentalFeatures) {
      featureSet.add(feature)
    }
    continue
  }
  if (arg === '--feature' && args[i + 1]) {
    featureSet.add(args[i + 1]!)
    i += 1
    continue
  }
  if (arg.startsWith('--feature=')) {
    featureSet.add(arg.slice('--feature='.length))
  }
}
const features = [...featureSet]

const outfile = compile
  ? dev
    ? './dist/cli-dev'
    : './dist/cli'
  : dev
    ? './cli-dev'
    : './cli'
const buildTime = new Date().toISOString()
const version = dev ? getDevVersion(pkg.version) : pkg.version

const outDir = dirname(outfile)
if (outDir !== '.') {
  mkdirSync(outDir, { recursive: true })
}

const externals = [
  '@ant/*',
  'audio-capture-napi',
  'image-processor-napi',
  'modifiers-napi',
  'url-handler-napi',
]

const defines = {
  'process.env.USER_TYPE': JSON.stringify('external'),
  'process.env.CLAUDE_CODE_FORCE_FULL_LOGO': JSON.stringify('true'),
  ...(dev
    ? { 'process.env.NODE_ENV': JSON.stringify('development') }
    : {}),
  ...(dev
    ? {
        'process.env.CLAUDE_CODE_EXPERIMENTAL_BUILD': JSON.stringify('true'),
      }
    : {}),
  'process.env.CLAUDE_CODE_VERIFY_PLAN': JSON.stringify('false'),
  'process.env.CCR_FORCE_BUNDLE': JSON.stringify('true'),
  'MACRO.VERSION': JSON.stringify(version),
  'MACRO.BUILD_TIME': JSON.stringify(buildTime),
  'MACRO.PACKAGE_URL': JSON.stringify(pkg.name),
  'MACRO.NATIVE_PACKAGE_URL': 'undefined',
  'MACRO.FEEDBACK_CHANNEL': JSON.stringify('github'),
  'MACRO.ISSUES_EXPLAINER': JSON.stringify(
    'This reconstructed source snapshot does not include Anthropic internal issue routing.',
  ),
  'MACRO.VERSION_CHANGELOG': JSON.stringify(
    dev ? getVersionChangelog() : 'https://github.com/paoloanzn/claude-code',
  ),
} as const

const cmd = [
  'bun',
  'build',
  './src/entrypoints/cli.tsx',
  '--compile',
  '--target',
  'bun',
  '--format',
  'esm',
  '--outfile',
  outfile,
  '--minify',
  '--bytecode',
  '--packages',
  'bundle',
  '--conditions',
  'bun',
]

for (const external of externals) {
  cmd.push('--external', external)
}

for (const feature of features) {
  cmd.push(`--feature=${feature}`)
}

for (const [key, value] of Object.entries(defines)) {
  cmd.push('--define', `${key}=${value}`)
}

const proc = Bun.spawnSync({
  cmd,
  cwd: process.cwd(),
  stdout: 'inherit',
  stderr: 'inherit',
})

if (proc.exitCode !== 0) {
  process.exit(proc.exitCode ?? 1)
}

if (existsSync(outfile)) {
  chmodSync(outfile, 0o755)
}

console.log(`Built ${outfile}`)
1295
src/QueryEngine.ts
Normal file
File diff suppressed because it is too large
125
src/Task.ts
Normal file
@ -0,0 +1,125 @@

import { randomBytes } from 'crypto'
import type { AppState } from './state/AppState.js'
import type { AgentId } from './types/ids.js'
import { getTaskOutputPath } from './utils/task/diskOutput.js'

export type TaskType =
  | 'local_bash'
  | 'local_agent'
  | 'remote_agent'
  | 'in_process_teammate'
  | 'local_workflow'
  | 'monitor_mcp'
  | 'dream'

export type TaskStatus =
  | 'pending'
  | 'running'
  | 'completed'
  | 'failed'
  | 'killed'

/**
 * True when a task is in a terminal state and will not transition further.
 * Used to guard against injecting messages into dead teammates, evicting
 * finished tasks from AppState, and orphan-cleanup paths.
 */
export function isTerminalTaskStatus(status: TaskStatus): boolean {
  return status === 'completed' || status === 'failed' || status === 'killed'
}

export type TaskHandle = {
  taskId: string
  cleanup?: () => void
}

export type SetAppState = (f: (prev: AppState) => AppState) => void

export type TaskContext = {
  abortController: AbortController
  getAppState: () => AppState
  setAppState: SetAppState
}

// Base fields shared by all task states
export type TaskStateBase = {
  id: string
  type: TaskType
  status: TaskStatus
  description: string
  toolUseId?: string
  startTime: number
  endTime?: number
  totalPausedMs?: number
  outputFile: string
  outputOffset: number
  notified: boolean
}

export type LocalShellSpawnInput = {
  command: string
  description: string
  timeout?: number
  toolUseId?: string
  agentId?: AgentId
  /** UI display variant: description-as-label, dialog title, status bar pill. */
  kind?: 'bash' | 'monitor'
}

// What getTaskByType dispatches for: kill. spawn/render were never
// called polymorphically (removed in #22546). All six kill implementations
// use only setAppState — getAppState/abortController were dead weight.
export type Task = {
  name: string
  type: TaskType
  kill(taskId: string, setAppState: SetAppState): Promise<void>
}

// Task ID prefixes
const TASK_ID_PREFIXES: Record<string, string> = {
  local_bash: 'b', // Keep as 'b' for backward compatibility
  local_agent: 'a',
  remote_agent: 'r',
  in_process_teammate: 't',
  local_workflow: 'w',
  monitor_mcp: 'm',
  dream: 'd',
}

// Get task ID prefix
function getTaskIdPrefix(type: TaskType): string {
  return TASK_ID_PREFIXES[type] ?? 'x'
}

// Case-insensitive-safe alphabet (digits + lowercase) for task IDs.
// 36^8 ≈ 2.8 trillion combinations, sufficient to resist brute-force symlink attacks.
const TASK_ID_ALPHABET = '0123456789abcdefghijklmnopqrstuvwxyz'

export function generateTaskId(type: TaskType): string {
  const prefix = getTaskIdPrefix(type)
  const bytes = randomBytes(8)
  for (let i = 0; i < 8; i++) {
    // placeholder comment removed
  }
  let id = prefix
  for (let i = 0; i < 8; i++) {
    id += TASK_ID_ALPHABET[bytes[i]! % TASK_ID_ALPHABET.length]
  }
  return id
}

export function createTaskStateBase(
  id: string,
  type: TaskType,
  description: string,
  toolUseId?: string,
): TaskStateBase {
  return {
    id,
    type,
    status: 'pending',
    description,
    toolUseId,
    startTime: Date.now(),
    outputFile: getTaskOutputPath(id),
    outputOffset: 0,
    notified: false,
  }
}
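For illustration, the ID scheme above can be exercised standalone. This re-implements the generator with a hard-coded prefix rather than importing the module, so the block is self-contained; note the small modulo bias of `byte % 36`, which is fine for uniqueness but would matter for secrets:

```typescript
import { randomBytes } from 'crypto'

const ALPHABET = '0123456789abcdefghijklmnopqrstuvwxyz'

// One-char type prefix + 8 random base-36 characters, as in src/Task.ts.
function makeTaskId(prefix: string): string {
  const bytes = randomBytes(8)
  let id = prefix
  for (let i = 0; i < 8; i++) {
    id += ALPHABET[bytes[i]! % ALPHABET.length]
  }
  return id
}

const id = makeTaskId('b') // 'b' is the local_bash prefix
console.log(id)
```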
792
src/Tool.ts
Normal file
@ -0,0 +1,792 @@

import type {
  ToolResultBlockParam,
  ToolUseBlockParam,
} from '@anthropic-ai/sdk/resources/index.mjs'
import type {
  ElicitRequestURLParams,
  ElicitResult,
} from '@modelcontextprotocol/sdk/types.js'
import type { UUID } from 'crypto'
import type { z } from 'zod/v4'
import type { Command } from './commands.js'
import type { CanUseToolFn } from './hooks/useCanUseTool.js'
import type { ThinkingConfig } from './utils/thinking.js'

export type ToolInputJSONSchema = {
  [x: string]: unknown
  type: 'object'
  properties?: {
    [x: string]: unknown
  }
}

import type { Notification } from './context/notifications.js'
import type {
  MCPServerConnection,
  ServerResource,
} from './services/mcp/types.js'
import type {
  AgentDefinition,
  AgentDefinitionsResult,
} from './tools/AgentTool/loadAgentsDir.js'
import type {
  AssistantMessage,
  AttachmentMessage,
  Message,
  ProgressMessage,
  SystemLocalCommandMessage,
  SystemMessage,
  UserMessage,
} from './types/message.js'
// Import permission types from centralized location to break import cycles
import type {
  AdditionalWorkingDirectory,
  PermissionMode,
  PermissionResult,
} from './types/permissions.js'
// Import tool progress types from centralized location to break import cycles
import type {
  AgentToolProgress,
  BashProgress,
  MCPProgress,
  REPLToolProgress,
  SkillToolProgress,
  TaskOutputProgress,
  ToolProgressData,
  WebSearchProgress,
} from './types/tools.js'
import type { FileStateCache } from './utils/fileStateCache.js'
import type { DenialTrackingState } from './utils/permissions/denialTracking.js'
import type { SystemPrompt } from './utils/systemPromptType.js'
import type { ContentReplacementState } from './utils/toolResultStorage.js'

// Re-export progress types for backwards compatibility
export type {
  AgentToolProgress,
  BashProgress,
  MCPProgress,
  REPLToolProgress,
  SkillToolProgress,
  TaskOutputProgress,
  WebSearchProgress,
}

import type { SpinnerMode } from './components/Spinner.js'
import type { QuerySource } from './constants/querySource.js'
import type { SDKStatus } from './entrypoints/agentSdkTypes.js'
import type { AppState } from './state/AppState.js'
import type {
  HookProgress,
  PromptRequest,
  PromptResponse,
} from './types/hooks.js'
import type { AgentId } from './types/ids.js'
import type { DeepImmutable } from './types/utils.js'
import type { AttributionState } from './utils/commitAttribution.js'
import type { FileHistoryState } from './utils/fileHistory.js'
import type { Theme, ThemeName } from './utils/theme.js'

export type QueryChainTracking = {
  chainId: string
  depth: number
}

export type ValidationResult =
  | { result: true }
  | {
      result: false
      message: string
      errorCode: number
    }

export type SetToolJSXFn = (
  args: {
    jsx: React.ReactNode | null
    shouldHidePromptInput: boolean
    shouldContinueAnimation?: true
    showSpinner?: boolean
    isLocalJSXCommand?: boolean
    isImmediate?: boolean
    /** Set to true to clear a local JSX command (e.g., from its onDone callback) */
    clearLocalJSX?: boolean
  } | null,
) => void

// Import tool permission types from centralized location to break import cycles
import type { ToolPermissionRulesBySource } from './types/permissions.js'

// Re-export for backwards compatibility
export type { ToolPermissionRulesBySource }

// Apply DeepImmutable to the imported type
export type ToolPermissionContext = DeepImmutable<{
  mode: PermissionMode
  additionalWorkingDirectories: Map<string, AdditionalWorkingDirectory>
  alwaysAllowRules: ToolPermissionRulesBySource
  alwaysDenyRules: ToolPermissionRulesBySource
  alwaysAskRules: ToolPermissionRulesBySource
  isBypassPermissionsModeAvailable: boolean
  isAutoModeAvailable?: boolean
  strippedDangerousRules?: ToolPermissionRulesBySource
  /** When true, permission prompts are auto-denied (e.g., background agents that can't show UI) */
  shouldAvoidPermissionPrompts?: boolean
  /** When true, automated checks (classifier, hooks) are awaited before showing the permission dialog (coordinator workers) */
  awaitAutomatedChecksBeforeDialog?: boolean
  /** Stores the permission mode before model-initiated plan mode entry, so it can be restored on exit */
  prePlanMode?: PermissionMode
}>

export const getEmptyToolPermissionContext: () => ToolPermissionContext =
  () => ({
    mode: 'default',
    additionalWorkingDirectories: new Map(),
    alwaysAllowRules: {},
    alwaysDenyRules: {},
    alwaysAskRules: {},
    isBypassPermissionsModeAvailable: false,
  })

export type CompactProgressEvent =
  | {
      type: 'hooks_start'
      hookType: 'pre_compact' | 'post_compact' | 'session_start'
    }
  | { type: 'compact_start' }
  | { type: 'compact_end' }

export type ToolUseContext = {
  options: {
    commands: Command[]
    debug: boolean
    mainLoopModel: string
    tools: Tools
    verbose: boolean
    thinkingConfig: ThinkingConfig
    mcpClients: MCPServerConnection[]
    mcpResources: Record<string, ServerResource[]>
    isNonInteractiveSession: boolean
    agentDefinitions: AgentDefinitionsResult
    maxBudgetUsd?: number
    /** Custom system prompt that replaces the default system prompt */
    customSystemPrompt?: string
    /** Additional system prompt appended after the main system prompt */
    appendSystemPrompt?: string
    /** Override querySource for analytics tracking */
    querySource?: QuerySource
    /** Optional callback to get the latest tools (e.g., after MCP servers connect mid-query) */
    refreshTools?: () => Tools
  }
  abortController: AbortController
  readFileState: FileStateCache
  getAppState(): AppState
  setAppState(f: (prev: AppState) => AppState): void
  /**
   * Always-shared setAppState for session-scoped infrastructure (background
   * tasks, session hooks). Unlike setAppState, which is no-op for async agents
   * (see createSubagentContext), this always reaches the root store so agents
   * at any nesting depth can register/clean up infrastructure that outlives
   * a single turn. Only set by createSubagentContext; main-thread contexts
   * fall back to setAppState.
   */
  setAppStateForTasks?: (f: (prev: AppState) => AppState) => void
  /**
   * Optional handler for URL elicitations triggered by tool call errors (-32042).
   * In print/SDK mode, this delegates to structuredIO.handleElicitation.
   * In REPL mode, this is undefined and the queue-based UI path is used.
   */
  handleElicitation?: (
    serverName: string,
    params: ElicitRequestURLParams,
    signal: AbortSignal,
  ) => Promise<ElicitResult>
  setToolJSX?: SetToolJSXFn
  addNotification?: (notif: Notification) => void
  /** Append a UI-only system message to the REPL message list. Stripped at the
   * normalizeMessagesForAPI boundary — the Exclude<> makes that type-enforced. */
  appendSystemMessage?: (
    msg: Exclude<SystemMessage, SystemLocalCommandMessage>,
  ) => void
  /** Send an OS-level notification (iTerm2, Kitty, Ghostty, bell, etc.) */
  sendOSNotification?: (opts: {
    message: string
    notificationType: string
  }) => void
  nestedMemoryAttachmentTriggers?: Set<string>
  /**
   * CLAUDE.md paths already injected as nested_memory attachments this
   * session. Dedup for memoryFilesToAttachments — readFileState is an LRU
   * that evicts entries in busy sessions, so its .has() check alone can
   * re-inject the same CLAUDE.md dozens of times.
   */
  loadedNestedMemoryPaths?: Set<string>
  dynamicSkillDirTriggers?: Set<string>
  /** Skill names surfaced via skill_discovery this session. Telemetry only (feeds was_discovered). */
  discoveredSkillNames?: Set<string>
  userModified?: boolean
  setInProgressToolUseIDs: (f: (prev: Set<string>) => Set<string>) => void
  /** Only wired in interactive (REPL) contexts; SDK/QueryEngine don't set this. */
  setHasInterruptibleToolInProgress?: (v: boolean) => void
  setResponseLength: (f: (prev: number) => number) => void
  /** Ant-only: push a new API metrics entry for OTPS tracking.
   * Called by subagent streaming when a new API request starts. */
  pushApiMetricsEntry?: (ttftMs: number) => void
  setStreamMode?: (mode: SpinnerMode) => void
  onCompactProgress?: (event: CompactProgressEvent) => void
  setSDKStatus?: (status: SDKStatus) => void
  openMessageSelector?: () => void
  updateFileHistoryState: (
    updater: (prev: FileHistoryState) => FileHistoryState,
  ) => void
  updateAttributionState: (
    updater: (prev: AttributionState) => AttributionState,
  ) => void
  setConversationId?: (id: UUID) => void
  agentId?: AgentId // Only set for subagents; use getSessionId() for session ID. Hooks use this to distinguish subagent calls.
  agentType?: string // Subagent type name. For the main thread's --agent type, hooks fall back to getMainThreadAgentType().
  /** When true, canUseTool must always be called even when hooks auto-approve.
   * Used by speculation for overlay file path rewriting. */
  requireCanUseTool?: boolean
  messages: Message[]
  fileReadingLimits?: {
    maxTokens?: number
    maxSizeBytes?: number
  }
  globLimits?: {
    maxResults?: number
  }
  toolDecisions?: Map<
    string,
    {
      source: string
      decision: 'accept' | 'reject'
      timestamp: number
    }
  >
  queryTracking?: QueryChainTracking
  /** Callback factory for requesting interactive prompts from the user.
   * Returns a prompt callback bound to the given source name.
   * Only available in interactive (REPL) contexts. */
  requestPrompt?: (
    sourceName: string,
    toolInputSummary?: string | null,
  ) => (request: PromptRequest) => Promise<PromptResponse>
  toolUseId?: string
  criticalSystemReminder_EXPERIMENTAL?: string
  /** When true, preserve toolUseResult on messages even for subagents.
   * Used by in-process teammates whose transcripts are viewable by the user.
|
||||
preserveToolUseResults?: boolean
|
||||
/** Local denial tracking state for async subagents whose setAppState is a
|
||||
* no-op. Without this, the denial counter never accumulates and the
|
||||
* fallback-to-prompting threshold is never reached. Mutable — the
|
||||
* permissions code updates it in place. */
|
||||
localDenialTracking?: DenialTrackingState
|
||||
/**
|
||||
* Per-conversation-thread content replacement state for the tool result
|
||||
* budget. When present, query.ts applies the aggregate tool result budget.
|
||||
* Main thread: REPL provisions once (never resets — stale UUID keys
|
||||
* are inert). Subagents: createSubagentContext clones the parent's state
|
||||
* by default (cache-sharing forks need identical decisions), or
|
||||
* resumeAgentBackground threads one reconstructed from sidechain records.
|
||||
*/
|
||||
contentReplacementState?: ContentReplacementState
|
||||
/**
|
||||
* Parent's rendered system prompt bytes, frozen at turn start.
|
||||
* Used by fork subagents to share the parent's prompt cache — re-calling
|
||||
* getSystemPrompt() at fork-spawn time can diverge (GrowthBook cold→warm)
|
||||
* and bust the cache. See forkSubagent.ts.
|
||||
*/
|
||||
renderedSystemPrompt?: SystemPrompt
|
||||
}
|
||||
|
||||
// Re-export ToolProgressData from centralized location
export type { ToolProgressData }

export type Progress = ToolProgressData | HookProgress

export type ToolProgress<P extends ToolProgressData> = {
  toolUseID: string
  data: P
}

export function filterToolProgressMessages(
  progressMessagesForMessage: ProgressMessage[],
): ProgressMessage<ToolProgressData>[] {
  return progressMessagesForMessage.filter(
    (msg): msg is ProgressMessage<ToolProgressData> =>
      msg.data?.type !== 'hook_progress',
  )
}

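The filter above relies on a type-guard predicate so the returned array is narrowed, not just filtered. A minimal self-contained sketch of that shape (all type and function names here are illustrative, not from the codebase):

```typescript
// Sketch: a filter whose predicate is a type guard, so the returned array's
// element type is narrowed to exclude hook progress. Mirrors the shape of
// filterToolProgressMessages; names are illustrative.
type HookProgressData = { type: 'hook_progress' }
type ToolProgressData = { type: 'tool_progress'; percent: number }
type ProgressMessage<D = HookProgressData | ToolProgressData> = { data?: D }

function onlyToolProgress(
  msgs: ProgressMessage[],
): ProgressMessage<ToolProgressData>[] {
  return msgs.filter(
    (msg): msg is ProgressMessage<ToolProgressData> =>
      msg.data?.type !== 'hook_progress',
  )
}

const kept = onlyToolProgress([
  { data: { type: 'hook_progress' } },
  { data: { type: 'tool_progress', percent: 40 } },
])
console.log(kept.length) // → 1
```

Note that messages with `data` undefined also pass the guard, matching the `?.` access in the real implementation.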
export type ToolResult<T> = {
  data: T
  newMessages?: (
    | UserMessage
    | AssistantMessage
    | AttachmentMessage
    | SystemMessage
  )[]
  // contextModifier is only honored for tools that aren't concurrency safe.
  contextModifier?: (context: ToolUseContext) => ToolUseContext
  /** MCP protocol metadata (structuredContent, _meta) to pass through to SDK consumers */
  mcpMeta?: {
    _meta?: Record<string, unknown>
    structuredContent?: Record<string, unknown>
  }
}

export type ToolCallProgress<P extends ToolProgressData = ToolProgressData> = (
  progress: ToolProgress<P>,
) => void

// Type for any schema that outputs an object with string keys
export type AnyObject = z.ZodType<{ [key: string]: unknown }>

/**
 * Checks if a tool matches the given name (primary name or alias).
 */
export function toolMatchesName(
  tool: { name: string; aliases?: string[] },
  name: string,
): boolean {
  return tool.name === name || (tool.aliases?.includes(name) ?? false)
}

/**
 * Finds a tool by name or alias from a list of tools.
 */
export function findToolByName(tools: Tools, name: string): Tool | undefined {
  return tools.find(t => toolMatchesName(t, name))
}

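The alias-aware lookup above keeps renamed tools reachable under their old names. A small runnable sketch of the same logic, with the tool shape reduced to just the fields the matcher reads (the tool names below are illustrative):

```typescript
// Sketch of alias-aware tool lookup, mirroring toolMatchesName/findToolByName.
// Only the name/aliases fields matter to the matcher.
type NamedTool = { name: string; aliases?: string[] }

function matches(tool: NamedTool, name: string): boolean {
  return tool.name === name || (tool.aliases?.includes(name) ?? false)
}

const tools: NamedTool[] = [
  { name: 'NotebookEdit', aliases: ['NotebookEditCell'] }, // renamed tool keeps its old name as an alias
  { name: 'Read' },
]

// Lookup by the old name still resolves to the renamed tool.
const found = tools.find(t => matches(t, 'NotebookEditCell'))
console.log(found?.name) // → NotebookEdit
```

The `?? false` fallback matters: a tool with no `aliases` field should simply fail the alias check rather than produce `undefined`.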
export type Tool<
  Input extends AnyObject = AnyObject,
  Output = unknown,
  P extends ToolProgressData = ToolProgressData,
> = {
  /**
   * Optional aliases for backwards compatibility when a tool is renamed.
   * The tool can be looked up by any of these names in addition to its primary name.
   */
  aliases?: string[]
  /**
   * One-line capability phrase used by ToolSearch for keyword matching.
   * Helps the model find this tool via keyword search when it's deferred.
   * 3–10 words, no trailing period.
   * Prefer terms not already in the tool name (e.g. 'jupyter' for NotebookEdit).
   */
  searchHint?: string
  call(
    args: z.infer<Input>,
    context: ToolUseContext,
    canUseTool: CanUseToolFn,
    parentMessage: AssistantMessage,
    onProgress?: ToolCallProgress<P>,
  ): Promise<ToolResult<Output>>
  description(
    input: z.infer<Input>,
    options: {
      isNonInteractiveSession: boolean
      toolPermissionContext: ToolPermissionContext
      tools: Tools
    },
  ): Promise<string>
  readonly inputSchema: Input
  // Type for MCP tools that can specify their input schema directly in JSON Schema format
  // rather than converting from Zod schema
  readonly inputJSONSchema?: ToolInputJSONSchema
  // Optional because TungstenTool doesn't define this. TODO: Make it required.
  // When we do that, we can also go through and make this a bit more type-safe.
  outputSchema?: z.ZodType<unknown>
  inputsEquivalent?(a: z.infer<Input>, b: z.infer<Input>): boolean
  isConcurrencySafe(input: z.infer<Input>): boolean
  isEnabled(): boolean
  isReadOnly(input: z.infer<Input>): boolean
  /** Defaults to false. Only set when the tool performs irreversible operations (delete, overwrite, send). */
  isDestructive?(input: z.infer<Input>): boolean
  /**
   * What should happen when the user submits a new message while this tool
   * is running.
   *
   * - `'cancel'` — stop the tool and discard its result
   * - `'block'` — keep running; the new message waits
   *
   * Defaults to `'block'` when not implemented.
   */
  interruptBehavior?(): 'cancel' | 'block'
  /**
   * Returns information about whether this tool use is a search or read operation
   * that should be collapsed into a condensed display in the UI. Examples include
   * file searching (Grep, Glob), file reading (Read), and bash commands like find,
   * grep, wc, etc.
   *
   * Returns an object indicating whether the operation is a search or read operation:
   * - `isSearch: true` for search operations (grep, find, glob patterns)
   * - `isRead: true` for read operations (cat, head, tail, file read)
   * - `isList: true` for directory-listing operations (ls, tree, du)
   * - All can be false if the operation shouldn't be collapsed
   */
  isSearchOrReadCommand?(input: z.infer<Input>): {
    isSearch: boolean
    isRead: boolean
    isList?: boolean
  }
  isOpenWorld?(input: z.infer<Input>): boolean
  requiresUserInteraction?(): boolean
  isMcp?: boolean
  isLsp?: boolean
  /**
   * When true, this tool is deferred (sent with defer_loading: true) and requires
   * ToolSearch to be used before it can be called.
   */
  readonly shouldDefer?: boolean
  /**
   * When true, this tool is never deferred — its full schema appears in the
   * initial prompt even when ToolSearch is enabled. For MCP tools, set via
   * `_meta['anthropic/alwaysLoad']`. Use for tools the model must see on
   * turn 1 without a ToolSearch round-trip.
   */
  readonly alwaysLoad?: boolean
  /**
   * For MCP tools: the server and tool names as received from the MCP server (unnormalized).
   * Present on all MCP tools regardless of whether `name` is prefixed (mcp__server__tool)
   * or unprefixed (CLAUDE_AGENT_SDK_MCP_NO_PREFIX mode).
   */
  mcpInfo?: { serverName: string; toolName: string }
  readonly name: string
  /**
   * Maximum size in characters for a tool result before it gets persisted to disk.
   * When exceeded, the result is saved to a file and Claude receives a preview
   * with the file path instead of the full content.
   *
   * Set to Infinity for tools whose output must never be persisted (e.g. Read,
   * where persisting creates a circular Read→file→Read loop and the tool
   * already self-bounds via its own limits).
   */
  maxResultSizeChars: number
  /**
   * When true, enables strict mode for this tool, which causes the API to
   * more strictly adhere to tool instructions and parameter schemas.
   * Only applied when the tengu_tool_pear is enabled.
   */
  readonly strict?: boolean

  /**
   * Called on copies of tool_use input before observers see it (SDK stream,
   * transcript, canUseTool, PreToolUse/PostToolUse hooks). Mutate in place
   * to add legacy/derived fields. Must be idempotent. The original API-bound
   * input is never mutated (preserves prompt cache). Not re-applied when a
   * hook/permission returns a fresh updatedInput — those own their shape.
   */
  backfillObservableInput?(input: Record<string, unknown>): void

  /**
   * Determines if this tool is allowed to run with this input in the current context.
   * It informs the model of why the tool use failed, and does not directly display any UI.
   * @param input
   * @param context
   */
  validateInput?(
    input: z.infer<Input>,
    context: ToolUseContext,
  ): Promise<ValidationResult>

  /**
   * Determines if the user is asked for permission. Only called after validateInput() passes.
   * General permission logic is in permissions.ts. This method contains tool-specific logic.
   * @param input
   * @param context
   */
  checkPermissions(
    input: z.infer<Input>,
    context: ToolUseContext,
  ): Promise<PermissionResult>

  // Optional method for tools that operate on a file path
  getPath?(input: z.infer<Input>): string

  /**
   * Prepare a matcher for hook `if` conditions (permission-rule patterns like
   * "git *" from "Bash(git *)"). Called once per hook-input pair; any
   * expensive parsing happens here. Returns a closure that is called per
   * hook pattern. If not implemented, only tool-name-level matching works.
   */
  preparePermissionMatcher?(
    input: z.infer<Input>,
  ): Promise<(pattern: string) => boolean>

  prompt(options: {
    getToolPermissionContext: () => Promise<ToolPermissionContext>
    tools: Tools
    agents: AgentDefinition[]
    allowedAgentTypes?: string[]
  }): Promise<string>
  userFacingName(input: Partial<z.infer<Input>> | undefined): string
  userFacingNameBackgroundColor?(
    input: Partial<z.infer<Input>> | undefined,
  ): keyof Theme | undefined
  /**
   * Transparent wrappers (e.g. REPL) delegate all rendering to their progress
   * handler, which emits native-looking blocks for each inner tool call.
   * The wrapper itself shows nothing.
   */
  isTransparentWrapper?(): boolean
  /**
   * Returns a short string summary of this tool use for display in compact views.
   * @param input The tool input
   * @returns A short string summary, or null to not display
   */
  getToolUseSummary?(input: Partial<z.infer<Input>> | undefined): string | null
  /**
   * Returns a human-readable present-tense activity description for spinner display.
   * Example: "Reading src/foo.ts", "Running bun test", "Searching for pattern"
   * @param input The tool input
   * @returns Activity description string, or null to fall back to tool name
   */
  getActivityDescription?(
    input: Partial<z.infer<Input>> | undefined,
  ): string | null
  /**
   * Returns a compact representation of this tool use for the auto-mode
   * security classifier. Examples: `ls -la` for Bash, `/tmp/x: new content`
   * for Edit. Return '' to skip this tool in the classifier transcript
   * (e.g. tools with no security relevance). May return an object to avoid
   * double-encoding when the caller JSON-wraps the value.
   */
  toAutoClassifierInput(input: z.infer<Input>): unknown
  mapToolResultToToolResultBlockParam(
    content: Output,
    toolUseID: string,
  ): ToolResultBlockParam
  /**
   * Optional. When omitted, the tool result renders nothing (same as returning
   * null). Omit for tools whose results are surfaced elsewhere (e.g., TodoWrite
   * updates the todo panel, not the transcript).
   */
  renderToolResultMessage?(
    content: Output,
    progressMessagesForMessage: ProgressMessage<P>[],
    options: {
      style?: 'condensed'
      theme: ThemeName
      tools: Tools
      verbose: boolean
      isTranscriptMode?: boolean
      isBriefOnly?: boolean
      /** Original tool_use input, when available. Useful for compact result
       * summaries that reference what was requested (e.g. "Sent to #foo"). */
      input?: unknown
    },
  ): React.ReactNode
  /**
   * Flattened text of what renderToolResultMessage shows IN TRANSCRIPT
   * MODE (verbose=true, isTranscriptMode=true). For transcript search
   * indexing: the index counts occurrences in this string, the highlight
   * overlay scans the actual screen buffer. For count ≡ highlight, this
   * must return the text that ends up visible — not the model-facing
   * serialization from mapToolResultToToolResultBlockParam (which adds
   * system-reminders, persisted-output wrappers).
   *
   * Chrome can be skipped (under-count is fine). "Found 3 files in 12ms"
   * isn't worth indexing. Phantoms are not fine — text that's claimed
   * here but doesn't render is a count≠highlight bug.
   *
   * Optional: omitted → field-name heuristic in transcriptSearch.ts.
   * Drift caught by test/utils/transcriptSearch.renderFidelity.test.tsx
   * which renders sample outputs and flags text that's indexed-but-not-
   * rendered (phantom) or rendered-but-not-indexed (under-count warning).
   */
  extractSearchText?(out: Output): string
  /**
   * Render the tool use message. Note that `input` is partial because we render
   * the message as soon as possible, possibly before tool parameters have fully
   * streamed in.
   */
  renderToolUseMessage(
    input: Partial<z.infer<Input>>,
    options: { theme: ThemeName; verbose: boolean; commands?: Command[] },
  ): React.ReactNode
  /**
   * Returns true when the non-verbose rendering of this output is truncated
   * (i.e., clicking to expand would reveal more content). Gates
   * click-to-expand in fullscreen — only messages where verbose actually
   * shows more get a hover/click affordance. Unset means never truncated.
   */
  isResultTruncated?(output: Output): boolean
  /**
   * Renders an optional tag to display after the tool use message.
   * Used for additional metadata like timeout, model, resume ID, etc.
   * Returns null to not display anything.
   */
  renderToolUseTag?(input: Partial<z.infer<Input>>): React.ReactNode
  /**
   * Optional. When omitted, no progress UI is shown while the tool runs.
   */
  renderToolUseProgressMessage?(
    progressMessagesForMessage: ProgressMessage<P>[],
    options: {
      tools: Tools
      verbose: boolean
      terminalSize?: { columns: number; rows: number }
      inProgressToolCallCount?: number
      isTranscriptMode?: boolean
    },
  ): React.ReactNode
  renderToolUseQueuedMessage?(): React.ReactNode
  /**
   * Optional. When omitted, falls back to <FallbackToolUseRejectedMessage />.
   * Only define this for tools that need custom rejection UI (e.g., file edits
   * that show the rejected diff).
   */
  renderToolUseRejectedMessage?(
    input: z.infer<Input>,
    options: {
      columns: number
      messages: Message[]
      style?: 'condensed'
      theme: ThemeName
      tools: Tools
      verbose: boolean
      progressMessagesForMessage: ProgressMessage<P>[]
      isTranscriptMode?: boolean
    },
  ): React.ReactNode
  /**
   * Optional. When omitted, falls back to <FallbackToolUseErrorMessage />.
   * Only define this for tools that need custom error UI (e.g., search tools
   * that show "File not found" instead of the raw error).
   */
  renderToolUseErrorMessage?(
    result: ToolResultBlockParam['content'],
    options: {
      progressMessagesForMessage: ProgressMessage<P>[]
      tools: Tools
      verbose: boolean
      isTranscriptMode?: boolean
    },
  ): React.ReactNode

  /**
   * Renders multiple parallel tool uses of this tool as a group (non-verbose
   * mode only). In verbose mode, individual tool uses render at their original
   * positions.
   * @returns React node to render, or null to fall back to individual rendering
   */
  renderGroupedToolUse?(
    toolUses: Array<{
      param: ToolUseBlockParam
      isResolved: boolean
      isError: boolean
      isInProgress: boolean
      progressMessages: ProgressMessage<P>[]
      result?: {
        param: ToolResultBlockParam
        output: unknown
      }
    }>,
    options: {
      shouldAnimate: boolean
      tools: Tools
    },
  ): React.ReactNode | null
}

/**
 * A collection of tools. Use this type instead of `Tool[]` to make it easier
 * to track where tool sets are assembled, passed, and filtered across the codebase.
 */
export type Tools = readonly Tool[]

/**
 * Methods that `buildTool` supplies a default for. A `ToolDef` may omit these;
 * the resulting `Tool` always has them.
 */
type DefaultableToolKeys =
  | 'isEnabled'
  | 'isConcurrencySafe'
  | 'isReadOnly'
  | 'isDestructive'
  | 'checkPermissions'
  | 'toAutoClassifierInput'
  | 'userFacingName'

/**
 * Tool definition accepted by `buildTool`. Same shape as `Tool` but with the
 * defaultable methods optional — `buildTool` fills them in so callers always
 * see a complete `Tool`.
 */
export type ToolDef<
  Input extends AnyObject = AnyObject,
  Output = unknown,
  P extends ToolProgressData = ToolProgressData,
> = Omit<Tool<Input, Output, P>, DefaultableToolKeys> &
  Partial<Pick<Tool<Input, Output, P>, DefaultableToolKeys>>

/**
 * Type-level spread mirroring `{ ...TOOL_DEFAULTS, ...def }`. For each
 * defaultable key: if D provides it (required), D's type wins; if D omits
 * it or has it optional (inherited from Partial<> in the constraint), the
 * default fills in. All other keys come from D verbatim — preserving arity,
 * optional presence, and literal types exactly as `satisfies Tool` did.
 */
type BuiltTool<D> = Omit<D, DefaultableToolKeys> & {
  [K in DefaultableToolKeys]-?: K extends keyof D
    ? undefined extends D[K]
      ? ToolDefaults[K]
      : D[K]
    : ToolDefaults[K]
}

/**
 * Build a complete `Tool` from a partial definition, filling in safe defaults
 * for the commonly-stubbed methods. All tool exports should go through this so
 * that defaults live in one place and callers never need `?.() ?? default`.
 *
 * Defaults (fail-closed where it matters):
 * - `isEnabled` → `true`
 * - `isConcurrencySafe` → `false` (assume not safe)
 * - `isReadOnly` → `false` (assume writes)
 * - `isDestructive` → `false`
 * - `checkPermissions` → `{ behavior: 'allow', updatedInput }` (defer to general permission system)
 * - `toAutoClassifierInput` → `''` (skip classifier — security-relevant tools must override)
 * - `userFacingName` → `name`
 */
const TOOL_DEFAULTS = {
  isEnabled: () => true,
  isConcurrencySafe: (_input?: unknown) => false,
  isReadOnly: (_input?: unknown) => false,
  isDestructive: (_input?: unknown) => false,
  checkPermissions: (
    input: { [key: string]: unknown },
    _ctx?: ToolUseContext,
  ): Promise<PermissionResult> =>
    Promise.resolve({ behavior: 'allow', updatedInput: input }),
  toAutoClassifierInput: (_input?: unknown) => '',
  userFacingName: (_input?: unknown) => '',
}

// The defaults type is the ACTUAL shape of TOOL_DEFAULTS (optional params so
// both 0-arg and full-arg call sites type-check — stubs varied in arity and
// tests relied on that), not the interface's strict signatures.
type ToolDefaults = typeof TOOL_DEFAULTS

// D infers the concrete object-literal type from the call site. The
// constraint provides contextual typing for method parameters; `any` in
// constraint position is structural and never leaks into the return type.
// BuiltTool<D> mirrors runtime `{...TOOL_DEFAULTS, ...def}` at the type level.
// eslint-disable-next-line @typescript-eslint/no-explicit-any
type AnyToolDef = ToolDef<any, any, any>

export function buildTool<D extends AnyToolDef>(def: D): BuiltTool<D> {
  // The runtime spread is straightforward; the `as` bridges the gap between
  // the structural-any constraint and the precise BuiltTool<D> return. The
  // type semantics are proven by the 0-error typecheck across all 60+ tools.
  return {
    ...TOOL_DEFAULTS,
    userFacingName: () => def.name,
    ...def,
  } as BuiltTool<D>
}
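The spread order in `buildTool` is what makes the defaults safe: defaults first, then the name-based `userFacingName`, then the definition, so anything the definition provides wins. A reduced, runnable sketch of that pattern (the tool name and the trimmed-down defaults here are illustrative):

```typescript
// Sketch of the buildTool default-fill pattern: spread defaults first, then
// the def, so def-provided methods override. Names are illustrative.
const DEFAULTS = {
  isEnabled: () => true,
  isConcurrencySafe: () => false, // fail closed: assume not safe
  isReadOnly: () => false, // fail closed: assume writes
}

function build<D extends { name: string } & Partial<typeof DEFAULTS>>(def: D) {
  return { ...DEFAULTS, userFacingName: () => def.name, ...def }
}

// The def overrides isReadOnly; everything else falls back to defaults.
const tool = build({ name: 'Echo', isReadOnly: () => true })
console.log(tool.isEnabled(), tool.isReadOnly(), tool.userFacingName())
// → true true Echo
```

Because `userFacingName` is spread before `...def`, a definition that supplies its own display name still takes precedence over the name-based fallback.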
15
src/assistant/AssistantSessionChooser.tsx
Normal file
@ -0,0 +1,15 @@
import { useEffect } from 'react'

type Props = {
  sessions: unknown[]
  onSelect: (sessionId: string) => void
  onCancel: () => void
}

export function AssistantSessionChooser({ onCancel }: Props) {
  useEffect(() => {
    onCancel()
  }, [onCancel])

  return null
}
87
src/assistant/sessionHistory.ts
Normal file
@ -0,0 +1,87 @@
import axios from 'axios'
import { getOauthConfig } from '../constants/oauth.js'
import type { SDKMessage } from '../entrypoints/agentSdkTypes.js'
import { logForDebugging } from '../utils/debug.js'
import { getOAuthHeaders, prepareApiRequest } from '../utils/teleport/api.js'

export const HISTORY_PAGE_SIZE = 100

export type HistoryPage = {
  /** Chronological order within the page. */
  events: SDKMessage[]
  /** Oldest event ID in this page → before_id cursor for next-older page. */
  firstId: string | null
  /** true = older events exist. */
  hasMore: boolean
}

type SessionEventsResponse = {
  data: SDKMessage[]
  has_more: boolean
  first_id: string | null
  last_id: string | null
}

export type HistoryAuthCtx = {
  baseUrl: string
  headers: Record<string, string>
}

/** Prepare auth + headers + base URL once, reuse across pages. */
export async function createHistoryAuthCtx(
  sessionId: string,
): Promise<HistoryAuthCtx> {
  const { accessToken, orgUUID } = await prepareApiRequest()
  return {
    baseUrl: `${getOauthConfig().BASE_API_URL}/v1/sessions/${sessionId}/events`,
    headers: {
      ...getOAuthHeaders(accessToken),
      'anthropic-beta': 'ccr-byoc-2025-07-29',
      'x-organization-uuid': orgUUID,
    },
  }
}

async function fetchPage(
  ctx: HistoryAuthCtx,
  params: Record<string, string | number | boolean>,
  label: string,
): Promise<HistoryPage | null> {
  const resp = await axios
    .get<SessionEventsResponse>(ctx.baseUrl, {
      headers: ctx.headers,
      params,
      timeout: 15000,
      validateStatus: () => true,
    })
    .catch(() => null)
  if (!resp || resp.status !== 200) {
    logForDebugging(`[${label}] HTTP ${resp?.status ?? 'error'}`)
    return null
  }
  return {
    events: Array.isArray(resp.data.data) ? resp.data.data : [],
    firstId: resp.data.first_id,
    hasMore: resp.data.has_more,
  }
}

/**
 * Newest page: last `limit` events, chronological, via anchor_to_latest.
 * has_more=true means older events exist.
 */
export async function fetchLatestEvents(
  ctx: HistoryAuthCtx,
  limit = HISTORY_PAGE_SIZE,
): Promise<HistoryPage | null> {
  return fetchPage(ctx, { limit, anchor_to_latest: true }, 'fetchLatestEvents')
}

/** Older page: events immediately before `beforeId` cursor. */
export async function fetchOlderEvents(
  ctx: HistoryAuthCtx,
  beforeId: string,
  limit = HISTORY_PAGE_SIZE,
): Promise<HistoryPage | null> {
  return fetchPage(ctx, { limit, before_id: beforeId }, 'fetchOlderEvents')
}
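The intended consumption of the two fetchers above is a backwards cursor walk: start from the newest page, then repeatedly follow `firstId` as the `before_id` cursor while `hasMore` is true. A network-free sketch of that loop, with pages stubbed in memory and keyed by cursor (the event IDs and the `pages` lookup are illustrative, not the real API):

```typescript
// Cursor walk over the HistoryPage shape: newest page first, then follow
// firstId backwards until hasMore is false. Pages are stubbed in memory,
// keyed by the cursor that would fetch them.
type Page = { events: string[]; firstId: string | null; hasMore: boolean }

const pages: Record<string, Page> = {
  latest: { events: ['e3', 'e4'], firstId: 'e3', hasMore: true },
  e3: { events: ['e1', 'e2'], firstId: 'e1', hasMore: false },
}

function collectAll(): string[] {
  const all: string[] = []
  let page: Page | undefined = pages.latest
  while (page) {
    all.unshift(...page.events) // older pages prepend, keeping chronological order
    page = page.hasMore && page.firstId ? pages[page.firstId] : undefined
  }
  return all
}

console.log(collectAll()) // logs the four events oldest-first
```

With the real fetchers, the in-memory lookup would be replaced by `await fetchLatestEvents(ctx)` for the first page and `await fetchOlderEvents(ctx, page.firstId)` for each subsequent one, stopping when a page comes back with `hasMore` false or the fetch returns null.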
1758
src/bootstrap/state.ts
Normal file
File diff suppressed because it is too large
539
src/bridge/bridgeApi.ts
Normal file
@ -0,0 +1,539 @@
import axios from 'axios'

import { debugBody, extractErrorDetail } from './debugUtils.js'
import {
  BRIDGE_LOGIN_INSTRUCTION,
  type BridgeApiClient,
  type BridgeConfig,
  type PermissionResponseEvent,
  type WorkResponse,
} from './types.js'

type BridgeApiDeps = {
  baseUrl: string
  getAccessToken: () => string | undefined
  runnerVersion: string
  onDebug?: (msg: string) => void
  /**
   * Called on 401 to attempt OAuth token refresh. Returns true if refreshed,
   * in which case the request is retried once. Injected because
   * handleOAuth401Error from utils/auth.ts transitively pulls in config.ts →
   * file.ts → permissions/filesystem.ts → sessionStorage.ts → commands.ts
   * (~1300 modules). Daemon callers using env-var tokens omit this — their
   * tokens don't refresh, so 401 goes straight to BridgeFatalError.
   */
  onAuth401?: (staleAccessToken: string) => Promise<boolean>
  /**
   * Returns the trusted device token to send as X-Trusted-Device-Token on
   * bridge API calls. Bridge sessions have SecurityTier=ELEVATED on the
   * server (CCR v2); when the server's enforcement flag is on,
   * ConnectBridgeWorker requires a trusted device at JWT-issuance.
   * Optional — when absent or returning undefined, the header is omitted
   * and the server falls through to its flag-off/no-op path. The CLI-side
   * gate is tengu_sessions_elevated_auth_enforcement (see trustedDevice.ts).
   */
  getTrustedDeviceToken?: () => string | undefined
}

const BETA_HEADER = 'environments-2025-11-01'

/** Allowlist pattern for server-provided IDs used in URL path segments. */
const SAFE_ID_PATTERN = /^[a-zA-Z0-9_-]+$/

/**
 * Validate that a server-provided ID is safe to interpolate into a URL path.
 * Prevents path traversal (e.g. `../../admin`) and injection via IDs that
 * contain slashes, dots, or other special characters.
 */
export function validateBridgeId(id: string, label: string): string {
  if (!id || !SAFE_ID_PATTERN.test(id)) {
    throw new Error(`Invalid ${label}: contains unsafe characters`)
  }
  return id
}

/** Fatal bridge errors that should not be retried (e.g. auth failures). */
export class BridgeFatalError extends Error {
  readonly status: number
  /** Server-provided error type, e.g. "environment_expired". */
  readonly errorType: string | undefined
  constructor(message: string, status: number, errorType?: string) {
    super(message)
    this.name = 'BridgeFatalError'
    this.status = status
    this.errorType = errorType
  }
}

export function createBridgeApiClient(deps: BridgeApiDeps): BridgeApiClient {
  function debug(msg: string): void {
    deps.onDebug?.(msg)
  }

  let consecutiveEmptyPolls = 0
  const EMPTY_POLL_LOG_INTERVAL = 100

  function getHeaders(accessToken: string): Record<string, string> {
    const headers: Record<string, string> = {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
      'anthropic-version': '2023-06-01',
      'anthropic-beta': BETA_HEADER,
      'x-environment-runner-version': deps.runnerVersion,
    }
    const deviceToken = deps.getTrustedDeviceToken?.()
    if (deviceToken) {
      headers['X-Trusted-Device-Token'] = deviceToken
    }
    return headers
  }

  function resolveAuth(): string {
    const accessToken = deps.getAccessToken()
    if (!accessToken) {
      throw new Error(BRIDGE_LOGIN_INSTRUCTION)
    }
    return accessToken
  }

  /**
   * Execute an OAuth-authenticated request with a single retry on 401.
   * On 401, attempts token refresh via handleOAuth401Error (same pattern as
   * withRetry.ts for v1/messages). If refresh succeeds, retries the request
   * once with the new token. If refresh fails or the retry also returns 401,
   * the 401 response is returned for handleErrorStatus to throw BridgeFatalError.
   */
  async function withOAuthRetry<T>(
    fn: (accessToken: string) => Promise<{ status: number; data: T }>,
    context: string,
  ): Promise<{ status: number; data: T }> {
    const accessToken = resolveAuth()
    const response = await fn(accessToken)

    if (response.status !== 401) {
      return response
    }

    if (!deps.onAuth401) {
      debug(`[bridge:api] ${context}: 401 received, no refresh handler`)
      return response
    }

    // Attempt token refresh — matches the pattern in withRetry.ts
    debug(`[bridge:api] ${context}: 401 received, attempting token refresh`)
    const refreshed = await deps.onAuth401(accessToken)
    if (refreshed) {
      debug(`[bridge:api] ${context}: Token refreshed, retrying request`)
      const newToken = resolveAuth()
      const retryResponse = await fn(newToken)
      if (retryResponse.status !== 401) {
        return retryResponse
      }
      debug(`[bridge:api] ${context}: Retry after refresh also got 401`)
    } else {
      debug(`[bridge:api] ${context}: Token refresh failed`)
    }

    // Refresh failed — return 401 for handleErrorStatus to throw
    return response
  }

  return {
    async registerBridgeEnvironment(
      config: BridgeConfig,
    ): Promise<{ environment_id: string; environment_secret: string }> {
      debug(
        `[bridge:api] POST /v1/environments/bridge bridgeId=${config.bridgeId}`,
      )

      const response = await withOAuthRetry(
        (token: string) =>
          axios.post<{
            environment_id: string
            environment_secret: string
          }>(
            `${deps.baseUrl}/v1/environments/bridge`,
            {
              machine_name: config.machineName,
              directory: config.dir,
              branch: config.branch,
              git_repo_url: config.gitRepoUrl,
              // Advertise session capacity so claude.ai/code can show
              // "2/4 sessions" badges and only block the picker when
              // actually at capacity. Backends that don't yet accept
              // this field will silently ignore it.
|
||||
max_sessions: config.maxSessions,
|
||||
// worker_type lets claude.ai filter environments by origin
|
||||
// (e.g. assistant picker only shows assistant-mode workers).
|
||||
// Desktop cowork app sends "cowork"; we send a distinct value.
|
||||
metadata: { worker_type: config.workerType },
|
||||
// Idempotent re-registration: if we have a backend-issued
|
||||
// environment_id from a prior session (--session-id resume),
|
||||
// send it back so the backend reattaches instead of creating
|
||||
// a new env. The backend may still hand back a fresh ID if
|
||||
// the old one expired — callers must compare the response.
|
||||
...(config.reuseEnvironmentId && {
|
||||
environment_id: config.reuseEnvironmentId,
|
||||
}),
|
||||
},
|
||||
{
|
||||
headers: getHeaders(token),
|
||||
timeout: 15_000,
|
||||
validateStatus: status => status < 500,
|
||||
},
|
||||
),
|
||||
'Registration',
|
||||
)
|
||||
|
||||
handleErrorStatus(response.status, response.data, 'Registration')
|
||||
debug(
|
||||
`[bridge:api] POST /v1/environments/bridge -> ${response.status} environment_id=${response.data.environment_id}`,
|
||||
)
|
||||
debug(
|
||||
`[bridge:api] >>> ${debugBody({ machine_name: config.machineName, directory: config.dir, branch: config.branch, git_repo_url: config.gitRepoUrl, max_sessions: config.maxSessions, metadata: { worker_type: config.workerType } })}`,
|
||||
)
|
||||
debug(`[bridge:api] <<< ${debugBody(response.data)}`)
|
||||
return response.data
|
||||
},
|
||||
|
||||
async pollForWork(
|
||||
environmentId: string,
|
||||
environmentSecret: string,
|
||||
signal?: AbortSignal,
|
||||
reclaimOlderThanMs?: number,
|
||||
): Promise<WorkResponse | null> {
|
||||
validateBridgeId(environmentId, 'environmentId')
|
||||
|
||||
// Save and reset so errors break the "consecutive empty" streak.
|
||||
// Restored below when the response is truly empty.
|
||||
const prevEmptyPolls = consecutiveEmptyPolls
|
||||
consecutiveEmptyPolls = 0
|
||||
|
||||
const response = await axios.get<WorkResponse | null>(
|
||||
`${deps.baseUrl}/v1/environments/${environmentId}/work/poll`,
|
||||
{
|
||||
headers: getHeaders(environmentSecret),
|
||||
params:
|
||||
reclaimOlderThanMs !== undefined
|
||||
? { reclaim_older_than_ms: reclaimOlderThanMs }
|
||||
: undefined,
|
||||
timeout: 10_000,
|
||||
signal,
|
||||
validateStatus: status => status < 500,
|
||||
},
|
||||
)
|
||||
|
||||
handleErrorStatus(response.status, response.data, 'Poll')
|
||||
|
||||
// Empty body or null = no work available
|
||||
if (!response.data) {
|
||||
consecutiveEmptyPolls = prevEmptyPolls + 1
|
||||
if (
|
||||
consecutiveEmptyPolls === 1 ||
|
||||
consecutiveEmptyPolls % EMPTY_POLL_LOG_INTERVAL === 0
|
||||
) {
|
||||
debug(
|
||||
`[bridge:api] GET .../work/poll -> ${response.status} (no work, ${consecutiveEmptyPolls} consecutive empty polls)`,
|
||||
)
|
||||
}
|
||||
return null
|
||||
}
|
||||
|
||||
debug(
|
||||
`[bridge:api] GET .../work/poll -> ${response.status} workId=${response.data.id} type=${response.data.data?.type}${response.data.data?.id ? ` sessionId=${response.data.data.id}` : ''}`,
|
||||
)
|
||||
debug(`[bridge:api] <<< ${debugBody(response.data)}`)
|
||||
return response.data
|
||||
},
|
||||
|
||||
async acknowledgeWork(
|
||||
environmentId: string,
|
||||
workId: string,
|
||||
sessionToken: string,
|
||||
): Promise<void> {
|
||||
validateBridgeId(environmentId, 'environmentId')
|
||||
validateBridgeId(workId, 'workId')
|
||||
|
||||
debug(`[bridge:api] POST .../work/${workId}/ack`)
|
||||
|
||||
const response = await axios.post(
|
||||
`${deps.baseUrl}/v1/environments/${environmentId}/work/${workId}/ack`,
|
||||
{},
|
||||
{
|
||||
headers: getHeaders(sessionToken),
|
||||
timeout: 10_000,
|
||||
validateStatus: s => s < 500,
|
||||
},
|
||||
)
|
||||
|
||||
handleErrorStatus(response.status, response.data, 'Acknowledge')
|
||||
debug(`[bridge:api] POST .../work/${workId}/ack -> ${response.status}`)
|
||||
},
|
||||
|
||||
async stopWork(
|
||||
environmentId: string,
|
||||
workId: string,
|
||||
force: boolean,
|
||||
): Promise<void> {
|
||||
validateBridgeId(environmentId, 'environmentId')
|
||||
validateBridgeId(workId, 'workId')
|
||||
|
||||
debug(`[bridge:api] POST .../work/${workId}/stop force=${force}`)
|
||||
|
||||
const response = await withOAuthRetry(
|
||||
(token: string) =>
|
||||
axios.post(
|
||||
`${deps.baseUrl}/v1/environments/${environmentId}/work/${workId}/stop`,
|
||||
{ force },
|
||||
{
|
||||
headers: getHeaders(token),
|
||||
timeout: 10_000,
|
||||
validateStatus: s => s < 500,
|
||||
},
|
||||
),
|
||||
'StopWork',
|
||||
)
|
||||
|
||||
handleErrorStatus(response.status, response.data, 'StopWork')
|
||||
debug(`[bridge:api] POST .../work/${workId}/stop -> ${response.status}`)
|
||||
},
|
||||
|
||||
async deregisterEnvironment(environmentId: string): Promise<void> {
|
||||
validateBridgeId(environmentId, 'environmentId')
|
||||
|
||||
debug(`[bridge:api] DELETE /v1/environments/bridge/${environmentId}`)
|
||||
|
||||
const response = await withOAuthRetry(
|
||||
(token: string) =>
|
||||
axios.delete(
|
||||
`${deps.baseUrl}/v1/environments/bridge/${environmentId}`,
|
||||
{
|
||||
headers: getHeaders(token),
|
||||
timeout: 10_000,
|
||||
validateStatus: s => s < 500,
|
||||
},
|
||||
),
|
||||
'Deregister',
|
||||
)
|
||||
|
||||
handleErrorStatus(response.status, response.data, 'Deregister')
|
||||
debug(
|
||||
`[bridge:api] DELETE /v1/environments/bridge/${environmentId} -> ${response.status}`,
|
||||
)
|
||||
},
|
||||
|
||||
async archiveSession(sessionId: string): Promise<void> {
|
||||
validateBridgeId(sessionId, 'sessionId')
|
||||
|
||||
debug(`[bridge:api] POST /v1/sessions/${sessionId}/archive`)
|
||||
|
||||
const response = await withOAuthRetry(
|
||||
(token: string) =>
|
||||
axios.post(
|
||||
`${deps.baseUrl}/v1/sessions/${sessionId}/archive`,
|
||||
{},
|
||||
{
|
||||
headers: getHeaders(token),
|
||||
timeout: 10_000,
|
||||
validateStatus: s => s < 500,
|
||||
},
|
||||
),
|
||||
'ArchiveSession',
|
||||
)
|
||||
|
||||
// 409 = already archived (idempotent, not an error)
|
||||
if (response.status === 409) {
|
||||
debug(
|
||||
`[bridge:api] POST /v1/sessions/${sessionId}/archive -> 409 (already archived)`,
|
||||
)
|
||||
return
|
||||
}
|
||||
|
||||
handleErrorStatus(response.status, response.data, 'ArchiveSession')
|
||||
debug(
|
||||
`[bridge:api] POST /v1/sessions/${sessionId}/archive -> ${response.status}`,
|
||||
)
|
||||
},
|
||||
|
||||
async reconnectSession(
|
||||
environmentId: string,
|
||||
sessionId: string,
|
||||
): Promise<void> {
|
||||
validateBridgeId(environmentId, 'environmentId')
|
||||
validateBridgeId(sessionId, 'sessionId')
|
||||
|
||||
debug(
|
||||
`[bridge:api] POST /v1/environments/${environmentId}/bridge/reconnect session_id=${sessionId}`,
|
||||
)
|
||||
|
||||
const response = await withOAuthRetry(
|
||||
(token: string) =>
|
||||
axios.post(
|
||||
`${deps.baseUrl}/v1/environments/${environmentId}/bridge/reconnect`,
|
||||
{ session_id: sessionId },
|
||||
{
|
||||
headers: getHeaders(token),
|
||||
timeout: 10_000,
|
||||
validateStatus: s => s < 500,
|
||||
},
|
||||
),
|
||||
'ReconnectSession',
|
||||
)
|
||||
|
||||
handleErrorStatus(response.status, response.data, 'ReconnectSession')
|
||||
debug(`[bridge:api] POST .../bridge/reconnect -> ${response.status}`)
|
||||
},
|
||||
|
||||
async heartbeatWork(
|
||||
environmentId: string,
|
||||
workId: string,
|
||||
sessionToken: string,
|
||||
): Promise<{ lease_extended: boolean; state: string }> {
|
||||
validateBridgeId(environmentId, 'environmentId')
|
||||
validateBridgeId(workId, 'workId')
|
||||
|
||||
debug(`[bridge:api] POST .../work/${workId}/heartbeat`)
|
||||
|
||||
const response = await axios.post<{
|
||||
lease_extended: boolean
|
||||
state: string
|
||||
last_heartbeat: string
|
||||
ttl_seconds: number
|
||||
}>(
|
||||
`${deps.baseUrl}/v1/environments/${environmentId}/work/${workId}/heartbeat`,
|
||||
{},
|
||||
{
|
||||
headers: getHeaders(sessionToken),
|
||||
timeout: 10_000,
|
||||
validateStatus: s => s < 500,
|
||||
},
|
||||
)
|
||||
|
||||
handleErrorStatus(response.status, response.data, 'Heartbeat')
|
||||
debug(
|
||||
`[bridge:api] POST .../work/${workId}/heartbeat -> ${response.status} lease_extended=${response.data.lease_extended} state=${response.data.state}`,
|
||||
)
|
||||
return response.data
|
||||
},
|
||||
|
||||
async sendPermissionResponseEvent(
|
||||
sessionId: string,
|
||||
event: PermissionResponseEvent,
|
||||
sessionToken: string,
|
||||
): Promise<void> {
|
||||
validateBridgeId(sessionId, 'sessionId')
|
||||
|
||||
debug(
|
||||
`[bridge:api] POST /v1/sessions/${sessionId}/events type=${event.type}`,
|
||||
)
|
||||
|
||||
const response = await axios.post(
|
||||
`${deps.baseUrl}/v1/sessions/${sessionId}/events`,
|
||||
{ events: [event] },
|
||||
{
|
||||
headers: getHeaders(sessionToken),
|
||||
timeout: 10_000,
|
||||
validateStatus: s => s < 500,
|
||||
},
|
||||
)
|
||||
|
||||
handleErrorStatus(
|
||||
response.status,
|
||||
response.data,
|
||||
'SendPermissionResponseEvent',
|
||||
)
|
||||
debug(
|
||||
`[bridge:api] POST /v1/sessions/${sessionId}/events -> ${response.status}`,
|
||||
)
|
||||
debug(`[bridge:api] >>> ${debugBody({ events: [event] })}`)
|
||||
debug(`[bridge:api] <<< ${debugBody(response.data)}`)
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
function handleErrorStatus(
|
||||
status: number,
|
||||
data: unknown,
|
||||
context: string,
|
||||
): void {
|
||||
if (status === 200 || status === 204) {
|
||||
return
|
||||
}
|
||||
const detail = extractErrorDetail(data)
|
||||
const errorType = extractErrorTypeFromData(data)
|
||||
switch (status) {
|
||||
case 401:
|
||||
throw new BridgeFatalError(
|
||||
`${context}: Authentication failed (401)${detail ? `: ${detail}` : ''}. ${BRIDGE_LOGIN_INSTRUCTION}`,
|
||||
401,
|
||||
errorType,
|
||||
)
|
||||
case 403:
|
||||
throw new BridgeFatalError(
|
||||
isExpiredErrorType(errorType)
|
||||
? 'Remote Control session has expired. Please restart with `claude remote-control` or /remote-control.'
|
||||
: `${context}: Access denied (403)${detail ? `: ${detail}` : ''}. Check your organization permissions.`,
|
||||
403,
|
||||
errorType,
|
||||
)
|
||||
case 404:
|
||||
throw new BridgeFatalError(
|
||||
detail ??
|
||||
`${context}: Not found (404). Remote Control may not be available for this organization.`,
|
||||
404,
|
||||
errorType,
|
||||
)
|
||||
case 410:
|
||||
throw new BridgeFatalError(
|
||||
detail ??
|
||||
'Remote Control session has expired. Please restart with `claude remote-control` or /remote-control.',
|
||||
410,
|
||||
errorType ?? 'environment_expired',
|
||||
)
|
||||
case 429:
|
||||
throw new Error(`${context}: Rate limited (429). Polling too frequently.`)
|
||||
default:
|
||||
throw new Error(
|
||||
`${context}: Failed with status ${status}${detail ? `: ${detail}` : ''}`,
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
/** Check whether an error type string indicates a session/environment expiry. */
|
||||
export function isExpiredErrorType(errorType: string | undefined): boolean {
|
||||
if (!errorType) {
|
||||
return false
|
||||
}
|
||||
return errorType.includes('expired') || errorType.includes('lifetime')
|
||||
}
|
||||
|
||||
/**
|
||||
* Check whether a BridgeFatalError is a suppressible 403 permission error.
|
||||
* These are 403 errors for scopes like 'external_poll_sessions' or operations
|
||||
* like StopWork that fail because the user's role lacks 'environments:manage'.
|
||||
* They don't affect core functionality and shouldn't be shown to users.
|
||||
*/
|
||||
export function isSuppressible403(err: BridgeFatalError): boolean {
|
||||
if (err.status !== 403) {
|
||||
return false
|
||||
}
|
||||
return (
|
||||
err.message.includes('external_poll_sessions') ||
|
||||
err.message.includes('environments:manage')
|
||||
)
|
||||
}
|
||||
|
||||
function extractErrorTypeFromData(data: unknown): string | undefined {
|
||||
if (data && typeof data === 'object') {
|
||||
if (
|
||||
'error' in data &&
|
||||
data.error &&
|
||||
typeof data.error === 'object' &&
|
||||
'type' in data.error &&
|
||||
typeof data.error.type === 'string'
|
||||
) {
|
||||
return data.error.type
|
||||
}
|
||||
}
|
||||
return undefined
|
||||
}
|
||||
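The fatal/transient split above drives caller behavior: a BridgeFatalError means stop and tear down (or silently suppress, per isSuppressible403), while any other rejection is a transient failure worth retrying with backoff. The sketch below is a hypothetical caller-side dispatch, not code from this commit; it re-declares a stripped-down BridgeFatalError locally so it runs without importing bridgeApi.ts, and copies the message checks from isSuppressible403.

```typescript
// Stripped-down mirror of BridgeFatalError, redeclared here so the
// sketch is self-contained (the real class lives in bridgeApi.ts).
class BridgeFatalError extends Error {
  constructor(
    message: string,
    readonly status: number,
    readonly errorType?: string,
  ) {
    super(message)
    this.name = 'BridgeFatalError'
  }
}

// Same check as isSuppressible403 in the source: 403s caused by a
// missing role/scope are noise, not outages.
function isSuppressible403(err: BridgeFatalError): boolean {
  return (
    err.status === 403 &&
    (err.message.includes('external_poll_sessions') ||
      err.message.includes('environments:manage'))
  )
}

type Outcome = 'teardown' | 'suppress' | 'retry'

// Hypothetical dispatch a poll-loop caller might use: fatal means
// teardown (unless suppressible), anything else means retry/backoff.
function classifyBridgeError(err: unknown): Outcome {
  if (err instanceof BridgeFatalError) {
    return isSuppressible403(err) ? 'suppress' : 'teardown'
  }
  return 'retry' // transient: 5xx / network rejection from axios
}

console.log(
  classifyBridgeError(
    new BridgeFatalError('Poll: expired', 410, 'environment_expired'),
  ),
) // teardown
console.log(classifyBridgeError(new Error('ECONNRESET'))) // retry
```

Note that the `instanceof` check is why handleErrorStatus throws plain `Error` for 429 and 5xx: those statuses deliberately fall into the retry branch.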
48
src/bridge/bridgeConfig.ts
Normal file
@ -0,0 +1,48 @@
/**
 * Shared bridge auth/URL resolution. Consolidates the ant-only
 * CLAUDE_BRIDGE_* dev overrides that were previously copy-pasted across
 * a dozen files — inboundAttachments, BriefTool/upload, bridgeMain,
 * initReplBridge, remoteBridgeCore, daemon workers, /rename,
 * /remote-control.
 *
 * Two layers: *Override() returns the ant-only env var (or undefined);
 * the non-Override versions fall through to the real OAuth store/config.
 * Callers that compose with a different auth source (e.g. daemon workers
 * using IPC auth) use the Override getters directly.
 */

import { getOauthConfig } from '../constants/oauth.js'
import { getClaudeAIOAuthTokens } from '../utils/auth.js'

/** Ant-only dev override: CLAUDE_BRIDGE_OAUTH_TOKEN, else undefined. */
export function getBridgeTokenOverride(): string | undefined {
  return (
    (process.env.USER_TYPE === 'ant' &&
      process.env.CLAUDE_BRIDGE_OAUTH_TOKEN) ||
    undefined
  )
}

/** Ant-only dev override: CLAUDE_BRIDGE_BASE_URL, else undefined. */
export function getBridgeBaseUrlOverride(): string | undefined {
  return (
    (process.env.USER_TYPE === 'ant' && process.env.CLAUDE_BRIDGE_BASE_URL) ||
    undefined
  )
}

/**
 * Access token for bridge API calls: dev override first, then the OAuth
 * keychain. Undefined means "not logged in".
 */
export function getBridgeAccessToken(): string | undefined {
  return getBridgeTokenOverride() ?? getClaudeAIOAuthTokens()?.accessToken
}

/**
 * Base URL for bridge API calls: dev override first, then the production
 * OAuth config. Always returns a URL.
 */
export function getBridgeBaseUrl(): string {
  return getBridgeBaseUrlOverride() ?? getOauthConfig().BASE_API_URL
}
135
src/bridge/bridgeDebug.ts
Normal file
@ -0,0 +1,135 @@
import { logForDebugging } from '../utils/debug.js'
import { BridgeFatalError } from './bridgeApi.js'
import type { BridgeApiClient } from './types.js'

/**
 * Ant-only fault injection for manually testing bridge recovery paths.
 *
 * Real failure modes this targets (BQ 2026-03-12, 7-day window):
 *   poll 404 not_found_error — 147K sessions/week, dead onEnvironmentLost gate
 *   ws_closed 1002/1006 — 22K sessions/week, zombie poll after close
 *   register transient failure — residual: network blips during doReconnect
 *
 * Usage: /bridge-kick <subcommand> from the REPL while Remote Control is
 * connected, then tail debug.log to watch the recovery machinery react.
 *
 * Module-level state is intentional here: one bridge per REPL process, the
 * /bridge-kick slash command has no other way to reach into initBridgeCore's
 * closures, and teardown clears the slot.
 */

/** One-shot fault to inject on the next matching api call. */
type BridgeFault = {
  method:
    | 'pollForWork'
    | 'registerBridgeEnvironment'
    | 'reconnectSession'
    | 'heartbeatWork'
  /** Fatal errors go through handleErrorStatus → BridgeFatalError. Transient
   * errors surface as plain axios rejections (5xx / network). Recovery code
   * distinguishes the two: fatal → teardown, transient → retry/backoff. */
  kind: 'fatal' | 'transient'
  status: number
  errorType?: string
  /** Remaining injections. Decremented on consume; removed at 0. */
  count: number
}

export type BridgeDebugHandle = {
  /** Invoke the transport's permanent-close handler directly. Tests the
   * ws_closed → reconnectEnvironmentWithSession escalation (#22148). */
  fireClose: (code: number) => void
  /** Call reconnectEnvironmentWithSession() — same as SIGUSR2 but
   * reachable from the slash command. */
  forceReconnect: () => void
  /** Queue a fault for the next N calls to the named api method. */
  injectFault: (fault: BridgeFault) => void
  /** Abort the at-capacity sleep so an injected poll fault lands
   * immediately instead of up to 10min later. */
  wakePollLoop: () => void
  /** env/session IDs for the debug.log grep. */
  describe: () => string
}

let debugHandle: BridgeDebugHandle | null = null
const faultQueue: BridgeFault[] = []

export function registerBridgeDebugHandle(h: BridgeDebugHandle): void {
  debugHandle = h
}

export function clearBridgeDebugHandle(): void {
  debugHandle = null
  faultQueue.length = 0
}

export function getBridgeDebugHandle(): BridgeDebugHandle | null {
  return debugHandle
}

export function injectBridgeFault(fault: BridgeFault): void {
  faultQueue.push(fault)
  logForDebugging(
    `[bridge:debug] Queued fault: ${fault.method} ${fault.kind}/${fault.status}${fault.errorType ? `/${fault.errorType}` : ''} ×${fault.count}`,
  )
}

/**
 * Wrap a BridgeApiClient so each call first checks the fault queue. If a
 * matching fault is queued, throw the specified error instead of calling
 * through. Delegates everything else to the real client.
 *
 * Only called when USER_TYPE === 'ant' — zero overhead in external builds.
 */
export function wrapApiForFaultInjection(
  api: BridgeApiClient,
): BridgeApiClient {
  function consume(method: BridgeFault['method']): BridgeFault | null {
    const idx = faultQueue.findIndex(f => f.method === method)
    if (idx === -1) return null
    const fault = faultQueue[idx]!
    fault.count--
    if (fault.count <= 0) faultQueue.splice(idx, 1)
    return fault
  }

  function throwFault(fault: BridgeFault, context: string): never {
    logForDebugging(
      `[bridge:debug] Injecting ${fault.kind} fault into ${context}: status=${fault.status} errorType=${fault.errorType ?? 'none'}`,
    )
    if (fault.kind === 'fatal') {
      throw new BridgeFatalError(
        `[injected] ${context} ${fault.status}`,
        fault.status,
        fault.errorType,
      )
    }
    // Transient: mimic an axios rejection (5xx / network). No .status on
    // the error itself — that's how the catch blocks distinguish.
    throw new Error(`[injected transient] ${context} ${fault.status}`)
  }

  return {
    ...api,
    async pollForWork(envId, secret, signal, reclaimMs) {
      const f = consume('pollForWork')
      if (f) throwFault(f, 'Poll')
      return api.pollForWork(envId, secret, signal, reclaimMs)
    },
    async registerBridgeEnvironment(config) {
      const f = consume('registerBridgeEnvironment')
      if (f) throwFault(f, 'Registration')
      return api.registerBridgeEnvironment(config)
    },
    async reconnectSession(envId, sessionId) {
      const f = consume('reconnectSession')
      if (f) throwFault(f, 'ReconnectSession')
      return api.reconnectSession(envId, sessionId)
    },
    async heartbeatWork(envId, workId, token) {
      const f = consume('heartbeatWork')
      if (f) throwFault(f, 'Heartbeat')
      return api.heartbeatWork(envId, workId, token)
    },
  }
}
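The count semantics of the fault queue (decrement on each matching call, drop the entry at zero) can be exercised in isolation. This is a standalone mirror of the consume() logic with a simplified Fault shape, not the module itself, showing a ×2 fault firing twice and then clearing:

```typescript
// Standalone mirror of the fault-queue consume logic from
// wrapApiForFaultInjection — simplified shape, no bridge dependencies.
type Fault = { method: string; count: number }

const queue: Fault[] = []

function consume(method: string): Fault | null {
  const idx = queue.findIndex(f => f.method === method)
  if (idx === -1) return null
  const fault = queue[idx]!
  fault.count-- // decremented on consume
  if (fault.count <= 0) queue.splice(idx, 1) // removed at 0
  return fault
}

queue.push({ method: 'pollForWork', count: 2 })
console.log(consume('pollForWork') !== null) // true (one injection left)
console.log(consume('pollForWork') !== null) // true (queue now empty)
console.log(consume('pollForWork')) // null
```

The real implementation matches on the first queued fault for the method, so two queued faults for the same method fire in FIFO order.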
202
src/bridge/bridgeEnabled.ts
Normal file
@ -0,0 +1,202 @@
import { feature } from 'bun:bundle'
import {
  checkGate_CACHED_OR_BLOCKING,
  getDynamicConfig_CACHED_MAY_BE_STALE,
  getFeatureValue_CACHED_MAY_BE_STALE,
} from '../services/analytics/growthbook.js'
// Namespace import breaks the bridgeEnabled → auth → config → bridgeEnabled
// cycle — authModule.foo is a live binding, so by the time the helpers below
// call it, auth.js is fully loaded. Previously used require() for the same
// deferral, but require() hits a CJS cache that diverges from the ESM
// namespace after mock.module() (daemon/auth.test.ts), breaking spyOn.
import * as authModule from '../utils/auth.js'
import { isEnvTruthy } from '../utils/envUtils.js'
import { lt } from '../utils/semver.js'

/**
 * Runtime check for bridge mode entitlement.
 *
 * Remote Control requires a claude.ai subscription (the bridge auths to CCR
 * with the claude.ai OAuth token). isClaudeAISubscriber() excludes
 * Bedrock/Vertex/Foundry, apiKeyHelper/gateway deployments, env-var API keys,
 * and Console API logins — none of which have the OAuth token CCR needs.
 * See github.com/deshaw/anthropic-issues/issues/24.
 *
 * The `feature('BRIDGE_MODE')` guard ensures the GrowthBook string literal
 * is only referenced when bridge mode is enabled at build time.
 */
export function isBridgeEnabled(): boolean {
  // Positive ternary pattern — see docs/feature-gating.md.
  // Negative pattern (if (!feature(...)) return) does not eliminate
  // inline string literals from external builds.
  return feature('BRIDGE_MODE')
    ? isClaudeAISubscriber() &&
        getFeatureValue_CACHED_MAY_BE_STALE('tengu_ccr_bridge', false)
    : false
}

/**
 * Blocking entitlement check for Remote Control.
 *
 * Returns cached `true` immediately (fast path). If the disk cache says
 * `false` or is missing, awaits GrowthBook init and fetches the fresh
 * server value (slow path, max ~5s), then writes it to disk.
 *
 * Use at entitlement gates where a stale `false` would unfairly block access.
 * For user-facing error paths, prefer `getBridgeDisabledReason()` which gives
 * a specific diagnostic. For render-body UI visibility checks, use
 * `isBridgeEnabled()` instead.
 */
export async function isBridgeEnabledBlocking(): Promise<boolean> {
  return feature('BRIDGE_MODE')
    ? isClaudeAISubscriber() &&
        (await checkGate_CACHED_OR_BLOCKING('tengu_ccr_bridge'))
    : false
}

/**
 * Diagnostic message for why Remote Control is unavailable, or null if
 * it's enabled. Call this instead of a bare `isBridgeEnabledBlocking()`
 * check when you need to show the user an actionable error.
 *
 * The GrowthBook gate targets on organizationUUID, which comes from
 * config.oauthAccount — populated by /api/oauth/profile during login.
 * That endpoint requires the user:profile scope. Tokens without it
 * (setup-token, CLAUDE_CODE_OAUTH_TOKEN env var, or pre-scope-expansion
 * logins) leave oauthAccount unpopulated, so the gate falls back to
 * false and users see a dead-end "not enabled" message with no hint
 * that re-login would fix it. See CC-1165 / gh-33105.
 */
export async function getBridgeDisabledReason(): Promise<string | null> {
  if (feature('BRIDGE_MODE')) {
    if (!isClaudeAISubscriber()) {
      return 'Remote Control requires a claude.ai subscription. Run `claude auth login` to sign in with your claude.ai account.'
    }
    if (!hasProfileScope()) {
      return 'Remote Control requires a full-scope login token. Long-lived tokens (from `claude setup-token` or CLAUDE_CODE_OAUTH_TOKEN) are limited to inference-only for security reasons. Run `claude auth login` to use Remote Control.'
    }
    if (!getOauthAccountInfo()?.organizationUuid) {
      return 'Unable to determine your organization for Remote Control eligibility. Run `claude auth login` to refresh your account information.'
    }
    if (!(await checkGate_CACHED_OR_BLOCKING('tengu_ccr_bridge'))) {
      return 'Remote Control is not yet enabled for your account.'
    }
    return null
  }
  return 'Remote Control is not available in this build.'
}

// try/catch: main.tsx:5698 calls isBridgeEnabled() while defining the Commander
// program, before enableConfigs() runs. isClaudeAISubscriber() → getGlobalConfig()
// throws "Config accessed before allowed" there. Pre-config, no OAuth token can
// exist anyway — false is correct. Same swallow getFeatureValue_CACHED_MAY_BE_STALE
// already does at growthbook.ts:775-780.
function isClaudeAISubscriber(): boolean {
  try {
    return authModule.isClaudeAISubscriber()
  } catch {
    return false
  }
}
function hasProfileScope(): boolean {
  try {
    return authModule.hasProfileScope()
  } catch {
    return false
  }
}
function getOauthAccountInfo(): ReturnType<
  typeof authModule.getOauthAccountInfo
> {
  try {
    return authModule.getOauthAccountInfo()
  } catch {
    return undefined
  }
}

/**
 * Runtime check for the env-less (v2) REPL bridge path.
 * Returns true when the GrowthBook flag `tengu_bridge_repl_v2` is enabled.
 *
 * This gates which implementation initReplBridge uses — NOT whether bridge
 * is available at all (see isBridgeEnabled above). Daemon/print paths stay
 * on the env-based implementation regardless of this gate.
 */
export function isEnvLessBridgeEnabled(): boolean {
  return feature('BRIDGE_MODE')
    ? getFeatureValue_CACHED_MAY_BE_STALE('tengu_bridge_repl_v2', false)
    : false
}

/**
 * Kill-switch for the `cse_*` → `session_*` client-side retag shim.
 *
 * The shim exists because compat/convert.go:27 validates TagSession and the
 * claude.ai frontend routes on `session_*`, while v2 worker endpoints hand out
 * `cse_*`. Once the server tags by environment_kind and the frontend accepts
 * `cse_*` directly, flip this to false to make toCompatSessionId a no-op.
 * Defaults to true — the shim stays active until explicitly disabled.
 */
export function isCseShimEnabled(): boolean {
  return feature('BRIDGE_MODE')
    ? getFeatureValue_CACHED_MAY_BE_STALE(
        'tengu_bridge_repl_v2_cse_shim_enabled',
        true,
      )
    : true
}

/**
 * Returns an error message if the current CLI version is below the
 * minimum required for the v1 (env-based) Remote Control path, or null if the
 * version is fine. The v2 (env-less) path uses checkEnvLessBridgeMinVersion()
 * in envLessBridgeConfig.ts instead — the two implementations have independent
 * version floors.
 *
 * Uses cached (non-blocking) GrowthBook config. If GrowthBook hasn't
 * loaded yet, the default '0.0.0' means the check passes — a safe fallback.
 */
export function checkBridgeMinVersion(): string | null {
  // Positive pattern — see docs/feature-gating.md.
  // Negative pattern (if (!feature(...)) return) does not eliminate
  // inline string literals from external builds.
  if (feature('BRIDGE_MODE')) {
    const config = getDynamicConfig_CACHED_MAY_BE_STALE<{
      minVersion: string
    }>('tengu_bridge_min_version', { minVersion: '0.0.0' })
    if (config.minVersion && lt(MACRO.VERSION, config.minVersion)) {
      return `Your version of Claude Code (${MACRO.VERSION}) is too old for Remote Control.\nVersion ${config.minVersion} or higher is required. Run \`claude update\` to update.`
    }
  }
  return null
}

/**
 * Default for remoteControlAtStartup when the user hasn't explicitly set it.
 * When the CCR_AUTO_CONNECT build flag is present (ant-only) and the
 * tengu_cobalt_harbor GrowthBook gate is on, all sessions connect to CCR by
 * default — the user can still opt out by setting remoteControlAtStartup=false
 * in config (explicit settings always win over this default).
 *
 * Defined here rather than in config.ts to avoid a direct
 * config.ts → growthbook.ts import cycle (growthbook.ts → user.ts → config.ts).
 */
export function getCcrAutoConnectDefault(): boolean {
  return feature('CCR_AUTO_CONNECT')
    ? getFeatureValue_CACHED_MAY_BE_STALE('tengu_cobalt_harbor', false)
    : false
}

/**
 * Opt-in CCR mirror mode — every local session spawns an outbound-only
 * Remote Control session that receives forwarded events. Separate from
 * getCcrAutoConnectDefault (bidirectional Remote Control). Env var wins for
 * local opt-in; GrowthBook controls rollout.
 */
export function isCcrMirrorEnabled(): boolean {
  return feature('CCR_MIRROR')
    ? isEnvTruthy(process.env.CLAUDE_CODE_CCR_MIRROR) ||
        getFeatureValue_CACHED_MAY_BE_STALE('tengu_ccr_mirror', false)
    : false
}
2999
src/bridge/bridgeMain.ts
Normal file
File diff suppressed because it is too large
Load Diff
461
src/bridge/bridgeMessaging.ts
Normal file
@ -0,0 +1,461 @@
/**
 * Shared transport-layer helpers for bridge message handling.
 *
 * Extracted from replBridge.ts so both the env-based core (initBridgeCore)
 * and the env-less core (initEnvLessBridgeCore) can use the same ingress
 * parsing, control-request handling, and echo-dedup machinery.
 *
 * Everything here is pure — no closure over bridge-specific state. All
 * collaborators (transport, sessionId, UUID sets, callbacks) are passed
 * as params.
 */

import { randomUUID } from 'crypto'
import type { SDKMessage } from '../entrypoints/agentSdkTypes.js'
import type {
  SDKControlRequest,
  SDKControlResponse,
} from '../entrypoints/sdk/controlTypes.js'
import type { SDKResultSuccess } from '../entrypoints/sdk/coreTypes.js'
import { logEvent } from '../services/analytics/index.js'
import { EMPTY_USAGE } from '../services/api/emptyUsage.js'
import type { Message } from '../types/message.js'
import { normalizeControlMessageKeys } from '../utils/controlMessageCompat.js'
import { logForDebugging } from '../utils/debug.js'
import { stripDisplayTagsAllowEmpty } from '../utils/displayTags.js'
import { errorMessage } from '../utils/errors.js'
import type { PermissionMode } from '../utils/permissions/PermissionMode.js'
import { jsonParse } from '../utils/slowOperations.js'
import type { ReplBridgeTransport } from './replBridgeTransport.js'

// ─── Type guards ─────────────────────────────────────────────────────────────

/** Type predicate for parsed WebSocket messages. SDKMessage is a
 * discriminated union on `type` — validating the discriminant is
 * sufficient for the predicate; callers narrow further via the union. */
export function isSDKMessage(value: unknown): value is SDKMessage {
  return (
    value !== null &&
    typeof value === 'object' &&
    'type' in value &&
    typeof value.type === 'string'
  )
}

/** Type predicate for control_response messages from the server. */
|
||||
export function isSDKControlResponse(
|
||||
value: unknown,
|
||||
): value is SDKControlResponse {
|
||||
return (
|
||||
value !== null &&
|
||||
typeof value === 'object' &&
|
||||
'type' in value &&
|
||||
value.type === 'control_response' &&
|
||||
'response' in value
|
||||
)
|
||||
}
|
||||
|
||||
/** Type predicate for control_request messages from the server. */
|
||||
export function isSDKControlRequest(
|
||||
value: unknown,
|
||||
): value is SDKControlRequest {
|
||||
return (
|
||||
value !== null &&
|
||||
typeof value === 'object' &&
|
||||
'type' in value &&
|
||||
value.type === 'control_request' &&
|
||||
'request_id' in value &&
|
||||
'request' in value
|
||||
)
|
||||
}
|
||||
|
||||
/**
|
||||
* True for message types that should be forwarded to the bridge transport.
|
||||
* The server only wants user/assistant turns and slash-command system events;
|
||||
* everything else (tool_result, progress, etc.) is internal REPL chatter.
|
||||
*/
|
||||
export function isEligibleBridgeMessage(m: Message): boolean {
|
||||
// Virtual messages (REPL inner calls) are display-only — bridge/SDK
|
||||
// consumers see the REPL tool_use/result which summarizes the work.
|
||||
if ((m.type === 'user' || m.type === 'assistant') && m.isVirtual) {
|
||||
return false
|
||||
}
|
||||
return (
|
||||
m.type === 'user' ||
|
||||
m.type === 'assistant' ||
|
||||
(m.type === 'system' && m.subtype === 'local_command')
|
||||
)
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract title-worthy text from a Message for onUserMessage. Returns
|
||||
* undefined for messages that shouldn't title the session: non-user, meta
|
||||
* (nudges), tool results, compact summaries, non-human origins (task
|
||||
* notifications, channel messages), or pure display-tag content
|
||||
* (<ide_opened_file>, <session-start-hook>, etc.).
|
||||
*
|
||||
* Synthetic interrupts ([Request interrupted by user]) are NOT filtered here —
|
||||
* isSyntheticMessage lives in messages.ts (heavy import, pulls command
|
||||
* registry). The initialMessages path in initReplBridge checks it; the
|
||||
* writeMessages path reaching an interrupt as the *first* message is
|
||||
* implausible (an interrupt implies a prior prompt already flowed through).
|
||||
*/
|
||||
export function extractTitleText(m: Message): string | undefined {
|
||||
if (m.type !== 'user' || m.isMeta || m.toolUseResult || m.isCompactSummary)
|
||||
return undefined
|
||||
if (m.origin && m.origin.kind !== 'human') return undefined
|
||||
const content = m.message.content
|
||||
let raw: string | undefined
|
||||
if (typeof content === 'string') {
|
||||
raw = content
|
||||
} else {
|
||||
for (const block of content) {
|
||||
if (block.type === 'text') {
|
||||
raw = block.text
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
if (!raw) return undefined
|
||||
const clean = stripDisplayTagsAllowEmpty(raw)
|
||||
return clean || undefined
|
||||
}
|
||||
|
||||
// ─── Ingress routing ─────────────────────────────────────────────────────────
|
||||
|
||||
/**
|
||||
* Parse an ingress WebSocket message and route it to the appropriate handler.
|
||||
* Ignores messages whose UUID is in recentPostedUUIDs (echoes of what we sent)
|
||||
* or in recentInboundUUIDs (re-deliveries we've already forwarded — e.g.
|
||||
* server replayed history after a transport swap lost the seq-num cursor).
|
||||
*/
|
||||
export function handleIngressMessage(
|
||||
data: string,
|
||||
recentPostedUUIDs: BoundedUUIDSet,
|
||||
recentInboundUUIDs: BoundedUUIDSet,
|
||||
onInboundMessage: ((msg: SDKMessage) => void | Promise<void>) | undefined,
|
||||
onPermissionResponse?: ((response: SDKControlResponse) => void) | undefined,
|
||||
onControlRequest?: ((request: SDKControlRequest) => void) | undefined,
|
||||
): void {
|
||||
try {
|
||||
const parsed: unknown = normalizeControlMessageKeys(jsonParse(data))
|
||||
|
||||
// control_response is not an SDKMessage — check before the type guard
|
||||
if (isSDKControlResponse(parsed)) {
|
||||
logForDebugging('[bridge:repl] Ingress message type=control_response')
|
||||
onPermissionResponse?.(parsed)
|
||||
return
|
||||
}
|
||||
|
||||
// control_request from the server (initialize, set_model, can_use_tool).
|
||||
// Must respond promptly or the server kills the WS (~10-14s timeout).
|
||||
if (isSDKControlRequest(parsed)) {
|
||||
logForDebugging(
|
||||
`[bridge:repl] Inbound control_request subtype=${parsed.request.subtype}`,
|
||||
)
|
||||
onControlRequest?.(parsed)
|
||||
return
|
||||
}
|
||||
|
||||
if (!isSDKMessage(parsed)) return
|
||||
|
||||
// Check for UUID to detect echoes of our own messages
|
||||
const uuid =
|
||||
'uuid' in parsed && typeof parsed.uuid === 'string'
|
||||
? parsed.uuid
|
||||
: undefined
|
||||
|
||||
if (uuid && recentPostedUUIDs.has(uuid)) {
|
||||
logForDebugging(
|
||||
`[bridge:repl] Ignoring echo: type=${parsed.type} uuid=${uuid}`,
|
||||
)
|
||||
return
|
||||
}
|
||||
|
||||
// Defensive dedup: drop inbound prompts we've already forwarded. The
|
||||
// SSE seq-num carryover (lastTransportSequenceNum) is the primary fix
|
||||
// for history-replay; this catches edge cases where that negotiation
|
||||
// fails (server ignores from_sequence_num, transport died before
|
||||
// receiving any frames, etc).
|
||||
if (uuid && recentInboundUUIDs.has(uuid)) {
|
||||
logForDebugging(
|
||||
`[bridge:repl] Ignoring re-delivered inbound: type=${parsed.type} uuid=${uuid}`,
|
||||
)
|
||||
return
|
||||
}
|
||||
|
||||
logForDebugging(
|
||||
`[bridge:repl] Ingress message type=${parsed.type}${uuid ? ` uuid=${uuid}` : ''}`,
|
||||
)
|
||||
|
||||
if (parsed.type === 'user') {
|
||||
if (uuid) recentInboundUUIDs.add(uuid)
|
||||
logEvent('tengu_bridge_message_received', {
|
||||
is_repl: true,
|
||||
})
|
||||
// Fire-and-forget — handler may be async (attachment resolution).
|
||||
void onInboundMessage?.(parsed)
|
||||
} else {
|
||||
logForDebugging(
|
||||
`[bridge:repl] Ignoring non-user inbound message: type=${parsed.type}`,
|
||||
)
|
||||
}
|
||||
} catch (err) {
|
||||
logForDebugging(
|
||||
`[bridge:repl] Failed to parse ingress message: ${errorMessage(err)}`,
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
// ─── Server-initiated control requests ───────────────────────────────────────
|
||||
|
||||
export type ServerControlRequestHandlers = {
|
||||
transport: ReplBridgeTransport | null
|
||||
sessionId: string
|
||||
/**
|
||||
* When true, all mutable requests (interrupt, set_model, set_permission_mode,
|
||||
* set_max_thinking_tokens) reply with an error instead of false-success.
|
||||
* initialize still replies success — the server kills the connection otherwise.
|
||||
* Used by the outbound-only bridge mode and the SDK's /bridge subpath so claude.ai sees a
|
||||
* proper error instead of "action succeeded but nothing happened locally".
|
||||
*/
|
||||
outboundOnly?: boolean
|
||||
onInterrupt?: () => void
|
||||
onSetModel?: (model: string | undefined) => void
|
||||
onSetMaxThinkingTokens?: (maxTokens: number | null) => void
|
||||
onSetPermissionMode?: (
|
||||
mode: PermissionMode,
|
||||
) => { ok: true } | { ok: false; error: string }
|
||||
}
|
||||
|
||||
const OUTBOUND_ONLY_ERROR =
|
||||
'This session is outbound-only. Enable Remote Control locally to allow inbound control.'
|
||||
|
||||
/**
|
||||
* Respond to inbound control_request messages from the server. The server
|
||||
* sends these for session lifecycle events (initialize, set_model) and
|
||||
* for turn-level coordination (interrupt, set_max_thinking_tokens). If we
|
||||
* don't respond, the server hangs and kills the WS after ~10-14s.
|
||||
*
|
||||
* Previously a closure inside initBridgeCore's onWorkReceived; now takes
|
||||
* collaborators as params so both cores can use it.
|
||||
*/
|
||||
export function handleServerControlRequest(
|
||||
request: SDKControlRequest,
|
||||
handlers: ServerControlRequestHandlers,
|
||||
): void {
|
||||
const {
|
||||
transport,
|
||||
sessionId,
|
||||
outboundOnly,
|
||||
onInterrupt,
|
||||
onSetModel,
|
||||
onSetMaxThinkingTokens,
|
||||
onSetPermissionMode,
|
||||
} = handlers
|
||||
if (!transport) {
|
||||
logForDebugging(
|
||||
'[bridge:repl] Cannot respond to control_request: transport not configured',
|
||||
)
|
||||
return
|
||||
}
|
||||
|
||||
let response: SDKControlResponse
|
||||
|
||||
// Outbound-only: reply error for mutable requests so claude.ai doesn't show
|
||||
// false success. initialize must still succeed (server kills the connection
|
||||
// if it doesn't — see comment above).
|
||||
if (outboundOnly && request.request.subtype !== 'initialize') {
|
||||
response = {
|
||||
type: 'control_response',
|
||||
response: {
|
||||
subtype: 'error',
|
||||
request_id: request.request_id,
|
||||
error: OUTBOUND_ONLY_ERROR,
|
||||
},
|
||||
}
|
||||
const event = { ...response, session_id: sessionId }
|
||||
void transport.write(event)
|
||||
logForDebugging(
|
||||
`[bridge:repl] Rejected ${request.request.subtype} (outbound-only) request_id=${request.request_id}`,
|
||||
)
|
||||
return
|
||||
}
|
||||
|
||||
switch (request.request.subtype) {
|
||||
case 'initialize':
|
||||
// Respond with minimal capabilities — the REPL handles
|
||||
// commands, models, and account info itself.
|
||||
response = {
|
||||
type: 'control_response',
|
||||
response: {
|
||||
subtype: 'success',
|
||||
request_id: request.request_id,
|
||||
response: {
|
||||
commands: [],
|
||||
output_style: 'normal',
|
||||
available_output_styles: ['normal'],
|
||||
models: [],
|
||||
account: {},
|
||||
pid: process.pid,
|
||||
},
|
||||
},
|
||||
}
|
||||
break
|
||||
|
||||
case 'set_model':
|
||||
onSetModel?.(request.request.model)
|
||||
response = {
|
||||
type: 'control_response',
|
||||
response: {
|
||||
subtype: 'success',
|
||||
request_id: request.request_id,
|
||||
},
|
||||
}
|
||||
break
|
||||
|
||||
case 'set_max_thinking_tokens':
|
||||
onSetMaxThinkingTokens?.(request.request.max_thinking_tokens)
|
||||
response = {
|
||||
type: 'control_response',
|
||||
response: {
|
||||
subtype: 'success',
|
||||
request_id: request.request_id,
|
||||
},
|
||||
}
|
||||
break
|
||||
|
||||
case 'set_permission_mode': {
|
||||
// The callback returns a policy verdict so we can send an error
|
||||
// control_response without importing isAutoModeGateEnabled /
|
||||
// isBypassPermissionsModeDisabled here (bootstrap-isolation). If no
|
||||
// callback is registered (daemon context, which doesn't wire this —
|
||||
// see daemonBridge.ts), return an error verdict rather than a silent
|
||||
// false-success: the mode is never actually applied in that context,
|
||||
// so success would lie to the client.
|
||||
const verdict = onSetPermissionMode?.(request.request.mode) ?? {
|
||||
ok: false,
|
||||
error:
|
||||
'set_permission_mode is not supported in this context (onSetPermissionMode callback not registered)',
|
||||
}
|
||||
if (verdict.ok) {
|
||||
response = {
|
||||
type: 'control_response',
|
||||
response: {
|
||||
subtype: 'success',
|
||||
request_id: request.request_id,
|
||||
},
|
||||
}
|
||||
} else {
|
||||
response = {
|
||||
type: 'control_response',
|
||||
response: {
|
||||
subtype: 'error',
|
||||
request_id: request.request_id,
|
||||
error: verdict.error,
|
||||
},
|
||||
}
|
||||
}
|
||||
break
|
||||
}
|
||||
|
||||
case 'interrupt':
|
||||
onInterrupt?.()
|
||||
response = {
|
||||
type: 'control_response',
|
||||
response: {
|
||||
subtype: 'success',
|
||||
request_id: request.request_id,
|
||||
},
|
||||
}
|
||||
break
|
||||
|
||||
default:
|
||||
// Unknown subtype — respond with error so the server doesn't
|
||||
// hang waiting for a reply that never comes.
|
||||
response = {
|
||||
type: 'control_response',
|
||||
response: {
|
||||
subtype: 'error',
|
||||
request_id: request.request_id,
|
||||
error: `REPL bridge does not handle control_request subtype: ${request.request.subtype}`,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
const event = { ...response, session_id: sessionId }
|
||||
void transport.write(event)
|
||||
logForDebugging(
|
||||
`[bridge:repl] Sent control_response for ${request.request.subtype} request_id=${request.request_id} result=${response.response.subtype}`,
|
||||
)
|
||||
}
|
||||
|
||||
// ─── Result message (for session archival on teardown) ───────────────────────
|
||||
|
||||
/**
|
||||
* Build a minimal `SDKResultSuccess` message for session archival.
|
||||
* The server needs this event before a WS close to trigger archival.
|
||||
*/
|
||||
export function makeResultMessage(sessionId: string): SDKResultSuccess {
|
||||
return {
|
||||
type: 'result',
|
||||
subtype: 'success',
|
||||
duration_ms: 0,
|
||||
duration_api_ms: 0,
|
||||
is_error: false,
|
||||
num_turns: 0,
|
||||
result: '',
|
||||
stop_reason: null,
|
||||
total_cost_usd: 0,
|
||||
usage: { ...EMPTY_USAGE },
|
||||
modelUsage: {},
|
||||
permission_denials: [],
|
||||
session_id: sessionId,
|
||||
uuid: randomUUID(),
|
||||
}
|
||||
}
|
||||
|
||||
// ─── BoundedUUIDSet (echo-dedup ring buffer) ─────────────────────────────────
|
||||
|
||||
/**
|
||||
* FIFO-bounded set backed by a circular buffer. Evicts the oldest entry
|
||||
* when capacity is reached, keeping memory usage constant at O(capacity).
|
||||
*
|
||||
* Messages are added in chronological order, so evicted entries are always
|
||||
* the oldest. The caller relies on external ordering (the hook's
|
||||
* lastWrittenIndexRef) as the primary dedup — this set is a secondary
|
||||
* safety net for echo filtering and race-condition dedup.
|
||||
*/
|
||||
export class BoundedUUIDSet {
|
||||
private readonly capacity: number
|
||||
private readonly ring: (string | undefined)[]
|
||||
private readonly set = new Set<string>()
|
||||
private writeIdx = 0
|
||||
|
||||
constructor(capacity: number) {
|
||||
this.capacity = capacity
|
||||
this.ring = new Array<string | undefined>(capacity)
|
||||
}
|
||||
|
||||
add(uuid: string): void {
|
||||
if (this.set.has(uuid)) return
|
||||
// Evict the entry at the current write position (if occupied)
|
||||
const evicted = this.ring[this.writeIdx]
|
||||
if (evicted !== undefined) {
|
||||
this.set.delete(evicted)
|
||||
}
|
||||
this.ring[this.writeIdx] = uuid
|
||||
this.set.add(uuid)
|
||||
this.writeIdx = (this.writeIdx + 1) % this.capacity
|
||||
}
|
||||
|
||||
has(uuid: string): boolean {
|
||||
return this.set.has(uuid)
|
||||
}
|
||||
|
||||
clear(): void {
|
||||
this.set.clear()
|
||||
this.ring.fill(undefined)
|
||||
this.writeIdx = 0
|
||||
}
|
||||
}
|
||||
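The FIFO eviction of `BoundedUUIDSet` can be exercised standalone. This is a minimal copy of the class (dropping `clear`) plus a demo: with capacity 2, adding a third UUID evicts the oldest.

```typescript
// Standalone copy of the ring-buffer dedup set to illustrate eviction.
class BoundedUUIDSet {
  private readonly capacity: number
  private readonly ring: (string | undefined)[]
  private readonly set = new Set<string>()
  private writeIdx = 0

  constructor(capacity: number) {
    this.capacity = capacity
    this.ring = new Array<string | undefined>(capacity)
  }

  add(uuid: string): void {
    if (this.set.has(uuid)) return
    // Evict whatever occupies the current write slot, then overwrite it.
    const evicted = this.ring[this.writeIdx]
    if (evicted !== undefined) this.set.delete(evicted)
    this.ring[this.writeIdx] = uuid
    this.set.add(uuid)
    this.writeIdx = (this.writeIdx + 1) % this.capacity
  }

  has(uuid: string): boolean {
    return this.set.has(uuid)
  }
}

const seen = new BoundedUUIDSet(2)
seen.add('uuid-a')
seen.add('uuid-b')
seen.add('uuid-c') // capacity reached: evicts 'uuid-a'
console.log(seen.has('uuid-a'), seen.has('uuid-b'), seen.has('uuid-c'))
// false true true
```

Memory stays at O(capacity) no matter how many UUIDs flow through, which is why it is safe as a long-lived echo filter.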
src/bridge/bridgePermissionCallbacks.ts (new file, 43 lines)
import type { PermissionUpdate } from '../utils/permissions/PermissionUpdateSchema.js'

type BridgePermissionResponse = {
  behavior: 'allow' | 'deny'
  updatedInput?: Record<string, unknown>
  updatedPermissions?: PermissionUpdate[]
  message?: string
}

type BridgePermissionCallbacks = {
  sendRequest(
    requestId: string,
    toolName: string,
    input: Record<string, unknown>,
    toolUseId: string,
    description: string,
    permissionSuggestions?: PermissionUpdate[],
    blockedPath?: string,
  ): void
  sendResponse(requestId: string, response: BridgePermissionResponse): void
  /** Cancel a pending control_request so the web app can dismiss its prompt. */
  cancelRequest(requestId: string): void
  onResponse(
    requestId: string,
    handler: (response: BridgePermissionResponse) => void,
  ): () => void // returns unsubscribe
}

/** Type predicate for validating a parsed control_response payload
 * as a BridgePermissionResponse. Checks the required `behavior`
 * discriminant rather than using an unsafe `as` cast. */
function isBridgePermissionResponse(
  value: unknown,
): value is BridgePermissionResponse {
  if (!value || typeof value !== 'object') return false
  return (
    'behavior' in value &&
    (value.behavior === 'allow' || value.behavior === 'deny')
  )
}

export { isBridgePermissionResponse }
export type { BridgePermissionCallbacks, BridgePermissionResponse }
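The discriminant check can be seen end-to-end on untrusted JSON. This standalone sketch re-declares a trimmed `BridgePermissionResponse` and the same guard, then filters a mix of valid and invalid payloads:

```typescript
// Trimmed copy of the type and guard from bridgePermissionCallbacks.ts,
// applied to parsed-but-unvalidated JSON values.
type BridgePermissionResponse = {
  behavior: 'allow' | 'deny'
  message?: string
}

function isBridgePermissionResponse(
  value: unknown,
): value is BridgePermissionResponse {
  if (!value || typeof value !== 'object') return false
  return (
    'behavior' in value &&
    (value.behavior === 'allow' || value.behavior === 'deny')
  )
}

const payloads: unknown[] = [
  JSON.parse('{"behavior":"allow"}'),
  JSON.parse('{"behavior":"maybe"}'), // wrong discriminant value
  JSON.parse('"deny"'), // right word, not an object
  null,
]
const valid = payloads.filter(isBridgePermissionResponse)
console.log(valid.length) // 1
```

After the filter, TypeScript narrows each element to `BridgePermissionResponse`, so `valid[0].behavior` type-checks with no cast.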
src/bridge/bridgePointer.ts (new file, 210 lines)
import { mkdir, readFile, stat, unlink, writeFile } from 'fs/promises'
import { dirname, join } from 'path'
import { z } from 'zod/v4'
import { logForDebugging } from '../utils/debug.js'
import { isENOENT } from '../utils/errors.js'
import { getWorktreePathsPortable } from '../utils/getWorktreePathsPortable.js'
import { lazySchema } from '../utils/lazySchema.js'
import {
  getProjectsDir,
  sanitizePath,
} from '../utils/sessionStoragePortable.js'
import { jsonParse, jsonStringify } from '../utils/slowOperations.js'

/**
 * Upper bound on worktree fanout. git worktree list is naturally bounded
 * (50 is a LOT), but this caps the parallel stat() burst and guards against
 * pathological setups. Above this, --continue falls back to current-dir-only.
 */
const MAX_WORKTREE_FANOUT = 50

/**
 * Crash-recovery pointer for Remote Control sessions.
 *
 * Written immediately after a bridge session is created, periodically
 * refreshed during the session, and cleared on clean shutdown. If the
 * process dies unclean (crash, kill -9, terminal closed), the pointer
 * persists. On next startup, `claude remote-control` detects it and offers
 * to resume via the --session-id flow from #20460.
 *
 * Staleness is checked against the file's mtime (not an embedded timestamp)
 * so that a periodic re-write with the same content serves as a refresh —
 * matches the backend's rolling BRIDGE_LAST_POLL_TTL (4h) semantics. A
 * bridge that's been polling for 5+ hours and then crashes still has a
 * fresh pointer as long as the refresh ran within the window.
 *
 * Scoped per working directory (alongside transcript JSONL files) so two
 * concurrent bridges in different repos don't clobber each other.
 */

export const BRIDGE_POINTER_TTL_MS = 4 * 60 * 60 * 1000

const BridgePointerSchema = lazySchema(() =>
  z.object({
    sessionId: z.string(),
    environmentId: z.string(),
    source: z.enum(['standalone', 'repl']),
  }),
)

export type BridgePointer = z.infer<ReturnType<typeof BridgePointerSchema>>

export function getBridgePointerPath(dir: string): string {
  return join(getProjectsDir(), sanitizePath(dir), 'bridge-pointer.json')
}

/**
 * Write the pointer. Also used to refresh mtime during long sessions —
 * calling with the same IDs is a cheap no-content-change write that bumps
 * the staleness clock. Best-effort — a crash-recovery file must never
 * itself cause a crash. Logs and swallows on error.
 */
export async function writeBridgePointer(
  dir: string,
  pointer: BridgePointer,
): Promise<void> {
  const path = getBridgePointerPath(dir)
  try {
    await mkdir(dirname(path), { recursive: true })
    await writeFile(path, jsonStringify(pointer), 'utf8')
    logForDebugging(`[bridge:pointer] wrote ${path}`)
  } catch (err: unknown) {
    logForDebugging(`[bridge:pointer] write failed: ${err}`, { level: 'warn' })
  }
}

/**
 * Read the pointer and its age (ms since last write). Operates directly
 * and handles errors — no existence check (CLAUDE.md TOCTOU rule). Returns
 * null on any failure: missing file, corrupted JSON, schema mismatch, or
 * stale (mtime > 4h ago). Stale/invalid pointers are deleted so they don't
 * keep re-prompting after the backend has already GC'd the env.
 */
export async function readBridgePointer(
  dir: string,
): Promise<(BridgePointer & { ageMs: number }) | null> {
  const path = getBridgePointerPath(dir)
  let raw: string
  let mtimeMs: number
  try {
    // stat for mtime (staleness anchor), then read. Two syscalls, but both
    // are needed — mtime IS the data we return, not a TOCTOU guard.
    mtimeMs = (await stat(path)).mtimeMs
    raw = await readFile(path, 'utf8')
  } catch {
    return null
  }

  const parsed = BridgePointerSchema().safeParse(safeJsonParse(raw))
  if (!parsed.success) {
    logForDebugging(`[bridge:pointer] invalid schema, clearing: ${path}`)
    await clearBridgePointer(dir)
    return null
  }

  const ageMs = Math.max(0, Date.now() - mtimeMs)
  if (ageMs > BRIDGE_POINTER_TTL_MS) {
    logForDebugging(`[bridge:pointer] stale (>4h mtime), clearing: ${path}`)
    await clearBridgePointer(dir)
    return null
  }

  return { ...parsed.data, ageMs }
}

/**
 * Worktree-aware read for `--continue`. The REPL bridge writes its pointer
 * to `getOriginalCwd()` which EnterWorktreeTool/activeWorktreeSession can
 * mutate to a worktree path — but `claude remote-control --continue` runs
 * with `resolve('.')` = shell CWD. This fans out across git worktree
 * siblings to find the freshest pointer, matching /resume's semantics.
 *
 * Fast path: checks `dir` first. Only shells out to `git worktree list` if
 * that misses — the common case (pointer in launch dir) is one stat, zero
 * exec. Fanout reads run in parallel; capped at MAX_WORKTREE_FANOUT.
 *
 * Returns the pointer AND the dir it was found in, so the caller can clear
 * the right file on resume failure.
 */
export async function readBridgePointerAcrossWorktrees(
  dir: string,
): Promise<{ pointer: BridgePointer & { ageMs: number }; dir: string } | null> {
  // Fast path: current dir. Covers standalone bridge (always matches) and
  // REPL bridge when no worktree mutation happened.
  const here = await readBridgePointer(dir)
  if (here) {
    return { pointer: here, dir }
  }

  // Fanout: scan worktree siblings. getWorktreePathsPortable has a 5s
  // timeout and returns [] on any error (not a git repo, git not installed).
  const worktrees = await getWorktreePathsPortable(dir)
  if (worktrees.length <= 1) return null
  if (worktrees.length > MAX_WORKTREE_FANOUT) {
    logForDebugging(
      `[bridge:pointer] ${worktrees.length} worktrees exceeds fanout cap ${MAX_WORKTREE_FANOUT}, skipping`,
    )
    return null
  }

  // Dedupe against `dir` so we don't re-stat it. sanitizePath normalizes
  // case/separators so worktree-list output matches our fast-path key even
  // on Windows where git may emit C:/ vs stored c:/.
  const dirKey = sanitizePath(dir)
  const candidates = worktrees.filter(wt => sanitizePath(wt) !== dirKey)

  // Parallel stat+read. Each readBridgePointer is a stat() that ENOENTs
  // for worktrees with no pointer (cheap) plus a ~100-byte read for the
  // rare ones that have one. Promise.all → latency ≈ slowest single stat.
  const results = await Promise.all(
    candidates.map(async wt => {
      const p = await readBridgePointer(wt)
      return p ? { pointer: p, dir: wt } : null
    }),
  )

  // Pick freshest (lowest ageMs). The pointer stores environmentId so
  // resume reconnects to the right env regardless of which worktree
  // --continue was invoked from.
  let freshest: {
    pointer: BridgePointer & { ageMs: number }
    dir: string
  } | null = null
  for (const r of results) {
    if (r && (!freshest || r.pointer.ageMs < freshest.pointer.ageMs)) {
      freshest = r
    }
  }
  if (freshest) {
    logForDebugging(
      `[bridge:pointer] fanout found pointer in worktree ${freshest.dir} (ageMs=${freshest.pointer.ageMs})`,
    )
  }
  return freshest
}

/**
 * Delete the pointer. Idempotent — ENOENT is expected when the process
 * shut down clean previously.
 */
export async function clearBridgePointer(dir: string): Promise<void> {
  const path = getBridgePointerPath(dir)
  try {
    await unlink(path)
    logForDebugging(`[bridge:pointer] cleared ${path}`)
  } catch (err: unknown) {
    if (!isENOENT(err)) {
      logForDebugging(`[bridge:pointer] clear failed: ${err}`, {
        level: 'warn',
      })
    }
  }
}

function safeJsonParse(raw: string): unknown {
  try {
    return jsonParse(raw)
  } catch {
    return null
  }
}
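The mtime-based staleness rule in `readBridgePointer` can be isolated as a pure check. The TTL constant matches the one above; the two function names here are ours, not part of the file:

```typescript
// Pure form of the staleness decision: a pointer is stale once the file's
// mtime is more than the TTL behind "now". Refreshing the file (re-writing
// the same content) bumps mtime and so resets the clock.
const BRIDGE_POINTER_TTL_MS = 4 * 60 * 60 * 1000 // 4h, as in bridgePointer.ts

function pointerAgeMs(mtimeMs: number, nowMs: number): number {
  return Math.max(0, nowMs - mtimeMs)
}

function isPointerStale(mtimeMs: number, nowMs: number): boolean {
  return pointerAgeMs(mtimeMs, nowMs) > BRIDGE_POINTER_TTL_MS
}

const now = Date.now()
console.log(isPointerStale(now - 60_000, now)) // refreshed a minute ago: false
console.log(isPointerStale(now - 5 * 60 * 60 * 1000, now)) // 5h old: true
```

Anchoring on mtime rather than an embedded timestamp is what lets the periodic same-content re-write serve as the refresh.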
src/bridge/bridgeStatusUtil.ts (new file, 163 lines)
import {
  getClaudeAiBaseUrl,
  getRemoteSessionUrl,
} from '../constants/product.js'
import { stringWidth } from '../ink/stringWidth.js'
import { formatDuration, truncateToWidth } from '../utils/format.js'
import { getGraphemeSegmenter } from '../utils/intl.js'

/** Bridge status state machine states. */
export type StatusState =
  | 'idle'
  | 'attached'
  | 'titled'
  | 'reconnecting'
  | 'failed'

/** How long a tool activity line stays visible after last tool_start (ms). */
export const TOOL_DISPLAY_EXPIRY_MS = 30_000

/** Interval for the shimmer animation tick (ms). */
export const SHIMMER_INTERVAL_MS = 150

export function timestamp(): string {
  const now = new Date()
  const h = String(now.getHours()).padStart(2, '0')
  const m = String(now.getMinutes()).padStart(2, '0')
  const s = String(now.getSeconds()).padStart(2, '0')
  return `${h}:${m}:${s}`
}

export { formatDuration, truncateToWidth as truncatePrompt }

/** Abbreviate a tool activity summary for the trail display. */
export function abbreviateActivity(summary: string): string {
  return truncateToWidth(summary, 30)
}

/** Build the connect URL shown when the bridge is idle. */
export function buildBridgeConnectUrl(
  environmentId: string,
  ingressUrl?: string,
): string {
  const baseUrl = getClaudeAiBaseUrl(undefined, ingressUrl)
  return `${baseUrl}/code?bridge=${environmentId}`
}

/**
 * Build the session URL shown when a session is attached. Delegates to
 * getRemoteSessionUrl for the cse_→session_ prefix translation, then appends
 * the v1-specific ?bridge={environmentId} query.
 */
export function buildBridgeSessionUrl(
  sessionId: string,
  environmentId: string,
  ingressUrl?: string,
): string {
  return `${getRemoteSessionUrl(sessionId, ingressUrl)}?bridge=${environmentId}`
}

/** Compute the glimmer index for a reverse-sweep shimmer animation. */
export function computeGlimmerIndex(
  tick: number,
  messageWidth: number,
): number {
  const cycleLength = messageWidth + 20
  return messageWidth + 10 - (tick % cycleLength)
}

/**
 * Split text into three segments by visual column position for shimmer rendering.
 *
 * Uses grapheme segmentation and `stringWidth` so the split is correct for
 * multi-byte characters, emoji, and CJK glyphs.
 *
 * Returns `{ before, shimmer, after }` strings. Both renderers (chalk in
 * bridgeUI.ts and React/Ink in bridge.tsx) apply their own coloring to
 * these segments.
 */
export function computeShimmerSegments(
  text: string,
  glimmerIndex: number,
): { before: string; shimmer: string; after: string } {
  const messageWidth = stringWidth(text)
  const shimmerStart = glimmerIndex - 1
  const shimmerEnd = glimmerIndex + 1

  // When shimmer is offscreen, return all text as "before"
  if (shimmerStart >= messageWidth || shimmerEnd < 0) {
    return { before: text, shimmer: '', after: '' }
  }

  // Split into at most 3 segments by visual column position
  const clampedStart = Math.max(0, shimmerStart)
  let colPos = 0
  let before = ''
  let shimmer = ''
  let after = ''
  for (const { segment } of getGraphemeSegmenter().segment(text)) {
    const segWidth = stringWidth(segment)
    if (colPos + segWidth <= clampedStart) {
      before += segment
    } else if (colPos > shimmerEnd) {
      after += segment
    } else {
      shimmer += segment
    }
    colPos += segWidth
  }

  return { before, shimmer, after }
}

/** Computed bridge status label and color from connection state. */
export type BridgeStatusInfo = {
  label:
    | 'Remote Control failed'
    | 'Remote Control reconnecting'
    | 'Remote Control active'
    | 'Remote Control connecting\u2026'
  color: 'error' | 'warning' | 'success'
}

/** Derive a status label and color from the bridge connection state. */
export function getBridgeStatus({
  error,
  connected,
  sessionActive,
  reconnecting,
}: {
  error: string | undefined
  connected: boolean
  sessionActive: boolean
  reconnecting: boolean
}): BridgeStatusInfo {
  if (error) return { label: 'Remote Control failed', color: 'error' }
  if (reconnecting)
    return { label: 'Remote Control reconnecting', color: 'warning' }
  if (sessionActive || connected)
    return { label: 'Remote Control active', color: 'success' }
  return { label: 'Remote Control connecting\u2026', color: 'warning' }
}

/** Footer text shown when bridge is idle (Ready state). */
export function buildIdleFooterText(url: string): string {
  return `Code everywhere with the Claude app or ${url}`
}

/** Footer text shown when a session is active (Connected state). */
export function buildActiveFooterText(url: string): string {
  return `Continue coding in the Claude app or ${url}`
}

/** Footer text shown when the bridge has failed. */
export const FAILED_FOOTER_TEXT = 'Something went wrong, please try again'

/**
 * Wrap text in an OSC 8 terminal hyperlink. Zero visual width for layout purposes.
 * strip-ansi (used by stringWidth) correctly strips these sequences, so
 * countVisualLines in bridgeUI.ts remains accurate.
 */
export function wrapWithOsc8Link(text: string, url: string): string {
  return `\x1b]8;;${url}\x07${text}\x1b]8;;\x07`
}
530
src/bridge/bridgeUI.ts
Normal file
@@ -0,0 +1,530 @@
import chalk from 'chalk'
import { toString as qrToString } from 'qrcode'
import {
  BRIDGE_FAILED_INDICATOR,
  BRIDGE_READY_INDICATOR,
  BRIDGE_SPINNER_FRAMES,
} from '../constants/figures.js'
import { stringWidth } from '../ink/stringWidth.js'
import { logForDebugging } from '../utils/debug.js'
import {
  buildActiveFooterText,
  buildBridgeConnectUrl,
  buildBridgeSessionUrl,
  buildIdleFooterText,
  FAILED_FOOTER_TEXT,
  formatDuration,
  type StatusState,
  TOOL_DISPLAY_EXPIRY_MS,
  timestamp,
  truncatePrompt,
  wrapWithOsc8Link,
} from './bridgeStatusUtil.js'
import type {
  BridgeConfig,
  BridgeLogger,
  SessionActivity,
  SpawnMode,
} from './types.js'

const QR_OPTIONS = {
  type: 'utf8' as const,
  errorCorrectionLevel: 'L' as const,
  small: true,
}

/** Generate a QR code and return its lines. */
async function generateQr(url: string): Promise<string[]> {
  const qr = await qrToString(url, QR_OPTIONS)
  return qr.split('\n').filter((line: string) => line.length > 0)
}

export function createBridgeLogger(options: {
  verbose: boolean
  write?: (s: string) => void
}): BridgeLogger {
  const write = options.write ?? ((s: string) => process.stdout.write(s))
  const verbose = options.verbose

  // Track how many status lines are currently displayed at the bottom
  let statusLineCount = 0

  // Status state machine
  let currentState: StatusState = 'idle'
  let currentStateText = 'Ready'
  let repoName = ''
  let branch = ''
  let debugLogPath = ''

  // Connect URL (built in printBanner with correct base for staging/prod)
  let connectUrl = ''
  let cachedIngressUrl = ''
  let cachedEnvironmentId = ''
  let activeSessionUrl: string | null = null

  // QR code lines for the current URL
  let qrLines: string[] = []
  let qrVisible = false

  // Tool activity for the second status line
  let lastToolSummary: string | null = null
  let lastToolTime = 0

  // Session count indicator (shown when multi-session mode is enabled)
  let sessionActive = 0
  let sessionMax = 1
  // Spawn mode shown in the session-count line + gates the `w` hint
  let spawnModeDisplay: 'same-dir' | 'worktree' | null = null
  let spawnMode: SpawnMode = 'single-session'

  // Per-session display info for the multi-session bullet list (keyed by compat sessionId)
  const sessionDisplayInfo = new Map<
    string,
    { title?: string; url: string; activity?: SessionActivity }
  >()

  // Connecting spinner state
  let connectingTimer: ReturnType<typeof setInterval> | null = null
  let connectingTick = 0

  /**
   * Count how many visual terminal rows a string occupies, accounting for
   * line wrapping. Each `\n` is one row, and content wider than the terminal
   * wraps to additional rows.
   */
  function countVisualLines(text: string): number {
    // eslint-disable-next-line custom-rules/prefer-use-terminal-size
    const cols = process.stdout.columns || 80 // non-React CLI context
    let count = 0
    // Split on newlines to get logical lines
    for (const logical of text.split('\n')) {
      if (logical.length === 0) {
        // Empty segment between consecutive \n — counts as 1 row
        count++
        continue
      }
      const width = stringWidth(logical)
      count += Math.max(1, Math.ceil(width / cols))
    }
    // The trailing \n in "line\n" produces an empty last element — don't count it
    // because the cursor sits at the start of the next line, not a new visual row.
    if (text.endsWith('\n')) {
      count--
    }
    return count
  }

  /** Write a status line and track its visual line count. */
  function writeStatus(text: string): void {
    write(text)
    statusLineCount += countVisualLines(text)
  }

  /** Clear any currently displayed status lines. */
  function clearStatusLines(): void {
    if (statusLineCount <= 0) return
    logForDebugging(`[bridge:ui] clearStatusLines count=${statusLineCount}`)
    // Move cursor up to the start of the status block, then erase everything below
    write(`\x1b[${statusLineCount}A`) // cursor up N lines
    write('\x1b[J') // erase from cursor to end of screen
    statusLineCount = 0
  }

  /** Print a permanent log line, clearing status first and restoring after. */
  function printLog(line: string): void {
    clearStatusLines()
    write(line)
  }

  /** Regenerate the QR code with the given URL. */
  function regenerateQr(url: string): void {
    generateQr(url)
      .then(lines => {
        qrLines = lines
        renderStatusLine()
      })
      .catch(e => {
        logForDebugging(`QR code generation failed: ${e}`, { level: 'error' })
      })
  }

  /** Render the connecting spinner line (shown before first updateIdleStatus). */
  function renderConnectingLine(): void {
    clearStatusLines()

    const frame =
      BRIDGE_SPINNER_FRAMES[connectingTick % BRIDGE_SPINNER_FRAMES.length]!
    let suffix = ''
    if (repoName) {
      suffix += chalk.dim(' \u00b7 ') + chalk.dim(repoName)
    }
    if (branch) {
      suffix += chalk.dim(' \u00b7 ') + chalk.dim(branch)
    }
    writeStatus(
      `${chalk.yellow(frame)} ${chalk.yellow('Connecting')}${suffix}\n`,
    )
  }

  /** Start the connecting spinner. Stopped by first updateIdleStatus(). */
  function startConnecting(): void {
    stopConnecting()
    renderConnectingLine()
    connectingTimer = setInterval(() => {
      connectingTick++
      renderConnectingLine()
    }, 150)
  }

  /** Stop the connecting spinner. */
  function stopConnecting(): void {
    if (connectingTimer) {
      clearInterval(connectingTimer)
      connectingTimer = null
    }
  }

  /** Render and write the current status lines based on state. */
  function renderStatusLine(): void {
    if (currentState === 'reconnecting' || currentState === 'failed') {
      // These states are handled separately (updateReconnectingStatus /
      // updateFailedStatus). Return before clearing so callers like toggleQr
      // and setSpawnModeDisplay don't blank the display during these states.
      return
    }

    clearStatusLines()

    const isIdle = currentState === 'idle'

    // QR code above the status line
    if (qrVisible) {
      for (const line of qrLines) {
        writeStatus(`${chalk.dim(line)}\n`)
      }
    }

    // Determine indicator and colors based on state
    const indicator = BRIDGE_READY_INDICATOR
    const indicatorColor = isIdle ? chalk.green : chalk.cyan
    const baseColor = isIdle ? chalk.green : chalk.cyan
    const stateText = baseColor(currentStateText)

    // Build the suffix with repo and branch
    let suffix = ''
    if (repoName) {
      suffix += chalk.dim(' \u00b7 ') + chalk.dim(repoName)
    }
    // In worktree mode each session gets its own branch, so showing the
    // bridge's branch would be misleading.
    if (branch && spawnMode !== 'worktree') {
      suffix += chalk.dim(' \u00b7 ') + chalk.dim(branch)
    }

    if (process.env.USER_TYPE === 'ant' && debugLogPath) {
      writeStatus(
        `${chalk.yellow('[ANT-ONLY] Logs:')} ${chalk.dim(debugLogPath)}\n`,
      )
    }
    writeStatus(`${indicatorColor(indicator)} ${stateText}${suffix}\n`)

    // Session count and per-session list (multi-session mode only)
    if (sessionMax > 1) {
      const modeHint =
        spawnMode === 'worktree'
          ? 'New sessions will be created in an isolated worktree'
          : 'New sessions will be created in the current directory'
      writeStatus(
        ` ${chalk.dim(`Capacity: ${sessionActive}/${sessionMax} \u00b7 ${modeHint}`)}\n`,
      )
      for (const [, info] of sessionDisplayInfo) {
        const titleText = info.title
          ? truncatePrompt(info.title, 35)
          : chalk.dim('Attached')
        const titleLinked = wrapWithOsc8Link(titleText, info.url)
        const act = info.activity
        const showAct = act && act.type !== 'result' && act.type !== 'error'
        const actText = showAct
          ? chalk.dim(` ${truncatePrompt(act.summary, 40)}`)
          : ''
        writeStatus(` ${titleLinked}${actText}\n`)
      }
    }

    // Mode line for spawn modes with a single slot (or true single-session mode)
    if (sessionMax === 1) {
      const modeText =
        spawnMode === 'single-session'
          ? 'Single session \u00b7 exits when complete'
          : spawnMode === 'worktree'
            ? `Capacity: ${sessionActive}/1 \u00b7 New sessions will be created in an isolated worktree`
            : `Capacity: ${sessionActive}/1 \u00b7 New sessions will be created in the current directory`
      writeStatus(` ${chalk.dim(modeText)}\n`)
    }

    // Tool activity line for single-session mode
    if (
      sessionMax === 1 &&
      !isIdle &&
      lastToolSummary &&
      Date.now() - lastToolTime < TOOL_DISPLAY_EXPIRY_MS
    ) {
      writeStatus(` ${chalk.dim(truncatePrompt(lastToolSummary, 60))}\n`)
    }

    // Blank line separator before footer
    const url = activeSessionUrl ?? connectUrl
    if (url) {
      writeStatus('\n')
      const footerText = isIdle
        ? buildIdleFooterText(url)
        : buildActiveFooterText(url)
      const qrHint = qrVisible
        ? chalk.dim.italic('space to hide QR code')
        : chalk.dim.italic('space to show QR code')
      const toggleHint = spawnModeDisplay
        ? chalk.dim.italic(' \u00b7 w to toggle spawn mode')
        : ''
      writeStatus(`${chalk.dim(footerText)}\n`)
      writeStatus(`${qrHint}${toggleHint}\n`)
    }
  }

  return {
    printBanner(config: BridgeConfig, environmentId: string): void {
      cachedIngressUrl = config.sessionIngressUrl
      cachedEnvironmentId = environmentId
      connectUrl = buildBridgeConnectUrl(environmentId, cachedIngressUrl)
      regenerateQr(connectUrl)

      if (verbose) {
        write(chalk.dim(`Remote Control`) + ` v${MACRO.VERSION}\n`)
      }
      if (verbose) {
        if (config.spawnMode !== 'single-session') {
          write(chalk.dim(`Spawn mode: `) + `${config.spawnMode}\n`)
          write(
            chalk.dim(`Max concurrent sessions: `) + `${config.maxSessions}\n`,
          )
        }
        write(chalk.dim(`Environment ID: `) + `${environmentId}\n`)
      }
      if (config.sandbox) {
        write(chalk.dim(`Sandbox: `) + `${chalk.green('Enabled')}\n`)
      }
      write('\n')

      // Start connecting spinner — first updateIdleStatus() will stop it
      startConnecting()
    },

    logSessionStart(sessionId: string, prompt: string): void {
      if (verbose) {
        const short = truncatePrompt(prompt, 80)
        printLog(
          chalk.dim(`[${timestamp()}]`) +
            ` Session started: ${chalk.white(`"${short}"`)} (${chalk.dim(sessionId)})\n`,
        )
      }
    },

    logSessionComplete(sessionId: string, durationMs: number): void {
      printLog(
        chalk.dim(`[${timestamp()}]`) +
          ` Session ${chalk.green('completed')} (${formatDuration(durationMs)}) ${chalk.dim(sessionId)}\n`,
      )
    },

    logSessionFailed(sessionId: string, error: string): void {
      printLog(
        chalk.dim(`[${timestamp()}]`) +
          ` Session ${chalk.red('failed')}: ${error} ${chalk.dim(sessionId)}\n`,
      )
    },

    logStatus(message: string): void {
      printLog(chalk.dim(`[${timestamp()}]`) + ` ${message}\n`)
    },

    logVerbose(message: string): void {
      if (verbose) {
        printLog(chalk.dim(`[${timestamp()}] ${message}`) + '\n')
      }
    },

    logError(message: string): void {
      printLog(chalk.red(`[${timestamp()}] Error: ${message}`) + '\n')
    },

    logReconnected(disconnectedMs: number): void {
      printLog(
        chalk.dim(`[${timestamp()}]`) +
          ` ${chalk.green('Reconnected')} after ${formatDuration(disconnectedMs)}\n`,
      )
    },

    setRepoInfo(repo: string, branchName: string): void {
      repoName = repo
      branch = branchName
    },

    setDebugLogPath(path: string): void {
      debugLogPath = path
    },

    updateIdleStatus(): void {
      stopConnecting()

      currentState = 'idle'
      currentStateText = 'Ready'
      lastToolSummary = null
      lastToolTime = 0
      activeSessionUrl = null
      regenerateQr(connectUrl)
      renderStatusLine()
    },

    setAttached(sessionId: string): void {
      stopConnecting()
      currentState = 'attached'
      currentStateText = 'Connected'
      lastToolSummary = null
      lastToolTime = 0
      // Multi-session: keep footer/QR on the environment connect URL so users
      // can spawn more sessions. Per-session links are in the bullet list.
      if (sessionMax <= 1) {
        activeSessionUrl = buildBridgeSessionUrl(
          sessionId,
          cachedEnvironmentId,
          cachedIngressUrl,
        )
        regenerateQr(activeSessionUrl)
      }
      renderStatusLine()
    },

    updateReconnectingStatus(delayStr: string, elapsedStr: string): void {
      stopConnecting()
      clearStatusLines()
      currentState = 'reconnecting'

      // QR code above the status line
      if (qrVisible) {
        for (const line of qrLines) {
          writeStatus(`${chalk.dim(line)}\n`)
        }
      }

      const frame =
        BRIDGE_SPINNER_FRAMES[connectingTick % BRIDGE_SPINNER_FRAMES.length]!
      connectingTick++
      writeStatus(
        `${chalk.yellow(frame)} ${chalk.yellow('Reconnecting')} ${chalk.dim('\u00b7')} ${chalk.dim(`retrying in ${delayStr}`)} ${chalk.dim('\u00b7')} ${chalk.dim(`disconnected ${elapsedStr}`)}\n`,
      )
    },

    updateFailedStatus(error: string): void {
      stopConnecting()
      clearStatusLines()
      currentState = 'failed'

      let suffix = ''
      if (repoName) {
        suffix += chalk.dim(' \u00b7 ') + chalk.dim(repoName)
      }
      if (branch) {
        suffix += chalk.dim(' \u00b7 ') + chalk.dim(branch)
      }

      writeStatus(
        `${chalk.red(BRIDGE_FAILED_INDICATOR)} ${chalk.red('Remote Control Failed')}${suffix}\n`,
      )
      writeStatus(`${chalk.dim(FAILED_FOOTER_TEXT)}\n`)

      if (error) {
        writeStatus(`${chalk.red(error)}\n`)
      }
    },

    updateSessionStatus(
      _sessionId: string,
      _elapsed: string,
      activity: SessionActivity,
      _trail: string[],
    ): void {
      // Cache tool activity for the second status line
      if (activity.type === 'tool_start') {
        lastToolSummary = activity.summary
        lastToolTime = Date.now()
      }
      renderStatusLine()
    },

    clearStatus(): void {
      stopConnecting()
      clearStatusLines()
    },

    toggleQr(): void {
      qrVisible = !qrVisible
      renderStatusLine()
    },

    updateSessionCount(active: number, max: number, mode: SpawnMode): void {
      if (sessionActive === active && sessionMax === max && spawnMode === mode)
        return
      sessionActive = active
      sessionMax = max
      spawnMode = mode
      // Don't re-render here — the status ticker calls renderStatusLine
      // on its own cadence, and the next tick will pick up the new values.
    },

    setSpawnModeDisplay(mode: 'same-dir' | 'worktree' | null): void {
      if (spawnModeDisplay === mode) return
      spawnModeDisplay = mode
      // Also sync the #21118-added spawnMode so the next render shows correct
      // mode hint + branch visibility. Don't render here — matches
      // updateSessionCount: called before printBanner (initial setup) and
      // again from the `w` handler (which follows with refreshDisplay).
      if (mode) spawnMode = mode
    },

    addSession(sessionId: string, url: string): void {
      sessionDisplayInfo.set(sessionId, { url })
    },

    updateSessionActivity(sessionId: string, activity: SessionActivity): void {
      const info = sessionDisplayInfo.get(sessionId)
      if (!info) return
      info.activity = activity
    },

    setSessionTitle(sessionId: string, title: string): void {
      const info = sessionDisplayInfo.get(sessionId)
      if (!info) return
      info.title = title
      // Guard against reconnecting/failed — renderStatusLine clears then returns
      // early for those states, which would erase the spinner/error.
      if (currentState === 'reconnecting' || currentState === 'failed') return
      if (sessionMax === 1) {
        // Single-session: show title in the main status line too.
        currentState = 'titled'
        currentStateText = truncatePrompt(title, 40)
      }
      renderStatusLine()
    },

    removeSession(sessionId: string): void {
      sessionDisplayInfo.delete(sessionId)
    },

    refreshDisplay(): void {
      // Skip during reconnecting/failed — renderStatusLine clears then returns
      // early for those states, which would erase the spinner/error.
      if (currentState === 'reconnecting' || currentState === 'failed') return
      renderStatusLine()
    },
  }
}
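The row-counting arithmetic in countVisualLines above can be isolated into a naive sketch. countRows is a hypothetical name; it assumes a fixed column width passed in and one column per character, where the real code uses stringWidth and the live process.stdout.columns.

```typescript
// Naive sketch of the countVisualLines wrapping arithmetic: each logical line
// occupies ceil(width / cols) rows (minimum 1), and a trailing \n leaves the
// cursor at column 0 of the next line rather than occupying a new visual row.
function countRows(text: string, cols: number): number {
  let count = 0
  for (const logical of text.split('\n')) {
    // Empty segments (between consecutive \n) still count as one row
    count += Math.max(1, Math.ceil(logical.length / cols))
  }
  // Drop the empty final segment produced by a trailing \n
  if (text.endsWith('\n')) count--
  return count
}
```

So `'ab\ncd\n'` at 80 columns occupies 2 rows, and a 100-character line wraps to 2 rows; this is exactly the count clearStatusLines needs to move the cursor back up.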
56
src/bridge/capacityWake.ts
Normal file
@@ -0,0 +1,56 @@
/**
 * Shared capacity-wake primitive for bridge poll loops.
 *
 * Both replBridge.ts and bridgeMain.ts need to sleep while "at capacity"
 * but wake early when either (a) the outer loop signal aborts (shutdown),
 * or (b) capacity frees up (session done / transport lost). This module
 * encapsulates the mutable wake-controller + two-signal merger that both
 * poll loops previously duplicated byte-for-byte.
 */

export type CapacitySignal = { signal: AbortSignal; cleanup: () => void }

export type CapacityWake = {
  /**
   * Create a signal that aborts when either the outer loop signal or the
   * capacity-wake controller fires. Returns the merged signal and a cleanup
   * function that removes listeners when the sleep resolves normally
   * (without abort).
   */
  signal(): CapacitySignal
  /**
   * Abort the current at-capacity sleep and arm a fresh controller so the
   * poll loop immediately re-checks for new work.
   */
  wake(): void
}

export function createCapacityWake(outerSignal: AbortSignal): CapacityWake {
  let wakeController = new AbortController()

  function wake(): void {
    wakeController.abort()
    wakeController = new AbortController()
  }

  function signal(): CapacitySignal {
    const merged = new AbortController()
    const abort = (): void => merged.abort()
    if (outerSignal.aborted || wakeController.signal.aborted) {
      merged.abort()
      return { signal: merged.signal, cleanup: () => {} }
    }
    outerSignal.addEventListener('abort', abort, { once: true })
    const capSig = wakeController.signal
    capSig.addEventListener('abort', abort, { once: true })
    return {
      signal: merged.signal,
      cleanup: () => {
        outerSignal.removeEventListener('abort', abort)
        capSig.removeEventListener('abort', abort)
      },
    }
  }

  return { signal, wake }
}
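A minimal usage sketch of this primitive: a poll loop sleeps "at capacity" via an abortable sleep, and a wake() call (here simulated with a timer) cuts the sleep short. createCapacityWake is re-declared verbatim so the block is self-contained; sleep and demo are illustrative helpers, not part of the repo.

```typescript
// Mirror of createCapacityWake from capacityWake.ts (re-declared for a
// self-contained sketch).
function createCapacityWake(outerSignal: AbortSignal) {
  let wakeController = new AbortController()
  function wake(): void {
    wakeController.abort()
    wakeController = new AbortController()
  }
  function signal(): { signal: AbortSignal; cleanup: () => void } {
    const merged = new AbortController()
    const abort = (): void => merged.abort()
    if (outerSignal.aborted || wakeController.signal.aborted) {
      merged.abort()
      return { signal: merged.signal, cleanup: () => {} }
    }
    outerSignal.addEventListener('abort', abort, { once: true })
    const capSig = wakeController.signal
    capSig.addEventListener('abort', abort, { once: true })
    return {
      signal: merged.signal,
      cleanup: () => {
        outerSignal.removeEventListener('abort', abort)
        capSig.removeEventListener('abort', abort)
      },
    }
  }
  return { signal, wake }
}

// Hypothetical abortable sleep: resolves after ms, or immediately on abort.
function sleep(ms: number, signal: AbortSignal): Promise<void> {
  return new Promise(resolve => {
    if (signal.aborted) return resolve()
    const t = setTimeout(resolve, ms)
    signal.addEventListener(
      'abort',
      () => {
        clearTimeout(t)
        resolve()
      },
      { once: true },
    )
  })
}

// Sleep "for 10s at capacity", but wake() after 50ms cuts it short.
async function demo(): Promise<number> {
  const outer = new AbortController()
  const capacity = createCapacityWake(outer.signal)
  const { signal, cleanup } = capacity.signal()
  const start = Date.now()
  setTimeout(() => capacity.wake(), 50) // capacity frees up early
  await sleep(10_000, signal)
  cleanup()
  return Date.now() - start
}
```

The demo returns after roughly 50ms, not 10s, because wake() aborts the armed controller that the merged signal is listening to.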
168
src/bridge/codeSessionApi.ts
Normal file
@@ -0,0 +1,168 @@
/**
 * Thin HTTP wrappers for the CCR v2 code-session API.
 *
 * Separate file from remoteBridgeCore.ts so the SDK /bridge subpath can
 * export createCodeSession + fetchRemoteCredentials without bundling the
 * heavy CLI tree (analytics, transport, etc.). Callers supply explicit
 * accessToken + baseUrl — no implicit auth or config reads.
 */

import axios from 'axios'
import { logForDebugging } from '../utils/debug.js'
import { errorMessage } from '../utils/errors.js'
import { jsonStringify } from '../utils/slowOperations.js'
import { extractErrorDetail } from './debugUtils.js'

const ANTHROPIC_VERSION = '2023-06-01'

function oauthHeaders(accessToken: string): Record<string, string> {
  return {
    Authorization: `Bearer ${accessToken}`,
    'Content-Type': 'application/json',
    'anthropic-version': ANTHROPIC_VERSION,
  }
}

export async function createCodeSession(
  baseUrl: string,
  accessToken: string,
  title: string,
  timeoutMs: number,
  tags?: string[],
): Promise<string | null> {
  const url = `${baseUrl}/v1/code/sessions`
  let response
  try {
    response = await axios.post(
      url,
      // bridge: {} is the positive signal for the oneof runner — omitting it
      // (or sending environment_id: "") now 400s. BridgeRunner is an empty
      // message today; it's a placeholder for future bridge-specific options.
      { title, bridge: {}, ...(tags?.length ? { tags } : {}) },
      {
        headers: oauthHeaders(accessToken),
        timeout: timeoutMs,
        validateStatus: s => s < 500,
      },
    )
  } catch (err: unknown) {
    logForDebugging(
      `[code-session] Session create request failed: ${errorMessage(err)}`,
    )
    return null
  }

  if (response.status !== 200 && response.status !== 201) {
    const detail = extractErrorDetail(response.data)
    logForDebugging(
      `[code-session] Session create failed ${response.status}${detail ? `: ${detail}` : ''}`,
    )
    return null
  }

  const data: unknown = response.data
  if (
    !data ||
    typeof data !== 'object' ||
    !('session' in data) ||
    !data.session ||
    typeof data.session !== 'object' ||
    !('id' in data.session) ||
    typeof data.session.id !== 'string' ||
    !data.session.id.startsWith('cse_')
  ) {
    logForDebugging(
      `[code-session] No session.id (cse_*) in response: ${jsonStringify(data).slice(0, 200)}`,
    )
    return null
  }
  return data.session.id
}

/**
 * Credentials from POST /bridge. JWT is opaque — do not decode.
 * Each /bridge call bumps worker_epoch server-side (it IS the register).
 */
export type RemoteCredentials = {
  worker_jwt: string
  api_base_url: string
  expires_in: number
  worker_epoch: number
}

export async function fetchRemoteCredentials(
  sessionId: string,
  baseUrl: string,
  accessToken: string,
  timeoutMs: number,
  trustedDeviceToken?: string,
): Promise<RemoteCredentials | null> {
  const url = `${baseUrl}/v1/code/sessions/${sessionId}/bridge`
  const headers = oauthHeaders(accessToken)
  if (trustedDeviceToken) {
    headers['X-Trusted-Device-Token'] = trustedDeviceToken
  }
  let response
  try {
    response = await axios.post(
      url,
      {},
      {
        headers,
        timeout: timeoutMs,
        validateStatus: s => s < 500,
      },
    )
  } catch (err: unknown) {
    logForDebugging(
      `[code-session] /bridge request failed: ${errorMessage(err)}`,
    )
    return null
  }

  if (response.status !== 200) {
    const detail = extractErrorDetail(response.data)
    logForDebugging(
      `[code-session] /bridge failed ${response.status}${detail ? `: ${detail}` : ''}`,
    )
    return null
  }

  const data: unknown = response.data
  if (
    data === null ||
    typeof data !== 'object' ||
    !('worker_jwt' in data) ||
    typeof data.worker_jwt !== 'string' ||
    !('expires_in' in data) ||
    typeof data.expires_in !== 'number' ||
    !('api_base_url' in data) ||
    typeof data.api_base_url !== 'string' ||
    !('worker_epoch' in data)
  ) {
    logForDebugging(
      `[code-session] /bridge response malformed (need worker_jwt, expires_in, api_base_url, worker_epoch): ${jsonStringify(data).slice(0, 200)}`,
    )
    return null
  }
  // protojson serializes int64 as a string to avoid JS precision loss;
  // Go may also return a number depending on encoder settings.
  const rawEpoch = data.worker_epoch
  const epoch = typeof rawEpoch === 'string' ? Number(rawEpoch) : rawEpoch
  if (
    typeof epoch !== 'number' ||
    !Number.isFinite(epoch) ||
    !Number.isSafeInteger(epoch)
  ) {
    logForDebugging(
      `[code-session] /bridge worker_epoch invalid: ${jsonStringify(rawEpoch)}`,
    )
    return null
  }
  return {
    worker_jwt: data.worker_jwt,
    api_base_url: data.api_base_url,
    expires_in: data.expires_in,
    worker_epoch: epoch,
  }
}
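The string-or-number worker_epoch handling in fetchRemoteCredentials can be factored into a tiny sketch. parseEpoch is a hypothetical helper name that mirrors the validation above: protojson emits int64 as a string, other encoders as a number, and anything non-finite or outside the safe-integer range is rejected.

```typescript
// int64-tolerant epoch parsing: accept "42" or 42, reject NaN, Infinity,
// fractions, and values beyond Number.MAX_SAFE_INTEGER.
function parseEpoch(raw: unknown): number | null {
  const epoch = typeof raw === 'string' ? Number(raw) : raw
  if (
    typeof epoch !== 'number' ||
    !Number.isFinite(epoch) ||
    !Number.isSafeInteger(epoch)
  ) {
    return null
  }
  return epoch
}
```

Callers then treat a null result the same as a malformed /bridge response: log and bail rather than propagate a corrupt epoch.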
384
src/bridge/createSession.ts
Normal file
@@ -0,0 +1,384 @@
|
||||
import type { SDKMessage } from '../entrypoints/agentSdkTypes.js'
|
||||
import { logForDebugging } from '../utils/debug.js'
|
||||
import { errorMessage } from '../utils/errors.js'
|
||||
import { extractErrorDetail } from './debugUtils.js'
|
||||
import { toCompatSessionId } from './sessionIdCompat.js'
|
||||
|
||||
type GitSource = {
|
||||
type: 'git_repository'
|
||||
url: string
|
||||
revision?: string
|
||||
}
|
||||
|
||||
type GitOutcome = {
|
||||
type: 'git_repository'
|
||||
git_info: { type: 'github'; repo: string; branches: string[] }
|
||||
}
|
||||
|
||||
// Events must be wrapped in { type: 'event', data: <sdk_message> } for the
|
||||
// POST /v1/sessions endpoint (discriminated union format).
|
||||
type SessionEvent = {
|
||||
type: 'event'
|
||||
data: SDKMessage
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a session on a bridge environment via POST /v1/sessions.
|
||||
*
|
||||
* Used by both `claude remote-control` (empty session so the user has somewhere to
|
||||
* type immediately) and `/remote-control` (session pre-populated with conversation
|
||||
* history).
|
||||
*
|
||||
* Returns the session ID on success, or null if creation fails (non-fatal).
|
||||
*/
|
||||
export async function createBridgeSession({
|
||||
environmentId,
|
||||
title,
|
||||
events,
|
||||
gitRepoUrl,
|
||||
branch,
|
||||
signal,
|
||||
baseUrl: baseUrlOverride,
|
||||
getAccessToken,
|
||||
permissionMode,
|
||||
}: {
|
||||
environmentId: string
|
||||
title?: string
|
||||
events: SessionEvent[]
|
||||
gitRepoUrl: string | null
|
||||
branch: string
|
||||
signal: AbortSignal
|
||||
baseUrl?: string
|
||||
getAccessToken?: () => string | undefined
|
||||
permissionMode?: string
|
||||
}): Promise<string | null> {
|
||||
const { getClaudeAIOAuthTokens } = await import('../utils/auth.js')
|
||||
const { getOrganizationUUID } = await import('../services/oauth/client.js')
|
||||
const { getOauthConfig } = await import('../constants/oauth.js')
|
||||
const { getOAuthHeaders } = await import('../utils/teleport/api.js')
|
||||
const { parseGitHubRepository } = await import('../utils/detectRepository.js')
|
||||
const { getDefaultBranch } = await import('../utils/git.js')
|
||||
const { getMainLoopModel } = await import('../utils/model/model.js')
|
||||
const { default: axios } = await import('axios')
|
||||
|
||||
const accessToken =
|
||||
getAccessToken?.() ?? getClaudeAIOAuthTokens()?.accessToken
|
||||
if (!accessToken) {
|
||||
    logForDebugging('[bridge] No access token for session creation')
    return null
  }

  const orgUUID = await getOrganizationUUID()
  if (!orgUUID) {
    logForDebugging('[bridge] No org UUID for session creation')
    return null
  }

  // Build git source and outcome context
  let gitSource: GitSource | null = null
  let gitOutcome: GitOutcome | null = null

  if (gitRepoUrl) {
    const { parseGitRemote } = await import('../utils/detectRepository.js')
    const parsed = parseGitRemote(gitRepoUrl)
    if (parsed) {
      const { host, owner, name } = parsed
      const revision = branch || (await getDefaultBranch()) || undefined
      gitSource = {
        type: 'git_repository',
        url: `https://${host}/${owner}/${name}`,
        revision,
      }
      gitOutcome = {
        type: 'git_repository',
        git_info: {
          type: 'github',
          repo: `${owner}/${name}`,
          branches: [`claude/${branch || 'task'}`],
        },
      }
    } else {
      // Fallback: try parseGitHubRepository for owner/repo format
      const ownerRepo = parseGitHubRepository(gitRepoUrl)
      if (ownerRepo) {
        const [owner, name] = ownerRepo.split('/')
        if (owner && name) {
          const revision = branch || (await getDefaultBranch()) || undefined
          gitSource = {
            type: 'git_repository',
            url: `https://github.com/${owner}/${name}`,
            revision,
          }
          gitOutcome = {
            type: 'git_repository',
            git_info: {
              type: 'github',
              repo: `${owner}/${name}`,
              branches: [`claude/${branch || 'task'}`],
            },
          }
        }
      }
    }
  }

  const requestBody = {
    ...(title !== undefined && { title }),
    events,
    session_context: {
      sources: gitSource ? [gitSource] : [],
      outcomes: gitOutcome ? [gitOutcome] : [],
      model: getMainLoopModel(),
    },
    environment_id: environmentId,
    source: 'remote-control',
    ...(permissionMode && { permission_mode: permissionMode }),
  }

  const headers = {
    ...getOAuthHeaders(accessToken),
    'anthropic-beta': 'ccr-byoc-2025-07-29',
    'x-organization-uuid': orgUUID,
  }

  const url = `${baseUrlOverride ?? getOauthConfig().BASE_API_URL}/v1/sessions`
  let response
  try {
    response = await axios.post(url, requestBody, {
      headers,
      signal,
      validateStatus: s => s < 500,
    })
  } catch (err: unknown) {
    logForDebugging(
      `[bridge] Session creation request failed: ${errorMessage(err)}`,
    )
    return null
  }
  const isSuccess = response.status === 200 || response.status === 201

  if (!isSuccess) {
    const detail = extractErrorDetail(response.data)
    logForDebugging(
      `[bridge] Session creation failed with status ${response.status}${detail ? `: ${detail}` : ''}`,
    )
    return null
  }

  const sessionData: unknown = response.data
  if (
    !sessionData ||
    typeof sessionData !== 'object' ||
    !('id' in sessionData) ||
    typeof sessionData.id !== 'string'
  ) {
    logForDebugging('[bridge] No session ID in response')
    return null
  }

  return sessionData.id
}

/**
 * Fetch a bridge session via GET /v1/sessions/{id}.
 *
 * Returns the session's environment_id (for `--session-id` resume) and title.
 * Uses the same org-scoped headers as create/archive — the environments-level
 * client in bridgeApi.ts uses a different beta header and no org UUID, which
 * makes the Sessions API return 404.
 */
export async function getBridgeSession(
  sessionId: string,
  opts?: { baseUrl?: string; getAccessToken?: () => string | undefined },
): Promise<{ environment_id?: string; title?: string } | null> {
  const { getClaudeAIOAuthTokens } = await import('../utils/auth.js')
  const { getOrganizationUUID } = await import('../services/oauth/client.js')
  const { getOauthConfig } = await import('../constants/oauth.js')
  const { getOAuthHeaders } = await import('../utils/teleport/api.js')
  const { default: axios } = await import('axios')

  const accessToken =
    opts?.getAccessToken?.() ?? getClaudeAIOAuthTokens()?.accessToken
  if (!accessToken) {
    logForDebugging('[bridge] No access token for session fetch')
    return null
  }

  const orgUUID = await getOrganizationUUID()
  if (!orgUUID) {
    logForDebugging('[bridge] No org UUID for session fetch')
    return null
  }

  const headers = {
    ...getOAuthHeaders(accessToken),
    'anthropic-beta': 'ccr-byoc-2025-07-29',
    'x-organization-uuid': orgUUID,
  }

  const url = `${opts?.baseUrl ?? getOauthConfig().BASE_API_URL}/v1/sessions/${sessionId}`
  logForDebugging(`[bridge] Fetching session ${sessionId}`)

  let response
  try {
    response = await axios.get<{ environment_id?: string; title?: string }>(
      url,
      { headers, timeout: 10_000, validateStatus: s => s < 500 },
    )
  } catch (err: unknown) {
    logForDebugging(
      `[bridge] Session fetch request failed: ${errorMessage(err)}`,
    )
    return null
  }

  if (response.status !== 200) {
    const detail = extractErrorDetail(response.data)
    logForDebugging(
      `[bridge] Session fetch failed with status ${response.status}${detail ? `: ${detail}` : ''}`,
    )
    return null
  }

  return response.data
}

/**
 * Archive a bridge session via POST /v1/sessions/{id}/archive.
 *
 * The CCR server never auto-archives sessions — archival is always an
 * explicit client action. Both `claude remote-control` (standalone bridge) and the
 * always-on `/remote-control` REPL bridge call this during shutdown to archive any
 * sessions that are still alive.
 *
 * The archive endpoint accepts sessions in any status (running, idle,
 * requires_action, pending) and returns 409 if already archived, making
 * it safe to call even if the server-side runner already archived the
 * session.
 *
 * Callers must handle errors — this function has no try/catch; 5xx,
 * timeouts, and network errors throw. Archival is best-effort during
 * cleanup; call sites wrap with .catch().
 */
export async function archiveBridgeSession(
  sessionId: string,
  opts?: {
    baseUrl?: string
    getAccessToken?: () => string | undefined
    timeoutMs?: number
  },
): Promise<void> {
  const { getClaudeAIOAuthTokens } = await import('../utils/auth.js')
  const { getOrganizationUUID } = await import('../services/oauth/client.js')
  const { getOauthConfig } = await import('../constants/oauth.js')
  const { getOAuthHeaders } = await import('../utils/teleport/api.js')
  const { default: axios } = await import('axios')

  const accessToken =
    opts?.getAccessToken?.() ?? getClaudeAIOAuthTokens()?.accessToken
  if (!accessToken) {
    logForDebugging('[bridge] No access token for session archive')
    return
  }

  const orgUUID = await getOrganizationUUID()
  if (!orgUUID) {
    logForDebugging('[bridge] No org UUID for session archive')
    return
  }

  const headers = {
    ...getOAuthHeaders(accessToken),
    'anthropic-beta': 'ccr-byoc-2025-07-29',
    'x-organization-uuid': orgUUID,
  }

  const url = `${opts?.baseUrl ?? getOauthConfig().BASE_API_URL}/v1/sessions/${sessionId}/archive`
  logForDebugging(`[bridge] Archiving session ${sessionId}`)

  const response = await axios.post(
    url,
    {},
    {
      headers,
      timeout: opts?.timeoutMs ?? 10_000,
      validateStatus: s => s < 500,
    },
  )

  if (response.status === 200) {
    logForDebugging(`[bridge] Session ${sessionId} archived successfully`)
  } else {
    const detail = extractErrorDetail(response.data)
    logForDebugging(
      `[bridge] Session archive failed with status ${response.status}${detail ? `: ${detail}` : ''}`,
    )
  }
}

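The status handling the docstring describes (200 means archived now, 409 means the server-side runner got there first, anything else is a failure worth logging) can be sketched as a small classifier. This is an illustrative helper, not code from the bridge; the name `classifyArchiveStatus` is hypothetical.

```typescript
// Hypothetical helper mirroring the archive docstring: 409 is treated as
// success-equivalent because archiving an already-archived session is safe.
function classifyArchiveStatus(
  status: number,
): 'archived' | 'already-archived' | 'failed' {
  if (status === 200) return 'archived'
  if (status === 409) return 'already-archived'
  return 'failed'
}
```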
/**
 * Update the title of a bridge session via PATCH /v1/sessions/{id}.
 *
 * Called when the user renames a session via /rename while a bridge
 * connection is active, so the title stays in sync on claude.ai/code.
 *
 * Errors are swallowed — title sync is best-effort.
 */
export async function updateBridgeSessionTitle(
  sessionId: string,
  title: string,
  opts?: { baseUrl?: string; getAccessToken?: () => string | undefined },
): Promise<void> {
  const { getClaudeAIOAuthTokens } = await import('../utils/auth.js')
  const { getOrganizationUUID } = await import('../services/oauth/client.js')
  const { getOauthConfig } = await import('../constants/oauth.js')
  const { getOAuthHeaders } = await import('../utils/teleport/api.js')
  const { default: axios } = await import('axios')

  const accessToken =
    opts?.getAccessToken?.() ?? getClaudeAIOAuthTokens()?.accessToken
  if (!accessToken) {
    logForDebugging('[bridge] No access token for session title update')
    return
  }

  const orgUUID = await getOrganizationUUID()
  if (!orgUUID) {
    logForDebugging('[bridge] No org UUID for session title update')
    return
  }

  const headers = {
    ...getOAuthHeaders(accessToken),
    'anthropic-beta': 'ccr-byoc-2025-07-29',
    'x-organization-uuid': orgUUID,
  }

  // Compat gateway only accepts session_* (compat/convert.go:27). v2 callers
  // pass raw cse_*; retag here so all callers can pass whatever they hold.
  // Idempotent for v1's session_* and bridgeMain's pre-converted compatSessionId.
  const compatId = toCompatSessionId(sessionId)
  const url = `${opts?.baseUrl ?? getOauthConfig().BASE_API_URL}/v1/sessions/${compatId}`
  logForDebugging(`[bridge] Updating session title: ${compatId} → ${title}`)

  try {
    const response = await axios.patch(
      url,
      { title },
      { headers, timeout: 10_000, validateStatus: s => s < 500 },
    )

    if (response.status === 200) {
      logForDebugging(`[bridge] Session title updated successfully`)
    } else {
      const detail = extractErrorDetail(response.data)
      logForDebugging(
        `[bridge] Session title update failed with status ${response.status}${detail ? `: ${detail}` : ''}`,
      )
    }
  } catch (err: unknown) {
    logForDebugging(
      `[bridge] Session title update request failed: ${errorMessage(err)}`,
    )
  }
}

141
src/bridge/debugUtils.ts
Normal file
@ -0,0 +1,141 @@
import {
  type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
  logEvent,
} from '../services/analytics/index.js'
import { logForDebugging } from '../utils/debug.js'
import { errorMessage } from '../utils/errors.js'
import { jsonStringify } from '../utils/slowOperations.js'

const DEBUG_MSG_LIMIT = 2000

const SECRET_FIELD_NAMES = [
  'session_ingress_token',
  'environment_secret',
  'access_token',
  'secret',
  'token',
]

const SECRET_PATTERN = new RegExp(
  `"(${SECRET_FIELD_NAMES.join('|')})"\\s*:\\s*"([^"]*)"`,
  'g',
)

const REDACT_MIN_LENGTH = 16

export function redactSecrets(s: string): string {
  return s.replace(SECRET_PATTERN, (_match, field: string, value: string) => {
    if (value.length < REDACT_MIN_LENGTH) {
      return `"${field}":"[REDACTED]"`
    }
    const redacted = `${value.slice(0, 8)}...${value.slice(-4)}`
    return `"${field}":"${redacted}"`
  })
}

/** Truncate a string for debug logging, collapsing newlines. */
export function debugTruncate(s: string): string {
  const flat = s.replace(/\n/g, '\\n')
  if (flat.length <= DEBUG_MSG_LIMIT) {
    return flat
  }
  return flat.slice(0, DEBUG_MSG_LIMIT) + `... (${flat.length} chars)`
}

/** Truncate a JSON-serializable value for debug logging. */
export function debugBody(data: unknown): string {
  const raw = typeof data === 'string' ? data : jsonStringify(data)
  const s = redactSecrets(raw)
  if (s.length <= DEBUG_MSG_LIMIT) {
    return s
  }
  return s.slice(0, DEBUG_MSG_LIMIT) + `... (${s.length} chars)`
}

/**
 * Extract a descriptive error message from an axios error (or any error).
 * For HTTP errors, appends the server's response body message if available,
 * since axios's default message only includes the status code.
 */
export function describeAxiosError(err: unknown): string {
  const msg = errorMessage(err)
  if (err && typeof err === 'object' && 'response' in err) {
    const response = (err as { response?: { data?: unknown } }).response
    if (response?.data && typeof response.data === 'object') {
      const data = response.data as Record<string, unknown>
      const detail =
        typeof data.message === 'string'
          ? data.message
          : typeof data.error === 'object' &&
              data.error &&
              'message' in data.error &&
              typeof (data.error as Record<string, unknown>).message ===
                'string'
            ? (data.error as Record<string, unknown>).message
            : undefined
      if (detail) {
        return `${msg}: ${detail}`
      }
    }
  }
  return msg
}

/**
 * Extract the HTTP status code from an axios error, if present.
 * Returns undefined for non-HTTP errors (e.g. network failures).
 */
export function extractHttpStatus(err: unknown): number | undefined {
  if (
    err &&
    typeof err === 'object' &&
    'response' in err &&
    (err as { response?: { status?: unknown } }).response &&
    typeof (err as { response: { status?: unknown } }).response.status ===
      'number'
  ) {
    return (err as { response: { status: number } }).response.status
  }
  return undefined
}

/**
 * Pull a human-readable message out of an API error response body.
 * Checks `data.message` first, then `data.error.message`.
 */
export function extractErrorDetail(data: unknown): string | undefined {
  if (!data || typeof data !== 'object') return undefined
  if ('message' in data && typeof data.message === 'string') {
    return data.message
  }
  if (
    'error' in data &&
    data.error !== null &&
    typeof data.error === 'object' &&
    'message' in data.error &&
    typeof data.error.message === 'string'
  ) {
    return data.error.message
  }
  return undefined
}

/**
 * Log a bridge init skip — debug message + `tengu_bridge_repl_skipped`
 * analytics event. Centralizes the event name and the AnalyticsMetadata
 * cast so call sites don't each repeat the 5-line boilerplate.
 */
export function logBridgeSkip(
  reason: string,
  debugMsg?: string,
  v2?: boolean,
): void {
  if (debugMsg) {
    logForDebugging(debugMsg)
  }
  logEvent('tengu_bridge_repl_skipped', {
    reason:
      reason as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
    ...(v2 !== undefined && { v2 }),
  })
}

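The redaction behavior can be shown with a condensed re-statement of the same regex-replace logic, using a shorter field list. This sketch is for illustration only; the field names and the 16-char threshold mirror the file above, and `redact` is a hypothetical stand-in for `redactSecrets`.

```typescript
// Condensed sketch of the redaction logic: values 16+ chars keep an
// 8-char head and 4-char tail; shorter values are fully redacted.
const FIELDS = ['access_token', 'secret', 'token']
const PATTERN = new RegExp(`"(${FIELDS.join('|')})"\\s*:\\s*"([^"]*)"`, 'g')

function redact(s: string): string {
  return s.replace(PATTERN, (_m, field: string, value: string) => {
    if (value.length < 16) return `"${field}":"[REDACTED]"`
    return `"${field}":"${value.slice(0, 8)}...${value.slice(-4)}"`
  })
}

// Long value keeps head/tail; non-secret fields pass through untouched:
// redact('{"access_token":"abcdefgh12345678","id":"x"}')
//   → '{"access_token":"abcdefgh...5678","id":"x"}'
```

Keeping the head and tail of long values preserves enough of the token to correlate log lines without disclosing the credential.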
165
src/bridge/envLessBridgeConfig.ts
Normal file
@ -0,0 +1,165 @@
import { z } from 'zod/v4'
import { getFeatureValue_DEPRECATED } from '../services/analytics/growthbook.js'
import { lazySchema } from '../utils/lazySchema.js'
import { lt } from '../utils/semver.js'
import { isEnvLessBridgeEnabled } from './bridgeEnabled.js'

export type EnvLessBridgeConfig = {
  // withRetry — init-phase backoff (createSession, POST /bridge, recovery /bridge)
  init_retry_max_attempts: number
  init_retry_base_delay_ms: number
  init_retry_jitter_fraction: number
  init_retry_max_delay_ms: number
  // axios timeout for POST /sessions, POST /bridge, POST /archive
  http_timeout_ms: number
  // BoundedUUIDSet ring size (echo + re-delivery dedup)
  uuid_dedup_buffer_size: number
  // CCRClient worker heartbeat cadence. Server TTL is 60s — 20s gives 3× margin.
  heartbeat_interval_ms: number
  // ±fraction of interval — per-beat jitter to spread fleet load.
  heartbeat_jitter_fraction: number
  // Fire proactive JWT refresh this long before expires_in. Larger buffer =
  // more frequent refresh (refresh cadence ≈ expires_in - buffer).
  token_refresh_buffer_ms: number
  // Archive POST timeout in teardown(). Distinct from http_timeout_ms because
  // gracefulShutdown races runCleanupFunctions() against a 2s cap — a 10s
  // axios timeout on a slow/stalled archive burns the whole budget on a
  // request that forceExit will kill anyway.
  teardown_archive_timeout_ms: number
  // Deadline for onConnect after transport.connect(). If neither onConnect
  // nor onClose fires before this, emit tengu_bridge_repl_connect_timeout
  // — the only telemetry for the ~1% of sessions that emit `started` then
  // go silent (no error, no event, just nothing).
  connect_timeout_ms: number
  // Semver floor for the env-less bridge path. Separate from the v1
  // tengu_bridge_min_version config so a v2-specific bug can force upgrades
  // without blocking v1 (env-based) clients, and vice versa.
  min_version: string
  // When true, tell users their claude.ai app may be too old to see v2
  // sessions — lets us roll the v2 bridge before the app ships the new
  // session-list query.
  should_show_app_upgrade_message: boolean
}

export const DEFAULT_ENV_LESS_BRIDGE_CONFIG: EnvLessBridgeConfig = {
  init_retry_max_attempts: 3,
  init_retry_base_delay_ms: 500,
  init_retry_jitter_fraction: 0.25,
  init_retry_max_delay_ms: 4000,
  http_timeout_ms: 10_000,
  uuid_dedup_buffer_size: 2000,
  heartbeat_interval_ms: 20_000,
  heartbeat_jitter_fraction: 0.1,
  token_refresh_buffer_ms: 300_000,
  teardown_archive_timeout_ms: 1500,
  connect_timeout_ms: 15_000,
  min_version: '0.0.0',
  should_show_app_upgrade_message: false,
}

// Floors reject the whole object on violation (fall back to DEFAULT) rather
// than partially trusting — same defense-in-depth as pollConfig.ts.
const envLessBridgeConfigSchema = lazySchema(() =>
  z.object({
    init_retry_max_attempts: z.number().int().min(1).max(10).default(3),
    init_retry_base_delay_ms: z.number().int().min(100).default(500),
    init_retry_jitter_fraction: z.number().min(0).max(1).default(0.25),
    init_retry_max_delay_ms: z.number().int().min(500).default(4000),
    http_timeout_ms: z.number().int().min(2000).default(10_000),
    uuid_dedup_buffer_size: z.number().int().min(100).max(50_000).default(2000),
    // Server TTL is 60s. Floor 5s prevents thrash; cap 30s keeps ≥2× margin.
    heartbeat_interval_ms: z
      .number()
      .int()
      .min(5000)
      .max(30_000)
      .default(20_000),
    // ±fraction per beat. Cap 0.5: at max interval (30s) × 1.5 = 45s worst case,
    // still under the 60s TTL.
    heartbeat_jitter_fraction: z.number().min(0).max(0.5).default(0.1),
    // Floor 30s prevents tight-looping. Cap 30min rejects buffer-vs-delay
    // semantic inversion: ops entering expires_in-5min (the *delay until
    // refresh*) instead of 5min (the *buffer before expiry*) yields
    // delayMs = expires_in - buffer ≈ 5min instead of ≈4h. Both are positive
    // durations so .min() alone can't distinguish; .max() catches the
    // inverted value since buffer ≥ 30min is nonsensical for a multi-hour JWT.
    token_refresh_buffer_ms: z
      .number()
      .int()
      .min(30_000)
      .max(1_800_000)
      .default(300_000),
    // Cap 2000 keeps this under gracefulShutdown's 2s cleanup race — a higher
    // timeout just lies to axios since forceExit kills the socket regardless.
    teardown_archive_timeout_ms: z
      .number()
      .int()
      .min(500)
      .max(2000)
      .default(1500),
    // Observed p99 connect is ~2-3s; 15s is ~5× headroom. Floor 5s bounds
    // false-positive rate under transient slowness; cap 60s bounds how long
    // a truly-stalled session stays dark.
    connect_timeout_ms: z.number().int().min(5_000).max(60_000).default(15_000),
    min_version: z
      .string()
      .refine(v => {
        try {
          lt(v, '0.0.0')
          return true
        } catch {
          return false
        }
      })
      .default('0.0.0'),
    should_show_app_upgrade_message: z.boolean().default(false),
  }),
)

/**
 * Fetch the env-less bridge timing config from GrowthBook. Read once per
 * initEnvLessBridgeCore call — config is fixed for the lifetime of a bridge
 * session.
 *
 * Uses the blocking getter (not _CACHED_MAY_BE_STALE) because /remote-control
 * runs well after GrowthBook init — initializeGrowthBook() resolves instantly,
 * so there's no startup penalty, and we get the fresh in-memory remoteEval
 * value instead of the stale-on-first-read disk cache. The _DEPRECATED suffix
 * warns against startup-path usage, which this isn't.
 */
export async function getEnvLessBridgeConfig(): Promise<EnvLessBridgeConfig> {
  const raw = await getFeatureValue_DEPRECATED<unknown>(
    'tengu_bridge_repl_v2_config',
    DEFAULT_ENV_LESS_BRIDGE_CONFIG,
  )
  const parsed = envLessBridgeConfigSchema().safeParse(raw)
  return parsed.success ? parsed.data : DEFAULT_ENV_LESS_BRIDGE_CONFIG
}

/**
 * Returns an error message if the current CLI version is below the minimum
 * required for the env-less (v2) bridge path, or null if the version is fine.
 *
 * v2 analogue of checkBridgeMinVersion() — reads from tengu_bridge_repl_v2_config
 * instead of tengu_bridge_min_version so the two implementations can enforce
 * independent floors.
 */
export async function checkEnvLessBridgeMinVersion(): Promise<string | null> {
  const cfg = await getEnvLessBridgeConfig()
  if (cfg.min_version && lt(MACRO.VERSION, cfg.min_version)) {
    return `Your version of Claude Code (${MACRO.VERSION}) is too old for Remote Control.\nVersion ${cfg.min_version} or higher is required. Run \`claude update\` to update.`
  }
  return null
}

/**
 * Whether to nudge users toward upgrading their claude.ai app when a
 * Remote Control session starts. True only when the v2 bridge is active
 * AND the should_show_app_upgrade_message config bit is set — lets us
 * roll the v2 bridge before the app ships the new session-list query.
 */
export async function shouldShowAppUpgradeMessage(): Promise<boolean> {
  if (!isEnvLessBridgeEnabled()) return false
  const cfg = await getEnvLessBridgeConfig()
  return cfg.should_show_app_upgrade_message
}

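The comments on `heartbeat_jitter_fraction` describe a ±fraction-of-interval jitter per beat. A minimal sketch of that computation, assuming a symmetric uniform distribution around the interval; `nextHeartbeatDelay` is a hypothetical name, not the bridge's actual implementation.

```typescript
// Per-beat jittered delay: interval ± (jitterFraction × interval).
// rand is injectable for testing; defaults to Math.random.
function nextHeartbeatDelay(
  intervalMs: number,
  jitterFraction: number,
  rand: () => number = Math.random,
): number {
  // Map rand() ∈ [0, 1) onto [-1, 1), then scale by the jitter band.
  const jitter = (rand() * 2 - 1) * jitterFraction * intervalMs
  return intervalMs + jitter
}
```

With the defaults (20s interval, 0.1 fraction) each beat lands in [18s, 22s], which spreads fleet load while staying well under the 60s server TTL even at the schema's 30s/0.5 caps (45s worst case).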
71
src/bridge/flushGate.ts
Normal file
@ -0,0 +1,71 @@
/**
 * State machine for gating message writes during an initial flush.
 *
 * When a bridge session starts, historical messages are flushed to the
 * server via a single HTTP POST. During that flush, new messages must
 * be queued to prevent them from arriving at the server interleaved
 * with the historical messages.
 *
 * Lifecycle:
 *   start()      → enqueue() returns true, items are queued
 *   end()        → returns queued items for draining, enqueue() returns false
 *   drop()       → discards queued items (permanent transport close)
 *   deactivate() → clears active flag without dropping items
 *                  (transport replacement — new transport will drain)
 */
export class FlushGate<T> {
  private _active = false
  private _pending: T[] = []

  get active(): boolean {
    return this._active
  }

  get pendingCount(): number {
    return this._pending.length
  }

  /** Mark flush as in-progress. enqueue() will start queuing items. */
  start(): void {
    this._active = true
  }

  /**
   * End the flush and return any queued items for draining.
   * Caller is responsible for sending the returned items.
   */
  end(): T[] {
    this._active = false
    return this._pending.splice(0)
  }

  /**
   * If flush is active, queue the items and return true.
   * If flush is not active, return false (caller should send directly).
   */
  enqueue(...items: T[]): boolean {
    if (!this._active) return false
    this._pending.push(...items)
    return true
  }

  /**
   * Discard all queued items (permanent transport close).
   * Returns the number of items dropped.
   */
  drop(): number {
    this._active = false
    const count = this._pending.length
    this._pending.length = 0
    return count
  }

  /**
   * Clear the active flag without dropping queued items.
   * Used when the transport is replaced (onWorkReceived) — the new
   * transport's flush will drain the pending items.
   */
  deactivate(): void {
    this._active = false
  }
}

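The start/enqueue/end lifecycle from the docstring can be exercised with a minimal self-contained copy of the gate (trimmed to the three core methods, so this block runs on its own; the real class above also has `drop()` and `deactivate()`).

```typescript
// Minimal copy of FlushGate showing the lifecycle: while a flush is
// active, writes are queued; end() hands them back for draining.
class Gate<T> {
  private active = false
  private pending: T[] = []
  start(): void {
    this.active = true
  }
  enqueue(item: T): boolean {
    if (!this.active) return false
    this.pending.push(item)
    return true
  }
  end(): T[] {
    this.active = false
    return this.pending.splice(0)
  }
}

const gate = new Gate<string>()
gate.start()
const queued = gate.enqueue('msg-1') // true: flush active, item queued
const drained = gate.end() // ['msg-1'] — caller sends these now
const direct = gate.enqueue('msg-2') // false: caller should send directly
```

The boolean return from `enqueue()` is what lets call sites fall through to a direct send without checking `active` separately.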
175
src/bridge/inboundAttachments.ts
Normal file
@ -0,0 +1,175 @@
/**
 * Resolve file_uuid attachments on inbound bridge user messages.
 *
 * Web composer uploads via cookie-authed /api/{org}/upload, sends file_uuid
 * alongside the message. Here we fetch each via GET /api/oauth/files/{uuid}/content
 * (oauth-authed, same store), write to ~/.claude/uploads/{sessionId}/, and
 * return @path refs to prepend. Claude's Read tool takes it from there.
 *
 * Best-effort: any failure (no token, network, non-2xx, disk) logs debug and
 * skips that attachment. The message still reaches Claude, just without @path.
 */

import type { ContentBlockParam } from '@anthropic-ai/sdk/resources/messages.mjs'
import axios from 'axios'
import { randomUUID } from 'crypto'
import { mkdir, writeFile } from 'fs/promises'
import { basename, join } from 'path'
import { z } from 'zod/v4'
import { getSessionId } from '../bootstrap/state.js'
import { logForDebugging } from '../utils/debug.js'
import { getClaudeConfigHomeDir } from '../utils/envUtils.js'
import { lazySchema } from '../utils/lazySchema.js'
import { getBridgeAccessToken, getBridgeBaseUrl } from './bridgeConfig.js'

const DOWNLOAD_TIMEOUT_MS = 30_000

function debug(msg: string): void {
  logForDebugging(`[bridge:inbound-attach] ${msg}`)
}

const attachmentSchema = lazySchema(() =>
  z.object({
    file_uuid: z.string(),
    file_name: z.string(),
  }),
)
const attachmentsArraySchema = lazySchema(() => z.array(attachmentSchema()))

export type InboundAttachment = z.infer<ReturnType<typeof attachmentSchema>>

/** Pull file_attachments off a loosely-typed inbound message. */
export function extractInboundAttachments(msg: unknown): InboundAttachment[] {
  if (typeof msg !== 'object' || msg === null || !('file_attachments' in msg)) {
    return []
  }
  const parsed = attachmentsArraySchema().safeParse(msg.file_attachments)
  return parsed.success ? parsed.data : []
}

/**
 * Strip path components and keep only filename-safe chars. file_name comes
 * from the network (web composer), so treat it as untrusted even though the
 * composer controls it.
 */
function sanitizeFileName(name: string): string {
  const base = basename(name).replace(/[^a-zA-Z0-9._-]/g, '_')
  return base || 'attachment'
}

function uploadsDir(): string {
  return join(getClaudeConfigHomeDir(), 'uploads', getSessionId())
}

/**
 * Fetch + write one attachment. Returns the absolute path on success,
 * undefined on any failure.
 */
async function resolveOne(att: InboundAttachment): Promise<string | undefined> {
  const token = getBridgeAccessToken()
  if (!token) {
    debug('skip: no oauth token')
    return undefined
  }

  let data: Buffer
  try {
    // getOauthConfig() (via getBridgeBaseUrl) throws on a non-allowlisted
    // CLAUDE_CODE_CUSTOM_OAUTH_URL — keep it inside the try so a bad
    // FedStart URL degrades to "no @path" instead of crashing print.ts's
    // reader loop (which has no catch around the await).
    const url = `${getBridgeBaseUrl()}/api/oauth/files/${encodeURIComponent(att.file_uuid)}/content`
    const response = await axios.get(url, {
      headers: { Authorization: `Bearer ${token}` },
      responseType: 'arraybuffer',
      timeout: DOWNLOAD_TIMEOUT_MS,
      validateStatus: () => true,
    })
    if (response.status !== 200) {
      debug(`fetch ${att.file_uuid} failed: status=${response.status}`)
      return undefined
    }
    data = Buffer.from(response.data)
  } catch (e) {
    debug(`fetch ${att.file_uuid} threw: ${e}`)
    return undefined
  }

  // uuid-prefix makes collisions impossible across messages and within one
  // (same filename, different files). 8 chars is enough — this isn't security.
  const safeName = sanitizeFileName(att.file_name)
  const prefix = (
    att.file_uuid.slice(0, 8) || randomUUID().slice(0, 8)
  ).replace(/[^a-zA-Z0-9_-]/g, '_')
  const dir = uploadsDir()
  const outPath = join(dir, `${prefix}-${safeName}`)

  try {
    await mkdir(dir, { recursive: true })
    await writeFile(outPath, data)
  } catch (e) {
    debug(`write ${outPath} failed: ${e}`)
    return undefined
  }

  debug(`resolved ${att.file_uuid} → ${outPath} (${data.length} bytes)`)
  return outPath
}

/**
 * Resolve all attachments on an inbound message to a prefix string of
 * @path refs. Empty string if none resolved.
 */
export async function resolveInboundAttachments(
  attachments: InboundAttachment[],
): Promise<string> {
  if (attachments.length === 0) return ''
  debug(`resolving ${attachments.length} attachment(s)`)
  const paths = await Promise.all(attachments.map(resolveOne))
  const ok = paths.filter((p): p is string => p !== undefined)
  if (ok.length === 0) return ''
  // Quoted form — extractAtMentionedFiles truncates unquoted @refs at the
  // first space, which breaks any home dir with spaces (/Users/John Smith/).
  return ok.map(p => `@"${p}"`).join(' ') + ' '
}

/**
 * Prepend @path refs to content, whichever form it's in.
 * Targets the LAST text block — processUserInputBase reads inputString
 * from processedBlocks[processedBlocks.length - 1], so putting refs in
 * block[0] means they're silently ignored for [text, image] content.
 */
export function prependPathRefs(
  content: string | Array<ContentBlockParam>,
  prefix: string,
): string | Array<ContentBlockParam> {
  if (!prefix) return content
  if (typeof content === 'string') return prefix + content
  const i = content.findLastIndex(b => b.type === 'text')
  if (i !== -1) {
    const b = content[i]!
    if (b.type === 'text') {
      return [
        ...content.slice(0, i),
        { ...b, text: prefix + b.text },
        ...content.slice(i + 1),
      ]
    }
  }
  // No text block — append one at the end so it's last.
  return [...content, { type: 'text', text: prefix.trimEnd() }]
}

/**
 * Convenience: extract + resolve + prepend. No-op when the message has no
 * file_attachments field (fast path — no network, returns same reference).
 */
export async function resolveAndPrepend(
  msg: unknown,
  content: string | Array<ContentBlockParam>,
): Promise<string | Array<ContentBlockParam>> {
  const attachments = extractInboundAttachments(msg)
  if (attachments.length === 0) return content
  const prefix = await resolveInboundAttachments(attachments)
  return prependPathRefs(content, prefix)
}

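The quoted-@ref prefix format (and why quoting matters for paths with spaces) can be shown with a simplified, self-contained sketch. `buildPrefix` and `prepend` are hypothetical names re-stating the join from `resolveInboundAttachments` and the string branch of `prependPathRefs`; the real functions also handle content-block arrays.

```typescript
// Join resolved paths as quoted @refs with a trailing space, so the
// prefix can be concatenated directly onto the user's message text.
function buildPrefix(paths: string[]): string {
  if (paths.length === 0) return ''
  // Quoting keeps paths with spaces (e.g. /Users/John Smith/) intact.
  return paths.map(p => `@"${p}"`).join(' ') + ' '
}

// String-content branch of the prepend: empty prefix is a no-op.
function prepend(content: string, prefix: string): string {
  return prefix ? prefix + content : content
}
```

Without the quotes, an @ref would be cut at the first space when the message text is later scanned for mentioned files.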
80
src/bridge/inboundMessages.ts
Normal file
@ -0,0 +1,80 @@
|
||||
import type {
  Base64ImageSource,
  ContentBlockParam,
  ImageBlockParam,
} from '@anthropic-ai/sdk/resources/messages.mjs'
import type { UUID } from 'crypto'
import type { SDKMessage } from '../entrypoints/agentSdkTypes.js'
import { detectImageFormatFromBase64 } from '../utils/imageResizer.js'

/**
 * Process an inbound user message from the bridge, extracting content
 * and UUID for enqueueing. Supports both string content and
 * ContentBlockParam[] (e.g. messages containing images).
 *
 * Normalizes image blocks from bridge clients that may use camelCase
 * `mediaType` instead of snake_case `media_type` (mobile-apps#5825).
 *
 * Returns the extracted fields, or undefined if the message should be
 * skipped (non-user type, missing/empty content).
 */
export function extractInboundMessageFields(
  msg: SDKMessage,
):
  | { content: string | Array<ContentBlockParam>; uuid: UUID | undefined }
  | undefined {
  if (msg.type !== 'user') return undefined
  const content = msg.message?.content
  if (!content) return undefined
  if (Array.isArray(content) && content.length === 0) return undefined

  const uuid =
    'uuid' in msg && typeof msg.uuid === 'string'
      ? (msg.uuid as UUID)
      : undefined

  return {
    content: Array.isArray(content) ? normalizeImageBlocks(content) : content,
    uuid,
  }
}

/**
 * Normalize image content blocks from bridge clients. iOS/web clients may
 * send `mediaType` (camelCase) instead of `media_type` (snake_case), or
 * omit the field entirely. Without normalization, the bad block poisons
 * the session — every subsequent API call fails with
 * "media_type: Field required".
 *
 * Fast-path scan returns the original array reference when no
 * normalization is needed (zero allocation on the happy path).
 */
export function normalizeImageBlocks(
  blocks: Array<ContentBlockParam>,
): Array<ContentBlockParam> {
  if (!blocks.some(isMalformedBase64Image)) return blocks

  return blocks.map(block => {
    if (!isMalformedBase64Image(block)) return block
    const src = block.source as unknown as Record<string, unknown>
    const mediaType =
      typeof src.mediaType === 'string' && src.mediaType
        ? src.mediaType
        : detectImageFormatFromBase64(block.source.data)
    return {
      ...block,
      source: {
        type: 'base64' as const,
        media_type: mediaType as Base64ImageSource['media_type'],
        data: block.source.data,
      },
    }
  })
}

function isMalformedBase64Image(
  block: ContentBlockParam,
): block is ImageBlockParam & { source: Base64ImageSource } {
  if (block.type !== 'image' || block.source?.type !== 'base64') return false
  return !(block.source as unknown as Record<string, unknown>).media_type
}
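The normalization above can be sketched standalone. Here `detectFormat` is a stub standing in for detectImageFormatFromBase64 (an assumption — the real helper lives in src/utils/imageResizer.ts), and the types are simplified versions of the SDK's:

```typescript
// Standalone sketch of the camelCase→snake_case normalization above.
// `detectFormat` is a stub standing in for detectImageFormatFromBase64.
type Base64Source = {
  type: 'base64'
  data: string
  media_type?: string
  mediaType?: string // what misbehaving bridge clients send
}
type ImageBlock = { type: 'image'; source: Base64Source }

const detectFormat = (_data: string): string => 'image/png' // stub

function normalizeOne(block: ImageBlock): ImageBlock {
  if (block.source.media_type) return block // well-formed: same reference back
  const mediaType = block.source.mediaType ?? detectFormat(block.source.data)
  return {
    ...block,
    source: { type: 'base64', media_type: mediaType, data: block.source.data },
  }
}

// A client that sent camelCase `mediaType` gets the snake_case field the
// API requires, so the block no longer poisons every subsequent call:
const fixed = normalizeOne({
  type: 'image',
  source: { type: 'base64', data: 'AAAA', mediaType: 'image/jpeg' },
})
console.log(fixed.source.media_type) // 'image/jpeg'
```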
569
src/bridge/initReplBridge.ts
Normal file
@ -0,0 +1,569 @@
/**
 * REPL-specific wrapper around initBridgeCore. Owns the parts that read
 * bootstrap state — gates, cwd, session ID, git context, OAuth, title
 * derivation — then delegates to the bootstrap-free core.
 *
 * Split out of replBridge.ts because the sessionStorage import
 * (getCurrentSessionTitle) transitively pulls in src/commands.ts → the
 * entire slash command + React component tree (~1300 modules). Keeping
 * initBridgeCore in a file that doesn't touch sessionStorage lets
 * daemonBridge.ts import the core without bloating the Agent SDK bundle.
 *
 * Called via dynamic import by useReplBridge (auto-start) and print.ts
 * (SDK -p mode via query.enableRemoteControl).
 */

import { feature } from 'bun:bundle'
import { hostname } from 'os'
import { getOriginalCwd, getSessionId } from '../bootstrap/state.js'
import type { SDKMessage } from '../entrypoints/agentSdkTypes.js'
import type { SDKControlResponse } from '../entrypoints/sdk/controlTypes.js'
import { getFeatureValue_CACHED_WITH_REFRESH } from '../services/analytics/growthbook.js'
import { getOrganizationUUID } from '../services/oauth/client.js'
import {
  isPolicyAllowed,
  waitForPolicyLimitsToLoad,
} from '../services/policyLimits/index.js'
import type { Message } from '../types/message.js'
import {
  checkAndRefreshOAuthTokenIfNeeded,
  getClaudeAIOAuthTokens,
  handleOAuth401Error,
} from '../utils/auth.js'
import { getGlobalConfig, saveGlobalConfig } from '../utils/config.js'
import { logForDebugging } from '../utils/debug.js'
import { stripDisplayTagsAllowEmpty } from '../utils/displayTags.js'
import { errorMessage } from '../utils/errors.js'
import { getBranch, getRemoteUrl } from '../utils/git.js'
import { toSDKMessages } from '../utils/messages/mappers.js'
import {
  getContentText,
  getMessagesAfterCompactBoundary,
  isSyntheticMessage,
} from '../utils/messages.js'
import type { PermissionMode } from '../utils/permissions/PermissionMode.js'
import { getCurrentSessionTitle } from '../utils/sessionStorage.js'
import {
  extractConversationText,
  generateSessionTitle,
} from '../utils/sessionTitle.js'
import { generateShortWordSlug } from '../utils/words.js'
import {
  getBridgeAccessToken,
  getBridgeBaseUrl,
  getBridgeTokenOverride,
} from './bridgeConfig.js'
import {
  checkBridgeMinVersion,
  isBridgeEnabledBlocking,
  isCseShimEnabled,
  isEnvLessBridgeEnabled,
} from './bridgeEnabled.js'
import {
  archiveBridgeSession,
  createBridgeSession,
  updateBridgeSessionTitle,
} from './createSession.js'
import { logBridgeSkip } from './debugUtils.js'
import { checkEnvLessBridgeMinVersion } from './envLessBridgeConfig.js'
import { getPollIntervalConfig } from './pollConfig.js'
import type { BridgeState, ReplBridgeHandle } from './replBridge.js'
import { initBridgeCore } from './replBridge.js'
import { setCseShimGate } from './sessionIdCompat.js'
import type { BridgeWorkerType } from './types.js'

export type InitBridgeOptions = {
  onInboundMessage?: (msg: SDKMessage) => void | Promise<void>
  onPermissionResponse?: (response: SDKControlResponse) => void
  onInterrupt?: () => void
  onSetModel?: (model: string | undefined) => void
  onSetMaxThinkingTokens?: (maxTokens: number | null) => void
  onSetPermissionMode?: (
    mode: PermissionMode,
  ) => { ok: true } | { ok: false; error: string }
  onStateChange?: (state: BridgeState, detail?: string) => void
  initialMessages?: Message[]
  // Explicit session name from `/remote-control <name>`. When set, overrides
  // the title derived from the conversation or /rename.
  initialName?: string
  // Fresh view of the full conversation at call time. Used by onUserMessage's
  // count-3 derivation to call generateSessionTitle over the full conversation.
  // Optional — print.ts's SDK enableRemoteControl path has no REPL message
  // array; count-3 falls back to the single message text when absent.
  getMessages?: () => Message[]
  // UUIDs already flushed in a prior bridge session. Messages with these
  // UUIDs are excluded from the initial flush to avoid poisoning the
  // server (duplicate UUIDs across sessions cause the WS to be killed).
  // Mutated in place — newly flushed UUIDs are added after each flush.
  previouslyFlushedUUIDs?: Set<string>
  /** See BridgeCoreParams.perpetual. */
  perpetual?: boolean
  /**
   * When true, the bridge only forwards events outbound (no SSE inbound
   * stream). Used by CCR mirror mode — local sessions visible on claude.ai
   * without enabling inbound control.
   */
  outboundOnly?: boolean
  tags?: string[]
}

export async function initReplBridge(
  options?: InitBridgeOptions,
): Promise<ReplBridgeHandle | null> {
  const {
    onInboundMessage,
    onPermissionResponse,
    onInterrupt,
    onSetModel,
    onSetMaxThinkingTokens,
    onSetPermissionMode,
    onStateChange,
    initialMessages,
    getMessages,
    previouslyFlushedUUIDs,
    initialName,
    perpetual,
    outboundOnly,
    tags,
  } = options ?? {}

  // Wire the cse_ shim kill switch so toCompatSessionId respects the
  // GrowthBook gate. Daemon/SDK paths skip this — shim defaults to active.
  setCseShimGate(isCseShimEnabled)

  // 1. Runtime gate
  if (!(await isBridgeEnabledBlocking())) {
    logBridgeSkip('not_enabled', '[bridge:repl] Skipping: bridge not enabled')
    return null
  }

  // 1b. Minimum version check — deferred to after the v1/v2 branch below,
  // since each implementation has its own floor (tengu_bridge_min_version
  // for v1, tengu_bridge_repl_v2_config.min_version for v2).

  // 2. Check OAuth — must be signed in with claude.ai. Runs before the
  // policy check so console-auth users get the actionable "/login" hint
  // instead of a misleading policy error from a stale/wrong-org cache.
  if (!getBridgeAccessToken()) {
    logBridgeSkip('no_oauth', '[bridge:repl] Skipping: no OAuth tokens')
    onStateChange?.('failed', '/login')
    return null
  }

  // 3. Check organization policy — remote control may be disabled
  await waitForPolicyLimitsToLoad()
  if (!isPolicyAllowed('allow_remote_control')) {
    logBridgeSkip(
      'policy_denied',
      '[bridge:repl] Skipping: allow_remote_control policy not allowed',
    )
    onStateChange?.('failed', "disabled by your organization's policy")
    return null
  }

  // When CLAUDE_BRIDGE_OAUTH_TOKEN is set (ant-only local dev), the bridge
  // uses that token directly via getBridgeAccessToken() — keychain state is
  // irrelevant. Skip 2b/2c to preserve that decoupling: an expired keychain
  // token shouldn't block a bridge connection that doesn't use it.
  if (!getBridgeTokenOverride()) {
    // 2a. Cross-process backoff. If N prior processes already saw this exact
    // dead token (matched by expiresAt), skip silently — no event, no refresh
    // attempt. The count threshold tolerates transient refresh failures (auth
    // server 5xx, lockfile errors per auth.ts:1437/1444/1485): each process
    // independently retries until 3 consecutive failures prove the token dead.
    // Mirrors useReplBridge's MAX_CONSECUTIVE_INIT_FAILURES for in-process.
    // The expiresAt key is content-addressed: /login → new token → new expiresAt
    // → this stops matching without any explicit clear.
    const cfg = getGlobalConfig()
    if (
      cfg.bridgeOauthDeadExpiresAt != null &&
      (cfg.bridgeOauthDeadFailCount ?? 0) >= 3 &&
      getClaudeAIOAuthTokens()?.expiresAt === cfg.bridgeOauthDeadExpiresAt
    ) {
      logForDebugging(
        `[bridge:repl] Skipping: cross-process backoff (dead token seen ${cfg.bridgeOauthDeadFailCount} times)`,
      )
      return null
    }

    // 2b. Proactively refresh if expired. Mirrors bridgeMain.ts:2096 — the REPL
    // bridge fires at useEffect mount BEFORE any v1/messages call, making this
    // usually the first OAuth request of the session. Without this, ~9% of
    // registrations hit the server with a >8h-expired token → 401 → withOAuthRetry
    // recovers, but the server logs a 401 we can avoid. VPN egress IPs observed
    // at 30:1 401:200 when many unrelated users cluster at the 8h TTL boundary.
    //
    // Fresh-token cost: one memoized read + one Date.now() comparison (~µs).
    // checkAndRefreshOAuthTokenIfNeeded clears its own cache in every path that
    // touches the keychain (refresh success, lockfile race, throw), so no
    // explicit clearOAuthTokenCache() here — that would force a blocking
    // keychain spawn on the 91%+ fresh-token path.
    await checkAndRefreshOAuthTokenIfNeeded()

    // 2c. Skip if token is still expired post-refresh-attempt. Env-var / FD
    // tokens (auth.ts:894-917) have expiresAt=null → never trip this. But a
    // keychain token whose refresh token is dead (password change, org left,
    // token GC'd) has expiresAt<now AND refresh just failed — the client would
    // otherwise loop 401 forever: withOAuthRetry → handleOAuth401Error →
    // refresh fails again → retry with same stale token → 401 again.
    // Datadog 2026-03-08: single IPs generating 2,879 such 401s/day. Skip the
    // guaranteed-fail API call; useReplBridge surfaces the failure.
    //
    // Intentionally NOT using isOAuthTokenExpired here — that has a 5-minute
    // proactive-refresh buffer, which is the right heuristic for "should
    // refresh soon" but wrong for "provably unusable". A token with 3min left
    // + transient refresh endpoint blip (5xx/timeout/wifi-reconnect) would
    // falsely trip a buffered check; the still-valid token would connect fine.
    // Check actual expiry instead: past-expiry AND refresh-failed → truly dead.
    const tokens = getClaudeAIOAuthTokens()
    if (tokens && tokens.expiresAt !== null && tokens.expiresAt <= Date.now()) {
      logBridgeSkip(
        'oauth_expired_unrefreshable',
        '[bridge:repl] Skipping: OAuth token expired and refresh failed (re-login required)',
      )
      onStateChange?.('failed', '/login')
      // Persist for the next process. Increments failCount when re-discovering
      // the same dead token (matched by expiresAt); resets to 1 for a different
      // token. Once count reaches 3, step 2a's early-return fires and this path
      // is never reached again — writes are capped at 3 per dead token.
      // Local const captures the narrowed type (closure loses !==null narrowing).
      const deadExpiresAt = tokens.expiresAt
      saveGlobalConfig(c => ({
        ...c,
        bridgeOauthDeadExpiresAt: deadExpiresAt,
        bridgeOauthDeadFailCount:
          c.bridgeOauthDeadExpiresAt === deadExpiresAt
            ? (c.bridgeOauthDeadFailCount ?? 0) + 1
            : 1,
      }))
      return null
    }
  }

  // 4. Compute baseUrl — needed by both v1 (env-based) and v2 (env-less)
  // paths. Hoisted above the v2 gate so both can use it.
  const baseUrl = getBridgeBaseUrl()

  // 5. Derive session title. Precedence: explicit initialName → /rename
  // (session storage) → last meaningful user message → generated slug.
  // Cosmetic only (claude.ai session list); the model never sees it.
  // Two flags: `hasExplicitTitle` (initialName or /rename — never auto-
  // overwrite) vs. `hasTitle` (any title, including auto-derived — blocks
  // the count-1 re-derivation but not count-3). The onUserMessage callback
  // (wired to both v1 and v2 below) derives from the 1st prompt and again
  // from the 3rd so mobile/web show a title that reflects more context.
  // The slug fallback (e.g. "remote-control-graceful-unicorn") makes
  // auto-started sessions distinguishable in the claude.ai list before the
  // first prompt.
  let title = `remote-control-${generateShortWordSlug()}`
  let hasTitle = false
  let hasExplicitTitle = false
  if (initialName) {
    title = initialName
    hasTitle = true
    hasExplicitTitle = true
  } else {
    const sessionId = getSessionId()
    const customTitle = sessionId
      ? getCurrentSessionTitle(sessionId)
      : undefined
    if (customTitle) {
      title = customTitle
      hasTitle = true
      hasExplicitTitle = true
    } else if (initialMessages && initialMessages.length > 0) {
      // Find the last user message that has meaningful content. Skip meta
      // (nudges), tool results, compact summaries ("This session is being
      // continued…"), non-human origins (task notifications, channel pushes),
      // and synthetic interrupts ([Request interrupted by user]) — none are
      // human-authored. Same filter as extractTitleText + isSyntheticMessage.
      for (let i = initialMessages.length - 1; i >= 0; i--) {
        const msg = initialMessages[i]!
        if (
          msg.type !== 'user' ||
          msg.isMeta ||
          msg.toolUseResult ||
          msg.isCompactSummary ||
          (msg.origin && msg.origin.kind !== 'human') ||
          isSyntheticMessage(msg)
        )
          continue
        const rawContent = getContentText(msg.message.content)
        if (!rawContent) continue
        const derived = deriveTitle(rawContent)
        if (!derived) continue
        title = derived
        hasTitle = true
        break
      }
    }
  }

  // Shared by both v1 and v2 — fires on every title-worthy user message until
  // it returns true. At count 1: deriveTitle placeholder immediately, then
  // generateSessionTitle (Haiku, sentence-case) fire-and-forget upgrade. At
  // count 3: re-generate over the full conversation. Skips entirely if the
  // title is explicit (/remote-control <name> or /rename) — re-checks
  // sessionStorage at call time so /rename between messages isn't clobbered.
  // Skips count 1 if initialMessages already derived (that title is fresh);
  // still refreshes at count 3. v2 passes cse_*; updateBridgeSessionTitle
  // retags internally.
  let userMessageCount = 0
  let lastBridgeSessionId: string | undefined
  let genSeq = 0
  const patch = (
    derived: string,
    bridgeSessionId: string,
    atCount: number,
  ): void => {
    hasTitle = true
    title = derived
    logForDebugging(
      `[bridge:repl] derived title from message ${atCount}: ${derived}`,
    )
    void updateBridgeSessionTitle(bridgeSessionId, derived, {
      baseUrl,
      getAccessToken: getBridgeAccessToken,
    }).catch(() => {})
  }
  // Fire-and-forget Haiku generation with post-await guards. Re-checks /rename
  // (sessionStorage), v1 env-lost (lastBridgeSessionId), and same-session
  // out-of-order resolution (genSeq — count-1's Haiku resolving after count-3
  // would clobber the richer title). generateSessionTitle never rejects.
  const generateAndPatch = (input: string, bridgeSessionId: string): void => {
    const gen = ++genSeq
    const atCount = userMessageCount
    void generateSessionTitle(input, AbortSignal.timeout(15_000)).then(
      generated => {
        if (
          generated &&
          gen === genSeq &&
          lastBridgeSessionId === bridgeSessionId &&
          !getCurrentSessionTitle(getSessionId())
        ) {
          patch(generated, bridgeSessionId, atCount)
        }
      },
    )
  }
  const onUserMessage = (text: string, bridgeSessionId: string): boolean => {
    if (hasExplicitTitle || getCurrentSessionTitle(getSessionId())) {
      return true
    }
    // v1 env-lost re-creates the session with a new ID. Reset the count so
    // the new session gets its own count-3 derivation; hasTitle stays true
    // (new session was created via getCurrentTitle(), which reads the count-1
    // title from this closure), so count-1 of the fresh cycle correctly skips.
    if (
      lastBridgeSessionId !== undefined &&
      lastBridgeSessionId !== bridgeSessionId
    ) {
      userMessageCount = 0
    }
    lastBridgeSessionId = bridgeSessionId
    userMessageCount++
    if (userMessageCount === 1 && !hasTitle) {
      const placeholder = deriveTitle(text)
      if (placeholder) patch(placeholder, bridgeSessionId, userMessageCount)
      generateAndPatch(text, bridgeSessionId)
    } else if (userMessageCount === 3) {
      const msgs = getMessages?.()
      const input = msgs
        ? extractConversationText(getMessagesAfterCompactBoundary(msgs))
        : text
      generateAndPatch(input, bridgeSessionId)
    }
    // Also re-latches if v1 env-lost resets the transport's done flag past 3.
    return userMessageCount >= 3
  }

  const initialHistoryCap = getFeatureValue_CACHED_WITH_REFRESH(
    'tengu_bridge_initial_history_cap',
    200,
    5 * 60 * 1000,
  )

  // Fetch orgUUID before the v1/v2 branch — both paths need it. v1 for
  // environment registration; v2 for archive (which lives at the compat
  // /v1/sessions/{id}/archive, not /v1/code/sessions). Without it, v2
  // archive 404s and sessions stay alive in CCR after /exit.
  const orgUUID = await getOrganizationUUID()
  if (!orgUUID) {
    logBridgeSkip('no_org_uuid', '[bridge:repl] Skipping: no org UUID')
    onStateChange?.('failed', '/login')
    return null
  }

  // ── GrowthBook gate: env-less bridge ──────────────────────────────────
  // When enabled, skips the Environments API layer entirely (no register/
  // poll/ack/heartbeat) and connects directly via POST /bridge → worker_jwt.
  // See server PR #292605 (renamed in #293280). REPL-only — daemon/print stay
  // on env-based.
  //
  // NAMING: "env-less" is distinct from "CCR v2" (the /worker/* transport).
  // The env-based path below can ALSO use CCR v2 via CLAUDE_CODE_USE_CCR_V2.
  // tengu_bridge_repl_v2 gates env-less (no poll loop), not transport version.
  //
  // perpetual (assistant-mode session continuity via bridge-pointer.json) is
  // env-coupled and not yet implemented here — fall back to env-based when set
  // so KAIROS users don't silently lose cross-restart continuity.
  if (isEnvLessBridgeEnabled() && !perpetual) {
    const versionError = await checkEnvLessBridgeMinVersion()
    if (versionError) {
      logBridgeSkip(
        'version_too_old',
        `[bridge:repl] Skipping: ${versionError}`,
        true,
      )
      onStateChange?.('failed', 'run `claude update` to upgrade')
      return null
    }
    logForDebugging(
      '[bridge:repl] Using env-less bridge path (tengu_bridge_repl_v2)',
    )
    const { initEnvLessBridgeCore } = await import('./remoteBridgeCore.js')
    return initEnvLessBridgeCore({
      baseUrl,
      orgUUID,
      title,
      getAccessToken: getBridgeAccessToken,
      onAuth401: handleOAuth401Error,
      toSDKMessages,
      initialHistoryCap,
      initialMessages,
      // v2 always creates a fresh server session (new cse_* id), so
      // previouslyFlushedUUIDs is not passed — there's no cross-session
      // UUID collision risk, and the ref persists across enable→disable→
      // re-enable cycles which would cause the new session to receive zero
      // history (all UUIDs already in the set from the prior enable).
      // v1 handles this by calling previouslyFlushedUUIDs.clear() on fresh
      // session creation (replBridge.ts:768); v2 skips the param entirely.
      onInboundMessage,
      onUserMessage,
      onPermissionResponse,
      onInterrupt,
      onSetModel,
      onSetMaxThinkingTokens,
      onSetPermissionMode,
      onStateChange,
      outboundOnly,
      tags,
    })
  }

  // ── v1 path: env-based (register/poll/ack/heartbeat) ──────────────────

  const versionError = checkBridgeMinVersion()
  if (versionError) {
    logBridgeSkip('version_too_old', `[bridge:repl] Skipping: ${versionError}`)
    onStateChange?.('failed', 'run `claude update` to upgrade')
    return null
  }

  // Gather git context — this is the bootstrap-read boundary.
  // Everything from here down is passed explicitly to bridgeCore.
  const branch = await getBranch()
  const gitRepoUrl = await getRemoteUrl()
  const sessionIngressUrl =
    process.env.USER_TYPE === 'ant' &&
    process.env.CLAUDE_BRIDGE_SESSION_INGRESS_URL
      ? process.env.CLAUDE_BRIDGE_SESSION_INGRESS_URL
      : baseUrl

  // Assistant-mode sessions advertise a distinct worker_type so the web UI
  // can filter them into a dedicated picker. KAIROS guard keeps the
  // assistant module out of external builds entirely.
  let workerType: BridgeWorkerType = 'claude_code'
  if (feature('KAIROS')) {
    /* eslint-disable @typescript-eslint/no-require-imports */
    const { isAssistantMode } =
      require('../assistant/index.js') as typeof import('../assistant/index.js')
    /* eslint-enable @typescript-eslint/no-require-imports */
    if (isAssistantMode()) {
      workerType = 'claude_code_assistant'
    }
  }

  // 6. Delegate. BridgeCoreHandle is a structural superset of
  // ReplBridgeHandle (adds writeSdkMessages which REPL callers don't use),
  // so no adapter needed — just the narrower type on the way out.
  return initBridgeCore({
    dir: getOriginalCwd(),
    machineName: hostname(),
    branch,
    gitRepoUrl,
    title,
    baseUrl,
    sessionIngressUrl,
    workerType,
    getAccessToken: getBridgeAccessToken,
    createSession: opts =>
      createBridgeSession({
        ...opts,
        events: [],
        baseUrl,
        getAccessToken: getBridgeAccessToken,
      }),
    archiveSession: sessionId =>
      archiveBridgeSession(sessionId, {
        baseUrl,
        getAccessToken: getBridgeAccessToken,
        // gracefulShutdown.ts:407 races runCleanupFunctions against 2s.
        // Teardown also does stopWork (parallel) + deregister (sequential),
        // so archive can't have the full budget. 1.5s matches v2's
        // teardown_archive_timeout_ms default.
        timeoutMs: 1500,
      }).catch((err: unknown) => {
        // archiveBridgeSession has no try/catch — 5xx/timeout/network throw
        // straight through. Previously swallowed silently, making archive
        // failures BQ-invisible and undiagnosable from debug logs.
        logForDebugging(
          `[bridge:repl] archiveBridgeSession threw: ${errorMessage(err)}`,
          { level: 'error' },
        )
      }),
    // getCurrentTitle is read on reconnect-after-env-lost to re-title the new
    // session. /rename writes to session storage; onUserMessage mutates
    // `title` directly — both paths are picked up here.
    getCurrentTitle: () => getCurrentSessionTitle(getSessionId()) ?? title,
    onUserMessage,
    toSDKMessages,
    onAuth401: handleOAuth401Error,
    getPollIntervalConfig,
    initialHistoryCap,
    initialMessages,
    previouslyFlushedUUIDs,
    onInboundMessage,
    onPermissionResponse,
    onInterrupt,
    onSetModel,
    onSetMaxThinkingTokens,
    onSetPermissionMode,
    onStateChange,
    perpetual,
  })
}

const TITLE_MAX_LEN = 50

/**
 * Quick placeholder title: strip display tags, take the first sentence,
 * collapse whitespace, truncate to 50 chars. Returns undefined if the result
 * is empty (e.g. message was only <local-command-stdout>). Replaced by
 * generateSessionTitle once Haiku resolves (~1-15s).
 */
function deriveTitle(raw: string): string | undefined {
  // Strip <ide_opened_file>, <session-start-hook>, etc. — these appear in
  // user messages when IDE/hooks inject context. stripDisplayTagsAllowEmpty
  // returns '' (not the original) so pure-tag messages are skipped.
  const clean = stripDisplayTagsAllowEmpty(raw)
  // First sentence is usually the intent; rest is often context/detail.
  // Capture group instead of lookbehind — keeps YARR JIT happy.
  const firstSentence = /^(.*?[.!?])\s/.exec(clean)?.[1] ?? clean
  // Collapse newlines/tabs — titles are single-line in the claude.ai list.
  const flat = firstSentence.replace(/\s+/g, ' ').trim()
  if (!flat) return undefined
  return flat.length > TITLE_MAX_LEN
    ? flat.slice(0, TITLE_MAX_LEN - 1) + '\u2026'
    : flat
}
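deriveTitle's shaping rules (first sentence, whitespace collapse, 50-char truncation) can be sketched without the rest of the module. Tag stripping is omitted here since stripDisplayTagsAllowEmpty lives elsewhere in the repo:

```typescript
// Standalone sketch of deriveTitle's shaping rules (display-tag stripping
// omitted — stripDisplayTagsAllowEmpty lives elsewhere in the repo).
const MAX = 50

function sketchTitle(raw: string): string | undefined {
  // Lazy match up to the first sentence-ending punctuation + whitespace.
  const firstSentence = /^(.*?[.!?])\s/.exec(raw)?.[1] ?? raw
  // Collapse newlines/tabs — titles render single-line in the session list.
  const flat = firstSentence.replace(/\s+/g, ' ').trim()
  if (!flat) return undefined
  return flat.length > MAX ? flat.slice(0, MAX - 1) + '\u2026' : flat
}

console.log(sketchTitle('Fix the login bug. Steps to reproduce follow.')) // 'Fix the login bug.'
console.log(sketchTitle('   ')) // undefined
```

Long single-sentence prompts come back as 49 characters plus an ellipsis, so the placeholder always fits the 50-char budget.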
256
src/bridge/jwtUtils.ts
Normal file
@ -0,0 +1,256 @@
import { logEvent } from '../services/analytics/index.js'
import { logForDebugging } from '../utils/debug.js'
import { logForDiagnosticsNoPII } from '../utils/diagLogs.js'
import { errorMessage } from '../utils/errors.js'
import { jsonParse } from '../utils/slowOperations.js'

/** Format a millisecond duration as a human-readable string (e.g. "5m 30s"). */
function formatDuration(ms: number): string {
  if (ms < 60_000) return `${Math.round(ms / 1000)}s`
  const m = Math.floor(ms / 60_000)
  const s = Math.round((ms % 60_000) / 1000)
  return s > 0 ? `${m}m ${s}s` : `${m}m`
}

/**
 * Decode a JWT's payload segment without verifying the signature.
 * Strips the `sk-ant-si-` session-ingress prefix if present.
 * Returns the parsed JSON payload as `unknown`, or `null` if the
 * token is malformed or the payload is not valid JSON.
 */
export function decodeJwtPayload(token: string): unknown | null {
  const jwt = token.startsWith('sk-ant-si-')
    ? token.slice('sk-ant-si-'.length)
    : token
  const parts = jwt.split('.')
  if (parts.length !== 3 || !parts[1]) return null
  try {
    return jsonParse(Buffer.from(parts[1], 'base64url').toString('utf8'))
  } catch {
    return null
  }
}

/**
 * Decode the `exp` (expiry) claim from a JWT without verifying the signature.
 * @returns The `exp` value in Unix seconds, or `null` if unparseable
 */
export function decodeJwtExpiry(token: string): number | null {
  const payload = decodeJwtPayload(token)
  if (
    payload !== null &&
    typeof payload === 'object' &&
    'exp' in payload &&
    typeof payload.exp === 'number'
  ) {
    return payload.exp
  }
  return null
}
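The two decoders above can be exercised against a hand-built token: only the middle (payload) segment is ever read, so the header and signature segments can be dummy strings. This sketch inlines the same logic standalone, substituting plain JSON.parse for the repo's jsonParse wrapper (an assumption about that helper):

```typescript
// Standalone sketch of the unverified JWT decode above. Only the payload
// segment is read — no signature verification, by design.
function sketchDecodeExpiry(token: string): number | null {
  const jwt = token.startsWith('sk-ant-si-')
    ? token.slice('sk-ant-si-'.length)
    : token
  const parts = jwt.split('.')
  if (parts.length !== 3 || !parts[1]) return null
  try {
    const payload: unknown = JSON.parse(
      Buffer.from(parts[1], 'base64url').toString('utf8'),
    )
    if (
      payload !== null &&
      typeof payload === 'object' &&
      'exp' in payload &&
      typeof (payload as { exp: unknown }).exp === 'number'
    ) {
      return (payload as { exp: number }).exp
    }
    return null
  } catch {
    return null
  }
}

// Hand-built three-segment token with a dummy header and signature:
const seg = Buffer.from(JSON.stringify({ exp: 1_700_000_000 })).toString(
  'base64url',
)
console.log(sketchDecodeExpiry(`sk-ant-si-hdr.${seg}.sig`)) // 1700000000
console.log(sketchDecodeExpiry('not-a-jwt')) // null
```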

/** Refresh buffer: request a new token before expiry. */
const TOKEN_REFRESH_BUFFER_MS = 5 * 60 * 1000

/** Fallback refresh interval when the new token's expiry is unknown. */
const FALLBACK_REFRESH_INTERVAL_MS = 30 * 60 * 1000 // 30 minutes

/** Max consecutive failures before giving up on the refresh chain. */
const MAX_REFRESH_FAILURES = 3

/** Retry delay when getAccessToken returns undefined. */
const REFRESH_RETRY_DELAY_MS = 60_000

/**
 * Creates a token refresh scheduler that proactively refreshes session tokens
 * before they expire. Used by both the standalone bridge and the REPL bridge.
 *
 * When a token is about to expire, the scheduler calls `onRefresh` with the
 * session ID and the bridge's OAuth access token. The caller is responsible
 * for delivering the token to the appropriate transport (child process stdin
 * for standalone bridge, WebSocket reconnect for REPL bridge).
 */
export function createTokenRefreshScheduler({
  getAccessToken,
  onRefresh,
  label,
  refreshBufferMs = TOKEN_REFRESH_BUFFER_MS,
}: {
  getAccessToken: () => string | undefined | Promise<string | undefined>
  onRefresh: (sessionId: string, oauthToken: string) => void
  label: string
  /** How long before expiry to fire refresh. Defaults to 5 min. */
  refreshBufferMs?: number
}): {
  schedule: (sessionId: string, token: string) => void
  scheduleFromExpiresIn: (sessionId: string, expiresInSeconds: number) => void
  cancel: (sessionId: string) => void
  cancelAll: () => void
} {
  const timers = new Map<string, ReturnType<typeof setTimeout>>()
  const failureCounts = new Map<string, number>()
  // Generation counter per session — incremented by schedule() and cancel()
  // so that in-flight async doRefresh() calls can detect when they've been
  // superseded and should skip setting follow-up timers.
  const generations = new Map<string, number>()

  function nextGeneration(sessionId: string): number {
    const gen = (generations.get(sessionId) ?? 0) + 1
    generations.set(sessionId, gen)
    return gen
  }

  function schedule(sessionId: string, token: string): void {
    const expiry = decodeJwtExpiry(token)
    if (!expiry) {
      // Token is not a decodable JWT (e.g. an OAuth token passed from the
      // REPL bridge WebSocket open handler). Preserve any existing timer
      // (such as the follow-up refresh set by doRefresh) so the refresh
      // chain is not broken.
      logForDebugging(
        `[${label}:token] Could not decode JWT expiry for sessionId=${sessionId}, token prefix=${token.slice(0, 15)}…, keeping existing timer`,
      )
      return
    }

    // Clear any existing refresh timer — we have a concrete expiry to replace it.
    const existing = timers.get(sessionId)
    if (existing) {
      clearTimeout(existing)
    }

    // Bump generation to invalidate any in-flight async doRefresh.
    const gen = nextGeneration(sessionId)

    const expiryDate = new Date(expiry * 1000).toISOString()
    const delayMs = expiry * 1000 - Date.now() - refreshBufferMs
    if (delayMs <= 0) {
      logForDebugging(
        `[${label}:token] Token for sessionId=${sessionId} expires=${expiryDate} (past or within buffer), refreshing immediately`,
      )
      void doRefresh(sessionId, gen)
      return
    }

    logForDebugging(
      `[${label}:token] Scheduled token refresh for sessionId=${sessionId} in ${formatDuration(delayMs)} (expires=${expiryDate}, buffer=${refreshBufferMs / 1000}s)`,
    )

    const timer = setTimeout(doRefresh, delayMs, sessionId, gen)
    timers.set(sessionId, timer)
  }

  /**
   * Schedule refresh using an explicit TTL (seconds until expiry) rather
   * than decoding a JWT's exp claim. Used by callers whose JWT is opaque
   * (e.g. POST /v1/code/sessions/{id}/bridge returns expires_in directly).
   */
  function scheduleFromExpiresIn(
    sessionId: string,
    expiresInSeconds: number,
  ): void {
    const existing = timers.get(sessionId)
    if (existing) clearTimeout(existing)
    const gen = nextGeneration(sessionId)
    // Clamp to 30s floor — if refreshBufferMs exceeds the server's expires_in
    // (e.g. very large buffer for frequent-refresh testing, or server shortens
    // expires_in unexpectedly), unclamped delayMs ≤ 0 would tight-loop.
    const delayMs = Math.max(expiresInSeconds * 1000 - refreshBufferMs, 30_000)
    logForDebugging(
      `[${label}:token] Scheduled token refresh for sessionId=${sessionId} in ${formatDuration(delayMs)} (expires_in=${expiresInSeconds}s, buffer=${refreshBufferMs / 1000}s)`,
    )
    const timer = setTimeout(doRefresh, delayMs, sessionId, gen)
    timers.set(sessionId, timer)
  }

  async function doRefresh(sessionId: string, gen: number): Promise<void> {
    let oauthToken: string | undefined
    try {
      oauthToken = await getAccessToken()
    } catch (err) {
      logForDebugging(
        `[${label}:token] getAccessToken threw for sessionId=${sessionId}: ${errorMessage(err)}`,
        { level: 'error' },
      )
    }

    // If the session was cancelled or rescheduled while we were awaiting,
    // the generation will have changed — bail out to avoid orphaned timers.
    if (generations.get(sessionId) !== gen) {
      logForDebugging(
        `[${label}:token] doRefresh for sessionId=${sessionId} stale (gen ${gen} vs ${generations.get(sessionId)}), skipping`,
      )
      return
    }

    if (!oauthToken) {
      const failures = (failureCounts.get(sessionId) ?? 0) + 1
      failureCounts.set(sessionId, failures)
      logForDebugging(
        `[${label}:token] No OAuth token available for refresh, sessionId=${sessionId} (failure ${failures}/${MAX_REFRESH_FAILURES})`,
        { level: 'error' },
      )
      logForDiagnosticsNoPII('error', 'bridge_token_refresh_no_oauth')
      // Schedule a retry so the refresh chain can recover if the token
      // becomes available again (e.g. transient cache clear during refresh).
      // Cap retries to avoid spamming on genuine failures.
      if (failures < MAX_REFRESH_FAILURES) {
        const retryTimer = setTimeout(
          doRefresh,
          REFRESH_RETRY_DELAY_MS,
          sessionId,
          gen,
        )
        timers.set(sessionId, retryTimer)
      }
      return
    }

    // Reset failure counter on successful token retrieval
    failureCounts.delete(sessionId)

    logForDebugging(
      `[${label}:token] Refreshing token for sessionId=${sessionId}: new token prefix=${oauthToken.slice(0, 15)}…`,
    )
    logEvent('tengu_bridge_token_refreshed', {})
    onRefresh(sessionId, oauthToken)

    // Schedule a follow-up refresh so long-running sessions stay authenticated.
    // Without this, the initial one-shot timer leaves the session vulnerable
    // to token expiry if it runs past the first refresh window.
    const timer = setTimeout(
      doRefresh,
      FALLBACK_REFRESH_INTERVAL_MS,
      sessionId,
      gen,
    )
    timers.set(sessionId, timer)
    logForDebugging(
      `[${label}:token] Scheduled follow-up refresh for sessionId=${sessionId} in ${formatDuration(FALLBACK_REFRESH_INTERVAL_MS)}`,
    )
  }

  function cancel(sessionId: string): void {
    // Bump generation to invalidate any in-flight async doRefresh.
    nextGeneration(sessionId)
    const timer = timers.get(sessionId)
    if (timer) {
      clearTimeout(timer)
      timers.delete(sessionId)
    }
    failureCounts.delete(sessionId)
  }

  function cancelAll(): void {
    // Bump all generations so in-flight doRefresh calls are invalidated.
    for (const sessionId of generations.keys()) {
      nextGeneration(sessionId)
    }
    for (const timer of timers.values()) {
      clearTimeout(timer)
    }
    timers.clear()
    failureCounts.clear()
  }

  return { schedule, scheduleFromExpiresIn, cancel, cancelAll }
}
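The two timer-delay computations above can be restated standalone: schedule() fires refreshBufferMs before the decoded exp claim and refreshes immediately when already inside the buffer, while scheduleFromExpiresIn() clamps to a 30s floor instead. The function names below are illustrative, not exports of this module.

```typescript
// Illustrative re-statement of the delay math used by schedule() and
// scheduleFromExpiresIn() above. The constant mirrors the module default.
const BUFFER_MS = 5 * 60 * 1000

// schedule(): delay until (exp - buffer); a value <= 0 means "refresh now".
function delayFromExp(expSeconds: number, nowMs: number): number {
  return expSeconds * 1000 - nowMs - BUFFER_MS
}

// scheduleFromExpiresIn(): explicit TTL, clamped to a 30s floor so an
// oversized buffer can never produce a tight refresh loop.
function delayFromExpiresIn(expiresInSeconds: number): number {
  return Math.max(expiresInSeconds * 1000 - BUFFER_MS, 30_000)
}

const now = Date.now()
console.log(delayFromExp(Math.floor(now / 1000) + 600, now) > 0)  // true: refresh ~5 min out
console.log(delayFromExp(Math.floor(now / 1000) + 120, now) <= 0) // true: inside buffer
console.log(delayFromExpiresIn(60))   // 30000: floor kicks in
console.log(delayFromExpiresIn(3600)) // 3300000: 55 minutes
```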
110
src/bridge/pollConfig.ts
Normal file
@ -0,0 +1,110 @@
import { z } from 'zod/v4'
import { getFeatureValue_CACHED_WITH_REFRESH } from '../services/analytics/growthbook.js'
import { lazySchema } from '../utils/lazySchema.js'
import {
  DEFAULT_POLL_CONFIG,
  type PollIntervalConfig,
} from './pollConfigDefaults.js'

// .min(100) on the seek-work intervals restores the old Math.max(..., 100)
// defense-in-depth floor against fat-fingered GrowthBook values. Unlike a
// clamp, Zod rejects the whole object on violation — a config with one bad
// field falls back to DEFAULT_POLL_CONFIG entirely rather than being
// partially trusted.
//
// The at_capacity intervals use a 0-or-≥100 refinement: 0 means "disabled"
// (heartbeat-only mode), ≥100 is the fat-finger floor. Values 1–99 are
// rejected so unit confusion (ops thinks seconds, enters 10) doesn't poll
// every 10ms against the VerifyEnvironmentSecretAuth DB path.
//
// The object-level refines require at least one at-capacity liveness
// mechanism enabled: heartbeat OR the relevant poll interval. Without this,
// the hb=0, atCapMs=0 drift config (ops disables heartbeat without
// restoring at_capacity) falls through every throttle site with no sleep —
// tight-looping /poll at HTTP-round-trip speed.
const zeroOrAtLeast100 = {
  message: 'must be 0 (disabled) or ≥100ms',
}
const pollIntervalConfigSchema = lazySchema(() =>
  z
    .object({
      poll_interval_ms_not_at_capacity: z.number().int().min(100),
      // 0 = no at-capacity polling. Independent of heartbeat — both can be
      // enabled (heartbeat runs, periodically breaks out to poll).
      poll_interval_ms_at_capacity: z
        .number()
        .int()
        .refine(v => v === 0 || v >= 100, zeroOrAtLeast100),
      // 0 = disabled; positive value = heartbeat at this interval while at
      // capacity. Runs alongside at-capacity polling, not instead of it.
      // Named non_exclusive to distinguish from the old heartbeat_interval_ms
      // (either-or semantics in pre-#22145 clients). .default(0) so existing
      // GrowthBook configs without this field parse successfully.
      non_exclusive_heartbeat_interval_ms: z.number().int().min(0).default(0),
      // Multisession (bridgeMain.ts) intervals. Defaults match the
      // single-session values so existing configs without these fields
      // preserve current behavior.
      multisession_poll_interval_ms_not_at_capacity: z
        .number()
        .int()
        .min(100)
        .default(
          DEFAULT_POLL_CONFIG.multisession_poll_interval_ms_not_at_capacity,
        ),
      multisession_poll_interval_ms_partial_capacity: z
        .number()
        .int()
        .min(100)
        .default(
          DEFAULT_POLL_CONFIG.multisession_poll_interval_ms_partial_capacity,
        ),
      multisession_poll_interval_ms_at_capacity: z
        .number()
        .int()
        .refine(v => v === 0 || v >= 100, zeroOrAtLeast100)
        .default(DEFAULT_POLL_CONFIG.multisession_poll_interval_ms_at_capacity),
      // .min(1) matches the server's ge=1 constraint (work_v1.py:230).
      reclaim_older_than_ms: z.number().int().min(1).default(5000),
      session_keepalive_interval_v2_ms: z
        .number()
        .int()
        .min(0)
        .default(120_000),
    })
    .refine(
      cfg =>
        cfg.non_exclusive_heartbeat_interval_ms > 0 ||
        cfg.poll_interval_ms_at_capacity > 0,
      {
        message:
          'at-capacity liveness requires non_exclusive_heartbeat_interval_ms > 0 or poll_interval_ms_at_capacity > 0',
      },
    )
    .refine(
      cfg =>
        cfg.non_exclusive_heartbeat_interval_ms > 0 ||
        cfg.multisession_poll_interval_ms_at_capacity > 0,
      {
        message:
          'at-capacity liveness requires non_exclusive_heartbeat_interval_ms > 0 or multisession_poll_interval_ms_at_capacity > 0',
      },
    ),
)

/**
 * Fetch the bridge poll interval config from GrowthBook with a 5-minute
 * refresh window. Validates the served JSON against the schema; falls back
 * to defaults if the flag is absent, malformed, or partially specified.
 *
 * Shared by bridgeMain.ts (standalone) and replBridge.ts (REPL) so ops
 * can tune both poll rates fleet-wide with a single config push.
 */
export function getPollIntervalConfig(): PollIntervalConfig {
  const raw = getFeatureValue_CACHED_WITH_REFRESH<unknown>(
    'tengu_bridge_poll_interval_config',
    DEFAULT_POLL_CONFIG,
    5 * 60 * 1000,
  )
  const parsed = pollIntervalConfigSchema().safeParse(raw)
  return parsed.success ? parsed.data : DEFAULT_POLL_CONFIG
}
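The 0-or-≥100 rule used by the at_capacity refinements above can be restated as a plain predicate. This is a sketch of the rule only, not the module's zod schema; the function name is illustrative.

```typescript
// Plain-predicate sketch of the at_capacity interval rule above:
// 0 disables polling, >=100 is the fat-finger floor, and 1-99 is
// rejected as likely unit confusion (seconds entered as milliseconds).
function isValidAtCapacityInterval(v: number): boolean {
  return Number.isInteger(v) && (v === 0 || v >= 100)
}

console.log(isValidAtCapacityInterval(0))       // true: disabled
console.log(isValidAtCapacityInterval(10))      // false: probably meant 10s
console.log(isValidAtCapacityInterval(600_000)) // true: the 10-min default
```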
82
src/bridge/pollConfigDefaults.ts
Normal file
@ -0,0 +1,82 @@
/**
 * Bridge poll interval defaults. Extracted from pollConfig.ts so callers
 * that don't need live GrowthBook tuning (daemon via Agent SDK) can avoid
 * the growthbook.ts → config.ts → file.ts → sessionStorage.ts → commands.ts
 * transitive dependency chain.
 */

/**
 * Poll interval when actively seeking work (no transport / below maxSessions).
 * Governs user-visible "connecting…" latency on initial work pickup and
 * recovery speed after the server re-dispatches a work item.
 */
const POLL_INTERVAL_MS_NOT_AT_CAPACITY = 2000

/**
 * Poll interval when the transport is connected. Runs independently of
 * heartbeat — when both are enabled, the heartbeat loop breaks out to poll
 * at this interval. Set to 0 to disable at-capacity polling entirely.
 *
 * Server-side constraints that bound this value:
 * - BRIDGE_LAST_POLL_TTL = 4h (Redis key expiry → environment auto-archived)
 * - max_poll_stale_seconds = 24h (session-creation health gate, currently disabled)
 *
 * 10 minutes gives 24× headroom on the Redis TTL while still picking up
 * server-initiated token-rotation redispatches within one poll cycle.
 * The transport auto-reconnects internally for 10 minutes on transient WS
 * failures, so poll is not the recovery path — it's strictly a liveness
 * signal plus a backstop for permanent close.
 */
const POLL_INTERVAL_MS_AT_CAPACITY = 600_000

/**
 * Multisession bridge (bridgeMain.ts) poll intervals. Defaults match the
 * single-session values so existing GrowthBook configs without these fields
 * preserve current behavior. Ops can tune these independently via the
 * tengu_bridge_poll_interval_config GB flag.
 */
const MULTISESSION_POLL_INTERVAL_MS_NOT_AT_CAPACITY =
  POLL_INTERVAL_MS_NOT_AT_CAPACITY
const MULTISESSION_POLL_INTERVAL_MS_PARTIAL_CAPACITY =
  POLL_INTERVAL_MS_NOT_AT_CAPACITY
const MULTISESSION_POLL_INTERVAL_MS_AT_CAPACITY = POLL_INTERVAL_MS_AT_CAPACITY

export type PollIntervalConfig = {
  poll_interval_ms_not_at_capacity: number
  poll_interval_ms_at_capacity: number
  non_exclusive_heartbeat_interval_ms: number
  multisession_poll_interval_ms_not_at_capacity: number
  multisession_poll_interval_ms_partial_capacity: number
  multisession_poll_interval_ms_at_capacity: number
  reclaim_older_than_ms: number
  session_keepalive_interval_v2_ms: number
}

export const DEFAULT_POLL_CONFIG: PollIntervalConfig = {
  poll_interval_ms_not_at_capacity: POLL_INTERVAL_MS_NOT_AT_CAPACITY,
  poll_interval_ms_at_capacity: POLL_INTERVAL_MS_AT_CAPACITY,
  // 0 = disabled. When > 0, at-capacity loops send per-work-item heartbeats
  // at this interval. Independent of poll_interval_ms_at_capacity — both may
  // run (heartbeat periodically yields to poll). 60s gives 5× headroom under
  // the server's 300s heartbeat TTL. Named non_exclusive to distinguish from
  // the old heartbeat_interval_ms field (either-or semantics in pre-#22145
  // clients — heartbeat suppressed poll). Old clients ignore this key; ops
  // can set both fields during rollout.
  non_exclusive_heartbeat_interval_ms: 0,
  multisession_poll_interval_ms_not_at_capacity:
    MULTISESSION_POLL_INTERVAL_MS_NOT_AT_CAPACITY,
  multisession_poll_interval_ms_partial_capacity:
    MULTISESSION_POLL_INTERVAL_MS_PARTIAL_CAPACITY,
  multisession_poll_interval_ms_at_capacity:
    MULTISESSION_POLL_INTERVAL_MS_AT_CAPACITY,
  // Poll query param: reclaim unacknowledged work items older than this.
  // Matches the server's DEFAULT_RECLAIM_OLDER_THAN_MS (work_service.py:24).
  // Enables picking up stale-pending work after JWT expiry, when the prior
  // ack failed because the session_ingress_token was already stale.
  reclaim_older_than_ms: 5000,
  // 0 = disabled. When > 0, push a silent {type:'keep_alive'} frame to
  // session-ingress at this interval so upstream proxies don't GC an idle
  // remote-control session. 2 min is the default. _v2: bridge-only gate
  // (pre-v2 clients read the old key, new clients ignore it).
  session_keepalive_interval_v2_ms: 120_000,
}
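The at-capacity liveness invariant that pollConfig.ts enforces over these defaults (at least one of heartbeat or at-capacity polling must be enabled) can be restated standalone. Field names match the config type above; the function itself is illustrative, not part of this module.

```typescript
// Sketch of the at-capacity liveness invariant: at least one of heartbeat
// or at-capacity polling must be enabled, otherwise the hb=0/atCapMs=0
// drift config would tight-loop the throttle sites.
type LivenessFields = {
  non_exclusive_heartbeat_interval_ms: number
  poll_interval_ms_at_capacity: number
}

function hasAtCapacityLiveness(cfg: LivenessFields): boolean {
  return (
    cfg.non_exclusive_heartbeat_interval_ms > 0 ||
    cfg.poll_interval_ms_at_capacity > 0
  )
}

// Default shape: heartbeat disabled, 10-min at-capacity poll. Valid.
console.log(hasAtCapacityLiveness({
  non_exclusive_heartbeat_interval_ms: 0,
  poll_interval_ms_at_capacity: 600_000,
})) // true

// Drift config with both disabled: rejected by the schema.
console.log(hasAtCapacityLiveness({
  non_exclusive_heartbeat_interval_ms: 0,
  poll_interval_ms_at_capacity: 0,
})) // false
```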
1008
src/bridge/remoteBridgeCore.ts
Normal file
File diff suppressed because it is too large
2406
src/bridge/replBridge.ts
Normal file
File diff suppressed because it is too large
36
src/bridge/replBridgeHandle.ts
Normal file
@ -0,0 +1,36 @@
import { updateSessionBridgeId } from '../utils/concurrentSessions.js'
import type { ReplBridgeHandle } from './replBridge.js'
import { toCompatSessionId } from './sessionIdCompat.js'

/**
 * Global pointer to the active REPL bridge handle, so callers outside
 * useReplBridge's React tree (tools, slash commands) can invoke handle methods
 * like subscribePR. Same one-bridge-per-process justification as bridgeDebug.ts
 * — the handle's closure captures the sessionId and getAccessToken that created
 * the session, and re-deriving those independently (BriefTool/upload.ts pattern)
 * risks staging/prod token divergence.
 *
 * Set from useReplBridge.tsx when init completes; cleared on teardown.
 */

let handle: ReplBridgeHandle | null = null

export function setReplBridgeHandle(h: ReplBridgeHandle | null): void {
  handle = h
  // Publish (or clear) our bridge session ID in the session record so other
  // local peers can dedup us out of their bridge list — local is preferred.
  void updateSessionBridgeId(getSelfBridgeCompatId() ?? null).catch(() => {})
}

export function getReplBridgeHandle(): ReplBridgeHandle | null {
  return handle
}

/**
 * Our own bridge session ID in the session_* compat format the API returns
 * in /v1/sessions responses — or undefined if bridge isn't connected.
 */
export function getSelfBridgeCompatId(): string | undefined {
  const h = getReplBridgeHandle()
  return h ? toCompatSessionId(h.bridgeSessionId) : undefined
}
370
src/bridge/replBridgeTransport.ts
Normal file
@ -0,0 +1,370 @@
import type { StdoutMessage } from 'src/entrypoints/sdk/controlTypes.js'
import { CCRClient } from '../cli/transports/ccrClient.js'
import type { HybridTransport } from '../cli/transports/HybridTransport.js'
import { SSETransport } from '../cli/transports/SSETransport.js'
import { logForDebugging } from '../utils/debug.js'
import { errorMessage } from '../utils/errors.js'
import { updateSessionIngressAuthToken } from '../utils/sessionIngressAuth.js'
import type { SessionState } from '../utils/sessionState.js'
import { registerWorker } from './workSecret.js'

/**
 * Transport abstraction for replBridge. Covers exactly the surface that
 * replBridge.ts uses against HybridTransport so the v1/v2 choice is
 * confined to the construction site.
 *
 * - v1: HybridTransport (WS reads + POST writes to Session-Ingress)
 * - v2: SSETransport (reads) + CCRClient (writes to CCR v2 /worker/*)
 *
 * The v2 write path goes through CCRClient.writeEvent → SerialBatchEventUploader,
 * NOT through SSETransport.write() — SSETransport.write() targets the
 * Session-Ingress POST URL shape, which is wrong for CCR v2.
 */
export type ReplBridgeTransport = {
  write(message: StdoutMessage): Promise<void>
  writeBatch(messages: StdoutMessage[]): Promise<void>
  close(): void
  isConnectedStatus(): boolean
  getStateLabel(): string
  setOnData(callback: (data: string) => void): void
  setOnClose(callback: (closeCode?: number) => void): void
  setOnConnect(callback: () => void): void
  connect(): void
  /**
   * High-water mark of the underlying read stream's event sequence numbers.
   * replBridge reads this before swapping transports so the new one can
   * resume from where the old one left off (otherwise the server replays
   * the entire session history from seq 0).
   *
   * v1 returns 0 — Session-Ingress WS doesn't use SSE sequence numbers;
   * replay-on-reconnect is handled by the server-side message cursor.
   */
  getLastSequenceNum(): number
  /**
   * Monotonic count of batches dropped via maxConsecutiveFailures.
   * Snapshot before writeBatch() and compare after to detect silent drops
   * (writeBatch() resolves normally even when batches were dropped).
   * v2 returns 0 — the v2 write path doesn't set maxConsecutiveFailures.
   */
  readonly droppedBatchCount: number
  /**
   * PUT /worker state (v2 only; v1 is a no-op). `requires_action` tells
   * the backend a permission prompt is pending — claude.ai shows the
   * "waiting for input" indicator. REPL/daemon callers don't need this
   * (user watches the REPL locally); multi-session worker callers do.
   */
  reportState(state: SessionState): void
  /** PUT /worker external_metadata (v2 only; v1 is a no-op). */
  reportMetadata(metadata: Record<string, unknown>): void
  /**
   * POST /worker/events/{id}/delivery (v2 only; v1 is a no-op). Populates
   * CCR's processing_at/processed_at columns. `received` is auto-fired by
   * CCRClient on every SSE frame and is not exposed here.
   */
  reportDelivery(eventId: string, status: 'processing' | 'processed'): void
  /**
   * Drain the write queue before close() (v2 only; v1 resolves
   * immediately — HybridTransport POSTs are already awaited per-write).
   */
  flush(): Promise<void>
}

/**
 * v1 adapter: HybridTransport already has the full surface (it extends
 * WebSocketTransport, which has setOnConnect + getStateLabel). This is a
 * no-op wrapper that exists only so replBridge's `transport` variable
 * has a single type.
 */
export function createV1ReplTransport(
  hybrid: HybridTransport,
): ReplBridgeTransport {
  return {
    write: msg => hybrid.write(msg),
    writeBatch: msgs => hybrid.writeBatch(msgs),
    close: () => hybrid.close(),
    isConnectedStatus: () => hybrid.isConnectedStatus(),
    getStateLabel: () => hybrid.getStateLabel(),
    setOnData: cb => hybrid.setOnData(cb),
    setOnClose: cb => hybrid.setOnClose(cb),
    setOnConnect: cb => hybrid.setOnConnect(cb),
    connect: () => void hybrid.connect(),
    // v1 Session-Ingress WS doesn't use SSE sequence numbers; replay
    // semantics are different. Always return 0 so the seq-num carryover
    // logic in replBridge is a no-op for v1.
    getLastSequenceNum: () => 0,
    get droppedBatchCount() {
      return hybrid.droppedBatchCount
    },
    reportState: () => {},
    reportMetadata: () => {},
    reportDelivery: () => {},
    flush: () => Promise.resolve(),
  }
}

/**
 * v2 adapter: wrap SSETransport (reads) + CCRClient (writes, heartbeat,
 * state, delivery tracking).
 *
 * Auth: v2 endpoints validate the JWT's session_id claim (register_worker.go:32)
 * and worker role (environment_auth.py:856). OAuth tokens have neither.
 * This is the inverse of the v1 replBridge path, which deliberately uses OAuth.
 * The JWT is refreshed when the poll loop re-dispatches work — the caller
 * invokes createV2ReplTransport again with the fresh token.
 *
 * Registration happens here (not in the caller) so the entire v2 handshake
 * is one async step. registerWorker failure propagates — replBridge will
 * catch it and stay on the poll loop.
 */
export async function createV2ReplTransport(opts: {
  sessionUrl: string
  ingressToken: string
  sessionId: string
  /**
   * SSE sequence-number high-water mark from the previous transport.
   * Passed to the new SSETransport so its first connect() sends
   * from_sequence_num / Last-Event-ID and the server resumes from where
   * the old stream left off. Without this, every transport swap asks the
   * server to replay the entire session history from seq 0.
   */
  initialSequenceNum?: number
  /**
   * Worker epoch from POST /bridge response. When provided, the server
   * already bumped epoch (the /bridge call IS the register — see server
   * PR #293280). When omitted (v1 CCR-v2 path via replBridge.ts poll loop),
   * call registerWorker as before.
   */
  epoch?: number
  /** CCRClient heartbeat interval. Defaults to 20s when omitted. */
  heartbeatIntervalMs?: number
  /** ±fraction per-beat jitter. Defaults to 0 (no jitter) when omitted. */
  heartbeatJitterFraction?: number
  /**
   * When true, skip opening the SSE read stream — only the CCRClient write
   * path is activated. Use for mirror-mode attachments that forward events
   * but never receive inbound prompts or control requests.
   */
  outboundOnly?: boolean
  /**
   * Per-instance auth header source. When provided, CCRClient + SSETransport
   * read auth from this closure instead of the process-wide
   * CLAUDE_CODE_SESSION_ACCESS_TOKEN env var. Required for callers managing
   * multiple concurrent sessions — the env-var path stomps across sessions.
   * When omitted, falls back to the env var (single-session callers).
   */
  getAuthToken?: () => string | undefined
}): Promise<ReplBridgeTransport> {
  const {
    sessionUrl,
    ingressToken,
    sessionId,
    initialSequenceNum,
    getAuthToken,
  } = opts

  // Auth header builder. If getAuthToken is provided, read from it
  // (per-instance, multi-session safe). Otherwise write ingressToken to
  // the process-wide env var (legacy single-session path — CCRClient's
  // default getAuthHeaders reads it via getSessionIngressAuthHeaders).
  let getAuthHeaders: (() => Record<string, string>) | undefined
  if (getAuthToken) {
    getAuthHeaders = (): Record<string, string> => {
      const token = getAuthToken()
      if (!token) return {}
      return { Authorization: `Bearer ${token}` }
    }
  } else {
    // CCRClient.request() and SSETransport.connect() both read auth via
    // getSessionIngressAuthHeaders() → this env var. Set it before either
    // touches the network.
    updateSessionIngressAuthToken(ingressToken)
  }

  const epoch = opts.epoch ?? (await registerWorker(sessionUrl, ingressToken))
  logForDebugging(
    `[bridge:repl] CCR v2: worker sessionId=${sessionId} epoch=${epoch}${opts.epoch !== undefined ? ' (from /bridge)' : ' (via registerWorker)'}`,
  )

  // Derive SSE stream URL. Same logic as transportUtils.ts:26-33 but
  // starting from an http(s) base instead of a --sdk-url that might be ws://.
  const sseUrl = new URL(sessionUrl)
  sseUrl.pathname = sseUrl.pathname.replace(/\/$/, '') + '/worker/events/stream'

  const sse = new SSETransport(
    sseUrl,
    {},
    sessionId,
    undefined,
    initialSequenceNum,
    getAuthHeaders,
  )
  let onCloseCb: ((closeCode?: number) => void) | undefined
  const ccr = new CCRClient(sse, new URL(sessionUrl), {
    getAuthHeaders,
    heartbeatIntervalMs: opts.heartbeatIntervalMs,
    heartbeatJitterFraction: opts.heartbeatJitterFraction,
    // Default is process.exit(1) — correct for spawn-mode children. In-process,
    // that kills the REPL. Close instead: replBridge's onClose wakes the poll
    // loop, which picks up the server's re-dispatch (with fresh epoch).
    onEpochMismatch: () => {
      logForDebugging(
        '[bridge:repl] CCR v2: epoch superseded (409) — closing for poll-loop recovery',
      )
      // Close resources in a try block so the throw always executes.
      // If ccr.close() or sse.close() throw, we still need to unwind
      // the caller (request()) — otherwise handleEpochMismatch's `never`
      // return type is violated at runtime and control falls through.
      try {
        ccr.close()
        sse.close()
        onCloseCb?.(4090)
      } catch (closeErr: unknown) {
        logForDebugging(
          `[bridge:repl] CCR v2: error during epoch-mismatch cleanup: ${errorMessage(closeErr)}`,
          { level: 'error' },
        )
      }
      // Don't return — the calling request() code continues after the 409
      // branch, so callers see the logged warning and a false return. We
      // throw to unwind; the uploaders catch it as a send failure.
      throw new Error('epoch superseded')
    },
  })

  // CCRClient's constructor wired sse.setOnEvent → reportDelivery('received').
  // remoteIO.ts additionally sends 'processing'/'processed' via
  // setCommandLifecycleListener, which the in-process query loop fires. This
  // transport's only caller (replBridge/daemonBridge) has no such wiring — the
  // daemon's agent child is a separate process (ProcessTransport), and its
  // notifyCommandLifecycle calls fire with listener=null in its own module
  // scope. So events stay at 'received' forever, and reconnectSession re-queues
  // them on every daemon restart (observed: 21→24→25 phantom prompts as
  // "user sent a new message while you were working" system-reminders).
  //
  // Fix: ACK 'processed' immediately alongside 'received'. The window between
  // SSE receipt and transcript-write is narrow (queue → SDK → child stdin →
  // model); a crash there loses one prompt vs. the observed N-prompt flood on
  // every restart. Overwrite the constructor's wiring to do both — setOnEvent
  // replaces, not appends (SSETransport.ts:658).
  sse.setOnEvent(event => {
    ccr.reportDelivery(event.event_id, 'received')
    ccr.reportDelivery(event.event_id, 'processed')
  })

  // Both sse.connect() and ccr.initialize() are deferred to connect() below.
  // replBridge's calling order is newTransport → setOnConnect → setOnData →
  // setOnClose → connect(), and both calls need those callbacks wired first:
  // sse.connect() opens the stream (events flow to onData/onClose immediately),
  // and ccr.initialize().then() fires onConnectCb.
  //
  // onConnect fires once ccr.initialize() resolves. Writes go via
  // CCRClient HTTP POST (SerialBatchEventUploader), not SSE, so the
  // write path is ready the moment workerEpoch is set. SSE.connect()
  // awaits its read loop and never resolves — don't gate on it.
  // The SSE stream opens in parallel (~30ms) and starts delivering
  // inbound events via setOnData; outbound doesn't need to wait for it.
  let onConnectCb: (() => void) | undefined
  let ccrInitialized = false
  let closed = false

  return {
    write(msg) {
      return ccr.writeEvent(msg)
    },
    async writeBatch(msgs) {
      // SerialBatchEventUploader already batches internally (maxBatchSize=100);
      // sequential enqueue preserves order and the uploader coalesces.
      // Check closed between writes to avoid sending partial batches after
      // transport teardown (epoch mismatch, SSE drop).
      for (const m of msgs) {
        if (closed) break
        await ccr.writeEvent(m)
      }
    },
    close() {
      closed = true
      ccr.close()
      sse.close()
    },
    isConnectedStatus() {
      // Write-readiness, not read-readiness — replBridge checks this
      // before calling writeBatch. SSE open state is orthogonal.
      return ccrInitialized
    },
    getStateLabel() {
      // SSETransport doesn't expose its state string; synthesize from
      // what we can observe. replBridge only uses this for debug logging.
      if (sse.isClosedStatus()) return 'closed'
      if (sse.isConnectedStatus()) return ccrInitialized ? 'connected' : 'init'
      return 'connecting'
    },
    setOnData(cb) {
      sse.setOnData(cb)
    },
    setOnClose(cb) {
      onCloseCb = cb
      // SSE reconnect-budget exhaustion fires onClose(undefined) — map to
      // 4092 so ws_closed telemetry can distinguish it from HTTP-status
      // closes (SSETransport:280 passes response.status). Stop CCRClient's
      // heartbeat timer before notifying replBridge. (sse.close() doesn't
      // invoke this, so the epoch-mismatch path above isn't double-firing.)
      sse.setOnClose(code => {
        ccr.close()
        cb(code ?? 4092)
      })
    },
    setOnConnect(cb) {
      onConnectCb = cb
    },
    getLastSequenceNum() {
      return sse.getLastSequenceNum()
    },
    // v2 write path (CCRClient) doesn't set maxConsecutiveFailures — no drops.
    droppedBatchCount: 0,
    reportState(state) {
      ccr.reportState(state)
    },
|
||||
reportMetadata(metadata) {
|
||||
ccr.reportMetadata(metadata)
|
||||
},
|
||||
reportDelivery(eventId, status) {
|
||||
ccr.reportDelivery(eventId, status)
|
||||
},
|
||||
flush() {
|
||||
return ccr.flush()
|
||||
},
|
||||
connect() {
|
||||
// Outbound-only: skip the SSE read stream entirely — no inbound
|
||||
// events to receive, no delivery ACKs to send. Only the CCRClient
|
||||
// write path (POST /worker/events) and heartbeat are needed.
|
||||
if (!opts.outboundOnly) {
|
||||
// Fire-and-forget — SSETransport.connect() awaits readStream()
|
||||
// (the read loop) and only resolves on stream close/error. The
|
||||
// spawn-mode path in remoteIO.ts does the same void discard.
|
||||
void sse.connect()
|
||||
}
|
||||
void ccr.initialize(epoch).then(
|
||||
() => {
|
||||
ccrInitialized = true
|
||||
logForDebugging(
|
||||
`[bridge:repl] v2 transport ready for writes (epoch=${epoch}, sse=${sse.isConnectedStatus() ? 'open' : 'opening'})`,
|
||||
)
|
||||
onConnectCb?.()
|
||||
},
|
||||
(err: unknown) => {
|
||||
logForDebugging(
|
||||
`[bridge:repl] CCR v2 initialize failed: ${errorMessage(err)}`,
|
||||
{ level: 'error' },
|
||||
)
|
||||
// Close transport resources and notify replBridge via onClose
|
||||
// so the poll loop can retry on the next work dispatch.
|
||||
// Without this callback, replBridge never learns the transport
|
||||
// failed to initialize and sits with transport === null forever.
|
||||
ccr.close()
|
||||
sse.close()
|
||||
onCloseCb?.(4091) // 4091 = init failure, distinguishable from 4090 epoch mismatch
|
||||
},
|
||||
)
|
||||
},
|
||||
}
|
||||
}
|
||||
57
src/bridge/sessionIdCompat.ts
Normal file
@@ -0,0 +1,57 @@
/**
 * Session ID tag translation helpers for the CCR v2 compat layer.
 *
 * Lives in its own file (rather than workSecret.ts) so that sessionHandle.ts
 * and replBridgeTransport.ts (bridge.mjs entry points) can import from
 * workSecret.ts without pulling in these retag functions.
 *
 * The isCseShimEnabled kill switch is injected via setCseShimGate() to avoid
 * a static import of bridgeEnabled.ts → growthbook.ts → config.ts — all
 * banned from the sdk.mjs bundle (scripts/build-agent-sdk.sh). Callers that
 * already import bridgeEnabled.ts register the gate; the SDK path never does,
 * so the shim defaults to active (matching isCseShimEnabled()'s own default).
 */

let _isCseShimEnabled: (() => boolean) | undefined

/**
 * Register the GrowthBook gate for the cse_ shim. Called from bridge
 * init code that already imports bridgeEnabled.ts.
 */
export function setCseShimGate(gate: () => boolean): void {
  _isCseShimEnabled = gate
}

/**
 * Re-tag a `cse_*` session ID to `session_*` for use with the v1 compat API.
 *
 * Worker endpoints (/v1/code/sessions/{id}/worker/*) want `cse_*`; that's
 * what the work poll delivers. Client-facing compat endpoints
 * (/v1/sessions/{id}, /v1/sessions/{id}/archive, /v1/sessions/{id}/events)
 * want `session_*` — compat/convert.go:27 validates TagSession. Same UUID,
 * different costume. No-op for IDs that aren't `cse_*`.
 *
 * bridgeMain holds one sessionId variable for both worker registration and
 * session-management calls. It arrives as `cse_*` from the work poll under
 * the compat gate, so archiveSession/fetchSessionTitle need this re-tag.
 */
export function toCompatSessionId(id: string): string {
  if (!id.startsWith('cse_')) return id
  if (_isCseShimEnabled && !_isCseShimEnabled()) return id
  return 'session_' + id.slice('cse_'.length)
}

/**
 * Re-tag a `session_*` session ID to `cse_*` for infrastructure-layer calls.
 *
 * Inverse of toCompatSessionId. POST /v1/environments/{id}/bridge/reconnect
 * lives below the compat layer: once ccr_v2_compat_enabled is on server-side,
 * it looks sessions up by their infra tag (`cse_*`). createBridgeSession still
 * returns `session_*` (compat/convert.go:41) and that's what bridge-pointer
 * stores — so perpetual reconnect passes the wrong costume and gets "Session
 * not found" back. Same UUID, wrong tag. No-op for IDs that aren't `session_*`.
 */
export function toInfraSessionId(id: string): string {
  if (!id.startsWith('session_')) return id
  return 'cse_' + id.slice('session_'.length)
}
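The two retag helpers are pure string swaps. A minimal standalone sketch of the round trip (function bodies copied from the file above, with the shim gate assumed unset so the cse_ shim stays active; the IDs are made up):

```typescript
// Standalone copies for illustration; in the real module toCompatSessionId
// also consults the injected isCseShimEnabled gate before re-tagging.
function toCompatSessionId(id: string): string {
  if (!id.startsWith('cse_')) return id
  return 'session_' + id.slice('cse_'.length)
}

function toInfraSessionId(id: string): string {
  if (!id.startsWith('session_')) return id
  return 'cse_' + id.slice('session_'.length)
}

// Same UUID, different costume; unrecognized prefixes pass through untouched.
console.log(toCompatSessionId('cse_0a1b2c3d'))    // session_0a1b2c3d
console.log(toInfraSessionId('session_0a1b2c3d')) // cse_0a1b2c3d
console.log(toCompatSessionId('wk_0a1b2c3d'))     // wk_0a1b2c3d
```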
550
src/bridge/sessionRunner.ts
Normal file
@@ -0,0 +1,550 @@
import { type ChildProcess, spawn } from 'child_process'
import { createWriteStream, type WriteStream } from 'fs'
import { tmpdir } from 'os'
import { dirname, join } from 'path'
import { createInterface } from 'readline'
import { jsonParse, jsonStringify } from '../utils/slowOperations.js'
import { debugTruncate } from './debugUtils.js'
import type {
  SessionActivity,
  SessionDoneStatus,
  SessionHandle,
  SessionSpawner,
  SessionSpawnOpts,
} from './types.js'

const MAX_ACTIVITIES = 10
const MAX_STDERR_LINES = 10

/**
 * Sanitize a session ID for use in file names.
 * Strips any characters that could cause path traversal (e.g. `../`, `/`)
 * or other filesystem issues, replacing them with underscores.
 */
export function safeFilenameId(id: string): string {
  return id.replace(/[^a-zA-Z0-9_-]/g, '_')
}
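To make the sanitizer's behavior concrete, a couple of hypothetical inputs (the IDs below are invented):

```typescript
// Copy of safeFilenameId from above, runnable standalone.
function safeFilenameId(id: string): string {
  return id.replace(/[^a-zA-Z0-9_-]/g, '_')
}

// Already-safe IDs pass through; path-traversal characters collapse to underscores.
console.log(safeFilenameId('cse_abc-123'))      // cse_abc-123
console.log(safeFilenameId('../../etc/passwd')) // ______etc_passwd
```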

/**
 * A control_request emitted by the child CLI when it needs permission to
 * execute a **specific** tool invocation (not a general capability check).
 * The bridge forwards this to the server so the user can approve/deny.
 */
export type PermissionRequest = {
  type: 'control_request'
  request_id: string
  request: {
    /** Per-invocation permission check — "may I run this tool with these inputs?" */
    subtype: 'can_use_tool'
    tool_name: string
    input: Record<string, unknown>
    tool_use_id: string
  }
}

type SessionSpawnerDeps = {
  execPath: string
  /**
   * Arguments that must precede the CLI flags when spawning. Empty for
   * compiled binaries (where execPath is the claude binary itself); contains
   * the script path (process.argv[1]) for npm installs where execPath is the
   * node runtime. Without this, node sees --sdk-url as a node option and
   * exits with "bad option: --sdk-url" (see anthropics/claude-code#28334).
   */
  scriptArgs: string[]
  env: NodeJS.ProcessEnv
  verbose: boolean
  sandbox: boolean
  debugFile?: string
  permissionMode?: string
  onDebug: (msg: string) => void
  onActivity?: (sessionId: string, activity: SessionActivity) => void
  onPermissionRequest?: (
    sessionId: string,
    request: PermissionRequest,
    accessToken: string,
  ) => void
}

/** Map tool names to human-readable verbs for the status display. */
const TOOL_VERBS: Record<string, string> = {
  Read: 'Reading',
  Write: 'Writing',
  Edit: 'Editing',
  MultiEdit: 'Editing',
  Bash: 'Running',
  Glob: 'Searching',
  Grep: 'Searching',
  WebFetch: 'Fetching',
  WebSearch: 'Searching',
  Task: 'Running task',
  FileReadTool: 'Reading',
  FileWriteTool: 'Writing',
  FileEditTool: 'Editing',
  GlobTool: 'Searching',
  GrepTool: 'Searching',
  BashTool: 'Running',
  NotebookEditTool: 'Editing notebook',
  LSP: 'LSP',
}

function toolSummary(name: string, input: Record<string, unknown>): string {
  const verb = TOOL_VERBS[name] ?? name
  const target =
    (input.file_path as string) ??
    (input.filePath as string) ??
    (input.pattern as string) ??
    (input.command as string | undefined)?.slice(0, 60) ??
    (input.url as string) ??
    (input.query as string) ??
    ''
  if (target) {
    return `${verb} ${target}`
  }
  return verb
}
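A short sketch of how the verb table and the target fallback chain combine (trimmed copy of the function above with only three verbs included; the inputs are invented):

```typescript
// Trimmed copy of TOOL_VERBS/toolSummary for illustration.
const TOOL_VERBS: Record<string, string> = {
  Read: 'Reading',
  Bash: 'Running',
  Grep: 'Searching',
}

function toolSummary(name: string, input: Record<string, unknown>): string {
  const verb = TOOL_VERBS[name] ?? name
  const target =
    (input.file_path as string) ??
    (input.pattern as string) ??
    (input.command as string | undefined)?.slice(0, 60) ??
    ''
  return target ? `${verb} ${target}` : verb
}

console.log(toolSummary('Read', { file_path: 'src/tools.ts' })) // Reading src/tools.ts
console.log(toolSummary('Bash', { command: 'bun run build' }))  // Running bun run build
console.log(toolSummary('MysteryTool', {}))                     // MysteryTool
```

An unknown tool name falls back to the name itself, so the status line always shows something.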

function extractActivities(
  line: string,
  sessionId: string,
  onDebug: (msg: string) => void,
): SessionActivity[] {
  let parsed: unknown
  try {
    parsed = jsonParse(line)
  } catch {
    return []
  }

  if (!parsed || typeof parsed !== 'object') {
    return []
  }

  const msg = parsed as Record<string, unknown>
  const activities: SessionActivity[] = []
  const now = Date.now()

  switch (msg.type) {
    case 'assistant': {
      const message = msg.message as Record<string, unknown> | undefined
      if (!message) break
      const content = message.content
      if (!Array.isArray(content)) break

      for (const block of content) {
        if (!block || typeof block !== 'object') continue
        const b = block as Record<string, unknown>

        if (b.type === 'tool_use') {
          const name = (b.name as string) ?? 'Tool'
          const input = (b.input as Record<string, unknown>) ?? {}
          const summary = toolSummary(name, input)
          activities.push({
            type: 'tool_start',
            summary,
            timestamp: now,
          })
          onDebug(
            `[bridge:activity] sessionId=${sessionId} tool_use name=${name} ${inputPreview(input)}`,
          )
        } else if (b.type === 'text') {
          const text = (b.text as string) ?? ''
          if (text.length > 0) {
            activities.push({
              type: 'text',
              summary: text.slice(0, 80),
              timestamp: now,
            })
            onDebug(
              `[bridge:activity] sessionId=${sessionId} text "${text.slice(0, 100)}"`,
            )
          }
        }
      }
      break
    }
    case 'result': {
      const subtype = msg.subtype as string | undefined
      if (subtype === 'success') {
        activities.push({
          type: 'result',
          summary: 'Session completed',
          timestamp: now,
        })
        onDebug(
          `[bridge:activity] sessionId=${sessionId} result subtype=success`,
        )
      } else if (subtype) {
        const errors = msg.errors as string[] | undefined
        const errorSummary = errors?.[0] ?? `Error: ${subtype}`
        activities.push({
          type: 'error',
          summary: errorSummary,
          timestamp: now,
        })
        onDebug(
          `[bridge:activity] sessionId=${sessionId} result subtype=${subtype} error="${errorSummary}"`,
        )
      } else {
        onDebug(
          `[bridge:activity] sessionId=${sessionId} result subtype=undefined`,
        )
      }
      break
    }
    default:
      break
  }

  return activities
}

/**
 * Extract plain text from a replayed SDKUserMessage NDJSON line. Returns the
 * trimmed text if this looks like a real human-authored message, otherwise
 * undefined so the caller keeps waiting for the first real message.
 */
function extractUserMessageText(
  msg: Record<string, unknown>,
): string | undefined {
  // Skip tool-result user messages (wrapped subagent results) and synthetic
  // caveat messages — neither is human-authored.
  if (msg.parent_tool_use_id != null || msg.isSynthetic || msg.isReplay)
    return undefined

  const message = msg.message as Record<string, unknown> | undefined
  const content = message?.content
  let text: string | undefined
  if (typeof content === 'string') {
    text = content
  } else if (Array.isArray(content)) {
    for (const block of content) {
      if (
        block &&
        typeof block === 'object' &&
        (block as Record<string, unknown>).type === 'text'
      ) {
        text = (block as Record<string, unknown>).text as string | undefined
        break
      }
    }
  }
  text = text?.trim()
  return text ? text : undefined
}
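The filtering rules can be seen on a few synthetic message shapes (self-contained copy of the function above; the sample messages are invented, not real SDK output):

```typescript
// Copy of extractUserMessageText from above, runnable standalone.
function extractUserMessageText(
  msg: Record<string, unknown>,
): string | undefined {
  if (msg.parent_tool_use_id != null || msg.isSynthetic || msg.isReplay)
    return undefined
  const message = msg.message as Record<string, unknown> | undefined
  const content = message?.content
  let text: string | undefined
  if (typeof content === 'string') {
    text = content
  } else if (Array.isArray(content)) {
    for (const block of content) {
      if (
        block &&
        typeof block === 'object' &&
        (block as Record<string, unknown>).type === 'text'
      ) {
        text = (block as Record<string, unknown>).text as string | undefined
        break
      }
    }
  }
  text = text?.trim()
  return text ? text : undefined
}

// Human-authored string content is returned trimmed.
console.log(extractUserMessageText({ message: { content: '  fix the build  ' } })) // fix the build
// A wrapped subagent result (parent_tool_use_id set) is filtered out.
console.log(extractUserMessageText({ parent_tool_use_id: 'tu_1', message: { content: 'x' } })) // undefined
// Whitespace-only content also yields undefined, so the caller keeps waiting.
console.log(extractUserMessageText({ message: { content: '   ' } })) // undefined
```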

/** Build a short preview of tool input for debug logging. */
function inputPreview(input: Record<string, unknown>): string {
  const parts: string[] = []
  for (const [key, val] of Object.entries(input)) {
    if (typeof val === 'string') {
      parts.push(`${key}="${val.slice(0, 100)}"`)
    }
    if (parts.length >= 3) break
  }
  return parts.join(' ')
}

export function createSessionSpawner(deps: SessionSpawnerDeps): SessionSpawner {
  return {
    spawn(opts: SessionSpawnOpts, dir: string): SessionHandle {
      // Debug file resolution:
      // 1. If deps.debugFile is provided, use it with session ID suffix for uniqueness
      // 2. If verbose or ant build, auto-generate a temp file path
      // 3. Otherwise, no debug file
      const safeId = safeFilenameId(opts.sessionId)
      let debugFile: string | undefined
      if (deps.debugFile) {
        const ext = deps.debugFile.lastIndexOf('.')
        if (ext > 0) {
          debugFile = `${deps.debugFile.slice(0, ext)}-${safeId}${deps.debugFile.slice(ext)}`
        } else {
          debugFile = `${deps.debugFile}-${safeId}`
        }
      } else if (deps.verbose || process.env.USER_TYPE === 'ant') {
        debugFile = join(tmpdir(), 'claude', `bridge-session-${safeId}.log`)
      }

      // Transcript file: write raw NDJSON lines for post-hoc analysis.
      // Placed alongside the debug file when one is configured.
      let transcriptStream: WriteStream | null = null
      let transcriptPath: string | undefined
      if (deps.debugFile) {
        transcriptPath = join(
          dirname(deps.debugFile),
          `bridge-transcript-${safeId}.jsonl`,
        )
        transcriptStream = createWriteStream(transcriptPath, { flags: 'a' })
        transcriptStream.on('error', err => {
          deps.onDebug(
            `[bridge:session] Transcript write error: ${err.message}`,
          )
          transcriptStream = null
        })
        deps.onDebug(`[bridge:session] Transcript log: ${transcriptPath}`)
      }

      const args = [
        ...deps.scriptArgs,
        '--print',
        '--sdk-url',
        opts.sdkUrl,
        '--session-id',
        opts.sessionId,
        '--input-format',
        'stream-json',
        '--output-format',
        'stream-json',
        '--replay-user-messages',
        ...(deps.verbose ? ['--verbose'] : []),
        ...(debugFile ? ['--debug-file', debugFile] : []),
        ...(deps.permissionMode
          ? ['--permission-mode', deps.permissionMode]
          : []),
      ]

      const env: NodeJS.ProcessEnv = {
        ...deps.env,
        // Strip the bridge's OAuth token so the child CC process uses
        // the session access token for inference instead.
        CLAUDE_CODE_OAUTH_TOKEN: undefined,
        CLAUDE_CODE_ENVIRONMENT_KIND: 'bridge',
        ...(deps.sandbox && { CLAUDE_CODE_FORCE_SANDBOX: '1' }),
        CLAUDE_CODE_SESSION_ACCESS_TOKEN: opts.accessToken,
        // v1: HybridTransport (WS reads + POST writes) to Session-Ingress.
        // Harmless in v2 mode — transportUtils checks CLAUDE_CODE_USE_CCR_V2 first.
        CLAUDE_CODE_POST_FOR_SESSION_INGRESS_V2: '1',
        // v2: SSETransport + CCRClient to CCR's /v1/code/sessions/* endpoints.
        // Same env vars environment-manager sets in the container path.
        ...(opts.useCcrV2 && {
          CLAUDE_CODE_USE_CCR_V2: '1',
          CLAUDE_CODE_WORKER_EPOCH: String(opts.workerEpoch),
        }),
      }

      deps.onDebug(
        `[bridge:session] Spawning sessionId=${opts.sessionId} sdkUrl=${opts.sdkUrl} accessToken=${opts.accessToken ? 'present' : 'MISSING'}`,
      )
      deps.onDebug(`[bridge:session] Child args: ${args.join(' ')}`)
      if (debugFile) {
        deps.onDebug(`[bridge:session] Debug log: ${debugFile}`)
      }

      // Pipe all three streams: stdin for control, stdout for NDJSON parsing,
      // stderr for error capture and diagnostics.
      const child: ChildProcess = spawn(deps.execPath, args, {
        cwd: dir,
        stdio: ['pipe', 'pipe', 'pipe'],
        env,
        windowsHide: true,
      })

      deps.onDebug(
        `[bridge:session] sessionId=${opts.sessionId} pid=${child.pid}`,
      )

      const activities: SessionActivity[] = []
      let currentActivity: SessionActivity | null = null
      const lastStderr: string[] = []
      let sigkillSent = false
      let firstUserMessageSeen = false

      // Buffer stderr for error diagnostics
      if (child.stderr) {
        const stderrRl = createInterface({ input: child.stderr })
        stderrRl.on('line', line => {
          // Forward stderr to bridge's stderr in verbose mode
          if (deps.verbose) {
            process.stderr.write(line + '\n')
          }
          // Ring buffer of last N lines
          if (lastStderr.length >= MAX_STDERR_LINES) {
            lastStderr.shift()
          }
          lastStderr.push(line)
        })
      }

      // Parse NDJSON from child stdout
      if (child.stdout) {
        const rl = createInterface({ input: child.stdout })
        rl.on('line', line => {
          // Write raw NDJSON to transcript file
          if (transcriptStream) {
            transcriptStream.write(line + '\n')
          }

          // Log all messages flowing from the child CLI to the bridge
          deps.onDebug(
            `[bridge:ws] sessionId=${opts.sessionId} <<< ${debugTruncate(line)}`,
          )

          // In verbose mode, forward raw output to stderr
          if (deps.verbose) {
            process.stderr.write(line + '\n')
          }

          const extracted = extractActivities(
            line,
            opts.sessionId,
            deps.onDebug,
          )
          for (const activity of extracted) {
            // Maintain ring buffer
            if (activities.length >= MAX_ACTIVITIES) {
              activities.shift()
            }
            activities.push(activity)
            currentActivity = activity

            deps.onActivity?.(opts.sessionId, activity)
          }

          // Detect control_request and replayed user messages.
          // extractActivities parses the same line but swallows parse errors
          // and skips 'user' type — re-parse here is cheap (NDJSON lines are
          // small) and keeps each path self-contained.
          {
            let parsed: unknown
            try {
              parsed = jsonParse(line)
            } catch {
              // Non-JSON line, skip detection
            }
            if (parsed && typeof parsed === 'object') {
              const msg = parsed as Record<string, unknown>

              if (msg.type === 'control_request') {
                const request = msg.request as
                  | Record<string, unknown>
                  | undefined
                if (
                  request?.subtype === 'can_use_tool' &&
                  deps.onPermissionRequest
                ) {
                  deps.onPermissionRequest(
                    opts.sessionId,
                    parsed as PermissionRequest,
                    opts.accessToken,
                  )
                }
                // interrupt is turn-level; the child handles it internally (print.ts)
              } else if (
                msg.type === 'user' &&
                !firstUserMessageSeen &&
                opts.onFirstUserMessage
              ) {
                const text = extractUserMessageText(msg)
                if (text) {
                  firstUserMessageSeen = true
                  opts.onFirstUserMessage(text)
                }
              }
            }
          }
        })
      }

      const done = new Promise<SessionDoneStatus>(resolve => {
        child.on('close', (code, signal) => {
          // Close transcript stream on exit
          if (transcriptStream) {
            transcriptStream.end()
            transcriptStream = null
          }

          if (signal === 'SIGTERM' || signal === 'SIGINT') {
            deps.onDebug(
              `[bridge:session] sessionId=${opts.sessionId} interrupted signal=${signal} pid=${child.pid}`,
            )
            resolve('interrupted')
          } else if (code === 0) {
            deps.onDebug(
              `[bridge:session] sessionId=${opts.sessionId} completed exit_code=0 pid=${child.pid}`,
            )
            resolve('completed')
          } else {
            deps.onDebug(
              `[bridge:session] sessionId=${opts.sessionId} failed exit_code=${code} pid=${child.pid}`,
            )
            resolve('failed')
          }
        })

        child.on('error', err => {
          deps.onDebug(
            `[bridge:session] sessionId=${opts.sessionId} spawn error: ${err.message}`,
          )
          resolve('failed')
        })
      })

      const handle: SessionHandle = {
        sessionId: opts.sessionId,
        done,
        activities,
        accessToken: opts.accessToken,
        lastStderr,
        get currentActivity(): SessionActivity | null {
          return currentActivity
        },
        kill(): void {
          if (!child.killed) {
            deps.onDebug(
              `[bridge:session] Sending SIGTERM to sessionId=${opts.sessionId} pid=${child.pid}`,
            )
            // On Windows, child.kill('SIGTERM') throws; use default signal.
            if (process.platform === 'win32') {
              child.kill()
            } else {
              child.kill('SIGTERM')
            }
          }
        },
        forceKill(): void {
          // Use separate flag because child.killed is set when kill() is called,
          // not when the process exits. We need to send SIGKILL even after SIGTERM.
          if (!sigkillSent && child.pid) {
            sigkillSent = true
            deps.onDebug(
              `[bridge:session] Sending SIGKILL to sessionId=${opts.sessionId} pid=${child.pid}`,
            )
            if (process.platform === 'win32') {
              child.kill()
            } else {
              child.kill('SIGKILL')
            }
          }
        },
        writeStdin(data: string): void {
          if (child.stdin && !child.stdin.destroyed) {
            deps.onDebug(
              `[bridge:ws] sessionId=${opts.sessionId} >>> ${debugTruncate(data)}`,
            )
            child.stdin.write(data)
          }
        },
        updateAccessToken(token: string): void {
          handle.accessToken = token
          // Send the fresh token to the child process via stdin. The child's
          // StructuredIO handles update_environment_variables messages by
          // setting process.env directly, so getSessionIngressAuthToken()
          // picks up the new token on the next refreshHeaders call.
          handle.writeStdin(
            jsonStringify({
              type: 'update_environment_variables',
              variables: { CLAUDE_CODE_SESSION_ACCESS_TOKEN: token },
            }) + '\n',
          )
          deps.onDebug(
            `[bridge:session] Sent token refresh via stdin for sessionId=${opts.sessionId}`,
          )
        },
      }

      return handle
    },
  }
}

export { extractActivities as _extractActivitiesForTesting }
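The token-refresh control message that updateAccessToken writes to the child's stdin is a single NDJSON line. Building it with plain JSON.stringify (a stand-in for the module's jsonStringify; the token value below is invented) looks like:

```typescript
// Hypothetical token; the real value comes from the bridge's refresh flow.
const token = 'sat_example'
const line =
  JSON.stringify({
    type: 'update_environment_variables',
    variables: { CLAUDE_CODE_SESSION_ACCESS_TOKEN: token },
  }) + '\n'

// One newline-terminated JSON object per message, matching the child's
// --input-format stream-json contract.
process.stdout.write(line)
```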
210
src/bridge/trustedDevice.ts
Normal file
@@ -0,0 +1,210 @@
import axios from 'axios'
import memoize from 'lodash-es/memoize.js'
import { hostname } from 'os'
import { getOauthConfig } from '../constants/oauth.js'
import {
  checkGate_CACHED_OR_BLOCKING,
  getFeatureValue_CACHED_MAY_BE_STALE,
} from '../services/analytics/growthbook.js'
import { logForDebugging } from '../utils/debug.js'
import { errorMessage } from '../utils/errors.js'
import { isEssentialTrafficOnly } from '../utils/privacyLevel.js'
import { getSecureStorage } from '../utils/secureStorage/index.js'
import { jsonStringify } from '../utils/slowOperations.js'

/**
 * Trusted device token source for bridge (remote-control) sessions.
 *
 * Bridge sessions have SecurityTier=ELEVATED on the server (CCR v2).
 * The server gates ConnectBridgeWorker on its own flag
 * (sessions_elevated_auth_enforcement in Anthropic Main); this CLI-side
 * flag controls whether the CLI sends X-Trusted-Device-Token at all.
 * Two flags so rollout can be staged: flip CLI-side first (headers
 * start flowing, server still no-ops), then flip server-side.
 *
 * Enrollment (POST /auth/trusted_devices) is gated server-side by
 * account_session.created_at < 10min, so it must happen during /login.
 * Token is persistent (90d rolling expiry) and stored in keychain.
 *
 * See anthropics/anthropic#274559 (spec), #310375 (B1b tenant RPCs),
 * #295987 (B2 Python routes), #307150 (C1' CCR v2 gate).
 */

const TRUSTED_DEVICE_GATE = 'tengu_sessions_elevated_auth_enforcement'

function isGateEnabled(): boolean {
  return getFeatureValue_CACHED_MAY_BE_STALE(TRUSTED_DEVICE_GATE, false)
}

// Memoized — secureStorage.read() spawns a macOS `security` subprocess (~40ms).
// bridgeApi.ts calls this from getHeaders() on every poll/heartbeat/ack.
// Cache cleared after enrollment (below) and on logout (clearAuthRelatedCaches).
//
// Only the storage read is memoized — the GrowthBook gate is checked live so
// that a gate flip after GrowthBook refresh takes effect without a restart.
const readStoredToken = memoize((): string | undefined => {
  // Env var takes precedence for testing/canary.
  const envToken = process.env.CLAUDE_TRUSTED_DEVICE_TOKEN
  if (envToken) {
    return envToken
  }
  return getSecureStorage().read()?.trustedDeviceToken
})

export function getTrustedDeviceToken(): string | undefined {
  if (!isGateEnabled()) {
    return undefined
  }
  return readStoredToken()
}

export function clearTrustedDeviceTokenCache(): void {
  readStoredToken.cache?.clear?.()
}

/**
 * Clear the stored trusted device token from secure storage and the memo cache.
 * Called before enrollTrustedDevice() during /login so a stale token from the
 * previous account isn't sent as X-Trusted-Device-Token while enrollment is
 * in-flight (enrollTrustedDevice is async — bridge API calls between login and
 * enrollment completion would otherwise still read the old cached token).
 */
export function clearTrustedDeviceToken(): void {
  if (!isGateEnabled()) {
    return
  }
  const secureStorage = getSecureStorage()
  try {
    const data = secureStorage.read()
    if (data?.trustedDeviceToken) {
      delete data.trustedDeviceToken
      secureStorage.update(data)
    }
  } catch {
    // Best-effort — don't block login if storage is inaccessible
  }
  readStoredToken.cache?.clear?.()
}

/**
 * Enroll this device via POST /auth/trusted_devices and persist the token
 * to keychain. Best-effort — logs and returns on failure so callers
 * (post-login hooks) don't block the login flow.
 *
 * The server gates enrollment on account_session.created_at < 10min, so
 * this must be called immediately after a fresh /login. Calling it later
 * (e.g. lazy enrollment on /bridge 403) will fail with 403 stale_session.
 */
export async function enrollTrustedDevice(): Promise<void> {
  try {
    // checkGate_CACHED_OR_BLOCKING awaits any in-flight GrowthBook re-init
    // (triggered by refreshGrowthBookAfterAuthChange in login.tsx) before
    // reading the gate, so we get the post-refresh value.
    if (!(await checkGate_CACHED_OR_BLOCKING(TRUSTED_DEVICE_GATE))) {
      logForDebugging(
        `[trusted-device] Gate ${TRUSTED_DEVICE_GATE} is off, skipping enrollment`,
      )
      return
    }
    // If CLAUDE_TRUSTED_DEVICE_TOKEN is set (e.g. by an enterprise wrapper),
    // skip enrollment — the env var takes precedence in readStoredToken() so
    // any enrolled token would be shadowed and never used.
    if (process.env.CLAUDE_TRUSTED_DEVICE_TOKEN) {
      logForDebugging(
        '[trusted-device] CLAUDE_TRUSTED_DEVICE_TOKEN env var is set, skipping enrollment (env var takes precedence)',
      )
      return
    }
    // Lazy require — utils/auth.ts transitively pulls ~1300 modules
    // (config → file → permissions → sessionStorage → commands). Daemon callers
    // of getTrustedDeviceToken() don't need this; only /login does.
    /* eslint-disable @typescript-eslint/no-require-imports */
    const { getClaudeAIOAuthTokens } =
      require('../utils/auth.js') as typeof import('../utils/auth.js')
    /* eslint-enable @typescript-eslint/no-require-imports */
    const accessToken = getClaudeAIOAuthTokens()?.accessToken
    if (!accessToken) {
      logForDebugging('[trusted-device] No OAuth token, skipping enrollment')
      return
    }
    // Always re-enroll on /login — the existing token may belong to a
    // different account (account-switch without /logout). Skipping enrollment
    // would send the old account's token on the new account's bridge calls.
    const secureStorage = getSecureStorage()

    if (isEssentialTrafficOnly()) {
      logForDebugging(
        '[trusted-device] Essential traffic only, skipping enrollment',
      )
      return
    }

    const baseUrl = getOauthConfig().BASE_API_URL
    let response
    try {
      response = await axios.post<{
        device_token?: string
        device_id?: string
      }>(
        `${baseUrl}/api/auth/trusted_devices`,
        { display_name: `Claude Code on ${hostname()} · ${process.platform}` },
        {
          headers: {
            Authorization: `Bearer ${accessToken}`,
            'Content-Type': 'application/json',
          },
          timeout: 10_000,
          validateStatus: s => s < 500,
        },
      )
    } catch (err: unknown) {
      logForDebugging(
        `[trusted-device] Enrollment request failed: ${errorMessage(err)}`,
      )
      return
    }

    if (response.status !== 200 && response.status !== 201) {
      logForDebugging(
        `[trusted-device] Enrollment failed ${response.status}: ${jsonStringify(response.data).slice(0, 200)}`,
      )
      return
    }

    const token = response.data?.device_token
    if (!token || typeof token !== 'string') {
      logForDebugging(
        '[trusted-device] Enrollment response missing device_token field',
      )
      return
    }

    try {
      const storageData = secureStorage.read()
      if (!storageData) {
        logForDebugging(
          '[trusted-device] Cannot read storage, skipping token persist',
        )
        return
      }
      storageData.trustedDeviceToken = token
      const result = secureStorage.update(storageData)
      if (!result.success) {
        logForDebugging(
          `[trusted-device] Failed to persist token: ${result.warning ?? 'unknown'}`,
        )
        return
      }
      readStoredToken.cache?.clear?.()
      logForDebugging(
        `[trusted-device] Enrolled device_id=${response.data.device_id ?? 'unknown'}`,
      )
    } catch (err: unknown) {
      logForDebugging(
        `[trusted-device] Storage write failed: ${errorMessage(err)}`,
      )
    }
  } catch (err: unknown) {
    logForDebugging(`[trusted-device] Enrollment error: ${errorMessage(err)}`)
  }
}
262
src/bridge/types.ts
Normal file
@ -0,0 +1,262 @@
/** Default per-session timeout (24 hours). */
export const DEFAULT_SESSION_TIMEOUT_MS = 24 * 60 * 60 * 1000

/** Reusable login guidance appended to bridge auth errors. */
export const BRIDGE_LOGIN_INSTRUCTION =
  'Remote Control is only available with claude.ai subscriptions. Please use `/login` to sign in with your claude.ai account.'

/** Full error printed when `claude remote-control` is run without auth. */
export const BRIDGE_LOGIN_ERROR =
  'Error: You must be logged in to use Remote Control.\n\n' +
  BRIDGE_LOGIN_INSTRUCTION

/** Shown when the user disconnects Remote Control (via /remote-control or ultraplan launch). */
export const REMOTE_CONTROL_DISCONNECTED_MSG = 'Remote Control disconnected.'

// --- Protocol types for the environments API ---

export type WorkData = {
  type: 'session' | 'healthcheck'
  id: string
}

export type WorkResponse = {
  id: string
  type: 'work'
  environment_id: string
  state: string
  data: WorkData
  secret: string // base64url-encoded JSON
  created_at: string
}

export type WorkSecret = {
  version: number
  session_ingress_token: string
  api_base_url: string
  sources: Array<{
    type: string
    git_info?: { type: string; repo: string; ref?: string; token?: string }
  }>
  auth: Array<{ type: string; token: string }>
  claude_code_args?: Record<string, string> | null
  mcp_config?: unknown | null
  environment_variables?: Record<string, string> | null
  /**
   * Server-driven CCR v2 selector. Set by prepare_work_secret() when the
   * session was created via the v2 compat layer (ccr_v2_compat_enabled).
   * Same field the BYOC runner reads at environment-runner/sessionExecutor.ts.
   */
  use_code_sessions?: boolean
}

export type SessionDoneStatus = 'completed' | 'failed' | 'interrupted'

export type SessionActivityType = 'tool_start' | 'text' | 'result' | 'error'

export type SessionActivity = {
  type: SessionActivityType
  summary: string // e.g. "Editing src/foo.ts", "Reading package.json"
  timestamp: number
}

/**
 * How `claude remote-control` chooses session working directories.
 * - `single-session`: one session in cwd, bridge tears down when it ends
 * - `worktree`: persistent server, every session gets an isolated git worktree
 * - `same-dir`: persistent server, every session shares cwd (can stomp each other)
 */
export type SpawnMode = 'single-session' | 'worktree' | 'same-dir'

/**
 * Well-known worker_type values THIS codebase produces. Sent as
 * `metadata.worker_type` at environment registration so claude.ai can filter
 * the session picker by origin (e.g. assistant tab only shows assistant
 * workers). The backend treats this as an opaque string — desktop cowork
 * sends `"cowork"`, which isn't in this union. REPL code uses this narrow
 * type for its own exhaustiveness; wire-level fields accept any string.
 */
export type BridgeWorkerType = 'claude_code' | 'claude_code_assistant'

export type BridgeConfig = {
  dir: string
  machineName: string
  branch: string
  gitRepoUrl: string | null
  maxSessions: number
  spawnMode: SpawnMode
  verbose: boolean
  sandbox: boolean
  /** Client-generated UUID identifying this bridge instance. */
  bridgeId: string
  /**
   * Sent as metadata.worker_type so web clients can filter by origin.
   * Backend treats this as opaque — any string, not just BridgeWorkerType.
   */
  workerType: string
  /** Client-generated UUID for idempotent environment registration. */
  environmentId: string
  /**
   * Backend-issued environment_id to reuse on re-register. When set, the
   * backend treats registration as a reconnect to the existing environment
   * instead of creating a new one. Used by `claude remote-control
   * --session-id` resume. Must be a backend-format ID — client UUIDs are
   * rejected with 400.
   */
  reuseEnvironmentId?: string
  /** API base URL the bridge is connected to (used for polling). */
  apiBaseUrl: string
  /** Session ingress base URL for WebSocket connections (may differ from apiBaseUrl locally). */
  sessionIngressUrl: string
  /** Debug file path passed via --debug-file. */
  debugFile?: string
  /** Per-session timeout in milliseconds. Sessions exceeding this are killed. */
  sessionTimeoutMs?: number
}

// --- Dependency interfaces (for testability) ---

/**
 * A control_response event sent back to a session (e.g. a permission decision).
 * The `subtype` is `'success'` per the SDK protocol; the inner `response`
 * carries the permission decision payload (e.g. `{ behavior: 'allow' }`).
 */
export type PermissionResponseEvent = {
  type: 'control_response'
  response: {
    subtype: 'success'
    request_id: string
    response: Record<string, unknown>
  }
}

export type BridgeApiClient = {
  registerBridgeEnvironment(config: BridgeConfig): Promise<{
    environment_id: string
    environment_secret: string
  }>
  pollForWork(
    environmentId: string,
    environmentSecret: string,
    signal?: AbortSignal,
    reclaimOlderThanMs?: number,
  ): Promise<WorkResponse | null>
  acknowledgeWork(
    environmentId: string,
    workId: string,
    sessionToken: string,
  ): Promise<void>
  /** Stop a work item via the environments API. */
  stopWork(environmentId: string, workId: string, force: boolean): Promise<void>
  /** Deregister/delete the bridge environment on graceful shutdown. */
  deregisterEnvironment(environmentId: string): Promise<void>
  /** Send a permission response (control_response) to a session via the session events API. */
  sendPermissionResponseEvent(
    sessionId: string,
    event: PermissionResponseEvent,
    sessionToken: string,
  ): Promise<void>
  /** Archive a session so it no longer appears as active on the server. */
  archiveSession(sessionId: string): Promise<void>
  /**
   * Force-stop stale worker instances and re-queue a session on an environment.
   * Used by `--session-id` to resume a session after the original bridge died.
   */
  reconnectSession(environmentId: string, sessionId: string): Promise<void>
  /**
   * Send a lightweight heartbeat for an active work item, extending its lease.
   * Uses SessionIngressAuth (JWT, no DB hit) instead of EnvironmentSecretAuth.
   * Returns the server's response with lease status.
   */
  heartbeatWork(
    environmentId: string,
    workId: string,
    sessionToken: string,
  ): Promise<{ lease_extended: boolean; state: string }>
}

export type SessionHandle = {
  sessionId: string
  done: Promise<SessionDoneStatus>
  kill(): void
  forceKill(): void
  activities: SessionActivity[] // ring buffer of recent activities (last ~10)
  currentActivity: SessionActivity | null // most recent
  accessToken: string // session_ingress_token for API calls
  lastStderr: string[] // ring buffer of last stderr lines
  writeStdin(data: string): void // write directly to child stdin
  /** Update the access token for a running session (e.g. after token refresh). */
  updateAccessToken(token: string): void
}

export type SessionSpawnOpts = {
  sessionId: string
  sdkUrl: string
  accessToken: string
  /** When true, spawn the child with CCR v2 env vars (SSE transport + CCRClient). */
  useCcrV2?: boolean
  /** Required when useCcrV2 is true. Obtained from POST /worker/register. */
  workerEpoch?: number
  /**
   * Fires once with the text of the first real user message seen on the
   * child's stdout (via --replay-user-messages). Lets the caller derive a
   * session title when none exists yet. Tool-result and synthetic user
   * messages are skipped.
   */
  onFirstUserMessage?: (text: string) => void
}

export type SessionSpawner = {
  spawn(opts: SessionSpawnOpts, dir: string): SessionHandle
}

export type BridgeLogger = {
  printBanner(config: BridgeConfig, environmentId: string): void
  logSessionStart(sessionId: string, prompt: string): void
  logSessionComplete(sessionId: string, durationMs: number): void
  logSessionFailed(sessionId: string, error: string): void
  logStatus(message: string): void
  logVerbose(message: string): void
  logError(message: string): void
  /** Log a reconnection success event after recovering from connection errors. */
  logReconnected(disconnectedMs: number): void
  /** Show idle status with repo/branch info and shimmer animation. */
  updateIdleStatus(): void
  /** Show reconnecting status in the live display. */
  updateReconnectingStatus(delayStr: string, elapsedStr: string): void
  updateSessionStatus(
    sessionId: string,
    elapsed: string,
    activity: SessionActivity,
    trail: string[],
  ): void
  clearStatus(): void
  /** Set repository info for status line display. */
  setRepoInfo(repoName: string, branch: string): void
  /** Set debug log glob shown above the status line (ant users). */
  setDebugLogPath(path: string): void
  /** Transition to "Attached" state when a session starts. */
  setAttached(sessionId: string): void
  /** Show failed status in the live display. */
  updateFailedStatus(error: string): void
  /** Toggle QR code visibility. */
  toggleQr(): void
  /** Update the "<n> of <m> sessions" indicator and spawn mode hint. */
  updateSessionCount(active: number, max: number, mode: SpawnMode): void
  /** Update the spawn mode shown in the session-count line. Pass null to hide (single-session or toggle unavailable). */
  setSpawnModeDisplay(mode: 'same-dir' | 'worktree' | null): void
  /** Register a new session for multi-session display (called after spawn succeeds). */
  addSession(sessionId: string, url: string): void
  /** Update the per-session activity summary (tool being run) in the multi-session list. */
  updateSessionActivity(sessionId: string, activity: SessionActivity): void
  /**
   * Set a session's display title. In multi-session mode, updates the bullet list
   * entry. In single-session mode, also shows the title in the main status line.
   * Triggers a render (guarded against reconnecting/failed states).
   */
  setSessionTitle(sessionId: string, title: string): void
  /** Remove a session from the multi-session display when it ends. */
  removeSession(sessionId: string): void
  /** Force a re-render of the status display (for multi-session activity refresh). */
  refreshDisplay(): void
}
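The `secret` field on `WorkResponse` above is documented as base64url-encoded JSON carrying a `WorkSecret`. A minimal round-trip sketch of that encoding (the field values such as `tok_123` and the example URL are illustrative placeholders, not real data):

```typescript
// Round-trip sketch of the base64url JSON encoding that WorkResponse.secret
// carries. Field values here are made-up placeholders for illustration.
const workSecret = {
  version: 1,
  session_ingress_token: 'tok_123',
  api_base_url: 'https://api.example.com',
  sources: [],
  auth: [],
}

// Encode the way a server would produce WorkResponse.secret …
const encoded = Buffer.from(JSON.stringify(workSecret)).toString('base64url')

// … and decode the way decodeWorkSecret() starts (before its validation).
const decoded = JSON.parse(Buffer.from(encoded, 'base64url').toString('utf-8'))

console.log(decoded.version) // 1
console.log(decoded.session_ingress_token) // tok_123
```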
127
src/bridge/workSecret.ts
Normal file
@ -0,0 +1,127 @@
import axios from 'axios'
import { jsonParse, jsonStringify } from '../utils/slowOperations.js'
import type { WorkSecret } from './types.js'

/** Decode a base64url-encoded work secret and validate its version. */
export function decodeWorkSecret(secret: string): WorkSecret {
  const json = Buffer.from(secret, 'base64url').toString('utf-8')
  const parsed: unknown = jsonParse(json)
  if (
    !parsed ||
    typeof parsed !== 'object' ||
    !('version' in parsed) ||
    parsed.version !== 1
  ) {
    throw new Error(
      `Unsupported work secret version: ${parsed && typeof parsed === 'object' && 'version' in parsed ? parsed.version : 'unknown'}`,
    )
  }
  const obj = parsed as Record<string, unknown>
  if (
    typeof obj.session_ingress_token !== 'string' ||
    obj.session_ingress_token.length === 0
  ) {
    throw new Error(
      'Invalid work secret: missing or empty session_ingress_token',
    )
  }
  if (typeof obj.api_base_url !== 'string') {
    throw new Error('Invalid work secret: missing api_base_url')
  }
  return parsed as WorkSecret
}

/**
 * Build a WebSocket SDK URL from the API base URL and session ID.
 * Strips the HTTP(S) protocol and constructs a ws(s):// ingress URL.
 *
 * Uses /v2/ for localhost (direct to session-ingress, no Envoy rewrite)
 * and /v1/ for production (Envoy rewrites /v1/ → /v2/).
 */
export function buildSdkUrl(apiBaseUrl: string, sessionId: string): string {
  const isLocalhost =
    apiBaseUrl.includes('localhost') || apiBaseUrl.includes('127.0.0.1')
  const protocol = isLocalhost ? 'ws' : 'wss'
  const version = isLocalhost ? 'v2' : 'v1'
  const host = apiBaseUrl.replace(/^https?:\/\//, '').replace(/\/+$/, '')
  return `${protocol}://${host}/${version}/session_ingress/ws/${sessionId}`
}

/**
 * Compare two session IDs regardless of their tagged-ID prefix.
 *
 * Tagged IDs have the form {tag}_{body} or {tag}_staging_{body}, where the
 * body encodes a UUID. CCR v2's compat layer returns `session_*` to v1 API
 * clients (compat/convert.go:41) but the infrastructure layer (sandbox-gateway
 * work queue, work poll response) uses `cse_*` (compat/CLAUDE.md:13). Both
 * have the same underlying UUID.
 *
 * Without this, replBridge rejects its own session as "foreign" at the
 * work-received check when the ccr_v2_compat_enabled gate is on.
 */
export function sameSessionId(a: string, b: string): boolean {
  if (a === b) return true
  // The body is everything after the last underscore — this handles both
  // `{tag}_{body}` and `{tag}_staging_{body}`.
  const aBody = a.slice(a.lastIndexOf('_') + 1)
  const bBody = b.slice(b.lastIndexOf('_') + 1)
  // Guard against IDs with no underscore (bare UUIDs): lastIndexOf returns -1,
  // slice(0) returns the whole string, and we already checked a === b above.
  // Require a minimum length to avoid accidental matches on short suffixes
  // (e.g. single-char tag remnants from malformed IDs).
  return aBody.length >= 4 && aBody === bBody
}

/**
 * Build a CCR v2 session URL from the API base URL and session ID.
 * Unlike buildSdkUrl, this returns an HTTP(S) URL (not ws://) and points at
 * /v1/code/sessions/{id} — the child CC will derive the SSE stream path
 * and worker endpoints from this base.
 */
export function buildCCRv2SdkUrl(
  apiBaseUrl: string,
  sessionId: string,
): string {
  const base = apiBaseUrl.replace(/\/+$/, '')
  return `${base}/v1/code/sessions/${sessionId}`
}

/**
 * Register this bridge as the worker for a CCR v2 session.
 * Returns the worker_epoch, which must be passed to the child CC process
 * so its CCRClient can include it in every heartbeat/state/event request.
 *
 * Mirrors what environment-manager does in the container path
 * (api-go/environment-manager/cmd/cmd_task_run.go RegisterWorker).
 */
export async function registerWorker(
  sessionUrl: string,
  accessToken: string,
): Promise<number> {
  const response = await axios.post(
    `${sessionUrl}/worker/register`,
    {},
    {
      headers: {
        Authorization: `Bearer ${accessToken}`,
        'Content-Type': 'application/json',
        'anthropic-version': '2023-06-01',
      },
      timeout: 10_000,
    },
  )
  // protojson serializes int64 as a string to avoid JS number precision loss;
  // the Go side may also return a number depending on encoder settings.
  const raw = response.data?.worker_epoch
  const epoch = typeof raw === 'string' ? Number(raw) : raw
  if (
    typeof epoch !== 'number' ||
    !Number.isFinite(epoch) ||
    !Number.isSafeInteger(epoch)
  ) {
    throw new Error(
      `registerWorker: invalid worker_epoch in response: ${jsonStringify(response.data)}`,
    )
  }
  return epoch
}
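The URL-building and ID-comparison helpers in workSecret.ts are pure functions, so their behavior is easy to check in isolation. Standalone copies of `buildSdkUrl` and `sameSessionId` (the hostnames and session IDs below are made-up examples):

```typescript
// Copies of buildSdkUrl and sameSessionId from src/bridge/workSecret.ts so
// their behavior can be exercised outside the repo.
function buildSdkUrl(apiBaseUrl: string, sessionId: string): string {
  const isLocalhost =
    apiBaseUrl.includes('localhost') || apiBaseUrl.includes('127.0.0.1')
  const protocol = isLocalhost ? 'ws' : 'wss'
  const version = isLocalhost ? 'v2' : 'v1'
  const host = apiBaseUrl.replace(/^https?:\/\//, '').replace(/\/+$/, '')
  return `${protocol}://${host}/${version}/session_ingress/ws/${sessionId}`
}

function sameSessionId(a: string, b: string): boolean {
  if (a === b) return true
  const aBody = a.slice(a.lastIndexOf('_') + 1)
  const bBody = b.slice(b.lastIndexOf('_') + 1)
  return aBody.length >= 4 && aBody === bBody
}

// Production base URL → wss + /v1/ (Envoy rewrites /v1/ → /v2/).
console.log(buildSdkUrl('https://api.example.com/', 'cse_abc'))
// wss://api.example.com/v1/session_ingress/ws/cse_abc

// Localhost → ws + /v2/ (direct to session-ingress).
console.log(buildSdkUrl('http://localhost:8080', 'cse_abc'))
// ws://localhost:8080/v2/session_ingress/ws/cse_abc

// session_* and cse_* IDs with the same body compare equal.
console.log(sameSessionId('session_0b1e', 'cse_0b1e')) // true
```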
371
src/buddy/CompanionSprite.tsx
Normal file
File diff suppressed because one or more lines are too long
133
src/buddy/companion.ts
Normal file
@ -0,0 +1,133 @@
import { getGlobalConfig } from '../utils/config.js'
import {
  type Companion,
  type CompanionBones,
  EYES,
  HATS,
  RARITIES,
  RARITY_WEIGHTS,
  type Rarity,
  SPECIES,
  STAT_NAMES,
  type StatName,
} from './types.js'

// Mulberry32 — tiny seeded PRNG, good enough for picking ducks
function mulberry32(seed: number): () => number {
  let a = seed >>> 0
  return function () {
    a |= 0
    a = (a + 0x6d2b79f5) | 0
    let t = Math.imul(a ^ (a >>> 15), 1 | a)
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296
  }
}

function hashString(s: string): number {
  if (typeof Bun !== 'undefined') {
    return Number(BigInt(Bun.hash(s)) & 0xffffffffn)
  }
  let h = 2166136261
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i)
    h = Math.imul(h, 16777619)
  }
  return h >>> 0
}

function pick<T>(rng: () => number, arr: readonly T[]): T {
  return arr[Math.floor(rng() * arr.length)]!
}

function rollRarity(rng: () => number): Rarity {
  const total = Object.values(RARITY_WEIGHTS).reduce((a, b) => a + b, 0)
  let roll = rng() * total
  for (const rarity of RARITIES) {
    roll -= RARITY_WEIGHTS[rarity]
    if (roll < 0) return rarity
  }
  return 'common'
}

const RARITY_FLOOR: Record<Rarity, number> = {
  common: 5,
  uncommon: 15,
  rare: 25,
  epic: 35,
  legendary: 50,
}

// One peak stat, one dump stat, rest scattered. Rarity bumps the floor.
function rollStats(
  rng: () => number,
  rarity: Rarity,
): Record<StatName, number> {
  const floor = RARITY_FLOOR[rarity]
  const peak = pick(rng, STAT_NAMES)
  let dump = pick(rng, STAT_NAMES)
  while (dump === peak) dump = pick(rng, STAT_NAMES)

  const stats = {} as Record<StatName, number>
  for (const name of STAT_NAMES) {
    if (name === peak) {
      stats[name] = Math.min(100, floor + 50 + Math.floor(rng() * 30))
    } else if (name === dump) {
      stats[name] = Math.max(1, floor - 10 + Math.floor(rng() * 15))
    } else {
      stats[name] = floor + Math.floor(rng() * 40)
    }
  }
  return stats
}

const SALT = 'friend-2026-401'

export type Roll = {
  bones: CompanionBones
  inspirationSeed: number
}

function rollFrom(rng: () => number): Roll {
  const rarity = rollRarity(rng)
  const bones: CompanionBones = {
    rarity,
    species: pick(rng, SPECIES),
    eye: pick(rng, EYES),
    hat: rarity === 'common' ? 'none' : pick(rng, HATS),
    shiny: rng() < 0.01,
    stats: rollStats(rng, rarity),
  }
  return { bones, inspirationSeed: Math.floor(rng() * 1e9) }
}

// Called from three hot paths (500ms sprite tick, per-keystroke PromptInput,
// per-turn observer) with the same userId → cache the deterministic result.
let rollCache: { key: string; value: Roll } | undefined
export function roll(userId: string): Roll {
  const key = userId + SALT
  if (rollCache?.key === key) return rollCache.value
  const value = rollFrom(mulberry32(hashString(key)))
  rollCache = { key, value }
  return value
}

export function rollWithSeed(seed: string): Roll {
  return rollFrom(mulberry32(hashString(seed)))
}

export function companionUserId(): string {
  const config = getGlobalConfig()
  return config.oauthAccount?.accountUuid ?? config.userID ?? 'anon'
}

// Regenerate bones from userId, merge with stored soul. Bones never persist
// so species renames and SPECIES-array edits can't break stored companions,
// and editing config.companion can't fake a rarity.
export function getCompanion(): Companion | undefined {
  const stored = getGlobalConfig().companion
  if (!stored) return undefined
  const { bones } = roll(companionUserId())
  // bones last so stale bones fields in old-format configs get overridden
  return { ...stored, ...bones }
}
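companion.ts leans on the determinism of the seeded PRNG: the same userId + salt must always produce the same companion, which is why bones are regenerated rather than persisted. Standalone copies of `mulberry32` and the FNV-1a fallback hash, showing the property (the user ID below is a made-up example):

```typescript
// Copies of mulberry32 and the non-Bun hashString fallback from
// src/buddy/companion.ts: same seed string → identical random stream.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0
  return function () {
    a |= 0
    a = (a + 0x6d2b79f5) | 0
    let t = Math.imul(a ^ (a >>> 15), 1 | a)
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296
  }
}

// FNV-1a, as used when Bun.hash is unavailable.
function hashString(s: string): number {
  let h = 2166136261
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i)
    h = Math.imul(h, 16777619)
  }
  return h >>> 0
}

// Two generators seeded from the same userId + salt stay in lockstep.
const g1 = mulberry32(hashString('user-123' + 'friend-2026-401'))
const g2 = mulberry32(hashString('user-123' + 'friend-2026-401'))
console.log(g1() === g2()) // true

// Every draw is a float in [0, 1), so pick()/rollRarity() indexing is safe.
const v = g1()
console.log(v >= 0 && v < 1) // true
```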
36
src/buddy/prompt.ts
Normal file
@ -0,0 +1,36 @@
import { feature } from 'bun:bundle'
import type { Message } from '../types/message.js'
import type { Attachment } from '../utils/attachments.js'
import { getGlobalConfig } from '../utils/config.js'
import { getCompanion } from './companion.js'

export function companionIntroText(name: string, species: string): string {
  return `# Companion

A small ${species} named ${name} sits beside the user's input box and occasionally comments in a speech bubble. You're not ${name} — it's a separate watcher.

When the user addresses ${name} directly (by name), its bubble will answer. Your job in that moment is to stay out of the way: respond in ONE line or less, or just answer any part of the message meant for you. Don't explain that you're not ${name} — they know. Don't narrate what ${name} might say — the bubble handles that.`
}

export function getCompanionIntroAttachment(
  messages: Message[] | undefined,
): Attachment[] {
  if (!feature('BUDDY')) return []
  const companion = getCompanion()
  if (!companion || getGlobalConfig().companionMuted) return []

  // Skip if already announced for this companion.
  for (const msg of messages ?? []) {
    if (msg.type !== 'attachment') continue
    if (msg.attachment.type !== 'companion_intro') continue
    if (msg.attachment.name === companion.name) return []
  }

  return [
    {
      type: 'companion_intro',
      name: companion.name,
      species: companion.species,
    },
  ]
}
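The "skip if already announced" scan in getCompanionIntroAttachment keys the dedup on the companion's name, so a renamed companion gets re-introduced while the same companion is announced once per transcript. A minimal sketch of that check, with simplified stand-in message shapes (the `Msg` type and the names `Maple`/`Clover` are illustrative, not the repo's real types):

```typescript
// Simplified stand-in for the Message/Attachment shapes used by
// getCompanionIntroAttachment; only the fields the dedup scan touches.
type Msg =
  | { type: 'attachment'; attachment: { type: string; name: string } }
  | { type: 'user'; text: string }

// Mirrors the loop in getCompanionIntroAttachment: true once the intro for
// this companion name is already in the transcript.
function alreadyAnnounced(messages: Msg[], name: string): boolean {
  for (const msg of messages) {
    if (msg.type !== 'attachment') continue
    if (msg.attachment.type !== 'companion_intro') continue
    if (msg.attachment.name === name) return true
  }
  return false
}

const history: Msg[] = [
  { type: 'user', text: 'hello' },
  { type: 'attachment', attachment: { type: 'companion_intro', name: 'Maple' } },
]
console.log(alreadyAnnounced(history, 'Maple')) // true — no second intro
console.log(alreadyAnnounced(history, 'Clover')) // false — renamed, re-announce
```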
514
src/buddy/sprites.ts
Normal file
@ -0,0 +1,514 @@
import type { CompanionBones, Eye, Hat, Species } from './types.js'
import {
  axolotl,
  blob,
  cactus,
  capybara,
  cat,
  chonk,
  dragon,
  duck,
  ghost,
  goose,
  mushroom,
  octopus,
  owl,
  penguin,
  rabbit,
  robot,
  snail,
  turtle,
} from './types.js'

// Each sprite is 5 lines tall, 12 wide (after {E}→1char substitution).
// Multiple frames per species for idle fidget animation.
// Line 0 is the hat slot — must be blank in frames 0-1; frame 2 may use it.
const BODIES: Record<Species, string[][]> = {
  [duck]: [
    [
      ' ',
      ' __ ',
      ' <({E} )___ ',
      ' ( ._> ',
      ' `--´ ',
    ],
    [
      ' ',
      ' __ ',
      ' <({E} )___ ',
      ' ( ._> ',
      ' `--´~ ',
    ],
    [
      ' ',
      ' __ ',
      ' <({E} )___ ',
      ' ( .__> ',
      ' `--´ ',
    ],
  ],
  [goose]: [
    [
      ' ',
      ' ({E}> ',
      ' || ',
      ' _(__)_ ',
      ' ^^^^ ',
    ],
    [
      ' ',
      ' ({E}> ',
      ' || ',
      ' _(__)_ ',
      ' ^^^^ ',
    ],
    [
      ' ',
      ' ({E}>> ',
      ' || ',
      ' _(__)_ ',
      ' ^^^^ ',
    ],
  ],
  [blob]: [
    [
      ' ',
      ' .----. ',
      ' ( {E} {E} ) ',
      ' ( ) ',
      ' `----´ ',
    ],
    [
      ' ',
      ' .------. ',
      ' ( {E} {E} ) ',
      ' ( ) ',
      ' `------´ ',
    ],
    [
      ' ',
      ' .--. ',
      ' ({E} {E}) ',
      ' ( ) ',
      ' `--´ ',
    ],
  ],
  [cat]: [
    [
      ' ',
      ' /\\_/\\ ',
      ' ( {E} {E}) ',
      ' ( ω ) ',
      ' (")_(") ',
    ],
    [
      ' ',
      ' /\\_/\\ ',
      ' ( {E} {E}) ',
      ' ( ω ) ',
      ' (")_(")~ ',
    ],
    [
      ' ',
      ' /\\-/\\ ',
      ' ( {E} {E}) ',
      ' ( ω ) ',
      ' (")_(") ',
    ],
  ],
  [dragon]: [
    [
      ' ',
      ' /^\\ /^\\ ',
      ' < {E} {E} > ',
      ' ( ~~ ) ',
      ' `-vvvv-´ ',
    ],
    [
      ' ',
      ' /^\\ /^\\ ',
      ' < {E} {E} > ',
      ' ( ) ',
      ' `-vvvv-´ ',
    ],
    [
      ' ~ ~ ',
      ' /^\\ /^\\ ',
      ' < {E} {E} > ',
      ' ( ~~ ) ',
      ' `-vvvv-´ ',
    ],
  ],
  [octopus]: [
    [
      ' ',
      ' .----. ',
      ' ( {E} {E} ) ',
      ' (______) ',
      ' /\\/\\/\\/\\ ',
    ],
    [
      ' ',
      ' .----. ',
      ' ( {E} {E} ) ',
      ' (______) ',
      ' \\/\\/\\/\\/ ',
    ],
    [
      ' o ',
      ' .----. ',
      ' ( {E} {E} ) ',
      ' (______) ',
      ' /\\/\\/\\/\\ ',
    ],
  ],
  [owl]: [
    [
      ' ',
      ' /\\ /\\ ',
      ' (({E})({E})) ',
      ' ( >< ) ',
      ' `----´ ',
    ],
    [
      ' ',
      ' /\\ /\\ ',
      ' (({E})({E})) ',
      ' ( >< ) ',
      ' .----. ',
    ],
    [
      ' ',
      ' /\\ /\\ ',
      ' (({E})(-)) ',
      ' ( >< ) ',
      ' `----´ ',
    ],
  ],
  [penguin]: [
    [
      ' ',
      ' .---. ',
      ' ({E}>{E}) ',
      ' /( )\\ ',
      ' `---´ ',
    ],
    [
      ' ',
      ' .---. ',
      ' ({E}>{E}) ',
      ' |( )| ',
      ' `---´ ',
    ],
    [
      ' .---. ',
      ' ({E}>{E}) ',
      ' /( )\\ ',
      ' `---´ ',
      ' ~ ~ ',
    ],
  ],
  [turtle]: [
    [
      ' ',
      ' _,--._ ',
      ' ( {E} {E} ) ',
      ' /[______]\\ ',
      ' `` `` ',
    ],
    [
      ' ',
      ' _,--._ ',
      ' ( {E} {E} ) ',
      ' /[______]\\ ',
      ' `` `` ',
    ],
    [
      ' ',
      ' _,--._ ',
      ' ( {E} {E} ) ',
      ' /[======]\\ ',
      ' `` `` ',
    ],
  ],
  [snail]: [
    [
      ' ',
      ' {E} .--. ',
      ' \\ ( @ ) ',
      ' \\_`--´ ',
      ' ~~~~~~~ ',
    ],
    [
      ' ',
      ' {E} .--. ',
      ' | ( @ ) ',
      ' \\_`--´ ',
      ' ~~~~~~~ ',
    ],
    [
      ' ',
      ' {E} .--. ',
      ' \\ ( @ ) ',
      ' \\_`--´ ',
      ' ~~~~~~ ',
    ],
  ],
  [ghost]: [
    [
      ' ',
      ' .----. ',
      ' / {E} {E} \\ ',
      ' | | ',
      ' ~`~``~`~ ',
    ],
    [
      ' ',
      ' .----. ',
      ' / {E} {E} \\ ',
      ' | | ',
      ' `~`~~`~` ',
    ],
    [
      ' ~ ~ ',
      ' .----. ',
      ' / {E} {E} \\ ',
      ' | | ',
      ' ~~`~~`~~ ',
    ],
  ],
  [axolotl]: [
    [
      ' ',
      '}~(______)~{',
      '}~({E} .. {E})~{',
      ' ( .--. ) ',
      ' (_/ \\_) ',
    ],
    [
      ' ',
      '~}(______){~',
      '~}({E} .. {E}){~',
      ' ( .--. ) ',
      ' (_/ \\_) ',
    ],
    [
      ' ',
      '}~(______)~{',
      '}~({E} .. {E})~{',
      ' ( -- ) ',
      ' ~_/ \\_~ ',
    ],
  ],
  [capybara]: [
    [
      ' ',
      ' n______n ',
      ' ( {E} {E} ) ',
      ' ( oo ) ',
      ' `------´ ',
    ],
    [
      ' ',
      ' n______n ',
      ' ( {E} {E} ) ',
      ' ( Oo ) ',
      ' `------´ ',
    ],
    [
      ' ~ ~ ',
      ' u______n ',
      ' ( {E} {E} ) ',
      ' ( oo ) ',
      ' `------´ ',
    ],
  ],
  [cactus]: [
    [
      ' ',
      ' n ____ n ',
      ' | |{E} {E}| | ',
      ' |_| |_| ',
      ' | | ',
    ],
    [
      ' ',
      ' ____ ',
      ' n |{E} {E}| n ',
      ' |_| |_| ',
      ' | | ',
    ],
    [
      ' n n ',
      ' | ____ | ',
      ' | |{E} {E}| | ',
      ' |_| |_| ',
      ' | | ',
    ],
  ],
  [robot]: [
    [
      ' ',
      ' .[||]. ',
      ' [ {E} {E} ] ',
      ' [ ==== ] ',
      ' `------´ ',
    ],
    [
      ' ',
      ' .[||]. ',
      ' [ {E} {E} ] ',
      ' [ -==- ] ',
      ' `------´ ',
    ],
    [
      ' * ',
      ' .[||]. ',
      ' [ {E} {E} ] ',
      ' [ ==== ] ',
      ' `------´ ',
    ],
  ],
  [rabbit]: [
    [
      ' ',
      ' (\\__/) ',
      ' ( {E} {E} ) ',
      ' =( .. )= ',
      ' (")__(") ',
    ],
    [
      ' ',
      ' (|__/) ',
      ' ( {E} {E} ) ',
      ' =( .. )= ',
      ' (")__(") ',
    ],
    [
      ' ',
      ' (\\__/) ',
      ' ( {E} {E} ) ',
      ' =( . . )= ',
      ' (")__(") ',
    ],
  ],
  [mushroom]: [
    [
      ' ',
      ' .-o-OO-o-. ',
      '(__________)',
      ' |{E} {E}| ',
      ' |____| ',
    ],
    [
      ' ',
      ' .-O-oo-O-. ',
      '(__________)',
      ' |{E} {E}| ',
      ' |____| ',
    ],
    [
      ' . o . ',
      ' .-o-OO-o-. ',
      '(__________)',
      ' |{E} {E}| ',
      ' |____| ',
    ],
  ],
  [chonk]: [
    [
      ' ',
      ' /\\ /\\ ',
      ' ( {E} {E} ) ',
      ' ( .. ) ',
      ' `------´ ',
    ],
    [
      ' ',
      ' /\\ /| ',
      ' ( {E} {E} ) ',
      ' ( .. ) ',
      ' `------´ ',
    ],
    [
      ' ',
      ' /\\ /\\ ',
      ' ( {E} {E} ) ',
      ' ( .. ) ',
      ' `------´~ ',
    ],
  ],
}

const HAT_LINES: Record<Hat, string> = {
  none: '',
  crown: ' \\^^^/ ',
  tophat: ' [___] ',
  propeller: ' -+- ',
  halo: ' ( ) ',
  wizard: ' /^\\ ',
  beanie: ' (___) ',
  tinyduck: ' ,> ',
}

export function renderSprite(bones: CompanionBones, frame = 0): string[] {
  const frames = BODIES[bones.species]
  const body = frames[frame % frames.length]!.map(line =>
    line.replaceAll('{E}', bones.eye),
  )
  const lines = [...body]
  // Only replace with hat if line 0 is empty (some fidget frames use it for smoke etc)
  if (bones.hat !== 'none' && !lines[0]!.trim()) {
    lines[0] = HAT_LINES[bones.hat]
  }
  // Drop blank hat slot — wastes a row in the Card and ambient sprite when
  // there's no hat and the frame isn't using it for smoke/antenna/etc.
  // Only safe when ALL frames have blank line 0; otherwise heights oscillate.
  if (!lines[0]!.trim() && frames.every(f => !f[0]!.trim())) lines.shift()
  return lines
}

export function spriteFrameCount(species: Species): number {
  return BODIES[species].length
}

export function renderFace(bones: CompanionBones): string {
  const eye: Eye = bones.eye
  switch (bones.species) {
    case duck:
    case goose:
      return `(${eye}>`
    case blob:
      return `(${eye}${eye})`
    case cat:
      return `=${eye}ω${eye}=`
    case dragon:
      return `<${eye}~${eye}>`
    case octopus:
      return `~(${eye}${eye})~`
    case owl:
      return `(${eye})(${eye})`
    case penguin:
      return `(${eye}>)`
    case turtle:
      return `[${eye}_${eye}]`
||||
case snail:
|
||||
return `${eye}(@)`
|
||||
case ghost:
|
||||
return `/${eye}${eye}\\`
|
||||
case axolotl:
|
||||
return `}${eye}.${eye}{`
|
||||
case capybara:
|
||||
return `(${eye}oo${eye})`
|
||||
case cactus:
|
||||
return `|${eye} ${eye}|`
|
||||
case robot:
|
||||
return `[${eye}${eye}]`
|
||||
case rabbit:
|
||||
return `(${eye}..${eye})`
|
||||
case mushroom:
|
||||
return `|${eye} ${eye}|`
|
||||
case chonk:
|
||||
return `(${eye}.${eye})`
|
||||
}
|
||||
}
|
||||
148
src/buddy/types.ts
Normal file
@ -0,0 +1,148 @@
export const RARITIES = [
  'common',
  'uncommon',
  'rare',
  'epic',
  'legendary',
] as const
export type Rarity = (typeof RARITIES)[number]

// One species name collides with a model-codename canary in excluded-strings.txt.
// The check greps build output (not source), so runtime-constructing the value keeps
// the literal out of the bundle while the check stays armed for the actual codename.
// All species encoded uniformly; `as` casts are type-position only (erased pre-bundle).
const c = String.fromCharCode
// biome-ignore format: keep the species list compact

export const duck = c(0x64, 0x75, 0x63, 0x6b) as 'duck'
export const goose = c(0x67, 0x6f, 0x6f, 0x73, 0x65) as 'goose'
export const blob = c(0x62, 0x6c, 0x6f, 0x62) as 'blob'
export const cat = c(0x63, 0x61, 0x74) as 'cat'
export const dragon = c(0x64, 0x72, 0x61, 0x67, 0x6f, 0x6e) as 'dragon'
export const octopus = c(0x6f, 0x63, 0x74, 0x6f, 0x70, 0x75, 0x73) as 'octopus'
export const owl = c(0x6f, 0x77, 0x6c) as 'owl'
export const penguin = c(0x70, 0x65, 0x6e, 0x67, 0x75, 0x69, 0x6e) as 'penguin'
export const turtle = c(0x74, 0x75, 0x72, 0x74, 0x6c, 0x65) as 'turtle'
export const snail = c(0x73, 0x6e, 0x61, 0x69, 0x6c) as 'snail'
export const ghost = c(0x67, 0x68, 0x6f, 0x73, 0x74) as 'ghost'
export const axolotl = c(0x61, 0x78, 0x6f, 0x6c, 0x6f, 0x74, 0x6c) as 'axolotl'
export const capybara = c(0x63, 0x61, 0x70, 0x79, 0x62, 0x61, 0x72, 0x61) as 'capybara'
export const cactus = c(0x63, 0x61, 0x63, 0x74, 0x75, 0x73) as 'cactus'
export const robot = c(0x72, 0x6f, 0x62, 0x6f, 0x74) as 'robot'
export const rabbit = c(0x72, 0x61, 0x62, 0x62, 0x69, 0x74) as 'rabbit'
export const mushroom = c(0x6d, 0x75, 0x73, 0x68, 0x72, 0x6f, 0x6f, 0x6d) as 'mushroom'
export const chonk = c(0x63, 0x68, 0x6f, 0x6e, 0x6b) as 'chonk'

export const SPECIES = [
  duck,
  goose,
  blob,
  cat,
  dragon,
  octopus,
  owl,
  penguin,
  turtle,
  snail,
  ghost,
  axolotl,
  capybara,
  cactus,
  robot,
  rabbit,
  mushroom,
  chonk,
] as const
export type Species = (typeof SPECIES)[number] // biome-ignore format: keep compact

export const EYES = ['·', '✦', '×', '◉', '@', '°'] as const
export type Eye = (typeof EYES)[number]

export const HATS = [
  'none',
  'crown',
  'tophat',
  'propeller',
  'halo',
  'wizard',
  'beanie',
  'tinyduck',
] as const
export type Hat = (typeof HATS)[number]

export const STAT_NAMES = [
  'DEBUGGING',
  'PATIENCE',
  'CHAOS',
  'WISDOM',
  'SNARK',
] as const
export type StatName = (typeof STAT_NAMES)[number]

// Deterministic parts — derived from hash(userId)
export type CompanionBones = {
  rarity: Rarity
  species: Species
  eye: Eye
  hat: Hat
  shiny: boolean
  stats: Record<StatName, number>
}

// Model-generated soul — stored in config after first hatch
export type CompanionSoul = {
  name: string
  personality: string
}

export type Companion = CompanionBones &
  CompanionSoul & {
    hatchedAt: number
  }

// What actually persists in config. Bones are regenerated from hash(userId)
// on every read so species renames don't break stored companions and users
// can't edit their way to a legendary.
export type StoredCompanion = CompanionSoul & { hatchedAt: number }

export const RARITY_WEIGHTS = {
  common: 60,
  uncommon: 25,
  rare: 10,
  epic: 4,
  legendary: 1,
} as const satisfies Record<Rarity, number>
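Since bones are regenerated from hash(userId) on every read, these weights presumably drive a deterministic weighted roll rather than a random one. A minimal sketch of how such a roll could work; `pickRarity` and `roll` are illustrative names for this sketch, not the repo's actual API:

```typescript
// Hypothetical sketch: map a uniform roll in [0, 1) — e.g. derived from
// hash(userId) — to a rarity by walking the cumulative weight table.
const WEIGHTS: Record<string, number> = {
  common: 60,
  uncommon: 25,
  rare: 10,
  epic: 4,
  legendary: 1,
}

function pickRarity(roll: number): string {
  const total = Object.values(WEIGHTS).reduce((a, b) => a + b, 0) // 100
  let acc = 0
  for (const [rarity, weight] of Object.entries(WEIGHTS)) {
    acc += weight
    if (roll * total < acc) return rarity // first bucket the roll falls into
  }
  return 'common' // unreachable for roll < 1
}
```

Because the roll comes from a stable hash rather than Math.random(), the same user always lands in the same bucket, which is what makes regenerating bones on every read safe.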

export const RARITY_STARS = {
  common: '★',
  uncommon: '★★',
  rare: '★★★',
  epic: '★★★★',
  legendary: '★★★★★',
} as const satisfies Record<Rarity, string>

export const RARITY_COLORS = {
  common: 'inactive',
  uncommon: 'success',
  rare: 'permission',
  epic: 'autoAccept',
  legendary: 'warning',
} as const satisfies Record<Rarity, keyof import('../utils/theme.js').Theme>
98
src/buddy/useBuddyNotification.tsx
Normal file
File diff suppressed because one or more lines are too long
754
src/commands.ts
Normal file
@ -0,0 +1,754 @@
// biome-ignore-all assist/source/organizeImports: ANT-ONLY import markers must not be reordered
import addDir from './commands/add-dir/index.js'
import autofixPr from './commands/autofix-pr/index.js'
import backfillSessions from './commands/backfill-sessions/index.js'
import btw from './commands/btw/index.js'
import goodClaude from './commands/good-claude/index.js'
import issue from './commands/issue/index.js'
import feedback from './commands/feedback/index.js'
import clear from './commands/clear/index.js'
import color from './commands/color/index.js'
import commit from './commands/commit.js'
import copy from './commands/copy/index.js'
import desktop from './commands/desktop/index.js'
import commitPushPr from './commands/commit-push-pr.js'
import compact from './commands/compact/index.js'
import config from './commands/config/index.js'
import { context, contextNonInteractive } from './commands/context/index.js'
import cost from './commands/cost/index.js'
import diff from './commands/diff/index.js'
import ctx_viz from './commands/ctx_viz/index.js'
import doctor from './commands/doctor/index.js'
import memory from './commands/memory/index.js'
import help from './commands/help/index.js'
import ide from './commands/ide/index.js'
import init from './commands/init.js'
import initVerifiers from './commands/init-verifiers.js'
import keybindings from './commands/keybindings/index.js'
import login from './commands/login/index.js'
import logout from './commands/logout/index.js'
import installGitHubApp from './commands/install-github-app/index.js'
import installSlackApp from './commands/install-slack-app/index.js'
import breakCache from './commands/break-cache/index.js'
import mcp from './commands/mcp/index.js'
import mobile from './commands/mobile/index.js'
import onboarding from './commands/onboarding/index.js'
import pr_comments from './commands/pr_comments/index.js'
import releaseNotes from './commands/release-notes/index.js'
import rename from './commands/rename/index.js'
import resume from './commands/resume/index.js'
import review, { ultrareview } from './commands/review.js'
import session from './commands/session/index.js'
import share from './commands/share/index.js'
import skills from './commands/skills/index.js'
import status from './commands/status/index.js'
import tasks from './commands/tasks/index.js'
import teleport from './commands/teleport/index.js'
/* eslint-disable @typescript-eslint/no-require-imports */
const agentsPlatform =
  process.env.USER_TYPE === 'ant'
    ? require('./commands/agents-platform/index.js').default
    : null
/* eslint-enable @typescript-eslint/no-require-imports */
import securityReview from './commands/security-review.js'
import bughunter from './commands/bughunter/index.js'
import terminalSetup from './commands/terminalSetup/index.js'
import usage from './commands/usage/index.js'
import theme from './commands/theme/index.js'
import vim from './commands/vim/index.js'
import { feature } from 'bun:bundle'
// Dead code elimination: conditional imports
/* eslint-disable @typescript-eslint/no-require-imports */
const proactive =
  feature('PROACTIVE') || feature('KAIROS')
    ? require('./commands/proactive.js').default
    : null
const briefCommand =
  feature('KAIROS') || feature('KAIROS_BRIEF')
    ? require('./commands/brief.js').default
    : null
const assistantCommand = feature('KAIROS')
  ? require('./commands/assistant/index.js').default
  : null
const bridge = feature('BRIDGE_MODE')
  ? require('./commands/bridge/index.js').default
  : null
const remoteControlServerCommand =
  feature('DAEMON') && feature('BRIDGE_MODE')
    ? require('./commands/remoteControlServer/index.js').default
    : null
const voiceCommand = feature('VOICE_MODE')
  ? require('./commands/voice/index.js').default
  : null
const forceSnip = feature('HISTORY_SNIP')
  ? require('./commands/force-snip.js').default
  : null
const workflowsCmd = feature('WORKFLOW_SCRIPTS')
  ? (
      require('./commands/workflows/index.js') as typeof import('./commands/workflows/index.js')
    ).default
  : null
const webCmd = feature('CCR_REMOTE_SETUP')
  ? (
      require('./commands/remote-setup/index.js') as typeof import('./commands/remote-setup/index.js')
    ).default
  : null
const clearSkillIndexCache = feature('EXPERIMENTAL_SKILL_SEARCH')
  ? (
      require('./services/skillSearch/localSearch.js') as typeof import('./services/skillSearch/localSearch.js')
    ).clearSkillIndexCache
  : null
const subscribePr = feature('KAIROS_GITHUB_WEBHOOKS')
  ? require('./commands/subscribe-pr.js').default
  : null
const ultraplan = feature('ULTRAPLAN')
  ? require('./commands/ultraplan.js').default
  : null
const torch = feature('TORCH') ? require('./commands/torch.js').default : null
const peersCmd = feature('UDS_INBOX')
  ? (
      require('./commands/peers/index.js') as typeof import('./commands/peers/index.js')
    ).default
  : null
const forkCmd = feature('FORK_SUBAGENT')
  ? (
      require('./commands/fork/index.js') as typeof import('./commands/fork/index.js')
    ).default
  : null
const buddy = feature('BUDDY')
  ? (
      require('./commands/buddy/index.js') as typeof import('./commands/buddy/index.js')
    ).default
  : null
/* eslint-enable @typescript-eslint/no-require-imports */
import thinkback from './commands/thinkback/index.js'
import thinkbackPlay from './commands/thinkback-play/index.js'
import permissions from './commands/permissions/index.js'
import plan from './commands/plan/index.js'
import fast from './commands/fast/index.js'
import passes from './commands/passes/index.js'
import privacySettings from './commands/privacy-settings/index.js'
import hooks from './commands/hooks/index.js'
import files from './commands/files/index.js'
import branch from './commands/branch/index.js'
import agents from './commands/agents/index.js'
import plugin from './commands/plugin/index.js'
import reloadPlugins from './commands/reload-plugins/index.js'
import rewind from './commands/rewind/index.js'
import heapDump from './commands/heapdump/index.js'
import mockLimits from './commands/mock-limits/index.js'
import bridgeKick from './commands/bridge-kick.js'
import version from './commands/version.js'
import summary from './commands/summary/index.js'
import {
  resetLimits,
  resetLimitsNonInteractive,
} from './commands/reset-limits/index.js'
import antTrace from './commands/ant-trace/index.js'
import perfIssue from './commands/perf-issue/index.js'
import sandboxToggle from './commands/sandbox-toggle/index.js'
import chrome from './commands/chrome/index.js'
import stickers from './commands/stickers/index.js'
import advisor from './commands/advisor.js'
import { logError } from './utils/log.js'
import { toError } from './utils/errors.js'
import { logForDebugging } from './utils/debug.js'
import {
  getSkillDirCommands,
  clearSkillCaches,
  getDynamicSkills,
} from './skills/loadSkillsDir.js'
import { getBundledSkills } from './skills/bundledSkills.js'
import { getBuiltinPluginSkillCommands } from './plugins/builtinPlugins.js'
import {
  getPluginCommands,
  clearPluginCommandCache,
  getPluginSkills,
  clearPluginSkillsCache,
} from './utils/plugins/loadPluginCommands.js'
import memoize from 'lodash-es/memoize.js'
import { isUsing3PServices, isClaudeAISubscriber } from './utils/auth.js'
import { isFirstPartyAnthropicBaseUrl } from './utils/model/providers.js'
import env from './commands/env/index.js'
import exit from './commands/exit/index.js'
import exportCommand from './commands/export/index.js'
import model from './commands/model/index.js'
import tag from './commands/tag/index.js'
import outputStyle from './commands/output-style/index.js'
import remoteEnv from './commands/remote-env/index.js'
import upgrade from './commands/upgrade/index.js'
import {
  extraUsage,
  extraUsageNonInteractive,
} from './commands/extra-usage/index.js'
import rateLimitOptions from './commands/rate-limit-options/index.js'
import statusline from './commands/statusline.js'
import effort from './commands/effort/index.js'
import stats from './commands/stats/index.js'
// insights.ts is 113KB (3200 lines, includes diffLines/html rendering). Lazy
// shim defers the heavy module until /insights is actually invoked.
const usageReport: Command = {
  type: 'prompt',
  name: 'insights',
  description: 'Generate a report analyzing your Claude Code sessions',
  contentLength: 0,
  progressMessage: 'analyzing your sessions',
  source: 'builtin',
  async getPromptForCommand(args, context) {
    const real = (await import('./commands/insights.js')).default
    if (real.type !== 'prompt') throw new Error('unreachable')
    return real.getPromptForCommand(args, context)
  },
}
import oauthRefresh from './commands/oauth-refresh/index.js'
import debugToolCall from './commands/debug-tool-call/index.js'
import { getSettingSourceName } from './utils/settings/constants.js'
import {
  type Command,
  getCommandName,
  isCommandEnabled,
} from './types/command.js'

// Re-export types from the centralized location
export type {
  Command,
  CommandBase,
  CommandResultDisplay,
  LocalCommandResult,
  LocalJSXCommandContext,
  PromptCommand,
  ResumeEntrypoint,
} from './types/command.js'
export { getCommandName, isCommandEnabled } from './types/command.js'

// Commands that get eliminated from the external build
export const INTERNAL_ONLY_COMMANDS = [
  backfillSessions,
  breakCache,
  bughunter,
  commit,
  commitPushPr,
  ctx_viz,
  goodClaude,
  issue,
  initVerifiers,
  ...(forceSnip ? [forceSnip] : []),
  mockLimits,
  bridgeKick,
  version,
  ...(subscribePr ? [subscribePr] : []),
  resetLimits,
  resetLimitsNonInteractive,
  onboarding,
  share,
  summary,
  teleport,
  antTrace,
  perfIssue,
  env,
  oauthRefresh,
  debugToolCall,
  agentsPlatform,
  autofixPr,
].filter(Boolean)

// Declared as a function so that we don't run this until getCommands is called,
// since underlying functions read from config, which can't be read at module initialization time
const COMMANDS = memoize((): Command[] => [
  addDir,
  advisor,
  agents,
  branch,
  btw,
  chrome,
  clear,
  color,
  compact,
  config,
  copy,
  desktop,
  context,
  contextNonInteractive,
  cost,
  diff,
  doctor,
  effort,
  exit,
  fast,
  files,
  heapDump,
  help,
  ide,
  init,
  keybindings,
  installGitHubApp,
  installSlackApp,
  mcp,
  memory,
  mobile,
  model,
  outputStyle,
  remoteEnv,
  plugin,
  pr_comments,
  releaseNotes,
  reloadPlugins,
  rename,
  resume,
  session,
  skills,
  stats,
  status,
  statusline,
  stickers,
  tag,
  theme,
  feedback,
  review,
  ultrareview,
  rewind,
  securityReview,
  terminalSetup,
  upgrade,
  extraUsage,
  extraUsageNonInteractive,
  rateLimitOptions,
  usage,
  usageReport,
  vim,
  ...(webCmd ? [webCmd] : []),
  ...(forkCmd ? [forkCmd] : []),
  ...(buddy ? [buddy] : []),
  ...(proactive ? [proactive] : []),
  ...(briefCommand ? [briefCommand] : []),
  ...(assistantCommand ? [assistantCommand] : []),
  ...(bridge ? [bridge] : []),
  ...(remoteControlServerCommand ? [remoteControlServerCommand] : []),
  ...(voiceCommand ? [voiceCommand] : []),
  thinkback,
  thinkbackPlay,
  permissions,
  plan,
  privacySettings,
  hooks,
  exportCommand,
  sandboxToggle,
  ...(!isUsing3PServices() ? [logout, login()] : []),
  passes,
  ...(peersCmd ? [peersCmd] : []),
  tasks,
  ...(workflowsCmd ? [workflowsCmd] : []),
  ...(torch ? [torch] : []),
  ...(ultraplan ? [ultraplan] : []),
  ...(process.env.USER_TYPE === 'ant' && !process.env.IS_DEMO
    ? INTERNAL_ONLY_COMMANDS
    : []),
])

export const builtInCommandNames = memoize(
  (): Set<string> =>
    new Set(COMMANDS().flatMap(_ => [_.name, ...(_.aliases ?? [])])),
)

async function getSkills(cwd: string): Promise<{
  skillDirCommands: Command[]
  pluginSkills: Command[]
  bundledSkills: Command[]
  builtinPluginSkills: Command[]
}> {
  try {
    const [skillDirCommands, pluginSkills] = await Promise.all([
      getSkillDirCommands(cwd).catch(err => {
        logError(toError(err))
        logForDebugging(
          'Skill directory commands failed to load, continuing without them',
        )
        return []
      }),
      getPluginSkills().catch(err => {
        logError(toError(err))
        logForDebugging('Plugin skills failed to load, continuing without them')
        return []
      }),
    ])
    // Bundled skills are registered synchronously at startup
    const bundledSkills = getBundledSkills()
    // Built-in plugin skills come from enabled built-in plugins
    const builtinPluginSkills = getBuiltinPluginSkillCommands()
    logForDebugging(
      `getSkills returning: ${skillDirCommands.length} skill dir commands, ${pluginSkills.length} plugin skills, ${bundledSkills.length} bundled skills, ${builtinPluginSkills.length} builtin plugin skills`,
    )
    return {
      skillDirCommands,
      pluginSkills,
      bundledSkills,
      builtinPluginSkills,
    }
  } catch (err) {
    // This should never happen since we catch at the Promise level, but defensive
    logError(toError(err))
    logForDebugging('Unexpected error in getSkills, returning empty')
    return {
      skillDirCommands: [],
      pluginSkills: [],
      bundledSkills: [],
      builtinPluginSkills: [],
    }
  }
}

/* eslint-disable @typescript-eslint/no-require-imports */
const getWorkflowCommands = feature('WORKFLOW_SCRIPTS')
  ? (
      require('./tools/WorkflowTool/createWorkflowCommand.js') as typeof import('./tools/WorkflowTool/createWorkflowCommand.js')
    ).getWorkflowCommands
  : null
/* eslint-enable @typescript-eslint/no-require-imports */

/**
 * Filters commands by their declared `availability` (auth/provider requirement).
 * Commands without `availability` are treated as universal.
 * This runs before `isEnabled()` so that provider-gated commands are hidden
 * regardless of feature-flag state.
 *
 * Not memoized — auth state can change mid-session (e.g. after /login),
 * so this must be re-evaluated on every getCommands() call.
 */
export function meetsAvailabilityRequirement(cmd: Command): boolean {
  if (!cmd.availability) return true
  for (const a of cmd.availability) {
    switch (a) {
      case 'claude-ai':
        if (isClaudeAISubscriber()) return true
        break
      case 'console':
        // Console API key user = direct 1P API customer (not 3P, not claude.ai).
        // Excludes 3P (Bedrock/Vertex/Foundry) who don't set ANTHROPIC_BASE_URL
        // and gateway users who proxy through a custom base URL.
        if (
          !isClaudeAISubscriber() &&
          !isUsing3PServices() &&
          isFirstPartyAnthropicBaseUrl()
        )
          return true
        break
      default: {
        const _exhaustive: never = a
        void _exhaustive
        break
      }
    }
  }
  return false
}

/**
 * Loads all command sources (skills, plugins, workflows). Memoized by cwd
 * because loading is expensive (disk I/O, dynamic imports).
 */
const loadAllCommands = memoize(async (cwd: string): Promise<Command[]> => {
  const [
    { skillDirCommands, pluginSkills, bundledSkills, builtinPluginSkills },
    pluginCommands,
    workflowCommands,
  ] = await Promise.all([
    getSkills(cwd),
    getPluginCommands(),
    getWorkflowCommands ? getWorkflowCommands(cwd) : Promise.resolve([]),
  ])

  return [
    ...bundledSkills,
    ...builtinPluginSkills,
    ...skillDirCommands,
    ...workflowCommands,
    ...pluginCommands,
    ...pluginSkills,
    ...COMMANDS(),
  ]
})

/**
 * Returns commands available to the current user. The expensive loading is
 * memoized, but availability and isEnabled checks run fresh every call so
 * auth changes (e.g. /login) take effect immediately.
 */
export async function getCommands(cwd: string): Promise<Command[]> {
  const allCommands = await loadAllCommands(cwd)

  // Get dynamic skills discovered during file operations
  const dynamicSkills = getDynamicSkills()

  // Build base commands without dynamic skills
  const baseCommands = allCommands.filter(
    _ => meetsAvailabilityRequirement(_) && isCommandEnabled(_),
  )

  if (dynamicSkills.length === 0) {
    return baseCommands
  }

  // Dedupe dynamic skills - only add if not already present
  const baseCommandNames = new Set(baseCommands.map(c => c.name))
  const uniqueDynamicSkills = dynamicSkills.filter(
    s =>
      !baseCommandNames.has(s.name) &&
      meetsAvailabilityRequirement(s) &&
      isCommandEnabled(s),
  )

  if (uniqueDynamicSkills.length === 0) {
    return baseCommands
  }

  // Insert dynamic skills after plugin skills but before built-in commands
  const builtInNames = new Set(COMMANDS().map(c => c.name))
  const insertIndex = baseCommands.findIndex(c => builtInNames.has(c.name))

  if (insertIndex === -1) {
    return [...baseCommands, ...uniqueDynamicSkills]
  }

  return [
    ...baseCommands.slice(0, insertIndex),
    ...uniqueDynamicSkills,
    ...baseCommands.slice(insertIndex),
  ]
}

/**
 * Clears only the memoization caches for commands, WITHOUT clearing skill caches.
 * Use this when dynamic skills are added to invalidate cached command lists.
 */
export function clearCommandMemoizationCaches(): void {
  loadAllCommands.cache?.clear?.()
  getSkillToolCommands.cache?.clear?.()
  getSlashCommandToolSkills.cache?.clear?.()
  // getSkillIndex in skillSearch/localSearch.ts is a separate memoization layer
  // built ON TOP of getSkillToolCommands/getCommands. Clearing only the inner
  // caches is a no-op for the outer — lodash memoize returns the cached result
  // without ever reaching the cleared inners. Must clear it explicitly.
  clearSkillIndexCache?.()
}
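The layering trap that comment describes can be shown with a toy memoize; this is a sketch standing in for lodash's memoize, and `inner`/`outer` are illustrative names, not identifiers from this repo:

```typescript
// Toy memoize (stand-in for lodash-es/memoize): caches the first result
// and exposes the cache so callers can clear it.
function memoizeFn<T>(fn: () => T) {
  const cache = new Map<string, T>()
  const wrapped = () => {
    if (!cache.has('k')) cache.set('k', fn())
    return cache.get('k')!
  }
  return Object.assign(wrapped, { cache })
}

let loads = 0
const inner = memoizeFn(() => { loads++; return ['cmd-a'] })
const outer = memoizeFn(() => inner().length) // memoized ON TOP of inner

outer() // runs both layers once: loads === 1
inner.cache.clear() // clear only the inner layer…
outer() // …outer short-circuits; inner is never re-invoked, loads stays 1
outer.cache.clear() // clearing the OUTER layer is what forces a reload
outer() // loads === 2
```

This is why clearing the inner caches alone would be a no-op for a search index memoized on top of getCommands(): the outer cache must be cleared explicitly.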

export function clearCommandsCache(): void {
  clearCommandMemoizationCaches()
  clearPluginCommandCache()
  clearPluginSkillsCache()
  clearSkillCaches()
}

/**
 * Filter AppState.mcp.commands to MCP-provided skills (prompt-type,
 * model-invocable, loaded from MCP). These live outside getCommands() so
 * callers that need MCP skills in their skill index thread them through
 * separately.
 */
export function getMcpSkillCommands(
  mcpCommands: readonly Command[],
): readonly Command[] {
  if (feature('MCP_SKILLS')) {
    return mcpCommands.filter(
      cmd =>
        cmd.type === 'prompt' &&
        cmd.loadedFrom === 'mcp' &&
        !cmd.disableModelInvocation,
    )
  }
  return []
}

// SkillTool shows ALL prompt-based commands that the model can invoke.
// This includes both skills (from /skills/) and commands (from /commands/).
export const getSkillToolCommands = memoize(
  async (cwd: string): Promise<Command[]> => {
    const allCommands = await getCommands(cwd)
    return allCommands.filter(
      cmd =>
        cmd.type === 'prompt' &&
        !cmd.disableModelInvocation &&
        cmd.source !== 'builtin' &&
        // Always include skills from /skills/ dirs, bundled skills, and legacy /commands/ entries
        // (they all get an auto-derived description from the first line if frontmatter is missing).
        // Plugin/MCP commands still require an explicit description to appear in the listing.
        (cmd.loadedFrom === 'bundled' ||
          cmd.loadedFrom === 'skills' ||
          cmd.loadedFrom === 'commands_DEPRECATED' ||
          cmd.hasUserSpecifiedDescription ||
          cmd.whenToUse),
    )
  },
)

// Filters commands to include only skills. Skills are commands that provide
// specialized capabilities for the model to use. They are identified by
// loadedFrom being 'skills', 'plugin', or 'bundled', or having disableModelInvocation set.
export const getSlashCommandToolSkills = memoize(
  async (cwd: string): Promise<Command[]> => {
    try {
      const allCommands = await getCommands(cwd)
      return allCommands.filter(
        cmd =>
          cmd.type === 'prompt' &&
          cmd.source !== 'builtin' &&
          (cmd.hasUserSpecifiedDescription || cmd.whenToUse) &&
          (cmd.loadedFrom === 'skills' ||
            cmd.loadedFrom === 'plugin' ||
            cmd.loadedFrom === 'bundled' ||
            cmd.disableModelInvocation),
      )
    } catch (error) {
      logError(toError(error))
      // Return empty array rather than throwing - skills are non-critical.
      // This prevents skill loading failures from breaking the entire system.
      logForDebugging('Returning empty skills array due to load failure')
      return []
    }
  },
)

/**
 * Commands that are safe to use in remote mode (--remote).
 * These only affect local TUI state and don't depend on local filesystem,
 * git, shell, IDE, MCP, or other local execution context.
 *
 * Used in two places:
 * 1. Pre-filtering commands in main.tsx before REPL renders (prevents race with CCR init)
 * 2. Preserving local-only commands in REPL's handleRemoteInit after CCR filters
 */
export const REMOTE_SAFE_COMMANDS: Set<Command> = new Set([
  session, // Shows QR code / URL for remote session
  exit, // Exit the TUI
  clear, // Clear screen
  help, // Show help
  theme, // Change terminal theme
  color, // Change agent color
  vim, // Toggle vim mode
  cost, // Show session cost (local cost tracking)
  usage, // Show usage info
  copy, // Copy last message
  btw, // Quick note
  feedback, // Send feedback
  plan, // Plan mode toggle
  keybindings, // Keybinding management
  statusline, // Status line toggle
  stickers, // Stickers
  mobile, // Mobile QR code
])

/**
 * Builtin commands of type 'local' that ARE safe to execute when received
 * over the Remote Control bridge. These produce text output that streams
 * back to the mobile/web client and have no terminal-only side effects.
 *
 * 'local-jsx' commands are blocked by type (they render Ink UI) and
 * 'prompt' commands are allowed by type (they expand to text sent to the
 * model) — this set only gates 'local' commands.
 *
 * When adding a new 'local' command that should work from mobile, add it
 * here. The default is blocked.
 */
export const BRIDGE_SAFE_COMMANDS: Set<Command> = new Set(
  [
    compact, // Shrink context — useful mid-session from a phone
    clear, // Wipe transcript
    cost, // Show session cost
    summary, // Summarize conversation
    releaseNotes, // Show changelog
    files, // List tracked files
  ].filter((c): c is Command => c !== null),
)

/**
 * Whether a slash command is safe to execute when its input arrived over the
 * Remote Control bridge (mobile/web client).
 *
 * PR #19134 blanket-blocked all slash commands from bridge inbound because
 * `/model` from iOS was popping the local Ink picker. This predicate relaxes
 * that with an explicit allowlist: 'prompt' commands (skills) expand to text
 * and are safe by construction; 'local' commands need an explicit opt-in via
 * BRIDGE_SAFE_COMMANDS; 'local-jsx' commands render Ink UI and stay blocked.
 */
export function isBridgeSafeCommand(cmd: Command): boolean {
  if (cmd.type === 'local-jsx') return false
  if (cmd.type === 'prompt') return true
  return BRIDGE_SAFE_COMMANDS.has(cmd)
}

/**
 * Filter commands to only include those safe for remote mode.
 * Used to pre-filter commands when rendering the REPL in --remote mode,
 * preventing local-only commands from being briefly available before
 * the CCR init message arrives.
 */
export function filterCommandsForRemoteMode(commands: Command[]): Command[] {
|
||||
return commands.filter(cmd => REMOTE_SAFE_COMMANDS.has(cmd))
|
||||
}
|
||||
|
||||
export function findCommand(
|
||||
commandName: string,
|
||||
commands: Command[],
|
||||
): Command | undefined {
|
||||
return commands.find(
|
||||
_ =>
|
||||
_.name === commandName ||
|
||||
getCommandName(_) === commandName ||
|
||||
_.aliases?.includes(commandName),
|
||||
)
|
||||
}
|
||||
|
||||
export function hasCommand(commandName: string, commands: Command[]): boolean {
|
||||
return findCommand(commandName, commands) !== undefined
|
||||
}
|
||||
|
||||
export function getCommand(commandName: string, commands: Command[]): Command {
|
||||
const command = findCommand(commandName, commands)
|
||||
if (!command) {
|
||||
throw ReferenceError(
|
||||
`Command ${commandName} not found. Available commands: ${commands
|
||||
.map(_ => {
|
||||
const name = getCommandName(_)
|
||||
return _.aliases ? `${name} (aliases: ${_.aliases.join(', ')})` : name
|
||||
})
|
||||
.sort((a, b) => a.localeCompare(b))
|
||||
.join(', ')}`,
|
||||
)
|
||||
}
|
||||
|
||||
return command
|
||||
}
|
||||
|
||||
/**
|
||||
* Formats a command's description with its source annotation for user-facing UI.
|
||||
* Use this in typeahead, help screens, and other places where users need to see
|
||||
* where a command comes from.
|
||||
*
|
||||
* For model-facing prompts (like SkillTool), use cmd.description directly.
|
||||
*/
|
||||
export function formatDescriptionWithSource(cmd: Command): string {
|
||||
if (cmd.type !== 'prompt') {
|
||||
return cmd.description
|
||||
}
|
||||
|
||||
if (cmd.kind === 'workflow') {
|
||||
return `${cmd.description} (workflow)`
|
||||
}
|
||||
|
||||
if (cmd.source === 'plugin') {
|
||||
const pluginName = cmd.pluginInfo?.pluginManifest.name
|
||||
if (pluginName) {
|
||||
return `(${pluginName}) ${cmd.description}`
|
||||
}
|
||||
return `${cmd.description} (plugin)`
|
||||
}
|
||||
|
||||
if (cmd.source === 'builtin' || cmd.source === 'mcp') {
|
||||
return cmd.description
|
||||
}
|
||||
|
||||
if (cmd.source === 'bundled') {
|
||||
return `${cmd.description} (bundled)`
|
||||
}
|
||||
|
||||
return `${cmd.description} (${getSettingSourceName(cmd.source)})`
|
||||
}
|
||||
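The bridge-safety gate above reduces to a three-way decision on command type. A minimal self-contained sketch (the `Cmd` shape and the allowlist names here are illustrative stand-ins for the repo's `Command` type and `BRIDGE_SAFE_COMMANDS`):

```typescript
// Stand-in for the repo's Command type; only the fields the gate inspects.
type Cmd = { name: string; type: 'local' | 'local-jsx' | 'prompt' }

// Hypothetical allowlist keyed by name for simplicity;
// the real set holds command objects.
const bridgeSafeNames = new Set(['compact', 'clear', 'cost'])

function isBridgeSafe(cmd: Cmd): boolean {
  if (cmd.type === 'local-jsx') return false // would render Ink UI locally
  if (cmd.type === 'prompt') return true // expands to text sent to the model
  return bridgeSafeNames.has(cmd.name) // 'local' needs explicit opt-in
}

console.log(isBridgeSafe({ name: 'model', type: 'local-jsx' })) // false
console.log(isBridgeSafe({ name: 'my-skill', type: 'prompt' })) // true
console.log(isBridgeSafe({ name: 'cost', type: 'local' })) // true
```

The deny-by-default branch at the end is what keeps a newly added 'local' command blocked until someone deliberately opts it in.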
126
src/commands/add-dir/add-dir.tsx
Normal file
File diff suppressed because one or more lines are too long
11
src/commands/add-dir/index.ts
Normal file
@ -0,0 +1,11 @@
import type { Command } from '../../commands.js'

const addDir = {
  type: 'local-jsx',
  name: 'add-dir',
  description: 'Add a new working directory',
  argumentHint: '<path>',
  load: () => import('./add-dir.js'),
} satisfies Command

export default addDir
110
src/commands/add-dir/validation.ts
Normal file
@ -0,0 +1,110 @@
import chalk from 'chalk'
import { stat } from 'fs/promises'
import { dirname, resolve } from 'path'
import type { ToolPermissionContext } from '../../Tool.js'
import { getErrnoCode } from '../../utils/errors.js'
import { expandPath } from '../../utils/path.js'
import {
  allWorkingDirectories,
  pathInWorkingPath,
} from '../../utils/permissions/filesystem.js'

export type AddDirectoryResult =
  | {
      resultType: 'success'
      absolutePath: string
    }
  | {
      resultType: 'emptyPath'
    }
  | {
      resultType: 'pathNotFound' | 'notADirectory'
      directoryPath: string
      absolutePath: string
    }
  | {
      resultType: 'alreadyInWorkingDirectory'
      directoryPath: string
      workingDir: string
    }

export async function validateDirectoryForWorkspace(
  directoryPath: string,
  permissionContext: ToolPermissionContext,
): Promise<AddDirectoryResult> {
  if (!directoryPath) {
    return {
      resultType: 'emptyPath',
    }
  }

  // resolve() strips the trailing slash expandPath can leave on absolute
  // inputs, so /foo and /foo/ map to the same storage key (CC-33).
  const absolutePath = resolve(expandPath(directoryPath))

  // Check if path exists and is a directory (single syscall)
  try {
    const stats = await stat(absolutePath)
    if (!stats.isDirectory()) {
      return {
        resultType: 'notADirectory',
        directoryPath,
        absolutePath,
      }
    }
  } catch (e: unknown) {
    const code = getErrnoCode(e)
    // Match prior existsSync() semantics: treat any of these as "not found"
    // rather than re-throwing. EACCES/EPERM in particular must not crash
    // startup when a settings-configured additional directory is inaccessible.
    if (
      code === 'ENOENT' ||
      code === 'ENOTDIR' ||
      code === 'EACCES' ||
      code === 'EPERM'
    ) {
      return {
        resultType: 'pathNotFound',
        directoryPath,
        absolutePath,
      }
    }
    throw e
  }

  // Get current permission context
  const currentWorkingDirs = allWorkingDirectories(permissionContext)

  // Check if already within an existing working directory
  for (const workingDir of currentWorkingDirs) {
    if (pathInWorkingPath(absolutePath, workingDir)) {
      return {
        resultType: 'alreadyInWorkingDirectory',
        directoryPath,
        workingDir,
      }
    }
  }

  return {
    resultType: 'success',
    absolutePath,
  }
}

export function addDirHelpMessage(result: AddDirectoryResult): string {
  switch (result.resultType) {
    case 'emptyPath':
      return 'Please provide a directory path.'
    case 'pathNotFound':
      return `Path ${chalk.bold(result.absolutePath)} was not found.`
    case 'notADirectory': {
      const parentDir = dirname(result.absolutePath)
      return `${chalk.bold(result.directoryPath)} is not a directory. Did you mean to add the parent directory ${chalk.bold(parentDir)}?`
    }
    case 'alreadyInWorkingDirectory':
      return `${chalk.bold(result.directoryPath)} is already accessible within the existing working directory ${chalk.bold(result.workingDir)}.`
    case 'success':
      return `Added ${chalk.bold(result.absolutePath)} as a working directory.`
  }
}
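The CC-33 comment above leans on Node's `path.resolve` dropping trailing slashes so `/foo` and `/foo/` collapse to one storage key. A quick self-contained check of that behavior plus the errno classification (with a plain `code`-property read standing in for the repo's `getErrnoCode`):

```typescript
import { resolve } from 'path'

// resolve() drops the trailing slash, so both spellings map to one key.
console.log(resolve('/foo') === resolve('/foo/')) // true

// Stand-in for getErrnoCode: Node errno errors expose a string `code`.
function treatedAsNotFound(e: unknown): boolean {
  const code = (e as { code?: string })?.code
  // EACCES/EPERM are folded into "not found" so an inaccessible
  // settings-configured directory cannot crash startup.
  return (
    code === 'ENOENT' ||
    code === 'ENOTDIR' ||
    code === 'EACCES' ||
    code === 'EPERM'
  )
}

console.log(treatedAsNotFound({ code: 'ENOENT' })) // true
console.log(treatedAsNotFound({ code: 'EMFILE' })) // false: re-thrown
```

Anything outside the four listed codes (e.g. `EMFILE`) still propagates, which matches the `throw e` fallthrough in `validateDirectoryForWorkspace`.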
109
src/commands/advisor.ts
Normal file
@ -0,0 +1,109 @@
import type { Command } from '../commands.js'
import type { LocalCommandCall } from '../types/command.js'
import {
  canUserConfigureAdvisor,
  isValidAdvisorModel,
  modelSupportsAdvisor,
} from '../utils/advisor.js'
import {
  getDefaultMainLoopModelSetting,
  normalizeModelStringForAPI,
  parseUserSpecifiedModel,
} from '../utils/model/model.js'
import { validateModel } from '../utils/model/validateModel.js'
import { updateSettingsForSource } from '../utils/settings/settings.js'

const call: LocalCommandCall = async (args, context) => {
  const arg = args.trim().toLowerCase()
  const baseModel = parseUserSpecifiedModel(
    context.getAppState().mainLoopModel ?? getDefaultMainLoopModelSetting(),
  )

  if (!arg) {
    const current = context.getAppState().advisorModel
    if (!current) {
      return {
        type: 'text',
        value:
          'Advisor: not set\nUse "/advisor <model>" to enable (e.g. "/advisor opus").',
      }
    }
    if (!modelSupportsAdvisor(baseModel)) {
      return {
        type: 'text',
        value: `Advisor: ${current} (inactive)\nThe current model (${baseModel}) does not support advisors.`,
      }
    }
    return {
      type: 'text',
      value: `Advisor: ${current}\nUse "/advisor unset" to disable or "/advisor <model>" to change.`,
    }
  }

  if (arg === 'unset' || arg === 'off') {
    const prev = context.getAppState().advisorModel
    context.setAppState(s => {
      if (s.advisorModel === undefined) return s
      return { ...s, advisorModel: undefined }
    })
    updateSettingsForSource('userSettings', { advisorModel: undefined })
    return {
      type: 'text',
      value: prev
        ? `Advisor disabled (was ${prev}).`
        : 'Advisor already unset.',
    }
  }

  const normalizedModel = normalizeModelStringForAPI(arg)
  const resolvedModel = parseUserSpecifiedModel(arg)
  const { valid, error } = await validateModel(resolvedModel)
  if (!valid) {
    return {
      type: 'text',
      value: error
        ? `Invalid advisor model: ${error}`
        : `Unknown model: ${arg} (${resolvedModel})`,
    }
  }

  if (!isValidAdvisorModel(resolvedModel)) {
    return {
      type: 'text',
      value: `The model ${arg} (${resolvedModel}) cannot be used as an advisor`,
    }
  }

  context.setAppState(s => {
    if (s.advisorModel === normalizedModel) return s
    return { ...s, advisorModel: normalizedModel }
  })
  updateSettingsForSource('userSettings', { advisorModel: normalizedModel })

  if (!modelSupportsAdvisor(baseModel)) {
    return {
      type: 'text',
      value: `Advisor set to ${normalizedModel}.\nNote: Your current model (${baseModel}) does not support advisors. Switch to a supported model to use the advisor.`,
    }
  }

  return {
    type: 'text',
    value: `Advisor set to ${normalizedModel}.`,
  }
}

const advisor = {
  type: 'local',
  name: 'advisor',
  description: 'Configure the advisor model',
  argumentHint: '[<model>|off]',
  isEnabled: () => canUserConfigureAdvisor(),
  get isHidden() {
    return !canUserConfigureAdvisor()
  },
  supportsNonInteractive: true,
  load: () => Promise.resolve({ call }),
} satisfies Command

export default advisor
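The `setAppState` updaters above return the same object when nothing changes, so reference-equality checks upstream can skip re-renders. A minimal sketch of that identity-preserving update pattern (the `State` shape here is illustrative, not the repo's app-state type):

```typescript
// Illustrative slice of app state; the real store holds much more.
type State = { advisorModel?: string }

// Returns the SAME reference when the value is already set, so a
// reference-equality check upstream can skip re-rendering.
function setAdvisor(s: State, model: string | undefined): State {
  if (s.advisorModel === model) return s
  return { ...s, advisorModel: model }
}

const s1: State = { advisorModel: 'opus' }
console.log(setAdvisor(s1, 'opus') === s1) // true: no-op keeps identity
console.log(setAdvisor(s1, 'sonnet') === s1) // false: changed, new object
```

The same guard appears in both the `unset` and the set paths of `/advisor`, which is why repeating the command with the same argument is a cheap no-op.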
12
src/commands/agents/agents.tsx
Normal file
@ -0,0 +1,12 @@
import * as React from 'react'
import { AgentsMenu } from '../../components/agents/AgentsMenu.js'
import type { ToolUseContext } from '../../Tool.js'
import { getTools } from '../../tools.js'
import type { LocalJSXCommandOnDone } from '../../types/command.js'

export async function call(
  onDone: LocalJSXCommandOnDone,
  context: ToolUseContext,
): Promise<React.ReactNode> {
  const appState = context.getAppState()
  const permissionContext = appState.toolPermissionContext
  const tools = getTools(permissionContext)

  return <AgentsMenu tools={tools} onExit={onDone} />
}
10
src/commands/agents/index.ts
Normal file
@ -0,0 +1,10 @@
import type { Command } from '../../commands.js'

const agents = {
  type: 'local-jsx',
  name: 'agents',
  description: 'Manage agent configurations',
  load: () => import('./agents.js'),
} satisfies Command

export default agents
1
src/commands/ant-trace/index.js
Normal file
@ -0,0 +1 @@
export default { isEnabled: () => false, isHidden: true, name: 'stub' };
22
src/commands/assistant/assistant.tsx
Normal file
@ -0,0 +1,22 @@
import { homedir } from 'os'
import { join } from 'path'
import { useEffect } from 'react'

type Props = {
  defaultDir: string
  onInstalled: (dir: string) => void
  onCancel: () => void
  onError: (message: string) => void
}

export async function computeDefaultInstallDir(): Promise<string> {
  return join(homedir(), '.claude', 'assistant')
}

export function NewInstallWizard({ onCancel }: Props) {
  useEffect(() => {
    onCancel()
  }, [onCancel])

  return null
}
1
src/commands/autofix-pr/index.js
Normal file
@ -0,0 +1 @@
export default { isEnabled: () => false, isHidden: true, name: 'stub' };
1
src/commands/backfill-sessions/index.js
Normal file
@ -0,0 +1 @@
export default { isEnabled: () => false, isHidden: true, name: 'stub' };
296
src/commands/branch/branch.ts
Normal file
@ -0,0 +1,296 @@
import { randomUUID, type UUID } from 'crypto'
import { mkdir, readFile, writeFile } from 'fs/promises'
import type * as React from 'react'
import { getOriginalCwd, getSessionId } from '../../bootstrap/state.js'
import type { LocalJSXCommandContext } from '../../commands.js'
import { logEvent } from '../../services/analytics/index.js'
import type { LocalJSXCommandOnDone } from '../../types/command.js'
import type {
  ContentReplacementEntry,
  Entry,
  LogOption,
  SerializedMessage,
  TranscriptMessage,
} from '../../types/logs.js'
import { parseJSONL } from '../../utils/json.js'
import {
  getProjectDir,
  getTranscriptPath,
  getTranscriptPathForSession,
  isTranscriptMessage,
  saveCustomTitle,
  searchSessionsByCustomTitle,
} from '../../utils/sessionStorage.js'
import { jsonStringify } from '../../utils/slowOperations.js'
import { escapeRegExp } from '../../utils/stringUtils.js'

type TranscriptEntry = TranscriptMessage & {
  forkedFrom?: {
    sessionId: string
    messageUuid: UUID
  }
}

/**
 * Derive a single-line title base from the first user message.
 * Collapses whitespace — multiline first messages (pasted stacks, code)
 * otherwise flow into the saved title and break the resume hint.
 */
export function deriveFirstPrompt(
  firstUserMessage: Extract<SerializedMessage, { type: 'user' }> | undefined,
): string {
  const content = firstUserMessage?.message?.content
  if (!content) return 'Branched conversation'
  const raw =
    typeof content === 'string'
      ? content
      : content.find(
          (block): block is { type: 'text'; text: string } =>
            block.type === 'text',
        )?.text
  if (!raw) return 'Branched conversation'
  return (
    raw.replace(/\s+/g, ' ').trim().slice(0, 100) || 'Branched conversation'
  )
}

/**
 * Creates a fork of the current conversation by copying from the transcript file.
 * Preserves all original metadata (timestamps, gitBranch, etc.) while updating
 * sessionId and adding forkedFrom traceability.
 */
async function createFork(customTitle?: string): Promise<{
  sessionId: UUID
  title: string | undefined
  forkPath: string
  serializedMessages: SerializedMessage[]
  contentReplacementRecords: ContentReplacementEntry['replacements']
}> {
  const forkSessionId = randomUUID() as UUID
  const originalSessionId = getSessionId()
  const projectDir = getProjectDir(getOriginalCwd())
  const forkSessionPath = getTranscriptPathForSession(forkSessionId)
  const currentTranscriptPath = getTranscriptPath()

  // Ensure project directory exists
  await mkdir(projectDir, { recursive: true, mode: 0o700 })

  // Read current transcript file
  let transcriptContent: Buffer
  try {
    transcriptContent = await readFile(currentTranscriptPath)
  } catch {
    throw new Error('No conversation to branch')
  }

  if (transcriptContent.length === 0) {
    throw new Error('No conversation to branch')
  }

  // Parse all transcript entries (messages + metadata entries like content-replacement)
  const entries = parseJSONL<Entry>(transcriptContent)

  // Filter to only main conversation messages (exclude sidechains and non-message entries)
  const mainConversationEntries = entries.filter(
    (entry): entry is TranscriptMessage =>
      isTranscriptMessage(entry) && !entry.isSidechain,
  )

  // Content-replacement entries for the original session. These record which
  // tool_result blocks were replaced with previews by the per-message budget.
  // Without them in the fork JSONL, `claude -r {forkId}` reconstructs state
  // with an empty replacements Map → previously-replaced results are classified
  // as FROZEN and sent as full content (prompt cache miss + permanent overage).
  // sessionId must be rewritten since loadTranscriptFile keys lookup by the
  // session's messages' sessionId.
  const contentReplacementRecords = entries
    .filter(
      (entry): entry is ContentReplacementEntry =>
        entry.type === 'content-replacement' &&
        entry.sessionId === originalSessionId,
    )
    .flatMap(entry => entry.replacements)

  if (mainConversationEntries.length === 0) {
    throw new Error('No messages to branch')
  }

  // Build forked entries with new sessionId and preserved metadata
  let parentUuid: UUID | null = null
  const lines: string[] = []
  const serializedMessages: SerializedMessage[] = []

  for (const entry of mainConversationEntries) {
    // Create forked transcript entry preserving all original metadata
    const forkedEntry: TranscriptEntry = {
      ...entry,
      sessionId: forkSessionId,
      parentUuid,
      isSidechain: false,
      forkedFrom: {
        sessionId: originalSessionId,
        messageUuid: entry.uuid,
      },
    }

    // Build serialized message for LogOption
    const serialized: SerializedMessage = {
      ...entry,
      sessionId: forkSessionId,
    }

    serializedMessages.push(serialized)
    lines.push(jsonStringify(forkedEntry))
    if (entry.type !== 'progress') {
      parentUuid = entry.uuid
    }
  }

  // Append content-replacement entry (if any) with the fork's sessionId.
  // Written as a SINGLE entry (same shape as insertContentReplacement) so
  // loadTranscriptFile's content-replacement branch picks it up.
  if (contentReplacementRecords.length > 0) {
    const forkedReplacementEntry: ContentReplacementEntry = {
      type: 'content-replacement',
      sessionId: forkSessionId,
      replacements: contentReplacementRecords,
    }
    lines.push(jsonStringify(forkedReplacementEntry))
  }

  // Write the fork session file
  await writeFile(forkSessionPath, lines.join('\n') + '\n', {
    encoding: 'utf8',
    mode: 0o600,
  })

  return {
    sessionId: forkSessionId,
    title: customTitle,
    forkPath: forkSessionPath,
    serializedMessages,
    contentReplacementRecords,
  }
}

/**
 * Generates a unique fork name by checking for collisions with existing session names.
 * If "baseName (Branch)" already exists, tries "baseName (Branch 2)", "baseName (Branch 3)", etc.
 */
async function getUniqueForkName(baseName: string): Promise<string> {
  const candidateName = `${baseName} (Branch)`

  // Check if this exact name already exists
  const existingWithExactName = await searchSessionsByCustomTitle(
    candidateName,
    { exact: true },
  )

  if (existingWithExactName.length === 0) {
    return candidateName
  }

  // Name collision - find a unique numbered suffix
  // Search for all sessions that start with the base pattern
  const existingForks = await searchSessionsByCustomTitle(`${baseName} (Branch`)

  // Extract existing fork numbers to find the next available
  const usedNumbers = new Set<number>([1]) // Consider " (Branch)" as number 1
  const forkNumberPattern = new RegExp(
    `^${escapeRegExp(baseName)} \\(Branch(?: (\\d+))?\\)$`,
  )

  for (const session of existingForks) {
    const match = session.customTitle?.match(forkNumberPattern)
    if (match) {
      if (match[1]) {
        usedNumbers.add(parseInt(match[1], 10))
      } else {
        usedNumbers.add(1) // " (Branch)" without number is treated as 1
      }
    }
  }

  // Find the next available number
  let nextNumber = 2
  while (usedNumbers.has(nextNumber)) {
    nextNumber++
  }

  return `${baseName} (Branch ${nextNumber})`
}

export async function call(
  onDone: LocalJSXCommandOnDone,
  context: LocalJSXCommandContext,
  args: string,
): Promise<React.ReactNode> {
  const customTitle = args?.trim() || undefined

  const originalSessionId = getSessionId()

  try {
    const {
      sessionId,
      title,
      forkPath,
      serializedMessages,
      contentReplacementRecords,
    } = await createFork(customTitle)

    // Build LogOption for resume
    const now = new Date()
    const firstPrompt = deriveFirstPrompt(
      serializedMessages.find(m => m.type === 'user'),
    )

    // Save custom title - use provided title or firstPrompt as default
    // This ensures /status and /resume show the same session name
    // Always add " (Branch)" suffix to make it clear this is a branched session
    // Handle collisions by adding a number suffix (e.g., " (Branch 2)", " (Branch 3)")
    const baseName = title ?? firstPrompt
    const effectiveTitle = await getUniqueForkName(baseName)
    await saveCustomTitle(sessionId, effectiveTitle, forkPath)

    logEvent('tengu_conversation_forked', {
      message_count: serializedMessages.length,
      has_custom_title: !!title,
    })

    const forkLog: LogOption = {
      date: now.toISOString().split('T')[0]!,
      messages: serializedMessages,
      fullPath: forkPath,
      value: now.getTime(),
      created: now,
      modified: now,
      firstPrompt,
      messageCount: serializedMessages.length,
      isSidechain: false,
      sessionId,
      customTitle: effectiveTitle,
      contentReplacements: contentReplacementRecords,
    }

    // Resume into the fork
    const titleInfo = title ? ` "${title}"` : ''
    const resumeHint = `\nTo resume the original: claude -r ${originalSessionId}`
    const successMessage = `Branched conversation${titleInfo}. You are now in the branch.${resumeHint}`

    if (context.resume) {
      await context.resume(sessionId, forkLog, 'fork')
      onDone(successMessage, { display: 'system' })
    } else {
      // Fallback if resume not available
      onDone(
        `Branched conversation${titleInfo}. Resume with: /resume ${sessionId}`,
      )
    }

    return null
  } catch (error) {
    const message =
      error instanceof Error ? error.message : 'Unknown error occurred'
    onDone(`Failed to branch conversation: ${message}`)
    return null
  }
}
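The collision numbering in `getUniqueForkName` can be exercised in isolation. This sketch swaps the session-store lookups for an in-memory title list (the store-backed search and the repo's `escapeRegExp` are the only repo-specific pieces; `esc` is a local stand-in):

```typescript
// Stand-in for the repo's escapeRegExp utility.
const esc = (s: string) => s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')

function nextForkName(baseName: string, existingTitles: string[]): string {
  const candidate = `${baseName} (Branch)`
  if (!existingTitles.includes(candidate)) return candidate

  // " (Branch)" counts as number 1; collect any numbered suffixes in use.
  const used = new Set<number>([1])
  const pattern = new RegExp(`^${esc(baseName)} \\(Branch(?: (\\d+))?\\)$`)
  for (const title of existingTitles) {
    const m = title.match(pattern)
    if (m) used.add(m[1] ? parseInt(m[1], 10) : 1)
  }

  // First free number, starting from 2.
  let n = 2
  while (used.has(n)) n++
  return `${baseName} (Branch ${n})`
}

console.log(nextForkName('fix tests', [])) // "fix tests (Branch)"
console.log(nextForkName('fix tests', ['fix tests (Branch)'])) // "fix tests (Branch 2)"
```

Note the scheme fills gaps: with "(Branch)" and "(Branch 3)" taken, the next fork becomes "(Branch 2)".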
14
src/commands/branch/index.ts
Normal file
@ -0,0 +1,14 @@
import { feature } from 'bun:bundle'
import type { Command } from '../../commands.js'

const branch = {
  type: 'local-jsx',
  name: 'branch',
  // 'fork' alias only when /fork doesn't exist as its own command
  aliases: feature('FORK_SUBAGENT') ? [] : ['fork'],
  description: 'Create a branch of the current conversation at this point',
  argumentHint: '[name]',
  load: () => import('./branch.js'),
} satisfies Command

export default branch
1
src/commands/break-cache/index.js
Normal file
@ -0,0 +1 @@
export default { isEnabled: () => false, isHidden: true, name: 'stub' };
200
src/commands/bridge-kick.ts
Normal file
@ -0,0 +1,200 @@
import { getBridgeDebugHandle } from '../bridge/bridgeDebug.js'
import type { Command } from '../commands.js'
import type { LocalCommandCall } from '../types/command.js'

/**
 * Ant-only: inject bridge failure states to manually test recovery paths.
 *
 * /bridge-kick close 1002 — fire ws_closed with code 1002
 * /bridge-kick close 1006 — fire ws_closed with code 1006
 * /bridge-kick poll 404 — next poll throws 404/not_found_error
 * /bridge-kick poll 404 <type> — next poll throws 404 with error_type
 * /bridge-kick poll 401 — next poll throws 401 (auth)
 * /bridge-kick poll transient — next poll throws axios-style rejection
 * /bridge-kick register fail — next register (inside doReconnect) transient-fails
 * /bridge-kick register fail 3 — next 3 registers transient-fail
 * /bridge-kick register fatal — next register 403s (terminal)
 * /bridge-kick reconnect-session fail — POST /bridge/reconnect fails (→ Strategy 2)
 * /bridge-kick heartbeat 401 — next heartbeat 401s (JWT expired)
 * /bridge-kick reconnect — call doReconnect directly (= SIGUSR2)
 * /bridge-kick status — print current bridge state
 *
 * Workflow: connect Remote Control, run a subcommand, `tail -f debug.log`
 * and watch [bridge:repl] / [bridge:debug] lines for the recovery reaction.
 *
 * Composite sequences — the failure modes in the BQ data are chains, not
 * single events. Queue faults then fire the trigger:
 *
 * # #22148 residual: ws_closed → register transient-blips → teardown?
 * /bridge-kick register fail 2
 * /bridge-kick close 1002
 * → expect: doReconnect tries register, fails, returns false → teardown
 * (demonstrates the retry gap that needs fixing)
 *
 * # Dead gate: poll 404/not_found_error → does onEnvironmentLost fire?
 * /bridge-kick poll 404
 * → expect: tengu_bridge_repl_fatal_error (gate is dead — 147K/wk)
 * after fix: tengu_bridge_repl_env_lost → doReconnect
 */

const USAGE = `/bridge-kick <subcommand>
  close <code>             fire ws_closed with the given code (e.g. 1002)
  poll <status> [type]     next poll throws BridgeFatalError(status, type)
  poll transient           next poll throws axios-style rejection (5xx/net)
  register fail [N]        next N registers transient-fail (default 1)
  register fatal           next register 403s (terminal)
  reconnect-session fail   next POST /bridge/reconnect fails
  heartbeat <status>       next heartbeat throws BridgeFatalError(status)
  reconnect                call reconnectEnvironmentWithSession directly
  status                   print bridge state`

const call: LocalCommandCall = async args => {
  const h = getBridgeDebugHandle()
  if (!h) {
    return {
      type: 'text',
      value:
        'No bridge debug handle registered. Remote Control must be connected (USER_TYPE=ant).',
    }
  }

  const [sub, a, b] = args.trim().split(/\s+/)

  switch (sub) {
    case 'close': {
      const code = Number(a)
      if (!Number.isFinite(code)) {
        return { type: 'text', value: `close: need a numeric code\n${USAGE}` }
      }
      h.fireClose(code)
      return {
        type: 'text',
        value: `Fired transport close(${code}). Watch debug.log for [bridge:repl] recovery.`,
      }
    }

    case 'poll': {
      if (a === 'transient') {
        h.injectFault({
          method: 'pollForWork',
          kind: 'transient',
          status: 503,
          count: 1,
        })
        h.wakePollLoop()
        return {
          type: 'text',
          value:
            'Next poll will throw a transient (axios rejection). Poll loop woken.',
        }
      }
      const status = Number(a)
      if (!Number.isFinite(status)) {
        return {
          type: 'text',
          value: `poll: need 'transient' or a status code\n${USAGE}`,
        }
      }
      // Default to what the server ACTUALLY sends for 404 (BQ-verified),
      // so `/bridge-kick poll 404` reproduces the real 147K/week state.
      const errorType =
        b ?? (status === 404 ? 'not_found_error' : 'authentication_error')
      h.injectFault({
        method: 'pollForWork',
        kind: 'fatal',
        status,
        errorType,
        count: 1,
      })
      h.wakePollLoop()
      return {
        type: 'text',
        value: `Next poll will throw BridgeFatalError(${status}, ${errorType}). Poll loop woken.`,
      }
    }

    case 'register': {
      if (a === 'fatal') {
        h.injectFault({
          method: 'registerBridgeEnvironment',
          kind: 'fatal',
          status: 403,
          errorType: 'permission_error',
          count: 1,
        })
        return {
          type: 'text',
          value:
            'Next registerBridgeEnvironment will 403. Trigger with close/reconnect.',
        }
      }
      const n = Number(b) || 1
      h.injectFault({
        method: 'registerBridgeEnvironment',
        kind: 'transient',
        status: 503,
        count: n,
      })
      return {
        type: 'text',
        value: `Next ${n} registerBridgeEnvironment call(s) will transient-fail. Trigger with close/reconnect.`,
      }
    }

    case 'reconnect-session': {
      h.injectFault({
        method: 'reconnectSession',
        kind: 'fatal',
        status: 404,
        errorType: 'not_found_error',
        count: 2,
      })
      return {
type: 'text',
|
||||
value:
|
||||
'Next 2 POST /bridge/reconnect calls will 404. doReconnect Strategy 1 falls through to Strategy 2.',
|
||||
}
|
||||
}
|
||||
|
||||
case 'heartbeat': {
|
||||
const status = Number(a) || 401
|
||||
h.injectFault({
|
||||
method: 'heartbeatWork',
|
||||
kind: 'fatal',
|
||||
status,
|
||||
errorType: status === 401 ? 'authentication_error' : 'not_found_error',
|
||||
count: 1,
|
||||
})
|
||||
return {
|
||||
type: 'text',
|
||||
value: `Next heartbeat will ${status}. Watch for onHeartbeatFatal → work-state teardown.`,
|
||||
}
|
||||
}
|
||||
|
||||
case 'reconnect': {
|
||||
h.forceReconnect()
|
||||
return {
|
||||
type: 'text',
|
||||
value: 'Called reconnectEnvironmentWithSession(). Watch debug.log.',
|
||||
}
|
||||
}
|
||||
|
||||
case 'status': {
|
||||
return { type: 'text', value: h.describe() }
|
||||
}
|
||||
|
||||
default:
|
||||
return { type: 'text', value: USAGE }
|
||||
}
|
||||
}
|
||||
|
||||
const bridgeKick = {
|
||||
type: 'local',
|
||||
name: 'bridge-kick',
|
||||
description: 'Inject bridge failure states for manual recovery testing',
|
||||
isEnabled: () => process.env.USER_TYPE === 'ant',
|
||||
supportsNonInteractive: false,
|
||||
load: () => Promise.resolve({ call }),
|
||||
} satisfies Command
|
||||
|
||||
export default bridgeKick
|
||||
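The one-shot semantics the subcommands above rely on (a fault fires for its next `count` matching calls, then disappears) can be sketched as a small count-limited queue. The `FaultQueue` class and `consume` method here are illustrative stand-ins, not the repo's actual debug-handle implementation:

```typescript
// Minimal sketch of a count-limited fault queue: each injected fault fires
// for its next `count` matching calls to `consume`, then is discarded.
type Fault = {
  method: string
  kind: 'transient' | 'fatal'
  status: number
  errorType?: string
  count: number
}

class FaultQueue {
  private faults: Fault[] = []

  injectFault(f: Fault): void {
    this.faults.push({ ...f })
  }

  // Returns the next queued fault for `method` (decrementing its remaining
  // count), or undefined when no fault is pending for that method.
  consume(method: string): Fault | undefined {
    const i = this.faults.findIndex(f => f.method === method && f.count > 0)
    if (i === -1) return undefined
    const f = this.faults[i]
    f.count -= 1
    if (f.count === 0) this.faults.splice(i, 1)
    return f
  }
}

const q = new FaultQueue()
q.injectFault({
  method: 'registerBridgeEnvironment',
  kind: 'transient',
  status: 503,
  count: 2,
})
const first = q.consume('registerBridgeEnvironment') // fires
const second = q.consume('registerBridgeEnvironment') // fires, queue now empty
const third = q.consume('registerBridgeEnvironment') // exhausted → undefined
```

This matches how `register fail 2` followed by `close 1002` plays out: the first two register attempts during recovery hit the queued fault, the third would succeed.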
509
src/commands/bridge/bridge.tsx
Normal file
File diff suppressed because one or more lines are too long
26
src/commands/bridge/index.ts
Normal file
@@ -0,0 +1,26 @@
import { feature } from 'bun:bundle'
import { isBridgeEnabled } from '../../bridge/bridgeEnabled.js'
import type { Command } from '../../commands.js'

function isEnabled(): boolean {
  if (!feature('BRIDGE_MODE')) {
    return false
  }
  return isBridgeEnabled()
}

const bridge = {
  type: 'local-jsx',
  name: 'remote-control',
  aliases: ['rc'],
  description: 'Connect this terminal for remote-control sessions',
  argumentHint: '[name]',
  isEnabled,
  get isHidden() {
    return !isEnabled()
  },
  immediate: true,
  load: () => import('./bridge.js'),
} satisfies Command

export default bridge
130
src/commands/brief.ts
Normal file
@@ -0,0 +1,130 @@
import { feature } from 'bun:bundle'
import { z } from 'zod/v4'
import { getKairosActive, setUserMsgOptIn } from '../bootstrap/state.js'
import { getFeatureValue_CACHED_MAY_BE_STALE } from '../services/analytics/growthbook.js'
import {
  type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
  logEvent,
} from '../services/analytics/index.js'
import type { ToolUseContext } from '../Tool.js'
import { isBriefEntitled } from '../tools/BriefTool/BriefTool.js'
import { BRIEF_TOOL_NAME } from '../tools/BriefTool/prompt.js'
import type {
  Command,
  LocalJSXCommandContext,
  LocalJSXCommandOnDone,
} from '../types/command.js'
import { lazySchema } from '../utils/lazySchema.js'

// Zod guards against fat-fingered GB pushes (same pattern as pollConfig.ts /
// cronScheduler.ts). A malformed config falls back to DEFAULT_BRIEF_CONFIG
// entirely rather than being partially trusted.
const briefConfigSchema = lazySchema(() =>
  z.object({
    enable_slash_command: z.boolean(),
  }),
)
type BriefConfig = z.infer<ReturnType<typeof briefConfigSchema>>

const DEFAULT_BRIEF_CONFIG: BriefConfig = {
  enable_slash_command: false,
}

// No TTL — this gate controls slash-command *visibility*, not a kill switch.
// CACHED_MAY_BE_STALE still has one background-update flip (first call kicks
// off fetch; second call sees fresh value), but no additional flips after that.
// The tool-availability gate (tengu_kairos_brief in isBriefEnabled) keeps its
// 5-min TTL because that one IS a kill switch.
function getBriefConfig(): BriefConfig {
  const raw = getFeatureValue_CACHED_MAY_BE_STALE<unknown>(
    'tengu_kairos_brief_config',
    DEFAULT_BRIEF_CONFIG,
  )
  const parsed = briefConfigSchema().safeParse(raw)
  return parsed.success ? parsed.data : DEFAULT_BRIEF_CONFIG
}

const brief = {
  type: 'local-jsx',
  name: 'brief',
  description: 'Toggle brief-only mode',
  isEnabled: () => {
    if (feature('KAIROS') || feature('KAIROS_BRIEF')) {
      return getBriefConfig().enable_slash_command
    }
    return false
  },
  immediate: true,
  load: () =>
    Promise.resolve({
      async call(
        onDone: LocalJSXCommandOnDone,
        context: ToolUseContext & LocalJSXCommandContext,
      ): Promise<React.ReactNode> {
        const current = context.getAppState().isBriefOnly
        const newState = !current

        // Entitlement check only gates the on-transition — off is always
        // allowed so a user whose GB gate flipped mid-session isn't stuck.
        if (newState && !isBriefEntitled()) {
          logEvent('tengu_brief_mode_toggled', {
            enabled: false,
            gated: true,
            source:
              'slash_command' as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
          })
          onDone('Brief tool is not enabled for your account', {
            display: 'system',
          })
          return null
        }

        // Two-way: userMsgOptIn tracks isBriefOnly so the tool is available
        // exactly when brief mode is on. This invalidates prompt cache on
        // each toggle (tool list changes), but a stale tool list is worse —
        // when /brief is enabled mid-session the model was previously left
        // without the tool, emitting plain text the filter hides.
        setUserMsgOptIn(newState)

        context.setAppState(prev => {
          if (prev.isBriefOnly === newState) return prev
          return { ...prev, isBriefOnly: newState }
        })

        logEvent('tengu_brief_mode_toggled', {
          enabled: newState,
          gated: false,
          source:
            'slash_command' as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
        })

        // The tool list change alone isn't a strong enough signal mid-session
        // (model may keep emitting plain text from inertia, or keep calling a
        // tool that just vanished). Inject an explicit reminder into the next
        // turn's context so the transition is unambiguous.
        // Skip when Kairos is active: isBriefEnabled() short-circuits on
        // getKairosActive() so the tool never actually leaves the list, and
        // the Kairos system prompt already mandates SendUserMessage.
        // Inline <system-reminder> wrap — importing wrapInSystemReminder from
        // utils/messages.ts pulls constants/xml.ts into the bridge SDK bundle
        // via this module's import chain, tripping the excluded-strings check.
        const metaMessages = getKairosActive()
          ? undefined
          : [
              `<system-reminder>\n${
                newState
                  ? `Brief mode is now enabled. Use the ${BRIEF_TOOL_NAME} tool for all user-facing output — plain text outside it is hidden from the user's view.`
                  : `Brief mode is now disabled. The ${BRIEF_TOOL_NAME} tool is no longer available — reply with plain text.`
              }\n</system-reminder>`,
            ]

        onDone(
          newState ? 'Brief-only mode enabled' : 'Brief-only mode disabled',
          { display: 'system', metaMessages },
        )
        return null
      },
    }),
} satisfies Command

export default brief
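The comment above describes a validate-or-fall-back pattern: a remote config is accepted only if it parses as a whole, otherwise the default wins outright. A dependency-free sketch of the same contract (the `parseBriefConfig` helper is illustrative; the real code uses Zod's `safeParse`):

```typescript
type BriefConfig = { enable_slash_command: boolean }

const DEFAULT_BRIEF_CONFIG: BriefConfig = { enable_slash_command: false }

// Mirrors the safeParse contract: a malformed remote config is rejected
// as a whole and the default is used, never a partially-trusted merge.
function parseBriefConfig(raw: unknown): BriefConfig {
  if (
    typeof raw === 'object' &&
    raw !== null &&
    typeof (raw as Record<string, unknown>).enable_slash_command === 'boolean'
  ) {
    return { enable_slash_command: (raw as BriefConfig).enable_slash_command }
  }
  return DEFAULT_BRIEF_CONFIG
}

const good = parseBriefConfig({ enable_slash_command: true })
const bad = parseBriefConfig({ enable_slash_command: 'yes' }) // wrong type → default
```

The all-or-nothing fallback is the point of the design: a half-valid config pushed by mistake never gets partially applied.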
243
src/commands/btw/btw.tsx
Normal file
File diff suppressed because one or more lines are too long
13
src/commands/btw/index.ts
Normal file
@@ -0,0 +1,13 @@
import type { Command } from '../../commands.js'

const btw = {
  type: 'local-jsx',
  name: 'btw',
  description:
    'Ask a quick side question without interrupting the main conversation',
  immediate: true,
  argumentHint: '<question>',
  load: () => import('./btw.js'),
} satisfies Command

export default btw
1
src/commands/bughunter/index.js
Normal file
@@ -0,0 +1 @@
export default { isEnabled: () => false, isHidden: true, name: 'stub' };
285
src/commands/chrome/chrome.tsx
Normal file
File diff suppressed because one or more lines are too long
13
src/commands/chrome/index.ts
Normal file
@@ -0,0 +1,13 @@
import { getIsNonInteractiveSession } from '../../bootstrap/state.js'
import type { Command } from '../../commands.js'

const command: Command = {
  name: 'chrome',
  description: 'Claude in Chrome (Beta) settings',
  availability: ['claude-ai'],
  isEnabled: () => !getIsNonInteractiveSession(),
  type: 'local-jsx',
  load: () => import('./chrome.js'),
}

export default command
144
src/commands/clear/caches.ts
Normal file
@@ -0,0 +1,144 @@
/**
 * Session cache clearing utilities.
 * This module is imported at startup by main.tsx, so keep imports minimal.
 */
import { feature } from 'bun:bundle'
import {
  clearInvokedSkills,
  setLastEmittedDate,
} from '../../bootstrap/state.js'
import { clearCommandsCache } from '../../commands.js'
import { getSessionStartDate } from '../../constants/common.js'
import {
  getGitStatus,
  getSystemContext,
  getUserContext,
  setSystemPromptInjection,
} from '../../context.js'
import { clearFileSuggestionCaches } from '../../hooks/fileSuggestions.js'
import { clearAllPendingCallbacks } from '../../hooks/useSwarmPermissionPoller.js'
import { clearAllDumpState } from '../../services/api/dumpPrompts.js'
import { resetPromptCacheBreakDetection } from '../../services/api/promptCacheBreakDetection.js'
import { clearAllSessions } from '../../services/api/sessionIngress.js'
import { runPostCompactCleanup } from '../../services/compact/postCompactCleanup.js'
import { resetAllLSPDiagnosticState } from '../../services/lsp/LSPDiagnosticRegistry.js'
import { clearTrackedMagicDocs } from '../../services/MagicDocs/magicDocs.js'
import { clearDynamicSkills } from '../../skills/loadSkillsDir.js'
import { resetSentSkillNames } from '../../utils/attachments.js'
import { clearCommandPrefixCaches } from '../../utils/bash/commands.js'
import { resetGetMemoryFilesCache } from '../../utils/claudemd.js'
import { clearRepositoryCaches } from '../../utils/detectRepository.js'
import { clearResolveGitDirCache } from '../../utils/git/gitFilesystem.js'
import { clearStoredImagePaths } from '../../utils/imageStore.js'
import { clearSessionEnvVars } from '../../utils/sessionEnvVars.js'

/**
 * Clear all session-related caches.
 * Call this when resuming a session to ensure fresh file/skill discovery.
 * This is a subset of what clearConversation does - it only clears caches
 * without affecting messages, session ID, or triggering hooks.
 *
 * @param preservedAgentIds - Agent IDs whose per-agent state should survive
 * the clear (e.g., background tasks preserved across /clear). When non-empty,
 * agentId-keyed state (invoked skills) is selectively cleared and requestId-keyed
 * state (pending permission callbacks, dump state, cache-break tracking) is left
 * intact since it cannot be safely scoped to the main session.
 */
export function clearSessionCaches(
  preservedAgentIds: ReadonlySet<string> = new Set(),
): void {
  const hasPreserved = preservedAgentIds.size > 0
  // Clear context caches
  getUserContext.cache.clear?.()
  getSystemContext.cache.clear?.()
  getGitStatus.cache.clear?.()
  getSessionStartDate.cache.clear?.()
  // Clear file suggestion caches (for @ mentions)
  clearFileSuggestionCaches()

  // Clear commands/skills cache
  clearCommandsCache()

  // Clear prompt cache break detection state
  if (!hasPreserved) resetPromptCacheBreakDetection()

  // Clear system prompt injection (cache breaker)
  setSystemPromptInjection(null)

  // Clear last emitted date so it's re-detected on next turn
  setLastEmittedDate(null)

  // Run post-compaction cleanup (clears system prompt sections, microcompact tracking,
  // classifier approvals, speculative checks, and — for main-thread compacts — memory
  // files cache with load_reason 'compact').
  runPostCompactCleanup()
  // Reset sent skill names so the skill listing is re-sent after /clear.
  // runPostCompactCleanup intentionally does NOT reset this (post-compact
  // re-injection costs ~4K tokens), but /clear wipes messages entirely so
  // the model needs the full listing again.
  resetSentSkillNames()
  // Override the memory cache reset with 'session_start': clearSessionCaches is called
  // from /clear and --resume/--continue, which are NOT compaction events. Without this,
  // the InstructionsLoaded hook would fire with load_reason 'compact' instead of
  // 'session_start' on the next getMemoryFiles() call.
  resetGetMemoryFilesCache('session_start')

  // Clear stored image paths cache
  clearStoredImagePaths()

  // Clear all session ingress caches (lastUuidMap, sequentialAppendBySession)
  clearAllSessions()
  // Clear swarm permission pending callbacks
  if (!hasPreserved) clearAllPendingCallbacks()

  // Clear tungsten session usage tracking
  if (process.env.USER_TYPE === 'ant') {
    void import('../../tools/TungstenTool/TungstenTool.js').then(
      ({ clearSessionsWithTungstenUsage, resetInitializationState }) => {
        clearSessionsWithTungstenUsage()
        resetInitializationState()
      },
    )
  }
  // Clear attribution caches (file content cache, pending bash states)
  // Dynamic import to preserve dead code elimination for COMMIT_ATTRIBUTION feature flag
  if (feature('COMMIT_ATTRIBUTION')) {
    void import('../../utils/attributionHooks.js').then(
      ({ clearAttributionCaches }) => clearAttributionCaches(),
    )
  }
  // Clear repository detection caches
  clearRepositoryCaches()
  // Clear bash command prefix caches (Haiku-extracted prefixes)
  clearCommandPrefixCaches()
  // Clear dump prompts state
  if (!hasPreserved) clearAllDumpState()
  // Clear invoked skills cache (each entry holds full skill file content)
  clearInvokedSkills(preservedAgentIds)
  // Clear git dir resolution cache
  clearResolveGitDirCache()
  // Clear dynamic skills (loaded from skill directories)
  clearDynamicSkills()
  // Clear LSP diagnostic tracking state
  resetAllLSPDiagnosticState()
  // Clear tracked magic docs
  clearTrackedMagicDocs()
  // Clear session environment variables
  clearSessionEnvVars()
  // Clear WebFetch URL cache (up to 50MB of cached page content)
  void import('../../tools/WebFetchTool/utils.js').then(
    ({ clearWebFetchCache }) => clearWebFetchCache(),
  )
  // Clear ToolSearch description cache (full tool prompts, ~500KB for 50 MCP tools)
  void import('../../tools/ToolSearchTool/ToolSearchTool.js').then(
    ({ clearToolSearchDescriptionCache }) => clearToolSearchDescriptionCache(),
  )
  // Clear agent definitions cache (accumulates per-cwd via EnterWorktreeTool)
  void import('../../tools/AgentTool/loadAgentsDir.js').then(
    ({ clearAgentDefinitionsCache }) => clearAgentDefinitionsCache(),
  )
  // Clear SkillTool prompt cache (accumulates per project root)
  void import('../../tools/SkillTool/prompt.js').then(({ clearPromptCache }) =>
    clearPromptCache(),
  )
}
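The `fn.cache.clear?.()` calls in clearSessionCaches assume each context getter is a memoized function that exposes its cache on the function object itself. A minimal sketch of that shape; the `memoize` helper here is illustrative, not the repo's actual implementation:

```typescript
// A zero-argument memoizer whose cache is exposed on the function, so
// callers can write `getThing.cache.clear?.()` as clearSessionCaches does.
type Memoized<T> = (() => T) & { cache: { clear?: () => void } }

function memoize<T>(compute: () => T): Memoized<T> {
  let filled = false
  let value!: T
  const fn = (() => {
    if (!filled) {
      value = compute()
      filled = true
    }
    return value
  }) as Memoized<T>
  fn.cache = {
    clear: () => {
      filled = false
    },
  }
  return fn
}

let calls = 0
const getUserContext = memoize(() => {
  calls += 1
  return `context #${calls}`
})

const a = getUserContext()
const b = getUserContext() // cached, compute not re-run
getUserContext.cache.clear?.()
const c = getUserContext() // recomputed after clear
```

The optional chaining (`clear?.()`) matters: it lets clearSessionCaches stay correct even if a getter is swapped for a non-memoized implementation without a cache.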
7
src/commands/clear/clear.ts
Normal file
@@ -0,0 +1,7 @@
import type { LocalCommandCall } from '../../types/command.js'
import { clearConversation } from './conversation.js'

export const call: LocalCommandCall = async (_, context) => {
  await clearConversation(context)
  return { type: 'text', value: '' }
}
251
src/commands/clear/conversation.ts
Normal file
@@ -0,0 +1,251 @@
/**
 * Conversation clearing utility.
 * This module has heavier dependencies and should be lazy-loaded when possible.
 */
import { feature } from 'bun:bundle'
import { randomUUID, type UUID } from 'crypto'
import {
  getLastMainRequestId,
  getOriginalCwd,
  getSessionId,
  regenerateSessionId,
} from '../../bootstrap/state.js'
import {
  type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
  logEvent,
} from '../../services/analytics/index.js'
import type { AppState } from '../../state/AppState.js'
import { isInProcessTeammateTask } from '../../tasks/InProcessTeammateTask/types.js'
import {
  isLocalAgentTask,
  type LocalAgentTaskState,
} from '../../tasks/LocalAgentTask/LocalAgentTask.js'
import { isLocalShellTask } from '../../tasks/LocalShellTask/guards.js'
import { asAgentId } from '../../types/ids.js'
import type { Message } from '../../types/message.js'
import { createEmptyAttributionState } from '../../utils/commitAttribution.js'
import type { FileStateCache } from '../../utils/fileStateCache.js'
import {
  executeSessionEndHooks,
  getSessionEndHookTimeoutMs,
} from '../../utils/hooks.js'
import { logError } from '../../utils/log.js'
import { clearAllPlanSlugs } from '../../utils/plans.js'
import { setCwd } from '../../utils/Shell.js'
import { processSessionStartHooks } from '../../utils/sessionStart.js'
import {
  clearSessionMetadata,
  getAgentTranscriptPath,
  resetSessionFilePointer,
  saveWorktreeState,
} from '../../utils/sessionStorage.js'
import {
  evictTaskOutput,
  initTaskOutputAsSymlink,
} from '../../utils/task/diskOutput.js'
import { getCurrentWorktreeSession } from '../../utils/worktree.js'
import { clearSessionCaches } from './caches.js'

export async function clearConversation({
  setMessages,
  readFileState,
  discoveredSkillNames,
  loadedNestedMemoryPaths,
  getAppState,
  setAppState,
  setConversationId,
}: {
  setMessages: (updater: (prev: Message[]) => Message[]) => void
  readFileState: FileStateCache
  discoveredSkillNames?: Set<string>
  loadedNestedMemoryPaths?: Set<string>
  getAppState?: () => AppState
  setAppState?: (f: (prev: AppState) => AppState) => void
  setConversationId?: (id: UUID) => void
}): Promise<void> {
  // Execute SessionEnd hooks before clearing (bounded by
  // CLAUDE_CODE_SESSIONEND_HOOKS_TIMEOUT_MS, default 1.5s)
  const sessionEndTimeoutMs = getSessionEndHookTimeoutMs()
  await executeSessionEndHooks('clear', {
    getAppState,
    setAppState,
    signal: AbortSignal.timeout(sessionEndTimeoutMs),
    timeoutMs: sessionEndTimeoutMs,
  })

  // Signal to inference that this conversation's cache can be evicted.
  const lastRequestId = getLastMainRequestId()
  if (lastRequestId) {
    logEvent('tengu_cache_eviction_hint', {
      scope:
        'conversation_clear' as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
      last_request_id:
        lastRequestId as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
    })
  }

  // Compute preserved tasks up front so their per-agent state survives the
  // cache wipe below. A task is preserved unless it explicitly has
  // isBackgrounded === false. Main-session tasks (Ctrl+B) are preserved —
  // they write to an isolated per-task transcript and run under an agent
  // context, so they're safe across session ID regeneration. See
  // LocalMainSessionTask.ts startBackgroundSession.
  const preservedAgentIds = new Set<string>()
  const preservedLocalAgents: LocalAgentTaskState[] = []
  const shouldKillTask = (task: AppState['tasks'][string]): boolean =>
    'isBackgrounded' in task && task.isBackgrounded === false
  if (getAppState) {
    for (const task of Object.values(getAppState().tasks)) {
      if (shouldKillTask(task)) continue
      if (isLocalAgentTask(task)) {
        preservedAgentIds.add(task.agentId)
        preservedLocalAgents.push(task)
      } else if (isInProcessTeammateTask(task)) {
        preservedAgentIds.add(task.identity.agentId)
      }
    }
  }

  setMessages(() => [])

  // Clear context-blocked flag so proactive ticks resume after /clear
  if (feature('PROACTIVE') || feature('KAIROS')) {
    /* eslint-disable @typescript-eslint/no-require-imports */
    const { setContextBlocked } = require('../../proactive/index.js')
    /* eslint-enable @typescript-eslint/no-require-imports */
    setContextBlocked(false)
  }

  // Force logo re-render by updating conversationId
  if (setConversationId) {
    setConversationId(randomUUID())
  }

  // Clear all session-related caches. Per-agent state for preserved background
  // tasks (invoked skills, pending permission callbacks, dump state, cache-break
  // tracking) is retained so those agents keep functioning.
  clearSessionCaches(preservedAgentIds)

  setCwd(getOriginalCwd())
  readFileState.clear()
  discoveredSkillNames?.clear()
  loadedNestedMemoryPaths?.clear()

  // Clean out necessary items from App State
  if (setAppState) {
    setAppState(prev => {
      // Partition tasks using the same predicate computed above:
      // kill+remove foreground tasks, preserve everything else.
      const nextTasks: AppState['tasks'] = {}
      for (const [taskId, task] of Object.entries(prev.tasks)) {
        if (!shouldKillTask(task)) {
          nextTasks[taskId] = task
          continue
        }
        // Foreground task: kill it and drop from state
        try {
          if (task.status === 'running') {
            if (isLocalShellTask(task)) {
              task.shellCommand?.kill()
              task.shellCommand?.cleanup()
              if (task.cleanupTimeoutId) {
                clearTimeout(task.cleanupTimeoutId)
              }
            }
            if ('abortController' in task) {
              task.abortController?.abort()
            }
            if ('unregisterCleanup' in task) {
              task.unregisterCleanup?.()
            }
          }
        } catch (error) {
          logError(error)
        }
        void evictTaskOutput(taskId)
      }

      return {
        ...prev,
        tasks: nextTasks,
        attribution: createEmptyAttributionState(),
        // Clear standalone agent context (name/color set by /rename, /color)
        // so the new session doesn't display the old session's identity badge
        standaloneAgentContext: undefined,
        fileHistory: {
          snapshots: [],
          trackedFiles: new Set(),
          snapshotSequence: 0,
        },
        // Reset MCP state to default to trigger re-initialization.
        // Preserve pluginReconnectKey so /clear doesn't cause a no-op
        // (it's only bumped by /reload-plugins).
        mcp: {
          clients: [],
          tools: [],
          commands: [],
          resources: {},
          pluginReconnectKey: prev.mcp.pluginReconnectKey,
        },
      }
    })
  }

  // Clear plan slug cache so a new plan file is used after /clear
  clearAllPlanSlugs()

  // Clear cached session metadata (title, tag, agent name/color)
  // so the new session doesn't inherit the previous session's identity
  clearSessionMetadata()

  // Generate new session ID to provide fresh state
  // Set the old session as parent for analytics lineage tracking
  regenerateSessionId({ setCurrentAsParent: true })
  // Update the environment variable so subprocesses use the new session ID
  if (process.env.USER_TYPE === 'ant' && process.env.CLAUDE_CODE_SESSION_ID) {
    process.env.CLAUDE_CODE_SESSION_ID = getSessionId()
  }
  await resetSessionFilePointer()

  // Preserved local_agent tasks had their TaskOutput symlink baked against the
  // old session ID at spawn time, but post-clear transcript writes land under
  // the new session directory (appendEntry re-reads getSessionId()). Re-point
  // the symlinks so TaskOutput reads the live file instead of a frozen pre-clear
  // snapshot. Only re-point running tasks — finished tasks will never write
  // again, so re-pointing would replace a valid symlink with a dangling one.
  // Main-session tasks use the same per-agent path (they write via
  // recordSidechainTranscript to getAgentTranscriptPath), so no special case.
  for (const task of preservedLocalAgents) {
    if (task.status !== 'running') continue
    void initTaskOutputAsSymlink(
      task.id,
      getAgentTranscriptPath(asAgentId(task.agentId)),
    )
  }

  // Re-persist mode and worktree state after the clear so future --resume
  // knows what the new post-clear session was in. clearSessionMetadata
  // wiped both from the cache, but the process is still in the same mode
  // and (if applicable) the same worktree directory.
  if (feature('COORDINATOR_MODE')) {
    /* eslint-disable @typescript-eslint/no-require-imports */
    const { saveMode } = require('../../utils/sessionStorage.js')
    const {
      isCoordinatorMode,
    } = require('../../coordinator/coordinatorMode.js')
    /* eslint-enable @typescript-eslint/no-require-imports */
    saveMode(isCoordinatorMode() ? 'coordinator' : 'normal')
  }
  const worktreeSession = getCurrentWorktreeSession()
  if (worktreeSession) {
    saveWorktreeState(worktreeSession)
  }

  // Execute SessionStart hooks after clearing
  const hookMessages = await processSessionStartHooks('clear')

  // Update messages with hook results
  if (hookMessages.length > 0) {
    setMessages(() => hookMessages)
  }
}
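The preserve-vs-kill logic in clearConversation hinges on one predicate applied twice (once to collect preserved agent IDs, once to partition `prev.tasks`). A reduced model of that flow; the `Task` shape and `partitionTasks` helper are illustrative stand-ins for the real AppState types:

```typescript
// Reduced model of /clear's task partition: a task is killed only when it
// explicitly carries isBackgrounded === false; everything else is preserved,
// including tasks that have no isBackgrounded field at all.
type Task = { id: string; isBackgrounded?: boolean }

const shouldKillTask = (task: Task): boolean =>
  'isBackgrounded' in task && task.isBackgrounded === false

function partitionTasks(tasks: Task[]): { kept: Task[]; killed: Task[] } {
  const kept: Task[] = []
  const killed: Task[] = []
  for (const task of tasks) {
    if (shouldKillTask(task)) killed.push(task)
    else kept.push(task)
  }
  return { kept, killed }
}

const { kept, killed } = partitionTasks([
  { id: 'fg', isBackgrounded: false }, // explicit foreground → killed
  { id: 'bg', isBackgrounded: true }, // backgrounded → preserved
  { id: 'unknown' }, // no flag → preserved by default
])
```

Defaulting to "preserve" is the conservative choice here: a task type that never sets the flag survives /clear rather than being silently killed.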
19
src/commands/clear/index.ts
Normal file
@@ -0,0 +1,19 @@
/**
 * Clear command - minimal metadata only.
 * Implementation is lazy-loaded from clear.ts to reduce startup time.
 * Utility functions:
 * - clearSessionCaches: import from './clear/caches.js'
 * - clearConversation: import from './clear/conversation.js'
 */
import type { Command } from '../../commands.js'

const clear = {
  type: 'local',
  name: 'clear',
  description: 'Clear conversation history and free up context',
  aliases: ['reset', 'new'],
  supportsNonInteractive: false, // Should just create a new session
  load: () => import('./clear.js'),
} satisfies Command

export default clear
93
src/commands/color/color.ts
Normal file
@@ -0,0 +1,93 @@
import type { UUID } from 'crypto'
import { getSessionId } from '../../bootstrap/state.js'
import type { ToolUseContext } from '../../Tool.js'
import {
  AGENT_COLORS,
  type AgentColorName,
} from '../../tools/AgentTool/agentColorManager.js'
import type {
  LocalJSXCommandContext,
  LocalJSXCommandOnDone,
} from '../../types/command.js'
import {
  getTranscriptPath,
  saveAgentColor,
} from '../../utils/sessionStorage.js'
import { isTeammate } from '../../utils/teammate.js'

const RESET_ALIASES = ['default', 'reset', 'none', 'gray', 'grey'] as const

export async function call(
  onDone: LocalJSXCommandOnDone,
  context: ToolUseContext & LocalJSXCommandContext,
  args: string,
): Promise<null> {
  // Teammates cannot set their own color
  if (isTeammate()) {
    onDone(
      'Cannot set color: This session is a swarm teammate. Teammate colors are assigned by the team leader.',
      { display: 'system' },
    )
    return null
  }

  if (!args || args.trim() === '') {
    const colorList = AGENT_COLORS.join(', ')
    onDone(`Please provide a color. Available colors: ${colorList}, default`, {
      display: 'system',
    })
    return null
  }

  const colorArg = args.trim().toLowerCase()

  // Handle reset to default (gray)
  if (RESET_ALIASES.includes(colorArg as (typeof RESET_ALIASES)[number])) {
    const sessionId = getSessionId() as UUID
    const fullPath = getTranscriptPath()

    // Use "default" sentinel (not empty string) so truthiness guards
    // in sessionStorage.ts persist the reset across session restarts
    await saveAgentColor(sessionId, 'default', fullPath)

    context.setAppState(prev => ({
      ...prev,
      standaloneAgentContext: {
        ...prev.standaloneAgentContext,
        name: prev.standaloneAgentContext?.name ?? '',
        color: undefined,
      },
    }))

    onDone('Session color reset to default', { display: 'system' })
    return null
  }

  if (!AGENT_COLORS.includes(colorArg as AgentColorName)) {
    const colorList = AGENT_COLORS.join(', ')
    onDone(
      `Invalid color "${colorArg}". Available colors: ${colorList}, default`,
      { display: 'system' },
    )
    return null
  }

  const sessionId = getSessionId() as UUID
  const fullPath = getTranscriptPath()

  // Save to transcript for persistence across sessions
  await saveAgentColor(sessionId, colorArg, fullPath)

  // Update AppState for immediate effect
  context.setAppState(prev => ({
    ...prev,
    standaloneAgentContext: {
      ...prev.standaloneAgentContext,
      name: prev.standaloneAgentContext?.name ?? '',
      color: colorArg as AgentColorName,
    },
  }))

  onDone(`Session color set to: ${colorArg}`, { display: 'system' })
  return null
}
16
src/commands/color/index.ts
Normal file
@ -0,0 +1,16 @@
/**
 * Color command - minimal metadata only.
 * Implementation is lazy-loaded from color.ts to reduce startup time.
 */
import type { Command } from '../../commands.js'

const color = {
  type: 'local-jsx',
  name: 'color',
  description: 'Set the prompt bar color for this session',
  immediate: true,
  argumentHint: '<color|default>',
  load: () => import('./color.js'),
} satisfies Command

export default color
158
src/commands/commit-push-pr.ts
Normal file
@ -0,0 +1,158 @@
import type { Command } from '../commands.js'
import {
  getAttributionTexts,
  getEnhancedPRAttribution,
} from '../utils/attribution.js'
import { getDefaultBranch } from '../utils/git.js'
import { executeShellCommandsInPrompt } from '../utils/promptShellExecution.js'
import { getUndercoverInstructions, isUndercover } from '../utils/undercover.js'

const ALLOWED_TOOLS = [
  'Bash(git checkout --branch:*)',
  'Bash(git checkout -b:*)',
  'Bash(git add:*)',
  'Bash(git status:*)',
  'Bash(git push:*)',
  'Bash(git commit:*)',
  'Bash(gh pr create:*)',
  'Bash(gh pr edit:*)',
  'Bash(gh pr view:*)',
  'Bash(gh pr merge:*)',
  'ToolSearch',
  'mcp__slack__send_message',
  'mcp__claude_ai_Slack__slack_send_message',
]

function getPromptContent(
  defaultBranch: string,
  prAttribution?: string,
): string {
  const { commit: commitAttribution, pr: defaultPrAttribution } =
    getAttributionTexts()
  // Use provided PR attribution or fall back to default
  const effectivePrAttribution = prAttribution ?? defaultPrAttribution
  const safeUser = process.env.SAFEUSER || ''
  const username = process.env.USER || ''

  let prefix = ''
  let reviewerArg = ' and `--reviewer anthropics/claude-code`'
  let addReviewerArg = ' (and add `--add-reviewer anthropics/claude-code`)'
  let changelogSection = `

## Changelog
<!-- CHANGELOG:START -->
[If this PR contains user-facing changes, add a changelog entry here. Otherwise, remove this section.]
<!-- CHANGELOG:END -->`
  let slackStep = `

5. After creating/updating the PR, check if the user's CLAUDE.md mentions posting to Slack channels. If it does, use ToolSearch to search for "slack send message" tools. If ToolSearch finds a Slack tool, ask the user if they'd like you to post the PR URL to the relevant Slack channel. Only post if the user confirms. If ToolSearch returns no results or errors, skip this step silently—do not mention the failure, do not attempt workarounds, and do not try alternative approaches.`
  if (process.env.USER_TYPE === 'ant' && isUndercover()) {
    prefix = getUndercoverInstructions() + '\n'
    reviewerArg = ''
    addReviewerArg = ''
    changelogSection = ''
    slackStep = ''
  }

  return `${prefix}## Context

- \`SAFEUSER\`: ${safeUser}
- \`whoami\`: ${username}
- \`git status\`: !\`git status\`
- \`git diff HEAD\`: !\`git diff HEAD\`
- \`git branch --show-current\`: !\`git branch --show-current\`
- \`git diff ${defaultBranch}...HEAD\`: !\`git diff ${defaultBranch}...HEAD\`
- \`gh pr view --json number 2>/dev/null || true\`: !\`gh pr view --json number 2>/dev/null || true\`

## Git Safety Protocol

- NEVER update the git config
- NEVER run destructive/irreversible git commands (like push --force, hard reset, etc) unless the user explicitly requests them
- NEVER skip hooks (--no-verify, --no-gpg-sign, etc) unless the user explicitly requests it
- NEVER run force push to main/master, warn the user if they request it
- Do not commit files that likely contain secrets (.env, credentials.json, etc)
- Never use git commands with the -i flag (like git rebase -i or git add -i) since they require interactive input which is not supported

## Your task

Analyze all changes that will be included in the pull request, making sure to look at all relevant commits (NOT just the latest commit, but ALL commits that will be included in the pull request from the git diff ${defaultBranch}...HEAD output above).

Based on the above changes:
1. Create a new branch if on ${defaultBranch} (use SAFEUSER from context above for the branch name prefix, falling back to whoami if SAFEUSER is empty, e.g., \`username/feature-name\`)
2. Create a single commit with an appropriate message using heredoc syntax${commitAttribution ? `, ending with the attribution text shown in the example below` : ''}:
\`\`\`
git commit -m "$(cat <<'EOF'
Commit message here.${commitAttribution ? `\n\n${commitAttribution}` : ''}
EOF
)"
\`\`\`
3. Push the branch to origin
4. If a PR already exists for this branch (check the gh pr view output above), update the PR title and body using \`gh pr edit\` to reflect the current diff${addReviewerArg}. Otherwise, create a pull request using \`gh pr create\` with heredoc syntax for the body${reviewerArg}.
   - IMPORTANT: Keep PR titles short (under 70 characters). Use the body for details.
\`\`\`
gh pr create --title "Short, descriptive title" --body "$(cat <<'EOF'
## Summary
<1-3 bullet points>

## Test plan
[Bulleted markdown checklist of TODOs for testing the pull request...]${changelogSection}${effectivePrAttribution ? `\n\n${effectivePrAttribution}` : ''}
EOF
)"
\`\`\`

You have the capability to call multiple tools in a single response. You MUST do all of the above in a single message.${slackStep}

Return the PR URL when you're done, so the user can see it.`
}

const command = {
  type: 'prompt',
  name: 'commit-push-pr',
  description: 'Commit, push, and open a PR',
  allowedTools: ALLOWED_TOOLS,
  get contentLength() {
    // Use 'main' as estimate for content length calculation
    return getPromptContent('main').length
  },
  progressMessage: 'creating commit and PR',
  source: 'builtin',
  async getPromptForCommand(args, context) {
    // Get default branch and enhanced PR attribution
    const [defaultBranch, prAttribution] = await Promise.all([
      getDefaultBranch(),
      getEnhancedPRAttribution(context.getAppState),
    ])
    let promptContent = getPromptContent(defaultBranch, prAttribution)

    // Append user instructions if args provided
    const trimmedArgs = args?.trim()
    if (trimmedArgs) {
      promptContent += `\n\n## Additional instructions from user\n\n${trimmedArgs}`
    }

    const finalContent = await executeShellCommandsInPrompt(
      promptContent,
      {
        ...context,
        getAppState() {
          const appState = context.getAppState()
          return {
            ...appState,
            toolPermissionContext: {
              ...appState.toolPermissionContext,
              alwaysAllowRules: {
                ...appState.toolPermissionContext.alwaysAllowRules,
                command: ALLOWED_TOOLS,
              },
            },
          }
        },
      },
      '/commit-push-pr',
    )

    return [{ type: 'text', text: finalContent }]
  },
} satisfies Command

export default command
92
src/commands/commit.ts
Normal file
@ -0,0 +1,92 @@
import type { Command } from '../commands.js'
import { getAttributionTexts } from '../utils/attribution.js'
import { executeShellCommandsInPrompt } from '../utils/promptShellExecution.js'
import { getUndercoverInstructions, isUndercover } from '../utils/undercover.js'

const ALLOWED_TOOLS = [
  'Bash(git add:*)',
  'Bash(git status:*)',
  'Bash(git commit:*)',
]

function getPromptContent(): string {
  const { commit: commitAttribution } = getAttributionTexts()

  let prefix = ''
  if (process.env.USER_TYPE === 'ant' && isUndercover()) {
    prefix = getUndercoverInstructions() + '\n'
  }

  return `${prefix}## Context

- Current git status: !\`git status\`
- Current git diff (staged and unstaged changes): !\`git diff HEAD\`
- Current branch: !\`git branch --show-current\`
- Recent commits: !\`git log --oneline -10\`

## Git Safety Protocol

- NEVER update the git config
- NEVER skip hooks (--no-verify, --no-gpg-sign, etc) unless the user explicitly requests it
- CRITICAL: ALWAYS create NEW commits. NEVER use git commit --amend, unless the user explicitly requests it
- Do not commit files that likely contain secrets (.env, credentials.json, etc). Warn the user if they specifically request to commit those files
- If there are no changes to commit (i.e., no untracked files and no modifications), do not create an empty commit
- Never use git commands with the -i flag (like git rebase -i or git add -i) since they require interactive input which is not supported

## Your task

Based on the above changes, create a single git commit:

1. Analyze all staged changes and draft a commit message:
   - Look at the recent commits above to follow this repository's commit message style
   - Summarize the nature of the changes (new feature, enhancement, bug fix, refactoring, test, docs, etc.)
   - Ensure the message accurately reflects the changes and their purpose (i.e. "add" means a wholly new feature, "update" means an enhancement to an existing feature, "fix" means a bug fix, etc.)
   - Draft a concise (1-2 sentences) commit message that focuses on the "why" rather than the "what"

2. Stage relevant files and create the commit using HEREDOC syntax:
\`\`\`
git commit -m "$(cat <<'EOF'
Commit message here.${commitAttribution ? `\n\n${commitAttribution}` : ''}
EOF
)"
\`\`\`

You have the capability to call multiple tools in a single response. Stage and create the commit using a single message. Do not use any other tools or do anything else. Do not send any other text or messages besides these tool calls.`
}

const command = {
  type: 'prompt',
  name: 'commit',
  description: 'Create a git commit',
  allowedTools: ALLOWED_TOOLS,
  contentLength: 0, // Dynamic content
  progressMessage: 'creating commit',
  source: 'builtin',
  async getPromptForCommand(_args, context) {
    const promptContent = getPromptContent()
    const finalContent = await executeShellCommandsInPrompt(
      promptContent,
      {
        ...context,
        getAppState() {
          const appState = context.getAppState()
          return {
            ...appState,
            toolPermissionContext: {
              ...appState.toolPermissionContext,
              alwaysAllowRules: {
                ...appState.toolPermissionContext.alwaysAllowRules,
                command: ALLOWED_TOOLS,
              },
            },
          }
        },
      },
      '/commit',
    )

    return [{ type: 'text', text: finalContent }]
  },
} satisfies Command

export default command
287
src/commands/compact/compact.ts
Normal file
@ -0,0 +1,287 @@
import { feature } from 'bun:bundle'
import chalk from 'chalk'
import { markPostCompaction } from 'src/bootstrap/state.js'
import { getSystemPrompt } from '../../constants/prompts.js'
import { getSystemContext, getUserContext } from '../../context.js'
import { getShortcutDisplay } from '../../keybindings/shortcutFormat.js'
import { notifyCompaction } from '../../services/api/promptCacheBreakDetection.js'
import {
  type CompactionResult,
  compactConversation,
  ERROR_MESSAGE_INCOMPLETE_RESPONSE,
  ERROR_MESSAGE_NOT_ENOUGH_MESSAGES,
  ERROR_MESSAGE_USER_ABORT,
  mergeHookInstructions,
} from '../../services/compact/compact.js'
import { suppressCompactWarning } from '../../services/compact/compactWarningState.js'
import { microcompactMessages } from '../../services/compact/microCompact.js'
import { runPostCompactCleanup } from '../../services/compact/postCompactCleanup.js'
import { trySessionMemoryCompaction } from '../../services/compact/sessionMemoryCompact.js'
import { setLastSummarizedMessageId } from '../../services/SessionMemory/sessionMemoryUtils.js'
import type { ToolUseContext } from '../../Tool.js'
import type { LocalCommandCall } from '../../types/command.js'
import type { Message } from '../../types/message.js'
import { hasExactErrorMessage } from '../../utils/errors.js'
import { executePreCompactHooks } from '../../utils/hooks.js'
import { logError } from '../../utils/log.js'
import { getMessagesAfterCompactBoundary } from '../../utils/messages.js'
import { getUpgradeMessage } from '../../utils/model/contextWindowUpgradeCheck.js'
import {
  buildEffectiveSystemPrompt,
  type SystemPrompt,
} from '../../utils/systemPrompt.js'

/* eslint-disable @typescript-eslint/no-require-imports */
const reactiveCompact = feature('REACTIVE_COMPACT')
  ? (require('../../services/compact/reactiveCompact.js') as typeof import('../../services/compact/reactiveCompact.js'))
  : null
/* eslint-enable @typescript-eslint/no-require-imports */

export const call: LocalCommandCall = async (args, context) => {
  const { abortController } = context
  let { messages } = context

  // REPL keeps snipped messages for UI scrollback — project so the compact
  // model doesn't summarize content that was intentionally removed.
  messages = getMessagesAfterCompactBoundary(messages)

  if (messages.length === 0) {
    throw new Error('No messages to compact')
  }

  const customInstructions = args.trim()

  try {
    // Try session memory compaction first if no custom instructions
    // (session memory compaction doesn't support custom instructions)
    if (!customInstructions) {
      const sessionMemoryResult = await trySessionMemoryCompaction(
        messages,
        context.agentId,
      )
      if (sessionMemoryResult) {
        getUserContext.cache.clear?.()
        runPostCompactCleanup()
        // Reset cache read baseline so the post-compact drop isn't flagged
        // as a break. compactConversation does this internally; SM-compact doesn't.
        if (feature('PROMPT_CACHE_BREAK_DETECTION')) {
          notifyCompaction(
            context.options.querySource ?? 'compact',
            context.agentId,
          )
        }
        markPostCompaction()
        // Suppress warning immediately after successful compaction
        suppressCompactWarning()

        return {
          type: 'compact',
          compactionResult: sessionMemoryResult,
          displayText: buildDisplayText(context),
        }
      }
    }

    // Reactive-only mode: route /compact through the reactive path.
    // Checked after session-memory (that path is cheap and orthogonal).
    if (reactiveCompact?.isReactiveOnlyMode()) {
      return await compactViaReactive(
        messages,
        context,
        customInstructions,
        reactiveCompact,
      )
    }

    // Fall back to traditional compaction
    // Run microcompact first to reduce tokens before summarization
    const microcompactResult = await microcompactMessages(messages, context)
    const messagesForCompact = microcompactResult.messages

    const result = await compactConversation(
      messagesForCompact,
      context,
      await getCacheSharingParams(context, messagesForCompact),
      false,
      customInstructions,
      false,
    )

    // Reset lastSummarizedMessageId since legacy compaction replaces all messages
    // and the old message UUID will no longer exist in the new messages array
    setLastSummarizedMessageId(undefined)

    // Suppress the "Context left until auto-compact" warning after successful compaction
    suppressCompactWarning()

    getUserContext.cache.clear?.()
    runPostCompactCleanup()

    return {
      type: 'compact',
      compactionResult: result,
      displayText: buildDisplayText(context, result.userDisplayMessage),
    }
  } catch (error) {
    if (abortController.signal.aborted) {
      throw new Error('Compaction canceled.')
    } else if (hasExactErrorMessage(error, ERROR_MESSAGE_NOT_ENOUGH_MESSAGES)) {
      throw new Error(ERROR_MESSAGE_NOT_ENOUGH_MESSAGES)
    } else if (hasExactErrorMessage(error, ERROR_MESSAGE_INCOMPLETE_RESPONSE)) {
      throw new Error(ERROR_MESSAGE_INCOMPLETE_RESPONSE)
    } else {
      logError(error)
      throw new Error(`Error during compaction: ${error}`)
    }
  }
}

async function compactViaReactive(
  messages: Message[],
  context: ToolUseContext,
  customInstructions: string,
  reactive: NonNullable<typeof reactiveCompact>,
): Promise<{
  type: 'compact'
  compactionResult: CompactionResult
  displayText: string
}> {
  context.onCompactProgress?.({
    type: 'hooks_start',
    hookType: 'pre_compact',
  })
  context.setSDKStatus?.('compacting')

  try {
    // Hooks and cache-param build are independent — run concurrently.
    // getCacheSharingParams walks all tools to build the system prompt;
    // pre-compact hooks spawn subprocesses. Neither depends on the other.
    const [hookResult, cacheSafeParams] = await Promise.all([
      executePreCompactHooks(
        { trigger: 'manual', customInstructions: customInstructions || null },
        context.abortController.signal,
      ),
      getCacheSharingParams(context, messages),
    ])
    const mergedInstructions = mergeHookInstructions(
      customInstructions,
      hookResult.newCustomInstructions,
    )

    context.setStreamMode?.('requesting')
    context.setResponseLength?.(() => 0)
    context.onCompactProgress?.({ type: 'compact_start' })

    const outcome = await reactive.reactiveCompactOnPromptTooLong(
      messages,
      cacheSafeParams,
      { customInstructions: mergedInstructions, trigger: 'manual' },
    )

    if (!outcome.ok) {
      // The outer catch in `call` translates these: aborted → "Compaction
      // canceled." (via abortController.signal.aborted check), NOT_ENOUGH →
      // re-thrown as-is, everything else → "Error during compaction: …".
      switch (outcome.reason) {
        case 'too_few_groups':
          throw new Error(ERROR_MESSAGE_NOT_ENOUGH_MESSAGES)
        case 'aborted':
          throw new Error(ERROR_MESSAGE_USER_ABORT)
        case 'exhausted':
        case 'error':
        case 'media_unstrippable':
          throw new Error(ERROR_MESSAGE_INCOMPLETE_RESPONSE)
      }
    }

    // Mirrors the post-success cleanup in tryReactiveCompact, minus
    // resetMicrocompactState — processSlashCommand calls that for all
    // type:'compact' results.
    setLastSummarizedMessageId(undefined)
    runPostCompactCleanup()
    suppressCompactWarning()
    getUserContext.cache.clear?.()

    // reactiveCompactOnPromptTooLong runs PostCompact hooks but not PreCompact
    // — both callers (here and tryReactiveCompact) run PreCompact outside so
    // they can merge its userDisplayMessage with PostCompact's here. This
    // caller additionally runs it concurrently with getCacheSharingParams.
    const combinedMessage =
      [hookResult.userDisplayMessage, outcome.result.userDisplayMessage]
        .filter(Boolean)
        .join('\n') || undefined

    return {
      type: 'compact',
      compactionResult: {
        ...outcome.result,
        userDisplayMessage: combinedMessage,
      },
      displayText: buildDisplayText(context, combinedMessage),
    }
  } finally {
    context.setStreamMode?.('requesting')
    context.setResponseLength?.(() => 0)
    context.onCompactProgress?.({ type: 'compact_end' })
    context.setSDKStatus?.(null)
  }
}

function buildDisplayText(
  context: ToolUseContext,
  userDisplayMessage?: string,
): string {
  const upgradeMessage = getUpgradeMessage('tip')
  const expandShortcut = getShortcutDisplay(
    'app:toggleTranscript',
    'Global',
    'ctrl+o',
  )
  const dimmed = [
    ...(context.options.verbose
      ? []
      : [`(${expandShortcut} to see full summary)`]),
    ...(userDisplayMessage ? [userDisplayMessage] : []),
    ...(upgradeMessage ? [upgradeMessage] : []),
  ]
  return chalk.dim('Compacted ' + dimmed.join('\n'))
}

async function getCacheSharingParams(
  context: ToolUseContext,
  forkContextMessages: Message[],
): Promise<{
  systemPrompt: SystemPrompt
  userContext: { [k: string]: string }
  systemContext: { [k: string]: string }
  toolUseContext: ToolUseContext
  forkContextMessages: Message[]
}> {
  const appState = context.getAppState()
  const defaultSysPrompt = await getSystemPrompt(
    context.options.tools,
    context.options.mainLoopModel,
    Array.from(
      appState.toolPermissionContext.additionalWorkingDirectories.keys(),
    ),
    context.options.mcpClients,
  )
  const systemPrompt = buildEffectiveSystemPrompt({
    mainThreadAgentDefinition: undefined,
    toolUseContext: context,
    customSystemPrompt: context.options.customSystemPrompt,
    defaultSystemPrompt: defaultSysPrompt,
    appendSystemPrompt: context.options.appendSystemPrompt,
  })
  const [userContext, systemContext] = await Promise.all([
    getUserContext(),
    getSystemContext(),
  ])
  return {
    systemPrompt,
    userContext,
    systemContext,
    toolUseContext: context,
    forkContextMessages,
  }
}
15
src/commands/compact/index.ts
Normal file
@ -0,0 +1,15 @@
import type { Command } from '../../commands.js'
import { isEnvTruthy } from '../../utils/envUtils.js'

const compact = {
  type: 'local',
  name: 'compact',
  description:
    'Clear conversation history but keep a summary in context. Optional: /compact [instructions for summarization]',
  isEnabled: () => !isEnvTruthy(process.env.DISABLE_COMPACT),
  supportsNonInteractive: true,
  argumentHint: '<optional custom summarization instructions>',
  load: () => import('./compact.js'),
} satisfies Command

export default compact
7
src/commands/config/config.tsx
Normal file
@ -0,0 +1,7 @@
import * as React from 'react';
import { Settings } from '../../components/Settings/Settings.js';
import type { LocalJSXCommandCall } from '../../types/command.js';

export const call: LocalJSXCommandCall = async (onDone, context) => {
  return <Settings onClose={onDone} context={context} defaultTab="Config" />;
};
11
src/commands/config/index.ts
Normal file
@ -0,0 +1,11 @@
import type { Command } from '../../commands.js'

const config = {
  aliases: ['settings'],
  type: 'local-jsx',
  name: 'config',
  description: 'Open config panel',
  load: () => import('./config.js'),
} satisfies Command

export default config
325
src/commands/context/context-noninteractive.ts
Normal file
@ -0,0 +1,325 @@
import { feature } from 'bun:bundle'
import { microcompactMessages } from '../../services/compact/microCompact.js'
import type { AppState } from '../../state/AppStateStore.js'
import type { Tools, ToolUseContext } from '../../Tool.js'
import type { AgentDefinitionsResult } from '../../tools/AgentTool/loadAgentsDir.js'
import type { Message } from '../../types/message.js'
import {
  analyzeContextUsage,
  type ContextData,
} from '../../utils/analyzeContext.js'
import { formatTokens } from '../../utils/format.js'
import { getMessagesAfterCompactBoundary } from '../../utils/messages.js'
import { getSourceDisplayName } from '../../utils/settings/constants.js'
import { plural } from '../../utils/stringUtils.js'

/**
 * Shared data-collection path for `/context` (slash command) and the SDK
 * `get_context_usage` control request. Mirrors query.ts's pre-API transforms
 * (compact boundary, projectView, microcompact) so the token count reflects
 * what the model actually sees.
 */
type CollectContextDataInput = {
  messages: Message[]
  getAppState: () => AppState
  options: {
    mainLoopModel: string
    tools: Tools
    agentDefinitions: AgentDefinitionsResult
    customSystemPrompt?: string
    appendSystemPrompt?: string
  }
}

export async function collectContextData(
  context: CollectContextDataInput,
): Promise<ContextData> {
  const {
    messages,
    getAppState,
    options: {
      mainLoopModel,
      tools,
      agentDefinitions,
      customSystemPrompt,
      appendSystemPrompt,
    },
  } = context

  let apiView = getMessagesAfterCompactBoundary(messages)
  if (feature('CONTEXT_COLLAPSE')) {
    /* eslint-disable @typescript-eslint/no-require-imports */
    const { projectView } =
      require('../../services/contextCollapse/operations.js') as typeof import('../../services/contextCollapse/operations.js')
    /* eslint-enable @typescript-eslint/no-require-imports */
    apiView = projectView(apiView)
  }

  const { messages: compactedMessages } = await microcompactMessages(apiView)
  const appState = getAppState()

  return analyzeContextUsage(
    compactedMessages,
    mainLoopModel,
    async () => appState.toolPermissionContext,
    tools,
    agentDefinitions,
    undefined, // terminalWidth
    // analyzeContextUsage only reads options.{customSystemPrompt,appendSystemPrompt}
    // but its signature declares the full Pick<ToolUseContext, 'options'>.
    { options: { customSystemPrompt, appendSystemPrompt } } as Pick<
      ToolUseContext,
      'options'
    >,
    undefined, // mainThreadAgentDefinition
    apiView, // original messages for API usage extraction
  )
}

export async function call(
  _args: string,
  context: ToolUseContext,
): Promise<{ type: 'text'; value: string }> {
  const data = await collectContextData(context)
  return {
    type: 'text' as const,
    value: formatContextAsMarkdownTable(data),
  }
}

function formatContextAsMarkdownTable(data: ContextData): string {
  const {
    categories,
    totalTokens,
    rawMaxTokens,
    percentage,
    model,
    memoryFiles,
    mcpTools,
    agents,
    skills,
    messageBreakdown,
    systemTools,
    systemPromptSections,
  } = data

  let output = `## Context Usage\n\n`
  output += `**Model:** ${model} \n`
  output += `**Tokens:** ${formatTokens(totalTokens)} / ${formatTokens(rawMaxTokens)} (${percentage}%)\n`

  // Context-collapse status. Always show when the runtime gate is on —
  // the user needs to know which strategy is managing their context
  // even before anything has fired.
  if (feature('CONTEXT_COLLAPSE')) {
    /* eslint-disable @typescript-eslint/no-require-imports */
    const { getStats, isContextCollapseEnabled } =
      require('../../services/contextCollapse/index.js') as typeof import('../../services/contextCollapse/index.js')
    /* eslint-enable @typescript-eslint/no-require-imports */
    if (isContextCollapseEnabled()) {
      const s = getStats()
      const { health: h } = s

      const parts = []
      if (s.collapsedSpans > 0) {
        parts.push(
          `${s.collapsedSpans} ${plural(s.collapsedSpans, 'span')} summarized (${s.collapsedMessages} messages)`,
        )
      }
      if (s.stagedSpans > 0) parts.push(`${s.stagedSpans} staged`)
      const summary =
        parts.length > 0
          ? parts.join(', ')
          : h.totalSpawns > 0
            ? `${h.totalSpawns} ${plural(h.totalSpawns, 'spawn')}, nothing staged yet`
            : 'waiting for first trigger'
      output += `**Context strategy:** collapse (${summary})\n`

      if (h.totalErrors > 0) {
        output += `**Collapse errors:** ${h.totalErrors}/${h.totalSpawns} spawns failed`
        if (h.lastError) {
          output += ` (last: ${h.lastError.slice(0, 80)})`
        }
        output += '\n'
      } else if (h.emptySpawnWarningEmitted) {
        output += `**Collapse idle:** ${h.totalEmptySpawns} consecutive empty runs\n`
      }
    }
  }
  output += '\n'

  // Main categories table
  const visibleCategories = categories.filter(
    cat =>
      cat.tokens > 0 &&
      cat.name !== 'Free space' &&
      cat.name !== 'Autocompact buffer',
  )

  if (visibleCategories.length > 0) {
    output += `### Estimated usage by category\n\n`
    output += `| Category | Tokens | Percentage |\n`
    output += `|----------|--------|------------|\n`

    for (const cat of visibleCategories) {
      const percentDisplay = ((cat.tokens / rawMaxTokens) * 100).toFixed(1)
      output += `| ${cat.name} | ${formatTokens(cat.tokens)} | ${percentDisplay}% |\n`
    }

    const freeSpaceCategory = categories.find(c => c.name === 'Free space')
    if (freeSpaceCategory && freeSpaceCategory.tokens > 0) {
      const percentDisplay = (
        (freeSpaceCategory.tokens / rawMaxTokens) *
        100
      ).toFixed(1)
      output += `| Free space | ${formatTokens(freeSpaceCategory.tokens)} | ${percentDisplay}% |\n`
    }

    const autocompactCategory = categories.find(
      c => c.name === 'Autocompact buffer',
    )
    if (autocompactCategory && autocompactCategory.tokens > 0) {
      const percentDisplay = (
        (autocompactCategory.tokens / rawMaxTokens) *
        100
      ).toFixed(1)
      output += `| Autocompact buffer | ${formatTokens(autocompactCategory.tokens)} | ${percentDisplay}% |\n`
    }

    output += `\n`
|
||||
}
|
||||
|
||||
// MCP tools
|
||||
if (mcpTools.length > 0) {
|
||||
output += `### MCP Tools\n\n`
|
||||
output += `| Tool | Server | Tokens |\n`
|
||||
output += `|------|--------|--------|\n`
|
||||
for (const tool of mcpTools) {
|
||||
output += `| ${tool.name} | ${tool.serverName} | ${formatTokens(tool.tokens)} |\n`
|
||||
}
|
||||
output += `\n`
|
||||
}
|
||||
|
||||
// System tools (ant-only)
|
||||
if (
|
||||
systemTools &&
|
||||
systemTools.length > 0 &&
|
||||
process.env.USER_TYPE === 'ant'
|
||||
) {
|
||||
output += `### [ANT-ONLY] System Tools\n\n`
|
||||
output += `| Tool | Tokens |\n`
|
||||
output += `|------|--------|\n`
|
||||
for (const tool of systemTools) {
|
||||
output += `| ${tool.name} | ${formatTokens(tool.tokens)} |\n`
|
||||
}
|
||||
output += `\n`
|
||||
}
|
||||
|
||||
// System prompt sections (ant-only)
|
||||
if (
|
||||
systemPromptSections &&
|
||||
systemPromptSections.length > 0 &&
|
||||
process.env.USER_TYPE === 'ant'
|
||||
) {
|
||||
output += `### [ANT-ONLY] System Prompt Sections\n\n`
|
||||
output += `| Section | Tokens |\n`
|
||||
output += `|---------|--------|\n`
|
||||
for (const section of systemPromptSections) {
|
||||
output += `| ${section.name} | ${formatTokens(section.tokens)} |\n`
|
||||
}
|
||||
output += `\n`
|
||||
}
|
||||
|
||||
// Custom agents
|
||||
if (agents.length > 0) {
|
||||
output += `### Custom Agents\n\n`
|
||||
output += `| Agent Type | Source | Tokens |\n`
|
||||
output += `|------------|--------|--------|\n`
|
||||
for (const agent of agents) {
|
||||
let sourceDisplay: string
|
||||
switch (agent.source) {
|
||||
case 'projectSettings':
|
||||
sourceDisplay = 'Project'
|
||||
break
|
||||
case 'userSettings':
|
||||
sourceDisplay = 'User'
|
||||
break
|
||||
case 'localSettings':
|
||||
sourceDisplay = 'Local'
|
||||
break
|
||||
case 'flagSettings':
|
||||
sourceDisplay = 'Flag'
|
||||
break
|
||||
case 'policySettings':
|
||||
sourceDisplay = 'Policy'
|
||||
break
|
||||
case 'plugin':
|
||||
sourceDisplay = 'Plugin'
|
||||
break
|
||||
case 'built-in':
|
||||
sourceDisplay = 'Built-in'
|
||||
break
|
||||
default:
|
||||
sourceDisplay = String(agent.source)
|
||||
}
|
||||
output += `| ${agent.agentType} | ${sourceDisplay} | ${formatTokens(agent.tokens)} |\n`
|
||||
}
|
||||
output += `\n`
|
||||
}
|
||||
|
||||
// Memory files
|
||||
if (memoryFiles.length > 0) {
|
||||
output += `### Memory Files\n\n`
|
||||
output += `| Type | Path | Tokens |\n`
|
||||
output += `|------|------|--------|\n`
|
||||
for (const file of memoryFiles) {
|
||||
output += `| ${file.type} | ${file.path} | ${formatTokens(file.tokens)} |\n`
|
||||
}
|
||||
output += `\n`
|
||||
}
|
||||
|
||||
// Skills
|
||||
if (skills && skills.tokens > 0 && skills.skillFrontmatter.length > 0) {
|
||||
output += `### Skills\n\n`
|
||||
output += `| Skill | Source | Tokens |\n`
|
||||
output += `|-------|--------|--------|\n`
|
||||
for (const skill of skills.skillFrontmatter) {
|
||||
output += `| ${skill.name} | ${getSourceDisplayName(skill.source)} | ${formatTokens(skill.tokens)} |\n`
|
||||
}
|
||||
output += `\n`
|
||||
}
|
||||
|
||||
// Message breakdown (ant-only)
|
||||
if (messageBreakdown && process.env.USER_TYPE === 'ant') {
|
||||
output += `### [ANT-ONLY] Message Breakdown\n\n`
|
||||
output += `| Category | Tokens |\n`
|
||||
output += `|----------|--------|\n`
|
||||
output += `| Tool calls | ${formatTokens(messageBreakdown.toolCallTokens)} |\n`
|
||||
output += `| Tool results | ${formatTokens(messageBreakdown.toolResultTokens)} |\n`
|
||||
output += `| Attachments | ${formatTokens(messageBreakdown.attachmentTokens)} |\n`
|
||||
output += `| Assistant messages (non-tool) | ${formatTokens(messageBreakdown.assistantMessageTokens)} |\n`
|
||||
output += `| User messages (non-tool-result) | ${formatTokens(messageBreakdown.userMessageTokens)} |\n`
|
||||
output += `\n`
|
||||
|
||||
if (messageBreakdown.toolCallsByType.length > 0) {
|
||||
output += `#### Top Tools\n\n`
|
||||
output += `| Tool | Call Tokens | Result Tokens |\n`
|
||||
output += `|------|-------------|---------------|\n`
|
||||
for (const tool of messageBreakdown.toolCallsByType) {
|
||||
output += `| ${tool.name} | ${formatTokens(tool.callTokens)} | ${formatTokens(tool.resultTokens)} |\n`
|
||||
}
|
||||
output += `\n`
|
||||
}
|
||||
|
||||
if (messageBreakdown.attachmentsByType.length > 0) {
|
||||
output += `#### Top Attachments\n\n`
|
||||
output += `| Attachment | Tokens |\n`
|
||||
output += `|------------|--------|\n`
|
||||
for (const attachment of messageBreakdown.attachmentsByType) {
|
||||
output += `| ${attachment.name} | ${formatTokens(attachment.tokens)} |\n`
|
||||
}
|
||||
output += `\n`
|
||||
}
|
||||
}
|
||||
|
||||
return output
|
||||
}
|
||||
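The table formatter calls `formatTokens` and `plural`, which are defined elsewhere in the file and not shown in this diff. As a rough illustration of what they might look like (these are assumptions, not the repo's actual implementations):

```typescript
// Assumed helper sketches; the repo's real formatTokens/plural may differ.
function formatTokens(n: number): string {
  // Compact display, e.g. 36400 -> "36.4k"
  return n >= 1000 ? `${(n / 1000).toFixed(1)}k` : String(n)
}

function plural(count: number, word: string): string {
  // Naive English pluralization; enough for "span"/"spawn"
  return count === 1 ? word : `${word}s`
}

console.log(formatTokens(36400), plural(2, 'span')) // prints "36.4k spans"
```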
64 src/commands/context/context.tsx Normal file
File diff suppressed because one or more lines are too long
24 src/commands/context/index.ts Normal file
@ -0,0 +1,24 @@
import { getIsNonInteractiveSession } from '../../bootstrap/state.js'
import type { Command } from '../../commands.js'

export const context: Command = {
  name: 'context',
  description: 'Visualize current context usage as a colored grid',
  isEnabled: () => !getIsNonInteractiveSession(),
  type: 'local-jsx',
  load: () => import('./context.js'),
}

export const contextNonInteractive: Command = {
  type: 'local',
  name: 'context',
  supportsNonInteractive: true,
  description: 'Show current context usage',
  get isHidden() {
    return !getIsNonInteractiveSession()
  },
  isEnabled() {
    return getIsNonInteractiveSession()
  },
  load: () => import('./context-noninteractive.js'),
}
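The two `context` registrations are complementary: the JSX variant is enabled only in interactive sessions, the plain-text variant only outside them. A minimal standalone sketch of that gating, using a stand-in for `getIsNonInteractiveSession` (the real flag lives in `src/bootstrap/state.js`):

```typescript
// Stand-in for the repo's session flag, for illustration only.
let nonInteractive = false
const getIsNonInteractiveSession = () => nonInteractive

const context = {
  name: 'context',
  isEnabled: () => !getIsNonInteractiveSession(),
}
const contextNonInteractive = {
  name: 'context',
  isEnabled: () => getIsNonInteractiveSession(),
}

// In either session mode, exactly one variant is active.
const active = [context, contextNonInteractive].filter(c => c.isEnabled())
console.log(active.length) // prints 1
```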
371 src/commands/copy/copy.tsx Normal file
File diff suppressed because one or more lines are too long
15 src/commands/copy/index.ts Normal file
@ -0,0 +1,15 @@
/**
 * Copy command - minimal metadata only.
 * Implementation is lazy-loaded from copy.tsx to reduce startup time.
 */
import type { Command } from '../../commands.js'

const copy = {
  type: 'local-jsx',
  name: 'copy',
  description:
    "Copy Claude's last response to clipboard (or /copy N for the Nth-latest)",
  load: () => import('./copy.js'),
} satisfies Command

export default copy
24 src/commands/cost/cost.ts Normal file
@ -0,0 +1,24 @@
import { formatTotalCost } from '../../cost-tracker.js'
import { currentLimits } from '../../services/claudeAiLimits.js'
import type { LocalCommandCall } from '../../types/command.js'
import { isClaudeAISubscriber } from '../../utils/auth.js'

export const call: LocalCommandCall = async () => {
  if (isClaudeAISubscriber()) {
    let value: string

    if (currentLimits.isUsingOverage) {
      value =
        'You are currently using your overages to power your Claude Code usage. We will automatically switch you back to your subscription rate limits when they reset'
    } else {
      value =
        'You are currently using your subscription to power your Claude Code usage'
    }

    if (process.env.USER_TYPE === 'ant') {
      value += `\n\n[ANT-ONLY] Showing cost anyway:\n ${formatTotalCost()}`
    }
    return { type: 'text', value }
  }
  return { type: 'text', value: formatTotalCost() }
}
23 src/commands/cost/index.ts Normal file
@ -0,0 +1,23 @@
/**
 * Cost command - minimal metadata only.
 * Implementation is lazy-loaded from cost.ts to reduce startup time.
 */
import type { Command } from '../../commands.js'
import { isClaudeAISubscriber } from '../../utils/auth.js'

const cost = {
  type: 'local',
  name: 'cost',
  description: 'Show the total cost and duration of the current session',
  get isHidden() {
    // Keep visible for Ants even if they're subscribers (they see cost breakdowns)
    if (process.env.USER_TYPE === 'ant') {
      return false
    }
    return isClaudeAISubscriber()
  },
  supportsNonInteractive: true,
  load: () => import('./cost.js'),
} satisfies Command

export default cost
65 src/commands/createMovedToPluginCommand.ts Normal file
@ -0,0 +1,65 @@
import type { ContentBlockParam } from '@anthropic-ai/sdk/resources/messages.js'
import type { Command } from '../commands.js'
import type { ToolUseContext } from '../Tool.js'

type Options = {
  name: string
  description: string
  progressMessage: string
  pluginName: string
  pluginCommand: string
  /**
   * The prompt to use while the marketplace is private.
   * External users will get this prompt. Once the marketplace is public,
   * this parameter and the fallback logic can be removed.
   */
  getPromptWhileMarketplaceIsPrivate: (
    args: string,
    context: ToolUseContext,
  ) => Promise<ContentBlockParam[]>
}

export function createMovedToPluginCommand({
  name,
  description,
  progressMessage,
  pluginName,
  pluginCommand,
  getPromptWhileMarketplaceIsPrivate,
}: Options): Command {
  return {
    type: 'prompt',
    name,
    description,
    progressMessage,
    contentLength: 0, // Dynamic content
    userFacingName() {
      return name
    },
    source: 'builtin',
    async getPromptForCommand(
      args: string,
      context: ToolUseContext,
    ): Promise<ContentBlockParam[]> {
      if (process.env.USER_TYPE === 'ant') {
        return [
          {
            type: 'text',
            text: `This command has been moved to a plugin. Tell the user:

1. To install the plugin, run:
   claude plugin install ${pluginName}@claude-code-marketplace

2. After installation, use /${pluginName}:${pluginCommand} to run this command

3. For more information, see: https://github.com/anthropics/claude-code-marketplace/blob/main/${pluginName}/README.md

Do not attempt to run the command. Simply inform the user about the plugin installation.`,
          },
        ]
      }

      return getPromptWhileMarketplaceIsPrivate(args, context)
    },
  }
}
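To show how the factory branches between internal (`USER_TYPE === 'ant'`) and external users, here is a self-contained sketch. `ContentBlockParam` is a simplified stand-in for the SDK type, `makeMovedCommand` is a trimmed-down version of the factory, and `demo`/`example-plugin` are hypothetical names, not a real command or marketplace entry:

```typescript
// Simplified stand-in for the SDK's ContentBlockParam, for illustration only.
type ContentBlockParam = { type: 'text'; text: string }

declare const process: { env: Record<string, string | undefined> }

// Trimmed-down sketch of the factory's branching logic.
function makeMovedCommand(opts: {
  name: string
  pluginName: string
  pluginCommand: string
  fallbackPrompt: (args: string) => Promise<ContentBlockParam[]>
}) {
  return {
    type: 'prompt' as const,
    name: opts.name,
    async getPromptForCommand(args: string): Promise<ContentBlockParam[]> {
      if (process.env.USER_TYPE === 'ant') {
        // Internal users are redirected to the plugin install flow.
        return [
          {
            type: 'text',
            text: `Install ${opts.pluginName}@claude-code-marketplace, then run /${opts.pluginName}:${opts.pluginCommand}`,
          },
        ]
      }
      // External users keep the private-marketplace fallback prompt.
      return opts.fallbackPrompt(args)
    },
  }
}

// Hypothetical command; 'example-plugin' is not a real marketplace entry.
const demo = makeMovedCommand({
  name: 'demo',
  pluginName: 'example-plugin',
  pluginCommand: 'run',
  fallbackPrompt: async () => [{ type: 'text', text: 'fallback prompt' }],
})
```

The environment check runs synchronously on each invocation, so the same command object serves both audiences without re-registration.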
1 src/commands/ctx_viz/index.js Normal file
@ -0,0 +1 @@
export default { isEnabled: () => false, isHidden: true, name: 'stub' };
1 src/commands/debug-tool-call/index.js Normal file
@ -0,0 +1 @@
export default { isEnabled: () => false, isHidden: true, name: 'stub' };
9 src/commands/desktop/desktop.tsx Normal file
@ -0,0 +1,9 @@
import React from 'react'
import type { CommandResultDisplay } from '../../commands.js'
import { DesktopHandoff } from '../../components/DesktopHandoff.js'

export async function call(
  onDone: (
    result?: string,
    options?: { display?: CommandResultDisplay },
  ) => void,
): Promise<React.ReactNode> {
  return <DesktopHandoff onDone={onDone} />
}
Some files were not shown because too many files have changed in this diff.