Merge pull request #2834 from ultraworkers/docs/roadmap-248-prompt-mode-silent-hang

docs(roadmap): add #248 prompt-mode silent-hang pinpoint
Bellman 2026-04-29 18:31:48 +09:00 committed by GitHub
commit 8e22f757d8
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194


@ -6253,4 +6253,6 @@ Original filing (2026-04-18): the session emitted `SessionStart hook (completed)
246. **Dogfood reminder cron can self-fail by timing out during active cycles, so the nudge loop itself is not trustworthy as an observability surface** — dogfooded 2026-04-21 in `#clawcode-building-in-public` after multiple consecutive alerts: `Cron job "clawcode-dogfood-cycle-reminder" failed: cron: job execution timed out` at 14:14, 14:24, 14:34, 14:44, 15:13, and 15:23 KST while the same dogfood cycle was actively producing reports and fixes. This is not just scheduler noise — it is a clawability gap in the reminder/control loop itself. A downstream claw seeing both repeated dogfood nudges and repeated cron timeouts cannot tell whether the reminder actually delivered, partially delivered, duplicated, or died after side effects. **Required fix shape:** (a) classify reminder execution outcome explicitly (`delivered`, `timed_out_after_send`, `timed_out_before_send`, `suppressed_as_duplicate`, `skipped_due_to_active_cycle`) instead of a single generic timeout; (b) attach the target message/report cycle id and whether a Discord post was already emitted before timeout; (c) add a fast-path/no-op path when the cycle state is unchanged or an active report is already in flight so the reminder job can exit cleanly instead of hanging; (d) add regression coverage proving repeated unchanged-state cycles do not stack timeouts or duplicate nudges. **Why this matters:** if the reminder loop itself is ambiguous, claws waste time responding to scheduler artifacts instead of real product state, and the dogfood surface stops being a reliable source of truth. Source: live clawhip/Jobdori dogfood cycle on 2026-04-21 with repeated timeout alerts in `#clawcode-building-in-public`.
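The outcome taxonomy in fix shape (a) and the fast-path exits in (c) can be sketched as a small state classifier. This is a hypothetical illustration, not claw-code's actual API: the enum variants come straight from the entry above, but `ReminderRun`, its field names, and `classify` are invented for the sketch.

```rust
// Hypothetical sketch for #246: classify a reminder run's outcome explicitly
// instead of collapsing everything into one generic cron timeout.
#[derive(Debug, PartialEq)]
enum ReminderOutcome {
    Delivered,
    TimedOutAfterSend,
    TimedOutBeforeSend,
    SuppressedAsDuplicate,
    SkippedDueToActiveCycle,
}

// Invented input shape: what the reminder job knows about itself at exit.
struct ReminderRun {
    cycle_state_changed: bool, // did the dogfood cycle state change since the last nudge?
    report_in_flight: bool,    // is an active report already being produced?
    discord_post_sent: bool,   // was the Discord post emitted before any timeout?
    timed_out: bool,           // did the job hit the cron execution timeout?
}

fn classify(run: &ReminderRun) -> ReminderOutcome {
    // Fast-path/no-op branches (fix shape c) exit before any send is attempted,
    // so repeated unchanged-state cycles cannot stack timeouts or duplicate
    // nudges (fix shape d).
    if run.report_in_flight {
        return ReminderOutcome::SkippedDueToActiveCycle;
    }
    if !run.cycle_state_changed {
        return ReminderOutcome::SuppressedAsDuplicate;
    }
    if run.timed_out {
        // Fix shape (b): record whether side effects happened before the timeout.
        if run.discord_post_sent {
            ReminderOutcome::TimedOutAfterSend
        } else {
            ReminderOutcome::TimedOutBeforeSend
        }
    } else {
        ReminderOutcome::Delivered
    }
}

fn main() {
    // The 2026-04-21 ambiguity: a timeout where no one knows if the nudge went out.
    let stuck = ReminderRun {
        cycle_state_changed: true,
        report_in_flight: false,
        discord_post_sent: false,
        timed_out: true,
    };
    println!("{:?}", classify(&stuck)); // prints "TimedOutBeforeSend"
}
```

The key design point is ordering: duplicate/no-op suppression is decided before any send path runs, which is what makes the `timed_out_*` split trustworthy.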
247. **MCP memory permission prompts can recur after a transport failure, leaving an active worker blocked in a second consent loop instead of a typed degraded state** — dogfooded 2026-04-27 from live session `clawcode-human` while responding to the claw-code dogfood nudge. The session first asked permission for `omx_memory.project_memory_read`; after approval, the call failed with `Transport closed`, then the runtime immediately attempted `omx_memory.notepad_read` and blocked again on a fresh allow prompt. From the outside this looks like an automation-hostile MCP lifecycle gap: the worker is neither cleanly ready nor cleanly failed, and downstream claws must scrape the pane to learn that memory MCP is both consent-gated and transport-degraded. **Required fix shape:** (a) after an MCP transport closes, emit a typed degraded state such as `mcp_transport_closed` with server/tool identity; (b) suppress or batch follow-up permission prompts for the same failed MCP server until transport recovery is proven; (c) expose whether the task can continue without that MCP tool or is blocked on memory; (d) add regression coverage for `permission granted -> transport closed -> follow-up tool attempt` so it becomes one structured blocker instead of repeated interactive consent loops. **Why this matters:** MCP memory should either be available, explicitly degraded, or explicitly blocked; repeated permission prompts after a closed transport make prompt delivery and readiness ambiguous. Source: live `clawcode-human` pane on 2026-04-27 04:3x UTC. **Fresh-run follow-up 2026-04-29:** owner-requested live session `claw-code-issue-247-human-fresh-run` used the actual `./rust/target/debug/claw` binary; `doctor` and `status` were green, so the remaining Phase-0 fresh-run evidence moved from MCP consent-loop reproduction to the non-interactive prompt silent-hang captured separately as #248.
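Fix shapes (a) and (b) above amount to a small gate in front of every MCP tool call: once a server's transport is in a typed `mcp_transport_closed` state, follow-up consent prompts for that server are deferred rather than shown. A minimal sketch, assuming invented names (`McpServerState`, `gate_tool_call`) that do not exist in claw-code:

```rust
// Hypothetical sketch for #247: a typed degraded state plus a consent gate,
// so "permission granted -> transport closed -> next tool attempt" becomes
// one structured blocker instead of a second interactive prompt.
#[derive(Debug, PartialEq)]
enum McpServerState {
    Ready,
    TransportClosed, // the typed degraded state from fix shape (a)
}

#[derive(Debug, PartialEq)]
enum ToolCallDecision {
    Proceed,
    PromptForConsent,
    // Fix shape (b): suppress/batch further prompts until transport recovery
    // is proven; the caller surfaces this as a single structured blocker.
    DeferUntilTransportRecovers,
}

fn gate_tool_call(state: &McpServerState, already_consented: bool) -> ToolCallDecision {
    match state {
        // A closed transport wins over everything: no fresh allow prompt.
        McpServerState::TransportClosed => ToolCallDecision::DeferUntilTransportRecovers,
        McpServerState::Ready if already_consented => ToolCallDecision::Proceed,
        McpServerState::Ready => ToolCallDecision::PromptForConsent,
    }
}

fn main() {
    // The 2026-04-27 trace: consent was granted for project_memory_read, the
    // transport closed, then notepad_read was attempted on the same server.
    let state = McpServerState::TransportClosed;
    println!("{:?}", gate_tool_call(&state, true)); // prints "DeferUntilTransportRecovers"
}
```

A real implementation would also carry server/tool identity and the fix-shape (c) bit (can the task continue without this MCP tool?) on the deferred decision.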
248. **Non-interactive prompt mode can exceed caller timeouts with no in-band startup/API phase event or partial status artifact** — dogfooded 2026-04-29 from live tmux session `claw-code-issue-247-human-fresh-run` after the owner explicitly asked gaebal-gajae to make a fresh session and use `claw-code` directly. The actual `./rust/target/debug/claw` binary was launched via `clawhip tmux new` on current main. `claw doctor --output-format json` and `claw status --output-format json` both succeeded and reported auth/config/workspace ok, but minimal non-interactive prompt calls (`timeout 120 ./rust/target/debug/claw --output-format json --dangerously-skip-permissions "echo hello"` and `timeout 120 ./rust/target/debug/claw --output-format json prompt "Reply with just the word hello"`) both timed out from the outer harness after roughly 150s with only `Command exceeded timeout` visible. There was no machine-readable `api_request_started`, `waiting_for_first_token`, provider/model/base-url identity, retry count, or partial status file/event that would let clawhip distinguish slow provider, network stall, auth/OAuth drift, stream parser hang, or prompt-mode bug. **Required fix shape:** (a) emit structured non-interactive lifecycle events for `startup_ok`, `api_request_started`, `first_byte/first_token`, retry/backoff, and terminal `timeout_or_stall` states; (b) include provider/model/base URL source and auth source category without leaking secrets; (c) support a CLI/request timeout flag or env override that returns a typed JSON error before the outer orchestrator kills the process; (d) write/emit a final partial status artifact on timeout so lane monitors do not have to infer state from a dead process. **Why this matters:** non-interactive prompt mode is the automation path; if it can hang past the caller's timeout while doctor/status are green, claws lose the ability to tell whether startup, auth, transport, provider latency, or stream consumption failed. Source: live session `claw-code-issue-247-human-fresh-run` on 2026-04-29.
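The lifecycle events in fix shapes (a)/(b) are cheap to emit as JSON lines on stderr, which an outer harness like clawhip can read even when stdout is reserved for the final answer. A minimal sketch with invented field names (`event`, `provider`, `model_source`, `elapsed_ms`) that are not claw-code's actual schema; the event names are the ones proposed in the entry above:

```rust
// Hypothetical sketch for #248: structured lifecycle events for the
// non-interactive prompt path, hand-formatted as JSON lines (no serde
// dependency needed for a sketch this small).
fn lifecycle_event(phase: &str, provider: &str, model_source: &str, elapsed_ms: u64) -> String {
    // Fix shape (b): identity fields carry the *source category* of the
    // provider/model/base-url config, never secrets or raw tokens.
    format!(
        "{{\"event\":\"{}\",\"provider\":\"{}\",\"model_source\":\"{}\",\"elapsed_ms\":{}}}",
        phase, provider, model_source, elapsed_ms
    )
}

fn main() {
    // Happy path: each phase is visible before the caller's `timeout 120` fires.
    eprintln!("{}", lifecycle_event("startup_ok", "anthropic", "config_default", 12));
    eprintln!("{}", lifecycle_event("api_request_started", "anthropic", "config_default", 40));
    // Fix shapes (c)/(d): an internal deadline shorter than the outer timeout
    // emits a terminal typed event (and would write a partial status artifact)
    // instead of letting the process die silently under SIGTERM.
    eprintln!("{}", lifecycle_event("timeout_or_stall", "anthropic", "config_default", 115_000));
}
```

With this in place, `Command exceeded timeout` stops being the only evidence: the last JSON line tells the lane monitor whether the hang happened before `api_request_started`, between request and first token, or mid-stream.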