docs: update README and installation guide

Update README with Anthropic blocking mention and revised model descriptions.
Fix markdown table alignment in both README and installation guide.

🤖 Generated with assistance of [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)
YeonGyu-Kim 2026-02-21 17:07:44 +09:00
parent 6a31e911d8
commit 8ae2f4fa39
2 changed files with 82 additions and 80 deletions


@@ -33,9 +33,11 @@
 </div>
-> Anthropic wants you locked in. Claude Code's a nice prison, but it's still a prison.
+> Anthropic [**blocked OpenCode because of us.**](https://x.com/thdxr/status/2010149530486911014) **Yes this is true.**
+> They want you locked in. Claude Code's a nice prison, but it's still a prison.
 >
-> We don't do lock-in here. We ride every model. Claude for orchestration. GPT for reasoning. Kimi for speed. Gemini for vision. The future isn't picking one winner—it's orchestrating them all. Models get cheaper every month. Smarter every month. No single provider will dominate. We're building for that open market, not their walled gardens.
+> We don't do lock-in here. We ride every model. Claude / Kimi / GLM for orchestration. GPT for reasoning. Minimax for speed. Gemini for creativity.
+> The future isn't picking one winner—it's orchestrating them all. Models get cheaper every month. Smarter every month. No single provider will dominate. We're building for that open market, not their walled gardens.
 <div align="center">
@@ -137,7 +139,7 @@ Even only with following subscriptions, ultrawork will work well (this project i
 - If you are eligible for pay-per-token, using kimi and gemini models won't cost you that much.
 | | Feature | What it does |
-| :---: | :--------------------------- | :---------------------------------------------------------------------------------------------------------------------------------- |
+| :---: | :------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | 🤖 | **Discipline Agents** | Sisyphus orchestrates Hephaestus, Oracle, Librarian, Explore. A full AI dev team in parallel. |
 | ⚡ | **`ultrawork` / `ulw`** | One word. Every agent activates. Doesn't stop until done. |
 | 🚪 | **[IntentGate](https://factory.ai/news/terminal-bench)** | Analyzes true user intent before classifying or acting. No more literal misinterpretations. |


@@ -195,7 +195,7 @@ GitHub Copilot is supported as a **fallback provider** when native providers are
 When GitHub Copilot is the best available provider, oh-my-opencode uses these model assignments:
 | Agent | Model |
-| ------------- | -------------------------------- |
+| ------------- | --------------------------------------------------------- |
 | **Sisyphus** | `github-copilot/claude-opus-4-6` |
 | **Oracle** | `github-copilot/gpt-5.2` |
 | **Explore** | `opencode/gpt-5-nano` |
@@ -210,7 +210,7 @@ Z.ai Coding Plan provides access to GLM-4.7 models. When enabled, the **Libraria
 If Z.ai is the only provider available, all agents will use GLM models:
 | Agent | Model |
-| ------------- | -------------------------------- |
+| ------------- | ------------------------------- |
 | **Sisyphus** | `zai-coding-plan/glm-4.7` |
 | **Oracle** | `zai-coding-plan/glm-4.7` |
 | **Explore** | `zai-coding-plan/glm-4.7-flash` |
@@ -223,7 +223,7 @@ OpenCode Zen provides access to `opencode/` prefixed models including `opencode/
 When OpenCode Zen is the best available provider (no native or Copilot), these models are used:
 | Agent | Model |
-| ------------- | -------------------------------- |
+| ------------- | -------------------------- |
 | **Sisyphus** | `opencode/claude-opus-4-6` |
 | **Oracle** | `opencode/gpt-5.2` |
 | **Explore** | `opencode/gpt-5-nano` |
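The provider fallback order the guide describes (native providers first, then GitHub Copilot, then OpenCode Zen) boils down to a priority lookup. A minimal sketch; the provider names and the `bestProvider` helper are illustrative assumptions, not the actual oh-my-opencode implementation:

```typescript
// Illustrative sketch only; not the real oh-my-opencode code or schema.
// Provider identifiers are assumptions based on the guide above.
const providerPriority = ["native", "github-copilot", "opencode-zen"];

// Pick the highest-priority provider the user has configured.
function bestProvider(configured: Set<string>): string | undefined {
  return providerPriority.find((p) => configured.has(p));
}

// Example: no native provider, so Copilot wins over OpenCode Zen.
console.log(bestProvider(new Set(["github-copilot", "opencode-zen"]))); // → "github-copilot"
```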
@@ -264,7 +264,7 @@ Not all models behave the same way. Understanding which models are "similar" hel
 **Claude-like Models** (instruction-following, structured output):
 | Model | Provider(s) | Notes |
-|-------|-------------|-------|
+| ------------------------ | ----------------------------------- | ----------------------------------------------------------------------- |
 | **Claude Opus 4.6** | anthropic, github-copilot, opencode | Best overall. Default for Sisyphus. |
 | **Claude Sonnet 4.6** | anthropic, github-copilot, opencode | Faster, cheaper. Good balance. |
 | **Claude Haiku 4.5** | anthropic, opencode | Fast and cheap. Good for quick tasks. |
@@ -276,7 +276,7 @@ Not all models behave the same way. Understanding which models are "similar" hel
 **GPT Models** (explicit reasoning, principle-driven):
 | Model | Provider(s) | Notes |
-|-------|-------------|-------|
+| ----------------- | -------------------------------- | ------------------------------------------------- |
 | **GPT-5.3-codex** | openai, github-copilot, opencode | Deep coding powerhouse. Required for Hephaestus. |
 | **GPT-5.2** | openai, github-copilot, opencode | High intelligence. Default for Oracle. |
 | **GPT-5-Nano** | opencode | Ultra-cheap, fast. Good for simple utility tasks. |
@@ -284,7 +284,7 @@ Not all models behave the same way. Understanding which models are "similar" hel
 **Different-Behavior Models**:
 | Model | Provider(s) | Notes |
-|-------|-------------|-------|
+| --------------------- | -------------------------------- | ----------------------------------------------------------- |
 | **Gemini 3 Pro** | google, github-copilot, opencode | Excels at visual/frontend tasks. Different reasoning style. |
 | **Gemini 3 Flash** | google, github-copilot, opencode | Fast, good for doc search and light tasks. |
 | **MiniMax M2.5** | venice | Fast and smart. Good for utility tasks. |
@@ -293,7 +293,7 @@ Not all models behave the same way. Understanding which models are "similar" hel
 **Speed-Focused Models**:
 | Model | Provider(s) | Speed | Notes |
-|-------|-------------|-------|-------|
+| ----------------------- | ---------------------- | --------- | ----------------------------------------------------- |
 | **Grok Code Fast 1** | github-copilot, venice | Very fast | Optimized for code grep/search. Default for Explore. |
 | **Claude Haiku 4.5** | anthropic, opencode | Fast | Good balance of speed and intelligence. |
 | **MiniMax M2.5 (Free)** | opencode, venice | Fast | Smart for its speed class. |
@@ -306,7 +306,7 @@ Based on your subscriptions, here's how the agents were configured:
 **Claude-Optimized Agents** (prompts tuned for Claude-family models):
 | Agent | Role | Default Chain | What It Does |
-|-------|------|---------------|--------------|
+| ------------ | ---------------- | ----------------------------------------------- | ----------------------------------------------------------------------------------------- |
 | **Sisyphus** | Main ultraworker | Opus (max) → Kimi K2.5 → GLM 5 → Big Pickle | Primary coding agent. Orchestrates everything. **Never use GPT — no GPT prompt exists.** |
 | **Metis** | Plan review | Opus (max) → Kimi K2.5 → GPT-5.2 → Gemini 3 Pro | Reviews Prometheus plans for gaps. |
@@ -317,14 +317,14 @@ These agents detect your model family at runtime and switch to the appropriate p
 Priority: **Claude > GPT > Claude-like models**
 | Agent | Role | Default Chain | GPT Prompt? |
-|-------|------|---------------|-------------|
+| -------------- | ----------------- | ---------------------------------------------------------- | ---------------------------------------------------------------- |
 | **Prometheus** | Strategic planner | Opus (max) → **GPT-5.2 (high)** → Kimi K2.5 → Gemini 3 Pro | Yes — XML-tagged, principle-driven (~300 lines vs ~1,100 Claude) |
 | **Atlas** | Todo orchestrator | **Kimi K2.5** → Sonnet → GPT-5.2 | Yes — GPT-optimized todo management |
 **GPT-Native Agents** (built for GPT, don't override to Claude):
 | Agent | Role | Default Chain | Notes |
-|-------|------|---------------|-------|
+| -------------- | ---------------------- | -------------------------------------- | ------------------------------------------------------ |
 | **Hephaestus** | Deep autonomous worker | GPT-5.3-codex (medium) only | "Codex on steroids." No fallback. Requires GPT access. |
 | **Oracle** | Architecture/debugging | GPT-5.2 (high) → Gemini 3 Pro → Opus | High-IQ strategic backup. GPT preferred. |
 | **Momus** | High-accuracy reviewer | GPT-5.2 (medium) → Opus → Gemini 3 Pro | Verification agent. GPT preferred. |
@@ -334,7 +334,7 @@ Priority: **Claude > GPT > Claude-like models**
 These agents do search, grep, and retrieval. They intentionally use fast, cheap models. **Don't "upgrade" them to Opus — it wastes tokens on simple tasks.**
 | Agent | Role | Default Chain | Design Rationale |
-|-------|------|---------------|------------------|
+| --------------------- | ------------------ | ---------------------------------------------------------------------- | --------------------------------------------------------------- |
 | **Explore** | Fast codebase grep | MiniMax M2.5 Free → Grok Code Fast → MiniMax M2.5 → Haiku → GPT-5-Nano | Speed is everything. Grok is blazing fast for grep. |
 | **Librarian** | Docs/code search | MiniMax M2.5 Free → Gemini Flash → Big Pickle | Entirely free-tier. Doc retrieval doesn't need deep reasoning. |
 | **Multimodal Looker** | Vision/screenshots | Kimi K2.5 → Kimi Free → Gemini Flash → GPT-5.2 → GLM-4.6v | Kimi excels at multimodal understanding. |
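Every default chain in these tables follows the same pattern: try each model in priority order and take the first one whose provider is available. A minimal sketch of that resolution step; the model identifiers and the `resolveModel` helper are hypothetical illustrations, not the actual config keys or code:

```typescript
// Illustrative sketch only; model IDs are hypothetical, not real config keys.
const exploreChain = [
  "opencode/minimax-m2.5-free",
  "github-copilot/grok-code-fast-1",
  "venice/minimax-m2.5",
  "anthropic/claude-haiku-4.5",
  "opencode/gpt-5-nano",
];

// Walk the chain in priority order; return the first available model.
function resolveModel(chain: string[], available: Set<string>): string | undefined {
  return chain.find((model) => available.has(model));
}

// Example: only Anthropic and OpenCode Zen are reachable, so the
// first three entries are skipped and Haiku is selected.
const available = new Set(["anthropic/claude-haiku-4.5", "opencode/gpt-5-nano"]);
console.log(resolveModel(exploreChain, available)); // → "anthropic/claude-haiku-4.5"
```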
@@ -446,6 +446,6 @@ Tell the user of following:
 3. **Need precision?** Press **Tab** to enter Prometheus (Planner) mode, create a work plan through an interview process, then run `/start-work` to execute it with full orchestration.
-4. You wanna know more? Checkout: [Overview Guide](./overview.md)
+4. Want your own agent catalog setup? I can read the [docs](docs/guide/agent-model-matching.md) and set it up for you after a short interview!
 That's it. The agent will figure out the rest and handle everything automatically.