docs: add future-betting manifesto to all READMEs and overview

Add 'The Bet' section to all 4 language READMEs (en, ko, ja, zh-cn):
- Models getting cheaper every month
- Models getting smarter every month
- No single provider will dominate the future
- We leverage ALL models, not just Claude
- Architecture gets more valuable as models specialize
- We're building for the open multi-model future

Also update overview.md to move 'Better Than Pure Codex' into
Hephaestus section and add 'Better Than Pure Claude Code' section
with fundamental multi-model advantage explanation.
YeonGyu-Kim 2026-02-21 04:17:33 +09:00
parent 5ae9de0e8e
commit 86e3c7d199
5 changed files with 94 additions and 8 deletions


@@ -85,6 +85,24 @@ Tired of being jerked around by Claude Code, Codex, and a parade of OSS models?
Install OmO and type `ultrawork`. That's it.
## The Future We're Betting On
We hold clear convictions about the future of AI.
**Models keep getting cheaper.** Last year's flagship model is free to use today. The steep decline in inference cost is an irreversible trend.
**Models keep getting smarter.** GPT-4-level performance is now table stakes, and new capabilities keep being unlocked. The rise in intelligence is just as irreversible.
**No single provider will dominate the future.** OpenAI, Anthropic, Google, Chinese models, open-source models. Each has its own strengths, and every one has become an indispensable tool.
**We leverage every model.** Instead of depending on a single provider, we pick, combine, and orchestrate the best model for each task. That is the only way to unlock real performance.
**The more models specialize, the more valuable this architecture becomes.** Models built for coding, models strong at reasoning, models that excel at creativity. Only by putting each one where it fits best does "AI as a team" become real.
**We are betting on that future.**
Oh My OpenCode is infrastructure for the multi-model era. Free yourself from provider lock-in and always reach for the best tool. That is our promise.
## Installation
### For Humans


@@ -85,6 +85,26 @@ Wandering between Claude Code, Codex, and every OSS model out there? …
Install OmO. Type `ultrawork`. Done.
## The Future We See
Models keep getting cheaper.
Models keep getting smarter.
The era of Anthropic calling all the shots is over. Neither OpenAI, nor Google, nor any single provider will swallow the whole market.
We are betting on that future.
We use every model. If Kimi shines at frontend work, we use it. If GPT-5.3 Codex is great at deep research, we use it. If Claude's logic is sharpest, we use it. There is no provider loyalty. Only results.
The more models specialize, the more valuable this architecture becomes. The era of handing everything to one giant model is ending, and specialist models with distinct strengths are emerging. We are riding that wave.
**We made our bet: orchestration across every model will become the standard.**
OmO builds that future, here and now.
---
## Installation
### For Humans


@@ -88,6 +88,25 @@ We did the work. Tested everything. Kept what actually shipped.
Install OmO. Type `ultrawork`. Done.
## The Bet
We're betting on a future where models are cheaper, smarter, and no single provider dominates.
That future is arriving faster than anyone expected.
**Models get cheaper every month.** What cost $20 last year costs $2 now. The race to the bottom is real, and we're here for it.
**Models get smarter every month.** Today's cheap model beats last year's flagship. The gap between frontier and affordable is collapsing.
**No single provider wins.** Anthropic, OpenAI, Google, DeepSeek, Alibaba - they're all building incredible models. The future isn't locked to one provider. It's a multi-model world.
We built for that future.
Sisyphus doesn't care if he's running on Claude, Kimi, or GLM. He picks the right model for the task. Frontend work? Kimi K2.5. Deep reasoning? Claude Opus. Code generation? GPT-5.3 Codex. The architecture treats models as interchangeable commodities, not sacred cows.
This gets more valuable as models specialize. When Gemini dominates visual tasks and Qwen crushes long-context, we plug them in. Zero rewrites. Zero lock-in.
Anthropic wants you locked in. We're building for the open market.
The harness that wins isn't the one married to a single model. It's the one that rides them all.
## Installation
### For Humans


@@ -85,6 +85,25 @@
Install OmO. Type `ultrawork`. Done.
---
## The Future We're Betting On
Models are getting cheaper and cheaper.
Models are getting smarter and smarter.
No single provider will monopolize the future.
We leverage all models.
As models specialize, this architecture only grows more valuable. Claude is good at some things, Kimi at others, and GPT-5.3 Codex stands apart on code. The future is not about betting on one winner; it is about letting the right model do the right job.
We are betting on that future.
This is not a framework. It is a reimagining of AI infrastructure.
---
## Installation
### For Humans


@@ -42,19 +42,29 @@ Hephaestus runs on GPT-5.3 Codex. Give him a goal, not a recipe. He explores the
Use Hephaestus when you need deep architectural reasoning, complex debugging across many files, or cross-domain knowledge synthesis. Switch to him explicitly when the work demands GPT-5.3 Codex's particular strengths.
**Why this beats vanilla Codex CLI:**
**Multi-model orchestration.** Pure Codex is single-model. OmO routes different tasks to different models automatically. GPT for deep reasoning. Gemini for frontend. Haiku for speed. The right brain for the right job.
**Background agents.** Fire 5+ agents in parallel. Something Codex simply cannot do. While one agent writes code, another researches patterns, another checks documentation. Like a real dev team.
**Category system.** Tasks are routed by intent, not model name. `visual-engineering` gets Gemini. `ultrabrain` gets GPT-5.3 Codex. `quick` gets Haiku. No manual juggling.
**Accumulated wisdom.** Subagents learn from previous results. Conventions discovered in task 1 are passed to task 5. Mistakes made early aren't repeated. The system gets smarter as it works.
---
## Better Than Pure Codex
## Better Than Pure Claude Code
Sisyphus + Hephaestus outperform vanilla Codex CLI. Here's why:
Claude Code is good. But it's a single agent running a single model doing everything alone.
**Multi-model orchestration.** Pure Codex is single-model. Oh My OpenCode routes different tasks to different models automatically. GPT for deep reasoning. Gemini for frontend. Haiku for speed tasks. The right brain for the right job.
Oh My OpenCode turns that into a coordinated team:
**Background agents.** Fire 5+ agents in parallel. Something Codex simply cannot do. While one agent writes code, another researches patterns, another checks documentation. Like a real dev team.
**Parallel execution.** Claude Code processes one thing at a time. OmO fires background agents in parallel — research, implementation, and verification happening simultaneously. Like having 5 engineers instead of 1.
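The fan-out pattern can be sketched with plain Python concurrency. This is only an illustration of the shape of the idea: the `agent` function and task names are hypothetical, and OmO's real background agents are model-backed processes, not threads.

```python
from concurrent.futures import ThreadPoolExecutor

def agent(task: str) -> str:
    """Stand-in for a background agent; the real thing would call a model."""
    return f"{task}: done"

tasks = ["research patterns", "write the code", "check documentation"]
# All three "agents" run concurrently instead of one after another;
# map() still returns results in submission order.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(agent, tasks))
```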
**Hash-anchored edits.** Claude Code's edit tool fails when the model can't reproduce lines exactly. OmO's `LINE#ID` content hashing validates every edit before applying. Grok Code Fast 1 went from 6.7% to 68.3% success rate just from this change.
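The commit doesn't include the implementation, but the idea behind a `LINE#ID` anchor can be sketched as follows. Everything here (function names, the 6-character hash length) is an illustrative assumption, not OmO's actual code; the point is that an edit carries a content hash of the line it targets and is rejected when the hash no longer matches.

```python
import hashlib

def line_id(text: str) -> str:
    """Short content hash identifying a line's exact text."""
    return hashlib.sha256(text.encode()).hexdigest()[:6]

def apply_edit(lines: list[str], line_no: int, anchor: str, replacement: str) -> None:
    """Replace a line only if its anchor hash still matches.

    A stale or misremembered anchor raises instead of silently
    corrupting the file.
    """
    current = lines[line_no]
    if line_id(current) != anchor:
        raise ValueError(f"stale anchor L{line_no}#{anchor}, file has #{line_id(current)}")
    lines[line_no] = replacement

src = ["def greet():", "    print('hi')"]
anchor = line_id(src[1])  # hash of the exact line we intend to edit
apply_edit(src, 1, anchor, "    print('hello')")
```

The validation step is what lifts weak models' edit success rate: they no longer need to reproduce the target line verbatim, only cite its anchor.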
**Intent Gate.** Claude Code takes your prompt and runs. OmO classifies your true intent first — research, implementation, investigation, fix — then routes accordingly. Fewer misinterpretations, better results.
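As a toy illustration of intent-first routing (the real gate presumably classifies with a model; these keyword rules and bucket names are made up for the sketch):

```python
def classify_intent(prompt: str) -> str:
    """Map a prompt to a coarse intent bucket before any work starts.

    Keyword rules stand in for whatever model-based classification
    the actual gate uses.
    """
    p = prompt.lower()
    if any(w in p for w in ("investigate", "why is", "root cause")):
        return "investigation"
    if any(w in p for w in ("fix", "bug", "broken")):
        return "fix"
    if any(w in p for w in ("research", "compare", "survey")):
        return "research"
    return "implementation"

print(classify_intent("fix the broken login flow"))  # fix
```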
**LSP + AST tools.** Workspace-level rename, go-to-definition, find-references, pre-build diagnostics, AST-aware code rewrites. IDE precision that vanilla Claude Code doesn't have.
**Skills with embedded MCPs.** Each skill brings its own MCP servers, scoped to the task. Context window stays clean instead of bloating with every tool.
**Discipline enforcement.** Todo enforcer yanks idle agents back to work. Comment checker strips AI slop. Ralph Loop keeps going until 100% done. The system doesn't let the agent slack off.
**Category system.** Tasks are routed by intent, not model name. `category="visual-engineering"` gets Gemini. `category="ultrabrain"` gets GPT-5.3 Codex. `category="quick"` gets Haiku. No manual juggling.
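The category routing described above amounts to a lookup from task intent to model. A minimal sketch, with the category and model names taken from the text and everything else (table name, fallback model) hypothetical:

```python
# Hypothetical routing table: task category -> model.
ROUTES = {
    "visual-engineering": "gemini",
    "ultrabrain": "gpt-5.3-codex",
    "quick": "haiku",
}

def route(category: str) -> str:
    """Resolve a task category to a model, with a default fallback."""
    return ROUTES.get(category, "claude")

print(route("quick"))  # haiku
```

The caller names an intent, never a model, so swapping in a new specialist model is a one-line table change.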
**Accumulated wisdom.** Subagents learn from previous results. Conventions discovered in task 1 are passed to task 5. Mistakes made early aren't repeated. The system gets smarter as it works.
**The fundamental advantage.** Models have different temperaments. Claude thinks deeply. GPT reasons architecturally. Gemini visualizes. Haiku moves fast. Single-model tools force you to pick one personality for all tasks. Oh My OpenCode leverages them all, routing by task type. This isn't a temporary hack — it's the only architecture that makes sense as models specialize further. The gap between multi-model orchestration and single-model limitation widens every month. We're betting on that future.
---
@@ -64,7 +74,7 @@ Before acting on any request, Sisyphus classifies your true intent.
Are you asking for research? Implementation? Investigation? A fix? The Intent Gate figures out what you actually want, not just the literal words you typed. This means the agent understands context, nuance, and the real goal behind your request.
Regular Codex doesn't have this. It takes your prompt and runs. Oh My OpenCode thinks first, then acts.
Claude Code doesn't have this. It takes your prompt and runs. Oh My OpenCode thinks first, then acts.
---