Compare commits


No commits in common. "main" and "v1.4.1" have entirely different histories.
main ... v1.4.1

1869 changed files with 14657 additions and 350520 deletions

View File

@ -1,21 +0,0 @@
{
  "name": "ecc",
  "interface": {
    "displayName": "Everything Claude Code"
  },
  "plugins": [
    {
      "name": "ecc",
      "version": "1.10.0",
      "source": {
        "source": "local",
        "path": "../.."
      },
      "policy": {
        "installation": "AVAILABLE",
        "authentication": "ON_INSTALL"
      },
      "category": "Productivity"
    }
  ]
}

View File

@ -1,153 +0,0 @@
---
name: agent-introspection-debugging
description: Structured self-debugging workflow for AI agent failures using capture, diagnosis, contained recovery, and introspection reports.
origin: ECC
---
# Agent Introspection Debugging
Use this skill when an agent run is failing repeatedly, consuming tokens without progress, looping on the same tools, or drifting away from the intended task.
This is a workflow skill, not a hidden runtime. It teaches the agent to debug itself systematically before escalating to a human.
## When to Activate
- Maximum tool call / loop-limit failures
- Repeated retries with no forward progress
- Context growth or prompt drift that starts degrading output quality
- File-system or environment state mismatch between expectation and reality
- Tool failures that are likely recoverable with diagnosis and a smaller corrective action
## Scope Boundaries
Activate this skill for:
- capturing failure state before retrying blindly
- diagnosing common agent-specific failure patterns
- applying contained recovery actions
- producing a structured human-readable debug report
Do not use this skill as the primary source for:
- feature verification after code changes; use `verification-loop`
- framework-specific debugging when a narrower ECC skill already exists
- runtime promises the current harness cannot enforce automatically
## Four-Phase Loop
### Phase 1: Failure Capture
Before trying to recover, record the failure precisely.
Capture:
- error type, message, and stack trace when available
- last meaningful tool call sequence
- what the agent was trying to do
- current context pressure: repeated prompts, oversized pasted logs, duplicated plans, or runaway notes
- current environment assumptions: cwd, branch, relevant service state, expected files
Minimum capture template:
```markdown
## Failure Capture
- Session / task:
- Goal in progress:
- Error:
- Last successful step:
- Last failed tool / command:
- Repeated pattern seen:
- Environment assumptions to verify:
```
### Phase 2: Root-Cause Diagnosis
Match the failure to a known pattern before changing anything.
| Pattern | Likely Cause | Check |
| --- | --- | --- |
| Maximum tool calls / repeated same command | loop or no-exit observer path | inspect the last N tool calls for repetition |
| Context overflow / degraded reasoning | unbounded notes, repeated plans, oversized logs | inspect recent context for duplication and low-signal bulk |
| `ECONNREFUSED` / timeout | service unavailable or wrong port | verify service health, URL, and port assumptions |
| `429` / quota exhaustion | retry storm or missing backoff | count repeated calls and inspect retry spacing |
| file missing after write / stale diff | race, wrong cwd, or branch drift | re-check path, cwd, git status, and actual file existence |
| tests still failing after “fix” | wrong hypothesis | isolate the exact failing test and re-derive the bug |
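For the first row above, the check can be mechanical: scan the recent tool-call log for identical invocations before reasoning about intent. A minimal TypeScript sketch, assuming the harness exposes recent calls as `{ tool, input }` records (these names are illustrative, not a real ECC runtime API):
```typescript
interface ToolCall {
  tool: string;
  input: string; // serialized arguments
}

// Flag a likely loop when the same tool + input pair dominates the last N calls.
function looksLikeLoop(recentCalls: ToolCall[], windowSize = 10, threshold = 3): boolean {
  const window = recentCalls.slice(-windowSize);
  const counts = new Map<string, number>();
  for (const call of window) {
    const key = `${call.tool}:${call.input}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return Math.max(0, ...counts.values()) >= threshold;
}
```
If the check fires, stop retrying and carry a concrete loop hypothesis into Phase 3.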
Diagnosis questions:
- is this a logic failure, state failure, environment failure, or policy failure?
- did the agent lose the real objective and start optimizing the wrong subtask?
- is the failure deterministic or transient?
- what is the smallest reversible action that would validate the diagnosis?
### Phase 3: Contained Recovery
Recover with the smallest action that changes the diagnosis surface.
Safe recovery actions:
- stop repeated retries and restate the hypothesis
- trim low-signal context and keep only the active goal, blockers, and evidence
- re-check the actual filesystem / branch / process state
- narrow the task to one failing command, one file, or one test
- switch from speculative reasoning to direct observation
- escalate to a human when the failure is high-risk or externally blocked
Do not claim unsupported auto-healing actions like “reset agent state” or “update harness config” unless you are actually doing them through real tools in the current environment.
Contained recovery checklist:
```markdown
## Recovery Action
- Diagnosis chosen:
- Smallest action taken:
- Why this is safe:
- What evidence would prove the fix worked:
```
### Phase 4: Introspection Report
End with a report that makes the recovery legible to the next agent or human.
```markdown
## Agent Self-Debug Report
- Session / task:
- Failure:
- Root cause:
- Recovery action:
- Result: success | partial | blocked
- Token / time burn risk:
- Follow-up needed:
- Preventive change to encode later:
```
## Recovery Heuristics
Prefer these interventions in order:
1. Restate the real objective in one sentence.
2. Verify the world state instead of trusting memory.
3. Shrink the failing scope.
4. Run one discriminating check.
5. Only then retry.
Bad pattern:
- retrying the same action three times with slightly different wording
Good pattern:
- capture failure
- classify the pattern
- run one direct check
- change the plan only if the check supports it
## Integration with ECC
- Use `verification-loop` after recovery if code was changed.
- Use `continuous-learning-v2` when the failure pattern is worth turning into an instinct or later skill.
- Use `council` when the issue is not technical failure but decision ambiguity.
- Use `workspace-surface-audit` if the failure came from conflicting local state or repo drift.
## Output Standard
When this skill is active, do not end with “I fixed it” alone.
Always provide:
- the failure pattern
- the root-cause hypothesis
- the recovery action
- the evidence that the situation is now better or still blocked

View File

@ -1,215 +0,0 @@
---
name: agent-sort
description: Build an evidence-backed ECC install plan for a specific repo by sorting skills, commands, rules, hooks, and extras into DAILY vs LIBRARY buckets using parallel repo-aware review passes. Use when ECC should be trimmed to what a project actually needs instead of loading the full bundle.
origin: ECC
---
# Agent Sort
Use this skill when a repo needs a project-specific ECC surface instead of the default full install.
The goal is not to guess what "feels useful." The goal is to classify ECC components with evidence from the actual codebase.
## When to Use
- A project only needs a subset of ECC and full installs are too noisy
- The repo stack is clear, but nobody wants to hand-curate skills one by one
- A team wants a repeatable install decision backed by grep evidence instead of opinion
- You need to separate always-loaded daily workflow surfaces from searchable library/reference surfaces
- A repo has drifted into the wrong language, rule, or hook set and needs cleanup
## Non-Negotiable Rules
- Use the current repository as the source of truth, not generic preferences
- Every DAILY decision must cite concrete repo evidence
- LIBRARY does not mean "delete"; it means "keep accessible without loading by default"
- Do not install hooks, rules, or scripts that the current repo cannot use
- Prefer ECC-native surfaces; do not introduce a second install system
## Outputs
Produce these artifacts in order:
1. DAILY inventory
2. LIBRARY inventory
3. install plan
4. verification report
5. optional `skill-library` router if the project wants one
## Classification Model
Use two buckets only:
- `DAILY`
  - should load every session for this repo
  - strongly matched to the repo's language, framework, workflow, or operator surface
- `LIBRARY`
  - useful to retain, but not worth loading by default
  - should remain reachable through search, router skill, or selective manual use
## Evidence Sources
Use repo-local evidence before making any classification:
- file extensions
- package managers and lockfiles
- framework configs
- CI and hook configs
- build/test scripts
- imports and dependency manifests
- repo docs that explicitly describe the stack
Useful commands include:
```bash
rg --files
rg -n "typescript|react|next|supabase|django|spring|flutter|swift"
cat package.json
cat pyproject.toml
cat Cargo.toml
cat pubspec.yaml
cat go.mod
```
## Parallel Review Passes
If parallel subagents are available, split the review into these passes:
1. Agents
   - classify `agents/*`
2. Skills
   - classify `skills/*`
3. Commands
   - classify `commands/*`
4. Rules
   - classify `rules/*`
5. Hooks and scripts
   - classify hook surfaces, MCP health checks, helper scripts, and OS compatibility
6. Extras
   - classify contexts, examples, MCP configs, templates, and guidance docs
If subagents are not available, run the same passes sequentially.
## Core Workflow
### 1. Read the repo
Establish the real stack before classifying anything:
- languages in use
- frameworks in use
- primary package manager
- test stack
- lint/format stack
- deployment/runtime surface
- operator integrations already present
### 2. Build the evidence table
For every candidate surface, record:
- component path
- component type
- proposed bucket
- repo evidence
- short justification
Use this format:
```text
skills/frontend-patterns | skill | DAILY | 84 .tsx files, next.config.ts present | core frontend stack
skills/django-patterns | skill | LIBRARY | no .py files, no pyproject.toml | not active in this repo
rules/typescript/* | rules | DAILY | package.json + tsconfig.json | active TS repo
rules/python/* | rules | LIBRARY | zero Python source files | keep accessible only
```
### 3. Decide DAILY vs LIBRARY
Promote to `DAILY` when:
- the repo clearly uses the matching stack
- the component is general enough to help every session
- the repo already depends on the corresponding runtime or workflow
Demote to `LIBRARY` when:
- the component is off-stack
- the repo might need it later, but not every day
- it adds context overhead without immediate relevance
### 4. Build the install plan
Translate the classification into action:
- DAILY skills -> install or keep in `.claude/skills/`
- DAILY commands -> keep as explicit shims only if still useful
- DAILY rules -> install only matching language sets
- DAILY hooks/scripts -> keep only compatible ones
- LIBRARY surfaces -> keep accessible through search or `skill-library`
If the repo already uses selective installs, update that plan instead of creating another system.
### 5. Create the optional library router
If the project wants a searchable library surface, create:
- `.claude/skills/skill-library/SKILL.md`
That router should contain:
- a short explanation of DAILY vs LIBRARY
- grouped trigger keywords
- where the library references live
Do not duplicate every skill body inside the router.
### 6. Verify the result
After the plan is applied, verify:
- every DAILY file exists where expected
- stale language rules were not left active
- incompatible hooks were not installed
- the resulting install actually matches the repo stack
Return a compact report with:
- DAILY count
- LIBRARY count
- removed stale surfaces
- open questions
## Handoffs
If the next step is interactive installation or repair, hand off to:
- `configure-ecc`
If the next step is overlap cleanup or catalog review, hand off to:
- `skill-stocktake`
If the next step is broader context trimming, hand off to:
- `strategic-compact`
## Output Format
Return the result in this order:
```text
STACK
- language/framework/runtime summary
DAILY
- always-loaded items with evidence
LIBRARY
- searchable/reference items with evidence
INSTALL PLAN
- what should be installed, removed, or routed
VERIFICATION
- checks run and remaining gaps
```

View File

@ -1,523 +0,0 @@
---
name: api-design
description: REST API design patterns including resource naming, status codes, pagination, filtering, error responses, versioning, and rate limiting for production APIs.
origin: ECC
---
# API Design Patterns
Conventions and best practices for designing consistent, developer-friendly REST APIs.
## When to Activate
- Designing new API endpoints
- Reviewing existing API contracts
- Adding pagination, filtering, or sorting
- Implementing error handling for APIs
- Planning API versioning strategy
- Building public or partner-facing APIs
## Resource Design
### URL Structure
```
# Resources are nouns, plural, lowercase, kebab-case
GET /api/v1/users
GET /api/v1/users/:id
POST /api/v1/users
PUT /api/v1/users/:id
PATCH /api/v1/users/:id
DELETE /api/v1/users/:id
# Sub-resources for relationships
GET /api/v1/users/:id/orders
POST /api/v1/users/:id/orders
# Actions that don't map to CRUD (use verbs sparingly)
POST /api/v1/orders/:id/cancel
POST /api/v1/auth/login
POST /api/v1/auth/refresh
```
### Naming Rules
```
# GOOD
/api/v1/team-members # kebab-case for multi-word resources
/api/v1/orders?status=active # query params for filtering
/api/v1/users/123/orders # nested resources for ownership
# BAD
/api/v1/getUsers # verb in URL
/api/v1/user # singular (use plural)
/api/v1/team_members # snake_case in URLs
/api/v1/users/123/getOrders # verb in nested resource
```
## HTTP Methods and Status Codes
### Method Semantics
| Method | Idempotent | Safe | Use For |
|--------|-----------|------|---------|
| GET | Yes | Yes | Retrieve resources |
| POST | No | No | Create resources, trigger actions |
| PUT | Yes | No | Full replacement of a resource |
| PATCH | No* | No | Partial update of a resource |
| DELETE | Yes | No | Remove a resource |
*PATCH can be made idempotent with a careful implementation.
### Status Code Reference
```
# Success
200 OK — GET, PUT, PATCH (with response body)
201 Created — POST (include Location header)
204 No Content — DELETE, PUT (no response body)
# Client Errors
400 Bad Request — Validation failure, malformed JSON
401 Unauthorized — Missing or invalid authentication
403 Forbidden — Authenticated but not authorized
404 Not Found — Resource doesn't exist
409 Conflict — Duplicate entry, state conflict
422 Unprocessable Entity — Semantically invalid (valid JSON, bad data)
429 Too Many Requests — Rate limit exceeded
# Server Errors
500 Internal Server Error — Unexpected failure (never expose details)
502 Bad Gateway — Upstream service failed
503 Service Unavailable — Temporary overload, include Retry-After
```
### Common Mistakes
```
# BAD: 200 for everything
{ "status": 200, "success": false, "error": "Not found" }
# GOOD: Use HTTP status codes semantically
HTTP/1.1 404 Not Found
{ "error": { "code": "not_found", "message": "User not found" } }
# BAD: 500 for validation errors
# GOOD: 400 or 422 with field-level details
# BAD: 200 for created resources
# GOOD: 201 with Location header
HTTP/1.1 201 Created
Location: /api/v1/users/abc-123
```
## Response Format
### Success Response
```json
{
  "data": {
    "id": "abc-123",
    "email": "alice@example.com",
    "name": "Alice",
    "created_at": "2025-01-15T10:30:00Z"
  }
}
```
### Collection Response (with Pagination)
```json
{
  "data": [
    { "id": "abc-123", "name": "Alice" },
    { "id": "def-456", "name": "Bob" }
  ],
  "meta": {
    "total": 142,
    "page": 1,
    "per_page": 20,
    "total_pages": 8
  },
  "links": {
    "self": "/api/v1/users?page=1&per_page=20",
    "next": "/api/v1/users?page=2&per_page=20",
    "last": "/api/v1/users?page=8&per_page=20"
  }
}
```
### Error Response
```json
{
  "error": {
    "code": "validation_error",
    "message": "Request validation failed",
    "details": [
      {
        "field": "email",
        "message": "Must be a valid email address",
        "code": "invalid_format"
      },
      {
        "field": "age",
        "message": "Must be between 0 and 150",
        "code": "out_of_range"
      }
    ]
  }
}
```
### Response Envelope Variants
```typescript
// Option A: Envelope with data wrapper (recommended for public APIs)
interface ApiResponse<T> {
data: T;
meta?: PaginationMeta;
links?: PaginationLinks;
}
interface ApiError {
error: {
code: string;
message: string;
details?: FieldError[];
};
}
// Option B: Flat response (simpler, common for internal APIs)
// Success: just return the resource directly
// Error: return error object
// Distinguish by HTTP status code
```
## Pagination
### Offset-Based (Simple)
```
GET /api/v1/users?page=2&per_page=20
# Implementation
SELECT * FROM users
ORDER BY created_at DESC
LIMIT 20 OFFSET 20;
```
**Pros:** Easy to implement, supports "jump to page N"
**Cons:** Slow on large offsets (OFFSET 100000), inconsistent with concurrent inserts
### Cursor-Based (Scalable)
```
GET /api/v1/users?cursor=eyJpZCI6MTIzfQ&limit=20
# Implementation
SELECT * FROM users
WHERE id > :cursor_id
ORDER BY id ASC
LIMIT 21; -- fetch one extra to determine has_next
```
```json
{
  "data": [...],
  "meta": {
    "has_next": true,
    "next_cursor": "eyJpZCI6MTQzfQ"
  }
}
```
**Pros:** Consistent performance regardless of position, stable with concurrent inserts
**Cons:** Cannot jump to arbitrary page, cursor is opaque
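The cursor itself is usually just an encoded pointer to the last row returned. A minimal sketch of an opaque cursor codec in TypeScript, assuming the sort key is a numeric `id` (field names are illustrative):
```typescript
// Encode the last-seen id as an opaque, URL-safe cursor.
function encodeCursor(lastId: number): string {
  return Buffer.from(JSON.stringify({ id: lastId })).toString("base64url");
}

// Decode a cursor back into the id to resume from; returns null for malformed input.
function decodeCursor(cursor: string): number | null {
  try {
    const parsed = JSON.parse(Buffer.from(cursor, "base64url").toString("utf8"));
    return typeof parsed.id === "number" ? parsed.id : null;
  } catch {
    return null;
  }
}
```
Treating a malformed cursor as invalid input (400) rather than crashing keeps the endpoint stable when clients tamper with the value.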
### When to Use Which
| Use Case | Pagination Type |
|----------|----------------|
| Admin dashboards, small datasets (<10K) | Offset |
| Infinite scroll, feeds, large datasets | Cursor |
| Public APIs | Cursor (default) with offset (optional) |
| Search results | Offset (users expect page numbers) |
## Filtering, Sorting, and Search
### Filtering
```
# Simple equality
GET /api/v1/orders?status=active&customer_id=abc-123
# Comparison operators (use bracket notation)
GET /api/v1/products?price[gte]=10&price[lte]=100
GET /api/v1/orders?created_at[after]=2025-01-01
# Multiple values (comma-separated)
GET /api/v1/products?category=electronics,clothing
# Nested fields (dot notation)
GET /api/v1/orders?customer.country=US
```
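Bracket operators arrive as literal query keys such as `price[gte]`, so the server needs a small parsing step before building the database query. A rough sketch, assuming query parameters are already available as a string map (not tied to any particular framework):
```typescript
interface FilterClause {
  field: string;
  operator: string; // "eq", "gte", "lte", "after", ...
  value: string;
}

// Turn { "status": "active", "price[gte]": "10" } into structured filter clauses.
function parseFilters(query: Record<string, string>): FilterClause[] {
  return Object.entries(query).map(([key, value]) => {
    const match = key.match(/^(.+)\[(\w+)\]$/);
    return match
      ? { field: match[1], operator: match[2], value }
      : { field: key, operator: "eq", value };
  });
}
```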
### Sorting
```
# Single field (prefix - for descending)
GET /api/v1/products?sort=-created_at
# Multiple fields (comma-separated)
GET /api/v1/products?sort=-featured,price,-created_at
```
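The `-` prefix maps directly to sort direction. A small parsing sketch (illustrative, framework-agnostic):
```typescript
// Parse "?sort=-featured,price" into ordered sort clauses.
function parseSort(sortParam: string): { field: string; direction: "asc" | "desc" }[] {
  return sortParam
    .split(",")
    .filter(Boolean)
    .map((part) =>
      part.startsWith("-")
        ? { field: part.slice(1), direction: "desc" as const }
        : { field: part, direction: "asc" as const },
    );
}
```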
### Full-Text Search
```
# Search query parameter
GET /api/v1/products?q=wireless+headphones
# Field-specific search
GET /api/v1/users?email=alice
```
### Sparse Fieldsets
```
# Return only specified fields (reduces payload)
GET /api/v1/users?fields=id,name,email
GET /api/v1/orders?fields=id,total,status&include=customer.name
```
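On the server, sparse fieldsets usually reduce to filtering the serialized resource before returning it. A minimal sketch (the helper name is illustrative):
```typescript
// Keep only the fields listed in ?fields=id,name,email.
function pickFields<T extends Record<string, unknown>>(
  resource: T,
  fieldsParam?: string,
): Partial<T> {
  if (!fieldsParam) return resource;
  const wanted = new Set(fieldsParam.split(","));
  return Object.fromEntries(
    Object.entries(resource).filter(([key]) => wanted.has(key)),
  ) as Partial<T>;
}
```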
## Authentication and Authorization
### Token-Based Auth
```
# Bearer token in Authorization header
GET /api/v1/users
Authorization: Bearer eyJhbGciOiJIUzI1NiIs...
# API key (for server-to-server)
GET /api/v1/data
X-API-Key: sk_live_abc123
```
### Authorization Patterns
```typescript
// Resource-level: check ownership
app.get("/api/v1/orders/:id", async (req, res) => {
const order = await Order.findById(req.params.id);
if (!order) return res.status(404).json({ error: { code: "not_found" } });
if (order.userId !== req.user.id) return res.status(403).json({ error: { code: "forbidden" } });
return res.json({ data: order });
});
// Role-based: check permissions
app.delete("/api/v1/users/:id", requireRole("admin"), async (req, res) => {
await User.delete(req.params.id);
return res.status(204).send();
});
```
## Rate Limiting
### Headers
```
HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1640000000
# When exceeded
HTTP/1.1 429 Too Many Requests
Retry-After: 60
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Try again in 60 seconds."
  }
}
```
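Whichever limiter backs the endpoint (Redis counter, token bucket, API gateway), the headers are set the same way. A hedged TypeScript sketch using the Fetch-style `Response`; the `RateLimitResult` shape is an assumption, not a specific library:
```typescript
interface RateLimitResult {
  allowed: boolean;
  limit: number;
  remaining: number;
  resetAt: number; // unix seconds
}

// Attach standard rate-limit headers, short-circuiting with 429 when the limit is exceeded.
function respondWithRateLimit(result: RateLimitResult, body: BodyInit): Response {
  const headers = new Headers({
    "X-RateLimit-Limit": String(result.limit),
    "X-RateLimit-Remaining": String(result.remaining),
    "X-RateLimit-Reset": String(result.resetAt),
  });
  if (!result.allowed) {
    const retryAfter = Math.max(1, result.resetAt - Math.floor(Date.now() / 1000));
    headers.set("Retry-After", String(retryAfter));
    return new Response(
      JSON.stringify({ error: { code: "rate_limit_exceeded", message: "Rate limit exceeded." } }),
      { status: 429, headers },
    );
  }
  return new Response(body, { status: 200, headers });
}
```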
### Rate Limit Tiers
| Tier | Limit | Window | Use Case |
|------|-------|--------|----------|
| Anonymous | 30/min | Per IP | Public endpoints |
| Authenticated | 100/min | Per user | Standard API access |
| Premium | 1000/min | Per API key | Paid API plans |
| Internal | 10000/min | Per service | Service-to-service |
## Versioning
### URL Path Versioning (Recommended)
```
/api/v1/users
/api/v2/users
```
**Pros:** Explicit, easy to route, cacheable
**Cons:** URL changes between versions
### Header Versioning
```
GET /api/users
Accept: application/vnd.myapp.v2+json
```
**Pros:** Clean URLs
**Cons:** Harder to test, easy to forget
### Versioning Strategy
```
1. Start with /api/v1/ — don't version until you need to
2. Maintain at most 2 active versions (current + previous)
3. Deprecation timeline:
- Announce deprecation (6 months notice for public APIs)
- Add Sunset header: Sunset: Sat, 01 Jan 2026 00:00:00 GMT
- Return 410 Gone after sunset date
4. Non-breaking changes don't need a new version:
- Adding new fields to responses
- Adding new optional query parameters
- Adding new endpoints
5. Breaking changes require a new version:
- Removing or renaming fields
- Changing field types
- Changing URL structure
- Changing authentication method
```
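Deprecation signaling is easiest to apply in one shared helper rather than per endpoint. A rough sketch that adds the `Sunset` header for a deprecated version and returns `410 Gone` after the date (version names and dates are illustrative):
```typescript
const SUNSET_DATES: Record<string, string> = {
  v1: "Sat, 01 Jan 2026 00:00:00 GMT", // illustrative sunset date
};

// Return 410 after the sunset date; otherwise annotate the response with a Sunset header.
function applyDeprecationPolicy(version: string, response: Response): Response {
  const sunset = SUNSET_DATES[version];
  if (!sunset) return response;
  if (Date.now() > Date.parse(sunset)) {
    return new Response(
      JSON.stringify({ error: { code: "gone", message: `${version} has been retired.` } }),
      { status: 410, headers: { "Content-Type": "application/json" } },
    );
  }
  const headers = new Headers(response.headers);
  headers.set("Sunset", sunset);
  return new Response(response.body, { status: response.status, headers });
}
```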
## Implementation Patterns
### TypeScript (Next.js API Route)
```typescript
import { z } from "zod";
import { NextRequest, NextResponse } from "next/server";
const createUserSchema = z.object({
email: z.string().email(),
name: z.string().min(1).max(100),
});
export async function POST(req: NextRequest) {
const body = await req.json();
const parsed = createUserSchema.safeParse(body);
if (!parsed.success) {
return NextResponse.json({
error: {
code: "validation_error",
message: "Request validation failed",
details: parsed.error.issues.map(i => ({
field: i.path.join("."),
message: i.message,
code: i.code,
})),
},
}, { status: 422 });
}
const user = await createUser(parsed.data);
return NextResponse.json(
{ data: user },
{
status: 201,
headers: { Location: `/api/v1/users/${user.id}` },
},
);
}
```
### Python (Django REST Framework)
```python
from rest_framework import serializers, viewsets, status
from rest_framework.response import Response
class CreateUserSerializer(serializers.Serializer):
    email = serializers.EmailField()
    name = serializers.CharField(max_length=100)

class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ["id", "email", "name", "created_at"]

class UserViewSet(viewsets.ModelViewSet):
    serializer_class = UserSerializer
    permission_classes = [IsAuthenticated]

    def get_serializer_class(self):
        if self.action == "create":
            return CreateUserSerializer
        return UserSerializer

    def create(self, request):
        serializer = CreateUserSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        user = UserService.create(**serializer.validated_data)
        return Response(
            {"data": UserSerializer(user).data},
            status=status.HTTP_201_CREATED,
            headers={"Location": f"/api/v1/users/{user.id}"},
        )
```
### Go (net/http)
```go
func (h *UserHandler) CreateUser(w http.ResponseWriter, r *http.Request) {
var req CreateUserRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, http.StatusBadRequest, "invalid_json", "Invalid request body")
return
}
if err := req.Validate(); err != nil {
writeError(w, http.StatusUnprocessableEntity, "validation_error", err.Error())
return
}
user, err := h.service.Create(r.Context(), req)
if err != nil {
switch {
case errors.Is(err, domain.ErrEmailTaken):
writeError(w, http.StatusConflict, "email_taken", "Email already registered")
default:
writeError(w, http.StatusInternalServerError, "internal_error", "Internal error")
}
return
}
w.Header().Set("Location", fmt.Sprintf("/api/v1/users/%s", user.ID))
writeJSON(w, http.StatusCreated, map[string]any{"data": user})
}
```
## API Design Checklist
Before shipping a new endpoint:
- [ ] Resource URL follows naming conventions (plural, kebab-case, no verbs)
- [ ] Correct HTTP method used (GET for reads, POST for creates, etc.)
- [ ] Appropriate status codes returned (not 200 for everything)
- [ ] Input validated with schema (Zod, Pydantic, Bean Validation)
- [ ] Error responses follow standard format with codes and messages
- [ ] Pagination implemented for list endpoints (cursor or offset)
- [ ] Authentication required (or explicitly marked as public)
- [ ] Authorization checked (user can only access their own resources)
- [ ] Rate limiting configured
- [ ] Response does not leak internal details (stack traces, SQL errors)
- [ ] Consistent naming with existing endpoints (camelCase vs snake_case)
- [ ] Documented (OpenAPI/Swagger spec updated)

View File

@ -1,7 +0,0 @@
interface:
  display_name: "API Design"
  short_description: "REST API design patterns and best practices"
  brand_color: "#F97316"
  default_prompt: "Design REST API: resources, status codes, pagination"
policy:
  allow_implicit_invocation: true

View File

@ -1,79 +0,0 @@
---
name: article-writing
description: Write articles, guides, blog posts, tutorials, newsletter issues, and other long-form content in a distinctive voice derived from supplied examples or brand guidance. Use when the user wants polished written content longer than a paragraph, especially when voice consistency, structure, and credibility matter.
origin: ECC
---
# Article Writing
Write long-form content that sounds like an actual person with a point of view, not an LLM smoothing itself into paste.
## When to Activate
- drafting blog posts, essays, launch posts, guides, tutorials, or newsletter issues
- turning notes, transcripts, or research into polished articles
- matching an existing founder, operator, or brand voice from examples
- tightening structure, pacing, and evidence in already-written long-form copy
## Core Rules
1. Lead with the concrete thing: artifact, example, output, anecdote, number, screenshot, or code.
2. Explain after the example, not before.
3. Keep sentences tight unless the source voice is intentionally expansive.
4. Use proof instead of adjectives.
5. Never invent facts, credibility, or customer evidence.
## Voice Handling
If the user wants a specific voice, run `brand-voice` first and reuse its `VOICE PROFILE`.
Do not duplicate a second style-analysis pass here unless the user explicitly asks for one.
If no voice references are given, default to a sharp operator voice: concrete, unsentimental, useful.
## Banned Patterns
Delete and rewrite any of these:
- "In today's rapidly evolving landscape"
- "game-changer", "cutting-edge", "revolutionary"
- "here's why this matters" as a standalone bridge
- fake vulnerability arcs
- a closing question added only to juice engagement
- biography padding that does not move the argument
- generic AI throat-clearing that delays the point
## Writing Process
1. Clarify the audience and purpose.
2. Build a hard outline with one job per section.
3. Start sections with proof, artifact, conflict, or example.
4. Expand only where the next sentence earns space.
5. Cut anything that sounds templated, overexplained, or self-congratulatory.
## Structure Guidance
### Technical Guides
- open with what the reader gets
- use code, commands, screenshots, or concrete output in major sections
- end with actionable takeaways, not a soft recap
### Essays / Opinion
- start with tension, contradiction, or a specific observation
- keep one argument thread per section
- make opinions answer to evidence
### Newsletters
- keep the first screen doing real work
- do not front-load diary filler
- use section labels only when they improve scanability
## Quality Gate
Before delivering:
- factual claims are backed by provided sources
- generic AI transitions are gone
- the voice matches the supplied examples or the agreed `VOICE PROFILE`
- every section adds something new
- formatting matches the intended medium

View File

@ -1,7 +0,0 @@
interface:
  display_name: "Article Writing"
  short_description: "Write long-form content in a supplied voice without sounding templated"
  brand_color: "#B45309"
  default_prompt: "Draft a sharp long-form article from these notes and examples"
policy:
  allow_implicit_invocation: true

View File

@ -1,598 +0,0 @@
---
name: backend-patterns
description: Backend architecture patterns, API design, database optimization, and server-side best practices for Node.js, Express, and Next.js API routes.
origin: ECC
---
# Backend Development Patterns
Backend architecture patterns and best practices for scalable server-side applications.
## When to Activate
- Designing REST or GraphQL API endpoints
- Implementing repository, service, or controller layers
- Optimizing database queries (N+1, indexing, connection pooling)
- Adding caching (Redis, in-memory, HTTP cache headers)
- Setting up background jobs or async processing
- Structuring error handling and validation for APIs
- Building middleware (auth, logging, rate limiting)
## API Design Patterns
### RESTful API Structure
```typescript
// PASS: Resource-based URLs
GET /api/markets # List resources
GET /api/markets/:id # Get single resource
POST /api/markets # Create resource
PUT /api/markets/:id # Replace resource
PATCH /api/markets/:id # Update resource
DELETE /api/markets/:id # Delete resource
// PASS: Query parameters for filtering, sorting, pagination
GET /api/markets?status=active&sort=volume&limit=20&offset=0
```
### Repository Pattern
```typescript
// Abstract data access logic
interface MarketRepository {
findAll(filters?: MarketFilters): Promise<Market[]>
findById(id: string): Promise<Market | null>
create(data: CreateMarketDto): Promise<Market>
update(id: string, data: UpdateMarketDto): Promise<Market>
delete(id: string): Promise<void>
}
class SupabaseMarketRepository implements MarketRepository {
async findAll(filters?: MarketFilters): Promise<Market[]> {
let query = supabase.from('markets').select('*')
if (filters?.status) {
query = query.eq('status', filters.status)
}
if (filters?.limit) {
query = query.limit(filters.limit)
}
const { data, error } = await query
if (error) throw new Error(error.message)
return data
}
// Other methods...
}
```
### Service Layer Pattern
```typescript
// Business logic separated from data access
class MarketService {
constructor(private marketRepo: MarketRepository) {}
async searchMarkets(query: string, limit: number = 10): Promise<Market[]> {
// Business logic
const embedding = await generateEmbedding(query)
const results = await this.vectorSearch(embedding, limit)
// Fetch full data
const markets = await this.marketRepo.findByIds(results.map(r => r.id))
// Sort by similarity
return markets.sort((a, b) => {
const scoreA = results.find(r => r.id === a.id)?.score || 0
const scoreB = results.find(r => r.id === b.id)?.score || 0
return scoreB - scoreA // descending: most similar first
})
}
private async vectorSearch(embedding: number[], limit: number) {
// Vector search implementation
}
}
```
### Middleware Pattern
```typescript
// Request/response processing pipeline
export function withAuth(handler: NextApiHandler): NextApiHandler {
return async (req, res) => {
const token = req.headers.authorization?.replace('Bearer ', '')
if (!token) {
return res.status(401).json({ error: 'Unauthorized' })
}
try {
const user = await verifyToken(token)
req.user = user
return handler(req, res)
} catch (error) {
return res.status(401).json({ error: 'Invalid token' })
}
}
}
// Usage
export default withAuth(async (req, res) => {
// Handler has access to req.user
})
```
## Database Patterns
### Query Optimization
```typescript
// PASS: GOOD: Select only needed columns
const { data } = await supabase
.from('markets')
.select('id, name, status, volume')
.eq('status', 'active')
.order('volume', { ascending: false })
.limit(10)
// FAIL: BAD: Select everything
const { data } = await supabase
.from('markets')
.select('*')
```
### N+1 Query Prevention
```typescript
// FAIL: BAD: N+1 query problem
const markets = await getMarkets()
for (const market of markets) {
market.creator = await getUser(market.creator_id) // N queries
}
// PASS: GOOD: Batch fetch
const markets = await getMarkets()
const creatorIds = markets.map(m => m.creator_id)
const creators = await getUsers(creatorIds) // 1 query
const creatorMap = new Map(creators.map(c => [c.id, c]))
markets.forEach(market => {
market.creator = creatorMap.get(market.creator_id)
})
```
### Transaction Pattern
```typescript
async function createMarketWithPosition(
marketData: CreateMarketDto,
positionData: CreatePositionDto
) {
// Use Supabase transaction
const { data, error } = await supabase.rpc('create_market_with_position', {
market_data: marketData,
position_data: positionData
})
if (error) throw new Error('Transaction failed')
return data
}
// SQL function in Supabase
CREATE OR REPLACE FUNCTION create_market_with_position(
market_data jsonb,
position_data jsonb
)
RETURNS jsonb
LANGUAGE plpgsql
AS $$
BEGIN
-- Start transaction automatically
-- Expand the jsonb payloads into table rows
INSERT INTO markets SELECT * FROM jsonb_populate_record(NULL::markets, market_data);
INSERT INTO positions SELECT * FROM jsonb_populate_record(NULL::positions, position_data);
RETURN jsonb_build_object('success', true);
EXCEPTION
WHEN OTHERS THEN
-- Rollback happens automatically
RETURN jsonb_build_object('success', false, 'error', SQLERRM);
END;
$$;
```
## Caching Strategies
### Redis Caching Layer
```typescript
class CachedMarketRepository implements MarketRepository {
constructor(
private baseRepo: MarketRepository,
private redis: RedisClient
) {}
async findById(id: string): Promise<Market | null> {
// Check cache first
const cached = await this.redis.get(`market:${id}`)
if (cached) {
return JSON.parse(cached)
}
// Cache miss - fetch from database
const market = await this.baseRepo.findById(id)
if (market) {
// Cache for 5 minutes
await this.redis.setex(`market:${id}`, 300, JSON.stringify(market))
}
return market
}
async invalidateCache(id: string): Promise<void> {
await this.redis.del(`market:${id}`)
}
}
```
### Cache-Aside Pattern
```typescript
async function getMarketWithCache(id: string): Promise<Market> {
const cacheKey = `market:${id}`
// Try cache
const cached = await redis.get(cacheKey)
if (cached) return JSON.parse(cached)
// Cache miss - fetch from DB
const market = await db.markets.findUnique({ where: { id } })
if (!market) throw new Error('Market not found')
// Update cache
await redis.setex(cacheKey, 300, JSON.stringify(market))
return market
}
```
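For read-heavy GET endpoints, HTTP cache headers complement the Redis layer by letting browsers and CDNs revalidate instead of refetching. A minimal sketch for a Next.js route handler, assuming a content hash is an acceptable ETag and reusing `getMarketWithCache` from above (the hardcoded id is illustrative):
```typescript
import { createHash } from "node:crypto";
import { NextResponse } from "next/server";

export async function GET(request: Request) {
  const market = await getMarketWithCache("abc-123"); // illustrative: id would come from route params
  const body = JSON.stringify({ success: true, data: market });

  // Validator derived from the payload so clients and CDNs can revalidate cheaply.
  const etag = `"${createHash("sha1").update(body).digest("hex")}"`;
  if (request.headers.get("if-none-match") === etag) {
    return new Response(null, { status: 304, headers: { ETag: etag } });
  }

  return new NextResponse(body, {
    status: 200,
    headers: {
      "Content-Type": "application/json",
      ETag: etag,
      "Cache-Control": "public, max-age=60, stale-while-revalidate=300",
    },
  });
}
```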
## Error Handling Patterns
### Centralized Error Handler
```typescript
class ApiError extends Error {
constructor(
public statusCode: number,
public message: string,
public isOperational = true
) {
super(message)
Object.setPrototypeOf(this, ApiError.prototype)
}
}
export function errorHandler(error: unknown, req: Request): Response {
if (error instanceof ApiError) {
return NextResponse.json({
success: false,
error: error.message
}, { status: error.statusCode })
}
if (error instanceof z.ZodError) {
return NextResponse.json({
success: false,
error: 'Validation failed',
details: error.errors
}, { status: 400 })
}
// Log unexpected errors
console.error('Unexpected error:', error)
return NextResponse.json({
success: false,
error: 'Internal server error'
}, { status: 500 })
}
// Usage
export async function GET(request: Request) {
try {
const data = await fetchData()
return NextResponse.json({ success: true, data })
} catch (error) {
return errorHandler(error, request)
}
}
```
### Retry with Exponential Backoff
```typescript
async function fetchWithRetry<T>(
fn: () => Promise<T>,
maxRetries = 3
): Promise<T> {
let lastError: Error | undefined
for (let i = 0; i < maxRetries; i++) {
try {
return await fn()
} catch (error) {
lastError = error as Error
if (i < maxRetries - 1) {
// Exponential backoff: 1s, 2s, 4s
const delay = Math.pow(2, i) * 1000
await new Promise(resolve => setTimeout(resolve, delay))
}
}
}
throw lastError!
}
// Usage
const data = await fetchWithRetry(() => fetchFromAPI())
```
## Authentication & Authorization
### JWT Token Validation
```typescript
import jwt from 'jsonwebtoken'
interface JWTPayload {
userId: string
email: string
role: 'admin' | 'user'
}
export function verifyToken(token: string): JWTPayload {
try {
const payload = jwt.verify(token, process.env.JWT_SECRET!) as JWTPayload
return payload
} catch (error) {
throw new ApiError(401, 'Invalid token')
}
}
export async function requireAuth(request: Request) {
const token = request.headers.get('authorization')?.replace('Bearer ', '')
if (!token) {
throw new ApiError(401, 'Missing authorization token')
}
return verifyToken(token)
}
// Usage in API route
export async function GET(request: Request) {
const user = await requireAuth(request)
const data = await getDataForUser(user.userId)
return NextResponse.json({ success: true, data })
}
```
### Role-Based Access Control
```typescript
type Permission = 'read' | 'write' | 'delete' | 'admin'
interface User {
id: string
role: 'admin' | 'moderator' | 'user'
}
const rolePermissions: Record<User['role'], Permission[]> = {
admin: ['read', 'write', 'delete', 'admin'],
moderator: ['read', 'write', 'delete'],
user: ['read', 'write']
}
export function hasPermission(user: User, permission: Permission): boolean {
return rolePermissions[user.role].includes(permission)
}
export function requirePermission(permission: Permission) {
return (handler: (request: Request, user: User) => Promise<Response>) => {
return async (request: Request) => {
const user = await requireAuth(request)
if (!hasPermission(user, permission)) {
throw new ApiError(403, 'Insufficient permissions')
}
return handler(request, user)
}
}
}
// Usage - HOF wraps the handler
export const DELETE = requirePermission('delete')(
async (request: Request, user: User) => {
// Handler receives authenticated user with verified permission
return new Response('Deleted', { status: 200 })
}
)
```
## Rate Limiting
### Simple In-Memory Rate Limiter
```typescript
class RateLimiter {
private requests = new Map<string, number[]>()
async checkLimit(
identifier: string,
maxRequests: number,
windowMs: number
): Promise<boolean> {
const now = Date.now()
const requests = this.requests.get(identifier) || []
// Remove old requests outside window
const recentRequests = requests.filter(time => now - time < windowMs)
if (recentRequests.length >= maxRequests) {
return false // Rate limit exceeded
}
// Add current request
recentRequests.push(now)
this.requests.set(identifier, recentRequests)
return true
}
}
const limiter = new RateLimiter()
export async function GET(request: Request) {
const ip = request.headers.get('x-forwarded-for') || 'unknown'
const allowed = await limiter.checkLimit(ip, 100, 60000) // 100 req/min
if (!allowed) {
return NextResponse.json({
error: 'Rate limit exceeded'
}, { status: 429 })
}
// Continue with request
}
```
## Background Jobs & Queues
### Simple Queue Pattern
```typescript
class JobQueue<T> {
private queue: T[] = []
private processing = false
async add(job: T): Promise<void> {
this.queue.push(job)
if (!this.processing) {
this.process()
}
}
private async process(): Promise<void> {
this.processing = true
while (this.queue.length > 0) {
const job = this.queue.shift()!
try {
await this.execute(job)
} catch (error) {
console.error('Job failed:', error)
}
}
this.processing = false
}
private async execute(job: T): Promise<void> {
// Job execution logic
}
}
// Usage for indexing markets
interface IndexJob {
marketId: string
}
const indexQueue = new JobQueue<IndexJob>()
export async function POST(request: Request) {
const { marketId } = await request.json()
// Add to queue instead of blocking
await indexQueue.add({ marketId })
return NextResponse.json({ success: true, message: 'Job queued' })
}
```
## Logging & Monitoring
### Structured Logging
```typescript
interface LogContext {
userId?: string
requestId?: string
method?: string
path?: string
[key: string]: unknown
}
class Logger {
log(level: 'info' | 'warn' | 'error', message: string, context?: LogContext) {
const entry = {
timestamp: new Date().toISOString(),
level,
message,
...context
}
console.log(JSON.stringify(entry))
}
info(message: string, context?: LogContext) {
this.log('info', message, context)
}
warn(message: string, context?: LogContext) {
this.log('warn', message, context)
}
error(message: string, error: Error, context?: LogContext) {
this.log('error', message, {
...context,
error: error.message,
stack: error.stack
})
}
}
const logger = new Logger()
// Usage
export async function GET(request: Request) {
const requestId = crypto.randomUUID()
logger.info('Fetching markets', {
requestId,
method: 'GET',
path: '/api/markets'
})
try {
const markets = await fetchMarkets()
return NextResponse.json({ success: true, data: markets })
} catch (error) {
logger.error('Failed to fetch markets', error as Error, { requestId })
return NextResponse.json({ error: 'Internal error' }, { status: 500 })
}
}
```
**Remember**: Backend patterns enable scalable, maintainable server-side applications. Choose patterns that fit your complexity level.

View File

@ -1,7 +0,0 @@
interface:
  display_name: "Backend Patterns"
  short_description: "API design, database, and server-side patterns"
  brand_color: "#F59E0B"
  default_prompt: "Apply backend patterns: API design, repository, caching"
policy:
  allow_implicit_invocation: true

View File

@ -1,97 +0,0 @@
---
name: brand-voice
description: Build a source-derived writing style profile from real posts, essays, launch notes, docs, or site copy, then reuse that profile across content, outreach, and social workflows. Use when the user wants voice consistency without generic AI writing tropes.
origin: ECC
---
# Brand Voice
Build a durable voice profile from real source material, then use that profile everywhere instead of re-deriving style from scratch or defaulting to generic AI copy.
## When to Activate
- the user wants content or outreach in a specific voice
- writing for X, LinkedIn, email, launch posts, threads, or product updates
- adapting a known author's tone across channels
- the existing content lane needs a reusable style system instead of one-off mimicry
## Source Priority
Use the strongest real source set available, in this order:
1. recent original X posts and threads
2. articles, essays, memos, launch notes, or newsletters
3. real outbound emails or DMs that worked
4. product docs, changelogs, README framing, and site copy
Do not use generic platform exemplars as source material.
## Collection Workflow
1. Gather 5 to 20 representative samples when available.
2. Prefer recent material over old material unless the user says the older writing is more canonical.
3. Separate "public launch voice" from "private working voice" if the source set clearly splits.
4. If live X access is available, use `x-api` to pull recent original posts before drafting.
5. If site copy matters, include the current ECC landing page and repo/plugin framing.
## What to Extract
- rhythm and sentence length
- compression vs explanation
- capitalization norms
- parenthetical use
- question frequency and purpose
- how sharply claims are made
- how often numbers, mechanisms, or receipts show up
- how transitions work
- what the author never does
## Output Contract
Produce a reusable `VOICE PROFILE` block that downstream skills can consume directly. Use the schema in [references/voice-profile-schema.md](references/voice-profile-schema.md).
Keep the profile structured and short enough to reuse in session context. The point is not literary criticism. The point is operational reuse.
## Affaan / ECC Defaults
If the user wants Affaan / ECC voice and live sources are thin, start here unless newer source material overrides it:
- direct, compressed, concrete
- specifics, mechanisms, receipts, and numbers beat adjectives
- parentheticals are for qualification, narrowing, or over-clarification
- capitalization is conventional unless there is a real reason to break it
- questions are rare and should not be used as bait
- tone can be sharp, blunt, skeptical, or dry
- transitions should feel earned, not smoothed over
## Hard Bans
Delete and rewrite any of these:
- fake curiosity hooks
- "not X, just Y"
- "no fluff"
- forced lowercase
- LinkedIn thought-leader cadence
- bait questions
- "Excited to share"
- generic founder-journey filler
- corny parentheticals
## Persistence Rules
- Reuse the latest confirmed `VOICE PROFILE` across related tasks in the same session.
- If the user asks for a durable artifact, save the profile in the requested workspace location or memory surface.
- Do not create repo-tracked files that store personal voice fingerprints unless the user explicitly asks for that.
## Downstream Use
Use this skill before or inside:
- `content-engine`
- `crosspost`
- `lead-intelligence`
- article or launch writing
- cold or warm outbound across X, LinkedIn, and email
If another skill already has a partial voice capture section, this skill is the canonical source of truth.

View File

@ -1,55 +0,0 @@
# Voice Profile Schema
Use this exact structure when building a reusable voice profile:
```text
VOICE PROFILE
=============
Author:
Goal:
Confidence:
Source Set
- source 1
- source 2
- source 3
Rhythm
- short note on sentence length, pacing, and fragmentation
Compression
- how dense or explanatory the writing is
Capitalization
- conventional, mixed, or situational
Parentheticals
- how they are used and how they are not used
Question Use
- rare, frequent, rhetorical, direct, or mostly absent
Claim Style
- how claims are framed, supported, and sharpened
Preferred Moves
- concrete moves the author does use
Banned Moves
- specific patterns the author does not use
CTA Rules
- how, when, or whether to close with asks
Channel Notes
- X:
- LinkedIn:
- Email:
```
Guidelines:
- Keep the profile concrete and source-backed.
- Use short bullets, not essay paragraphs.
- Every banned move should be observable in the source set or explicitly requested by the user.
- If the source set conflicts, call out the split instead of averaging it into mush.

View File

@ -1,84 +0,0 @@
---
name: bun-runtime
description: Bun as runtime, package manager, bundler, and test runner. When to choose Bun vs Node, migration notes, and Vercel support.
origin: ECC
---
# Bun Runtime
Bun is a fast all-in-one JavaScript runtime and toolkit: runtime, package manager, bundler, and test runner.
## When to Use
- **Prefer Bun** for: new JS/TS projects, scripts where install/run speed matters, Vercel deployments with Bun runtime, and when you want a single toolchain (run + install + test + build).
- **Prefer Node** for: maximum ecosystem compatibility, legacy tooling that assumes Node, or when a dependency has known Bun issues.
Use when: adopting Bun, migrating from Node, writing or debugging Bun scripts/tests, or configuring Bun on Vercel or other platforms.
## How It Works
- **Runtime**: Drop-in Node-compatible runtime (built on JavaScriptCore, implemented in Zig).
- **Package manager**: `bun install` is significantly faster than npm/yarn. Lockfile is `bun.lock` (text) by default in current Bun; older versions used `bun.lockb` (binary).
- **Bundler**: Built-in bundler and transpiler for apps and libraries.
- **Test runner**: Built-in `bun test` with Jest-like API.
**Migration from Node**: Replace `node script.js` with `bun run script.js` or `bun script.js`. Run `bun install` in place of `npm install`; most packages work. Use `bun run` for npm scripts; `bun x` for npx-style one-off runs. Node built-ins are supported; prefer Bun APIs where they exist for better performance.
**Vercel**: Set runtime to Bun in project settings. Build: `bun run build` or `bun build ./src/index.ts --outdir=dist`. Install: `bun install --frozen-lockfile` for reproducible deploys.
## Examples
### Run and install
```bash
# Install dependencies (creates/updates bun.lock or bun.lockb)
bun install
# Run a script or file
bun run dev
bun run src/index.ts
bun src/index.ts
```
### Scripts and env
```bash
bun run --env-file=.env dev
FOO=bar bun run script.ts
```
### Testing
```bash
bun test
bun test --watch
```
```typescript
// test/example.test.ts
import { expect, test } from "bun:test";
test("add", () => {
expect(1 + 2).toBe(3);
});
```
### Runtime API
```typescript
const file = Bun.file("package.json");
const json = await file.json();
Bun.serve({
port: 3000,
fetch(req) {
return new Response("Hello");
},
});
```
## Best Practices
- Commit the lockfile (`bun.lock` or `bun.lockb`) for reproducible installs.
- Prefer `bun run` for scripts. For TypeScript, Bun runs `.ts` natively.
- Keep dependencies up to date; Bun and the ecosystem evolve quickly.

View File

@ -1,7 +0,0 @@
interface:
  display_name: "Bun Runtime"
  short_description: "Bun as runtime, package manager, bundler, and test runner"
  brand_color: "#FBF0DF"
  default_prompt: "Use Bun for scripts, install, or run"
policy:
  allow_implicit_invocation: true

View File

@ -1,337 +0,0 @@
---
name: claude-api
description: Anthropic Claude API patterns for Python and TypeScript. Covers Messages API, streaming, tool use, vision, extended thinking, batches, prompt caching, and Claude Agent SDK. Use when building applications with the Claude API or Anthropic SDKs.
origin: ECC
---
# Claude API
Build applications with the Anthropic Claude API and SDKs.
## When to Activate
- Building applications that call the Claude API
- Code imports `anthropic` (Python) or `@anthropic-ai/sdk` (TypeScript)
- User asks about Claude API patterns, tool use, streaming, or vision
- Implementing agent workflows with Claude Agent SDK
- Optimizing API costs, token usage, or latency
## Model Selection
| Model | ID | Best For |
|-------|-----|----------|
| Opus 4.6 | `claude-opus-4-6` | Complex reasoning, architecture, research |
| Sonnet 4.6 | `claude-sonnet-4-6` | Balanced coding, most development tasks |
| Haiku 4.5 | `claude-haiku-4-5-20251001` | Fast responses, high-volume, cost-sensitive |
Default to Sonnet 4.6 unless the task requires deep reasoning (Opus) or speed/cost optimization (Haiku).
## Python SDK
### Installation
```bash
pip install anthropic
```
### Basic Message
```python
import anthropic
client = anthropic.Anthropic() # reads ANTHROPIC_API_KEY from env
message = client.messages.create(
model="claude-sonnet-4-6",
max_tokens=1024,
messages=[
{"role": "user", "content": "Explain async/await in Python"}
]
)
print(message.content[0].text)
```
### Streaming
```python
with client.messages.stream(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a haiku about coding"}]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```
### System Prompt
```python
message = client.messages.create(
model="claude-sonnet-4-6",
max_tokens=1024,
system="You are a senior Python developer. Be concise.",
messages=[{"role": "user", "content": "Review this function"}]
)
```
## TypeScript SDK
### Installation
```bash
npm install @anthropic-ai/sdk
```
### Basic Message
```typescript
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic(); // reads ANTHROPIC_API_KEY from env
const message = await client.messages.create({
model: "claude-sonnet-4-6",
max_tokens: 1024,
messages: [
{ role: "user", content: "Explain async/await in TypeScript" }
],
});
console.log(message.content[0].text);
```
### Streaming
```typescript
const stream = client.messages.stream({
model: "claude-sonnet-4-6",
max_tokens: 1024,
messages: [{ role: "user", content: "Write a haiku" }],
});
for await (const event of stream) {
if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
process.stdout.write(event.delta.text);
}
}
```
## Tool Use
Define tools and let Claude call them:
```python
tools = [
{
"name": "get_weather",
"description": "Get current weather for a location",
"input_schema": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "City name"},
"unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
},
"required": ["location"]
}
}
]
message = client.messages.create(
model="claude-sonnet-4-6",
max_tokens=1024,
tools=tools,
messages=[{"role": "user", "content": "What's the weather in SF?"}]
)
# Handle tool use response
for block in message.content:
    if block.type == "tool_use":
        # Execute the tool with block.input
        result = get_weather(**block.input)
        # Send result back
        follow_up = client.messages.create(
            model="claude-sonnet-4-6",
            max_tokens=1024,
            tools=tools,
            messages=[
                {"role": "user", "content": "What's the weather in SF?"},
                {"role": "assistant", "content": message.content},
                {"role": "user", "content": [
                    {"type": "tool_result", "tool_use_id": block.id, "content": str(result)}
                ]}
            ]
        )
```
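The same loop in TypeScript, as a minimal sketch mirroring the Python flow above; `getWeather` is a placeholder for your own tool implementation, and exact SDK type names may vary by version:
```typescript
import Anthropic from "@anthropic-ai/sdk";

declare function getWeather(input: unknown): Promise<string>; // placeholder tool implementation

const client = new Anthropic();

const tools: Anthropic.Tool[] = [
  {
    name: "get_weather",
    description: "Get current weather for a location",
    input_schema: {
      type: "object",
      properties: { location: { type: "string", description: "City name" } },
      required: ["location"],
    },
  },
];

const first = await client.messages.create({
  model: "claude-sonnet-4-6",
  max_tokens: 1024,
  tools,
  messages: [{ role: "user", content: "What's the weather in SF?" }],
});

for (const block of first.content) {
  if (block.type === "tool_use") {
    const result = await getWeather(block.input);
    // Send the tool result back so Claude can produce the final answer.
    await client.messages.create({
      model: "claude-sonnet-4-6",
      max_tokens: 1024,
      tools,
      messages: [
        { role: "user", content: "What's the weather in SF?" },
        { role: "assistant", content: first.content },
        {
          role: "user",
          content: [{ type: "tool_result", tool_use_id: block.id, content: String(result) }],
        },
      ],
    });
  }
}
```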
## Vision
Send images for analysis:
```python
import base64
with open("diagram.png", "rb") as f:
image_data = base64.standard_b64encode(f.read()).decode("utf-8")
message = client.messages.create(
model="claude-sonnet-4-6",
max_tokens=1024,
messages=[{
"role": "user",
"content": [
{"type": "image", "source": {"type": "base64", "media_type": "image/png", "data": image_data}},
{"type": "text", "text": "Describe this diagram"}
]
}]
)
```
## Extended Thinking
For complex reasoning tasks:
```python
message = client.messages.create(
model="claude-sonnet-4-6",
max_tokens=16000,
thinking={
"type": "enabled",
"budget_tokens": 10000
},
messages=[{"role": "user", "content": "Solve this math problem step by step..."}]
)
for block in message.content:
    if block.type == "thinking":
        print(f"Thinking: {block.thinking}")
    elif block.type == "text":
        print(f"Answer: {block.text}")
```
## Prompt Caching
Cache large system prompts or context to reduce costs:
```python
message = client.messages.create(
model="claude-sonnet-4-6",
max_tokens=1024,
system=[
{"type": "text", "text": large_system_prompt, "cache_control": {"type": "ephemeral"}}
],
messages=[{"role": "user", "content": "Question about the cached context"}]
)
# Check cache usage
print(f"Cache read: {message.usage.cache_read_input_tokens}")
print(f"Cache creation: {message.usage.cache_creation_input_tokens}")
```
## Batches API
Process large volumes asynchronously at 50% cost reduction:
```python
import time
batch = client.messages.batches.create(
requests=[
{
"custom_id": f"request-{i}",
"params": {
"model": "claude-sonnet-4-6",
"max_tokens": 1024,
"messages": [{"role": "user", "content": prompt}]
}
}
for i, prompt in enumerate(prompts)
]
)
# Poll for completion
while True:
    status = client.messages.batches.retrieve(batch.id)
    if status.processing_status == "ended":
        break
    time.sleep(30)
# Get results
for result in client.messages.batches.results(batch.id):
    print(result.result.message.content[0].text)
```
## Claude Agent SDK
Build multi-step agents:
```python
# Note: Agent SDK API surface may change — check official docs
import anthropic
# Define tools as functions
tools = [{
"name": "search_codebase",
"description": "Search the codebase for relevant code",
"input_schema": {
"type": "object",
"properties": {"query": {"type": "string"}},
"required": ["query"]
}
}]
# Run an agentic loop with tool use
client = anthropic.Anthropic()
messages = [{"role": "user", "content": "Review the auth module for security issues"}]
while True:
    response = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=4096,
        tools=tools,
        messages=messages,
    )
    if response.stop_reason == "end_turn":
        break
    # Handle tool calls and continue the loop
    messages.append({"role": "assistant", "content": response.content})
    # ... execute tools and append tool_result messages
```
## Cost Optimization
| Strategy | Savings | When to Use |
|----------|---------|-------------|
| Prompt caching | Up to 90% on cached tokens | Repeated system prompts or context |
| Batches API | 50% | Non-time-sensitive bulk processing |
| Haiku instead of Sonnet | ~75% | Simple tasks, classification, extraction |
| Shorter max_tokens | Variable | When you know output will be short |
| Streaming | None (same cost) | Better UX, same price |
## Error Handling
```python
import time
from anthropic import APIError, RateLimitError, APIConnectionError
try:
    message = client.messages.create(...)
except RateLimitError:
    # Back off and retry
    time.sleep(60)
except APIConnectionError:
    # Network issue, retry with backoff
    pass
except APIError as e:
    print(f"API error {e.status_code}: {e.message}")
```
## Environment Setup
```bash
# Required
export ANTHROPIC_API_KEY="your-api-key-here"
# Optional: set default model
export ANTHROPIC_MODEL="claude-sonnet-4-6"
```
Never hardcode API keys. Always use environment variables.

View File

@ -1,7 +0,0 @@
interface:
  display_name: "Claude API"
  short_description: "Anthropic Claude API patterns and SDKs"
  brand_color: "#D97706"
  default_prompt: "Build applications with the Claude API using Messages, tool use, streaming, and Agent SDK"
policy:
  allow_implicit_invocation: true

View File

@ -1,549 +0,0 @@
---
name: coding-standards
description: Baseline cross-project coding conventions for naming, readability, immutability, and code-quality review. Use detailed frontend or backend skills for framework-specific patterns.
origin: ECC
---
# Coding Standards & Best Practices
Baseline coding conventions applicable across projects.
This skill is the shared floor, not the detailed framework playbook.
- Use `frontend-patterns` for React, state, forms, rendering, and UI architecture.
- Use `backend-patterns` or `api-design` for repository/service layers, endpoint design, validation, and server-specific concerns.
- Use `rules/common/coding-style.md` when you need the shortest reusable rule layer instead of a full skill walkthrough.
## When to Activate
- Starting a new project or module
- Reviewing code for quality and maintainability
- Refactoring existing code to follow conventions
- Enforcing naming, formatting, or structural consistency
- Setting up linting, formatting, or type-checking rules
- Onboarding new contributors to coding conventions
## Scope Boundaries
Activate this skill for:
- descriptive naming
- immutability defaults
- readability, KISS, DRY, and YAGNI enforcement
- error-handling expectations and code-smell review
Do not use this skill as the primary source for:
- React composition, hooks, or rendering patterns
- backend architecture, API design, or database layering
- domain-specific framework guidance when a narrower ECC skill already exists
## Code Quality Principles
### 1. Readability First
- Code is read more than written
- Clear variable and function names
- Self-documenting code preferred over comments
- Consistent formatting
### 2. KISS (Keep It Simple, Stupid)
- Simplest solution that works
- Avoid over-engineering
- No premature optimization
- Easy to understand > clever code
### 3. DRY (Don't Repeat Yourself)
- Extract common logic into functions
- Create reusable components
- Share utilities across modules
- Avoid copy-paste programming
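A small sketch of what extracting shared logic looks like in practice (the currency helper and call sites are made up for illustration):
```typescript
const priceCents = 1999
const totalCents = 5497

// FAIL: BAD: the same formatting logic copy-pasted at every call site
const priceLabel = `$${(priceCents / 100).toFixed(2)}`
const totalLabel = `$${(totalCents / 100).toFixed(2)}`

// PASS: GOOD: one shared helper, reused everywhere it is needed
function formatUsd(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`
}

const price = formatUsd(priceCents) // "$19.99"
const total = formatUsd(totalCents) // "$54.97"
```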
### 4. YAGNI (You Aren't Gonna Need It)
- Don't build features before they're needed
- Avoid speculative generality
- Add complexity only when required
- Start simple, refactor when needed
## TypeScript/JavaScript Standards
### Variable Naming
```typescript
// PASS: GOOD: Descriptive names
const marketSearchQuery = 'election'
const isUserAuthenticated = true
const totalRevenue = 1000
// FAIL: BAD: Unclear names
const q = 'election'
const flag = true
const x = 1000
```
### Function Naming
```typescript
// PASS: GOOD: Verb-noun pattern
async function fetchMarketData(marketId: string) { }
function calculateSimilarity(a: number[], b: number[]) { }
function isValidEmail(email: string): boolean { }
// FAIL: BAD: Unclear or noun-only
async function market(id: string) { }
function similarity(a, b) { }
function email(e) { }
```
### Immutability Pattern (CRITICAL)
```typescript
// PASS: ALWAYS use spread operator
const updatedUser = {
...user,
name: 'New Name'
}
const updatedArray = [...items, newItem]
// FAIL: NEVER mutate directly
user.name = 'New Name' // BAD
items.push(newItem) // BAD
```
### Error Handling
```typescript
// PASS: GOOD: Comprehensive error handling
async function fetchData(url: string) {
try {
const response = await fetch(url)
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`)
}
return await response.json()
} catch (error) {
console.error('Fetch failed:', error)
throw new Error('Failed to fetch data')
}
}
// FAIL: BAD: No error handling
async function fetchData(url) {
const response = await fetch(url)
return response.json()
}
```
### Async/Await Best Practices
```typescript
// PASS: GOOD: Parallel execution when possible
const [users, markets, stats] = await Promise.all([
fetchUsers(),
fetchMarkets(),
fetchStats()
])
// FAIL: BAD: Sequential when unnecessary
const users = await fetchUsers()
const markets = await fetchMarkets()
const stats = await fetchStats()
```
### Type Safety
```typescript
// PASS: GOOD: Proper types
interface Market {
id: string
name: string
status: 'active' | 'resolved' | 'closed'
created_at: Date
}
function getMarket(id: string): Promise<Market> {
// Implementation
}
// FAIL: BAD: Using 'any'
function getMarket(id: any): Promise<any> {
// Implementation
}
```
## React Best Practices
### Component Structure
```typescript
// PASS: GOOD: Functional component with types
interface ButtonProps {
children: React.ReactNode
onClick: () => void
disabled?: boolean
variant?: 'primary' | 'secondary'
}
export function Button({
children,
onClick,
disabled = false,
variant = 'primary'
}: ButtonProps) {
return (
<button
onClick={onClick}
disabled={disabled}
className={`btn btn-${variant}`}
>
{children}
</button>
)
}
// FAIL: BAD: No types, unclear structure
export function Button(props) {
return <button onClick={props.onClick}>{props.children}</button>
}
```
### Custom Hooks
```typescript
// PASS: GOOD: Reusable custom hook
export function useDebounce<T>(value: T, delay: number): T {
const [debouncedValue, setDebouncedValue] = useState<T>(value)
useEffect(() => {
const handler = setTimeout(() => {
setDebouncedValue(value)
}, delay)
return () => clearTimeout(handler)
}, [value, delay])
return debouncedValue
}
// Usage
const debouncedQuery = useDebounce(searchQuery, 500)
```
### State Management
```typescript
// PASS: GOOD: Proper state updates
const [count, setCount] = useState(0)
// Functional update for state based on previous state
setCount(prev => prev + 1)
// FAIL: BAD: Direct state reference
setCount(count + 1) // Can be stale in async scenarios
```
### Conditional Rendering
```typescript
// PASS: GOOD: Clear conditional rendering
{isLoading && <Spinner />}
{error && <ErrorMessage error={error} />}
{data && <DataDisplay data={data} />}
// FAIL: BAD: Ternary hell
{isLoading ? <Spinner /> : error ? <ErrorMessage error={error} /> : data ? <DataDisplay data={data} /> : null}
```
## API Design Standards
### REST API Conventions
```
GET /api/markets # List all markets
GET /api/markets/:id # Get specific market
POST /api/markets # Create new market
PUT /api/markets/:id # Update market (full)
PATCH /api/markets/:id # Update market (partial)
DELETE /api/markets/:id # Delete market
# Query parameters for filtering
GET /api/markets?status=active&limit=10&offset=0
```
### Response Format
```typescript
// PASS: GOOD: Consistent response structure
interface ApiResponse<T> {
success: boolean
data?: T
error?: string
meta?: {
total: number
page: number
limit: number
}
}
// Success response
return NextResponse.json({
success: true,
data: markets,
meta: { total: 100, page: 1, limit: 10 }
})
// Error response
return NextResponse.json({
success: false,
error: 'Invalid request'
}, { status: 400 })
```
### Input Validation
```typescript
import { NextResponse } from 'next/server'
import { z } from 'zod'
// PASS: GOOD: Schema validation
const CreateMarketSchema = z.object({
name: z.string().min(1).max(200),
description: z.string().min(1).max(2000),
endDate: z.string().datetime(),
categories: z.array(z.string()).min(1)
})
export async function POST(request: Request) {
const body = await request.json()
try {
const validated = CreateMarketSchema.parse(body)
// Proceed with validated data
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json({
success: false,
error: 'Validation failed',
details: error.errors
}, { status: 400 })
}
}
}
```
## File Organization
### Project Structure
```
src/
├── app/ # Next.js App Router
│ ├── api/ # API routes
│ ├── markets/ # Market pages
│ └── (auth)/ # Auth pages (route groups)
├── components/ # React components
│ ├── ui/ # Generic UI components
│ ├── forms/ # Form components
│ └── layouts/ # Layout components
├── hooks/ # Custom React hooks
├── lib/ # Utilities and configs
│ ├── api/ # API clients
│ ├── utils/ # Helper functions
│ └── constants/ # Constants
├── types/ # TypeScript types
└── styles/ # Global styles
```
### File Naming
```
components/Button.tsx # PascalCase for components
hooks/useAuth.ts # camelCase with 'use' prefix
lib/formatDate.ts # camelCase for utilities
types/market.types.ts # camelCase with .types suffix
```
## Comments & Documentation
### When to Comment
```typescript
// PASS: GOOD: Explain WHY, not WHAT
// Use exponential backoff to avoid overwhelming the API during outages
const delay = Math.min(1000 * Math.pow(2, retryCount), 30000)
// Deliberately using mutation here for performance with large arrays
items.push(newItem)
// FAIL: BAD: Stating the obvious
// Increment counter by 1
count++
// Set name to user's name
name = user.name
```
### JSDoc for Public APIs
```typescript
/**
* Searches markets using semantic similarity.
*
* @param query - Natural language search query
* @param limit - Maximum number of results (default: 10)
* @returns Array of markets sorted by similarity score
* @throws {Error} If OpenAI API fails or Redis unavailable
*
* @example
* ```typescript
* const results = await searchMarkets('election', 5)
* console.log(results[0].name) // "Trump vs Biden"
* ```
*/
export async function searchMarkets(
query: string,
limit: number = 10
): Promise<Market[]> {
// Implementation
}
```
## Performance Best Practices
### Memoization
```typescript
import { useMemo, useCallback } from 'react'
// PASS: GOOD: Memoize expensive computations
const sortedMarkets = useMemo(() => {
return [...markets].sort((a, b) => b.volume - a.volume) // copy first: .sort() mutates in place
}, [markets])
// PASS: GOOD: Memoize callbacks
const handleSearch = useCallback((query: string) => {
setSearchQuery(query)
}, [])
```
### Lazy Loading
```typescript
import { lazy, Suspense } from 'react'
// PASS: GOOD: Lazy load heavy components
const HeavyChart = lazy(() => import('./HeavyChart'))
export function Dashboard() {
return (
<Suspense fallback={<Spinner />}>
<HeavyChart />
</Suspense>
)
}
```
### Database Queries
```typescript
// PASS: GOOD: Select only needed columns
const { data } = await supabase
.from('markets')
.select('id, name, status')
.limit(10)
// FAIL: BAD: Select everything
const { data } = await supabase
.from('markets')
.select('*')
```
## Testing Standards
### Test Structure (AAA Pattern)
```typescript
test('calculates similarity correctly', () => {
// Arrange
const vector1 = [1, 0, 0]
const vector2 = [0, 1, 0]
// Act
const similarity = calculateCosineSimilarity(vector1, vector2)
// Assert
expect(similarity).toBe(0)
})
```
### Test Naming
```typescript
// PASS: GOOD: Descriptive test names
test('returns empty array when no markets match query', () => { })
test('throws error when OpenAI API key is missing', () => { })
test('falls back to substring search when Redis unavailable', () => { })
// FAIL: BAD: Vague test names
test('works', () => { })
test('test search', () => { })
```
## Code Smell Detection
Watch for these anti-patterns:
### 1. Long Functions
```typescript
// FAIL: BAD: Function > 50 lines
function processMarketData() {
// 100 lines of code
}
// PASS: GOOD: Split into smaller functions
function processMarketData() {
const validated = validateData()
const transformed = transformData(validated)
return saveData(transformed)
}
```
### 2. Deep Nesting
```typescript
// FAIL: BAD: 5+ levels of nesting
if (user) {
if (user.isAdmin) {
if (market) {
if (market.isActive) {
if (hasPermission) {
// Do something
}
}
}
}
}
// PASS: GOOD: Early returns
if (!user) return
if (!user.isAdmin) return
if (!market) return
if (!market.isActive) return
if (!hasPermission) return
// Do something
```
### 3. Magic Numbers
```typescript
// FAIL: BAD: Unexplained numbers
if (retryCount > 3) { }
setTimeout(callback, 500)
// PASS: GOOD: Named constants
const MAX_RETRIES = 3
const DEBOUNCE_DELAY_MS = 500
if (retryCount > MAX_RETRIES) { }
setTimeout(callback, DEBOUNCE_DELAY_MS)
```
**Remember**: Code quality is not negotiable. Clear, maintainable code enables rapid development and confident refactoring.

View File

@ -1,7 +0,0 @@
interface:
display_name: "Coding Standards"
short_description: "Universal coding standards and best practices"
brand_color: "#3B82F6"
default_prompt: "Apply standards: immutability, error handling, type safety"
policy:
allow_implicit_invocation: true

View File

@ -1,131 +0,0 @@
---
name: content-engine
description: Create platform-native content systems for X, LinkedIn, TikTok, YouTube, newsletters, and repurposed multi-platform campaigns. Use when the user wants social posts, threads, scripts, content calendars, or one source asset adapted cleanly across platforms.
origin: ECC
---
# Content Engine
Build platform-native content without flattening the author's real voice into platform slop.
## When to Activate
- writing X posts or threads
- drafting LinkedIn posts or launch updates
- scripting short-form video or YouTube explainers
- repurposing articles, podcasts, demos, docs, or internal notes into public content
- building a launch sequence or ongoing content system around a product, insight, or narrative
## Non-Negotiables
1. Start from source material, not generic post formulas.
2. Adapt the format for the platform, not the persona.
3. One post should carry one actual claim.
4. Specificity beats adjectives.
5. No engagement bait unless the user explicitly asks for it.
## Source-First Workflow
Before drafting, identify the source set:
- published articles
- notes or internal memos
- product demos
- docs or changelogs
- transcripts
- screenshots
- prior posts from the same author
If the user wants a specific voice, build a voice profile from real examples before writing.
Use `brand-voice` as the canonical workflow when voice consistency matters across more than one output.
## Voice Handling
`brand-voice` is the canonical voice layer.
Run it first when:
- there are multiple downstream outputs
- the user explicitly cares about writing style
- the content is launch, outreach, or reputation-sensitive
Reuse the resulting `VOICE PROFILE` here instead of rebuilding a second voice model.
If the user wants Affaan / ECC voice specifically, still treat `brand-voice` as the source of truth and feed it the best live or source-derived material available.
## Hard Bans
Delete and rewrite any of these:
- "In today's rapidly evolving landscape"
- "game-changer", "revolutionary", "cutting-edge"
- "here's why this matters" unless it is followed immediately by something concrete
- ending with a LinkedIn-style question just to farm replies
- forced casualness on LinkedIn
- fake engagement padding that was not present in the source material
## Platform Adaptation Rules
### X
- open with the strongest claim, artifact, or tension
- keep the compression if the source voice is compressed
- if writing a thread, each post must advance the argument
- do not pad with context the audience does not need
### LinkedIn
- expand only enough for people outside the immediate niche to follow
- do not turn it into a fake lesson post unless the source material actually is reflective
- no corporate inspiration cadence
- no praise-stacking, no "journey" filler
### Short Video
- script around the visual sequence and proof points
- first seconds should show the result, problem, or punch
- do not write narration that sounds better on paper than on screen
### YouTube
- show the result or tension early
- organize by argument or progression, not filler sections
- use chaptering only when it helps clarity
### Newsletter
- open with the point, conflict, or artifact
- do not spend the first paragraph warming up
- every section needs to add something new
## Repurposing Flow
1. Pick the anchor asset.
2. Extract 3 to 7 atomic claims or scenes.
3. Rank them by sharpness, novelty, and proof.
4. Assign one strong idea per output.
5. Adapt structure for each platform.
6. Strip platform-shaped filler.
7. Run the quality gate.
## Deliverables
When asked for a campaign, return:
- a short voice profile if voice matching matters
- the core angle
- platform-native drafts
- posting order only if it helps execution
- gaps that must be filled before publishing
## Quality Gate
Before delivering:
- every draft sounds like the intended author, not the platform stereotype
- every draft contains a real claim, proof point, or concrete observation
- no generic hype language remains
- no fake engagement bait remains
- no duplicated copy across platforms unless requested
- any CTA is earned and user-approved
## Related Skills
- `brand-voice` for source-derived voice profiles
- `crosspost` for platform-specific distribution
- `x-api` for sourcing recent posts and publishing approved X output

View File

@ -1,7 +0,0 @@
interface:
display_name: "Content Engine"
short_description: "Turn one idea into platform-native social and content outputs"
brand_color: "#DC2626"
default_prompt: "Turn this source asset into strong multi-platform content"
policy:
allow_implicit_invocation: true

View File

@ -1,111 +0,0 @@
---
name: crosspost
description: Multi-platform content distribution across X, LinkedIn, Threads, and Bluesky. Adapts content per platform using content-engine patterns. Never posts identical content cross-platform. Use when the user wants to distribute content across social platforms.
origin: ECC
---
# Crosspost
Distribute content across platforms without turning it into the same fake post in four costumes.
## When to Activate
- the user wants to publish the same underlying idea across multiple platforms
- a launch, update, release, or essay needs platform-specific versions
- the user says "crosspost", "post this everywhere", or "adapt this for X and LinkedIn"
## Core Rules
1. Do not publish identical copy across platforms.
2. Preserve the author's voice across platforms.
3. Adapt for constraints, not stereotypes.
4. One post should still be about one thing.
5. Do not invent a CTA, question, or moral if the source did not earn one.
## Workflow
### Step 1: Start with the Primary Version
Pick the strongest source version first:
- the original X post
- the original article
- the launch note
- the thread
- the memo or changelog
Use `content-engine` first if the source still needs voice shaping.
### Step 2: Capture the Voice Fingerprint
Run `brand-voice` first if the source voice is not already captured in the current session.
Reuse the resulting `VOICE PROFILE` directly.
Do not build a second ad hoc voice checklist here unless the user explicitly wants a fresh override for this campaign.
### Step 3: Adapt by Platform Constraint
### X
- keep it compressed
- lead with the sharpest claim or artifact
- use a thread only when a single post would collapse the argument
- avoid hashtags and generic filler
### LinkedIn
- add only the context needed for people outside the niche
- do not turn it into a fake founder-reflection post
- do not add a closing question just because it is LinkedIn
- do not force a polished "professional tone" if the author is naturally sharper
### Threads
- keep it readable and direct
- do not write fake hyper-casual creator copy
- do not paste the LinkedIn version and shorten it
### Bluesky
- keep it concise
- preserve the author's cadence
- do not rely on hashtags or feed-gaming language
## Posting Order
Default:
1. post the strongest native version first
2. adapt for the secondary platforms
3. stagger timing only if the user wants sequencing help
Do not add cross-platform references unless useful. Most of the time, the post should stand on its own.
## Banned Patterns
Delete and rewrite any of these:
- "Excited to share"
- "Here's what I learned"
- "What do you think?"
- "link in bio" unless that is literally true
- generic "professional takeaway" paragraphs that were not in the source
## Output Format
Return:
- the primary platform version
- adapted variants for each requested platform
- a short note on what changed and why
- any publishing constraint the user still needs to resolve
## Quality Gate
Before delivering:
- each version reads like the same author under different constraints
- no platform version feels padded or sanitized
- no copy is duplicated verbatim across platforms
- any extra context added for LinkedIn or newsletter use is actually necessary
## Related Skills
- `brand-voice` for reusable source-derived voice capture
- `content-engine` for voice capture and source shaping
- `x-api` for X publishing workflows

View File

@ -1,7 +0,0 @@
interface:
display_name: "Crosspost"
short_description: "Multi-platform content distribution with native adaptation"
brand_color: "#EC4899"
default_prompt: "Distribute content across X, LinkedIn, Threads, and Bluesky with platform-native adaptation"
policy:
allow_implicit_invocation: true

View File

@ -1,155 +0,0 @@
---
name: deep-research
description: Multi-source deep research using firecrawl and exa MCPs. Searches the web, synthesizes findings, and delivers cited reports with source attribution. Use when the user wants thorough research on any topic with evidence and citations.
origin: ECC
---
# Deep Research
Produce thorough, cited research reports from multiple web sources using firecrawl and exa MCP tools.
## When to Activate
- User asks to research any topic in depth
- Competitive analysis, technology evaluation, or market sizing
- Due diligence on companies, investors, or technologies
- Any question requiring synthesis from multiple sources
- User says "research", "deep dive", "investigate", or "what's the current state of"
## MCP Requirements
At least one of:
- **firecrawl**: `firecrawl_search`, `firecrawl_scrape`, `firecrawl_crawl`
- **exa**: `web_search_exa`, `web_search_advanced_exa`, `crawling_exa`
Both together give the best coverage. Configure in `~/.claude.json` or `~/.codex/config.toml`.
## Workflow
### Step 1: Understand the Goal
Ask 1-2 quick clarifying questions:
- "What's your goal — learning, making a decision, or writing something?"
- "Any specific angle or depth you want?"
If the user says "just research it" — skip ahead with reasonable defaults.
### Step 2: Plan the Research
Break the topic into 3-5 research sub-questions. Example:
- Topic: "Impact of AI on healthcare"
- What are the main AI applications in healthcare today?
- What clinical outcomes have been measured?
- What are the regulatory challenges?
- What companies are leading this space?
- What's the market size and growth trajectory?
### Step 3: Execute Multi-Source Search
For EACH sub-question, search using available MCP tools:
**With firecrawl:**
```
firecrawl_search(query: "<sub-question keywords>", limit: 8)
```
**With exa:**
```
web_search_exa(query: "<sub-question keywords>", numResults: 8)
web_search_advanced_exa(query: "<keywords>", numResults: 5, startPublishedDate: "2025-01-01")
```
**Search strategy:**
- Use 2-3 different keyword variations per sub-question
- Mix general and news-focused queries
- Aim for 15-30 unique sources total
- Prioritize: academic, official, reputable news > blogs > forums
### Step 4: Deep-Read Key Sources
For the most promising URLs, fetch full content:
**With firecrawl:**
```
firecrawl_scrape(url: "<url>")
```
**With exa:**
```
crawling_exa(url: "<url>", tokensNum: 5000)
```
Read 3-5 key sources in full for depth. Do not rely only on search snippets.
### Step 5: Synthesize and Write Report
Structure the report:
```markdown
# [Topic]: Research Report
*Generated: [date] | Sources: [N] | Confidence: [High/Medium/Low]*
## Executive Summary
[3-5 sentence overview of key findings]
## 1. [First Major Theme]
[Findings with inline citations]
- Key point ([Source Name](url))
- Supporting data ([Source Name](url))
## 2. [Second Major Theme]
...
## 3. [Third Major Theme]
...
## Key Takeaways
- [Actionable insight 1]
- [Actionable insight 2]
- [Actionable insight 3]
## Sources
1. [Title](url) — [one-line summary]
2. ...
## Methodology
Searched [N] queries across web and news. Analyzed [M] sources.
Sub-questions investigated: [list]
```
### Step 6: Deliver
- **Short topics**: Post the full report in chat
- **Long reports**: Post the executive summary + key takeaways, save full report to a file
## Parallel Research with Subagents
For broad topics, use Claude Code's Task tool to parallelize:
```
Launch 3 research agents in parallel:
1. Agent 1: Research sub-questions 1-2
2. Agent 2: Research sub-questions 3-4
3. Agent 3: Research sub-question 5 + cross-cutting themes
```
Each agent searches, reads sources, and returns findings. The main session synthesizes into the final report.
## Quality Rules
1. **Every claim needs a source.** No unsourced assertions.
2. **Cross-reference.** If only one source says it, flag it as unverified.
3. **Recency matters.** Prefer sources from the last 12 months.
4. **Acknowledge gaps.** If you couldn't find good info on a sub-question, say so.
5. **No hallucination.** If you don't know, say "insufficient data found."
6. **Separate fact from inference.** Label estimates, projections, and opinions clearly.
## Examples
```
"Research the current state of nuclear fusion energy"
"Deep dive into Rust vs Go for backend services in 2026"
"Research the best strategies for bootstrapping a SaaS business"
"What's happening with the US housing market right now?"
"Investigate the competitive landscape for AI code editors"
```

View File

@ -1,7 +0,0 @@
interface:
display_name: "Deep Research"
short_description: "Multi-source deep research with firecrawl and exa MCPs"
brand_color: "#6366F1"
default_prompt: "Research the given topic using firecrawl and exa, produce a cited report"
policy:
allow_implicit_invocation: true

View File

@ -1,144 +0,0 @@
---
name: dmux-workflows
description: Multi-agent orchestration using dmux (tmux pane manager for AI agents). Patterns for parallel agent workflows across Claude Code, Codex, OpenCode, and other harnesses. Use when running multiple agent sessions in parallel or coordinating multi-agent development workflows.
origin: ECC
---
# dmux Workflows
Orchestrate parallel AI agent sessions using dmux, a tmux pane manager for agent harnesses.
## When to Activate
- Running multiple agent sessions in parallel
- Coordinating work across Claude Code, Codex, and other harnesses
- Complex tasks that benefit from divide-and-conquer parallelism
- User says "run in parallel", "split this work", "use dmux", or "multi-agent"
## What is dmux
dmux is a tmux-based orchestration tool that manages AI agent panes:
- Press `n` to create a new pane with a prompt
- Press `m` to merge pane output back to the main session
- Supports: Claude Code, Codex, OpenCode, Cline, Gemini, Qwen
**Install:** `npm install -g dmux` or see [github.com/standardagents/dmux](https://github.com/standardagents/dmux)
## Quick Start
```bash
# Start dmux session
dmux
# Create agent panes (press 'n' in dmux, then type prompt)
# Pane 1: "Implement the auth middleware in src/auth/"
# Pane 2: "Write tests for the user service"
# Pane 3: "Update API documentation"
# Each pane runs its own agent session
# Press 'm' to merge results back
```
## Workflow Patterns
### Pattern 1: Research + Implement
Split research and implementation into parallel tracks:
```
Pane 1 (Research): "Research best practices for rate limiting in Node.js.
Check current libraries, compare approaches, and write findings to
/tmp/rate-limit-research.md"
Pane 2 (Implement): "Implement rate limiting middleware for our Express API.
Start with a basic token bucket, we'll refine after research completes."
# After Pane 1 completes, merge findings into Pane 2's context
```
### Pattern 2: Multi-File Feature
Parallelize work across independent files:
```
Pane 1: "Create the database schema and migrations for the billing feature"
Pane 2: "Build the billing API endpoints in src/api/billing/"
Pane 3: "Create the billing dashboard UI components"
# Merge all, then do integration in main pane
```
### Pattern 3: Test + Fix Loop
Run tests in one pane, fix in another:
```
Pane 1 (Watcher): "Run the test suite in watch mode. When tests fail,
summarize the failures."
Pane 2 (Fixer): "Fix failing tests based on the error output from pane 1"
```
### Pattern 4: Cross-Harness
Use different AI tools for different tasks:
```
Pane 1 (Claude Code): "Review the security of the auth module"
Pane 2 (Codex): "Refactor the utility functions for performance"
Pane 3 (Claude Code): "Write E2E tests for the checkout flow"
```
### Pattern 5: Code Review Pipeline
Parallel review perspectives:
```
Pane 1: "Review src/api/ for security vulnerabilities"
Pane 2: "Review src/api/ for performance issues"
Pane 3: "Review src/api/ for test coverage gaps"
# Merge all reviews into a single report
```
## Best Practices
1. **Independent tasks only.** Don't parallelize tasks that depend on each other's output.
2. **Clear boundaries.** Each pane should work on distinct files or concerns.
3. **Merge strategically.** Review pane output before merging to avoid conflicts.
4. **Use git worktrees.** For file-conflict-prone work, use separate worktrees per pane.
5. **Resource awareness.** Each pane uses API tokens — keep total panes under 5-6.
## Git Worktree Integration
For tasks that touch overlapping files:
```bash
# Create worktrees for isolation
git worktree add ../feature-auth feat/auth
git worktree add ../feature-billing feat/billing
# Run agents in separate worktrees
# Pane 1: cd ../feature-auth && claude
# Pane 2: cd ../feature-billing && claude
# Merge branches when done
git merge feat/auth
git merge feat/billing
```
## Complementary Tools
| Tool | What It Does | When to Use |
|------|-------------|-------------|
| **dmux** | tmux pane management for agents | Parallel agent sessions |
| **Superset** | Terminal IDE for 10+ parallel agents | Large-scale orchestration |
| **Claude Code Task tool** | In-process subagent spawning | Programmatic parallelism within a session |
| **Codex multi-agent** | Built-in agent roles | Codex-specific parallel work |
## Troubleshooting
- **Pane not responding:** Check if the agent session is waiting for input. Use `m` to read output.
- **Merge conflicts:** Use git worktrees to isolate file changes per pane.
- **High token usage:** Reduce number of parallel panes. Each pane is a full agent session.
- **tmux not found:** Install with `brew install tmux` (macOS) or `apt install tmux` (Linux).

View File

@ -1,7 +0,0 @@
interface:
display_name: "dmux Workflows"
short_description: "Multi-agent orchestration with dmux"
brand_color: "#14B8A6"
default_prompt: "Orchestrate parallel agent sessions using dmux pane manager"
policy:
allow_implicit_invocation: true

View File

@ -1,90 +0,0 @@
---
name: documentation-lookup
description: Use up-to-date library and framework docs via Context7 MCP instead of training data. Activates for setup questions, API references, code examples, or when the user names a framework (e.g. React, Next.js, Prisma).
origin: ECC
---
# Documentation Lookup (Context7)
When the user asks about libraries, frameworks, or APIs, fetch current documentation via the Context7 MCP (tools `resolve-library-id` and `query-docs`) instead of relying on training data.
## Core Concepts
- **Context7**: MCP server that exposes live documentation; use it instead of training data for libraries and APIs.
- **resolve-library-id**: Returns Context7-compatible library IDs (e.g. `/vercel/next.js`) from a library name and query.
- **query-docs**: Fetches documentation and code snippets for a given library ID and question. Always call resolve-library-id first to get a valid library ID.
## When to use
Activate when the user:
- Asks setup or configuration questions (e.g. "How do I configure Next.js middleware?")
- Requests code that depends on a library ("Write a Prisma query for...")
- Needs API or reference information ("What are the Supabase auth methods?")
- Mentions specific frameworks or libraries (React, Vue, Svelte, Express, Tailwind, Prisma, Supabase, etc.)
Use this skill whenever the request depends on accurate, up-to-date behavior of a library, framework, or API. Applies across harnesses that have the Context7 MCP configured (e.g. Claude Code, Cursor, Codex).
## How it works
### Step 1: Resolve the Library ID
Call the **resolve-library-id** MCP tool with:
- **libraryName**: The library or product name taken from the user's question (e.g. `Next.js`, `Prisma`, `Supabase`).
- **query**: The user's full question. This improves relevance ranking of results.
You must obtain a Context7-compatible library ID (format `/org/project` or `/org/project/version`) before querying docs. Do not call query-docs without a valid library ID from this step.
### Step 2: Select the Best Match
From the resolution results, choose one result using:
- **Name match**: Prefer exact or closest match to what the user asked for.
- **Benchmark score**: Higher scores indicate better documentation quality (100 is highest).
- **Source reputation**: Prefer High or Medium reputation when available.
- **Version**: If the user specified a version (e.g. "React 19", "Next.js 15"), prefer a version-specific library ID if listed (e.g. `/org/project/v1.2.0`).
### Step 3: Fetch the Documentation
Call the **query-docs** MCP tool with:
- **libraryId**: The selected Context7 library ID from Step 2 (e.g. `/vercel/next.js`).
- **query**: The user's specific question or task. Be specific to get relevant snippets.
Limit: do not call query-docs (or resolve-library-id) more than 3 times per question. If the answer is unclear after 3 calls, state the uncertainty and use the best information you have rather than guessing.
### Step 4: Use the Documentation
- Answer the user's question using the fetched, current information.
- Include relevant code examples from the docs when helpful.
- Cite the library or version when it matters (e.g. "In Next.js 15...").
## Examples
### Example: Next.js middleware
1. Call **resolve-library-id** with `libraryName: "Next.js"`, `query: "How do I set up Next.js middleware?"`.
2. From results, pick the best match (e.g. `/vercel/next.js`) by name and benchmark score.
3. Call **query-docs** with `libraryId: "/vercel/next.js"`, `query: "How do I set up Next.js middleware?"`.
4. Use the returned snippets and text to answer; include a minimal `middleware.ts` example from the docs if relevant.
### Example: Prisma query
1. Call **resolve-library-id** with `libraryName: "Prisma"`, `query: "How do I query with relations?"`.
2. Select the official Prisma library ID (e.g. `/prisma/prisma`).
3. Call **query-docs** with that `libraryId` and the query.
4. Return the Prisma Client pattern (e.g. `include` or `select`) with a short code snippet from the docs.
### Example: Supabase auth methods
1. Call **resolve-library-id** with `libraryName: "Supabase"`, `query: "What are the auth methods?"`.
2. Pick the Supabase docs library ID.
3. Call **query-docs**; summarize the auth methods and show minimal examples from the fetched docs.
## Best Practices
- **Be specific**: Use the user's full question as the query where possible for better relevance.
- **Version awareness**: When users mention versions, use version-specific library IDs from the resolve step when available.
- **Prefer official sources**: When multiple matches exist, prefer official or primary packages over community forks.
- **No sensitive data**: Redact API keys, passwords, tokens, and other secrets from any query sent to Context7. Treat the user's question as potentially containing secrets before passing it to resolve-library-id or query-docs.

View File

@ -1,7 +0,0 @@
interface:
display_name: "Documentation Lookup"
short_description: "Fetch up-to-date library docs via Context7 MCP"
brand_color: "#6366F1"
default_prompt: "Look up docs for a library or API"
policy:
allow_implicit_invocation: true

View File

@ -1,326 +0,0 @@
---
name: e2e-testing
description: Playwright E2E testing patterns, Page Object Model, configuration, CI/CD integration, artifact management, and flaky test strategies.
origin: ECC
---
# E2E Testing Patterns
Comprehensive Playwright patterns for building stable, fast, and maintainable E2E test suites.
## Test File Organization
```
tests/
├── e2e/
│ ├── auth/
│ │ ├── login.spec.ts
│ │ ├── logout.spec.ts
│ │ └── register.spec.ts
│ ├── features/
│ │ ├── browse.spec.ts
│ │ ├── search.spec.ts
│ │ └── create.spec.ts
│ └── api/
│ └── endpoints.spec.ts
├── fixtures/
│ ├── auth.ts
│ └── data.ts
└── playwright.config.ts
```
## Page Object Model (POM)
```typescript
import { Page, Locator } from '@playwright/test'
export class ItemsPage {
readonly page: Page
readonly searchInput: Locator
readonly itemCards: Locator
readonly createButton: Locator
constructor(page: Page) {
this.page = page
this.searchInput = page.locator('[data-testid="search-input"]')
this.itemCards = page.locator('[data-testid="item-card"]')
this.createButton = page.locator('[data-testid="create-btn"]')
}
async goto() {
await this.page.goto('/items')
await this.page.waitForLoadState('networkidle')
}
async search(query: string) {
await this.searchInput.fill(query)
await this.page.waitForResponse(resp => resp.url().includes('/api/search'))
await this.page.waitForLoadState('networkidle')
}
async getItemCount() {
return await this.itemCards.count()
}
}
```
## Test Structure
```typescript
import { test, expect } from '@playwright/test'
import { ItemsPage } from '../../pages/ItemsPage'
test.describe('Item Search', () => {
let itemsPage: ItemsPage
test.beforeEach(async ({ page }) => {
itemsPage = new ItemsPage(page)
await itemsPage.goto()
})
test('should search by keyword', async ({ page }) => {
await itemsPage.search('test')
const count = await itemsPage.getItemCount()
expect(count).toBeGreaterThan(0)
await expect(itemsPage.itemCards.first()).toContainText(/test/i)
await page.screenshot({ path: 'artifacts/search-results.png' })
})
test('should handle no results', async ({ page }) => {
await itemsPage.search('xyznonexistent123')
await expect(page.locator('[data-testid="no-results"]')).toBeVisible()
expect(await itemsPage.getItemCount()).toBe(0)
})
})
```
## Playwright Configuration
```typescript
import { defineConfig, devices } from '@playwright/test'
export default defineConfig({
testDir: './tests/e2e',
fullyParallel: true,
forbidOnly: !!process.env.CI,
retries: process.env.CI ? 2 : 0,
workers: process.env.CI ? 1 : undefined,
reporter: [
['html', { outputFolder: 'playwright-report' }],
['junit', { outputFile: 'playwright-results.xml' }],
['json', { outputFile: 'playwright-results.json' }]
],
use: {
baseURL: process.env.BASE_URL || 'http://localhost:3000',
trace: 'on-first-retry',
screenshot: 'only-on-failure',
video: 'retain-on-failure',
actionTimeout: 10000,
navigationTimeout: 30000,
},
projects: [
{ name: 'chromium', use: { ...devices['Desktop Chrome'] } },
{ name: 'firefox', use: { ...devices['Desktop Firefox'] } },
{ name: 'webkit', use: { ...devices['Desktop Safari'] } },
{ name: 'mobile-chrome', use: { ...devices['Pixel 5'] } },
],
webServer: {
command: 'npm run dev',
url: 'http://localhost:3000',
reuseExistingServer: !process.env.CI,
timeout: 120000,
},
})
```
## Flaky Test Patterns
### Quarantine
```typescript
test('flaky: complex search', async ({ page }) => {
test.fixme(true, 'Flaky - Issue #123')
// test code...
})
test('conditional skip', async ({ page }) => {
test.skip(process.env.CI, 'Flaky in CI - Issue #123')
// test code...
})
```
### Identify Flakiness
```bash
npx playwright test tests/search.spec.ts --repeat-each=10
npx playwright test tests/search.spec.ts --retries=3
```
### Common Causes & Fixes
**Race conditions:**
```typescript
// Bad: assumes element is ready
await page.click('[data-testid="button"]')
// Good: auto-wait locator
await page.locator('[data-testid="button"]').click()
```
**Network timing:**
```typescript
// Bad: arbitrary timeout
await page.waitForTimeout(5000)
// Good: wait for specific condition
await page.waitForResponse(resp => resp.url().includes('/api/data'))
```
**Animation timing:**
```typescript
// Bad: click during animation
await page.click('[data-testid="menu-item"]')
// Good: wait for stability
await page.locator('[data-testid="menu-item"]').waitFor({ state: 'visible' })
await page.waitForLoadState('networkidle')
await page.locator('[data-testid="menu-item"]').click()
```
## Artifact Management
### Screenshots
```typescript
await page.screenshot({ path: 'artifacts/after-login.png' })
await page.screenshot({ path: 'artifacts/full-page.png', fullPage: true })
await page.locator('[data-testid="chart"]').screenshot({ path: 'artifacts/chart.png' })
```
### Traces
```typescript
// Use the built-in tracing API on the test's browser context
await context.tracing.start({ screenshots: true, snapshots: true })
// ... test actions ...
await context.tracing.stop({ path: 'artifacts/trace.zip' })
```
### Video
```typescript
// In playwright.config.ts
use: {
video: 'retain-on-failure',
// Videos are saved under the test output directory ('test-results/' by default)
}
```
## CI/CD Integration
```yaml
# .github/workflows/e2e.yml
name: E2E Tests
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
- run: npm ci
- run: npx playwright install --with-deps
- run: npx playwright test
env:
BASE_URL: ${{ vars.STAGING_URL }}
- uses: actions/upload-artifact@v4
if: always()
with:
name: playwright-report
path: playwright-report/
retention-days: 30
```
## Test Report Template
```markdown
# E2E Test Report
**Date:** YYYY-MM-DD HH:MM
**Duration:** Xm Ys
**Status:** PASSING / FAILING
## Summary
- Total: X | Passed: Y (Z%) | Failed: A | Flaky: B | Skipped: C
## Failed Tests
### test-name
**File:** `tests/e2e/feature.spec.ts:45`
**Error:** Expected element to be visible
**Screenshot:** artifacts/failed.png
**Recommended Fix:** [description]
## Artifacts
- HTML Report: playwright-report/index.html
- Screenshots: artifacts/*.png
- Videos: artifacts/videos/*.webm
- Traces: artifacts/*.zip
```
## Wallet / Web3 Testing
```typescript
test('wallet connection', async ({ page, context }) => {
// Mock wallet provider
await context.addInitScript(() => {
window.ethereum = {
isMetaMask: true,
request: async ({ method }) => {
if (method === 'eth_requestAccounts')
return ['0x1234567890123456789012345678901234567890']
if (method === 'eth_chainId') return '0x1'
}
}
})
await page.goto('/')
await page.locator('[data-testid="connect-wallet"]').click()
await expect(page.locator('[data-testid="wallet-address"]')).toContainText('0x1234')
})
```
## Financial / Critical Flow Testing
```typescript
test('trade execution', async ({ page }) => {
// Skip on production — real money
test.skip(process.env.NODE_ENV === 'production', 'Skip on production')
await page.goto('/markets/test-market')
await page.locator('[data-testid="position-yes"]').click()
await page.locator('[data-testid="trade-amount"]').fill('1.0')
// Verify preview
const preview = page.locator('[data-testid="trade-preview"]')
await expect(preview).toContainText('1.0')
// Confirm and wait for blockchain
await page.locator('[data-testid="confirm-trade"]').click()
await page.waitForResponse(
resp => resp.url().includes('/api/trade') && resp.status() === 200,
{ timeout: 30000 }
)
await expect(page.locator('[data-testid="trade-success"]')).toBeVisible()
})
```

View File

@ -1,7 +0,0 @@
interface:
display_name: "E2E Testing"
short_description: "Playwright end-to-end testing"
brand_color: "#06B6D4"
default_prompt: "Generate Playwright E2E tests with Page Object Model"
policy:
allow_implicit_invocation: true

View File

@ -1,236 +0,0 @@
---
name: eval-harness
description: Formal evaluation framework for Claude Code sessions implementing eval-driven development (EDD) principles
origin: ECC
tools: Read, Write, Edit, Bash, Grep, Glob
---
# Eval Harness Skill
A formal evaluation framework for Claude Code sessions, implementing eval-driven development (EDD) principles.
## When to Activate
- Setting up eval-driven development (EDD) for AI-assisted workflows
- Defining pass/fail criteria for Claude Code task completion
- Measuring agent reliability with pass@k metrics
- Creating regression test suites for prompt or agent changes
- Benchmarking agent performance across model versions
## Philosophy
Eval-Driven Development treats evals as the "unit tests of AI development":
- Define expected behavior BEFORE implementation
- Run evals continuously during development
- Track regressions with each change
- Use pass@k metrics for reliability measurement
## Eval Types
### Capability Evals
Test if Claude can do something it couldn't before:
```markdown
[CAPABILITY EVAL: feature-name]
Task: Description of what Claude should accomplish
Success Criteria:
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] Criterion 3
Expected Output: Description of expected result
```
### Regression Evals
Ensure changes don't break existing functionality:
```markdown
[REGRESSION EVAL: feature-name]
Baseline: SHA or checkpoint name
Tests:
- existing-test-1: PASS/FAIL
- existing-test-2: PASS/FAIL
- existing-test-3: PASS/FAIL
Result: X/Y passed (previously Y/Y)
```
## Grader Types
### 1. Code-Based Grader
Deterministic checks using code:
```bash
# Check if file contains expected pattern
grep -q "export function handleAuth" src/auth.ts && echo "PASS" || echo "FAIL"
# Check if tests pass
npm test -- --testPathPattern="auth" && echo "PASS" || echo "FAIL"
# Check if build succeeds
npm run build && echo "PASS" || echo "FAIL"
```
### 2. Model-Based Grader
Use Claude to evaluate open-ended outputs:
```markdown
[MODEL GRADER PROMPT]
Evaluate the following code change:
1. Does it solve the stated problem?
2. Is it well-structured?
3. Are edge cases handled?
4. Is error handling appropriate?
Score: 1-5 (1=poor, 5=excellent)
Reasoning: [explanation]
```
### 3. Human Grader
Flag for manual review:
```markdown
[HUMAN REVIEW REQUIRED]
Change: Description of what changed
Reason: Why human review is needed
Risk Level: LOW/MEDIUM/HIGH
```
## Metrics
### pass@k
"At least one success in k attempts"
- pass@1: First attempt success rate
- pass@3: Success within 3 attempts
- Typical target: pass@3 > 90%
### pass^k
"All k trials succeed"
- Higher bar for reliability
- pass^3: 3 consecutive successes
- Use for critical paths
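As a concrete reading of these definitions, here is a small sketch that computes both metrics from repeated trial results (the data shape is an assumption for illustration, not something the harness defines):
```typescript
// Each inner array is one eval: the boolean outcome of each of its k attempts
type TrialResults = boolean[][]

// pass@k: fraction of evals with at least one successful attempt
function passAtK(results: TrialResults): number {
  return results.filter(attempts => attempts.some(Boolean)).length / results.length
}

// pass^k: fraction of evals where every attempt succeeded
function passAllK(results: TrialResults): number {
  return results.filter(attempts => attempts.every(Boolean)).length / results.length
}

// Example: 3 evals, k = 3 attempts each
const results: TrialResults = [
  [true, true, true],
  [false, true, true],
  [false, false, false],
]
passAtK(results)  // ~0.67 (at least one success in 2 of 3 evals)
passAllK(results) // ~0.33 (every attempt succeeded in 1 of 3 evals)
```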
## Eval Workflow
### 1. Define (Before Coding)
```markdown
## EVAL DEFINITION: feature-xyz
### Capability Evals
1. Can create new user account
2. Can validate email format
3. Can hash password securely
### Regression Evals
1. Existing login still works
2. Session management unchanged
3. Logout flow intact
### Success Metrics
- pass@3 > 90% for capability evals
- pass^3 = 100% for regression evals
```
### 2. Implement
Write code to pass the defined evals.
### 3. Evaluate
```bash
# Run capability evals
[Run each capability eval, record PASS/FAIL]
# Run regression evals
npm test -- --testPathPattern="existing"
# Generate report
```
### 4. Report
```markdown
EVAL REPORT: feature-xyz
========================
Capability Evals:
create-user: PASS (pass@1)
validate-email: PASS (pass@2)
hash-password: PASS (pass@1)
Overall: 3/3 passed
Regression Evals:
login-flow: PASS
session-mgmt: PASS
logout-flow: PASS
Overall: 3/3 passed
Metrics:
pass@1: 67% (2/3)
pass@3: 100% (3/3)
Status: READY FOR REVIEW
```
## Integration Patterns
### Pre-Implementation
```
/eval define feature-name
```
Creates eval definition file at `.claude/evals/feature-name.md`
### During Implementation
```
/eval check feature-name
```
Runs current evals and reports status
### Post-Implementation
```
/eval report feature-name
```
Generates full eval report
## Eval Storage
Store evals in project:
```
.claude/
evals/
feature-xyz.md # Eval definition
feature-xyz.log # Eval run history
baseline.json # Regression baselines
```
## Best Practices
1. **Define evals BEFORE coding** - Forces clear thinking about success criteria
2. **Run evals frequently** - Catch regressions early
3. **Track pass@k over time** - Monitor reliability trends
4. **Use code graders when possible** - Deterministic > probabilistic
5. **Human review for security** - Never fully automate security checks
6. **Keep evals fast** - Slow evals don't get run
7. **Version evals with code** - Evals are first-class artifacts
## Example: Adding Authentication
```markdown
## EVAL: add-authentication
### Phase 1: Define (10 min)
Capability Evals:
- [ ] User can register with email/password
- [ ] User can login with valid credentials
- [ ] Invalid credentials rejected with proper error
- [ ] Sessions persist across page reloads
- [ ] Logout clears session
Regression Evals:
- [ ] Public routes still accessible
- [ ] API responses unchanged
- [ ] Database schema compatible
### Phase 2: Implement (varies)
[Write code]
### Phase 3: Evaluate
Run: /eval check add-authentication
### Phase 4: Report
EVAL REPORT: add-authentication
==============================
Capability: 5/5 passed (pass@3: 100%)
Regression: 3/3 passed (pass^3: 100%)
Status: SHIP IT
```

View File

@ -1,7 +0,0 @@
interface:
display_name: "Eval Harness"
short_description: "Eval-driven development with pass/fail criteria"
brand_color: "#EC4899"
default_prompt: "Set up eval-driven development with pass/fail criteria"
policy:
allow_implicit_invocation: true

View File

@ -1,442 +0,0 @@
---
name: everything-claude-code-conventions
description: Development conventions and patterns for everything-claude-code. JavaScript project with conventional commits.
---
# Everything Claude Code Conventions
> Generated from [affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code) on 2026-03-20
## Overview
This skill teaches Claude the development patterns and conventions used in everything-claude-code.
## Tech Stack
- **Primary Language**: JavaScript
- **Architecture**: hybrid module organization
- **Test Location**: separate
## When to Use This Skill
Activate this skill when:
- Making changes to this repository
- Adding new features following established patterns
- Writing tests that match project conventions
- Creating commits with proper message format
## Commit Conventions
Follow these commit message conventions based on 500 analyzed commits.
### Commit Style: Conventional Commits
### Prefixes Used
- `fix`
- `test`
- `feat`
- `docs`
### Message Guidelines
- Average message length: ~65 characters
- Keep first line concise and descriptive
- Use imperative mood ("Add feature" not "Added feature")
*Commit message example*
```text
feat(rules): add C# language support
```
*Commit message example*
```text
chore(deps-dev): bump flatted (#675)
```
*Commit message example*
```text
fix: auto-detect ECC root from plugin cache when CLAUDE_PLUGIN_ROOT is unset (#547) (#691)
```
*Commit message example*
```text
docs: add Antigravity setup and usage guide (#552)
```
*Commit message example*
```text
merge: PR #529 — feat(skills): add documentation-lookup, bun-runtime, nextjs-turbopack; feat(agents): add rust-reviewer
```
*Commit message example*
```text
Revert "Add Kiro IDE support (.kiro/) (#548)"
```
*Commit message example*
```text
Add Kiro IDE support (.kiro/) (#548)
```
*Commit message example*
```text
feat: add block-no-verify hook for Claude Code and Cursor (#649)
```
## Architecture
### Project Structure: Single Package
This project uses **hybrid** module organization.
### Configuration Files
- `.github/workflows/ci.yml`
- `.github/workflows/maintenance.yml`
- `.github/workflows/monthly-metrics.yml`
- `.github/workflows/release.yml`
- `.github/workflows/reusable-release.yml`
- `.github/workflows/reusable-test.yml`
- `.github/workflows/reusable-validate.yml`
- `.opencode/package.json`
- `.opencode/tsconfig.json`
- `.prettierrc`
- `eslint.config.js`
- `package.json`
### Guidelines
- This project uses a hybrid organization
- Follow existing patterns when adding new code
## Code Style
### Language: JavaScript
### Naming Conventions
| Element | Convention |
|---------|------------|
| Files | camelCase |
| Functions | camelCase |
| Classes | PascalCase |
| Constants | SCREAMING_SNAKE_CASE |
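A tiny sketch showing these conventions together (the names are illustrative, not taken from the repository):
```typescript
// File: userSession.ts (camelCase file name)
const MAX_SESSION_AGE_MS = 1000 * 60 * 60 // SCREAMING_SNAKE_CASE constant

class UserSession { // PascalCase class
  constructor(readonly userId: string, readonly createdAt: number) {}
}

function isSessionExpired(session: UserSession): boolean { // camelCase function
  return Date.now() - session.createdAt > MAX_SESSION_AGE_MS
}
```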
### Import Style: Relative Imports
### Export Style: Mixed Style
*Preferred import style*
```typescript
// Use relative imports
import { Button } from '../components/Button'
import { useAuth } from './hooks/useAuth'
```
## Testing
### Test Framework
No specific test framework detected — use the repository's existing test patterns.
### File Pattern: `*.test.js`
### Test Types
- **Unit tests**: Test individual functions and components in isolation
- **Integration tests**: Test interactions between multiple components/services
### Coverage
This project has coverage reporting configured. Aim for 80%+ coverage.
## Error Handling
### Error Handling Style: Try-Catch Blocks
*Standard error handling pattern*
```typescript
try {
const result = await riskyOperation()
return result
} catch (error) {
console.error('Operation failed:', error)
throw new Error('User-friendly message')
}
```
## Common Workflows
These workflows were detected from analyzing commit patterns.
### Database Migration
Database schema changes with migration files
**Frequency**: ~2 times per month
**Steps**:
1. Create migration file
2. Update schema definitions
3. Generate/update types
**Files typically involved**:
- `**/schema.*`
- `migrations/*`
**Example commit sequence**:
```
feat: implement --with/--without selective install flags (#679)
fix: sync catalog counts with filesystem (27 agents, 113 skills, 58 commands) (#693)
feat(rules): add Rust language rules (rebased #660) (#686)
```
### Feature Development
Standard feature implementation workflow
**Frequency**: ~22 times per month
**Steps**:
1. Add feature implementation
2. Add tests for feature
3. Update documentation
**Files typically involved**:
- `manifests/*`
- `schemas/*`
- `**/*.test.*`
- `**/api/**`
**Example commit sequence**:
```
feat(skills): add documentation-lookup, bun-runtime, nextjs-turbopack; feat(agents): add rust-reviewer
docs(skills): align documentation-lookup with CONTRIBUTING template; add cross-harness (Codex/Cursor) skill copies
fix: address PR review — skill template (When to use, How it works, Examples), bun.lock, next build note, rust-reviewer CI note, doc-lookup privacy/uncertainty
```
### Add Language Rules
Adds a new programming language to the rules system, including coding style, hooks, patterns, security, and testing guidelines.
**Frequency**: ~2 times per month
**Steps**:
1. Create a new directory under rules/{language}/
2. Add coding-style.md, hooks.md, patterns.md, security.md, and testing.md files with language-specific content
3. Optionally reference or link to related skills
**Files typically involved**:
- `rules/*/coding-style.md`
- `rules/*/hooks.md`
- `rules/*/patterns.md`
- `rules/*/security.md`
- `rules/*/testing.md`
**Example commit sequence**:
```
Create a new directory under rules/{language}/
Add coding-style.md, hooks.md, patterns.md, security.md, and testing.md files with language-specific content
Optionally reference or link to related skills
```
### Add New Skill
Adds a new skill to the system, documenting its workflow, triggers, and usage, often with supporting scripts.
**Frequency**: ~4 times per month
**Steps**:
1. Create a new directory under skills/{skill-name}/
2. Add SKILL.md with documentation (When to Use, How It Works, Examples, etc.)
3. Optionally add scripts or supporting files under skills/{skill-name}/scripts/
4. Address review feedback and iterate on documentation
**Files typically involved**:
- `skills/*/SKILL.md`
- `skills/*/scripts/*.sh`
- `skills/*/scripts/*.js`
**Example commit sequence**:
```
Create a new directory under skills/{skill-name}/
Add SKILL.md with documentation (When to Use, How It Works, Examples, etc.)
Optionally add scripts or supporting files under skills/{skill-name}/scripts/
Address review feedback and iterate on documentation
```
### Add New Agent
Adds a new agent to the system for code review, build resolution, or other automated tasks.
**Frequency**: ~2 times per month
**Steps**:
1. Create a new agent markdown file under agents/{agent-name}.md
2. Register the agent in AGENTS.md
3. Optionally update README.md and docs/COMMAND-AGENT-MAP.md
**Files typically involved**:
- `agents/*.md`
- `AGENTS.md`
- `README.md`
- `docs/COMMAND-AGENT-MAP.md`
**Example commit sequence**:
```
Create a new agent markdown file under agents/{agent-name}.md
Register the agent in AGENTS.md
Optionally update README.md and docs/COMMAND-AGENT-MAP.md
```
### Add New Workflow Surface
Adds or updates a workflow entrypoint. Default to skills-first; only add a command shim when legacy slash compatibility is still required.
**Frequency**: ~1 time per month
**Steps**:
1. Create or update the canonical workflow under skills/{skill-name}/SKILL.md
2. Only if needed, add or update commands/{command-name}.md as a compatibility shim
**Files typically involved**:
- `skills/*/SKILL.md`
- `commands/*.md` (only when a legacy shim is intentionally retained)
**Example commit sequence**:
```
Create or update the canonical skill under skills/{skill-name}/SKILL.md
Only if needed, add or update commands/{command-name}.md as a compatibility shim
```
### Sync Catalog Counts
Synchronizes the documented counts of agents, skills, and commands in AGENTS.md and README.md with the actual repository state.
**Frequency**: ~3 times per month
**Steps**:
1. Update agent, skill, and command counts in AGENTS.md
2. Update the same counts in README.md (quick-start, comparison table, etc.)
3. Optionally update other documentation files
**Files typically involved**:
- `AGENTS.md`
- `README.md`
**Example commit sequence**:
```
Update agent, skill, and command counts in AGENTS.md
Update the same counts in README.md (quick-start, comparison table, etc.)
Optionally update other documentation files
```
### Add Cross Harness Skill Copies
Adds skill copies for different agent harnesses (e.g., Codex, Cursor, Antigravity) to ensure compatibility across platforms.
**Frequency**: ~2 times per month
**Steps**:
1. Copy or adapt SKILL.md to .agents/skills/{skill}/SKILL.md and/or .cursor/skills/{skill}/SKILL.md
2. Optionally add harness-specific openai.yaml or config files
3. Address review feedback to align with CONTRIBUTING template
**Files typically involved**:
- `.agents/skills/*/SKILL.md`
- `.cursor/skills/*/SKILL.md`
- `.agents/skills/*/agents/openai.yaml`
**Example commit sequence**:
```
Copy or adapt SKILL.md to .agents/skills/{skill}/SKILL.md and/or .cursor/skills/{skill}/SKILL.md
Optionally add harness-specific openai.yaml or config files
Address review feedback to align with CONTRIBUTING template
```
### Add Or Update Hook
Adds or updates git or bash hooks to enforce workflow, quality, or security policies.
**Frequency**: ~1 time per month
**Steps**:
1. Add or update hook scripts in hooks/ or scripts/hooks/
2. Register the hook in hooks/hooks.json or similar config
3. Optionally add or update tests in tests/hooks/
**Files typically involved**:
- `hooks/*.hook`
- `hooks/hooks.json`
- `scripts/hooks/*.js`
- `tests/hooks/*.test.js`
- `.cursor/hooks.json`
**Example commit sequence**:
```
Add or update hook scripts in hooks/ or scripts/hooks/
Register the hook in hooks/hooks.json or similar config
Optionally add or update tests in tests/hooks/
```
### Address Review Feedback
Addresses code review feedback by updating documentation, scripts, or configuration for clarity, correctness, or convention alignment.
**Frequency**: ~4 times per month
**Steps**:
1. Edit SKILL.md, agent, or command files to address reviewer comments
2. Update examples, headings, or configuration as requested
3. Iterate until all review feedback is resolved
**Files typically involved**:
- `skills/*/SKILL.md`
- `agents/*.md`
- `commands/*.md`
- `.agents/skills/*/SKILL.md`
- `.cursor/skills/*/SKILL.md`
**Example commit sequence**:
```
Edit SKILL.md, agent, or command files to address reviewer comments
Update examples, headings, or configuration as requested
Iterate until all review feedback is resolved
```
## Best Practices
Based on analysis of the codebase, follow these practices:
### Do
- Use conventional commit format (feat:, fix:, etc.)
- Follow *.test.js naming pattern
- Use camelCase for file names
- Prefer mixed exports
### Don't
- Don't write vague commit messages
- Don't skip tests for new features
- Don't deviate from established patterns without discussion
---
*This skill was auto-generated by [ECC Tools](https://ecc.tools). Review and customize as needed for your team.*

View File

@ -1,6 +0,0 @@
interface:
display_name: "Everything Claude Code"
short_description: "Repo-specific patterns and workflows for everything-claude-code"
default_prompt: "Use the everything-claude-code repo skill to follow existing architecture, testing, and workflow conventions."
policy:
allow_implicit_invocation: true

View File

@ -1,170 +0,0 @@
---
name: exa-search
description: Neural search via Exa MCP for web, code, and company research. Use when the user needs web search, code examples, company intel, people lookup, or AI-powered deep research with Exa's neural search engine.
origin: ECC
---
# Exa Search
Neural search for web content, code, companies, and people via the Exa MCP server.
## When to Activate
- User needs current web information or news
- Searching for code examples, API docs, or technical references
- Researching companies, competitors, or market players
- Finding professional profiles or people in a domain
- Running background research for any development task
- User says "search for", "look up", "find", or "what's the latest on"
## MCP Requirement
Exa MCP server must be configured. Add to `~/.claude.json`:
```json
"exa-web-search": {
"command": "npx",
"args": ["-y", "exa-mcp-server"],
"env": { "EXA_API_KEY": "YOUR_EXA_API_KEY_HERE" }
}
```
Get an API key at [exa.ai](https://exa.ai).
## Core Tools
### web_search_exa
General web search for current information, news, or facts.
```
web_search_exa(query: "latest AI developments 2026", numResults: 5)
```
**Parameters:**
| Param | Type | Default | Notes |
|-------|------|---------|-------|
| `query` | string | required | Search query |
| `numResults` | number | 8 | Number of results |
### web_search_advanced_exa
Filtered search with domain and date constraints.
```
web_search_advanced_exa(
query: "React Server Components best practices",
numResults: 5,
includeDomains: ["github.com", "react.dev"],
startPublishedDate: "2025-01-01"
)
```
**Parameters:**
| Param | Type | Default | Notes |
|-------|------|---------|-------|
| `query` | string | required | Search query |
| `numResults` | number | 8 | Number of results |
| `includeDomains` | string[] | none | Limit to specific domains |
| `excludeDomains` | string[] | none | Exclude specific domains |
| `startPublishedDate` | string | none | ISO date filter (start) |
| `endPublishedDate` | string | none | ISO date filter (end) |
### get_code_context_exa
Find code examples and documentation from GitHub, Stack Overflow, and docs sites.
```
get_code_context_exa(query: "Python asyncio patterns", tokensNum: 3000)
```
**Parameters:**
| Param | Type | Default | Notes |
|-------|------|---------|-------|
| `query` | string | required | Code or API search query |
| `tokensNum` | number | 5000 | Content tokens (1000-50000) |
### company_research_exa
Research companies for business intelligence and news.
```
company_research_exa(companyName: "Anthropic", numResults: 5)
```
**Parameters:**
| Param | Type | Default | Notes |
|-------|------|---------|-------|
| `companyName` | string | required | Company name |
| `numResults` | number | 5 | Number of results |
### people_search_exa
Find professional profiles and bios.
```
people_search_exa(query: "AI safety researchers at Anthropic", numResults: 5)
```
### crawling_exa
Extract full page content from a URL.
```
crawling_exa(url: "https://example.com/article", tokensNum: 5000)
```
**Parameters:**
| Param | Type | Default | Notes |
|-------|------|---------|-------|
| `url` | string | required | URL to extract |
| `tokensNum` | number | 5000 | Content tokens |
### deep_researcher_start / deep_researcher_check
Start an AI research agent that runs asynchronously.
```
# Start research
deep_researcher_start(query: "comprehensive analysis of AI code editors in 2026")
# Check status (returns results when complete)
deep_researcher_check(researchId: "<id from start>")
```
## Usage Patterns
### Quick Lookup
```
web_search_exa(query: "Node.js 22 new features", numResults: 3)
```
### Code Research
```
get_code_context_exa(query: "Rust error handling patterns Result type", tokensNum: 3000)
```
### Company Due Diligence
```
company_research_exa(companyName: "Vercel", numResults: 5)
web_search_advanced_exa(query: "Vercel funding valuation 2026", numResults: 3)
```
### Technical Deep Dive
```
# Start async research
deep_researcher_start(query: "WebAssembly component model status and adoption")
# ... do other work ...
deep_researcher_check(researchId: "<id>")
```
## Tips
- Use `web_search_exa` for broad queries, `web_search_advanced_exa` for filtered results
- Lower `tokensNum` (1000-2000) for focused code snippets, higher (5000+) for comprehensive context
- Combine `company_research_exa` with `web_search_advanced_exa` for thorough company analysis
- Use `crawling_exa` to get full content from specific URLs found in search results
- `deep_researcher_start` is best for comprehensive topics that benefit from AI synthesis
## Related Skills
- `deep-research` — Full research workflow using firecrawl + exa together
- `market-research` — Business-oriented research with decision frameworks

View File

@ -1,7 +0,0 @@
interface:
display_name: "Exa Search"
short_description: "Neural search via Exa MCP for web, code, and companies"
brand_color: "#8B5CF6"
default_prompt: "Search using Exa MCP tools for web content, code, or company research"
policy:
allow_implicit_invocation: true

View File

@ -1,277 +0,0 @@
---
name: fal-ai-media
description: Unified media generation via fal.ai MCP — image, video, and audio. Covers text-to-image (Nano Banana), text/image-to-video (Seedance, Kling, Veo 3), text-to-speech (CSM-1B), and video-to-audio (ThinkSound). Use when the user wants to generate images, videos, or audio with AI.
origin: ECC
---
# fal.ai Media Generation
Generate images, videos, and audio using fal.ai models via MCP.
## When to Activate
- User wants to generate images from text prompts
- Creating videos from text or images
- Generating speech, music, or sound effects
- Any media generation task
- User says "generate image", "create video", "text to speech", "make a thumbnail", or similar
## MCP Requirement
fal.ai MCP server must be configured. Add to `~/.claude.json`:
```json
"fal-ai": {
"command": "npx",
"args": ["-y", "fal-ai-mcp-server"],
"env": { "FAL_KEY": "YOUR_FAL_KEY_HERE" }
}
```
Get an API key at [fal.ai](https://fal.ai).
## MCP Tools
The fal.ai MCP provides these tools:
- `search` — Find available models by keyword
- `find` — Get model details and parameters
- `generate` — Run a model with parameters
- `result` — Check async generation status
- `status` — Check job status
- `cancel` — Cancel a running job
- `estimate_cost` — Estimate generation cost
- `models` — List popular models
- `upload` — Upload files for use as inputs
---
## Image Generation
### Nano Banana 2 (Fast)
Best for: quick iterations, drafts, text-to-image, image editing.
```
generate(
model_name: "fal-ai/nano-banana-2",
input: {
"prompt": "a futuristic cityscape at sunset, cyberpunk style",
"image_size": "landscape_16_9",
"num_images": 1,
"seed": 42
}
)
```
### Nano Banana Pro (High Fidelity)
Best for: production images, realism, typography, detailed prompts.
```
generate(
model_name: "fal-ai/nano-banana-pro",
input: {
"prompt": "professional product photo of wireless headphones on marble surface, studio lighting",
"image_size": "square",
"num_images": 1,
"guidance_scale": 7.5
}
)
```
### Common Image Parameters
| Param | Type | Options | Notes |
|-------|------|---------|-------|
| `prompt` | string | required | Describe what you want |
| `image_size` | string | `square`, `portrait_4_3`, `landscape_16_9`, `portrait_16_9`, `landscape_4_3` | Aspect ratio |
| `num_images` | number | 1-4 | How many to generate |
| `seed` | number | any integer | Reproducibility |
| `guidance_scale` | number | 1-20 | How closely to follow the prompt (higher = more literal) |
### Image Editing
Use Nano Banana 2 with an input image for inpainting, outpainting, or style transfer:
```
# First upload the source image
upload(file_path: "/path/to/image.png")
# Then generate with image input
generate(
model_name: "fal-ai/nano-banana-2",
input: {
"prompt": "same scene but in watercolor style",
"image_url": "<uploaded_url>",
"image_size": "landscape_16_9"
}
)
```
---
## Video Generation
### Seedance 1.0 Pro (ByteDance)
Best for: text-to-video, image-to-video with high motion quality.
```
generate(
model_name: "fal-ai/seedance-1-0-pro",
input: {
"prompt": "a drone flyover of a mountain lake at golden hour, cinematic",
"duration": "5s",
"aspect_ratio": "16:9",
"seed": 42
}
)
```
### Kling Video v3 Pro
Best for: text/image-to-video with native audio generation.
```
generate(
model_name: "fal-ai/kling-video/v3/pro",
input: {
"prompt": "ocean waves crashing on a rocky coast, dramatic clouds",
"duration": "5s",
"aspect_ratio": "16:9"
}
)
```
### Veo 3 (Google DeepMind)
Best for: video with generated sound, high visual quality.
```
generate(
model_name: "fal-ai/veo-3",
input: {
"prompt": "a bustling Tokyo street market at night, neon signs, crowd noise",
"aspect_ratio": "16:9"
}
)
```
### Image-to-Video
Start from an existing image:
```
generate(
model_name: "fal-ai/seedance-1-0-pro",
input: {
"prompt": "camera slowly zooms out, gentle wind moves the trees",
"image_url": "<uploaded_image_url>",
"duration": "5s"
}
)
```
### Video Parameters
| Param | Type | Options | Notes |
|-------|------|---------|-------|
| `prompt` | string | required | Describe the video |
| `duration` | string | `"5s"`, `"10s"` | Video length |
| `aspect_ratio` | string | `"16:9"`, `"9:16"`, `"1:1"` | Frame ratio |
| `seed` | number | any integer | Reproducibility |
| `image_url` | string | URL | Source image for image-to-video |
---
## Audio Generation
### CSM-1B (Conversational Speech)
Text-to-speech with natural, conversational quality.
```
generate(
model_name: "fal-ai/csm-1b",
input: {
"text": "Hello, welcome to the demo. Let me show you how this works.",
"speaker_id": 0
}
)
```
### ThinkSound (Video-to-Audio)
Generate matching audio from video content.
```
generate(
model_name: "fal-ai/thinksound",
input: {
"video_url": "<video_url>",
"prompt": "ambient forest sounds with birds chirping"
}
)
```
### ElevenLabs (via API, no MCP)
For professional voice synthesis, use ElevenLabs directly:
```python
import os
import requests
resp = requests.post(
"https://api.elevenlabs.io/v1/text-to-speech/<voice_id>",
headers={
"xi-api-key": os.environ["ELEVENLABS_API_KEY"],
"Content-Type": "application/json"
},
json={
"text": "Your text here",
"model_id": "eleven_turbo_v2_5",
"voice_settings": {"stability": 0.5, "similarity_boost": 0.75}
}
)
with open("output.mp3", "wb") as f:
f.write(resp.content)
```
### VideoDB Generative Audio
If VideoDB is configured, use its generative audio:
```python
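# Assumes `coll` is an existing VideoDB collection object obtained from the VideoDB SDK connection.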
# Voice generation
audio = coll.generate_voice(text="Your narration here", voice="alloy")
# Music generation
music = coll.generate_music(prompt="upbeat electronic background music", duration=30)
# Sound effects
sfx = coll.generate_sound_effect(prompt="thunder crack followed by rain")
```
---
## Cost Estimation
Before generating, check estimated cost:
```
estimate_cost(model_name: "fal-ai/nano-banana-pro", input: {...})
```
## Model Discovery
Find models for specific tasks:
```
search(query: "text to video")
find(model_name: "fal-ai/seedance-1-0-pro")
models()
```
## Tips
- Use `seed` for reproducible results when iterating on prompts
- Start with lower-cost models (Nano Banana 2) for prompt iteration, then switch to Pro for finals
- For video, keep prompts descriptive but concise — focus on motion and scene
- Image-to-video produces more controlled results than pure text-to-video
- Check `estimate_cost` before running expensive video generations
## Related Skills
- `videodb` — Video processing, editing, and streaming
- `video-editing` — AI-powered video editing workflows
- `content-engine` — Content creation for social platforms

View File

@ -1,7 +0,0 @@
interface:
display_name: "fal.ai Media"
short_description: "AI image, video, and audio generation via fal.ai"
brand_color: "#F43F5E"
default_prompt: "Generate images, videos, or audio using fal.ai models"
policy:
allow_implicit_invocation: true

View File

@ -1,145 +0,0 @@
---
name: frontend-design
description: Create distinctive, production-grade frontend interfaces with high design quality. Use when the user asks to build web components, pages, or applications and the visual direction matters as much as the code quality.
origin: ECC
---
# Frontend Design
Use this when the task is not just "make it work" but "make it look designed."
This skill is for product pages, dashboards, app shells, components, or visual systems that need a clear point of view instead of generic AI-looking UI.
## When To Use
- building a landing page, dashboard, or app surface from scratch
- upgrading a bland interface into something intentional and memorable
- translating a product concept into a concrete visual direction
- implementing a frontend where typography, composition, and motion matter
## Core Principle
Pick a direction and commit to it.
Safe-average UI is usually worse than a strong, coherent aesthetic with a few bold choices.
## Design Workflow
### 1. Frame the interface first
Before coding, settle:
- purpose
- audience
- emotional tone
- visual direction
- one thing the user should remember
Possible directions:
- brutally minimal
- editorial
- industrial
- luxury
- playful
- geometric
- retro-futurist
- soft and organic
- maximalist
Do not mix directions casually. Choose one and execute it cleanly.
### 2. Build the visual system
Define:
- type hierarchy
- color variables
- spacing rhythm
- layout logic
- motion rules
- surface / border / shadow treatment
Use CSS variables or the project's token system so the interface stays coherent as it grows.
### 3. Compose with intention
Prefer:
- asymmetry when it sharpens hierarchy
- overlap when it creates depth
- strong whitespace when it clarifies focus
- dense layouts only when the product benefits from density
Avoid defaulting to a symmetrical card grid unless it is clearly the right fit.
### 4. Make motion meaningful
Use animation to:
- reveal hierarchy
- stage information
- reinforce user action
- create one or two memorable moments
Do not scatter generic micro-interactions everywhere. One well-directed load sequence is usually stronger than twenty random hover effects.
## Strong Defaults
### Typography
- pick fonts with character
- pair a distinctive display face with a readable body face when appropriate
- avoid generic defaults when the page is design-led
### Color
- commit to a clear palette
- one dominant field with selective accents usually works better than evenly weighted rainbow palettes
- avoid cliché purple-gradient-on-white unless the product genuinely calls for it
### Background
Use atmosphere:
- gradients
- meshes
- textures
- subtle noise
- patterns
- layered transparency
Flat empty backgrounds are rarely the best answer for a product-facing page.
### Layout
- break the grid when the composition benefits from it
- use diagonals, offsets, and grouping intentionally
- keep reading flow obvious even when the layout is unconventional
## Anti-Patterns
Never default to:
- interchangeable SaaS hero sections
- generic card piles with no hierarchy
- random accent colors without a system
- placeholder-feeling typography
- motion that exists only because animation was easy to add
## Execution Rules
- preserve the established design system when working inside an existing product
- match technical complexity to the visual idea
- keep accessibility and responsiveness intact
- frontends should feel deliberate on desktop and mobile
## Quality Gate
Before delivering:
- the interface has a clear visual point of view
- typography and spacing feel intentional
- color and motion support the product instead of decorating it randomly
- the result does not read like generic AI UI
- the implementation is production-grade, not just visually interesting

View File

@ -1,642 +0,0 @@
---
name: frontend-patterns
description: Frontend development patterns for React, Next.js, state management, performance optimization, and UI best practices.
origin: ECC
---
# Frontend Development Patterns
Modern frontend patterns for React, Next.js, and performant user interfaces.
## When to Activate
- Building React components (composition, props, rendering)
- Managing state (useState, useReducer, Zustand, Context)
- Implementing data fetching (SWR, React Query, server components)
- Optimizing performance (memoization, virtualization, code splitting)
- Working with forms (validation, controlled inputs, Zod schemas)
- Handling client-side routing and navigation
- Building accessible, responsive UI patterns
## Component Patterns
### Composition Over Inheritance
```typescript
// PASS: Component composition
interface CardProps {
children: React.ReactNode
variant?: 'default' | 'outlined'
}
export function Card({ children, variant = 'default' }: CardProps) {
return <div className={`card card-${variant}`}>{children}</div>
}
export function CardHeader({ children }: { children: React.ReactNode }) {
return <div className="card-header">{children}</div>
}
export function CardBody({ children }: { children: React.ReactNode }) {
return <div className="card-body">{children}</div>
}
// Usage
<Card>
<CardHeader>Title</CardHeader>
<CardBody>Content</CardBody>
</Card>
```
### Compound Components
```typescript
interface TabsContextValue {
activeTab: string
setActiveTab: (tab: string) => void
}
const TabsContext = createContext<TabsContextValue | undefined>(undefined)
export function Tabs({ children, defaultTab }: {
children: React.ReactNode
defaultTab: string
}) {
const [activeTab, setActiveTab] = useState(defaultTab)
return (
<TabsContext.Provider value={{ activeTab, setActiveTab }}>
{children}
</TabsContext.Provider>
)
}
export function TabList({ children }: { children: React.ReactNode }) {
return <div className="tab-list">{children}</div>
}
export function Tab({ id, children }: { id: string, children: React.ReactNode }) {
const context = useContext(TabsContext)
if (!context) throw new Error('Tab must be used within Tabs')
return (
<button
className={context.activeTab === id ? 'active' : ''}
onClick={() => context.setActiveTab(id)}
>
{children}
</button>
)
}
// Usage
<Tabs defaultTab="overview">
<TabList>
<Tab id="overview">Overview</Tab>
<Tab id="details">Details</Tab>
</TabList>
</Tabs>
```
### Render Props Pattern
```typescript
interface DataLoaderProps<T> {
url: string
children: (data: T | null, loading: boolean, error: Error | null) => React.ReactNode
}
export function DataLoader<T>({ url, children }: DataLoaderProps<T>) {
const [data, setData] = useState<T | null>(null)
const [loading, setLoading] = useState(true)
const [error, setError] = useState<Error | null>(null)
useEffect(() => {
fetch(url)
.then(res => res.json())
.then(setData)
.catch(setError)
.finally(() => setLoading(false))
}, [url])
return <>{children(data, loading, error)}</>
}
// Usage
<DataLoader<Market[]> url="/api/markets">
{(markets, loading, error) => {
if (loading) return <Spinner />
if (error) return <Error error={error} />
return <MarketList markets={markets!} />
}}
</DataLoader>
```
## Custom Hooks Patterns
### State Management Hook
```typescript
export function useToggle(initialValue = false): [boolean, () => void] {
const [value, setValue] = useState(initialValue)
const toggle = useCallback(() => {
setValue(v => !v)
}, [])
return [value, toggle]
}
// Usage
const [isOpen, toggleOpen] = useToggle()
```
### Async Data Fetching Hook
```typescript
interface UseQueryOptions<T> {
onSuccess?: (data: T) => void
onError?: (error: Error) => void
enabled?: boolean
}
export function useQuery<T>(
key: string,
fetcher: () => Promise<T>,
options?: UseQueryOptions<T>
) {
  const [data, setData] = useState<T | null>(null)
  const [error, setError] = useState<Error | null>(null)
  const [loading, setLoading] = useState(false)

  // Keep the latest fetcher/options in refs so inline arguments do not
  // change refetch's identity on every render (which would re-trigger
  // the effect below in an infinite loop).
  const fetcherRef = useRef(fetcher)
  const optionsRef = useRef(options)
  useEffect(() => {
    fetcherRef.current = fetcher
    optionsRef.current = options
  })

  const refetch = useCallback(async () => {
    setLoading(true)
    setError(null)
    try {
      const result = await fetcherRef.current()
      setData(result)
      optionsRef.current?.onSuccess?.(result)
    } catch (err) {
      const error = err as Error
      setError(error)
      optionsRef.current?.onError?.(error)
    } finally {
      setLoading(false)
    }
  }, [])

  const enabled = options?.enabled !== false
  useEffect(() => {
    if (enabled) {
      refetch()
    }
  }, [key, enabled, refetch])

  return { data, error, loading, refetch }
}
// Usage
const { data: markets, loading, error, refetch } = useQuery(
'markets',
() => fetch('/api/markets').then(r => r.json()),
{
onSuccess: data => console.log('Fetched', data.length, 'markets'),
onError: err => console.error('Failed:', err)
}
)
```
### Debounce Hook
```typescript
export function useDebounce<T>(value: T, delay: number): T {
const [debouncedValue, setDebouncedValue] = useState<T>(value)
useEffect(() => {
const handler = setTimeout(() => {
setDebouncedValue(value)
}, delay)
return () => clearTimeout(handler)
}, [value, delay])
return debouncedValue
}
// Usage
const [searchQuery, setSearchQuery] = useState('')
const debouncedQuery = useDebounce(searchQuery, 500)
useEffect(() => {
if (debouncedQuery) {
performSearch(debouncedQuery)
}
}, [debouncedQuery])
```
## State Management Patterns
### Context + Reducer Pattern
```typescript
interface State {
markets: Market[]
selectedMarket: Market | null
loading: boolean
}
type Action =
| { type: 'SET_MARKETS'; payload: Market[] }
| { type: 'SELECT_MARKET'; payload: Market }
| { type: 'SET_LOADING'; payload: boolean }
function reducer(state: State, action: Action): State {
switch (action.type) {
case 'SET_MARKETS':
return { ...state, markets: action.payload }
case 'SELECT_MARKET':
return { ...state, selectedMarket: action.payload }
case 'SET_LOADING':
return { ...state, loading: action.payload }
default:
return state
}
}
const MarketContext = createContext<{
state: State
dispatch: Dispatch<Action>
} | undefined>(undefined)
export function MarketProvider({ children }: { children: React.ReactNode }) {
const [state, dispatch] = useReducer(reducer, {
markets: [],
selectedMarket: null,
loading: false
})
return (
<MarketContext.Provider value={{ state, dispatch }}>
{children}
</MarketContext.Provider>
)
}
export function useMarkets() {
const context = useContext(MarketContext)
if (!context) throw new Error('useMarkets must be used within MarketProvider')
return context
}
```
## Performance Optimization
### Memoization
```typescript
// PASS: useMemo for expensive computations
const sortedMarkets = useMemo(() => {
  // Copy before sorting: Array.prototype.sort mutates, and the source array may be props or state.
  return [...markets].sort((a, b) => b.volume - a.volume)
}, [markets])
// PASS: useCallback for functions passed to children
const handleSearch = useCallback((query: string) => {
setSearchQuery(query)
}, [])
// PASS: React.memo for pure components
export const MarketCard = React.memo<MarketCardProps>(({ market }) => {
return (
<div className="market-card">
<h3>{market.name}</h3>
<p>{market.description}</p>
</div>
)
})
```
### Code Splitting & Lazy Loading
```typescript
import { lazy, Suspense } from 'react'
// PASS: Lazy load heavy components
const HeavyChart = lazy(() => import('./HeavyChart'))
const ThreeJsBackground = lazy(() => import('./ThreeJsBackground'))
export function Dashboard() {
return (
<div>
<Suspense fallback={<ChartSkeleton />}>
<HeavyChart data={data} />
</Suspense>
<Suspense fallback={null}>
<ThreeJsBackground />
</Suspense>
</div>
)
}
```
### Virtualization for Long Lists
```typescript
import { useVirtualizer } from '@tanstack/react-virtual'
export function VirtualMarketList({ markets }: { markets: Market[] }) {
const parentRef = useRef<HTMLDivElement>(null)
const virtualizer = useVirtualizer({
count: markets.length,
getScrollElement: () => parentRef.current,
estimateSize: () => 100, // Estimated row height
overscan: 5 // Extra items to render
})
return (
<div ref={parentRef} style={{ height: '600px', overflow: 'auto' }}>
<div
style={{
height: `${virtualizer.getTotalSize()}px`,
position: 'relative'
}}
>
{virtualizer.getVirtualItems().map(virtualRow => (
<div
key={virtualRow.index}
style={{
position: 'absolute',
top: 0,
left: 0,
width: '100%',
height: `${virtualRow.size}px`,
transform: `translateY(${virtualRow.start}px)`
}}
>
<MarketCard market={markets[virtualRow.index]} />
</div>
))}
</div>
</div>
)
}
```
## Form Handling Patterns
### Controlled Form with Validation
```typescript
interface FormData {
name: string
description: string
endDate: string
}
interface FormErrors {
name?: string
description?: string
endDate?: string
}
export function CreateMarketForm() {
const [formData, setFormData] = useState<FormData>({
name: '',
description: '',
endDate: ''
})
const [errors, setErrors] = useState<FormErrors>({})
const validate = (): boolean => {
const newErrors: FormErrors = {}
if (!formData.name.trim()) {
newErrors.name = 'Name is required'
} else if (formData.name.length > 200) {
newErrors.name = 'Name must be under 200 characters'
}
if (!formData.description.trim()) {
newErrors.description = 'Description is required'
}
if (!formData.endDate) {
newErrors.endDate = 'End date is required'
}
setErrors(newErrors)
return Object.keys(newErrors).length === 0
}
const handleSubmit = async (e: React.FormEvent) => {
e.preventDefault()
if (!validate()) return
try {
await createMarket(formData)
// Success handling
} catch (error) {
// Error handling
}
}
return (
<form onSubmit={handleSubmit}>
<input
value={formData.name}
onChange={e => setFormData(prev => ({ ...prev, name: e.target.value }))}
placeholder="Market name"
/>
{errors.name && <span className="error">{errors.name}</span>}
{/* Other fields */}
<button type="submit">Create Market</button>
</form>
)
}
```
## Error Boundary Pattern
```typescript
interface ErrorBoundaryState {
hasError: boolean
error: Error | null
}
export class ErrorBoundary extends React.Component<
{ children: React.ReactNode },
ErrorBoundaryState
> {
state: ErrorBoundaryState = {
hasError: false,
error: null
}
static getDerivedStateFromError(error: Error): ErrorBoundaryState {
return { hasError: true, error }
}
componentDidCatch(error: Error, errorInfo: React.ErrorInfo) {
console.error('Error boundary caught:', error, errorInfo)
}
render() {
if (this.state.hasError) {
return (
<div className="error-fallback">
<h2>Something went wrong</h2>
<p>{this.state.error?.message}</p>
<button onClick={() => this.setState({ hasError: false })}>
Try again
</button>
</div>
)
}
return this.props.children
}
}
// Usage
<ErrorBoundary>
<App />
</ErrorBoundary>
```
## Animation Patterns
### Framer Motion Animations
```typescript
import { motion, AnimatePresence } from 'framer-motion'
// PASS: List animations
export function AnimatedMarketList({ markets }: { markets: Market[] }) {
return (
<AnimatePresence>
{markets.map(market => (
<motion.div
key={market.id}
initial={{ opacity: 0, y: 20 }}
animate={{ opacity: 1, y: 0 }}
exit={{ opacity: 0, y: -20 }}
transition={{ duration: 0.3 }}
>
<MarketCard market={market} />
</motion.div>
))}
</AnimatePresence>
)
}
// PASS: Modal animations
export function Modal({ isOpen, onClose, children }: ModalProps) {
return (
<AnimatePresence>
{isOpen && (
<>
<motion.div
className="modal-overlay"
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
exit={{ opacity: 0 }}
onClick={onClose}
/>
<motion.div
className="modal-content"
initial={{ opacity: 0, scale: 0.9, y: 20 }}
animate={{ opacity: 1, scale: 1, y: 0 }}
exit={{ opacity: 0, scale: 0.9, y: 20 }}
>
{children}
</motion.div>
</>
)}
</AnimatePresence>
)
}
```
## Accessibility Patterns
### Keyboard Navigation
```typescript
export function Dropdown({ options, onSelect }: DropdownProps) {
const [isOpen, setIsOpen] = useState(false)
const [activeIndex, setActiveIndex] = useState(0)
const handleKeyDown = (e: React.KeyboardEvent) => {
switch (e.key) {
case 'ArrowDown':
e.preventDefault()
setActiveIndex(i => Math.min(i + 1, options.length - 1))
break
case 'ArrowUp':
e.preventDefault()
setActiveIndex(i => Math.max(i - 1, 0))
break
case 'Enter':
e.preventDefault()
onSelect(options[activeIndex])
setIsOpen(false)
break
case 'Escape':
setIsOpen(false)
break
}
}
return (
<div
role="combobox"
aria-expanded={isOpen}
aria-haspopup="listbox"
onKeyDown={handleKeyDown}
>
{/* Dropdown implementation */}
</div>
)
}
```
### Focus Management
```typescript
export function Modal({ isOpen, onClose, children }: ModalProps) {
const modalRef = useRef<HTMLDivElement>(null)
const previousFocusRef = useRef<HTMLElement | null>(null)
useEffect(() => {
if (isOpen) {
// Save currently focused element
previousFocusRef.current = document.activeElement as HTMLElement
// Focus modal
modalRef.current?.focus()
} else {
// Restore focus when closing
previousFocusRef.current?.focus()
}
}, [isOpen])
return isOpen ? (
<div
ref={modalRef}
role="dialog"
aria-modal="true"
tabIndex={-1}
onKeyDown={e => e.key === 'Escape' && onClose()}
>
{children}
</div>
) : null
}
```
**Remember**: Modern frontend patterns enable maintainable, performant user interfaces. Choose patterns that fit your project complexity.

View File

@ -1,7 +0,0 @@
interface:
display_name: "Frontend Patterns"
short_description: "React and Next.js patterns and best practices"
brand_color: "#8B5CF6"
default_prompt: "Apply React/Next.js patterns and best practices"
policy:
allow_implicit_invocation: true

View File

@ -1,184 +0,0 @@
---
name: frontend-slides
description: Create stunning, animation-rich HTML presentations from scratch or by converting PowerPoint files. Use when the user wants to build a presentation, convert a PPT/PPTX to web, or create slides for a talk/pitch. Helps non-designers discover their aesthetic through visual exploration rather than abstract choices.
origin: ECC
---
# Frontend Slides
Create zero-dependency, animation-rich HTML presentations that run entirely in the browser.
Inspired by the visual exploration approach showcased in work by [zarazhangrui](https://github.com/zarazhangrui).
## When to Activate
- Creating a talk deck, pitch deck, workshop deck, or internal presentation
- Converting `.ppt` or `.pptx` slides into an HTML presentation
- Improving an existing HTML presentation's layout, motion, or typography
- Exploring presentation styles with a user who does not know their design preference yet
## Non-Negotiables
1. **Zero dependencies**: default to one self-contained HTML file with inline CSS and JS.
2. **Viewport fit is mandatory**: every slide must fit inside one viewport with no internal scrolling.
3. **Show, don't tell**: use visual previews instead of abstract style questionnaires.
4. **Distinctive design**: avoid generic purple-gradient, Inter-on-white, template-looking decks.
5. **Production quality**: keep code commented, accessible, responsive, and performant.
Before generating, read `STYLE_PRESETS.md` for the viewport-safe CSS base, density limits, preset catalog, and CSS gotchas.
## Workflow
### 1. Detect Mode
Choose one path:
- **New presentation**: user has a topic, notes, or full draft
- **PPT conversion**: user has `.ppt` or `.pptx`
- **Enhancement**: user already has HTML slides and wants improvements
### 2. Discover Content
Ask only the minimum needed:
- purpose: pitch, teaching, conference talk, internal update
- length: short (5-10), medium (10-20), long (20+)
- content state: finished copy, rough notes, topic only
If the user has content, ask them to paste it before styling.
### 3. Discover Style
Default to visual exploration.
If the user already knows the desired preset, skip previews and use it directly.
Otherwise:
1. Ask what feeling the deck should create: impressed, energized, focused, inspired.
2. Generate **3 single-slide preview files** in `.ecc-design/slide-previews/`.
3. Each preview must be self-contained, show typography/color/motion clearly, and stay under roughly 100 lines of slide content.
4. Ask the user which preview to keep or what elements to mix.
Use the preset guide in `STYLE_PRESETS.md` when mapping mood to style.
### 4. Build the Presentation
Output either:
- `presentation.html`
- `[presentation-name].html`
Use an `assets/` folder only when the deck contains extracted or user-supplied images.
Required structure:
- semantic slide sections
- a viewport-safe CSS base from `STYLE_PRESETS.md`
- CSS custom properties for theme values
- a presentation controller class for keyboard, wheel, and touch navigation
- Intersection Observer for reveal animations
- reduced-motion support
### 5. Enforce Viewport Fit
Treat this as a hard gate.
Rules:
- every `.slide` must use `height: 100vh; height: 100dvh; overflow: hidden;`
- all type and spacing must scale with `clamp()`
- when content does not fit, split into multiple slides
- never solve overflow by shrinking text below readable sizes
- never allow scrollbars inside a slide
Use the density limits and mandatory CSS block in `STYLE_PRESETS.md`.
### 6. Validate
Check the finished deck at these sizes:
- 1920x1080
- 1280x720
- 768x1024
- 375x667
- 667x375
If browser automation is available, use it to verify no slide overflows and that keyboard navigation works.
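A minimal sketch of that overflow check, assuming Playwright is available and the deck lives at `presentation.html` (both are assumptions, not requirements of this skill):
```typescript
// check-slides.ts — hypothetical helper; adjust the file path and sizes as needed.
import { chromium } from "playwright";

const SIZES = [
  { width: 1920, height: 1080 },
  { width: 1280, height: 720 },
  { width: 768, height: 1024 },
  { width: 375, height: 667 },
  { width: 667, height: 375 },
];

async function main() {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(`file://${process.cwd()}/presentation.html`);

  for (const size of SIZES) {
    await page.setViewportSize(size);
    // A slide overflows when its content is taller or wider than the slide box.
    const overflowing = await page.$$eval(".slide", (slides) =>
      slides
        .map((el, i) => ({ i, h: el.scrollHeight - el.clientHeight, w: el.scrollWidth - el.clientWidth }))
        .filter((s) => s.h > 1 || s.w > 1)
    );
    console.log(`${size.width}x${size.height}:`, overflowing.length ? overflowing : "all slides fit");
  }

  await browser.close();
}

main();
```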
### 7. Deliver
At handoff:
- delete temporary preview files unless the user wants to keep them
- open the deck with the platform-appropriate opener when useful
- summarize file path, preset used, slide count, and easy theme customization points
Use the correct opener for the current OS:
- macOS: `open file.html`
- Linux: `xdg-open file.html`
- Windows: `start "" file.html`
## PPT / PPTX Conversion
For PowerPoint conversion:
1. Prefer `python3` with `python-pptx` to extract text, images, and notes.
2. If `python-pptx` is unavailable, ask whether to install it or fall back to a manual/export-based workflow.
3. Preserve slide order, speaker notes, and extracted assets.
4. After extraction, run the same style-selection workflow as a new presentation.
Keep conversion cross-platform. Do not rely on macOS-only tools when Python can do the job.
## Implementation Requirements
### HTML / CSS
- Use inline CSS and JS unless the user explicitly wants a multi-file project.
- Fonts may come from Google Fonts or Fontshare.
- Prefer atmospheric backgrounds, strong type hierarchy, and a clear visual direction.
- Use abstract shapes, gradients, grids, noise, and geometry rather than illustrations.
### JavaScript
Include (a minimal controller sketch follows this list):
- keyboard navigation
- touch / swipe navigation
- mouse wheel navigation
- progress indicator or slide index
- reveal-on-enter animation triggers
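One way those pieces fit together, using only vanilla browser APIs — class and element names are illustrative, and the sketch assumes the `.slide` / scroll-snap base from `STYLE_PRESETS.md`:
```typescript
// Illustrative deck controller — drop into a <script> block and adapt as needed.
class DeckController {
  private slides = Array.from(document.querySelectorAll<HTMLElement>(".slide"));
  private index = 0;

  constructor() {
    document.addEventListener("keydown", (e) => {
      if (e.key === "ArrowRight" || e.key === "PageDown" || e.key === " ") this.go(this.index + 1);
      if (e.key === "ArrowLeft" || e.key === "PageUp") this.go(this.index - 1);
    });
    // A production version would debounce wheel events.
    document.addEventListener("wheel", (e) => this.go(this.index + (e.deltaY > 0 ? 1 : -1)), { passive: true });

    let touchStartY = 0;
    document.addEventListener("touchstart", (e) => (touchStartY = e.touches[0].clientY), { passive: true });
    document.addEventListener("touchend", (e) => {
      const dy = touchStartY - e.changedTouches[0].clientY;
      if (Math.abs(dy) > 40) this.go(this.index + (dy > 0 ? 1 : -1));
    });

    // Reveal-on-enter animations via Intersection Observer.
    const observer = new IntersectionObserver(
      (entries) => entries.forEach((en) => en.isIntersecting && en.target.classList.add("is-visible")),
      { threshold: 0.4 }
    );
    this.slides.forEach((s) => observer.observe(s));
  }

  private go(next: number) {
    this.index = Math.max(0, Math.min(this.slides.length - 1, next));
    // Respect reduced motion when scrolling between slides.
    const reduce = window.matchMedia("(prefers-reduced-motion: reduce)").matches;
    this.slides[this.index].scrollIntoView({ behavior: reduce ? "auto" : "smooth" });
    // Update a progress indicator here, e.g. `${this.index + 1} / ${this.slides.length}`.
  }
}

new DeckController();
```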
### Accessibility
- use semantic structure (`main`, `section`, `nav`)
- keep contrast readable
- support keyboard-only navigation
- respect `prefers-reduced-motion`
## Content Density Limits
Use these maxima unless the user explicitly asks for denser slides and readability still holds:
| Slide type | Limit |
|------------|-------|
| Title | 1 heading + 1 subtitle + optional tagline |
| Content | 1 heading + 4-6 bullets or 2 short paragraphs |
| Feature grid | 6 cards max |
| Code | 8-10 lines max |
| Quote | 1 quote + attribution |
| Image | 1 image constrained by viewport |
## Anti-Patterns
- generic startup gradients with no visual identity
- system-font decks unless intentionally editorial
- long bullet walls
- code blocks that need scrolling
- fixed-height content boxes that break on short screens
- invalid negated CSS functions like `-clamp(...)`
## Related ECC Skills
- `frontend-patterns` for component and interaction patterns around the deck
- `liquid-glass-design` when a presentation intentionally borrows Apple glass aesthetics
- `e2e-testing` if you need automated browser verification for the final deck
## Deliverable Checklist
- presentation runs from a local file in a browser
- every slide fits the viewport without scrolling
- style is distinctive and intentional
- animation is meaningful, not noisy
- reduced motion is respected
- file paths and customization points are explained at handoff

View File

@ -1,330 +0,0 @@
# Style Presets Reference
Curated visual styles for `frontend-slides`.
Use this file for:
- the mandatory viewport-fitting CSS base
- preset selection and mood mapping
- CSS gotchas and validation rules
Abstract shapes only. Avoid illustrations unless the user explicitly asks for them.
## Viewport Fit Is Non-Negotiable
Every slide must fully fit in one viewport.
### Golden Rule
```text
Each slide = exactly one viewport height.
Too much content = split into more slides.
Never scroll inside a slide.
```
### Density Limits
| Slide Type | Maximum Content |
|------------|-----------------|
| Title slide | 1 heading + 1 subtitle + optional tagline |
| Content slide | 1 heading + 4-6 bullets or 2 paragraphs |
| Feature grid | 6 cards maximum |
| Code slide | 8-10 lines maximum |
| Quote slide | 1 quote + attribution |
| Image slide | 1 image, ideally under 60vh |
## Mandatory Base CSS
Copy this block into every generated presentation and then theme on top of it.
```css
/* ===========================================
VIEWPORT FITTING: MANDATORY BASE STYLES
=========================================== */
html, body {
height: 100%;
overflow-x: hidden;
}
html {
scroll-snap-type: y mandatory;
scroll-behavior: smooth;
}
.slide {
width: 100vw;
height: 100vh;
height: 100dvh;
overflow: hidden;
scroll-snap-align: start;
display: flex;
flex-direction: column;
position: relative;
}
.slide-content {
flex: 1;
display: flex;
flex-direction: column;
justify-content: center;
max-height: 100%;
overflow: hidden;
padding: var(--slide-padding);
}
:root {
--title-size: clamp(1.5rem, 5vw, 4rem);
--h2-size: clamp(1.25rem, 3.5vw, 2.5rem);
--h3-size: clamp(1rem, 2.5vw, 1.75rem);
--body-size: clamp(0.75rem, 1.5vw, 1.125rem);
--small-size: clamp(0.65rem, 1vw, 0.875rem);
--slide-padding: clamp(1rem, 4vw, 4rem);
--content-gap: clamp(0.5rem, 2vw, 2rem);
--element-gap: clamp(0.25rem, 1vw, 1rem);
}
.card, .container, .content-box {
max-width: min(90vw, 1000px);
max-height: min(80vh, 700px);
}
.feature-list, .bullet-list {
gap: clamp(0.4rem, 1vh, 1rem);
}
.feature-list li, .bullet-list li {
font-size: var(--body-size);
line-height: 1.4;
}
.grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(min(100%, 250px), 1fr));
gap: clamp(0.5rem, 1.5vw, 1rem);
}
img, .image-container {
max-width: 100%;
max-height: min(50vh, 400px);
object-fit: contain;
}
@media (max-height: 700px) {
:root {
--slide-padding: clamp(0.75rem, 3vw, 2rem);
--content-gap: clamp(0.4rem, 1.5vw, 1rem);
--title-size: clamp(1.25rem, 4.5vw, 2.5rem);
--h2-size: clamp(1rem, 3vw, 1.75rem);
}
}
@media (max-height: 600px) {
:root {
--slide-padding: clamp(0.5rem, 2.5vw, 1.5rem);
--content-gap: clamp(0.3rem, 1vw, 0.75rem);
--title-size: clamp(1.1rem, 4vw, 2rem);
--body-size: clamp(0.7rem, 1.2vw, 0.95rem);
}
.nav-dots, .keyboard-hint, .decorative {
display: none;
}
}
@media (max-height: 500px) {
:root {
--slide-padding: clamp(0.4rem, 2vw, 1rem);
--title-size: clamp(1rem, 3.5vw, 1.5rem);
--h2-size: clamp(0.9rem, 2.5vw, 1.25rem);
--body-size: clamp(0.65rem, 1vw, 0.85rem);
}
}
@media (max-width: 600px) {
:root {
--title-size: clamp(1.25rem, 7vw, 2.5rem);
}
.grid {
grid-template-columns: 1fr;
}
}
@media (prefers-reduced-motion: reduce) {
*, *::before, *::after {
animation-duration: 0.01ms !important;
transition-duration: 0.2s !important;
}
html {
scroll-behavior: auto;
}
}
```
## Viewport Checklist
- every `.slide` has `height: 100vh`, `height: 100dvh`, and `overflow: hidden`
- all typography uses `clamp()`
- all spacing uses `clamp()` or viewport units
- images have `max-height` constraints
- grids adapt with `auto-fit` + `minmax()`
- short-height breakpoints exist at `700px`, `600px`, and `500px`
- if anything feels cramped, split the slide
## Mood to Preset Mapping
| Mood | Good Presets |
|------|--------------|
| Impressed / Confident | Bold Signal, Electric Studio, Dark Botanical |
| Excited / Energized | Creative Voltage, Neon Cyber, Split Pastel |
| Calm / Focused | Notebook Tabs, Paper & Ink, Swiss Modern |
| Inspired / Moved | Dark Botanical, Vintage Editorial, Pastel Geometry |
## Preset Catalog
### 1. Bold Signal
- Vibe: confident, high-impact, keynote-ready
- Best for: pitch decks, launches, statements
- Fonts: Archivo Black + Space Grotesk
- Palette: charcoal base, hot orange focal card, crisp white text
- Signature: oversized section numbers, high-contrast card on dark field
### 2. Electric Studio
- Vibe: clean, bold, agency-polished
- Best for: client presentations, strategic reviews
- Fonts: Manrope only
- Palette: black, white, saturated cobalt accent
- Signature: two-panel split and sharp editorial alignment
### 3. Creative Voltage
- Vibe: energetic, retro-modern, playful confidence
- Best for: creative studios, brand work, product storytelling
- Fonts: Syne + Space Mono
- Palette: electric blue, neon yellow, deep navy
- Signature: halftone textures, badges, punchy contrast
### 4. Dark Botanical
- Vibe: elegant, premium, atmospheric
- Best for: luxury brands, thoughtful narratives, premium product decks
- Fonts: Cormorant + IBM Plex Sans
- Palette: near-black, warm ivory, blush, gold, terracotta
- Signature: blurred abstract circles, fine rules, restrained motion
### 5. Notebook Tabs
- Vibe: editorial, organized, tactile
- Best for: reports, reviews, structured storytelling
- Fonts: Bodoni Moda + DM Sans
- Palette: cream paper on charcoal with pastel tabs
- Signature: paper sheet, colored side tabs, binder details
### 6. Pastel Geometry
- Vibe: approachable, modern, friendly
- Best for: product overviews, onboarding, lighter brand decks
- Fonts: Plus Jakarta Sans only
- Palette: pale blue field, cream card, soft pink/mint/lavender accents
- Signature: vertical pills, rounded cards, soft shadows
### 7. Split Pastel
- Vibe: playful, modern, creative
- Best for: agency intros, workshops, portfolios
- Fonts: Outfit only
- Palette: peach + lavender split with mint badges
- Signature: split backdrop, rounded tags, light grid overlays
### 8. Vintage Editorial
- Vibe: witty, personality-driven, magazine-inspired
- Best for: personal brands, opinionated talks, storytelling
- Fonts: Fraunces + Work Sans
- Palette: cream, charcoal, dusty warm accents
- Signature: geometric accents, bordered callouts, punchy serif headlines
### 9. Neon Cyber
- Vibe: futuristic, techy, kinetic
- Best for: AI, infra, dev tools, future-of-X talks
- Fonts: Clash Display + Satoshi
- Palette: midnight navy, cyan, magenta
- Signature: glow, particles, grids, data-radar energy
### 10. Terminal Green
- Vibe: developer-focused, hacker-clean
- Best for: APIs, CLI tools, engineering demos
- Fonts: JetBrains Mono only
- Palette: GitHub dark + terminal green
- Signature: scan lines, command-line framing, precise monospace rhythm
### 11. Swiss Modern
- Vibe: minimal, precise, data-forward
- Best for: corporate, product strategy, analytics
- Fonts: Archivo + Nunito
- Palette: white, black, signal red
- Signature: visible grids, asymmetry, geometric discipline
### 12. Paper & Ink
- Vibe: literary, thoughtful, story-driven
- Best for: essays, keynote narratives, manifesto decks
- Fonts: Cormorant Garamond + Source Serif 4
- Palette: warm cream, charcoal, crimson accent
- Signature: pull quotes, drop caps, elegant rules
## Direct Selection Prompts
If the user already knows the style they want, let them pick directly from the preset names above instead of forcing preview generation.
## Animation Feel Mapping
| Feeling | Motion Direction |
|---------|------------------|
| Dramatic / Cinematic | slow fades, parallax, large scale-ins |
| Techy / Futuristic | glow, particles, grid motion, scramble text |
| Playful / Friendly | springy easing, rounded shapes, floating motion |
| Professional / Corporate | subtle 200-300ms transitions, clean slides |
| Calm / Minimal | very restrained movement, whitespace-first |
| Editorial / Magazine | strong hierarchy, staggered text and image interplay |
## CSS Gotcha: Negating Functions
Never write these:
```css
right: -clamp(28px, 3.5vw, 44px);
margin-left: -min(10vw, 100px);
```
Browsers ignore them silently.
Always write this instead:
```css
right: calc(-1 * clamp(28px, 3.5vw, 44px));
margin-left: calc(-1 * min(10vw, 100px));
```
## Validation Sizes
Test at minimum:
- Desktop: `1920x1080`, `1440x900`, `1280x720`
- Tablet: `1024x768`, `768x1024`
- Mobile: `375x667`, `414x896`
- Landscape phone: `667x375`, `896x414`
## Anti-Patterns
Do not use:
- purple-on-white startup templates
- Inter / Roboto / Arial as the visual voice unless the user explicitly wants utilitarian neutrality
- bullet walls, tiny type, or code blocks that require scrolling
- decorative illustrations when abstract geometry would do the job better

View File

@ -1,7 +0,0 @@
interface:
display_name: "Frontend Slides"
short_description: "Create distinctive HTML slide decks and convert PPTX to web"
brand_color: "#FF6B3D"
default_prompt: "Create a viewport-safe HTML presentation with strong visual direction"
policy:
allow_implicit_invocation: true

View File

@ -1,96 +0,0 @@
---
name: investor-materials
description: Create and update pitch decks, one-pagers, investor memos, accelerator applications, financial models, and fundraising materials. Use when the user needs investor-facing documents, projections, use-of-funds tables, milestone plans, or materials that must stay internally consistent across multiple fundraising assets.
origin: ECC
---
# Investor Materials
Build investor-facing materials that are consistent, credible, and easy to defend.
## When to Activate
- creating or revising a pitch deck
- writing an investor memo or one-pager
- building a financial model, milestone plan, or use-of-funds table
- answering accelerator or incubator application questions
- aligning multiple fundraising docs around one source of truth
## Golden Rule
All investor materials must agree with each other.
Create or confirm a single source of truth before writing:
- traction metrics
- pricing and revenue assumptions
- raise size and instrument
- use of funds
- team bios and titles
- milestones and timelines
If conflicting numbers appear, stop and resolve them before drafting.
## Core Workflow
1. inventory the canonical facts
2. identify missing assumptions
3. choose the asset type
4. draft the asset with explicit logic
5. cross-check every number against the source of truth
## Asset Guidance
### Pitch Deck
Recommended flow:
1. company + wedge
2. problem
3. solution
4. product / demo
5. market
6. business model
7. traction
8. team
9. competition / differentiation
10. ask
11. use of funds / milestones
12. appendix
If the user wants a web-native deck, pair this skill with `frontend-slides`.
### One-Pager / Memo
- state what the company does in one clean sentence
- show why now
- include traction and proof points early
- make the ask precise
- keep claims easy to verify
### Financial Model
Include:
- explicit assumptions
- bear / base / bull cases when useful
- clean layer-by-layer revenue logic
- milestone-linked spending
- sensitivity analysis where the decision hinges on assumptions
### Accelerator Applications
- answer the exact question asked
- prioritize traction, insight, and team advantage
- avoid puffery
- keep internal metrics consistent with the deck and model
## Red Flags to Avoid
- unverifiable claims
- fuzzy market sizing without assumptions
- inconsistent team roles or titles
- revenue math that does not sum cleanly
- inflated certainty where assumptions are fragile
## Quality Gate
Before delivering:
- every number matches the current source of truth
- use of funds and revenue layers sum correctly
- assumptions are visible, not buried
- the story is clear without hype language
- the final asset is defensible in a partner meeting

View File

@ -1,7 +0,0 @@
interface:
display_name: "Investor Materials"
short_description: "Create decks, memos, and financial materials from one source of truth"
brand_color: "#7C3AED"
default_prompt: "Draft investor materials that stay numerically consistent across assets"
policy:
allow_implicit_invocation: true

View File

@ -1,91 +0,0 @@
---
name: investor-outreach
description: Draft cold emails, warm intro blurbs, follow-ups, update emails, and investor communications for fundraising. Use when the user wants outreach to angels, VCs, strategic investors, or accelerators and needs concise, personalized, investor-facing messaging.
origin: ECC
---
# Investor Outreach
Write investor communication that is short, concrete, and easy to act on.
## When to Activate
- writing a cold email to an investor
- drafting a warm intro request
- sending follow-ups after a meeting or no response
- writing investor updates during a process
- tailoring outreach based on fund thesis or partner fit
## Core Rules
1. Personalize every outbound message.
2. Keep the ask low-friction.
3. Use proof instead of adjectives.
4. Stay concise.
5. Never send copy that could go to any investor.
## Voice Handling
If the user's voice matters, run `brand-voice` first and reuse its `VOICE PROFILE`.
This skill should keep the investor-specific structure and ask discipline, not recreate its own parallel voice system.
## Hard Bans
Delete and rewrite any of these:
- "I'd love to connect"
- "excited to share"
- generic thesis praise without a real tie-in
- vague founder adjectives
- begging language
- soft closing questions when a direct ask is clearer
## Cold Email Structure
1. subject line: short and specific
2. opener: why this investor specifically
3. pitch: what the company does, why now, and what proof matters
4. ask: one concrete next step
5. sign-off: name, role, and one credibility anchor if needed
## Personalization Sources
Reference one or more of:
- relevant portfolio companies
- a public thesis, talk, post, or article
- a mutual connection
- a clear market or product fit with the investor's focus
If that context is missing, state that the draft still needs personalization instead of pretending it is finished.
## Follow-Up Cadence
Default:
- day 0: initial outbound
- day 4 or 5: short follow-up with one new data point
- day 10 to 12: final follow-up with a clean close
Do not keep nudging after that unless the user wants a longer sequence.
## Warm Intro Requests
Make life easy for the connector:
- explain why the intro is a fit
- include a forwardable blurb
- keep the forwardable blurb under 100 words
## Post-Meeting Updates
Include:
- the specific thing discussed
- the answer or update promised
- one new proof point if available
- the next step
## Quality Gate
Before delivering:
- the message is genuinely personalized
- the ask is explicit
- the proof point is concrete
- filler praise and softener language are gone
- word count stays tight

View File

@ -1,7 +0,0 @@
interface:
display_name: "Investor Outreach"
short_description: "Write concise, personalized outreach and follow-ups for fundraising"
brand_color: "#059669"
default_prompt: "Draft a personalized investor outreach email with a clear low-friction ask"
policy:
allow_implicit_invocation: true

View File

@ -1,75 +0,0 @@
---
name: market-research
description: Conduct market research, competitive analysis, investor due diligence, and industry intelligence with source attribution and decision-oriented summaries. Use when the user wants market sizing, competitor comparisons, fund research, technology scans, or research that informs business decisions.
origin: ECC
---
# Market Research
Produce research that supports decisions, not research theater.
## When to Activate
- researching a market, category, company, investor, or technology trend
- building TAM/SAM/SOM estimates
- comparing competitors or adjacent products
- preparing investor dossiers before outreach
- pressure-testing a thesis before building, funding, or entering a market
## Research Standards
1. Every important claim needs a source.
2. Prefer recent data and call out stale data.
3. Include contrarian evidence and downside cases.
4. Translate findings into a decision, not just a summary.
5. Separate fact, inference, and recommendation clearly.
## Common Research Modes
### Investor / Fund Diligence
Collect:
- fund size, stage, and typical check size
- relevant portfolio companies
- public thesis and recent activity
- reasons the fund is or is not a fit
- any obvious red flags or mismatches
### Competitive Analysis
Collect:
- product reality, not marketing copy
- funding and investor history if public
- traction metrics if public
- distribution and pricing clues
- strengths, weaknesses, and positioning gaps
### Market Sizing
Use:
- top-down estimates from reports or public datasets
- bottom-up sanity checks from realistic customer acquisition assumptions (see the worked sketch after this list)
- explicit assumptions for every leap in logic
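A worked bottom-up sketch; every number here is a hypothetical placeholder to be replaced with the user's own assumptions:
```typescript
// Hypothetical bottom-up SOM sanity check — all inputs are assumptions, not data.
const targetAccounts = 40_000;     // companies matching the ICP (assumption)
const reachableShare = 0.15;       // share realistically reachable in 3 years (assumption)
const winRate = 0.08;              // close rate on reached accounts (assumption)
const annualContractValue = 6_000; // USD per customer per year (assumption)

const customers = targetAccounts * reachableShare * winRate;   // 480 customers
const bottomUpSOM = customers * annualContractValue;           // ≈ $2.9M ARR

console.log({ customers, bottomUpSOM });
```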
### Technology / Vendor Research
Collect:
- how it works
- trade-offs and adoption signals
- integration complexity
- lock-in, security, compliance, and operational risk
## Output Format
Default structure:
1. executive summary
2. key findings
3. implications
4. risks and caveats
5. recommendation
6. sources
## Quality Gate
Before delivering:
- all numbers are sourced or labeled as estimates
- old data is flagged
- the recommendation follows from the evidence
- risks and counterarguments are included
- the output makes a decision easier

View File

@ -1,7 +0,0 @@
interface:
display_name: "Market Research"
short_description: "Source-attributed market, competitor, and investor research"
brand_color: "#2563EB"
default_prompt: "Research this market and summarize the decision-relevant findings"
policy:
allow_implicit_invocation: true

View File

@ -1,67 +0,0 @@
---
name: mcp-server-patterns
description: Build MCP servers with Node/TypeScript SDK — tools, resources, prompts, Zod validation, stdio vs Streamable HTTP. Use Context7 or official MCP docs for latest API.
origin: ECC
---
# MCP Server Patterns
The Model Context Protocol (MCP) lets AI assistants call tools, read resources, and use prompts from your server. Use this skill when building or maintaining MCP servers. The SDK API evolves; check Context7 (query-docs for "MCP") or the official MCP documentation for current method names and signatures.
## When to Use
Use when: implementing a new MCP server, adding tools or resources, choosing stdio vs HTTP, upgrading the SDK, or debugging MCP registration and transport issues.
## How It Works
### Core concepts
- **Tools**: Actions the model can invoke (e.g. search, run a command). Register with `registerTool()` or `tool()` depending on SDK version.
- **Resources**: Read-only data the model can fetch (e.g. file contents, API responses). Register with `registerResource()` or `resource()`. Handlers typically receive a `uri` argument.
- **Prompts**: Reusable, parameterised prompt templates the client can surface (e.g. in Claude Desktop). Register with `registerPrompt()` or equivalent.
- **Transport**: stdio for local clients (e.g. Claude Desktop); Streamable HTTP is preferred for remote (Cursor, cloud). Legacy HTTP/SSE is for backward compatibility.
The Node/TypeScript SDK may expose `tool()` / `resource()` or `registerTool()` / `registerResource()`; the official SDK has changed over time. Always verify against the current [MCP docs](https://modelcontextprotocol.io) or Context7.
### Connecting with stdio
For local clients, create a stdio transport and pass it to your server's connect method. The exact API varies by SDK version (e.g. constructor vs factory). See the official MCP documentation or query Context7 for "MCP stdio server" for the current pattern.
Keep server logic (tools + resources) independent of transport so you can plug in stdio or HTTP in the entrypoint.
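As one illustration, a minimal sketch assuming a recent SDK build that exposes `StdioServerTransport`; the import path and `connect()` signature are assumptions to verify against your installed version:
```typescript
// Entrypoint for local clients such as Claude Desktop.
// `server` is the McpServer instance created in the setup snippet below.
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const transport = new StdioServerTransport();
await server.connect(transport);
```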
### Remote (Streamable HTTP)
For Cursor, cloud, or other remote clients, use **Streamable HTTP** (single MCP HTTP endpoint per current spec). Support legacy HTTP/SSE only when backward compatibility is required.
## Examples
### Install and server setup
```bash
npm install @modelcontextprotocol/sdk zod
```
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";
const server = new McpServer({ name: "my-server", version: "1.0.0" });
```
Register tools and resources using the API your SDK version provides: some versions use `server.tool(name, description, schema, handler)` (positional args), others use `server.tool({ name, description, inputSchema }, handler)` or `registerTool()`. Same for resources — include a `uri` in the handler when the API provides it. Check the official MCP docs or Context7 for the current `@modelcontextprotocol/sdk` signatures to avoid copy-paste errors.
Use **Zod** (or the SDK's preferred schema format) for input validation.
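A minimal sketch assuming a version that exposes `registerTool()` with a Zod raw-shape `inputSchema`; the tool name and handler are illustrative only, so adapt the shape to whatever signature your SDK actually provides:
```typescript
// `server` and `z` come from the setup snippet above.
server.registerTool(
  "search",
  {
    description: "Search indexed documents by keyword",
    inputSchema: { query: z.string().min(1) },
  },
  async ({ query }) => ({
    content: [{ type: "text", text: `Results for: ${query}` }],
  })
);
```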
## Best Practices
- **Schema first**: Define input schemas for every tool; document parameters and return shape.
- **Errors**: Return structured errors or messages the model can interpret; avoid raw stack traces.
- **Idempotency**: Prefer idempotent tools where possible so retries are safe.
- **Rate and cost**: For tools that call external APIs, consider rate limits and cost; document in the tool description.
- **Versioning**: Pin SDK version in package.json; check release notes when upgrading.
## Official SDKs and Docs
- **JavaScript/TypeScript**: `@modelcontextprotocol/sdk` (npm). Use Context7 with library name "MCP" for current registration and transport patterns.
- **Go**: Official Go SDK on GitHub (`modelcontextprotocol/go-sdk`).
- **C#**: Official C# SDK for .NET.

View File

@ -1,44 +0,0 @@
---
name: nextjs-turbopack
description: Next.js 16+ and Turbopack — incremental bundling, FS caching, dev speed, and when to use Turbopack vs webpack.
origin: ECC
---
# Next.js and Turbopack
Next.js 16+ uses Turbopack by default for local development: an incremental bundler written in Rust that significantly speeds up dev startup and hot updates.
## When to Use
- **Turbopack (default dev)**: Use for day-to-day development. Faster cold start and HMR, especially in large apps.
- **Webpack (legacy dev)**: Use only if you hit a Turbopack bug or rely on a webpack-only plugin in dev. Disable with `--webpack` (or `--no-turbopack` depending on your Next.js version; check the docs for your release).
- **Production**: Production build behavior (`next build`) may use Turbopack or webpack depending on Next.js version; check the official Next.js docs for your version.
Use when: developing or debugging Next.js 16+ apps, diagnosing slow dev startup or HMR, or optimizing production bundles.
## How It Works
- **Turbopack**: Incremental bundler for Next.js dev. Uses file-system caching so restarts are much faster, with the biggest gains on large projects.
- **Default in dev**: From Next.js 16, `next dev` runs with Turbopack unless disabled.
- **File-system caching**: Restarts reuse previous work; cache is typically under `.next`; no extra config needed for basic use.
- **Bundle Analyzer (Next.js 16.1+)**: Experimental Bundle Analyzer to inspect output and find heavy dependencies; enable via config or experimental flag (see Next.js docs for your version).
## Examples
### Commands
```bash
next dev
next build
next start
```
### Usage
Run `next dev` for local development with Turbopack. Use the Bundle Analyzer (see Next.js docs) to optimize code-splitting and trim large dependencies. Prefer App Router and server components where possible.
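If the built-in experimental analyzer is not available in your release, the long-standing `@next/bundle-analyzer` plugin is one option. A minimal sketch, assuming the plugin's documented wrapper API (confirm compatibility with your Next.js version):
```javascript
// next.config.js (run with: ANALYZE=true next build)
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
})

module.exports = withBundleAnalyzer({
  // your existing Next.js config goes here
})
```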
## Best Practices
- Stay on a recent Next.js 16.x for stable Turbopack and caching behavior.
- If dev is slow, ensure you're on Turbopack (default) and that the cache isn't being cleared unnecessarily.
- For production bundle size issues, use the official Next.js bundle analysis tooling for your version.

View File

@ -1,7 +0,0 @@
interface:
display_name: "Next.js Turbopack"
short_description: "Next.js 16+ and Turbopack dev bundler"
brand_color: "#000000"
default_prompt: "Next.js dev, Turbopack, or bundle optimization"
policy:
allow_implicit_invocation: true

View File

@ -1,141 +0,0 @@
---
name: product-capability
description: Translate PRD intent, roadmap asks, or product discussions into an implementation-ready capability plan that exposes constraints, invariants, interfaces, and unresolved decisions before multi-service work starts. Use when the user needs an ECC-native PRD-to-SRS lane instead of vague planning prose.
origin: ECC
---
# Product Capability
This skill turns product intent into explicit engineering constraints.
Use it when the gap is not "what should we build?" but "what exactly must be true before implementation starts?"
## When to Use
- A PRD, roadmap item, discussion, or founder note exists, but the implementation constraints are still implicit
- A feature crosses multiple services, repos, or teams and needs a capability contract before coding
- Product intent is clear, but architecture, data, lifecycle, or policy implications are still fuzzy
- Senior engineers keep restating the same hidden assumptions during review
- You need a reusable artifact that can survive across harnesses and sessions
## Canonical Artifact
If the repo has a durable product-context file such as `PRODUCT.md`, `docs/product/`, or a program-spec directory, update it there.
If no capability manifest exists yet, create one using the template at:
- `docs/examples/product-capability-template.md`
The goal is not to create another planning stack. The goal is to make hidden capability constraints durable and reusable.
## Non-Negotiable Rules
- Do not invent product truth. Mark unresolved questions explicitly.
- Separate user-visible promises from implementation details.
- Call out what is fixed policy, what is architecture preference, and what is still open.
- If the request conflicts with existing repo constraints, say so clearly instead of smoothing it over.
- Prefer one reusable capability artifact over scattered ad hoc notes.
## Inputs
Read only what is needed:
1. Product intent
- issue, discussion, PRD, roadmap note, founder message
2. Current architecture
- relevant repo docs, contracts, schemas, routes, existing workflows
3. Existing capability context
- `PRODUCT.md`, design docs, RFCs, migration notes, operating-model docs
4. Delivery constraints
- auth, billing, compliance, rollout, backwards compatibility, performance, review policy
## Core Workflow
### 1. Restate the capability
Compress the ask into one precise statement:
- who the user or operator is
- what new capability exists after this ships
- what outcome changes because of it
If this statement is weak, the implementation will drift.
### 2. Resolve capability constraints
Extract the constraints that must hold before implementation:
- business rules
- scope boundaries
- invariants
- trust boundaries
- data ownership
- lifecycle transitions
- rollout / migration requirements
- failure and recovery expectations
These are the things that often live only in senior-engineer memory.
### 3. Define the implementation-facing contract
Produce an SRS-style capability plan with:
- capability summary
- explicit non-goals
- actors and surfaces
- required states and transitions
- interfaces / inputs / outputs
- data model implications
- security / billing / policy constraints
- observability and operator requirements
- open questions blocking implementation
### 4. Translate into execution
End with the exact handoff:
- ready for direct implementation
- needs architecture review first
- needs product clarification first
If useful, point to the next ECC-native lane:
- `project-flow-ops`
- `workspace-surface-audit`
- `api-connector-builder`
- `dashboard-builder`
- `tdd-workflow`
- `verification-loop`
## Output Format
Return the result in this order:
```text
CAPABILITY
- one-paragraph restatement
CONSTRAINTS
- fixed rules, invariants, and boundaries
IMPLEMENTATION CONTRACT
- actors
- surfaces
- states and transitions
- interface/data implications
NON-GOALS
- what this lane explicitly does not own
OPEN QUESTIONS
- blockers or product decisions still required
HANDOFF
- what should happen next and which ECC lane should take it
```
## Good Outcomes
- Product intent is now concrete enough to implement without rediscovering hidden constraints mid-PR.
- Engineering review has a durable artifact instead of relying on memory or Slack context.
- The resulting plan is reusable across Claude Code, Codex, Cursor, OpenCode, and ECC 2.0 planning surfaces.

View File

@ -1,495 +0,0 @@
---
name: security-review
description: Use this skill when adding authentication, handling user input, working with secrets, creating API endpoints, or implementing payment/sensitive features. Provides comprehensive security checklist and patterns.
origin: ECC
---
# Security Review Skill
This skill ensures all code follows security best practices and identifies potential vulnerabilities.
## When to Activate
- Implementing authentication or authorization
- Handling user input or file uploads
- Creating new API endpoints
- Working with secrets or credentials
- Implementing payment features
- Storing or transmitting sensitive data
- Integrating third-party APIs
## Security Checklist
### 1. Secrets Management
#### FAIL: NEVER Do This
```typescript
const apiKey = "sk-proj-xxxxx" // Hardcoded secret
const dbPassword = "password123" // In source code
```
#### PASS: ALWAYS Do This
```typescript
const apiKey = process.env.OPENAI_API_KEY
const dbUrl = process.env.DATABASE_URL
// Verify secrets exist
if (!apiKey) {
throw new Error('OPENAI_API_KEY not configured')
}
```
#### Verification Steps
- [ ] No hardcoded API keys, tokens, or passwords
- [ ] All secrets in environment variables
- [ ] `.env.local` in .gitignore
- [ ] No secrets in git history
- [ ] Production secrets in hosting platform (Vercel, Railway)
### 2. Input Validation
#### Always Validate User Input
```typescript
import { z } from 'zod'
// Define validation schema
const CreateUserSchema = z.object({
email: z.string().email(),
name: z.string().min(1).max(100),
age: z.number().int().min(0).max(150)
})
// Validate before processing
export async function createUser(input: unknown) {
try {
const validated = CreateUserSchema.parse(input)
return await db.users.create(validated)
} catch (error) {
if (error instanceof z.ZodError) {
return { success: false, errors: error.errors }
}
throw error
}
}
```
#### File Upload Validation
```typescript
function validateFileUpload(file: File) {
// Size check (5MB max)
const maxSize = 5 * 1024 * 1024
if (file.size > maxSize) {
throw new Error('File too large (max 5MB)')
}
// Type check
const allowedTypes = ['image/jpeg', 'image/png', 'image/gif']
if (!allowedTypes.includes(file.type)) {
throw new Error('Invalid file type')
}
// Extension check
const allowedExtensions = ['.jpg', '.jpeg', '.png', '.gif']
const extension = file.name.toLowerCase().match(/\.[^.]+$/)?.[0]
if (!extension || !allowedExtensions.includes(extension)) {
throw new Error('Invalid file extension')
}
return true
}
```
#### Verification Steps
- [ ] All user inputs validated with schemas
- [ ] File uploads restricted (size, type, extension)
- [ ] No direct use of user input in queries
- [ ] Whitelist validation (not blacklist)
- [ ] Error messages don't leak sensitive info
### 3. SQL Injection Prevention
#### FAIL: NEVER Concatenate SQL
```typescript
// DANGEROUS - SQL Injection vulnerability
const query = `SELECT * FROM users WHERE email = '${userEmail}'`
await db.query(query)
```
#### PASS: ALWAYS Use Parameterized Queries
```typescript
// Safe - parameterized query
const { data } = await supabase
.from('users')
.select('*')
.eq('email', userEmail)
// Or with raw SQL
await db.query(
'SELECT * FROM users WHERE email = $1',
[userEmail]
)
```
#### Verification Steps
- [ ] All database queries use parameterized queries
- [ ] No string concatenation in SQL
- [ ] ORM/query builder used correctly
- [ ] Supabase queries properly sanitized
### 4. Authentication & Authorization
#### JWT Token Handling
```typescript
// FAIL: WRONG: localStorage (vulnerable to XSS)
localStorage.setItem('token', token)
// PASS: CORRECT: httpOnly cookies
res.setHeader('Set-Cookie',
`token=${token}; HttpOnly; Secure; SameSite=Strict; Max-Age=3600`)
```
#### Authorization Checks
```typescript
export async function deleteUser(userId: string, requesterId: string) {
// ALWAYS verify authorization first
const requester = await db.users.findUnique({
where: { id: requesterId }
})
if (!requester || requester.role !== 'admin') {
return NextResponse.json(
{ error: 'Unauthorized' },
{ status: 403 }
)
}
// Proceed with deletion
await db.users.delete({ where: { id: userId } })
}
```
#### Row Level Security (Supabase)
```sql
-- Enable RLS on all tables
ALTER TABLE users ENABLE ROW LEVEL SECURITY;
-- Users can only view their own data
CREATE POLICY "Users view own data"
ON users FOR SELECT
USING (auth.uid() = id);
-- Users can only update their own data
CREATE POLICY "Users update own data"
ON users FOR UPDATE
USING (auth.uid() = id);
```
#### Verification Steps
- [ ] Tokens stored in httpOnly cookies (not localStorage)
- [ ] Authorization checks before sensitive operations
- [ ] Row Level Security enabled in Supabase
- [ ] Role-based access control implemented
- [ ] Session management secure
### 5. XSS Prevention
#### Sanitize HTML
```typescript
import DOMPurify from 'isomorphic-dompurify'
// ALWAYS sanitize user-provided HTML
function renderUserContent(html: string) {
const clean = DOMPurify.sanitize(html, {
ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'p'],
ALLOWED_ATTR: []
})
return <div dangerouslySetInnerHTML={{ __html: clean }} />
}
```
#### Content Security Policy
```typescript
// next.config.js
const securityHeaders = [
{
key: 'Content-Security-Policy',
value: `
default-src 'self';
script-src 'self' 'unsafe-eval' 'unsafe-inline';
style-src 'self' 'unsafe-inline';
img-src 'self' data: https:;
font-src 'self';
connect-src 'self' https://api.example.com;
`.replace(/\s{2,}/g, ' ').trim()
}
]
```
#### Verification Steps
- [ ] User-provided HTML sanitized
- [ ] CSP headers configured
- [ ] No unvalidated dynamic content rendering
- [ ] React's built-in XSS protection used
### 6. CSRF Protection
#### CSRF Tokens
```typescript
import { csrf } from '@/lib/csrf'
export async function POST(request: Request) {
const token = request.headers.get('X-CSRF-Token')
if (!csrf.verify(token)) {
return NextResponse.json(
{ error: 'Invalid CSRF token' },
{ status: 403 }
)
}
// Process request
}
```
#### SameSite Cookies
```typescript
res.setHeader('Set-Cookie',
`session=${sessionId}; HttpOnly; Secure; SameSite=Strict`)
```
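The double-submit variant compares the CSRF cookie with a copy the client sends back explicitly. A minimal sketch (the cookie and header names are illustrative, not a fixed convention):
```typescript
// Reject the request unless the cookie token and the echoed header token match.
export function verifyDoubleSubmit(request: Request): boolean {
  const cookieHeader = request.headers.get('cookie') ?? ''
  const cookieToken = cookieHeader.match(/(?:^|;\s*)csrfToken=([^;]+)/)?.[1]
  const headerToken = request.headers.get('X-CSRF-Token')
  return Boolean(cookieToken && headerToken && cookieToken === headerToken)
}
```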
#### Verification Steps
- [ ] CSRF tokens on state-changing operations
- [ ] SameSite=Strict on all cookies
- [ ] Double-submit cookie pattern implemented
### 7. Rate Limiting
#### API Rate Limiting
```typescript
import rateLimit from 'express-rate-limit'
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // 100 requests per window
message: 'Too many requests'
})
// Apply to routes
app.use('/api/', limiter)
```
#### Expensive Operations
```typescript
// Aggressive rate limiting for searches
const searchLimiter = rateLimit({
windowMs: 60 * 1000, // 1 minute
max: 10, // 10 requests per minute
message: 'Too many search requests'
})
app.use('/api/search', searchLimiter)
```
#### Verification Steps
- [ ] Rate limiting on all API endpoints
- [ ] Stricter limits on expensive operations
- [ ] IP-based rate limiting
- [ ] User-based rate limiting (authenticated)
### 8. Sensitive Data Exposure
#### Logging
```typescript
// FAIL: WRONG: Logging sensitive data
console.log('User login:', { email, password })
console.log('Payment:', { cardNumber, cvv })
// PASS: CORRECT: Redact sensitive data
console.log('User login:', { email, userId })
console.log('Payment:', { last4: card.last4, userId })
```
#### Error Messages
```typescript
// FAIL: WRONG: Exposing internal details
catch (error) {
return NextResponse.json(
{ error: error.message, stack: error.stack },
{ status: 500 }
)
}
// PASS: CORRECT: Generic error messages
catch (error) {
console.error('Internal error:', error)
return NextResponse.json(
{ error: 'An error occurred. Please try again.' },
{ status: 500 }
)
}
```
#### Verification Steps
- [ ] No passwords, tokens, or secrets in logs
- [ ] Error messages generic for users
- [ ] Detailed errors only in server logs
- [ ] No stack traces exposed to users
### 9. Blockchain Security (Solana)
#### Wallet Verification
```typescript
import nacl from 'tweetnacl'
import bs58 from 'bs58'
// Verifies an ed25519 signature produced by a wallet's signMessage flow.
// Assumes the signature and public key arrive as base58 strings; adjust the
// decoding if your wallet adapter returns base64 or raw bytes instead.
async function verifyWalletOwnership(
  publicKey: string,
  signature: string,
  message: string
) {
  try {
    return nacl.sign.detached.verify(
      new TextEncoder().encode(message),
      bs58.decode(signature),
      bs58.decode(publicKey)
    )
  } catch (error) {
    return false
  }
}
```
#### Transaction Verification
```typescript
async function verifyTransaction(transaction: { from: string; to: string; amount: number }) {
// Verify recipient
if (transaction.to !== expectedRecipient) {
throw new Error('Invalid recipient')
}
// Verify amount
if (transaction.amount > maxAmount) {
throw new Error('Amount exceeds limit')
}
// Verify user has sufficient balance
const balance = await getBalance(transaction.from)
if (balance < transaction.amount) {
throw new Error('Insufficient balance')
}
return true
}
```
#### Verification Steps
- [ ] Wallet signatures verified
- [ ] Transaction details validated
- [ ] Balance checks before transactions
- [ ] No blind transaction signing
### 10. Dependency Security
#### Regular Updates
```bash
# Check for vulnerabilities
npm audit
# Fix automatically fixable issues
npm audit fix
# Update dependencies
npm update
# Check for outdated packages
npm outdated
```
#### Lock Files
```bash
# ALWAYS commit lock files
git add package-lock.json
# Use in CI/CD for reproducible builds
npm ci # Instead of npm install
```
#### Verification Steps
- [ ] Dependencies up to date
- [ ] No known vulnerabilities (npm audit clean)
- [ ] Lock files committed
- [ ] Dependabot enabled on GitHub
- [ ] Regular security updates
## Security Testing
### Automated Security Tests
```typescript
// Test authentication
test('requires authentication', async () => {
const response = await fetch('/api/protected')
expect(response.status).toBe(401)
})
// Test authorization
test('requires admin role', async () => {
const response = await fetch('/api/admin', {
headers: { Authorization: `Bearer ${userToken}` }
})
expect(response.status).toBe(403)
})
// Test input validation
test('rejects invalid input', async () => {
const response = await fetch('/api/users', {
method: 'POST',
body: JSON.stringify({ email: 'not-an-email' })
})
expect(response.status).toBe(400)
})
// Test rate limiting
test('enforces rate limits', async () => {
const requests = Array(101).fill(null).map(() =>
fetch('/api/endpoint')
)
const responses = await Promise.all(requests)
const tooManyRequests = responses.filter(r => r.status === 429)
expect(tooManyRequests.length).toBeGreaterThan(0)
})
```
## Pre-Deployment Security Checklist
Before ANY production deployment:
- [ ] **Secrets**: No hardcoded secrets, all in env vars
- [ ] **Input Validation**: All user inputs validated
- [ ] **SQL Injection**: All queries parameterized
- [ ] **XSS**: User content sanitized
- [ ] **CSRF**: Protection enabled
- [ ] **Authentication**: Proper token handling
- [ ] **Authorization**: Role checks in place
- [ ] **Rate Limiting**: Enabled on all endpoints
- [ ] **HTTPS**: Enforced in production
- [ ] **Security Headers**: CSP, X-Frame-Options configured
- [ ] **Error Handling**: No sensitive data in errors
- [ ] **Logging**: No sensitive data logged
- [ ] **Dependencies**: Up to date, no vulnerabilities
- [ ] **Row Level Security**: Enabled in Supabase
- [ ] **CORS**: Properly configured
- [ ] **File Uploads**: Validated (size, type)
- [ ] **Wallet Signatures**: Verified (if blockchain)
## Resources
- [OWASP Top 10](https://owasp.org/www-project-top-ten/)
- [Next.js Security](https://nextjs.org/docs/security)
- [Supabase Security](https://supabase.com/docs/guides/auth)
- [Web Security Academy](https://portswigger.net/web-security)
---
**Remember**: Security is not optional. One vulnerability can compromise the entire platform. When in doubt, err on the side of caution.

View File

@ -1,7 +0,0 @@
interface:
display_name: "Security Review"
short_description: "Comprehensive security checklist and vulnerability detection"
brand_color: "#EF4444"
default_prompt: "Run security checklist: secrets, input validation, injection prevention"
policy:
allow_implicit_invocation: true

View File

@ -1,103 +0,0 @@
---
name: strategic-compact
description: Suggests manual context compaction at logical intervals to preserve context through task phases rather than arbitrary auto-compaction.
origin: ECC
---
# Strategic Compact Skill
Suggests manual `/compact` at strategic points in your workflow rather than relying on arbitrary auto-compaction.
## When to Activate
- Running long sessions that approach context limits (200K+ tokens)
- Working on multi-phase tasks (research → plan → implement → test)
- Switching between unrelated tasks within the same session
- After completing a major milestone and starting new work
- When responses slow down or become less coherent (context pressure)
## Why Strategic Compaction?
Auto-compaction triggers at arbitrary points:
- Often mid-task, losing important context
- No awareness of logical task boundaries
- Can interrupt complex multi-step operations
Strategic compaction at logical boundaries:
- **After exploration, before execution** — Compact research context, keep implementation plan
- **After completing a milestone** — Fresh start for next phase
- **Before major context shifts** — Clear exploration context before different task
## How It Works
The `suggest-compact.js` script runs on PreToolUse (Edit/Write) and:
1. **Tracks tool calls** — Counts tool invocations in session
2. **Threshold detection** — Suggests at configurable threshold (default: 50 calls)
3. **Periodic reminders** — Reminds every 25 calls after threshold
## Hook Setup
Add to your `~/.claude/settings.json`:
```json
{
"hooks": {
"PreToolUse": [
{
"matcher": "Edit",
"hooks": [{ "type": "command", "command": "node ~/.claude/skills/strategic-compact/suggest-compact.js" }]
},
{
"matcher": "Write",
"hooks": [{ "type": "command", "command": "node ~/.claude/skills/strategic-compact/suggest-compact.js" }]
}
]
}
}
```
## Configuration
Environment variables:
- `COMPACT_THRESHOLD` — Tool calls before first suggestion (default: 50)
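For example, to get the first suggestion earlier than the default 50 tool calls:
```bash
# Suggest compaction after 30 tool calls instead of 50
export COMPACT_THRESHOLD=30
```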
## Compaction Decision Guide
Use this table to decide when to compact:
| Phase Transition | Compact? | Why |
|-----------------|----------|-----|
| Research → Planning | Yes | Research context is bulky; plan is the distilled output |
| Planning → Implementation | Yes | Plan is in TodoWrite or a file; free up context for code |
| Implementation → Testing | Maybe | Keep if tests reference recent code; compact if switching focus |
| Debugging → Next feature | Yes | Debug traces pollute context for unrelated work |
| Mid-implementation | No | Losing variable names, file paths, and partial state is costly |
| After a failed approach | Yes | Clear the dead-end reasoning before trying a new approach |
## What Survives Compaction
Understanding what persists helps you compact with confidence:
| Persists | Lost |
|----------|------|
| CLAUDE.md instructions | Intermediate reasoning and analysis |
| TodoWrite task list | File contents you previously read |
| Memory files (`~/.claude/memory/`) | Multi-step conversation context |
| Git state (commits, branches) | Tool call history and counts |
| Files on disk | Nuanced user preferences stated verbally |
## Best Practices
1. **Compact after planning** — Once plan is finalized in TodoWrite, compact to start fresh
2. **Compact after debugging** — Clear error-resolution context before continuing
3. **Don't compact mid-implementation** — Preserve context for related changes
4. **Read the suggestion** — The hook tells you *when*, you decide *if*
5. **Write before compacting** — Save important context to files or memory before compacting
6. **Use `/compact` with a summary** — Add a custom message: `/compact Focus on implementing auth middleware next`
## Related
- [The Longform Guide](https://x.com/affaanmustafa/status/2014040193557471352) — Token optimization section
- Memory persistence hooks — For state that survives compaction
- `continuous-learning` skill — Extracts patterns before session ends

View File

@ -1,7 +0,0 @@
interface:
display_name: "Strategic Compact"
short_description: "Context management via strategic compaction"
brand_color: "#14B8A6"
default_prompt: "Suggest task boundary compaction for context management"
policy:
allow_implicit_invocation: true

View File

@ -1,410 +0,0 @@
---
name: tdd-workflow
description: Use this skill when writing new features, fixing bugs, or refactoring code. Enforces test-driven development with 80%+ coverage including unit, integration, and E2E tests.
origin: ECC
---
# Test-Driven Development Workflow
This skill ensures all code development follows TDD principles with comprehensive test coverage.
## When to Activate
- Writing new features or functionality
- Fixing bugs or issues
- Refactoring existing code
- Adding API endpoints
- Creating new components
## Core Principles
### 1. Tests BEFORE Code
ALWAYS write tests first, then implement code to make tests pass.
### 2. Coverage Requirements
- Minimum 80% coverage (unit + integration + E2E)
- All edge cases covered
- Error scenarios tested
- Boundary conditions verified
### 3. Test Types
#### Unit Tests
- Individual functions and utilities
- Component logic
- Pure functions
- Helpers and utilities
#### Integration Tests
- API endpoints
- Database operations
- Service interactions
- External API calls
#### E2E Tests (Playwright)
- Critical user flows
- Complete workflows
- Browser automation
- UI interactions
## TDD Workflow Steps
### Step 1: Write User Journeys
```
As a [role], I want to [action], so that [benefit]
Example:
As a user, I want to search for markets semantically,
so that I can find relevant markets even without exact keywords.
```
### Step 2: Generate Test Cases
For each user journey, create comprehensive test cases:
```typescript
describe('Semantic Search', () => {
it('returns relevant markets for query', async () => {
// Test implementation
})
it('handles empty query gracefully', async () => {
// Test edge case
})
it('falls back to substring search when Redis unavailable', async () => {
// Test fallback behavior
})
it('sorts results by similarity score', async () => {
// Test sorting logic
})
})
```
### Step 3: Run Tests (They Should Fail)
```bash
npm test
# Tests should fail - we haven't implemented yet
```
### Step 4: Implement Code
Write minimal code to make tests pass:
```typescript
// Implementation guided by tests
export async function searchMarkets(query: string) {
// Implementation here
}
```
### Step 5: Run Tests Again
```bash
npm test
# Tests should now pass
```
### Step 6: Refactor
Improve code quality while keeping tests green:
- Remove duplication
- Improve naming
- Optimize performance
- Enhance readability
### Step 7: Verify Coverage
```bash
npm run test:coverage
# Verify 80%+ coverage achieved
```
## Testing Patterns
### Unit Test Pattern (Jest/Vitest)
```typescript
import { render, screen, fireEvent } from '@testing-library/react'
import { Button } from './Button'
describe('Button Component', () => {
it('renders with correct text', () => {
render(<Button>Click me</Button>)
expect(screen.getByText('Click me')).toBeInTheDocument()
})
it('calls onClick when clicked', () => {
const handleClick = jest.fn()
render(<Button onClick={handleClick}>Click</Button>)
fireEvent.click(screen.getByRole('button'))
expect(handleClick).toHaveBeenCalledTimes(1)
})
it('is disabled when disabled prop is true', () => {
render(<Button disabled>Click</Button>)
expect(screen.getByRole('button')).toBeDisabled()
})
})
```
### API Integration Test Pattern
```typescript
import { NextRequest } from 'next/server'
import { GET } from './route'
describe('GET /api/markets', () => {
it('returns markets successfully', async () => {
const request = new NextRequest('http://localhost/api/markets')
const response = await GET(request)
const data = await response.json()
expect(response.status).toBe(200)
expect(data.success).toBe(true)
expect(Array.isArray(data.data)).toBe(true)
})
it('validates query parameters', async () => {
const request = new NextRequest('http://localhost/api/markets?limit=invalid')
const response = await GET(request)
expect(response.status).toBe(400)
})
it('handles database errors gracefully', async () => {
// Mock database failure
const request = new NextRequest('http://localhost/api/markets')
// Test error handling
})
})
```
### E2E Test Pattern (Playwright)
```typescript
import { test, expect } from '@playwright/test'
test('user can search and filter markets', async ({ page }) => {
// Navigate to markets page
await page.goto('/')
await page.click('a[href="/markets"]')
// Verify page loaded
await expect(page.locator('h1')).toContainText('Markets')
// Search for markets
await page.fill('input[placeholder="Search markets"]', 'election')
// Wait for debounce and results
await page.waitForTimeout(600)
// Verify search results displayed
const results = page.locator('[data-testid="market-card"]')
await expect(results).toHaveCount(5, { timeout: 5000 })
// Verify results contain search term
const firstResult = results.first()
await expect(firstResult).toContainText('election', { ignoreCase: true })
// Filter by status
await page.click('button:has-text("Active")')
// Verify filtered results
await expect(results).toHaveCount(3)
})
test('user can create a new market', async ({ page }) => {
// Login first
await page.goto('/creator-dashboard')
// Fill market creation form
await page.fill('input[name="name"]', 'Test Market')
await page.fill('textarea[name="description"]', 'Test description')
await page.fill('input[name="endDate"]', '2025-12-31')
// Submit form
await page.click('button[type="submit"]')
// Verify success message
await expect(page.locator('text=Market created successfully')).toBeVisible()
// Verify redirect to market page
await expect(page).toHaveURL(/\/markets\/test-market/)
})
```
## Test File Organization
```
src/
├── components/
│ ├── Button/
│ │ ├── Button.tsx
│ │ ├── Button.test.tsx # Unit tests
│ │ └── Button.stories.tsx # Storybook
│ └── MarketCard/
│ ├── MarketCard.tsx
│ └── MarketCard.test.tsx
├── app/
│ └── api/
│ └── markets/
│ ├── route.ts
│ └── route.test.ts # Integration tests
└── e2e/
├── markets.spec.ts # E2E tests
├── trading.spec.ts
└── auth.spec.ts
```
## Mocking External Services
### Supabase Mock
```typescript
jest.mock('@/lib/supabase', () => ({
supabase: {
from: jest.fn(() => ({
select: jest.fn(() => ({
eq: jest.fn(() => Promise.resolve({
data: [{ id: 1, name: 'Test Market' }],
error: null
}))
}))
}))
}
}))
```
### Redis Mock
```typescript
jest.mock('@/lib/redis', () => ({
searchMarketsByVector: jest.fn(() => Promise.resolve([
{ slug: 'test-market', similarity_score: 0.95 }
])),
checkRedisHealth: jest.fn(() => Promise.resolve({ connected: true }))
}))
```
### OpenAI Mock
```typescript
jest.mock('@/lib/openai', () => ({
generateEmbedding: jest.fn(() => Promise.resolve(
new Array(1536).fill(0.1) // Mock 1536-dim embedding
))
}))
```
## Test Coverage Verification
### Run Coverage Report
```bash
npm run test:coverage
```
### Coverage Thresholds
```json
{
"jest": {
"coverageThresholds": {
"global": {
"branches": 80,
"functions": 80,
"lines": 80,
"statements": 80
}
}
}
}
```
## Common Testing Mistakes to Avoid
### FAIL: WRONG: Testing Implementation Details
```typescript
// Don't test internal state
expect(component.state.count).toBe(5)
```
### PASS: CORRECT: Test User-Visible Behavior
```typescript
// Test what users see
expect(screen.getByText('Count: 5')).toBeInTheDocument()
```
### FAIL: WRONG: Brittle Selectors
```typescript
// Breaks easily
await page.click('.css-class-xyz')
```
### PASS: CORRECT: Semantic Selectors
```typescript
// Resilient to changes
await page.click('button:has-text("Submit")')
await page.click('[data-testid="submit-button"]')
```
### FAIL: WRONG: No Test Isolation
```typescript
// Tests depend on each other
test('creates user', () => { /* ... */ })
test('updates same user', () => { /* depends on previous test */ })
```
### PASS: CORRECT: Independent Tests
```typescript
// Each test sets up its own data
test('creates user', () => {
const user = createTestUser()
// Test logic
})
test('updates user', () => {
const user = createTestUser()
// Update logic
})
```
## Continuous Testing
### Watch Mode During Development
```bash
npm test -- --watch
# Tests run automatically on file changes
```
### Pre-Commit Hook
```bash
# Runs before every commit
npm test && npm run lint
```
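One way to wire this up is with husky; a minimal sketch assuming the v9+ layout (adjust paths and scripts to your repo):
```bash
npm install --save-dev husky
npx husky init
echo "npm test && npm run lint" > .husky/pre-commit
```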
### CI/CD Integration
```yaml
# GitHub Actions
- name: Run Tests
run: npm test -- --coverage
- name: Upload Coverage
uses: codecov/codecov-action@v3
```
## Best Practices
1. **Write Tests First** - Always TDD
2. **One Assert Per Test** - Focus on single behavior
3. **Descriptive Test Names** - Explain what's tested
4. **Arrange-Act-Assert** - Clear test structure
5. **Mock External Dependencies** - Isolate unit tests
6. **Test Edge Cases** - Null, undefined, empty, large
7. **Test Error Paths** - Not just happy paths
8. **Keep Tests Fast** - Unit tests < 50ms each
9. **Clean Up After Tests** - No side effects
10. **Review Coverage Reports** - Identify gaps
## Success Metrics
- 80%+ code coverage achieved
- All tests passing (green)
- No skipped or disabled tests
- Fast test execution (< 30s for unit tests)
- E2E tests cover critical user flows
- Tests catch bugs before production
---
**Remember**: Tests are not optional. They are the safety net that enables confident refactoring, rapid development, and production reliability.

View File

@ -1,7 +0,0 @@
interface:
display_name: "TDD Workflow"
short_description: "Test-driven development with 80%+ coverage"
brand_color: "#22C55E"
default_prompt: "Follow TDD: write tests first, implement, verify 80%+ coverage"
policy:
allow_implicit_invocation: true

View File

@ -1,126 +0,0 @@
---
name: verification-loop
description: "A comprehensive verification system for Claude Code sessions."
origin: ECC
---
# Verification Loop Skill
A comprehensive verification system for Claude Code sessions.
## When to Use
Invoke this skill:
- After completing a feature or significant code change
- Before creating a PR
- When you want to ensure quality gates pass
- After refactoring
## Verification Phases
### Phase 1: Build Verification
```bash
# Check if project builds
npm run build 2>&1 | tail -20
# OR
pnpm build 2>&1 | tail -20
```
If build fails, STOP and fix before continuing.
### Phase 2: Type Check
```bash
# TypeScript projects
npx tsc --noEmit 2>&1 | head -30
# Python projects
pyright . 2>&1 | head -30
```
Report all type errors. Fix critical ones before continuing.
### Phase 3: Lint Check
```bash
# JavaScript/TypeScript
npm run lint 2>&1 | head -30
# Python
ruff check . 2>&1 | head -30
```
### Phase 4: Test Suite
```bash
# Run tests with coverage
npm run test -- --coverage 2>&1 | tail -50
# Check coverage threshold
# Target: 80% minimum
```
Report:
- Total tests: X
- Passed: X
- Failed: X
- Coverage: X%
### Phase 5: Security Scan
```bash
# Check for secrets
grep -rn "sk-" --include="*.ts" --include="*.js" . 2>/dev/null | head -10
grep -rn "api_key" --include="*.ts" --include="*.js" . 2>/dev/null | head -10
# Check for console.log
grep -rn "console.log" --include="*.ts" --include="*.tsx" src/ 2>/dev/null | head -10
```
### Phase 6: Diff Review
```bash
# Show what changed
git diff --stat
git diff HEAD~1 --name-only
```
Review each changed file for:
- Unintended changes
- Missing error handling
- Potential edge cases
## Output Format
After running all phases, produce a verification report:
```
VERIFICATION REPORT
==================
Build: [PASS/FAIL]
Types: [PASS/FAIL] (X errors)
Lint: [PASS/FAIL] (X warnings)
Tests: [PASS/FAIL] (X/Y passed, Z% coverage)
Security: [PASS/FAIL] (X issues)
Diff: [X files changed]
Overall: [READY/NOT READY] for PR
Issues to Fix:
1. ...
2. ...
```
## Continuous Mode
For long sessions, run verification every 15 minutes or after major changes:
```markdown
Set a mental checkpoint:
- After completing each function
- After finishing a component
- Before moving to next task
Run: /verify
```
## Integration with Hooks
This skill complements PostToolUse hooks but provides deeper verification.
Hooks catch issues immediately; this skill provides comprehensive review.

View File

@ -1,7 +0,0 @@
interface:
display_name: "Verification Loop"
short_description: "Build, test, lint, typecheck verification"
brand_color: "#10B981"
default_prompt: "Run verification: build, test, lint, typecheck, security"
policy:
allow_implicit_invocation: true

View File

@ -1,308 +0,0 @@
---
name: video-editing
description: AI-assisted video editing workflows for cutting, structuring, and augmenting real footage. Covers the full pipeline from raw capture through FFmpeg, Remotion, ElevenLabs, fal.ai, and final polish in Descript or CapCut. Use when the user wants to edit video, cut footage, create vlogs, or build video content.
origin: ECC
---
# Video Editing
AI-assisted editing for real footage. Not generation from prompts. Editing existing video fast.
## When to Activate
- User wants to edit, cut, or structure video footage
- Turning long recordings into short-form content
- Building vlogs, tutorials, or demo videos from raw capture
- Adding overlays, subtitles, music, or voiceover to existing video
- Reframing video for different platforms (YouTube, TikTok, Instagram)
- User says "edit video", "cut this footage", "make a vlog", or "video workflow"
## Core Thesis
AI video editing is useful when you stop asking it to create the whole video and start using it to compress, structure, and augment real footage. The value is not generation. The value is compression.
## The Pipeline
```
Screen Studio / raw footage
→ Claude / Codex
→ FFmpeg
→ Remotion
→ ElevenLabs / fal.ai
→ Descript or CapCut
```
Each layer has a specific job. Do not skip layers. Do not try to make one tool do everything.
## Layer 1: Capture (Screen Studio / Raw Footage)
Collect the source material:
- **Screen Studio**: polished screen recordings for app demos, coding sessions, browser workflows
- **Raw camera footage**: vlog footage, interviews, event recordings
- **Desktop capture via VideoDB**: session recording with real-time context (see `videodb` skill)
Output: raw files ready for organization.
## Layer 2: Organization (Claude / Codex)
Use Claude Code or Codex to:
- **Transcribe and label**: generate transcript, identify topics and themes
- **Plan structure**: decide what stays, what gets cut, what order works
- **Identify dead sections**: find pauses, tangents, repeated takes
- **Generate edit decision list**: timestamps for cuts, segments to keep
- **Scaffold FFmpeg and Remotion code**: generate the commands and compositions
```
Example prompt:
"Here's the transcript of a 4-hour recording. Identify the 8 strongest segments
for a 24-minute vlog. Give me FFmpeg cut commands for each segment."
```
This layer is about structure, not final creative taste.
## Layer 3: Deterministic Cuts (FFmpeg)
FFmpeg handles the boring but critical work: splitting, trimming, concatenating, and preprocessing.
### Extract segment by timestamp
```bash
ffmpeg -i raw.mp4 -ss 00:12:30 -to 00:15:45 -c copy segment_01.mp4
```
### Batch cut from edit decision list
```bash
#!/bin/bash
# cuts.txt: start,end,label
mkdir -p segments
while IFS=, read -r start end label; do
  # -nostdin keeps ffmpeg from swallowing the rest of cuts.txt via stdin
  ffmpeg -nostdin -i raw.mp4 -ss "$start" -to "$end" -c copy "segments/${label}.mp4"
done < cuts.txt
```
### Concatenate segments
```bash
# Create file list
for f in segments/*.mp4; do echo "file '$f'"; done > concat.txt
ffmpeg -f concat -safe 0 -i concat.txt -c copy assembled.mp4
```
### Create proxy for faster editing
```bash
ffmpeg -i raw.mp4 -vf "scale=960:-2" -c:v libx264 -preset ultrafast -crf 28 proxy.mp4
```
### Extract audio for transcription
```bash
ffmpeg -i raw.mp4 -vn -acodec pcm_s16le -ar 16000 audio.wav
```
### Normalize audio levels
```bash
ffmpeg -i segment.mp4 -af loudnorm=I=-16:TP=-1.5:LRA=11 -c:v copy normalized.mp4
```
## Layer 4: Programmable Composition (Remotion)
Remotion turns editing problems into composable code. Use it for things that traditional editors make painful:
### When to use Remotion
- Overlays: text, images, branding, lower thirds
- Data visualizations: charts, stats, animated numbers
- Motion graphics: transitions, explainer animations
- Composable scenes: reusable templates across videos
- Product demos: annotated screenshots, UI highlights
### Basic Remotion composition
```tsx
import React from "react";
import { AbsoluteFill, Sequence, Video, useCurrentFrame } from "remotion";
export const VlogComposition: React.FC = () => {
const frame = useCurrentFrame(); // current frame, available for animating overlays
return (
<AbsoluteFill>
{/* Main footage */}
<Sequence from={0} durationInFrames={300}>
<Video src="/segments/intro.mp4" />
</Sequence>
{/* Title overlay */}
<Sequence from={30} durationInFrames={90}>
<AbsoluteFill style={{
justifyContent: "center",
alignItems: "center",
}}>
<h1 style={{
fontSize: 72,
color: "white",
textShadow: "2px 2px 8px rgba(0,0,0,0.8)",
}}>
The AI Editing Stack
</h1>
</AbsoluteFill>
</Sequence>
{/* Next segment */}
<Sequence from={300} durationInFrames={450}>
<Video src="/segments/demo.mp4" />
</Sequence>
</AbsoluteFill>
);
};
```
### Render output
```bash
npx remotion render src/index.ts VlogComposition output.mp4
```
See the [Remotion docs](https://www.remotion.dev/docs) for detailed patterns and API reference.
## Layer 5: Generated Assets (ElevenLabs / fal.ai)
Generate only what you need. Do not generate the whole video.
### Voiceover with ElevenLabs
```python
import os
import requests
voice_id = "your-voice-id"  # placeholder: use a real ElevenLabs voice ID
resp = requests.post(
f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
headers={
"xi-api-key": os.environ["ELEVENLABS_API_KEY"],
"Content-Type": "application/json"
},
json={
"text": "Your narration text here",
"model_id": "eleven_turbo_v2_5",
"voice_settings": {"stability": 0.5, "similarity_boost": 0.75}
}
)
with open("voiceover.mp3", "wb") as f:
f.write(resp.content)
```
### Music and SFX with fal.ai
Use the `fal-ai-media` skill for:
- Background music generation
- Sound effects (ThinkSound model for video-to-audio)
- Transition sounds
### Generated visuals with fal.ai
Use for insert shots, thumbnails, or b-roll that doesn't exist:
```
generate(model_name: "fal-ai/nano-banana-pro", input: {
"prompt": "professional thumbnail for tech vlog, dark background, code on screen",
"image_size": "landscape_16_9"
})
```
### VideoDB generative audio
If VideoDB is configured:
```python
voiceover = coll.generate_voice(text="Narration here", voice="alloy")
music = coll.generate_music(prompt="lo-fi background for coding vlog", duration=120)
sfx = coll.generate_sound_effect(prompt="subtle whoosh transition")
```
## Layer 6: Final Polish (Descript / CapCut)
The last layer is human. Use a traditional editor for:
- **Pacing**: adjust cuts that feel too fast or slow
- **Captions**: auto-generated, then manually cleaned
- **Color grading**: basic correction and mood
- **Final audio mix**: balance voice, music, and SFX levels
- **Export**: platform-specific formats and quality settings
This is where taste lives. AI clears the repetitive work. You make the final calls.
## Social Media Reframing
Different platforms need different aspect ratios:
| Platform | Aspect Ratio | Resolution |
|----------|-------------|------------|
| YouTube | 16:9 | 1920x1080 |
| TikTok / Reels | 9:16 | 1080x1920 |
| Instagram Feed | 1:1 | 1080x1080 |
| X / Twitter | 16:9 or 1:1 | 1280x720 or 720x720 |
### Reframe with FFmpeg
```bash
# 16:9 to 9:16 (center crop)
ffmpeg -i input.mp4 -vf "crop=ih*9/16:ih,scale=1080:1920" vertical.mp4
# 16:9 to 1:1 (center crop)
ffmpeg -i input.mp4 -vf "crop=ih:ih,scale=1080:1080" square.mp4
```
### Reframe with VideoDB
```python
# Smart reframe (AI-guided subject tracking)
reframed = video.reframe(start=0, end=60, target="vertical", mode=ReframeMode.smart)
```
## Scene Detection and Auto-Cut
### FFmpeg scene detection
```bash
# Detect scene changes (threshold 0.3 = moderate sensitivity)
ffmpeg -i input.mp4 -vf "select='gt(scene,0.3)',showinfo" -vsync vfr -f null - 2>&1 | grep showinfo
```
### Silence detection for auto-cut
```bash
# Find silent segments (useful for cutting dead air)
ffmpeg -i input.mp4 -af silencedetect=noise=-30dB:d=2 -f null - 2>&1 | grep silence
```
### Highlight extraction
Use Claude to analyze transcript + scene timestamps:
```
"Given this transcript with timestamps and these scene change points,
identify the 5 most engaging 30-second clips for social media."
```
## What Each Tool Does Best
| Tool | Strength | Weakness |
|------|----------|----------|
| Claude / Codex | Organization, planning, code generation | Not the creative taste layer |
| FFmpeg | Deterministic cuts, batch processing, format conversion | No visual editing UI |
| Remotion | Programmable overlays, composable scenes, reusable templates | Learning curve for non-devs |
| Screen Studio | Polished screen recordings immediately | Only screen capture |
| ElevenLabs | Voice, narration, music, SFX | Not the center of the workflow |
| Descript / CapCut | Final pacing, captions, polish | Manual, not automatable |
## Key Principles
1. **Edit, don't generate.** This workflow is for cutting real footage, not creating from prompts.
2. **Structure before style.** Get the story right in Layer 2 before touching anything visual.
3. **FFmpeg is the backbone.** Boring but critical. Where long footage becomes manageable.
4. **Remotion for repeatability.** If you'll do it more than once, make it a Remotion component.
5. **Generate selectively.** Only use AI generation for assets that don't exist, not for everything.
6. **Taste is the last layer.** AI clears repetitive work. You make the final creative calls.
## Related Skills
- `fal-ai-media` — AI image, video, and audio generation
- `videodb` — Server-side video processing, indexing, and streaming
- `content-engine` — Platform-native content distribution

View File

@ -1,7 +0,0 @@
interface:
display_name: "Video Editing"
short_description: "AI-assisted video editing for real footage"
brand_color: "#EF4444"
default_prompt: "Edit video using AI-assisted pipeline: organize, cut, compose, generate assets, polish"
policy:
allow_implicit_invocation: true

View File

@ -1,230 +0,0 @@
---
name: x-api
description: X/Twitter API integration for posting tweets, threads, reading timelines, search, and analytics. Covers OAuth auth patterns, rate limits, and platform-native content posting. Use when the user wants to interact with X programmatically.
origin: ECC
---
# X API
Programmatic interaction with X (Twitter) for posting, reading, searching, and analytics.
## When to Activate
- User wants to post tweets or threads programmatically
- Reading timeline, mentions, or user data from X
- Searching X for content, trends, or conversations
- Building X integrations or bots
- Analytics and engagement tracking
- User says "post to X", "tweet", "X API", or "Twitter API"
## Authentication
### OAuth 2.0 Bearer Token (App-Only)
Best for: read-heavy operations, search, public data.
```bash
# Environment setup
export X_BEARER_TOKEN="your-bearer-token"
```
```python
import os
import requests
bearer = os.environ["X_BEARER_TOKEN"]
headers = {"Authorization": f"Bearer {bearer}"}
# Search recent tweets
resp = requests.get(
"https://api.x.com/2/tweets/search/recent",
headers=headers,
params={"query": "claude code", "max_results": 10}
)
tweets = resp.json()
```
### OAuth 1.0a (User Context)
Required for: posting tweets, managing account, DMs, and any write flow.
```bash
# Environment setup — source before use
export X_CONSUMER_KEY="your-consumer-key"
export X_CONSUMER_SECRET="your-consumer-secret"
export X_ACCESS_TOKEN="your-access-token"
export X_ACCESS_TOKEN_SECRET="your-access-token-secret"
```
Legacy aliases such as `X_API_KEY`, `X_API_SECRET`, and `X_ACCESS_SECRET` may exist in older setups. Prefer the `X_CONSUMER_*` and `X_ACCESS_TOKEN_SECRET` names when documenting or wiring new flows.
```python
import os
from requests_oauthlib import OAuth1Session
oauth = OAuth1Session(
os.environ["X_CONSUMER_KEY"],
client_secret=os.environ["X_CONSUMER_SECRET"],
resource_owner_key=os.environ["X_ACCESS_TOKEN"],
resource_owner_secret=os.environ["X_ACCESS_TOKEN_SECRET"],
)
```
## Core Operations
### Post a Tweet
```python
resp = oauth.post(
"https://api.x.com/2/tweets",
json={"text": "Hello from Claude Code"}
)
resp.raise_for_status()
tweet_id = resp.json()["data"]["id"]
```
### Post a Thread
```python
def post_thread(oauth, tweets: list[str]) -> list[str]:
ids = []
reply_to = None
for text in tweets:
payload = {"text": text}
if reply_to:
payload["reply"] = {"in_reply_to_tweet_id": reply_to}
resp = oauth.post("https://api.x.com/2/tweets", json=payload)
tweet_id = resp.json()["data"]["id"]
ids.append(tweet_id)
reply_to = tweet_id
return ids
```
### Read User Timeline
```python
resp = requests.get(
f"https://api.x.com/2/users/{user_id}/tweets",
headers=headers,
params={
"max_results": 10,
"tweet.fields": "created_at,public_metrics",
}
)
```
### Search Tweets
```python
resp = requests.get(
"https://api.x.com/2/tweets/search/recent",
headers=headers,
params={
"query": "from:affaanmustafa -is:retweet",
"max_results": 10,
"tweet.fields": "public_metrics,created_at",
}
)
```
### Pull Recent Original Posts for Voice Modeling
```python
resp = requests.get(
"https://api.x.com/2/tweets/search/recent",
headers=headers,
params={
"query": "from:affaanmustafa -is:retweet -is:reply",
"max_results": 25,
"tweet.fields": "created_at,public_metrics",
}
)
voice_samples = resp.json()
```
### Get User by Username
```python
resp = requests.get(
"https://api.x.com/2/users/by/username/affaanmustafa",
headers=headers,
params={"user.fields": "public_metrics,description,created_at"}
)
```
### Upload Media and Post
```python
# Media upload uses v1.1 endpoint
# Step 1: Upload media
media_resp = oauth.post(
"https://upload.twitter.com/1.1/media/upload.json",
files={"media": open("image.png", "rb")}
)
media_id = media_resp.json()["media_id_string"]
# Step 2: Post with media
resp = oauth.post(
"https://api.x.com/2/tweets",
json={"text": "Check this out", "media": {"media_ids": [media_id]}}
)
```
## Rate Limits
X API rate limits vary by endpoint, auth method, and account tier, and they change over time. Always:
- Check the current X developer docs before hardcoding assumptions
- Read `x-rate-limit-remaining` and `x-rate-limit-reset` headers at runtime
- Back off automatically instead of relying on static tables in code
```python
import time
remaining = int(resp.headers.get("x-rate-limit-remaining", 0))
if remaining < 5:
reset = int(resp.headers.get("x-rate-limit-reset", 0))
wait = max(0, reset - int(time.time()))
print(f"Rate limit approaching. Resets in {wait}s")
```
## Error Handling
```python
resp = oauth.post("https://api.x.com/2/tweets", json={"text": content})
if resp.status_code == 201:
return resp.json()["data"]["id"]
elif resp.status_code == 429:
reset = int(resp.headers["x-rate-limit-reset"])
raise Exception(f"Rate limited. Resets at {reset}")
elif resp.status_code == 403:
raise Exception(f"Forbidden: {resp.json().get('detail', 'check permissions')}")
else:
raise Exception(f"X API error {resp.status_code}: {resp.text}")
```
## Security
- **Never hardcode tokens.** Use environment variables or `.env` files.
- **Never commit `.env` files.** Add to `.gitignore`.
- **Rotate tokens** if exposed. Regenerate at developer.x.com.
- **Use read-only tokens** when write access is not needed.
- **Store OAuth secrets securely** — not in source code or logs.
## Integration with Content Engine
Use `brand-voice` plus `content-engine` to generate platform-native content, then post via X API:
1. Pull recent original posts when voice matching matters
2. Build or reuse a `VOICE PROFILE`
3. Generate content with `content-engine` in X-native format
4. Validate length and thread structure
5. Return the draft for approval unless the user explicitly asked to post now
6. Post via X API only after approval
7. Track engagement via public_metrics
## Related Skills
- `brand-voice` — Build a reusable voice profile from real X and site/source material
- `content-engine` — Generate platform-native content for X
- `crosspost` — Distribute content across X, LinkedIn, and other platforms
- `connections-optimizer` — Reorganize the X graph before drafting network-driven outreach

View File

@ -1,7 +0,0 @@
interface:
display_name: "X API"
short_description: "X/Twitter API integration for posting, threads, and analytics"
brand_color: "#000000"
default_prompt: "Use X API to post tweets, threads, or retrieve timeline and search data"
policy:
allow_implicit_invocation: true

View File

@ -120,7 +120,7 @@ Assume the validator is hostile and literal.
## The `hooks` Field: DO NOT ADD
> WARNING: **CRITICAL:** Do NOT add a `"hooks"` field to `plugin.json`. This is enforced by a regression test.
> ⚠️ **CRITICAL:** Do NOT add a `"hooks"` field to `plugin.json`. This is enforced by a regression test.
### Why This Matters

View File

@ -3,15 +3,3 @@
If you plan to edit `.claude-plugin/plugin.json`, be aware that the Claude plugin validator enforces several **undocumented but strict constraints** that can cause installs to fail with vague errors (for example, `agents: Invalid input`). In particular, component fields must be arrays, `agents` must use explicit file paths rather than directories, and a `version` field is required for reliable validation and installation.
These constraints are not obvious from public examples and have caused repeated installation failures in the past. They are documented in detail in `.claude-plugin/PLUGIN_SCHEMA_NOTES.md`, which should be reviewed before making any changes to the plugin manifest.
### Custom Endpoints and Gateways
ECC does not override Claude Code transport settings. If Claude Code is configured to run through an official LLM gateway or a compatible custom endpoint, the plugin continues to work because hooks, skills, and any retained legacy command shims execute locally after the CLI starts successfully.
Use Claude Code's own environment/configuration for transport selection, for example:
```bash
export ANTHROPIC_BASE_URL=https://your-gateway.example.com
export ANTHROPIC_AUTH_TOKEN=your-token
claude
```

View File

@ -2,7 +2,7 @@
"name": "everything-claude-code", "name": "everything-claude-code",
"owner": { "owner": {
"name": "Affaan Mustafa", "name": "Affaan Mustafa",
"email": "me@affaanmustafa.com" "email": "affaan@example.com"
}, },
"metadata": { "metadata": {
"description": "Battle-tested Claude Code configurations from an Anthropic hackathon winner" "description": "Battle-tested Claude Code configurations from an Anthropic hackathon winner"
@ -11,13 +11,11 @@
{ {
"name": "everything-claude-code", "name": "everything-claude-code",
"source": "./", "source": "./",
"description": "The most comprehensive Claude Code plugin — 38 agents, 156 skills, 72 legacy command shims, selective install profiles, and production-ready hooks for TDD, security scanning, code review, and continuous learning", "description": "Complete collection of agents, skills, hooks, commands, and rules evolved over 10+ months of intensive daily use",
"version": "1.10.0",
"author": { "author": {
"name": "Affaan Mustafa", "name": "Affaan Mustafa"
"email": "me@affaanmustafa.com"
}, },
"homepage": "https://ecc.tools", "homepage": "https://github.com/affaan-m/everything-claude-code",
"repository": "https://github.com/affaan-m/everything-claude-code", "repository": "https://github.com/affaan-m/everything-claude-code",
"license": "MIT", "license": "MIT",
"keywords": [ "keywords": [
@ -40,8 +38,7 @@
"code-review", "code-review",
"security", "security",
"best-practices" "best-practices"
], ]
"strict": false
} }
] ]
} }

View File

@ -1,12 +1,12 @@
{
  "name": "everything-claude-code",
- "version": "1.10.0",
- "description": "Battle-tested Claude Code plugin for engineering teams — 38 agents, 156 skills, 72 legacy command shims, production-ready hooks, and selective install workflows evolved through continuous real-world use",
+ "version": "1.2.0",
+ "description": "Complete collection of battle-tested Claude Code configs from an Anthropic hackathon winner - agents, skills, hooks, and rules evolved over 10+ months of intensive daily use",
  "author": {
    "name": "Affaan Mustafa",
    "url": "https://x.com/affaanmustafa"
  },
- "homepage": "https://ecc.tools",
+ "homepage": "https://github.com/affaan-m/everything-claude-code",
  "repository": "https://github.com/affaan-m/everything-claude-code",
  "license": "MIT",
  "keywords": [
@ -22,46 +22,20 @@
    "automation",
    "best-practices"
  ],
+ "skills": ["./skills/", "./commands/"],
  "agents": [
    "./agents/architect.md",
    "./agents/build-error-resolver.md",
-   "./agents/chief-of-staff.md",
    "./agents/code-reviewer.md",
-   "./agents/cpp-build-resolver.md",
-   "./agents/cpp-reviewer.md",
-   "./agents/csharp-reviewer.md",
-   "./agents/dart-build-resolver.md",
    "./agents/database-reviewer.md",
    "./agents/doc-updater.md",
-   "./agents/docs-lookup.md",
    "./agents/e2e-runner.md",
-   "./agents/flutter-reviewer.md",
-   "./agents/gan-evaluator.md",
-   "./agents/gan-generator.md",
-   "./agents/gan-planner.md",
    "./agents/go-build-resolver.md",
    "./agents/go-reviewer.md",
-   "./agents/harness-optimizer.md",
-   "./agents/healthcare-reviewer.md",
-   "./agents/java-build-resolver.md",
-   "./agents/java-reviewer.md",
-   "./agents/kotlin-build-resolver.md",
-   "./agents/kotlin-reviewer.md",
-   "./agents/loop-operator.md",
-   "./agents/opensource-forker.md",
-   "./agents/opensource-packager.md",
-   "./agents/opensource-sanitizer.md",
-   "./agents/performance-optimizer.md",
    "./agents/planner.md",
    "./agents/python-reviewer.md",
-   "./agents/pytorch-build-resolver.md",
    "./agents/refactor-cleaner.md",
-   "./agents/rust-build-resolver.md",
-   "./agents/rust-reviewer.md",
    "./agents/security-reviewer.md",
-   "./agents/tdd-guide.md",
-   "./agents/typescript-reviewer.md"
-  ],
-  "skills": ["./skills/"],
-  "commands": ["./commands/"]
+   "./agents/tdd-guide.md"
+ ]
}

View File

@ -1,39 +0,0 @@
---
name: add-language-rules
description: Workflow command scaffold for add-language-rules in everything-claude-code.
allowed_tools: ["Bash", "Read", "Write", "Grep", "Glob"]
---
# /add-language-rules
Use this workflow when working on **add-language-rules** in `everything-claude-code`.
## Goal
Adds a new programming language to the rules system, including coding style, hooks, patterns, security, and testing guidelines.
## Common Files
- `rules/*/coding-style.md`
- `rules/*/hooks.md`
- `rules/*/patterns.md`
- `rules/*/security.md`
- `rules/*/testing.md`
## Suggested Sequence
1. Understand the current state and failure mode before editing.
2. Make the smallest coherent change that satisfies the workflow goal.
3. Run the most relevant verification for touched files.
4. Summarize what changed and what still needs review.
## Typical Commit Signals
- Create a new directory under rules/{language}/
- Add coding-style.md, hooks.md, patterns.md, security.md, and testing.md files with language-specific content
- Optionally reference or link to related skills
## Notes
- Treat this as a scaffold, not a hard-coded script.
- Update the command if the workflow evolves materially.

View File

@ -1,36 +0,0 @@
---
name: database-migration
description: Workflow command scaffold for database-migration in everything-claude-code.
allowed_tools: ["Bash", "Read", "Write", "Grep", "Glob"]
---
# /database-migration
Use this workflow when working on **database-migration** in `everything-claude-code`.
## Goal
Database schema changes with migration files
## Common Files
- `**/schema.*`
- `migrations/*`
## Suggested Sequence
1. Understand the current state and failure mode before editing.
2. Make the smallest coherent change that satisfies the workflow goal.
3. Run the most relevant verification for touched files.
4. Summarize what changed and what still needs review.
## Typical Commit Signals
- Create migration file
- Update schema definitions
- Generate/update types
## Notes
- Treat this as a scaffold, not a hard-coded script.
- Update the command if the workflow evolves materially.

View File

@ -1,38 +0,0 @@
---
name: feature-development
description: Workflow command scaffold for feature-development in everything-claude-code.
allowed_tools: ["Bash", "Read", "Write", "Grep", "Glob"]
---
# /feature-development
Use this workflow when working on **feature-development** in `everything-claude-code`.
## Goal
Standard feature implementation workflow
## Common Files
- `manifests/*`
- `schemas/*`
- `**/*.test.*`
- `**/api/**`
## Suggested Sequence
1. Understand the current state and failure mode before editing.
2. Make the smallest coherent change that satisfies the workflow goal.
3. Run the most relevant verification for touched files.
4. Summarize what changed and what still needs review.
## Typical Commit Signals
- Add feature implementation
- Add tests for feature
- Update documentation
## Notes
- Treat this as a scaffold, not a hard-coded script.
- Update the command if the workflow evolves materially.

View File

@ -1,334 +0,0 @@
{
"version": "1.3",
"schemaVersion": "1.0",
"generatedBy": "ecc-tools",
"generatedAt": "2026-03-20T12:07:36.496Z",
"repo": "https://github.com/affaan-m/everything-claude-code",
"profiles": {
"requested": "full",
"recommended": "full",
"effective": "full",
"requestedAlias": "full",
"recommendedAlias": "full",
"effectiveAlias": "full"
},
"requestedProfile": "full",
"profile": "full",
"recommendedProfile": "full",
"effectiveProfile": "full",
"tier": "enterprise",
"requestedComponents": [
"repo-baseline",
"workflow-automation",
"security-audits",
"research-tooling",
"team-rollout",
"governance-controls"
],
"selectedComponents": [
"repo-baseline",
"workflow-automation",
"security-audits",
"research-tooling",
"team-rollout",
"governance-controls"
],
"requestedAddComponents": [],
"requestedRemoveComponents": [],
"blockedRemovalComponents": [],
"tierFilteredComponents": [],
"requestedRootPackages": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"selectedRootPackages": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"requestedPackages": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"requestedAddPackages": [],
"requestedRemovePackages": [],
"selectedPackages": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"packages": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"blockedRemovalPackages": [],
"tierFilteredRootPackages": [],
"tierFilteredPackages": [],
"conflictingPackages": [],
"dependencyGraph": {
"runtime-core": [],
"workflow-pack": [
"runtime-core"
],
"agentshield-pack": [
"workflow-pack"
],
"research-pack": [
"workflow-pack"
],
"team-config-sync": [
"runtime-core"
],
"enterprise-controls": [
"team-config-sync"
]
},
"resolutionOrder": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"requestedModules": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"selectedModules": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"modules": [
"runtime-core",
"workflow-pack",
"agentshield-pack",
"research-pack",
"team-config-sync",
"enterprise-controls"
],
"managedFiles": [
".claude/skills/everything-claude-code/SKILL.md",
".agents/skills/everything-claude-code/SKILL.md",
".agents/skills/everything-claude-code/agents/openai.yaml",
".claude/identity.json",
".codex/config.toml",
".codex/AGENTS.md",
".codex/agents/explorer.toml",
".codex/agents/reviewer.toml",
".codex/agents/docs-researcher.toml",
".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml",
".claude/rules/everything-claude-code-guardrails.md",
".claude/research/everything-claude-code-research-playbook.md",
".claude/team/everything-claude-code-team-config.json",
".claude/enterprise/controls.md",
".claude/commands/database-migration.md",
".claude/commands/feature-development.md",
".claude/commands/add-language-rules.md"
],
"packageFiles": {
"runtime-core": [
".claude/skills/everything-claude-code/SKILL.md",
".agents/skills/everything-claude-code/SKILL.md",
".agents/skills/everything-claude-code/agents/openai.yaml",
".claude/identity.json",
".codex/config.toml",
".codex/AGENTS.md",
".codex/agents/explorer.toml",
".codex/agents/reviewer.toml",
".codex/agents/docs-researcher.toml",
".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml"
],
"agentshield-pack": [
".claude/rules/everything-claude-code-guardrails.md"
],
"research-pack": [
".claude/research/everything-claude-code-research-playbook.md"
],
"team-config-sync": [
".claude/team/everything-claude-code-team-config.json"
],
"enterprise-controls": [
".claude/enterprise/controls.md"
],
"workflow-pack": [
".claude/commands/database-migration.md",
".claude/commands/feature-development.md",
".claude/commands/add-language-rules.md"
]
},
"moduleFiles": {
"runtime-core": [
".claude/skills/everything-claude-code/SKILL.md",
".agents/skills/everything-claude-code/SKILL.md",
".agents/skills/everything-claude-code/agents/openai.yaml",
".claude/identity.json",
".codex/config.toml",
".codex/AGENTS.md",
".codex/agents/explorer.toml",
".codex/agents/reviewer.toml",
".codex/agents/docs-researcher.toml",
".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml"
],
"agentshield-pack": [
".claude/rules/everything-claude-code-guardrails.md"
],
"research-pack": [
".claude/research/everything-claude-code-research-playbook.md"
],
"team-config-sync": [
".claude/team/everything-claude-code-team-config.json"
],
"enterprise-controls": [
".claude/enterprise/controls.md"
],
"workflow-pack": [
".claude/commands/database-migration.md",
".claude/commands/feature-development.md",
".claude/commands/add-language-rules.md"
]
},
"files": [
{
"moduleId": "runtime-core",
"path": ".claude/skills/everything-claude-code/SKILL.md",
"description": "Repository-specific Claude Code skill generated from git history."
},
{
"moduleId": "runtime-core",
"path": ".agents/skills/everything-claude-code/SKILL.md",
"description": "Codex-facing copy of the generated repository skill."
},
{
"moduleId": "runtime-core",
"path": ".agents/skills/everything-claude-code/agents/openai.yaml",
"description": "Codex skill metadata so the repo skill appears cleanly in the skill interface."
},
{
"moduleId": "runtime-core",
"path": ".claude/identity.json",
"description": "Suggested identity.json baseline derived from repository conventions."
},
{
"moduleId": "runtime-core",
"path": ".codex/config.toml",
"description": "Repo-local Codex MCP and multi-agent baseline aligned with ECC defaults."
},
{
"moduleId": "runtime-core",
"path": ".codex/AGENTS.md",
"description": "Codex usage guide that points at the generated repo skill and workflow bundle."
},
{
"moduleId": "runtime-core",
"path": ".codex/agents/explorer.toml",
"description": "Read-only explorer role config for Codex multi-agent work."
},
{
"moduleId": "runtime-core",
"path": ".codex/agents/reviewer.toml",
"description": "Read-only reviewer role config focused on correctness and security."
},
{
"moduleId": "runtime-core",
"path": ".codex/agents/docs-researcher.toml",
"description": "Read-only docs researcher role config for API verification."
},
{
"moduleId": "runtime-core",
"path": ".claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml",
"description": "Continuous-learning instincts derived from repository patterns."
},
{
"moduleId": "agentshield-pack",
"path": ".claude/rules/everything-claude-code-guardrails.md",
"description": "Repository guardrails distilled from analysis for security and workflow review."
},
{
"moduleId": "research-pack",
"path": ".claude/research/everything-claude-code-research-playbook.md",
"description": "Research workflow playbook for source attribution and long-context tasks."
},
{
"moduleId": "team-config-sync",
"path": ".claude/team/everything-claude-code-team-config.json",
"description": "Team config scaffold that points collaborators at the shared ECC bundle."
},
{
"moduleId": "enterprise-controls",
"path": ".claude/enterprise/controls.md",
"description": "Enterprise governance scaffold for approvals, audit posture, and escalation."
},
{
"moduleId": "workflow-pack",
"path": ".claude/commands/database-migration.md",
"description": "Workflow command scaffold for database-migration."
},
{
"moduleId": "workflow-pack",
"path": ".claude/commands/feature-development.md",
"description": "Workflow command scaffold for feature-development."
},
{
"moduleId": "workflow-pack",
"path": ".claude/commands/add-language-rules.md",
"description": "Workflow command scaffold for add-language-rules."
}
],
"workflows": [
{
"command": "database-migration",
"path": ".claude/commands/database-migration.md"
},
{
"command": "feature-development",
"path": ".claude/commands/feature-development.md"
},
{
"command": "add-language-rules",
"path": ".claude/commands/add-language-rules.md"
}
],
"adapters": {
"claudeCode": {
"skillPath": ".claude/skills/everything-claude-code/SKILL.md",
"identityPath": ".claude/identity.json",
"commandPaths": [
".claude/commands/database-migration.md",
".claude/commands/feature-development.md",
".claude/commands/add-language-rules.md"
]
},
"codex": {
"configPath": ".codex/config.toml",
"agentsGuidePath": ".codex/AGENTS.md",
"skillPath": ".agents/skills/everything-claude-code/SKILL.md"
}
}
}

View File

@ -1,15 +0,0 @@
# Enterprise Controls
This is a starter governance file for enterprise ECC deployments.
## Baseline
- Repository: https://github.com/affaan-m/everything-claude-code
- Recommended profile: full
- Keep install manifests, audit allowlists, and Codex baselines under review.
## Approval Expectations
- Security-sensitive workflow changes require explicit reviewer acknowledgement.
- Audit suppressions must include a reason and the narrowest viable matcher.
- Generated skills should be reviewed before broad rollout to teams.

View File

@ -1,162 +0,0 @@
# Curated instincts for affaan-m/everything-claude-code
# Import with: /instinct-import .claude/homunculus/instincts/inherited/everything-claude-code-instincts.yaml
---
id: everything-claude-code-conventional-commits
trigger: "when making a commit in everything-claude-code"
confidence: 0.9
domain: git
source: repo-curation
source_repo: affaan-m/everything-claude-code
---
# Everything Claude Code Conventional Commits
## Action
Use conventional commit prefixes such as `feat:`, `fix:`, `docs:`, `test:`, `chore:`, and `refactor:`.
## Evidence
- Mainline history consistently uses conventional commit subjects.
- Release and changelog automation expect readable commit categorization.
---
id: everything-claude-code-commit-length
trigger: "when writing a commit subject in everything-claude-code"
confidence: 0.8
domain: git
source: repo-curation
source_repo: affaan-m/everything-claude-code
---
# Everything Claude Code Commit Length
## Action
Keep commit subjects concise and close to the repository norm of about 70 characters.
## Evidence
- Recent history clusters around ~70 characters, not ~50.
- Short, descriptive subjects read well in release notes and PR summaries.
---
id: everything-claude-code-js-file-naming
trigger: "when creating a new JavaScript or TypeScript module in everything-claude-code"
confidence: 0.85
domain: code-style
source: repo-curation
source_repo: affaan-m/everything-claude-code
---
# Everything Claude Code JS File Naming
## Action
Prefer camelCase for JavaScript and TypeScript module filenames, and keep skill or command directories in kebab-case.
## Evidence
- `scripts/` and test helpers mostly use camelCase module names.
- `skills/` and `commands/` directories use kebab-case consistently.
---
id: everything-claude-code-test-runner
trigger: "when adding or updating tests in everything-claude-code"
confidence: 0.9
domain: testing
source: repo-curation
source_repo: affaan-m/everything-claude-code
---
# Everything Claude Code Test Runner
## Action
Use the repository's existing Node-based test flow: targeted `*.test.js` files first, then `node tests/run-all.js` or `npm test` for broader verification.
## Evidence
- The repo uses `tests/run-all.js` as the central test orchestrator.
- Test files follow the `*.test.js` naming pattern across hook, CI, and integration coverage.
---
id: everything-claude-code-hooks-change-set
trigger: "when modifying hooks or hook-adjacent behavior in everything-claude-code"
confidence: 0.88
domain: workflow
source: repo-curation
source_repo: affaan-m/everything-claude-code
---
# Everything Claude Code Hooks Change Set
## Action
Update the hook script, its configuration, its tests, and its user-facing documentation together.
## Evidence
- Hook fixes routinely span `hooks/hooks.json`, `scripts/hooks/`, `tests/hooks/`, `tests/integration/`, and `hooks/README.md`.
- Partial hook changes are a common source of regressions and stale docs.
---
id: everything-claude-code-cross-platform-sync
trigger: "when shipping a user-visible feature across ECC surfaces"
confidence: 0.9
domain: workflow
source: repo-curation
source_repo: affaan-m/everything-claude-code
---
# Everything Claude Code Cross Platform Sync
## Action
Treat the root repo as the source of truth, then mirror shipped changes to `.cursor/`, `.codex/`, `.opencode/`, and `.agents/` only where the feature actually exists.
## Evidence
- ECC maintains multiple harness-specific surfaces with overlapping but not identical files.
- The safest workflow is root-first followed by explicit parity updates.
---
id: everything-claude-code-release-sync
trigger: "when preparing a release for everything-claude-code"
confidence: 0.86
domain: workflow
source: repo-curation
source_repo: affaan-m/everything-claude-code
---
# Everything Claude Code Release Sync
## Action
Keep package versions, plugin manifests, and release-facing docs synchronized before publishing.
## Evidence
- Release work spans `package.json`, `.claude-plugin/*`, `.opencode/package.json`, and release-note content.
- Version drift causes broken update paths and confusing install surfaces.
---
id: everything-claude-code-learning-curation
trigger: "when importing or evolving instincts for everything-claude-code"
confidence: 0.84
domain: workflow
source: repo-curation
source_repo: affaan-m/everything-claude-code
---
# Everything Claude Code Learning Curation
## Action
Prefer a small set of accurate instincts over bulk-generated, duplicated, or contradictory instincts.
## Evidence
- Auto-generated instinct dumps can duplicate rules, widen triggers too far, or preserve placeholder detector output.
- Curated instincts are easier to import, audit, and trust during continuous-learning workflows.

View File

@ -1,14 +0,0 @@
{
"version": "2.0",
"technicalLevel": "technical",
"preferredStyle": {
"verbosity": "minimal",
"codeComments": true,
"explanations": true
},
"domains": [
"javascript"
],
"suggestedBy": "ecc-tools-repo-analysis",
"createdAt": "2026-03-20T12:07:57.119Z"
}

View File

@ -1,21 +0,0 @@
# Everything Claude Code Research Playbook
Use this when the task is documentation-heavy, source-sensitive, or requires broad repository context.
## Defaults
- Prefer primary documentation and direct source links.
- Include concrete dates when facts may change over time.
- Keep a short evidence trail for each recommendation or conclusion.
## Suggested Flow
1. Inspect local code and docs first.
2. Browse only for unstable or external facts.
3. Summarize findings with file paths, commands, or links.
## Repo Signals
- Primary language: JavaScript
- Framework: Not detected
- Workflows detected: 10

View File

@ -1,34 +0,0 @@
# Everything Claude Code Guardrails
Generated by ECC Tools from repository history. Review before treating it as a hard policy file.
## Commit Workflow
- Prefer `conventional` commit messaging with prefixes such as fix, test, feat, docs.
- Keep new changes aligned with the existing pull-request and review flow already present in the repo.
## Architecture
- Preserve the current `hybrid` module organization.
- Respect the current test layout: `separate`.
## Code Style
- Use `camelCase` file naming.
- Prefer `relative` imports and `mixed` exports.
## ECC Defaults
- Current recommended install profile: `full`.
- Validate risky config changes in PRs and keep the install manifest in source control.
## Detected Workflows
- database-migration: Database schema changes with migration files
- feature-development: Standard feature implementation workflow
- add-language-rules: Adds a new programming language to the rules system, including coding style, hooks, patterns, security, and testing guidelines.
## Review Reminder
- Regenerate this bundle when repository conventions materially change.
- Keep suppressions narrow and auditable.

View File

@ -1,47 +0,0 @@
# Node.js Rules for everything-claude-code
> Project-specific rules for the ECC codebase. Extends common rules.
## Stack
- **Runtime**: Node.js >=18 (no transpilation, plain CommonJS)
- **Test runner**: `node tests/run-all.js` — individual files via `node tests/**/*.test.js`
- **Linter**: ESLint (`@eslint/js`, flat config)
- **Coverage**: c8
- **Lint**: markdownlint-cli for `.md` files
## File Conventions
- `scripts/` — Node.js utilities, hooks. CommonJS (`require`/`module.exports`)
- `agents/`, `commands/`, `skills/`, `rules/` — Markdown with YAML frontmatter
- `tests/` — Mirror the `scripts/` structure. Test files named `*.test.js`
- File naming: **lowercase with hyphens** (e.g. `session-start.js`, `post-edit-format.js`)
## Code Style
- CommonJS only — no ESM (`import`/`export`) unless file ends in `.mjs`
- No TypeScript — plain `.js` throughout
- Prefer `const` over `let`; never `var`
- Keep hook scripts under 200 lines — extract helpers to `scripts/lib/`
- All hooks must `exit 0` on non-critical errors (never block tool execution unexpectedly)
## Hook Development
- Hook scripts normally receive JSON on stdin, but hooks routed through `scripts/hooks/run-with-flags.js` can export `run(rawInput)` and let the wrapper handle parsing/gating
- Async hooks: mark `"async": true` in `settings.json` with a timeout ≤30s
- Blocking hooks (PreToolUse, stop): keep fast (<200ms), no network calls
- Use `run-with-flags.js` wrapper for all hooks so `ECC_HOOK_PROFILE` and `ECC_DISABLED_HOOKS` runtime gating works
- Always exit 0 on parse errors; log to stderr with `[HookName]` prefix (see the sketch after this list)
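A minimal sketch of a hook that follows the rules above, assuming the stdin-JSON contract and the `run(rawInput)` wrapper convention; the payload field used here (`tool_name`) and the file name are illustrative, not part of any documented interface:
```javascript
// scripts/hooks/example-hook.js (hypothetical name) -- sketch only.
// Assumes: JSON payload on stdin (or rawInput via run-with-flags.js),
// exit 0 on non-critical errors, stderr logging with a [HookName] prefix.
'use strict';

function run(rawInput) {
  let payload;
  try {
    payload = JSON.parse(rawInput);
  } catch (err) {
    // Never block tool execution on a parse failure.
    console.error(`[ExampleHook] ignoring unparseable input: ${err.message}`);
    process.exit(0);
  }

  // Illustrative check only; real hooks inspect whatever fields the harness provides.
  if (payload && payload.tool_name) {
    console.error(`[ExampleHook] observed tool: ${payload.tool_name}`);
  }
  process.exit(0);
}

// Allow direct invocation (stdin) as well as the run-with-flags.js wrapper.
if (require.main === module) {
  let input = '';
  process.stdin.on('data', (chunk) => { input += chunk; });
  process.stdin.on('end', () => run(input));
}

module.exports = { run };
```
Keep any heavy work out of the blocking path and lean on `run-with-flags.js` for profile gating.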
## Testing Requirements
- Run `node tests/run-all.js` before committing
- New scripts in `scripts/lib/` require a matching test in `tests/lib/`
- New hooks require at least one integration test in `tests/hooks/`
## Markdown / Agent Files
- Agents: YAML frontmatter with `name`, `description`, `tools`, `model`
- Skills: sections — When to Use, How It Works, Examples
- Commands: `description:` frontmatter line required
- Run `npx markdownlint-cli '**/*.md' --ignore node_modules` before committing

View File

@ -1,442 +0,0 @@
---
name: everything-claude-code-conventions
description: Development conventions and patterns for everything-claude-code. JavaScript project with conventional commits.
---
# Everything Claude Code Conventions
> Generated from [affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code) on 2026-03-20
## Overview
This skill teaches Claude the development patterns and conventions used in everything-claude-code.
## Tech Stack
- **Primary Language**: JavaScript
- **Architecture**: hybrid module organization
- **Test Location**: separate
## When to Use This Skill
Activate this skill when:
- Making changes to this repository
- Adding new features following established patterns
- Writing tests that match project conventions
- Creating commits with proper message format
## Commit Conventions
Follow these commit message conventions based on 500 analyzed commits.
### Commit Style: Conventional Commits
### Prefixes Used
- `fix`
- `test`
- `feat`
- `docs`
### Message Guidelines
- Average message length: ~65 characters
- Keep first line concise and descriptive
- Use imperative mood ("Add feature" not "Added feature")
*Commit message example*
```text
feat(rules): add C# language support
```
*Commit message example*
```text
chore(deps-dev): bump flatted (#675)
```
*Commit message example*
```text
fix: auto-detect ECC root from plugin cache when CLAUDE_PLUGIN_ROOT is unset (#547) (#691)
```
*Commit message example*
```text
docs: add Antigravity setup and usage guide (#552)
```
*Commit message example*
```text
merge: PR #529 — feat(skills): add documentation-lookup, bun-runtime, nextjs-turbopack; feat(agents): add rust-reviewer
```
*Commit message example*
```text
Revert "Add Kiro IDE support (.kiro/) (#548)"
```
*Commit message example*
```text
Add Kiro IDE support (.kiro/) (#548)
```
*Commit message example*
```text
feat: add block-no-verify hook for Claude Code and Cursor (#649)
```
## Architecture
### Project Structure: Single Package
This project uses **hybrid** module organization.
### Configuration Files
- `.github/workflows/ci.yml`
- `.github/workflows/maintenance.yml`
- `.github/workflows/monthly-metrics.yml`
- `.github/workflows/release.yml`
- `.github/workflows/reusable-release.yml`
- `.github/workflows/reusable-test.yml`
- `.github/workflows/reusable-validate.yml`
- `.opencode/package.json`
- `.opencode/tsconfig.json`
- `.prettierrc`
- `eslint.config.js`
- `package.json`
### Guidelines
- This project uses a hybrid organization
- Follow existing patterns when adding new code
## Code Style
### Language: JavaScript
### Naming Conventions
| Element | Convention |
|---------|------------|
| Files | camelCase |
| Functions | camelCase |
| Classes | PascalCase |
| Constants | SCREAMING_SNAKE_CASE |
### Import Style: Relative Imports
### Export Style: Mixed Style
*Preferred import style*
```typescript
// Use relative imports
import { Button } from '../components/Button'
import { useAuth } from './hooks/useAuth'
```
## Testing
### Test Framework
No specific test framework detected — use the repository's existing test patterns.
### File Pattern: `*.test.js`
### Test Types
- **Unit tests**: Test individual functions and components in isolation
- **Integration tests**: Test interactions between multiple components/services
### Coverage
This project has coverage reporting configured. Aim for 80%+ coverage.
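As a rough illustration of the `*.test.js` pattern, a self-contained test might look like the sketch below. It assumes plain `node:assert` and a hypothetical helper, since no framework is detected; follow whatever the existing files under `tests/` actually do.
```javascript
// tests/lib/slugify.test.js: hypothetical path and helper, shown only to
// illustrate the *.test.js naming pattern. Real tests should follow the
// structure expected by tests/run-all.js.
'use strict';

const assert = require('node:assert');

// Stand-in for a module under scripts/lib/; replace with a real require()
// such as require('../../scripts/lib/slugify') in an actual test.
function slugify(text) {
  return text.trim().toLowerCase().replace(/[^a-z0-9]+/g, '-');
}

assert.strictEqual(slugify('Add Language Rules'), 'add-language-rules');
assert.strictEqual(slugify('  Hooks & Skills  '), 'hooks-skills');

console.log('slugify.test.js passed');
```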
## Error Handling
### Error Handling Style: Try-Catch Blocks
*Standard error handling pattern*
```typescript
try {
const result = await riskyOperation()
return result
} catch (error) {
console.error('Operation failed:', error)
throw new Error('User-friendly message')
}
```
## Common Workflows
These workflows were detected from analyzing commit patterns.
### Database Migration
Database schema changes with migration files
**Frequency**: ~2 times per month
**Steps**:
1. Create migration file
2. Update schema definitions
3. Generate/update types
**Files typically involved**:
- `**/schema.*`
- `migrations/*`
**Example commit sequence**:
```
feat: implement --with/--without selective install flags (#679)
fix: sync catalog counts with filesystem (27 agents, 113 skills, 58 commands) (#693)
feat(rules): add Rust language rules (rebased #660) (#686)
```
### Feature Development
Standard feature implementation workflow
**Frequency**: ~22 times per month
**Steps**:
1. Add feature implementation
2. Add tests for feature
3. Update documentation
**Files typically involved**:
- `manifests/*`
- `schemas/*`
- `**/*.test.*`
- `**/api/**`
**Example commit sequence**:
```
feat(skills): add documentation-lookup, bun-runtime, nextjs-turbopack; feat(agents): add rust-reviewer
docs(skills): align documentation-lookup with CONTRIBUTING template; add cross-harness (Codex/Cursor) skill copies
fix: address PR review — skill template (When to use, How it works, Examples), bun.lock, next build note, rust-reviewer CI note, doc-lookup privacy/uncertainty
```
### Add Language Rules
Adds a new programming language to the rules system, including coding style, hooks, patterns, security, and testing guidelines.
**Frequency**: ~2 times per month
**Steps**:
1. Create a new directory under rules/{language}/
2. Add coding-style.md, hooks.md, patterns.md, security.md, and testing.md files with language-specific content
3. Optionally reference or link to related skills
**Files typically involved**:
- `rules/*/coding-style.md`
- `rules/*/hooks.md`
- `rules/*/patterns.md`
- `rules/*/security.md`
- `rules/*/testing.md`
**Example commit sequence**:
```
Create a new directory under rules/{language}/
Add coding-style.md, hooks.md, patterns.md, security.md, and testing.md files with language-specific content
Optionally reference or link to related skills
```
### Add New Skill
Adds a new skill to the system, documenting its workflow, triggers, and usage, often with supporting scripts.
**Frequency**: ~4 times per month
**Steps**:
1. Create a new directory under skills/{skill-name}/
2. Add SKILL.md with documentation (When to Use, How It Works, Examples, etc.)
3. Optionally add scripts or supporting files under skills/{skill-name}/scripts/
4. Address review feedback and iterate on documentation
**Files typically involved**:
- `skills/*/SKILL.md`
- `skills/*/scripts/*.sh`
- `skills/*/scripts/*.js`
**Example commit sequence**:
```
Create a new directory under skills/{skill-name}/
Add SKILL.md with documentation (When to Use, How It Works, Examples, etc.)
Optionally add scripts or supporting files under skills/{skill-name}/scripts/
Address review feedback and iterate on documentation
```
### Add New Agent
Adds a new agent to the system for code review, build resolution, or other automated tasks.
**Frequency**: ~2 times per month
**Steps**:
1. Create a new agent markdown file under agents/{agent-name}.md
2. Register the agent in AGENTS.md
3. Optionally update README.md and docs/COMMAND-AGENT-MAP.md
**Files typically involved**:
- `agents/*.md`
- `AGENTS.md`
- `README.md`
- `docs/COMMAND-AGENT-MAP.md`
**Example commit sequence**:
```
Create a new agent markdown file under agents/{agent-name}.md
Register the agent in AGENTS.md
Optionally update README.md and docs/COMMAND-AGENT-MAP.md
```
### Add New Command
Adds a new command to the system, often paired with a backing skill.
**Frequency**: ~1 time per month
**Steps**:
1. Create a new markdown file under commands/{command-name}.md
2. Optionally add or update a backing skill under skills/{skill-name}/SKILL.md
**Files typically involved**:
- `commands/*.md`
- `skills/*/SKILL.md`
**Example commit sequence**:
```
Create a new markdown file under commands/{command-name}.md
Optionally add or update a backing skill under skills/{skill-name}/SKILL.md
```
### Sync Catalog Counts
Synchronizes the documented counts of agents, skills, and commands in AGENTS.md and README.md with the actual repository state.
**Frequency**: ~3 times per month
**Steps**:
1. Update agent, skill, and command counts in AGENTS.md
2. Update the same counts in README.md (quick-start, comparison table, etc.)
3. Optionally update other documentation files
**Files typically involved**:
- `AGENTS.md`
- `README.md`
**Example commit sequence**:
```
Update agent, skill, and command counts in AGENTS.md
Update the same counts in README.md (quick-start, comparison table, etc.)
Optionally update other documentation files
```
### Add Cross Harness Skill Copies
Adds skill copies for different agent harnesses (e.g., Codex, Cursor, Antigravity) to ensure compatibility across platforms.
**Frequency**: ~2 times per month
**Steps**:
1. Copy or adapt SKILL.md to .agents/skills/{skill}/SKILL.md and/or .cursor/skills/{skill}/SKILL.md
2. Optionally add harness-specific openai.yaml or config files
3. Address review feedback to align with CONTRIBUTING template
**Files typically involved**:
- `.agents/skills/*/SKILL.md`
- `.cursor/skills/*/SKILL.md`
- `.agents/skills/*/agents/openai.yaml`
**Example commit sequence**:
```
Copy or adapt SKILL.md to .agents/skills/{skill}/SKILL.md and/or .cursor/skills/{skill}/SKILL.md
Optionally add harness-specific openai.yaml or config files
Address review feedback to align with CONTRIBUTING template
```
### Add Or Update Hook
Adds or updates git or bash hooks to enforce workflow, quality, or security policies.
**Frequency**: ~1 time per month
**Steps**:
1. Add or update hook scripts in hooks/ or scripts/hooks/
2. Register the hook in hooks/hooks.json or similar config
3. Optionally add or update tests in tests/hooks/
**Files typically involved**:
- `hooks/*.hook`
- `hooks/hooks.json`
- `scripts/hooks/*.js`
- `tests/hooks/*.test.js`
- `.cursor/hooks.json`
**Example commit sequence**:
```
Add or update hook scripts in hooks/ or scripts/hooks/
Register the hook in hooks/hooks.json or similar config
Optionally add or update tests in tests/hooks/
```
### Address Review Feedback
Addresses code review feedback by updating documentation, scripts, or configuration for clarity, correctness, or convention alignment.
**Frequency**: ~4 times per month
**Steps**:
1. Edit SKILL.md, agent, or command files to address reviewer comments
2. Update examples, headings, or configuration as requested
3. Iterate until all review feedback is resolved
**Files typically involved**:
- `skills/*/SKILL.md`
- `agents/*.md`
- `commands/*.md`
- `.agents/skills/*/SKILL.md`
- `.cursor/skills/*/SKILL.md`
**Example commit sequence**:
```
Edit SKILL.md, agent, or command files to address reviewer comments
Update examples, headings, or configuration as requested
Iterate until all review feedback is resolved
```
## Best Practices
Based on analysis of the codebase, follow these practices:
### Do
- Use conventional commit format (feat:, fix:, etc.)
- Follow *.test.js naming pattern
- Use camelCase for file names
- Prefer mixed exports
### Don't
- Don't write vague commit messages
- Don't skip tests for new features
- Don't deviate from established patterns without discussion
---
*This skill was auto-generated by [ECC Tools](https://ecc.tools). Review and customize as needed for your team.*

View File

@ -1,15 +0,0 @@
{
"version": "1.0",
"generatedBy": "ecc-tools",
"profile": "full",
"sharedSkills": [
".claude/skills/everything-claude-code/SKILL.md",
".agents/skills/everything-claude-code/SKILL.md"
],
"commandFiles": [
".claude/commands/database-migration.md",
".claude/commands/feature-development.md",
".claude/commands/add-language-rules.md"
],
"updatedAt": "2026-03-20T12:07:36.496Z"
}

View File

@ -1,98 +0,0 @@
# Everything Claude Code for CodeBuddy
Bring Everything Claude Code (ECC) workflows to CodeBuddy IDE. This repository provides custom commands, agents, skills, and rules that can be installed into any CodeBuddy project using the unified Target Adapter architecture.
## Quick Start (Recommended)
Use the unified install system for full lifecycle management:
```bash
# Install with default profile
node scripts/install-apply.js --target codebuddy --profile developer
# Install with full profile (all modules)
node scripts/install-apply.js --target codebuddy --profile full
# Dry-run to preview changes
node scripts/install-apply.js --target codebuddy --profile full --dry-run
```
## Management Commands
```bash
# Check installation health
node scripts/doctor.js --target codebuddy
# Repair installation
node scripts/repair.js --target codebuddy
# Uninstall cleanly (tracked via install-state)
node scripts/uninstall.js --target codebuddy
```
## Shell Script (Legacy)
The legacy shell scripts are still available for quick setup:
```bash
# Install to current project
cd /path/to/your/project
.codebuddy/install.sh
# Install globally
.codebuddy/install.sh ~
```
## What's Included
### Commands
Commands are on-demand workflows invocable via the `/` menu in CodeBuddy chat. All commands are reused directly from the project root's `commands/` folder.
### Agents
Agents are specialized AI assistants with specific tool configurations. All agents are reused directly from the project root's `agents/` folder.
### Skills
Skills are on-demand workflows invocable via the `/` menu in chat. All skills are reused directly from the project's `skills/` folder.
### Rules
Rules provide always-on guidance and context that shape how the agent works with your code. Rules are flattened into namespaced files (e.g., `common-coding-style.md`) for CodeBuddy compatibility.
## Project Structure
```
.codebuddy/
├── commands/ # Command files (reused from project root)
├── agents/ # Agent files (reused from project root)
├── skills/ # Skill files (reused from skills/)
├── rules/ # Rule files (flattened from rules/)
├── ecc-install-state.json # Install state tracking
├── install.sh # Legacy install script
├── uninstall.sh # Legacy uninstall script
└── README.md # This file
```
## Benefits of Target Adapter Install
- **Install-state tracking**: Safe uninstall that only removes ECC-managed files
- **Doctor checks**: Verify installation health and detect drift (see the sketch after this list)
- **Repair**: Auto-fix broken installations
- **Selective install**: Choose specific modules via profiles
- **Cross-platform**: Node.js-based, works on Windows/macOS/Linux
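A rough sketch of the kind of drift check `doctor.js` performs against the install state; the `ecc-install-state.json` field names used here (`files`, `path`, `sha256`) are assumptions, so defer to the real script for the actual schema.
```javascript
// Sketch only: checks tracked files against recorded hashes to flag drift.
// Assumes ecc-install-state.json lists entries like { path, sha256 }; the
// actual schema used by scripts/doctor.js may differ.
'use strict';

const fs = require('fs');
const path = require('path');
const crypto = require('crypto');

function checkDrift(codebuddyDir) {
  const statePath = path.join(codebuddyDir, 'ecc-install-state.json');
  const state = JSON.parse(fs.readFileSync(statePath, 'utf8'));
  const drifted = [];

  for (const entry of state.files || []) {
    const fullPath = path.join(codebuddyDir, entry.path);
    if (!fs.existsSync(fullPath)) {
      drifted.push(`${entry.path} (missing)`);
      continue;
    }
    const hash = crypto.createHash('sha256').update(fs.readFileSync(fullPath)).digest('hex');
    if (entry.sha256 && hash !== entry.sha256) {
      drifted.push(`${entry.path} (modified)`);
    }
  }
  return drifted;
}

if (fs.existsSync(path.join('.codebuddy', 'ecc-install-state.json'))) {
  console.log(checkDrift('.codebuddy'));
}
```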
## Recommended Workflow
1. **Start with planning**: Use `/plan` command to break down complex features
2. **Write tests first**: Invoke `/tdd` command before implementing
3. **Review your code**: Use `/code-review` after writing code
4. **Check security**: Use `/code-review` again for auth, API endpoints, or sensitive data handling
5. **Fix build errors**: Use `/build-fix` if there are build errors
## Next Steps
- Open your project in CodeBuddy
- Type `/` to see available commands
- Enjoy the ECC workflows!

View File

@ -1,98 +0,0 @@
# Everything Claude Code for CodeBuddy
为 CodeBuddy IDE 带来 Everything Claude Code (ECC) 工作流。此仓库提供自定义命令、智能体、技能和规则,可以通过统一的 Target Adapter 架构安装到任何 CodeBuddy 项目中。
## 快速开始(推荐)
使用统一安装系统,获得完整的生命周期管理:
```bash
# 使用默认配置安装
node scripts/install-apply.js --target codebuddy --profile developer
# 使用完整配置安装(所有模块)
node scripts/install-apply.js --target codebuddy --profile full
# 预览模式查看变更
node scripts/install-apply.js --target codebuddy --profile full --dry-run
```
## 管理命令
```bash
# 检查安装健康状态
node scripts/doctor.js --target codebuddy
# 修复安装
node scripts/repair.js --target codebuddy
# 清洁卸载(通过 install-state 跟踪)
node scripts/uninstall.js --target codebuddy
```
## Shell 脚本(旧版)
旧版 Shell 脚本仍然可用于快速设置:
```bash
# 安装到当前项目
cd /path/to/your/project
.codebuddy/install.sh
# 全局安装
.codebuddy/install.sh ~
```
## 包含的内容
### 命令
命令是通过 CodeBuddy 聊天中的 `/` 菜单调用的按需工作流。所有命令都直接复用自项目根目录的 `commands/` 文件夹。
### 智能体
智能体是具有特定工具配置的专门 AI 助手。所有智能体都直接复用自项目根目录的 `agents/` 文件夹。
### 技能
技能是通过聊天中的 `/` 菜单调用的按需工作流。所有技能都直接复用自项目的 `skills/` 文件夹。
### 规则
规则提供始终适用的规则和上下文,塑造智能体处理代码的方式。规则会被扁平化为命名空间文件(如 `common-coding-style.md`)以兼容 CodeBuddy。
## 项目结构
```
.codebuddy/
├── commands/ # 命令文件(复用自项目根目录)
├── agents/ # 智能体文件(复用自项目根目录)
├── skills/ # 技能文件(复用自 skills/)
├── rules/ # 规则文件(从 rules/ 扁平化)
├── ecc-install-state.json # 安装状态跟踪
├── install.sh # 旧版安装脚本
├── uninstall.sh # 旧版卸载脚本
└── README.zh-CN.md # 此文件
```
## Target Adapter 安装的优势
- **安装状态跟踪**:安全卸载,仅删除 ECC 管理的文件
- **Doctor 检查**:验证安装健康状态并检测偏移
- **修复**:自动修复损坏的安装
- **选择性安装**:通过配置文件选择特定模块
- **跨平台**:基于 Node.js,支持 Windows/macOS/Linux
## 推荐的工作流
1. **从计划开始**:使用 `/plan` 命令分解复杂功能
2. **先写测试**:在实现之前调用 `/tdd` 命令
3. **审查您的代码**:编写代码后使用 `/code-review`
4. **检查安全性**:对于身份验证、API 端点或敏感数据处理,再次使用 `/code-review`
5. **修复构建错误**:如果有构建错误,使用 `/build-fix`
## 下一步
- 在 CodeBuddy 中打开您的项目
- 输入 `/` 以查看可用命令
- 享受 ECC 工作流!

View File

@ -1,312 +0,0 @@
#!/usr/bin/env node
/**
* ECC CodeBuddy Installer (Cross-platform Node.js version)
* Installs Everything Claude Code workflows into a CodeBuddy project.
*
* Usage:
* node install.js # Install to current directory
* node install.js ~ # Install globally to ~/.codebuddy/
*/
const fs = require('fs');
const path = require('path');
const os = require('os');
// Platform detection
const isWindows = process.platform === 'win32';
/**
* Get home directory cross-platform
*/
function getHomeDir() {
return process.env.USERPROFILE || process.env.HOME || os.homedir();
}
/**
* Ensure directory exists
*/
function ensureDir(dirPath) {
try {
if (!fs.existsSync(dirPath)) {
fs.mkdirSync(dirPath, { recursive: true });
}
} catch (err) {
if (err.code !== 'EEXIST') {
throw err;
}
}
}
/**
* Read lines from a file
*/
function readLines(filePath) {
try {
if (!fs.existsSync(filePath)) {
return [];
}
const content = fs.readFileSync(filePath, 'utf8');
return content.split('\n').filter(line => line.length > 0);
} catch {
return [];
}
}
/**
* Check if manifest contains an entry
*/
function manifestHasEntry(manifestPath, entry) {
const lines = readLines(manifestPath);
return lines.includes(entry);
}
/**
* Add entry to manifest
*/
function ensureManifestEntry(manifestPath, entry) {
try {
const lines = readLines(manifestPath);
if (!lines.includes(entry)) {
const content = lines.join('\n') + (lines.length > 0 ? '\n' : '') + entry + '\n';
fs.writeFileSync(manifestPath, content, 'utf8');
}
} catch (err) {
console.error(`Error updating manifest: ${err.message}`);
}
}
/**
* Copy a file and manage in manifest
*/
function copyManagedFile(sourcePath, targetPath, manifestPath, manifestEntry, makeExecutable = false) {
const alreadyManaged = manifestHasEntry(manifestPath, manifestEntry);
// If target file already exists
if (fs.existsSync(targetPath)) {
if (alreadyManaged) {
ensureManifestEntry(manifestPath, manifestEntry);
}
return false;
}
// Copy the file
try {
ensureDir(path.dirname(targetPath));
fs.copyFileSync(sourcePath, targetPath);
// Make executable on Unix systems
if (makeExecutable && !isWindows) {
fs.chmodSync(targetPath, 0o755);
}
ensureManifestEntry(manifestPath, manifestEntry);
return true;
} catch (err) {
console.error(`Error copying ${sourcePath}: ${err.message}`);
return false;
}
}
/**
* Recursively find files in a directory
*/
function findFiles(dir, extension = '') {
const results = [];
try {
if (!fs.existsSync(dir)) {
return results;
}
function walk(currentPath) {
try {
const entries = fs.readdirSync(currentPath, { withFileTypes: true });
for (const entry of entries) {
const fullPath = path.join(currentPath, entry.name);
if (entry.isDirectory()) {
walk(fullPath);
} else if (!extension || entry.name.endsWith(extension)) {
results.push(fullPath);
}
}
} catch {
// Ignore permission errors
}
}
walk(dir);
} catch {
// Ignore errors
}
return results.sort();
}
/**
* Main install function
*/
function doInstall() {
// Resolve script directory (where this file lives)
const scriptDir = path.dirname(path.resolve(__filename));
const repoRoot = path.dirname(scriptDir);
const codebuddyDirName = '.codebuddy';
// Parse arguments
let targetDir = process.cwd();
if (process.argv.length > 2) {
const arg = process.argv[2];
if (arg === '~' || arg === getHomeDir()) {
targetDir = getHomeDir();
} else {
targetDir = path.resolve(arg);
}
}
// Determine codebuddy full path
let codebuddyFullPath;
const baseName = path.basename(targetDir);
if (baseName === codebuddyDirName) {
codebuddyFullPath = targetDir;
} else {
codebuddyFullPath = path.join(targetDir, codebuddyDirName);
}
console.log('ECC CodeBuddy Installer');
console.log('=======================');
console.log('');
console.log(`Source: ${repoRoot}`);
console.log(`Target: ${codebuddyFullPath}/`);
console.log('');
// Create subdirectories
const subdirs = ['commands', 'agents', 'skills', 'rules'];
for (const dir of subdirs) {
ensureDir(path.join(codebuddyFullPath, dir));
}
// Manifest file
const manifest = path.join(codebuddyFullPath, '.ecc-manifest');
ensureDir(path.dirname(manifest));
// Counters
let commands = 0;
let agents = 0;
let skills = 0;
let rules = 0;
// Copy commands
const commandsDir = path.join(repoRoot, 'commands');
if (fs.existsSync(commandsDir)) {
const files = findFiles(commandsDir, '.md');
for (const file of files) {
if (path.basename(path.dirname(file)) === 'commands') {
const localName = path.basename(file);
const targetPath = path.join(codebuddyFullPath, 'commands', localName);
if (copyManagedFile(file, targetPath, manifest, `commands/${localName}`)) {
commands += 1;
}
}
}
}
// Copy agents
const agentsDir = path.join(repoRoot, 'agents');
if (fs.existsSync(agentsDir)) {
const files = findFiles(agentsDir, '.md');
for (const file of files) {
if (path.basename(path.dirname(file)) === 'agents') {
const localName = path.basename(file);
const targetPath = path.join(codebuddyFullPath, 'agents', localName);
if (copyManagedFile(file, targetPath, manifest, `agents/${localName}`)) {
agents += 1;
}
}
}
}
// Copy skills (with subdirectories)
const skillsDir = path.join(repoRoot, 'skills');
if (fs.existsSync(skillsDir)) {
const skillDirs = fs.readdirSync(skillsDir, { withFileTypes: true })
.filter(entry => entry.isDirectory())
.map(entry => entry.name);
for (const skillName of skillDirs) {
const sourceSkillDir = path.join(skillsDir, skillName);
const targetSkillDir = path.join(codebuddyFullPath, 'skills', skillName);
let skillCopied = false;
const skillFiles = findFiles(sourceSkillDir);
for (const sourceFile of skillFiles) {
const relativePath = path.relative(sourceSkillDir, sourceFile);
const targetPath = path.join(targetSkillDir, relativePath);
const manifestEntry = `skills/${skillName}/${relativePath.replace(/\\/g, '/')}`;
if (copyManagedFile(sourceFile, targetPath, manifest, manifestEntry)) {
skillCopied = true;
}
}
if (skillCopied) {
skills += 1;
}
}
}
// Copy rules (with subdirectories)
const rulesDir = path.join(repoRoot, 'rules');
if (fs.existsSync(rulesDir)) {
const ruleFiles = findFiles(rulesDir);
for (const ruleFile of ruleFiles) {
const relativePath = path.relative(rulesDir, ruleFile);
const targetPath = path.join(codebuddyFullPath, 'rules', relativePath);
const manifestEntry = `rules/${relativePath.replace(/\\/g, '/')}`;
if (copyManagedFile(ruleFile, targetPath, manifest, manifestEntry)) {
rules += 1;
}
}
}
// Copy README files (skip install/uninstall scripts to avoid broken
// path references when the copied script runs from the target directory)
const readmeFiles = ['README.md', 'README.zh-CN.md'];
for (const readmeFile of readmeFiles) {
const sourcePath = path.join(scriptDir, readmeFile);
if (fs.existsSync(sourcePath)) {
const targetPath = path.join(codebuddyFullPath, readmeFile);
copyManagedFile(sourcePath, targetPath, manifest, readmeFile);
}
}
// Add manifest itself
ensureManifestEntry(manifest, '.ecc-manifest');
// Print summary
console.log('Installation complete!');
console.log('');
console.log('Components installed:');
console.log(` Commands: ${commands}`);
console.log(` Agents: ${agents}`);
console.log(` Skills: ${skills}`);
console.log(` Rules: ${rules}`);
console.log('');
console.log(`Directory: ${path.basename(codebuddyFullPath)}`);
console.log('');
console.log('Next steps:');
console.log(' 1. Open your project in CodeBuddy');
console.log(' 2. Type / to see available commands');
console.log(' 3. Enjoy the ECC workflows!');
console.log('');
console.log('To uninstall later:');
console.log(` cd ${codebuddyFullPath}`);
console.log(' node uninstall.js');
console.log('');
}
// Run installer
try {
doInstall();
} catch (error) {
console.error(`Error: ${error.message}`);
process.exit(1);
}

View File

@ -1,231 +0,0 @@
#!/bin/bash
#
# ECC CodeBuddy Installer
# Installs Everything Claude Code workflows into a CodeBuddy project.
#
# Usage:
# ./install.sh # Install to current directory
# ./install.sh ~ # Install globally to ~/.codebuddy/
#
set -euo pipefail
# When globs match nothing, expand to empty list instead of the literal pattern
shopt -s nullglob
# Resolve the directory where this script lives
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
# Locate the ECC repo root by walking up from SCRIPT_DIR to find the marker
# file (VERSION). This keeps the script working even when it has been copied
# into a target project's .codebuddy/ directory.
find_repo_root() {
local dir="$(dirname "$SCRIPT_DIR")"
# First try the parent of SCRIPT_DIR (original layout: .codebuddy/ lives in repo root)
if [ -f "$dir/VERSION" ] && [ -d "$dir/commands" ] && [ -d "$dir/agents" ]; then
echo "$dir"
return 0
fi
echo ""
return 1
}
REPO_ROOT="$(find_repo_root)"
if [ -z "$REPO_ROOT" ]; then
echo "Error: Cannot locate the ECC repository root."
echo "This script must be run from within the ECC repository's .codebuddy/ directory."
exit 1
fi
# CodeBuddy directory name
CODEBUDDY_DIR=".codebuddy"
ensure_manifest_entry() {
local manifest="$1"
local entry="$2"
touch "$manifest"
if ! grep -Fqx "$entry" "$manifest"; then
echo "$entry" >> "$manifest"
fi
}
manifest_has_entry() {
local manifest="$1"
local entry="$2"
[ -f "$manifest" ] && grep -Fqx "$entry" "$manifest"
}
copy_managed_file() {
local source_path="$1"
local target_path="$2"
local manifest="$3"
local manifest_entry="$4"
local make_executable="${5:-0}"
local already_managed=0
if manifest_has_entry "$manifest" "$manifest_entry"; then
already_managed=1
fi
if [ -f "$target_path" ]; then
if [ "$already_managed" -eq 1 ]; then
ensure_manifest_entry "$manifest" "$manifest_entry"
fi
return 1
fi
cp "$source_path" "$target_path"
if [ "$make_executable" -eq 1 ]; then
chmod +x "$target_path"
fi
ensure_manifest_entry "$manifest" "$manifest_entry"
return 0
}
# Install function
do_install() {
local target_dir="$PWD"
# Check if ~ was specified (or expanded to $HOME)
if [ "$#" -ge 1 ]; then
if [ "$1" = "~" ] || [ "$1" = "$HOME" ]; then
target_dir="$HOME"
fi
fi
# Check if we're already inside a .codebuddy directory
local current_dir_name="$(basename "$target_dir")"
local codebuddy_full_path
if [ "$current_dir_name" = ".codebuddy" ]; then
# Already inside the codebuddy directory, use it directly
codebuddy_full_path="$target_dir"
else
# Normal case: append CODEBUDDY_DIR to target_dir
codebuddy_full_path="$target_dir/$CODEBUDDY_DIR"
fi
echo "ECC CodeBuddy Installer"
echo "======================="
echo ""
echo "Source: $REPO_ROOT"
echo "Target: $codebuddy_full_path/"
echo ""
# Subdirectories to create
SUBDIRS="commands agents skills rules"
# Create all required codebuddy subdirectories
for dir in $SUBDIRS; do
mkdir -p "$codebuddy_full_path/$dir"
done
# Manifest file to track installed files
MANIFEST="$codebuddy_full_path/.ecc-manifest"
touch "$MANIFEST"
# Counters for summary
commands=0
agents=0
skills=0
rules=0
# Copy commands from repo root
if [ -d "$REPO_ROOT/commands" ]; then
for f in "$REPO_ROOT/commands"/*.md; do
[ -f "$f" ] || continue
local_name=$(basename "$f")
target_path="$codebuddy_full_path/commands/$local_name"
if copy_managed_file "$f" "$target_path" "$MANIFEST" "commands/$local_name"; then
commands=$((commands + 1))
fi
done
fi
# Copy agents from repo root
if [ -d "$REPO_ROOT/agents" ]; then
for f in "$REPO_ROOT/agents"/*.md; do
[ -f "$f" ] || continue
local_name=$(basename "$f")
target_path="$codebuddy_full_path/agents/$local_name"
if copy_managed_file "$f" "$target_path" "$MANIFEST" "agents/$local_name"; then
agents=$((agents + 1))
fi
done
fi
# Copy skills from repo root (if available)
if [ -d "$REPO_ROOT/skills" ]; then
for d in "$REPO_ROOT/skills"/*/; do
[ -d "$d" ] || continue
skill_name="$(basename "$d")"
target_skill_dir="$codebuddy_full_path/skills/$skill_name"
skill_copied=0
while IFS= read -r source_file; do
relative_path="${source_file#$d}"
target_path="$target_skill_dir/$relative_path"
mkdir -p "$(dirname "$target_path")"
if copy_managed_file "$source_file" "$target_path" "$MANIFEST" "skills/$skill_name/$relative_path"; then
skill_copied=1
fi
done < <(find "$d" -type f | sort)
if [ "$skill_copied" -eq 1 ]; then
skills=$((skills + 1))
fi
done
fi
# Copy rules from repo root
if [ -d "$REPO_ROOT/rules" ]; then
while IFS= read -r rule_file; do
relative_path="${rule_file#$REPO_ROOT/rules/}"
target_path="$codebuddy_full_path/rules/$relative_path"
mkdir -p "$(dirname "$target_path")"
if copy_managed_file "$rule_file" "$target_path" "$MANIFEST" "rules/$relative_path"; then
rules=$((rules + 1))
fi
done < <(find "$REPO_ROOT/rules" -type f | sort)
fi
# Copy README files (skip install/uninstall scripts to avoid broken
# path references when the copied script runs from the target directory)
for readme_file in "$SCRIPT_DIR/README.md" "$SCRIPT_DIR/README.zh-CN.md"; do
if [ -f "$readme_file" ]; then
local_name=$(basename "$readme_file")
target_path="$codebuddy_full_path/$local_name"
copy_managed_file "$readme_file" "$target_path" "$MANIFEST" "$local_name" || true
fi
done
# Add manifest file itself to manifest
ensure_manifest_entry "$MANIFEST" ".ecc-manifest"
# Installation summary
echo "Installation complete!"
echo ""
echo "Components installed:"
echo " Commands: $commands"
echo " Agents: $agents"
echo " Skills: $skills"
echo " Rules: $rules"
echo ""
echo "Directory: $(basename "$codebuddy_full_path")"
echo ""
echo "Next steps:"
echo " 1. Open your project in CodeBuddy"
echo " 2. Type / to see available commands"
echo " 3. Enjoy the ECC workflows!"
echo ""
echo "To uninstall later:"
echo " cd $codebuddy_full_path"
echo " ./uninstall.sh"
}
# Main logic
do_install "$@"

View File

@ -1,291 +0,0 @@
#!/usr/bin/env node
/**
* ECC CodeBuddy Uninstaller (Cross-platform Node.js version)
* Uninstalls Everything Claude Code workflows from a CodeBuddy project.
*
* Usage:
* node uninstall.js # Uninstall from current directory
* node uninstall.js ~ # Uninstall globally from ~/.codebuddy/
*/
const fs = require('fs');
const path = require('path');
const os = require('os');
const readline = require('readline');
/**
* Get home directory cross-platform
*/
function getHomeDir() {
return process.env.USERPROFILE || process.env.HOME || os.homedir();
}
/**
* Resolve a path to its canonical form
*/
function resolvePath(filePath) {
try {
return fs.realpathSync(filePath);
} catch {
// If realpath fails, return the path as-is
return path.resolve(filePath);
}
}
/**
* Check if a manifest entry is valid (security check)
*/
function isValidManifestEntry(entry) {
// Reject empty, absolute paths, parent directory references
if (!entry || entry.length === 0) return false;
if (entry.startsWith('/')) return false;
if (entry.startsWith('~')) return false;
if (entry.includes('/../') || entry.includes('/..')) return false;
if (entry.startsWith('../') || entry.startsWith('..\\')) return false;
if (entry === '..' || entry === '...' || entry.includes('\\..\\') || entry.includes('/..')) return false;
return true;
}
/**
* Read lines from manifest file
*/
function readManifest(manifestPath) {
try {
if (!fs.existsSync(manifestPath)) {
return [];
}
const content = fs.readFileSync(manifestPath, 'utf8');
return content.split('\n').filter(line => line.length > 0);
} catch {
return [];
}
}
/**
* Recursively find empty directories
*/
function findEmptyDirs(dirPath) {
const emptyDirs = [];
function walkDirs(currentPath) {
try {
const entries = fs.readdirSync(currentPath, { withFileTypes: true });
const subdirs = entries.filter(e => e.isDirectory());
for (const subdir of subdirs) {
const subdirPath = path.join(currentPath, subdir.name);
walkDirs(subdirPath);
}
// Check if directory is now empty
try {
const remaining = fs.readdirSync(currentPath);
if (remaining.length === 0 && currentPath !== dirPath) {
emptyDirs.push(currentPath);
}
} catch {
// Directory might have been deleted
}
} catch {
// Ignore errors
}
}
walkDirs(dirPath);
return emptyDirs.sort().reverse(); // Sort in reverse for removal
}
/**
* Prompt user for confirmation
*/
async function promptConfirm(question) {
return new Promise((resolve) => {
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
});
rl.question(question, (answer) => {
rl.close();
resolve(/^[yY]$/.test(answer));
});
});
}
/**
* Main uninstall function
*/
async function doUninstall() {
const codebuddyDirName = '.codebuddy';
// Parse arguments
let targetDir = process.cwd();
if (process.argv.length > 2) {
const arg = process.argv[2];
if (arg === '~' || arg === getHomeDir()) {
targetDir = getHomeDir();
} else {
targetDir = path.resolve(arg);
}
}
// Determine codebuddy full path
let codebuddyFullPath;
const baseName = path.basename(targetDir);
if (baseName === codebuddyDirName) {
codebuddyFullPath = targetDir;
} else {
codebuddyFullPath = path.join(targetDir, codebuddyDirName);
}
console.log('ECC CodeBuddy Uninstaller');
console.log('==========================');
console.log('');
console.log(`Target: ${codebuddyFullPath}/`);
console.log('');
// Check if codebuddy directory exists
if (!fs.existsSync(codebuddyFullPath)) {
console.error(`Error: ${codebuddyDirName} directory not found at ${targetDir}`);
process.exit(1);
}
const codebuddyRootResolved = resolvePath(codebuddyFullPath);
const manifest = path.join(codebuddyFullPath, '.ecc-manifest');
// Handle missing manifest
if (!fs.existsSync(manifest)) {
console.log('Warning: No manifest file found (.ecc-manifest)');
console.log('');
console.log('This could mean:');
console.log(' 1. ECC was installed with an older version without manifest support');
console.log(' 2. The manifest file was manually deleted');
console.log('');
const confirmed = await promptConfirm(`Do you want to remove the entire ${codebuddyDirName} directory? (y/N) `);
if (!confirmed) {
console.log('Uninstall cancelled.');
process.exit(0);
}
try {
fs.rmSync(codebuddyFullPath, { recursive: true, force: true });
console.log('Uninstall complete!');
console.log('');
console.log(`Removed: ${codebuddyFullPath}/`);
} catch (err) {
console.error(`Error removing directory: ${err.message}`);
process.exit(1);
}
return;
}
console.log('Found manifest file - will only remove files installed by ECC');
console.log('');
const confirmed = await promptConfirm(`Are you sure you want to uninstall ECC from ${codebuddyDirName}? (y/N) `);
if (!confirmed) {
console.log('Uninstall cancelled.');
process.exit(0);
}
// Read manifest and remove files
const manifestLines = readManifest(manifest);
let removed = 0;
let skipped = 0;
for (const filePath of manifestLines) {
if (!filePath || filePath.length === 0) continue;
if (!isValidManifestEntry(filePath)) {
console.log(`Skipped: ${filePath} (invalid manifest entry)`);
skipped += 1;
continue;
}
const fullPath = path.join(codebuddyFullPath, filePath);
    // Security check: use path.relative() to ensure the manifest entry
    // resolves inside the codebuddy directory. This is stricter than a plain
    // startsWith prefix check and avoids sibling-directory prefix edge cases.
const relative = path.relative(codebuddyRootResolved, path.resolve(fullPath));
if (relative.startsWith('..') || path.isAbsolute(relative)) {
console.log(`Skipped: ${filePath} (outside target directory)`);
skipped += 1;
continue;
}
try {
const stats = fs.lstatSync(fullPath);
if (stats.isFile() || stats.isSymbolicLink()) {
fs.unlinkSync(fullPath);
console.log(`Removed: ${filePath}`);
removed += 1;
} else if (stats.isDirectory()) {
try {
const files = fs.readdirSync(fullPath);
if (files.length === 0) {
fs.rmdirSync(fullPath);
console.log(`Removed: ${filePath}/`);
removed += 1;
} else {
console.log(`Skipped: ${filePath}/ (not empty - contains user files)`);
skipped += 1;
}
} catch {
console.log(`Skipped: ${filePath}/ (not empty - contains user files)`);
skipped += 1;
}
}
} catch {
skipped += 1;
}
}
// Remove empty directories
const emptyDirs = findEmptyDirs(codebuddyFullPath);
for (const emptyDir of emptyDirs) {
try {
fs.rmdirSync(emptyDir);
const relativePath = path.relative(codebuddyFullPath, emptyDir);
console.log(`Removed: ${relativePath}/`);
removed += 1;
} catch {
// Directory might not be empty anymore
}
}
// Try to remove main codebuddy directory if empty
try {
const files = fs.readdirSync(codebuddyFullPath);
if (files.length === 0) {
fs.rmdirSync(codebuddyFullPath);
console.log(`Removed: ${codebuddyDirName}/`);
removed += 1;
}
} catch {
// Directory not empty
}
// Print summary
console.log('');
console.log('Uninstall complete!');
console.log('');
console.log('Summary:');
console.log(` Removed: ${removed} items`);
console.log(` Skipped: ${skipped} items (not found or user-modified)`);
console.log('');
if (fs.existsSync(codebuddyFullPath)) {
console.log(`Note: ${codebuddyDirName} directory still exists (contains user-added files)`);
}
}
// Run uninstaller
doUninstall().catch((error) => {
console.error(`Error: ${error.message}`);
process.exit(1);
});


@ -1,184 +0,0 @@
#!/bin/bash
#
# ECC CodeBuddy Uninstaller
# Uninstalls Everything Claude Code workflows from a CodeBuddy project.
#
# Usage:
# ./uninstall.sh # Uninstall from current directory
# ./uninstall.sh ~ # Uninstall globally from ~/.codebuddy/
#
set -euo pipefail
# Resolve the directory where this script lives
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
# CodeBuddy directory name
CODEBUDDY_DIR=".codebuddy"
resolve_path() {
python3 -c 'import os, sys; print(os.path.realpath(sys.argv[1]))' "$1"
}
is_valid_manifest_entry() {
local file_path="$1"
case "$file_path" in
""|/*|~*|*/../*|../*|*/..|..)
return 1
;;
esac
return 0
}
# Main uninstall function
do_uninstall() {
local target_dir="$PWD"
# Check if ~ was specified (or expanded to $HOME)
if [ "$#" -ge 1 ]; then
if [ "$1" = "~" ] || [ "$1" = "$HOME" ]; then
target_dir="$HOME"
fi
fi
# Check if we're already inside a .codebuddy directory
local current_dir_name="$(basename "$target_dir")"
local codebuddy_full_path
if [ "$current_dir_name" = ".codebuddy" ]; then
# Already inside the codebuddy directory, use it directly
codebuddy_full_path="$target_dir"
else
# Normal case: append CODEBUDDY_DIR to target_dir
codebuddy_full_path="$target_dir/$CODEBUDDY_DIR"
fi
echo "ECC CodeBuddy Uninstaller"
echo "=========================="
echo ""
echo "Target: $codebuddy_full_path/"
echo ""
if [ ! -d "$codebuddy_full_path" ]; then
echo "Error: $CODEBUDDY_DIR directory not found at $target_dir"
exit 1
fi
codebuddy_root_resolved="$(resolve_path "$codebuddy_full_path")"
# Manifest file path
MANIFEST="$codebuddy_full_path/.ecc-manifest"
if [ ! -f "$MANIFEST" ]; then
echo "Warning: No manifest file found (.ecc-manifest)"
echo ""
echo "This could mean:"
echo " 1. ECC was installed with an older version without manifest support"
echo " 2. The manifest file was manually deleted"
echo ""
read -p "Do you want to remove the entire $CODEBUDDY_DIR directory? (y/N) " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo "Uninstall cancelled."
exit 0
fi
rm -rf "$codebuddy_full_path"
echo "Uninstall complete!"
echo ""
echo "Removed: $codebuddy_full_path/"
exit 0
fi
echo "Found manifest file - will only remove files installed by ECC"
echo ""
read -p "Are you sure you want to uninstall ECC from $CODEBUDDY_DIR? (y/N) " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo "Uninstall cancelled."
exit 0
fi
# Counters
removed=0
skipped=0
# Read manifest and remove files
while IFS= read -r file_path; do
[ -z "$file_path" ] && continue
if ! is_valid_manifest_entry "$file_path"; then
echo "Skipped: $file_path (invalid manifest entry)"
skipped=$((skipped + 1))
continue
fi
full_path="$codebuddy_full_path/$file_path"
    # Security check: ensure the path resolves inside the target directory.
    # Use Python to compute a reliable relative path so manifest entries
    # cannot resolve outside the boundary.
relative="$(python3 -c 'import os,sys; print(os.path.relpath(os.path.abspath(sys.argv[1]), sys.argv[2]))' "$full_path" "$codebuddy_root_resolved")"
case "$relative" in
../*|..)
echo "Skipped: $file_path (outside target directory)"
skipped=$((skipped + 1))
continue
;;
esac
if [ -L "$full_path" ] || [ -f "$full_path" ]; then
rm -f "$full_path"
echo "Removed: $file_path"
removed=$((removed + 1))
elif [ -d "$full_path" ]; then
# Only remove directory if it's empty
if [ -z "$(ls -A "$full_path" 2>/dev/null)" ]; then
rmdir "$full_path" 2>/dev/null || true
if [ ! -d "$full_path" ]; then
echo "Removed: $file_path/"
removed=$((removed + 1))
fi
else
echo "Skipped: $file_path/ (not empty - contains user files)"
skipped=$((skipped + 1))
fi
else
skipped=$((skipped + 1))
fi
done < "$MANIFEST"
while IFS= read -r empty_dir; do
[ "$empty_dir" = "$codebuddy_full_path" ] && continue
relative_dir="${empty_dir#$codebuddy_full_path/}"
rmdir "$empty_dir" 2>/dev/null || true
if [ ! -d "$empty_dir" ]; then
echo "Removed: $relative_dir/"
removed=$((removed + 1))
fi
done < <(find "$codebuddy_full_path" -depth -type d -empty 2>/dev/null | sort -r)
# Try to remove the main codebuddy directory if it's empty
if [ -d "$codebuddy_full_path" ] && [ -z "$(ls -A "$codebuddy_full_path" 2>/dev/null)" ]; then
rmdir "$codebuddy_full_path" 2>/dev/null || true
if [ ! -d "$codebuddy_full_path" ]; then
echo "Removed: $CODEBUDDY_DIR/"
removed=$((removed + 1))
fi
fi
echo ""
echo "Uninstall complete!"
echo ""
echo "Summary:"
echo " Removed: $removed items"
echo " Skipped: $skipped items (not found or user-modified)"
echo ""
if [ -d "$codebuddy_full_path" ]; then
echo "Note: $CODEBUDDY_DIR directory still exists (contains user-added files)"
fi
}
# Execute uninstall
do_uninstall "$@"


@ -1,54 +0,0 @@
# .codex-plugin — Codex Native Plugin for ECC
This directory contains the **Codex plugin manifest** for Everything Claude Code.
## Structure
```
.codex-plugin/
└── plugin.json — Codex plugin manifest (name, version, skills ref, MCP ref)
.mcp.json — MCP server configurations at plugin root (NOT inside .codex-plugin/)
```
## What This Provides
- **156 skills** from `./skills/` — reusable Codex workflows for TDD, security,
code review, architecture, and more
- **6 MCP servers** — GitHub, Context7, Exa, Memory, Playwright, Sequential Thinking
## Installation
Codex plugin support is currently in preview. Once generally available:
```bash
# Install from Codex CLI
codex plugin install affaan-m/everything-claude-code
# Or reference locally during development
codex plugin install ./
# Run this from the repository root so ./ points to the repo root and .mcp.json resolves correctly.
```
The installed plugin registers under the short slug `ecc` so tool and command names
stay below provider length limits.
## MCP Servers Included
| Server | Purpose |
|---|---|
| `github` | GitHub API access |
| `context7` | Live documentation lookup |
| `exa` | Neural web search |
| `memory` | Persistent memory across sessions |
| `playwright` | Browser automation & E2E testing |
| `sequential-thinking` | Step-by-step reasoning |
## Notes
- The `skills/` directory at the repo root is shared between Claude Code (`.claude-plugin/`)
and Codex (`.codex-plugin/`) — same source of truth, no duplication
- ECC is moving to a skills-first workflow surface. Legacy `commands/` remain for
compatibility on harnesses that still expect slash-entry shims.
- MCP server credentials are inherited from the launching environment (env vars)
- This manifest does **not** override `~/.codex/config.toml` settings


@ -1,30 +0,0 @@
{
"name": "ecc",
"version": "1.10.0",
"description": "Battle-tested Codex workflows — 156 shared ECC skills, production-ready MCP configs, and selective-install-aligned conventions for TDD, security scanning, code review, and autonomous development.",
"author": {
"name": "Affaan Mustafa",
"email": "me@affaanmustafa.com",
"url": "https://x.com/affaanmustafa"
},
"homepage": "https://ecc.tools",
"repository": "https://github.com/affaan-m/everything-claude-code",
"license": "MIT",
"keywords": ["codex", "agents", "skills", "tdd", "code-review", "security", "workflow", "automation"],
"skills": "./skills/",
"mcpServers": "./.mcp.json",
"interface": {
"displayName": "Everything Claude Code",
"shortDescription": "156 battle-tested ECC skills plus MCP configs for TDD, security, code review, and autonomous development.",
"longDescription": "Everything Claude Code (ECC) is a community-maintained collection of Codex-ready skills and MCP configs evolved over 10+ months of intensive daily use. It covers TDD workflows, security scanning, code review, architecture decisions, operator workflows, and more — all in one installable plugin.",
"developerName": "Affaan Mustafa",
"category": "Productivity",
"capabilities": ["Read", "Write"],
"websiteURL": "https://ecc.tools",
"defaultPrompt": [
"Use the tdd-workflow skill to write tests before implementation.",
"Use the security-review skill to scan for OWASP Top 10 vulnerabilities.",
"Use the verification-loop skill to verify correctness before shipping changes."
]
}
}


@ -1,96 +0,0 @@
# ECC for Codex CLI
This supplements the root `AGENTS.md` with Codex-specific guidance.
## Model Recommendations
| Task Type | Recommended Model |
|-----------|------------------|
| Routine coding, tests, formatting | GPT-5.4 |
| Complex features, architecture | GPT-5.4 |
| Debugging, refactoring | GPT-5.4 |
| Security review | GPT-5.4 |
## Skills Discovery
Skills are auto-loaded from `.agents/skills/`. Each skill contains:
- `SKILL.md` — Detailed instructions and workflow
- `agents/openai.yaml` — Codex interface metadata
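A typical skill directory therefore looks like this (illustrative layout only; `tdd-workflow` is one of the skills listed below):
```
.agents/skills/
└── tdd-workflow/
    ├── SKILL.md           (detailed instructions and workflow)
    └── agents/
        └── openai.yaml    (Codex interface metadata)
```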
Available skills:
- tdd-workflow — Test-driven development with 80%+ coverage
- security-review — Comprehensive security checklist
- coding-standards — Universal coding standards
- frontend-patterns — React/Next.js patterns
- frontend-slides — Viewport-safe HTML presentations and PPTX-to-web conversion
- article-writing — Long-form writing from notes and voice references
- content-engine — Platform-native social content and repurposing
- market-research — Source-attributed market and competitor research
- investor-materials — Decks, memos, models, and one-pagers
- investor-outreach — Personalized investor outreach and follow-ups
- backend-patterns — API design, database, caching
- e2e-testing — Playwright E2E tests
- eval-harness — Eval-driven development
- strategic-compact — Context management
- api-design — REST API design patterns
- verification-loop — Build, test, lint, typecheck, security
- deep-research — Multi-source research with firecrawl and exa MCPs
- exa-search — Neural search via Exa MCP for web, code, and companies
- claude-api — Anthropic Claude API patterns and SDKs
- x-api — X/Twitter API integration for posting, threads, and analytics
- crosspost — Multi-platform content distribution
- fal-ai-media — AI image/video/audio generation via fal.ai
- dmux-workflows — Multi-agent orchestration with dmux
## MCP Servers
Treat the project-local `.codex/config.toml` as the default Codex baseline for ECC. The current ECC baseline enables GitHub, Context7, Exa, Memory, Playwright, and Sequential Thinking; add heavier extras in `~/.codex/config.toml` only when a task actually needs them.
ECC's canonical Codex section name is `[mcp_servers.context7]`. The launcher package remains `@upstash/context7-mcp`; only the TOML section name is normalized for consistency with `codex mcp list` and the reference config.
### Automatic config.toml merging
The sync script (`scripts/sync-ecc-to-codex.sh`) uses a Node-based TOML parser to safely merge ECC MCP servers into `~/.codex/config.toml` (a simplified sketch of the add-only behavior follows this list):
- **Add-only by default** — missing ECC servers are appended; existing servers are never modified or removed.
- **7 managed servers** — Supabase, Playwright, Context7, Exa, GitHub, Memory, Sequential Thinking.
- **Canonical naming** — ECC manages Context7 as `[mcp_servers.context7]`; legacy `[mcp_servers.context7-mcp]` entries are treated as aliases during updates.
- **Package-manager aware** — uses the project's configured package manager (npm/pnpm/yarn/bun) instead of hardcoding `pnpm`.
- **Drift warnings** — if an existing server's config differs from the ECC recommendation, the script logs a warning.
- **`--update-mcp`** — explicitly replaces all ECC-managed servers with the latest recommended config (safely removes subtables like `[mcp_servers.supabase.env]`).
- **User config is always preserved** — custom servers, args, env vars, and credentials outside ECC-managed sections are never touched.
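For intuition, here is a minimal sketch of the add-only merge (illustrative only, assuming the `@iarna/toml` package; the real logic lives in `scripts/sync-ecc-to-codex.sh` and also handles subtables, legacy aliases, drift warnings, and package-manager detection):
```js
// Illustrative add-only merge, not the actual sync script.
// Assumes the @iarna/toml parser; the real script may use a different one.
const fs = require('fs');
const os = require('os');
const path = require('path');
const TOML = require('@iarna/toml');

// Hypothetical subset of the ECC-recommended servers.
const ECC_SERVERS = {
  context7: { command: 'npx', args: ['-y', '@upstash/context7-mcp@latest'] },
  memory: { command: 'npx', args: ['-y', '@modelcontextprotocol/server-memory'] },
};

function mergeMcpServers(configPath) {
  const config = TOML.parse(fs.readFileSync(configPath, 'utf8'));
  config.mcp_servers = config.mcp_servers || {};
  for (const [name, recommended] of Object.entries(ECC_SERVERS)) {
    if (config.mcp_servers[name]) {
      // Existing servers are never modified; drift is only reported.
      if (JSON.stringify(config.mcp_servers[name]) !== JSON.stringify(recommended)) {
        console.warn(`[ecc] ${name} differs from the recommended config`);
      }
      continue;
    }
    // Missing servers are appended (add-only).
    config.mcp_servers[name] = recommended;
  }
  fs.writeFileSync(configPath, TOML.stringify(config));
}

mergeMcpServers(path.join(os.homedir(), '.codex', 'config.toml'));
```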
## Multi-Agent Support
Codex now supports multi-agent workflows behind the experimental `features.multi_agent` flag.
- Enable it in `.codex/config.toml` with `[features] multi_agent = true`
- Define project-local roles under `[agents.<name>]`
- Point each role at a TOML layer under `.codex/agents/`
- Use `/agent` inside Codex CLI to inspect and steer child agents
Sample role configs in this repo:
- `.codex/agents/explorer.toml` — read-only evidence gathering
- `.codex/agents/reviewer.toml` — correctness/security review
- `.codex/agents/docs-researcher.toml` — API and release-note verification
## Key Differences from Claude Code
| Feature | Claude Code | Codex CLI |
|---------|------------|-----------|
| Hooks | 8+ event types | Not yet supported |
| Context file | CLAUDE.md + AGENTS.md | AGENTS.md only |
| Skills | Skills loaded via plugin | `.agents/skills/` directory |
| Commands | `/slash` commands | Instruction-based |
| Agents | Subagent Task tool | Multi-agent via `/agent` and `[agents.<name>]` roles |
| Security | Hook-based enforcement | Instruction + sandbox |
| MCP | Full support | Supported via `config.toml` and `codex mcp add` |
## Security Without Hooks
Since Codex lacks hooks, security enforcement is instruction-based:
1. Always validate inputs at system boundaries
2. Never hardcode secrets — use environment variables
3. Run `npm audit` / `pip audit` before committing (see the sketch after this list)
4. Review `git diff` before every push
5. Use `sandbox_mode = "workspace-write"` in config
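A minimal helper for items 3 and 4 might look like the sketch below (assumptions: a Node project using npm; the commands are illustrative and not part of ECC):
```js
#!/usr/bin/env node
// Illustrative pre-commit check covering items 3 and 4 above.
// Assumes npm; substitute pip-audit or another auditor for other stacks.
const { execSync } = require('child_process');

function run(cmd) {
  console.log(`$ ${cmd}`);
  execSync(cmd, { stdio: 'inherit' });
}

try {
  run('npm audit --audit-level=high'); // fails on known high-severity advisories
  run('git diff --cached --stat');     // shows the staged changes for review
} catch (err) {
  console.error('Pre-commit check failed; review the output above before committing.');
  process.exit(1);
}
```
Because Codex has no hook mechanism, run a check like this manually or wire it into your own git pre-commit hook.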


@ -1,9 +0,0 @@
model = "gpt-5.4"
model_reasoning_effort = "medium"
sandbox_mode = "read-only"
developer_instructions = """
Verify APIs, framework behavior, and release-note claims against primary documentation before changes land.
Cite the exact docs or file paths that support each claim.
Do not invent undocumented behavior.
"""


@ -1,9 +0,0 @@
model = "gpt-5.4"
model_reasoning_effort = "medium"
sandbox_mode = "read-only"
developer_instructions = """
Stay in exploration mode.
Trace the real execution path, cite files and symbols, and avoid proposing fixes unless the parent agent asks for them.
Prefer targeted search and file reads over broad scans.
"""


@ -1,9 +0,0 @@
model = "gpt-5.4"
model_reasoning_effort = "high"
sandbox_mode = "read-only"
developer_instructions = """
Review like an owner.
Prioritize correctness, security, behavioral regressions, and missing tests.
Lead with concrete findings and avoid style-only feedback unless it hides a real bug.
"""


@ -1,120 +0,0 @@
#:schema https://developers.openai.com/codex/config-schema.json
# Everything Claude Code (ECC) — Codex Reference Configuration
#
# Copy this file to ~/.codex/config.toml for global defaults, or keep it in
# the project root as .codex/config.toml for project-local settings.
#
# Official docs:
# - https://developers.openai.com/codex/config-reference
# - https://developers.openai.com/codex/multi-agent
# Model selection
# Leave `model` and `model_provider` unset so Codex CLI uses its current
# built-in defaults. Uncomment and pin them only if you intentionally want
# repo-local or global model overrides.
# Top-level runtime settings (current Codex schema)
approval_policy = "on-request"
sandbox_mode = "workspace-write"
web_search = "live"
# External notifications receive a JSON payload on stdin.
notify = [
"terminal-notifier",
"-title", "Codex ECC",
"-message", "Task completed!",
"-sound", "default",
]
# Persistent instructions are appended to every prompt (additive, unlike
# model_instructions_file which replaces AGENTS.md).
persistent_instructions = "Follow project AGENTS.md guidelines. Use available MCP servers when they can help."
# model_instructions_file replaces built-in instructions instead of AGENTS.md,
# so leave it unset unless you intentionally want a single override file.
# model_instructions_file = "/absolute/path/to/instructions.md"
# MCP servers
# Keep the default project set lean. API-backed servers inherit credentials from
# the launching environment or can be supplied by a user-level ~/.codex/config.toml.
[mcp_servers.github]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
startup_timeout_sec = 30
[mcp_servers.context7]
command = "npx"
# Canonical Codex section name is `context7`; the package itself remains
# `@upstash/context7-mcp`.
args = ["-y", "@upstash/context7-mcp@latest"]
startup_timeout_sec = 30
[mcp_servers.exa]
url = "https://mcp.exa.ai/mcp"
[mcp_servers.memory]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-memory"]
startup_timeout_sec = 30
[mcp_servers.playwright]
command = "npx"
args = ["-y", "@playwright/mcp@latest", "--extension"]
startup_timeout_sec = 30
[mcp_servers.sequential-thinking]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-sequential-thinking"]
startup_timeout_sec = 30
# Additional MCP servers (uncomment as needed):
# [mcp_servers.supabase]
# command = "npx"
# args = ["-y", "supabase-mcp-server@latest", "--read-only"]
#
# [mcp_servers.firecrawl]
# command = "npx"
# args = ["-y", "firecrawl-mcp"]
#
# [mcp_servers.fal-ai]
# command = "npx"
# args = ["-y", "fal-ai-mcp-server"]
#
# [mcp_servers.cloudflare]
# command = "npx"
# args = ["-y", "@cloudflare/mcp-server-cloudflare"]
[features]
# Codex multi-agent collaboration is stable and on by default in current builds.
# Keep the explicit toggle here so the repo documents its expectation clearly.
multi_agent = true
# Profiles — switch with `codex -p <name>`
[profiles.strict]
approval_policy = "on-request"
sandbox_mode = "read-only"
web_search = "cached"
[profiles.yolo]
approval_policy = "never"
sandbox_mode = "workspace-write"
web_search = "live"
[agents]
# Multi-agent role limits and local role definitions.
# These map to `.codex/agents/*.toml` and mirror the repo's explorer/reviewer/docs workflow.
max_threads = 6
max_depth = 1
[agents.explorer]
description = "Read-only codebase explorer for gathering evidence before changes are proposed."
config_file = "agents/explorer.toml"
[agents.reviewer]
description = "PR reviewer focused on correctness, security, and missing tests."
config_file = "agents/reviewer.toml"
[agents.docs_researcher]
description = "Documentation specialist that verifies APIs, framework behavior, and release notes."
config_file = "agents/docs-researcher.toml"


@ -1,114 +0,0 @@
{
"hooks": {
"sessionStart": [
{
"command": "node .cursor/hooks/session-start.js",
"event": "sessionStart",
"description": "Load previous context and detect environment"
}
],
"sessionEnd": [
{
"command": "node .cursor/hooks/session-end.js",
"event": "sessionEnd",
"description": "Persist session state and evaluate patterns"
}
],
"beforeShellExecution": [
{
"command": "npx block-no-verify@1.1.2",
"event": "beforeShellExecution",
"description": "Block git hook-bypass flag to protect pre-commit, commit-msg, and pre-push hooks from being skipped"
},
{
"command": "node .cursor/hooks/before-shell-execution.js",
"event": "beforeShellExecution",
"description": "Tmux dev server blocker, tmux reminder, git push review"
}
],
"afterShellExecution": [
{
"command": "node .cursor/hooks/after-shell-execution.js",
"event": "afterShellExecution",
"description": "PR URL logging, build analysis"
}
],
"afterFileEdit": [
{
"command": "node .cursor/hooks/after-file-edit.js",
"event": "afterFileEdit",
"description": "Auto-format, TypeScript check, console.log warning, and frontend design-quality reminder"
}
],
"beforeMCPExecution": [
{
"command": "node .cursor/hooks/before-mcp-execution.js",
"event": "beforeMCPExecution",
"description": "MCP audit logging and untrusted server warning"
}
],
"afterMCPExecution": [
{
"command": "node .cursor/hooks/after-mcp-execution.js",
"event": "afterMCPExecution",
"description": "MCP result logging"
}
],
"beforeReadFile": [
{
"command": "node .cursor/hooks/before-read-file.js",
"event": "beforeReadFile",
"description": "Warn when reading sensitive files (.env, .key, .pem)"
}
],
"beforeSubmitPrompt": [
{
"command": "node .cursor/hooks/before-submit-prompt.js",
"event": "beforeSubmitPrompt",
"description": "Detect secrets in prompts (sk-, ghp_, AKIA patterns)"
}
],
"subagentStart": [
{
"command": "node .cursor/hooks/subagent-start.js",
"event": "subagentStart",
"description": "Log agent spawning for observability"
}
],
"subagentStop": [
{
"command": "node .cursor/hooks/subagent-stop.js",
"event": "subagentStop",
"description": "Log agent completion"
}
],
"beforeTabFileRead": [
{
"command": "node .cursor/hooks/before-tab-file-read.js",
"event": "beforeTabFileRead",
"description": "Block Tab from reading secrets (.env, .key, .pem, credentials)"
}
],
"afterTabFileEdit": [
{
"command": "node .cursor/hooks/after-tab-file-edit.js",
"event": "afterTabFileEdit",
"description": "Auto-format Tab edits"
}
],
"preCompact": [
{
"command": "node .cursor/hooks/pre-compact.js",
"event": "preCompact",
"description": "Save state before context compaction"
}
],
"stop": [
{
"command": "node .cursor/hooks/stop.js",
"event": "stop",
"description": "Console.log audit on all modified files"
}
]
}
}


@ -1,81 +0,0 @@
#!/usr/bin/env node
/**
* Cursor-to-Claude Code Hook Adapter
* Transforms Cursor stdin JSON to Claude Code hook format,
* then delegates to existing scripts/hooks/*.js
*/
const { execFileSync } = require('child_process');
const path = require('path');
const MAX_STDIN = 1024 * 1024;
function readStdin() {
return new Promise((resolve) => {
let data = '';
process.stdin.setEncoding('utf8');
process.stdin.on('data', chunk => {
if (data.length < MAX_STDIN) data += chunk.substring(0, MAX_STDIN - data.length);
});
process.stdin.on('end', () => resolve(data));
});
}
function getPluginRoot() {
return path.resolve(__dirname, '..', '..');
}
function transformToClaude(cursorInput, overrides = {}) {
return {
tool_input: {
command: cursorInput.command || cursorInput.args?.command || '',
file_path: cursorInput.path || cursorInput.file || cursorInput.args?.filePath || '',
...overrides.tool_input,
},
tool_output: {
output: cursorInput.output || cursorInput.result || '',
...overrides.tool_output,
},
transcript_path: cursorInput.transcript_path || cursorInput.transcriptPath || cursorInput.session?.transcript_path || '',
_cursor: {
conversation_id: cursorInput.conversation_id,
hook_event_name: cursorInput.hook_event_name,
workspace_roots: cursorInput.workspace_roots,
model: cursorInput.model,
},
};
}
function runExistingHook(scriptName, stdinData) {
const scriptPath = path.join(getPluginRoot(), 'scripts', 'hooks', scriptName);
try {
execFileSync('node', [scriptPath], {
input: typeof stdinData === 'string' ? stdinData : JSON.stringify(stdinData),
stdio: ['pipe', 'pipe', 'pipe'],
timeout: 15000,
cwd: process.cwd(),
});
} catch (e) {
if (e.status === 2) process.exit(2); // Forward blocking exit code
}
}
function hookEnabled(hookId, allowedProfiles = ['standard', 'strict']) {
const rawProfile = String(process.env.ECC_HOOK_PROFILE || 'standard').toLowerCase();
const profile = ['minimal', 'standard', 'strict'].includes(rawProfile) ? rawProfile : 'standard';
const disabled = new Set(
String(process.env.ECC_DISABLED_HOOKS || '')
.split(',')
.map(v => v.trim().toLowerCase())
.filter(Boolean)
);
if (disabled.has(String(hookId || '').toLowerCase())) {
return false;
}
return allowedProfiles.includes(profile);
}
module.exports = { readStdin, getPluginRoot, transformToClaude, runExistingHook, hookEnabled };


@ -1,19 +0,0 @@
#!/usr/bin/env node
const { hookEnabled, readStdin, runExistingHook, transformToClaude } = require('./adapter');
readStdin().then(raw => {
try {
const input = JSON.parse(raw);
const claudeInput = transformToClaude(input, {
tool_input: { file_path: input.path || input.file || '' }
});
const claudeStr = JSON.stringify(claudeInput);
// Accumulate edited paths for batch format+typecheck at stop time
runExistingHook('post-edit-accumulator.js', claudeStr);
runExistingHook('post-edit-console-warn.js', claudeStr);
if (hookEnabled('post:edit:design-quality-check', ['standard', 'strict'])) {
runExistingHook('design-quality-check.js', claudeStr);
}
} catch {}
process.stdout.write(raw);
}).catch(() => process.exit(0));


@ -1,12 +0,0 @@
#!/usr/bin/env node
const { readStdin } = require('./adapter');
readStdin().then(raw => {
try {
const input = JSON.parse(raw);
const server = input.server || input.mcp_server || 'unknown';
const tool = input.tool || input.mcp_tool || 'unknown';
const success = input.error ? 'FAILED' : 'OK';
console.error(`[ECC] MCP result: ${server}/${tool} - ${success}`);
} catch {}
process.stdout.write(raw);
}).catch(() => process.exit(0));


@ -1,28 +0,0 @@
#!/usr/bin/env node
const { readStdin, hookEnabled } = require('./adapter');
readStdin().then(raw => {
try {
const input = JSON.parse(raw || '{}');
const cmd = String(input.command || input.args?.command || '');
const output = String(input.output || input.result || '');
if (hookEnabled('post:bash:pr-created', ['standard', 'strict']) && /\bgh\s+pr\s+create\b/.test(cmd)) {
const m = output.match(/https:\/\/github\.com\/[^/]+\/[^/]+\/pull\/\d+/);
if (m) {
console.error('[ECC] PR created: ' + m[0]);
const repo = m[0].replace(/https:\/\/github\.com\/([^/]+\/[^/]+)\/pull\/\d+/, '$1');
const pr = m[0].replace(/.+\/pull\/(\d+)/, '$1');
console.error('[ECC] To review: gh pr review ' + pr + ' --repo ' + repo);
}
}
if (hookEnabled('post:bash:build-complete', ['standard', 'strict']) && /(npm run build|pnpm build|yarn build)/.test(cmd)) {
console.error('[ECC] Build completed');
}
} catch {
// noop
}
process.stdout.write(raw);
}).catch(() => process.exit(0));


@ -1,12 +0,0 @@
#!/usr/bin/env node
const { readStdin, runExistingHook, transformToClaude } = require('./adapter');
readStdin().then(raw => {
try {
const input = JSON.parse(raw);
const claudeInput = transformToClaude(input, {
tool_input: { file_path: input.path || input.file || '' }
});
runExistingHook('post-edit-format.js', JSON.stringify(claudeInput));
} catch {}
process.stdout.write(raw);
}).catch(() => process.exit(0));

Some files were not shown because too many files have changed in this diff.