Product

Review Agent

Jimmy's self-built asynchronous pre-meeting / pre-PR review coach: before the briefer sends material to the responder, the AI bounces it back, holding it to the responder's bar.

1. Core Product / Service

A top-down review agent: the requester drops the outgoing material (meeting brief, PR, proposal) into the skill; the agent simulates the responder's perspective to run a critical review, produces a dissent log plus a decision-ready summary, and requires the requester to revise until the draft meets the "responder bar" before release.

Framework: 4-pillar review (Background / Materials / Framework / Intent) + Responder Simulation. Requester sends draft → agent runs the 4-pillar check → in parallel simulates the responder's likely pushback → at close-out, delivers to both responder and requester.
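A minimal sketch of that loop, assuming the shapes described above; all names, structures, and the stub checks are hypothetical stand-ins for the actual LLM critique:

```python
from dataclasses import dataclass, field

PILLARS = ("Background", "Materials", "Framework", "Intent")

@dataclass
class ReviewResult:
    # Dissent log goes back to the requester; summary goes to the responder.
    dissent_log: list[str] = field(default_factory=list)
    summary: str = ""

    @property
    def passed(self) -> bool:
        return not self.dissent_log

def four_pillar_check(draft: dict) -> list[str]:
    """Flag any pillar the draft leaves empty (stand-in for the LLM critique)."""
    return [f"{p}: missing or empty" for p in PILLARS if not draft.get(p)]

def simulate_responder(draft: dict) -> list[str]:
    """Stand-in for responder simulation: challenge an unstated ask."""
    return [] if draft.get("Intent") else ["Responder: what decision do you need from me?"]

def run_review(draft: dict) -> ReviewResult:
    dissent = four_pillar_check(draft) + simulate_responder(draft)
    summary = "" if dissent else f"Decision-ready: {draft['Intent']}"
    return ReviewResult(dissent_log=dissent, summary=summary)
```

The split matters: only an empty dissent log yields a summary, which mirrors the "revise to the responder bar before release" rule.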

Version roadmap:

  • v1 (deprecated): based on hermes-agent runtime, Lark WS channel, per-pairing workspace
  • v2.x: migrated to openclaw runtime, per-peer dynamic workspace (workspace-feishu-ou_<openid>), watcher + systemd seeder that injects the template before SOUL.md is written
  • v3 (current): independent service, FastAPI + DeepSeek, systemd --user deployment, dedicated port 8080, fully isolated from openclaw
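The v3 deployment shape (FastAPI under systemd --user on the dedicated port 8080) could look like the unit below; the unit name, paths, and entrypoint are illustrative assumptions, not the actual files:

```ini
# ~/.config/systemd/user/review-agent.service  (hypothetical layout)
[Unit]
Description=Review Agent v3 (FastAPI + DeepSeek)
After=network-online.target

[Service]
WorkingDirectory=%h/review-agent
# uvicorn serves the FastAPI app on the dedicated port.
ExecStart=%h/review-agent/.venv/bin/uvicorn app:app --host 127.0.0.1 --port 8080
# Keep the API key out of the unit file.
EnvironmentFile=%h/review-agent/.env
Restart=on-failure

[Install]
WantedBy=default.target
```

Enabled with `systemctl --user enable --now review-agent`; running as a user unit keeps it fully isolated from the openclaw runtime.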

Multimodal I/O: PDF, image OCR, and direct ingest of Lark docs.
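The ingest side reduces to routing each input to an extractor. A sketch of that dispatch, where the extractor names and the URL heuristic are assumptions for illustration:

```python
from pathlib import Path

# Map file suffixes to extractor names; the extractors themselves (PDF text,
# OCR, Lark doc API) are not shown -- this only illustrates the dispatch shape.
EXTRACTORS = {
    ".pdf": "pdf_text",
    ".png": "image_ocr",
    ".jpg": "image_ocr",
}

def pick_extractor(source: str) -> str:
    """Route an input: Lark/Feishu doc URLs by prefix, local files by suffix."""
    if source.startswith("https://") and "feishu" in source:
        return "lark_doc"
    ext = Path(source).suffix.lower()
    return EXTRACTORS.get(ext, "plain_text")
```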

Deployment forms: can be installed as a hermes/openclaw skill, or run as an independent service (v3 is the latter).

2. Target Users & Pain Points

Primary user: Jimmy himself, and anyone reporting upward to a higher-level responder (subordinate-to-superior communication scenarios).

Pain points:

  • No one stress-tests a draft before it goes out: what will the responder ask, does the framework hold up, is the intent stated clearly?
  • As of 2026, pre-meeting AI on the market is all bottom-up (pre-read summaries made for the receiver); no one does top-down (training the sender to reach the receiver's bar)
  • Self-review has blind spots; asking a colleague to review costs a favor and is rarely timely

Trigger scenarios: before sending a brief to boss / investor / co-founder ahead of a meeting, before submitting a PR for self-review, before finalizing a proposal document.

3. Competitive Landscape

Adjacent but non-overlapping:

  • Pre-read class (Granola, Read.ai, Otter) — bottom-up, summarizing for the receiver, not teaching the sender how to write
  • PR review class (Greptile, CodeRabbit, inferact) — only covers code diffs, not narrative materials
  • Writing assistant (Grammarly, Lex) — fixes grammar / style, doesn't simulate responder pushback

Differentiation: top-down review + responder simulation + dissent log; the responder's bar is explicitly encoded into the 4-pillar, rather than a generic "make it better".

4. Unique Observations

  • "Top-down review" is a niche no one occupies (2026-04 research conclusion); whether it's a structural gap or a market too small remains unverified — once v3 is redeployed + connected to Lark, more live data will follow.
  • The design separates dissent log from summary: dissent goes to the requester (so he knows which points don't hold), summary goes to the responder (so the meeting moves fast) — an implementation of ai-human-hybrid.
  • Key lesson from v2 → v3: the skill-as-plugin pattern (v2 running inside openclaw) makes lifecycle management (install/uninstall/upgrade) far harder than expected; issue #1 recorded 7 uninstall bugs in one go; the lifecycle became much cleaner once v3 was an independent service.
  • Same personal-ai stack as personal-ai-delegate but a different layer: PAID is the outbound delegate (sending messages / decisions for you), review-agent is the inbound gate (locking down content before it goes out).
  • Inference backend is DeepSeek (V4 Pro / V4 Flash since 2026-04; early versions ran on the v3 family) with thinking disabled; its price-performance is a notch better than Claude / GPT, and it is sufficient for review tasks that are long-context but don't demand extreme creativity.
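The inference call described above can be sketched as a request-payload builder (no network here). The model name and the thinking-disable switch follow this note's description; the exact field names are assumptions, not a confirmed DeepSeek API shape:

```python
# Build a chat-completion payload for one review pass.
# "deepseek-v4-flash" and the "thinking" field mirror this note's wording;
# both are hypothetical, not verified against the real API.
def build_review_request(draft_text: str, model: str = "deepseek-v4-flash") -> dict:
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": "Review the draft against the 4 pillars: "
                           "Background, Materials, Framework, Intent. "
                           "Then push back as the responder would.",
            },
            {"role": "user", "content": draft_text},
        ],
        # Long-context review; chain-of-thought is not needed, so disable it.
        "thinking": {"type": "disabled"},
        "temperature": 0.2,
    }
```

Keeping the payload construction pure makes it trivial to swap models (Pro vs Flash) or re-enable thinking without touching the transport layer.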

5. Financials / Funding

Non-commercial project. Jimmy self-hosts personally; running cost is mainly DeepSeek API calls (per-review ~cents) + a shared VPS. The GitHub repo is public as a skill release; no monetization path is planned.

6. People & Relationships


Sources

  • local: 2026-04-20-summary.md
  • local: memory/project_review_agent.md (v2.x design + uninstall lessons)
  • local: memory/project_review_agent_v3_deploy.md (v3 deployment architecture)
Last compiled: 2026-05-09