3.2.2 - Explain Your Decisions Before Asking for Review
Explain the review context: say what was decided, what trade-off it created and what question you want reviewers to answer.
Review context
Can your decisions be understood before they are reviewed?
The call
Explain what you decided before asking someone to review it. Otherwise, reviewers waste time guessing your intent, and feedback drifts into opinion.
Why it matters
Explaining decisions before asking for review means reviewers can evaluate the trade-off instead of guessing the intent. AI can summarise changes quickly, but human judgement decides whether the explanation makes the decision reviewable. The difference is between useful review and noise that slows everyone down.
Explainer
Review context is not a changelog. It is the explanation of what was decided, why, and what question the reviewer should focus on. Until you can name one decision, one trade-off it created and one question for the reviewer, the review will drift. AI can help draft context, but it cannot decide what the reviewer needs to know.
Make the review context concrete
Compare the broad version with a version you can actually test.
- Too vague: "Here is the latest version, let me know what you think."
- Concrete enough to test: We chose to shape search results using saved context instead of browsing history. The trade-off is that results are only as good as the context the user sets. The question for reviewers is whether the context setup flow is clear enough that a content creator would complete it without help.
The second version lets two people make the same decision from it.
Check the review context
- Pass: You can say what was decided, what trade-off it created and what question you want reviewers to answer.
- Fail: If the review request still reads as "have a look and share your thoughts", the context is not defined well enough yet.
Do not ask for review until this passes.
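To make the pass/fail check concrete, here is a minimal sketch of it as code. The class name, field names, and the list of vague phrases are illustrative assumptions, not part of any real tool:

```python
from dataclasses import dataclass

@dataclass
class ReviewContext:
    """The three parts a review request needs before it is sent."""
    decision: str   # what was decided
    trade_off: str  # what trade-off the decision created
    question: str   # what reviewers should answer

    def passes(self) -> bool:
        # Fail if any part is missing or still reads as a vague placeholder.
        vague = {"", "have a look", "share your thoughts",
                 "let me know what you think"}
        return all(
            part.strip() and part.strip().lower() not in vague
            for part in (self.decision, self.trade_off, self.question)
        )

# The StartWithYourContext example from this section passes the check;
# a request like ReviewContext("", "", "let me know what you think") fails it.
ctx = ReviewContext(
    decision="Shape search results using saved context instead of browsing history.",
    trade_off="Results are only as good as the context the user sets.",
    question="Is the context setup flow clear enough that a content creator "
             "would complete it without help?",
)
```

The point of the sketch is the shape, not the word list: a review request is ready only when all three fields carry real content.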
How to use AI for the review context
- AI chat: Rewrite the review context until you can state all three parts clearly.
- vibeCoding: Build the thinnest flow that tests this review context in practice before broader build work.
- AI-assisted coding: Carry the same review context into implementation and review so the live system keeps the same decision.
Sharpen the review context
Copy this prompt into AI chat, replace the bracketed lines with your real review context, and keep the rest of the instructions exactly as written.
You are checking whether this review context is clear enough before you move forward.
Constraint:
The review context must be specific enough that two people would ask reviewers for the same kind of feedback from it.
Working draft:
Decision made: [what was decided]
Trade-off created: [what trade-off it created]
Question for reviewers: [what reviewers should answer]
Task:
Decide whether this review context is specific enough to guide the next decision. If it is vague, rewrite it so two people would make the same decision from this review context.
Check:
- Would two people interpret this the same way?
- Does it stay concrete enough to guide the next step?
- Does it meet this bar: You can say what was decided, what trade-off it created and what question you want reviewers to answer.
Return:
- A corrected review context
- A short explanation of what was vague
AI will likely suggest refinements based on what you enter. Use those to sharpen your thinking, not to replace it.
Evaluation
Before accepting the result, check whether two people would make the same decision from it.
Example
To help you work through this, here is a real example. StartWithYourContext is an AI search tool built as part of the vibe2value project. Here is how its review context was written using the three parts:
- Decision: Shape search results using saved context instead of browsing history.
- Trade-off: Results are only as good as the context the user sets. If the context is too narrow, results become repetitive.
- Question for reviewers: Is the context setup flow clear enough that a content creator would complete it without help?
That review context is specific enough that two people would make the same decision from it.
When there is more than one side
Not every product has a single review context. When a system serves more than one side, each side needs different context explained, and a review framed for one side may miss what matters to the other.
Multi-sided worked example
For example, StartWithYourContext has two different review contexts:
- Content creator side: The review should focus on whether the context setup and search results make sense to a non-technical user.
- Developer side: The review should focus on whether the integration between layers is clean and the code is readable enough to learn from.
Both reviews are valid, but they need different context. If only one is explained, the other side’s review becomes guesswork.
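One way to keep the two sides from blurring together is to hold one review context per audience and frame each request from it. This is a sketch only: the developer-side decision and trade-off below are illustrative placeholders, since the section names only that side's review focus:

```python
# One review context per side of the product, so neither review becomes guesswork.
review_contexts = {
    "content_creator": {
        "decision": "Shape search results using saved context instead of browsing history.",
        "trade_off": "Results are only as good as the context the user sets.",
        "question": "Do the context setup and search results make sense to a non-technical user?",
    },
    "developer": {
        # Decision and trade-off here are invented for illustration.
        "decision": "Keep the integration between layers thin and explicit.",
        "trade_off": "More seams to document and keep readable.",
        "question": "Is the integration between layers clean and the code readable enough to learn from?",
    },
}

def frame_request(side: str) -> str:
    """Build a review request for one side; an unexplained side raises KeyError."""
    ctx = review_contexts[side]
    return (
        f"Decision: {ctx['decision']}\n"
        f"Trade-off: {ctx['trade_off']}\n"
        f"Question for reviewers: {ctx['question']}"
    )
```

Asking for a review on a side that has no entry fails loudly instead of silently producing a context-free request, which is the guesswork the section warns about.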
Risk and mitigation
- Risk: Asking for review without context, which turns feedback into opinion and slows decisions.
- Mitigation: Include one decision, one trade-off and one question with every review request.
Key takeaway
Do not move forward until you can say what was decided, what trade-off it created and what question you want reviewers to answer.
Work through this in a workshop
If your review context is still unclear, bring the messy part of your AI-assisted build to a free weekly workshop and leave with a clearer next step. In some sessions, we walk through practical examples on the Cloudflare Workers stack to show how a rough idea turns into something that actually runs.
What do you think?
How are you explaining decisions before asking for review and how is AI helping you frame the right context?