2.1.3 - What This Build Is Meant to Teach You
Name the learning goal: say what you believe, what the build needs to answer and what signal will prove or weaken that belief.
Learning goal
Is this build teaching one clear lesson?
The call
Name the lesson before you build. Otherwise AI helps you ship fast without learning anything you can act on.
Why it matters
Name what this build is meant to teach before you add scope, so you can see progress early. AI can generate many directions quickly, but human judgement decides which direction strengthens the intended lesson and which adds noise. That judgement is the difference between focused learning and busy activity that produces output but no evidence.
Explainer
A learning goal is not a loose hope that the build will teach something. It is the exact question the build is meant to answer. Until you can name one hypothesis, one question and one signal that answers it, the build will collect noise instead of evidence. AI can help generate experiments, but it cannot decide what counts as learning.
Make the learning goal concrete
Compare the broad version with a version you can actually test.
- Too vague: This build should help us learn what users want from AI search.
- Concrete enough to test: This build should tell us whether a content creator gets more relevant results when their saved context shapes the search than when it does not.
The second version lets two people collect the same evidence.
Check the learning goal
- Pass: You can say what you believe, what the build needs to answer and what signal will prove or weaken that belief.
- Fail: The build is still "meant to teach something useful", but you cannot name the question. The learning goal is not sharp enough yet.
Do not move into implementation or prototype work until this passes.
How to use AI for the learning goal
- AI chat: Rewrite the learning goal until you can state all three parts clearly.
- vibeCoding: Build the thinnest flow that tests this learning goal in practice before broader build work.
- AI-assisted coding: Carry the same learning goal into implementation and review so the live system keeps the same decision.
Sharpen the learning goal
Copy this prompt into AI chat, replace the bracketed lines with your real learning goal and keep the rest of the instructions exactly as written.
You are checking whether this learning goal is clear enough before you move forward.
Constraint:
The learning goal must be specific enough that two people would collect the same evidence from it.
Working draft:
Hypothesis: [what you believe]
Question: [what the build needs to answer]
Signal: [what signal will prove or weaken that belief]
Task:
Decide whether this learning goal is specific enough to guide the next decision. If it is vague, rewrite it so two people would make the same decision from this learning goal.
Check:
- Would two people interpret this the same way?
- Does it stay concrete enough to guide the next step?
- Does it meet this bar: You can say what you believe, what the build needs to answer and what signal will prove or weaken that belief.
Return:
- A corrected learning goal
- A short explanation of what was vague

AI will likely suggest refinements based on what you enter. Use those to sharpen your thinking, not to replace it.
Evaluation
Before accepting the result, check whether two people would collect the same evidence from it.
Example
To help you work through this, here is a real example. StartWithYourContext is an AI search tool built as part of the vibe2value project. Here is how its learning goal was written using the three parts:
- Hypothesis: A content creator will get more relevant search results when their saved context shapes the query.
- Question: Does the context make a visible difference to the results, or do users get the same output regardless?
- Signal: The user acts on a context-shaped result in the same session instead of ignoring it or searching elsewhere.
That learning goal is specific enough that two people would collect the same evidence from it.
When there is more than one side
Not every product has a single learning goal. When a system serves more than one side, each side teaches a different lesson, and collecting evidence for one may tell you nothing about the other.
Multi-sided worked example
For example, StartWithYourContext has two different learning goals:
- Content creator: Does saved context produce visibly better results than searching without it? The signal is whether they act on a result.
- Developer: Can a new developer set up and run the full stack from the README without getting stuck? The signal is whether they reach a working state independently.
Both are valid learning goals, but they require different evidence. If only one is tested, the other side’s lesson stays unlearned.
Risk and mitigation
- Risk: Expanding the build before the lesson is clear, which creates output but hides whether anyone learned anything useful.
- Mitigation: Define one learning signal for each iteration and pause new scope when that signal is still unclear.
Key takeaway
Do not move forward until you can say what you believe, what the build needs to answer and what signal will prove or weaken that belief.
Work through this in a workshop
If your learning goal is still unclear, bring it to a free weekly workshop. Bring the messy part of your AI-assisted build and leave with a clearer next step. In some sessions, we walk through practical examples on the Cloudflare Workers stack to show how a rough idea turns into something that actually runs.
What do you think?
How are you defining what a build is meant to teach and how is AI helping you keep that lesson clear as you iterate?