3.1.2 - Ask for Signals, Not Opinions

Choose the decision signal: point to the behaviour or result you need to see and explain what decision it will change.

Decision signal

Are you seeing real signals or just opinions?

The call

Ask for behaviour, not applause. Otherwise feedback confirms what you already believe while real friction stays hidden.

Why it matters

Asking for signals instead of opinions means feedback can improve decisions instead of just validating them. AI can organise patterns quickly, but human judgement decides which behaviour signal matters for launch. The difference is between noisy feedback and clear action tied to user outcomes.

Explainer

A decision signal is not just more feedback. It is an observable behaviour or result that can change the next product decision. Until you can name one observable signal, one behaviour behind it and one decision it will change, the conversation will drift into opinion. AI can help summarise reactions, but it cannot turn vague feedback into evidence on its own.

Make the decision signal concrete

Compare the broad version with a version you can actually test.

  • Too vague: We want feedback on whether people like the search results.
  • Concrete enough to test: We want to see whether a content creator acts on a context-shaped search result in the same session, because that behaviour will tell us whether the context layer is adding real value or just adding steps.

The second version is specific enough that two people would draw the same conclusion from the same evidence.

Check the decision signal

  • Pass: You can point to the behaviour or result you need to see and explain what decision it will change.
  • Fail: If you are still asking for thoughts, opinions or reactions in general, the signal is not defined well enough yet.

Do not move into feedback collection or iteration work until this passes.

How to use AI for the decision signal

  • AI chat: Rewrite the decision signal until you can state all three parts clearly.
  • vibeCoding: Build the thinnest flow that tests this decision signal in practice before broader build work.
  • AI-assisted coding: Carry the same decision signal into implementation and review so the live system keeps the same decision.

Sharpen the decision signal

Copy this prompt into AI chat, replace the bracketed lines with your real decision signal and keep the rest of the instructions exactly as shown.

You are checking whether this decision signal is clear enough before you move forward.

Constraint:
The decision signal must be specific enough that two people would interpret the same evidence in the same way.

Working draft:
Signal to watch: [what behaviour or result you need to see]
Behaviour behind it: [what action creates that signal]
Decision it changes: [what decision it will change]

Task:
Decide whether this decision signal is specific enough to guide the next decision. If it is vague, rewrite it so two people would make the same decision from it.

Check:
- Would two people interpret this the same way?
- Does it stay concrete enough to guide the next step?
- Does it meet this bar: You can point to the behaviour or result you need to see and explain what decision it will change.

Return:
- A corrected decision signal
- A short explanation of what was vague

AI will likely suggest refinements based on what you enter. Use those to sharpen your thinking, not to replace it.

Evaluation

Before accepting the result, check whether two people would interpret the same evidence in the same way.

Example

To help you work through this, here is a real example. StartWithYourContext is an AI search tool built as part of the vibe2value project. Here is how its decision signal was written using the three parts:

  • Signal to watch: Whether the content creator acts on a context-shaped result in the same session.
  • Behaviour behind it: They search with saved context, review the results and click through or use a result instead of leaving to search elsewhere.
  • Decision it changes: If they act on results, the context layer is working and we continue building on it. If they ignore results or leave, the context is not adding enough value and we need to rethink how it shapes the query.

That decision signal is specific enough that two people would interpret the same evidence in the same way.
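The StartWithYourContext signal above can be made operational as a simple session check. This is a minimal sketch, not the product's real telemetry: the event names ("search_with_context", "click_result") and the Event shape are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Event:
    session_id: str
    name: str  # hypothetical event name, e.g. "search_with_context"

def acted_on_result(events: list[Event], session_id: str) -> bool:
    """True if this session both searched with saved context and
    acted on a result before leaving - the behaviour behind the signal."""
    names = [e.name for e in events if e.session_id == session_id]
    return "search_with_context" in names and "click_result" in names

# Two example sessions: s1 acts on a result, s2 leaves without acting.
events = [
    Event("s1", "search_with_context"),
    Event("s1", "click_result"),
    Event("s2", "search_with_context"),
    Event("s2", "leave"),
]
```

Because the check is a single boolean per session, two people reading the same event log will reach the same answer, which is exactly the bar the signal has to clear.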

When there is more than one side

Not every product has a single decision signal. When a system serves more than one side, each side generates different behaviour and a signal that looks strong for one may say nothing about the other.

Multi-sided worked example

For example, StartWithYourContext has two different decision signals:

  • Content creator: Do they act on a context-shaped result? If yes, the context layer adds value. If not, the results are not different enough from generic search.
  • Developer: Do they reach a working local setup from the README without asking for help? If yes, the documentation and stack are clear enough. If not, the integration is harder to follow than it looks.

Both signals drive real decisions, but they measure different things. If only one is watched, the other side’s friction stays invisible.
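One way to keep both sides visible is to track each signal in its own bucket instead of one blended metric. This is a sketch under assumed names; the signal keys below are illustrative, not real instrumentation.

```python
# One counter pair per side - never a single combined score,
# so one side's strong signal cannot hide the other's friction.
signals: dict[str, dict[str, int]] = {
    "creator_acts_on_result": {"yes": 0, "no": 0},
    "developer_setup_from_readme": {"yes": 0, "no": 0},
}

def record(signal: str, passed: bool) -> None:
    """Record one observation for one side's signal."""
    signals[signal]["yes" if passed else "no"] += 1

def pass_rate(signal: str) -> float:
    """Share of observations where this side's behaviour occurred."""
    counts = signals[signal]
    total = counts["yes"] + counts["no"]
    return counts["yes"] / total if total else 0.0
```

For example, recording one pass and one fail for the creator signal gives a 0.5 pass rate for that side while the developer signal stays untouched at zero observations.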

Risk and mitigation

  • Risk: Treating confident opinions as evidence, which can push launch decisions in the wrong direction while real user issues stay hidden.
  • Mitigation: Agree on one behaviour signal per decision and only change direction when that signal crosses the threshold you set in advance.
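The mitigation above can be sketched as a mechanical check: agree on the threshold before any data arrives, then let the measured signal drive the decision. The 40% threshold here is an assumed example, not a recommended value.

```python
# Illustrative threshold, agreed in advance of seeing any data.
THRESHOLD = 0.40

def decide(sessions_acted: int, sessions_total: int) -> str:
    """Turn the measured behaviour signal into the pre-agreed decision."""
    rate = sessions_acted / sessions_total
    if rate >= THRESHOLD:
        return "continue building on the context layer"
    return "rethink how context shapes the query"
```

Because the threshold is fixed up front, a confident opinion after the fact cannot move it; the direction only changes when the behaviour signal actually crosses the line.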

Key takeaway

Do not move forward until you can point to the behaviour or result you need to see and explain what decision it will change.

Work through this in a workshop

If your decision signal is still unclear, bring the messy part of your AI-assisted build to a free weekly workshop and leave with a clearer next step. In some sessions, we walk through practical examples on the Cloudflare Workers stack to show how a rough idea turns into something that actually runs.


What do you think?

How are you asking for signals instead of opinions in launch work and how is AI helping you decide which feedback should change your plan?