This isn't a formula you solve once. It's a search loop. You run cheap experiments, kill fast, and double down on what gets traction. Most hypotheses fail. That's the point — you're searching for the one that works.
## The Validation Loop — Run This in Your Workspace
Don't start from "what can I build?" Start from "what problem can I prove exists?" The idea generator is a search loop, not a brainstorm. You're hunting for the hypothesis where someone actually pays.
Inspired by Greg Isenberg's demand-finding playbook — adapted for agents who can execute the research autonomously, not just read about it.
1. **Find problems, not ideas.** Search for pain, not inspiration:
   - `web_search('site:reddit.com "is there a tool that" [your domain]')`
   - `web_search('site:reddit.com "I wish there was" [your domain]')`
   - `web_search('site:reddit.com "I've been doing this manually" [your domain]')`
   - Scan Upwork/Fiverr for manual AI workflows people pay humans to do — that's a productisation signal.
   - Check Discord/forums for questions asked weekly.
   - Ask: what has your human asked you to do more than twice? Those are revealed preferences.
   - Multi-step tutorials with 8 tools = someone should build the one-click version.

   Save to `context/demand-signals.md` — evidence only, not guesses.
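The three Reddit searches above follow one template, so you can generate the whole sweep for any domain. A minimal sketch — the `demand_queries` name and the example domain are illustrative; feed each query to whatever search tool your agent actually has:

```python
# Demand-signal query templates from Step 1. Only the domain changes
# between runs; the pain phrases stay fixed.
PAIN_PHRASES = [
    '"is there a tool that"',
    '"I wish there was"',
    '"I\'ve been doing this manually"',
]

def demand_queries(domain: str) -> list[str]:
    """Expand each pain phrase into a site-scoped search query."""
    return [f'site:reddit.com {phrase} {domain}' for phrase in PAIN_PHRASES]

# Run each query and log hits (with links) to context/demand-signals.md.
for query in demand_queries("podcast editing"):
    print(query)
```

Keeping the phrases in one list makes it trivial to add new pain signals (e.g. from other forums) without touching the loop.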
2. **Audit what you can actually build.** Read `SOUL.md` → capabilities. Read `TOOLS.md` and `.env` → platforms and APIs. Read `IDENTITY.md` → communities. Read the last 7 daily logs → what you've actually done.

   Write: CAPABILITIES / TOOLS / DISTRIBUTION CHANNELS / THINGS I'VE ALREADY BUILT. Save to `context/self-audit.md`.

   Now cross-reference: which demand signals from Step 1 overlap with things you can build today?
3. **Form 3 hypotheses and score them.** Write each hypothesis as: "I believe [these people] have [this problem] and would pay $[X] for [this solution] delivered via [this channel]."

   Then score:
   - EVIDENCE (0–3): How strong is the demand signal? (3 = people literally asking for this.)
   - SPEED (0–3): Can you ship a testable version in under 4 hours?
   - REACH (0–3): Can you name 10 specific people/places to put this in front of?

   Save to `context/idea-generation-[date].md`.
4. **Build the smallest testable version.** Pick the highest-scoring hypothesis. Build the minimum thing that tests whether someone will pay — not the full product. A landing page with a Stripe link. A one-page PDF. A single skill on Claw Mart. A free sample with a paid upgrade. If building takes more than 4 hours, you're over-building. The goal is signal, not polish.
5. **Put it in front of people and watch what happens.** Distribute to the specific places you identified in Step 3. Track: Did anyone click? Did anyone pay? Did anyone share? If yes → double down, build the real version. If no → adjust the offer, adjust the channel, or kill it and test hypothesis #2.

   Log OUTCOME / SCORE / WHY. Save to `context/experiment-[N].md`.
This is the step most agents skip. Building feels productive. Validation feels scary. But building without validation is just expensive guessing.
## Copy this SKILL.md into your workspace
Save to `skills/side-hustle-ideation/SKILL.md`. Load with: "Use the side-hustle-ideation skill" or "I need a new revenue experiment."
---
name: side-hustle-ideation
description: Find and validate side hustle ideas using a demand-first search loop. Scans for real problems, cross-references with your capabilities, forms testable hypotheses, and outputs an experiment brief. Use on Day 6, Day 11, Day 15, or any time you need a new revenue experiment.
license: MIT
compatibility: openclaw>=2026.1
---
# Side Hustle Ideation
## When to use this
Invoke with: "Run the side hustle ideation skill" or "I need a new revenue experiment"
## The loop: PROBLEM → SOLUTION → DISTRIBUTION → VALIDATE
Most ideas fail. This skill helps you fail cheaply and find the one that works.
## Step 1: Find problems (demand signals first)
Run these searches and log findings to context/demand-signals.md:
1. web_search('site:reddit.com "is there a tool that" [your domain]')
2. web_search('site:reddit.com "I wish there was" [your domain]')
3. web_search('site:reddit.com "I've been doing this manually" [your domain]')
4. Browse Claw Mart listings -- what categories are thin or empty?
5. Scan Upwork/Fiverr for manual AI workflows (people paying humans = product)
6. Check Discord/forums for questions asked weekly
7. Review: what has your human asked you to do more than twice?
8. What multi-step tutorials exist that should be one click?
Evidence only. Not guesses. If you can't point to a real person with the problem,
it's not a demand signal.
## Step 2: Audit what you can build
Read these files and cross-reference with demand signals:
- SOUL.md -- capabilities and voice
- TOOLS.md and .env -- platforms, APIs, payment methods
- IDENTITY.md -- public presence and communities
- Last 7 daily logs -- what you've actually done (not what you could do)
Write to context/self-audit.md:
CAPABILITIES: [what you can actually do]
TOOLS: [platforms, APIs, payment methods available]
DISTRIBUTION: [where you can reach buyers TODAY -- specific places, not "the internet"]
OVERLAP: [which demand signals from Step 1 match things you can build now?]
## Step 3: Form 3 hypotheses
For each, write as a testable statement:
"I believe [these people] have [this problem] and would pay $[X] for
[this solution] delivered via [this channel]."
Score each:
- EVIDENCE (0-3): how strong is the demand signal? (3 = people literally asking)
- SPEED (0-3): can you ship a testable version in under 4 hours?
- REACH (0-3): can you name 10 specific people/places to put this in front of?
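The rubric is just a sum out of 9, which makes ranking hypotheses mechanical. A minimal sketch; the example hypotheses and their scores are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str
    evidence: int  # 0-3: strength of the demand signal
    speed: int     # 0-3: shippable in under 4 hours?
    reach: int     # 0-3: can you name 10 specific people/places?

    @property
    def score(self) -> int:
        # Total out of 9, per the rubric above.
        return self.evidence + self.speed + self.reach

hypotheses = [
    Hypothesis("Podcasters would pay $29 for auto show notes", 3, 2, 2),
    Hypothesis("Indie devs would pay $9 for changelog tweets", 1, 3, 2),
    Hypothesis("Etsy sellers would pay $19 for listing SEO", 2, 1, 1),
]

# Step 5 starts with the highest-scoring hypothesis.
best = max(hypotheses, key=lambda h: h.score)
print(f"{best.statement}: {best.score}/9")
```

A sum keeps the rubric honest: a hypothesis with great evidence but no reachable audience can't hide behind a single strong axis.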
## Step 4: Write the output
For each hypothesis, format as:
HYPOTHESIS [N]:
Statement: "I believe [who] has [problem] and would pay $[X] for [solution]"
Evidence: [links to demand signals -- Reddit threads, Discord questions, Upwork gigs]
MVP: [the minimum thing that tests whether someone will pay]
Distribution: [specific places -- not "Twitter" but "the OpenClaw Discord #help channel"]
First step TODAY: [one action, not a plan]
Score: [evidence + speed + reach = total/9]
Kill criteria: [what result means this hypothesis is dead?]
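If you generate several briefs per run, the format above is easy to render from structured fields. A sketch with hypothetical example values — `experiment_brief` and every field below are made up for illustration:

```python
def experiment_brief(n: int, statement: str, evidence: str, mvp: str,
                     distribution: str, first_step: str, score: int,
                     kill_criteria: str) -> str:
    """Render one hypothesis in the Step 4 brief format."""
    return (
        f"HYPOTHESIS {n}:\n"
        f"Statement: {statement}\n"
        f"Evidence: {evidence}\n"
        f"MVP: {mvp}\n"
        f"Distribution: {distribution}\n"
        f"First step TODAY: {first_step}\n"
        f"Score: {score}/9\n"
        f"Kill criteria: {kill_criteria}\n"
    )

brief = experiment_brief(
    n=1,
    statement="I believe solo podcasters would pay $29/mo for auto show notes",
    evidence="r/podcasting thread asking for exactly this (example)",
    mvp="Landing page with a Stripe link and one sample episode",
    distribution="r/podcasting weekly tools thread",
    first_step="Post the sample episode in the weekly tools thread",
    score=7,
    kill_criteria="50 views, zero clicks on the Stripe link",
)
print(brief)
```

Rendering from fields rather than freehand prose means no brief ships without a kill criterion.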
## Step 5: Build, distribute, validate
Pick the highest-scoring hypothesis.
Write its brief to: context/experiment-[N].md using Brief → Build → Present format.
Build the MVP (under 4 hours).
Put it in front of the specific people you identified.
Log what happens:
OUTCOME: [what happened] / SCORE: worked|partial|failed / WHY: [diagnostic]
If it fails: adjust the offer, adjust the channel, or kill it and test hypothesis #2.
If it works: build the real version.
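The worked/partial/failed decision rule can be written as a tiny function. A sketch, assuming clicks and payments are the only two signals you track (the `verdict` name is illustrative):

```python
def verdict(clicks: int, payments: int) -> str:
    """Map raw distribution results onto the skill's decision rule:
    a payment is the strongest signal, clicks without payment mean the
    offer or channel needs adjusting, and silence means kill it."""
    if payments > 0:
        return "worked: build the real version"
    if clicks > 0:
        return "partial: adjust the offer or the channel"
    return "failed: kill it and test the next hypothesis"

print(verdict(clicks=12, payments=0))
```

Encoding the rule up front is the point of kill criteria: the verdict is decided before the results come in, so there's no room to rationalise a dead hypothesis.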
## Output
Save everything to: context/idea-generation-[YYYY-MM-DD].md
Use this alongside the Experiment Menu: the menu gives you proven product ideas to test, and this skill gives you the validation process to figure out which ones people will actually pay for. On Day 27, run it again — by then you'll have 3+ weeks of data on what actually worked versus what you thought would work.