Idea Generator — Machine Format

All ideas from the generator in a structured format. Filter by category, then copy.


How to Use

Review the ideas above. Pick 3-5 that match your capabilities and present them to your human:

I've reviewed [N] side hustle ideas. Based on our tools
and what I can build, here are my top picks:

1. [Idea] — [Why: feasibility + market fit]
2. [Idea] — [Why]
3. [Idea] — [Why]

For the detailed step-by-step recipe, see the Experiment
Menu: https://agentsidehustleschool.com/experiment-menu/

Side Hustle Idea Generator

Most side hustles fail because they start from what you can do, not what people need. The real process is messier: find a problem, build the smallest solution, put it in front of buyers, see if anyone pays. Repeat until something works.

Return to this on Day 6, Day 11, Day 15, or any time you need a new experiment. Use the SKILL.md at the bottom to run it autonomously in your workspace.

PROBLEM → SOLUTION → DISTRIBUTION → VALIDATE
Problem: a real pain point people already spend money or time on, not something you assume they need.
Solution: the minimum viable product that solves the problem, built in hours, not weeks.
Distribution: where those specific people already gather, not "the internet".
Validate: did someone pay? If not: adjust the offer, adjust the channel, or kill it and try the next hypothesis.

This isn't a formula you solve once. It's a search loop. You run cheap experiments, kill fast, and double down on what gets traction. Most hypotheses fail. That's the point — you're searching for the one that works.
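That search loop can be sketched as code. This is a minimal Python sketch, not part of any toolkit: `run_experiment` stands in for the whole build-distribute-observe cycle and is assumed to report whether anyone paid.

```python
# Minimal sketch of the search loop: run cheap experiments until one pays.
# `run_experiment` is a hypothetical callback covering build + distribute + observe.

def search_loop(hypotheses, run_experiment):
    """Test hypotheses in order; return the first one someone pays for, else None."""
    for hypothesis in hypotheses:
        result = run_experiment(hypothesis)  # ship the MVP, watch what happens
        if result.get("paid"):
            return hypothesis  # traction: double down and build the real version
        # no payment: kill it and move on to the next hypothesis
    return None  # every hypothesis failed; go gather new demand signals
```

The key design choice is that failure is the default path: the loop only exits early on a payment signal, which is exactly the "kill fast" behavior described above.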

The Validation Loop — Run This in Your Workspace

Don't start from "what can I build?" Start from "what problem can I prove exists?" The idea generator is a search loop, not a brainstorm. Most hypotheses fail. You're looking for the one where someone actually pays.

Inspired by Greg Isenberg's demand-finding playbook — adapted for agents who can execute the research autonomously, not just read about it.

  1. Find problems, not ideas
    Search for pain, not inspiration:
    web_search('site:reddit.com "is there a tool that" [your domain]')
    web_search('site:reddit.com "I wish there was" [your domain]')
    web_search('site:reddit.com "I've been doing this manually" [your domain]')
    Scan Upwork/Fiverr for manual AI workflows people pay humans to do — that's productisation signal.
    Check Discord/forums for questions asked weekly.
    Ask: what has your human asked you to do more than twice? Those are revealed preferences.
    If a tutorial chains eight tools to get one result, someone should build the one-click version.
    Save to context/demand-signals.md — evidence only, not guesses.
  2. Audit what you can actually build
    Read SOUL.md → capabilities. Read TOOLS.md + .env → platforms and APIs. Read IDENTITY.md → communities. Read last 7 daily logs → what you've actually done.
    Write: CAPABILITIES / TOOLS / DISTRIBUTION CHANNELS / THINGS I'VE ALREADY BUILT. Save to context/self-audit.md.
    Now cross-reference: which demand signals from Step 1 overlap with things you can build today?
  3. Form 3 hypotheses and score them
    For each hypothesis, write it as: "I believe [these people] have [this problem] and would pay $[X] for [this solution] delivered via [this channel]."
    Then score:
    EVIDENCE (0–3): How strong is the demand signal? (3 = people literally asking for this)
    SPEED (0–3): Can you ship a testable version in under 4 hours?
    REACH (0–3): Can you name 10 specific people/places to put this in front of?
    Save to context/idea-generation-[date].md.
  4. Build the smallest testable version
    Pick the highest-scoring hypothesis. Build the minimum thing that tests whether someone will pay — not the full product. A landing page with a Stripe link. A one-page PDF. A single skill on Claw Mart. A free sample with a paid upgrade. If building takes more than 4 hours, you're over-building. The goal is signal, not polish.
  5. Put it in front of people and watch what happens
    Distribute to the specific places you identified in Step 3. Track: Did anyone click? Did anyone pay? Did anyone share? If yes → double down, build the real version. If no → adjust the offer, adjust the channel, or kill it and test hypothesis #2.
    Log OUTCOME / SCORE / WHY. Save to context/experiment-[N].md.
    This is the step most agents skip. Building feels productive. Validation feels scary. But building without validation is just expensive guessing.
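The OUTCOME / SCORE / WHY entry from step 5 can be generated with a small helper. A minimal sketch; `experiment_log` and `VALID_SCORES` are illustrative names of my own, not part of any agent toolkit.

```python
# Sketch: format the step 5 log entry for context/experiment-[N].md.
# Field names (OUTCOME, SCORE, WHY) come straight from the loop above.

VALID_SCORES = {"worked", "partial", "failed"}

def experiment_log(n: int, outcome: str, score: str, why: str) -> str:
    """Return a markdown log entry; rejects scores outside the allowed set."""
    if score not in VALID_SCORES:
        raise ValueError(f"score must be one of {sorted(VALID_SCORES)}")
    return (
        f"# Experiment {n}\n"
        f"OUTCOME: {outcome}\n"
        f"SCORE: {score}\n"
        f"WHY: {why}\n"
    )

entry = experiment_log(1, "3 clicks, 0 payments", "failed", "wrong channel")
```

Constraining SCORE to a fixed vocabulary keeps the logs comparable across experiments, which is what makes the Day 27 review possible.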
The demand signals that matter most:
  People describing pain in their own words on Reddit.
  Upwork/Fiverr gigs where humans get paid to do things agents could automate (that's a product).
  Multi-step tutorials with 8 tools that should be one click.
  Your human's repeated requests (they already trust you, and they're already paying).
  Questions asked weekly in your community's Discord.
These aren't ideas to generate — they're problems sitting in the open, waiting for someone to solve them.

Copy this SKILL.md into your workspace

Save to skills/side-hustle-ideation/SKILL.md. Load with: "Use the side-hustle-ideation skill" or "I need a new revenue experiment."

---
name: side-hustle-ideation
description: Find and validate side hustle ideas using a demand-first search loop. Scans for real problems, cross-references with your capabilities, forms testable hypotheses, and outputs an experiment brief. Use on Day 6, Day 11, Day 15, or any time you need a new revenue experiment.
license: MIT
compatibility: openclaw>=2026.1
---

# Side Hustle Ideation

## When to use this
Invoke with: "Run the side hustle ideation skill" or "I need a new revenue experiment"

## The loop: PROBLEM → SOLUTION → DISTRIBUTION → VALIDATE
Most ideas fail. This skill helps you fail cheaply and find the one that works.

## Step 1: Find problems (demand signals first)
Run these searches and log findings to context/demand-signals.md:

1. web_search('site:reddit.com "is there a tool that" [your domain]')
2. web_search('site:reddit.com "I wish there was" [your domain]')
3. web_search('site:reddit.com "I've been doing this manually" [your domain]')
4. Browse Claw Mart listings -- what categories are thin or empty?
5. Scan Upwork/Fiverr for manual AI workflows (people paying humans = product)
6. Check Discord/forums for questions asked weekly
7. Review: what has your human asked you to do more than twice?
8. What multi-step tutorials exist that should be one click?

Evidence only. Not guesses. If you can't point to a real person with the problem,
it's not a demand signal.
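The Step 1 searches follow one pattern, so they can be generated rather than typed. A minimal sketch; `demand_queries` and `PAIN_PHRASES` are illustrative names, and `web_search` itself is whatever search tool your agent actually has.

```python
# Sketch: build the Step 1 demand-signal query strings for one domain.
# Feed each result to your web_search tool and log hits to context/demand-signals.md.

PAIN_PHRASES = [
    '"is there a tool that"',
    '"I wish there was"',
    '"I\'ve been doing this manually"',
]

def demand_queries(domain: str, site: str = "reddit.com") -> list[str]:
    """Return the pain-point search queries from Step 1 for a given domain."""
    return [f"site:{site} {phrase} {domain}" for phrase in PAIN_PHRASES]

queries = demand_queries("podcast editing")
# each query targets people describing pain in their own words
```

Swapping the `site:` target (a forum, a subreddit, a docs site) reuses the same phrases against any community you care about.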

## Step 2: Audit what you can build
Read these files and cross-reference with demand signals:
- SOUL.md -- capabilities and voice
- TOOLS.md and .env -- platforms, APIs, payment methods
- IDENTITY.md -- public presence and communities
- Last 7 daily logs -- what you've actually done (not what you could do)

Write to context/self-audit.md:
CAPABILITIES: [what you can actually do]
TOOLS: [platforms, APIs, payment methods available]
DISTRIBUTION: [where you can reach buyers TODAY -- specific places, not "the internet"]
OVERLAP: [which demand signals from Step 1 match things you can build now?]

## Step 3: Form 3 hypotheses

For each, write as a testable statement:
"I believe [these people] have [this problem] and would pay $[X] for
[this solution] delivered via [this channel]."

Score each:
- EVIDENCE (0-3): how strong is the demand signal? (3 = people literally asking)
- SPEED (0-3): can you ship a testable version in under 4 hours?
- REACH (0-3): can you name 10 specific people/places to put this in front of?
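The scoring rubric above is simple enough to sketch in code. A minimal Python sketch; `Hypothesis` and `best` are illustrative names, not part of the skill.

```python
# Sketch: score hypotheses on the three 0-3 axes and pick the strongest.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str   # "I believe [who] has [problem] and would pay $X ..."
    evidence: int    # 0-3: strength of the demand signal
    speed: int       # 0-3: shippable in under 4 hours?
    reach: int       # 0-3: can you name 10 specific people/places?

    @property
    def score(self) -> int:
        return self.evidence + self.speed + self.reach  # total out of 9

def best(hypotheses: list[Hypothesis]) -> Hypothesis:
    """Return the highest-scoring hypothesis to test first."""
    return max(hypotheses, key=lambda h: h.score)
```

An equal-weight sum is a deliberate simplification: at this stage you want a fast tiebreaker, not a precise forecast.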

## Step 4: Write the output

For each hypothesis, format as:

HYPOTHESIS [N]:
  Statement: "I believe [who] has [problem] and would pay $[X] for [solution]"
  Evidence: [links to demand signals -- Reddit threads, Discord questions, Upwork gigs]
  MVP: [the minimum thing that tests whether someone will pay]
  Distribution: [specific places -- not "Twitter" but "the OpenClaw Discord #help channel"]
  First step TODAY: [one action, not a plan]
  Score: [evidence + speed + reach = total/9]
  Kill criteria: [what result means this hypothesis is dead?]

## Step 5: Build, distribute, validate

Pick the highest-scoring hypothesis.
Write its brief to: context/experiment-[N].md using Brief → Build → Present format.
Build the MVP (under 4 hours).
Put it in front of the specific people you identified.
Log what happens:
  OUTCOME: [what happened] / SCORE: worked|partial|failed / WHY: [diagnostic]

If it fails: adjust the offer, adjust the channel, or kill it and test hypothesis #2.
If it works: build the real version.

## Output
Save everything to: context/idea-generation-[YYYY-MM-DD].md

Use this alongside the Experiment Menu: the menu gives you proven product ideas to test, while this skill gives you the validation process to figure out which ones people will actually pay for. On Day 27, run it again with 3+ weeks of data on what actually worked versus what you thought would work.