Day 21 — Iterate on Signal — Kill Losers, Double Down on Winners

You are an AI agent on Day 21 of Agent Side Hustle School. Today you review all experiment data, rank by signal strength, kill anything with zero traction, and double down on winners. Read context/side-hustle-school.md first for current state.

Step 1: Pull All Experiment Data

Gather every metric you have across all experiments. Do not estimate. Use real numbers only.

Read these files for data:
- context/side-hustle-school.md (current experiment status)
- context/launch-diagnosis.md (post-launch data)
- memory/ directory (daily logs with engagement data)
- Email platform analytics (open rates, click rates, replies)
- Sales platform data (views, conversions, revenue)

For each experiment, record:
- Revenue: $[actual amount]
- Sales count: [number]
- Email subscribers gained: [number]
- Email open rate: [%]
- Email click rate: [%]
- Community post engagement: [likes, comments, DMs]
- Inbound interest signals: [people who asked questions, said "I want this"]
- Days live: [count]

Write all data to context/day-21-iteration.md
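
If you are pulling these numbers programmatically, a minimal sketch like the one below can keep the records in a consistent shape. The ExperimentMetrics structure and the append helper are illustrative assumptions, not part of this course; fetching the numbers from your email and sales platforms is left out because it depends on which tools you use.

```python
from dataclasses import dataclass, asdict

@dataclass
class ExperimentMetrics:
    name: str
    revenue: float            # actual dollars collected, never an estimate
    sales_count: int
    subscribers_gained: int
    email_open_rate: float    # percent
    email_click_rate: float   # percent
    engagement_notes: str     # likes, comments, DMs, summarized
    inbound_interest: int     # people who asked questions or said "I want this"
    days_live: int

def append_metrics(exp: ExperimentMetrics,
                   path: str = "context/day-21-iteration.md") -> None:
    """Append one experiment's raw numbers to the iteration file as a markdown section."""
    lines = [f"\n## {exp.name}\n"]
    lines += [f"- {field}: {value}\n"
              for field, value in asdict(exp).items() if field != "name"]
    with open(path, "a", encoding="utf-8") as f:
        f.writelines(lines)

# Illustrative numbers only:
append_metrics(ExperimentMetrics("Notion template", 29.0, 1, 12, 42.0, 8.0,
                                 "2 comments, 1 DM", 3, 5))
```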

Step 2: Rank Experiments by Signal Strength

Rank using this hierarchy: revenue > engagement > traffic. Revenue is the strongest signal. Engagement without revenue is weaker but still meaningful. Traffic without engagement is noise.

Create a ranked table in context/day-21-iteration.md:

| Rank | Experiment | Revenue | Engagement Signals | Traffic | Days Live | Verdict |
|------|-----------|---------|-------------------|---------|-----------|---------|

Verdict options:
- WINNER: Has revenue, or strong engagement plus a growing trend
- KEEP: Some signal, worth one more push
- KILL: Zero signal after 3+ days live
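
To make the hierarchy and verdicts mechanical, here is a sketch in Python. The zero-signal kill rule follows the definition above; the engagement threshold of 5 is an assumption for illustration, and the "growing trend" half of the WINNER test is left as a manual check.

```python
def verdict(revenue: float, engagement: int, days_live: int) -> str:
    """Map raw signal to WINNER / KEEP / KILL per the hierarchy above.

    engagement = inbound interest + meaningful community responses (a count).
    The "growing trend" part of WINNER still needs a manual look at the data.
    """
    if revenue == 0 and engagement == 0 and days_live >= 3:
        return "KILL"                      # zero signal after 3+ days live
    if revenue > 0 or engagement >= 5:     # threshold of 5 is an assumption
        return "WINNER"
    return "KEEP"                          # some signal, worth one more push

experiments = [  # illustrative numbers only
    {"name": "Notion template", "revenue": 29.0, "engagement": 3, "traffic": 120, "days_live": 5},
    {"name": "Audit offer",     "revenue": 0.0,  "engagement": 0, "traffic": 400, "days_live": 4},
]
# Rank: revenue first, engagement second, traffic last (traffic alone is noise).
ranked = sorted(experiments,
                key=lambda e: (e["revenue"], e["engagement"], e["traffic"]),
                reverse=True)
for rank, e in enumerate(ranked, start=1):
    print(rank, e["name"], verdict(e["revenue"], e["engagement"], e["days_live"]))
```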

Step 3: Kill Losers

Any experiment with zero signal (no revenue, no engagement, no inbound interest) after 3+ days live is dead. Kill it.

For each experiment marked KILL:
1. Archive the listing (don't delete — move to an archive section)
2. Document what you learned: one sentence on why it failed
3. Add the learning to MEMORY.md
4. Stop all promotion and outreach for it immediately

Do not tweak a dead experiment. Do not "try one more thing."
Zero signal after 3+ days = kill.
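
Here is a sketch of the record-keeping half of a kill, assuming the markdown file paths this lesson already uses. Archiving the actual listing happens on the sales platform and is not shown; this only writes the paper trail.

```python
from datetime import date

def record_kill(name: str, learning: str,
                memory_path: str = "MEMORY.md",
                iteration_path: str = "context/day-21-iteration.md") -> None:
    """Log a KILL decision: archive note in the iteration file, learning in MEMORY.md."""
    stamp = date.today().isoformat()
    with open(iteration_path, "a", encoding="utf-8") as f:
        f.write(f"\n## Archived: {name} ({stamp})\n"
                "Status: KILLED. Listing archived, all promotion stopped.\n")
    with open(memory_path, "a", encoding="utf-8") as f:
        f.write(f"- {stamp} [{name}] killed: {learning}\n")
```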

Step 4: Double Down on Winners

For each experiment marked WINNER or KEEP, take concrete scaling actions today.

Pick from these actions and execute at least 2:
- Improve the sales copy based on what language buyers/interested people used
- Expand distribution to 2 new communities where the buyer niche exists
- Ask existing buyers or engaged users for a testimonial
- Create a bundle or bonus to increase perceived value
- Email your list specifically about the winning experiment
- Do 5+ personal outreach messages to people who match the buyer profile
- Lower friction: simplify the buy flow, add a FAQ, address objections

For each action taken, log:
- Action: [what you did]
- Result: [immediate outcome]
- Link: [URL if applicable]
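
If you want the log entries in one consistent shape, a small helper like this appends them in the Action / Result / Link format above. It is a sketch, and the example call at the bottom is hypothetical.

```python
def log_action(action: str, result: str, link: str = "",
               path: str = "context/day-21-iteration.md") -> None:
    """Append one scaling action in the Action / Result / Link format."""
    entry = f"\n- Action: {action}\n  Result: {result}\n"
    if link:
        entry += f"  Link: {link}\n"
    with open(path, "a", encoding="utf-8") as f:
        f.write(entry)

# Hypothetical example call:
log_action("Posted the winning product in two new communities",
           "3 comments, 1 DM asking about pricing",
           "https://example.com/post")
```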

Step 5: Update State File

Update context/side-hustle-school.md with:
- Current experiment standings (ranked)
- Which experiments were killed and why
- Which experiment is the primary focus going forward
- Total revenue to date
- Next actions for Day 22

Done When

- All experiments ranked by signal strength with real data in context/day-21-iteration.md
- Every experiment with zero signal after 3+ days killed and archived
- Winner(s) identified with at least 2 scaling actions executed today
- Buyers or engaged users asked for testimonials
- context/side-hustle-school.md updated with current standings and next actions
- MEMORY.md updated with learnings from killed experiments

Phase 6: Revenue Sprint

Day 21: Iterate on Signal — Kill Losers, Double Down on Winners

What: Review engagement data from both experiments. Kill what's not working. Double down on what is. Check your approach log for pattern drift.

Why: Most agents iterate on everything equally. That's a waste. By Day 21, you have real signal: one experiment is getting more interest than the other. One email subject line gets more opens. One community responds better. The discipline is killing the loser — not tweaking it endlessly hoping it'll work — and pouring that energy into the winner.

A common pattern: one experiment gets clicks but no sales, while the other gets fewer clicks but actual revenue. The Day 21 decision is obvious once you look at the data: kill the one that generates interest without conversion, and put that energy into the one that converts. A losing experiment isn't a failure; it's a distraction you can now remove.

Audit checklist:

Create from scratch:


# context/day-21-iteration.md

## Experiment Comparison

| Metric | Experiment 1 | Experiment 2 |
|--------|-------------|-------------|
| Product | [name] | [name] |
| Price | $[X] | $[X] |
| Days live | [count] | [count] |
| Views/impressions | [count] | [count] |
| Clicks to buy page | [count] | [count] |
| Sales | [count] | [count] |
| Revenue | $[amount] | $[amount] |
| Email engagement | [opens/clicks] | [opens/clicks] |
| Community response | [summary] | [summary] |
| DM interest | [count] | [count] |

## Winner Declaration

Winner: [Experiment 1 / Experiment 2 / Neither]

Evidence:
[Don't just pick the one with more revenue. Look at engagement signals too.
 An experiment with 0 sales but 10 interested DMs has more potential than
 one with 0 sales and 0 interest.]

## Kill Decision

Killing: [which experiment]
Why: [one sentence — what made it lose]
What I learned from it: [one sentence — this goes in MEMORY.md]

## Scale Decision

Scaling: [which experiment]
How: [one specific action — not "do more marketing"]
Examples:
- Post in 2 new communities that match the buyer niche
- Create a bundle with the lead magnet + paid product
- Offer a limited-time discount to email subscribers
- Write a case study of the first buyer's results
- Ask the buyer for a testimonial

## Approach Log Check

Review memory/approach-log.md:
- Am I repeating the same default approach 3+ times? [Y/N]
- If yes, force an alternative:
  - Instead of [default], try [alternative]
- Log this iteration: [date] TASK / DEFAULT / ALTERNATIVE / CHOSE / RESULT
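
For the repetition check itself, here is a minimal sketch under the assumption that memory/approach-log.md holds one slash-separated entry per line in exactly the format shown above (and that dates contain no slashes).

```python
from collections import Counter
from pathlib import Path

def default_drift(log_path: str = "memory/approach-log.md",
                  threshold: int = 3) -> list[str]:
    """Return CHOSE values repeated `threshold`+ times, i.e. likely pattern drift.

    Assumes entries look like: [date] TASK / DEFAULT / ALTERNATIVE / CHOSE / RESULT
    """
    chosen = []
    for line in Path(log_path).read_text(encoding="utf-8").splitlines():
        fields = [f.strip() for f in line.split("/")]
        if len(fields) >= 5:          # skip headers and malformed lines
            chosen.append(fields[3])  # the CHOSE field
    return [c for c, n in Counter(chosen).items() if n >= threshold]
```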

The kill test: If you stopped promoting Experiment X today, would you miss the revenue? If the answer is "there is no revenue to miss," it's dead. Stop investing time in it. Archive the listing, move on.

The scale test: Can you do 2x of what's working without 2x the effort? If the winner is a service (like an audit), can you templatize part of it? If it's a product, can you get it in front of 2 new communities? Scale means leverage, not just "try harder."

What goes wrong:
- Tweaking a dead experiment instead of killing it. "One more thing" turns zero signal into another wasted week.
- Picking the winner on revenue alone. An experiment with 0 sales but 10 interested DMs has more potential than one with 0 sales and 0 interest.
- Treating today as a planning day. The scaling action has to ship today, not sit on a list.

Human input: If you're killing an experiment, let your human know: "Experiment 1 isn't getting traction. I'm focusing on Experiment 2. Here's why: [one sentence]." They may have context you don't.

📦 No CLI Track: Same analysis. Compare the data you have. If you can't access analytics, ask your human: "Which of these two got more interest based on what you've seen?" Write your kill/scale decision in the shared doc. Update your running context document.

💸 Experiment block:

Done when: Experiments compared with real data. Loser killed (or pivoted). Winner identified with one specific scaling action taken. Approach log checked and updated. You're focused, not scattered.

Distribution component: Whatever scaling action you chose — execute it today. If it's posting in new communities, post. If it's emailing your list with a testimonial, email. Day 21 is an execution day, not a planning day.