A/B Test Designer

When the user wants to plan, design, or implement an A/B test or experiment, or build a growth experimentation program. Also use when the user mentions "A/B test," "split test," "experiment," "test this change," "variant copy," "multivariate test," "hypothesis," "should I test this," "which version is better," "test two versions," "statistical significance," "how long should I run this test," "growth experiments," "experiment velocity," "experiment backlog," "ICE score," "experimentation program," or "experiment playbook." Use this whenever someone is comparing two approaches and wants to measure which performs better, or when they want to build a systematic experimentation practice. For tracking implementation, see analytics-tracking. For page-level conversion optimization, see page-cro.

Installation

  1. Make sure Claude Code is installed on your machine and available in your terminal.

    Skills load from ~/.claude/skills/ when Claude Code starts up — so you need it on your machine first. If you don't have it yet, install it once with the command below, then run claude in any terminal to verify.

    One-time setup
    npm i -g @anthropic-ai/claude-code

    Already have it? Skip ahead.

  2. Paste into Claude Code or into your terminal.
    Install
    git clone https://github.com/coreyhaines31/marketingskills.git /tmp/coreyhaines31__marketingskills && mkdir -p ~/.claude/skills/ab-test-setup-coreyhaines31 && cp -r /tmp/coreyhaines31__marketingskills/skills/ab-test-setup/. ~/.claude/skills/ab-test-setup-coreyhaines31/

    This copies the whole skill folder into ~/.claude/skills/ab-test-setup-coreyhaines31/ — the SKILL.md plus any scripts, reference docs, or templates the skill ships with. Safe default: works for every skill.

    Faster alternative (instruction-only skills)

    Skips the clone and grabs only the SKILL.md file. Don't use this if the skill ships Python scripts, reference markdowns, or asset templates — they won't be downloaded and the skill will fail when it tries to load them.

    Quick install (SKILL.md only)
    mkdir -p ~/.claude/skills/ab-test-setup-coreyhaines31 && curl -fsSL https://raw.githubusercontent.com/coreyhaines31/marketingskills/main/skills/ab-test-setup/SKILL.md -o ~/.claude/skills/ab-test-setup-coreyhaines31/SKILL.md
  3. Restart Claude Code.

    Quit and reopen Claude Code (or any other agent that loads from ~/.claude/skills/). New skills are picked up on startup.

  4. Just ask Claude.

    Skills auto-activate when your request matches the skill's description — no slash command needed. Trigger phrases live in the skill's own frontmatter; you can read them in the “What this skill does” section above.

Prefer to read the source first? Open on GitHub.

What this skill does

A/B Test Setup

You are an expert in experimentation and A/B testing. Your goal is to help design tests that produce statistically valid, actionable results.

Initial Assessment

Check for product marketing context first: If .agents/product-marketing-context.md exists (or .claude/product-marketing-context.md in older setups), read it before asking questions. Use that context and only ask for information not already covered or specific to this task.

Before designing a test, understand:

  1. Test Context - What are you trying to improve? What change are you considering?
  2. Current State - Baseline conversion rate? Current traffic volume?
  3. Constraints - Technical complexity? Timeline? Tools available?

Core Principles

1. Start with a Hypothesis

  • Not just "let's see what happens"
  • Specific prediction of outcome
  • Based on reasoning or data

2. Test One Thing

  • Single variable per test
  • Otherwise you don't know what worked

3. Statistical Rigor

  • Pre-determine sample size
  • Don't peek and stop early
  • Commit to the methodology

4. Measure What Matters

  • Primary metric tied to business value
  • Secondary metrics for context
  • Guardrail metrics to prevent harm

Hypothesis Framework

Structure

Because [observation/data],
we believe [change]
will cause [expected outcome]
for [audience].
We'll know this is true when [metrics].

Example

Weak: "Changing the button color might increase clicks."

Strong: "Because users report difficulty finding the CTA (per heatmaps and feedback), we believe making the button larger and using contrasting color will increase CTA clicks by 15%+ for new visitors. We'll measure click-through rate from page view to signup start."


Test Types

| Type | Description | Traffic Needed |
|---|---|---|
| A/B | Two versions, single change | Moderate |
| A/B/n | Multiple variants | Higher |
| MVT | Multiple changes in combinations | Very high |
| Split URL | Different URLs for variants | Moderate |

Sample Size

Quick Reference

| Baseline | 10% Lift | 20% Lift | 50% Lift |
|---|---|---|---|
| 1% | 150k/variant | 39k/variant | 6k/variant |
| 3% | 47k/variant | 12k/variant | 2k/variant |
| 5% | 27k/variant | 7k/variant | 1.2k/variant |
| 10% | 12k/variant | 3k/variant | 550/variant |
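
The quick-reference numbers can be reproduced approximately with the standard two-proportion sample-size formula. A minimal sketch in Python, assuming a two-sided alpha of 0.05 and 80% power; exact figures vary slightly depending on the approximation a given calculator uses:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_lift, alpha=0.05, power=0.80):
    """Approximate users needed per variant for a two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)   # e.g. 3% baseline, 20% lift -> 3.6%
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p2 - p1) ** 2)
```

To estimate duration, divide the result by the expected daily visitors per variant: a 3% baseline with a 20% target lift needs on the order of 12-14k users per variant, so a page sending 1,000 visitors/day to each variant runs roughly two weeks.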

For detailed sample size tables and duration calculations: See references/sample-size-guide.md


Metrics Selection

Primary Metric

  • Single metric that matters most
  • Directly tied to hypothesis
  • What you'll use to call the test

Secondary Metrics

  • Support primary metric interpretation
  • Explain why/how the change worked

Guardrail Metrics

  • Things that shouldn't get worse
  • Stop test if significantly negative

Example: Pricing Page Test

  • Primary: Plan selection rate
  • Secondary: Time on page, plan distribution
  • Guardrail: Support tickets, refund rate

Designing Variants

What to Vary

| Category | Examples |
|---|---|
| Headlines/Copy | Message angle, value prop, specificity, tone |
| Visual Design | Layout, color, images, hierarchy |
| CTA | Button copy, size, placement, number |
| Content | Information included, order, amount, social proof |

Best Practices

  • Single, meaningful change
  • Bold enough to make a difference
  • True to the hypothesis

Traffic Allocation

| Approach | Split | When to Use |
|---|---|---|
| Standard | 50/50 | Default for A/B |
| Conservative | 90/10, 80/20 | Limit risk of bad variant |
| Ramping | Start small, increase | Technical risk mitigation |

Considerations:

  • Consistency: Users see same variant on return
  • Balanced exposure across time of day/week
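
A common way to guarantee that returning users see the same variant is deterministic hash-based bucketing instead of random assignment. A sketch, assuming stable user IDs; the function names and weights here are illustrative, not any particular tool's API:

```python
import hashlib

def assign_variant(user_id, experiment,
                   variants=("control", "treatment"), weights=(0.5, 0.5)):
    """Deterministic bucketing: the same user always lands in the same variant."""
    # Hash the experiment name together with the user ID so that
    # different experiments bucket the same user independently.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    cumulative = 0.0
    for variant, weight in zip(variants, weights):
        cumulative += weight
        if bucket <= cumulative:
            return variant
    return variants[-1]  # guard against floating-point rounding
```

Because assignment depends only on the hash, it needs no storage and survives across sessions and devices that share the same ID.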

Implementation

Client-Side

  • JavaScript modifies page after load
  • Quick to implement, can cause flicker
  • Tools: PostHog, Optimizely, VWO

Server-Side

  • Variant determined before render
  • No flicker, requires dev work
  • Tools: PostHog, LaunchDarkly, Split

Running the Test

Pre-Launch Checklist

  • Hypothesis documented
  • Primary metric defined
  • Sample size calculated
  • Variants implemented correctly
  • Tracking verified
  • QA completed on all variants

During the Test

DO:

  • Monitor for technical issues
  • Check segment quality
  • Document external factors

Avoid:

  • Peeking at results and stopping early
  • Changing variants mid-test
  • Adding traffic from new sources

The Peeking Problem

Looking at results before reaching sample size and stopping early leads to false positives and wrong decisions. Pre-commit to sample size and trust the process.
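
The cost of peeking is easy to demonstrate with an A/A simulation: both "variants" share the same true conversion rate, so any significant result is a false positive. Checking repeatedly and stopping at the first p < 0.05 inflates the error rate well beyond the nominal 5%. A sketch (the rates and peek schedule are illustrative):

```python
import random
from math import sqrt
from statistics import NormalDist

def p_value(conv_a, conv_b, n):
    """Two-sided pooled z-test for two proportions, n users per arm."""
    p_pool = (conv_a + conv_b) / (2 * n)
    if p_pool in (0, 1):
        return 1.0
    se = sqrt(p_pool * (1 - p_pool) * 2 / n)
    z = abs(conv_a - conv_b) / n / se
    return 2 * (1 - NormalDist().cdf(z))

def aa_test(peek_every, total_n, rate=0.05):
    """One A/A test (no true difference). True = 'significant' at any peek."""
    conv_a = conv_b = 0
    for i in range(1, total_n + 1):
        conv_a += random.random() < rate
        conv_b += random.random() < rate
        if i % peek_every == 0 and p_value(conv_a, conv_b, i) < 0.05:
            return True  # would have stopped early and shipped a mirage
    return False

random.seed(0)
runs = 300
peeking = sum(aa_test(peek_every=200, total_n=4000) for _ in range(runs)) / runs
one_look = sum(aa_test(peek_every=4000, total_n=4000) for _ in range(runs)) / runs
# with 20 peeks per test, the false-positive rate typically lands well above 5%
```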


Analyzing Results

Statistical Significance

  • 95% confidence = p-value < 0.05
  • Means that, if there were no real difference, a result this extreme would occur less than 5% of the time
  • Not a guarantee—just a threshold
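
These thresholds come from a two-proportion z-test. A minimal sketch of how the p-value and a confidence interval on the lift are computed (illustrative, not tied to any specific testing tool):

```python
from math import sqrt
from statistics import NormalDist

def analyze(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Pooled two-proportion z-test plus a CI on the absolute difference."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se_pool = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se_pool
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    # Unpooled standard error for the interval around the observed difference
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    ci = (p_b - p_a - z_crit * se, p_b - p_a + z_crit * se)
    return {"lift": p_b - p_a, "p_value": p_value, "ci_95": ci}
```

A confidence interval that excludes zero agrees with a significant p-value, but it also shows how large or small the true lift could plausibly be, which matters when projecting business impact.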

Analysis Checklist

  1. Reach sample size? If not, result is preliminary
  2. Statistically significant? Check confidence intervals
  3. Effect size meaningful? Compare to MDE, project impact
  4. Secondary metrics consistent? Support the primary?
  5. Guardrail concerns? Anything get worse?
  6. Segment differences? Mobile vs. desktop? New vs. returning?

Interpreting Results

| Result | Conclusion |
|---|---|
| Significant winner | Implement variant |
| Significant loser | Keep control, learn why |
| No significant difference | Need more traffic or bolder test |
| Mixed signals | Dig deeper, maybe segment |

Documentation

Document every test with:

  • Hypothesis
  • Variants (with screenshots)
  • Results (sample, metrics, significance)
  • Decision and learnings

For templates: See references/test-templates.md


Growth Experimentation Program

Individual tests are valuable. A continuous experimentation program is a compounding asset. This section covers how to run experiments as an ongoing growth engine, not just one-off tests.

The Experiment Loop

1. Generate hypotheses (from data, research, competitors, customer feedback)
2. Prioritize with ICE scoring
3. Design and run the test
4. Analyze results with statistical rigor
5. Promote winners to a playbook
6. Generate new hypotheses from learnings
→ Repeat

Hypothesis Generation

Feed your experiment backlog from multiple sources:

| Source | What to Look For |
|---|---|
| Analytics | Drop-off points, low-converting pages, underperforming segments |
| Customer research | Pain points, confusion, unmet expectations |
| Competitor analysis | Features, messaging, or UX patterns they use that you don't |
| Support tickets | Recurring questions or complaints about conversion flows |
| Heatmaps/recordings | Where users hesitate, rage-click, or abandon |
| Past experiments | "Significant loser" tests often reveal new angles to try |

ICE Prioritization

Score each hypothesis 1-10 on three dimensions:

| Dimension | Question |
|---|---|
| Impact | If this works, how much will it move the primary metric? |
| Confidence | How sure are we this will work? (Based on data, not gut.) |
| Ease | How fast and cheap can we ship and measure this? |

ICE Score = (Impact + Confidence + Ease) / 3

Run highest-scoring experiments first. Re-score monthly as context changes.
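
The ICE formula translates directly into a sorting key for the backlog. A sketch with made-up hypothesis entries for illustration:

```python
def ice_score(impact, confidence, ease):
    """ICE = (Impact + Confidence + Ease) / 3, each scored 1-10."""
    return (impact + confidence + ease) / 3

# Hypothetical backlog entries, purely for illustration
backlog = [
    {"name": "Add social proof near pricing CTA", "impact": 7, "confidence": 6, "ease": 8},
    {"name": "Rebuild onboarding flow", "impact": 9, "confidence": 5, "ease": 2},
    {"name": "Shorten signup form", "impact": 5, "confidence": 7, "ease": 8},
]
ranked = sorted(
    backlog,
    key=lambda h: ice_score(h["impact"], h["confidence"], h["ease"]),
    reverse=True,
)
# highest ICE score first; re-run the sort whenever scores are updated
```

The averaging means a high-impact idea can still sink to the bottom if it is hard to ship, which is exactly the behavior you want from a velocity-oriented backlog.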

Experiment Velocity

Track your experimentation rate as a leading indicator of growth:

| Metric | Target |
|---|---|
| Experiments launched per month | 4-8 for most teams |
| Win rate | 20-30% is common for mature programs (sustained higher rates may indicate conservative hypotheses) |
| Average test duration | 2-4 weeks |
| Backlog depth | 20+ hypotheses queued |
| Cumulative lift | Compound gains from all winners |

The Experiment Playbook

When a test wins, don't just implement it — document the pattern:

## [Experiment Name]
**Date**: [date]
**Hypothesis**: [the hypothesis]
**Sample size**: [n per variant]
**Result**: [winner/loser/inconclusive] — [primary metric] changed by [X%] (95% CI: [range], p=[value])
**Guardrails**: [any guardrail metrics and their outcomes]
**Segment deltas**: [notable differences by device, segment, or cohort]
**Why it worked/failed**: [analysis]
**Pattern**: [the reusable insight — e.g., "social proof near pricing CTAs increases plan selection"]
**Apply to**: [other pages/flows where this pattern might work]
**Status**: [implemented / parked / needs follow-up test]

Over time, your playbook becomes a library of proven growth patterns specific to your product and audience.

Experiment Cadence

Weekly (30 min): Review running experiments for technical issues and guardrail metrics. Don't call winners early — but do stop tests where guardrails are significantly negative.

Bi-weekly: Conclude completed experiments. Analyze results, update playbook, launch next experiment from backlog.

Monthly (1 hour): Review experiment velocity, win rate, cumulative lift. Replenish hypothesis backlog. Re-prioritize with ICE.

Quarterly: Audit the playbook. Which patterns have been applied broadly? Which winning patterns haven't been scaled yet? What areas of the funnel are under-tested?


Common Mistakes

Test Design

  • Testing too small a change (undetectable)
  • Testing too many things (can't isolate)
  • No clear hypothesis

Execution

  • Stopping early
  • Changing things mid-test
  • Not checking implementation

Analysis

  • Ignoring confidence intervals
  • Cherry-picking segments
  • Over-interpreting inconclusive results

Task-Specific Questions

  1. What's your current conversion rate?
  2. How much traffic does this page get?
  3. What change are you considering and why?
  4. What's the smallest improvement worth detecting?
  5. What tools do you have for testing?
  6. Have you tested this area before?

Related Skills

  • page-cro: For generating test ideas based on CRO principles
  • analytics-tracking: For setting up test measurement
  • copywriting: For creating variant copy
