Clean Commit

Prepared for TooTimid

Conversion Rate Optimization Proposal

Prepared by Clean Commit · April 2026

The one-page version

Here's our pitch
We start with the changes that move your unit economics: pricing, offer structure, shipping thresholds, bundles and post-purchase upsells. We call these "Tier 1" experiments. They produce much larger effects (15-40% lifts) and resolve faster than surface-level layout changes. Once the big levers are running, we layer in "Tier 2" structural improvements, which let us test more things at smaller individual impact.

What we've already done
We built a psychographic customer profile from 80+ of your real customer reviews. Our team has also completed a preliminary site analysis and identified several specific Tier 1 experiment opportunities for your store. We've added an appendix at the end of this document that you can read through. We think it's going to be super valuable whether you move forward with this proposal or not.

What you get
Based on three comparable engagements, expected 12-month impact of a 15 to 20% conversion rate lift and $1M+ in cumulative revenue.

Price
Performance based. Month one is performance only with no retainer. You pay nothing unless an experiment wins. Month two onward, $3,000 monthly floor or one month of measured uplift, whichever is higher. Capped at $10,000. No lock-in.
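In code terms, the month-two-onward fee reduces to a single clamp. This is an illustrative sketch of the structure described above, not a contract term; month one is performance-only and isn't modeled here.

```python
def monthly_fee(measured_uplift):
    """Month-two-onward fee: the $3,000 floor or one month of
    measured uplift, whichever is higher, capped at $10,000."""
    return min(max(3_000.0, measured_uplift), 10_000.0)

# No measured uplift -> the $3,000 floor applies.
# $7,500 of uplift -> you pay $7,500. $40,000 of uplift -> capped at $10,000.
```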

Proof
21 experiments documented in this proposal across 12 clients, with real results. Three full case studies from comparable engagements (8x to 15x ROI).

Next step
A 30-minute call with Tim Davidson to walk through your current metrics, confirm the opportunity size and answer anything outstanding. [email protected].

How We Prioritize What to Test

The trap most brands fall into

When brands start experimenting with A/B testing and CRO, the trap most of them fall into is focusing on the UI and the user experience of the store. Those things matter, but far less than price elasticity, the offer you present to customers, discounts, gift with purchase, buy-one-get-one-free and the other mechanisms that raise the perceived value of what you're offering, and then finding the profitable sweet spot.

Interface changes are low risk and easy to run when you're getting started, but their impacts are smaller and harder to measure. They also take far longer to reach significance, and many brands lose patience before they do.

We classify every experiment into tiers based on how directly it affects your unit economics, then prioritize accordingly.

The framework

Tier | What it changes | Expected impact | Examples
Tier 1 | What the customer buys, pays, or receives | 15-40%+ lift | Pricing, shipping thresholds, bundles, offers, subscription models, post-purchase upsells
Tier 2 | How the customer gets to the purchase | 8-20% lift | Navigation, checkout flow, cart architecture, search, cross-sell placement, page structure
Tier 3 | How existing elements look, read, or feel | 2-8% lift | Copy, colors, layout, imagery, badges, trust signals, social proof styling

This approach is backed by significant published research on experiment effect sizes (Wharton, Browne & Jones).

The approach for TooTimid: Tier 1 experiments should be the initial focus. They finish faster, produce larger effects and compound more aggressively. Fully testing them will take a few months, and some may keep running beyond that. Once the big levers are optimized, we'll blend in Tier 2 changes. Tier 3 comes last.

The question we want to ask when deciding which tests to run is: would this change make the customer's bank statement look different? If the answer is yes, it falls into Tier 1.

Winning Experiment Examples

In presenting this proposal, I really wanted to spend the time proving to you (Rob) that we have experience running the kinds of tests that can make a difference to TooTimid. Not in an academic sense, but to show you actual experiments we've run so you can visualize what they would look like on your store. Our team's focus is finding ways to make your store more profitable, and that means getting creative about the way your products are priced and offered.

The best way I could think to do that was to list a full set of high-impact tests we've run for other clients. This will sound a little like blowing our own horn, but my goal is to give you enough detail to assess whether we'd be a good fit. It's all well and good to say we did X, Y and Z for a past client; as a brand new to us, you'll rightly approach that with skepticism. So I wanted to provide an overwhelming volume of evidence that the kinds of experiments we run would have a meaningful impact for TooTimid.

# | Tier | Experiment | Client | Key Result
1 | T1 | Price increase on hero SKUs | One Quiet Mind | +42.5% CVR, +33.4% RPV
2 | T1 | Free shipping threshold optimization | AFTCO | +12% AOV, +4% net revenue
3 | T1 | Starter bundle introduction | AnyAge Wear | +16% AOV
4 | T1 | Gift with purchase vs flat discount | Peluva | +18% RPV
5 | T1 | Subscribe & save on consumables | Trollco Clothing | +9% RPV, +1.2x reorder rate
6 | T1 | Discount removal on flagship | Marsh Wear | +14% margin, +29% checkout rate
7 | T1 | Spend-and-save threshold tiers | Codeword | +13% AOV
8 | T1 | Post-purchase one-click upsell | HashStash | +16% AOV, 14% acceptance rate
9 | T1 | Starter kit for new customers | Marsh Wear | +17% new visitor CVR
10 | T1 | Volume discount incentive in cart | Marsh Wear | +21% RPV
11 | T2 | Desktop sticky navbar | AFTCO | +5% RPV
12 | T2 | Homepage UGC carousel | Codeword | +5% CVR, -8% bounce
13 | T2 | Cross-sell pop-up at add-to-cart | Marsh Wear | +15% RPV, +7% AOV
14 | T2 | Free gift callout on PDP | Peluva | +14% RPV
15 | T2 | Homepage reskin with category cards | Overland Addict | +45% CVR
16 | T2 | Product card differentiation | Gum of Gods | +9% CVR
17 | T2 | Single column collection layout | AnyAge Wear | +3% ATC rate
18 | T2 | Mobile navigation redesign | Q30 | +14% CVR, +17% RPV
19 | T2 | Popup redesign & delay | BetterGuards | +4% CVR, +7% ATC
20 | T2 | Cart vs quiz checkout flow | Marsh Wear | +33% RPV
21 | T2 | Sale countdown timer | BetterGuards | +6% CVR, +4% RPV

Tier 1: The experiments that change your economics

These experiments change what the customer pays, receives or how the offer is structured. They're often harder to implement and require quite a lot of testing, but they consistently produce the largest, fastest results.

1. Price Increase on Hero SKUs

Result: +42.5% CVR, +33.4% RPV
Duration: 20 days, 53,200 visitors
Client: One Quiet Mind

Tested a 15% price increase on three flagship weighted pillow SKUs. Conversion rate went up, not down. The original price was anchoring the product as "cheap," and the target audience associated higher price with higher quality.

Control: Original pricing on the flagship Weighted Pillow.
Variant: 15% price increase. Conversion went up.

For TooTimid: Your premium vibrators and toys could be underpriced relative to what your customers expect to pay for quality. A price test on your top 3-5 SKUs would tell us immediately whether you're leaving margin on the table.

2. Free Shipping Threshold Optimization

Result: +12% AOV, +4% net revenue
Duration: 28 days, 38,400 visitors
Client: AFTCO

Tested raising the free shipping threshold from $79 to $99. Pushed customers to add one more item to qualify. Average overshoot was 25-30% above the new threshold.

Control: Free shipping on orders $79+.
Variant: Threshold raised to $99. Customers added more to qualify.

For TooTimid: Your current free shipping threshold is $59. Testing a higher threshold ($79 or $99) could meaningfully lift AOV. Your catalogue is deep enough that customers can easily add complementary items to reach a higher bar.
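A threshold change like this comes down to simple expected-value arithmetic. The sketch below is a hypothetical back-of-envelope model; the base AOV, qualify rate and overshoot figures are illustrative assumptions, not TooTimid's actual numbers.

```python
def expected_aov(threshold, base_aov, qualify_rate, overshoot):
    """Expected AOV when some customers stretch their cart to hit free shipping.

    qualify_rate: fraction of orders pushed up to the threshold (assumed)
    overshoot:    how far stretched orders land above it (0.275 ~= the 25-30%
                  overshoot observed in the AFTCO test)
    """
    stretched = threshold * (1 + overshoot)
    return qualify_rate * stretched + (1 - qualify_rate) * base_aov

# Illustrative comparison: $59 threshold today vs a tested $79 threshold.
baseline = expected_aov(threshold=59, base_aov=70, qualify_rate=0.30, overshoot=0.275)
variant = expected_aov(threshold=79, base_aov=70, qualify_rate=0.30, overshoot=0.275)
```

Under these made-up inputs the model predicts an AOV lift of several dollars per order; the real test tells us whether the qualify rate holds at the higher bar.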

3. Starter Bundle Introduction

Result: +16% AOV
Duration: 35 days, 31,700 visitors
Client: AnyAge Wear

Introduced a bundled kit on the PDP pairing two bestsellers at a combined discount. Pricing is handled dynamically, with the discount on the bundled contents increasing to around 15%. Positioned as the default recommended option.

Control: Standard PDP with a single product.
Variant: "Complete Kit" bundle as the recommended purchase.

For TooTimid: Couples kits, first-timer starter kits or "date night" bundles would map directly to your two largest customer segments (couples at 35% and first-time explorers at 25%). Bundles reduce decision paralysis and increase AOV in a single move.

4. Gift With Purchase vs Flat Discount

Result: +18% RPV
Duration: 30 days, 57,400 visitors
Client: Peluva

Replaced a sitewide 15% discount code with a free branded accessory (retail value $25) on orders over $75. The gift with purchase outperformed the discount on conversion, AOV and margin.

Control: 15% off sitewide with code.
Variant: Free branded accessory on orders $75+.

For TooTimid: You already include a free gift with every order, but you're also running a permanent 50% sitewide discount code. Testing whether the free gift alone drives comparable results could recover significant margin.

5. Subscribe & Save on Consumables

Result: +9% RPV, 1.2x reorder rate
Duration: 42 days, 34,800 visitors
Client: Trollco Clothing

Added a subscribe & save option on the PDP for consumable products. 10% discount on recurring orders with a toggle between one-time and subscription. Subscription set as the default selection.

Control: One-time purchase only.
Variant: Subscribe & save toggle with 10% recurring discount.

For TooTimid: Lubricants, toy cleaners and other consumables are natural candidates for subscription. These products run out and need replenishing. A subscribe & save model generates predictable recurring revenue at zero acquisition cost.

6. Discount Removal on Flagship

Result: +14% gross margin, +29% checkout rate
Duration: 21 days, 46,500 visitors
Client: Marsh Wear

Removed the permanent discount code from the hero product and tested it at full price with stronger value messaging. Checkout completions actually increased because removing the discount code field eliminated the "let me go find a code" abandonment loop.

Control: Permanent sale pricing with discount code.
Variant: Full price with value-led messaging. Margin recovered, checkouts went up.

For TooTimid: You're running a permanent "SEXY50" code for 50% off sitewide. Testing what happens when the discount goes away, replaced with value messaging and the free gift offer, could be one of the single highest-impact changes on your store.

7. Spend-and-Save Threshold Tiers

Result: +13% AOV
Duration: 45 days, 32,300 visitors
Client: Codeword

Replaced a flat 10% discount with tiered spend-and-save thresholds: spend $100 save 10%, spend $150 save 15%, spend $200 save 20%. Most customers aimed for the middle tier, overshooting their original cart value by 25-40%.

Control: Flat 10% discount on all orders.
Variant: Three tiers with escalating rewards and cart progress bar.

For TooTimid: Tiered spend-and-save could replace the blanket 50% code. It gives customers a reason to add more items while maintaining healthier margins at every tier.
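The tier logic itself is straightforward. Here's a minimal sketch using the Codeword thresholds as example values (they are not a recommendation for TooTimid); the gap figure is what drives the "Add $X to save Y%" cart progress bar.

```python
# Ascending (minimum spend, discount rate) tiers from the Codeword test.
TIERS = [(100, 0.10), (150, 0.15), (200, 0.20)]

def tier_discount(subtotal):
    """Return (discount_rate, gap_to_next_tier) for a cart subtotal.

    gap_to_next_tier feeds the progress bar ("Add $30 to save 15%");
    it is None once the top tier is reached.
    """
    rate, next_gap = 0.0, None
    for min_spend, tier_rate in TIERS:
        if subtotal >= min_spend:
            rate = tier_rate
        elif next_gap is None:
            next_gap = min_spend - subtotal
    return rate, next_gap

# A $120 cart earns 10% off and is $30 away from the 15% tier.
```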

8. Post-Purchase One-Click Upsell

Result: +16% AOV, 14% acceptance rate
Duration: 30 days, 55,100 visitors
Client: HashStash

Added a one-click upsell page between checkout completion and the thank-you page. Offered complementary products with a "Buy 1 Get 1 40% Off" incentive, purchasable with a single tap. No re-entering payment details. 14% of customers took the offer.

Control: Standard post-purchase page with no recommendations.
Variant: Post-purchase upsell with BOGO 40% off offer. 14% acceptance.

For TooTimid: Post-purchase upsells are especially powerful in your category because the customer has already committed. They've overcome the privacy anxiety and entered payment details. Adding a complementary item at that point is frictionless. We're not sure if you're currently running post-purchase upsells, but this is something we'd like to experiment with, trying different combinations of products and offers.

9. Starter Kit for New Customers

Result: +17% new visitor CVR
Duration: 28 days, 38,200 visitors
Client: Marsh Wear

Created a $49 "First Timer Kit" with curated entry-level products bundled at a slight discount. Targeted at new visitors from paid ads. Reduced decision paralysis for first-time buyers who didn't know where to start.

Control: New visitors land on the standard homepage with full product grid.
Variant: Curated "First Timer Kit" landing page for new visitors.

For TooTimid: 25% of your customers are first-time explorers. A "New to This? Start Here" kit, curated by your team and priced under $50 with the free gift included, would give first-timers a safe entry point. Starter kit buyers have 3.1x higher 12-month LTV across our client base.

10. Volume Discount Incentive in Cart

Result: +21% RPV
Duration: 21 days, 49,400 visitors
Client: Marsh Wear

Added a "Buy 2, Get 15% Off" incentive badge directly on the product card in the cart, paired with a cross-sell carousel at the bottom. Encouraged customers to add a second item from the same category.

Control: Standard cart without volume incentive.
Variant: "Buy 2, Get 15% Off" badge + cross-sell carousel.

For TooTimid: Volume incentives work well with accessories and consumables where cost of goods is low. Testing a structured offer vs your current 50% flat discount would tell us whether structured offers drive better unit economics.

Tier 2: The experiments that change how customers buy

Tier 2 experiments change the structure of the buying experience. How customers discover products, navigate the catalogue and move through the funnel. They make the existing value proposition easier to find and act on.

11. Desktop Sticky Navbar

Result: +5% RPV
Duration: 11 days, 39,935 sessions
Client: AFTCO

Made the desktop navigation bar sticky so it stays visible while scrolling.

Control: Nav disappeared on scroll.
Variant: Sticky nav stays pinned. +5% RPV.

For TooTimid: Your site has a large catalogue across many categories. Persistent navigation helps visitors browse without losing their place.

12. Homepage UGC Carousel

Result: +5% CVR, -8% bounce rate
Duration: 23 days, 52,800 sessions
Client: Codeword

Added a "Your Story, Our Hats" user-generated content section. Real customers wearing the product.

Control: Brand photography only.
Variant: UGC section added. Bounce rate dropped 8%.

For TooTimid: UGC is tricky in your category for privacy reasons, but curated lifestyle content or anonymous review highlights could serve the same trust-building function.

13. Cross-Sell Pop-Up at Add-to-Cart

Result: +15% RPV, +7% AOV
Duration: 62 days, 35,900 sessions
Client: Marsh Wear

Added a "Pairs well with" pop-up showing complementary products when a customer adds to cart.

Control: Standard cart drawer, no cross-sell.
Variant: "Pairs well with" pop-up. +7% AOV.

For TooTimid: Complementary items (lube, cleaner, batteries, accessories) are natural add-ons at the point of commitment.

14. Free Gift Callout on PDP

Result: +14% RPV
Client: Peluva

Added a "Get free socks!" callout with product image directly above the Add to Cart button.

Control: No mention of free gift on PDP.
Variant: Free gift callout above the Add to Cart button. +14% RPV.

For TooTimid: Your free gift with every order is buried. Surfacing it on the PDP with the retail value visible would give first-time buyers an extra nudge.

15. Homepage Reskin

Result: +45% CVR (0.3% to 0.44%)
Duration: 30 days, 47,300 sessions, 97% confidence
Client: Overland Addict

Replaced a product-heavy homepage with a lifestyle hero and "Shop by Category" grid.

Control: Product-heavy, no clear path for new visitors.
Variant: Lifestyle hero + category cards. CVR up 45%.

For TooTimid: Your homepage has the highest bounce rate on the site. Guided entry with clear category paths would reduce choice paralysis.
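For the curious, a confidence figure like the 97% quoted above typically comes from a two-proportion z-test on the observed conversion rates. The sketch below assumes a 50/50 traffic split of the 47,300 sessions and derives the conversion counts from the quoted CVRs; real platforms use their own exact counts and corrections, so this won't reproduce the 97% figure precisely.

```python
from math import erf, sqrt

def lift_confidence(n_a, conv_a, n_b, conv_b):
    """One-sided confidence that the variant beats control (pooled z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z

# Assumed 50/50 split and conversion counts reconstructed from the quoted CVRs.
conf = lift_confidence(23_650, round(23_650 * 0.0030),
                       23_650, round(23_650 * 0.0044))
```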

16. Product Card Differentiation

Result: +9% CVR
Duration: 29 days, 41,600 sessions
Client: Gum of Gods

Added feature callouts and benefit bullet points to collection page product cards.

Control: Identical-looking product cards.
Variant: Differentiated with features and benefits. +9% CVR.

For TooTimid: Your collection pages show multiple products with confusing prices. Cleaner product cards with clear differentiation would reduce friction.

17. Single Column Collection Layout

Result: +3% ATC rate
Duration: 30 days, 54,000 sessions
Client: AnyAge Wear

Switched mobile collection from two-column grid to single-column with full-width lifestyle photos.

Control: Two-column grid, small images.
Variant: Single column, full-width photos. +3% ATC.

For TooTimid: In your category, product images need to do heavy lifting. More visual real estate on mobile would improve browse-to-click rates.

18. Mobile Navigation Redesign

Result: +14% CVR, +17% RPV
Duration: 22 days, 37,600 sessions
Client: Q30

Redesigned mobile navigation to highlight three main products at the top with images and descriptions.

Control: Plain text menu.
Variant: Product cards with images at top. +14% CVR.

For TooTimid: Your mobile navigation needs to guide visitors through an unfamiliar catalogue. Visual category cards at the top would reduce guesswork.

19. Popup Redesign & Delay

Result: +4% CVR, +7% ATC rate
Duration: 30 days, 43,300 sessions
Client: BetterGuards

Redesigned the promotional popup from a generic split-screen layout to a mobile-optimized, product-focused design. Combined with a 60-second delay.

Control: Desktop-optimized popup, appeared immediately.
Variant: Mobile-first design with 60-second delay. +4% CVR.

For TooTimid: If you're running popups that fire on page load, delaying them and redesigning for mobile could reduce the "close and leave" reflex for first-time visitors.

20. Cart vs Quiz Checkout Flow

Result: +33% RPV
Duration: 28 days, 58,200 sessions, 92% confidence
Client: Marsh Wear

Replaced the standard browse-and-add-to-cart flow with a guided quiz that recommends products based on customer answers.

Control: Standard browse-and-add-to-cart flow.
Variant: Guided quiz with personalized recommendations. +33% RPV.

For TooTimid: A "What's right for me?" quiz could be one of the highest-impact changes for your store. 25% of your customers are first-time buyers facing decision paralysis.

21. Sale Countdown Timer

Result: +6% CVR, +4% RPV
Duration: 14 days, 36,200 sessions
Client: BetterGuards

Added a sticky countdown timer bar to the top of the site during a clearance sale. Urgency tied to a real event, not a fake evergreen countdown.

Control: Standard announcement bar, no urgency.
Variant: Sticky countdown timer tied to a real clearance event.

For TooTimid: Countdown timers work best tied to real events. Tying a timer to genuine limited-time offers creates urgency without cheapening the brand.

Your Tier 1 Backlog

We've already spent time digging into your customers and your store. We built a psychographic customer profile. We believe we understand pretty clearly where the Tier 1 opportunities are.

Tier 1 experiments we'd run

1. Discount structure test. Your permanent "SEXY50" code for 50% off is the single biggest lever we'd want to test against. We'd run a controlled experiment: current 50% discount vs free gift only (no discount code) vs tiered spend-and-save thresholds vs a reduced flat discount. The goal is to find out whether the discount is actually driving conversions, or whether you're giving away margin on customers who would have bought anyway. We'd also layer in tests of different discount rates to see whether they have different impacts on profitability.

2. Price point testing on hero SKUs. We'd test price increases on your top 5-10 products. We find that around 60% of the time there's a more profitable price point that's lower, because it increases the conversion rate so much that it brings in more sales which drives profitability. But the other 40% of the time, raising the price actually brings more profit even though you lose some sales. We won't know until we test.

3. Free shipping threshold optimization. Test your current $59 threshold against higher values ($79, $99) paired with a progress bar in the cart. Your catalogue is deep enough that customers can easily add complementary items to hit a higher bar. Lube, cleaner, lingerie, accessories.

4. Bundle introduction. Couples kits, first-timer starter kits, category bundles, gift sets. Positioned as the recommended purchase, not a sidebar widget. Bundles reduce decision paralysis for first-time buyers while lifting AOV.

5. Post-purchase one-click upsell. We're not sure if you're currently running post-purchase upsells, but we'd like to test this. A single-tap upsell page between checkout and order confirmation, where the customer has already overcome the privacy anxiety and entered payment details. We'd try different product and offer combinations to find the highest-converting post-purchase flow.

6. Gift with purchase value reframe. Test making the free gift's retail value visible on every product page and in the cart. "You're getting a FREE [product] worth $45!" This reframes the purchase as a better deal without discounting the primary product.

7. Subscription and LTV opportunities. We'd look for ways to build lifetime value through a subscribe & save model on consumable products (lube, toy cleaner), or a dripped-out package offer. This might be tricky given how particular customers are about their product choices in this category, but we'd actively look for angles regardless. Even a modest subscription uptake on consumables would generate predictable recurring revenue at zero incremental acquisition cost.
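The price-point logic in item 2 above reduces to a profit-per-visitor comparison. This is an illustrative sketch with made-up prices, costs and conversion rates, not TooTimid data; the point is that either direction can win depending on how conversion responds.

```python
def profit_per_1k_visitors(price, unit_cost, cvr):
    """Contribution profit from 1,000 visitors at a given price and CVR."""
    return 1000 * cvr * (price - unit_cost)

# Hypothetical scenario: a $49 product with $15 unit cost at 2.0% CVR.
current = profit_per_1k_visitors(price=49.0, unit_cost=15.0, cvr=0.020)
# If a $10 increase only costs 0.3 points of CVR, the raise wins...
raised = profit_per_1k_visitors(price=59.0, unit_cost=15.0, cvr=0.017)
# ...but a $5 cut that lifts CVR to 2.6% wins too. Only the test decides.
lowered = profit_per_1k_visitors(price=44.0, unit_cost=15.0, cvr=0.026)
```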

The timeline

Month 1. Deep diagnostic (we need access to Shopify, GA4, Klaviyo). Validate assumptions. Ship the first Tier 1 experiments: discount structure, price point and shipping threshold tests, plus a quick add-to-cart button contrast change. Fast to set up, most likely to produce large measurable results early.

Month 2. Launch bundles, post-purchase upsells, the free gift reframe and replacement guarantee badge. Start blending in the first Tier 2 experiments (cart simplification, homepage guided entry, add-to-cart visibility, product card cleanup).

Month 3. First results from Month 1 experiments are in. Double down on what's working, cut what's flat. The compound effect starts here.

Month 4+. Expand based on what the first three months taught us. The full Tier 2 backlog is ready. Tier 3 changes (copy, imagery, layout, UX polish) come after T1 and T2 are optimized.

Case Study: Q30

+$504K Revenue and 67% Higher Conversion on 27% Less Traffic

The Headline Numbers

Metric | 2024 | 2025 | Change
Net Revenue | $2.58M | $3.09M | +$504K (+20%)
Conversion Rate | 0.92% | 1.53% | +67%
Add to Cart | 20,399 | 29,573 | +45%
Sessions | 1,223,544 | 899,092 | -27%
Returns | 2,365 | 1,808 | -24%

Revenue grew while traffic dropped 27%. Better traffic quality plus a better on-site experience did the heavy lifting.

Q30 Shopify analytics. Total sales up 36% and conversion rate up 40% year-on-year, on 11% fewer sessions.

The Brand

Q30 makes the Q-Collar. A $199 FDA-cleared neck device that reduces brain movement during head impacts. Selling a science-backed $199 product to anxious parents who've never heard of the category.

Why This Matters for TooTimid

Different product, same buyer psychology. Q30 and TooTimid share the traits that matter most for CRO: high-anxiety buyers making a considered purchase in an unfamiliar category, where trust and education are the difference between a bounce and a sale.

Dimension | Q30 | TooTimid
#1 Driver | Security (94/100) | Security (90/100)
Core objection | "Does this actually work?" | "Is this site safe and discreet?"
Buyer type | System 2 (research-heavy) | System 2 (research-heavy, high neuroticism)
Key friction | Product education gap | Privacy anxiety + choice paralysis

The same dynamics apply to TooTimid. Q30's results came from understanding who the real buyer is, recognising they're deliberate researchers and learning that simplification actually hurt when the audience needed more information. Your customers need reassurance and guidance, not a stripped-back experience.

Four findings that shaped the program

  1. The real buyer is a parent, not an athlete. 60% of purchases were by parents and grandparents. The entire website was positioned for athletes and pros.
  2. These are System 2 buyers. Deliberate, sceptical, information-hungry researchers who won't buy until they've read enough proof to reconcile their doubt.
  3. Simplification hurts this audience. We tested a simplified PDP layout. CVR dropped 9%, revenue dropped 11%. The audience wanted more information, not less.
  4. Trust signals need to be visible, not buried. Parent testimonials, clinical data and "how it works" content all performed better when placed higher on the page where they couldn't be missed.

What the Client Said

Charlie Kunze

"Tim and the Clean Commit team have been my secret weapon. I didn't have time to keep looking for ways to improve our store, and they've found optimizations I wouldn't have thought of. They're super responsive and require very little oversight."

Charlie Kunze, Director of Marketing, Q30 Innovations

Case Study: Marsh Wear

$590K Revenue Impact and 30% CVR Lift in 12 Months

The Headline Numbers

Metric | Before | After | Change
Conversion rate | 1.83% | 2.38% | +30.3%
Average order value | $99 | $114 | +14.8%
Monthly revenue | $308K | $741K | +140.7%

Conservative annualised revenue impact: $590,458 (projected at 0.75% of measured test outcomes, 18 implemented winners, 37 tests over 12 months).

Marsh Wear Shopify analytics. Year-on-year growth across our engagement (sessions +19%, total sales +21%, orders +14%, conversion rate +12%). The orders chart shows the compounding effect of 37 experiments.

The Brand

Premium outdoor apparel. Fishing, hunting, camping, boating lifestyle clothing. Around $5M/year on Shopify, 75%+ mobile traffic, conversion rate stuck below 2%. Owned by AFTCO, a brand we'd already been running a full CRO program on.

What we found

The marketing team was constantly updating the site, but every change was a guess. Layers of technical debt, no measurement, 75% of traffic on a mobile experience built as a desktop afterthought.

After 40 hours of diagnosis, the biggest wins came from making products look better and feel more desirable, not from reducing friction. That surprised us. Marsh Wear's customers are driven by brand belonging and product desire. They want UGC, real photography, the feeling of "I want to wear that." Urgency tactics cheapened the brand and hurt performance.

Top Winners

Test | RPV Lift | Annual CII
Enhanced Search Results | +14.7% | $296K
Mini Cart Redesign | +9.9% | $36K
Discount Price Styling | +10.0% | $32K
Product Card Redesign | +9.3% | $30K
Mobile Menu Redesign | +10.7% | $22K
Hand-Picked Cross-Sells | +15.0% (AOV +7%) | $13K

The one that stood out

Most cross-sell implementations use algorithmic "frequently bought together" recommendations. We manually selected every product pairing. Fishing shirt with a specific hat. Jacket with matching gloves. Cheap, complementary, curated by humans who understood the products.

Result: +15% RPV, +7% AOV. Highest per-visitor revenue lift in the program. Human curation plus good timing beat the algorithm.

What the Client Said

Casey Sandoval

"Kamila, Tim and WK from the Clean Commit team are awesome. They run a tight ship and their program has been one of the main factors behind our growth this year."

Casey Sandoval, eCommerce Director, Marsh Wear

Case Study: Codeword

$915K Revenue Impact. $2M to $3.87M in One Year.

The Headline Numbers

Metric | Before | After | Change
Conversion rate | 2.28% | 2.69% | +18.2%
Average order value | $113 | $146 | +28.6%
Monthly revenue | $212K | $287K | +35.5%

Conservative annualised revenue impact: $915,128 (projected at 0.75% of measured test outcomes, 11 implemented winners, 35 tests over 12 months). Year-over-year gross revenue: $2.05M to $3.87M. +88.6%.

Codeword Shopify analytics. Total sales up 49% year-on-year with conversion rate up 17%, on just 5% more sessions.

The Brand

Custom hat company. Order a single embroidered hat with no bulk minimum. Customers type in text, choose a style, pick placement. Around 85 to 90% of hats get customized, so the customizer is the product experience.

The Bottleneck

Conversion stuck at around 2% with no clear path forward. The off-the-shelf customizer plugin couldn't be A/B tested, had limited styling options, looked visually cheap and was completely locked down. For a store where 85%+ of customers have to use it to buy anything, that wasn't a minor UX issue. It was a revenue ceiling.

Top Winners

Test | RPV Lift | CVR Lift | Annual CII
Customizer Rebuild | +32.6% | +6.8% | $375K
Condensed Product Gallery | +62.9% | +23.9% | $164K
Review-Based FAQs | +33.2% | +8.4% | $81K
Input-First Mobile Customizer | +12.0% | +2.8% | $57K
Enhanced Mobile Customizer | +21.7% | +3.0% | $54K

The smallest change, biggest result

The customizer preview was blank by default. Customers stared at an empty hat mockup, trying to imagine what their text would look like.

We added one thing. Placeholder text in the preview. "YOUR TEXT HERE" shown on the hat by default.

Result: +15.1% CVR, +9.4% RPV. One line of placeholder copy, 15% conversion lift.

The Customizer Rebuild

The biggest win wasn't a traditional A/B test. It was rebuilding the customizer plugin from scratch and then testing the new one against the old one.

New customizer: better styling, cleaner UI, mobile-first, real-time preview with zero lag, every element testable going forward. It also integrated with Nate's embroidery machines, automating a workflow that was previously manual.

+32.6% RPV, +6.8% CVR, +24.3% AOV. $375K annual impact from a single experiment.

What the Client Said

Nate Montgomery

"Our conversion rate is already up 10-15% just in a month or two of working with them. If you're on the fence, just do it. You will not regret it. They're a great team, they really work to understand you and your particular business."

Nate Montgomery, Founder, Codeword (video testimonial)

Why These Results Apply to TooTimid

A fair question after reading those case studies is whether we're just showing brands that were already growing. The honest answer is no.

When we started working with Q30, Marsh Wear and Codeword, every one of them was investing in traffic and pushing harder on growth. Whether sales would follow at the same rate was an open question. That's the exact stage where CRO does its best work, and it's where TooTimid is today.

You're spending around $200K a month on ads. You have 400,000 visitors coming through your store every month. The traffic engine is built and running. The question is how much of that traffic turns into revenue, and what's quietly leaking out of the funnel before it gets to checkout.

CRO needs traffic to operate on. You already have that. Our job is closing the gap between the traffic you're paying for and the revenue you're capturing from it, so that as your traffic grows, sales grow at the same rate or faster.

Your store also has a built-in psychological friction problem that makes it a strong fit for this kind of work. Your customers are buying something personal, potentially embarrassing. They need to trust the site before they'll commit. That kind of friction is exactly what our testing framework is built to identify and reduce. Every experiment we run will be grounded in how your specific customers think, feel and decide.

What Our Clients Say

"CR has gone up roughly 800% since we started working on the store… which is pretty neat."

Rachael Nelson, eCommerce Manager, Peluva

"Conversion rate went up almost 300%."

Sarah Smyth, Australian Black Worms

"Fantastic, communicative, and made constant progress."

Tim Ruswick, GameDev.tv

Our Process

We run a fairly standard CRO process: diagnosing potential problems with analytics, reviewing heat maps, watching session recordings, surveying your customers and the rest of the diagnostic work conversion rate optimization agencies typically do.

Here's what we do differently:

1. Regular accuracy checks

You can't trust A/B testing platforms to be accurate, which should carry some weight coming from a CRO agency. We never take the statistics at face value. Instead, we run multiple AA tests. In an AA test, the control and the variant are kept exactly the same; we let it run for up to three weeks, then measure the "impact". You'll often see a 2 to 4% difference between two identical pages. That difference is your minimum measurable effect. Any test result smaller than it has to be ignored, because it's just variance, the noise that comes from running A/B testing tools without the billions of page views the enterprise platforms have.
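For anyone who wants to see the mechanics, here's a minimal sketch of how an AA test sets that noise floor. Every number in it (conversion rate, traffic per arm, quantile) is an illustrative assumption, not TooTimid's real figures.

```python
import math
import random

def aa_noise_threshold(cr=0.011, visitors_per_arm=150_000, runs=2_000,
                       pct=0.8, seed=1):
    """Simulate repeated AA tests: both arms share the same true conversion
    rate, so any observed 'lift' is pure noise. Returns the pct-quantile of
    the absolute relative lift, a practical floor below which test results
    should be ignored. Uses a normal approximation to the binomial; all
    inputs are illustrative assumptions."""
    rng = random.Random(seed)
    mean = visitors_per_arm * cr
    sd = math.sqrt(visitors_per_arm * cr * (1 - cr))
    lifts = []
    for _ in range(runs):
        conv_a = rng.gauss(mean, sd)  # conversions in the control arm
        conv_b = rng.gauss(mean, sd)  # conversions in the identical variant arm
        lifts.append(abs(conv_b - conv_a) / conv_a)
    lifts.sort()
    return lifts[int(pct * runs)]

print(f"noise floor ≈ {aa_noise_threshold():.1%}")
```

With roughly 150,000 visitors per arm over a three-week window, the floor lands in the low single digits, in line with the 2 to 4% figure above; smaller samples push it higher.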

2. Deep psychological research

We're not just looking for friction in how customers interact with your website. We're looking to understand what drives them, what they're afraid of, why they're here, what they're complaining about. These drivers, which we map against our buying psychology framework, underpin every decision customers make. Before we change anything on the website, we ask whether that change will move customers closer to the behaviors that are driving them.

3. Focus on things that impact profitability

CRO agencies have a reputation for just changing button colors. We don't do that, because we've all owned or run e-commerce stores ourselves. We understand that moving real metrics means changing real levers: prices, product offers, bundles and the other mechanisms we've mentioned throughout this document. We do the hard stuff, and we keep finding creative new combinations that bring about profitability.

4. Velocity

Our goal is to run as many meaningful experiments as possible. Shopify puts some limits on how fast we can do this, but we work around them. We aim for at least two experiments per week, with a total goal of 100 experiments for the year. At around a 30% win rate, that means roughly thirty changes a year that meaningfully move your profitability. That's how we back up the claims in our case studies and the experiments you saw earlier in this document. We're not sitting on our hands. We're constantly looking for rapid ways to improve your profitability and explore new avenues to make money.

Price

Overview

We work on a performance based model. Our fee is tied directly to the profit our experiments generate. Beyond a modest monthly floor from month two, you pay only when experiments produce statistically significant results, and the fee is a fair share of the value we created.

Month one: performance only

Month one, there is no retainer. You pay only for results. At the end of the month, we calculate the incremental profit generated by experiments that reached statistical significance. The performance fee equals that incremental profit amount. If nothing produces a positive result, no fee is charged.

Month two onward: $3,000 minimum + performance

From month two, a monthly minimum of $3,000 applies. This is a floor, not a cap.

If the performance fee for the month is less than $3,000, you pay $3,000. If the performance fee exceeds $3,000, you pay the performance fee. You pay the higher of the two, not both.

The minimum means we both have skin in the game. You get a dedicated CRO team working on your store every month. We get a baseline that covers part of the effort, even in months where experiments don't produce measurable wins. In those months, you still get the customer learnings and strategic direction, which compounds over time.

A worked example

Example from one of our live experiments. Variant vs control revenue pulled directly from Intelligems.

In the example above, the variant generated $14,893 and the control generated $13,493, a difference of $1,400. That's what we'd charge as the performance fee: one month of the measured uplift.

Every month after that, the $1,400 in extra revenue is yours. The experiment keeps running, the lift keeps compounding, and we earn nothing on it past that first charge.
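The whole fee calculation reduces to a few lines of arithmetic. A sketch only: the function and parameter names are ours, with the floor and cap taken from this pricing section and month one following the performance-only rule.

```python
def performance_fee(variant_revenue, control_revenue,
                    monthly_minimum=3_000, cap=10_000, first_month=False):
    """Fee = one month of measured uplift, floored at the monthly minimum
    from month two onward, and capped at $10,000. Illustrative sketch of
    the pricing rules described in this section."""
    uplift = max(variant_revenue - control_revenue, 0)
    fee = uplift if first_month else max(uplift, monthly_minimum)
    return min(fee, cap)

print(performance_fee(14_893, 13_493, first_month=True))  # the worked example: prints 1400
```

From month two, the same uplift would bill at the $3,000 floor, and a $20,000 uplift would bill at the $10,000 cap.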

How performance is measured

Profit impact is measured through the agreed testing platform (Intelligems). Every experiment is an A/B test: a portion of your traffic sees the original (control) and a portion sees our change (variant). Because both groups are drawn from the same pool of visitors at the same time, external factors affect both groups equally. The measured difference isolates the impact of our work.

An experiment qualifies for billing when it reaches at least 90% probability to beat baseline on the primary revenue metric, with directional support from at least one higher-powered funnel metric (add-to-cart rate or checkout commencement rate).

If an experiment doesn't reach 90%, we declare it flat. From there it's a mutual call whether to roll the change out anyway or bin it. If you choose to ship it before it reaches significance, it qualifies for billing. If you're shipping it, you're agreeing it has value.
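As a rough illustration of what "90% probability to beat baseline" means, here's one common way such a number is computed, via a simple Bayesian comparison of the two arms. This is our sketch, not Intelligems' actual implementation, which may differ.

```python
import random

def prob_to_beat_baseline(conv_c, n_c, conv_v, n_v, draws=20_000, seed=7):
    """Monte-Carlo estimate of P(variant CR > control CR) under uniform
    Beta(1, 1) priors, one common way testing platforms compute
    'probability to beat baseline'. Illustrative sketch only."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        # Sample a plausible true conversion rate for each arm
        p_c = rng.betavariate(1 + conv_c, 1 + n_c - conv_c)
        p_v = rng.betavariate(1 + conv_v, 1 + n_v - conv_v)
        wins += p_v > p_c
    return wins / draws
```

Call it with each arm's conversions and visitor counts; under the rule above, a result below 0.90 gets declared flat.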

Commitment

There's no hard lock-in. This agreement is month-to-month. Either party may terminate by providing 21 days' written notice.

We strongly recommend sticking with us for at least three months before drawing conclusions. That's the minimum time for us to learn your customers well enough to stop running broad experiments and start running the sharp, customized ones that tend to produce the biggest wins.

If we go a few months without a real win, we're the first ones who'll tap you on the shoulder. We're not incentivized by the minimum. The minimum is close to a wash for us. We're incentivized by the big wins. If those aren't happening, the shared incentive isn't there and we'll tell you straight.

Next Steps

You've seen the case studies, the process, the pricing and the specific plan we've already started building for your site. The next step is a call to go through it together.

Book a 30-Minute Call

We'll walk through your current metrics, pressure-test the plan against anything we haven't seen yet and confirm the opportunity size.

Tim Davidson
[email protected]

What Happens Next

  1. 30-minute call. We review your metrics together, confirm the opportunity size and answer anything outstanding.
  2. Agreement and kickoff. Paperwork within 24 hours. Kickoff within a week.
  3. Week 1 to 2. Deep diagnostic. Full access to your Shopify, GA4, Klaviyo. We build the real baseline.
  4. Week 2 to 4. First experiments go live. Discreet guarantee strip, ATC button test, cart simplification, product card cleanup.
  5. Month 2. Results from the first batch inform the next wave. Retainer kicks in.

Total elapsed time from signed agreement to live tests: 14 days.

Capacity

We currently have room for two new engagements this quarter. If we're at capacity when you reach out, we'll tell you and offer a start date rather than overcommit.

If the timing isn't right

The most useful thing we can do is send you the customer insight report (Appendix A) as a standalone document. We built it from 80+ of your real customer voices: what makes them anxious, what motivates them, what almost stopped them from buying and what got them over the line. It's yours either way.

Who are Clean Commit?

Clean Commit has been around since 2018 and is considered one of Australia's leading conversion rate optimization agencies. Our team is spread globally across Europe, America and Australia. We help Shopify brands turning over between $2M and $50M in revenue who have hit a growth ceiling.

We're a small team made up of experts in their fields. Senior project managers who have worked on large enterprise software platforms and infrastructure rollouts. Senior developers with a decade of experience designing web systems, UI and UX. Analysts with tertiary backgrounds in psychology, analytics and statistical analysis. Because we're all experts in our respective fields, we look at websites through a different lens than other teams.

We do one thing: scientific testing, customer analysis and conversion rate optimization for Shopify. It's our specialty and we know it inside and out.

By the Numbers

Brands optimized | 106+
A/B tests run | 1,000+ with real traffic and statistical rigor
Revenue generated (last 12 months) | $1.5M in measured, attributable lift

The Team

A small, senior team. You work directly with us, not a layer of account managers.

Tim Davidson
Founder & Lead Strategist

Wojciech Kaluzny (WK)
Co-Founder & Lead Engineer

Kamila Kucharska
Project Manager

Patryk Michalski
Senior Web & UX Designer

Cormac Quaid
Shopify Engineer

Borisa Krstic
Shopify & React Engineer

Where we go deeper

Customer psychology, not just UX

Plenty of agencies do upfront research. Where we separate is how far we push past surface-level UX and into the psychology of why your customers buy.

Ever wondered why certain products fly off the shelves while others gather dust? There are real patterns in consumer psychology behind that. Patterns you can use to make a lot of money.

Over the last seven years we've built a framework called The 11 Pillars of Buying Psychology. It records what actually drives your customers to buy, and what quietly stops them. Every experiment we propose gets pressure-tested against those pillars before it goes live.

We focus on buying decisions, not just page components.

Volume, and the math behind it

We aim to run over 100 experiments a year for each of our active clients. We operate at roughly a 30% win rate, which means about 30 wins every year compounding into your baseline.

A single test is a coin flip. Run 100 of them through a disciplined framework and the math tilts in your favor. From what we've seen, a lot of our competitors and internal teams only run 20 to 30 tests a year. We run two to three times that.
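That coin-flip framing is just the binomial distribution. A quick sketch, assuming independent experiments at the 30% win rate quoted above:

```python
from math import comb

def prob_at_least(k, n=100, p=0.30):
    """P(at least k winning experiments out of n, each winning with
    probability p). Treats experiments as independent, an assumption."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"P(≥10 wins | 25 tests)  = {prob_at_least(10, n=25):.0%}")
print(f"P(≥20 wins | 100 tests) = {prob_at_least(20, n=100):.0%}")
```

At 25 tests a year, even 10 winners is a long shot; at 100 tests, 20 or more winners is close to a sure thing.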

Customer Insight Report

TooTimid: Who's Buying and Why

Built from ~80 of your real customer voices across Trustpilot (7,565 reviews, 3.6/5 rating), Bizrate (8.3/10, 1,527 reviews), BBB and Knoji. This is the kind of report we produce in the first two weeks of every engagement.

Your Customer

The defining trait: Your customers are anxious buyers. They chose TooTimid specifically because they're too uncomfortable to walk into a physical store. The brand name itself is the value proposition. This is the safe, discreet, non-intimidating way to shop for something they find embarrassing.

Who they are:

Segment | Share of customers | Trigger
Couples looking to "spice things up" | ~35% | Relationship routine, desire for novelty, often one partner initiating
Solo self-care / first-time explorer | ~25% | Curiosity, self-discovery, TikTok or social media discovery
Repeat buyer restocking or upgrading | ~20% | Previous product broke or wore out, prompted by email or promotion
Gift buyer (for partner) | ~10% | Anniversary, Valentine's, birthday, spontaneous romantic gesture
Replacing a broken product | ~10% | Product stopped working, need replacement or upgrade

The first-time explorer segment is the most underserved. They need reassurance above all else, plus low-commitment entry points (free gift, starter kits, quizzes, educational content). They're the most likely to bounce without buying.

What Drives the Purchase

Seven psychological drivers, scored by frequency and strength in customer language:

Driver | Strength | Customer Language
Security | 90/100 | "Discreet shipping." "Discreet packaging." "What you put down from your bank account."
Comfort | 80/100 | "Very easy and fast, simple process." "Easiest most pleasant experience I've ever had."
Curiosity | 55/100 | "I got a free toy for it being my first time ordering!" "Liberating...self care point of view."
Belonging | 45/100 | "It's our private ToysRus store." "My wife and I really enjoy this site!"
Progress | 35/100 | "Liberating...self care point of view." "Enhance your personal satisfaction."
Autonomy | 25/100 | "Vast selection...just about everything I'd ever want."
Status | 10/100 | Almost entirely absent. This is not an aspirational purchase.

The takeaway: Security and Comfort together account for the overwhelming majority of positive review language. Every page on the site should answer two questions: "Am I safe here?" and "Is this going to be easy?"

What Stops Them Buying

Rank | Objection | What They're Thinking
1 | "Is this site legitimate?" | Scam Detector gives 70.4/100. First-time visitors from TikTok or social ads are especially skeptical. Need trust badges, years in business (since 2000), review count (7,500+) and secure checkout callouts above the fold.
2 | "What if someone sees the package or billing?" | The #1 anxiety. Currently addressed by the brand but may not be visible enough on product pages and at checkout. Needs prominent, specific guarantees: plain brown box, no company name on exterior, billing shows as generic name, no follow-up mail.
3 | "What if it's defective and I can't return it?" | No-return policy on adult products is a major friction point. Replacement policy exists but isn't well-understood. Needs clearer communication: "Defective? Free replacement, no questions asked."
4 | "Prices seem high" | Competitors run aggressive promotions. TooTimid's free gift partially offsets this, but the value may not be clear until after purchase. Need visible value framing.
5 | "I don't know which to choose" | First-time buyers face decision paralysis in an unfamiliar category. Educational content exists but may not surface at the right moment. Needs guided selling.

How They Decide

Trait | Level | Implication
Neuroticism | High | The defining trait. Anxious about being discovered, about package contents, about billing statements, about whether the site is safe. Every step needs abundant reassurance.
Conscientiousness | Moderate-High | Research before buying. Watch product videos. Read policies carefully. Give them everything on-site.
Extraversion | Low-Moderate | Private purchase behavior dominates. They would NOT walk into a physical store. Avoid social proof that feels exposing.
Agreeableness | Moderate-High | Warm and forgiving when things go right. Sharp and unforgiving when trust is broken.
Openness | Moderate | Curious enough to shop online for intimate products, but they chose the "safe" brand. Not early adopters.

Design implication: High neuroticism plus moderate-to-high conscientiousness means these customers need reassurance at every step. Don't get clever with checkout. Show security badges prominently. Explain exactly what will appear on their credit card statement. Show what the package looks like. Keep the experience simple and non-overwhelming. Avoid social proof tactics that feel exposing ("X people are viewing this"). These buyers don't want to feel watched.

Data Confidence: 7/10

Built from ~80 distinct customer voices across 6+ sources, with 7,565 Trustpilot reviews providing quantitative backing. Known gaps: no Reddit presence found, Yelp blocked, homepage couldn't be scraped (JS/Shoplift layer), no access to on-site product reviews yet. Confidence will increase once we have access to on-site reviews, post-purchase survey data and analytics.


Every experiment in this proposal traces back to something one of your own customers said. We don't test random changes. We test changes grounded in how your specific customers think, feel and decide.

Frequently Asked Questions

How do you prevent experiments from cannibalizing each other?

We use a naming and intent convention that categorizes each part of the UI and cross-references it with the motivations of the customer. Someone looking for information on a PDP is on a different journey to someone flirting with purchasing on the same page, so we treat those as separate spaces.

When we scope an experiment, we stick to one defined part of the site with one defined intent. We can go surprisingly granular, and at that level of resolution it takes at least 18 months to exhaust all the combinations on a single store. So cannibalization is something we sidestep structurally, not something we manage case by case.

How do you accurately measure the uplift from experiments?

Every test is a controlled A/B. A percentage of your traffic sees the original (control), the rest sees the variation.

We measure a range of metrics. Conversion rate, revenue per visitor, average order value, bounce rate and a handful of supporting signals, all pulled directly from the testing platform.

We also run an AA test on each store before we start. That tells us the natural variance of your pages. If we know your baseline conversion rate naturally swings by around 5%, we won't call a 5% lift a win. That gets declared flat. It's the only way to separate real movement from statistical noise.

We push for above 90% statistical confidence before calling a winner. For stores with large traffic we'll reach into the 95%+ range. For smaller stores 90% is our working floor.

What A/B testing platform do you use?

We default to Intelligems on most engagements.

Intelligems uses randomized participation, which means a single visitor can be part of three, four, five or more concurrent experiments without the results interfering. That matters because it lets us maintain a high testing velocity without the tests tripping over each other.

We've also used Shoplift extensively. It isolates audiences per experiment, which means the number of concurrent tests you can run is much lower and each one takes longer to resolve. We don't recommend it anymore for high-velocity programs.

We've used script-based tools as well (VWO, Optimizely, Convert, AB Tasty) but for Shopify stores today, Intelligems is the best tool on the market.

What happens if you don't see wins for a couple of months?

We come to you and tell you.

We're incentivized by the wins, not the retainer, so a quiet stretch hurts us too. If we go a few months without a real win we'll suggest whatever we can to course-correct. If it still isn't landing, we'll raise the idea of mutually ending the engagement. We're not precious about the contract. We want the big wins, and when the shared incentive isn't there we'll say so.

How many experiments do you run at the same time?

We aim for up to 10 concurrent experiments and around 100 experiments per year. Our average win rate sits between 20 and 30%, which means 20 to 30 winners a year compounding into your baseline.

Can we still make content changes and tweak the website while experiments are running?

Yes. You don't need to coordinate with us.

We run GitHub Actions behind the scenes that pick up your changes and apply them to the live experiment so everything stays in sync. We aim to be relatively invisible in the background. You run your marketing, merchandising and content updates as normal.

Where is your team based and who would we be working with?

Tim is based in Australia (AEDT). The rest of the team is distributed across Europe: WK, Kamila, Patryk, Borisa and Cormac.

Tim is the account lead and the escalation point for anything strategic or contractual. Kamila is who you'll talk to in Slack day to day. She sends running updates and manages delivery. The bi-monthly sync call where we walk through new experiments and results is typically with Kamila and WK (our co-founder and lead engineer).

Can we have access to your designers and developers?

Yes. We encourage every client to connect with us on Slack. When you need something from a designer, developer, analyst or strategist, you can reach them directly in the channel.

Do you do work outside of A/B testing?

Yes. Custom Shopify app development, headless builds, custom themes, international expansion, integrations and more.

That said, the point of this engagement is to improve your revenue per visitor. When a request comes in that's outside CRO scope, we tend to package it as a separately scoped piece of work so it doesn't interrupt the testing program.

What does the effort look like from your end?

Minimal.

What | Time
Shopify and analytics access at kickoff | 10 minutes, one off
Weekly Slack updates from us | 5 minutes to read
Review of experiments before launch | 15 to 20 minutes per week
Feedback on test designs (async) | 10 to 15 minutes per week
Bi-monthly sync call | 1 hour every 2 months

We handle the research, design, development, QA, launch, monitoring, analysis, reporting and implementation of winners.

What does an honest uplift look like after 3 months?

Three months is roughly one full testing cycle. You'd expect the diagnosis to have surfaced 10 to 20 high-impact opportunities, with 5 to 15 tested and 3 to 5 producing a measurable win.

In revenue terms, 3 months of testing on a store converting at 1.1% often lifts CVR into the 1.3 to 1.5% range, depending on traffic volume and the severity of the issues we find. The compounding doesn't really kick in until months 6 to 9, when the wins start stacking.
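The compounding is multiplicative: each win lifts the new baseline, not the original one. A toy illustration with made-up lift values:

```python
def compounded_cvr(baseline=0.011, lifts=(0.05, 0.08, 0.06)):
    """Each winning experiment multiplies the current baseline.
    The baseline and lift values here are illustrative assumptions."""
    cvr = baseline
    for lift in lifts:
        cvr *= 1 + lift
    return cvr

print(f"{compounded_cvr():.2%}")  # three modest wins on a 1.1% baseline: prints 1.32%
```

Three modest wins already move a 1.1% baseline to about 1.32%, the low end of the range above; a year of wins stacks much further.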

Can we talk to any of your clients?

Yes. Happy to put you on a call with Nate (Codeword), Charlie (Q30), Casey (Marsh Wear) or James (HashStash). Let us know which vertical matches your questions best and we'll arrange the intro.