Executive Summary: MyEListing Growth Experimentation Case Study
MyEListing is a free online commercial real estate marketplace that connects property owners, brokers, and investors. The company became the focus of a Growth Experimentation Case Study after achieving meaningful traffic growth while revenue metrics lagged behind. Growing traffic alone was not enough; the business needed a systematic approach to turn engagement into revenue and create confidence in which initiatives drove results.
Growth teams often fall into the same trap: running scattered A/B tests that create activity without delivering business impact. Random experimentation produces vanity metrics, incremental gains, and little clarity for leadership on what truly moves revenue. At MyEListing, I confronted exactly that challenge.
The company had strong traffic momentum, but revenue growth had stalled. We ran tests across ads, landing pages, and email campaigns, yet lacked a prioritization framework and clear linkage to outcomes like ARR, CAC, LTV, or retention. Leadership questioned which experiments mattered, and resources spread too thin across tactical wins.
The transformation required more than new frameworks. It demanded a shift in mindset. We restructured experimentation around revenue-first hypotheses, giving every test a clear business purpose. This discipline improved conversion rates, retention, and organizational confidence in the process.
I also built the capability to sustain experimentation long-term. That included training teams in hypothesis design, modeling psychological safety for bold bets, and giving emerging leaders full ownership of experiments. Experimentation became part of our operating culture, not a tactical side project.
The impact was measurable and lasting:
- +83% web traffic
- +47% conversion rate
- +22% retention
- 133% marketing ROI
- 15% lift in trial-to-paid conversion through segmented onboarding
By embedding systems, culture, and leadership into experimentation, we created durable competitive advantage. MyEListing shifted from ad hoc testing to a systematic growth discipline that compounds results over time.
The Strategic Challenge: From Traffic Growth to Revenue Stagnation
MyEListing had achieved solid market traction and built consistent traffic growth, yet the business faced a critical inflection point. Traffic numbers looked healthy on dashboards, but revenue metrics told a different story. Despite our team’s significant activity running tests across multiple channels and touchpoints, growth had plateaued.
I diagnosed four interconnected problems blocking our growth:
Core Problems Identified:
- No prioritization discipline – Backlog contained dozens of ideas with no framework separating high-value opportunities from noise; teams debated experiments based on intuition and politics rather than objective business impact criteria.
- Disconnected from revenue metrics – Experiments focused on intermediate metrics (click-through rates, email opens, page views) without establishing causality to ARR, CAC, LTV, or retention; teams generated insights without generating growth.
- Leadership blind spots – Executives lacked coherent visibility into experimentation performance; scattered updates on individual tests arrived regularly, but leaders couldn’t assess portfolio health, resource allocation efficiency, or strategic direction.
- Fragmented tests without portfolio strategy – Teams ran too many small tests simultaneously without considering portfolio balance or risk management; individual initiatives made sense in isolation, but collectively they lacked strategic coherence.
The fundamental insight was clear: we didn’t suffer from too few experiments. We suffered from the absence of a systematic approach to experimentation itself. What we needed was a comprehensive system that could operate predictably quarter after quarter, build institutional knowledge, and compound growth over time.
Strategic Approach and Implementation: The Growth Experimentation Playbook
Prioritization with ICE-R Scoring
To end scattered testing, I introduced ICE-R scoring as our mandatory prioritization framework. Every experiment idea had to score across four dimensions before entering the backlog:
ICE-R Components
- Impact – Tied directly to ARR, CAC, LTV, or retention; required teams to estimate the potential revenue effect.
- Confidence – Grounded in data, customer research, or past results, not gut instinct.
- Ease – Measured implementation speed and clarity of measurement.
- Reach – Defined the share of users or accounts affected, from narrow segments to full traffic.
I set a clear rule: no experiment launched without explicit linkage to a revenue driver.
This changed the culture. Conversations shifted from opinions to evidence. Political lobbying disappeared. Teams debated customer behavior assumptions worth validating instead of tactical preferences. The result was higher-quality experiments, faster learning cycles, and measurable revenue impact.
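To make the mechanics concrete, here is a minimal sketch of how an ICE-R backlog can be scored and ranked. The 1-10 scales, the equal weighting, and the field names are illustrative assumptions rather than the exact rubric we used; what it demonstrates is the rule itself: every idea carries an explicit revenue driver and a comparable composite score before it can enter the backlog.

```python
from dataclasses import dataclass

# Recognized revenue drivers; ideas without one never enter the ranked backlog.
REVENUE_DRIVERS = {"ARR", "CAC", "LTV", "Retention"}

@dataclass
class ExperimentIdea:
    name: str
    revenue_driver: str   # must map to ARR, CAC, LTV, or Retention
    impact: int           # 1-10: estimated effect on the revenue driver
    confidence: int       # 1-10: strength of supporting data or research
    ease: int             # 1-10: speed of implementation and measurement
    reach: int            # 1-10: share of users or accounts affected

    def score(self) -> float:
        # Simple average of the four dimensions; a team could weight
        # Impact more heavily if it wants revenue to dominate ranking.
        return (self.impact + self.confidence + self.ease + self.reach) / 4

def prioritize(backlog: list[ExperimentIdea]) -> list[ExperimentIdea]:
    # Enforce "no revenue linkage, no launch" before ranking.
    qualified = [i for i in backlog if i.revenue_driver in REVENUE_DRIVERS]
    return sorted(qualified, key=lambda i: i.score(), reverse=True)

backlog = [
    ExperimentIdea("Segmented onboarding emails", "LTV", impact=7, confidence=6, ease=5, reach=6),
    ExperimentIdea("Homepage hero copy test", "ARR", impact=4, confidence=5, ease=9, reach=8),
]
for idea in prioritize(backlog):
    print(f"{idea.score():.2f}  {idea.name} -> {idea.revenue_driver}")
```

In practice this logic lived in our tracker rather than in code, but the rule was identical: ideas without a recognized revenue driver never entered the ranked backlog.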

Balancing Risk and Return with the 70-20-10 Portfolio

I reorganized our experimentation backlog using the 70-20-10 portfolio model, a discipline borrowed from innovation management:
Portfolio Allocation
- 70% Safe Bets – High-confidence optimizations such as funnel refinements, signup flows, pricing tests, and conversion improvements. Delivered predictable gains quarter after quarter.
- 20% Adjacent Bets – Moderate-risk opportunities beyond our core, including referral pilots, segmented campaigns, retention bundles, and new targeting strategies. Expanded capability while limiting downside.
- 10% Bold Bets – High-risk, high-reward experiments such as AI-driven investor matching, new ad channels, and predictive targeting models. Most failed, but the wins created transformative revenue streams.
To align executives, I drew parallels to financial portfolio management: diversify risk, capture upside, and manage allocation deliberately. Leadership immediately understood and backed the approach.
The 70-20-10 discipline reduced downside through steady safe bets while keeping bold opportunities in play. It gave the team a clear framework to balance predictability with breakthrough potential.
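As a rough illustration of how the allocation worked in planning, the sketch below buckets a ranked backlog into the three risk classes, each capped at its share of the cycle's experiment slots. The risk labels, slot counts, and example items are assumptions for the sketch; the discipline it shows is that bucket capacity, not enthusiasm, determines how many bold bets run in a given cycle.

```python
from collections import defaultdict

# Target share of experiment slots per risk class (the 70-20-10 model).
ALLOCATION = {"safe": 0.70, "adjacent": 0.20, "bold": 0.10}

def plan_cycle(ranked_backlog: list[dict], slots: int = 10) -> dict[str, list[str]]:
    """Fill each bucket up to its share of available slots, in priority order.

    Each backlog item is a dict with at least 'name' and 'risk'
    ('safe', 'adjacent', or 'bold'); list order reflects ICE-R ranking.
    """
    capacity = {risk: round(slots * share) for risk, share in ALLOCATION.items()}
    plan = defaultdict(list)
    for item in ranked_backlog:
        risk = item["risk"]
        if len(plan[risk]) < capacity.get(risk, 0):
            plan[risk].append(item["name"])
    return dict(plan)

backlog = [
    {"name": "Signup flow friction fix", "risk": "safe"},
    {"name": "Referral credit pilot", "risk": "adjacent"},
    {"name": "AI investor matching", "risk": "bold"},
    {"name": "Pricing page test", "risk": "safe"},
]
print(plan_cycle(backlog, slots=10))
```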
When 70-20-10 Isn’t Enough
The 70-20-10 model worked well for MyEListing because the business already had momentum. What it needed was discipline, not reinvention. But portfolio balance is not one-size-fits-all.
- Legacy companies facing disruption may need to tilt harder toward transformation, adopting something closer to 50-30-20 or even 40-40-20 to fund bold moves.
- The McKinsey Three Horizons Model offers another lens. Horizon 1 (H1) focuses on optimizing the core, Horizon 2 (H2) invests in emerging opportunities, and Horizon 3 (H3) pursues transformational bets. It gives leaders a structured way to rebalance based on market maturity and urgency.
For MyEListing, 70-20-10 provided the right level of stability and upside. In other contexts, leaders should adapt the mix to match business stage, competitive pressure, and appetite for risk. The key is not the exact ratio, but the discipline of managing growth as a portfolio instead of chasing ideas at random.
Building Operational Systems for Consistency and Scale
Frameworks alone don’t create sustained capability. To maintain discipline across experiment cycles, I built four interconnected systems:

Core Operational Systems
- Centralized Airtable Tracker – Logged every experiment with ICE-R scores, hypotheses, success criteria, results, and learnings. Created institutional memory, prevented repeated mistakes, and enabled pattern recognition.
- Bi-Weekly Review Cadence – Focused on insights, not vanity metrics. Teams presented learnings, debated customer behavior, and applied findings to future design.
- Kill/Scale Decision Rules – Defined thresholds before launch (significance, effect size, stop criteria). Removed politics and sped up resource reallocation.
- Experiment Ownership – Delegated full responsibility to emerging leaders, from hypothesis to results. Built judgment, collaboration skills, and leadership bench strength.
These systems turned experimentation from a side project into a core operating rhythm. Teams expected structured cycles as naturally as quarterly planning, embedding experimentation into the company’s DNA.
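The kill/scale decision rules are the most mechanical of these systems, so a small sketch helps show what "thresholds defined before launch" means in practice. This example uses a two-proportion z-test for a conversion experiment; the significance level and minimum lift are illustrative assumptions, not our exact thresholds, and the point is that the decision follows from the numbers rather than from debate.

```python
from math import sqrt
from statistics import NormalDist

def decide(control_conv: int, control_n: int,
           variant_conv: int, variant_n: int,
           alpha: float = 0.05, min_lift: float = 0.10) -> str:
    """Two-proportion z-test plus a minimum relative lift threshold.

    Returns 'scale', 'kill', or 'keep running' based on rules fixed
    before launch (alpha and min_lift here are example values).
    """
    p1, p2 = control_conv / control_n, variant_conv / variant_n
    pooled = (control_conv + variant_conv) / (control_n + variant_n)
    se = sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
    if se == 0:
        return "keep running"
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test
    lift = (p2 - p1) / p1 if p1 else 0.0

    if p_value < alpha and lift >= min_lift:
        return "scale"
    if p_value < alpha and lift <= 0:
        return "kill"
    return "keep running"

# Example: 4.0% control vs 5.0% variant conversion.
print(decide(control_conv=400, control_n=10_000,
             variant_conv=500, variant_n=10_000))
```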
Building Capability and Culture for Sustainable Experimentation
Systems only work when people trust them and have the skills to execute. Many team members were comfortable with tactical tests but hesitant to run bold experiments that risked visible failure. I focused on building both capability and culture.
Capability & Culture Initiatives
- Training Workshops: Taught hypothesis framing, ICE-R application, experimental design, and statistical literacy. Teams learned to translate business questions into testable hypotheses and interpret results with rigor.
- Modeling Psychological Safety: Shared my own failed experiments to show the value of null results. Reinforced that the only failure was failing to document learning.
- Rewarding Learning Velocity: Recognized teams for fast, well-designed cycles rather than only for wins. This kept momentum high and reduced fear of failure.
- Empowering Emerging Leaders: Delegated end-to-end experiment ownership, from design to stakeholder presentations. Built confidence, judgment, and cross-functional capability.
This cultural shift proved as valuable as the frameworks themselves. Experimentation became expected, collaborative, and safe to pursue — a core part of how the team operated.
Learning from Failure: How Two Experiments Shaped the Playbook
Systematic experimentation means accepting that some tests will fail. The value comes from documenting those failures and converting them into lessons that shape future design. Two failed experiments were especially influential in refining our playbook.
Case 1: Referral Program Pilot (Adjacent Bet)
We believed referral credits for property listings would increase supply by motivating sellers to recruit peers. The team tested multiple credit amounts and referral mechanisms across a representative seller segment.
What We Found
- The program produced no measurable lift in listing volume, regardless of incentive level.
- Sellers prioritized transaction speed and certainty over modest credits.
- Once sellers chose to list, they wanted fast, reliable sales – not incremental perks.
What We Did Next
We redirected resources into speeding up transactions. That meant streamlining the closing process, improving pipeline transparency, and giving sellers real-time transaction tracking. These improvements generated the supply growth that referral credits failed to deliver.
Case 2: Display Advertising Channel Test (Bold Bet)
We launched a high-investment display advertising test, expecting volume growth to justify the spend through scale economics. We targeted audiences aligned with our Ideal Customer Profile (ICP) based on demographic and behavioral data.
What We Found
- CAC came in at 2x our blended benchmark, making the channel unsustainable.
- Audience quality didn’t align with our ICP despite demographic similarity.
- Behavioral intent signals mattered more than demographics in predicting conversion.
What We Did Next
We redesigned our channel testing playbook. New rules required rigorous pre-test audience validation, smaller pilot budgets, and staged scale-up criteria. This failure saved the company from larger future losses and created a disciplined framework for testing new channels.
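To show what "smaller pilot budgets and staged scale-up criteria" can look like operationally, here is a hypothetical sketch of staged gates for a new paid channel. The budget tiers and CAC multiples are invented example values (the case only records that the display test came in at roughly 2x blended CAC); the structure is the lesson: validate the audience first, start small, and scale only when efficiency clears a pre-set bar.

```python
# Illustrative staged scale-up gates for a new paid channel test.
# Budget tiers and CAC ceilings are hypothetical example values; the real
# playbook simply requires such gates to be written down before any spend.
STAGES = [
    {"name": "pilot",     "budget": 5_000,  "max_cac_multiple": 1.5},
    {"name": "expansion", "budget": 20_000, "max_cac_multiple": 1.2},
    {"name": "scale",     "budget": 75_000, "max_cac_multiple": 1.0},
]

def next_step(current_stage: int, observed_cac: float, blended_cac: float,
              audience_validated: bool) -> str:
    """Advance only if the audience was pre-validated and CAC clears the gate."""
    if not audience_validated:
        return "stop: validate audience against ICP intent signals first"
    gate = STAGES[current_stage]
    if observed_cac <= blended_cac * gate["max_cac_multiple"]:
        if current_stage + 1 < len(STAGES):
            return f"advance to {STAGES[current_stage + 1]['name']}"
        return "channel proven: fold into always-on budget"
    return "kill: CAC above the pre-set ceiling for this stage"

# The display test example: roughly 2x blended CAC at the pilot stage.
print(next_step(current_stage=0, observed_cac=2.0, blended_cac=1.0,
                audience_validated=True))
```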
The Takeaway: Documenting Failures Builds Knowledge
By documenting failures as carefully as successes, we built genuine institutional knowledge. Teams stopped repeating mistakes, refined portfolio allocation, and improved validation discipline. Each cycle compounded the organization’s learning velocity — exactly the point of systematic experimentation.
Business Impact: How the Playbook Drove Growth
The Growth Experimentation Playbook delivered significant, sustained business results across multiple dimensions:

Performance Across Core Metrics:
- Traffic Growth: Web traffic increased 83% year-over-year as we optimized acquisition channels and scaled proven performers systematically
- Conversion Rate Improvement: Overall conversion rates jumped 47% through continuous funnel optimization and targeted user experience improvements
- Marketing Efficiency: Marketing ROI reached 133%, reflecting disciplined resource allocation toward high-performing channels and rapid reallocation away from underperformers
- Retention Gains: Customer retention increased 22% through segmented lifecycle campaigns and product improvements informed by retention cohort analysis
- Experimentation Velocity: Monthly experiment throughput scaled from two to eight rigorous experiments, dramatically accelerating our organizational learning rate without sacrificing quality

High-Impact Wins:
- Segmented Onboarding (Adjacent Bet): Lifted trial-to-paid conversion by 15% overall; among enterprise accounts specifically, drove 23% conversion improvement
- AI-Driven Investor Matching (Bold Bet): Validated technically and opened an entirely new revenue stream; attracted significant board attention and became a competitive differentiator
We achieved these outcomes through disciplined portfolio management, not isolated brilliance.
These portfolio-managed results stem from a systematic approach that operates predictably quarter after quarter. The experimentation system delivered compounding gains as institutional knowledge accumulated and team capability deepened.
Leadership Lessons from the Growth Experimentation Playbook
Building the Growth Experimentation Playbook at MyEListing clarified a set of leadership principles I now apply consistently. These lessons fall into three categories: discipline, culture, and capability.
1. Discipline: Linking Experiments to Business Outcomes
- Mandate revenue linkage. Every hypothesis must connect directly to ARR, CAC, LTV, or retention. If a team can’t articulate the revenue impact, the test doesn’t qualify.
- Prioritize with structure. ICE-R scoring replaces politics with evidence. It forces merit-based debates about assumptions and ensures high-value ideas surface first.
- Balance portfolio risk. The 70-20-10 model reduces downside through consistent safe bets while preserving upside with adjacent and bold moves. Leaders must manage that balance deliberately.
2. Culture: Building Safe and Fast Learning Environments
- Reward learning velocity. Recognize rapid, well-designed cycles — not just wins. This keeps teams moving forward instead of fearing failure.
- Model psychological safety. Leaders who share their own failures create permission for teams to take intelligent risks. Safety is proven by behavior, not by policy.

3. Capability: Developing Leaders and Scalable Infrastructure
- Develop leaders through ownership. Giving emerging talent end-to-end responsibility for experiments builds judgment and cross-functional skills faster than training alone.
- Create operating rhythm. Sustainable experimentation depends on infrastructure: centralized tracking, structured review cadences, predefined kill/scale criteria, and disciplined documentation. Leaders must build systems that outlast individual initiatives.
Reflections: Leadership Beyond Experimentation
At MyEListing, my role went far beyond choosing winning experiments. I built a system that surfaced the right tests, enforced discipline, and scaled predictably.
I shifted experimentation from tactical guessing to strategic discipline. By embedding ICE-R scoring, portfolio balance, psychological safety, operational infrastructure, and leadership development into the culture, I proved that systematic experimentation drives sustainable competitive advantage.
The most important outcome wasn’t the metrics. The team learned to manage growth like a science – with clarity, discipline, and confidence. They developed a capability that would compound long after my direct involvement.
Organizations that embed experimentation into their leadership DNA don’t rely on occasional wins. They create institutions that:
- Learn faster than competitors
- Adapt quicker to market shifts
- Sustain growth through disciplined systems, not isolated efforts
In volatile markets, that superior learning velocity becomes the most durable growth advantage.
Conclusion: Growth Experimentation Case Study
At MyEListing, I led the shift from scattered testing to a systematic growth experimentation discipline. Using ICE-R prioritization, 70-20-10 portfolio balance, operational infrastructure, and a culture of safe-to-fail testing, we proved that growth isn’t driven by hacks or one-off wins — it’s built through leadership and systems.
Results Delivered:
- +83% traffic growth
- +47% conversion improvement
- +22% retention gains
- 133% marketing ROI
More important than the metrics, we built an experimentation capability that will generate returns long after the initial wins.
Leaders who institutionalize experimentation don’t just optimize today’s model. They create learning organizations that:
- Discover new opportunities faster
- Validate them with rigor
- Scale them more effectively than competitors
In markets where customer behavior changes constantly and advantages erode quickly, learning velocity is the ultimate competitive weapon.
The MyEListing playbook demonstrated that growth leadership is about building systems that make organizations smarter and stronger each quarter. That capability compounds into enduring competitive advantage.
My Notion: Experiment & Hypothesis Dashboard
The Hypothesis Library is your centralized, reusable backlog of test ideas – categorized by funnel stage and theme – so you never start a sprint from zero.
The Experiment Tracker is your central hub for logging, prioritizing, and analyzing every growth experiment – helping your team move faster, stay aligned, and learn from every test.
Both are free to download; make them your own!
Help Support My Writing
Subscribe for weekly articles on leadership, growth, and AI-driven strategy. You’ll receive practical frameworks and clear takeaways that you can apply immediately. Connect with me on LinkedIn or Substack for conversations, resources, and real-world examples.
Growth Experimentation Related Articles
The Ultimate Experimentation Guide for Leaders Who Want Results
Growth Loop Strategy: Best Practices for Business Leaders
ROI Growth Experiments: The Data-Driven Way to Win
The Truth About Building a B2B Ideal Customer Profile
The Ultimate Growth Experimentation Framework
Case Study Related Articles
AI Case Study: How Florists Optimize With Practical AI
AI Marketing Stack Integration: Smarter Attribution, Better ROI
AI Digital Marketing Strategy: Powerful Results MyEListing Achieved
Case Study: High-Trust Teams With Proven Results
Sales Ops Case Study: From Chaos to Repeatable Growth
MyEListing Growth Experimentation Case Study: Driving Real Results
About the Author
I write about:
- AI + MarTech Automation
- AI Strategy
- COO Ops & Systems
- Growth Strategy (B2B & B2C)
- Infographic
- Leadership & Team Building
- My Case Studies
- Personal Journey
- Revenue Operations (RevOps)
- Sales Strategy
- SEO & Digital Marketing
- Strategic Thinking
📩 Want 1:1 strategic support?
🔗 Connect with me on LinkedIn
📬 Read my playbooks on Substack