
Product roadmaps look great on paper, with neatly laid-out requirements, epics, user stories, and acceptance criteria. At best, though, these are educated guesses. Worse, the prioritization of those requirements is often based on gut feel rather than concrete evidence.
This isn’t a post about yet another prioritization framework. Instead, it’s about why experimentation generates the necessary data to make high-quality product decisions. Decisions that minimize waste, avoid costly missteps, and ensure teams are building the right things for the right reasons.
The High Cost of Assumptions
Companies operate on quarterly and annual budgets, demanding measurable results from every investment. The standard pitch goes something like this: "We will deliver Feature X at a cost of Y, leading to Outcome Z." If the math checks out, leadership approves the build.
But here’s the catch: how do you know Feature X is worth building in the first place?
Far too often, product development is driven by assumptions rather than validated insights. A CB Insights study found that 42% of failed startups cited “no market need” as their primary reason for failure [1]. This means they built something no one actually wanted or needed. The same principle applies to enterprise product teams—without proper experimentation, teams are effectively gambling with development resources.
Why Product Teams Skip Experimentation (and Why They Shouldn’t)
Most product managers don’t apply the scientific method to their hypotheses. Instead, they over-index on customer feedback, competitor analysis, anecdotes, and market trends. These are useful inputs, but they paint only half of the picture. As a result, flawed assumptions get funded, leading to wasted development cycles and frustrated teams.
The common excuses for skipping experimentation include:
"Engineering time is too expensive."
"We need to move fast and ship more features."
"We already validated this in customer interviews."
But consider this: the cost of wasted development is significantly higher than the cost of early testing. IBM estimated that the cost of fixing a defect post-release is 100x greater than fixing it during the design phase [2]. Similarly, Google, Amazon, and Netflix have all shown that continuous experimentation leads to better product-market fit, higher user engagement, and ultimately, more successful products.
Scrummerfall: The Silent Productivity Killer
Even in Agile teams, I frequently see the Scrummerfall anti-pattern—where teams follow an iterative development process in theory but a waterfall-style planning process in practice. The cycle goes like this:
Develop a PRFAQ (Press Release & FAQ) and/or PRD (Product Requirements Doc).
Validate with engineering.
Add it to the backlog.
Prioritize, build, and ship.
But this sequence misses a crucial step: experimentation. Without it, teams risk building the wrong thing, incurring opportunity costs, and ultimately delivering features that never gain traction.
A Better Approach: Engineering-Integrated Experimentation
Experimentation shouldn’t be a post-development afterthought. It needs to be embedded early in the product definition process. Some mechanisms to consider:
Early Dev Spikes for Hypothesis Testing:
Dev Spikes are common but often misused—they usually happen after the build decision is already made. Instead, move them upstream into the design and planning phase to test whether a hypothesis is even worth pursuing.
Prototyping and A/B Testing Before Commitment:
Amazon is famous for its “Working Backwards” approach, where teams write a PRFAQ and prototype ideas before committing resources. Netflix, meanwhile, runs thousands of A/B tests per year, ensuring that only data-backed changes make it to production.
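You don’t need Netflix-scale infrastructure to get started. A minimal sketch of the statistics behind many A/B conversion tests—a two-proportion z-test on prototype traffic—fits in a few lines of Python (all visitor and conversion numbers below are hypothetical):

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B convert better than A?
    conv_*: number of conversions; n_*: number of visitors."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided p-value from the standard normal CDF (via erf)
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Hypothetical experiment: 120/2400 conversions on A vs 168/2400 on B
z, p = ab_test_z(120, 2400, 168, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # ship B only if p clears your bar
```

In practice you would pre-register the hypothesis and sample size before peeking at results, but even this back-of-the-envelope test beats arguing from anecdotes.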
Developer-Driven Experimentation (Wacky Thursday, Hack Weeks):
Engineering teams often have Wacky Thursday or hackathons to explore innovative ideas. Product should tap into these sessions to validate risky assumptions before formalizing requirements.
Customer-Validated MVPs (Not Just Customer Feedback):
Eric Ries' Lean Startup methodology emphasizes the importance of testing actual user behavior over stated user preferences [3]. MVPs should be measured based on engagement, conversion rates, and revenue impact, not just customer feedback.
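To make “measure behavior, not opinions” concrete, here is a small sketch of computing a conversion funnel from an MVP’s event log. The user IDs, event names, and funnel steps are all illustrative assumptions, not a prescribed schema:

```python
# Hypothetical event log from an MVP: (user_id, event) pairs
events = [
    ("u1", "visit"), ("u1", "signup"), ("u1", "purchase"),
    ("u2", "visit"), ("u2", "signup"),
    ("u3", "visit"),
    ("u4", "visit"), ("u4", "signup"),
]

def funnel(events, steps=("visit", "signup", "purchase")):
    """Count distinct users reaching each funnel step, plus the
    step-to-step conversion rate between adjacent steps."""
    users_at = {s: {u for u, e in events if e == s} for s in steps}
    rates = {}
    for prev, nxt in zip(steps, steps[1:]):
        denom = len(users_at[prev])
        rates[f"{prev}->{nxt}"] = len(users_at[nxt]) / denom if denom else 0.0
    return {s: len(users_at[s]) for s in steps}, rates

counts, rates = funnel(events)
print(counts)  # distinct users at each step
print(rates)   # e.g. signup rate, purchase rate
```

Numbers like these—who actually signed up and paid—are the evidence an MVP exists to produce; a stack of enthusiastic interview quotes is not.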
The Real Goal: Frequent, Quality Releases
Shipping more features doesn’t equate to success. Success comes from shipping frequent, high-quality releases that solve real user problems. A feature that takes six months to develop but isn’t used is infinitely more expensive than an experiment that disproves a bad hypothesis in two weeks.
Instead of blindly following a roadmap, product teams should be asking:
What assumptions underlie this feature?
How can we validate them with minimal investment?
What’s the fastest way to prove (or disprove) our hypothesis?
Jeff Bezos famously advised that “most decisions should probably be made with somewhere around 70% of the information you wish you had” [4]. The key is ensuring that 70% is grounded in experiments, not opinions.
Takeaway: Build Experimentation Into the Budget
If teams don’t budget for experimentation, they are budgeting for failure. Early validation ensures that product and engineering efforts are data-driven, minimizing wasted time, money, and frustration.
So, next time you’re pitching a feature, ask: Do we have real evidence that this will work? If not, go run an experiment. Your future self (and your teammates) will thank you.
My Ask
Thank you for reading this article. I would be very grateful if you could take one (or all) of the three actions below.
Like this article by using the ♥︎ button at the top or bottom of this page.
Share this article with others.
Subscribe to the elastic tier newsletter! (Note: please check your junk mail if you can’t find it.)
[1] https://www.cbinsights.com/research/startup-failure-reasons-top/
[2] https://arxiv.org/pdf/1609.04886
[3] https://www.amazon.com/Lean-Startup-Entrepreneurs-Continuous-Innovation
[4] https://www.aboutamazon.com/news/company-news/2016-letter-to-shareholders