I noticed an issue a few years ago that has become impossible to ignore in 2022. It challenges the most basic assumptions of my work.
It can be very humbling to realize that the techniques and instincts you’ve built throughout your career no longer apply to the new environment you find yourself in. Back in 2009, when I first started working on software and web products professionally, the best thinkers in product development were talking about Minimum Viable Products (MVPs), validated learning, design thinking, and agile development. You remember the old “skateboard diagram”?
These weren’t brand new ideas, but they captured the zeitgeist of the internet entrepreneur class in a big way around that time. From 2010 to 2015, I probably worked on something like 50-70 MVPs, running all kinds of experiments to validate products, features, and entire businesses. It was honestly thrilling.
More recently, I’ve noticed that a lot of the old tricks are starting to show their age, and the landscape we’re in today requires a different approach.
For one, most early MVPs and experiments focused on what I would call “Front to Back” validation: exposing the user experience and design of a product at varying levels of fidelity well before you started writing code. A reasonable “test” of a product might be a clickable prototype someone put together with Sketch + Invision and ran through User Testing.
Secondly, most of the products we were working on had fundamentally deterministic behavior. The way you could write user stories, define acceptance criteria, and build test cases was predicated on the idea that you had relatively predictable inputs and specific outcomes you would expect to see.
Here’s what has changed:
- An increasing share of products are built on machine learning algorithms used to generate personalized recommendations.
- The data inputs aren’t limited to user input inside your product, and often come from heterogeneous third-party sources.
- The majority of software teams are using robust front-end frameworks with large standard libraries and product design systems, making UI more of a commodity.
In my recent experience, across perhaps the last 10 major features or products I’ve worked on, the most important thing to validate early on is whether your algorithms are capable of generating accurate, compelling results for your users. Instead of building UX prototypes with static sample data, the single best thing you can build early is an ugly interface with an intelligent algorithm returning instant results. At Cornerstone, we built one big sandbox like this called the Data Playground. It’s basically a nebula from which stars (future products and features) are born.
Here are some of the stages I like to think of when validating data-driven, ML-based enterprise software:
- Simple user-entered data, real algorithms, and live responses in a generic global sandbox environment. No user entries are saved.
- Progress towards a sandbox inside the customer’s portal where the inputs are a mix of user-entered data and existing system data already in place. This is where things can get really messy if the customer data isn’t in the right state, which is an important problem to solve.
- Add feedback mechanisms inside the product to track conversion and gather quantitative and qualitative feedback on recommendations.
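The first stage above can be sketched as a stateless sandbox: ad-hoc user-entered data in, live model output back, nothing persisted. Everything here is hypothetical; `score_candidates` is a naive stand-in for whatever recommendation model you are actually validating, and the field names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    item_id: str
    score: float

def score_candidates(profile: dict, catalog: list[dict]) -> list[Recommendation]:
    """Stand-in for the real ML model: rank catalog items by
    naive keyword overlap with the user's stated interests."""
    interests = set(profile.get("interests", []))
    recs = [
        Recommendation(item["id"], len(interests & set(item["tags"])))
        for item in catalog
    ]
    return sorted(recs, key=lambda r: r.score, reverse=True)

def sandbox_recommend(user_entered: dict, catalog: list[dict], top_n: int = 3):
    """Stage-one sandbox: accept whatever the user types in, return
    live results from the real algorithm, and deliberately save nothing."""
    return score_candidates(user_entered, catalog)[:top_n]
```

The point of the sketch is the shape, not the model: the user sees real algorithmic output instantly, and because no entries are saved, the sandbox stays a generic, global environment rather than a feature with persistence obligations.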
Early on, what you’re trying to prove is fourfold: that you can capture and normalize the user data, that you’re gathering the right inputs, that those inputs are sufficient for the ML model you’re using to produce personalized, accurate responses, and that the user flows are fundamentally resilient to wild variations in data quality and depth. Our teams now spend much more of their energy and attention (than my teams did 10+ years ago) designing experiences where the product still works well even if the data we use to shape the experience is sparse, low quality, or internally inconsistent.
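One common pattern for that kind of resilience, shown here as a sketch rather than anything the author describes, is to check how much signal you actually have before trusting the personalized path, and to fall back to a non-personalized baseline when the data is too sparse. The function names and the threshold are assumptions for illustration.

```python
from typing import Callable

def recommend(
    user_events: list[str],
    personalized: Callable[[list[str]], list[str]],
    popular: Callable[[], list[str]],
    min_events: int = 5,
) -> list[str]:
    """Degrade gracefully: use the personalized model only when there is
    enough behavioral signal; otherwise fall back to a popularity baseline
    so the product still works for sparse or brand-new users."""
    if len(user_events) >= min_events:
        return personalized(user_events)
    return popular()
```

The design choice is that the fallback is a product decision made explicit in code: rather than letting a model produce garbage on thin data, the experience stays coherent at every level of data depth.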
Thankfully, I’m starting to see the culture shift internally towards recognizing the real risks and assumptions that we need to overcome to build successful products in today’s world. It’s going to take time, just like it took everyone years to shake the old waterfall development habits they built. But I’m hopeful and I’m excited to see which tools and techniques emerge to drive the next generation of product development.