Measuring Impact

Too often a product manager works hard to understand a problem, comes up with a solution they believe will solve it, specs out the work, and delivers the solution, only to find themselves in an awkward position: unsure how to convey whether the solution is working as expected, or how to communicate its value to the rest of the business. Here is how we can avoid these situations.

Start with Impact.


While setting goals, the PM should start with the outcomes that define success. When those outcomes are the starting point for what the teams build, everything else follows much more naturally.

On a day-to-day basis, the PM should be using data to understand where the opportunities are and to define the key problems that, if addressed, would make an impact.

As a PM begins to define a problem worth fixing, they should look into the data to see where the problem exists and how its impact on performance can be measured. Defining these key metrics for each discovery item lays the foundation for the team as it works on the solution and thinks about the desired outcomes.

Release Plan.


Once a solution has been defined in high fidelity, the next task required to fully spec the project is setting a release plan. Based on the key metrics defined for the project, the team needs to decide how it will test the solution upon release. Before the project enters delivery, it is critical that its scope include tasks to ensure that tracking is sufficient.

There are three main methods to testing the impact of feature releases:

  • AB test: The best-case scenario in a B2C environment is to test with live users, running a control against the solution. To run this test, you first need to understand the variance in performance and the timeframe it takes to get a result through an AB test. I generally aim for variance of less than 10% and a result at 90%+ confidence within 2-3 weeks. If it will take longer to reach a confident result, I look to the other methods.

  • Beta group: Generally used in a B2B environment. Only a select number of users are opted in to the experience, with open lines of communication so the customer can share the pros and cons of the solution. This reduces the risk of shipping something to everyone that is not performant, and it ensures that your most loyal customers, who won't be swayed by one poor experience, can provide input before the feature reaches newer or more risk-averse customers.

  • Historical analysis: The worst-case scenario is a historical analysis. Effectively it is "just pushing it live," but in a slightly smarter way. If the user base is too small and the business needs to move fast, the best approach is to release on a date away from holidays, ideally in the middle of the week, and then define cohorts of users with and without the solution. There is considerable risk with this method: other, unrelated releases can unintentionally impact performance, and traffic can differ due to marketing or market conditions.
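The AB-test readout above can be sketched with a standard two-proportion z-test: given conversion counts for the control and the variant, it returns the confidence that the two rates actually differ. This is a minimal illustration using only the standard library, not a substitute for a proper experimentation platform, and the conversion numbers in the example are hypothetical.

```python
import math

def ab_test_confidence(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: confidence (1 - two-sided p-value)
    that control (A) and variant (B) conversion rates differ."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled rate under the null hypothesis of no difference.
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se if se > 0 else 0.0
    # Two-sided p-value from the normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return 1 - p_value

# Hypothetical readout: 500/10,000 control vs 560/10,000 variant.
confidence = ab_test_confidence(500, 10_000, 560, 10_000)
```

With these made-up numbers the confidence lands a little above the 90% bar mentioned earlier; with smaller samples or a smaller lift, it would fall below it, which is exactly the signal to keep the test running or switch methods.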

Before release, a PM needs to ensure that the proper dashboards are set up to track the impact of the solution. It is critical to set an allotment of time to let the test run, with the focus on giving the team time to learn. It is best to let things run 2-3 weeks to ensure that the results are not false positives or negatives.
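Whether 2-3 weeks is actually enough depends on your traffic and on how small a lift you need to detect. A common rule of thumb (roughly 80% power at 5% significance: about 16 · p(1−p)/δ² users per arm, where δ is the absolute rate difference) can sanity-check the timeframe. The baseline rate, lift, and daily traffic below are hypothetical.

```python
import math

def required_days(baseline_rate, min_detectable_lift, daily_users_per_arm):
    """Estimate how many days an AB test needs to run.

    Uses the rule of thumb n per arm ~= 16 * p * (1 - p) / delta^2
    (~80% power, 5% significance), where delta is the absolute
    difference in conversion rates we want to detect."""
    delta = baseline_rate * min_detectable_lift  # relative lift -> absolute
    n_per_arm = 16 * baseline_rate * (1 - baseline_rate) / delta ** 2
    return math.ceil(n_per_arm / daily_users_per_arm)

# Hypothetical numbers: 5% baseline conversion, 10% relative lift,
# 2,000 users per arm per day -> roughly two and a half weeks.
days = required_days(0.05, 0.10, 2_000)
```

If this estimate comes out well past three weeks, that is an early signal, before the project enters delivery, that a beta group or historical analysis may be the better fit.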

Want to learn more about PM fundamentals?