One of the problems we product managers face is that there are lots of interesting technologies and product ideas, but not many that can be successful in the market. I like to use the “order of magnitude” rule of thumb as a test to help determine if a new product has any chance of being successful. This isn’t the only metric for success, but I consider it a necessary condition — if you don’t pass this test it’s going to be difficult to get a customer to pay attention to you.
The rule says that a new product needs to improve some significant process (defining "significant" takes some expertise!) by an order of magnitude. That is, it has to be ten times better in some dimension.
In most cases you can’t improve the overall process by an order of magnitude — for example, there aren’t many products that enable an organization to reduce the personnel required for some activity by 90%. Typically, you’re going to be improving some component metric by that factor. The original value proposition for system management and monitoring software was that it reduced downtime by a factor of ten — organizations went from as much as 20% downtime to 2% or less. (Note that you need to look at the improvement in downtime to see the huge benefit — uptime only improves by about 20%, from 80% to 98%.)
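To make that arithmetic concrete, here is a minimal sketch using the figures above (20% downtime falling to 2%). It simply shows why the same change looks like a tenfold improvement when measured as downtime but only a modest gain when measured as uptime:

```python
# Figures from the downtime example: 20% downtime before, 2% after.
downtime_before = 0.20
downtime_after = 0.02

uptime_before = 1 - downtime_before   # 80% uptime
uptime_after = 1 - downtime_after     # 98% uptime

# Measured as downtime, the improvement is an order of magnitude.
print(f"Downtime improvement: {downtime_before / downtime_after:.0f}x")   # 10x

# Measured as uptime, the same change is only about 20% better.
uptime_gain = (uptime_after - uptime_before) / uptime_before
print(f"Uptime improvement: {uptime_gain:.1%}")                            # ~22.5%
```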
Often you can use this rule of thumb to determine the value of your product, which in turn drives pricing, because the same calculation tells you how much the customer stands to save (or earn) by using it.
For example, if your product reduces the number of failed transactions on a website, and you can relate the number of failed transactions to a number of shopping carts abandoned, you have an excellent basis for pricing your product. “Our product will reduce the number of failed transactions by a factor of ten, resulting in X% more sales on a weekly basis, at an average of $Y per sale. At a price of $Z, the system pays for itself in a few months.”
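Here is a sketch of that pricing argument as a payback calculation. The transaction volume, failure rate, sale value, and price below are hypothetical stand-ins for the X, Y, and Z in the pitch, not figures from any real deployment:

```python
# Hypothetical inputs -- stand-ins for the X, Y, and Z placeholders above.
weekly_transactions = 20_000
failure_rate_before = 0.02                        # 2% of transactions fail today
failure_rate_after = failure_rate_before / 10     # order-of-magnitude reduction
avg_sale_value = 50.0                             # $Y per sale
product_price = 250_000.0                         # $Z for the product

# Failures that no longer happen become completed sales.
recovered_sales = weekly_transactions * (failure_rate_before - failure_rate_after)
weekly_gain = recovered_sales * avg_sale_value

# Weeks until the product pays for itself.
payback_weeks = product_price / weekly_gain
print(f"Recovered sales per week: {recovered_sales:.0f}")
print(f"Added revenue per week: ${weekly_gain:,.0f}")
print(f"Payback period: {payback_weeks:.1f} weeks")   # ~14 weeks, i.e. a few months
```

With these illustrative numbers the system pays for itself in roughly three and a half months, which is exactly the kind of statement the pitch above is making.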