It’s Not Data-Driven or Informed You Want, It’s Science

Thomas Kuhn actually wrote “The Structure of Scientific Revolutions” to figure out why he was churning from Game of War

“We want to be more data driven” or “We want to create a stronger data culture” are common refrains from organisations. Often these notions go unchallenged. The claim that organisations make better decisions with data at hand is, perhaps ironically, never investigated empirically. Dig a little deeper, however, and it’s not the data that drives the value but the better decisions made with empiricism. And empiricism isn’t merely data; it’s a method of acquiring knowledge.

To even get off the ground analysing data, you need a theory of measurement. What should we track, given limited engineering resources and rising storage costs? Claiming we should track, say, payments and logins, to the exclusion of, say, battle logs, implies a cost-benefit value ranking: it’s more valuable to spend resources tracking payments. But these values are derived from somewhere. That “somewhere” is a set of assumptions expressed as if-then statements. “If we raise price then quantity demanded will fall” is a claim that can be grounded empirically or tautologically (1+1=2 because the rules define it as such). It’s true either because whenever we raised price in the past quantity demanded fell, or because we assume people respond to incentives (holding all else constant, a higher price is a disincentive).

The methodological refinement of these assumptions is crucial in evaluating the validity of these if-then statements. Will adding a clan system to the game maximize LTV relative to adding a login reward system? A method isn’t just helpful in evaluating this question; it’s impossible to answer without one.

The refinement of methodology through deliberation is at the core of what makes science, science. For science and its refinement process to even take place, we need clear assumptions that produce those previously mentioned if-then claims. This is what I’ve seen Product Managers struggle with the most. Putting a feature on the roadmap carries a set of implicit assumptions, and these assumptions deserve to be made explicit so they can be debated and analysed.

As it turns out, the results produced by the scientific process are increasingly monetizable thanks to the low marginal cost, stable fixed cost economics of software distribution. For example, if you find that people generally play your game on low-end devices and then start optimising for such devices, the benefit scales to millions of players. The total addressable market is the extent of the market.

And to be clear, you don’t need empirical data to do science. Economists and physicists make claims about the real world using accounting identities or tautologies like “something that is priced higher costs more”, which hold as a function of the rules (in this case, language).

Arguing to be data-driven or data-informed misplaces the value in the supply chain. Western science has never been more adored or employed than it is now, something we all benefit from. But we need to be more explicit in our endeavour – this isn’t about data, it’s about science.

F2P Demand Curves Are Weird, Just Ask Levitt

A paradigm forever changed, one man carries a dying tradition.

Steve Levitt, the last price theory samurai, and John List, future Nobel Prize winner, have published a paper on free-to-play economics.

In a textbook neoclassical experiment, Levitt alters the quantity of Candy Crush hard currency at a given price point. While economists generally think of price variation as the way of deriving demand curves, quantity variations are just as legitimate a tool.

Despite a sample size of over 15 million and a wide range of quantity convexity (80% variation across variants), all quantity discounting schemes produced similar revenue. Levitt concludes by commenting,

“…varying quantity discounts across an extremely wide range had almost no profit impact in the short term.”

The more interesting and little-explored result:

“…almost all of the impact of the price changes was among those already making a purchase; radical price reductions induced almost no new customers to buy…”

This suggests free-to-play games are made up of two groups of users: purchasers and non-purchasers. The decision to become a customer is exogenous: it is made outside the game, and there is little scope for converting non-customers into customers. Put another way, non-customers are perfectly price inelastic, while customers behave roughly unit-elastically: their total spend stays constant as prices change, which is why revenue barely moved. Indeed, industry research corroborates this.2
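The two-segment reading can be sketched as a toy model. All counts, budgets, and price points below are hypothetical illustration values, not figures from the paper:

```python
# Toy two-segment model of F2P spending: non-customers never buy at any
# price (perfectly inelastic at zero), while existing customers spend a
# fixed budget (unit-elastic), so total revenue is insensitive to price.

def revenue(price: float,
            n_customers: int = 150_000,
            customer_budget: float = 20.0) -> float:
    """Total revenue under a hypothetical segmented demand model."""
    # Unit-elastic customers: quantity adjusts so spend equals the budget.
    qty_per_customer = customer_budget / price
    customer_revenue = n_customers * qty_per_customer * price
    # Non-customers: no price point converts them, so they contribute zero.
    non_customer_revenue = 0.0
    return customer_revenue + non_customer_revenue

# Radically different price points yield (near-)identical revenue here.
for price in (0.99, 4.99, 19.99):
    print(f"price={price:>5}: revenue={revenue(price):,.0f}")
```

The point of the sketch is that under these assumptions the price term cancels out of customer revenue entirely, mirroring the paper's short-term profit-neutrality result.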

    Interesting, but is it actionable?

Were this to hold, it suggests a number of results. The first is that product managers’ ability to monetize non-customers (~99% of users) will not come from IAP, but from other forms. This may help explain why F2P ad revenue and incentivized video continue to show YoY growth.3 4
Furthermore, product managers should consider experiments exploring the revenue-maximizing ad frequency. Given the trade-off between retention and ad frequency, there exists an optimal frequency point.
With little chance of non-customers converting to customers, product managers should worry less about increased ad frequency turning off potential customers.
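One way to frame such an experiment: pick a retention model, express ad LTV as a function of frequency, and search for the interior maximum. The exponential decay curve and every parameter below are hypothetical placeholders, not industry benchmarks:

```python
import math

# Sketch of the ad-frequency trade-off: more ads per session raise
# short-run ad revenue but shorten a player's lifetime. With these
# hypothetical parameters, LTV has an interior maximum.

def expected_sessions(ads_per_session: float,
                      baseline_sessions: float = 40.0,
                      decay: float = 0.25) -> float:
    """Hypothetical lifetime sessions, decaying exponentially with ad load."""
    return baseline_sessions * math.exp(-decay * ads_per_session)

def ad_ltv(ads_per_session: float, revenue_per_ad: float = 0.01) -> float:
    """Lifetime ad revenue per user: sessions x ads per session x revenue per ad."""
    return expected_sessions(ads_per_session) * ads_per_session * revenue_per_ad

# Grid search over candidate frequencies; analytically the maximum of
# f * exp(-decay * f) sits at f = 1/decay = 4 ads per session here.
candidates = [x / 2 for x in range(1, 41)]  # 0.5 to 20.0 ads per session
best = max(candidates, key=ad_ltv)
print(f"optimal ads/session = {best}, ad LTV = {ad_ltv(best):.4f}")
```

In practice the retention curve would be estimated from an A/B test over frequency variants rather than assumed, but the optimization step is the same.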

The final result suggests that the ROI of raising the LTV of existing customers exceeds that of raising the new-customer creation rate. Product managers should develop their roadmaps accordingly.