## 1950s Peruvian Coke and Gacha

In the 1950s, Peruvian inflation forced Coca-Cola to charge more per bottle of Coke. Unfortunately, their vending machines would have required updating to accept a new denomination, one that represented far too large a price increase. Instead, Coke devised a probabilistic system: the machine charged the same amount as before but randomly refused to give a bottle. This raised the expected price of a bottle of Coke while forgoing any mechanical updating. But one software engineer has a better idea: raise the price of Coke, and instead randomly give the money back.

The higher price of this 'bottle draw' would be set so that its expected price equals that of the lower 'draw price' machine, the one that randomly refuses to give a bottle. This would be an interesting experiment, as gacha is the number one player frustration in free-to-play games.
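
To make the comparison concrete, here is a toy sketch (with invented numbers, not the historical ones) showing how the two schemes can be tuned to the same expected price per bottle:

```python
# Two probabilistic pricing schemes, tuned to the same expected price.
# All numbers are illustrative, not historical.

def expected_price_refuse(price, p_refuse):
    """Charge `price` per attempt; the machine keeps the coin but refuses
    to dispense with probability p_refuse. Expected cost per bottle."""
    return price / (1 - p_refuse)

def expected_price_refund(price, p_refund):
    """Charge a higher `price`, but refund it with probability p_refund.
    Every attempt dispenses a bottle."""
    return price * (1 - p_refund)

old_price = 1.0      # the old coin denomination
p_refuse = 0.2       # refuse a bottle 20% of the time
target = expected_price_refuse(old_price, p_refuse)  # 1.25

# Choose the refund scheme's sticker price, then solve for the refund
# probability that matches the refusal scheme's expected price.
new_price = 1.5
p_refund = 1 - target / new_price

assert abs(expected_price_refund(new_price, p_refund) - target) < 1e-9
print(f"both schemes cost {target:.2f} per bottle in expectation")
```

The two schemes are identical in expectation; the open question is whether players feel a surprise refund differently than a surprise refusal.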

Anyone care to reckon which one would perform better?

## Players go to their highest valued LTV: Ads are Beautiful Pareto Exchanges

Previously, I wrote about ads as a way to monetize non-payers, but there's more to the ad exchange and to what I'll coin 'portfolio pumping'. It's like portfolio theory, but not really.

These terms reference two growing phenomena in F2P games. King is at the forefront of portfolio pumping, in which a given firm pushes a player from game to game within the firm's portfolio.

Unlike portfolio pumping, ad exchanges push players to another firm’s games. Companies like Scopely are more fond of ad exchanges.

Frequently, the ads being served are for competitor games. Why would a company show ads for its competitors? And why would firms want players to move from one game in their portfolio to another? I argue the underlying explanation is Pareto efficiency, which is just a fancy way of saying both sides gain from trade.

$\text{churned player LTV} < \text{ad revenue}$
$\text{acquired player LTV} > \text{ad cost}$

It tends to be the case that a given company will engage in both ad buying and selling. The outcome of these ad exchanges is the migration of players to the games in which they have the highest LTV; the initial allocation doesn't matter. This process takes place in high-speed auctions where firms constantly search to satisfy the inequalities outlined above. The decision rule for portfolio pumping is similar, but we add some special conditions, mainly the probability of simultaneous play.

$p_b\,(rLTV_{i} + nLTV_{i}) + p_s\,nLTV_{i} > rLTV_{i}$

Where,
$p_b$ is the probability the $i$th player plays both games simultaneously, in which case both LTVs are realized, and $p_s$ is the probability they switch to the new game outright.
$rLTV_{i}$ is the remaining LTV in the old game for the $i$th player, while $nLTV_{i}$ is the LTV in the new game for the $i$th player.

The left-hand side must exceed $rLTV_{i}$ for the prompt to be profitable.
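
The decision rule reduces to a few lines of code. All probabilities and LTV figures below are hypothetical, chosen only to illustrate the two cases:

```python
def should_cross_promote(p_both, p_switch, rltv, nltv):
    """Portfolio-pumping decision rule: promote the new game when the
    expected LTV after showing the prompt beats the remaining LTV of
    leaving the player alone."""
    expected = p_both * (rltv + nltv) + p_switch * nltv
    return expected > rltv

# A nearly-churned player: $4 left in the old game, worth $6 in the new.
# Expected value of the prompt: 0.3 * (4 + 6) + 0.4 * 6 = 5.4 > 4.
assert should_cross_promote(p_both=0.3, p_switch=0.4, rltv=4.0, nltv=6.0)

# A high-value player is left alone: 0.1 * (9 + 6) + 0.1 * 6 = 2.1 < 9.
assert not should_cross_promote(p_both=0.1, p_switch=0.1, rltv=9.0, nltv=6.0)
```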

Of course, there are ways to play with this. Wooga tried altering portfolio game prompts during a player’s lifespan but found no effect.1 King continues to portfolio pump but dropped ads in Candy Crush Saga.

It’s a goddamn gorgeous process that should litter econ textbooks like lighthouses and lemons.

## Re-rewriting Economic History

Will Luton writes about the dangers of and solutions to F2P inflation over at gamesindustry.biz.

While there are some missteps in the opening of the article, Will makes a powerful and elegant point:

> …a sale can only be considered profitable if the net revenue from the start of the sale until resource equilibrium, and so demand, is restored is more than if the sale hadn’t been run. For well run sales in games with well balanced economies this should always be true.

Sales flood the economy with resources via movements along the demand curve. Holding all else equal, this is modeled as a move from P1 to P2.

The tricky part, not found in the textbook model, is time. Unlike a durable good such as a refrigerator, virtual currency is a consumable good. This means we expect repeat purchases, much like gasoline. Sales in this sense pull revenue forward by changing purchase 'schedules' more than they would for a durable good. A sale is only profitable if it sinks resources players would never have sunk otherwise (a net positive sink). In games, this is achieved via live ops. This model explains how Supercell runs their games; it's no coincidence that Clash Royale is the first Supercell game to have sales and real live ops while their other titles have little of either. Introducing one without the other keeps net sink flat in the long run by shifting intertemporal time preferences rather than increasing the size of the 'sink pie', so to speak.

Progression is another confounding variable. Holding all else constant, a given item is worth less at each additional level a user reaches. This is simply an artifact of rising difficulty (stronger enemies, more experience required to level up, etc.). As a result, sales leave late-game players indifferent while making early-game players better off.

The insight Will offers is that this can sometimes be an advantage: it changes the progression path of newer players toward a higher equilibrium than current late-game players previously had, allowing new players to 'catch up'. This sounds a lot like the Solow model. Yes, that Solow model.1 I don't think Will models this correctly, however, as each player is not on a discrete curve as his graph on the left depicts. Even without inflation, the graph on the right is an accurate picture of a given game economy.

Consider two possible goods that could be put on sale (and thus inflated) in Clash of Clans: a builder or gold. The builder is a dramatically better purchase because it allows for more output per unit of gold or elixir (an increase in technology). This shifts the growth rate of a given player up. The gold, on the other hand, is a small one-time increase in capital stock that won't scale with the game. For designers, this offers the chance to use sales as strategic instruments to alter the metagame. By offering Clash of Clans players discounts on a builder, players converge on and then exceed the GDP of elder players. A sale on gold, however, merely 'jumps' the GDP of players without changing the long-run growth rate. This means designers can either jump the point new players occupy along the progression curve or alter the new-player curve entirely.
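
A toy simulation makes the Solow analogy concrete. Nothing below reflects Clash of Clans' real economy; the growth rates and boosts are invented purely to show the qualitative difference between the two sales:

```python
# Progression as compounding output: a 'gold' sale is a one-time jump in
# the level, while a 'builder' sale raises the per-week growth rate.

def progress(weeks, start=1.0, rate=0.05, jump=0.0):
    level = start + jump        # one-time capital injection (gold sale)
    for _ in range(weeks):
        level *= 1 + rate       # technology sets the growth rate (builder)
    return level

baseline = progress(52)
gold_sale = progress(52, jump=0.5)        # jump the curve
builder_sale = progress(52, rate=0.06)    # steepen the curve

# The jump helps immediately, but the growth-rate change dominates long run.
assert gold_sale > baseline
assert builder_sale > gold_sale
```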


Unfortunately, this can deter some investment by changing inflation expectations. If players know a given dollar will have increased purchasing power later on, why make the investment now? Indeed, a 30+ page paper written by Game of War players and the subsequent boycotts attest to the negative side effects of perpetually trying to catch players up.

Careful consideration and analysis can make sales a valuable gameplay tool as much as they are a business one.

## Eric Seufert’s best F2P blog post isn’t about F2P

Everyone’s favorite former Rovio employee is a prolific writer on F2P games; the closest we have to a Fukuyama. Seufert has covered a range of topics, but none more important than internal organization.

Seufert argues for a number of institutional policies to structure analytics within an organization. Frequently, analytics and data are as much about the appearance of sophistication as they are about actual value added. This need not be the case. The confusion arises over where the value of data lies. Perhaps ironically, it doesn't lie in the data itself, but in the data analyst.

In most organizations, analytics reports to product teams, which Eric argues is a mistake. Product managers often face the principal-agent problem: their incentives and the company's do not align. Product managers want to successfully manage products and will present a narrative that they are doing so. This is inefficient for companies, which wish to assess the true performance and trajectory of a portfolio. When an analyst's career path depends on a product manager, their narratives will often match. With organizational independence from product teams, analysts' incentives align more closely with the company's, producing more objective analysis.

Real analyst value is not just accountability watchdogging; it revolves around the ability to drive product roadmaps. At its highest order, analytics is a forward-looking discipline, not a backward-looking one. By experimenting and studying human behavior, analysts find levers that pull certain responses. This creates opportunities to exploit those levers. Do currency pinches increase monetization? Are new gacha characters or new levels driving revenue? Should we invest more in reducing load times or in UI changes? Using theory-driven empirical investigation, analysts can move companies toward better outcomes than their competitors. If organizations don't allow analysts to pursue these questions, they'll become cheerleaders for product teams. On the other hand, if first-order information (retention rate, ARPU) is not accessible or automated, analysts will forever be running the hamster wheel of reporting. This is one of the more overlooked points Eric argues for.

I think this suggests a dual mandate for analysts: (1) holding features accountable and (2) determining which features are worth developing. This creates a natural tension of playing not only watchdog to product managers but partner as well. It is the duty of good analysts to navigate this relationship successfully.

## F2P Demand Curves Are Weird, Just Ask Levitt

Steve Levitt, the last price-theory samurai, and John List, future Nobel Prize winner, have published a paper on free-to-play economics.

In a textbook neoclassical experiment, Levitt varies the quantity of Candy Crush hard currency offered at a given price point. While economists generally think of price variation as the way to derive demand curves, quantity variation is just as legitimate a tool.

Despite a sample size of over 15 million and a wide range of quantity discounts (80% variation across variants), all discounting schemes produced similar revenue. Levitt concludes by commenting,

> …varying quantity discounts across an extremely wide range had almost no profit impact in the short term.

The interesting and little-explored result indicates that,

> …almost all of the impact of the price changes was among those already making a purchase; radical price reductions induced almost no new customers to buy…

This suggests free-to-play games are made up of two groups of users: purchasers and non-purchasers. The decision to become a customer is exogenous; there is no ability to convert non-customers to customers, i.e. it is decided outside of the game. Put another way, non-customers are perfectly price inelastic, while customers are roughly unit elastic: their total spend stays constant as quantity discounts change. Indeed, industry research corroborates this.2

Interesting, but is it actionable?

Were this to hold, it suggests a number of results. The first is that product managers' ability to monetize non-customers (~99% of users) will not come from IAP, but from other forms. This may help explain why F2P ad revenue and incentivized video continue to show YoY growth.3 4

Furthermore, product managers should consider experiments exploring optimal ad frequency. Given the trade-off between retention and ad frequency, there exists an optimal ad-frequency point.
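
One way to frame that experiment is a toy revenue model in which more ads raise revenue per active day but depress retention. Every parameter below is invented for illustration; the real exercise is estimating the retention decay empirically:

```python
import math

def expected_ad_ltv(ads_per_day, value_per_ad=0.02, base_days=30.0, decay=0.15):
    """Ad LTV = ads shown per day * value per ad * expected active days,
    where expected active days shrink as ad load rises (assumed
    exponential decay in retention)."""
    expected_days = base_days * math.exp(-decay * ads_per_day)
    return ads_per_day * value_per_ad * expected_days

# Scan integer ad loads for the revenue-maximizing frequency.
best = max(range(21), key=expected_ad_ltv)
assert 0 < best < 20   # with these assumptions the optimum is interior
```

Under these assumptions the optimum sits near 1/decay ads per day; showing zero ads and showing as many as possible both leave revenue on the table.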

With little chance of non-customers converting to customers, product managers should worry less about increased ad frequency turning off potential customers.

The final result suggests the ROI of raising the LTV of existing customers exceeds that of raising the rate of new-customer creation. Product managers should develop roadmaps accordingly.

## How to Measure Whales

You've soft-launched your game, done a UA push, and a string of hope appears. Against all odds, a dominant cohorted ARPU curve emerges! Is this an anomaly, or have you caught a whale?

The first way to examine this is to perform cointegration tests between the cohorted ARPU curves, testing for statistical significance. It may be true that the difference in the curves is real, but that doesn't answer whether you've caught a whale.

In 1905, Max Lorenz developed a method for measuring relative inequality between nations, known as the Lorenz curve.

The F2P application is to define wealth as revenue (at either the daily or game level) and players as the population. By measuring how bent inward a cohorted Lorenz curve is relative to other cohorted Lorenz curves, we can measure the 'whali-ness'™ of different cohorts. Even better, this reduces to a single metric: the Gini coefficient. A Gini coefficient of zero indicates a perfectly equal distribution of income: 10% of the population owns 10% of the wealth, 20% owns 20%, and so on. A Gini coefficient of 1 is the exact opposite: a single person owns 100% of the wealth.

This translates to what % of players are responsible for what % of the revenue. Measuring Gini coefficients across games rather than cohorts gives more insight into how a particular game monetizes, whether it be whale, dolphin, or minnow driven.
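
Computing the metric itself is a few lines. A minimal sketch, using made-up per-player revenue figures:

```python
def gini(revenues):
    """Gini coefficient of a revenue distribution: 0 means everyone spends
    the same; values near 1 mean one player drives nearly all revenue."""
    xs = sorted(revenues)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    # Standard rank-weighted formula over the ascending-sorted values.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

flat_cohort = [10] * 10                         # perfectly equal spending
whale_cohort = [0, 0, 0, 0, 0, 0, 0, 0, 1, 99]  # one player ≈ all revenue

assert abs(gini(flat_cohort)) < 1e-12
assert gini(whale_cohort) > 0.8   # a very whale-driven cohort
```

Run per cohort (or per game) on each player's total revenue, with non-payers included as zeros; dropping the zeros would badly understate the inequality.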

Actionable insights might include how effective introducing ads could be. A high Gini coefficient (very few players are responsible for most revenue) might mean there's a more fertile base of non-payers to monetize with ads.

The main insight, however, is further understanding. Success can come about in drastically different ways in free-to-play games; the Gini coefficient is a simple way to measure that.

## The Content Problem and the Death of Level Designers

F2P is as much a design choice as it is a business choice. Given this, F2P has its own set of design challenges, among which is the content problem.

Developers will only continue making additional content while the benefits are greater than the costs. This is specified when

`expected marginal revenue from content > development cost(t) + opportunity cost(t)`

where

`development cost(t) is the cumulative cost by time of release (t)`

but if

`User Acquisition Rate (UAR) < Churn Rate (CR)`

there's a shrinking pool of buyers, one that shrinks further at t+1. This is the essence of the content problem: how do we create content fast enough to curtail churn while minimizing development costs?

The genius of PvP (player vs. player) environments is how they necessitate the emergence of a metagame. In mathematics, player vs. environment (PvE) design resembles the field of optimization, where strategies are static: one and done. PvP environments, however, resemble game-theory models, where strategies have been shown to evolve in an evolutionary process. This means equilibrium in PvP environments is constantly reshuffled with each balance change; the search for dominant strategies in an ever-shifting equilibrium is the game itself.

It's been 4 years since the launch of Clash of Clans and there continue to be oodles of strategy videos. Supercell is constantly buffing and debuffing different units, which makes some strategies more successful than others, and by trial and error players expose this.

The push for PvP environments has seen the emergence of 'systems design' and the demise of the level designer. With few exceptions, linear and deliberate gameplay has gone the way of the Spaghetti Western.

On the other hand, a different type of PvE has found ways to combat the content problem. For example, Trials Frontier adopted meaningful level mastery with a touch of PvP. This is achieved via quests that revisit locations, stars, leaderboards, mission rewards, and gameplay that rewards depth (back/front flips can improve my times!). That said, PvE has a smaller share of the pie than it once did. This trend will only continue as F2P marches into the console and PC arena.

## Get more life out of your Lifetime Value Model! A discussion of methods.

Predicting the average cumulative spending behavior, or Lifetime Value (LTV), of players is incredibly valuable. Being able to do so helps figure out what to spend on User Acquisition (UA). If a cohort of players has an LTV of \$1.90 and took \$1 to acquire, then we've made money! This helps evaluate how effective particular advertising channels are, as we'd expect different cohorts of players to have different values. Someone acquired via Facebook may be worth more than someone acquired via AdColony.

But wait there’s more!

My argument in this post is that LTV has a great deal of value outside of marketing. In fact, LTV might have parts more valuable than the whole. There are numerous approaches to predicting LTV, and each approach has associated benefits. Remember, there doesn't have to be just one LTV model!

Consider four requirements we’d want out of an LTV model:

1. Accuracy

The LTV predicted should be the LTV realized. Figuring out the upward and downward bias in your coefficients is important here. This gives insight into the maximum or minimum to spend on UA, depending on the direction you suspect your coefficients are biased.1

2. Portability

Creating models is labor intensive, even more so when doing so for multiple games. A particular class of LTV models sweeps this aside: Pareto/Negative Binomial Distribution (Pareto/NBD) models. Since they're based only on the number of transactions and transaction recency, they don't require game-specific information. This means you can apply them anywhere!

3. Interpretability

This one's big and perhaps the most overlooked. Consider the Linear * Survival Analysis approach to LTV. The first part is predicting when a particular player will churn. By including variables like rank, frustration rate (attempts on a particular level), or social engagement, we gain insight into what's retaining players. This type of information is incredibly valuable.

4. Scalability

If it's F2P, then there are going to be hundreds of thousands to millions of players (you hope). I've seen some LTV approaches that would take eons to apply to a player pool of this size; an LTV model should scale easily.

So how do the different approaches stack against one another?

| | Accuracy | Portability | Interpretability | Scalability |
| --- | --- | --- | --- | --- |
| Pareto/NBD2 | / | x | | x |
| ARPDAU * Retention3 | | x | | x |
| Linear * Survival Analysis4 | x | | x | x |
| Wooga + Excel5 | | | x | |
| Hazard Model6 | x | | x | x |

Pareto/NBD is great, but it's hard to incorporate a spend feature (it just predicts the number of transactions).7 A small standard deviation in transaction value gives this model a great deal of value and something to benchmark against. This model also makes sense when data-science labor is few and far between.

ARPDAU * Retention is probably the approach you're using; it's a great starter LTV. If marketing or player behavior becomes more important, the gains from an approach beyond this start to make more sense.
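
For reference, the starter model reduces to a few lines: LTV is average revenue per daily active user times the expected number of active days, which is the area under the retention curve. The curve below is illustrative, not from any real game:

```python
def ltv_arpdau_retention(arpdau, retention_curve):
    """retention_curve[d] = share of the cohort active on day d (day 0 = 1.0).
    Summing the curve gives expected active days per install."""
    return arpdau * sum(retention_curve)

# Hypothetical 7-day retention curve, assuming the cohort dies afterward.
curve = [1.0, 0.45, 0.32, 0.26, 0.22, 0.20, 0.18]
ltv = ltv_arpdau_retention(arpdau=0.10, retention_curve=curve)
print(round(ltv, 3))  # 0.263
```

In practice you would extrapolate the tail of the retention curve (e.g. with a power-law fit) rather than truncating it at day 7, since the tail is where long-lived payers accumulate.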

Wooga + Excel just doesn't scale, which kills its viability, but it's conceptually useful to understand.

Linear * Survival Analysis gives a great deal of interpretability and also sub-predicts customer churn time. This means testing whether the purchase of a particular item or mode increases churn time can be done within the model. The interpretability of linear models also makes it easy to see different LTV values for variables like country or device.

There are many, many different approaches beyond what's been laid out here. Don't settle on using just one model; each has costs and benefits that shouldn't be ignored.

## Optimal Currency Areas with Milton

In hindsight, one of Friedman's great predictions was the Eurozone crisis. Despite being a huge champion of flexible exchange rates, Friedman never advocated a common European currency.

> Europe exemplifies a situation unfavourable to a common currency. It is composed of separate nations, speaking different languages, with different customs, and having citizens feeling far greater loyalty and attachment to their own country than to a common market or to the idea of Europe.
>
> — Milton Friedman, The Times, November 19, 1997

The Greek crisis exemplifies many of the problems Friedman pointed out. In times of economic recession or depression, central banks devalue the domestic currency to return the country to full employment. When economies are similar, recessions and depressions move together; if there's a crisis in Texas, it's probable there's one in Washington. This makes central-bank policy more effective because capital won't escape from 'recessed' areas to ones with higher returns; there are none. This is much harder to accomplish in Europe, where every country's economy is dramatically different and institutional policy fluctuates widely.

What the hell does this have to do with game design?

Why do multiple currencies exist in games to begin with? Why not just have one type of currency rather than four or five?

Simply put, it's all about segmentation. Once again, Supercell provides a wonderful example: Clash of Clans (CoC). Consider which items cost gold and which cost elixir; the choice was not an arbitrary one. After a quick scan, you'll notice only the defensive items (cannons, archer towers, walls) cost gold and only the offensive items (troops, barracks, spells) cost elixir. Why might this be the case? Segmenting these items gives designers greater control over the economy and minimizes the potential for 'contagion' effects. Consider a world in which Clash of Clans contained only gold. Players might have a preference for attacking rather than defending, encapsulating the idea of capital going to its highest return. If this were the case, the game could become unbalanced as all players attack and none spend gold to upgrade their base defense. By giving base defense its own currency, you remove the opportunity cost of spending on defense. This is similar to giving your relatives a gift card: rather than spending on whatever they fancy, they must now spend at the gift card's store. A domestic currency is much like a gift card to that country's 'store', just as elixir is a gift card to CoC's offense 'store' alone.

If Supercell finds players are not creating challenging defenses, it's very easy to increase the rate of gold production without worrying that the money will be spent on offense. They can also lower the cost of items priced in gold. Supercell has toyed more with this strategy in their other title, Boom Beach.

The rules for when segmentation is worthwhile emerge from reading Mundell's famous paper1 backwards.

Segmentation in games, just like in real world economies, gives game designers and central bankers more control.

## There's more to A/B testing than A and B: I

One of the most powerful features of mobile games is the ability to run simultaneous randomized experiments at no cost. Academics swoon at such a possibility, and it's very real and very spectacular in F2P games. Decades of running experiments in academic research can lend insight to developer-scientists. An example is an insight from experimental economics called 'bending the payoff curve'.

Two favorite topics of experimental economists are risk aversion and auction theory: risk aversion for its ability to challenge the neoclassical paradigm (i.e. mainstream economics), and auction theory because it uses fancy mathematics. The first groundbreaking economic experiments employed auctions in lab settings to see if participants diverged from rational behavior. A series of experiments run by Cox, Roberson, and Smith1 appeared to show participants were not doing what we'd expect if they were rational agents (i.e. getting the lowest price). The suggestion was that participants were acting as risk-averse agents rather than risk-neutral agents. The key insight, however, came from a challenge to the way these experiments were run, in a 1989 AER article called Theory and Misbehavior of First-Price Auctions.2

The author, Glenn Harrison, argued that the costs of engaging in non-optimal behavior were incredibly small. In other words, being dumb didn't cost participants much, and being smart didn't earn them much either. Harrison argued this casts doubt on the inference that participants were truly risk averse; instead, participants may have weighed the expected mental effort of being smart and concluded it wasn't worth the tiny foregone increase in income.

What researchers needed to do, Harrison argued, was bend the payoff curve, i.e. increase the reward for being smart. This way researchers can see whether the behavior they're testing for is real.

##### What does this mean for A/B testing in my game?

Developers often turn to A/B testing for even the most minute items; frustration emerges when results are inconclusive. For example, Supercell might test whether players prefer reward scheme X or Y in Boom Beach, measured by sessions played. An A/B test that presents each scheme after a battle could well turn up inconclusive, because each reward has only a small effect on player progression. That is itself an insight, but if we're really interested in whether A or B is better, it makes sense to 'bend the payoff curve': offer A + 5 or B + 5 to exaggerate the effects of the different reward schemes.

Think of it as brightening two lights on opposite sides of a room to see which one flies gravitate toward. If the lights were dim, the effect on the flies would be smaller than it otherwise is.
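
A quick simulation shows why the bent test is easier to read. The effect sizes, sample sizes, and threshold below are invented; the point is only that amplifying the A/B difference makes a test of the same size detect it far more often:

```python
import random

def detect_rate(effect, n=500, trials=200, seed=7):
    """Share of simulated A/B tests where variant B's sample mean beats A's
    by more than 1.96 standard errors (a crude stand-in for significance)."""
    rng = random.Random(seed)
    se = (2 / n) ** 0.5   # std error of the difference in means (sd = 1)
    hits = 0
    for _ in range(trials):
        a = sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n
        b = sum(rng.gauss(effect, 1.0) for _ in range(n)) / n
        hits += (b - a) > 1.96 * se
    return hits / trials

subtle = detect_rate(effect=0.05)   # original reward schemes A vs. B
bent = detect_rate(effect=0.25)     # payoffs exaggerated (A + 5 vs. B + 5)
assert bent > subtle                # the bent test finds the effect more often
```

The caveat, of course, is that the bent variants are no longer exactly the schemes you plan to ship, so the exaggeration should preserve the structure of the difference, not just its size.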

While not always appropriate, bending the payoff curve is another tool developer scientists should consider when designing experiments.