The Content Problem and the Death of Level Designers

 

Here we see the content problem in its natural habitat

F2P is as much a design choice as it is a business choice. Given this, F2P has its own set of design challenges, among which is the content problem.

Developers will only continue making additional content while the benefits are greater than the costs, i.e. while

expected marginal revenue from content > development cost_t + opportunity cost_t

where

development cost_t is the cumulative cost by time of release (t)

but if

User Acquisition Rate (UAR) < Churn Rate (CR)

then the pool of buyers is shrinking, and the gap only widens at t+1. This is the essence of the content problem: how do we create content fast enough to curtail churn while minimizing development costs?
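As a toy sketch of these two conditions (every number below is hypothetical, not from any real game):

```python
# A toy sketch of the release condition and the shrinking pool.

def players_at(t, pool0, uar, cr):
    """Player pool after t periods of acquisition (UAR) and churn (CR)."""
    return pool0 * (1 + uar - cr) ** t

def worth_shipping(expected_marginal_revenue, dev_cost, opportunity_cost):
    """The condition above: ship content only if expected revenue beats costs."""
    return expected_marginal_revenue > dev_cost + opportunity_cost

# With UAR (3%) below CR (5%), the buyer pool shrinks each month --
# and the revenue a content drop can recoup shrinks with it.
print(round(players_at(12, pool0=100_000, uar=0.03, cr=0.05)))
print(worth_shipping(80_000, dev_cost=50_000, opportunity_cost=40_000))
```

The second print returns False: against a shrinking pool, the same content drop clears the bar later, or never.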

The genius of PvP (Player vs Player) environments is how they necessitate the emergence of a meta-game. In mathematical terms, Player vs Environment (PvE) resembles optimization, where strategies are static – one and done. PvP environments, however, resemble game-theoretic models, where strategies have been shown to evolve in an evolutionary process. This means equilibrium in PvP environments is constantly reshuffled with each balance change; the search for dominant strategies in an ever-shifting equilibrium is the game itself.

It’s been four years since the launch of Clash of Clans and there continue to be oodles of strategy videos. Supercell is constantly buffing and debuffing different units, which makes some strategies more successful than others, and by trial and error players expose this.

Is it a paradox to watch mobile strategy videos on a desktop?

The push for PvP environments has seen the emergence of ‘Systems Design’ and the demise of the Level Designer. With few exceptions, linear and deliberate gameplay has gone the way of the Spaghetti Western.

On the other hand, a different type of PvE has found ways to combat the content problem. For example, Trials Frontier adopted meaningful level mastery with a touch of PvP. This is achieved via quests that revisit locations, stars, leaderboards, mission rewards, and gameplay that rewards depth (back/front flips can improve my times!). That said, PvE holds a smaller piece of the pie than it once did. This trend will only continue as F2P marches into the console and PC arena.

Get more life out of your Lifetime Value Model! A discussion of methods.


Predicting a player’s average cumulative spending, or Lifetime Value (LTV), is incredibly valuable. Being able to do so helps figure out what to spend on User Acquisition (UA). If a cohort of players has an LTV of $1.90 and cost $1 per player to acquire, then we’ve made money! This also helps evaluate how effective particular advertising channels are, since we’d expect different cohorts of players to have different values. Someone acquired via Facebook may be worth more than someone acquired via AdColony.
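In code, the UA decision is a one-liner per channel; the LTV/CAC numbers below are made up for illustration:

```python
# Made-up LTV and cost-of-acquisition numbers per channel -- illustration only.
channels = {
    "Facebook": {"ltv": 1.90, "cac": 1.00},
    "AdColony": {"ltv": 0.80, "cac": 1.10},
}

for name, c in channels.items():
    roi = c["ltv"] / c["cac"] - 1   # profit per dollar of UA spend
    print(f"{name}: ROI {roi:+.0%}")
```

Spend more where ROI is positive, cut where it isn’t – which is exactly why per-channel cohort LTVs matter.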

But wait there’s more!

My argument in this post is that LTV has a great deal of value outside of marketing. In fact, LTV might have parts more valuable than the whole. There are numerous approaches to predicting LTV, and each has its own benefits. Remember, there doesn’t have to be just one LTV model!

Consider four requirements we’d want out of an LTV model:

1. Accuracy

The LTV predicted should be the LTV realized. Figuring out upward and downward bias in your coefficients is important here. This gives insight into the maximum or minimum to spend on UA, depending on the direction you suspect your coefficients are biased.1

2. Portability

Creating models is labor intensive, and even more so when doing so for multiple games. There is a particular family of LTV models that sweeps this aside: Pareto/Negative Binomial Distribution (Pareto/NBD) models. Since they’re based only on the number of transactions and transaction recency, they don’t require game-specific information. This means you can apply them anywhere!
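That portability shows up in the model’s inputs: everything Pareto/NBD needs per player can be derived from a bare transaction log. A minimal sketch (the timestamps are invented, and the exact input convention varies by implementation):

```python
def pareto_nbd_inputs(purchase_days, observation_end):
    """Derive (frequency, recency, T) from purchase timestamps, given here
    as days since install. Common convention: frequency counts repeat
    purchases, recency is the day of the last purchase, T is total
    observation time."""
    frequency = max(len(purchase_days) - 1, 0)
    recency = max(purchase_days, default=0)
    return frequency, recency, observation_end

print(pareto_nbd_inputs([2, 9, 30], observation_end=60))  # (2, 30, 60)
```

No rank, no level data, no game-specific features – which is exactly why the same model ports across titles.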

3. Interpretability

This one’s big and perhaps the most overlooked. Consider the Linear * Survival Analysis approach to LTV. The first part is to predict when a particular player will churn. By including variables like rank, frustration rate (attempts on a particular level), or social engagement, we gain insight into what’s retaining players. This type of information is incredibly valuable.

4. Scalability

If it’s F2P then there are going to be hundreds of thousands to millions of players (you hope). I’ve seen some LTV approaches that would take eons to apply to a player pool of this size; our LTV model should scale easily.

So how do the different approaches stack against one another?

                              Accuracy   Portability   Interpretability   Scalability
Pareto/NBD2                      /            x                                x
ARPDAU * Retention3                           x                                x
Linear * Survival Analysis4      x                            x                x
Wooga + Excel5                                                x
Hazard Model6                    x                            x                x

Pareto/NBD is great, but it’s hard to incorporate a spend feature (it just predicts the number of transactions).7 A small standard deviation in transaction value gives this model a great deal of value and something to benchmark against. This model also makes sense when data science labor is few and far between.

ARPDAU * Retention is probably the approach you’re using; it’s a great starter LTV. If marketing/player behavior becomes more important, the gains from scaling to an approach beyond this start to make more sense.
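As a sketch, this starter model reduces to one line: expected lifetime days is the area under the retention curve, so LTV is ARPDAU times that area. The retention values and ARPDAU below are illustrative, not from any real title:

```python
def ltv_arpdau_retention(arpdau, retention_curve):
    """LTV ~= ARPDAU * expected active days (area under the retention curve)."""
    return arpdau * sum(retention_curve)

# Illustrative D0..D6 retention and a $0.15 ARPDAU.
retention = [1.00, 0.40, 0.25, 0.18, 0.14, 0.11, 0.09]
print(ltv_arpdau_retention(0.15, retention))
```

Extending the curve (e.g. fitting a power curve through D1/D7/D30 retention) projects LTV past the observed window, but the core idea stays this simple.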

Wooga + Excel just doesn’t scale, which kills its viability, but it’s conceptually useful to understand.

Linear * Survival Analysis gives a great deal of interpretability and also sub-predicts customer churn time. This means testing whether the purchase of a particular item or mode increases churn time can be done within the model. The interpretability of linear models also means it’s easy to see different LTV values for variables like country or device.
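A minimal sketch of the idea, assuming two already-fitted sub-models (all coefficient values below are invented for illustration): one predicts expected lifetime, the other spend per active day, and LTV is their product.

```python
def expected_lifetime_days(rank, frustration_rate, social_engagement):
    # Stand-in for a fitted survival model's expected churn time (days).
    return max(20.0 + 0.1 * rank - 15.0 * frustration_rate
               + 2.0 * social_engagement, 1.0)

def spend_per_day(rank, social_engagement):
    # Stand-in for a fitted linear spend model ($ per active day).
    return max(0.02 + 0.005 * rank + 0.01 * social_engagement, 0.0)

def ltv(rank, frustration_rate, social_engagement):
    return (expected_lifetime_days(rank, frustration_rate, social_engagement)
            * spend_per_day(rank, social_engagement))

print(ltv(rank=10, frustration_rate=0.3, social_engagement=4))
```

The interpretability lives in the coefficients: here, each unit of frustration visibly shortens expected lifetime, which is exactly the kind of design-facing insight a black-box model hides.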

There are many, many different approaches beyond what’s been laid out here. Don’t settle on using just one model; each has costs and benefits that shouldn’t be ignored.

 

Optimal Currency Areas with Milton

Friedman and Mario are about the same height.

In hindsight, one of Friedman’s great predictions was the Eurozone crisis. Despite being a huge champion of flexible exchange rates, Friedman never advocated a common European currency.

Europe exemplifies a situation unfavourable to a common currency. It is composed of separate nations, speaking different languages, with different customs, and having citizens feeling far greater loyalty and attachment to their own country than to a common market or to the idea of Europe.

— Milton Friedman, The Times, November 19, 1997

The Greek crisis exemplifies many of the problems Friedman pointed out. In times of economic recession/depression, central banks devalue the domestic currency to return the country to full employment. When economies are similar, recessions/depressions move together; if there’s a crisis in Texas, it’s probable there’s one in Washington. This makes central bank policy more effective because capital won’t escape from ‘recessed’ areas to ones with higher returns – there are none. This is much harder to accomplish in Europe, where every country’s economy is dramatically different and institutional policy varies widely.

What the hell does this have to do with game design?

Why do multiple currencies exist in games to begin with? Why not just have one type of currency rather than four or five?

Simply put, it’s all about segmentation. Once again, Supercell has provided us with a wonderful example: Clash of Clans (CoC). Consider which items cost gold and which cost elixir – the choice was not an arbitrary one. After a quick scan, you’ll notice only the defensive items (cannons, archer towers, walls) cost gold and only the offensive items (troops, barracks, spells) cost elixir. Why might this be the case? Segmenting these items gives designers greater control over the economy and minimizes the potential for ‘contagion’ effects.

Consider a world in which Clash of Clans contained only gold. It’s possible players might have a preference for attacking rather than defending, encapsulating the idea of capital going to its highest return. If this were the case, the game could become unbalanced as all players attack and none spend gold to upgrade their base defenses. By segmenting base defense into its own currency (gold), you remove any opportunity cost from spending on base defense. This is similar to giving your relatives a gift card: rather than spending it on whatever they fancy, they must now spend it at the gift card’s store. A domestic currency is much like a gift card to that country’s ‘store’, just as elixir is a gift card to only CoC’s offense ‘store’.

If Supercell finds players are not creating challenging defenses, it’s very easy to increase the rate of gold production without worrying that the money will be spent on offense. They can also do this by lowering the cost of items priced in gold. Supercell has toyed with this strategy more in their other title, Boom Beach.
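The ‘gift card’ effect is easy to see in a toy two-currency economy (the prices here are hypothetical, not CoC’s real values): a gold windfall simply cannot leak into offense.

```python
class Player:
    def __init__(self):
        self.gold, self.elixir = 0, 0

    def buy_cannon(self, cost=100):     # defense is priced only in gold
        if self.gold < cost:
            return False
        self.gold -= cost
        return True

    def train_troops(self, cost=100):   # offense is priced only in elixir
        if self.elixir < cost:
            return False
        self.elixir -= cost
        return True

p = Player()
p.gold += 150                 # designer boosts gold production...
print(p.train_troops())       # False -- the windfall can't fund attacks
print(p.buy_cannon())         # True  -- it can only strengthen defenses
```

With a single shared currency, both purchases would draw on one balance and the designer’s gold boost could be spent anywhere – the ‘contagion’ the segmentation prevents.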

The rules for when segmentation is worthwhile emerge from reading Mundell’s famous paper1 backwards.

Segmentation in games, just like in real world economies, gives game designers and central bankers more control.

There’s more to A/B testing than A and B: I

One of the most powerful features of mobile games is the ability to run simultaneous randomized experiments at no cost. Academics swoon at such a possibility, and it’s very real and very spectacular in F2P games. Decades of running experiments in academic research can lend insight to developer scientists. One example is an insight from experimental economics called ‘bending the payoff curve’.

Two favorite topics of experimental economists are risk aversion and auction theory – risk aversion for its ability to challenge the neoclassical paradigm (i.e. mainstream economics) and auction theory because it uses fancy mathematics. The first groundbreaking economic experiments employed auctions in lab settings to see if participants diverged from rational behavior. A series of experiments run by Cox, Roberson, and Smith1 appeared to show participants were not doing what we’d expect of rational agents (i.e. getting the lowest price). The suggestion was that participants were acting as risk-averse agents rather than risk-neutral agents. The key insight, however, came from a challenge to the way these experiments were run, in a 1992 AER article called Theory and Misbehavior of First-Price Auctions.2

The author, Glenn Harrison, argued that the costs of engaging in non-optimal behavior were incredibly small. In other words, being dumb didn’t cost participants much, and being smart didn’t earn them much either. Harrison argued this casts doubt on the risk-aversion interpretation: rather than misbehaving, participants may simply have weighed the expected mental effort of being smart and concluded it wasn’t worth the forgone increase in income.

Each deviation from zero (the optimal bid) costs the participant little.

What researchers needed to do, Harrison argued, was bend the payoff curve, i.e. increase the reward for being smart. This way researchers can see whether the behavior they’re testing for is real.

What does this mean for A/B testing in my game?

Developers often turn to A/B testing for even the most minute items; frustration emerges when results are inconclusive. For example, Supercell might test whether players prefer reward scheme X or Y in Boom Beach by way of sessions played. An A/B test that presents each scheme after a battle could easily turn up inconclusive, because each reward has only a small effect on player progression. That is itself an insight, but if we’re really interested in whether A or B is better, it makes sense to ‘bend the payoff curve’: offer A + 5 or B + 5 to exaggerate the effects of the different reward schemes.
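The payoff is statistical power. Using the standard rule of thumb for a two-sample test (roughly 16σ²/δ² players per arm for 80% power at a 5% two-sided significance level), the required sample size falls with the square of the effect you amplify; the session-count numbers below are invented.

```python
def n_per_arm(sigma, delta):
    """Rule-of-thumb players per arm for ~80% power, 5% two-sided test."""
    return 16 * sigma ** 2 / delta ** 2

sigma = 4.0   # std dev of sessions per player (illustrative)
print(round(n_per_arm(sigma, delta=0.1)))   # tiny reward difference
print(round(n_per_arm(sigma, delta=0.5)))   # payoff curve bent 5x
```

Bending the payoff curve from a 0.1-session effect to a 0.5-session effect cuts the sample requirement 25-fold – the difference between a test that never concludes and one that does.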

Think of it as brightening two lights on opposite sides of a room to see which one flies gravitate toward. If the lights were dim, the effect on the flies would be smaller than it otherwise would be.

See? Just like an A/B test.

While not always appropriate, bending the payoff curve is another tool developer scientists should consider when designing experiments.