## The Environmental Economics Approach to Liveops Content Management

In 1931, American economist Harold Hotelling published the seminal paper *The Economics of Exhaustible Resources*. Hotelling described a problem many firms face: how much of a non-renewable resource should they sell at any given time? The problem is most obvious when managing an oil supply, but it's just as relevant when considering how to manage match-3 levels.

For oil firms, since supply is always declining, price should rise at every $$t + 1$$ period, holding all else constant. As a result, an optimization question emerges: Do I sell my oil now or wait for a higher price later? To solve it, we first need to understand net present value, the idea that value now is worth more than value later. We can model this as:
$$V_{today} = \frac{V_{later}}{(1+r)^n}$$
Where $$r$$ is the interest rate (a proxy for opportunity cost) and $$n$$ is the number of periods. All we're saying here is that $1,000 seven years from now is only worth about $710 today at a 5% interest rate. This is because I could take that $710, invest it, and in seven years I'd have $1,000 at the 5% rate.
$$710 \approx 1000/(1+.05)^7$$
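As a sanity check on the arithmetic, here's a minimal NPV helper (the 5% rate and 7-year horizon come from the example above):

```python
def present_value(future_value: float, rate: float, periods: int) -> float:
    """Discount a future cash amount back to today's value."""
    return future_value / (1 + rate) ** periods

# The $1,000-in-7-years example at a 5% interest rate
pv = present_value(1000, 0.05, 7)
print(round(pv, 2))  # ≈ 710.68
```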

The higher the interest rate, the more sense it makes to sell oil now and invest the proceeds rather than wait for the price of oil to increase. The oil firm needs to compare the rate at which the oil price increases against the rate of interest. As such, the equilibrium price of oil tracks the interest rate: price rises at the rate of interest over time.

Hotelling's approach and intuition helped form what became dynamic optimization in economics. More wrinkles have been added to the model to solve everything from cake-eating problems to managing how many fish should be caught. The tools of the field help us solve tricky now-or-later dilemmas.

Consider the release cadence of television show episodes. Should Netflix release all episodes in a given season at once, in batches, or weekly? Perhaps even monthly? The problem is littered throughout gaming: should I use a limited-time direct store or an ever-growing catalog store? How should I distribute rewards throughout progression? Should I save content for a UA burst? And like the oil firm, we're in search of the content-rationing solution that maximizes $$LTV$$.

## Match-3 Player Level Management

Consider how King should ration Candy Crush levels for new players. King produces new levels at some constant rate $$p$$. But we know players consume levels at a greater rate than King can make them in any given time period $$t$$: $$c_t > p_t$$.

At some point, a given player will run out of levels; how quickly is governed by the size of $$c_t - p_t$$. We can pick some arbitrary constants to visualize this.
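As a toy illustration, here's a minimal simulation of the exhaustion point. The stockpile and rates below are arbitrary constants, not King's numbers:

```python
def exhaustion_day(stockpile: int, production_rate: float, consumption_rate: float) -> int:
    """Day on which a player first runs out of levels, given c_t > p_t."""
    levels_remaining = float(stockpile)
    day = 0
    while levels_remaining > 0:
        day += 1
        # Each day the stockpile grows by production and shrinks by consumption
        levels_remaining += production_rate - consumption_rate
    return day

# Illustrative constants only: 200-level launch stockpile, net -1 level/day
print(exhaustion_day(stockpile=200, production_rate=3, consumption_rate=4))  # 200
```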
We might also think of a player as having a churn probability on any given day since install. This declines over time: the best predictor of a player retaining on D30 is whether they retained on D29 (decreasing convex, or $$\frac{\partial Pr(Churn)}{\partial DaysSinceInstall} < 0$$). This suggests elder players are less difficulty-"elastic", responding less to increases in difficulty.
And yet, we might imagine that the probability of churn increases every day without new levels as well (increasing convex, or $$\frac{\partial Pr(Churn|NoNewLevels)}{\partial DaysSinceInstall} > 0$$). In the example above, the player ran out of levels on D205.
In the long run, King could ramp up level production, but that would eventually hit diminishing returns and carries its own labor costs. As a substitute, King's main lever for controlling player progression speed is difficulty. By raising difficulty, King shifts the churn line further right, since it takes longer for the player to reach the no-content, or exhaustion, point. King could even tune difficulty such that $$c_t = p_t$$ and the player would never run out of levels. On the other hand, if difficulty is too hard, churn will spike (too-low difficulty causes this as well!). This suggests King needs to solve an intertemporal difficulty optimization problem. That's a mouthful, but all it means is that King must balance the spike in churn probability from running out of levels against the spike in churn probability from raising difficulty too much.

To do so, we first need to model the total marginal effect of no new levels for every day since install, $$d$$: the churn probability when there are no more levels against the counterfactual churn probability had there still been levels.
$$\text{Total Marginal Effect of No New Levels} = \sum_d \left[ Pr(Churn|NoNewLevels)_d - Pr(Churn|NewLevels)_d \right]$$

Whereas increasing difficulty to delay this from occurring has its own cost.

$$\text{Total Marginal Effect of Increasing Difficulty} = \sum_d \left[ Pr(Churn|Difficulty_{x+1})_d - Pr(Churn|Difficulty_x)_d \right]$$

The solution to this system is what we might call the maximum sustainable difficulty. The intuition is similar to what we first developed for the oil firm.
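One way to make the optimization concrete is a small simulation: for each candidate difficulty, sum survival over a horizon when difficulty both raises daily churn and pushes the exhaustion point rightward. Every curve and constant below is an invented placeholder, not Candy Crush data:

```python
import math

HORIZON = 365  # days since install to evaluate over

def churn_prob(day: int, difficulty: float, exhaustion_day: int) -> float:
    """Toy daily churn probability: baseline decays with tenure,
    rises with difficulty, and spikes after content exhaustion."""
    base = 0.05 * math.exp(-day / 60)                          # decreasing convex baseline
    difficulty_penalty = 0.002 * difficulty ** 2               # churn cost of hard levels
    exhaustion_penalty = 0.01 * max(0, day - exhaustion_day)   # increasing after exhaustion
    return min(1.0, base + difficulty_penalty + exhaustion_penalty)

def expected_retention(difficulty: float) -> float:
    """Expected retained days over the horizon: higher difficulty delays
    exhaustion (slower consumption) but raises per-day churn."""
    exhaustion = int(100 + 40 * difficulty)  # harder levels -> later exhaustion
    survival = 1.0
    total_days = 0.0
    for day in range(1, HORIZON + 1):
        survival *= 1 - churn_prob(day, difficulty, exhaustion)
        total_days += survival
    return total_days

best = max(range(0, 11), key=expected_retention)
print("maximum sustainable difficulty (toy):", best)
```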

A bunch more falls out of the model. For instance, King should significantly increase the difficulty of levels just near the exhaustion point, just as the oil firm would significantly raise the price of the last bits of oil. There's very little downside to doing so, as the player will be in a high churn-probability regime soon enough anyway. Furthermore, since King is always stockpiling levels, new players face an exhaustion point further away than players who downloaded the game at first launch. It's in King's interest to ease up on the difficulty of early levels, since the area underneath the churn curve expands as the exhaustion point moves rightward.

There's much more room for expansion of the model and I encourage data scientists to play a strong role in systems design.

## A Simple Model of Cosmetics and Why They’re Hard to Sustain

In *Six common mistakes when moving to live-service games and free-to-play*, Ben Cousins argues that cosmetic-only monetization is a mistake:

The games that make billions from cosmetic-only economies typically only succeed because of the sheer numbers of players. On a per-user basis they actually have very poor monetization, relative to games that use more aggressive methods. This is because for a multiplayer game that is built from the ground-up to be about dominating other players, the proportion of the audience who are interested in self-expression via cosmetics is rather small.

He's right. Traditional HD developers choose cosmetics because there are no core design implications; cosmetics can be layered into nearly any game at any stage of production. But for them to "work", massive scale is needed, and even then it's risky. It's no wonder that very few mobile games in the top 100 grossing use cosmetics-only monetization. But I also think Ben misidentifies the challenge of cosmetics. They are certainly not about self-expression (at least the successful ones aren't). In fact, Ben is right that male-centric multiplayer games are about domination, but that is exactly why cosmetics are viable.

Ironically enough, this is best summed up by a piece on "cam girl" economics:

Men want a few things, and probably one of the biggest is winning a competition.

You see, you’re not just trying to get a guy to pay you – you’re trying to get a guy to pay you in front of a bunch of other guys. This is a super key. A man wants to feel attention from an attractive women on him, and this is made even more satisfying when it’s to the exclusion of those around him. He is showing off his power by buying your happiness.

High-level cosmetics in multiplayer games often signal domination whether or not the cosmetic is attached to skill. Apex is particularly effective at this. Consider low and high level banners:

The stat tracker element directly shows a player's time commitment to the game. We can also see the more exotic colors and shapes communicating danger. Sort of like poison dart frogs: cute, but deadly.

And finisher animations are particularly humiliating, since both the player performing the finisher and the one being finished must watch.

None of this should be particularly surprising; we are political beings after all.

While these might be good observations, we need a falsifiable hypothesis of cosmetics. This allows us to make more predictive statements about how a new cosmetic will or will not sell. And to an economist, this means a model!

## The Model

Consider a basic model of cosmetic demand:

$$D_i = (C_{ij} - \mu_j) P_j T_j$$

Where:

- $$D_i$$ is demand for the $$i$$th cosmetic
- $$(C_{ij} - \mu_j)$$ is the level of differentiation of the cosmetic from the average cosmetic in circulation at the given cosmetic vector $$j$$. Just think of $$j$$ as any customizable "slot". This benchmarks the cosmetic against its closest comparables, in the same way you wouldn't benchmark a goalie against a midfielder.
- $$P_j$$ is the prominence of the cosmetic vector, or how "featured" the cosmetic is in-game (a watch versus an entire costume)
- $$T_j$$ is the amount of time the cosmetic vector is featured on screen, both for the owner and others

There's a lot to tease out of this model, but much of it is beyond the scope of this post. Let's instead focus on the inflation problem, or what's suggested by $$(C_{ij} - \mu_j)$$.

There are two elements: the game's average cosmetic level and the player's individual cosmetic level. Let's imagine a player starting a game at time $$t_1$$ with cosmetic $$C_1$$. During this period, the player derives a certain amount of cosmetic utility, $$u_1$$. After earning a level, the player may unlock a higher-rarity cosmetic ($$C_2$$) and choose to equip it. They move to $$u_2$$ at $$t_2$$ as a result.

## Player Cosmetic Utility Choice Model: Getting a Better Cosmetic

Now, let's consider what happens when a player unlocks another cosmetic ($$C_3$$), but this time the cosmetic is of lower rarity. Because $$u_2 > u_3$$, the player does not equip it.

## Player Cosmetic Utility Choice Model: Getting a Worse Cosmetic

Players benchmark a given cosmetic against the currently equipped cosmetic in the same slot. Over time, players earn or purchase cosmetics that give them higher and higher utility. This makes it harder and harder to sell cosmetics: each one must make the player better off than the one before it. Every player in the game is on this journey, and so the average cosmetic level rises, just as we modeled with $$\mu_j$$. As it rises, differentiation falls, and with it quantity demanded. That's a long-winded way of saying that cosmetic inflation hurts monetization.
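A toy reading of the demand equation shows the inflation mechanic: hold the cosmetic fixed and let the average level $$\mu_j$$ rise (all numbers below are invented):

```python
def cosmetic_demand(c_ij: float, mu_j: float, prominence: float, screen_time: float) -> float:
    """D_i = (C_ij - mu_j) * P_j * T_j, floored at zero (no negative demand)."""
    return max(0.0, (c_ij - mu_j) * prominence * screen_time)

# The same legendary skin (C = 9) as the average equipped cosmetic inflates
for mu in (3, 5, 7, 9):
    print(mu, cosmetic_demand(c_ij=9, mu_j=mu, prominence=0.8, screen_time=0.6))
```

Demand falls monotonically as μ approaches the cosmetic's own level, hitting zero when the item is no longer differentiated at all.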

## Horizontal or Vertical Progression

Progression is a powerful toolkit, even for cosmetics. One way to combat the inflation problem is to essentially print more money: introduce higher and higher rarities. Riot, for example, introduced Ultimate II skins (3250 RP).

Alternatively, the horizontal way to attack this problem is to add cosmetic vectors. Dota 2 has mastered this with things like chat lines and ping cosmetics as customizable vectors. Of course, this too will face challenges as players collect more and more items in a given cosmetic vector.

Cosmetics are a hard road to follow, and inflation is always chasing developers. It's important to think carefully about mitigation strategies and how to grow cosmetic vectors over time, as you would any other element of a live service.

## The Economics of Battle Pass are Broken. Let’s Fix It.

Monetization's modern paradigm is defined by a direct store and battle pass (BP). After years of (ongoing) criticism of loot boxes, Fortnite rewrote the rulebook in a way that seems to make both developers and players happy. However, it's important to consider that at sufficient scale any monetization scheme looks like a winner. It's unclear if Fortnite is a winner because of the pass or despite it. For instance, the collapse of Clash Royale's monetization can be partly traced to the introduction of its own pass.

Reports of the death of loot boxes have been greatly exaggerated as well. Of the top 10 grossing free-to-play games, 8 sold loot boxes in some form. Of the top 10 premium games, 4 did. It's going to take more work to dislodge the loot box paradigm.

And it's understandable: if a developer isn't going to reach Fortnite scale, a battle pass isn't a sufficient monetization solution. But it doesn't have to be this way. We're in the early innings of BP; by breaking down the model, we can pivot the pass from an engagement driver to a monetization driver.

## The Model

We can understand BP spend depth by benchmarking it against an $$average\;daily\;monetization\;cap$$. The core challenge with the pass is the relatively small spend cap. To start, consider a pass with a fixed entry cost, $$fc$$, and $$N$$ tiers at a given price $$y_i$$:

$$\text{total monetization cap} = \sum_{i=1}^N y_i + fc$$

In part, the maximum spend is limited by the cadence of the pass. Let $$d$$ be the pass length in days. Dividing by it gives the average daily monetization cap (ADMC):

$$\text{average daily monetization cap} = \frac{\displaystyle \sum_{i=1}^N y_i + fc}{d}$$
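As a sketch, the ADMC formula in code. The 100-tier, $1.50-per-tier, $10-entry, 84-day pass below is hypothetical (though loosely Fortnite-shaped):

```python
def admc(tier_prices: list, entry_cost: float, season_days: int) -> float:
    """Average daily monetization cap: (sum of tier prices + entry fee) / season length."""
    return (sum(tier_prices) + entry_cost) / season_days

# Hypothetical pass: 100 tiers at $1.50 each, $10 entry, 84-day (12-week) season
print(round(admc([1.50] * 100, entry_cost=10.0, season_days=84), 2))  # 1.9
```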

We now have a model to compare different passes. Let's examine Fortnite, Valorant, Call of Duty: Warzone, and Dota 2.

Each game has a widely different approach: some go for more tiers and longer seasons, while others go for fewer tiers and shorter seasons. While not accounted for above, it is also important to adjust for pass frequency over a year-long period. Dota 2 has a 200% higher ADMC than Fortnite, but Valve only releases its pass once a year. Warzone, on the other hand, consistently maintains a 100-tier season over an 8-week span, which averages out to 5–6 passes a year.

Nonetheless, the battle pass offers far less spend depth than other monetization vectors. In a loot box system, the spend cap is the average price to unlock all content. In a direct store, the cap is correlated with the total sum of prices divided by the rotation cadence. In almost all games this puts ADMC north of ~$20. However, we've still failed to account for opportunity costs to paint a complete profit-maximizing picture.

## The Pivot

### Marginal Pricing

BP tier pricing is plagued by inconsistency. Most games use exponential time-to-complete curves, with each level taking more XP (and therefore time) to complete than the prior one. Here's Fortnite chapter one, season four as an example:

And unlike RPGs, XP earn rate does not generally increase over time. Yet tier price is constant at $1.50 despite increasing XP requirements and constant earn rates. This is a rather odd proposition, as it suggests that the usual $1.50 tier price forgoes a differing amount of time depending on the tier number. By dividing by average time to earn, we can understand the cost of an hour forgone at each tier. For example, tier 100 may take 6.5 hours to earn; with a price of $1.50, a player who purchases that tier pays $0.23 per hour forgone ($1.50/6.5 hr to earn). Tier 1 only takes 12 minutes, suggesting an hourly price of $7.50 ($1.50/0.2 hr to earn)!

This incentivizes players to withhold tier spending until the end of the season: more bang per tier buck. Yet players who make it to the end of the season are the most price-inelastic! Why not vary tier price to maintain a constant benefit? Here's what this would look like assuming $1.50 buys 6,000 XP instead of a fixed tier: tier 1 would cost $0.10, since it forgoes very little time, while tier 99 would cost $9.00, since it forgoes over 6 hours. I chose 6,000 XP to illustrate, but any amount can be chosen as the constant. At around 7,000 XP per $1.50, marginal tier pricing produces a greater spend depth than constant tier pricing (99 tiers at $1.50 = $148.50 spend cap).
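Pricing by XP rather than by tier is easy to sketch. The exponential XP curve below is invented for illustration; only the $1.50-per-6,000-XP constant comes from the example above:

```python
def tier_xp(tier: int) -> int:
    """Toy exponential XP requirement per tier (placeholder curve)."""
    return int(1000 * 1.045 ** tier)

def marginal_tier_price(tier: int, xp_per_dollar150: int = 6000) -> float:
    """Price a tier so that every $1.50 always buys the same amount of XP."""
    return 1.50 * tier_xp(tier) / xp_per_dollar150

# Early tiers are cheap, late tiers expensive, but $/XP stays constant
for t in (1, 50, 99):
    print(t, round(marginal_tier_price(t), 2))
```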

This also combats the sunk-cost problem. Under constant pricing, if a player is in the middle of a long tier, the price to complete the tier stays the same in hard currency, and purchasing a tier doesn't always boost a player to the same "spot" in the next tier. In that case, it's easy to feel like the XP already earned in the long tier is "wasted" if the player is considering purchasing the tier outright.

Setting the optimal XP-per-tier price is an exercise in price elasticity. If we set a tier price to, say, $10,000, almost no player would buy a tier. On the other hand, a price of $0.01 would produce very little revenue. Therefore, there exists a revenue-maximizing amount of XP at a given price. While we may not know this exact number, it's likely higher than current prices suggest.

### Tier UX

Moving to marginal tier pricing puts much more pressure on the UX wrapper. Frustratingly, despite exponential time-to-complete curves, nearly every pass displays the distance between levels as constant. All of these levels in Apex look just as easy/hard as one another:

There are two solves for this: (1) alter tier distance (increase spacing between levels consistent with XP required) and/or (2) hard level labeling.

First used by King's groundbreaking experimentation team, hard level labeling simply labels a level as hard (as long as it actually is). This encourages booster spend by a significant margin, as it tells players it's optimal to spend in that level. The practice has spread to other match-3 titles, and now we find super hard levels.

Simply labeling a BP tier as hard should produce the same effect. This becomes a bit trickier for exponential curves, as all the hard levels are backloaded. Shifting to an S-curve design avoids this, while also distributing rewards more equitably.

### More Tiers or Less Days

A simple way to increase ADMC is to increase the cadence of the pass. Instead of 12 weeks to complete, refresh the pass after 8 weeks. In Fortnite's case this would increase ADMC by ~50% ($1.90 to $2.86). We can model the marginal effects of changing the pass refresh rate:
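A quick sweep over season lengths under a fixed total cap shows the effect (the $160 total cap is a placeholder):

```python
def admc_for_length(total_cap: float, season_days: int) -> float:
    """Average daily monetization cap for a given season length."""
    return total_cap / season_days

TOTAL_CAP = 160.0  # hypothetical: 100 tiers x $1.50 + $10 entry
for days in (84, 70, 56, 42):  # 12, 10, 8, and 6 week seasons
    print(days, round(admc_for_length(TOTAL_CAP, days), 2))
```

Moving from an 84-day to a 56-day season takes this hypothetical pass from $1.90/day to $2.86/day, the ~50% jump described above.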

Adding more tiers produces a similar effect but in reverse. Instead of 100 tiers, why not Dota's 2,000? Cost and smart use of content play a strong role.

### Alter Cost Profile

Shortening cadence or adding tiers runs into supply-side problems: there ain't no such thing as free content. Furthermore, it's wise to consider the vector with the highest return per cost of content: does adding an Epic outfit to the direct store produce more marginal revenue than adding it to the BP? But even before answering that question, we need to consider the cost of producing items altogether. Despite Epic employing hundreds of employees and engaging in outsourcing, ~12% of tiers in the pass are "costless", meaning they consist of either currency or boosters rather than distinct items.

Cheap items are another avenue altogether. Apex has mastered this. Consider the stat tracker: it's simply a tally of a particular stat. Or loading screens: an old piece of 2D art. This type of content is extremely cheap to produce. The foundational conversations of a game economy should include content-vector specifications.

Valve has employed clever use of changing colors on cosmetics and Fortnite adopted something similar in their new crystal levels.

## Where To

MTX design has evolved and there’s no reason to think BP won’t do the same. At the end of the day, it’s a mechanic not a destiny. In future posts, I’ll expand on pivoting the pass even more. This is just an appetizer.

## Why Do FPS Players Like Small Maps?

It’s the incentives, stupid.

Players want to unlock content, and the most efficient way to do so is to maximize what many FPS games use to control progression speed: SPM, or score per minute. Score is usually a formula composed of objectives and kills. The key is that it's uncapped: there's no fixed amount of XP up for grabs in a given match or period of time played (that would be a better design). The formula implies that the more "action" in a given minute of gameplay, the more score per unit of time, and the faster a player progresses. Small maps excel at encouraging this: there's little time before you bump into an enemy or objective.

FPS players like small maps because they function as costless XP boosts. Nuketown will be making its 5th appearance in a CoD title with Cold War.

## Can We Get Players to Tell Us Their LTV?

Eric Seufert acutely describes the dangers of extending payback windows: at every $$t+1$$, the accuracy of LTV declines while the variance in cohort profitability increases. LTV, however, is not an exogenous variable, and clever design can incentivize players into revealing their long-run time horizons within a game.

Consider the design of many subscriptions: you can pay a lower annual fee or a higher month-to-month fee. If you're uncertain about the subscription, month-to-month is more economical; if you're more certain, the annual fee makes more sense. The choice is a huge predictor of retention: annual users are far more likely to retain than month-to-month users. The mere inclusion of the annual/month-to-month choice gives users the opportunity to self-segment into more predictable cohorts. Why can't we use the same mechanics in game design to create more predictable LTVs?

Consider two possible goods for purchase via gems in Clash of Clans: a builder or gold. The builder increases the long-run growth rate of gold, while the gold itself is a temporary boost in short-run capital stock. In layman's terms: spending 100 gems on a builder might net you 200 gold today and 1,000 gold by D30, while spending 100 gems directly on gold may only yield 700 gold today and 0 additional gold by D30. The builder is an annuity that pays dividends every period; the longer a player's time horizon, the more valuable the annuity.
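The builder-versus-gold comparison can be sketched directly. The 200/1,000/700 gold figures come from the example above; the constant daily dividend is my own simplifying assumption:

```python
def cumulative_gold(daily_yield: float, upfront_gold: float, days: int) -> float:
    """Total gold accumulated by a given day: upfront boost plus a daily annuity."""
    return upfront_gold + daily_yield * days

# 100 gems on a builder: small upfront flow plus a steady dividend
builder_d30 = cumulative_gold(daily_yield=(1000 - 200) / 30, upfront_gold=200, days=30)
# 100 gems directly on gold: bigger one-time boost, no dividend
gold_d30 = cumulative_gold(daily_yield=0, upfront_gold=700, days=30)

print(builder_d30, gold_d30)  # the builder dominates for long-horizon players
```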

Players who expect a long time horizon in a given title have an enormous incentive to purchase "investment" goods, or goods that pay dividends over time (battle passes are similar to some degree). Not doing so results in an increasing opportunity-cost penalty every period due to lost compounding growth.

F2P has experimented with direct daily annuities of hard currency: players pay upfront at a discount over the standard IAP packs and receive a daily allowance in return. Instead of a 30-day pass, why not ramp to a quarterly or bi-annual pass? Doing so would make LTVs more predictable early in a given player's lifecycle.

## The Collective-Action Problem of F2P Clans Remains Unsolved

There's a compelling aspect to achieving group-oriented goals: being a part of something larger than yourself. Lots of F2P developers harp on the importance of social features, yet the social experience in many games is abysmal. Many teammates or clanmates don't seem interested in participating, instead preferring to "free-ride": putting forward little effort while reaping the fruits of the team reward. Mancur Olson's foundational work, The Logic of Collective Action, describes how this problem manifests in the public sphere (sometimes literally, in the case of electric scooters). Game designers have a much easier time aligning individual and clan incentives than public officials, yet they sometimes miss easy wins. How can we make the clan experience better than it might otherwise be?

In Clash Royale, clans advance a boat against rival clans. Advancing the boat depends on individual clanmates playing games every day (and winning). The more clanmates play consistently, the more the boat advances and the better the rewards the clan receives. But for many clanmates, playing every day requires a great deal of effort. Why not let others earn the rewards for you?

The problem is severe in Battlefield where “PTFO” or “Play the Fucking Objective” is standard nomenclature. Players often won’t engage in activities that benefit the team (capturing flags), instead preferring to pursue their own objectives (generally: shoot players as fast as possible).

A given player faces two potential payoff schedules when considering whether to allocate effort to the clan. There's the expected payoff with no effort (the probability that the clan/team wins if the given player does nothing) as well as the probability that the clan wins if the player puts forth effort. We can model this as:

$\mathit{expected\ payoff\ from\ effort_i} = {P(\mathit{winning} | \mathit{effort_i}) \ *R}$

where

$P(\mathit{winning} | \mathit{effort_i})$

is the probability of winning the clan event given the effort of a given player, or rather the additional probability contributed by that player's participation, while $$R$$ is the reward from winning.

Of course, if $$P(\mathit{winning}| \mathit{effort_i}) = P(\mathit{winning})$$, that is, the given player cannot meaningfully move the probability of the clan winning, then there's zero incentive for them to put forth effort. Why bother?

This problem worsens as team size grows: the efficacy of a given player varies inversely with the number of teammates. This makes intuitive sense: in Battlefield, a player in a 2-versus-2 match has a greater impact on the outcome than a player in a 32-versus-32 match. The incentive to free-ride rises with the number of teammates or clanmates. Weakness hides in numbers.

We've also ignored the game-theory dynamics of this problem for simplicity, but it's worth mentioning. If I know my other teammates are not going to put forth effort, why should I? This leads to Nash equilibria where clans have almost no activity.

How can we overcome the free-rider problem and ensure that all teammates put forth effort? The highest cost-benefit feature is simply better monitoring tools. In many clan- or team-based games, clan leaders face asymmetric information: they simply can't identify the players who aren't putting forth effort. A simple measure of activity (last login, or games played in the last week) goes a long way toward kicking out free-riders.

We might also consider a joint-production function. In Battlefield or Clash Royale, each player would receive a score based on their effort or contribution to team advancement; if the team wins, they receive a multiplier on this score. Such a system has two benefits: it more closely aligns individual effort with individual outcome (reap what you sow), and it increases the benefit for high-performing clan members to engage in monitoring. For example, a high-performing member might have $20 in contributions and a 2x multiplier, or $40 for winning, compared to a low-performing member with $5 in contributions and therefore $10 for winning. In real terms, the high-performing member has an even greater incentive to encourage low performers to put forth effort.
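The joint-production payout is simple to sketch. The $20/$5 contributions and 2x win multiplier come from the example above:

```python
def payout(contribution: float, team_won: bool, win_multiplier: float = 2.0) -> float:
    """Individual reward = own contribution, multiplied only if the team wins."""
    return contribution * (win_multiplier if team_won else 1.0)

high_performer = payout(20.0, team_won=True)   # 40.0
low_performer = payout(5.0, team_won=True)     # 10.0
print(high_performer, low_performer)
# The high performer's stake in the team winning, in dollar terms:
print(high_performer - payout(20.0, team_won=False))  # 20.0
```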

There's a lot to be said for social shaming as well. While it hasn't been effective for zero-effort participants, there's evidence it might help players on the margin. A push notification demonstrating that your clan needs you, or perhaps better yet, a system where your clanmates can send you push notifications, is a compelling way to push players into action.

Perhaps the greatest miss I see is not in clan monitoring (kicking out free-riders) but in self-selection to begin with. Joining a clan is generally Pareto-improving for players: there's zero cost and only benefit. Players then generally look for near-max-size clans, as these maximize the clan's probability of winning a reward and thus the player's. Reducing search costs by recommending (or restricting) clans based on device language, location, and some measure of progression maturity makes all players better off.

It's hard for social monetization opportunities to take off if team-based activities suck. We still have a long way to go to fix these top-of-funnel problems. After all, teamwork makes the dream work.

## Why do contestants break the rules in Netflix’s Too Hot To Handle?

Economists like Tyler Cowen or Brad DeLong are too self-respecting to study reality shows. Fret not, this economist has no such self-respect.

Previously, we examined the economics of the reality show genre, but just as interesting are the economics of a particular reality show's design.

Too Hot To Handle introduces one of the most interesting examinations of communal property dynamics: a group prize is reduced when individuals act in their short-term private interest.

At a more practical level, the show gathers ten attractive 20-somethings into a villa in Mexico for three weeks. Cameras are littered around the villa, with the exception of bathrooms (replaced with mics). The contestants are only informed of the rules once the cameras start rolling: if they masturbate, kiss, or engage in any sort of sexual activity, the prize pool of $100,000 is reduced. It's unclear to contestants how "expensive" each activity is or how the prize pool will be divided or won. Shockingly, interviews with the show's producers reveal they didn't have the rules or the costs figured out until they happened. While there's no traditional contestant elimination process, producers will ask contestants to leave if they're not invested in the "process". Supposedly, the show wants to teach these singles how to form emotional rather than physical connections.

The spectacle for viewers is how hard it is for these contestants to keep it in their pants: of the original $100,000, over $40,000 is lost. Seems like a lot, right? How could they give up so much money? Well... it's really not that much. On the face of it, $100,000 / 10 = $10,000 per contestant. The tax situation matters greatly: U.S. contestants, or those with U.S. residency, will probably pay about 50% of that $10,000 in taxes. Interestingly, if the show took place in the U.S. rather than Mexico, all contestants would be subject to U.S. taxes. It appears to be the case that the Brits and Canadians don't face game show taxes.

On an expected-payout basis, the costs are far less than they might appear:

- $3,000 for a kiss is only $300 on a per-contestant basis, and only $150 after taxes.
- $6,000 for oral sex = $600 gross, $300 after taxes.
- $20,000 for sex = $2,000 gross, $1,000 after taxes.

The show filmed for 3 weeks; at a max payout of $10,000 this annualizes to a salary of ~$173k. Not bad, but many of the contestants already out-gross that. Francesca Farago alone is estimated to have a net worth of over $500k. Almost all of the contestants make money off their likeness or brand. Like Francesca, they model, sell clothing, or act. Thus, building an Instagram following is directly connected to their revenue stream. Breaking the rules can help contestants build that brand: losing out on $300 now could be worth much more in brand awareness later. Those without brands seemed to leave early or not attempt anything "interesting" (see Madison, a late arriver who never coupled up).

But the rules weren't clear on splitting the prize, and contestants could have been under the impression that only one or two would win. Under an expected-value model the payout is the same: $10,000 ($100,000 × 10% chance of winning). However, if you feel you're a weak contestant, you might estimate your probability of winning at less than 10%. I think this was the case for sorority girl Hailey, who broke the rules a mere two episodes in and had no interest in continuing.

I think there's room for improvement in the show's design. It was rather strange to reveal the rule-breakers to the other contestants so early in the show; this introduced social shaming as retaliation for rule-breaking, when speculation and investigation make for far more drama. If the show is about temptation, why not focus more on the money or the relationships? Maybe contestants could choose to eliminate a fellow contestant's squeeze: money AND sex as tests of genuine connection. Discounting seems like a great lever for drama injection: this week, sex is 50% off! Adding new contestants didn't seem to work; everyone had coupled up by the time they arrived. Subtraction, or an elimination, is a lot more fun.

Well, here’s to a solid season two. Hopefully, the show remains tongue in cheek. But not literally – that would be a rule violation.

## 1950’s Peruvian Coke and Gacha

In the 1950s, Peruvian inflation forced Coca-Cola to charge more per bottle of Coke. Unfortunately, their vending machines required physical updating to accept a new, larger denomination. Instead, Coke devised a probabilistic system: the machine would charge the same amount as before, but randomly refuse to give a bottle. This raises the expected price of a bottle of Coke while forgoing any physical updating. But a miscellaneous software engineer has a better idea: raise the price of Coke, but instead randomly give the money back.

The increase in price for a given 'bottle draw' would be set so that its expected price equals that of the lower-priced 'bottle draw' that randomly refuses to give a bottle. This is an interesting solution to player frustrations in gacha ("I didn't get anything of value when I opened a pack!").
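The equivalence between the two schemes is just expected-value accounting. A sketch with invented numbers (a 1-peso base price and a 20% refusal rate):

```python
def expected_price_refusal(price: float, refusal_prob: float) -> float:
    """Expected price per bottle when the machine keeps the coin but
    randomly refuses to vend: price / P(actually getting a bottle)."""
    return price / (1 - refusal_prob)

def expected_price_refund(higher_price: float, refund_prob: float) -> float:
    """Expected price per bottle when every draw vends, but the machine
    randomly returns the money."""
    return higher_price * (1 - refund_prob)

# Old scheme: 1 peso, refuses 20% of the time -> 1.25 expected per bottle
old = expected_price_refusal(1.0, 0.20)
# Equivalent new scheme: charge 2.50 but refund the money 50% of the time
new = expected_price_refund(2.50, 0.50)
print(old, new)  # both 1.25
```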

Anyone care to reckon which model would perform better: a higher draw price that sometimes gives the money back, or a lower draw price that sometimes doesn't give anything?

## The Content Problem and the Death of Level Designers

F2P is as much a design choice as it is a business choice. Given this, F2P has its own set of design challenges, among which is the content problem.

Developers will only continue making additional content while the benefits are greater than the costs. This is satisfied when

$$\text{expected marginal revenue from content} > \text{development cost}_t + \text{opportunity cost}_t$$

where $$\text{development cost}_t$$ is the cumulative cost by time of release, $$t$$. But if

$$\text{User Acquisition Rate (UAR)} < \text{Churn Rate (CR)}$$

there's a shrinking pool of buyers, and the gap only widens at $$t+1$$. This is the essence of the content problem: how do we create content fast enough to curtail churn while minimizing development costs?

The genius of PvP (player versus player) environments is how they necessitate the emergence of a meta-game. In mathematics, player versus environment (PvE) resembles the field of optimization, where strategies are static: one and done. PvP environments, however, resemble game-theory models, where strategies evolve through an evolutionary process. This means equilibrium in PvP environments is constantly reshuffled with each balance change; the search for dominant strategies in an ever-shifting equilibrium is the game itself.

It's been four years since the launch of Clash of Clans, and there continue to be oodles of strategy videos. Supercell constantly debuffs and buffs different units, which makes some strategies more successful than others, and by trial and error players expose this.

The push for PvP environments has seen the emergence of 'systems design' and the demise of the level designer. With few exceptions, linear and deliberate gameplay has gone the way of the Spaghetti Western.

On the other hand, a different type of PvE has found ways to combat the content problem. For example, Trials Frontier adopted meaningful level mastery with a touch of PvP. This is achieved via quests that revisit locations, stars, leaderboards, mission rewards, and gameplay that rewards depth (back/front flips can improve my times!). That said, PvE holds a smaller piece of the pie than it once did. This trend will only continue as F2P marches into the console and PC arena.