One revelation of the Apple v. Epic case is that 67% of revenue is from the Item Shop. Continue reading “Battle Pass Who? We’ve Got Direct Stores”
The previous model of battle pass (BP) focused on average daily monetization cap (ADMC) as the key lever in driving more monetization from BP. Special attention was paid to the role of tiers and we’ll continue to do so here.
One of the more interesting shortcomings of BP is the intertemporal nature of the pass. The pass is available not on demand but at fixed time intervals. If a player joins in the middle of a twelve-week season, they face radically different pass economics than someone who started at the beginning of the season. Continue reading “More on the Economics of Battle Pass: Resting Prices, Forecasted Level and Complete Pass”
In 1931, American economist Harold Hotelling published the seminal paper The Economics of Exhaustible Resources. Hotelling described a problem many firms face: how much of a non-renewable resource should they sell at any given time? The problem is most obvious when thinking about managing an oil supply, but it is just as relevant when considering how to manage match-3 levels. Continue reading “The Environmental Economics Approach to Liveops Content Management”
In Six common mistakes when moving to live-service games and free-to-play, Ben Cousins argues that cosmetic-only monetization is a mistake. Continue reading “A Simple Model of Cosmetics and Why They’re Hard to Sustain”
Monetization’s modern paradigm is defined by a direct store and battle pass (BP). After years of ongoing criticism of loot boxes, Fortnite re-wrote the rulebook in a way that seems to make both developers and players happy. However, it’s important to consider that at sufficient scale any monetization scheme looks like a winner. It’s unclear if Fortnite is a winner because of the pass or despite it. For instance, the collapse of Clash Royale’s monetization can be partly traced to the introduction of its own pass. Continue reading “The Economics of Battle Pass are Broken. Let’s Fix It.”
Players want to unlock content, and the most efficient way to do so is to maximize the metric many FPS games use to control progression speed: SPM, or score per minute. Score is usually a formula composed of objectives and kills. The key is that it’s uncapped: there’s no fixed amount of XP up for grabs in a given match or period of time played (a cap would be the better design). The formula implies that the more “action” in a given minute of gameplay, the more score per unit of time, and the faster a player will progress. Small maps excel at encouraging this – there’s little time before you bump into an enemy or objective.
FPS players like small maps because they function as costless XP boosts. Nuketown will make its fifth appearance in the Call of Duty franchise with Cold War.
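To make the mechanism concrete, here is a minimal sketch of an uncapped score formula. The point weights, kill counts, and match lengths are illustrative assumptions, not any game's actual tuning:

```python
# Illustrative, uncapped score-per-minute (SPM) formula.
# Weights are assumptions for the sketch, not real game tuning.

def match_score(kills: int, objectives: int,
                kill_points: int = 100, objective_points: int = 200) -> int:
    """Total score for a match: uncapped, so more action means more score."""
    return kills * kill_points + objectives * objective_points

def spm(kills: int, objectives: int, minutes: float) -> float:
    """Score per minute: the effective progression speed."""
    return match_score(kills, objectives) / minutes

# A small, dense map yields more encounters per minute than a large one,
# so the same ten minutes of play produces twice the progression.
small_map = spm(kills=12, objectives=4, minutes=10)   # 200 SPM
large_map = spm(kills=6, objectives=2, minutes=10)    # 100 SPM
assert small_map > large_map
```

Because score is uncapped per unit of time, anything that raises encounter density (like a small map) is effectively a free XP multiplier.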
Eric Seufert acutely describes the dangers of extending payback windows. At every t+1, the accuracy of LTV estimates declines while the variance in cohort profitability increases. LTV, however, is not an exogenous variable, and clever design can incentivize players into revealing their long-run time horizons within a game.
Consider the design of many subscriptions: you can pay a lower annual fee or a higher month-to-month fee. If you’re uncertain about the subscription, month-to-month is more economical, while if you’re more certain, the annual fee makes more sense. The choice is a huge predictor of retention: annual users are far more likely to retain than month-to-month users. The mere inclusion of the annual/month-to-month choice gives users the opportunity to self-segment into more predictable cohorts. Why can’t we use the same mechanics in game design to create more predictable LTVs?
Consider two possible goods for purchase via gems in Clash of Clans: a builder or gold. The builder increases the long-run growth rate of gold, while the gold itself is a temporary boost in short-run capital stock. In layman’s terms: spending 100 gems on a builder might net you 200 gold today and 1,000 gold by D30, while spending 100 gems directly on gold may only yield 700 gold today and 0 gold by D30. The builder is an annuity that pays dividends every period; the longer a player’s time horizon, the more valuable the annuity.
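The trade can be sketched with the hypothetical numbers above (200 gold upfront growing to 1,000 by D30, versus a 700-gold lump sum); the steady daily yield is an assumed simplification of how a builder actually produces:

```python
# Builder (annuity) vs. direct gold (lump sum), using the text's
# hypothetical numbers. The linear daily yield is an assumption.

def builder_gold(day: int) -> float:
    """Cumulative gold from the builder: 200 upfront, then a steady
    daily yield that reaches 1,000 total by D30."""
    daily_yield = (1000 - 200) / 30
    return 200 + daily_yield * day

def direct_gold(day: int) -> float:
    """Cumulative gold from buying gold directly: a one-time lump sum."""
    return 700

# The annuity overtakes the lump sum around D19; any horizon longer
# than that makes the builder the better buy.
crossover = next(d for d in range(31) if builder_gold(d) > direct_gold(d))
assert crossover == 19
```

The crossover day is exactly the "revealed time horizon": a player who buys the builder is implicitly telling you they expect to still be playing three weeks from now.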
Players who expect to have a long time horizon in a given title have an enormous incentive to purchase “investment” goods, or goods that pay dividends over time (battle passes are similar to some degree). Not doing so results in an increasing opportunity-cost penalty every period due to lost compounding growth.
F2P has experimented with direct daily annuities of hard currency: players get a discount over the standard IAP packs but must pay upfront to receive a daily allowance. Instead of a 30-day pass, why not ramp to a quarterly or bi-annual pass? Doing so would make LTVs more predictable early in a given player’s lifecycle.
There’s a compelling aspect to achieving group-oriented goals: being a part of something larger than yourself. Lots of F2P developers harp on the importance of social features. Yet the social experience in many games is abysmal. Lots of teammates or clanmates don’t seem interested in participating, instead preferring to “free-ride”: putting forward little effort but reaping the fruits of the team reward. Mancur Olson’s foundational work, The Logic of Collective Action, describes how this problem manifests in the public sphere (sometimes literally, in the case of electric scooters). Game designers have a much easier time aligning individual and clan incentives than public officials, yet they sometimes miss easy wins. How can we make the clan experience better than it might otherwise be?
In Clash Royale, clans advance a boat against rival clans. Advancing the boat depends on individual clanmates playing games every day (and winning). The more clanmates play consistently, the more the boat advances and the better the rewards the clan will receive. But for many clanmates, playing every day requires a great deal of effort; why not let others earn the rewards for you?
The problem is severe in Battlefield, where “PTFO” or “Play the Fucking Objective” is standard nomenclature. Players often won’t engage in activities that benefit the team (capturing flags), instead preferring to pursue their own objectives (generally, shooting other players as fast as possible).
A given player faces two potential payoffs when deciding whether to allocate effort to the clan: the expected payoff with no effort (the probability that the clan/team will win if the given player did nothing) and the expected payoff with effort. We can model the decision to contribute as:

P(win | effort) × R − c > P(win | no effort) × R

where P(win | effort) is the probability of winning the clan event given the effort of the player — or rather, the additive probability of this given player participating — c is the cost of that effort, and R is the reward from winning.
This problem is exacerbated as team size grows: the efficacy of a given player varies inversely with the number of teammates. This makes intuitive sense: in Battlefield, a player in a 2-versus-2 match has a greater impact on the outcome than a player in a 32-versus-32 match. The incentive to free-ride rises as the number of teammates or clanmates rises. Weakness hides in numbers.
We’ve also ignored the game-theoretic dynamics of this problem for simplicity, but they’re worth mentioning. If I know my teammates are not going to put forth effort, why should I? This leads to Nash equilibria where clans have almost no activity.
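The team-size effect can be sketched directly. Assuming (purely for illustration) that a player's marginal impact on win probability shrinks as 1/team_size, effort stops paying for itself past a certain team size:

```python
def effort_is_rational(team_size: int, reward: float, effort_cost: float) -> bool:
    """Assume a player's marginal impact on win probability shrinks as
    1/team_size (an illustrative assumption, not a measured value).
    Effort pays only if the marginal expected reward exceeds its cost."""
    marginal_win_prob = 1.0 / team_size
    return marginal_win_prob * reward > effort_cost

# In a 2v2 the marginal expected reward (50) dwarfs the cost; in a 32v32
# it falls to ~3, so the individually rational move is to free-ride.
assert effort_is_rational(team_size=2, reward=100, effort_cost=10)
assert not effort_is_rational(team_size=32, reward=100, effort_cost=10)
```

Under this toy model the free-riding threshold is simply team_size > reward / effort_cost, which is why large clans are where effort quietly disappears.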
How can we overcome the free-rider problem and ensure that all teammates put forth effort? The highest cost-benefit feature is simply better monitoring tools. In many clan- or team-based games, clan leaders face asymmetric information: they simply can’t identify the players who do not put forth effort. A simple measure of activity (last login) or games played in the last week goes a long way toward kicking out free-riders. We might also consider a joint-production function. In Battlefield or Clash Royale, each player would receive a score based on their effort or contribution to team advancement; if the team wins, they receive a multiplier on this score. Such a system would have two benefits: it would more closely align individual effort with individual outcome (reap what you sow), and it would increase the benefit for high-performing clan members to engage in monitoring. For example, a high-performing member might have $20 in contributions with a 2x multiplier, or $40 for winning, compared to a low-performing member with $5 in contributions and therefore $10 for winning. In real terms, the high-performing member has an even greater incentive to encourage low performers to put forth effort.
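The joint-production scheme is simple enough to write down; the multiplier and dollar figures below reuse the hypothetical numbers from the paragraph above:

```python
def payout(contribution: float, team_won: bool, win_multiplier: float = 2.0) -> float:
    """Each player banks their own contribution score; a team win
    multiplies it (2x here, an assumed tuning value)."""
    return contribution * (win_multiplier if team_won else 1.0)

# Reusing the text's numbers: the $20 contributor gains $20 extra from a
# win, the $5 contributor only $5 -- so the high performer has the
# stronger stake in policing free-riders.
assert payout(20, team_won=True) == 40
assert payout(5, team_won=True) == 10
assert payout(20, True) - payout(20, False) > payout(5, True) - payout(5, False)
```

Note the design choice: because the multiplier applies to *individual* contribution, a win is worth more to whoever worked hardest, which turns monitoring from a chore into self-interest.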
There’s a lot to be said for social shaming as well. While it hasn’t been effective for zero-effort participants, there’s evidence it might help players on the margin. A push notification demonstrating that your clan needs you — or perhaps better yet, a system where your clanmates can send you push notifications — is a compelling way to push players into action.
Perhaps the greatest miss I see is not in clan monitoring (kicking out free-riders), but in self-selection to begin with. Joining a clan is generally a Pareto improvement for players, meaning there’s zero cost and only benefit to joining one. Players then generally look for near-max-size clans, as they maximize the clan’s probability of winning a reward — and thus the player’s. Reducing search costs by recommending (or restricting) clans based on device language, location, and some measure of progression maturity makes all players better off.
It’s hard for social monetization opportunities to take off if team-based activities suck. We still have a long way to go to fix top-of-funnel problems. After all, teamwork makes the dream work.
Economists like Tyler Cowen or Brad DeLong are too self-respecting to study reality shows. Fret not, this economist has no such self-respect.
Previously, we examined the economics of the reality show genre, but just as interesting are the economics of a particular reality show’s design.
Too Hot to Handle introduces one of the more interesting examinations of communal-property dynamics: a group prize is reduced when individuals act in their short-term private interest.
At a more practical level, the show gathers ten attractive twenty-somethings into a villa in Mexico for three weeks. Cameras are littered around the villa, with the exception of bathrooms (replaced there by mics). The contestants are only informed of the rules once the cameras start rolling: if they masturbate, kiss, or engage in any sort of sexual activity, the prize pool of $100,000 is reduced. It’s unclear to contestants how “expensive” each activity is or how the prize pool will be divided or won. Shockingly, interviews with the show’s producers reveal they didn’t have the rules or the costs figured out until the infractions happened. While there’s no traditional contestant-elimination process, producers will ask contestants to leave if they’re not invested in the “process”. Supposedly, the show wants to teach these singles how to form emotional rather than physical connections.
The spectacle for viewers is how hard it is for these contestants to keep it in their pants – of the original $100,000, over $40,000 is lost. Seems like a lot, right? How could they give up so much money?
Well… it’s really not that much. On the face of it, $100,000 / 10 = $10,000 per contestant. The tax situation matters greatly: U.S. contestants, or those with U.S. residency, will probably pay about 50% of that $10,000 in taxes. Interestingly, if the show took place in the U.S. rather than Mexico, all contestants would be subject to U.S. taxes. It appears to be the case that the Brits and Canadians don’t face game-show taxes.
On an expected-payout basis, the costs are far less than they might appear:
- $3,000 for a kiss is only $300 on a per contestant basis. Only $150 after taxes.
- $6,000 for oral sex = $600 gross, $300 after taxes.
- $20,000 for sex = $2,000 gross, $1,000 after taxes.
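The bullet arithmetic follows one rule: split the group cost ten ways, then apply the ~50% tax assumption from above. A quick sketch (the tax rate is the text's rough figure, not tax advice):

```python
CONTESTANTS = 10
US_TAX_RATE = 0.5  # the ~50% assumption from the text

def personal_cost(group_cost: float) -> float:
    """A rule-break's cost to one contestant: split ten ways, then taxed."""
    return group_cost / CONTESTANTS * (1 - US_TAX_RATE)

assert personal_cost(3_000) == 150      # a kiss
assert personal_cost(6_000) == 300      # oral sex
assert personal_cost(20_000) == 1_000   # sex
```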
The show filmed for three weeks; at a max payout of $10,000, that annualizes to a salary of about $173k. Not bad, but many of the contestants already out-gross that. Francesca Farago alone is estimated to have a net worth of over $500k. Almost all of the contestants make money off their likeness or brand: like Francesca, they model, sell clothing, or act. Thus, building an Instagram following is directly connected to their revenue stream. Breaking the rules can help contestants build that brand – losing out on $300 now could be worth much more in brand awareness later. Those without brands seemed to leave early or not attempt anything “interesting” – see Madison, a late arriver who never coupled up.
But the rules weren’t clear on splitting the prize, and contestants could have been under the impression that only one or two would win. Under an expected-value model the payout is the same: $10,000 ($100,000 × a 10% chance of winning). However, if you feel you’re a weak contestant, you might estimate your probability of winning at less than 10%. I think this was the case for sorority girl Haley, who broke the rules a mere two episodes in and had no interest in continuing.
I think there’s room for improvement in the show’s design. It was rather strange to reveal to contestants who the rule-breakers were so early in the show. This introduced social shaming as retaliation against rule-breakers; speculation and investigation make for far more drama. If the show is about temptation, why not focus more on the money or the relationships? Maybe contestants could choose to eliminate their show squeeze – money AND sex as tests of genuine connection. Discounting seems like a great lever for drama injection – this week, sex is 50% off! Adding new contestants didn’t seem to work; everyone had coupled up by the time they arrived. Subtraction, or an elimination, is a lot more fun.
Well, here’s to a solid season two. Hopefully, the show remains tongue in cheek. But not literally – that would be a rule violation.
In the 1950s, Peruvian inflation forced Coca-Cola to charge more per bottle of Coke. Unfortunately, their vending machines required physical updating to accept a new, larger denomination. Instead, Coke devised a probabilistic system: the machine would charge the same amount as before but randomly refuse to give a bottle. This raises the expected price of a bottle of Coke while forgoing any physical updating. But one software engineer has a better idea: raise the price of Coke, but instead randomly give the money back.
The increased price of a given ‘bottle draw’ would be set so that its expected cost equals that of the lower-priced ‘bottle draw’ that randomly refuses to give a bottle. This is an interesting solution to player frustrations in gacha (“I didn’t get anything of value when I opened a pack!”).
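The equivalence is a one-line expected-value calculation. A sketch with illustrative prices and probabilities (none of these numbers come from the actual Coke story):

```python
def expected_price_refusal(price: float, refuse_prob: float) -> float:
    """Coke's scheme: charge `price` but refuse to vend with probability
    refuse_prob. Expected cost per bottle delivered is price / (1 - q)."""
    return price / (1 - refuse_prob)

def expected_price_refund(price: float, refund_prob: float) -> float:
    """The alternative: charge a higher sticker price but refund it in
    full with probability refund_prob. Expected cost is price * (1 - r)."""
    return price * (1 - refund_prob)

# Charging 1.00 and refusing 20% of vends costs 1.25 per bottle in
# expectation; the matching sticker price with a 20% refund chance is
# 1.25 / 0.80 = 1.5625.
target = expected_price_refusal(1.00, 0.20)
sticker = target / (1 - 0.20)
assert abs(expected_price_refund(sticker, 0.20) - target) < 1e-9
```

Both schemes extract the same expected price; they differ only in whether the bad outcome is "paid and got nothing" or "paid more but sometimes got it free".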
Anyone care to reckon which model would perform better: a higher draw price that sometimes gives money back, or a lower draw price that sometimes gives nothing?