Just today I spent 4 hours talking through some metrics data from @FunkyCircuit games in order to drive the next wave of updates… so maybe it's worth writing about what that involves.
Game analytics (or metrics, or telemetry) has been around for ages. I want to quickly cover some basic points here, as I see them. It’s a huge topic so without a doubt we’ll be coming back to it in the future.
The basic idea is that you:
- report some metrics data from within the game – i.e. the game sends data to a server
- analyse the data collected in order to establish how users interact with the game and what you can be doing better
- tweak and update the game so it’s a better experience… better performance… etc
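In code, the reporting step is usually just a thin wrapper that serialises an event and hands it to whatever transport you use. A minimal sketch – the event names and the `build_event` helper are made up for illustration, not any particular SDK's API:

```python
import json

def build_event(name, params=None):
    """Serialise one analytics event; a real client would batch events
    like this and POST them to the collection server."""
    return json.dumps({"event": name, "params": params or {}})

# e.g. fired when the player finishes a level:
payload = build_event("level_complete", {"level": 15, "duration_s": 182})
```

In practice you'd fire one of these at every interesting moment in the game, and the server (or your off-the-shelf package) does the rest.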
Sounds easy, right? I wish! Let’s look at what’s involved.
There are two main challenges when it comes to reporting good, valuable data from your game:
- Technical – what technology do you use to report the data? Do you write your own or use something off the shelf? I personally use Flurry (www.flurry.com). You can certainly write your own, although the effort and infrastructure involved often outweigh the benefit of such a venture. If, however, you run a server-based game, you are probably better off collecting your own data. Off-the-shelf systems are usually generic (one-size-fits-all) solutions – which can be limiting if you need specific data reported.
- Data – the data being reported is often the limiting factor in such an endeavour. There is a temptation to just report everything and make sense of it later. Even if that were possible (you don't want to use excessive bandwidth – your users will be a bit cross with you), it's still not clear how you would structure the data reported. Most systems (like Flurry) will aggregate your data in one way or another, so you want your data to make sense when aggregated. Get this one right and you have solved most of your problems!
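One practical consequence of "make sense when aggregated": report parameters in a form that survives counting, e.g. bucketed values rather than raw ones. A toy sketch with invented session lengths and buckets:

```python
from collections import Counter

def bucket_duration(seconds):
    """Bucket a raw session length so aggregated counts stay readable.
    Bucket boundaries here are arbitrary, chosen for illustration."""
    if seconds < 60:
        return "<1min"
    if seconds < 300:
        return "1-5min"
    return ">5min"

# raw session lengths reported by the game, in seconds (made-up data)
sessions = [45, 120, 30, 600, 250, 900]
aggregated = Counter(bucket_duration(s) for s in sessions)
```

Reporting the raw seconds instead would give you thousands of distinct values that aggregate into noise.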
So how do we do better reporting?
- Have a list of questions you want answered. Write them down, then think about what data you would need reported to answer them.
- Iterate – don’t leave this for the last few days before release. Implement it as soon as you have some questions to answer. Look at the data, and try to make sense of it, how it’s aggregated and visualized. Tweak the implementation and then look at the resulting data, again and again… until you are happy with the results.
- Think about the data and how it’s aggregated so that you can report it efficiently. The more you do this, the better you will become at it.
This is far from easy. Don’t expect to get it perfect the first time. Good metrics data will help you understand your game and your users better. Bad metrics data will leave you scratching your head and quite possibly turn you off this otherwise valuable process.
Keep improving this after the game is released and updated.
Quite often, various theories about user behavior emerge when looking at the data. It’s easy to come up with theories, but not nearly as straightforward to be confident that the data backs them up.
Often people go for generic buzzword indicators to measure the performance of a game. Just to name a few: ARPU (average revenue per user), Retention, Daily Active Users (DAU), etc.
These are important and you will get most of them out of the box with any standard package (did I mention I use Flurry?). However, correlating these numbers to various events or features in-game (user behavior) and outside the game (marketing) is often hazardous and prone to wrong assumptions. Did this new feature affect our numbers negatively, or did we suddenly get a group of new users who play the game differently? You will see this in the weekly user fluctuation, leading to regular spikes over weekends and more moderate performance during the week. How can you be sure that increased user activity is due to your in-game feature and not just part of the usual cycle, or a holiday in one of your territories? Is the feature Bob worked on really that poor, or did we suddenly get a batch of users who don’t like our game because we advertised on the wrong site? Notice how this can subtly become a political issue: as various interests intersect, and with no clear metric, everyone argues their corner, bringing such analysis to a halt.
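One crude way to sanity-check a spike against the weekly cycle is to compare a day's numbers with the average for that same weekday, rather than with yesterday. A sketch with made-up DAU figures:

```python
# made-up DAU figures for two full weeks, Monday first;
# note the regular weekend spikes
dau = [1000, 980, 990, 1010, 1100, 1500, 1450,
       1020, 1000, 995, 1005, 1120, 1520, 1480]

def weekday_baseline(history, weekday):
    """Average DAU for one weekday (0 = Monday) across the history."""
    values = [v for i, v in enumerate(history) if i % 7 == weekday]
    return sum(values) / len(values)

today_dau = 1900                     # a Saturday, after some feature launch
baseline = weekday_baseline(dau, 5)  # the usual Saturday: (1500 + 1520) / 2
lift = today_dau / baseline - 1      # ~26% above a normal Saturday
```

This only controls for the weekly cycle, not for holidays or marketing pushes – which is exactly why the next technique exists.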
One way to resolve this issue is to use split tests (also known as A/B tests). Users are split into two groups (cohorts). One group (A) gets the desired behavior and the other (B) is left untouched. We can now measure the performance of the two groups and compare the results. This eliminates user-quality fluctuation: regardless of what time of day or week it is, or where the users came from, if they are randomly assigned to both groups as they arrive, the results will always be an indicator of how our desired behavior is performing. Certainly a lot easier to analyse – yet no silver bullet.
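The mechanics are simple enough to sketch. Here cohort assignment hashes a (hypothetical) user id so a user always lands in the same group; the outcome lists are invented, and a real test would still need a significance check before drawing conclusions:

```python
import hashlib

def assign_cohort(user_id):
    """Deterministically split users into cohorts A and B by hashing
    their id, so the same user always lands in the same group."""
    digest = hashlib.md5(user_id.encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

def conversion_rate(outcomes):
    """Fraction of users in a cohort who converted (e.g. bought the pack)."""
    return sum(outcomes) / len(outcomes)

# made-up outcomes: 1 = converted, 0 = did not
group_a = [1, 0, 1, 1, 0, 1, 0, 1]   # cohort with the new behaviour
group_b = [0, 0, 1, 0, 1, 0, 0, 1]   # control cohort
rate_a, rate_b = conversion_rate(group_a), conversion_rate(group_b)
```

Hashing rather than random assignment at runtime also means a returning user stays in their cohort across sessions.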
You can also examine various game design decisions through the prism of data. This will help you spot any difficulty-balancing issues and tell you a lot about how people play the game. At a minimum I would always have a “first time reached” event for my game progression curve. That way I know how many people have reached each point in the game, I can see where people stopped playing… and if there are any sudden drops in the curve then we have a problem.
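With "first time reached" counts per checkpoint, the drop-off curve is trivial to compute, and a sudden dip flags the problem spot. A sketch with invented numbers (note the suspicious step in the middle):

```python
# players who reached each checkpoint for the first time (made-up data)
reached = {"level_1": 1000, "level_5": 800, "level_10": 700,
           "level_15": 650, "level_16": 325, "level_20": 300}

def drop_offs(counts):
    """Percentage of players lost between consecutive checkpoints."""
    names = list(counts)
    return {f"{a}->{b}": round(100 * (1 - counts[b] / counts[a]), 1)
            for a, b in zip(names, names[1:])}

curve = drop_offs(reached)
# the level_15 -> level_16 step loses half the players - a red flag
```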
Tweak & Update
We were always going to tweak and update our game, right? Doing it without data to inform our decisions just means we are doing it in the dark based on our assumptions about our target audience and our overall plan for updates.
Having looked at the data, however, this is where we decide to decrease the difficulty of level 16, because 50% of the users who finished level 15 never finished it. Here is where we reduce the price of map pack 2, because 90% of users who finished map pack 1 didn’t think it was worth spending that much virtual currency on that content. Here is where we don’t spend time working on map pack 5 – because that would mean catering for the 5 people who finished map pack 4 – and instead concentrate our limited resources on other parts of the game that would benefit more users.
It is not always that easy. In one of our games we had a difficult trade-off:
People who start in location A go on to spend more money on in-app purchases; people who start in location B spend less. However, a higher proportion of B players than A players completed 50% of the game. What is more valuable to us: hard cash, or people playing the game for longer, getting more engaged and telling their friends about it? Are they telling their friends about it? We’ll never know – we didn’t report that data correlated to the start location!
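Measuring that trade-off is the easy part; deciding on it isn't. A sketch with invented per-player records tagged by start location, showing the two metrics pulling in opposite directions:

```python
# invented player records: (start_location, spend_in_dollars, reached_50_percent)
players = [("A", 4.99, False), ("A", 9.99, True), ("A", 0.00, False),
           ("B", 0.99, True),  ("B", 0.00, True), ("B", 1.99, False)]

def summarise(records, location):
    """Average spend and 50%-completion rate for one start location."""
    group = [(spend, done) for loc, spend, done in records if loc == location]
    avg_spend = sum(s for s, _ in group) / len(group)
    completion = sum(1 for _, d in group if d) / len(group)
    return round(avg_spend, 2), completion

# location A: higher average spend, lower completion
# location B: lower average spend, higher completion
```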
This process certainly isn’t a magical silver bullet. It is no substitute for creativity – you are still in charge of coming up with that great idea. However, I believe it can be a valuable tool in your game development process, helping you bring that great idea to success.