
Porting MZR to android (NDK)

Recently I spent some time porting MZR to Android and wanted to do a quick write-up of some of the challenges I encountered. I’m no Android expert, so all of this was news to me.

A bit about the code.

First things first: MZR is written in C++ on a custom cross-platform engine. The bulk of the code is platform-independent C++ with a thin driver/glue layer per platform. My engine supports PC, iOS and now Android. It’s very easy to add new platforms. However, as many of you who’ve done this kind of engine work know, each platform comes with time-consuming maintenance work afterwards.

Here’s a list of things that go in that platform-dependent layer:

  • low level basics – core types, threading primitives
  • low level IO support – file systems, input devices
  • low level render API – PC (DirectX), iOS (OpenGL ES)
  • low level sound API – both PC and iOS use OpenAL
  • networking – platform specific APIs for fetching data over http, socket based stuff, etc
  • other non-essential stuff (from the game’s point of view): social platforms, store and billing support, etc

It’s important to stress that the platform bits are contained in that layer. No game code references low level platform APIs. Everything is accessed through a platform-independent internal API that in turn talks to the platform specific code. On PC that is C++, on iOS it would be Objective-C and on Android Java/JNI. This ensures that the game code works the same on all platforms and changes to a platform don’t trigger changes to game code.
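
Here’s a minimal sketch of the idea (not the actual MZR engine API – names are purely illustrative): game code includes only the platform-independent header, and each platform supplies its own implementation.

// Illustrative sketch only – not the actual engine API.
// io_file.h – what game code sees; no platform types anywhere.
namespace io
{
    bool FileExists(const char* path);
}

// android/io_file.cpp – one of the per-platform implementations.
#include <cstdio>

bool io::FileExists(const char* path)
{
    // The real Android version would route asset reads through the AssetManager;
    // plain C stdio is enough to illustrate the layering.
    if (FILE* f = std::fopen(path, "rb"))
    {
        std::fclose(f);
        return true;
    }
    return false;
}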

Note: this may seem an unnecessary layer of abstraction, but I use certain techniques that allow me to bypass the indirections and keep the performance benefits without introducing dependencies on platform specific code [possibly warrants a post of its own].

Porting to Android

Ironically, the most daunting part of the whole process of porting MZR to Android was the project setup. I knew I had to use the Android NDK so that I could make use of my C++ code base. However, how does a full project get set up with both Java and NDK, and what is the right way to set up both the build environment and Eclipse? It all came down to reading a lot of tutorials and internet posts and trying things out.

My final setup ended up with:

  • Eclipse project for the game. The Eclipse project only references the Java code, the C++ code is elsewhere.
  • An Android.mk makefile that references all the C++ sources, compiling them to a single .so library

All C++ debugging was done using custom tools and printf. In the end I couldn’t get Eclipse to behave with my folder structure, building and debugging. The main problem was that Eclipse’s automatic compilation flagged a lot of errors in the C++ sources and then refused to build the project. The Android.mk file, on the other hand, had all the necessary include paths and built correctly.

Once I had the environment set up, it was down to writing some code.

Core Basics & OS 

Core basics were fairly easy to port. I use pthreads so nothing new there. Everything ported without issues.

However, I did get a nasty surprise in the encapsulation of the atomic increment/decrement intrinsics. On iOS the atomic inc/dec functions return the new value after the operation, while the Android equivalents return the value before the operation. Something to watch out for if you ever do this kind of low level porting.
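
Here’s a rough sketch of the kind of wrapper I mean, assuming the Android side uses GCC’s __sync intrinsics (the function name and the PLATFORM_* defines are illustrative):

#include <stdint.h>
#if defined(PLATFORM_IOS)
#include <libkern/OSAtomic.h>
#endif

// Illustrative wrapper – PLATFORM_* defines are not from the actual engine.
inline int32_t AtomicIncrement(volatile int32_t* value)
{
#if defined(PLATFORM_IOS)
    // Returns the value *after* the increment.
    return OSAtomicIncrement32(value);
#elif defined(PLATFORM_ANDROID)
    // __sync_fetch_and_add returns the value *before* the increment,
    // so add 1 to keep the same "returns the new value" contract.
    return __sync_fetch_and_add(value, 1) + 1;
#endif
}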

File systems. On iOS it was all down to getting the right file system “root” folder and then leaving the rest to the standard C libraries. On Android, however, my engine needs two types of storage file system – one to read assets from (AssetFileSystem) and one to save data to (SaveFileSystem). Fortunately I already make this separation in my code when reading assets and saving game data, so it didn’t present a problem beyond the Android implementation details.

The asset file system is built on top of AssetManager. I had to pass the AssetManager down to the JNI code from Java and then use it in the platform specific Android code of the engine. The save file system just has a path to the target folder and uses standard C library code.
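
A rough sketch of that plumbing on the native side, assuming the AssetManager object has already been handed over from Java (function names are mine, not the engine’s):

#include <vector>
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>

static AAssetManager* g_assetManager = NULL;

// Called once from a JNI entry point with the android.content.res.AssetManager passed in from Java.
// (Illustrative names – not the actual MZR code.)
void InitAssetManager(JNIEnv* env, jobject javaAssetManager)
{
    g_assetManager = AAssetManager_fromJava(env, javaAssetManager);
}

// Read a whole asset into memory through the NDK AssetManager API.
bool ReadAsset(const char* name, std::vector<char>& out)
{
    AAsset* asset = AAssetManager_open(g_assetManager, name, AASSET_MODE_BUFFER);
    if (asset == NULL)
        return false;

    const size_t length = (size_t)AAsset_getLength(asset);
    out.resize(length);
    if (length > 0)
        AAsset_read(asset, &out[0], length);
    AAsset_close(asset);
    return true;
}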

JNI or how Java and native code talk to each other

If you write stuff with the NDK, you will inevitably need to do some JNI. Theoretically you can write an Android game entirely in native code. However, in practice every third party SDK for Android comes as a Java library, including the ones provided by Google.

Nothing to worry about there though. JNI is not hard and is well documented.

Here are some gotchas that caught me out.

Native -> Java

On the Java side just write your classes as you normally would – all the work is then handled in the native part. I tend to use static methods mainly to keep things simple.

1. Cache your methods at load time. You need to execute some code to locate the Java class and method so later on when you want to invoke that method you can use these “handles” to do it. It’s a good practice to do this in your JNI_OnLoad function. The JNI_OnLoad function is called when your native (.so) library is loaded.

2. Make sure you are familiar with how the type signatures work so you can produce the correct signature string for your methods. Otherwise you’ll struggle to find them even though the name is right and the class is there. Also pay attention to the return type of the method, because that dictates which Call… function you use to invoke it.
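
Putting points 1 and 2 together, a minimal JNI_OnLoad that caches a class and a static method ID might look like this (the method name is illustrative):

#include <jni.h>

static JavaVM*   g_JavaVM       = NULL;
static jclass    classMyClass   = NULL;
static jmethodID methodMyMethod = NULL;

jint JNI_OnLoad(JavaVM* vm, void* reserved)
{
    g_JavaVM = vm;

    JNIEnv* env = NULL;
    if (vm->GetEnv((void**)&env, JNI_VERSION_1_6) != JNI_OK)
        return -1;

    // Cache the class (as a global reference – see point 3) and the method ID.
    jclass localClass = env->FindClass("com/fc/mzr/MyClass");
    classMyClass = (jclass)env->NewGlobalRef(localClass);

    // "()V" means: takes no arguments, returns void. "myMethod" is an illustrative name.
    methodMyMethod = env->GetStaticMethodID(classMyClass, "myMethod", "()V");

    return JNI_VERSION_1_6;
}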

For example, invoking a static method that has a void return type would be:

env->CallStaticVoidMethod(classMyClass, methodMyMethod);

While invoking a method that returns “float” would look like this:

float myVal = env->CallStaticFloatMethod(classMyClass, methodMyMethod);

3. When you locate your class and cache the handle, you need to get a global reference to it or it can get moved and then you are left holding onto a dangling pointer.

jclass classMyClass = env->FindClass("com/fc/mzr/MyClass");
if (classMyClass == NULL)
{
    FAIL("Can't find MyClass class.");
}

//get global reference to my class
classMyClass = (jclass)env->NewGlobalRef(classMyClass);

4. Make sure the JNI environment you use is attached to the thread you are calling Java methods from. For example, in my code I run a number of threads, mostly for IO – file access, networking, etc. – but also for things like sound and music. If you don’t do this you may get crashes that are seemingly inexplicable and sometimes difficult to trace.

You can ensure the JNI environment is attached to the thread like this:

JNIEnv* JNI_GetThreadEnv()
{
    JNIEnv* env = NULL;

    int status = g_JavaVM->GetEnv((void**)&env, JNI_VERSION_1_6);
    if (status == JNI_EDETACHED)
    {
        // This thread isn't attached to the VM yet - attach it now.
        status = g_JavaVM->AttachCurrentThread(&env, NULL);
        DEBUG_LOG("GetEnv: attach status: %d", status);

        if (status != 0)
        {
            DEBUG_LOG("GetEnv: failed to attach current thread");
            return NULL;
        }
    }
    else if (status == JNI_EVERSION)
    {
        DEBUG_FAIL("GetEnv: version not supported");
    }

    return env;
}

And then your calling code would look something like this:

JNIEnv* env = JNI_GetThreadEnv();
env->CallStaticVoidMethod(classMyClass, methodMyMethod);

5. Make sure you complete your Java execution quickly and return control back to the native side, especially if you have further code in the same function that uses that JNI environment – it won’t stay attached to that thread forever. When long blocking operations are expected, I make it a practice to run them in a separate task on the Java side and call back into native code when they complete.

Java -> JNI

Calling native methods from Java is somewhat easier to set up. You need to declare methods as native in the Java class and then declare them in a very specific way in the native source.

1. Declare the methods in native code with a C signature and the correct name. Curiously, the dots in the package name become underscores. The extern "C" part is really important, especially if you are using C++ – otherwise JNI won’t find your function.

extern "C"
{
	//mapped to the class MyClass in package com.fc.mzr; the method is called myNativeMethod
	JNIEXPORT void JNICALL Java_com_fc_mzr_MyClass_myNativeMethod(JNIEnv *env, jobject obj, jstring myStringParam);
}
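
The corresponding definition typically converts the jstring before handing it to engine code. A minimal sketch (the string calls are standard JNI; what you do with the result is up to you):

JNIEXPORT void JNICALL Java_com_fc_mzr_MyClass_myNativeMethod(JNIEnv *env, jobject obj, jstring myStringParam)
{
    // Convert the Java string to a UTF-8 C string before passing it to engine code.
    const char* utf8 = env->GetStringUTFChars(myStringParam, NULL);

    // ... hand utf8 over to the platform-independent game code here ...

    // Always release the string, otherwise you leak the reference.
    env->ReleaseStringUTFChars(myStringParam, utf8);
}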

2. Don’t make assumptions about which thread you are on when your native methods are invoked. I spawn threads on the Java side to do async operations like Facebook requests, so the callbacks often come from another thread.

General JNI 

Your native code is compiled to a .so library that is then loaded by the Java code, so you will have to put some code there to load your library. If you have multiple .so libraries that depend on each other, you need to load them in the correct order, otherwise you may get a run-time linking error. This doesn’t happen on all devices, so take care – your device may work fine while others crash on start because of it.

And now some input multi-touch FUN!

Input was fairly straightforward. I handled the input in the Java part of the code and handed it over to the native part. I did have some problems with events being handled on a different thread to the main loop. I ended up caching the events and picking them up once per game frame to ensure all events were handled consistently.
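
A minimal sketch of that caching idea, assuming a pthread mutex guards the queue (names are illustrative, not the actual MZR code):

#include <pthread.h>
#include <vector>

// Illustrative event structure – not the actual engine types.
struct TouchEvent { int id; float x, y; int phase; };

static pthread_mutex_t         g_inputLock = PTHREAD_MUTEX_INITIALIZER;
static std::vector<TouchEvent> g_pendingEvents;

// Called from the JNI input callback (Java UI thread).
void QueueTouchEvent(const TouchEvent& ev)
{
    pthread_mutex_lock(&g_inputLock);
    g_pendingEvents.push_back(ev);
    pthread_mutex_unlock(&g_inputLock);
}

// Called once per frame from the game loop thread.
void DrainTouchEvents(std::vector<TouchEvent>& out)
{
    pthread_mutex_lock(&g_inputLock);
    out.swap(g_pendingEvents);
    g_pendingEvents.clear();
    pthread_mutex_unlock(&g_inputLock);
}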

Beyond that, there is a manifest setting to enable multi-touch support, and the code for supporting multiple touches may need some care. It certainly took me some time to get it right, but I attribute that to me being new to it rather than any gotchas. It all makes perfect sense now.

OpenGL ES is easy… oh wait… what was that bit about the context?

My OpenGL ES code that I use on iOS ported directly with minimal changes to Android. It just worked. There was little need to debug or fix things.

However, there’s a nasty surprise for Android developers with OpenGL ES. When an Android app is suspended (pressing the home button) it loses the OpenGL context. That means that all GPU resources referenced by the app are lost and the app needs to reallocate/reload all of them once the context is recreated.

MZR uses a lot of dynamic geometry and render-to-texture surfaces. Not only would I have to reload all static resources (like textures) but also recreate all resources that were dynamically generated such as old mazes, pre-rendered texture atlases, pre-rendered signs, etc.

In Android there is a method on the GLSurfaceView class called setPreserveEGLContextOnPause (added in API level 11). It supposedly keeps the context so that it survives the app being suspended to the background, although it doesn’t guarantee it will be back once the app is resumed. I used that and it works – for the most part the context is there – however, I have had reports that on some devices it isn’t.

I also made sweeping changes to MZR to make it reconstruct all resources so that it can survive a “lost” context. I did hit some problems with that. I’m assuming this is entirely my own fault, but I was unable to fully trace the issue on Android. On both PC and iOS I was able to nuke all resources and restore them again, but the Android version just wouldn’t work. I had limited time to invest in this issue so I left it at that. If there is a need to fix it entirely I’ll look into it again.

OpenAL works a treat but there is a… lag!

I use OpenAL on iOS and PC. The code is the same. I have a high-level audio engine (that uses XACT as authoring tool) that’s built on top of that.

My first reaction was to use one of the Android sound APIs, but it looked like a lot of work compared to an already tested OpenAL implementation. I found a library called openal-soft that has an Android port. I dropped that in (skipping all the build/setup details) and it worked perfectly… apart from a half-second delay between triggering a sound and the sound actually being audible. The delay would vary between devices.

At some point I was going to ship with that lag in, but in the end I thought I’d try the native Android audio library – OpenSL.

OpenSL ES is very different from OpenAL. It has a COM like interface and it required some getting used to. This tutorial proved very useful.

I had to make a fresh implementation of my low-level sound system interface using OpenSL.
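
To give a flavour of that COM-like calling convention, here’s roughly what creating the engine object looks like (error checking omitted):

#include <SLES/OpenSLES.h>

static SLObjectItf engineObject = NULL;
static SLEngineItf engineEngine = NULL;

void CreateOpenSLEngine()
{
    // Every OpenSL ES object is created, then realized, then queried for interfaces.
    slCreateEngine(&engineObject, 0, NULL, 0, NULL, NULL);
    (*engineObject)->Realize(engineObject, SL_BOOLEAN_FALSE);
    (*engineObject)->GetInterface(engineObject, SL_IID_ENGINE, &engineEngine);
}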

To play compressed music I use the MediaPlayer in the Java part of the code. I had a policy of caching these players (carried over from iOS) so that music playback could start instantly. On Android, however, there is (or can be) a limit on the number of players available at any given moment, so you can end up with the player failing to play your music track. I was hitting this problem at some point.

Android devices come in all forms and shapes

MZR is a portrait game, so I could limit the orientation to “portrait” in the Android manifest. However, that doesn’t guarantee the resolution or the aspect ratio of the main display buffer.

This means that MZR suddenly needed to support all sorts of resolutions, not just the standard iPhone and iPad ones. MZR is mainly a 3D game so that wasn’t a big problem for the game content.

The front end 2D interface, however, is another matter. I have a 2D layout tool where I can lay out each of my 2D interface pages, which I then load and display in-game. To support unexpected resolutions the tool supports anchoring. Anchoring is used in many windowing systems to control the behaviour of UI elements when window panels change size. For example, if a button is anchored to the left of the parent panel, it will stay relative to the left edge of that panel. In my tool I can anchor elements both horizontally and vertically – left, centre or right, and top, centre or bottom respectively. That solves most problems with small variations in resolution and aspect ratio.
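
Here’s a minimal sketch of the horizontal case, assuming positions are in pixels and the page was authored at a fixed design width (the enum and function are illustrative; the real tool handles vertical anchoring the same way):

// Illustrative sketch – not the actual layout tool code.
enum Anchor { ANCHOR_LEFT, ANCHOR_CENTRE, ANCHOR_RIGHT };

float AnchoredX(Anchor anchor, float designX, float designWidth, float screenWidth)
{
    switch (anchor)
    {
    case ANCHOR_LEFT:   return designX;                                    // keep the offset from the left edge
    case ANCHOR_CENTRE: return designX + (screenWidth - designWidth)*0.5f; // keep the offset from the centre
    case ANCHOR_RIGHT:  return designX + (screenWidth - designWidth);      // keep the offset from the right edge
    }
    return designX;
}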

One final glitch was font rendering. In my engine fonts try to stay pixel perfect in 2D unless explicitly overridden by the game code, so fonts would look really small on higher resolution screens. I had to introduce multiple font sizes and switch at certain resolution thresholds to keep the fonts crisp and reasonably sized.

Android devices come in all muscle shapes and sizes

MZR is a fairly demanding game. Despite its simplicity it renders quite a bit of geometry and overdraws the screen with transparent effects. This can prove particularly troublesome if the hardware is slightly out of date and/or the resolution is really high.

To address this on iOS I had a predefined list of graphics quality settings for each iOS device model. Some features would be enabled on some devices and disabled on others. For example, the iPhone 4 has hardly any graphics features enabled and the iPhone 5 has all the bells and whistles… with the original iPad Mini being somewhere in between.

For Android this was not a practical approach. With potentially thousands of devices I wouldn’t be able to create such a list. To solve this I introduced a number of features:

  • a graphics settings page in the options menu – the user can go and switch graphics options on/off to tweak their performance
  • an automatic measurement of average and lowest FPS (frames per second), switching graphics options off if that measurement drops under a certain limit; that way the app always starts with maximum settings and, after a couple of runs (if the FPS was not satisfactory), drops the settings to recover some performance (see the sketch after this list)
  • server solution – after every run the game would upload average FPS info to the server anonymously for that device (and settings); the server can then aggregate the data and find an optimal preset of settings for that device type;
  • server presets – every time the game boots it contacts the server and asks for graphics settings presets; if presets exist for that device they are downloaded and set as the default – turning this process into a self-accumulating settings list for thousands of devices
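
Here’s a minimal sketch of the automatic settings drop mentioned above – the settings structure, feature names and threshold are hypothetical, not the actual MZR code:

// Hypothetical settings structure for illustration only.
struct GraphicsSettings
{
    bool postEffects;
    bool renderToTexture;
    bool highResEffects;
};

// Called at the end of a run with the FPS measured during that run.
void AutoTuneGraphics(GraphicsSettings& settings, float averageFPS)
{
    const float kMinAcceptableFPS = 30.0f;

    if (averageFPS < kMinAcceptableFPS)
    {
        // Turn features off one at a time, most expensive first; the new
        // settings are saved and used from the next run onwards.
        if      (settings.postEffects)     settings.postEffects = false;
        else if (settings.renderToTexture) settings.renderToTexture = false;
        else if (settings.highResEffects)  settings.highResEffects = false;
    }
}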

Immersive mode & full screen

When running a fullscreen game on Android you will want to hide the status bar. Also, some Android devices have their home and back buttons on-screen at the bottom (the navigation bar), which you will want out of the way too.

There are a few examples on the internet of how to handle this, but what I’d missed is that I had to set up an OnSystemUiVisibilityChangeListener – a handler that gets called when the visibility of the system UI changes. In that handler I set the appropriate flags again to keep the app full screen.

You can read more about Immersive mode here: https://developer.android.com/training/system-ui/immersive.html

Check it out!

The game is now available on the Google Play store. Be sure to check it out:

https://play.google.com/store/apps/details?id=com.funkycircuit.mzr

That’s it for now.

Thanks for reading!

MZR Update 1.1 Released

MZR Update 1.1.0 was just released on the AppStore.

Link to the AppStore

So what’s new in this update:

Super Power-Ups

There is a new “super power-up” mode in the bonus stage, allowing the player to achieve a higher high score. That mode was in development before the main game release but got put on hold so that I could get the game out on time. It introduces another type of maze solving – a classic variation on multiple overlapping mazes where you need to choose the one entrance that leads to the destination you desire. As with the bonus stage there is no failure here – the player can only gain. They have to quickly assess what power-ups are on offer and choose the right entrance.

Retina Graphics

This is a result of some of the work related to the Android version of the game. MZR now supports retina resolutions giving the game a nice crisp look if the device can handle it.

Country Scores

The main screen/leadergrid gets a facelift. Countries now have full names (as opposed to alpha-2 country codes) and are better spaced on the field. As a result not all countries are visualised, but it looks a lot more presentable.

Finally, congrats to Australia. At the time of release of the update, Australia was in the lead with a score of 812 – an amazing achievement.

Other Fixes

There have been many fixes and improvements in this update such as: tilt control centring, graphics options, spelling mistake fixes and many more.

What’s next?

There’s an Android version of the game which is very close to release, so watch this space.

That’s it for now,

See you later!

Blending and Transitioning Camera Behaviours

This time I’m going to talk about an approach I’ve used successfully on multiple occasions in several companies and home projects. I use it to manage multiple behaviours, blend and transition between them.

I’ve used this system for animation blending and camera control in the past. However, if I can represent something as a state structure and want to blend/transition between different behaviours that operate on that state then this is my “go to” solution.

In MZR I use this approach for my camera system as well as a system that adds additional camera effects – shakes, wobbles, etc. They are both independent: the camera may have a “focus on point of interest” behaviour to control it while the effects can change independently of that giving me a wider variety of visual experience.

The main motivation behind a system like that is that we want to be mostly concerned with directing the system behaviour rather than be busy with the small detail of how that happens every time. I’m interested in when the camera transition happens, how long it lasts, what the new camera frame is, etc. I’m less concerned with what happens with the old camera setup and I’m definitely not keen on doing the same work every time a camera transition needs to happen, in terms of maintaining entities and writing code to manage their life time.

Ideally we want to write only the code that introduces changes in the system and have the system sort itself out afterwards.

First things first.

A couple of words about some code choices.

In order to get automation of allocation and deallocation of objects I make use of a smart pointer system. If you are working in a managed language environment (like C# for example) you don’t have to worry about that. However, for the purpose of this article if you see an object that is derived from BaseObject then it supports intrusive reference counting. And if you see SmartPtr<MyClass> then that adds value semantics to the pointer – incrementing, decrementing the ref count of the object to manage its life time.

In short, it’s an automatic lifetime management system: if no one is pointing to an object, it gets deleted.

The State

I use a POD type of structure to represent the state of an item in the system. In this example we are building a camera system, so let’s represent the state as two points: camera position and camera target. You could use a position and orientation or any other combination of properties.

The state is important because this is the result of our system. It is also the data that we would blend. Any behaviours we have will aim to produce one of these states as result of their execution.


struct CameraState
{
     Vector3 position;
     Vector3 target;
};

CameraState BlendCameraState(const CameraState& lhs, const CameraState& rhs, float fraction)
{
    CameraState result;
    result.position = lhs.position*(1.0f - fraction) + rhs.position*fraction;
    result.target = lhs.target*(1.0f - fraction) + rhs.target*fraction;

    return result;
} 

The state can be anything. In the case of an animation system, the state can be an array of skeletal bone transforms, a movement vector extracted from the animation and so on.

The Base Controller

Next, the basic building block of my system – the CameraControllerBase. It contains a state that we can get access to in order to find out the current state of the system.


class CameraControllerBase: public BaseObject
{
public:
    virtual SmartPtr<CameraControllerBase> Update(float fDeltaTime)
    {
         return this;
    }

    const CameraState& GetState() const { return m_state; } 

protected:
    CameraState m_state;
};

The most important part of this class is the Update method. The update is where a derived behaviour would do the work by overriding that method.

Update returns the controller that the parent entity should hold after this update step. At the top level – say, the entity that owns the system – we have a pointer to the current “top” controller.

SmartPtr<CameraControllerBase> m_topController;

In the update part of this top level entity we want to update the current top controller and assign to it whatever it returns.

void Game::Update(float fDeltaTime)
{
    ...
    m_topController = m_topController->Update(fDeltaTime);
    ...
}

By doing this we make sure that whatever behaviour is currently the top controller will be updated and can delegate its position of “top controller” to one of the child behaviours it aggregates.

This is the driving idea behind this approach. Controllers can “suicide” and pass the responsibility of being the top controller to another controller they hold a pointer to.

In this article I use the terms controller and behaviour interchangeably. My base building block is the controller – but some controllers have more complex functionality that is beyond the simple control/blend functionality of the system. In other words they have some game or domain specific function that is used to generate or process a state. I call such controllers “behaviours” to indicate their higher function.

The Blend Controller

The blend controller is a class that takes two other controllers (of unknown type but derived from the base one) and blends between them over time. Then it replaces itself with the second controller – the one it interpolates to. As it replaces itself with the “to” controller the “from” and the “blend” controllers are automatically disposed of.

It’s a transition.


class CameraControllerBlend: public CameraControllerBase
{
public:
    CameraControllerBlend(SmartPtr<CameraControllerBase> From, SmartPtr<CameraControllerBase> To, float BlendTime)
    {
         m_blendTimeMax = BlendTime;
         m_blendTime = 0.0f;

         m_from = From;
         m_to = To;
    }

    SmartPtr<CameraControllerBase> Update(float fDeltaTime)
    {
         // accumulate the time
         m_blendTime += fDeltaTime;

         //update the two controllers; assign the result of each update so the take-over logic works
         m_from = m_from->Update(fDeltaTime);
         m_to = m_to->Update(fDeltaTime);

         if (m_blendTime < m_blendTimeMax)
         {
              float fraction = m_blendTime/m_blendTimeMax;

              //use fraction to blend between the states of m_From and m_To
              //store the resulting blended state in m_state of base class
              m_state = BlendCameraState(m_from->GetState(), m_to->GetState(), fraction);

              // return this one as the current top controller
              return this;
         }
         else
         {
              //the blending has finished - return the m_To controller as one that will take over
              return m_to;
         }
    }

private:
    float m_blendTime;
    float m_blendTimeMax;

    SmartPtr<CameraControllerBase> m_from;
    SmartPtr<CameraControllerBase> m_to;
};

To trigger this transition we have to replace the top controller with a newly created blend controller that blends between the old top controller and the new behaviour.

void Game::BlendToController(SmartPtr<CameraControllerBase> ToController, float BlendTime)
{
      m_topController = new CameraControllerBlend(m_topController, ToController, BlendTime);
}

Easy. With just one line we can introduce a new controller/behaviour into the system and have it blend in gracefully and clean up after itself.

A diagram showing how new controllers transition in to take over the top controller role.

A really nice property of this system is that if blend transitions come in close succession (before the previous blend has finished) everything still works exactly as expected. We are essentially growing a tree of objects, with every branch being a blend controller pointing either to other blend controllers or to behaviours. In the end, once all blend times have expired, we are left once again with a single top controller.

Sometimes we want to blend to a behaviour, stay at that behaviour for a while and then return to the previous one. I call that an “attack-sustain-release” blend controller. The controller blends to the “to” behaviour, stays there for the “sustain” time and then blends back, in the end returning the “from” controller and disposing of the “to” one.

Here’s how this “attack-sustain-release” (ASR) controller Update function might look.

SmartPtr<CameraControllerBase> CameraControllerBlendASR::Update(float fDeltaTime)
{
    // accumulate the time
    m_blendTime += fDeltaTime;

    //update the two controllers; assign the result of each update so the take-over logic works
    m_from = m_from->Update(fDeltaTime);
    m_to = m_to->Update(fDeltaTime);         

    if (m_blendTime < (m_attackTime + m_sustainTime + m_releaseTime))
    {
         //calculate fraction as function of current BlendTime, attack, sustain and release times
         //fraction will stay in the range [0:1]
         float fraction = CalcAttackSustainReleaseFrac(m_blendTime, m_attackTime, m_sustainTime, m_releaseTime);

         //use fraction to blend between the states of m_From and m_To
         //store the resulting blended state in m_state of base class
         m_state = BlendCameraState(m_from->GetState(), m_to->GetState(), fraction);

         // return this one as the current top controller
         return this;
    }
    else
    {
         //the blending has finished - return the m_From controller as one that will take over
         return m_from;
    }
}

The fraction function returns a value between 0 and 1 depending on which phase of the controller we are in. During “sustain”, for example, the fraction will always be 1.
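
A minimal sketch of that fraction function, assuming a linear ramp on both the attack and the release (and non-zero phase times):

// Illustrative implementation – assumes linear attack and release phases.
float CalcAttackSustainReleaseFrac(float time, float attackTime, float sustainTime, float releaseTime)
{
    if (time < attackTime)                   // attack: ramp 0 -> 1
        return time/attackTime;

    if (time < attackTime + sustainTime)     // sustain: hold at 1
        return 1.0f;

    // release: ramp 1 -> 0
    float releaseFraction = (time - attackTime - sustainTime)/releaseTime;
    return 1.0f - releaseFraction;
}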

Behaviour Controllers

We had a look at the blend controllers but what about the actual behaviours in the system? Well, that’s down to the specific system. That’s why our Update function is virtual, so that any derived classes can calculate the state in many ways not imagined by us at the point of writing the system.

For a camera system here are some that I’ve used in the past (Note: names are something I’ve just come up with):

  • CameraControllerSnapshot – take a snapshot of a current CameraState and keep it still – in many ways that’s the CameraControllerBase with ability to expose the m_state for writing.
  • CameraControllerFixedPointLookAtPlayer – one that keeps the camera in the same position but makes it look at the player and track them
  • CameraControllerFixedDirectionLookAtPlayer – camera looks at the player from certain directions and moves position to maintain that direction as that player moves. Often such camera would be constrained by a box or geometry.
  • CameraControllerRailsLookAtPlayer – this is the sort of cinematic camera 3rd person action games would employ. It constrains its position to a pre-defined spline (on rails) and follows the player.
  • CameraControllerRailsFixedLookedAtPlayer – this is a variant of the on-rails camera where there are two splines: one that defines the camera position and another that defines the camera look-at point. This is used so that at any point the artist (camera man) knows what will be in the frame. We take the player position, find the closest point on the target spline and derive the corresponding point on the position spline.

I’m sure you can come up with a lot more camera behaviours. This is just a taste. As long as your Update function uses some logic to fill the m_state CameraState you will have a working system.
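
For example, a minimal sketch of the fixed-point behaviour could look like this (GetPlayerPosition() is a hypothetical helper, not part of the system described above):

class CameraControllerFixedPointLookAtPlayer: public CameraControllerBase
{
public:
    CameraControllerFixedPointLookAtPlayer(const Vector3& position)
    {
        m_state.position = position;
    }

    virtual SmartPtr<CameraControllerBase> Update(float fDeltaTime)
    {
        // Keep the camera where it is and track the player with the look-at target.
        m_state.target = GetPlayerPosition(); // hypothetical helper

        // This behaviour never expires, so it stays the top controller.
        return this;
    }
};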

Non-transition blending

Not all blends are transitional – they don’t have to be timed or to expire.

We could have a behaviour that has two child behaviours – very much like our blend controller. However, instead of time controlling our fraction, we can control it from another parameter in code or data setup. That way we can dynamically control the degree to which each of the child behaviours contributes to the final state.

You can take this notion a step further and introduce several child behaviours that are each associated with a value on a line – for example one sits at 0, one at 0.5 and one at 1.0. The blend behaviour is then given a “depth” parameter and evaluates which child behaviours contribute to the final output, and by how much. I’ve heard this called a “depth blend”.
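
A minimal sketch of the evaluation, assuming the children are pinned at sorted depths and the requested depth falls inside that range (the function is illustrative and reuses BlendCameraState from earlier):

#include <vector>

// Illustrative sketch – not part of the system described above.
CameraState EvaluateDepthBlend(const std::vector< SmartPtr<CameraControllerBase> >& children,
                               const std::vector<float>& depths,
                               float depth)
{
    // Find the two children that bracket the requested depth and blend between them.
    for (size_t i = 0; i + 1 < children.size(); ++i)
    {
        if (depth <= depths[i + 1])
        {
            float fraction = (depth - depths[i])/(depths[i + 1] - depths[i]);
            return BlendCameraState(children[i]->GetState(), children[i + 1]->GetState(), fraction);
        }
    }
    return children.back()->GetState();
}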

There could be other blend examples where the parameters are not linear. Any parameter set can be used, as long as it can be evaluated into a weight for each corresponding child behaviour’s contribution.

I’ve mostly used these in animation blending. For example, the depth blend could be used in character animation where we want to blend between two animation loops: running and walking. Based on the desired speed of the character we can derive a parameter that is the fraction between walking and running speed and pass it in as depth blend factor. The result would be an animation that is half walk and half run driven by the parameter we just passed in.

Another animation example would be the multi-parameter blend. Let’s say we have several animations of a character pointing [a weapon?] in different directions. Each of these animations is associated with its corresponding aim direction. Given a desired aim direction, the system evaluates a function that results in an array of weights – one for every animation. Using those weights we can then calculate a weighted blend of those animations to get to a state where we have a character pointing in the direction we need.

Note: there is a lot more going on in character animation systems and I’m simplifying here to illustrate this method. A good animation system would need to compensate for different character speeds, foot planting, make additional corrections using IK solutions and so on.

In practice…

… I have a generic template implementation that I specialise every time I have to write one of these systems.  The blend controllers are the same. The state, the state interpolation function and the behaviours are what differs between systems.

Sometimes you will need interpolation methods other than simple linear interpolation in transitions. You can add that to the blend controller and control it with a parameter.

Depending on the game you are making you may even want to make the blending controllers more context aware. Maybe you want your camera to always track the player no matter what; maybe sometimes, blending between two perfectly good behaviours, you end up with a frame or two where the camera isn’t looking at the player. To fix that you could make a “clever” blend controller that blends between two controllers but keeps the player in view.

You can trigger camera blend transitions in code – I do that in MZR. However, quite often camera transitions and blends are the result of a complex setup in a level: when the player passes this trigger, transition to this camera; if they enter this area, switch to that one; and so on. You can even have an editor that allows you to lay down those triggers and position the camera behaviours around the level… but that’s another story.

 

That’s it for now. I hope you enjoyed reading about this system. It has served me well and I like how it liberates me from the tedious book-keeping of the blending transition tasks and allows me to focus on the top line “what I want to happen” bit of development.

See you next time.

MZR: Gradient Based Shader Effect

Today I’ll talk about a shader idea I’ve always wanted to use but never got to release in a game until now. It has its roots in the old retro palette scrolling technique – or at least was inspired by it.

Palette scrolling was the thing back when images had 8-bit pixels, with each pixel being an index into a palette table of 256 RGB entries. That way, using the same image and just changing the palette, one could change the look of the image without actually altering any pixels. Artists would do wonderful animations just by changing palettes. One of the cheapest ways to do that was to shift the palette by one entry (scroll it) and watch the colours shift – I call that palette scrolling.

These days one can still do palette scrolling, but on current GPU hardware that involves using two textures: an index texture and a 1D palette texture, with the animation achieved by dynamically changing the palette texture. While that’s entirely fine on desktop GPU hardware, on current mobile device GPUs dependent texture fetches are not very performance friendly.

I wanted to use a similar concept of having a static texture that would change appearance when “something like a palette” would change.

I do that by exposing a range from a gradient texture using a step function. For a quick refresher on the topic, have a look at this excellent post on step and pulse functions: http://realtimecollisiondetection.net/blog/?p=95

By using a gradient texture and a step function, y = sat(ax + b), I can vary the parameters a and b and reveal/animate different parts of the said texture. I also introduce two colours and interpolate between them based on the y value.

Here is the shader code:


uniform mediump vec2 GradientParams;
uniform lowp vec4 GradientColour0;
uniform lowp vec4 GradientColour1;

...

mediump vec4 col = texture2D(Texture, texVar);

// calculate a*x + b
mediump float y = col.x*GradientParams.x + GradientParams.y; 

// calculate sat(a*x + b) by clamping
y = clamp(y, 0.0, 1.0); //sat (a*x + b)

// interpolate the two colours based on the resulting y value
lowp vec4 rcol = GradientColour0*(1.0 - y) + GradientColour1*y;

// factor in the original texture alpha
col.xyz = rcol.xyz;
col.a *= rcol.a;

//apply the variant colour
gl_FragColor = col*colorVar;

The a and b parameters go in the x and y components of GradientParams, and the two colours at each extreme are, respectively, GradientColour0 (for y=0) and GradientColour1 (for y=1).

Let’s take a simple gradient texture:

A horizontal gradient texture.

And then apply our shader to it. We are using the function y=sat(ax+b). We use a=1 and b=0 thus giving us a gradient of 0 to 1 in the range of the texture. Then we are going to assign a colour at y=0 to be white (255,255,255) and at y=1 we’ll assign it to be black (0,0,0). Here’s how that would look.

Reverse: Using the gradient but replacing colour 0 to be white and colour1 to be black.

 

Next let’s try to use a part of the range. We’ll use the same function but use a = 3.3 and b=-0.9. That way y will be 0 until x reaches 0.3 and then grow linearly to 1 until x reaches 0.6. To illustrate that I’ve assigned colours to be red for y=0 and blue for y=1.

Note how the gradient transition between the two colours happens in the range 0.3-0.6 that we have isolated using our two parameters.
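
A handy way to think about those two values: to isolate a range [lo, hi] of the gradient, choose a = 1/(hi - lo) and b = -lo*a. A small helper along those lines (mine, not from the MZR code):

// Illustrative helper: compute the a and b parameters of y = sat(a*x + b)
// so that y goes from 0 at x = lo to 1 at x = hi.
void GradientRangeToParams(float lo, float hi, float& a, float& b)
{
    a = 1.0f/(hi - lo);
    b = -lo*a;
}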

 

Here’s one based on the same gradient texture that illustrates the way I use this effect. I assign a=2 and b=0, which gives me a gradient between 0 and 0.5. I also assign the y=1 colour to be translucent – alpha=0.0. That way, by varying the parameter a with some dynamic game value, I can get the bar to move with that value.

Gradient based on parameter going into translucency.

In MZR I link a lot of effects to the music EQ so that the visuals appear to bounce with the music.

The above examples use our simple horizontal gradient texture. Things get a bit more interesting as we start using more complicated textures. For example, here’s the actual texture I use for my effect in MZR and the final result next to it. The green MZR logo in the texture indicates where the start of the gradient is – it’s a grey gradient that fills up a maze.

Gradient texture for a maze that the shader works with.
The final result in MZR. The parameter a is controlled by the base frequency in the music animating the maze on the screen. Note: this also has the MZR logo rendered on top as well as the FUNKY CIRCUIT sign.

And here’s the intro sequence to MZR where this effect is used:

 

The parameters are linked to the music EQ. The video shows the effect, which is a single render call, as well as the logo rendered on top and a FUNKY CIRCUIT sign underneath.

 

That’s it for now. See you next time.

 

Here’s looking at you – stats!

Just today I spent four hours talking through some metrics data for @FunkyCircuit games in order to drive the next wave of updates… so it’s probably worth writing about what that involves.

Game analytics (or metrics, or telemetry)  has been around for ages. I want to quickly cover some basic points here, as I see them. It’s a huge topic so without a doubt we’ll be coming back to it in the future.

The basic idea is that you:

  • report some metrics data from within the game – i.e. game sends data to a server location
  • analyse the data collected in order to establish how users interact with the game and what you can be doing better
  • tweak and update the game so it’s a better experience… better performance… etc

Sounds easy, right? I wish! Let’s look at what’s involved.

Report

There are two main challenges to do with reporting good valuable data from your game:

  • Technical – what technology do you use to report the data? Do you write your own or do you use an off-the-shelf one? I personally use Flurry (www.flurry.com). You can certainly write your own, although the effort and infrastructure involved often outweigh the benefit of such a venture. If, however, you run a server-based game, you are probably better off collecting your own data. Off-the-shelf systems are usually generic (one-size-fits-all) solutions – which can be limiting if you need specific data reported.
  • Data – the data being reported is often the limiting factor in such an endeavour. There is a temptation to just report everything and make sense of it later. Even if that were possible (you don’t want to use excessive bandwidth – your users will be a bit cross with you), it’s still not clear how you structure the data reported. Most systems (like Flurry) will aggregate your data in one way or another, so you want your data to make sense when aggregated. Get this one right and you have solved most of your problems!

So how do we do better reporting?

  • Have a list of questions you want answered. Write them down, then think what data would you need reported to answer those questions.
  • Iterate – don’t leave this for the last few days before release. Implement it as soon as you have some questions to answer. Look at the data, and try to make sense of it, how it’s aggregated and visualized. Tweak the implementation and then look at the resulting data, again and again… until you are happy with the results.
  • Think about the data and how it’s aggregated so that you can report it efficiently. The more you do this, the better you will become at it.

This is far from easy. Don’t expect to get it perfect the first time. Good metrics data will help you understand your game and your users better. Bad metrics data will leave you scratching your head and quite possibly turn you off this otherwise valuable process.

Keep improving this after the game is released and updated.

Analyse

Quite often various theories about user behaviour emerge when looking at the data. It’s easy to form theories but not nearly as straightforward to be confident that the data backs them up.

Often people go for generic buzzword indicators to measure the performance of a game. Just to name a few: ARPU (average revenue per user), retention, daily active users (DAU), etc.

These are important and you will get most of them out of the box with any standard package (did I say I use Flurry?). However, correlating these numbers to various events or features in-game (user behaviour) and outside the game (marketing) is often hazardous and prone to wrong assumptions. Did this new feature affect our numbers negatively, or did we suddenly get a group of new users who play the game differently? You will see this in the weekly user fluctuation, leading to regular spikes over weekends and more moderate performance during the week. How can you be sure that increased user activity is due to your in-game feature and not just part of the usual cycle, or a holiday in one of your territories? Is the feature Bob worked on really that poor, or did we suddenly get a batch of users who don’t like our game because we advertised on the wrong site? Notice how this can subtly become a political issue as various interests intersect, and with no clear metric everyone argues their corner, bringing such analysis to a halt.

One way to resolve this issue is to use split tests (also known as A/B tests). Users are split into two groups (cohorts). One of the groups (A) is given the new behaviour and the other one (B) is left untouched. We can now measure the performance of the two groups and compare the results. This eliminates user quality fluctuation – regardless of the time of day or week, or where the users came from, if they are randomly assigned to the two groups as they arrive, the results will always be an indicator of how the new behaviour is performing. Certainly a lot easier to analyse – yet no silver bullet.

You can also examine various game design decisions through the prism of data. This will help you spot any difficulty balancing issues and tell you a lot about how people play the game. At a minimum I would always have a “first time reached” event for my game progression curve. That way I know how many people have reached each point in the game and can see where people stop playing… and if there are any sudden drops in the curve then we have a problem.
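
A minimal sketch of that idea – the analytics and save-data calls here are hypothetical engine wrappers, not the Flurry API:

// Report a progression milestone only the first time this installation reaches it,
// so the aggregated counts form a clean progression funnel.
// SaveData_GetFlag/SetFlag and Analytics_LogEvent are hypothetical wrappers.
void ReportProgressOnce(const char* milestone)
{
    if (!SaveData_GetFlag(milestone))
    {
        Analytics_LogEvent(milestone);      // e.g. "level_16_reached"
        SaveData_SetFlag(milestone, true);
    }
}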

Tweak & Update

We were always going to tweak and update our game, right? Doing it without data to inform our decisions just means we are doing it in the dark based on our assumptions about our target audience and our overall plan for updates.

Having looked at the data, however, this is where we decide to decrease the difficulty on level 16 because 50% of the users who finished level 15 never finished it. This is where we reduce the price of map pack 2 because 90% of users who finished map pack 1 didn’t think it was worth spending that much virtual currency on that content. This is where we don’t spend time working on map pack 5, because that would mean catering for the 5 people who finished map pack 4, and instead concentrate our limited resources on other parts of the game that would benefit more users.

It is not always that easy. In one of our games we had a difficult trade-off:

People who start in location A go on to spend more money on in-app purchases. People who start at location B spend less on in-app purchases. However, on average more B players than A players completed 50% of the game. What is more valuable to us? Hard cash, or people playing the game for longer, getting more engaged and telling their friends about it? Are they telling their friends about it? We’ll never know – we didn’t report that data correlated to the start location!

… 

This process certainly isn’t a magical silver bullet. It is no substitute for creativity – you are still in charge of coming up with that great idea. However, I believe it can be a valuable tool in your game development process, helping you bring that great idea to success.

First

This hopefully will be a blog about making small indie games. It should have a fair amount of technical stuff (me being a programmer and all). 

I hope you enjoy it.