We kicked off our inaugural Game Backend Engineering summit at the Game Developers Conference last month, closely followed by Quo Vadis in Berlin. Both events brought technology and business leaders together in a room to share best practices for overcoming backend challenges. As Harald Riegler from Games Industry Network put it, “this is information I would have killed for a few years ago.”
The summits made one thing clear: infrastructure and backend are inherently related. Traditionally, gaming studios have fallen into the common trap of separating their infrastructure and techstack decisions, typically choosing an infrastructure provider first and then building the techstack on top. The clear takeaway was the opposite: the techstack decision MUST lead your infrastructure decision. Certain infrastructures don’t support certain technologies, and failing to make future-proof choices on this basis can impede you a few years down the line.
Making decisions about backend infrastructure
There are three main options for building your backend: a managed public cloud, a backend-as-a-service (BaaS), or an on-premises/hosting provider.
Deciding which infrastructure provider to use should be influenced by several factors: not only the type of game, but also your workforce and commercial constraints. A genuine grievance raised at both events was the shortage of talent, with studios unable to achieve their goals simply because they lack an experienced team to implement them. There is no simple answer to which approach you should choose, only a number of trade-offs. Here are some key features of each category of infrastructure provider.
- Unmanaged hosting: compatible with the vast majority of technologies; optimized server configs; open source; no vendor lock-in; scalable for the future; considerable cost savings; BUT you must have a team to build and manage orchestration/scaling.
- Managed cloud: safety in numbers (“everyone uses it”); provides autoscaling; low maintenance; BUT no optimized hardware; the most expensive option; vendor lock-in and lack of flexibility.
- BaaS: quick, easy, out-of-the-box solutions; BUT you are at the whim of the provider’s techstack and forced to adopt whatever changes they make; there have even been cases of providers shutting down and the backend disappearing completely.
Handling data and thinking long-term
Another thing that should be top of mind is a long-term plan for how you will gather, store and interact with the data you collect. You should be thinking about whether you’ll use a multi-provider infrastructure or integrate with services such as Kubernetes; how to back up player data and what your disaster-recovery strategy is; fail-safe offline modes in the event of a network failure; and, above all, the best database for your needs.
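To make the “fail-safe offline mode” idea concrete, here is a minimal Python sketch, not from any particular engine and with all names invented for illustration: player events are buffered locally whenever the injected network call fails, and flushed once connectivity returns.

```python
class OfflineQueue:
    """Buffers player events locally while the backend is unreachable.

    `send` is an injected network call (an assumption for this sketch);
    it should raise ConnectionError when the backend cannot be reached.
    """

    def __init__(self, send):
        self.send = send
        self.pending = []

    def record(self, event):
        """Try to send immediately; buffer the event on failure."""
        try:
            self.send(event)
        except ConnectionError:
            self.pending.append(event)

    def flush(self):
        """Retry buffered events, keeping only those that still fail."""
        still_pending = []
        for event in self.pending:
            try:
                self.send(event)
            except ConnectionError:
                still_pending.append(event)
        self.pending = still_pending
```

A real implementation would also persist the buffer to disk so events survive a crash, but the shape of the idea is the same.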
Technologies such as Kubernetes, Golang, and Docker are repeatedly hailed for making developers’ lives easier. Containers let game developers manage their infrastructure far more easily, taking much of the management load off their hands and improving agility.
The broad array of technologies used is much the same across the board. Although given that no two games are alike, how these models are then applied changes from game to game. Some “moments of genius” unearth new best practices and it’s great to see the gaming community is keen to share these ideas and help other studios overcome their challenges.
One point everyone could agree on was the aim: within five years, to be able to build your own backend and have complete control and ownership of your whole architecture.
What are the experts saying?
I collected some notes from our Gaming Backend Panel at GDC, which featured Douglas Manton (Chief Technology Officer at Boss Fight Entertainment), Kim Pallister (Chief Technology Officer, VR/AR, Gaming, e-Sports at Intel Corporation), and Stephen Nichols (Studio Engineering Director at Certain Affinity). Here’s a small selection of discussion points for your perusal:
What’s a good way to distribute backend services into different VMs?
Stephen Nichols: Certain Affinity use containers to manage their infrastructure, packaging up their systems according to their different requirements (memory, storage, capacity, etc.). Kubernetes, Golang and Docker all support this approach and will save you a headache with this issue.
If you are just starting out and know nothing about this, you can find managed container services, such as Kubernetes managed by Google Cloud. The only problem is that this can lead to vendor lock-in, as you won’t have the knowledge to move your containers elsewhere.
You can also run a Kubernetes cluster across multiple cloud providers and manage this using tools such as Kops or Cluster Federation.
Is it easy to migrate from bare metal to containers for large popular games?
Stephen Nichols: You can absolutely containerize these games. The systems exist to support huge games hosted on dedicated servers, like Call of Duty; you just have to experiment with your game and understand how it all works.
How much do you split up the services of the game to microservices?
Stephen Nichols: If you’ve ever tried managing microservices, you’ll know it’s complicated and there are a lot of moving pieces. It’s best to start with something straightforward that you can build on, and once it proves that it needs to be broken up, then you can go ahead and do it.
We generally start with the monolith: we’ll deploy a bunch of API services in a single executable, and as we find that they are proving valuable we’ll hoist them out. A good example is our “room service”, which came from one of our games and proved valuable, so we hoisted it out into its own standalone service.
It’s best to think about what makes sense code-wise: rather than creating lots of new pieces that could have bugs and may not prove useful, reuse what’s already proven successful. It’s more of an evolutionary thing.
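The monolith-first pattern Nichols describes can be sketched in a few lines of Python. This is purely illustrative (the handlers and routes are hypothetical, with a rooms route nodding to the “room service” example): one process dispatches every API path to an in-process handler, and “hoisting a service out” later just means removing its route here and pointing that path at a standalone deployment.

```python
# Hypothetical in-process handlers standing in for API services.
def profile_service(request):
    return {"service": "profile", "player": request.get("player")}

def room_service(request):
    # If this proves valuable, it can later be hoisted out into
    # its own standalone service behind the same path.
    return {"service": "rooms", "room": request.get("room")}

# One route table, one executable: the monolith.
ROUTES = {
    "/profile": profile_service,
    "/rooms": room_service,
}

def monolith(path, request):
    """Dispatch every service from a single process."""
    handler = ROUTES.get(path)
    if handler is None:
        return {"error": 404}
    return handler(request)
```

The appeal of this shape is that extraction is cheap: the call sites don’t change, only where the route points.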
What are the benefits of microservices?
Stephen Nichols: Multi-dimensional scaling is the main thing, in terms of both performance and team scalability. You can, for example, say this part of our system is using a lot of CPU, then easily take it out to scale independently; that’s the performance side. The other part is team scalability: it’s easy to split teams up so they can work on, and scale, different microservices.
Which database types do you guys use?
Douglas Manton: We use a combination of Couchbase, Elasticsearch and Redshift, all for different things.
Stephen Nichols: A good thing about Couchbase is that it has a Kubernetes operator, which makes it easy to deploy and scale on the platform. Another database I’d recommend is CockroachDB, a distributed system that lets you easily scale nodes up and down.
How do you avoid bugs?
Douglas Manton: Logging is a big part of it, along with being able to visualize those logs; OpenTracing is a standard for this. It also helps to run your systems through proper automated tests.
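As a small sketch of combining those two ideas, here is a Python function with an invented scoring rule (purely illustrative, not from any of the panelists’ games) that emits one structured JSON log line per event, which is the kind of output that makes log visualization practical, and is trivially covered by an automated test.

```python
import json
import logging

logger = logging.getLogger("match")

def award_points(score, kills):
    """Hypothetical game rule: 10 points per kill, floored at zero."""
    new_score = max(0, score + 10 * kills)
    # One structured (JSON) line per event; downstream tooling can
    # parse, aggregate and visualize these without regex scraping.
    logger.info(json.dumps({"event": "award_points",
                            "kills": kills,
                            "score": new_score}))
    return new_score
```

Because the rule is a pure function, an automated test can pin its behavior down independently of the logging pipeline.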
How much do you prepare for your game becoming a success and should you think about scalability from the outset?
Douglas Manton: Cloud platforms give you the ability to dynamically spin up servers in minutes, which makes it easy to scale. However, virtual servers certainly cost more than bare metal.
Stephen Nichols: I think it’s less important which provider you pick and more important to understand algorithms when you’re building your systems. Make sure, when you’re building your algorithms, that you really understand where the points of failure are so you can design around them. Your algorithm needs to work properly before you start thinking about scaling.