Super League: a journey to modern, hybrid bare metal Kubernetes

Bare metal Kubernetes promises many advantages — but it also has a reputation for being challenging to manage. Super League started with an on-prem strategy, moved to the cloud, and found its way back to bare metal, but now with a twist: a hybrid approach, leveraging the best that both bare metal and cloud have to offer, changed the game.

The writeup that follows is an edited transcript of a session that Super League delivered at KubeCon North America 2023 in Chicago.

You can watch the recording of the full session here 👉


Meet Justin Head

VP of DevOps at Super League

Justin has 20+ years of experience engineering infrastructure platforms and development workflows at companies such as X (Twitter), Blizzard, Obsidian Security, Palo Alto Networks, and various startups. He is the VP of DevOps at Super League, where he heads IT, infrastructure, and security.

There and back again: a journey to modern bare metal

This story could have also been called “There and back again, a hobbit's tale”. Or, in this case, a Minecraft tale. It's about a journey to bare metal. And “back again” because, as you'll see, we started on bare metal. In fact, so did I. I actually started my career in Chicago, at the Equinix data center. I used to spend a lot of long, cold nights there just racking and stacking, crash carting and fixing things that were broken.

Super League and Minehut

Essentially, Super League is a brand activation agency, helping bring brands like Mattel into Roblox, or Nickelodeon into Minehut or Minecraft worlds. Super League builds out experiences and brand interactions within digital worlds, where a lot of people are spending their time.

Minehut itself is the largest Java-based Minecraft server community in the world. It's different from other Minecraft hosting providers or running a server by yourself — it's all about interactivity and community, and a friendly place for kids to play Minecraft.

Survivor mode: placing our first block in the clouds

Let’s talk about what started all this off: we moved Minehut to the cloud.

Minehut started out with a provider called OVH, back in 2018, where we had dedicated servers and everything was done manually.

In February 2019, we migrated over to AWS ECS, as we were looking for a more professional platform where we could get support.

We eventually moved back to OVH in 2019 for cost control reasons, but it was still not the right solution for us. And, as OVH is primarily a European provider, the support hours were not ideal either. We also wanted on-demand capacity, the ability to work with APIs, and a more modern container orchestration platform.

After testing out both OVH and AWS, we decided to move to GCP in 2020, and Super League brought in a consultant who built a solution using Kubernetes. At the time, one of the only places to get managed Kubernetes was Google’s GKE, so that's where we moved.

I started at Super League one month before the Covid pandemic hit. I had actually started out building a global in-person esports platform, and that did not happen. When the pandemic started, the real world shut down and people started playing Minecraft on Minehut a lot more. The cost soon got out of control.

We were faced with a challenge. Minehut is essentially a free service: people can build their own server and Super League runs it for them. But that became unsustainable as the numbers grew, so we had to find a better way to keep it running.

"Leaning on Spectro Cloud enabled us to move quickly into bare metal hosting reducing the up front complexity, maintenance, and knowledge requirements. They collaborated as an extended part of the team to help us achieve a quick migration that resulted in significantly lower operating cost for our Kubernetes clusters."

Justin Head, VP of DevOps at Super League

Creative mode: laying the foundation for bare metal

We had to get creative. With GCP we were spending about $200k per month on Minehut — it was mostly compute, but a big part of it was also storage, egress costs and the overall costs of being in a top public cloud.

We looked at different options, including going back to dedicated servers and building capacity out ourselves. Ultimately, bare metal promised to save us at least 50% on the machine side. We wouldn’t need cloud block storage like Amazon EBS, because we would have storage inside the machines, and network savings were a big one, around 90%.

The image above is a simplified view of what Minehut looks like on top of Kubernetes, from the player traffic perspective. At the entry point, the player connects and goes through Cloudflare Spectrum to clean up the traffic and prevent major volumetric DDoS attacks.

From there, the traffic hits Velocity Proxy, an open source tool that can be used to proxy Minecraft traffic. It sits in a BGP ECMP (Equal Cost Multi-Path) setup that uses sticky sessions and consistent hashing to prevent players from dropping if something happens to a node.
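If you haven't run into consistent hashing before, the sketch below shows the basic idea in plain Python. It's a conceptual illustration, not Super League's actual proxy code: when a proxy node drops out of the ring, only the players that were mapped to it get reassigned, instead of every connection being reshuffled.

```python
# Conceptual sketch (not Super League's code): a minimal consistent-hash ring,
# illustrating why most players keep their proxy assignment when a node leaves.
import bisect
import hashlib


def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class ConsistentHashRing:
    def __init__(self, nodes, vnodes=100):
        self._ring = {}           # hash -> node name
        self._sorted_hashes = []
        for node in nodes:
            self.add_node(node, vnodes)

    def add_node(self, node, vnodes=100):
        for i in range(vnodes):
            h = _hash(f"{node}#{i}")
            self._ring[h] = node
            bisect.insort(self._sorted_hashes, h)

    def remove_node(self, node):
        self._sorted_hashes = [h for h in self._sorted_hashes if self._ring[h] != node]
        self._ring = {h: n for h, n in self._ring.items() if n != node}

    def get_node(self, key: str) -> str:
        # Walk clockwise around the ring to the first virtual node >= the key's hash.
        idx = bisect.bisect(self._sorted_hashes, _hash(key)) % len(self._sorted_hashes)
        return self._ring[self._sorted_hashes[idx]]


ring = ConsistentHashRing(["proxy-1", "proxy-2", "proxy-3"])   # hypothetical proxies
players = [f"player-{i}" for i in range(1000)]
before = {p: ring.get_node(p) for p in players}
ring.remove_node("proxy-2")                                    # simulate a proxy failing
moved = sum(1 for p in players if ring.get_node(p) != before[p])
print(f"{moved} of {len(players)} players were remapped")      # roughly a third, not all
```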

After Velocity, people will get dropped into the game lobby and that's where the community part happens. In the lobby, people can see the top servers that are being played at the moment, interact with brand events, do parkour and all these other Minecraft opportunities to get rewards. Then they can jump over to their own game server, or someone else's, and play Minecraft as a community or solo.

So how does elasticity work with bare metal?

If you haven't worked with bare metal before, or just haven't worked with it in a while, you'll notice the deployment times right away. Spinning up a full physical machine takes about 15 minutes, which includes powering it on, configuring it, and bringing it into the Kubernetes cluster.

As far as Minehut goes, our setup has different node pools that we use for different workloads inside of the cluster. We've got the control plane, default and proxy node pools, and those, for the most part, are not elastic; they have a fixed number of machines that we run due to Minehut’s stateful needs. 

On the game side, we autoscale our main workload — it's the one that goes up and down, with peaks and valleys. We have separate node pools for paid servers, as people can pay on Minehut to get extra CPU, RAM, or even disk space for their server, so they can host more plugins and players.

We have a paid and a free pool, and we use different schedulers for each of them. For the paid pool, we spread the pods evenly as demand is fairly static throughout the day, and most of those servers stay running 24/7. 

On the free pool, we use a bin-packing method on the scheduler to pack them in as tight as possible. When new servers come up, they get packed onto the fullest node that can take them, and the Kubernetes autoscaler can remove the nodes that are not in use as demand rises and falls through the day.

Our free servers have a 4-hour time limit before they get shut down for the day, and that allows us to do the bin-packing. Otherwise, people would keep their Minecraft servers up for months at a time, which would largely prevent us from doing any autoscaling.
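To make that trade-off concrete, here is a toy simulation with made-up node capacities, not the real scheduler configuration: bin-packing leaves whole nodes empty for the cluster autoscaler to remove, while an even spread keeps every node partially occupied.

```python
# Toy illustration (hypothetical numbers): why bin-packing the free pool lets the
# cluster autoscaler reclaim whole nodes, while spreading leaves every node in use.
NODE_COUNT = 8
SERVERS_PER_NODE = 8   # assumed capacity of one game node


def schedule(demand: int, binpack: bool) -> list[int]:
    """Return the servers-per-node layout after placing `demand` servers."""
    nodes = [0] * NODE_COUNT
    for _ in range(demand):
        candidates = [i for i, used in enumerate(nodes) if used < SERVERS_PER_NODE]
        # bin-pack -> fullest eligible node first; spread -> emptiest node first
        target = max(candidates, key=lambda i: nodes[i]) if binpack \
            else min(candidates, key=lambda i: nodes[i])
        nodes[target] += 1
    return nodes


for binpack in (True, False):
    layout = schedule(demand=20, binpack=binpack)
    removable = sum(1 for used in layout if used == 0)
    print(f"binpack={binpack}: layout={layout}, empty nodes={removable}")
# binpack=True  -> 3 nodes carry everything; 5 empty nodes can be scaled down
# binpack=False -> every node holds 2-3 servers; nothing can be removed
```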

But what about all of the other stuff? 

The game and the services don't run by themselves. There are other Kubernetes services we run, like databases, messaging, observability, etc. We use operators or open source solutions, and we try to run everything we can inside of Kubernetes workloads, running on top of Spectro Cloud Palette. 

When it comes to storage, we try to keep it really simple with a product called TopoLVM. It’s basically a CSI driver that uses local disks to provision persistent volumes through the Kubernetes API, so everything flows through standard Kubernetes resources.
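As a rough illustration of what provisioning a volume through the Kubernetes API looks like, the snippet below uses the official Kubernetes Python client to request a PersistentVolumeClaim against a TopoLVM-backed storage class. The class name, namespace and size are assumptions for the example, not values from our environment.

```python
# Hedged sketch: requesting a locally backed volume through the Kubernetes API.
# Storage class name, namespace and size are assumptions for illustration.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="game-world-data"),          # hypothetical name
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="topolvm-provisioner",                  # assumed class name
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="minehut",                                            # hypothetical namespace
    body=pvc,
)
```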

We run one thing in the bare metal environment outside of Kubernetes: MinIO, which we use for object storage. The game world in Minehut is basically a database: it’s the state of the world, with everything that has been created in it. We back that up as an object into MinIO, and we keep hot data sitting there in the data center for about two to three weeks after the server has started. After that, it gets flushed out.

We also use Backblaze in the background to keep a full archive of every server that has ever been created on Minehut.
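To give a sense of that flow, here is a minimal sketch using the MinIO Python SDK to archive a world directory and upload it as an object. The endpoint, bucket, paths and object names are hypothetical.

```python
# Minimal sketch (assumed names throughout): archiving a game world and storing
# it as an object in MinIO with the MinIO Python SDK.
import shutil
from datetime import datetime, timezone

from minio import Minio

client = Minio(
    "minio.internal.example:9000",   # hypothetical in-datacenter endpoint
    access_key="ACCESS_KEY",
    secret_key="SECRET_KEY",
    secure=False,
)

BUCKET = "minehut-worlds"            # hypothetical bucket
if not client.bucket_exists(BUCKET):
    client.make_bucket(BUCKET)

# Pack the world directory into a single archive, then upload it as one object.
archive = shutil.make_archive("/tmp/world-backup", "gztar", root_dir="/data/world")
stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
client.fput_object(BUCKET, f"server-1234/world-{stamp}.tar.gz", archive)
```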

Although we love bare metal, we use some cloud and SaaS offerings for things that our small team wouldn’t be able to manage in our bare metal environments. Our philosophy is to select tooling that works across clouds wherever possible. 

Hardcore mode: breaking the surface of lifecycle management

Statefulness of Minehut

Three parts of Minehut are stateful — the Velocity Proxy, the game world and the game lobby — and they all behave differently. The main thing we're trying to prevent is player disruption. If you play video games, the worst thing is dropping off the server and losing your progress.

For the game lobby, on the other hand, it's not a big deal as it is not where you're building things, and you’d get switched to another lobby if one were to break down. 

The only real sticking point is on the Velocity Proxy side. Minecraft does not behave like a browser, where you're just hitting some service — it's not going to retry for you. If that proxy goes down, you get dropped and disconnected, and you have to load back in through the client. It is unfortunate, but it’s just something that we have to deal with.

Platform upgrades with Spectro Cloud Palette

Time to get more practical: how do we do upgrades when we have all these stateful databases of the Minecraft world running? Through Spectro Cloud Palette, we use some special flags inside Cluster API (CAPI) to pause all the game pools, as well as the proxy pools. These steps can be done weeks in advance of the planned downtime.

Once we pause the pools, we can update the cluster specifications, which can include updating the Kubernetes version, among other things. In the meantime, the control plane, the default pools, and even the pools running databases can continue to run without disruption. As we’re dealing with bare metal, it can take up to 30 minutes to complete the process of spinning up a new machine and releasing the old one back into inventory for future use.

After performing these initial steps, we move on to the full downtime maintenance, for which we let the community know in advance that we’re going to shut down for a few hours. 

As part of the maintenance, we safely drain all of the game worlds off, save them in MinIO and back them up into Backblaze. After that, we do a mass delete of all the machines for the game and the proxy pools. We delete them through CAPI, then tell CAPI to unpause, and it brings everything back up at once.
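Under the hood, pausing and unpausing CAPI reconciliation comes down to toggling the cluster.x-k8s.io/paused annotation on the relevant resources. Palette drives this workflow for us, but the hedged sketch below shows roughly what it might look like against MachineDeployments with the Kubernetes Python client; the pool names and namespace are assumptions.

```python
# Hedged sketch: pausing/unpausing CAPI reconciliation of MachineDeployments by
# toggling the cluster.x-k8s.io/paused annotation. Names/namespace are assumptions.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

GROUP, VERSION, PLURAL = "cluster.x-k8s.io", "v1beta1", "machinedeployments"
NAMESPACE = "minehut"                                        # hypothetical namespace
POOLS = ["game-free-pool", "game-paid-pool", "proxy-pool"]   # hypothetical pool names


def set_paused(name: str, paused: bool) -> None:
    # A JSON merge patch with a null value removes the annotation (unpause).
    value = "true" if paused else None
    patch = {"metadata": {"annotations": {"cluster.x-k8s.io/paused": value}}}
    api.patch_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL, name, patch)


# Weeks ahead of the window: stop CAPI from rolling the game and proxy pools.
for pool in POOLS:
    set_paused(pool, True)

# ...maintenance happens: specs updated, worlds backed up, machines deleted...

# End of the window: unpause and let CAPI bring everything back up at once.
for pool in POOLS:
    set_paused(pool, False)
```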

This approach minimizes downtime for us — we plan for a 2-hour maintenance window, but we usually can get it done within an hour.

Emergency maintenance

There are occasional OS security and Kubernetes issues that need to be addressed right away. As we’re dealing with bare metal, we might face different challenges compared to working with VMs and AWS — occasionally, things just go wrong with the machines. 

To address these emergency maintenance needs, we use a couple of open source projects. One of them is called Node Problem Detector (NPD), and it looks at low-level items within nodes to ensure they’re working as expected.

We also use another project called Kured, the Kubernetes Reboot Daemon. Our nodes run Ubuntu, and Kured watches for the /var/run/reboot-required file. As we have stateful workloads, we can’t freely reboot servers when security issues are found, unless it’s a serious one. We allow Kured and NPD to run on the default and other node pools with permission to reboot as needed, while making sure that only one node is rebooting at a time. In particular cases, we can consider a special release that includes rebooting game nodes.
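As a conceptual sketch (not Kured's actual implementation), the snippet below shows the two ingredients: checking Ubuntu's reboot-required sentinel, and taking a cluster-wide Lease as a lock so at most one node reboots at a time. The namespace and lease name are assumptions, and cordon/drain and lock release are left out for brevity.

```python
# Conceptual sketch, not Kured itself: detect the reboot-required sentinel and
# grab a cluster-wide Lease "lock" so only one node reboots at a time.
import os
import socket

from kubernetes import client, config
from kubernetes.client.rest import ApiException

SENTINEL = "/var/run/reboot-required"        # Ubuntu's reboot sentinel file
LEASE_NAME, NAMESPACE = "reboot-lock", "kube-system"   # assumed names


def try_acquire_reboot_lock() -> bool:
    config.load_incluster_config()
    coordination = client.CoordinationV1Api()
    lease = client.V1Lease(
        metadata=client.V1ObjectMeta(name=LEASE_NAME),
        spec=client.V1LeaseSpec(holder_identity=socket.gethostname()),
    )
    try:
        coordination.create_namespaced_lease(NAMESPACE, lease)
        return True                           # we hold the lock; safe to proceed
    except ApiException as exc:
        if exc.status == 409:                 # another node is already rebooting
            return False
        raise


if os.path.exists(SENTINEL) and try_acquire_reboot_lock():
    print("reboot required and lock acquired: cordon, drain, then reboot this node")
```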

We do also have an emergency way for the game nodes and pods to come down. For that, we extend the pod terminationGracePeriodSeconds to make it quite long, and we intercept the SIGTERM and do a game save into MinIO. It’s fairly safe to do, but it does cause player disruption, which means we only do it if it’s truly required.
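A hedged sketch of that emergency path: the pod spec carries a long terminationGracePeriodSeconds, and the game process traps SIGTERM so the world can be archived into MinIO before the pod exits. The paths, endpoint and bucket below are hypothetical, mirroring the earlier backup sketch.

```python
# Hedged sketch of the emergency path: trap SIGTERM and save the world to MinIO
# before exiting, within a generous terminationGracePeriodSeconds on the pod.
import shutil
import signal
import sys
import time

from minio import Minio


def save_world_and_exit(signum, frame):
    # Archive the world directory and upload it before the grace period ends.
    archive = shutil.make_archive("/tmp/world-final", "gztar", root_dir="/data/world")
    minio_client = Minio("minio.internal.example:9000",          # hypothetical endpoint
                         access_key="ACCESS_KEY", secret_key="SECRET_KEY", secure=False)
    minio_client.fput_object("minehut-worlds", "server-1234/world-final.tar.gz", archive)
    sys.exit(0)                                                   # exit cleanly


signal.signal(signal.SIGTERM, save_world_and_exit)

while True:          # stand-in for the game server's main loop
    time.sleep(1)
```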

Game results

With bare metal, we achieved an outstanding 65% cost reduction. On the machine side alone, we saw reductions from 55% to 66%, depending on the configuration, and on the network we got a 90% to 100% reduction. This last number might not sound right, but the reason behind it is that some bare metal providers give you the network for free with the machine price.

Another key result for us was roughly 15% better performance compared to VMs. And we didn't have to add any new people to the team to manage all of this. The only downsides are that some of these services aren't stressed as much, so you might hit some interesting issues, and that infrastructure iterations take longer.

What’s the next step in your bare metal Kubernetes journey?

Learn about our bare metal K8s partnership with our friends at Canonical.

Watch the webinar

Check out our bare metal solutions and catch up on related blogs, demos and documentation.

Discover more