Exploring the high-performance hardware behind Bunny CDN

As the team behind a content delivery platform, we're obsessed not just with the networks and software that make the magic happen, but also with the hardware.

For most people, ‘the cloud’ has become so abstract that it may as well be a fluffy shape in the sky, and as consumers we’ve got used to cloud services just magically working (most of the time). But anyone who has ever visited a data center - the real home of the cloud - knows that ‘clouds’ are really big, ugly buildings with lots of security and rows upon rows of humming server racks blinking away in a temperature-controlled environment.

This hardware plays a critical role in delivering great performance and reliability. Just as the network is essential for getting content from A to B, the machines that sit at the end of those network links are tasked with sending and receiving data quickly, efficiently, and securely. Cloud-based services only seem to work magically because effort is invested in building a network around reliability, redundancy, and - in the case of bunny.net - performance.

Behind the scenes of bunny.net

We are sure many of you are just as excited about server hardware as we are, so this blog post will take you deep into the burrows and share a sneak peek at the hardware behind the scenes of bunny.net.

Throughout the years, we have experimented with many different hardware configurations, with results ranging from great to disappointing. In fact, we've managed to bottleneck everything from CPUs to SSDs to network cards and even PCIe lanes.

But failure often teaches you more than success, and thanks to the lessons learned, we have been able to develop our ideal custom hardware configuration. Because the bunny.net network is always growing, the goal was to find something that works well standalone, letting us launch new PoPs with ease, but that also works well in a larger cluster as those locations grow. We now run a fairly standard configuration that works great for the vast majority of situations.

Experimental server with dual Xeon Gold CPUs, 80 Gbit of connectivity, and 32 x NVMe drives

Configuration of a high-performance server

The configuration we chose gives us a great mix of processing power, ultra-fast storage, and next-generation networking - everything we need to build a faster internet. We aim to combine approximately 20 TB of fast storage, 256 GB of memory, 40 Gbps of connectivity, and a powerful CPU to back it all up. At the moment, an example configuration looks like this:

AMD EPYC 7402P 
256 GB of RAM 
40 Gbit Mellanox NIC 
10 x 1.92 TB NVMe
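To show how that list lines up with the roughly 20 TB storage target, here's a minimal sketch in Python. The NodeSpec structure and its field names are purely illustrative and not part of our actual tooling; the figures are simply the ones from the list above.

```python
from dataclasses import dataclass

# Illustrative only: a toy representation of the edge node spec listed above.
@dataclass
class NodeSpec:
    cpu: str
    ram_gb: int
    nic_gbit: int
    nvme_drives: int
    nvme_capacity_tb: float

    @property
    def raw_storage_tb(self) -> float:
        # Total raw flash per node, before any caching overhead.
        return self.nvme_drives * self.nvme_capacity_tb

edge_node = NodeSpec(
    cpu="AMD EPYC 7402P",
    ram_gb=256,
    nic_gbit=40,
    nvme_drives=10,
    nvme_capacity_tb=1.92,
)

print(f"Raw storage per node: {edge_node.raw_storage_tb:.1f} TB")
# -> 19.2 TB of raw flash, right around the ~20 TB target
```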

Bunny CDN edge nodes being built

In the past, we relied heavily on Intel Xeon CPUs, but thanks to the very impressive processors AMD is producing these days, switching from the blue team to the red team became an easy and obvious choice. At the same time, we also made the switch from Supermicro to Gigabyte servers. While not the most popular choice in the enterprise world, they work extremely well for us thanks to their excellent AMD support.

For storage, we switched from plain old SSDs to ultra-fast NVMe drives (Non-Volatile Memory Express, for the uninitiated), which gives our edge an even greater performance boost than before and further reduces the latency when accessing your files, thanks to a highly scalable storage protocol that connects the host directly to the non-volatile memory subsystem.
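If you want to get a feel for that latency difference yourself, here's a rough random-read micro-benchmark sketch in Python (Linux only; the /mnt/nvme/testfile.bin path is just a placeholder, and this isn't the tooling we use internally). It opens the file with O_DIRECT to bypass the page cache, so you're timing the drive rather than RAM:

```python
import mmap
import os
import random
import time

# Placeholder path: point this at a large file on the volume you want to test.
PATH = "/mnt/nvme/testfile.bin"
BLOCK = 4096          # read size; must stay aligned for O_DIRECT
SAMPLES = 1000

# O_DIRECT bypasses the page cache so we measure the drive, not memory.
fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
size = os.fstat(fd).st_size
buf = mmap.mmap(-1, BLOCK)   # anonymous mapping gives us a page-aligned buffer

latencies = []
for _ in range(SAMPLES):
    offset = random.randrange(size // BLOCK) * BLOCK   # aligned random offset
    start = time.perf_counter()
    os.preadv(fd, [buf], offset)
    latencies.append(time.perf_counter() - start)
os.close(fd)

latencies.sort()
median = latencies[len(latencies) // 2]
p99 = latencies[int(len(latencies) * 0.99)]
print(f"median: {median * 1e6:.0f} us, p99: {p99 * 1e6:.0f} us")
```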

For connectivity, it was the recent strides in innovation made by Mellanox that helped us switch away from more dated Intel NICs. Our drive for performance means all of our technology has to be cutting edge, and Mellanox also gives us the advantage of working with latest-generation PCIe 4.0 lanes to avoid the potential bottlenecks that can appear when pushing large amounts of traffic.
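To put the PCIe point in perspective, here's some quick back-of-the-envelope arithmetic. This is only a sketch: it assumes the NIC sits in an x8 slot, and it uses the roughly 1 GB/s and 2 GB/s per-lane figures for PCIe 3.0 and 4.0 after encoding overhead.

```python
# Back-of-the-envelope: how much headroom does the PCIe link leave above the NIC?
# Per-lane usable throughput (GB/s, per direction) after 128b/130b encoding.
PCIE_PER_LANE_GBPS = {"3.0": 0.985, "4.0": 1.969}

nic_gbit = 40                 # 40 Gbit Mellanox NIC
nic_gbytes = nic_gbit / 8     # = 5 GB/s at line rate
lanes = 8                     # assumed x8 slot

for gen, per_lane in PCIE_PER_LANE_GBPS.items():
    link = per_lane * lanes
    print(f"PCIe {gen} x{lanes}: {link:.1f} GB/s -> "
          f"{link / nic_gbytes:.1f}x the NIC's {nic_gbytes:.0f} GB/s")
# PCIe 3.0 x8 already covers 40 Gbit, but PCIe 4.0 roughly doubles the headroom,
# which matters once the same lanes are also shuffling data to and from NVMe.
```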

Hardware that keeps evolving

The beauty of this configuration is that we are able to run our servers with a very modular design. Each server is small and economical enough to run on its own, yet they scale horizontally, allowing us to build much bigger clusters with hundreds of terabytes of capacity.
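As a rough illustration of that horizontal scaling, here's the per-node number extended to a cluster. The node counts are hypothetical examples for the sake of the arithmetic, not actual PoP sizes:

```python
# Rough cluster sizing based on the per-node spec above (10 x 1.92 TB NVMe).
NODE_RAW_TB = 10 * 1.92        # 19.2 TB of raw flash per node
NODE_NIC_GBIT = 40             # 40 Gbit of connectivity per node

for nodes in (1, 8, 16, 32):   # hypothetical PoP sizes
    print(f"{nodes:>2} nodes: {nodes * NODE_RAW_TB:>6.1f} TB raw, "
          f"{nodes * NODE_NIC_GBIT:>4} Gbit aggregate connectivity")
# 32 nodes already adds up to ~614 TB of raw flash and 1,280 Gbit of uplink.
```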

While the servers are tailored to our high-performance requirements, working with standardized hardware allows us to easily manage and add capacity. We will also continue to monitor new technologies and upgrade components as needed to keep evolving our infrastructure.

We hope you enjoyed this under-the-hood look at how we power your content delivery network! Hardware is always evolving, and it's exciting for us to follow the latest developments to make sure we keep hitting our targets for high performance. If you found this post interesting and would like to learn more about our infrastructure, please let us know and we'll be happy to share!

Find out more about Bunny CDN