How We Removed The Latency Overhead Of Dynamic Image Transformation

Dynamic image transformation offers powerful features and convenience for developers, but it has a fundamental problem: latency.

This is especially problematic in globally distributed systems, where processing happens on the edge and the latency is tricky to work around. We tackled the problem head on, and came very close to eliminating it.

As a result, Bunny Optimizer offers one of the fastest dynamic image transformation APIs out there. We're very passionate about performance and technology, so we wanted to share how we achieved this.

Of course, like any performance-conscious company, we made sure the optimization and transformation itself works as quickly as possible. We use caching and efficient code, backed by powerful new Ryzen CPUs. On average, an image is processed in around 40 ms or less: faster than the blink of an eye, and hardly a problem on its own.

But as it turned out, the transformation itself wasn't the biggest issue at all. Instead, it was the network latency between the CDN nodes and the origin that we needed to solve.

How Distance Kills TTFB With Dynamic Image Processing

Usually, when a CDN receives a request that isn't cached, it connects to the origin and starts reading the response. As soon as the first bytes of data arrive, the CDN node starts writing the response to the user. The user sees a slight delay, but the file nevertheless starts loading almost immediately.

When you add in image transformation, things get trickier. The response can no longer be immediately streamed back to the user. Instead, the transformation system needs to download the full file first, then process the image. Only once all of that is done can it start returning the processed image to the CDN.
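The difference between the two flows can be sketched with a small timing model. This is a simplified illustration, not our actual code: `fetch_chunks`, the chunk sizes, and all the delays are made-up numbers chosen to show the effect.

```python
import time

def fetch_chunks(origin_rtt_s, chunks=10, chunk_delay_s=0.01):
    """Simulate streaming a file from the origin: the first bytes
    arrive after one round trip, then chunks trickle in."""
    time.sleep(origin_rtt_s)           # connect + wait for first byte
    for _ in range(chunks):
        time.sleep(chunk_delay_s)      # subsequent chunks arrive
        yield b"x" * 1024

def streamed_ttfb(origin_rtt_s):
    """Plain CDN proxying: the first byte can be sent to the user
    as soon as the first chunk arrives from the origin."""
    start = time.monotonic()
    for _chunk in fetch_chunks(origin_rtt_s):
        return time.monotonic() - start

def transformed_ttfb(origin_rtt_s, processing_s=0.04):
    """Image transformation: the whole file must be downloaded and
    processed before the first byte of the result can be sent."""
    start = time.monotonic()
    data = b"".join(fetch_chunks(origin_rtt_s))  # full download first
    time.sleep(processing_s)                     # ~40 ms transform
    return time.monotonic() - start
```

With a distant origin (say a 150 ms round trip), streaming can start replying almost immediately after the first chunk, while the transforming path has to wait for the entire download plus processing before emitting anything.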

Factor in long distance and you have a disaster on your hands. If the processing node is far away from the origin, it can take half a second or more just to fetch the image file. In the meantime, the browser is left waiting for data and the user stares at a blank screen where nothing is loading, which essentially destroys the user experience.

This was a big issue we saw with some traditional dynamic processing services. Optimization usually happened in one of two ways: either on the edge as part of a global compute CDN system, or in a centralized location where the service operated. Either way, the performance wasn't very good and could be quite unpredictable.

We always try to push performance to the next level and wanted to do better. There is no reason the convenience of dynamic image transformation should come at the cost of performance and leave users waiting.

So if distance kills performance, we wanted to kill the distance.

Moving From The Edge To The Origin

The solution was actually quite simple: kill the distance and the latency by processing the files right next to the origin. We already have a massive global network with 43 PoPs, so in the majority of cases one of our PoPs is just a few milliseconds away from the origin.

To achieve this, we developed a special routing system that automatically detects optimization requests and routes them to the PoP closest to your own server. This significantly reduces the distance between our system and your origin. The CDN edge no longer needs to cross half the world to fetch the file, cutting download times by 90% or more before we start returning the file to the CDN.
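The core routing decision can be sketched in a few lines. The PoP names and latency figures below are purely illustrative (the real system measures origin proximity dynamically), and `pick_processing_pop` is a hypothetical name:

```python
# Hypothetical measured round-trip times from each PoP to a customer
# origin hosted in Germany; in reality these are probed, not hardcoded.
POP_TO_ORIGIN_RTT_MS = {
    "ny": 95.0,    # New York
    "la": 140.0,   # Los Angeles
    "fra": 4.0,    # Frankfurt, right next to the origin
    "sg": 180.0,   # Singapore
}

def pick_processing_pop(rtt_ms_by_pop):
    """Route the optimization request to the PoP closest to the origin,
    instead of processing on whichever edge node received the hit."""
    return min(rtt_ms_by_pop, key=rtt_ms_by_pop.get)
```

For the sample numbers above, a request hitting the Singapore edge would still have its transformation performed in Frankfurt, so the slow origin fetch happens over a few milliseconds instead of a transcontinental round trip.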

This introduces three important benefits compared to traditional systems:

  • With Bunny Optimizer, the TTFB is much lower, because our optimization happens right next to your origin. This means we can fetch files extremely quickly and with minimal latency.
  • We shrink the images before they even leave the origin region. The CDN only needs to load the optimized file, which is usually much smaller than the original image. For large uncached images, this can actually result in a faster load time despite the processing overhead.
  • Image processing becomes centralized, but in a good way. Thanks to caching, we only need to optimize each image once. Even if it's requested from 40 PoPs around the world, only the first request has to wait. This reduces cost and improves performance even further.

Combined, these benefits make Bunny Optimizer run as quickly as it does, and we're very excited to offer a great experience to your users.

Finally, we took the solution one step further. With our latest Perma-Cache feature, each image is processed only once and then automatically replicated to multiple storage regions around the world. No more overhead, just fast performance every single time.

We're very excited to be able to tie our features together and offer an incredible solution for performance-critical workloads, allowing your content to hop faster and faster.

Perma-Cache is one such feature, and we're excited to see what else we can do with it in the future.

Learn more about Bunny Optimizer