In this post, I’ll share my experiments combining ReSTIR and variable rate tracing (VRT) in my toy rendering engine. The idea is straightforward: ReSTIR’s spatial and temporal passes already find and reuse valuable samples for each pixel, so why not use them to fill the “missing rays” for pixels skipped by VRT in the current frame? Over the past few months, I’ve tested multiple ways to make these techniques work together. Below, I’ll discuss what worked well, what didn’t, and why.
Before diving into the main topic, I’d like to provide some background on variable rate tracing (VRT). This technique has already been explored in several shipped titles, and there are a couple of presentations that share details about different implementations.
One of these presentations is It Just Works: Ray-Traced Reflections in Battlefield V. The approach described there is to compute a specular and diffuse ratio. After that, the screen is divided into 16×16 tiles, and for each tile, they select the maximum ratio among all pixels in that tile. Next, they sum those maximum ratios from all tiles, and each tile’s final value is normalized by dividing it by that sum. This normalized value represents the fraction of the total ray budget allocated to that tile. To simplify ray allocation, they round the number of rays for each tile to a power of two. They use additional random numbers to decide whether to round up or down; otherwise, each tile would consistently trace too many or too few rays.
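The budget-allocation scheme above can be sketched as follows. This is a CPU-side Python illustration of the idea as I understand it from the presentation, not Battlefield V’s actual implementation; the function name and the exact stochastic-rounding formula are my own.

```python
import math


def allocate_tile_rays(tile_max_ratios, total_ray_budget, rng):
    """Distribute a frame's ray budget across tiles in proportion to each
    tile's maximum per-pixel ratio, rounding each allocation to a power of
    two stochastically so the expected total stays near the budget."""
    total = sum(tile_max_ratios)
    allocations = []
    for ratio in tile_max_ratios:
        # This tile's fractional share of the total budget.
        exact = total_ray_budget * ratio / total
        if exact <= 1.0:
            # Too few rays to bracket with powers of two: trace one ray
            # with probability equal to the fractional share.
            allocations.append(1 if rng() < exact else 0)
            continue
        # Bracket the exact count between two adjacent powers of two.
        lo = 2 ** math.floor(math.log2(exact))
        hi = lo * 2
        # Round up with probability proportional to where `exact` falls
        # between lo and hi, so the expected ray count equals `exact`.
        p_up = (exact - lo) / (hi - lo)
        allocations.append(hi if rng() < p_up else lo)
    return allocations
```

With a deterministic `rng` you can see the stochastic rounding at work: a tile whose exact share is 6 rays is bracketed by 4 and 8 and rounds up half the time.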
A slightly different solution was proposed by the developers of Avatar: Frontiers of Pandora in Raytracing in Snowdrop: An Optimized Lighting Pipeline for Consoles. In this case, the variable rate is based on a computed “quality” value. First, they compute quality per pixel by considering luminance, hit distance, roughness, light variance, and surface specularity. Next, they compute both the average and the maximum pixel quality per 16×16 tile. They also reproject tile quality from the last frame and combine these three qualities into a final per-tile quality value, which determines the tracing rate for the next frame. Additionally, they use geometry normals to decide along which axis to halve the resolution.
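To make the combination step concrete, here is a minimal sketch of merging the three quality values. The weights and the temporal-smoothing factor are illustrative guesses of mine, not Snowdrop’s actual formula.

```python
def tile_quality(pixel_qualities, reprojected_quality):
    """Combine a tile's average and maximum per-pixel quality with the
    reprojected tile quality from the previous frame. Weights (0.5/0.5)
    and the 0.9 temporal decay are hypothetical placeholders."""
    avg_q = sum(pixel_qualities) / len(pixel_qualities)
    max_q = max(pixel_qualities)
    # Bias toward the maximum so a single demanding pixel keeps the
    # whole tile at a high tracing rate.
    current = 0.5 * avg_q + 0.5 * max_q
    # Let quality decay slowly rather than drop instantly, to avoid
    # flicker when the heuristic oscillates between frames.
    return max(current, 0.9 * reprojected_quality)
```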
Another approach was introduced by the developers of The Callisto Protocol, described in The Rendering of The Callisto Protocol. In that presentation, the authors describe variable rate tracing for both shadows and reflections.
In this implementation, for both shadows and reflections, the tracing rate is chosen among four modes: 1 ray per 1×1, 1×2, 2×1, or 2×2 pixels. Unlike the Battlefield V method, here the variable rate is calculated after tracing, then reprojected in the following frame, similarly to Avatar.
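Selecting among the four modes might look like the sketch below. The presentation doesn’t spell out the exact heuristic, so the per-axis error metric and threshold here are hypothetical stand-ins; only the four output rates come from the talk.

```python
def choose_trace_rate(error_x, error_y, threshold):
    """Pick one of the four Callisto-style tracing rates from per-axis
    error estimates (e.g. luminance deltas along x and y). Low error
    along an axis means that axis can be traced at half rate."""
    coarse_x = error_x < threshold  # little horizontal detail
    coarse_y = error_y < threshold  # little vertical detail
    if coarse_x and coarse_y:
        return (2, 2)  # 1 ray per 2x2 pixels
    if coarse_x:
        return (2, 1)  # 1 ray per 2x1 pixels
    if coarse_y:
        return (1, 2)  # 1 ray per 1x2 pixels
    return (1, 1)      # full rate: 1 ray per pixel
```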
Although not strictly related to ray tracing, I’d also like to mention Nanite GPU-Driven Materials, which demonstrates excellent results using software variable rate shading. The authors replaced hardware VRS (8×8 tiles on AMD GPUs and 16×16 on NVIDIA) with a software approach that always uses 2×2 tiles. This yielded remarkable performance gains - an 18% reduction in shaded pixels.
Another example with small tiles appears in Software-Based Variable Rate Shading in Call of Duty: Modern Warfare. While it’s specific to rasterization (using MSAA and stencil tricks), it offers some interesting ideas for ray tracing. In particular, it discusses storing more than one VRS value. Because TAA (or other super-resolution techniques) often rely on jitter, information from a single frame may skip edges and produce an overly low shading rate, causing additional quality loss. To address that, the authors store a VRS mask for the last four frames - matching their jitter sequence length - and use this history to conservatively pick the shading rate for the next frame. They reproject the entire history in one step by packing all four frames into a single 8-bit texture (giving 2 bits per frame).
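The 4-frame history packing can be sketched in a few lines. The bit layout and the convention that 0 encodes the finest shading rate are my assumptions; the talk only states that four 2-bit values are packed into one 8-bit texel and that the choice is conservative.

```python
def push_vrs_history(history_byte, new_rate_bits):
    """Shift a new 2-bit shading-rate value into an 8-bit history that
    holds the last four frames (matching a 4-frame jitter sequence).
    The oldest frame's bits fall off the top."""
    return ((history_byte << 2) | (new_rate_bits & 0b11)) & 0xFF


def conservative_rate(history_byte):
    """Pick the finest rate seen in any of the four stored frames
    (assuming 0 = full rate), so an edge revealed by jitter in any
    recent frame keeps the shading rate high."""
    return min((history_byte >> (2 * i)) & 0b11 for i in range(4))
```

Reprojecting a single 8-bit texture thus moves all four frames of history in one step, exactly the property the talk highlights.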
Many offline renderers also use some form of adaptive tracing, though it differs from real-time approaches. Instead of tracing fewer rays in the current or subsequent frame, offline methods often employ a heuristic to stop casting additional rays once a pixel’s result seems converged. A great resource on this is Real-Time Rendering’s Next Frontier: Adopting Lessons from Offline Ray Tracing to Real-Time Ray Tracing for Practical Pipelines by Matt Pharr. He describes computing per-pixel variance and stopping ray generation if that variance is low enough. He also stresses that each pixel should consider the variance of its neighbors, to prevent missing important directions if all samples in the current pixel “failed” to find a bright reflection.
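A minimal sketch of this stopping criterion, assuming a simple unbiased variance estimate of the pixel’s mean and a user-chosen threshold (both function names and the exact test are illustrative, not Pharr’s formulation):

```python
def variance_of_mean(samples):
    """Unbiased estimate of the variance of the pixel's mean estimator:
    sample variance divided by the sample count."""
    n = len(samples)
    mean = sum(samples) / n
    sample_var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return sample_var / n


def should_stop(pixel_var, neighbor_vars, eps):
    """Stop casting more rays only when both the pixel's own variance
    and every neighbor's variance are below the threshold, so a pixel
    whose samples all happened to miss a bright reflection isn't
    terminated early just because its own variance looks low."""
    return pixel_var < eps and max(neighbor_vars) < eps
```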
A similar approach is used, for example, in Disney’s Hyperion renderer, as detailed in The Design and Evolution of Disney’s Hyperion Renderer.