
Neural Supersampling and Denoising for Real-time Path Tracing

Creating realistic images has long been a challenging problem in computer graphics, especially when rendering scenes with complex lighting. Path tracing achieves photorealistic quality by simulating how light rays bounce around a scene and interact with different materials, but it requires significant computation to generate clean images. This is where neural supersampling and denoising come into play. In this blog post, we describe how our neural supersampling and denoising work together to push the boundaries of real-time path tracing.

The math behind rendering

The rendering equation [1] formulates the outgoing radiance (L_o) leaving a surface point (x) as the sum of the emitted radiance (L_e) and the incoming radiance (L_i) scattered according to the material (f_r) and geometric (\overrightarrow{\omega_i} \cdot \overrightarrow{n}) terms, integrated over a hemisphere (\Omega) centered around the surface point.

L_o (x, \overrightarrow{\omega_o}) = L_e (x, \overrightarrow{\omega_o}) + \int_{\Omega} f_r (x, \overrightarrow{\omega_i}, \overrightarrow{\omega_o})\ L_i (x, \overrightarrow{\omega_i})\ (\overrightarrow{\omega_i} \cdot \overrightarrow{n})\ d\overrightarrow{\omega_i}

Monte Carlo integration is a stochastic technique that estimates integrals from random samples in the domain. Path tracing uses Monte Carlo integration to estimate the integral in the rendering equation with random rays cast from the virtual camera over all possible light paths scattered from surfaces. Path tracing is conceptually simple and unbiased, and it captures complex physically based rendering effects like reflections, refractions, and shadows for a wide variety of scenes.
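As a toy illustration of the idea, the sketch below (our own simplified code, not from any production renderer) Monte Carlo-integrates the scattering term of the rendering equation for a single Lambertian surface point under constant incoming radiance L_i = 1, a setup chosen because the integral has the closed-form answer `albedo`:

```python
import math
import random

def sample_hemisphere():
    # Uniformly sample a direction on the unit hemisphere around n = (0, 0, 1).
    u1, u2 = random.random(), random.random()
    z = u1                                  # cos(theta) of the sampled direction
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_outgoing_radiance(albedo=0.7, spp=50_000):
    # Monte Carlo estimate of the scattering integral for a Lambertian surface
    # (f_r = albedo / pi) under constant incoming radiance L_i = 1 and no
    # emission. The exact result is `albedo`, so the error is easy to check.
    pdf = 1.0 / (2.0 * math.pi)             # uniform hemisphere pdf
    f_r = albedo / math.pi
    total = 0.0
    for _ in range(spp):
        wi = sample_hemisphere()
        cos_theta = wi[2]                   # dot(wi, n) with n = (0, 0, 1)
        total += f_r * 1.0 * cos_theta / pdf
    return total / spp

random.seed(0)
print(estimate_outgoing_radiance())         # approaches 0.7 as spp grows
```

Real path tracers importance-sample the BRDF and light sources rather than sampling the hemisphere uniformly, but the estimator structure (sum of sampled integrand over pdf) is the same.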


Figure 1. Photorealistic rendering by path tracing with 32768 samples per pixel. (Classroom rendered by Capsaicin [10].)

Noise in path tracing images

The randomness of Monte Carlo sampling inherently produces noise when scattered rays fail to reach a light source after multiple bounces. Many samples per pixel (spp) are therefore required to achieve high-quality pixels in Monte Carlo path tracing, often taking minutes or even hours to render a single image. And although more samples per pixel generally means less noise, in many cases even several thousand samples fall short of convergence and leave visually distracting noise in the image.
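This slow convergence is easy to demonstrate on a toy integral: the Monte Carlo error shrinks only as O(1/√n), so quadrupling the sample count merely halves the noise. A minimal sketch (our own example, using a simple 1D integral in place of the rendering equation):

```python
import random

def mc_estimate(n):
    # Monte Carlo estimate of the toy integral of x^2 over [0, 1] (exact: 1/3).
    return sum(random.random() ** 2 for _ in range(n)) / n

def rmse(n, trials=2000):
    # Root-mean-square error of the n-sample estimator, measured empirically.
    sq = sum((mc_estimate(n) - 1.0 / 3.0) ** 2 for _ in range(trials))
    return (sq / trials) ** 0.5

random.seed(1)
# Quadrupling the sample count only roughly halves the error (O(1/sqrt(n))),
# which is why brute-force path tracing converges so slowly.
print(rmse(64), rmse(256))
```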


Figure 2. Path tracing with 32768 samples per pixel can still show noise. (Evermotion Archinteriors rendered by Capsaicin [10].)

Reconstructing pixels in noisy rendering

Denoising is one technique to address the high number of samples required in Monte Carlo path tracing. It reconstructs high-quality pixels from a noisy image rendered with low samples per pixel. Often, auxiliary buffers such as albedo, normal, roughness, and depth, which are readily available in deferred rendering, are used as guiding information. By reconstructing high-quality pixels from a noisy image in far less time than full path tracing takes, denoising has become an indispensable component of real-time path tracing.

Existing denoising techniques fall into two groups, offline and real-time denoisers, depending on their performance budget. Offline denoisers focus on production film-quality reconstruction from a noisy image rendered with relatively high samples per pixel (e.g., more than 8). Real-time denoisers target noisy images rendered with very few samples per pixel (e.g., 1-2 or fewer) within a limited time budget.

It is common to take noisy diffuse and specular signals as inputs, denoise them separately with different filters, and composite the denoised signals into a final color image to better preserve fine details. Many real-time rendering engines include separate denoising filters for each effect, such as diffuse lighting, reflections, and shadows, for quality and/or performance. Since each effect may have different inputs and noise characteristics, dedicated filtering can be more effective.
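The diffuse side of this split is typically demodulated by albedo before filtering and re-modulated afterwards, so the filter works on smooth untextured irradiance. A simplified sketch of that flow, with hypothetical helper names and scalar pixels standing in for full RGB buffers:

```python
EPS = 1e-4

def demodulate(diffuse, albedo):
    # Divide surface texture out of the diffuse signal before denoising:
    # the untextured irradiance is much smoother, so the filter does not
    # have to preserve albedo detail.
    return [d / max(a, EPS) for d, a in zip(diffuse, albedo)]

def composite(filtered_diffuse, filtered_specular, albedo):
    # Re-modulate the filtered diffuse signal by albedo and add the
    # separately filtered specular signal to form the final color.
    return [d * a + s for d, s, a in zip(filtered_diffuse, filtered_specular, albedo)]
```

With no filtering in between, demodulation followed by compositing reconstructs the original diffuse-plus-specular sum exactly; in practice each signal passes through its own denoiser between the two steps.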

Neural Denoising

Neural denoisers [3,4,5,6,7,8] use a deep neural network, trained on a large dataset, to predict denoising filter weights. They have achieved remarkable progress in denoising quality compared to hand-crafted analytical denoising filters [2]. Depending on the complexity of the neural network and how it cooperates with other optimization techniques, neural denoisers are receiving growing attention for real-time Monte Carlo path tracing.
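The kernel-prediction idea at the heart of several of these denoisers [4,5] can be sketched as follows: the network outputs a small filter kernel per pixel, and reconstruction is a weighted sum over each pixel's neighborhood. The code below is our own simplified, scalar-valued illustration of the reconstruction step only, assuming the per-pixel kernels (which a real network would predict) are already normalized:

```python
def apply_predicted_kernel(noisy, kernels, k=3):
    # Kernel-predicting reconstruction: `kernels[y][x]` holds k*k weights
    # (assumed softmax-normalized, as in kernel-predicting networks), and the
    # output pixel is the weighted sum of the k*k neighborhood of the input.
    r = k // 2
    h, w = len(noisy), len(noisy[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    ny = min(max(y + dy, 0), h - 1)   # clamp at image borders
                    nx = min(max(x + dx, 0), w - 1)
                    acc += kernels[y][x][(dy + r) * k + dx + r] * noisy[ny][nx]
            out[y][x] = acc
    return out
```

Because the weights are predicted rather than hand-derived, the network can learn to place small weights across edges and large weights along smooth regions, which is where the quality gain over analytical filters comes from.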

A unified denoising and supersampling approach [7] takes noisy images rendered at low resolution with low samples per pixel and generates an image that is both denoised and upscaled to the target display resolution. Such joint denoising and supersampling with a single neural network has the advantage of sharing learned parameters in the feature space to efficiently predict both denoising and upscaling filter weights. Most of the performance gain comes from low-resolution rendering combined with low samples per pixel, leaving a larger time budget for the neural network to reconstruct high-quality pixels.

Current AMD research

We are actively researching neural techniques for Monte Carlo denoising with the goal of moving towards real-time path tracing on AMD RDNA™ GPUs. Our research has the following aims:

  • Reconstruct pixels with outstanding spatial and temporal quality and fine details from extremely noisy images rendered with 1 sample per pixel.
  • Use minimal input by taking a noisy color image instead of separate noisy diffuse and specular signals.
  • Handle noise from all lighting effects with a single denoiser instead of multiple denoisers for different effects.
  • Support both denoising-only and denoising/upscaling modes from a single neural network to cover wider use cases.
  • Achieve highly optimized performance for real-time path tracing at 4K resolution.

With these goals, we are researching a Neural Supersampling and Denoising technique that uses a single neural network to generate high-quality denoised and supersampled images at a display resolution higher than the render resolution for real-time path tracing. Inputs include a noisy color image rendered with one sample per pixel and a few guide buffers that are readily available in rendering engines, such as albedo, normal, roughness, depth, and specular hit distance, all at low resolution. Temporally accumulated noisy input buffers increase the effective samples per pixel, and the history output is reprojected by motion vectors for temporal accumulation. The neural network is trained on a large number of path-traced images to predict multiple filtering weights, deciding how to temporally accumulate, denoise, and upscale extremely noisy low-resolution images. Our technique can replace the multiple denoisers used for different lighting effects in a rendering engine by removing all noise in a single pass, at low resolution. Depending on the use case, a denoising-only output can also be produced; this is equivalent to 1x upscaling and is obtained by skipping the upscale filtering. We show a sneak peek of our quality results here.
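The temporal accumulation step mentioned above can be illustrated with a minimal sketch (our own simplification: integer motion vectors, scalar pixels, and a trivial off-screen fallback standing in for real disocclusion tests based on depth and normals):

```python
def temporal_accumulate(current, history, motion, alpha=0.1):
    # Exponential moving average with motion-vector reprojection: each pixel
    # fetches its previous-frame value at (x - mx, y - my) and blends it with
    # the current noisy sample, raising the effective samples per pixel.
    h, w = len(current), len(current[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            mx, my = motion[y][x]
            px, py = x - mx, y - my
            if 0 <= px < w and 0 <= py < h:
                out[y][x] = alpha * current[y][x] + (1 - alpha) * history[py][px]
            else:
                out[y][x] = current[y][x]   # history invalid: restart accumulation
    return out
```

With a small `alpha`, a static pixel converges towards the average of many frames' samples, which is why temporally accumulated inputs are far less noisy than a single 1-spp frame.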


Figure 3. Workflow of our Neural Supersampling and Denoising.


Figure 4. Denoised and upscaled result at 4K resolution for the Bistro scene [9]. The input is a noisy image path traced with 1 sample per pixel at 1920×1080.

References

  1. James T. Kajiya. 1986. The rendering equation. SIGGRAPH '86: Proceedings of the 13th Annual Conference on Computer Graphics and Interactive Techniques. pp. 143–150.
  2. Christoph Schied, Anton Kaplanyan, Chris Wyman, Anjul Patney, Chakravarty R. Alla Chaitanya, John Burgess, Shiqiu Liu, Carsten Dachsbacher, Aaron Lefohn, Marco Salvi. 2017. Spatiotemporal variance-guided filtering: real-time reconstruction for path-traced global illumination. Proceedings of High Performance Graphics, July 2017, pp. 1–12.
  3. Yuchi Huo, Sung-eui Yoon. 2021. A survey on deep learning-based Monte Carlo denoising. Computational Visual Media, Volume 7, pp. 169–185.
  4. Steve Bako, Thijs Vogels, Brian McWilliams, Mark Meyer, Jan Novák, Alex Harvill, Pradeep Sen, Tony DeRose, and Fabrice Rousselle. 2017. Kernel-predicting convolutional networks for denoising Monte Carlo renderings. ACM Transactions on Graphics, 36, 4, Article 97, pp. 1–14.
  5. Thijs Vogels, Fabrice Rousselle, Brian McWilliams, Gerhard Röthlin, Alex Harvill, David Adler, Mark Meyer, and Jan Novák. 2018. Denoising with kernel prediction and asymmetric loss functions. ACM Transactions on Graphics, 37, 4, Article 124, pp. 1–15.
  6. Mustafa Işık, Krishna Mullia, Matthew Fisher, Jonathan Eisenmann, and Michaël Gharbi. 2021. Interactive Monte Carlo Denoising using Affinity of Neural Features. ACM Transactions on Graphics, 40, 4, Article 37, pp. 1–13.
  7. Manu Mathew Thomas, Gabor Liktor, Christoph Peters, Sungye Kim, Karthik Vaidyanathan, and Angus G. Forbes. 2022. Temporally Stable Real-Time Joint Neural Denoising and Supersampling. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 5, 3, Article 21, pp. 1–22.
  8. Martin Balint, Krzysztof Wolski, Karol Myszkowski, Hans-Peter Seidel, and Rafał Mantiuk. 2023. Neural Partitioning Pyramids for Denoising Monte Carlo Renderings. ACM SIGGRAPH 2023 Conference Proceedings, Article 60, pp. 1–11.
  9. Amazon Lumberyard. 2017. Amazon Lumberyard Bistro, Open Research Content Archive (ORCA).
  10. AMD Capsaicin Framework.


SungYe Kim

SungYe Kim is a Principal Member of Technical Staff (PMTS) in the Advanced Graphics Program group, where she focuses on research and development of AI-assisted neural rendering techniques and leads development of forward-looking techniques.  She received her PhD in Computer Engineering from Purdue University.  Throughout her career in the industry, she has developed proficiency in diverse domains including gaming, media, VR and neural rendering with an emphasis on generating high-quality images for real-time use cases.

Pawel Kaźmierczyk

Paweł Kaźmierczyk is a Senior Member of Technical Staff (SMTS) in the Advanced Rendering Research group. He's passionate about applying machine learning to improve real-time rendering and gaming. Paweł earned his master's degree in Computer Science from the Warsaw University of Technology.

Wojciech Uss

Wojciech Uss is a Senior Member of Technical Staff (SMTS) in the Advanced Rendering Research group, specializing in the development and optimization of neural network models for use in computer graphics rendering. His work focuses on pushing the boundaries of real-time graphics and path tracing rendering efficiency. He holds a PhD in Mathematics from Gdańsk University. Outside of work, Wojciech enjoys spending time with his family, running, and honing communication skills, particularly in Nonviolent Communication (NVC).

Wojciech Kaliński

Wojciech Kaliński is a Member of Technical Staff (MTS) in the Advanced Rendering Research group. He has extensive experience in computer graphics, which he applies to his work on neural rendering projects. His main interests are physically based rendering, ray tracing and applications of AI in 3D graphics.

Tomasz Galaj

Tomasz Gałaj is a Member of Technical Staff (MTS) in the Advanced Rendering Research group, focusing primarily on marrying computer graphics with machine learning. He obtained his PhD degree in Technical Informatics and Telecommunications from Lodz University of Technology, where he published several journal papers on efficient rendering of the atmospheric scattering phenomenon in the Earth's atmosphere, using the analytical and machine learning based methods. His current research interests include high-performance computer graphics, computer games, machine learning, and computer simulations and visualizations.

Mateusz Maciejewski

Mateusz Maciejewski serves as a Member of Technical Staff (MTS) in the Advanced Rendering Research group, where he bridges the gap between high-performance computing and interactive systems. Drawing from his electromagnetic engineering background at Gdańsk Tech, where he earned his MSc in Electronics Engineering, Mateusz developed a novel approach to computational electromagnetics by combining model order reduction techniques with machine learning. His work on FEM optimization demonstrated significant performance gains in microwave analysis applications (see: InventSim @ inventsim.com). Today, Mateusz works on Unreal Engine development, focusing on both core engine capabilities and AI-driven real-time applications. His blend of low-level systems expertise and engine architecture knowledge positions him at the intersection of high-performance computing and interactive technology development. When not delving into engine internals, Mateusz can often be found experimenting with emerging game development tools, developing such systems himself and contributing to open-source projects.

Kunal Tyagi

Kunal Tyagi is a Senior Member of Technical Staff (SMTS) in the Advanced Rendering Research group working on ML rendering.

Kris Szabo

Kris Szabo works in the Advanced Rendering Research group. He develops highly optimized code for ML inference to obtain the best performance on AMD GPUs. He has deep expertise in maximizing GPU efficiency for graphics and ML workloads.

Rama Harihara

Rama Harihara is an AMD Fellow, leading the ML applied research team with emphasis on real-time graphics, neural rendering, differentiable rendering, generative AI and AI-based 3D content creation. She is responsible for setting the pathfinding and research roadmap for ML-assisted rendering and providing technology leadership to drive research from POC to product. She collaborates with academia, ISV partners, product business units, HW and SW IP architects to influence the evolution and adoption of these forward-looking technologies on AMD ML stack.

