@bazhenovc
Last active September 30, 2025 18:19

The Sane Rendering Manifesto

The goal of this manifesto is to provide an easy-to-follow and reasonable set of rules that realtime and video game renderers can follow.

These rules highly prioritize image clarity/stability and a pleasant gameplay experience over photorealism and excessive graphics fidelity.

Keep in mind that shipping the game takes priority over everything else; it is allowed to break the rules of this manifesto in order to ship when there are no other good options.

Do not use dynamic resolution.

Fractional upscaling makes the game look bad on most monitors, especially if the scale factor changes over time.

What is allowed:

  1. Rendering to an internal buffer at an integer scale factor, followed by a blit to native resolution with point/nearest filtering.
  2. Integer scale factor that matches the monitor resolution exactly after upscaling.
  3. The scale factor should be fixed and determined by the quality preset in the settings.

What is not allowed:

  1. Adjusting the scale factor dynamically at runtime.
  2. Fractional scale factors.
  3. Any integer scale factor that doesn't exactly match the monitor/TV resolution after upscaling.
  4. Rendering opaque and translucent objects at different resolutions.

Implementation recommendations:

  1. Render at a lower resolution internally, but output at native resolution.
  2. Render to a lower resolution render target, then do the integer upscale and run postprocessing at native resolution.
  3. Use letterboxing to work around unusual resolutions (see the sketch below for scale selection and letterbox placement).
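
A minimal sketch of how the scale selection and letterbox placement could look; the names (`chooseIntegerScaleViewport`, `Viewport`) are illustrative, not from any particular engine:

```cpp
#include <algorithm>

struct Viewport { int x, y, width, height; };

// Pick the largest integer scale factor that fits the native resolution,
// then center the scaled image and letterbox the remainder.
// renderW/renderH is the fixed internal resolution chosen by the quality preset.
Viewport chooseIntegerScaleViewport(int renderW, int renderH, int nativeW, int nativeH)
{
    int scale = std::max(1, std::min(nativeW / renderW, nativeH / renderH));

    int outW = renderW * scale;
    int outH = renderH * scale;

    // Letterbox: center the integer-scaled image on the native surface,
    // leaving black bars where the aspect ratios don't line up.
    Viewport vp;
    vp.x = (nativeW - outW) / 2;
    vp.y = (nativeH - outH) / 2;
    vp.width  = outW;
    vp.height = outH;
    return vp;
}

// The final blit into this viewport uses point/nearest filtering, e.g. a
// fullscreen pass sampling the internal buffer with a nearest sampler, or an
// API-level copy such as vkCmdBlitImage with VK_FILTER_NEAREST.
```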

Do not render at lower refresh rates.

Low refresh rates (under 60Hz) increase input latency and make the gameplay experience worse for the player.

What is allowed:

  1. In the case of high refresh rate monitors (90Hz, 120Hz, 240Hz etc.) it is allowed to render at 60Hz.
  2. It is always allowed to render at the highest refresh rate the hardware supports, even if it's lower than 60Hz (for example, an incorrect cable/HW configuration, or the user has explicitly configured power/battery saving settings).
  3. Offering alternative graphics presets to reach the target refresh rate.

What is not allowed:

  1. Explicitly targeting 30Hz refresh rate during development.
  2. Using any kind of frame generation - it does not improve input latency, which is the whole point of having higher refresh rates.

Implementation recommendations:

  1. Decouple your game logic update from the rendering code (see the loop sketch after this list).
  2. Use GPU-driven rendering to avoid CPU bottlenecks.
  3. Try to target the native monitor refresh rate and use the allowed integer scaling to match it.
  4. Use vendor-specific low-latency input libraries.
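
One common way to do the decoupling is a fixed-timestep simulation loop; this is just a sketch, and `shouldQuit`, `updateGameLogic` and `renderFrame` are hypothetical hooks into the rest of the engine:

```cpp
#include <algorithm>
#include <chrono>

// Hypothetical hooks into the rest of the engine.
bool shouldQuit();
void updateGameLogic(double dt);
void renderFrame(double interpolationAlpha);

// Fixed-timestep simulation decoupled from rendering: the game logic always
// advances in constant dt steps, while rendering runs once per display frame
// (ideally at the native refresh rate with vsync).
void runGameLoop()
{
    using clock = std::chrono::steady_clock;
    const double dt = 1.0 / 120.0;  // simulation step, independent of refresh rate
    double accumulator = 0.0;
    auto previous = clock::now();

    while (!shouldQuit())
    {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Clamp to avoid a spiral of death after a long hitch, then advance
        // the simulation in fixed steps.
        accumulator = std::min(accumulator, 0.25);
        while (accumulator >= dt)
        {
            updateGameLogic(dt);
            accumulator -= dt;
        }

        // Render with the leftover fraction so motion can be interpolated
        // between the last two simulation states at any refresh rate.
        renderFrame(accumulator / dt);
    }
}
```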

Do not use temporal amortization.

If you cannot compute something within the duration of 1 frame, then stop and rethink what you are doing.

You are making a game, make sure it looks great in motion first and foremost. Nobody cares how good your game looks on static screenshots.

In many cases, bad TAA or unstable temporally amortized effects are an accessibility issue that can cause health problems for your players.

What is allowed:

  1. Ray tracing is allowed as long as the work is not distributed across multiple frames.
  2. Any kind of lighting or volume integration is allowed as long as it can be computed or converged within 1 rendering frame.
  3. Variable rate shading is allowed as long as it does not change the shading rate based on the viewing angle and does not introduce aliasing.

What is not allowed:

  1. Reusing view-dependent computation results from previous frames.
  2. TAA, including AI-assisted TAA. It has never looked good in motion; even with AI it breaks on translucent surfaces and particles.
  3. Trying to interpolate or denoise missing data in cases of disocclusion or fast camera movement.

Implementation recommendations:

  1. Prefilter your roughness textures with vMF filtering (a sketch follows this list).
  2. Use AI-based tools to generate LODs and texture mipmaps.
  3. Use AI-based tools to assist with roughness texture prefiltering: take a supersampled image as input and train the AI to prefilter it so that there is less shader aliasing.
  4. Enforce consistent texel density in the art production pipeline.
  5. Enforce triangle density constraints in the art production pipeline.
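
A rough sketch of the vMF prefiltering step, in the spirit of Toksvig/Karis-style specular antialiasing; the kappa fit and the "alpha^2 + 2/kappa" mapping are approximations, and the exact constants depend on your NDF convention, so treat this as an assumption-laden starting point rather than the one true formula:

```cpp
#include <algorithm>
#include <cmath>

// Per-mip roughness prefiltering: fold the variance of the mipmapped normal
// map into roughness so that minified normals don't alias into specular sparkle.
// avgNormalLength is the length of the averaged normal over the mip footprint
// (it drops below 1 as the normals diverge); roughness is the authored alpha.
float prefilterRoughness(float roughness, float avgNormalLength)
{
    float r = std::clamp(avgNormalLength, 1e-4f, 1.0f - 1e-4f);

    // Fit a von Mises-Fisher lobe to the normal distribution in the footprint.
    float kappa = (3.0f * r - r * r * r) / (1.0f - r * r);

    // Treat 2/kappa as additional Beckmann-style variance and add it to the
    // NDF variance (alpha^2). Both steps are approximations.
    float alpha2 = roughness * roughness + 2.0f / kappa;
    return std::sqrt(std::min(alpha2, 1.0f));
}
```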
@bazhenovc (Author)

> As for my Fresnel code. Although my image was over a cube map, I am not using the cube map at all for reflection, but instead using screen space reflections with the previous frame.

It makes sense, thanks for the explanation!

I've got a few follow-up questions:

  • Do you reproject the previous frame?
  • How are you handling disocclusion or missing data?
  • Any issues with animated characters or procedural animation?

@Johan-Hammes

About physical accuracy, I would argue that my fix is way more physically accurate than almost all games out there. The bright pixels on that grab handle appear because the reflection vector is pointing into the handle itself and back out the other side, which is impossible in real life. It is usually, but not always, a result of normal vectors pointing away from the camera (due to a flat triangle replacing curved geometry and interpolating normals). My shader code fixes all of those to be physically accurate before doing any light calculations.

As for SSR, I am using this only on the strong Fresnel portions; I have other reflection solutions for the rest of my scene. Personally, I still favor planar reflections for water over SSR, with its occlusion problems etc.

  • No, I do not reproject, and I have never seen it show up as a visual error.
  • But we are only talking about the last 2-3 pixels right at the edge. By the time Fresnel makes it shiny enough to reflect, the angle is so tiny that the SSR reflection is usually within 10-100 pixels of the pixel we are lighting, and when the reflection is really strong that distance shrinks. It also means that occlusion is almost never a problem and can be ignored.
  • I haven't seen many issues with animations. If you look at this video (select 4K so YouTube's compression doesn't destroy it), the issues with animation are minimal in my opinion: https://youtu.be/6T-2T_R8g0c

@bazhenovc (Author)

Thanks for the info!

I'll find time to implement it eventually, it's an interesting idea.

How exactly are you fixing normals after interpolation?

@whoisKomet

I wanted to drop by and show something I found while looking for real-time global illumination techniques that reminded me of this discussion. It was made by the same people behind the radiance hints technique mentioned in a comment a while earlier. In essence, and from what I understood, it aims to replicate the instant radiosity method, where virtual point lights (VPLs) are generated on surfaces hit by direct lighting to simulate bounce lighting, but here the locations and attributes of the VPLs are precalculated. When a light of sufficient intensity is near/aims at a VPL, it "turns on", while the rest are culled. This eliminates the tracing step that instant radiosity would otherwise have to perform in order to determine direct lighting and obtain the surface properties of the mesh being lit. The results are shockingly passable in most cases, considering that it is meant to run on already extremely limited mobile VR hardware:
(screenshot of the results omitted)

(Ambient occlusion is naturally not modeled, but we have enough headroom in PC hardware to fill it in with complementary techniques.)

There is even a tool for Unity provided that generates these VPLs automatically.
Here is the link to the paper outlining the technique and the GitHub page containing the Unity tool.

That said, the paper brought my mind back to this discussion specifically because this form of precalculation seems like a recurring theme: a precalculation that makes existing techniques easier to update dynamically, alias less, or produce more plausible results, instead of baking the results themselves. Roughness prefiltering is the other biggest example. I wonder if this could be applied to more than just textures and diffuse lighting? Maybe reflections? Geometry? We do have LODs, but perhaps we don't consider how they can impact aliasing when viewed from the distances they are meant to appear at, as most techniques are primarily focused on preserving the volume of the mesh.

I do have other, more controversial thoughts (especially regarding TAA and MSAA), but I'm not a graphics programmer and, having no substantial evidence for their viability in an actual production setting anyway, I will keep them to myself for now.

@bazhenovc (Author)

@whoisKomet this is a valid idea and it has been used before; it's slightly cheaper than regular LPV and slightly lower quality. It makes perfect sense for VR or games that are not rendering shadowmaps for some reason.

@whoisKomet

@bazhenovc Interesting, is there a game you could point me to that uses this, just for reference? This paper is the first time I've personally encountered something like this, but I wouldn't know where else to start looking anyway.

And regarding LPVs, from where I see it these two techniques complement each other really well if put together. Having the precomputed VPL locations and colors already in the scene bypasses the need for an RSM for each light source. Each VPL would only need to check its visibility in a regular shadow map, and then the injection and propagation steps could be performed as usual. I wonder if the removal of the extra shadow buffers is enough to compensate for storing the VPLs beforehand, though. Then again, I am basing my knowledge of LPVs on the original 2009 paper by Kaplanyan, so if there is a more updated version available this might be irrelevant (and I'd like to be made aware of it if possible).
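
For what it's worth, a minimal CPU-side sketch of that combination (a real implementation would be a compute pass, and `shadowMapVisible`, `lightIrradianceAt` and `cellIndexFor` are hypothetical helpers, not from the paper or the Unity tool):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Precomputed VPL: position, surface normal and albedo are baked offline.
struct BakedVPL { Vec3 position; Vec3 normal; Vec3 albedo; };

// One LPV cell: 2-band spherical harmonics per color channel.
struct LPVCell { float shR[4], shG[4], shB[4]; };

// Hypothetical helpers (shadow map lookup, direct light evaluation, grid addressing).
bool   shadowMapVisible(const Vec3& worldPos);
Vec3   lightIrradianceAt(const Vec3& worldPos, const Vec3& normal);
size_t cellIndexFor(const Vec3& worldPos);

// Evaluate the 2-band SH basis in direction d (assumed normalized).
inline void evalSH(const Vec3& d, float sh[4])
{
    sh[0] = 0.282095f;
    sh[1] = 0.488603f * d.y;
    sh[2] = 0.488603f * d.z;
    sh[3] = 0.488603f * d.x;
}

// Injection pass sketch: visibility comes from the light's ordinary shadow map
// instead of a reflective shadow map, as suggested above. Lit VPLs "turn on"
// and are accumulated into the grid; propagation then runs as in regular LPV.
void injectBakedVPLs(const std::vector<BakedVPL>& vpls, std::vector<LPVCell>& grid)
{
    for (const BakedVPL& vpl : vpls)
    {
        if (!shadowMapVisible(vpl.position))
            continue;

        // Bounced flux: direct light reaching the VPL, tinted by its albedo.
        Vec3 e = lightIrradianceAt(vpl.position, vpl.normal);
        Vec3 flux = { e.x * vpl.albedo.x, e.y * vpl.albedo.y, e.z * vpl.albedo.z };

        // Project the VPL's emission direction (its normal) into 2-band SH.
        // A full implementation would use the clamped-cosine lobe coefficients here.
        float sh[4];
        evalSH(vpl.normal, sh);

        LPVCell& cell = grid[cellIndexFor(vpl.position)];
        for (int i = 0; i < 4; ++i)
        {
            cell.shR[i] += flux.x * sh[i];
            cell.shG[i] += flux.y * sh[i];
            cell.shB[i] += flux.z * sh[i];
        }
    }
}
```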

@bazhenovc (Author) commented Aug 7, 2025

@whoisKomet

> Interesting, is there a game you could point me to that uses this, just for reference?

The PC/DX11 version of Ghost Recon: Future Soldier (released in 2012) used this exact idea, if I recall correctly (I didn't directly work on the GI implementation), but my memory is hazy and as far as I know it wasn't published, so you'll have to take my word for it. It was discussed at several conference afterparties, and it's likely there are more games from that era that used it.

> Having the precomputed VPL locations and colors already in the scene bypasses the need for an RSM for each light source

If you're rendering a shadow map already, extending it to an RSM isn't that expensive. VR/mobile games often don't render shadow maps, so it's an important feature there; otherwise there's very little benefit.

Another thing to consider is that rendering 6000 visible point lights isn't exactly trivial either; LPV at least decouples that, and the sampling cost is fixed.

Also, cached shadow maps aren't exactly new either (e.g. https://gpuzen.blogspot.com/2019/05/gpu-zen-2-parallax-corrected-cached.html), and having a cached RSM is a trivial extension of that (albeit borderline banned by this manifesto lol).

@whoisKomet
Copy link

> Another thing to consider is that rendering 6000 visible point lights isn't exactly trivial either; LPV at least decouples that, and the sampling cost is fixed.

Fair enough. I think the main concern I have with LPVs is light bleeding, which seems to be sufficiently addressed in the original paper but may still appear in scenes with the geometric complexity of current generation titles (which in itself might already be problematic anyway lmao). Rendering directly with the VPLs in theory allows shadowmaps or ISMs to brute force through the visibility problem, but even then ISMs aren't trivial and shadowmaps aren't much better. At that point it would be smarter to use the VPLs generated by the RSM anyway, so I see your point.

Still, I wonder why LPVs aren't mentioned much at all anymore. They debuted with Crysis 2 IIRC, were added to UE4 for a while, and then suddenly left the conversation altogether. I would imagine that something equally performant (and temporally stable) superseded them, but it clearly hasn't made any headlines yet.

@bazhenovc (Author) commented Aug 8, 2025

LPV is inherently low-frequency and cannot do high-frequency details; I'm personally fine with that, but a lot of my peers disagree with me.

Light leaking is also a problem, and it cannot do indirect specular.
