Yeah, and we can actually afford to do it nowadays. I'm currently making a game with very retro graphics (think Minecraft-level pixelated stuff)
Sure, the textures themselves aren't too high fidelity, but since the pixel shader is so simple, it's quite feasible to do tricks that would have been impossible even ten years ago. I can run the game even with 8x SSAA (8x per axis, i.e. 8x8 = 64 samples per pixel) and near-ground-truth 128x anisotropic filtering.
There's practically zero temporal aliasing and zero spatial aliasing.[0]
Now of course, some people aren't too fond of the lack of aliasing - probably conditioning from all the badly-running, soulless releases - but I think this direction is the future of gaming. Less photorealism, more intentional graphics design and crisp rendering.
(edit: swapped the URL because imgur was compressing the image to shit)
[0] https://files.catbox.moe/46ih7b.png
It can't be a 4x increase, because the additional information will be correlated and predictable.
If I render a linear gradient at increasingly higher resolutions, I'm obviously not creating infinite information in the continuum limit.
4x is the upper bound. On average across all of gaming the information gain is going to be much more pedestrian, but where the extra information is not predictable it does make a big impact. For some pixels you do get the full 4x increase and those are the exact pixels for which you needed it the most.
You still don't get the 4x increase on those pixels. You get it compressed down to a coverage-blended estimate of the elements within the pixel. With a 4x higher-res display, instead of 4x MSAA or 4x SSAA, you get more information in that area because you preserve the spatial arrangement of its elements, not just their coverage.
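A toy sketch of this point (plain Python, simple box-filter resolve assumed): two pixels whose sub-samples differ only in spatial arrangement resolve to the exact same final value, so the arrangement is precisely the information a higher-res display keeps and a resolve throws away.

```python
def resolve(samples):
    """Box-filter resolve: average all sub-samples into one pixel value."""
    return sum(samples) / len(samples)

# An edge running vertically through the pixel (left half lit)...
vertical_edge = [1.0, 0.0,
                 1.0, 0.0]
# ...and one running horizontally (top half lit):
horizontal_edge = [1.0, 1.0,
                   0.0, 0.0]

# Both resolve to the same 0.5 gray; the edge orientation is lost.
assert resolve(vertical_edge) == resolve(horizontal_edge) == 0.5
```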
Aren't some forms of aliasing specifically temporal in nature? For example, high detail density induced flicker on movement.
I appreciate people standing up for classical stuff, but I don't want the pendulum swung too far back the other way either.
That flickering is heavily reduced already by mipmapping and anisotropic filtering.
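To illustrate why mipmapping helps with that: each mip level halves the resolution with an averaging filter, which removes exactly the high frequencies that shimmer under minification. A rough 1-D sketch (real mip generation is 2-D and usually done by the GPU or at asset build time):

```python
def next_mip_level(texels):
    """Halve resolution with a simple 2-tap box filter."""
    return [(texels[i] + texels[i + 1]) / 2 for i in range(0, len(texels), 2)]

# A high-frequency checker pattern...
level0 = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
level1 = next_mip_level(level0)  # [0.5, 0.5, 0.5, 0.5]
level2 = next_mip_level(level1)  # [0.5, 0.5]
# ...flattens to uniform gray: sampled from a distance it can no longer
# alternate between frames, so the flicker disappears.
```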
I have always heard that MSAA doesn't work well with deferred rendering, which is why it is no longer used[0] - which is a shame. Unreal TAA can be good in certain cases but breaks down if both the character and background are in motion, at least in my testing. I wish there was a way to exclude certain objects from TAA. I assume this could be done by rendering the frame both with and without the object, but that seems wasteful. Rendering at a higher resolution seems like a good idea only when you already have a high-end card.
[0] eg https://docs.nvidia.com/gameworks/content/gameworkslibrary/g...
MSAA doesn't do anything for shader aliasing, it only handles triangle aliasing, and modern renderers have plenty of shader aliasing so there isn't much reason to use MSAA.
The high triangle count of modern renderers might in some cases push MSAA closer to SSAA in terms of cost and memory usage, all for a rather small sample count relative to a temporal method.
Temporal AA can handle everything, and is relatively cheap, so it has replaced all the other approaches. I haven't used Unreal TAA - does Unreal not support the various vendor AI-driven TAAs?
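For what it's worth, the accumulation step at the heart of temporal AA can be sketched as an exponential moving average over history (toy version; real implementations also reproject with motion vectors and clamp history to limit ghosting, and alpha ~0.1 is just a common choice):

```python
def taa_accumulate(history, current, alpha=0.1):
    """Blend the new frame into the accumulated history per pixel."""
    return [(1 - alpha) * h + alpha * c for h, c in zip(history, current)]

# A pixel that alternates 1.0/0.0 every frame (classic shimmer)
# settles into a narrow band around 0.5 instead of flickering:
frame = [0.0]
for i in range(100):
    frame = taa_accumulate(frame, [1.0 if i % 2 == 0 else 0.0])
```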
Unreal has plugins to support other AA including DLSS and FSR (which can both be used just for AA IIRC). I tried FSR and it didn't work as well as the default TAA for certain cases, but I'm pretty sure I just had some flickering issue in my project that I was trying to use TAA to solve as a band aid so maybe not a great example of which AA methods are good. I'm not an expert and only use Unreal in my spare time.
>I have always heard that MSAA doesn't work well with deferred rendering which is why it is not longer used
Yes, but is deferred still the go-to method? I think MSAA is a good reason to go with "forward+" methods.
From what I have heard, forward is still used for games that are developed especially for VR. Unreal docs:
>there are some trade-offs in using the Deferred Renderer that might not be right for all VR experiences. Forward Rendering provides a faster baseline, with faster rendering passes, which may lead to better performance on VR platforms. Not only is Forward Rendering faster, it also provides better anti-aliasing options than the Deferred Renderer, which may lead to better visuals[0]
This page is fairly old now, so I don't know if this is still the case. I think many competitive FPS titles use forward.
>"forward+" methods.
Can you expound on this?
[0] https://dev.epicgames.com/documentation/en-us/unreal-engine/...
>Can you expound on this?
"forward+" term was used by paper introducing tile-based light culling in compute shader, compared to the classic way of just looping over every possible light in the scene.
I'm a huge fan too, but my understanding is that traditional intra-frame anti-aliasing (SSAA, MSAA, FXAA/SMAA, etc.) does not increase the information capacity of the final image, even though it may appear to do that. For more information per frame, you need one or more of: Higher resolution, higher dynamic range, sampling across time, or multi-sampled shading (i.e. MSAA, but you also run the shader per subsample).
MSAA does indeed have a higher information capacity, since the MSAA output surface has a 2x, 4x or 8x higher resolution than the rendering resolution, it's not a 'software post-processing filter' like FXAA or SMAA.
The output of a single pixel shader invocation is duplicated 2, 4 or 8 times and written to the MSAA surface through a triangle-edge coverage mask, and once rendering to the MSAA surface has finished, a 'resolve operation' happens which downscale-filters the MSAA surface to the rendering resolution.
SSAA (super-sampling AA) is simply rendering to a higher resolution image which is then downscaled to the display resolution, e.g. MSAA invokes the pixel shader once per 'pixel' (yielding multiple coverage-masked samples) and SSAA once per 'sample'.
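A back-of-envelope sketch of the invocation-count difference just described, for a hypothetical 1920x1080 target with 4 samples per pixel:

```python
width, height, samples = 1920, 1080, 4

# MSAA: pixel shader runs once per pixel; the result is written to
# multiple samples through the coverage mask.
msaa_shader_invocations = width * height
# SSAA: pixel shader runs once per sample.
ssaa_shader_invocations = width * height * samples

# Both store the same 4x sample surface, but SSAA shades 4x as often.
assert ssaa_shader_invocations == 4 * msaa_shader_invocations
```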
I probably only have a basic understanding of MSAA, but isn't its advantage reduced by modern detail levels, even in situations where it could be used? There are so many geometry edges in models and environments (before you even consider things like tessellation), and so much intricate shading on flat surfaces, that to get a good result you're effectively supersampling much of the image.
MSAA actually does: it stores more information per pixel using a special multisample buffer format.