Ten Years of D3D12

46 points, posted 4 days ago
by ibobev

34 Comments

teucris

an hour ago

Really appreciate the detailed article! I was on the team that shipped D3D11 and helped with the start of D3D12. I went off to other things and lost touch - the API has come a long way! So many of these features were just little whispers of ideas exchanged in the hallways. So cool to see them come to life!

bob1029

2 hours ago

> Or at least, you can do that if you still care about MSAA. :)

I am a huge fan of all the traditional supersampling-based, intra-frame-only anti-aliasing techniques. The performance cost begins to make sense when you realize that these techniques are essentially perfect for the general case: they actually increase the information content of each pixel within each frame. Many modern techniques rely on multiple consecutive frames to build a final result, which is tantamount to game dev quackery in my book.

SSAA is even better than MSAA and is effectively what you are using in any game where you can set the "render scale" to a figure above 100%. It doesn't necessarily need to come in big scary powers of two (it used to, which made enabling it a problem). Even small oversampling rates like 110-130% can make a meaningful difference in my experience. If you can afford to go to 200% scale, you will receive a 4x increase in information content per pixel.
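
To make the arithmetic concrete, here is a minimal sketch of how a render-scale slider maps to an internal resolution and to samples per display pixel; the names are illustrative, not any particular engine's API:

```cpp
#include <cmath>
#include <cstdio>

// Minimal sketch: how a "render scale" setting maps to an internal
// supersampled resolution. Names are illustrative, not an engine API.
struct Resolution { int width; int height; };

Resolution internalResolution(Resolution display, float renderScale /* 1.0 = 100% */) {
    return {
        static_cast<int>(std::lround(display.width  * renderScale)),
        static_cast<int>(std::lround(display.height * renderScale))
    };
}

int main() {
    Resolution display{1920, 1080};
    for (float scale : {1.1f, 1.3f, 2.0f}) {
        Resolution r = internalResolution(display, scale);
        // Samples averaged into each display pixel grow with scale^2:
        // 110% -> ~1.21x, 130% -> ~1.69x, 200% -> 4x.
        std::printf("%.0f%% scale: %dx%d (%.2f samples per display pixel)\n",
                    scale * 100.0f, r.width, r.height, scale * scale);
    }
}
```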

Pannoniae

24 minutes ago

Yeah, and we can actually afford to do it nowadays. I'm currently making a game with very retro graphics (think Minecraft-level pixelated stuff).

Sure, the textures themselves aren't too high fidelity, but since the pixel shader is so simple, it's quite feasible to do tricks which would have been impossible even ten years ago. I can run the game even with 8x SSAA (that means 8x8 = 64 samples per pixel) and almost-ground-truth 128x anisotropic filtering.

There's practically zero temporal aliasing and zero spatial aliasing.[0] Now of course, some people don't like the lack of aliasing too much - probably conditioning because of all the badly running, soulless releases - but I think that this direction is the future of gaming. Less photorealism, more intentional graphics design and crisp rendering.

(edit: swapped the URL because imgur was compressing the image to shit)

[0] https://files.catbox.moe/46ih7b.png

nh23423fefe

2 hours ago

It can't be a 4x increase, because the additional information will be correlated and predictable.

If I render a linear gradient at increasingly higher resolutions, I am certainly not creating infinite information in the continuum limit, obviously.

bob1029

an hour ago

4x is the upper bound. On average, across all of gaming, the information gain is going to be much more pedestrian, but where the extra information is not predictable it does make a big impact. For some pixels you do get the full 4x increase, and those are exactly the pixels for which you needed it the most.

cma

an hour ago

You still don't get the 4x increase on those pixels. You get it compressed down to a coverage-weighted blend of the elements within the pixel. With a 4x higher-resolution display instead of 4x MSAA or 4x SSAA, you get more information in that area because you preserve more of the spatial arrangement of the elements, instead of just their coverage.

perching_aix

an hour ago

Aren't some forms of aliasing specifically temporal in nature? For example, flicker induced by high detail density during movement.

I appreciate people standing up for classical stuff, but I don't want the pendulum swung too far back the other way either.

Sohcahtoa82

10 minutes ago

That flickering is heavily reduced already by mipmapping and anisotropic filtering.

fngjdflmdflg

2 hours ago

I have always heard that MSAA doesn't work well with deferred rendering, which is why it is no longer used,[0] which is a shame. Unreal TAA can be good in certain cases, but it breaks down if both the character and the background are in motion, at least from my testing. I wish there were a way to remove certain objects from TAA. I assume this could be done by rendering the frame both with the object and without it, but that seems wasteful. Rendering at a higher resolution seems like a good idea only when you already have a high-end card.

[0] e.g. https://docs.nvidia.com/gameworks/content/gameworkslibrary/g...

TinkersW

an hour ago

MSAA doesn't do anything for shader aliasing; it only handles triangle aliasing, and modern renderers have plenty of shader aliasing, so there isn't much reason to use MSAA.

The high triangle count of modern renderers might in some cases cause MSAA to come closer to SSAA in terms of cost and memory usage, all for a rather small sample count relative to a temporal method.

Temporal AA can handle everything and is relatively cheap, so it has replaced all the other approaches. I haven't used Unreal's TAA; does Unreal not support the various vendor AI-driven TAAs?

fngjdflmdflg

an hour ago

Unreal has plugins to support other AA methods, including DLSS and FSR (which can both be used just for AA, IIRC). I tried FSR and it didn't work as well as the default TAA in certain cases, but I'm pretty sure I just had some flickering issue in my project that I was trying to use TAA to solve as a band-aid, so maybe that's not a great example of which AA methods are good. I'm not an expert and only use Unreal in my spare time.

garaetjjte

38 minutes ago

> I have always heard that MSAA doesn't work well with deferred rendering, which is why it is no longer used

Yes, but is deferred still the go-to method? I think MSAA is a good reason to go with "forward+" methods.

fngjdflmdflg

31 minutes ago

From what I have heard, forward is still used for games that are developed especially for VR. Unreal docs:

>there are some trade-offs in using the Deferred Renderer that might not be right for all VR experiences. Forward Rendering provides a faster baseline, with faster rendering passes, which may lead to better performance on VR platforms. Not only is Forward Rendering faster, it also provides better anti-aliasing options than the Deferred Renderer, which may lead to better visuals[0]

This page is fairly old now, so I don't know if this is still the case. I think many competitive FPS titles use forward.

>"forward+" methods.

Can you expound on this?

[0] https://dev.epicgames.com/documentation/en-us/unreal-engine/...

garaetjjte

16 minutes ago

> Can you expound on this?

"forward+" term was used by paper introducing tile-based light culling in compute shader, compared to the classic way of just looping over every possible light in the scene.

CharlesW

2 hours ago

I'm a huge fan too, but my understanding is that traditional intra-frame anti-aliasing (SSAA, MSAA, FXAA/SMAA, etc.) does not increase the information capacity of the final image, even though it may appear to do that. For more information per frame, you need one or more of: higher resolution, higher dynamic range, sampling across time, or multi-sampled shading (i.e. MSAA, but where you also run the shader per subsample).

flohofwoe

2 hours ago

MSAA does indeed have a higher information capacity, since the MSAA output surface stores 2x, 4x or 8x more samples than the rendering resolution has pixels; it's not a 'software post-processing filter' like FXAA or SMAA.

The output of a single pixel shader invocation is duplicated 2, 4 or 8 times and written to the MSAA surface through a triangle-edge coverage mask, and once rendering to the MSAA surface has finished, a 'resolve operation' happens which downscale-filters the MSAA surface to the rendering resolution.

SSAA (super-sampling AA) is simply rendering to a higher-resolution image which is then downscaled to the display resolution; i.e. MSAA invokes the pixel shader once per 'pixel' (yielding multiple coverage-masked samples), while SSAA invokes it once per 'sample'.
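
For reference, this is roughly what that flow looks like on the D3D12 side: create a 4x multisampled render target, draw into it, then let ResolveSubresource() filter it down to the single-sample back buffer. A minimal sketch, assuming device, cmdList and backBuffer already exist, and omitting the resource barriers (RESOLVE_SOURCE/RESOLVE_DEST) and error handling:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a 4x MSAA color target; the pixel shader runs once per pixel,
// but 4 coverage-masked samples are stored per pixel.
ComPtr<ID3D12Resource> CreateMsaaTarget(ID3D12Device* device, UINT width, UINT height) {
    D3D12_HEAP_PROPERTIES heap = {};
    heap.Type = D3D12_HEAP_TYPE_DEFAULT;

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
    desc.Width            = width;
    desc.Height           = height;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 4;                          // 4 samples per pixel
    desc.Layout           = D3D12_TEXTURE_LAYOUT_UNKNOWN;
    desc.Flags            = D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET;

    ComPtr<ID3D12Resource> msaaTarget;
    device->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_RENDER_TARGET,
                                    nullptr, IID_PPV_ARGS(&msaaTarget));
    return msaaTarget;
}

// After drawing into the MSAA target, the 'resolve operation'
// downscale-filters it into the single-sample back buffer.
void ResolveToBackBuffer(ID3D12GraphicsCommandList* cmdList,
                         ID3D12Resource* msaaTarget, ID3D12Resource* backBuffer) {
    cmdList->ResolveSubresource(backBuffer, 0, msaaTarget, 0,
                                DXGI_FORMAT_R8G8B8A8_UNORM);
}
```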

keyringlight

an hour ago

I probably only have a basic understanding of MSAA, but isn't its advantage reduced by detail levels even in situations where it could be used? There are so many geometry edges to anti-alias in models and environments (before you consider aspects like tessellation), and so much intricate shading on flat surfaces, that to get a good result you're effectively supersampling much of the image.

frabert

2 hours ago

MSAA actually does: it stores more information per pixel using a special buffer format.

rmunn

2 hours ago

Me, a tabletop RPG gamer and board gamer who hasn't played computer games in years (literally more than a decade): "Huh, that's interesting. Why are they rolling a 12-sided die between 1 and 3 times, choosing how many times it will be rolled by rolling a 6-sided die and dividing the number in half rounded up?"

Because before I clicked on the article (or the comments), that's the only sense I could make of the expression "d3d12" — rolling a d12, d3 times.

shmerl

3 hours ago

It should have been Vulkan from the start instead of another NIH.

flohofwoe

2 hours ago

Out of the modern non-console 3D APIs (Metal, D3D12, Vulkan), Vulkan is arguably the worst because it simply continued the Khronos tradition of letting GPU vendors contribute random extensions which then may or may not be promoted to the core API, lacking any sort of design vision or focus beyond the Mantle starting point.

Competition for 3D APIs is more important than ever. E.g. Vulkan has already started 'borrowing' a couple of ideas from other APIs; for instance VK_KHR_dynamic_rendering (i.e. the removal of render pass objects) looks a lot like Metal's transient render pass system, just adapted to a non-OOP C API.
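
For anyone who hasn't seen it, this is roughly what the dynamic rendering path looks like. A minimal sketch, assuming a Vulkan 1.3 command buffer (or the KHR extension) and a swapchain image view that is already in the right layout; names like RecordPass are illustrative:

```cpp
#include <vulkan/vulkan.h>

// Begin rendering directly against an image view, with no VkRenderPass or
// VkFramebuffer objects, similar in spirit to Metal's transient render passes.
void RecordPass(VkCommandBuffer cmd, VkImageView swapchainView, VkExtent2D extent) {
    VkRenderingAttachmentInfo color = {};
    color.sType       = VK_STRUCTURE_TYPE_RENDERING_ATTACHMENT_INFO;
    color.imageView   = swapchainView;
    color.imageLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    color.loadOp      = VK_ATTACHMENT_LOAD_OP_CLEAR;   // cleared on load
    color.storeOp     = VK_ATTACHMENT_STORE_OP_STORE;
    color.clearValue.color = {{0.0f, 0.0f, 0.0f, 1.0f}};

    VkRenderingInfo info = {};
    info.sType                = VK_STRUCTURE_TYPE_RENDERING_INFO;
    info.renderArea           = {{0, 0}, extent};
    info.layerCount           = 1;
    info.colorAttachmentCount = 1;
    info.pColorAttachments    = &color;

    vkCmdBeginRendering(cmd, &info);
    // ...bind pipeline and issue draw calls...
    vkCmdEndRendering(cmd);
}
```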

bobajeff

an hour ago

It sounds like maybe Khronos should handle 3D APIs like they do with Slang[1] and just host someone else's API, such as Diligent Engine or BGFX, and then let them govern its development.

[1]: https://shader-slang.org/

pjmlp

23 minutes ago

Just like with Mantle, Slang came to Khronos after the fact: they stated they would not be doing any further GLSL development and that it was up to the community to keep using HLSL or do something else themselves, and a year later NVidia decided to contribute the Slang project.

pjmlp

2 hours ago

Vulkan only exists because DICE and AMD were nice enough to contribute their Mantle work to Khronos; otherwise they would still be wondering what OpenGL vNext should look like.

HexDecOctBin

an hour ago

> otherwise they would still be wondering what OpenGL vNext should look like.

And the world would have been a better place. All we needed in OpenGL 5 was a thread-safe API with encapsulated state and more work on AZDO (approaching zero driver overhead) and DSA (direct state access). Now everyone has to be a driver developer, and new extensions are released one by one to make Vulkan just a little more like what OpenGL 5 should have been in the first place. Maybe in 10 more years we'll get there.
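
For context, DSA is the style that landed in GL 4.5, where objects are created and edited by name instead of through bind-to-edit global state. A minimal sketch, assuming an OpenGL 4.5 context and a loader such as GLAD providing the function pointers:

```cpp
#include <glad/glad.h>

// Create and fill a vertex buffer without ever calling glBindBuffer:
// the object is edited directly by name.
GLuint MakeVertexBuffer(const float* verts, GLsizeiptr bytes) {
    GLuint vbo = 0;
    glCreateBuffers(1, &vbo);
    glNamedBufferStorage(vbo, bytes, verts, 0);   // immutable storage
    return vbo;
}

// Set up a VAO's attribute layout the same way, with no global binding.
GLuint MakeVertexArray(GLuint vbo) {
    GLuint vao = 0;
    glCreateVertexArrays(1, &vao);
    glVertexArrayVertexBuffer(vao, 0, vbo, 0, 3 * sizeof(float)); // binding 0, 12-byte stride
    glEnableVertexArrayAttrib(vao, 0);
    glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);
    glVertexArrayAttribBinding(vao, 0, 0);
    return vao;
}
```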

pjmlp

29 minutes ago

I used to be a big OpenGL advocate all the way up to Longs Peak; eventually I came to realise it is just another committee-driven API, and writing a couple of plugin backends was never that big of a deal.

Just see the shading-language mess as well, with GLSL lagging behind, most devs using HLSL due to its market share, and now Slang, which again was contributed by NVidia.

Also, the day LunarG loses its Vulkan sponsorship, the SDK is gone.

connicpu

31 minutes ago

I can't imagine the world without Vulkan, because while it is a lot lower level and more difficult to work with, it makes things like DXVK not only possible but quite performant. Gaming on Linux has been accelerated enormously by projects like that.

pjmlp

27 minutes ago

Gaming on Linux is doing just fine on Android/Linux.

The problem is making gaming on GNU/Linux profitable; Vulkan will not fix that, and Proton is not a solution that will work out long term.

cyberax

11 minutes ago

Hard disagree. OpenGL state management was unfixable if it had to keep compatibility with OpenGL 2. That's why OpenGL 3/4 ended up being such huge messes.

The main problem with Vulkan is that Apple decided to go with its own Metal API, completely fracturing the graphics space.

pjmlp

2 minutes ago

All alternatives to Vulkan predate it, and it only exists thanks to Mantle's gift.

rahkiin

2 hours ago

DX12 was released 2 years before Vulkan… And there are plenty of advantages to having control over an API instead of putting it in some slow external organization without focus.

dotnet00

an hour ago

I think Vulkan adoption was hurt by how much more complicated it was to use efficiently early on.

Considering that DX12 made it out earlier, and it took some time for Vulkan to finally relax some of its rules enough to be relatively easy to use efficiently, I think it just lost momentum.