GP1_W12_DirectX Shading and Transparency PDF

Document Details


Uploaded by LovableLearning9524

Howest

Tags

DirectX, rendering, graphics programming, shader programming, computer graphics

Summary

This document provides instructions and details for implementing DirectX shading and transparency effects in a graphics programming course. It covers topics like diffuse color, normal mapping, specular color (Phong), and partial coverage, along with necessary shader parameters.

Full Transcript


GRAPHICS PROGRAMMING I – HARDWARE RASTERIZATION PART III

DirectX: Shading

Last week we rendered the mesh with only a diffuse texture. This week we add typical shading:
- Diffuse Color
- Normal Mapping
- Specular Color (Phong)

Enable parsing normal and tangent vectors in the .obj parser again. This also means the vertex struct needs to be adjusted on both the CPU and GPU side, including adjusting the input layout! Hint: use the semantics NORMAL and TANGENT.

Let's get shading! Update the pixel shader to have the same functionality as the PixelShading function of the software rasterizer, including specular using Phong. We must provide extra information to the shader to enable proper rendering, including:
- Normal Map: Texture2D
- Specular Map: Texture2D
- Glossiness Map: Texture2D
- Light Direction: float3 → for now, hardcode: { 0.577f, -0.577f, 0.577f }
- World Matrix: float4x4
- Camera Position: float3

Other variables you likely want to define: PI, Light Intensity (7.0f) and Shininess (25.0f).

Why do we need the World matrix? We need to transform our normals and tangents with the World matrix, NOT the WorldViewProjection! These are also transformed in the vertex shader. The world matrix can be passed as a float4x4, but only the rotation part is needed; the normal and tangent vectors are of type float3 regardless. Extract the rotation part of a float4x4 by casting it to a float3x3.

So why do we need the Camera Position? Remember that Phong specular uses the inverted view direction. We need both the camera position and the 3D position of the (interpolated) vertex to calculate it. If we pass the camera position as a global variable, the first part is covered.
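The shader globals listed above could be declared as sketched below. All variable names (gWorldMatrix, gNormalMap, …) are illustrative, not prescribed by the slides:

```hlsl
float4x4 gWorldViewProj  : WorldViewProjection;
float4x4 gWorldMatrix    : WORLD;
float3   gCameraPosition : CAMERA;

Texture2D gDiffuseMap;
Texture2D gNormalMap;
Texture2D gSpecularMap;
Texture2D gGlossinessMap;

// For now, a hardcoded light direction.
float3 gLightDirection = float3(0.577f, -0.577f, 0.577f);

static const float gPI             = 3.14159265f;
static const float gLightIntensity = 7.0f;
static const float gShininess      = 25.0f;

// Casting a float4x4 to float3x3 keeps its upper-left 3x3 block,
// i.e. the rotation (and scale) part needed for normals and tangents.
float3 TransformDirection(float3 direction)
{
    return normalize(mul(direction, (float3x3)gWorldMatrix));
}
```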
If we also transform the vertex position with the world matrix in the vertex shader and store the result in an extra output parameter that is interpolated by the rasterizer, the second part is covered as well, and we can calculate the view direction for every pixel in the pixel shader! ☺

Add the matrix in the shader using the semantic WORLD and the camera position using the semantic CAMERA. Capture the variables on the CPU side and, before rendering, update the GPU variables using the appropriate effect variable types (e.g. ID3DX11EffectMatrixVariable for the matrix and ID3DX11EffectVectorVariable for the position). Transform the incoming position (defined in model space) with the world matrix and store it in an additional variable in the vertex output struct. In the pixel shader, use the interpolated world position of the pixel and the origin of the camera's ONB to calculate the view direction.

Implement the shading effects as you did in the software rasterizer. Some useful functions and tips:
- You can write separate helper functions to call from the vertex and/or pixel shader functions.
- Use intrinsic functions to perform the necessary math:
  - cross → beware handedness!
  - dot
  - saturate → clamps to the range [0, 1]
  - reflect
  - normalize
- Use the swizzling functionality of HLSL.

1. https://docs.microsoft.com/en-us/windows/win32/direct3dhlsl/dx-graphics-hlsl-intrinsic-functions
2. https://docs.microsoft.com/en-us/windows/win32/direct3dhlsl/dx9-graphics-reference-asm-ps-registers-modifiers-source-register-swizzling

With the correct pixel shader and rasterizer state, you should get a similar result.

DirectX: Transparency

Let's make our demo even better with FIRE!
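The steps above (storing the world-space position as an extra interpolated output, then using the HLSL intrinsics for Phong) can be sketched as follows. The struct layout, variable names and the exact sign convention are illustrative, not prescribed by the slides:

```hlsl
// Illustrative globals; match these to your own shader.
float4x4 gWorldViewProj  : WorldViewProjection;
float4x4 gWorldMatrix    : WORLD;
float3   gCameraPosition : CAMERA;

struct VS_OUTPUT
{
    float4 Position      : SV_POSITION;
    float4 WorldPosition : WORLD_POS; // extra output, interpolated by the rasterizer
    float3 Normal        : NORMAL;
    float2 UV            : TEXCOORD;
};

// Vertex shader: transform the model-space position with BOTH matrices,
// once for the rasterizer and once for per-pixel world positions.
VS_OUTPUT VS(float3 position : POSITION, float3 normal : NORMAL, float2 uv : TEXCOORD)
{
    VS_OUTPUT output = (VS_OUTPUT)0;
    output.Position      = mul(float4(position, 1.0f), gWorldViewProj);
    output.WorldPosition = mul(float4(position, 1.0f), gWorldMatrix);
    output.Normal        = normalize(mul(normal, (float3x3)gWorldMatrix));
    output.UV            = uv;
    return output;
}

// Phong specular using the reflect/dot/saturate intrinsics.
// Sign conventions vary with how lightDir and viewDir are defined.
float Phong(float3 normal, float3 lightDir, float3 viewDir, float shininess)
{
    const float3 reflected = reflect(lightDir, normal);
    const float cosAlpha = saturate(dot(reflected, -viewDir));
    return pow(cosAlpha, shininess);
}

// In the pixel shader, the per-pixel view direction is then:
//   float3 viewDir = normalize(input.WorldPosition.xyz - gCameraPosition);
```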
Upon closer inspection of the previous image, you can notice that at the edges of the flames the backdrop is visible; unlike in our current render, there are no "hard edges". To recreate a similar effect, we need to introduce transparency.

Transparency is the root of a great many evils in game-engine rendering pipelines. Many techniques require special passes to render meshes correctly. There are more potential issues than we have time to go through, but we do recommend the following video by Morgan McGuire: https://www.youtube.com/watch?v=rVh-tnsJv54

Partial Coverage

Transparency consists of two categories:
- Transmission
- Partial Coverage

We are only going to implement partial coverage.

DirectX: Partial Coverage

Start by making a new shader and effect class. Why? We don't want any fancy shading for the fire effect; we want flat shading. Creating a new shader and effect class allows us to skip the part where we push texture data to the GPU, and it allows us to write a very simple vertex and pixel shader.
If you write an extra effect class, it helps to write a base effect class that loads and compiles the shader and updates the WorldViewProjection matrix. "All" shaders need this, so you avoid code duplication! Specific effect classes can then inherit from this base effect and add more variables. The flat-shading effect will simply read and return a value from the diffuse map, including the alpha channel! Once you've made the new shader and effect class, load in the new .obj and the associated diffuse map.

Part of the terminology used may cause confusion:
- Shader → sometimes called effect: code that runs on the GPU.
- Effect → often called material: holds the data for one particular shader, as well as the associated resource variables and/or views. It is also responsible for updating the shader variables.

There are a few problems here; let us go over them one by one. Notice that there are planes that are visible when looking from the back, but not visible from the front. Why? Back-face culling! In this case we want a double-sided effect, which means we want to be able to see the planes from all angles. Fix this by adding a rasterizer state that disables back-face culling.

That is better, but even though we are returning our alpha value, there is still no "transparency". Why? We didn't tell the pipeline it has to blend the returned value with the one that is already in the backbuffer! So how does blending work?
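A minimal sketch of the flat-shading pixel shader and the double-sided rasterizer state described above (all names are illustrative):

```hlsl
Texture2D gDiffuseMap;

SamplerState gSampler
{
    Filter = MIN_MAG_MIP_LINEAR;
};

// Double-sided rendering: disable back-face culling so the fire
// planes are visible from every angle.
RasterizerState gNoCulling
{
    CullMode = none;
};

struct VS_OUTPUT
{
    float4 Position : SV_POSITION;
    float2 UV       : TEXCOORD;
};

// Flat shading: simply return the diffuse sample, INCLUDING the
// alpha channel that blending will need later.
float4 PS(VS_OUTPUT input) : SV_TARGET
{
    return gDiffuseMap.Sample(gSampler, input.UV);
}
```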
Every pixel that passes the stencil and depth test is passed to the blending function in the output-merger stage. The blending function combines the Source Pixel (the new pixel coming from the pixel shader) with the Destination Pixel (the pixel already in the render target); the combined result is then written to the render target.

Result = Color_src × BlendFactor_src + Color_dst × BlendFactor_dst

The blend function consists of three important parts:
- Colors:
  - Source → pixel value from the Pixel Shader
  - Destination → pixel value in the Backbuffer (RenderTarget)
- BlendFactors for both the source and destination value (always multiplied).
- The Blend Operation.

Using those adjustable parameters, you can create a lot of blending effects!

How do we set the rendering pipeline to blend the returned value? Just as with the rasterizer state, you can set it on the CPU side and push it to the GPU, or you can define it in the shader itself. But instead of a rasterizer state, we use a Blend State. See the official documentation at MSDN:
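For partial coverage, a standard choice of factors is src_alpha / inv_src_alpha, which turns the formula above into Result = Color_src × Alpha_src + Color_dst × (1 − Alpha_src). A sketch of such a Blend State in the effect file (the name gAlphaBlending is illustrative):

```hlsl
BlendState gAlphaBlending
{
    BlendEnable[0]           = true;
    SrcBlend                 = src_alpha;     // BlendFactor_src = alpha from the pixel shader
    DestBlend                = inv_src_alpha; // BlendFactor_dst = 1 - source alpha
    BlendOp                  = add;           // the Blend Operation
    SrcBlendAlpha            = zero;
    DestBlendAlpha           = zero;
    BlendOpAlpha             = add;
    RenderTargetWriteMask[0] = 0x0F;          // write all four color channels
};
```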
https://docs.microsoft.com/en-us/windows/win32/api/d3d11/ns-d3d11-d3d11_render_target_blend_desc

Progress is made, but we still have one problem: pixels appear to be missing. Why? Looking at the depth buffer reveals the cause.

The transparent gaps are caused by pixels failing the depth test! We are not in control of the order of rendering within the draw call itself (well, we could be, if we sorted the planes in 3DS Max for one particular view), so some transparent planes occlude pixels that should be visible. The depth buffer doesn't store alpha values; it is not aware of transparency anywhere on the planes. How can we fix that?

Besides the rasterizer state and the blend state, there is also a state called the depth-stencil state. Using a depth-stencil state, we can specify what should or should not happen to the depth and stencil buffers. In our case we want to perform the depth test (checking against all the other geometry in our scene), but we do NOT want to write to the depth buffer!

1. https://docs.microsoft.com/en-us/windows/win32/api/d3d11/ns-d3d11-d3d11_depth_stencil_desc

Now our mesh is broken again. Why?
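A sketch of a depth-stencil state that keeps the depth test but disables depth writes, together with a technique pass that explicitly sets all three states. State, struct and technique names are illustrative:

```hlsl
float4x4 gWorldViewProj : WorldViewProjection;

RasterizerState gNoCulling { CullMode = none; };

BlendState gAlphaBlending
{
    BlendEnable[0] = true;
    SrcBlend       = src_alpha;
    DestBlend      = inv_src_alpha;
    BlendOp        = add;
};

DepthStencilState gNoDepthWrites
{
    DepthEnable    = true;  // still perform the depth TEST...
    DepthWriteMask = zero;  // ...but never WRITE to the depth buffer
    DepthFunc      = less;
    StencilEnable  = false;
};

struct VS_OUTPUT { float4 Position : SV_POSITION; float2 UV : TEXCOORD; };

VS_OUTPUT VS(float3 position : POSITION, float2 uv : TEXCOORD)
{
    VS_OUTPUT output = (VS_OUTPUT)0;
    output.Position = mul(float4(position, 1.0f), gWorldViewProj);
    output.UV = uv;
    return output;
}

float4 PS(VS_OUTPUT input) : SV_TARGET
{
    return float4(1.0f, 1.0f, 1.0f, 1.0f);
}

// Setting the states explicitly in every technique means each draw
// call starts from a known configuration instead of inheriting
// whatever the previous draw call left behind.
technique11 DefaultTechnique
{
    pass P0
    {
        SetRasterizerState(gNoCulling);
        SetDepthStencilState(gNoDepthWrites, 0);
        SetBlendState(gAlphaBlending, float4(0.0f, 0.0f, 0.0f, 0.0f), 0xFFFFFFFF);
        SetVertexShader(CompileShader(vs_5_0, VS()));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_5_0, PS()));
    }
}
```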
If you remember from GameTech, the memory on a GPU is persistent: if we don't overwrite a value, it stays in memory. In this case, if we don't change one of the states, the next draw call will still use the same states. To fix this, make sure your other shader has a correct rasterizer, blend and depth-stencil state in its technique! ☺ "Fixing" our vehicle shader results in the correct render.

Interestingly, we can still break things if we first render the combustion effect and then the vehicle. Why? → We are still order dependent! And there is no easy way to fix this.

The End

That's it, folks! It's time to complete the DirectX project, which is also part of your exam project. For the exam:
- There is going to be an announcement on Leho.
- Theory part (25%): a list of theory to study follows.
- Dual Rasterizer (25%): we go over the project, which contains both the DirectX and the software rasterizer, and ask pertinent questions. Stay calm and use your brains. You are all smart enough! ☺
- A good night's rest is important! Make sure to study, but do not forget to sleep before the theory exam.

Twelve weeks ago, you might not have made anything related to graphics programming. And now you've written your own:
- Software Ray Tracer with direct lighting and PBR.
- Software Rasterizer that mimics DirectX, supporting 3D meshes and different shading techniques.
- DirectX 11 application that hardware-accelerates the rasterization process, including your very own shaders!

For this… WE SALUTE YOU!

GOOD LUCK!
