Graphics Programming I - Software Rasterization Part III
Rasterization: Projection Stage

Last week we improved upon the basic projection stage and rasterization stage. The projection stage required a few calculations:

1. Vertex from Model Space to World Space (if not already defined in world space, which it was in our demo).
2. Vertex from World Space to View Space, using the inverse of the CameraToWorld (ONB) matrix, which gives a WorldToCamera (View) matrix.
3. Mapping the x and y coordinates to the camera settings (aspect ratio and field of view).
4. Applying the perspective divide to both the x and y components to transform to NDC Space (x and y in the [-1, 1] range).

The transformation from NDC to Screen/Raster Space is not part of the projection stage but of the rasterization stage (more on culling soon).

Thus far we defined these calculations in multiple lines of code. However, they can be combined into one. How? Matrices! ☺

Rasterization: Projection Matrix

Let's see how we can combine these calculations. Moving a model from Model Space to World Space (putting it somewhere in the world) can be defined using a WorldMatrix. You have done this before in your Ray Tracer (rotating triangles)! Going from World Space to View Space is already defined by a matrix as well: the inverse of the camera ONB, called the ViewMatrix or WorldToCamera matrix. So how can we combine the remaining projection calculations into a matrix, the ProjectionMatrix?

1. How to integrate the camera settings: we need to scale the x and y coordinates of our vertex with the camera settings.

$$ProjectedVertex'_x = \frac{ProjectedVertex_x}{AspectRatio \cdot FOV} \qquad ProjectedVertex'_y = \frac{ProjectedVertex_y}{FOV}$$

Putting this in a matrix is straightforward (row-vector convention):

$$\begin{bmatrix} v_x & v_y & v_z \end{bmatrix} \begin{bmatrix} \frac{1}{AspectRatio \cdot FOV} & 0 & 0 \\ 0 & \frac{1}{FOV} & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \frac{v_x}{AspectRatio \cdot FOV} & \frac{v_y}{FOV} & v_z \end{bmatrix}$$

You may have noticed some problems: when the camera comes too close to an object, the program crashes, and triangles behind the view plane are also rendered (but mirrored). The reason is that we do not check whether vertices are inside the frustum. We should check whether a vertex's x and y coordinates are inside this frustum: both coordinates are mapped to the [-1, 1] range after the perspective divide. If a value is less than -1.f or greater than 1.f, it is outside the frustum, so we ignore or cull it. Hence the name Frustum Culling.

What about triangles behind the view plane? Last week we defined the z value as an interpolated depth value defined in View Space. What is its range? How can we compare values to check whether it is in our frustum? We can normalize the z value to a certain range: in the camera we define a near plane and a far plane, and everything before the near plane or behind the far plane can be culled as well. (Frustum figure: researchgate.net (altered) – Stefan Diewald)

We must pick a range for the z coordinate. There is no single, definitive standard:

DirectX = [0, 1]
OpenGL = [-1, 1]

Because we are already following DirectX conventions, we are going to map the z coordinate to the range [0, 1].
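Once this mapping is in place (derived below), the complete per-vertex frustum test becomes a simple range check. Here is a minimal sketch; the Vector4 type and the function name are placeholders, not the course framework:

    // Placeholder type; the actual framework provides its own vector types.
    struct Vector4 { float x, y, z, w; };

    // Frustum culling sketch: reject a position outside NDC after the
    // perspective divide, assuming DirectX-style depth (z mapped to [0, 1]).
    bool IsInFrustum(const Vector4& ndc)
    {
        return ndc.x >= -1.f && ndc.x <= 1.f &&
               ndc.y >= -1.f && ndc.y <= 1.f &&
               ndc.z >=  0.f && ndc.z <= 1.f;
    }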
Rasterization: Projection Matrix

To remap z, we extend the matrix to 4×4 and introduce two unknowns, A and B, in the third column:

$$\begin{bmatrix} v_x & v_y & v_z & 1 \end{bmatrix} \begin{bmatrix} \frac{1}{AspectRatio \cdot FOV} & 0 & 0 & 0 \\ 0 & \frac{1}{FOV} & 0 & 0 \\ 0 & 0 & A & 0 \\ 0 & 0 & B & 0 \end{bmatrix} = \begin{bmatrix} \frac{v_x}{AspectRatio \cdot FOV} & \frac{v_y}{FOV} & A v_z + B & 0 \end{bmatrix}$$

Afterwards a perspective divide (by $v_z$) is performed, after which:

$$z = \frac{A v_z + B}{v_z} = A + \frac{B}{v_z}$$

A and B are unknown, but we can define the near and far plane ourselves, as well as the target range, which is [0, 1] by convention. When $v_z$ is on the near plane: $A + \frac{B}{near} = 0$, and when $v_z$ is on the far plane: $A + \frac{B}{far} = 1$.

Let's solve the two unknowns A and B, knowing:

$$A + \frac{B}{far} = 1 \quad \text{and} \quad A + \frac{B}{near} = 0$$

Let's rewrite the second formula to find B:

$$A + \frac{B}{near} = 0 \;\Rightarrow\; B = -A \cdot near$$

Let's find a solution for A by substituting B in our first formula and solving:

$$A + \frac{-A \cdot near}{far} = 1 \;\Rightarrow\; \frac{A \cdot far - A \cdot near}{far} = 1 \;\Rightarrow\; A \cdot (far - near) = far \;\Rightarrow\; A = \frac{far}{far - near}$$

Now we can substitute A in our B formula:

$$B = -A \cdot near = \frac{-far}{far - near} \cdot near = \frac{-(far \cdot near)}{far - near}$$

2. Now that we have solved A and B, it is time to plug them into the matrix:

$$\begin{bmatrix} \frac{1}{AspectRatio \cdot FOV} & 0 & 0 & 0 \\ 0 & \frac{1}{FOV} & 0 & 0 \\ 0 & 0 & \frac{far}{far - near} & 0 \\ 0 & 0 & \frac{-(far \cdot near)}{far - near} & 0 \end{bmatrix}$$

We have one problem now! Can you spot it? Hint: what happens after this step?

We just lost the original z value, which is needed for the perspective divide and vertex attribute interpolation! To mitigate this, we can store the original z component in the w component of our 4D vector by placing a 1 in the fourth column of the third row:

$$\begin{bmatrix} v_x & v_y & v_z & 1 \end{bmatrix} \begin{bmatrix} \frac{1}{AspectRatio \cdot FOV} & 0 & 0 & 0 \\ 0 & \frac{1}{FOV} & 0 & 0 \\ 0 & 0 & \frac{far}{far - near} & 1 \\ 0 & 0 & \frac{-(far \cdot near)}{far - near} & 0 \end{bmatrix} = \begin{bmatrix} \frac{v_x}{AspectRatio \cdot FOV} & \frac{v_y}{FOV} & \frac{far \cdot v_z}{far - near} - \frac{far \cdot near}{far - near} & v_z \end{bmatrix}$$

Info: the implementation of a Projection Matrix depends on the coordinate system used (Left-Handed > +Z vs. Right-Handed > -Z).

Left-Handed Coordinate System:

$$\begin{bmatrix} \frac{1}{AspectRatio \cdot FOV} & 0 & 0 & 0 \\ 0 & \frac{1}{FOV} & 0 & 0 \\ 0 & 0 & \frac{far}{far - near} & 1 \\ 0 & 0 & \frac{-(far \cdot near)}{far - near} & 0 \end{bmatrix}$$

Right-Handed Coordinate System:

$$\begin{bmatrix} \frac{1}{AspectRatio \cdot FOV} & 0 & 0 & 0 \\ 0 & \frac{1}{FOV} & 0 & 0 \\ 0 & 0 & \frac{far}{near - far} & -1 \\ 0 & 0 & \frac{far \cdot near}{near - far} & 0 \end{bmatrix}$$

We are using the left-handed version; keep that in mind when comparing with other sources.

We can now combine all our space transformations into one matrix:

WorldViewProjectionMatrix = WorldMatrix ∗ ViewMatrix ∗ ProjectionMatrix

So, in our projection stage, we can multiply every vertex with this matrix, which is the same for all vertices within one mesh! After that, we must perform the perspective divide to put them in NDC. Watch out though! Now use the w component of your vector (which holds the correct depth value in View Space):

v.x /= v.w
v.y /= v.w
v.z /= v.w

Warning: some rasterizers store $\frac{1}{v_w}$ instead of $v_w$, because this is the value needed for attribute interpolation.

After this, the coordinates are defined in:

$v_x \in [-1, 1]$
$v_y \in [-1, 1]$
$v_z \in [0, 1]$
$v_w$ = the z in View Space
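As a sketch, the left-handed matrix above can be built in code as follows. The raw 4×4 row-major array stands in for the course's Matrix type, and fov is assumed to be the half-angle tangent (tan(fovAngle / 2)) used in previous weeks; none of these names come from the framework itself:

    #include <cstring>

    // Left-handed projection matrix as derived above, row-major layout so a
    // row vector (x, y, z, 1) multiplies from the left.
    void MakeProjectionLH(float fov, float aspectRatio,
                          float nearZ, float farZ, float m[4][4])
    {
        std::memset(m, 0, sizeof(float) * 16);
        m[0][0] = 1.f / (aspectRatio * fov);
        m[1][1] = 1.f / fov;
        m[2][2] = farZ / (farZ - nearZ);            // A
        m[2][3] = 1.f;                              // copies view-space z into w
        m[3][2] = -(farZ * nearZ) / (farZ - nearZ); // B
    }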
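A matching sketch of the projection stage itself, multiplying each vertex by the combined matrix and then dividing by w (again with raw arrays as placeholders for the framework's Vector4 and Matrix types):

    // Transform one vertex by the WorldViewProjection matrix (row-vector
    // convention), then perform the perspective divide by w, NOT by z:
    // z has already been remapped to [0, 1] by the projection matrix.
    void ProjectVertex(const float v[4], const float wvp[4][4], float out[4])
    {
        // out = v * WorldViewProjection, with v = (x, y, z, 1) in model space.
        for (int col = 0; col < 4; ++col)
        {
            out[col] = v[0] * wvp[0][col] + v[1] * wvp[1][col] +
                       v[2] * wvp[2][col] + v[3] * wvp[3][col];
        }

        out[0] /= out[3];
        out[1] /= out[3];
        out[2] /= out[3];
        // Keep out[3] (the view-space depth) for attribute interpolation later.
    }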
Rasterization: Projection Matrix

Why not divide (x, y, z) by the z in Projection Space instead of the z in View Space? Once the z component has been remapped from the [near, far] range, it is no longer linear, which means dividing by this value would result in skewed, incorrect values.

Is it truly no longer linear? Invert the mapping:

$$z = A + \frac{B}{v_z} \;\Rightarrow\; v_z = \frac{B}{z - A}$$

Suppose $z = 0.5$, $near = 1$, $far = 1000$. If z were linear, $z = 0.5$ would mean halfway between near and far, so we would expect $v_z \approx 500$. Instead:

$$v_z = \frac{B}{z - A} = \frac{\frac{-(far \cdot near)}{far - near}}{z - \frac{far}{far - near}} = \frac{\frac{-(1000 \cdot 1)}{1000 - 1}}{\frac{1}{2} - \frac{1000}{1000 - 1}} \approx 2$$

An NDC depth of 0.5 thus corresponds to a View Space depth of roughly 2, barely past the near plane. Do not forget to change the perspective divide to use the "original" z in View Space, which is now in the w component.

For which other reasons do we keep the w component? To correctly interpolate vertex attributes, we need a linear value: $\frac{1}{z}$ is thus replaced with $\frac{1}{w}$. One cannot simply interpolate with the new z value, because it is not linear, not even when you invert it again.

To summarize, after applying the projection matrix and the perspective divide respectively:

$z$ = not linear ⇒ but in the [0, 1] range
$w$ = not linear ⇒ perspective distortion: projection does not preserve distances!
$\frac{1}{z}$ = still not linear ⇒ even after countering the perspective distortion, we still have a non-linear value! ☺
$\frac{1}{w}$ = linear!

Again, this means interpolating vertex attributes in the rasterization stage now uses $\frac{1}{w}$. It also means that, after the projection stage and the perspective divide, the positions of our vertices are no longer stored in a Vector3, but in a Vector4!

(Figure: the scene after the projection stage.)

Rasterization: Depth Buffer

Hold on, did we not store the View Space $v_z$ in our Depth Buffer last week? Correct, and now that value is stored in $v_w$. If we kept storing it, we would be storing the w value (the z in View Space): we would be creating a w-buffer and not a z-buffer. In hardware-accelerated rasterizers, the depth buffer is a z-buffer.

Why go through this trouble? Frustum clipping: we can now check whether a vertex is inside the frustum. The Depth Test becomes:

1. Is our interpolated depth value in the range [0, 1]?
2. If so, is this value closer than the one stored in our depth buffer?
3. If this pixel is closer → color the pixel and store its depth in the depth buffer.

WARNING! For the Depth Buffer only, we do not use these values to interpolate vertex attributes; we are only interested in comparing depth values. That these values are not linear is fine, as long as both values are non-linear with the same function. With barycentric weights $w_0$, $w_1$, $w_2$:

$$ZBufferValue = \frac{1}{\frac{w_0}{V_{0,z}} + \frac{w_1}{V_{1,z}} + \frac{w_2}{V_{2,z}}} \quad \text{(interpolated depth, non-linear!)}$$

This ZBufferValue is the value we compare in the Depth Test and the value we store in the Depth Buffer (it uses $V_z$).

When we want to interpolate vertex attributes with a correct depth (color, uv, normals, etc.), we still use the View Space depth ($V_w$):

$$WInterpolated = \frac{1}{\frac{w_0}{V_{0,w}} + \frac{w_1}{V_{1,w}} + \frac{w_2}{V_{2,w}}} \quad \text{(interpolated depth, linear!)}$$

$$UVInterpolated = \left( \frac{V_{0,uv}}{V_{0,w}} w_0 + \frac{V_{1,uv}}{V_{1,w}} w_1 + \frac{V_{2,uv}}{V_{2,w}} w_2 \right) \cdot WInterpolated \quad \text{(interpolated UV)}$$

The same applies to the other vertex attributes!
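A sketch of the depth test built from the ZBufferValue formula above; the function shape and parameter names are placeholders for however your rasterizer's pixel loop is organized:

    // Per-pixel depth test with the new z-buffer.
    // w0/w1/w2: barycentric weights of the pixel inside the triangle.
    // z0/z1/z2: NDC z of the three vertices (after the perspective divide).
    // depthBufferValue: the buffer entry for this pixel, cleared to 1.f.
    // Returns true when the pixel is visible and its depth was stored.
    bool DepthTest(float w0, float w1, float w2,
                   float z0, float z1, float z2,
                   float& depthBufferValue)
    {
        const float zInterpolated = 1.f / (w0 / z0 + w1 / z1 + w2 / z2);

        // 1. Is the interpolated depth inside the [0, 1] frustum range?
        if (zInterpolated < 0.f || zInterpolated > 1.f) return false;

        // 2. Is it closer than the value already stored?
        if (zInterpolated >= depthBufferValue) return false;

        // 3. Closer: store the depth; the caller can now shade the pixel.
        depthBufferValue = zInterpolated;
        return true;
    }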
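And a sketch of the perspective-correct attribute interpolation, here for UVs, using the View Space depth stored in w (again, every name is a placeholder; the same pattern applies to normals, colors, etc.):

    struct UV { float u, v; }; // placeholder for the framework's Vector2

    // w0/w1/w2: barycentric weights; wv0/wv1/wv2: the w components (View
    // Space depth) of the three projected vertices.
    UV InterpolateUV(float w0, float w1, float w2,
                     float wv0, float wv1, float wv2,
                     UV uv0, UV uv1, UV uv2)
    {
        // 1/w is linear in screen space: interpolate it, then invert.
        const float wInterpolated = 1.f / (w0 / wv0 + w1 / wv1 + w2 / wv2);

        // Interpolate uv/w, then multiply by wInterpolated to undo the divide.
        UV result;
        result.u = (uv0.u / wv0 * w0 + uv1.u / wv1 * w1 + uv2.u / wv2 * w2) * wInterpolated;
        result.v = (uv0.v / wv0 * w0 + uv1.v / wv1 * w1 + uv2.v / wv2 * w2) * wInterpolated;
        return result;
    }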
Rasterization: Depth Buffer

What changes this week:

- Use a (World)ViewProjectionMatrix for transforming our vertices.
- Store the result in a Vector4 instead of a Vector3.
- Use $V_z$ to interpolate the depth, and use the result for our DepthTest and DepthBuffer.
- Use $V_w$ to interpolate all other vertex attributes, using correct depth interpolation.
- Now that we have our x, y and z in NDC, perform culling.

The flow of your program should now be:

PROJECTION STAGE: Vertices to NDC → Frustum Culling → Clipping (optimization)
RASTERIZATION STAGE: NDC to Raster (x, y) → Rasterization → Attribute Interpolation

Rasterization: Clipping?

Optional: we do not render a triangle as soon as one vertex is outside the frustum. But what about triangles that are only partially in the frustum? That is what clipping solves.

Rasterization: What to do?

Next steps:

- Add a near and far plane variable to your camera (defaults: near = 1.f, far = 1000.f).
- Use the (World)ViewProjectionMatrix.
- Use the correct depth buffer for the depth test (z-buffer, not w-buffer).
- Add frustum culling.

If everything works, you should get the exact same result, but you'll finally have a "correct" depth buffer! ☺

Toggle between FinalColor and DepthBuffer with the 'F4' key. In other words, make sure you can visualize your depth buffer! Hint: the values will probably be close to 1, showing a lot of white. Remap the values before rendering using: Remap(depthValue, 0.985f, 1.f)

Rasterization: Meshes

Finally, render a 3D mesh again. With the ray tracer we just read in the vertex positions and calculated the normal of the primitive (triangle) ourselves; we did not have UV coordinates. With our rasterizer we don't do this: we just read in per-vertex attributes and interpolate them. This means the OBJParser needs an update to also read in:

- UV coordinates: prefix vt
- Normals: prefix vn

Use Utils::ParseOBJ. Make sure to uncomment the required vertex attributes (DataTypes::Vertex) and to comment out the 'DISABLE_OBJ' directive (Utils::ParseOBJ). This function reads an OBJ file and populates the vertex (position, uv, normal & tangent) and index arrays. Also let the mesh rotate using a WorldMatrix.

(Reference render: Camera Position 0.f, 5.f, -30.f (60 FovAngle) || DepthRemap(0.995f – 1.f))

GOOD LUCK!