Graphics Programming 1 – Software Rasterization Part I


Summary

This document covers Graphics Programming 1 – Software Rasterization Part I: an introduction to the rasterization algorithm and how it compares to ray tracing. It is introductory computer-graphics material and includes details on coordinate systems, the projection stage, the rasterization stage, barycentric coordinates, the depth buffer, and bounding-box optimization.

Full Transcript


Rasterization vs Ray Tracing

Ray tracing:

    for each pixel (cast a ray)
        for each triangle
            does ray hit triangle?

Rasterization:

    for each triangle (project)
        for each pixel
            is pixel in triangle?

A ray tracer is often called image centric, because for every pixel in the view plane we cast a ray into the scene and check for collisions. A rasterizer is often called object centric, because we project the primitive onto the view plane and then check whether a pixel overlaps with the primitive. Rasterization is the "inverse" approach of ray tracing.

Rasterization: Why?

When comparing ray tracing and rasterization, we notice that the inner and outer for loops are swapped. Why would we do this? It is a different technique to solve the visibility problem. Some say that rasterization is always faster than ray tracing. This is not necessarily true: it depends on scene complexity, data organization, and which optimization techniques are being used. In general, rasterization is much faster because there is better convergence when projecting, and because GPUs are optimized to perform these operations.

Rasterization: Algorithm

Let's take a closer look at the rasterization algorithm. As mentioned before, the algorithm contains two stages:

1. Projecting 3D geometry onto our 2D view plane → PROJECTION STAGE
2. Checking whether a pixel in the 2D view plane overlaps with the projected geometry → RASTERIZATION STAGE

Many books and online resources explain rasterization in this order. We do the opposite and start programming our rasterizer with coordinates that have already been projected.

Rasterization: Coordinate Systems

Since we picked a "smart" coordinate system for ray tracing, we can adopt the same one for our rasterizer. This is also the general convention used in rasterization. The three coordinate systems (shown as diagrams on the slide):

- Raster/Screen Space: from (0,0) at the top left to (screenWidth, screenHeight) at the bottom right.
- Normalized Raster Space: from (0,0) at the top left to (1,1) at the bottom right.
- NDC Space (Normalized Device Coordinates): from (-1,1) at the top left to (1,-1) at the bottom right.

Rasterization: Rasterization Stage

To start with the rasterization stage, define a triangle with the following NDC coordinates:

    Vertex 0: position( 0.0f,  0.5f, 1.0f)
    Vertex 1: position( 0.5f, -0.5f, 1.0f)
    Vertex 2: position(-0.5f, -0.5f, 1.0f)

The rasterization stage is about checking whether a pixel is in the triangle. Before checking this, the projected triangle needs to be converted from NDC space to screen space (often called raster space), using the following formulas:

    ScreenSpaceVertex_x = (NDCVertex_x + 1) / 2 * ScreenWidth
    ScreenSpaceVertex_y = (1 - NDCVertex_y) / 2 * ScreenHeight

Notice that these formulas are the inverse of the ones that convert from screen space to view-plane space. Now that the vertices are in screen space, we can check whether a pixel is in the triangle.
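As a small sketch of this NDC-to-screen conversion (the Vector3 struct and the function name are illustrative assumptions, not names from the course template):

```cpp
struct Vector3 { float x, y, z; };

// Map a vertex from NDC ([-1,1] on both axes, y pointing up) to
// screen/raster space (pixels, y pointing down). The z component is
// kept untouched because we will need the depth later on.
Vector3 NdcToScreen(const Vector3& ndc, int screenWidth, int screenHeight)
{
    return {
        (ndc.x + 1.0f) / 2.0f * static_cast<float>(screenWidth),
        (1.0f - ndc.y) / 2.0f * static_cast<float>(screenHeight),
        ndc.z // depth unchanged
    };
}
```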
This time it is not an inside-outside test against an intersection point, but against a pixel. We define the pixel as a Vector2 point: P(px + 0.5, py + 0.5). Do not forget to add 0.5, since we should test from the center of a pixel, not its top-left corner.

Just as in the ray tracer, we can use cross products and check the signs of the returned signed areas. Also keep in mind not to use the pixel itself in the cross products, but rather the vector pointing from the current vertex to the pixel!

The cross product of two 2D vectors returns their signed area. This is a scalar with the same value as the magnitude of a Vector3 cross product, but without a direction, because its 3D counterpart does not exist in the 2D plane. We store our screen-space vertices in 3D (keeping the initial depth value), but for now we are only interested in the signed area, so we use 2D cross products and ignore the z (depth) value.
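A minimal sketch of that test (function names are illustrative); it accepts both windings by requiring all three signs to agree:

```cpp
struct Vector2 { float x, y; };

// 2D "cross product": the signed area of the parallelogram spanned by a and b
// (the z component of the corresponding 3D cross product).
float Cross2D(const Vector2& a, const Vector2& b)
{
    return a.x * b.y - a.y * b.x;
}

// Inside-outside test for a pixel center p against a screen-space triangle.
// For every edge we cross the edge vector with the vector from the edge's
// starting vertex to the pixel; the pixel is inside when all signs agree.
bool IsPixelInTriangle(const Vector2& p,
                       const Vector2& v0, const Vector2& v1, const Vector2& v2)
{
    const float c0 = Cross2D({ v1.x - v0.x, v1.y - v0.y }, { p.x - v0.x, p.y - v0.y });
    const float c1 = Cross2D({ v2.x - v1.x, v2.y - v1.y }, { p.x - v1.x, p.y - v1.y });
    const float c2 = Cross2D({ v0.x - v2.x, v0.y - v2.y }, { p.x - v2.x, p.y - v2.y });
    return (c0 >= 0.f && c1 >= 0.f && c2 >= 0.f) ||
           (c0 <= 0.f && c1 <= 0.f && c2 <= 0.f);
}
```

For pixel (px, py) you would call it with p = { px + 0.5f, py + 0.5f }.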
Loop over all triangles (in this case, just one) with vertices defined in NDC space. Loop over all the pixels and check whether a pixel is in the triangle, using the same technique as in the ray tracer (Triangle Intersection Test). If it is inside the triangle, color the pixel white. Implemented correctly, you should see a white triangle with vertices

    V0 = (0, 0.5, 1)
    V1 = (0.5, -0.5, 1)
    V2 = (-0.5, -0.5, 1)

Notice the clockwise order. That is the essence of the rasterization stage. There is more, but this suffices to start.

Rasterization: Projection Stage

On to the PROJECTION STAGE, whose results are the NDC triangles for rasterization. Object models are first defined in model space: their vertices are relative to the model axes. The origin of these axes is usually the center of the object, which allows for sensible rotations. Once we use these models in an environment, they end up in world space. This space consists of triangles defined by coordinates relative to the world axes. In the ray tracer we defined vertices in model space and transformed (translated, rotated, scaled) them to world space using a transformation matrix. (image: codinglabs.net)

How does one transform world-space coordinates to Normalized Device Coordinates?

Step 1: Transforming from world space to camera space (VIEW SPACE). We want to know how a triangle in world space relates to the camera; after all, it is the camera that defines what we can see. To go from one coordinate system (ONB) to another, we apply a transformation matrix to all points and vectors (the VIEW MATRIX). In the ray tracer we created such a transformation matrix, which defines the camera relative to world space. That matrix was used to transform points and vectors from camera space to world space, which allowed us to move the camera, and its view rays, around in world space. Now we want the opposite to happen: to transform points from world space to camera space. Effectively we rotate and translate the world around the camera; the camera itself does not move. Since the ray tracer already gives us the transformation matrix in one direction, we can perform the opposite transformation with its inverse. Thus, to go from world to camera space, we multiply every point with the inverse of the cameraToWorld matrix.

- CameraToWorld Matrix → transforms from view/camera space to world space.
- WorldToCamera Matrix (also known as the View Matrix) → transforms from world space to camera/view space.

Transform all vertices from world space to camera space, often called view space, by multiplying them with the inverse of the camera matrix. The Matrix class comes with an implemented inverse function. (image: codinglabs.net) In Direct3D 9 this is implemented as D3DXMatrixLookAtLH(); you are free to implement either approach. https://learn.microsoft.com/en-us/windows/win32/direct3d9/d3dxmatrixlookatlh
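If your Matrix class exposes an inverse, the view transform is one multiplication per vertex. For intuition, here is a hedged sketch of what that inverse boils down to when the camera matrix is a pure ONB plus a translation: the rotation part is orthonormal, so its inverse is its transpose, and the transform reduces to three dot products (names are illustrative):

```cpp
struct Vector3 { float x, y, z; };

float Dot(const Vector3& a, const Vector3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// World -> view space, given the camera's ONB axes and its world origin.
// Subtract the camera origin, then project the result onto each camera axis.
Vector3 WorldToView(const Vector3& p,
                    const Vector3& right, const Vector3& up,
                    const Vector3& forward, const Vector3& cameraOrigin)
{
    const Vector3 t{ p.x - cameraOrigin.x, p.y - cameraOrigin.y, p.z - cameraOrigin.z };
    return { Dot(t, right), Dot(t, up), Dot(t, forward) };
}
```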
Step 2: Projecting 3D points onto a 2D view plane. What about orthographic projection, effectively replacing all z-values with 1? What does it look like if we were to draw lines from each vertex to the camera origin?

These coordinates are not yet in the required [-1, 1] range of NDC. Let's see how we go from view space to these NDC coordinates. What we want to do is project our 3D point onto our 2D view plane. So how do we project a 3D point onto a 2D plane? With the view plane at z = 1, similar triangles give us (using the slide's labels, where A is the camera origin, B and C sit at the vertex's depth, and B' and C' on the view plane):

    B'C' / BC = AB' / AB
    Vertex'_y / Vertex_y = 1 / Vertex_z
    Vertex'_y = Vertex_y / Vertex_z

This is how you project a 3D point onto the view plane. It is called the PERSPECTIVE DIVIDE!

We are using a left-handed system, which means that our positive Z-axis points into the screen. This means ViewSpaceVertex_z is currently the distance of a vertex relative to the camera (orthogonal to the view plane).

    ProjectedVertex_x = ViewSpaceVertex_x / ViewSpaceVertex_z
    ProjectedVertex_y = ViewSpaceVertex_y / ViewSpaceVertex_z

What about our 'projected' z-component? For now, we store it unchanged:

    ProjectedVertex_z = ViewSpaceVertex_z

After the perspective divide, a 3D point is in what we call PROJECTION SPACE, where the z-axis points into the screen.

In summary: we project each point onto the screen. Effectively, this is only an orthographic projection; but since each point is divided by its z-component (which becomes larger for points further away), we mimic perspective distortion. To quote Edwin Catmull: "Screen-space is also 3D, but the objects have undergone a perspective distortion so that an orthogonal projection of the object onto the x-y plane would result in the expected perspective image."

There is something missing still. Can you guess what? Tip: week 1 ray tracer orange.

Step 3: We have yet to consider the camera settings and screen size. Most of us do not have square screens but rectangular ones. As it turns out, we can use the same logic as in the ray tracer, but inverted. To ensure each point is projected relative to the size of our screen, it needs to be divided by the aspect ratio (x-component only) and the field of view (FOV):

    ProjectedVertex'_x = ProjectedVertex_x / (AspectRatio * FOV)
    ProjectedVertex'_y = ProjectedVertex_y / FOV

The order of steps 2 and 3 does not matter. Whether you first divide to add perspective and then account for screen dimensions, or the other way around, will net the same result.
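Putting steps 2 and 3 together for a single view-space vertex, a minimal sketch (assuming, as in the ray tracer, that fov stores tan(totalFovAngle / 2) and aspectRatio is screenWidth / screenHeight):

```cpp
struct Vector3 { float x, y, z; };

// View space -> projection space: perspective divide plus camera settings.
Vector3 ProjectVertex(const Vector3& viewSpace, float aspectRatio, float fov)
{
    Vector3 projected{};

    // Step 2: the perspective divide. Divide x and y by the view-space depth.
    projected.x = viewSpace.x / viewSpace.z;
    projected.y = viewSpace.y / viewSpace.z;

    // Step 3: account for the screen shape (x only) and the field of view.
    projected.x /= aspectRatio * fov;
    projected.y /= fov;

    // Keep the view-space depth; the depth buffer will need it later.
    projected.z = viewSpace.z;
    return projected;
}
```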
Rasterization: Projection + Rasterization Stage

The FULL projection stage:

1. Model → World Space (World Matrix)
2. World → View Space (View Matrix)
3. View → Clipping Space (NDC) (Projection Matrix)

Using a WorldViewProjection (WVP) Matrix, this becomes a single step: Model → Clipping Space (WVP Matrix = World * View * Projection). We will see this later.

The rasterization stage:

1. NDC → Raster Space

Points are now in projection space, with NDC [-1, 1] for the x- and y-components, and can be rasterized as before. Textual recap: transforming a 3D point to NDC requires the following steps:

1. Transform the point from world space to camera space, often called view space, by multiplying it with the inverse of the camera matrix (ONB).
2. Take the camera settings/size into account by dividing the point by the aspect ratio (x-component only) and the field of view.
3. Apply the perspective divide to obtain perspective distortion, meaning divide both the x- and y-components by the z-component. For now, we store the view-space z-component in the z-component of the projected point. After the divide, the point is in projection space.

Once our point is in projection space, we still need to put it in screen space before we can start comparing it with actual pixels (rasterization). So we apply the following chain of transformations:

World Space → View Space → Projection Space → Screen Space

You can think of the projection stage as the opposite of what we did in the ray tracer. As a reminder: applying a matrix transforms coordinates relative to a new axis system. We call these spaces; we represent a point relative to a certain space / coordinate system.

The next step is to implement the camera. Feel free to copy-paste the 'update' logic from the ray tracer. We can start with the ONB of the ray tracer and use its inverse; moving the camera doesn't change. Define a triangle in world space, transform its vertices accordingly, and use the previously written rasterization stage. Hint: put the transformation logic in a function. Define a triangle with coordinates:

    v0(0.f, 2.f, 0.f), v1(1.f, 0.f, 0.f), v2(-1.f, 0.f, 0.f)

and a camera with origin (0.f, 0.f, -10.f) and FovTotalAngle 60.f. Implemented correctly, you should see the triangle on screen. You should also be able to move and rotate around the triangle, because you already have a working camera from the ray tracer. Should be doable…
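Tying the whole chain together for that exact triangle and camera, here is a hedged, self-contained sketch; the screen size is an assumption, and the camera is reduced to a translation (no rotation) so the sketch stays short:

```cpp
#include <cmath>
#include <cstdio>

struct Vector3 { float x, y, z; };

int main()
{
    const int screenWidth = 640, screenHeight = 480;           // assumed size
    const float aspect = float(screenWidth) / float(screenHeight);
    const float fov = std::tan((60.0f * 3.14159265f / 180.0f) / 2.0f); // FovTotalAngle 60
    const Vector3 cameraOrigin{ 0.0f, 0.0f, -10.0f };
    const Vector3 triangle[3]{ {  0.0f, 2.0f, 0.0f },
                               {  1.0f, 0.0f, 0.0f },
                               { -1.0f, 0.0f, 0.0f } };

    for (const Vector3& v : triangle)
    {
        // World -> view (translation only here; a real camera multiplies
        // with the inverse of its ONB instead).
        const Vector3 view{ v.x - cameraOrigin.x, v.y - cameraOrigin.y, v.z - cameraOrigin.z };

        // View -> projection space: perspective divide + camera settings.
        const float ndcX = view.x / view.z / (aspect * fov);
        const float ndcY = view.y / view.z / fov;

        // Projection -> screen space; keep the view-space depth.
        const float sx = (ndcX + 1.0f) * 0.5f * screenWidth;
        const float sy = (1.0f - ndcY) * 0.5f * screenHeight;
        std::printf("screen: (%.1f, %.1f), depth %.1f\n", sx, sy, view.z);
    }
    return 0;
}
```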
Rasterization: Barycentric Coordinates

When we perform the triangle inside-outside test, we check whether a point is on the "right" side of each of its edges. The cross product of these 2D vectors is a scalar value, representing the signed area of a parallelogram. This scalar value can also be interpreted in another way! Let's have a look.

    SignedAreaParallelogram(P, V0, V1) = Cross(V1 - V0, P - V0)
    SignedAreaTriangle(P, V0, V1) = Cross(V1 - V0, P - V0) / 2

If we divide this signed area by the total area of the entire triangle, we get different scenarios. For the sub-triangle (P, V0, V2), the slide shows three cases: the ratio is 0 when P lies on the edge V0V2, 1 when P coincides with the opposite vertex V1, and something in between (e.g. 0.4) when P lies inside the triangle. So the returned value says how much of the triangle's area, formed by the two vectors, covers the area of the total triangle. It is a sort of ratio.

More importantly, it is a scalar value that can be interpreted as how close the pixel is to the vertex that is NOT part of this smaller triangle. In other words, it is a weight value we can use, together with the vertex, to determine how close our pixel is to the actual vertex, relative to the inside of the triangle. We do this for all three vertices, and the three weights form our barycentric coordinates!

Barycentric coordinates: think of a triangle where each of the vertices has a specific mass. Now we can define a point P which is the center of mass of this triangle; think of where your index finger should be when balancing a metal triangle sheet on said finger. With vertices P0(1:0:0), P1(0:1:0) and P2(0:0:1), the point P(1:1:1) gives all vertices the same weight; to find this center of mass, draw lines from each vertex to the middle of the opposite edge. By changing the weights of the vertices, we can move the point P over the triangle, because it always needs to be the center of mass. It is even possible to define points outside the triangle by using negative weights. Another way to think about it is as gravitational fields, where positive values pull the center of mass toward a vertex and negative values push it away. (The slide illustrates this with the points P(2:1:1), P(1:1:1) and P(-1:1:1).)
https://en.wikipedia.org/wiki/Barycentric_coordinate_system#Barycentric_coordinates_on_triangles

Coordinates of points inside the triangle all have positive values, a handy property to determine whether a point is inside a triangle. Barycentric coordinates are unique up to scaling, meaning that for any positive λ the coordinates λa:λb:λc are the same. Thus 1:0:0 = 2:0:0 and 1:1:1 = 5:5:5. To normalize the coordinates, divide each weight by the total sum of the weights. Note how we change notation after normalizing, using commas instead of colons. For example: 1:0:0 = (1, 0, 0) and 1:1:1 = (1/3, 1/3, 1/3). Once you know two of the normalized barycentric coordinates, you also know the last one, as their total sum is exactly 1.

How are we ever going to calculate all of this? When we draw lines from each of the vertices to the center-of-mass point P, we divide the triangle into three smaller triangles. There is a connection between the weight of a vertex and the area of the smaller triangle made up by the other two vertices and the center of mass; we can demonstrate this connection by changing the weights. The slide shows the areas for various normalized weight values: P(1:1:1) = (1/3, 1/3, 1/3), P(1:1:2) = (1/4, 1/4, 1/2), P(0:0:1) = (0, 0, 1) and P(1:1:0) = (1/2, 1/2, 0). The ratio of the opposite triangle's area over the entire triangle's area equals the normalized weight of the vertex.

So we can calculate the weights using normalized areas. For V2 we calculate the area of the opposite triangle V0V1P using the cross product:

    Area(V0, V1, P) = Cross(V1 - V0, P - V0) / 2

To normalize the value, we divide by the area of the entire triangle V0V1V2. The same applies to the other two vertices.

Recap: if we do this for every vertex, and thus every small triangle, we have three weights that map relative to the triangle. We can use these to figure out where the pixel is inside the triangle (in barycentric coordinates), using the following formula:

    P_insideTriangle = W0 * V0 + W1 * V1 + W2 * V2

Do not forget to keep track of which cross product represents which weight (the vertex opposite to the inner triangle):

    Weight of V0 (W0) is defined by: Vector(P - V1) and Vector(V2 - V1)
    Weight of V1 (W1) is defined by: Vector(P - V2) and Vector(V0 - V2)
    Weight of V2 (W2) is defined by: Vector(P - V0) and Vector(V1 - V0)
    Total Triangle Area = W0 + W1 + W2

When you add all the weights (each divided by the total triangle area), you should get a total value of exactly 1!

Suppose every vertex has a color value encoded. When you check whether a pixel is inside the triangle, and it is, which color should the pixel have? Using barycentric coordinates, you know the weights relative to each vertex; we can use these to interpolate the three colors (one from each vertex) into the pixel's final color.

Next up in your rasterizer: use the predefined Vertex struct (it contains multiple attributes, but for now we are only interested in position and color). When checking whether a pixel is inside the triangle, store the results of the cross products. If it is inside the triangle, use them to calculate the final weights. Do so by dividing the area of the current parallelogram (the result of the cross product) by the area of the total parallelogram (the cross product of V1 - V0 and V2 - V0), or by the sum of the individual weights. There is no need to divide both areas by 2 (making them triangle areas), because we only care about the ratio.
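A hedged sketch of those weights and the color interpolation (types and function names are illustrative). Parallelogram areas are used throughout, since only the ratio matters, and the ratios come out positive for either winding because numerator and denominator share the same sign:

```cpp
struct Vector2   { float x, y; };
struct ColorRGB  { float r, g, b; };

float Cross2D(const Vector2& a, const Vector2& b)
{
    return a.x * b.y - a.y * b.x;
}

struct BarycentricWeights { float w0, w1, w2; };

// Each weight belongs to the vertex OPPOSITE the sub-triangle it measures.
BarycentricWeights ComputeWeights(const Vector2& p,
                                  const Vector2& v0, const Vector2& v1, const Vector2& v2)
{
    // Total parallelogram area: Cross(V1 - V0, V2 - V0).
    const float totalArea = Cross2D({ v1.x - v0.x, v1.y - v0.y },
                                    { v2.x - v0.x, v2.y - v0.y });

    // W0 from the edge V1->V2 and Vector(P - V1); W1 likewise for V1.
    const float w0 = Cross2D({ v2.x - v1.x, v2.y - v1.y },
                             { p.x - v1.x, p.y - v1.y }) / totalArea;
    const float w1 = Cross2D({ v0.x - v2.x, v0.y - v2.y },
                             { p.x - v2.x, p.y - v2.y }) / totalArea;

    // Normalized weights sum to exactly 1, so the third one is free.
    const float w2 = 1.0f - w0 - w1;
    return { w0, w1, w2 };
}

ColorRGB InterpolateColor(const BarycentricWeights& w,
                          const ColorRGB& c0, const ColorRGB& c1, const ColorRGB& c2)
{
    return { w.w0 * c0.r + w.w1 * c1.r + w.w2 * c2.r,
             w.w0 * c0.g + w.w1 * c1.g + w.w2 * c2.g,
             w.w0 * c0.b + w.w1 * c1.b + w.w2 * c2.b };
}
```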
Calculate the final color by interpolating every vertex color with its corresponding weight:

    Interpolated Color = VertexColor0 * W0 + VertexColor1 * W1 + VertexColor2 * W2

Using the following triangle, you should get the result shown on the next slide:

    V0: position(0, 4, 2),  color(1, 0, 0)
    V1: position(3, -2, 2), color(0, 1, 0)
    V2: position(-3, -2, 2), color(0, 0, 1)

Rasterization: Depth Buffer

What if we want to render two triangles: which triangle is closer to our camera? So far we have not considered the proximity, or depth, of a triangle when rendering. The color of each pixel should be determined by the triangle that happens to be closest. Thanks to hindsight, we already store the z-component in our projected points. When rasterizing multiple triangles, we need to keep track, per pixel, of how close the closest point rendered at that pixel was. We need a way to store this depth information.

Create an array with the same size as the array of pixels. Instead of RGB colors, it contains simple floating-point values (the depth, or z-component). We call it the DEPTH BUFFER. Whenever a pixel is inside the current triangle, we read from the depth buffer (which should be initialized with the maximum value of a float, representing points that are infinitely far away). If a pixel's depth value is smaller than the one in the depth buffer (closer to the viewer), we overwrite, or render, the pixel in the pixel buffer and store its new depth value in the depth buffer. The comparison is called the DEPTH TEST; writing the depth is called a DEPTH WRITE.

If we write or compare a depth value, which depth value should we take: vertex 0, 1 or 2? None of the above, we should interpolate between the three depth values using the barycentric weights!

Adjust the rasterizer to take depth into account and render two triangles:

- Create the depth buffer (an array with the total number of pixels, holding just float values) and initialize all values with the maximum value of a float. The depth buffer holds the depth values of all the closest pixels.
- When a pixel is inside the triangle, calculate its depth by interpolating between the three depth values.
- Do the depth test (is this pixel closer than the one stored in the depth buffer?). If the test succeeds, render the pixel and store the pixel's depth value as the new depth buffer value.
- If two depth values are identical, you have to choose: either keep the current value/pixel or replace it with the next one. Generally, this situation is a bad thing and should be avoided; the resulting artifact is called depth fighting or z-fighting.
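A minimal sketch of the depth test and write inside the per-pixel loop (the buffer layout and names are assumptions):

```cpp
#include <limits>
#include <vector>

// Created once; re-initialized every frame with "infinitely far away":
//   std::vector<float> depthBuffer(screenWidth * screenHeight,
//                                  std::numeric_limits<float>::max());

// w0/w1/w2 are the normalized barycentric weights of pixel (px, py),
// z0/z1/z2 the stored view-space depths of the triangle's vertices.
bool DepthTestAndWrite(int px, int py, int screenWidth,
                       float w0, float w1, float w2,
                       float z0, float z1, float z2,
                       std::vector<float>& depthBuffer)
{
    // Interpolate the depth with the barycentric weights.
    const float pixelDepth = w0 * z0 + w1 * z1 + w2 * z2;

    const int index = px + py * screenWidth;
    if (pixelDepth >= depthBuffer[index])
        return false; // DEPTH TEST failed: something closer was already drawn

    depthBuffer[index] = pixelDepth; // DEPTH WRITE
    return true; // the caller now writes the interpolated color to the pixel buffer
}
```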
Rasterization: Optimizations

Finally, notice that we loop over all the pixels for every triangle. That is a lot of unnecessary pixels, depending on how big the triangle is; we are only interested in pixels that are (potentially) covered by our triangle. With our triangle already projected into screen space, we can figure out which pixels will never be covered by it. Think of an optimization we implemented in the ray tracer: we can calculate the bounding box of each triangle in screen space and only loop over the pixels inside that bounding box.

Calculating the bounding box is straightforward: find the top-left point (smallest x-value and smallest y-value) and the bottom-right point (largest x-value and largest y-value), based on the three vertices. (Hint: use std::max and std::min.) Make sure the two points defining the bounding box do not exceed the screen boundaries: 0 <= xValues <= screenWidth and 0 <= yValues <= screenHeight, in SCREEN space.

Rasterization: (2) Projection Stage

Once all vertices are transformed to screen space, you can apply the rasterization logic from the previous step.

Rasterization: (3) Barycentric Coordinates

Objective: extend the Rasterization Stage with barycentric coordinates. Interpolate the color for each pixel. See the slides for the implementation.

Rasterization: (4) Depth Buffer

Objective: implement a Depth Buffer. Perform a Depth Test to check whether a pixel is visible or covered by a previously rendered primitive (this is part of the Rasterization Stage). Also adjust the render loop to support multiple triangles (each set of 3 vertices defines a separate triangle). Create a float array and initialize each value with the maximum value of a float; you need to do this every frame, before rendering! Also clear the BackBuffer (SDL_FillRect, clearColor = {100, 100, 100}). You will have to interpolate the depth value and perform a depth test for each pixel. Do not forget to also update the Depth Buffer if the Depth Test succeeds! See the slides for the implementation.

Rasterization: (5) BoundingBox Optimization

Objective: once you have your vertices in screen space (pixel coordinates), define a closest-fitting bounding box. Instead of iterating over all the pixels of the screen, only iterate over the pixels defined by the bounding box. Depending on how much screen space each triangle covers, you should see a difference in performance with/without using a bounding box.

GOOD LUCK!
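To close, a hedged sketch of the bounding-box step from (5), together with the per-frame buffer clears from (4); all names are illustrative, and the SDL call is shown as described in the slides:

```cpp
#include <algorithm>
#include <limits>
#include <vector>

struct Vector2 { float x, y; };

struct BoundingBox { int minX, minY, maxX, maxY; };

// Closest-fitting bounding box of a screen-space triangle, clamped so it
// never exceeds the screen boundaries.
BoundingBox ComputeBoundingBox(const Vector2& v0, const Vector2& v1, const Vector2& v2,
                               int screenWidth, int screenHeight)
{
    BoundingBox box{};
    box.minX = std::max(0, static_cast<int>(std::min({ v0.x, v1.x, v2.x })));
    box.minY = std::max(0, static_cast<int>(std::min({ v0.y, v1.y, v2.y })));
    box.maxX = std::min(screenWidth - 1,  static_cast<int>(std::max({ v0.x, v1.x, v2.x })) + 1);
    box.maxY = std::min(screenHeight - 1, static_cast<int>(std::max({ v0.y, v1.y, v2.y })) + 1);
    return box;
}

// Each frame, before rendering: reset the depth buffer and clear the back buffer.
void ClearBuffers(std::vector<float>& depthBuffer /*, SDL_Surface* backBuffer */)
{
    std::fill(depthBuffer.begin(), depthBuffer.end(), std::numeric_limits<float>::max());
    // SDL_FillRect(backBuffer, nullptr,
    //              SDL_MapRGB(backBuffer->format, 100, 100, 100));
}
```

The per-pixel loop then runs over py = box.minY … box.maxY and px = box.minX … box.maxX instead of over the whole screen.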
