Tags

computer graphics, 3D projection, viewing transformation, graphics pipeline

Summary

This document is a tutorial on the basic concepts of viewing and projection transformations in computer graphics, illustrated with diagrams and worked examples. It also lists some useful references for graphics development.

Full Transcript


Viewing and Projection

Possibly useful references:
- Computing the pixel coordinates of a 3D point: https://www.scratchapixel.com/lessons/3d-basic-rendering/computing-pixel-coordinates-of-3d-point/mathematics-computing-2d-coordinates-of-3d-points
- Math for computer graphics / games: http://www.essentialmath.com/tutorial.htm

Simplified Graphics Pipeline: Vertex Processing

Model Coordinates -> [Modeling Transformation] -> World Coordinates -> [Viewing Transformation] -> Eye Coordinates -> [Projection Transformation] -> Clip Coordinates -> [Perspective Division] -> Normalized Device Coordinates -> [Viewport Transformation] -> Screen (Window) Coordinates

- The first stage transforms object vertices from modelling coordinates to world coordinates (i.e. it moves objects into the world)
- The second stage transforms object vertices from world coordinates to eye coordinates (also called camera coordinates or viewing coordinates)

OpenGL: Transformation via a Matrix
- OpenGL combines (multiplies) the modelling transform matrix with the viewing transform matrix to form one matrix: ModelView
- For a possible explanation of why, see this link

[Figures: world coordinates and projection plane; eye coordinates in the viewing direction. By SoylentGreen, own work, CC BY-SA 3.0: https://commons.wikimedia.org/w/index.php?curid=3781723 and https://commons.wikimedia.org/w/index.php?curid=3771202]

VIEWING

Viewing Transformation
- Converts points in world coordinates to eye coordinates (also known as camera or viewing coordinates), i.e. it redefines all objects in terms of the camera's coordinate system
- In 2D, think of the camera as a 2D rectangle; everything outside of this "clip rectangle" is clipped away
- In 3D it is a cube or a frustum (i.e. a truncated pyramid)

First, let's look at the sequence of steps in 2D...

Step 1 - Establish a world coordinate system (units and scale are arbitrary)
[Figure: empty world axes xw, yw]

Step 2 - Move objects into the world (modelling transformations)
[Figure: objects placed in the world coordinate system]

Step 3 - Create the camera view (clip rectangle) with gluOrtho2D(l, r, b, t), where l = left, r = right, b = bottom, t = top
[Figure: clip rectangle with edges l, r, b, t drawn in the world; pw = (100, 50) lies inside it]

Step 4 - Viewing transformation: convert from world to eye (camera) coordinates
[Figure: eye axes xeye, yeye at the lower-left corner of the clip rectangle; the same point is peye = (50, 25) in eye coordinates]

Viewing Transformation (simple case)
- How do we get peye from pw?
- Imagine translating the eye coordinate system so that its origin coincides with the world coordinate system origin
- Apply this translation to all points of all objects; they are now in eye coordinates
- Note: here we chose the eye coordinate origin to be at the lower-left corner of the clip rectangle; we could put the origin in a different location and everything would still work

2D Viewing Transformation

peye = T pw. The origin of the camera frame in world coordinates is (50, 25), so translate by (-50, -25):

        | 1  0  -left   |
    T = | 0  1  -bottom |
        | 0  0    1     |

For example:

    | 50 |   | 1  0  -50 | | 100 |
    | 25 | = | 0  1  -25 | |  50 |
    |  1 |   | 0  0    1 | |   1 |

Viewing Transformation: General case - the camera may be tilted!
[Figure: clip rectangle tilted by θ ≈ 26.57 degrees; pworld = (83.54, 69.72) corresponds to peye = (50, 25)]

Viewing Transformation: General Case
1. Translate the camera coordinate system so that its origin coincides with the world origin
2. Then rotate the camera coordinate system so that its axes line up with the world axes
3. Then apply these two transformations to all objects

General 2D Viewing Transformation

peye = R T pw, where

            |  cos θ  sin θ  0 |        | 1  0  -left   |
    R(-θ) = | -sin θ  cos θ  0 |    T = | 0  1  -bottom |
            |    0      0    1 |        | 0  0    1     |

For example:

    | 50 |   |  0.89443  0.44721  0 | | 1  0  -50 | | 83.54 |
    | 25 | = | -0.44721  0.89443  0 | | 0  1  -25 | | 69.72 |
    |  1 |   |     0        0     1 | | 0  0    1 | |   1   |

Viewing Transformation
[Figure: sequence of frames showing the eye coordinate system being translated and then rotated into alignment with the world axes]

Coordinate System Transformations: a simple way to define the view and construct the 2D rotation matrix
- Imagine the user defines the clip (2D camera) window by specifying points p1 and p0, as well as the width of the clip window
- Think of p0 as the position of the camera in world space
- Think of the vector p1 - p0 (i.e. from p0 to p1) as the camera up vector; it specifies the camera orientation
- We can then construct the camera coordinate system; its origin is at p0 = (x0, y0) in world coordinates

Coordinate System Transformations
- Do the translation as before
- Rotation: form u and v, the normalized basis vectors of the camera coordinate system, in terms of world coordinates, and plug them into the rotation matrix
- The u axis is formed by rotating the v axis by -90 degrees

2D Viewing Transformation
- u: camera x axis unit vector; v: camera y axis unit vector
- peye = R T pw, where peye is the point in eye (camera, or view) coordinates
- V = RT, where V is the viewing transformation matrix

Why does this work?
[Figure: the unit up vector v = (vx, vy) makes angle θ with the world axes; its components are the opposite and adjacent sides of a right triangle whose hypotenuse has length 1, with u perpendicular to v]
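The general 2D viewing transformation peye = R(-θ) T pw can be sketched in a few lines of Python. This is an illustrative sketch (not from the slides); the numbers reproduce the tilted-camera example, where the matrix entries 0.89443 and 0.44721 correspond to θ = atan(1/2) ≈ 26.57 degrees:

```python
import math

def mat3_mul(A, B):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat3_vec(A, v):
    """Apply a 3x3 matrix to a homogeneous 2D point (x, y, 1)."""
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def viewing_2d(left, bottom, theta):
    """V = R(-theta) * T: translate the camera origin to the world
    origin, then rotate so the camera axes line up with the world axes."""
    c, s = math.cos(theta), math.sin(theta)
    R = [[ c, s, 0],
         [-s, c, 0],
         [ 0, 0, 1]]
    T = [[1, 0, -left],
         [0, 1, -bottom],
         [0, 0, 1]]
    return mat3_mul(R, T)

# Tilted-camera example from the slides: camera origin at (50, 25),
# rotated by theta = atan(1/2) ~ 26.57 degrees.
V = viewing_2d(50, 25, math.atan2(1, 2))
peye = mat3_vec(V, [83.54, 69.72, 1])
print([round(c, 2) for c in peye])   # [50.0, 25.0, 1]
```

The tiny residual (a point maps to roughly (49.998, 24.999) rather than exactly (50, 25)) comes from the slide's rounded values 83.54 and 69.72.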
Why does this work? (optional)
- a, b are the eye coordinates of a point p; x, y are the world coordinates of the same point: peye = (a, b), pw = (x, y)
[Figure: the point p shown with respect to both the world axes and the camera's u, v axes]

Aside: Converting from one coordinate system to another
- p expressed in world coordinates (i, j basis vectors)
- p expressed in eye (viewing) coordinates (u, v basis vectors)
- p expressed in world coordinates but using (u, v) instead of (i, j), where u and v are expressed in terms of world coordinates (x, y)

2D Viewing Transformation
- Full matrix expansion of peye = (a, b) = V pw = R T pw: the components are a = u · (pw - p0) and b = v · (pw - p0)
- To transform the other way (eye to world coordinates): pw = V^-1 peye, where V^-1 = T^-1 R^-1 = T^-1 R^T (the inverse of a rotation matrix is its transpose)

3D Viewing and Projection

Reminder: The Camera Analogy
1. Compose and arrange the scene -> modeling transformations
2. Set up the tripod and point the camera at the scene -> viewing transformation
3. Choose a camera lens or adjust the zoom -> projection transformation
4. Determine the size of the final photograph -> viewport transformation

Set up the 3D camera: OpenGL
- gluLookAt(x0, y0, z0, xref, yref, zref, Vx, Vy, Vz)
- P0 = (x0, y0, z0): the "look from" point
- Pref = (xref, yref, zref): the "look at" point
- (Vx, Vy, Vz): the camera "up" vector (i.e. its orientation)
[Figure: setting up the camera in the scene]

From the gluLookAt parameters, OpenGL constructs the eye (camera) coordinate system, looking along the negative zeye axis.

Then use gluPerspective or glFrustum to define the view volume (i.e. "adjusting the lens"), or use glOrtho(left, right, bottom, top, near, far).

3D Viewing Transformation
- Transforms world coordinates to eye coordinates (also known as camera or view coordinates)
- Similar to the 2D case: redefine all objects in terms of the eye coordinate system
- Do this by finding a translation and rotation that aligns the eye coordinate system with the world coordinate system, so that the camera is at the origin looking down the -z axis
- Apply this transformation to all vertices of all objects

3D Graphics Pipeline

3D Model Coordinates -> [Modeling Transformation] -> 3D World Coordinates -> [Viewing Transformation] -> 3D Eye Coordinates -> [Projection Transformation] -> Clip Coordinates (4D; clipping applied) -> [Perspective Division] -> Normalized Device Coordinates -> [Viewport Transformation] -> 2D (float) Screen Coordinates (x, y), plus a modified z coordinate, colour, etc. -> Rasterization -> Fragment Processing

Viewing Transformation: World Space to Camera Space (a.k.a. Eye or View Space)
[Figures: view coordinates and projection plane; eye coordinates in the viewing direction. By SoylentGreen, own work, CC BY-SA 3.0: https://commons.wikimedia.org/w/index.php?curid=3781723 and https://commons.wikimedia.org/w/index.php?curid=3771202]

Viewing Transformation: a view from above
[Figure: the scene and camera viewed from above, looking down]

After the Viewing Transformation has been applied (Perspective Case)
[Figure: the view frustum with planes l = left, r = right, b = bottom, t = top, n = near, f = far]
NOTE: after the viewing transformation the camera is at the origin, looking down the negative z axis!

After the Viewing Transformation (Orthographic Case)
[Figure: the rectangular view volume with planes l = left, r = right, b = bottom, t = top, n = near, f = far]

Viewing Transformation: Steps
- We first need to construct an eye/camera coordinate system
- We specify in our graphics program (e.g. using gluLookAt()):
  1. a camera/eye position P0 (in world coordinates)
  2. a "look at" point Pref (in world coordinates)
  3. a view up vector Vup (camera orientation)
- How does OpenGL construct a camera coordinate system from this information?

Viewing Transformation: Constructing the Camera Coordinate System
- OpenGL begins by constructing the view-plane normal vector N = P0 - Pref, normalized to n̂
- Think of it as the camera's (positive) z axis
- Note: lowercase denotes unit vectors (i.e. length 1.0)
[Figure: constructing the camera coordinate system from P0, Pref, and Vup]

From the gluLookAt parameters, OpenGL constructs the eye (camera) coordinate system: +zeye points from Pref toward P0, and the camera looks along the negative zeye axis.

Camera Orientation
- The view up vector determines the camera orientation
- Often we choose the world y axis, (0, 1, 0), in gluLookAt()
- We can choose almost anything, but it must not be parallel to the n axis (why? because the cross product below would then be the zero vector, leaving the camera's x axis undefined)
- It does not need to be exactly orthogonal to the n axis initially; we will fix it later to be orthogonal
- The up vector becomes the camera's initial y axis

Viewing Transformation
- Having calculated n̂, and given the initial Vup (from gluLookAt()), we can use the cross product U = Vup × n̂ to get the U vector (then normalize it to û)
- This is the camera's x axis
- We can now fix the initial Vup to make it orthogonal to n̂ and û: v̂ = n̂ × û (already unit length)
- We have constructed the camera (eye) coordinate system - this is what OpenGL does!

From the gluLookAt parameters, OpenGL constructs the eye (camera) coordinate system, looking along the negative zeye axis.

3D Viewing Transformation: World to Eye (Camera) Coordinates
1. Translate the camera coordinate origin p0 = (x0, y0, z0) to the origin of the world coordinate system
2. Apply rotations to align the xeye, yeye, zeye axes with the world xw, yw, zw axes

    peye = R T pw, where V = RT
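The camera-basis construction described above (n̂ from P0 and Pref, û from the cross product with Vup, then a corrected v̂) can be sketched in a few lines of Python. This is a minimal sketch of what gluLookAt does internally, not OpenGL's actual source:

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def camera_basis(p0, pref, vup):
    """Construct the eye coordinate system as the slides describe:
    n points from the look-at point back toward the camera (+z_eye),
    u = Vup x n is the camera x axis, v = n x u is the corrected up."""
    n = normalize([p0[i] - pref[i] for i in range(3)])
    u = normalize(cross(vup, n))
    v = cross(n, u)   # already unit length, since n and u are orthonormal
    return u, v, n

# Camera at (0, 0, 10) looking at the origin with world y as up: the
# eye axes simply coincide with the world axes here.
u, v, n = camera_basis([0, 0, 10], [0, 0, 0], [0, 1, 0])
print(u, v, n)   # [1.0, 0.0, 0.0] [0.0, 1.0, 0.0] [0.0, 0.0, 1.0]
```

Note how a Vup parallel to n would make `cross(vup, n)` the zero vector, which is exactly why the slides forbid it.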
OpenGL Model-View Matrix
- Recall that we use a sequence of modeling transformations (translate, rotate, scale) to move objects into the world from their own modelling/object (m) coordinate system: pw = M pm
- These are multiplied together by OpenGL into a single matrix M
- The viewing transformation matrix V is constructed and transforms world coordinates into eye (or camera) coordinates: peye = V pw
- Therefore peye = V M pm; OpenGL calls this VM matrix the Model-View matrix
- You should call gluLookAt before any modelling transformations! (OpenGL multiplies on the right)

Projection

SEE: http://ogldev.atspace.co.uk/www/tutorial12/tutorial12.html

[The 3D graphics pipeline diagram is repeated here: model -> world -> eye -> clip -> normalized device -> screen coordinates]

Projection
- 3D object points must ultimately be projected onto a 2D view plane
- Parallel projection: 3D points are transferred to the view plane along parallel lines (e.g. orthographic)
- Perspective projection: 3D points are transferred to the view plane along lines that converge to a point (the Center of Projection, COP) behind the view plane
[Figure: projection lines converging to the Center of Projection (COP)]

Parallel Projection: Orthographic, Oblique
- Preserves the relative proportions of objects
- The size of the projected object is independent of its distance to the view plane (it always projects to the same size)
- Projecting along lines perpendicular to the view plane: orthographic
- Projecting along lines at an oblique angle to the view plane: oblique

Orthographic Projection Example
[Figure: an object and its orthographic projection]

Orthographic Projection
- Like moving the camera to infinity

Perspective Projection
- Does not preserve the relative proportions of objects
- The size of the projected object depends on its distance to the view plane (farther-away objects appear smaller)
- This is important because it approximates the projection formed by the real world on our retinas
- It exhibits our expectation of foreshortening, i.e. a distant object is displayed smaller than a nearer one of the same size
- It provides an important depth cue to the brain
[Figure: perspective projection of an object onto the view plane]

Projection as a Matrix Transformation
- We talked about geometric transformations, focusing on modelling transformations, e.g. translation, rotation, and scale
- We also talked about the viewing transformation, e.g. gluLookAt()
- Together these transformations make up the OpenGL ModelView matrix
- We can also express projection as a matrix!
- Let's now see how we can represent orthographic and perspective projection with a projection matrix

Orthographic Projection Matrix
- Projection lines are perpendicular to the view (projection) plane
- (x', y') is the point (x, y) projected onto the view plane
- Note: the projected point is independent of the z component of the object! No matter what the z coordinate is, x' and y' are always the same

OpenGL Orthographic Projection Matrix
- We could use this matrix P to get a projected image on the view plane (3D to 2D)
- Are we ready to rasterize? What's wrong with matrix P?
- P does not preserve depth information, which is needed for hidden surface removal (i.e. deciding what is in front of what)
- We must redesign P to:
  1. preserve depth
  2. allow for easy clipping, i.e. normalize the view volume to a canonical (standard) view volume

How does OpenGL do Orthographic?

OpenGL Orthographic Projection Matrix
- glOrtho(xmin, xmax, ymin, ymax, zmin, zmax), or glOrtho(left, right, bottom, top, near, far)
- glOrtho defines a view volume and creates a projection matrix P; the current transformation matrix (CTM) is then multiplied by P
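The matrix that glOrtho builds (a translate, a scale, and a z reflection, as derived in the normalization steps that follow) can be written out and sanity-checked directly. A Python sketch, with n and f given as positive distances along the -z viewing direction:

```python
def ortho(l, r, b, t, n, f):
    """Orthographic projection matrix in the form built by
    glOrtho(l, r, b, t, n, f): translate the view-volume centre to the
    origin, scale to a 2x2x2 cube, and reflect z into left-handed NDC."""
    return [[2 / (r - l), 0, 0, -(r + l) / (r - l)],
            [0, 2 / (t - b), 0, -(t + b) / (t - b)],
            [0, 0, -2 / (f - n), -(f + n) / (f - n)],
            [0, 0, 0, 1]]

def mat4_vec(A, v):
    """Apply a 4x4 matrix to a homogeneous point (x, y, z, w)."""
    return [sum(A[i][k] * v[k] for k in range(4)) for i in range(4)]

P = ortho(-2, 2, -1, 1, 1, 3)
# n and f are distances along -z_eye, so the near corner sits at
# z_eye = -n and the far corner at z_eye = -f.
near_corner = mat4_vec(P, [-2, -1, -1, 1])
far_corner = mat4_vec(P, [2, 1, -3, 1])
print(near_corner)   # [-1.0, -1.0, -1.0, 1]
print(far_corner)    # [1.0, 1.0, 1.0, 1]
```

The two opposite corners of the view volume land exactly on the corners of the canonical cube, and w stays 1: for a parallel projection the later divide-by-w changes nothing.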
View Volume Normalization
- The simplest clipping volume is a cube centred at the origin with -1 < x, y, z < 1
- This volume is called the canonical view volume and is defined in a left-handed coordinate system
- This allows positive distances in the viewing direction to be interpreted as distances away from the view/projection plane (the near plane)
- Design a projection matrix P that transforms the rectangular view volume above into this cube, and apply P to all objects

View Volume Normalization
[Figure: the rectangular view volume with its near and far planes; l = left, r = right, b = bottom, t = top, n = near, f = far]

OpenGL Orthographic Projection Matrix: View Volume Normalization
1. First, translate the centre of the view volume to the origin
2. Then scale it to the size of the standard cube
3. Then reflect to get a left-handed coordinate system

In matrix terms (r = right, l = left, t = top, b = bottom, n = near, f = far):
1. T translates the centre of the view volume to the origin
2. S scales the view volume so that each edge has length 2
3. Sreflection reflects in the x-y plane: all z values that lie on the negative z axis become positive, i.e. it converts to a left-handed system

OpenGL Orthographic Projection Matrix

    pw = M pm
    peye = V pw = V M pm
    pclip = P peye = P V M pm

Projection matrix P = Sreflection S T. This 4x4 parallel projection matrix:
- preserves depth (z)
- normalizes the view volume to a 2 x 2 x 2 cube for easy clipping

Exercise 1: Orthographic Projection Matrix
- Write out the matrices T, S, and Sreflection described above
- Multiply them to derive the OpenGL orthographic projection matrix P

Perspective Projection

Perspective Projection: Homogeneous Coordinates
- Recall that homogeneous coordinates represent 3D coordinates using 4 dimensions: the homogeneous point (x, y, z, w) corresponds to the 3D point (x/w, y/w, z/w)
- Typically w = 1 in model/object coordinates

More On Homogeneous Coordinates
- What effect does this matrix have on the point (x, y, z)?
x' 1 0 0 0x y' 0 1 0   0y  = z' 0 0 1 0z      w' 0 0 0 w 10  ⚫ Conceptually w acts like a scale factor Homogeneous Coordinates In particular, increasing w makes things smaller We think of homogeneous coordinates as defining a projective space Increasing w “getting further away” Will come in handy for perspective projection matrix: Homogeneous coordinates will allow us to capture perspective projection using matrix multiplication Perspective Projection In the real world, objects exhibit perspective foreshortening: distant objects appear smaller The basic situation: Perspective Projection  COP also called Projection Reference Point  Projection Plane also called view plane Perspective Projection When we do 3-D graphics, we think of the screen as a 2-D window onto the 3-D world How Tall Should this bunny be? 83 Perspective Projection The geometry of the situation is that of similar triangles. View from above: (ignore y-axis for now) X View (or projection) plane p (x, y, z) (0,0,0) x’ = ? Z d What is x’ (i.e. x coordinate projected onto projection plane)??? Use ratios of similar triangles Perspective Projection The geometry of the situation is that of similar triangles. View from above: (ignore y-axis for now) X View (or projection) plane p (x, y, z) (0,0,0) x’ = ? Z d What is x’ (i.e. x coordinate projected onto projection plane)??? Use ratios of similar triangles Perspective Projection The geometry of the situation is that of similar triangles. View from above: (ignore y-axis for now) X View (or projection) plane p (x, y, z) x (0,0,0) Z z What is x’ (i.e. x coordinate projected onto projection plane)??? 
Perspective Projection

Desired result for a point [x, y, z, 1]^T projected onto the view plane (using similar triangles):

    x'/d = x/z,    y'/d = y/z

Rewrite to isolate x' and y':

    x' = dx/z = x/(z/d),    y' = dy/z = y/(z/d),    z' = d

Exercise 2: Perspective Projection Equations
- Make sure you understand how similar triangles are used to derive the equations above
- First derive x', then repeat and derive y'
- z' just projects to d in our example. Note that we arbitrarily placed the projection plane at z = d; we could have placed it anywhere along the z axis

Perspective Projection As a Matrix
- What could a matrix look like to perform these three equations on a point (x, y, z, 1)?

A Simple Perspective Projection Matrix

An answer:

                   | 1  0   0   0 |
    Pperspective = | 0  1   0   0 |
                   | 0  0   1   0 |
                   | 0  0  1/d  0 |

Example:

    | x   |   | 1  0   0   0 | | x |
    | y   | = | 0  1   0   0 | | y |
    | z   |   | 0  0   1   0 | | z |
    | z/d |   | 0  0  1/d  0 | | 1 |

Or, in 3D Euclidean space (divide by the 4th component w):

    ( x/(z/d), y/(z/d), d )

Is this exactly what we want?

NOTE: Perspective Projection
- Using homogeneous coordinates allows us to capture perspective using matrix multiplication
- Note that this is a transformation, **not a projection**
- A true projection transformation reduces dimensionality (e.g. 3D to 2D)
- However, our perspective projection transformation matrix transforms 4D points into 4D points
- The perspective transformation matrix in essence "prepares" a point for projection; it does not actually perform the projection
- Then where does the actual projection from 3D to 2D happen? Further along the pipeline: after normalization, after clipping, and after perspective division (i.e. division by w)

OpenGL Perspective Projection Matrix
- As in the orthographic case, we need to redesign matrix P to allow for simple clipping
- Normalize the view volume (now a truncated pyramid, a.k.a. a frustum) to the same canonical view volume as before (a 2x2x2 cube)
- Use the w component to store some sort of z value (i.e. a depth value) of a point - see perspective division

Perspective Projection Matrix
- As before, we want to transform the view frustum (truncated pyramid) into a canonical view volume (i.e. a cube), then form the projected 2D image as in orthographic projection

OpenGL Perspective Projection Functions
- glFrustum(xmin, xmax, ymin, ymax, zmin, zmax), or glFrustum(left, right, bottom, top, near, far)
- Or gluPerspective

A Possible Perspective Projection Matrix

Let's try:

        | 1  0   0  0 |
    N = | 0  1   0  0 |
        | 0  0   α  β |
        | 0  0  -1  0 |

Multiply this matrix by the point (x, y, z, 1) (try it!) and then divide each component by the w value (i.e. -z). This is called perspective division, and the point (x, y, z, 1) goes to:

    x' = -x/z,    y' = -y/z,    z' = -(α + β/z)

which projects orthogonally to the desired point regardless of the values of α and β. That is, x' and y' are what we want, AND we also keep some sort of z (i.e. depth) value based on the original z.

Picking α and β

If we pick

    α = -(f + n)/(f - n),    β = -2fn/(f - n)

then:
- the near plane of the view volume is mapped to z = -1
- the far plane of the view volume is mapped to z = +1
- the sides of the view volume are mapped to x = ±1, y = ±1

Hence the truncated pyramid becomes the canonical view volume, i.e. a cube!

Perspective Projection Matrix
- It is useful to think of the projection transformation matrix as causing a warping of 3D space
- It preserves straightness and flatness: lines transform into lines, planes into planes, etc.
- It also preserves "in-between-ness": if a point is inside an object, the transformed point will also be inside the transformed object
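The candidate matrix N and the choice of α and β above can be checked numerically. A Python sketch using n = 1 and f = 3 (illustrative values):

```python
def perspective_n(n, f):
    """The candidate matrix N from the slides, with alpha and beta chosen
    so that z_eye = -n maps to z_ndc = -1 and z_eye = -f maps to +1."""
    alpha = -(f + n) / (f - n)
    beta = -2 * f * n / (f - n)
    return [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, alpha, beta],
            [0, 0, -1, 0]]

def transform_and_divide(A, p):
    """Multiply by the matrix, then perform the perspective division by w."""
    q = [sum(A[i][k] * p[k] for k in range(4)) for i in range(4)]
    return [c / q[3] for c in q[:3]]

N = perspective_n(1, 3)
near = transform_and_divide(N, [0.5, -0.5, -1, 1])   # point on near plane
far = transform_and_divide(N, [1.5, -1.5, -3, 1])    # point on far plane
print(near)   # [0.5, -0.5, -1.0]
print(far)    # [0.5, -0.5, 1.0]
```

Both points lie on the same line through the origin, so their x' and y' agree after the divide; only the preserved depth z' differs, hitting exactly -1 at the near plane and +1 at the far plane.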
Before the projection transformation we have our blue objects in camera coordinates, and the red shape represents the frustum of the camera: the part of the scene that the camera is actually able to see. Multiplying everything by the perspective projection matrix and performing the perspective division has the following effect: the frustum is now a perfect cube (between -1 and 1 on all axes), and all the blue objects have been deformed in the same way. Thus the objects that are near the camera (i.e. near the face of the cube that we cannot see) are big, and the others are smaller.

Perspective Projection: Aspect Ratio
- View volume normalization will convert (or warp) all objects into intermediate shapes (see the description above)
- The truncated pyramid may have been specified with an aspect ratio > 1 or < 1, but the canonical cube has an aspect ratio of exactly 1
- This warping introduces some x, y distortion, but the distortion will be eliminated in the viewport transformation (i.e. screen mapping)
- That is, the canonical view volume has an aspect ratio of 1, but if the viewport has the same aspect ratio as the truncated-pyramid view volume, then the x, y distortion is eliminated!
- After clipping is performed, the 4D homogeneous coordinates (x, y, z, w) are divided by the parameter w to obtain the true 3D projection coordinate positions x' and y', as well as a depth coordinate z'
- This is called **perspective division** (it is 4D to 3D)

How does OpenGL do Perspective Projection?

Summary: OpenGL Perspective Projection Matrix
The general (possibly asymmetric) frustum view volume is converted (normalized) to the canonical cube in 3 steps:
1. Shear to create a symmetric frustum and center the frustum centerline along the z axis (H is the shear matrix)
2. Depth-dependent scaling to normalize it to a 2x2x2 cube (S is a scale matrix similar to the orthographic case)
3. z translation to center the frustum at the origin, and z reflection (part of matrix N, the matrix defined above)

OpenGL Perspective Projection Matrix

    P = N S H

- The normalization of the frustum (view volume) uses an initial shear H to form a right-angled viewing pyramid
- This is followed by scaling S to get the normalized perspective volume (i.e. 2x2x2, so that x, y, and z all lie between -1 and +1)
- Finally, the perspective matrix needs only a final orthogonal transformation N

OpenGL Perspective Projection Matrix
- Reverses the z coordinate, just like glOrtho
- Puts the reversed z value into w, so objects will be scaled by it when the perspective divide happens; compare this to the orthographic matrix above

Note: Projection from 3D to 2D
- Using homogeneous coordinates allows us to implement perspective projection using matrix multiplication
- Note that this does not convert (project) from 3D to 2D; the perspective transformation transforms 4D points into 4D points!
- The perspective transformation matrix prepares a point for projection; it does not actually perform the projection
- Then where does the projection from 3D to 2D happen? Further along the pipeline, after clipping and after perspective division (division by w)

3D Graphics Pipeline So Far
- We use a sequence of modelling transformations M (translate, rotate, scale, etc.) to move objects into the world from their own model coordinate system: pw = M pm
- The viewing transformation matrix V transforms world coordinates into eye (camera) coordinates: peye = V pw
- The projection transformation matrix P performs a perspective (or orthographic) projection: pclip = P peye
- Putting it all together: pclip = P V M pm

[The 3D graphics pipeline diagram is repeated here, highlighting perspective division]

Clipping and Perspective Division
- After the projection transformation, clipping is performed at the boundaries of the canonical cube
- After clipping, the homogeneous coordinates (x, y, z, w) are divided by the parameter w
- Recall that (x, y, z, w) and (x/w, y/w, z/w, 1) are equivalent points in homogeneous coordinates
- This gives the true perspective projection coordinate positions x' = x/w and y' = y/w, and a depth coordinate z' = z/w
- This is called perspective division (it converts 4D to 3D)

After Perspective Division
- After perspective division, the true projection coordinate positions are equivalent to orthographic projection coordinates, because we warped to a canonical view volume (a cube)
- We can proceed exactly as in the orthographic case. That is:
  1. use the x', y' positions for the viewport transformation and then rasterization
  2. store the z' coordinate in the depth buffer for hidden surface removal

OpenGL Perspective Projection Matrix

    pw = M pm
    peye = V pw = V M pm
    pclip = P peye = P V M pm

Perspective Projection Matrix
- Reverses the z coordinate, just like glOrtho
- Puts the reversed z value into w, so objects will be scaled by it when the perspective divide happens; compare this to the orthographic matrix above
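The whole chain pclip = P V M pm followed by perspective division can be sketched end to end. M and V below are hypothetical stand-in transforms (simple translations chosen for the example), not values from the slides:

```python
def mat4_mul(A, B):
    """4x4 matrix product on nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat4_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(4)) for i in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def perspective(n, f):
    """The matrix N from the slides: alpha and beta in the z row,
    -1 in the w row to capture -z for the later divide."""
    a = -(f + n) / (f - n)
    b = -2 * f * n / (f - n)
    return [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, a, b],
            [0, 0, -1, 0]]

M = translate(0, 0, 5)     # hypothetical modelling transform: model -> world
V = translate(0, 0, -7)    # hypothetical viewing transform: world -> eye
P = perspective(1, 3)

PVM = mat4_mul(P, mat4_mul(V, M))        # combine once, reuse per vertex
pclip = mat4_vec(PVM, [0.5, -0.5, 0, 1]) # a model-space point
ndc = [c / pclip[3] for c in pclip[:3]]  # perspective division
print(ndc)   # [0.25, -0.25, 0.5]
```

The point ends up at z_eye = -2, between the near and far planes, so its normalized device coordinates fall inside the canonical cube as expected.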
The Depth Buffer and Z Resolution
- The depth buffer (or Z buffer) is typically implemented in GPU hardware
- It stores a Z value for each screen pixel to efficiently decide "what's in front"
- Each Z value has a certain number of bits, e.g. 24
- Z resolution can be poor because the number of bits is limited and the values stored are non-linear
- This may result in visual artifacts where objects intersect or are close in Z, such as jagged lines, flickering, colour bleeding, and "stitching"
- See the example program ZResolutionDemo

The Depth Buffer and Z Resolution
- Why are the z values non-linear? The z coordinate was mapped into the range (-1, 1) and then divided by its old value (via the perspective division by w) - a non-linear operation (think of the graph of 1/z)
- We lose resolution as the near plane and far plane separate
- A big change in zeye (eye coordinates) results in a small change in zndc (normalized device coordinates)
- There may not be enough resolution to distinguish overlapping or intersecting objects

Note: Z Resolution
- We did a linear (matrix) transform on z (shear, translate, scale) and then divided it by its old value using w (perspective division) - a non-linear operation: z' = -(α + β/z)
- Therefore the resulting z values are no longer linear in depth
- We lose resolution as the near plane and far plane separate

[The 3D graphics pipeline diagram is repeated here, highlighting the viewport transformation]

Viewport Transformation

OpenGL Viewport Transformation: Mapping Points to the Screen
- The viewport transformation actually operates on all three components: you input the point (x', y', z') (in NDC) and the output is a point (x'', y'', z'')
- The viewport transformation:
  1. scales and translates the x', y' values so that they are placed properly in the viewport (i.e. the "photograph", or screen window)
  2. makes minor adjustments to the z' component to make it more suitable for depth testing (it maps z' (pseudo-depth) from [-1, 1] to [0, 1])
- As we have seen, the perspective transformation matrix "squashes" the view volume into a canonical cube, introducing x-y distortion. Typically the aspect ratio of the view volume is the same as that of the viewport, so the viewport transformation will undo this distortion

Summary

Complete summary of 3D transformations in the OpenGL graphics pipeline:
1. A point p = (x, y, z) is extended to a homogeneous coordinate by appending a 1
2. This homogeneous (4D) point (x, y, z, 1) is multiplied by the ModelView matrix, producing a 4D point in eye (camera) coordinates
3. The point is then multiplied by the projection matrix, producing a normalized 4D point in homogeneous clip coordinates (x, y, z, w)
4. Clipping is then done on the canonical view volume
5. Perspective division is performed (divide by w), resulting in a 3D point in normalized device coordinates
6. This 3D point is then multiplied by the viewport transformation matrix
- The resulting x, y point is eventually displayed in screen coordinates
- A transformed non-linear z (depth) value is used for hidden surface removal, eventually stored in the depth buffer (after rasterization)
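The viewport step in the summary above can be sketched as a simple scale-and-translate. An illustrative Python sketch of the mapping that glViewport plus glDepthRange set up (the window origin and sizes are hypothetical, and the window origin is assumed at the lower-left):

```python
def viewport(x, y, width, height):
    """Return a function mapping NDC x, y in [-1, 1] into an
    x/y/width/height window rectangle, and pseudo-depth z from
    [-1, 1] to [0, 1], as the viewport transformation does."""
    def apply(ndc):
        xn, yn, zn = ndc
        return (x + (xn + 1) * width / 2,
                y + (yn + 1) * height / 2,
                (zn + 1) / 2)
    return apply

to_screen = viewport(0, 0, 800, 600)
centre = to_screen((0.0, 0.0, 0.0))
corner = to_screen((-1.0, 1.0, -1.0))
print(centre)   # (400.0, 300.0, 0.5) - centre of the window, mid depth
print(corner)   # (0.0, 600.0, 0.0)  - top-left edge, on the near plane
```

Note that if the 800x600 viewport has the same 4:3 aspect ratio as the original view volume, this step undoes the x-y distortion introduced when the volume was squashed into the canonical cube.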
