
Chapter 4: Windows, Viewports and Clipping Algorithms

Coordinate Representation

In the last chapter we learned how we can use matrices to our advantage by transforming all vertices with transformation matrices. OpenGL expects all the vertices that we want to become visible to be in normalized device coordinates after each vertex shader run. Vertex shaders are fed vertex attribute data, as specified from a vertex array object by a drawing command. A vertex shader receives a single vertex from the vertex stream and generates a single vertex to the output vertex stream; there must be a 1:1 mapping from input vertices to output vertices. The x, y and z coordinates of each output vertex should be between -1.0 and 1.0; coordinates outside this range will not be visible. What we usually do is specify the coordinates in a range (or space) we determine ourselves, and in the vertex shader transform these coordinates to normalized device coordinates (NDC). These NDC are then given to the rasterizer to transform them to 2D coordinates/pixels on your screen.

Transforming coordinates to NDC is usually accomplished in a step-by-step fashion where we transform an object's vertices to several coordinate systems before finally transforming them to NDC. The advantage of transforming them through several intermediate coordinate systems is that some operations/calculations are easier in certain coordinate systems. There are a total of 5 different coordinate systems that are of importance to us:

1. Local space (or object space)
2. World space
3. View space (or eye space)
4. Clip space
5. Screen space

Those are all different states our vertices will be transformed through before finally ending up as fragments. To transform the coordinates from one space to the next we'll use several transformation matrices, of which the most important are the model, view and projection matrices. Our vertex coordinates first start in local space as local coordinates and are then further processed to world coordinates, view coordinates, clip coordinates, and eventually end up as screen coordinates. For example, when modifying your object it makes most sense to do this in local space, while calculating certain operations on the object with respect to the position of other objects makes most sense in world coordinates, and so on.

1. Local coordinates are the coordinates of your object relative to its local origin; they're the coordinates your object begins in.
2. The next step is to transform the local coordinates to world-space coordinates, which are coordinates in respect of a larger world. These coordinates are relative to some global origin of the world, together with many other objects also placed relative to this world's origin.
3. Next we transform the world coordinates to view-space coordinates in such a way that each coordinate is as seen from the camera or viewer's point of view.
4. After the coordinates are in view space we want to project them to clip coordinates. Clip coordinates are processed to the -1.0 to 1.0 range and determine which vertices will end up on the screen. Projection to clip-space coordinates can add perspective if using perspective projection.
5. And lastly we transform the clip coordinates to screen coordinates in a process we call the viewport transform, which transforms the coordinates from -1.0 to 1.0 to the coordinate range defined by glViewport. A minimal code sketch of setting up these transformations is shown below.
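The five steps above map onto a handful of calls in the legacy fixed-function API used by the examples later in this chapter. The following is a minimal sketch, not the only way to do it; all the concrete values (field of view, camera position, object placement, viewport size) are illustrative assumptions:

#include <GL/glut.h>

/* Minimal sketch: setting up the projection, view and model
   transforms plus the viewport in fixed-function OpenGL.
   All concrete values are illustrative assumptions. */
void setupPipeline(void)
{
    /* Projection matrix: view space -> clip space */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0,         /* vertical field of view in degrees */
                   1.0,          /* aspect ratio                      */
                   0.1, 100.0);  /* near and far planes               */

    /* Model-view matrix: fixed-function GL combines the model and
       view matrices into one. gluLookAt supplies the view part... */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,    /* camera position */
              0.0, 0.0, 0.0,    /* point looked at */
              0.0, 1.0, 0.0);   /* up direction    */

    /* ...and the model transform places the object in the world. */
    glTranslatef(1.0f, 0.0f, 0.0f);
    glRotatef(30.0f, 0.0f, 1.0f, 0.0f);

    /* Viewport transform: NDC -> screen coordinates */
    glViewport(0, 0, 600, 600);
}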
The resulting coordinates are then sent to the rasterizer to turn them into fragments.

Local space

Local space is the coordinate space that is local to your object, i.e. where your object begins in. Imagine that you've created your cube in a modeling software package. The origin of your cube is probably at (0,0,0), even though your cube may end up at a different location in your final application. Probably all the models you've created have (0,0,0) as their initial position. All the vertices of your model are therefore in local space: they are all local to your object.

World space

If we would import all our objects directly in the application, they would probably all be positioned somewhere inside each other at the world's origin of (0,0,0), which is not what we want. We want to define a position for each object to place them inside a larger world. The coordinates in world space are exactly what they sound like: the coordinates of all your vertices relative to a (game) world. This is the coordinate space where you want your objects transformed to in such a way that they're all scattered around the place (preferably in a realistic fashion). The coordinates of your object are transformed from local to world space; this is accomplished with the model matrix. The model matrix is a transformation matrix that translates, scales and/or rotates your object to place it in the world at the location/orientation it belongs to. Think of it as transforming a house by scaling it down (it was a bit too large in local space), translating it to a suburbia town and rotating it a bit to the left on the y-axis so that it neatly fits with the neighboring houses.

View space

The view space is what people usually refer to as the camera of OpenGL (it is sometimes also known as camera space or eye space). The view space is the result of transforming your world-space coordinates to coordinates that are in front of the user's view; it is thus the space as seen from the camera's point of view. This is usually accomplished with a combination of translations and rotations to translate/rotate the scene so that certain items are transformed to the front of the camera. These combined transformations are generally stored inside a view matrix that transforms world coordinates to view space.

Clip space

At the end of each vertex shader run, OpenGL expects the coordinates to be within a specific range, and any coordinate that falls outside this range is clipped. Coordinates that are clipped are discarded, so only the remaining coordinates will end up as fragments visible on your screen. This is also where clip space gets its name from. Because specifying all the visible coordinates to be within the range -1.0 to 1.0 isn't really intuitive, we specify our own coordinate set to work in and convert those back to NDC as OpenGL expects them. To transform vertex coordinates from view space to clip space we define a so-called projection matrix that specifies a range of coordinates, e.g. -1000 to 1000 in each dimension. The projection matrix then transforms coordinates within this specified range to normalized device coordinates (-1.0 to 1.0). All coordinates outside this range will not be mapped between -1.0 and 1.0 and will therefore be clipped. With the range specified in this projection matrix, a coordinate of (1250, 500, 750) would not be visible, since its x coordinate is out of range; it gets converted to a coordinate higher than 1.0 in NDC and is therefore clipped.
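To make the example above concrete, here is a minimal sketch, again in the fixed-function style used later in this chapter, of an orthographic projection matrix that specifies a range of -1000 to 1000 in each dimension:

/* Sketch: a projection covering -1000..1000 in each dimension.
   glOrtho maps this box onto the NDC cube. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-1000.0, 1000.0,   /* left, right */
        -1000.0, 1000.0,   /* bottom, top */
        -1000.0, 1000.0);  /* near, far   */

/* The x coordinate of (1250, 500, 750) becomes 1250/1000 = 1.25 in
   NDC, outside the -1.0..1.0 range, so the point is clipped. */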
Note that if only a part of a primitive, e.g. a triangle, is outside the clipping volume, OpenGL will reconstruct the triangle as one or more triangles to fit inside the clipping range. The projection matrix used to transform view coordinates to clip coordinates usually takes one of two different forms, where each form defines its own unique frustum: we can create either an orthographic projection matrix or a perspective projection matrix.

Orthographic projection

An orthographic projection matrix defines a cube-like frustum box that defines the clipping space, where each vertex outside this box is clipped. When creating an orthographic projection matrix we specify the width, height and length of the visible frustum. All the coordinates inside this frustum will end up within the NDC range after being transformed by its matrix and thus won't be clipped. The frustum looks a bit like a container: it defines the visible coordinates and is specified by a width, a height, and a near and far plane. Any coordinate in front of the near plane is clipped, and the same applies to coordinates behind the far plane. The orthographic frustum directly maps all coordinates inside the frustum to normalized device coordinates without any special side effects, since it won't touch the w component of the transformed vector; if the w component remains equal to 1.0, perspective division won't change the coordinates.

Perspective projection

If you were ever to enjoy the graphics the real world has to offer, you'll notice that objects that are farther away appear much smaller. This effect is something we call perspective. Perspective is especially noticeable when looking down an infinite motorway or railway: due to perspective, the lines seem to coincide at a far enough distance. This is exactly the effect perspective projection tries to mimic, and it does so using a perspective projection matrix. The projection matrix maps a given frustum range to clip space, but also manipulates the w value of each vertex coordinate in such a way that the further away a vertex coordinate is from the viewer, the higher this w component becomes. Once the coordinates are transformed to clip space they are in the range -w to w (anything outside this range is clipped). OpenGL requires that the visible coordinates fall between -1.0 and 1.0 as the final vertex shader output; thus, once the coordinates are in clip space, perspective division is applied to the clip-space coordinates: each of the x, y and z components is divided by the w component, giving (x/w, y/w, z/w).

Viewing Pipeline

A world-coordinate area selected for display is called a window. An area on a display device to which a window is mapped is called a viewport. The window defines what is to be viewed; the viewport defines where it is to be displayed. Often, windows and viewports are rectangles in standard position, with the rectangle edges parallel to the coordinate axes. The mapping of a part of a world-coordinate scene to device coordinates is referred to as a viewing transformation. Sometimes the two-dimensional viewing transformation is simply referred to as the window-to-viewport transformation or the windowing transformation; a code sketch of this mapping is given below.

Clipping Area

The clipping area refers to the area that can be seen (i.e., captured by the camera), measured in OpenGL coordinates. The function gluOrtho2D can be used to set the clipping area of a 2D orthographic view. Objects outside the clipping area will be clipped away and cannot be seen.
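The window-to-viewport mapping defined in the Viewing Pipeline section keeps a point's relative position the same in both rectangles. A minimal sketch, with assumed parameter names:

/* Sketch of the windowing transformation: a world-coordinate point
   (xw, yw) inside the window maps to (xv, yv) in the viewport so
   that its relative position in each rectangle is identical. */
void windowToViewport(double xw, double yw,
                      double xwmin, double xwmax,   /* window   */
                      double ywmin, double ywmax,
                      double xvmin, double xvmax,   /* viewport */
                      double yvmin, double yvmax,
                      double *xv, double *yv)
{
    double sx = (xvmax - xvmin) / (xwmax - xwmin);  /* x scale */
    double sy = (yvmax - yvmin) / (ywmax - ywmin);  /* y scale */
    *xv = xvmin + (xw - xwmin) * sx;
    *yv = yvmin + (yw - ywmin) * sy;
}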
To set the clipping area, we need to issue a series of commands as follows: we first select the so-called projection matrix for operation and reset it to the identity matrix. We then choose the 2D orthographic view with the desired clipping area, via gluOrtho2D().

Viewport

The viewport refers to the display area on the window (screen), which is measured in pixels in screen coordinates (excluding the title bar). The clipping area is mapped to the viewport. We can use the glViewport function to configure the viewport:

void glViewport(GLint x, GLint y, GLsizei width, GLsizei height);

The x and y parameters specify the lower-left corner of the viewport within the window, and the width and height parameters specify its dimensions in pixels.

Example: the following code divides the screen into three viewports, each with its own clipping area.

#include <GL/glut.h>

float angle = -20;

void timer(int value)
{
    angle -= 10;
    if (angle < -180)     /* wrap the angle so the rotation continues */
        angle += 360;
    glutPostRedisplay();
    glutTimerFunc(1000, timer, 0);
}

/* Draw a border around the world window (0..50, -10..40) */
void drawBorder(void)
{
    glColor3f(0, 0, 0);
    glLineWidth(10);
    glBegin(GL_LINES);
    glVertex2f(0, 40);   glVertex2f(50, 40);
    glVertex2f(50, 40);  glVertex2f(50, -10);
    glVertex2f(50, -10); glVertex2f(0, -10);
    glVertex2f(0, -10);  glVertex2f(0, 40);
    glEnd();
}

void draw(void)
{
    /* Make background colour yellow */
    glClearColor(1, 1, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);

    /* Sets up FIRST viewport spanning the left-bottom quarter of the
       interface window, then the PROJECTION matrix */
    glViewport(0, 0, 250, 250);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, 50.0, -10.0, 40.0); /* also sets up world window */
    drawBorder();
    glColor3f(1, 0, 0);                 /* draw RED rectangle */
    glRectf(0.0, 0.0, 10.0, 30.0);

    /* SECOND viewport: centre of the window */
    glViewport(250, 250, 250, 250);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, 50.0, -10.0, 40.0);
    drawBorder();
    glColor3f(0, 0, 1);                 /* draw BLUE rectangle, tilted */
    glPushMatrix();
    glRotatef(-30, 0.0f, 0.0f, 1.0f);
    glRectf(0.0, 0.0, 10.0, 30.0);
    glPopMatrix();

    /* THIRD viewport: top-right quarter; rectangle rotated by the timer */
    glViewport(500, 500, 250, 250);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, 50.0, -10.0, 40.0);
    drawBorder();
    glColor3f(0, 0, 0);
    glPushMatrix();
    glRotatef(angle, 0.0f, 0.0f, 1.0f);
    glRectf(0.0, 0.0, 10.0, 30.0);
    glPopMatrix();

    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitWindowSize(750, 750);       /* set window size */
    glutCreateWindow("Three viewports");
    glutDisplayFunc(draw);
    glutTimerFunc(1000, timer, 0);
    glutMainLoop();
    return 0;
}

Zooming and panning effects with windows and viewports

#include <GL/glut.h>
#include <stdlib.h>

float zoomFactor = 1.1;

void myinit(void)
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glColor3f(1.0, 0.0, 0.0);
}

void mouse(int button, int state, int mx, int my)
{
    if (state != GLUT_DOWN)       /* react only to button presses */
        return;
    if (button == GLUT_LEFT_BUTTON)
        zoomFactor += 0.02;       /* zoom in  */
    if (button == GLUT_RIGHT_BUTTON)
        zoomFactor -= 0.02;       /* zoom out */
    glutPostRedisplay();
}

void keyPress(int key, int x, int y)
{
    if (key == 27)
        exit(0);
    if (key == GLUT_KEY_UP)
        zoomFactor += .05;
    if (key == GLUT_KEY_DOWN)
        zoomFactor -= .05;
    glutPostRedisplay();
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glScalef(zoomFactor, zoomFactor, 1.0f); /* zoom by scaling the scene */
    glColor3f(1, 0, 0);
    glBegin(GL_TRIANGLES);
    glVertex2f(0.5, 0);
    glVertex2f(0.0, 0.5);
    glVertex2f(0.0, 0);
    glVertex2f(0, 0);
    glVertex2f(-0.5, 0.0);
    glVertex2f(0.0, -0.5);
    glEnd();
    glPopMatrix();
    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitWindowSize(600, 600);
    glutCreateWindow("GLUT");
    glutDisplayFunc(display);
    myinit();
    glutMouseFunc(mouse);
    glutSpecialFunc(keyPress);
    glutMainLoop();
    return 0;
}

Clipping algorithm

Many graphics application programs give the user the impression of looking through a window at a very large picture. To display an enlarged portion of a picture, we must not only apply the appropriate scaling and translation but also identify the visible parts of the picture. This is not straightforward: certain lines may lie partly inside the visible portion of the picture and partly outside, and we cannot display each of these lines in its entirety. The correct way to select visible information for display is to use clipping, a process which divides each element of the picture into its visible and invisible portions, allowing the invisible portion to be discarded. Clipping can be applied to a variety of different types of picture elements such as points, lines, curves, text characters and polygons.

POINT Clipping

Assuming that the clip window is a rectangle in standard position, we save a point P = (x, y) for display if the following inequalities are satisfied:

    xwmin <= x <= xwmax
    ywmin <= y <= ywmax

where the edges of the clip window (xwmin, xwmax, ywmin, ywmax) can be either the world-coordinate window boundaries or viewport boundaries. If any one of these four inequalities is not satisfied, the point is clipped (not saved for display). Although point clipping is applied less often than line or polygon clipping, some applications may require a point clipping procedure.

LINE CLIPPING

Lines intersecting a rectangular clip region are always clipped to a single line segment. [Figure: examples of lines before and after clipping.]

COHEN-SUTHERLAND LINE CLIPPING ALGORITHM

Lines that are partly invisible are divided by the screen boundary into one or more invisible portions, but into only one visible segment. The visible segment of a straight line can be determined by computing its two endpoints. The Cohen-Sutherland algorithm is designed not only to find these endpoints very rapidly, but also to reject even more rapidly any line that is clearly invisible. This makes it a very good algorithm for clipping pictures that are much larger than the screen.

The algorithm has two parts. The first part determines whether the line lies entirely on the screen, in which case it is accepted, and if not, whether it can be rejected as lying entirely off the screen. If it satisfies neither of these two tests, the line is divided into two parts and the tests are applied to each part. The algorithm depends on the fact that every line is either entirely on the screen or can be divided so that one part can be rejected.

We extend the edges of the screen so that they divide the space occupied by the unclipped picture into 9 regions. Each of these regions has a 4-bit code following the ABRL pattern (Above, Below, Right, Left); the central region is the viewing window itself.
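A minimal sketch of this region coding in C, assuming the ABRL bit order described above (the mask and function names are illustrative):

/* 4-bit region codes, ABRL pattern: Above, Below, Right, Left. */
#define CODE_ABOVE 8   /* 1000 */
#define CODE_BELOW 4   /* 0100 */
#define CODE_RIGHT 2   /* 0010 */
#define CODE_LEFT  1   /* 0001 */

/* Compute the outcode of point (x, y) against the window
   [xmin, xmax] x [ymin, ymax]; 0000 means inside. */
int computeOutcode(double x, double y,
                   double xmin, double ymin, double xmax, double ymax)
{
    int code = 0;
    if      (y > ymax) code |= CODE_ABOVE;
    else if (y < ymin) code |= CODE_BELOW;
    if      (x > xmax) code |= CODE_RIGHT;
    else if (x < xmin) code |= CODE_LEFT;
    return code;
}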
Consider the following example lines and the 4-bit outcodes of their endpoints:

    Line AB: A = 0001, B = 0000
    Line CD: C = 0000, D = 0000
    Line EF: E = 0101, F = 0010
    Line GH: G = 0101, H = 0100

If the logical AND of the two codes is not zero, the line lies entirely off the screen. For example, for line GH the logical AND of the two endpoint codes is 0100; it is not equal to zero, so the line is completely outside the screen. If the 4-bit codes for both endpoints are zero, the line lies entirely on the screen: for line CD, the codes for both endpoints C and D are 0000, so the line lies entirely inside the screen. If both these tests fail, part of the line is inside and part is outside the screen, as with line AB. Such lines are subdivided. A simple method of subdivision is to find the point of intersection of the line with one edge of the screen and to discard the part of the line that lies off the screen. The line AB could be subdivided at C and the portion AC discarded. We now have the line BC, to which we apply the same tests. The line cannot be rejected, so we again subdivide it at a point D; the resulting line BD is entirely on the screen.

The algorithm works as follows. We compute the outcodes of both endpoints and check for trivial acceptance and rejection. If both tests fail, we pick an endpoint that lies outside, test its outcode to find the edge that is crossed, and determine the corresponding intersection point. We clip off the line segment from the outside endpoint to the intersection point by replacing the outside endpoint with the intersection point, and compute the outcode of this new endpoint to prepare for the next iteration.

For example, consider the line segment AD, where point A has outcode 0000 and point D has outcode 1001. The line AD can be neither trivially accepted nor trivially rejected. The algorithm therefore chooses D as the outside point, whose outcode shows that the line crosses the top edge and the left edge. Choosing the top edge of the screen to clip against first, we clip AD to AB and compute B's outcode as 0000. In the next iteration we apply the trivial acceptance/rejection tests to AB, and the line is accepted and displayed.

Consider another line EI, which requires more iterations. The first endpoint E has outcode 0100, so the algorithm chooses E as the outside point and finds that the first edge the line is cut against is the bottom edge, where EI is clipped to FI. In the second iteration, FI can be neither completely accepted nor rejected. The outcode of the first endpoint F is 0000, so the algorithm chooses the outside point I, which has outcode 1010. The first edge clipped against is the top edge, yielding FH; H has outcode 0010. The next iteration results in a clip against the right edge to FG, which is accepted in the final iteration and displayed. A sketch of the complete clipping loop follows.
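Here is a minimal sketch of the clipping loop just described, reusing computeOutcode and the CODE_* masks from the previous sketch; the function signature and the in-place update of the endpoints are assumptions:

/* Clip the segment (x0,y0)-(x1,y1) against the window, updating the
   endpoints in place. Returns 1 if part of the segment is visible,
   0 if it is rejected. */
int cohenSutherlandClip(double *x0, double *y0, double *x1, double *y1,
                        double xmin, double ymin, double xmax, double ymax)
{
    int code0 = computeOutcode(*x0, *y0, xmin, ymin, xmax, ymax);
    int code1 = computeOutcode(*x1, *y1, xmin, ymin, xmax, ymax);

    for (;;) {
        if ((code0 | code1) == 0) return 1;   /* trivial accept */
        if ((code0 & code1) != 0) return 0;   /* trivial reject */

        /* Pick an endpoint that lies outside the window. */
        int codeOut = code0 ? code0 : code1;
        double x, y;

        /* Find the intersection with the edge that is crossed. */
        if (codeOut & CODE_ABOVE) {
            x = *x0 + (*x1 - *x0) * (ymax - *y0) / (*y1 - *y0);
            y = ymax;
        } else if (codeOut & CODE_BELOW) {
            x = *x0 + (*x1 - *x0) * (ymin - *y0) / (*y1 - *y0);
            y = ymin;
        } else if (codeOut & CODE_RIGHT) {
            y = *y0 + (*y1 - *y0) * (xmax - *x0) / (*x1 - *x0);
            x = xmax;
        } else { /* CODE_LEFT */
            y = *y0 + (*y1 - *y0) * (xmin - *x0) / (*x1 - *x0);
            x = xmin;
        }

        /* Replace the outside endpoint and recompute its outcode. */
        if (codeOut == code0) {
            *x0 = x; *y0 = y;
            code0 = computeOutcode(*x0, *y0, xmin, ymin, xmax, ymax);
        } else {
            *x1 = x; *y1 = y;
            code1 = computeOutcode(*x1, *y1, xmin, ymin, xmax, ymax);
        }
    }
}

Each iteration pulls one outside endpoint onto a window edge, so the loop terminates after at most four clips.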
POLYGON CLIPPING

SUTHERLAND AND HODGMAN POLYGON CLIPPING ALGORITHM

The algorithm uses a divide-and-conquer strategy: it solves a series of simple and identical problems that, when combined, solve the entire problem. The simple problem is to clip a polygon against a single infinite clip edge. Four clip edges, each defining one boundary of the clip rectangle, successively clip a polygon against the clip rectangle. There are four possible cases when processing vertices in sequence around the perimeter of a polygon. As each pair of adjacent polygon vertices is passed to a window boundary clipper, we make the following tests:

(1) If the first vertex is outside the window boundary and the second vertex is inside, both the intersection point of the polygon edge with the window boundary and the second vertex are added to the output vertex list.
(2) If both input vertices are inside the window boundary, only the second vertex is added to the output vertex list.
(3) If the first vertex is inside the window boundary and the second vertex is outside, only the edge intersection with the window boundary is added to the output vertex list.
(4) If both input vertices are outside the window boundary, nothing is added to the output list.

Once all vertices have been processed for one clip window boundary, the output list of vertices is clipped against the next window boundary.

Example: we clip each side of a triangle v1 v2 v3 against the left, right, bottom and top boundaries of the clipping window in turn:

1) For the left clipping boundary the output is:
   edge v1 v2  ->  v2        (both inside, case 2)
   edge v2 v3  ->  v2'       (inside to outside, case 3)
   edge v3 v1  ->  v3', v1   (outside to inside, case 1)
2) For the right clipping boundary there is no change in the output.
3) For the bottom clipping boundary the output is:
   edge v1 v2   ->  v1', v2  (outside to inside, case 1)
   edge v2 v2'  ->  v2'      (both inside, case 2)
   edge v2' v3' ->  v2''     (inside to outside, case 3)
   edge v3' v1  ->  nothing  (both outside, case 4)
4) For the top clipping boundary there is no change in the output.

A sketch of a single-boundary clipper implementing the four cases follows.
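The following is a minimal sketch of one window boundary clipper, here for the left boundary x = xmin; the Point type and function name are illustrative assumptions, and the other three boundaries differ only in the inside test and the intersection formula:

typedef struct { double x, y; } Point;

/* Clip polygon in[0..n-1] against the left boundary x = xmin,
   writing the result to out (which should have room for up to 2*n
   points). Returns the number of output vertices. */
int clipAgainstLeft(const Point *in, int n, Point *out, double xmin)
{
    int m = 0;
    for (int i = 0; i < n; i++) {
        Point s = in[i];             /* first vertex of the edge  */
        Point p = in[(i + 1) % n];   /* second vertex of the edge */
        int sInside = (s.x >= xmin);
        int pInside = (p.x >= xmin);

        if (sInside && pInside) {
            out[m++] = p;            /* case 2: keep second vertex */
        } else if (sInside && !pInside) {
            /* case 3: inside to outside, keep only the intersection */
            double t = (xmin - s.x) / (p.x - s.x);
            out[m].x = xmin;
            out[m].y = s.y + t * (p.y - s.y);
            m++;
        } else if (!sInside && pInside) {
            /* case 1: outside to inside, keep intersection and second vertex */
            double t = (xmin - s.x) / (p.x - s.x);
            out[m].x = xmin;
            out[m].y = s.y + t * (p.y - s.y);
            m++;
            out[m++] = p;
        }
        /* case 4: both outside, nothing is added */
    }
    return m;
}

Running the polygon through this clipper and its right, bottom and top counterparts in sequence reproduces the four-stage example above.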
