CSE-423 Computer Graphics & Visualization Lecture Notes, Fall 2024
Document Details
Egypt-Japan University of Science and Technology
2024
Dr. Reda Elbasiony
Summary
These lecture notes cover computer graphics concepts, focusing on topics like image formation, various camera models (pinhole, synthetic), and OpenGL programming principles, including shader-based graphics and implementation details.
Full Transcript
Department of Computer Science & Information Technology
CSE-423 Computer Graphics & Visualization
Lec 2: Models & Architectures
Instructor: Dr. Reda Elbasiony
Fall 2024

Image Formation
In computer graphics, we form images, which are generally two-dimensional, using a process analogous to how images are formed by physical imaging systems:
- Cameras
- Microscopes
- Telescopes
- The human visual system

Elements of Image Formation
- Objects
- Viewer
- Light source(s)
- Attributes that govern how light interacts with the materials in the scene
Note the independence of the objects, the viewer, and the light source(s).

Pinhole Camera
Use trigonometry to find the projection of a point at (x, y, z) onto the film plane at z = d:
    xp = -x/(z/d)
    yp = -y/(z/d)
    zp = d
These are the equations of simple perspective.
The field of view is the angle made by the largest object that our camera can image on its film plane.

Synthetic Camera Model
[Figure: a projector through a point p intersects the image plane at the projection of p; all projectors pass through the center of projection.]

Image Formation Revisited
Can we mimic the synthetic camera model to design graphics hardware and software?
Application Programmer Interface (API):
- Need only specify objects, materials, the viewer, and lights
But how is the API implemented?
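The simple-perspective equations from the pinhole camera above can be checked numerically. This is a minimal sketch; the function name `pinholeProject` is illustrative and not part of the notes.

```javascript
// Project a point (x, y, z) through a pinhole at the origin onto the
// film plane at z = d, using the simple-perspective equations:
//   xp = -x/(z/d), yp = -y/(z/d), zp = d
function pinholeProject(x, y, z, d) {
  return {
    xp: -x / (z / d),
    yp: -y / (z / d),
    zp: d,
  };
}

// A point at (1, 2, 2) with the film plane at d = 1: x and y are scaled
// by d/z = 0.5 and negated, modeling the inverted pinhole image.
const p = pinholeProject(1, 2, 2, 1);
console.log(p.xp, p.yp, p.zp); // -0.5 -1 1
```

Dividing by z is what makes distant objects project smaller, which is the essence of perspective.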
Physical Approaches
Ray tracing: follow rays of light from the center of projection until they are either absorbed by objects or go off to infinity.
- Can handle global effects: multiple reflections, translucent objects
- Slow
- Must have the whole database of objects available
Radiosity: an energy-based approach.
- Very slow

Practical Approach
Process objects one at a time, in the order they are generated by the application.
Pipeline architecture: application program → pipeline stages → display.
All steps can be implemented in hardware on the graphics card.

Vertex Processing
Much of the work in the pipeline is converting object representations from one coordinate system to another:
- Object coordinates
- Camera (eye) coordinates
- Screen coordinates
Every change of coordinates is equivalent to a matrix transformation.
The vertex processor also computes vertex colors.

Projection
Projection is the process that combines the 3D viewer with the 3D objects to produce the 2D image.
- Perspective projection: all projectors meet at the center of projection
- Parallel projection: projectors are parallel; the center of projection is replaced by a direction of projection

Primitive Assembly
Vertices must be collected into geometric objects before clipping and rasterization can take place:
- Line segments
- Polygons
- Curves and surfaces

Clipping
Just as a real camera cannot "see" the whole world, the virtual camera can only see part of the world or object space.
- Objects that are not within this view volume are said to be clipped out of the scene

Rasterization
If an object is not clipped out, the appropriate pixels in the frame buffer must be assigned colors.
The rasterizer produces a set of fragments for each object. Fragments are "potential pixels":
- They have a location in the frame buffer
- They carry color and depth attributes
Vertex attributes are interpolated over objects by the rasterizer.

Fragment Processing
Fragments are processed to determine the color of the corresponding pixel in the frame buffer.
Colors can be determined by texture mapping or by interpolation of vertex colors.
Fragments may be blocked by other fragments closer to the camera:
- Hidden-surface removal

The Programmer's Interface
The programmer sees the graphics system through a software interface: the Application Programmer Interface (API).

API Contents
Functions that specify what we need to form an image:
- Objects
- Viewer
- Light source(s)
- Materials
Other information:
- Input from devices such as the mouse and keyboard

Object Specification
Most APIs support a limited set of primitives, including:
- Points (0D objects)
- Line segments (1D objects)
- Polygons (2D objects)
- Some curves and surfaces: quadrics, parametric polynomials
All are defined through locations in space, or vertices.

Example (old style)

    glBegin(GL_POLYGON);            /* type of object */
        glVertex3f(0.0, 0.0, 0.0);  /* location of a vertex */
        glVertex3f(0.0, 1.0, 0.0);
        glVertex3f(0.0, 0.0, 1.0);
    glEnd();                        /* end of object definition */

Example (GPU based)
Put the geometric data in an array:

    var points = [
        vec3(0.0, 0.0, 0.0),
        vec3(0.0, 1.0, 0.0),
        vec3(0.0, 0.0, 1.0)
    ];

Send the array to the GPU, then tell the GPU to render it as a triangle.

Camera Specification
Six degrees of freedom:
- Position of the center of the lens
- Orientation
Also specified:
- Lens
- Film size
- Orientation of the film plane

Lights and Materials
Types of lights:
- Point sources vs. distributed sources
- Spot lights
- Near and far sources
- Color properties
Material properties:
- Absorption: color properties
- Scattering: diffuse, specular

Modern OpenGL
Performance is achieved by using the GPU rather than the CPU.
The GPU is controlled through programs called shaders.
The application's job is to send data to the GPU; the GPU does all the rendering.

Immediate Mode Graphics
Geometry is specified by vertices:
- Locations in space (2- or 3-dimensional)
- Points, lines, circles, polygons, curves, surfaces
Immediate mode:
- Each time a vertex is specified in the application, its location is sent to the GPU
- Old style, using glVertex
- Creates a bottleneck between the CPU and the GPU
- Removed from OpenGL 3.1 and OpenGL ES 2.0

Retained Mode Graphics
Put all vertex attribute data in an array and send the array to the GPU to be rendered immediately.
Almost OK, but the problem is that we would have to send the array over each time we need another render of it.
Better: send the array over once and store it on the GPU for multiple renderings.

OpenGL 3.1
Totally shader-based:
- Each application must provide both a vertex shader and a fragment shader
No immediate mode.
Most pre-3.1 fixed-function calls are deprecated.
Backward compatibility is not required, but a compatibility extension exists.

Other Versions
OpenGL ES (for embedded systems):
- Version 1.0: a simplified OpenGL 2.1
- Version 2.0: a simplified OpenGL 3.1; shader-based
WebGL:
- A JavaScript implementation of OpenGL ES 2.0
- Supported in newer browsers
OpenGL 4.1, 4.2, ...:
- Add geometry, tessellation, and compute shaders

OpenGL Architecture

A Simple OpenGL Program
Generate a square on a solid background.

It used to be easy

    #include <GL/glut.h>

    void mydisplay() {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_QUADS);
            glVertex2f(-0.5, -0.5);
            glVertex2f(-0.5, 0.5);
            glVertex2f(0.5, 0.5);
            glVertex2f(0.5, -0.5);
        glEnd();
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);   /* initialize GLUT before other GLUT calls */
        glutCreateWindow("simple");
        glutDisplayFunc(mydisplay);
        glutMainLoop();
    }

What happened?
Most OpenGL functions were deprecated:
- Immediate mode vs. retained mode
- Make use of the GPU
However, the processing loop is the same.

Execution in Browser

Event Loop
Remember that the sample program specifies a render function, which is an event listener (callback) function.
- Every program should have a render callback
- For a static application we need only execute the render function once
- In a dynamic application, the render function can call itself recursively, but each redrawing of the display must be triggered by an event

Lack of Object Orientation
No version of OpenGL is object oriented, so there are multiple functions for a given logical function.
Example: sending values to shaders:
- gl.uniform3f
- gl.uniform2i
- gl.uniform3fv
The underlying storage mode is the same.

WebGL Function Format

    gl.uniform3f(x, y, z)

- gl: the function belongs to the WebGL context obtained from the canvas
- uniform: the function name; 3f: the dimension and type (three floats)
- x, y, z are variables

    gl.uniform3fv(p)

- p is an array

WebGL Constants
Most constants are defined in the WebGL context object obtained from the canvas.
- In desktop OpenGL, they were in #include files such as gl.h
Examples:
- Desktop OpenGL: glEnable(GL_DEPTH_TEST);
- WebGL: gl.enable(gl.DEPTH_TEST);
- gl.clear(gl.COLOR_BUFFER_BIT);

WebGL and GLSL
WebGL requires shaders and is based less on a state-machine model than on a data-flow model.
Most state variables, attributes, and related pre-3.1 OpenGL functions have been deprecated.
The action happens in the shaders; the job of the application is to get the data to the GPU.

GLSL (OpenGL Shading Language)
C-like, with:
- Matrix and vector types (2-, 3-, and 4-dimensional)
- Overloaded operators
- C++-like constructors
Similar to Nvidia's Cg and Microsoft's HLSL.
Code is sent to the shaders as source code; WebGL functions compile, link, and get information to the shaders.
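The vertex-processing stage described earlier treats every change of coordinates as a matrix transformation. A minimal sketch of that idea, assuming row-major 4x4 matrices acting on homogeneous points (illustrative only; OpenGL itself uses a column-major convention):

```javascript
// Apply a 4x4 matrix to a homogeneous point [x, y, z, w]: each change of
// coordinates (object -> eye, eye -> screen, ...) is one such multiply.
function transform(m, v) {
  const out = [0, 0, 0, 0];
  for (let r = 0; r < 4; r++) {
    for (let c = 0; c < 4; c++) {
      out[r] += m[r][c] * v[c];
    }
  }
  return out;
}

// Toy example: a translation by (2, 3, 4) in homogeneous coordinates.
const T = [
  [1, 0, 0, 2],
  [0, 1, 0, 3],
  [0, 0, 1, 4],
  [0, 0, 0, 1],
];
console.log(transform(T, [1, 1, 1, 1])); // -> [3, 4, 5, 1]
```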
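The clipping stage can likewise be sketched with a point-level test against the canonical view volume. This is a simplification: a real clipper also splits primitives that straddle the boundary rather than discarding whole vertices.

```javascript
// A vertex lies inside the canonical view volume when -1 <= x, y, z <= 1;
// geometry wholly outside this volume is clipped out of the scene.
function insideViewVolume([x, y, z]) {
  return [x, y, z].every((c) => c >= -1 && c <= 1);
}

console.log(insideViewVolume([0.5, -0.2, 0.9])); // true
console.log(insideViewVolume([1.5, 0.0, 0.0])); // false
```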
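The retained-mode flow above — build the vertex array once, store it on the GPU, and reuse it for many renderings — can be sketched as follows. The `flatten` helper is written here purely for illustration; the WebGL upload calls are shown in comments because they need a live rendering context.

```javascript
// Retained mode: gather all vertex positions up front, pack them into a
// typed array, and upload once to the GPU for reuse across many frames.
// `flatten` is a small illustrative helper, not part of WebGL itself.
function flatten(points) {
  const out = new Float32Array(points.length * 3);
  points.forEach((p, i) => out.set(p, i * 3));
  return out;
}

const points = [
  [0.0, 0.0, 0.0],
  [0.0, 1.0, 0.0],
  [0.0, 0.0, 1.0],
];

const data = flatten(points);
console.log(data.length); // 9

// With a live WebGL context `gl`, the one-time upload would look like:
//   const buffer = gl.createBuffer();
//   gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
//   gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
//   // ...then gl.drawArrays(gl.TRIANGLES, 0, 3) on every redraw.
```

Storing the buffer on the GPU once is exactly what avoids the per-frame CPU-to-GPU transfer that makes immediate mode a bottleneck.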