Introduction to Computer Graphics


Summary

This document provides an introduction to computer graphics, covering core concepts such as modeling, rendering, and animation. It explains how computer graphics represents images and colors using RGB values and pixel arrays. It also defines buffers and explains how they are used in computer graphics.

Full Transcript


IT2202 Introduction to Computer Graphics

Core Concepts

The term computer graphics describes the use of computers to create and manipulate images. Below are the major areas of computer graphics (Marschner & Shirley, 2021):

Modeling – deals with the mathematical specification of shape and appearance properties in a way that can be stored on the computer.
Rendering – deals with the creation of shaded images from 3D computer models.
Animation – a technique to create an illusion of motion through a sequence of images.

Rendering a scene produces a raster. A raster is an array of pixels (picture elements) displayed on a screen, arranged in a grid with two (2) dimensions. Pixels specify colors using triples of floating-point numbers between 0 and 1, which represent the amount of red, green, and blue light in a color; a value of 0 indicates that none of that color component is present, while a value of 1 indicates that the component is displayed at full intensity. The following are various colors and their corresponding RGB values:

Color    R     G     B
Red      1     0     0
Orange   1     0.5   0
Yellow   1     1     0
Green    0     1     0
Blue     0     0     1
Violet   0.5   0     1
Black    0     0     0
White    1     1     1
Gray     0.5   0.5   0.5
Brown    0.5   0.2   0
Pink     1     0.5   0.5
Cyan     0     1     1

The quality of an image depends partly on its resolution and precision. Resolution is the number of pixels in the raster, while precision is the number of bits used for each pixel.

A buffer (or data buffer, or buffer memory) is a part of a computer's memory that serves as temporary storage for data while it is being moved from one location to another. Pixel data is stored in a region of memory called the framebuffer. A framebuffer may contain multiple buffers that store different types of data for each pixel. At a minimum, the framebuffer must contain a color buffer, which stores RGB values. When rendering a 3D scene, the framebuffer must also contain a depth buffer. A depth buffer stores distances from points on scene objects to the virtual camera. Depth values determine whether an object's points are in front of or behind other objects (from the camera's perspective). Thus, these values can also determine whether the points on each object will be visible when the scene is rendered. Finally, framebuffers may contain a buffer called a stencil buffer, which may be used to store values used in generating advanced effects, such as shadows, reflections, or portal rendering.

Aside from rendering three-dimensional scenes, another goal in computer graphics is creating animated scenes. Animations consist of a sequence of images displayed in quick succession. Each displayed image is called a frame. The speed or rate at which these images appear is called the frame rate and is measured in frames per second (FPS).

The Graphics Processing Unit (GPU) features a highly parallel structure that makes it more efficient than CPUs for rendering computer graphics. Programs run by GPUs are called shaders. These are used to perform many of the different computations required in the rendering process. Shader programming languages implement an application programming interface (API), which defines a set of commands, functions, and protocols that can be used in interacting with an external system such as the GPU. Below are some APIs and their corresponding shader languages:

DirectX API & High-Level Shading Language (HLSL) – used on Microsoft platforms, including the Xbox game console
Metal API & Metal Shading Language – runs on modern Mac computers, iPhones, and iPads
OpenGL (Open Graphics Library) API & OpenGL Shading Language (GLSL) – a cross-platform library; OpenGL is the most widely adopted graphics API
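The framebuffer and frame-rate ideas above can be seen together in a minimal render loop. The following is a sketch only, assuming the pygame and PyOpenGL packages (the stack used in Stemkoski & Pascale, 2021); the window size and clear color are arbitrary choices:

    import pygame
    from OpenGL.GL import (glEnable, glClear, glClearColor,
                           GL_DEPTH_TEST, GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT)

    pygame.init()
    # creating the window also creates the OpenGL context
    pygame.display.set_mode((800, 600), pygame.OPENGL | pygame.DOUBLEBUF)
    clock = pygame.time.Clock()

    glEnable(GL_DEPTH_TEST)           # use the depth buffer to decide visibility
    glClearColor(0.0, 0.0, 0.0, 1.0)  # black background: RGB (0, 0, 0)

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        # reset the color buffer and depth buffer before drawing each frame
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        # ... draw the scene here ...
        pygame.display.flip()  # display the finished frame
        clock.tick(60)         # aim for a frame rate of 60 FPS
    pygame.quit()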
The Graphics Pipeline

A graphics pipeline is an abstract model used to describe a sequence of steps needed in rendering a three-dimensional scene. Pipelining enables a computational task to be split into subtasks, thus increasing overall efficiency. Graphics pipelines increase the efficiency of the rendering process, enabling images to be displayed at faster rates. The pipeline model used in OpenGL consists of four (4) stages:

1. Application – initializing the window where rendered graphics will be displayed; sending data to the GPU
2. Geometry Processing – determining the position of each vertex of the geometric shapes to be rendered, implemented by a program known as a vertex shader
3. Rasterization – determining which pixels correspond to the geometric shapes to be rendered
4. Pixel Processing – determining the color of each pixel in the rendered image, involving a program called a fragment shader

Stage 1: Application

The application stage primarily involves processes that run on the CPU. The following are performed during the application stage:

Creating a window where the rendered graphics will be displayed: The window must be initialized to read the graphics from the GPU framebuffer. For animated and interactive applications, the main application contains a loop that repeatedly re-renders the scene, usually aiming for a rate of 60 FPS.

Reading data required for the rendering process: This data may include vertex attributes, which describe the appearance of the geometric shapes being rendered. The vertex attribute data is stored in GPU memory buffers called vertex buffer objects (VBOs), while images to be used as textures are stored in texture buffers. Lastly, source code for the vertex shader and fragment shader programs needs to be sent to the GPU, compiled, and loaded.

Sending data to the GPU: The application needs to specify the associations between attribute data stored in VBOs and attribute variables in the vertex shader program. A single geometric shape may have multiple attributes for each vertex (such as position and color). The corresponding data is streamed from buffers to variables in the shader during rendering.

Frequently, it is also necessary to work with many sets of such associations: multiple geometric shapes may be rendered by the same shader program, and each shape may also be rendered by a different shader program. These sets of associations can be managed using vertex array objects (VAOs). VAOs store this information and can be activated and deactivated as needed during the rendering process, as sketched below.
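The following sketch shows how an application might create a VAO and a VBO and associate the buffer with a vertex shader attribute. It assumes PyOpenGL and NumPy, an active OpenGL context, hypothetical position data, and a shader whose position attribute is declared at location 0; none of these names are fixed by the handout:

    import numpy
    from OpenGL.GL import *

    # hypothetical attribute data: three 3D positions forming a triangle
    position_data = numpy.array(
        [-0.5, -0.5, 0.0,
          0.5, -0.5, 0.0,
          0.0,  0.5, 0.0], dtype=numpy.float32)

    vao = glGenVertexArrays(1)  # VAO: stores the associations set up below
    glBindVertexArray(vao)

    vbo = glGenBuffers(1)       # VBO: GPU memory buffer holding attribute data
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    glBufferData(GL_ARRAY_BUFFER, position_data.nbytes, position_data, GL_STATIC_DRAW)

    # associate the bound buffer with the vertex shader attribute at location 0:
    # 3 floats per vertex, tightly packed, starting at offset 0
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, None)
    glEnableVertexAttribArray(0)

During rendering, binding this VAO activates the stored associations, so the position data streams from the buffer into the shader's attribute variable.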
Stage 2: Geometry Processing

The shape of a geometric object is defined by a mesh, a collection of points that are grouped into lines or triangles.

Figure 1. Examples of wireframe meshes

Apart from the object's overall shape, additional information may be required to describe how the object should be rendered. The properties or attributes that are specific to rendering each individual point are grouped together into a data structure called a vertex. A vertex should contain the three-dimensional position of the corresponding point. The additional data contained by a vertex often includes the following:

A color to be used when rendering the point
Texture coordinates (or UV coordinates), which indicate a point in an image that is mapped to the vertex
A normal vector, which indicates the direction perpendicular to a surface and is typically used in lighting calculations

The figure below illustrates different renderings of a sphere that make use of these attributes: wireframe, vertex colors, texture, and lighting effects.

Figure 2. Renderings of a sphere

During this stage, the vertex shader is applied to each of the vertices; each attribute variable in the shader receives data from a buffer according to previously specified associations. The primary purpose of the vertex shader is to determine the final position of each of the points being rendered. This is typically calculated from a series of transformations, as sketched after the list below:

Model transformation: The collection of points defining the intrinsic shape of an object may be translated, rotated, and scaled. Hence, the object appears to have a particular location, orientation, and size with respect to a virtual three-dimensional world. The coordinates expressed from this frame of reference are in world space. In world space, the origin is at the center of the scene. (Figure 3)

View transformation: Coordinates in this context are said to be in view space. The view space (or camera space, or eye space) is the result when world-space coordinates are transformed to coordinates in front of the user's view.

Projection transformation: Any points outside the specified region are discarded or clipped from the scene; coordinates expressed at this stage are in clip space.

Figure 3. One scene rendered from multiple camera locations and angles
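As a sketch of how this series of transformations might appear in practice, the vertex shader below (GLSL source held in a Python string, the usual arrangement with PyOpenGL) applies the model, view, and projection transformations in sequence. The attribute and uniform names are illustrative assumptions, and compiling requires an active OpenGL context:

    from OpenGL.GL import GL_VERTEX_SHADER
    from OpenGL.GL.shaders import compileShader

    VERTEX_SHADER_SOURCE = """
    #version 330 core
    layout (location = 0) in vec3 vertexPosition;  // the point's 3D position
    uniform mat4 modelMatrix;       // model transformation: object -> world space
    uniform mat4 viewMatrix;        // view transformation: world -> view space
    uniform mat4 projectionMatrix;  // projection transformation: view -> clip space
    void main()
    {
        // the final position is the result of a series of transformations
        gl_Position = projectionMatrix * viewMatrix * modelMatrix
                      * vec4(vertexPosition, 1.0);
    }
    """

    vertex_shader = compileShader(VERTEX_SHADER_SOURCE, GL_VERTEX_SHADER)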
Stage 3: Rasterization

Once the vertex shader has specified the final positions of each vertex, the rasterization stage begins. The points themselves must first be grouped into the desired type of geometric primitive: points, lines, or triangles, consisting of sets of 1, 2, or 3 points. For lines and triangles, additional information must be specified. For example, suppose an array of points [A, B, C, D, E, F] is to be grouped into lines. The points could be grouped in disjoint pairs, such as (A, B), (C, D), (E, F), producing a set of disconnected line segments. They could also be grouped in overlapping pairs, such as (A, B), (B, C), (C, D), (D, E), (E, F), producing a set of connected line segments (called a line strip). The type of geometric primitive and the method for grouping points are specified using an OpenGL function parameter when the rendering process begins. The process of grouping points into geometric primitives is termed primitive assembly.

The next step is to identify which pixels correspond to the interior of each geometric primitive. A criterion must be specified to clarify which pixels are in the interior. A fragment is created for each pixel corresponding to the interior of a shape. A fragment is a collection of data used to determine the color of a single pixel in a rendered image. The data stored in a fragment always includes the raster position, also called pixel coordinates.

Stage 4: Pixel Processing

The primary purpose of this stage is to determine the final color of each pixel, storing this data in the color buffer within the framebuffer. During the first part of the pixel processing stage, a program called the fragment shader is applied to each of the fragments to calculate their final color. This calculation may involve a variety of data stored in each fragment, in combination with data globally available during rendering, such as the following:

A base color applied to the entire shape
Colors stored in each fragment (interpolated from vertex colors)
Textures (images applied to the surface of the shape), where colors are sampled from locations specified by texture coordinates (Figure 4)
Light sources, whose relative position and/or orientation may lighten or darken the color, depending on the direction the surface is facing at a point, specified by normal vectors

Figure 4. An image file used as a texture for a 3D object
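To make the grouping example from the rasterization stage concrete, here is a small sketch (assuming PyOpenGL, an active context, and a bound VAO holding six vertices A through F) showing how the draw-mode parameter selects the grouping method:

    from OpenGL.GL import glDrawArrays, GL_LINES, GL_LINE_STRIP

    # with a VAO containing six vertices (A..F) currently bound:
    glDrawArrays(GL_LINES, 0, 6)       # disjoint pairs: (A,B), (C,D), (E,F)
    glDrawArrays(GL_LINE_STRIP, 0, 6)  # overlapping pairs: (A,B), (B,C), ..., (E,F)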
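As a companion to the vertex shader sketch above, a minimal fragment shader for the pixel processing stage might combine a base color with colors interpolated from vertex colors. The variable names are illustrative, and the in variable assumes a matching output declared in the vertex shader:

    FRAGMENT_SHADER_SOURCE = """
    #version 330 core
    uniform vec3 baseColor;  // a base color applied to the entire shape
    in vec3 fragmentColor;   // interpolated from vertex colors during rasterization
    out vec4 outputColor;    // final color stored in the color buffer
    void main()
    {
        outputColor = vec4(baseColor * fragmentColor, 1.0);
    }
    """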

References:
Korites, B. (2018). Python graphics: A reference for creating 2D and 3D images. Apress.
Marschner, S., & Shirley, P. (2021). Fundamentals of computer graphics (5th ed.). CRC Press.
Stemkoski, L., & Pascale, M. (2021). Developing graphics frameworks with Python and OpenGL. CRC Press.