Computer Graphics Syllabus PDF
Summary
This document is a syllabus for a computer graphics course, likely at an undergraduate level. It covers topics including computer graphics primitives, transformations, viewing, and rendering techniques and provides practical suggestions for programming assignments in C++.
Full Transcript
Syllabus
S.Y.B.Sc. (IT) SEMESTER - III, PAPER - II
COMPUTER GRAPHICS

Unit I
Introduction: Computer Graphics and Primitive Algorithms: Introduction to Image and Objects, Image Representation, Basic Graphics Pipeline, Bitmap and Vector-Based Graphics, Applications of Computer Graphics, Display Devices, Cathode Ray Tubes, Raster-Scan Display, Random-Scan Display, Flat Panel Display, Input Technology, Coordinate System Overview; Scan-Conversion of Graphics Primitives: Scan-Conversion of Lines (Digital Differential Analyzer Algorithm, Bresenham's Line-Drawing Algorithm), Scan-Conversion of Circle and Ellipse (Bresenham's Method of Circle Drawing, Midpoint Circle Algorithm), Drawing Ellipses and Other Conics.

Unit II
Two-Dimensional Transformations: Introduction to Transformations, Transformation Matrix, Types of Transformations in Two-Dimensional Graphics: Identity Transformation, Scaling, Reflection, Shear Transformations, Rotation, Translation, Rotation about an Arbitrary Point, Combined Transformation, Homogeneous Coordinates, 2D Transformations using Homogeneous Coordinates.

Unit III
Three-Dimensional Transformations: Objects in Homogeneous Coordinates; Three-Dimensional Transformations: Scaling, Translation, Rotation, Shear Transformations, Reflection; World Coordinates and Viewing Coordinates, Projection, Parallel Projection, Perspective Projection.
Unit IV
Viewing and Solid Area Scan-Conversion: Introduction to Viewing and Clipping, Viewing Transformation in Two Dimensions, Introduction to Clipping, Two-Dimensional Clipping, Point Clipping, Line Clipping, Introduction to Polygon Clipping, Viewing and Clipping in Three Dimensions, Three-Dimensional Viewing Transformations, Text Clipping; Introduction to Solid Area Scan-Conversion, Inside-Outside Test, Winding Number Method and Coherence Property, Polygon Filling, Seed Fill Algorithm, Scan-Line Algorithm, Priority Algorithm, Scan Conversion of Characters, Aliasing, Anti-Aliasing, Halftoning, Thresholding and Dithering.

Unit V
Introduction to Curves: Curve Continuity, Conic Curves, Piecewise Curve Design, Parametric Curve Design, Spline Curve Representation, Bezier Curves, B-Spline Curves, Fractals and their Applications. Surface Design: Bilinear Surfaces, Ruled Surfaces, Developable Surfaces, Coons Patch, Sweep Surfaces, Surface of Revolution, Quadric Surfaces, Constructive Solid Geometry, Bezier Surfaces, B-Spline Surfaces, Subdivision Surfaces. Visible Surfaces: Introduction to Visible and Hidden Surfaces, Coherence for Visibility, Extents and Bounding Volumes, Back Face Culling, Painter's Algorithm, Z-Buffer Algorithm, Floating Horizon Algorithm, Roberts Algorithm.

Unit VI
Object Rendering: Introduction to Object Rendering, Light Modeling Techniques, Illumination Model, Shading, Flat Shading, Polygon Mesh Shading, Gouraud Shading Model, Phong Shading, Transparency Effect, Shadows, Texture and Object Representation, Ray Tracing, Ray Casting, Radiosity, Color Models. Introduction to Animation, Key-Frame Animation, Construction of an Animation Sequence, Motion Control Methods, Procedural Animation, Key-Frame Animation vs. Procedural Animation, Introduction to Morphing, Three-Dimensional Morphing.

Books:
Computer Graphics, R. K. Maurya, John Wiley.
Mathematical Elements of Computer Graphics, David F. Rogers, J. Alan Adams, Tata McGraw-Hill.
Procedural Elements of Computer Graphics, David F. Rogers, Tata McGraw-Hill.

Reference:
Computer Graphics, Donald Hearn and M. Pauline Baker, Prentice Hall of India.
Computer Graphics, Steven Harrington, McGraw-Hill.
Computer Graphics Principles and Practice, J. D. Foley, A. van Dam, S. K. Feiner and R. L. Phillips, Addison-Wesley.
Principles of Interactive Computer Graphics, William M. Newman, Robert F. Sproull, Tata McGraw-Hill.
Introduction to Computer Graphics, J. D. Foley, A. van Dam, S. K. Feiner, J. F. Hughes and R. L. Phillips, Addison-Wesley.

Practical (Suggested): Should contain at least 10 programs developed using C++. Some sample practicals are listed below.
1. Write a program with a menu option to input the line coordinates from the user and generate a line using both Bresenham's method and the DDA algorithm. Compare the pixel values the two algorithms produce along the line.
2. Develop a program to generate a complete circle based on: a) Bresenham's circle algorithm b) the Midpoint Circle Algorithm
3. Implement the Bresenham's / DDA algorithm for drawing a line (the programmer is expected to shift the origin to the center of the screen and divide the screen into the required quadrants).
4. Write a program to implement a stretch-band effect. (A user will click on the screen and drag the mouse / arrow keys over the screen coordinates. The line should be updated like a rubber band and fixed on right-click.)
5. Write a program to perform the following 2D and 3D transformations on a given input figure: a) Rotation through a given angle b) Reflection c) Scaling d) Translation
6. Write a program to demonstrate shear transformation in different directions on a unit square situated at the origin.
7. Develop a program to clip a line between (X1, Y1) and (X2, Y2) against a window (Xmin, Ymin) to (Xmax, Ymax) using the Cohen-Sutherland line clipping algorithm.
8. Write a program to implement polygon filling.
9. Write a program to generate 2D/3D fractal figures (Sierpinski triangle, Cantor set, tree, etc.).
10.
Write a program to draw Bezier and B-Spline curves with interactive user inputs for the control polygon defining the shape of the curve.
11. Write a program to demonstrate 2D animation such as a clock simulation or rising sun.
12. Write a program to implement a bouncing ball inside a defined rectangular window.

1 COMPUTER GRAPHICS - FUNDAMENTALS

Unit Structure
1.0 Objectives
1.1 Introduction
1.2 Introduction to Image and Objects
1.3 Image Representation
1.4 Basic Graphics Pipeline
1.5 Bitmap and Vector-Based Graphics
1.6 Applications of Computer Graphics
1.7 Display Devices
1.7.1 Cathode Ray Tubes
1.7.2 Raster-Scan Display
1.7.3 Random-Scan Display
1.7.4 Flat Panel Display
1.8 Input Technology
1.9 Coordinate System Overview
1.10 Let us sum up
1.11 References and Suggested Reading
1.12 Exercise

1.0 OBJECTIVES
The objectives of this chapter are:
To understand the basics of computer graphics.
To be aware of applications of computer graphics.
To know the elements of computer graphics.

1.1 INTRODUCTION
Computer graphics involves the display, manipulation and storage of pictures and experimental data for proper visualization using a computer. It provides methods for producing images and animations (sequences of images). It deals with the hardware as well as software support for generating images. Basically, there are four major operations that we perform in computer graphics:
Imaging: refers to the representation of 2D images.
Modeling: refers to the representation of 3D objects.
Rendering: refers to the generation of 2D images from 3D models.
Animation: refers to the simulation of a sequence of images over time.

1.2 INTRODUCTION TO IMAGE AND OBJECTS
An image is basically a representation of a real-world object on a computer. It can be an actual picture display, a stored page in video memory, or source code generated by a program. Mathematically, an image is a two-dimensional array of data with an intensity or color value at each element of the array.
Objects are real-world entities defined in three-dimensional world coordinates. In computer graphics we deal with both 2D and 3D descriptions of an object. We also study the algorithms and procedures for generation and manipulation of objects and images in computer graphics.

Check your Progress:
1. Define image and object.
2. How is an image represented mathematically?

1.3 IMAGE REPRESENTATION
Image representation is the approximation of the real world displayed in a computer. A picture in computer graphics is represented as a collection of discrete picture elements termed pixels. A pixel is the smallest element of a picture or object that can be represented on the screen of a device like a computer.

Check your Progress:
1. Define pixel.

1.4 BASIC GRAPHICS PIPELINE
In computer graphics, the graphics pipeline refers to a series of interconnected stages through which data and commands related to a scene pass during the rendering process. It takes us from the mathematical description of an object to its representation on the device. The figure shown below illustrates a 3D graphics pipeline.

Figure 1.1: A 3D graphics pipeline

Real-world objects are represented in the world coordinate system. An object is then projected onto a view plane. The projection is done from the viewpoint of the position of a camera or eye. There is an associated camera coordinate system whose z-axis specifies the view direction when viewed from the viewpoint. The infinite volume swept by the rays emerging from the viewpoint and passing through the window is called the view volume or view pyramid. Clipping planes (near and far) are used to limit the output of the object. The mapping of an object to a graphics device requires the transformation of view plane coordinates to physical device coordinates. There are two steps involved in this process.
(i) The window-to-viewport transformation. The viewport is basically a sub-rectangle of a fixed rectangle known as the logical screen.
(ii) The transformation of logical screen coordinates to physical device coordinates.

Figure: Sequence of transformations in the viewing pipeline: Representation of 3D world objects → Transform into camera coordinates → Clip against view volume → Project to view plane → Transform to viewport → Transform to physical device coordinates.

Figure 1.2: 2D coordinate system to physical device coordinates transformation: Representation of 2D world objects → Clip against window → Transform to viewport → Transform to physical device coordinates.

The figures above depict the graphics pipeline and the 2D coordinate transformation to physical device coordinates.

Check your Progress:
1. Differentiate between the world coordinate system and the camera coordinate system.
2. Define view volume.

1.5 BITMAP AND VECTOR-BASED GRAPHICS
Computer graphics can be classified into two categories: raster (bitmap) graphics and vector graphics.

Bitmap graphics: This is pixel-based graphics. The position and color information about the image are stored in pixels arranged in a grid pattern. The image size is determined on the basis of the image resolution. These images cannot be scaled easily. Bitmap images are used to represent photorealistic images which involve complex color variations.

Figure 1.3: (a) An arrow image (b) magnified arrow image with pixel grid
The figure above shows a bitmap arrow image at its actual size and the magnified image with a pixel grid.

Vector graphics: The images in vector graphics are mathematically based images. Vector-based images have smooth edges and are therefore used to create curves and shapes. These images are appropriate for precise illustrations but not good for photorealistic images. They are easily scalable due to their mathematical structure.

Figure 1.4: (a) A rose image (b) vector description of a leaf of the rose
Figure 1.5: (a) A bitmap image (b) a vector image
The above figure shows a bitmap and a vector image of the letter A.

Check your Progress:
1. Which graphic system is better for photorealistic images?
2. In which graphic system are images easily scalable?

1.6 APPLICATIONS OF COMPUTER GRAPHICS
Computer graphics finds its application in various areas; some of the important areas are discussed below:

Computer-Aided Design: In engineering and architectural systems, products are modeled using computer graphics, commonly referred to as CAD (Computer-Aided Design). In many design applications like automobiles, aircraft, spacecraft, etc., objects are modeled in a wireframe outline that helps the designer to observe the overall shape and internal features of the objects. CAD applications are also used in computer animations; the motion of an object can be simulated using CAD.

Presentation graphics: Presentation graphics are used in applications like summarizing financial, statistical, mathematical, scientific and economic data for research reports. They increase understanding through visual tools like bar charts, line graphs, pie charts and other displays.

Computer art: A variety of computer methods are available to artists for designing and specifying the motion of an object. An object can be painted electronically on a graphics tablet using a stylus with different brush strokes, brush widths and colors. Artists can also use a combination of 3D modeling packages, texture mapping, drawing programs and CAD software to paint and visualize any object.

Entertainment: Computer graphics methods are widely used in making motion pictures, music videos and television shows. Graphics objects can be combined with live action, or image processing techniques can be used to transform one object into another (morphing).

Education and training: Computer graphics can make us understand the functioning of a system in a better way.
Models of physical systems, biological systems, population trends, etc., make them easier to understand. In some training systems, graphical models with simulations help a trainee to train in a virtual reality environment, for example, practice sessions or training for ship captains, aircraft pilots and air traffic control personnel.

Visualization: Analyzing scientific, engineering, medical and business data or behavior involving large amounts of information directly is a tedious and ineffective way to determine trends and relationships. If the data is converted into visual form, it becomes easier to understand. This process is termed visualization.

Image processing: Image processing provides techniques to modify or interpret existing images. One can improve picture quality through image processing techniques, which can also be used for machine perception of visual information in robotics. In medical applications, image processing techniques can be applied for image enhancement and are widely used for CT (Computed X-ray Tomography) and PET (Positron Emission Tomography) images.

Graphical User Interface: GUIs are commonly used these days to make a software package more interactive. Multiple window systems, icons and menus allow a computer setup to be utilized more efficiently.

Check your progress:
1. Fill in the blanks
(a) GUI stands for .................
(b) ............ provides us techniques to modify or interpret existing images.
2. Explain how computer graphics are useful in the entertainment industry.

1.7 DISPLAY DEVICES
There are various types of displays, like CRT, LCD and plasma. We will discuss each of these three in brief.

CRT (Cathode Ray Tube) is one of the most widely used display technologies. In a CRT, a beam of electrons emitted by an electron gun strikes specified positions on a phosphor-coated screen after passing through focusing and deflecting systems.
Figure 1.6: Elements of a CRT

LCD Display: LCD stands for Liquid Crystal Display.
o Organic molecules remain in a crystalline structure without an external force, but re-align themselves like a liquid under an external force.
o LCDs therefore re-align themselves to an applied electromagnetic field and change their own polarization.

Figure 1.7: LCD display

There are two types of LCD displays:
o Active Matrix LCD: The electric field is retained by a capacitor so that the crystal remains in a constant state. Transistor switches are used to transfer charge into the capacitors during scanning. The capacitors can hold the charge for significantly longer than the refresh period, giving a crisp display with no shadows. It is more expensive to produce.
o Passive Matrix LCD: The LCD transits slowly between states. In scanned displays with a large number of pixels, the percentage of time that the LCDs are excited is very small. Crystals spend most of their time in intermediate states, being neither "on" nor "off". These displays are not very sharp and are prone to ghosting.

Plasma display:
Figure 1.8: The basic structure of a plasma display
o These are basically fluorescent tubes.
o A high-voltage discharge excites a gas mixture (He, Xe); upon relaxation UV light is emitted, and the UV light excites phosphors.
o Some of its features are: large viewing angle; large format display; less efficient than CRT, more power; large pixels: 1 mm (0.2 mm for CRT); phosphor depletion.

In CRT monitors there are two techniques of displaying images.
o Raster scan displays: A rectangular array of points or dots. In a raster scan system, the electron beam is swept across the screen, one row at a time from top to bottom. As the electron beam moves across each row, the beam intensity is turned on and off to create a pattern of illuminated spots. See the figure below.

Figure 1.9: Raster scan display

Horizontal retrace: The return of the beam to the left of the screen after refreshing each scan line.
Vertical retrace: At the end of each frame (displayed in 1/80th to 1/60th of a second), the electron beam returns to the top left corner of the screen to begin the next frame.

Figure 1.10: Horizontal and vertical retrace

Random scan display: A random scan display uses geometric primitives such as points, lines, curves and polygons, which are all based on mathematical equations. In a random scan display, the CRT's electron beam is directed only to the parts of the screen where a picture is to be drawn. Random scan monitors draw a picture one line at a time. See the figure below.

Figure 1.11: Random scan display

o The refresh rate depends on the number of lines to be displayed.
o Picture definition is stored as a set of line-drawing commands in an area of memory referred to as the refresh display file.
o To display a picture, the system cycles through the set of commands in the display file, drawing each component line in turn.
o Random scan displays are designed to draw all the component lines of a picture 30 to 60 times each second.
o Random scan displays have higher resolution than raster systems.

There are some parameters or properties related to graphic displays like the CRT:

Persistence: In the case of a CRT, persistence refers to the property of a phosphor defining its lifetime, i.e., how long it continues to emit light after the CRT beam is removed.

Resolution: The maximum number of points that can be displayed without overlap on a CRT is referred to as the resolution. In other words, it is the number of points per unit length that can be plotted horizontally and vertically.

Aspect ratio: It is the ratio of the number of vertical points to the number of horizontal points necessary to produce equal-length lines in both directions on the screen.

Frame buffer: The frame buffer, also known as the refresh buffer, is the memory area that holds the set of intensity values for all the screen points.

Pixel: It refers to a point on the screen.
It is also known as a pel, a shortened form of 'picture element'.

Bitmap or pixmap: A frame buffer is said to be a bitmap on a black and white system with one bit per pixel. For systems with multiple bits per pixel, the frame buffer is referred to as a pixmap.

Graphical images are used to add emphasis, direct attention, illustrate concepts, and provide background content. There are two types of graphics:
o Draw-type graphics or vector graphics, which represent an image as a geometric shape.
o Bitmap graphics, which represent the image as an array of dots, called pixels.

Three basic elements for drawing in graphics are:
o Point: A point marks a position in space. In pure geometric terms, a point is a pair of x, y coordinates. It has no mass at all. Graphically, however, a point takes form as a dot, a visible mark. A point can be an insignificant fleck of matter or a concentrated locus of power. It can penetrate like a bullet, pierce like a nail, or pucker like a kiss. A mass of points becomes texture, shape, or plane. Tiny points of varying size create shades of gray.
o Line: A line is an infinite series of points. Understood geometrically, a line has length, but no breadth. A line is the connection between two points, or it is the path of a moving point. A line can be a positive mark or a negative gap. Lines appear at the edges of objects and where two planes meet. Graphically, lines exist in many weights; the thickness and texture as well as the path of the mark determine its visual presence. Lines are drawn with a pen, pencil, brush, mouse, or digital code. They can be straight or curved, continuous or broken. When a line reaches a certain thickness, it becomes a plane. Lines multiply to describe volumes, planes, and textures.
o Plane: A plane is a flat surface extending in height and width. A plane is the path of a moving line; it is a line with breadth. A line closes to become a shape, a bounded plane. Shapes are planes with edges.
In vector-based software, every shape consists of line and fill. A plane can be parallel to the picture surface, or it can skew and recede into space. Ceilings, walls, floors, and windows are physical planes. A plane can be solid or perforated, opaque or transparent, textured or smooth.

Check your progress:
1. Explain different display technologies.
2. Differentiate between raster scan and random scan displays.

1.8 INPUT TECHNOLOGY
There are different techniques for information input in a graphical system. The input can be in the form of text, graphics or sound. Some of the commonly used input technologies are discussed below.

1.8.1 Touch Screens
A touch screen device allows a user to operate a touch-sensitive device by simply touching the display screen. Input can be given by a finger or by passive objects like a stylus. There are three components of a touch screen device: a touch sensor, a controller and a software driver. A touch sensor is a touch-sensitive clear glass panel. A controller is a small PC card which establishes the connection between the touch sensor and the PC. The software driver allows the touch screen to work together with the PC. Touch screen technology can be implemented in various ways: resistive, surface acoustic, capacitive, infrared, strain gauge, optical imaging, dispersive signal technology, acoustic pulse recognition and frustrated total internal reflection.

1.8.2 Light Pen
A light pen is a pen-shaped pointing device which is connected to a visual display unit. It has a light-sensitive tip which detects the light from the screen when placed against it, enabling the computer to locate the position of the pen on the screen. Users can point to an image displayed on the screen and can also draw objects on the screen, similar to a touch screen but with more accuracy.

1.8.3 Graphics Tablets
Graphics tablets allow a user to hand-draw images and graphics in a way similar to drawing with a pencil and paper.
It consists of a flat surface upon which the user can draw or trace an image with the help of a provided stylus. The image is generally displayed on the computer monitor instead of appearing on the tablet itself.

Check your Progress:
1. Name different types of touch screen technologies.
2. Differentiate between a light pen and a graphics tablet.

1.9 COORDINATE SYSTEM OVERVIEW
To define the positions of points in space one requires a coordinate system: a way of determining the position of a point by defining a set of numbers called coordinates. There are different coordinate systems for representing an object in 2D or 3D.

1.9.1 Cartesian coordinate system
It is also known as the rectangular coordinate system and can be of two or three dimensions. A point in the Cartesian coordinate system is defined by specifying two numbers, called the x-coordinate and the y-coordinate of that point.

Figure 1.12: Cartesian coordinate system
In the above figure, two points (2, 3) and (3, 2) are specified in the Cartesian coordinate system.

1.9.2 Polar coordinate system
In the polar coordinate system, the position of a point is defined by specifying the distance (radius) from a fixed point called the origin, and the angle between the line joining the point to the origin and the polar axis (the horizontal line passing through the origin).

Figure 1.13: Polar coordinate system
The above figure shows a point (r, θ) in polar coordinates.

Check your Progress: Fill in the blanks
1. The position of a point is defined by specifying the distance (radius) from a fixed point called the ……….
2. A point in the Cartesian coordinate system can be defined by specifying two numbers, called the …….. and the ………… of that point.
Answers: 1. origin 2. x-coordinate, y-coordinate

1.10 LET US SUM UP
We learnt about computer graphics and its applications in different areas. We studied various display and input technologies. We also studied the basic graphics pipeline, and bitmap and vector-based graphics.
Then we learnt the elements of computer graphics, in which we came to know about terms like persistence, resolution, aspect ratio, frame buffer, pixel and bitmap. Finally we studied the coordinate systems.

1.11 REFERENCES AND SUGGESTED READING
(1) Computer Graphics, Donald Hearn, M. P. Baker, PHI.
(2) Procedural Elements of Computer Graphics, David F. Rogers, Tata McGraw-Hill.
(3) Computer Graphics, Rajesh K. Maurya, Wiley-India.

1.12 EXERCISE
1. What are the major operations that we perform in computer graphics?
2. Describe some of the applications of computer graphics.
3. Define the graphics pipeline and the process involved in it.
4. Differentiate between bitmap and vector-based graphics.
5. Define the following terms: a. Persistence b. Aspect ratio c. Frame buffer d. Resolution e. Pixel
6. Define horizontal and vertical retrace.

2 SCAN CONVERSION OF GRAPHICS PRIMITIVES

Unit Structure
2.0 Objectives
2.1 Introduction
2.2 Scan-Conversion of Lines
2.2.1 Digital Differential Analyzer Algorithm
2.2.2 Bresenham's Line-Drawing Algorithm
2.3 Scan-Conversion of Circle and Ellipse
2.3.1 Bresenham's Method of Circle Drawing
2.3.2 Midpoint Circle Algorithm
2.3.3 Midpoint Ellipse Algorithm
2.4 Drawing Ellipses and Other Conics
2.5 Let us sum up
2.6 References and Suggested Reading
2.7 Exercise

2.0 OBJECTIVES
The objectives of this chapter are:
To understand the basic idea of scan conversion techniques.
To understand the algorithms for scan conversion of lines, circles and other conics.

2.1 INTRODUCTION
Scan conversion or rasterization is the process of converting a primitive from its geometric definition into the set of pixels that make up the primitive in image space. This technique is used to draw shapes like lines, circles, ellipses, etc. on the screen. Some of them are discussed below.

2.2 SCAN-CONVERSION OF LINES
A straight line can be represented by the slope-intercept equation
    y = mx + b
where m represents the slope of the line and b the y-intercept.
If the two endpoints of the line are specified at positions (x1, y1) and (x2, y2), the values of the slope m and intercept b can be determined as
    m = (y2 - y1) / (x2 - x1)
    b = y1 - m * x1
If ∆x and ∆y are the intervals corresponding to x and y respectively for the line, then for a given interval ∆x we can calculate ∆y as
    ∆y = m * ∆x
and similarly, for a given interval ∆y, ∆x can be calculated as
    ∆x = ∆y / m
For lines with magnitude |m| < 1, ∆x is set proportional to a small horizontal deflection and the corresponding vertical deflection ∆y is calculated as ∆y = m * ∆x. For lines with |m| > 1, ∆y is set proportional to a small vertical deflection and the corresponding horizontal deflection ∆x is calculated as ∆x = ∆y / m. The following figure shows a line drawn between points (x1, y1) and (x2, y2).

Figure 2.1: A line representation in the Cartesian coordinate system

2.2.1 Digital Differential Analyzer (DDA) Algorithm
Sampling of the line at unit intervals is carried out in one coordinate and the corresponding integer value for the other coordinate is calculated. If the slope is less than or equal to 1 (|m| ≤ 1), the coordinate x is sampled at unit intervals (∆x = 1) and each successive value of y is computed as
    y_{k+1} = y_k + m
where k varies from 1 to the endpoint value, taking integer values only. Each calculated value of y is rounded off to the nearest integer.
For slope greater than 1 (|m| > 1), the roles of y and x are reversed, i.e., y is sampled at unit intervals (∆y = 1) and the corresponding x values are calculated as
    x_{k+1} = x_k + 1/m
For negative slopes, we follow the same procedure as above, only the sampling unit ∆x or ∆y becomes -1, so the sampled coordinate decreases at each step.

Pseudocode for an integer-only DDA (for 0 ≤ m ≤ 1) is as follows:

LineDDA(Xa, Ya, Xb, Yb)   // draw a line from (Xa, Ya) to (Xb, Yb)
{
    Set dx = Xb - Xa, dy = Yb - Ya;
    Set steps = dx;
    Set X = Xa, Y = Ya;
    Set c = 0;                 // accumulates the fractional part of y
    Call PutPixel(X, Y);
    For (i = 0; i < steps; i = i + 1)
    {
        X = X + 1;
        c = c + dy;
        If (c > dx)            // the fractional part is greater than 1 now
        {
            Y = Y + 1;         // carry the overflowed integer over
            c = c - dx;        // update the fractional part
        }
        Call PutPixel(X, Y);
    }
}

2.2.2 Bresenham's Line Drawing Algorithm
This line drawing algorithm, proposed by Bresenham, is an accurate and efficient raster-line generating algorithm using only incremental integer calculations. For lines with |m| ≤ 1, Bresenham's line drawing algorithm proceeds as follows:
I. Read the end points of the line and store the left point in (x0, y0).
II. Plot (x0, y0), the first point.
III. Calculate the constants ∆x, ∆y, 2∆y and 2∆y - 2∆x, and obtain the initial decision parameter
    p0 = 2∆y - ∆x
IV. Perform the following test for each xk, starting at k = 0:
    if pk < 0, the next point to plot is (xk + 1, yk) and
        pk+1 = pk + 2∆y
    otherwise, the next point to plot is (xk + 1, yk + 1) and
        pk+1 = pk + 2∆y - 2∆x
V. Repeat step IV ∆x times.
For a line with positive slope greater than 1, the roles of the x and y directions are interchanged.

Check your progress:
1. Fill in the blanks
(a) ............ of the line at unit intervals is carried out in one coordinate and the corresponding integer value for the other coordinate is calculated.
(b) Bresenham's line drawing algorithm is an accurate and efficient raster-line generating algorithm using only ................ calculations.
2. Compare the DDA and Bresenham's line drawing algorithms.
Answers: 1 (a) sampling (b) incremental integer
2.3 SCAN-CONVERSION OF CIRCLE AND ELLIPSE
A circle with centre (xc, yc) and radius r can be represented in equation form in three ways:
Analytical representation: r^2 = (x - xc)^2 + (y - yc)^2
Implicit representation: (x - xc)^2 + (y - yc)^2 - r^2 = 0
Parametric representation: x = xc + r cosθ, y = yc + r sinθ

A circle is symmetrical in nature. Eight-way symmetry can be used by reflecting each computed point about each 45° axis. The points obtained in this case are given below, with an illustration in the figure.
P1 = (x, y)    P5 = (-x, -y)
P2 = (y, x)    P6 = (-y, -x)
P3 = (-y, x)   P7 = (y, -x)
P4 = (-x, y)   P8 = (x, -y)

Figure 2.2: Eight-way symmetry of a circle

2.3.1 Bresenham's circle drawing algorithm
Let us define a procedure Bresenham_Circle(Xc, Yc, R) for Bresenham's circle drawing algorithm for a circle of radius R and centre (Xc, Yc):

Bresenham_Circle (Xc, Yc, R)
{
    Set X = 0;
    Set Y = R;
    Set D = 3 - 2R;
    While (X < Y)
    {
        Call Draw_Circle (Xc, Yc, X, Y);
        X = X + 1;
        If (D < 0)
        {
            D = D + 4X + 6;
        }
        Else
        {
            Y = Y - 1;
            D = D + 4(X - Y) + 10;
        }
        Call Draw_Circle (Xc, Yc, X, Y);
    }
}

Draw_Circle (Xc, Yc, X, Y)
{
    Call PutPixel (Xc + X, Yc + Y);
    Call PutPixel (Xc - X, Yc + Y);
    Call PutPixel (Xc + X, Yc - Y);
    Call PutPixel (Xc - X, Yc - Y);
    Call PutPixel (Xc + Y, Yc + X);
    Call PutPixel (Xc - Y, Yc + X);
    Call PutPixel (Xc + Y, Yc - X);
    Call PutPixel (Xc - Y, Yc - X);
}

2.3.2 Midpoint circle drawing algorithm
This algorithm uses the implicit function of the circle in the following way:
    f(x, y) = (x - xc)^2 + (y - yc)^2 - r^2
where f(x, y) < 0 means (x, y) is inside the circle,
f(x, y) = 0 means (x, y) is on the circle, and
f(x, y) > 0 means (x, y) is outside the circle.
The algorithm now follows as:

Midpoint_Circle (Xc, Yc, R)
{
    Set X = 0;
    Set Y = R;
    Set P = 1 - R;
    While (X < Y)
    {
        Call Draw_Circle (Xc, Yc, X, Y);
        X = X + 1;
        If (P < 0)
        {
            P = P + 2X + 1;
        }
        Else
        {
            Y = Y - 1;
            P = P + 2(X - Y) + 1;
        }
        Call Draw_Circle (Xc, Yc, X, Y);
    }
}

2.3.3 Midpoint Ellipse Algorithm
This is a modified form of the midpoint
circle algorithm for drawing an ellipse. The general equation of an ellipse in implicit form is
    f (x, y) = b²x² + a²y² – a²b² = 0

Now the algorithm for the ellipse follows, processing the arc in two regions (region 1 where the slope is greater than –1, region 2 for the remainder):

MidPoint_Ellipse (Xc, Yc, Rx, Ry)
{
    Set Sx = Rx * Rx;
    Set Sy = Ry * Ry;
    Set X = 0;
    Set Y = Ry;
    Set Px = 0;
    Set Py = 2 * Sx * Y;
    Call Draw_Ellipse (Xc, Yc, X, Y);

    /* Region 1 */
    Set P = Sy – (Sx * Ry) + (0.25 * Sx);
    While (Px < Py)
    {
        X = X + 1;
        Px = Px + 2 * Sy;
        If (P < 0)
        {
            P = P + Sy + Px;
        }
        Else
        {
            Y = Y – 1;
            Py = Py – 2 * Sx;
            P = P + Sy + Px – Py;
        }
        Call Draw_Ellipse (Xc, Yc, X, Y);
    }

    /* Region 2 */
    Set P = Sy * (X + 0.5) * (X + 0.5) + Sx * (Y – 1) * (Y – 1) – Sx * Sy;
    While (Y > 0)
    {
        Y = Y – 1;
        Py = Py – 2 * Sx;
        If (P > 0)
        {
            P = P + Sx – Py;
        }
        Else
        {
            X = X + 1;
            Px = Px + 2 * Sy;
            P = P + Sx – Py + Px;
        }
        Call Draw_Ellipse (Xc, Yc, X, Y);
    }
}

Draw_Ellipse (Xc, Yc, X, Y)
{
    Call PutPixel (Xc + X, Yc + Y);
    Call PutPixel (Xc – X, Yc + Y);
    Call PutPixel (Xc + X, Yc – Y);
    Call PutPixel (Xc – X, Yc – Y);
}

Check your progress:
1. Give the three representations of a circle, along with their equations.
2. Fill in the blanks
   In the midpoint circle drawing algorithm,
   f (x, y) < 0 means (x, y) is ...... the circle
   f (x, y) = 0 means (x, y) is ...... the circle
   f (x, y) > 0 means (x, y) is ...... the circle
Answers: 2. inside, on, outside

2.4 DRAWING ELLIPSES AND OTHER CONICS

The equation of an ellipse with center at the origin is given as
    x²/a² + y²/b² = 1
Using the standard parameterization, we can generate points on it as
    x = a cosθ, y = b sinθ, 0 ≤ θ ≤ 2π
Differentiating the standard ellipse equation we get
    dy/dx = – (b²x) / (a²y)
Now the DDA algorithm for the circle can be applied to draw the ellipse.

Similarly, a conic can be defined by the general second-degree equation
    Ax² + Bxy + Cy² + Dx + Ey + F = 0
If a starting pixel on the conic is given, the adjacent pixel can be determined in a manner similar to the circle drawing algorithm.

Check your Progress:
1. Write down the equation of a standard ellipse.
2. Which scan conversion technique can be applied to draw an ellipse?

2.5 LET US SUM UP

We learnt about the scan conversion technique and how it is used to represent line, circle and ellipse. The DDA and Bresenham's line drawing algorithms were discussed. We then learnt Bresenham's and the midpoint circle drawing algorithms. The midpoint ellipse drawing algorithm was also illustrated. Finally we learnt about drawing ellipses and other conics.
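As a closing programming aside, the midpoint circle algorithm of section 2.3.2 can be sketched in runnable C++. This is an illustrative sketch only: plot8 stands in for the text's Draw_Circle/PutPixel routines and simply collects the generated points instead of writing pixels.

```cpp
#include <utility>
#include <vector>

// Collects the eight symmetric points of (x, y) about (xc, yc),
// mirroring the Draw_Circle procedure of the text.
void plot8(int xc, int yc, int x, int y,
           std::vector<std::pair<int, int>>& out) {
    out.push_back({xc + x, yc + y}); out.push_back({xc - x, yc + y});
    out.push_back({xc + x, yc - y}); out.push_back({xc - x, yc - y});
    out.push_back({xc + y, yc + x}); out.push_back({xc - y, yc + x});
    out.push_back({xc + y, yc - x}); out.push_back({xc - y, yc - x});
}

// Midpoint circle scan-conversion: walks one octant with the
// decision parameter P and relies on eight-way symmetry for the rest.
std::vector<std::pair<int, int>> midpointCircle(int xc, int yc, int r) {
    std::vector<std::pair<int, int>> pts;
    int x = 0, y = r, p = 1 - r;
    while (x <= y) {
        plot8(xc, yc, x, y, pts);
        ++x;
        if (p < 0) p += 2 * x + 1;
        else { --y; p += 2 * (x - y) + 1; }
    }
    return pts;
}
```

Every generated point lies within about one pixel of the true circle, which is the whole point of the incremental decision parameter.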
2.6 REFERENCES AND SUGGESTED READING

(1) Computer Graphics, Donald Hearn, M. P. Baker, PHI.
(2) Procedural Elements of Computer Graphics, David F. Rogers, Tata McGraw Hill.
(3) Computer Graphics, Rajesh K. Maurya, Wiley-India.

2.7 EXERCISE

1. Describe scan conversion.
2. Explain briefly the DDA line drawing algorithm.
3. Explain Bresenham's line drawing algorithm with an example.
4. Discuss scan conversion of a circle with Bresenham's and the midpoint circle algorithms.
5. Explain how an ellipse and other conics can be drawn using the scan conversion technique.

3
TWO DIMENSIONAL TRANSFORMATIONS I

Unit Structure
3.0 Objectives
3.1 Introduction
3.2 Introduction to transformations
3.3 Transformation Matrix
3.4 Types of Transformations in Two-Dimensional Graphics
3.5 Identity Transformation
3.6 Scaling
3.7 Reflection
3.8 Shear Transformations
3.9 Let us sum up
3.10 References and Suggested Reading
3.11 Exercise

3.0 OBJECTIVES

The objective of this chapter is
To understand the basics of 2D transformations.
To understand the transformation matrix and types of 2D transformations.
To understand two dimensional identity transformations.
To understand 2D scaling and reflection transformations.
To understand shear transformations in 2D.

3.1 INTRODUCTION

Transformations are among the fundamental operations performed in computer graphics. A transformation is often required when an object defined in one coordinate system must be observed in some other coordinate system. Transformations are also useful in animation. In the coming sections we will see the different types of transformation and their mathematical form.

3.2 INTRODUCTION TO TRANSFORMATIONS

In computer graphics we often need to transform the coordinates of an object (its position, orientation and size). One can view object transformation in two complementary ways:
(i) Geometric transformation: the object itself is transformed against a relatively stationary coordinate system or background.
(ii) Coordinate transformation: in this viewpoint, the coordinate system is transformed instead of the object.

On the basis of what is preserved, there are three classes of transformation:
Rigid body: preserves distances and angles. Examples – translation and rotation.
Conformal: preserves angles. Examples – translation, rotation and uniform scaling.
Affine: preserves parallelism, meaning lines remain lines. Examples – translation, rotation, scaling, shear and reflection.

In general there are four attributes of an object that may be transformed:
(i) Position (translation)
(ii) Size (scaling)
(iii) Orientation (rotation)
(iv) Shape (shear)

Check your progress:
1. Differentiate between geometric and coordinate transformation.
2. Fill in the blanks
(a) Rigid body transformation preserves ...........
(b) Conformal transformation preserves ............
(c) Affine transformation preserves ...............
Answers: 2(a) distance and angle (b) angles (c) parallelism

3.3 TRANSFORMATION MATRIX

The transformation matrix is the basic tool for transformation. A matrix with n × m dimensions is multiplied with the coordinates of objects. Usually 3 × 3 or 4 × 4 matrices are used for transformation. For example, consider the following matrix for the rotation operation (using the row-vector convention [x' y'] = [x y][T]):

    [T] = |  cosθ   sinθ |
          | –sinθ   cosθ |

We will be using transformation matrices to demonstrate the various transformation operations in the subsequent sections.

Check your progress:
1. Write down the transformation matrix for a rotation operation at angle θ.
2. Obtain the transformation matrix for a 60° rotation.

Answers:
1.  |  cosθ   sinθ |
    | –sinθ   cosθ |
2.  |  1/2    √3/2 |
    | –√3/2   1/2  |

3.4 TYPES OF TRANSFORMATION IN TWO-DIMENSIONAL GRAPHICS

In 2D transformations, only planar coordinates are used. For this purpose a 2 × 2 transformation matrix is utilized. In general, 2D transformation includes the following types of transformations:
I. Identity transformation
II. Scaling
III. Reflection
IV. Shear transformation
V. Rotation
VI. Translation

3.5 IDENTITY TRANSFORMATION

In identity transformation, each point is mapped onto itself.
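To make this concrete, the row-vector convention [x' y'] = [x y][T] used in this chapter can be sketched in C++; the names Vec2, Mat2 and apply are illustrative, not from the text. A point multiplied by the identity matrix comes back unchanged.

```cpp
#include <array>

using Vec2 = std::array<double, 2>;                 // row vector [x, y]
using Mat2 = std::array<std::array<double, 2>, 2>;  // 2x2 matrix, row-major

// Post-multiply a row vector by a matrix: [x' y'] = [x y] * T.
Vec2 apply(const Vec2& p, const Mat2& t) {
    return { p[0] * t[0][0] + p[1] * t[1][0],
             p[0] * t[0][1] + p[1] * t[1][1] };
}

// The 2x2 identity transformation: ones on the main diagonal.
Mat2 identity2() {
    return {{ {1.0, 0.0}, {0.0, 1.0} }};
}
```

The scaling, reflection and shear matrices of the following sections can be passed to apply in exactly the same way.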
There is no change in the source image on applying an identity transformation. Suppose T is the transformation matrix for the identity transformation which operates on a point P (x, y) and produces point P' (x', y'). Then

    P' (x', y') = [x' y'] = [P] [T]
                = [x y] | 1  0 |
                        | 0  1 |
                = [x y]

We can see that on applying the identity transformation we obtain the same points. Here the identity transformation is

    [T] = | 1  0 |
          | 0  1 |

The identity transformation matrix is basically an n × n matrix with ones on the main diagonal and zeros for the other values.

Check your progress: Fill in the blanks
1. In identity transformation, each point is mapped onto …….
2. The identity transformation matrix is basically an n × n matrix with …… on the main diagonal and ……. for the other values.
Answers: 1. itself 2. ones, zeros

3.6 SCALING

This transformation changes the size of the object. We perform this operation by multiplying scaling factors sx and sy with the original coordinate values (x, y) to obtain the scaled new coordinates (x', y'):
    x' = x . sx
    y' = y . sy
In matrix form it can be represented as

    [x' y'] = [x y] | sx  0  |
                    | 0   sy |

For equal sx and sy, the scaling is called uniform scaling. For different sx and sy, the scaling is called differential scaling.

Check your Progress: Fill in the blanks
1. Scaling changes the ……… of the object.
2. For the same sx and sy, the scaling is called ………. scaling.
Answers: 1. size 2. uniform

3.7 REFLECTION

In the reflection transformation, the mirror image of an object is formed. In two dimensions, it is achieved by rotating the object by 180 degrees about the axis of reflection, which lies in a plane. We can choose any axis of reflection in the xy plane or perpendicular to the xy plane. For example, reflection about the x axis (the plane y = 0) can be done with the transformation matrix

    | 1   0 |
    | 0  –1 |

Figure 3.1 : Reflection transformation about x axis

A reflection about the y axis (x = 0) is shown in the figure below, which can be done with the following transformation matrix:

    | –1  0 |
    |  0  1 |

Figure 3.2 : Reflection about y axis

Check Your Progress: Fill in the blanks
1.
In reflection transformation, the ……… image of an object is formed.
2. 2D reflection transformation can be achieved by rotating the object by ……. degrees.
Answers: 1. mirror 2. 180

3.8 SHEAR TRANSFORMATIONS

An object can be considered to be composed of different layers. In the shear transformation, the shape of the object is distorted by producing a sliding effect of the layers over each other. There are two general shearing transformations: one shifts the x coordinate values and the other shifts the y coordinate values. The transformation matrix for producing shear relative to the x axis is

    |  1   0 |
    | shx  1 |

producing the transformations
    x' = x + shx . y,    y' = y
where shx is a shear parameter which can take any real number value. The figure below demonstrates this transformation.

Figure 3.3 : 2D shear transformation

Check your Progress: Fill in the blanks
1. In shear transformation, the shape of the object is distorted by producing the ……. effect of layers.
2. The shear parameter can take any ……… number value.
Answers: 1. sliding 2. real

3.9 LET US SUM UP

We learnt about the basics of two dimensional transformations. We studied the transformation matrix and various types of transformations in 2D. Then we learnt about identity transformations, scaling and reflection transformations in two dimensions. Finally we understood the 2D shear transformation.

3.10 REFERENCES AND SUGGESTED READING

(1) Computer Graphics, Donald Hearn, M. P. Baker, PHI.
(2) Procedural Elements of Computer Graphics, David F. Rogers, Tata McGraw Hill.
(3) Computer Graphics, Rajesh K. Maurya, Wiley-India.

3.11 EXERCISE

1. Explain transformation and its importance.
2. Describe, using transformation matrices, the following 2D transformations: (i) Translation (ii) Scaling (iii) Reflection.
3. Scale a triangle with respect to the origin, with vertices at original coordinates (10, 20), (10, 10), (20, 10), by sx = 2, sy = 1.5.
4. What is the importance of homogeneous coordinates?
5. Explain two dimensional shear transformations.
6.
Obtain the transformation matrix for reflection along the diagonal line y = x.

Answers: 3. (20, 30), (20, 15), and (40, 15)

4
TWO DIMENSIONAL TRANSFORMATIONS II

Unit Structure
4.0 Objectives
4.1 Introduction
4.2 Rotation
4.3 Translation
4.4 Rotation about an Arbitrary Point
4.5 Combined Transformation
4.6 Homogeneous Coordinates
4.7 2D Transformations using Homogeneous Coordinates
4.8 Let us sum up
4.9 References and Suggested Reading
4.10 Exercise

4.0 OBJECTIVES

The objective of this chapter is
To understand the 2D rotation transformation.
To understand 2D translation transformations.
To understand two dimensional combined transformations.
To understand homogeneous coordinates and 2D transformations using homogeneous coordinates.

4.1 INTRODUCTION

This chapter is an extension of the previous chapter, in which we discuss the rotation transformation about the origin and about an arbitrary point. We also learn about the translation transformation, in which the position of an object changes. Homogeneous coordinates and 2D transformation using homogeneous coordinates are explained as well.

4.2 ROTATION

In the rotation transformation, an object is repositioned along a circular path in the xy plane. The rotation is performed by a certain angle θ, known as the rotation angle. Rotation can be performed in two ways: about the origin or about an arbitrary point called the rotation point or pivot point.

Rotation about the origin: The pivot point here is the origin. With the help of the figure we can obtain the transformation equations for rotating a point (x, y) through an angle θ to obtain the final point (x', y'):
    x' = x cosθ – y sinθ
    y' = x sinθ + y cosθ

Figure 4.1 : Rotation about origin

The transformation matrix for rotation can be written as

    R = | cosθ  –sinθ |
        | sinθ   cosθ |

Hence, the rotation transformation in matrix form can be represented as
    P' = R . P

Check your progress:
1.
Find the new equation of a line in the new coordinates (x', y') resulting from a rotation of 90°. [Use the line equation y = mx + c.]
Answer: 1. y' = (–1/m)x – c/m

4.3 TRANSLATION

The repositioning of the coordinates of an object along a straight line path is called translation. The translation transformation is done by adding translation distances tx and ty to the original coordinate position (x, y) to obtain the new position (x', y'):
    x' = x + tx,    y' = y + ty
The pair (tx, ty) is called the translation vector or shift vector. In matrix form it can be written as
    P' = P + T,  where P' = [x' y'], P = [x y], T = [tx ty]
The figure below shows the translation of an object: the coordinate points defining the object are translated and the object is then reconstructed.

Figure 4.2 : 2D translation transformation

Check your Progress:
1. Translate a triangle with vertices at original coordinates (10, 20), (10, 10), (20, 10) by tx = 5, ty = 10.
Answer: 1. (15, 30), (15, 20), and (25, 20)

4.4 ROTATION ABOUT AN ARBITRARY POINT

In many applications it is required to rotate an object about an arbitrary point rather than the origin. Rotation about an arbitrary pivot point (xr, yr) is shown in the figure below.

Figure 4.3 : Rotation about an arbitrary point

The corresponding equations obtained are
    x' = xr + (x – xr) cosθ – (y – yr) sinθ
    y' = yr + (x – xr) sinθ + (y – yr) cosθ

We can see how this rotation transformation differs from the previous one: it contains additive terms as well as multiplicative factors on the coordinate values.

Let us understand rotation about an arbitrary point through an example. Suppose we have to rotate a triangle ABC by 90 degrees about the point (1, –1). The coordinates of the triangle are A (4, 0), B (8, 3) and C (6, 2).
The triangle ABC can be represented in matrix form by listing its vertices as rows:

    [A]   | 4  0 |
    [B] = | 8  3 |
    [C]   | 6  2 |

The point about which the triangle has to be rotated is P = (1, –1), and the rotation matrix for a 90 degree rotation is

    R = | cos 90°  –sin 90° |  =  | 0  –1 |
        | sin 90°   cos 90° |     | 1   0 |

Now we can perform the rotation operation in three steps. In the first step we translate the pivot point to the origin. Let A'B'C' be the new coordinates obtained after the translation operation:
    A' = (3, 1), B' = (7, 4), C' = (5, 3)
The second step is to rotate the object. Let A''B''C'' be the new coordinates after applying the rotation operation to A'B'C':
    A'' = (–1, 3), B'' = (–4, 7), C'' = (–3, 5)
In the third step we translate the coordinates back:
    A''' = (0, 2), B''' = (–3, 6), C''' = (–2, 4)
The coordinates A'''B'''C''' are the required result.

Check your progress:
1. What are the steps involved in rotating an object about an arbitrary point?
2. What is the difference between rotation about the origin and rotation about an arbitrary point?

4.5 COMBINED TRANSFORMATION

A sequence of transformations, called a composite or combined transformation, can be represented by a product of matrices. The product is obtained by multiplying the transformation matrices in order from right to left. For example, two successive translations applied to a position P to obtain P' are calculated as
    P' = {T (tx2, ty2) . T (tx1, ty1)} . P
The expanded form of the multiplication of the translation matrices in the above equation can be written as
    T (tx2, ty2) . T (tx1, ty1) = T (tx1 + tx2, ty1 + ty2)

Check your progress:
1. Explain combined transformation with an example.

4.6 HOMOGENEOUS COORDINATES

Representing 2D coordinates as vectors with two components turns out to be rather awkward when carrying out manipulations in computer graphics. Homogeneous coordinates allow us to treat all transformations in the same way, as matrix multiplications. The consequence is that our 2-vectors become extended to 3-vectors, with a resulting increase in storage and processing. Homogeneous coordinates mean that we represent a point (x, y) by the extended triplet (x, y, w). In general w should be non-zero.
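A small sketch of the idea in C++: a Cartesian point is promoted to homogeneous form by appending w = 1, and any homogeneous triple with non-zero w is brought back by dividing through by w (the names Hom2, toHomogeneous and toCartesian are illustrative):

```cpp
#include <array>

using Hom2 = std::array<double, 3>;  // homogeneous point (x, y, w)

// Promote a Cartesian point (x, y) to homogeneous coordinates with w = 1.
Hom2 toHomogeneous(double x, double y) { return {x, y, 1.0}; }

// Recover Cartesian coordinates by dividing through by w (w must be non-zero).
void toCartesian(const Hom2& h, double& x, double& y) {
    x = h[0] / h[2];
    y = h[1] / h[2];
}
```

For example, (4, 6, 2) and (2, 3, 1) name the same Cartesian point (2, 3).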
The normalized homogeneous coordinates are given by (x/w, y/w, 1), where (x/w, y/w) are the Cartesian coordinates of the point. Note that in homogeneous coordinates (x, y, w) is the same as (x/w, y/w, 1), as is (ax, ay, aw), where a can be any non-zero real number. Points with w = 0 are called points at infinity, and are not frequently used.

Check your progress:
1. What is the significance of homogeneous coordinates?

4.7 2D TRANSFORMATIONS USING HOMOGENEOUS COORDINATES

The homogeneous transformation matrices are as follows:

For translation
    | 1  0  tx |
    | 0  1  ty |
    | 0  0  1  |

For rotation
    | cosθ  –sinθ  0 |
    | sinθ   cosθ  0 |
    | 0      0     1 |

For scaling
    | sx  0   0 |
    | 0   sy  0 |
    | 0   0   1 |

Check your progress:
1. Obtain the translation matrix for tx = 2, ty = 3 in the homogeneous coordinate system.
2. Obtain the scaling matrix for sx = sy = 2 in the homogeneous coordinate system.

Answers:
1.  | 1  0  2 |
    | 0  1  3 |
    | 0  0  1 |
2.  | 2  0  0 |
    | 0  2  0 |
    | 0  0  1 |

4.8 LET US SUM UP

We learnt the basics of transformation and its use. Then we studied two dimensional transformations and their types: translation, scaling, rotation, reflection and shear. The transformation matrix corresponding to each transformation operation was also studied. The homogeneous coordinate system and its importance were discussed as well.

4.9 REFERENCES AND SUGGESTED READING

(1) Computer Graphics, Donald Hearn, M. P. Baker, PHI.
(2) Procedural Elements of Computer Graphics, David F. Rogers, Tata McGraw Hill.
(3) Computer Graphics, Rajesh K. Maurya, Wiley-India.

4.10 EXERCISE

1. Find the new coordinates of the point (2, –4) after a rotation of 30°.
2. Rotate a triangle about the origin with vertices at original coordinates (10, 20), (10, 10), (20, 10) by 30 degrees.
3. Show that successive rotations in two dimensions are additive.
4. Obtain a matrix for two dimensional rotation transformation by an angle θ in the clockwise direction.
5. Obtain the transformation matrix to reflect a point A (x, y) about the line y = mx + c.

Answers: 1. (√3 + 2, 1 – 2√3) 2.
(–1.34, 22.32), (3.66, 13.66), and (12.32, 18.66)

5
THREE DIMENSIONAL TRANSFORMATIONS I

Unit Structure
5.0 Objectives
5.1 Introduction
5.2 Objects in Homogeneous Coordinates
5.3 Transformation Matrix
5.4 Three-Dimensional Transformations
5.5 Scaling
5.6 Translation
5.7 Rotation
5.8 Shear Transformations
5.9 Reflection
5.10 Let us sum up
5.11 References and Suggested Reading
5.12 Exercise

5.0 OBJECTIVES

The objective of this chapter is
To understand objects in homogeneous coordinates.
To understand the transformation matrix for 3D transformations.
To understand the basics of three-dimensional transformations.
To understand the 3D scaling transformation.
To understand 3D translation and rotation transformations.
To understand 3D shear and reflection transformations.

5.1 INTRODUCTION

In two dimensions there are two perpendicular axes labeled x and y. A coordinate system in three dimensions similarly consists of three perpendicular axes labeled x, y and z. The third axis makes the transformation operations different from those in two dimensions, as discussed in this chapter.

5.2 OBJECTS IN HOMOGENEOUS COORDINATES

Homogeneous coordinates enable us to perform certain standard operations on points in Euclidean (XYZ) space by means of matrix multiplications. In the Cartesian coordinate system, a point is represented by a list of n coordinates, where n is the dimension of the space. The homogeneous coordinates corresponding to the same point require n + 1 coordinates. Thus the two-dimensional point (x, y) becomes (x, y, 1) in homogeneous coordinates, and the three-dimensional point (x, y, z) becomes (x, y, z, 1). The same concept applies to higher dimensions; for example, homogeneous coordinates in a seven-dimensional Euclidean space have eight coordinates.

In combined transformation, a translation matrix can be combined with a translation matrix, a scaling matrix with a scaling matrix and, similarly, a rotation matrix with a rotation matrix.
The scaling matrices and rotation matrices can also be combined with each other, as both are 3 × 3 matrices. If we want to combine a translation matrix with a scaling matrix and/or a rotation matrix, we first have to express the translation matrix in the homogeneous coordinate system. Further, in three dimensions, for uniform treatment of transformations we need to add a component to the vectors and increase the dimension of the transformation matrices. This four-component representation is known as the homogeneous coordinate representation.

5.3 THREE DIMENSIONAL TRANSFORMATIONS

The transformation procedure in three dimensions is similar to transformations in two dimensions.

3D Affine Transformation: A coordinate transformation of the form
    x' = axx x + axy y + axz z + bx
    y' = ayx x + ayy y + ayz z + by
    z' = azx x + azy y + azz z + bz
is called a 3D affine transformation. It can also be represented by a transformation matrix as given below:

    | x' |   | axx  axy  axz  bx | | x |
    | y' | = | ayx  ayy  ayz  by | | y |
    | z' |   | azx  azy  azz  bz | | z |
    | 1  |   |  0    0    0   1  | | 1 |

o Translation, scaling, shearing and rotation (or any combination of them) are examples of affine transformations.
o Lines and planes are preserved.
o Parallelism of lines and planes is also preserved, but not angles and lengths.

Object transformation: objects can be transformed using 2D and 3D transformation techniques.
Line: lines can be transformed by transforming their end points.
Plane (described by 3 points): it can be transformed by transforming the 3 points.
Plane (described by a point and a normal): the point is transformed as usual; special treatment is needed for transforming the normal.

Check your progress:
1. Explain 3D affine transformation.
2. Explain object transformation.

5.4 SCALING

The scaling transformation changes the dimensions of a 3D object, determined by scale factors in all three directions:
    x' = x . sx
    y' = y . sy
    z' = z . sz
The transformation matrix and scaling through it:

    | x' |   | sx  0   0   0 | | x |
    | y' | = | 0   sy  0   0 | | y |
    | z' |   | 0   0   sz  0 | | z |
    | 1  |   | 0   0   0   1 | | 1 |

Check your Progress:
1. Explain the scaling process in three dimensions.
2.
Derive the scaling matrix with sx = sy = 2 and sz = 1.
Answer:
2.  | 2  0  0  0 |
    | 0  2  0  0 |
    | 0  0  1  0 |
    | 0  0  0  1 |

5.5 TRANSLATION

A three dimensional object displaced from its original position can be represented as
    x' = x + tx,    y' = y + ty,    z' = z + tz
The 3D translation in transformation matrix form can be represented as

    | x' |   | 1  0  0  tx | | x |
    | y' | = | 0  1  0  ty | | y |
    | z' |   | 0  0  1  tz | | z |
    | 1  |   | 0  0  0  1  | | 1 |

or P' = T . P

Check your Progress:
1. Write down the matrix for the three dimensional translation transformation.
2. Obtain the 3D translation transformation matrix for tx = 4, ty = 3, tz = 2.
Answer:
2.  | 1  0  0  4 |
    | 0  1  0  3 |
    | 0  0  1  2 |
    | 0  0  0  1 |

5.6 ROTATION

Rotations in three dimensions require specification of the axis of rotation in addition to the rotation angle. The rotation operation can be performed with respect to any axis. The following transformation matrix corresponds to rotation about the z axis:

    | cosθ  –sinθ  0  0 |
    | sinθ   cosθ  0  0 |
    | 0      0     1  0 |
    | 0      0     0  1 |

Similarly, the transformation matrices for rotation about the x and y axes are

    | 1  0      0     0 |        |  cosθ  0  sinθ  0 |
    | 0  cosθ  –sinθ  0 |        |  0     1  0     0 |
    | 0  sinθ   cosθ  0 |        | –sinθ  0  cosθ  0 |
    | 0  0      0     1 |        |  0     0  0     1 |

Figure 5.1 : The three axes

The figure above illustrates rotation about the three axes.

Properties of the rotation matrix:
- The determinant of the rotation matrix is 1.
- Columns and rows are mutually orthogonal unit vectors, i.e., the matrix is orthonormal (the inverse of a rotation matrix equals its transpose).
- The product of any pair of rotation (orthonormal) matrices is also orthonormal.

Check your progress:
1. Explain the three dimensional rotation transformation.
2. Derive the three dimensional rotation matrix about the y axis for a rotation angle of 90 degrees.
Answer:
2.  |  0  0  1  0 |
    |  0  1  0  0 |
    | –1  0  0  0 |
    |  0  0  0  1 |

5.7 SHEAR TRANSFORMATIONS

The three dimensional shear transformation is similar to the two dimensional one. It produces a slanting effect on the image in a given direction of x, y or z. The shear transformation in the x direction maintains the y and z coordinates but changes the x coordinate, causing a tilt-left or tilt-right effect depending on the x-shear value. In a similar fashion, y-shear and z-shear transformations produce a slanting effect in the y and z directions respectively. The matrix for the three dimensional shear transformation is given by

    | 1  a  b  0 |
    | c  1  d  0 |
    | e  f  1  0 |
    | 0  0  0  1 |

Check your Progress:
1. Explain the shear transformation in three dimensions.
2.
Obtain the 3D shearing transformation matrix for a = c = 2, b = d = 3, e = f = 1.
Answer:
2.  | 1  2  3  0 |
    | 2  1  3  0 |
    | 1  1  1  0 |
    | 0  0  0  1 |

5.8 REFLECTION

The reflection transformation of a three dimensional object is performed with respect to a reflection axis or reflection plane, about which the object is essentially rotated by 180 degrees. The following matrix corresponds to reflection with respect to the xy plane:

    | 1  0   0  0 |
    | 0  1   0  0 |
    | 0  0  –1  0 |
    | 0  0   0  1 |

Check your Progress:
1. Explain the reflection transformation.

5.9 LET US SUM UP

We learnt about homogeneous coordinates and their significance. We studied three dimensional transformations. Then we learnt the three dimensional scaling, rotation and translation transformations, followed by the shear and reflection transformations.

5.10 REFERENCES AND SUGGESTED READING

(1) Computer Graphics, Donald Hearn, M. P. Baker, PHI.
(2) Procedural Elements of Computer Graphics, David F. Rogers, Tata McGraw Hill.
(3) Computer Graphics, Rajesh K. Maurya, Wiley-India.

5.11 EXERCISE

1. Find the translation matrix for tx = 2, ty = 2, tz = 4.
2. Obtain the scaling matrix for sx = 2, sy = 3 and sz = 1.
3. Obtain the rotation matrix for θ = 45° about the z axis.
4. Find the rotation matrix for θ = 30° about the x axis.
5. Explain the significance of homogeneous coordinates in three dimensional transformations.

Answers:
1.  | 1  0  0  2 |
    | 0  1  0  2 |
    | 0  0  1  4 |
    | 0  0  0  1 |
2.  | 2  0  0  0 |
    | 0  3  0  0 |
    | 0  0  1  0 |
    | 0  0  0  1 |
3.  | 0.707  –0.707  0  0 |
    | 0.707   0.707  0  0 |
    | 0       0      1  0 |
    | 0       0      0  1 |
4.  | 1  0      0      0 |
    | 0  0.866  –0.5   0 |
    | 0  0.5     0.866 0 |
    | 0  0      0      1 |

6
THREE DIMENSIONAL TRANSFORMATIONS II

Unit Structure
6.0 Objectives
6.1 Introduction
6.2 World Coordinates and Viewing Coordinates
6.3 Projection
6.4 Parallel Projection
6.5 Perspective Projection
6.6 Let us sum up
6.7 References and Suggested Reading
6.8 Exercise

6.0 OBJECTIVES

The objective of this chapter is to understand
World coordinates and viewing coordinates
Projection transformations – parallel projection and perspective projection

6.1 INTRODUCTION

Projections help us to represent a three dimensional object on a two dimensional plane. Projection is basically the mapping of a point onto its image in the view plane or projection plane. There are different types of projection techniques.
In this chapter we are going to discuss the basic idea of projection.

6.2 WORLD COORDINATES AND VIEWING COORDINATES

Objects are in general specified in a coordinate system known as the world coordinate system (WCS). Sometimes it is required to select a portion of the scene in which the objects are placed. This portion is captured by a rectangular area whose edges are parallel to the axes of the WCS and which is known as the window. In simple words, a window refers to the area of a picture that is to be viewed. The area of the display device to which the window is mapped is known as the viewport. The mapping of a part of the scene specified in the WCS to device coordinates is called the viewing transformation. The process of converting the WCS coordinates of an object to normalized device coordinates is referred to as window-to-viewport mapping.

Figure 6.1 : Window to viewport mapping

Normalized device coordinates:
o Normalized device coordinates are the coordinates of the device expressed in normalized form.
o The normalized device coordinates are thus the coordinates used to express the display space.
o The coordinates are expressed in terms of their relative position on the display.
o Conventionally (0, 0) is at the bottom left hand corner of the display and (1, 1) is the top right corner.
o They are useful as they are device-independent.

Figure 6.2 : World coordinates and normalized device coordinates

Check your progress:
1. A rectangular area whose edges are parallel to the axes of the WCS is known as .............
2. Define normalized device coordinates.
Answers: 1. window

6.3 PROJECTION

Projection is the process of representing a 3D object on a 2D screen. It is basically a mapping of any point P (x, y, z) to its image P' (x', y') on a plane called the projection plane. Projection transformations can be broadly classified into two categories: parallel and perspective projections.
6.4 PARALLEL PROJECTION

In parallel projections the lines of projection are parallel both in reality and in the projection plane. The orthographic projection is one of the most widely used parallel projections.

Orthographic projection: Orthographic projection utilizes perpendicular projectors from the object to a plane of projection to generate a system of drawing views. These projections are used to describe the design and features of an object. It is a form of parallel projection in which all the projection lines are orthogonal to the projection plane.

Figure 6.3 : Projection plane and projection lines in orthogonal projection

It is often used to generate the front, top and side views of an object.

Figure 6.4 : Views in orthographic projection

It is widely used in engineering and architectural drawings. An orthographic projection that displays more than one face of an object is known as an axonometric orthographic projection. Axonometric projections use projection planes that are not normal to a principal axis. On the basis of the projection plane normal N = (dx, dy, dz), the subclasses are
o Isometric: | dx | = | dy | = | dz |, i.e. N makes equal angles with all principal axes.
It is defined by left plane, right plane, top plane, bottom plane, front (near) plane and back (far) plane. 59 The following figure illustrates perspective projection Figure 6.6: Perspective projection Center of projection: When a 3D scene is projected towards a single point, the point is called as center of projection. Vanishing points parallel to one of the principal axis is known as principal vanishing point. Projection from 3D to 2D is defined by straight projection rays (projectors) emanating from the center of projection, passing through each point of the object, and intersecting the projection plane to form a projection. Figure 6.7: Perspective projection illustrating center of projection , projectors and projection plane Perspective foreshortening: It is the term used for the illusion in which the object or length appears smaller as the distance from the center of projection increases. Vanishing points: One more feature of perspective drawing is that sometimes a certain set of parallel lines appear to meet at a point. These points are known as vanishing points. 60 z-axis vanishing point y x z Figure 6.8: Vanishing point Principle vanishing point: If a set of lines are parallel to one of the three axes, the vanishing point is called an axis vanishing point (Principal Vanishing Point). There are at most 3 such points, corresponding to the number of axes cut by the projection plane o One-point: One principle axis cut by projection plane One axis vanishing point o Two-point: Two principle axes cut by projection plane Two axis vanishing points o Three-point: Three principle axes cut by projection plane Three axis vanishing points The following figure shows the three types of principle vanishing points (a) One point (b) Two point (c) Three point Figure 6.9 : Types of vanishing points 61 View confusion: Objects behind the center of projection are projected upside down and backward onto the view plane. 
Topological distortion: A line segment joining a point which lies in front of the viewer to a point in back of the viewer is projected to a broken line of infinite extent. Check your progress: 1. Define centre of projection. 2. What is the term used for the illusion in which the object or length appears smaller as the distance from the center of projection increases? Answer: 2. Perspective foreshortening 6.6 LET US SUM UP In this chapter we learnt about world coordinate system and view coordinates. We then learnt the fundamental definition of projection. Orthographic projection with its application was discussed in short. We then learnt perspective projection and terms associated with it. 6.7 REFERENCES AND SUGGESTED READING (1) Computer Graphics, Donald Hearn, M P. Baker, PHI. (12) Procedural elements of Computer Graphics, David F. Rogers, Tata McGraw Hill. (13) Computer Graphics, Rajesh K. Maurya, Wiley – India. 6.8 EXERCISE 1. Explain world coordinate system. 2. Define viewing coordinates. 3. Explain orthographic projection with its applications. 4. What is topological distortion? 5. Describe perspective projection and explain perspective foreshortening and vanishing points. 62 7 VIEWING AND SOLID AREA SCAN- CONVERSION Unit Structure: 7.0 Objectives 7.1 Introduction to viewing and clipping 7.2 Viewing Transformation in Two Dimensions 7.3 Introduction to Clipping: 7.3.1 Point Clipping 7.3.2 Line Clipping 7.4 Introduction to a Polygon Clipping 7.5 Viewing and Clipping in Three Dimensions 7.6 Three-Dimensional Viewing Transformations 7.7 Text Clipping 7.8 Let us sum up 7.9 References and Suggested Reading 7.10 Exercise 7.0 OBJECTIVES The objective of this chapter is To understand the basics of concept of viewing transformations. To understand point clipping, line clipping and polygon clipping To understand the concept of text clipping. 7.1 INTRODUCTION TO VIEWING AND CLIPPING Windowing and clipping A “picture” is a “scene” consists of different objects. 
The individual objects are represented by coordinates called "model" (or "local", or "master") coordinates.
The objects are fitted together to create a picture using coordinates called world coordinates (WCS).
The created "picture" can be displayed on the output device using "physical device coordinates" (PDCS).
The mapping of the picture elements from WCS to PDCS is called a viewing transformation.

Definition: A finite region selected in world coordinates is called a 'window', and a finite region on the output device onto which the window is mapped is called a 'viewport'.

Viewing Pipeline

Fig. 7.1 Viewing Pipeline: objects in MC are fitted into a picture in WCS, mapped through VCS and normalized VCS (nVCS), and finally displayed as pictures in DC/PDCS.

Check your Progress:
1. What is PDCS?
2. Define window.

7.2 VIEWING TRANSFORMATION IN TWO DIMENSIONS

Viewing transformation: a complete mapping from the window to the viewport.
Let the window W be defined by the lines x = xwmin, x = xwmax, y = ywmin, y = ywmax. Then the aspect ratio of W is defined by
aw = (xwmax - xwmin) / (ywmax - ywmin)
Similarly, for a viewport V we have
av = (xvmax - xvmin) / (yvmax - yvmin)

Fig. 7.2 Window and Viewport

7.3 INTRODUCTION TO CLIPPING

The process which divides the given picture into two parts, visible and invisible, and allows us to discard the invisible part is known as clipping. For clipping we need a reference window called the clipping window.

Fig. 7.3 Window, with corners (Xmin, Ymin) and (Xmax, Ymax)

7.3.1 POINT CLIPPING

Discard the points which lie outside the boundary of the clipping window. A point (X, Y) is retained when
Xmin ≤ X ≤ Xmax and Ymin ≤ Y ≤ Ymax

Fig. 7.4 Point clipping: clipping window, before clipping and after clipping

7.3.2 LINE CLIPPING

Discard the parts of lines which lie outside the boundary of the window. We require:
1. The point of intersection of the line and the window.
2. The portion in which it is to be clipped.
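The point-clipping test and the window-to-viewport mapping above can be sketched as follows. This is a minimal illustration with hypothetical function names (not from the text), assuming an axis-aligned window:

```cpp
#include <cassert>

// Point clipping: a point is kept only if it satisfies
// Xmin <= X <= Xmax and Ymin <= Y <= Ymax.
bool pointVisible(double x, double y,
                  double xmin, double ymin, double xmax, double ymax) {
    return xmin <= x && x <= xmax && ymin <= y && y <= ymax;
}

// Window-to-viewport mapping for one coordinate: a value at some
// relative position in the window range [wmin, wmax] is placed at
// the same relative position in the viewport range [vmin, vmax].
double windowToViewport(double w, double wmin, double wmax,
                        double vmin, double vmax) {
    return vmin + (w - wmin) * (vmax - vmin) / (wmax - wmin);
}
```

Applying windowToViewport separately to x and y performs the complete 2D viewing transformation; when the aspect ratios aw and av differ, the picture is stretched unequally in the two directions.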
The lines are divided into three categories:
a) Invisible
b) Visible
c) Partially visible (clipping candidates)

To clip, we give each endpoint a 4-bit code representation:
Bit 1: Ymax, Bit 2: Ymin, Bit 3: Xmax, Bit 4: Xmin

Fig. 7.5

Each bit takes the value 0 or 1, and we divide the area containing the window into nine regions. The coding is:
Bit value = 1 if the point lies outside the corresponding boundary, or
= 0 if the point lies inside it
(a point is inside the window when Xmin ≤ X ≤ Xmax and Ymin ≤ Y ≤ Ymax).

Bit 1 tells you the position of the point relative to y = Ymax
Bit 2 tells you the position of the point relative to y = Ymin
Bit 3 tells you the position of the point relative to x = Xmax
Bit 4 tells you the position of the point relative to x = Xmin

Fig. 7.6 Bit Code Representation

Rules for the visibility of the line:
1. If both endpoints have bit code 0000, the line is visible.
2. If at least one of the endpoint codes is non-zero, and
a) the logical AND of the codes is 0000, then the line is partially visible;
b) the logical AND of the codes is non-zero, then the line is not visible.

Cohen-Sutherland Line Clipping Algorithm
For each line:
1. Assign codes to the endpoints.
2. Accept if both codes are 0000 and display the line.
3. Perform a bitwise AND of the codes.
4. Reject if the result is not 0000; return.
5. Choose an endpoint outside the clipping rectangle.
6. Test its code to determine which clip edge was crossed and find the intersection of the line and that clip edge (test the edges in a consistent order).
7. Replace the endpoint (selected above) with the intersection point.
8. Repeat.

Fig. 7.7 Line clipping: clipping window, before clipping and after clipping

Check your Progress: Fill in the blanks
1. For clipping we need a reference window called the __________ window.
2. While clipping, lines are divided into three categories: invisible, visible and ________ visible.

7.4 INTRODUCTION TO POLYGON CLIPPING

Sutherland-Hodgman Polygon Clipping algorithm
1.
The polygon is stored by its vertices and edges, say v1, v2, v3, ..., vn and e1, e2, e3, ..., en.
2. The polygon is clipped by a window; we need 4 clippers: the left clipper, right clipper, bottom clipper and top clipper.
3. After clipping we get a different set of vertices, say v1', v2', v3', ..., vn'.
4. Redraw the polygon by joining the vertices v1', v2', v3', ..., vn' appropriately.

Algorithm:
1. Read the coordinates v1, v2, v3, ..., vn of the polygon.
2. Read the clipping window (Xmin, Ymin), (Xmax, Ymax).
3. For every edge do {
4. Compare the vertices of each edge of the polygon with the plane taken as the clipping plane.
5. Save the resulting intersections and vertices in the new list, according to the possible relationships between the edge and the clipping boundary.
}
6. Draw the resulting polygon.

The output of the algorithm is a list of polygon vertices, all of which are on the visible side of the clipping plane. Here, the intersection of the polygon with the clipping plane is a line, so every edge is individually compared with the clipping plane. This is achieved by considering the two vertices of each edge relative to the clipping boundary or plane, which results in 4 possible relationships between an edge and the clipping plane.

1st possibility: If the 1st vertex of an edge lies outside the window boundary and the 2nd vertex lies inside, then the point of intersection of the edge with the window boundary and the second vertex are added to the output vertex list: (v1, v2) → (v1', v2).

Fig. 7.8

2nd possibility: If both vertices of an edge are inside the window boundary, only the second vertex is added to the vertex list.

Fig. 7.9

3rd possibility: If the 1st vertex is inside the window and the 2nd vertex is outside, only the intersection point is added to the output vertex list.

Fig. 7.10

4th possibility: If both vertices are outside the window, nothing is added to the vertex list.
Once all vertices are processed for one clipping boundary, the output list of vertices is clipped against the next window boundary, going through the above 4 possibilities. We have to consider the following points:
1) The visibility of a point, for which we apply the inside-outside test.
2) Finding the intersection of an edge with the clipping plane.

7.5 VIEWING AND CLIPPING IN THREE DIMENSIONS

By extending rays from the viewer's position through the corners of the viewing window, we define a volume that represents all objects seen by the eye. This viewing volume is shown in the left diagram of the figure. Anything outside the volume will not be visible in the window. When we apply the perspective projection, objects further away from the viewer become smaller, and objects in front of the window appear larger. Logically, this is identical to "warping" the viewing pyramid into a viewing rectangular solid in which the sides of the viewing box are parallel. For example, the cube shown in the left viewing volume becomes warped to the non-parallel object shown on the right. Now, the process of clipping becomes much simpler.

Clipping in 3D is similar to clipping in 2D. Everything outside of the canonical window that is not visible to the user is removed prior to display. Objects that are inside are retained, and objects that cross the window boundary need to be modified, or "clipped", to the portion that is visible. This is where the effect of the perspective transformation shown in the figure simplifies the process.

Fig. 7.11 Fig. 7.12

If we were clipping to the sides of the pyramid as shown on the left, the calculations would be substantially more complex than the 2D clipping operations previously described. However, after the perspective transformation, clipping to the edges of the window is identical to clipping to the edges of the 2D window. The same algorithms can be used, looking at the x and y coordinates of the points to clip.
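Since the same 2D algorithms apply after the perspective transformation, the outcode-based Cohen-Sutherland procedure from Section 7.3.2 is worth sketching in full. This is an illustrative implementation, not from the text; the particular bit assignment (TOP, BOTTOM, RIGHT, LEFT) is just one common convention for the 4-bit region code:

```cpp
#include <cassert>

// 4-bit region codes: a bit is set when the point is outside that boundary.
const int TOP = 8, BOTTOM = 4, RIGHT = 2, LEFT = 1;

int outcode(double x, double y,
            double xmin, double ymin, double xmax, double ymax) {
    int code = 0;
    if (y > ymax) code |= TOP;
    else if (y < ymin) code |= BOTTOM;
    if (x > xmax) code |= RIGHT;
    else if (x < xmin) code |= LEFT;
    return code;
}

// Cohen-Sutherland: returns true if some portion of the segment
// (x1,y1)-(x2,y2) is visible; the endpoints are clipped in place.
bool clipLine(double& x1, double& y1, double& x2, double& y2,
              double xmin, double ymin, double xmax, double ymax) {
    int c1 = outcode(x1, y1, xmin, ymin, xmax, ymax);
    int c2 = outcode(x2, y2, xmin, ymin, xmax, ymax);
    while (true) {
        if ((c1 | c2) == 0) return true;    // both codes 0000: visible
        if ((c1 & c2) != 0) return false;   // AND is non-zero: invisible
        int c = c1 ? c1 : c2;               // pick an endpoint outside
        double x, y;
        if (c & TOP)         { x = x1 + (x2 - x1) * (ymax - y1) / (y2 - y1); y = ymax; }
        else if (c & BOTTOM) { x = x1 + (x2 - x1) * (ymin - y1) / (y2 - y1); y = ymin; }
        else if (c & RIGHT)  { y = y1 + (y2 - y1) * (xmax - x1) / (x2 - x1); x = xmax; }
        else                 { y = y1 + (y2 - y1) * (xmin - x1) / (x2 - x1); x = xmin; }
        // Replace the chosen endpoint with the intersection and recode it.
        if (c == c1) { x1 = x; y1 = y; c1 = outcode(x1, y1, xmin, ymin, xmax, ymax); }
        else         { x2 = x; y2 = y; c2 = outcode(x2, y2, xmin, ymin, xmax, ymax); }
    }
}
```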
To complicate matters, however, we have the added capability in 3D of defining clipping planes that are parallel to the viewing window but at different depths from the viewer. These are often referred to as the "near" and "far" clipping planes, as shown in the figure. The concept is that objects that are too close to the viewer, or too far away, are not visible and should not be considered. In addition, without clipping against the near clipping plane, you would see objects that were behind the camera! If it were a simple matter of culling objects based on their depths and clipping those that fell between the two planes, it would be no problem. However, complexity arises when objects cross the boundaries of the near and far planes, similar to when objects cross the edges of the window. The objects need to be "clipped" to the far and near planes as well as to the edges of the window.

Fig. 7.13 Fig. 7.14

7.6 THREE-DIMENSIONAL VIEWING TRANSFORMATIONS

The basic idea of the 3D viewing transformation is similar to the 2D viewing transformation. That is, a viewing window is defined in world space that specifies how the viewer is viewing the scene. A corresponding viewport is defined in screen space, and a mapping is defined to transform points from world space to screen space based on these specifications. The viewport portion of the transformation is the same as in the 2D case. Specification of the window, however, requires additional information and results in a more complex mapping. Defining a viewing window in world space coordinates is exactly like it sounds: sufficient information needs to be provided to define a rectangular window at some location and orientation. The usual viewing parameters that are specified are:

Eye Point: the position of the viewer in world space.
Look Point: the point that the eye is looking at.
View Distance: the distance of the window from the eye.
Window Size: the height and width of the window in world space coordinates.
Up Vector: the direction that represents "up" to the viewer; this parameter is sometimes specified as an angle of rotation about the viewing axis.

These parameters are illustrated in Fig. 7.15.

The Eye Point to the Look Point forms a viewing vector that is perpendicular to the viewing window. If you want to define a window that is not perpendicular to the viewing axis, additional parameters need to be specified. The View Distance specifies how far the window is from the viewer; note from the reading on projections that this distance will affect the perspective calculation. The window size is straightforward. The Up Vector determines the rotation of the window about the viewing vector.

From the viewer's point of view, the window is the screen. To draw points at their proper position on the screen, we need to define a transformation that converts points defined in world space to points defined in screen space. This transformation is the same as the transformation that positions the window so that it lies on the XY plane centered about the origin of world space. The process of transforming the window, using the specified parameters, to the origin, aligned with the XY plane, can be broken into the following steps:
1. Compute the center of the window and translate it to the origin.
2. Perform two rotations about the X and Y axes to put the window in the XY plane.
3. Use the Up Vector to rotate the window about the Z axis and align it with the positive Y axis.
4. Use the Window Height and Width to scale the window to the canonical size.

These four steps can be combined into a single transformation matrix that can be applied to all points in world space. After the transformation, points are ready for final projection, clipping, and drawing to the screen. The perspective transformation
The perspective transformation 72 occurs after points have been transformed through the viewing transformation. The perspective and view port transformations will not be repeated here. 7.7 TEXT CLIPPING Depends on methods used to generate characters & the requirements of a particular application Methods or processing character strings relative to a window boundary, All-or-none string clipping strategy All or none character clipping strategy Clip the components of individual characters All-or-none string clipping strategy Simplest method, fastest text clipping All string - inside clip window, keep it, and otherwise discard. Bounding rectangle considered around the text pattern If bounding position of rectangle overlap with window boundaries, string is rejected. Before Clipping After Clipping Text clipping using a bounding rectangle about the entire string Fig. 7.16 All or none character clipping strategy : Discard or reject an entire character string that overlaps a window boundary i.e, discard those characters that are not completely inside the window. Compare boundary limits of individual characters with the window. Any character which is outside or overlapping the window boundary are clipped. 73 Before Clipping After Clipping Text clipping using a bounding rectangle about individual characters Fig. 7.17 Clip the components of individual characters : Treat characters same as lines If individual char overlaps a clip window boundary, clip off the parts of the character that are outside the window Before Clipping After Clipping Text clipping is performed on the components of individual characters. Fig. 7.18 74 Check your Progress: True or False. 1. The perspective and viewport transformations will not be repeated in3D Viewing Transformation. 2. In Sutherland Hodgman Polygon Clipping algorithm polygon is clipped by a window we need 4 clippers. 3. To check the visibility of the point, we apply inside-outside test. 
7.8 LET US SUM UP

Point clipping, line clipping, polygon clipping and text clipping are the types of clipping. Normally, windows and viewports are rectangular in shape. The viewing transformation is also called the windowing transformation. Discarding and removing the invisible region of an object from the given window is known as clipping.

7.9 REFERENCES AND SUGGESTED READING

Computer Graphics, Donald Hearn, M. P. Baker, PHI.
Procedural Elements of Computer Graphics, David F. Rogers, Tata McGraw-Hill.
Computer Graphics, Amarendra Sinha, A. Udai, Tata McGraw-Hill.
Computer Graphics, A. P. Godase, Technical Publications Pune.

7.10 EXERCISE

1. What is point clipping?
2. Explain the Cohen-Sutherland line clipping algorithm.
3. Explain the polygon clipping algorithm.
4. Write a short note on text clipping.
5. Define: window, viewport.

8
INTRODUCTION TO SOLID AREA SCAN-CONVERSION

Unit Structure:
8.0 Objectives
8.1 Introduction
8.2 Inside–Outside Test
8.3 Winding Number Method and Coherence Property
8.4 Polygon Filling and Seed Fill Algorithm
8.5 Scan-Line Algorithm
8.6 Priority Algorithm
8.7 Scan Conversion of Character
8.8 Aliasing, Anti-Aliasing, Halftoning
8.9 Thresholding and Dithering
8.10 Let us sum up
8.11 References and Suggested Reading
8.12 Exercise

8.0 OBJECTIVES

The objectives of this chapter are:
To understand polygon filling techniques and algorithms.
To understand scan conversion of characters.
To understand the concepts of anti-aliasing, halftoning, thresholding and dithering.

8.1 INTRODUCTION

To perform the scan conversion or to fill the polygon, we need the pixels inside the polygon as well as those on the boundary. The pixels which are inside the polygon can be determined using the following two tests: the inside-outside test and the winding number test.

8.2 INSIDE–OUTSIDE TEST

Inside-outside test (even-odd test):
We assume that the vertex list for the polygon is already stored, and proceed as follows.
1.
Choose any point P outside the range Xmin to Xmax and Ymin to Ymax, and draw a scan line through P up to the point A under study.

Fig. 8.1 A scan line from an outside point P to the point A under study, inside the region (Xmin, Ymin) to (Xmax, Ymax)

2. If this scan line
i) does not pass through any of the vertices, then its contribution is the number of times it intersects the edges of the polygon, say C. If
a) C is odd, then A lies inside the polygon;
b) C is even, then A lies outside the polygon.
ii) If it passes through any of the vertices, then the contribution of that intersection, say V, is
a) taken as 2 (even) if the other endpoints of the two edges meeting at the vertex lie on one side of the scan line;
b) taken as 1 if the other endpoints of the two edges lie on opposite sides of the scan line.
The total contribution is then C + V.

Remark: The points on the boundary are taken care of by calling the procedure for polygon generation.

8.3 WINDING NUMBER METHOD AND COHERENCE PROPERTY

Winding number algorithm: This is used for non-overlapping regions and polygons only.
Steps:
1. Take a point A within the range (0, 0) to (Xmax, Ymax). Join it to any point Q outside this range.
2. Give directions to all the edges in the anticlockwise direction.
3. Check whether AQ passes through any of the vertices. If so, ignore that position of Q and choose a new Q so that AQ does not pass through any of the vertices but passes only through edges.
4. Initialize the winding number w = 0. Observe the edges intersecting AQ and
1) add 1 to w if a crossed edge moves from right to left;
2) subtract 1 from w if a crossed edge moves from left to right.
5. If the final count of w is
1) zero, A lies outside the polygon;
2) non-zero, A lies inside the polygon.
3) Illuminate the interior positions till all the pixels in the above range are painted.

Example (Fig. 8.2): AQ passes through edges; starting with w = 0, one crossed edge moves from left to right, so we subtract 1 and w = 0 - 1 = -1. Since w is non-zero, A lies inside the polygon.
8.4 POLYGON FILLING AND SEED FILL ALGORITHM

Polygon filling algorithms
There are two types of polygon filling algorithms:
1. Scan conversion polygon filling algorithms
2. Seed filling algorithms
Besides these algorithms we can use
a) the boundary fill algorithm and
b) the flood fill algorithm.

Seed Fill
To fill a polygon we start with a seed and paint the neighboring points till the boundary of the polygon is reached. Pixels are illuminated one by one and the process continues until the boundary points are reached. Here, at every step we need to check the boundary; hence this algorithm is called the "boundary fill algorithm". To find the neighboring pixels we can use either a 4-connected or an 8-connected region filling approach.

The algorithm is a recursive one and can be given as follows. Here we denote the fill color by F and the boundary color by B. The 4 neighbors of a seed pixel (x, y) are:
N4 = { (x+1, y), (x-1, y), (x, y+1), (x, y-1) }

Fig. 8.3 Neighboring pixels of the seed pixel (x, y)

Seed_Fill (x, y, F, B)   // 4-connected approach
{
    if (getpixel(x, y) != B && getpixel(x, y) != F)
    {
        putpixel(x, y, F);
        Seed_Fill (x-1, y, F, B);
        Seed_Fill (x+1, y, F, B);
        Seed_Fill (x, y-1, F, B);
        Seed_Fill (x, y+1, F, B);
    }
}

getpixel(): a procedure which gives the color of the specified pixel.
putpixel(): a procedure which draws the pixel with the specified color.
B: the boundary color.
Drawbacks: in Seed Fill algorithm we have 2