Computer Graphics (C Version) by Donald Hearn PDF

Summary

This is a textbook on computer graphics, covering 2D and 3D concepts and techniques. It explores various concepts from raster-scan and random-scan systems to output primitives, transformations, viewing, and modeling. The book includes detailed explanations and examples using C programming.

Full Transcript


Contents

Preface

1  A Survey of Computer Graphics
   Computer-Aided Design; Presentation Graphics; Computer Art; Entertainment; Education and Training; Visualization; Image Processing; Graphical User Interfaces

2  Overview of Graphics Systems
   Video Display Devices; Raster-Scan Systems; Random-Scan Systems; Graphics Monitors and Workstations; Input Devices; Hard-Copy Devices; Graphics Software

3  Output Primitives
   Points and Lines; Line-Drawing Algorithms; Loading the Frame Buffer; Line Function; Circle-Generating Algorithms; Ellipse-Generating Algorithms; Other Curves; Parallel Curve Algorithms; Curve Functions; Pixel Addressing and Object Geometry; Filled-Area Primitives; Fill-Area Functions; Cell Array; Character Generation

4  Attributes of Output Primitives
   Line Attributes; Curve Attributes; Color and Grayscale Levels; Area-Fill Attributes; Character Attributes; Bundled Attributes; Inquiry Functions; Antialiasing

5  Two-Dimensional Geometric Transformations
   Basic Transformations; Matrix Representations and Homogeneous Coordinates; Composite Transformations; Other Transformations; Transformations Between Coordinate Systems; Affine Transformations; Transformation Functions; Raster Methods for Transformations

6  Two-Dimensional Viewing
   The Viewing Pipeline; Viewing Coordinate Reference Frame; Window-to-Viewport Coordinate Transformation; Two-Dimensional Viewing Functions; Clipping Operations; Point Clipping; Line Clipping; Polygon Clipping; Curve Clipping; Text Clipping; Exterior Clipping

7  Structures and Hierarchical Modeling
   Structure Concepts; Editing Structures; Basic Modeling Concepts; Hierarchical Modeling with Structures

8  Graphical User Interfaces and Interactive Input Methods
   The User Dialogue; Input of Graphical Data; Input Functions; Initial Values for Input-Device Parameters; Interactive Picture-Construction Techniques; Virtual-Reality Environments

9  Three-Dimensional Concepts
   Three-Dimensional Display Methods; Three-Dimensional Graphics Packages

10  Three-Dimensional Object Representations
   Polygon Surfaces; Curved Lines and Surfaces; Quadric Surfaces; Superquadrics; Blobby Objects; Spline Representations; Cubic Spline Interpolation; Bezier Curves and Surfaces; B-Spline Curves and Surfaces; Beta-Splines; Rational Splines; Conversion Between Spline Representations; Displaying Spline Curves and Surfaces; Sweep Representations; Constructive Solid-Geometry Methods; Octrees; BSP Trees; Fractal-Geometry Methods; Shape Grammars and Other Procedural Methods; Particle Systems; Physically Based Modeling; Visualization of Data Sets

11  Three-Dimensional Geometric and Modeling Transformations
   Translation; Rotation; Scaling; Other Transformations; Composite Transformations; Three-Dimensional Transformation Functions; Modeling and Coordinate Transformations

12  Three-Dimensional Viewing
   Viewing Pipeline; Viewing Coordinates; Projections; View Volumes and General Projection Transformations; Clipping; Hardware Implementations; Three-Dimensional Viewing Functions

13  Visible-Surface Detection Methods
   Classification of Visible-Surface Detection Algorithms; Back-Face Detection; Depth-Buffer Method; A-Buffer Method; Scan-Line Method; Depth-Sorting Method; BSP-Tree Method; Area-Subdivision Method; Octree Methods; Ray-Casting Method; Curved Surfaces; Wireframe Methods; Visibility-Detection Functions

14  Illumination Models and Surface-Rendering Methods
   Light Sources; Basic Illumination Models; Displaying Light Intensities; Halftone Patterns and Dithering Techniques; Polygon-Rendering Methods; Ray-Tracing Methods; Radiosity Lighting Model; Environment Mapping; Adding Surface Detail

15  Color Models and Color Applications
   Properties of Light; Standard Primaries and the Chromaticity Diagram; Intuitive Color Concepts; RGB Color Model; YIQ Color Model; CMY Color Model; HSV Color Model; Conversion Between HSV and RGB Models; HLS Color Model; Color Selection and Applications

16  Computer Animation
   Design of Animation Sequences; General Computer-Animation Functions; Raster Animations; Computer-Animation Languages; Key-Frame Systems; Motion Specifications

Appendix A  Mathematics for Computer Graphics
   Coordinate-Reference Frames; Points and Vectors; Basis Vectors and the Metric Tensor; Matrices; Complex Numbers; Quaternions; Nonparametric Representations; Parametric Representations; Numerical Methods

Bibliography

Index
Chapter 1  A Survey of Computer Graphics

Computers have become a powerful tool for the rapid and economical production of pictures. There is virtually no area in which graphical displays cannot be used to some advantage, and so it is not surprising to find the use of computer graphics so widespread. Although early applications in engineering and science had to rely on expensive and cumbersome equipment, advances in computer technology have made interactive computer graphics a practical tool. Today, we find computer graphics used routinely in such diverse areas as science, engineering, medicine, business, industry, government, art, entertainment, advertising, education, and training. Figure 1-1 summarizes the many applications of graphics in simulations, education, and graph presentations. Before we get into the details of how to do computer graphics, we first take a short tour through a gallery of graphics applications.

Figure 1-1 Examples of computer graphics applications. (Courtesy of DICOMED Corporation.)

COMPUTER-AIDED DESIGN

A major use of computer graphics is in design processes, particularly for engineering and architectural systems, but almost all products are now computer designed. Generally referred to as CAD, computer-aided design methods are now routinely used in the design of buildings, automobiles, aircraft, watercraft, spacecraft, computers, textiles, and many, many other products.

For some design applications, objects are first displayed in a wireframe outline form that shows the overall shape and internal features of objects. Wireframe displays also allow designers to quickly see the effects of interactive adjustments to design shapes. Figures 1-2 and 1-3 give examples of wireframe displays in design applications.

Software packages for CAD applications typically provide the designer with a multi-window environment, as in Figs. 1-4 and 1-5. The various displayed windows can show enlarged sections or different views of objects.

Circuits such as the one shown in Fig. 1-5 and networks for communications, water supply, or other utilities are constructed with repeated placement of a few graphical shapes. The shapes used in a design represent the different network or circuit components. Standard shapes for electrical, electronic, and logic circuits are often supplied by the design package. For other applications, a designer can create personalized symbols that are to be used to construct the network or circuit.
The system is then designed by successively placing components into the layout, with the graphics package automatically providing the connections between components. This allows the designer to quickly try out alternate circuit schematics for minimizing the number of components or the space required for the system.

Figure 1-2 Color-coded wireframe display for an automobile wheel assembly. (Courtesy of Evans & Sutherland.)

Figure 1-3 Color-coded wireframe displays of body designs for an aircraft and an automobile. (Courtesy of (a) Evans & Sutherland and (b) Megatek Corporation.)

Animations are often used in CAD applications. Real-time animations using wireframe displays on a video monitor are useful for testing performance of a vehicle or system, as demonstrated in Fig. 1-6. When we do not display objects with rendered surfaces, the calculations for each segment of the animation can be performed quickly to produce a smooth real-time motion on the screen. Also, wireframe displays allow the designer to see into the interior of the vehicle and to watch the behavior of inner components during motion. Animations in virtual-reality environments are used to determine how vehicle operators are affected by certain motions. As the tractor operator in Fig. 1-7 manipulates the controls, the headset presents a stereoscopic view (Fig. 1-8) of the front-loader bucket or the backhoe, just as if the operator were in the tractor seat. This allows the designer to explore various positions of the bucket or backhoe that might obstruct the operator's view, which can then be taken into account in the overall tractor design. Figure 1-9 shows a composite, wide-angle view from the tractor seat, displayed on a standard video monitor instead of in a virtual three-dimensional scene. And Fig. 1-10 shows a view of the tractor that can be displayed in a separate window or on another monitor.

Figure 1-4 Multiple-window, color-coded CAD workstation displays. (Courtesy of Intergraph Corporation.)

Figure 1-5 A circuit-design application, using multiple windows and color-coded logic components, displayed on a Sun workstation with attached speaker and microphone. (Courtesy of Sun Microsystems.)

Figure 1-6 Simulation of vehicle performance during lane changes. (Courtesy of Evans & Sutherland and Mechanical Dynamics, Inc.)

Figure 1-7 Operating a tractor in a virtual-reality environment. As the controls are moved, the operator views the front loader, backhoe, and surroundings through the headset. (Courtesy of the National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, and Caterpillar, Inc.)

Figure 1-8 A headset view of the backhoe presented to the tractor operator. (Courtesy of the National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, and Caterpillar, Inc.)

Figure 1-9 Operator's view of the tractor bucket, composited in several sections to form a wide-angle view on a standard monitor. (Courtesy of the National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, and Caterpillar, Inc.)

Figure 1-10 View of the tractor displayed on a standard monitor. (Courtesy of the National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, and Caterpillar, Inc.)

When object designs are complete, or nearly complete, realistic lighting models and surface rendering are applied to produce displays that will show the appearance of the final product.
Examples of this are given in Fig. 1-11. Realistic displays are also generated for advertising of automobiles and other vehicles using special lighting effects and background scenes (Fig. 1-12).

The manufacturing process is also tied in to the computer description of designed objects to automate the construction of the product. A circuit-board layout, for example, can be transformed into a description of the individual processes needed to construct the layout. Some mechanical parts are manufactured by describing how the surfaces are to be formed with machine tools. Figure 1-13 shows the path to be taken by machine tools over the surfaces of an object during its construction. Numerically controlled machine tools are then set up to manufacture the part according to these construction layouts.

Figure 1-11 Realistic renderings of design products. (Courtesy of (a) Intergraph Corporation and (b) Evans & Sutherland.)

Figure 1-12 Studio lighting effects and realistic surface-rendering techniques are applied to produce advertising pieces for finished products. The data for this rendering of a Chrysler Laser was supplied by Chrysler Corporation. (Courtesy of Eric Haines, 3D/EYE Inc.)

Figure 1-13 A CAD layout for describing the numerically controlled machining of a part. The part surface is displayed in one color and the tool path in another color. (Courtesy of Los Alamos National Laboratory.)

Figure 1-14 Architectural CAD layout for a building design. (Courtesy of Precision Visuals, Inc., Boulder, Colorado.)

Architects use interactive graphics methods to lay out floor plans, such as Fig. 1-14, that show the positioning of rooms, doors, windows, stairs, shelves, counters, and other building features. Working from the display of a building layout on a video monitor, an electrical designer can try out arrangements for wiring, electrical outlets, and fire warning systems. Also, facility-layout packages can be applied to the layout to determine space utilization in an office or on a manufacturing floor.

Realistic displays of architectural designs, as in Fig. 1-15, permit both architects and their clients to study the appearance of a single building or a group of buildings, such as a campus or industrial complex. With virtual-reality systems, designers can even go for a simulated "walk" through the rooms or around the outsides of buildings to better appreciate the overall effect of a particular design. In addition to realistic exterior building displays, architectural CAD packages also provide facilities for experimenting with three-dimensional interior layouts and lighting (Fig. 1-16).

Many other kinds of systems and products are designed using either general CAD packages or specially developed CAD software. Figure 1-17, for example, shows a rug pattern designed with a CAD system.

Figure 1-15 Realistic, three-dimensional renderings of building designs. (a) A street-level perspective for the World Trade Center project. (Courtesy of Skidmore, Owings & Merrill.) (b) Architectural visualization of an atrium, created for a computer animation by Marialine Prieur, Lyon, France. (Courtesy of Thomson Digital Image, Inc.)

Figure 1-16 A hotel corridor providing a sense of movement by placing light fixtures along an undulating path and creating a sense of entry by using light towers at each hotel room. (Courtesy of Skidmore, Owings & Merrill.)

Figure 1-17 Oriental rug pattern created with computer graphics design methods. (Courtesy of Lexidata Corporation.)
PRESENTATION GRAPHICS

Another major application area is presentation graphics, used to produce illustrations for reports or to generate 35-mm slides or transparencies for use with projectors. Presentation graphics is commonly used to summarize financial, statistical, mathematical, scientific, and economic data for research reports, managerial reports, consumer information bulletins, and other types of reports. Workstation devices and service bureaus exist for converting screen displays into 35-mm slides or overhead transparencies for use in presentations. Typical examples of presentation graphics are bar charts, line graphs, surface graphs, pie charts, and other displays showing relationships between multiple parameters.

Figure 1-18 gives examples of two-dimensional graphics combined with geographical information. This illustration shows three color-coded bar charts combined onto one graph and a pie chart with three sections. Similar graphs and charts can be displayed in three dimensions to provide additional information. Three-dimensional graphs are sometimes used simply for effect; they can provide a more dramatic or more attractive presentation of data relationships. The charts in Fig. 1-19 include a three-dimensional bar graph and an exploded pie chart. Additional examples of three-dimensional graphs are shown in Figs. 1-20 and 1-21. Figure 1-20 shows one kind of surface plot, and Fig. 1-21 shows a two-dimensional contour plot with a height surface.

Figure 1-18 Two-dimensional bar chart and pie chart linked to a geographical chart. (Courtesy of Computer Associates, copyright 1992. All rights reserved.)

Figure 1-19 Three-dimensional bar chart, exploded pie chart, and line graph. (Courtesy of Computer Associates, copyright 1992. All rights reserved.)

Figure 1-20 Showing relationships with a surface chart. (Courtesy of Computer Associates, copyright 1992. All rights reserved.)

Figure 1-21 Plotting two-dimensional contours in the ground plane, with a height field plotted as a surface above the ground plane. (Courtesy of Computer Associates, copyright 1992. All rights reserved.)

Figure 1-22 Time chart displaying relevant information about project tasks. (Courtesy of Computer Associates, copyright 1992. All rights reserved.)

Figure 1-22 illustrates a time chart used in task planning. Time charts and task network layouts are used in project management to schedule and monitor the progress of projects.

COMPUTER ART

Computer graphics methods are widely used in both fine art and commercial art applications. Artists use a variety of computer methods, including special-purpose hardware, artist's paintbrush programs (such as Lumena), other paint packages (such as PixelPaint and SuperPaint), specially developed software, symbolic mathematics packages (such as Mathematica), CAD packages, desktop publishing software, and animation packages that provide facilities for designing object shapes and specifying object motions.

Figure 1-23 illustrates the basic idea behind a paintbrush program that allows artists to "paint" pictures on the screen of a video monitor. Actually, the picture is usually painted electronically on a graphics tablet (digitizer) using a stylus, which can simulate different brush strokes, brush widths, and colors. A paintbrush program was used to create the characters in Fig. 1-24, who seem to be busy on a creation of their own.
A paintbrush system, with a Wacom cordless, pressure-sensitive stylus, was used to produce the electronic painting in Fig. 1-25 that simulates the brush strokes of Van Gogh. The stylus translates changing hand pressure into variable line widths, brush sizes, and color gradations. Figure 1-26 shows a watercolor painting produced with this stylus and with software that allows the artist to create watercolor, pastel, or oil brush effects that simulate different drying out times, wetness, and footprint. Figure 1-27 gives an example of paintbrush methods combined with scanned images.

Figure 1-23 Cartoon drawing produced with a paintbrush program, symbolically illustrating an artist at work on a video monitor. (Courtesy of Gould Inc., Imaging & Graphics Division and Aurora Imaging.)

Figure 1-24 Cartoon demonstrations of an "artist" creating a picture with a paintbrush system. The picture, drawn on a graphics tablet, is displayed on the video monitor as the elves look on. In (b), the cartoon is superimposed on the famous Thomas Nast drawing of Saint Nicholas, which was input to the system with a video camera, then scaled and positioned. (Courtesy of Gould Inc., Imaging & Graphics Division and Aurora Imaging.)

Figure 1-25 A Van Gogh look-alike created by graphics artist Elizabeth O'Rourke with a cordless, pressure-sensitive stylus. (Courtesy of Wacom Technology Corporation.)

Figure 1-26 An electronic watercolor, painted by John Derry of Time Arts, Inc. using a cordless, pressure-sensitive stylus and Lumena gouache-brush software. (Courtesy of Wacom Technology Corporation.)

Figure 1-27 The artist of this picture, called Electronic Avalanche, makes a statement about our entanglement with technology, using a personal computer with a graphics tablet and Lumena software to combine renderings of leaves, flower petals, and electronics components with scanned images. (Courtesy of the Williams Gallery. Copyright 1991 by Joan Truckenbrod, The School of the Art Institute of Chicago.)

Fine artists use a variety of other computer technologies to produce images. To create pictures such as the one shown in Fig. 1-28, the artist uses a combination of three-dimensional modeling packages, texture mapping, drawing programs, and CAD software. In Fig. 1-29, we have a painting produced on a pen plotter with specially designed software that can create "automatic art" without intervention from the artist.

Figure 1-28 From a series called Spheres of Influence, this electronic painting (entitled Whigmalaree) was created with a combination of methods using a graphics tablet, three-dimensional modeling, texture mapping, and a series of transformations. (Courtesy of the Williams Gallery. Copyright 1992 by Wynne Ragland, Jr.)

Figure 1-29 Electronic art output to a pen plotter from software specially designed by the artist to emulate his style. The pen plotter includes multiple pens and painting instruments, including Chinese brushes. (Courtesy of the Williams Gallery. Copyright by Roman Verostko, Minneapolis College of Art & Design.)

Figure 1-30 shows an example of "mathematical" art. This artist uses a combination of mathematical functions, fractal procedures, Mathematica software, ink-jet printers, and other systems to create a variety of three-dimensional and two-dimensional shapes and stereoscopic image pairs.
Figure 1-30 This creation is based on a visualization of Fermat's Last Theorem, x^n + y^n = z^n, with n = 5, by Andrew Hanson, Department of Computer Science, Indiana University. The image was rendered using Mathematica and Wavefront software. (Courtesy of the Williams Gallery. Copyright 1991 by Stewart Dickson.)

Figure 1-31 Using mathematical functions, fractal procedures, and supercomputers, this artist-composer experiments with various designs to synthesize form and color with musical composition. (Courtesy of Brian Evans, Vanderbilt University.)

Another example of electronic art created with the aid of mathematical relationships is shown in Fig. 1-31. The artwork of this composer is often designed in relation to frequency variations and other parameters in a musical composition to produce a video that integrates visual and aural patterns.

Although we have spent some time discussing current techniques for generating electronic images in the fine arts, these methods are also applied in commercial art for logos and other designs, page layouts combining text and graphics, TV advertising spots, and other areas. A workstation for producing page layouts that combine text and graphics is illustrated in Fig. 1-32. For many applications of commercial art (and in motion pictures and other applications), photorealistic techniques are used to render images of a product. Figure 1-33 shows an example of logo design, and Fig. 1-34 gives three computer graphics images for product advertising. Animations are also used frequently in advertising, and television commercials are produced frame by frame.

Figure 1-32 Page-layout workstation. (Courtesy of Visual Technology.)

Figure 1-33 Three-dimensional rendering for a logo. (Courtesy of Vertigo Technology, Inc.)

Δy can be set proportional to a small vertical deflection voltage, with the corresponding horizontal deflection voltage set proportional to Δx, calculated from Eq. 3-5. For lines with m = 1, Δx = Δy and the horizontal and vertical deflection voltages are equal. In each case, a smooth line with slope m is generated between the specified endpoints.

On raster systems, lines are plotted with pixels, and step sizes in the horizontal and vertical directions are constrained by pixel separations. That is, we must "sample" a line at discrete positions and determine the nearest pixel to the line at each sampled position. This scan-conversion process for straight lines is illustrated in Fig. 3-4, for a near-horizontal line with discrete sample positions along the x axis.

Figure 3-4 Straight line segment with five sampling positions along the x axis between x1 and x2.

DDA Algorithm

The digital differential analyzer (DDA) is a scan-conversion line algorithm based on calculating either Δy or Δx, using Eq. 3-4 or Eq. 3-5. We sample the line at unit intervals in one coordinate and determine corresponding integer values nearest the line path for the other coordinate.

Consider first a line with positive slope, as shown in Fig. 3-3. If the slope is less than or equal to 1, we sample at unit x intervals (Δx = 1) and compute each successive y value as

    y_(k+1) = y_k + m                                    (3-6)

Subscript k takes integer values starting from 1, for the first point, and increases by 1 until the final endpoint is reached. Since m can be any real number between 0 and 1, the calculated y values must be rounded to the nearest integer.

For lines with a positive slope greater than 1, we reverse the roles of x and y.
That is, we sample at unit y intervals (Δy = 1) and calculate each succeeding x value as

    x_(k+1) = x_k + 1/m                                  (3-7)

Equations 3-6 and 3-7 are based on the assumption that lines are to be processed from the left endpoint to the right endpoint (Fig. 3-3). If this processing is reversed, so that the starting endpoint is at the right, then either we have Δx = -1 and

    y_(k+1) = y_k - m                                    (3-8)

or (when the slope is greater than 1) we have Δy = -1 with

    x_(k+1) = x_k - 1/m                                  (3-9)

Equations 3-6 through 3-9 can also be used to calculate pixel positions along a line with negative slope. If the absolute value of the slope is less than 1 and the start endpoint is at the left, we set Δx = 1 and calculate y values with Eq. 3-6. When the start endpoint is at the right (for the same slope), we set Δx = -1 and obtain y positions from Eq. 3-8. Similarly, when the absolute value of a negative slope is greater than 1, we use Δy = -1 and Eq. 3-9, or we use Δy = 1 and Eq. 3-7.

This algorithm is summarized in the following procedure, which accepts as input the two endpoint pixel positions. Horizontal and vertical differences between the endpoint positions are assigned to parameters dx and dy. The difference with the greater magnitude determines the value of parameter steps. Starting with pixel position (xa, ya), we determine the offset needed at each step to generate the next pixel position along the line path. We loop through this process steps times. If the magnitude of dx is greater than the magnitude of dy and xa is less than xb, the values of the increments in the x and y directions are 1 and m, respectively. If the greater change is in the x direction, but xa is greater than xb, then the decrements -1 and -m are used to generate each new point on the line. Otherwise, we use a unit increment (or decrement) in the y direction and an x increment (or decrement) of 1/m.

#include "device.h"

void lineDDA (int xa, int ya, int xb, int yb)
{
   int dx = xb - xa, dy = yb - ya, steps, k;
   float xIncrement, yIncrement, x = xa, y = ya;

   if (abs (dx) > abs (dy)) steps = abs (dx);
   else steps = abs (dy);
   xIncrement = dx / (float) steps;
   yIncrement = dy / (float) steps;

   setPixel (ROUND(x), ROUND(y));
   for (k=0; k<steps; k++) {
      x += xIncrement;
      y += yIncrement;
      setPixel (ROUND(x), ROUND(y));
   }
}

In the midpoint ellipse algorithm, pixel positions over region 2 of the first quadrant are computed with a loop of the following form, which steps y downward from the region-1 boundary; each calculated position is then plotted in the other three quadrants by symmetry.

   while (y > 0) {
      y--;
      py -= twoRx2;
      if (p > 0)
         p += Rx2 - py;
      else {
         x++;
         px += twoRy2;
         p += Rx2 - py + px;
      }
      ellipsePlotPoints (xCenter, yCenter, x, y);
   }

void ellipsePlotPoints (int xCenter, int yCenter, int x, int y)
{
   setPixel (xCenter + x, yCenter + y);
   setPixel (xCenter - x, yCenter + y);
   setPixel (xCenter + x, yCenter - y);
   setPixel (xCenter - x, yCenter - y);
}

OTHER CURVES

Various curve functions are useful in object modeling, animation path specifications, data and function graphing, and other graphics applications. Commonly encountered curves include conics, trigonometric and exponential functions, probability distributions, general polynomials, and spline functions. Displays of these curves can be generated with methods similar to those discussed for the circle and ellipse functions. We can obtain positions along curve paths directly from explicit representations y = f(x) or from parametric forms. Alternatively, we could apply the incremental midpoint method to plot curves described with implicit functions f(x, y) = 0.

A straightforward method for displaying a specified curve function is to approximate it with straight line segments. Parametric representations are useful in this case for obtaining equally spaced line endpoint positions along the curve path.
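As a concrete sketch of this line-segment approach (an illustration added here, not one of the book's listings), the routine below samples a pair of caller-supplied coordinate functions, curveX and curveY, at equally spaced parameter values and joins successive sample points with the lineDDA procedure given above. The names plotParametricCurve, curveX, and curveY are hypothetical; ROUND and setPixel are assumed to be available from device.h as in the other listings.

#include "device.h"

/* Approximate a parametric curve x(u), y(u), 0 <= u <= 1, with
   nSegments straight line segments.  curveX and curveY are
   caller-supplied (hypothetical) coordinate functions. */
void plotParametricCurve (float (*curveX)(float), float (*curveY)(float),
                          int nSegments)
{
   int k, xPrev, yPrev, xNext, yNext;
   float u, du = 1.0 / nSegments;

   xPrev = ROUND (curveX (0.0));
   yPrev = ROUND (curveY (0.0));
   for (k = 1; k <= nSegments; k++) {
      u = k * du;
      xNext = ROUND (curveX (u));
      yNext = ROUND (curveY (u));
      lineDDA (xPrev, yPrev, xNext, yNext);   /* join successive sample points */
      xPrev = xNext;
      yPrev = yNext;
   }
}

A call such as plotParametricCurve (myCurveX, myCurveY, 32) would then draw a 32-segment approximation of whatever curve the two supplied functions describe; increasing the segment count gives a smoother display at the cost of more line-drawing work.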
We can also generate equally spaced positions from an explicit representation by choosing the independent variable according to the slope of the curve. Where the slope of y = f(x) has a magnitude less than 1, we choose x as the independent variable and calculate y values at equal x increments. To obtain equal spacing where the slope has a magnitude greater than 1, we use the inverse function, x = f^(-1)(y), and calculate values of x at equal y steps.

Straight-line or curve approximations are used to graph a data set of discrete coordinate points. We could join the discrete points with straight line segments, or we could use linear regression (least squares) to approximate the data set with a single straight line. A nonlinear least-squares approach is used to display the data set with some approximating function, usually a polynomial.

As with circles and ellipses, many functions possess symmetries that can be exploited to reduce the computation of coordinate positions along curve paths. For example, the normal probability distribution function is symmetric about a center position (the mean), and all points along one cycle of a sine curve can be generated from the points in a 90-degree interval.

Conic Sections

In general, we can describe a conic section (or conic) with the second-degree equation

    Ax^2 + By^2 + Cxy + Dx + Ey + F = 0                  (3-50)

where values for parameters A, B, C, D, E, and F determine the kind of curve we are to display. Given this set of coefficients, we can determine the particular conic that will be generated by evaluating the discriminant B^2 - 4AC:

    B^2 - 4AC  < 0   generates an ellipse (or circle)
               = 0   generates a parabola                (3-51)
               > 0   generates a hyperbola

For example, we get the circle equation 3-24 when A = B = 1, C = 0, D = -2x_c, E = -2y_c, and F = x_c^2 + y_c^2 - r^2. Equation 3-50 also describes the "degenerate" conics: points and straight lines.

Ellipses, hyperbolas, and parabolas are particularly useful in certain animation applications. These curves describe orbital and other motions for objects subjected to gravitational, electromagnetic, or nuclear forces. Planetary orbits in the solar system, for example, are ellipses, and an object projected into a uniform gravitational field travels along a parabolic trajectory. Figure 3-24 shows a parabolic path in standard position for a gravitational field acting in the negative y direction. The explicit equation for the parabolic trajectory of the object shown can be written as

    y = y_0 + a(x - x_0)^2 + b(x - x_0)

with constants a and b determined by the initial velocity v_0 of the object and the acceleration g due to the uniform gravitational force. We can also describe such parabolic motions with parametric equations using a time parameter t, measured in seconds from the initial projection point:

    x = x_0 + v_x0 t
    y = y_0 + v_y0 t - (1/2) g t^2                       (3-53)

Here, v_x0 and v_y0 are the initial velocity components, and the value of g near the surface of the earth is approximately 980 cm/sec^2. Object positions along the parabolic path are then calculated at selected time steps.

Figure 3-24 Parabolic path of an object tossed into a downward gravitational field at the initial position (x_0, y_0).
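A short sketch of this calculation (added for illustration, not part of the original text) plots positions along the trajectory of Eq. 3-53 at fixed time steps. The routine name plotTrajectory is hypothetical; setPixel and ROUND are assumed from device.h as in the other listings.

#include "device.h"

/* Plot a projectile path starting at (x0, y0) with initial velocity
   components (vx0, vy0), sampled at nSteps equal time increments dt.
   Uses the downward gravitational acceleration g = 980 cm/sec^2
   quoted in the text. */
void plotTrajectory (float x0, float y0, float vx0, float vy0,
                     float dt, int nSteps)
{
   int k;
   float t, x, y;
   float g = 980.0;                       /* cm/sec^2, near the earth's surface */

   for (k = 0; k <= nSteps; k++) {
      t = k * dt;
      x = x0 + vx0 * t;                   /* Eq. 3-53:  x = x0 + vx0 t           */
      y = y0 + vy0 * t - 0.5 * g * t * t; /*            y = y0 + vy0 t - g t^2/2 */
      setPixel (ROUND (x), ROUND (y));
   }
}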
Hyperbolic motions (Fig. 3-25) occur in connection with the collision of charged particles and in certain gravitational problems. For example, comets or meteorites moving around the sun may travel along hyperbolic paths and escape to outer space, never to return. The particular branch (left or right, in Fig. 3-25) describing the motion of an object depends on the forces involved in the problem. We can write the standard equation for the hyperbola centered on the origin in Fig. 3-25 as

    (x/r_x)^2 - (y/r_y)^2 = 1                            (3-54)

with x <= -r_x for the left branch and x >= r_x for the right branch. Since this equation differs from the standard ellipse equation 3-35 only in the sign between the x^2 and y^2 terms, we can generate points along a hyperbolic path with a slightly modified ellipse algorithm. We will return to the discussion of animation applications and methods in more detail in Chapter 16. And in Chapter 10, we discuss applications of computer graphics in scientific visualization.

Figure 3-25 Left and right branches of a hyperbola in standard position with symmetry axis along the x axis.

Parabolas and hyperbolas possess a symmetry axis. For example, the parabola described by Eq. 3-53 is symmetric about the vertical axis through its vertex,

    x = x_0 + v_x0 v_y0 / g

The methods used in the midpoint ellipse algorithm can be directly applied to obtain points along one side of the symmetry axis of hyperbolic and parabolic paths in the two regions: (1) where the magnitude of the curve slope is less than 1, and (2) where the magnitude of the slope is greater than 1. To do this, we first select the appropriate form of Eq. 3-50 and then use the selected function to set up expressions for the decision parameters in the two regions.

Polynomials and Spline Curves

A polynomial function of nth degree in x is defined as

    y = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n

where n is a nonnegative integer and the a_k are constants, with a_n not equal to 0. We get a quadratic when n = 2, a cubic polynomial when n = 3, a quartic when n = 4, and so forth. And we have a straight line when n = 1. Polynomials are useful in a number of graphics applications, including the design of object shapes, the specification of animation paths, and the graphing of data trends in a discrete set of data points.

Designing object shapes or motion paths is typically done by specifying a few points to define the general curve contour, then fitting the selected points with a polynomial. One way to accomplish the curve fitting is to construct a cubic polynomial curve section between each pair of specified points. Each curve section is then described in parametric form as

    y = a_y0 + a_y1 u + a_y2 u^2 + a_y3 u^3              (3-57)

with a similar expression for the x coordinate, where parameter u varies over the interval 0 to 1. Values for the coefficients of u in the parametric equations are determined from boundary conditions on the curve sections. One boundary condition is that two adjacent curve sections have the same coordinate position at the boundary, and a second condition is to match the two curve slopes at the boundary so that we obtain one continuous, smooth curve (Fig. 3-26). Continuous curves that are formed with polynomial pieces are called spline curves, or simply splines. There are other ways to set up spline curves, and the various spline-generating methods are explored in Chapter 10.

Figure 3-26 A spline curve formed with individual cubic sections between specified coordinate points.
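To make the idea concrete, the following sketch (an added illustration, not one of the book's listings) displays one such cubic section by evaluating its parametric polynomials at equally spaced u values. The routine name plotCubicSection and the coefficient arrays are hypothetical; the coefficients are assumed to have been determined beforehand from the boundary conditions, and setPixel and ROUND again come from device.h.

#include "device.h"

/* Plot one cubic curve section x(u), y(u), 0 <= u <= 1, given the four
   polynomial coefficients for each coordinate (as in Eq. 3-57).  The
   coefficients are assumed to satisfy the boundary conditions discussed
   in the text. */
void plotCubicSection (float ax[4], float ay[4], int nSteps)
{
   int k;
   float u, x, y;

   for (k = 0; k <= nSteps; k++) {
      u = (float) k / nSteps;
      /* Horner evaluation of a0 + a1 u + a2 u^2 + a3 u^3 */
      x = ((ax[3] * u + ax[2]) * u + ax[1]) * u + ax[0];
      y = ((ay[3] * u + ay[2]) * u + ay[1]) * u + ay[0];
      setPixel (ROUND (x), ROUND (y));
   }
}

Successive sections drawn this way join smoothly when their coefficients satisfy the position and slope conditions described above.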
3-8 PARALLEL CURVE ALGORITHMS

Methods for exploiting parallelism in curve generation are similar to those used in displaying straight line segments. We can either adapt a sequential algorithm by allocating processors according to curve partitions, or we could devise other methods and assign processors to screen partitions.

A parallel midpoint method for displaying circles is to divide the circular arc from 90 to 45 degrees into equal subarcs and assign a separate processor to each subarc. As in the parallel Bresenham line algorithm, we then need to set up computations to determine the beginning y value and decision-parameter p_k value for each processor. Pixel positions are then calculated throughout each subarc, and positions in the other circle octants are obtained by symmetry. Similarly, a parallel ellipse midpoint method divides the elliptical arc over the first quadrant into equal subarcs and parcels these out to separate processors. Pixel positions in the other quadrants are determined by symmetry. A screen-partitioning scheme for circles and ellipses is to assign each scan line crossing the curve to a separate processor. In this case, each processor uses the circle or ellipse equation to calculate curve-intersection coordinates.

For the display of elliptical arcs or other curves, we can simply use the scan-line partitioning method. Each processor uses the curve equation to locate the intersection positions along its assigned scan line. With processors assigned to individual pixels, each processor would calculate the distance (or distance squared) from the curve to its assigned pixel. If the calculated distance is less than a predefined value, the pixel is plotted.

3-9 CURVE FUNCTIONS

Routines for circles, splines, and other commonly used curves are included in many graphics packages. The PHIGS standard does not provide explicit functions for these curves, but it does include the following general curve function:

    generalizedDrawingPrimitive (n, wcPoints, id, datalist)

where wcPoints is a list of n coordinate positions, datalist contains noncoordinate data values, and parameter id selects the desired function. At a particular installation, a circle might be referenced with id = 1, an ellipse with id = 2, and so on.

As an example of the definition of curves through this PHIGS function, a circle (id = 1, say) could be specified by assigning the two center coordinate values to wcPoints and assigning the radius value to datalist. The generalized drawing primitive would then reference the appropriate algorithm, such as the midpoint method, to generate the circle. With interactive input, a circle could be defined with two coordinate points: the center position and a point on the circumference. Similarly, interactive specification of an ellipse can be done with three points: the two foci and a point on the ellipse boundary, all stored in wcPoints. For an ellipse in standard position, wcPoints could be assigned only the center coordinates, with datalist assigned the values for r_x and r_y. Splines defined with control points would be generated by assigning the control point coordinates to wcPoints.

Functions to generate circles and ellipses often include the capability of drawing curve sections by specifying parameters for the line endpoints. Expanding the parameter list allows specification of the beginning and ending angular values for an arc, as illustrated in Fig. 3-27. Another method for designating a circular or elliptical arc is to input the beginning and ending coordinate positions of the arc.

Figure 3-27 Circular arc specified by beginning and ending angles. Circle center is at the coordinate origin.

3-10 PIXEL ADDRESSING AND OBJECT GEOMETRY

So far we have assumed that all input positions were given in terms of scan-line number and pixel-position number across the scan line. As we saw in Chapter 2, there are, in general, several coordinate references associated with the specification and generation of a picture. Object descriptions are given in a world-reference frame, chosen to suit a particular application, and input world coordinates are ultimately converted to screen display positions. World descriptions of objects are given in terms of precise coordinate positions, which are infinitesimally small mathematical points. Pixel coordinates, however, reference finite screen areas. If we want to preserve the specified geometry of world objects, we need to compensate for the mapping of mathematical input points to finite pixel areas. One way to do this is simply to adjust the dimensions of displayed objects to account for the amount of overlap of pixel areas with the object boundaries. Another approach is to map world coordinates onto screen positions between pixels, so that we align object boundaries with pixel boundaries instead of pixel centers.

Figure 3-28 Lower-left section of the screen grid referencing integer coordinate positions.

Screen Grid Coordinates

An alternative to addressing display positions in terms of pixel centers is to reference screen coordinates with respect to the grid of horizontal and vertical pixel boundary lines spaced one unit apart (Fig. 3-28). A screen coordinate position is then the pair of integer values identifying a grid intersection position between two pixels. For example, the mathematical line path for a polyline with screen endpoints (0, 0), (5, 2), and (1, 4) is shown in Fig. 3-29.

With the coordinate origin at the lower left of the screen, each pixel area can be referenced by the integer grid coordinates of its lower left corner. Figure 3-30 illustrates this convention for an 8 by 8 section of a raster, with a single illuminated pixel at screen coordinate position (4, 5). In general, we identify the area occupied by a pixel with screen coordinates (x, y) as the unit square with diagonally opposite corners at (x, y) and (x + 1, y + 1). This pixel-addressing scheme has several advantages: It avoids half-integer pixel boundaries, it facilitates precise object representations, and it simplifies the processing involved in many scan-conversion algorithms and in other raster procedures.

Figure 3-29 Line path for a series of connected line segments between screen grid coordinate positions.

Figure 3-30 Illuminated pixel at raster position (4, 5).

The algorithms for line drawing and curve generation discussed in the preceding sections are still valid when applied to input positions expressed as screen grid coordinates. Decision parameters in these algorithms are now simply a measure of screen grid separation differences, rather than separation differences from pixel centers.

Maintaining Geometric Properties of Displayed Objects

When we convert geometric descriptions of objects into pixel representations, we transform mathematical points and lines into finite screen areas. If we are to maintain the original geometric measurements specified by the input coordinates for an object, we need to account for the finite size of pixels when we transform the object definition to a screen display.

Figure 3-31 Line path and corresponding pixel display for input screen grid endpoint coordinates (20, 10) and (30, 18).
Figure 3-31 shows the line plotted in the Bresenham line-algorithm example of Section 3-2. Interpreting the line endpoints (20, 10) and (30, 18) as precise grid-crossing positions, we see that the line should not extend past screen grid position (30, 18). If we were to plot the pixel with screen coordinates (30, 18), as in the example given in Section 3-2, we would display a line that spans 11 horizontal units and 9 vertical units. For the mathematical line, however, Δx = 10 and Δy = 8. If we are addressing pixels by their center positions, we can adjust the length of the displayed line by omitting one of the endpoint pixels. If we think of screen coordinates as addressing pixel boundaries, as shown in Fig. 3-31, we plot a line using only those pixels that are "interior" to the line path; that is, only those pixels that are between the line endpoints. For our example, we would plot the leftmost pixel at (20, 10) and the rightmost pixel at (29, 17). This displays a line that has the same geometric magnitudes as the mathematical line from (20, 10) to (30, 18).

For an enclosed area, input geometric properties are maintained by displaying the area only with those pixels that are interior to the object boundaries. The rectangle defined with the screen coordinate vertices shown in Fig. 3-32(a), for example, is larger when we display it filled with pixels up to and including the border pixel lines joining the specified vertices. As defined, the area of the rectangle is 12 units, but as displayed in Fig. 3-32(b), it has an area of 20 units. In Fig. 3-32(c), the original rectangle measurements are maintained by displaying only the internal pixels. The right boundary of the input rectangle is at x = 4. To maintain this boundary in the display, we set the rightmost pixel grid coordinate at x = 3. The pixels in this vertical column then span the interval from x = 3 to x = 4. Similarly, the mathematical top boundary of the rectangle is at y = 3, so we set the top pixel row for the displayed rectangle at y = 2.

Figure 3-32 Conversion of rectangle (a) with vertices at screen coordinates (0, 0), (4, 0), (4, 3), and (0, 3) into display (b) that includes the right and top boundaries and into display (c) that maintains geometric magnitudes.
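The rectangle example can be written out directly. The following sketch (an added illustration, not one of the original listings) fills only the pixels interior to a mathematically defined rectangle, so the displayed area matches the specified dimensions; fillRectInterior is a hypothetical name and setPixel is assumed from device.h.

#include "device.h"

/* Fill an axis-aligned rectangle whose mathematical boundaries are the
   grid lines x = xmin, x = xmax, y = ymin, y = ymax.  Only interior
   pixels are set, so the displayed area is (xmax - xmin) * (ymax - ymin)
   units, as in Fig. 3-32(c). */
void fillRectInterior (int xmin, int ymin, int xmax, int ymax)
{
   int x, y;

   for (y = ymin; y < ymax; y++)       /* top row of pixels is y = ymax - 1   */
      for (x = xmin; x < xmax; x++)    /* rightmost column is x = xmax - 1    */
         setPixel (x, y);
}

For the rectangle of Fig. 3-32, with opposite corners at (0, 0) and (4, 3), the call fillRectInterior (0, 0, 4, 3) sets 12 pixels, matching the defined area of 12 units.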
These compensations for finite pixel width along object boundaries can be applied to other polygons and to curved figures so that the raster display maintains the input object specifications. A circle of radius 5 and center position (10, 10), for instance, would be displayed as in Fig. 3-33 by the midpoint circle algorithm using screen grid coordinate positions. But the plotted circle has a diameter of 11. To plot the circle with the defined diameter of 10, we can modify the circle algorithm to shorten each pixel scan line and each pixel column, as in Fig. 3-34. One way to do this is to generate points clockwise along the circular arc in the third quadrant, starting at screen coordinates (10, 5). For each generated point, the other seven circle symmetry points are generated by decreasing the x coordinate values by 1 along scan lines and decreasing the y coordinate values by 1 along pixel columns. Similar methods are applied in ellipse algorithms to maintain the specified proportions in the display of an ellipse.

Figure 3-33 Circle path and midpoint circle algorithm plot of a circle with radius 5 in screen coordinates.

Figure 3-34 Modification of the circle plot in Fig. 3-33 to maintain the specified circle diameter of 10.

3-11 FILLED-AREA PRIMITIVES

A standard output primitive in general graphics packages is a solid-color or patterned polygon area. Other kinds of area primitives are sometimes available, but polygons are easier to process since they have linear boundaries.

There are two basic approaches to area filling on raster systems. One way to fill an area is to determine the overlap intervals for scan lines that cross the area. Another method for area filling is to start from a given interior position and paint outward from this point until we encounter the specified boundary conditions. The scan-line approach is typically used in general graphics packages to fill polygons, circles, ellipses, and other simple curves. All methods starting from an interior point are useful with more complex boundaries and in interactive painting systems. In the following sections, we consider methods for solid fill of specified areas. Other fill options are discussed in Chapter 4.

Scan-Line Polygon Fill Algorithm

Figure 3-35 illustrates the scan-line procedure for solid filling of polygon areas. For each scan line crossing a polygon, the area-fill algorithm locates the intersection points of the scan line with the polygon edges. These intersection points are then sorted from left to right, and the corresponding frame-buffer positions between each intersection pair are set to the specified fill color. In the example of Fig. 3-35, the four pixel intersection positions with the polygon boundaries define two stretches of interior pixels from x = 10 to x = 14 and from x = 18 to x = 24.

Figure 3-35 Interior pixels along a scan line passing through a polygon area.
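A sketch of this span-filling step for a single scan line (an added illustration, not one of the book's listings) sorts the edge-intersection x values and then fills the pixels between successive pairs; fillSpans is a hypothetical name and setPixel is assumed from device.h.

#include <stdlib.h>
#include "device.h"

static int compareInts (const void * a, const void * b)
{
   return (*(const int *) a - *(const int *) b);
}

/* Fill one scan line, given the x coordinates at which it crosses the
   polygon edges.  After sorting, intersections are paired off from left
   to right and the pixels spanning each pair are set. */
void fillSpans (int scan, int xIntersect[], int nIntersect)
{
   int i, x;

   qsort (xIntersect, nIntersect, sizeof (int), compareInts);
   for (i = 0; i + 1 < nIntersect; i += 2)
      for (x = xIntersect[i]; x <= xIntersect[i+1]; x++)
         setPixel (x, scan);
}

For the scan line of Fig. 3-35, an intersection list containing 10, 14, 18, and 24 produces the two interior spans described above. Intersections that fall on polygon vertices need the special handling discussed next.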
Otherwise, the shared vertex represents a local extremum (minimum or maximum) on the polv- gon boundary, and the two edge intersections with the scan line passing through that vertex can be added to the intersection list. Filprrrr 3-36 Intersection points along scan lines that intersect polygon vertices. Scan line y generates an odd number of intersections, but scan line y' generals an even number of intersections that can be paired to identify correctly the interior pixel spans. One way to resolve the question as to whether we should count a vertex as section 3-11 one intersection or two is to shorten some polygon edges to split those vertices Filled-Area Primitives that should be counted a s one intersection. We can process nonhorizontal edges around the polygon boundary in the order specified, either clockwise or counter- clockwise. As we process each edge, we can check to determine whether that edge and the next nonhorizontal edge have either monotonically increasing or decreasing endpoint y values. If so, the lower edge can be shortened to ensure that only one mtersection point is generated for the scan line going through the common vertex joining the two edges. Figure 3-37 illustrates shortening of an edge. When the endpoint y coordinates of the two edges are increasing, the y value of the upper endpoint for the current edge 1s decreased by 1, as in Fig. 3-37(a). When the endpoint y values are monotonically decreasing, as in Fig. 3-37(b), we decrease they coordinate of the upper endpoint of the edge following the current edge. Calculations performed in scan-conversion and other graphics algorithms typically take advantage of various coherence properties of a scene that is to be displayed. What we mean by coherence is simply that the properties of one part of a scene are related in some way to other parts of the scene so that the relation- ship can be used to reduce processing. Coherence methods often involve incre- mental calculations applied along a single scan line or between successive scan lines. In determining edge intersections, we can set u p incremental coordinate calculations along any edge by exploiting the fact that the slope of the edge is constant from one scan line to the next. Figure 3-38 shows two successive scan lines crossing a left edge of a polygon. The slope of this polygon boundary line can be expressed in terms of the scan-line intersection coordinates: Since the change in y coordinates between the two scan lines is simply P P 4% + - r / / / / Scan Line y + 1 Scan Line y / I r Scan Lme y - 1 / / d I{ (a1 (b) Figure 3-37 Adjusting endpomt I/ values for a polygon, as we process edges in order around the polygon perimeter. The edge currently being processed is indicated as a solid line. In (a), they coordinate of the upper endpoint of the current edge is decreased by 1. In tb), they coordinate of the upper endpoint of the next edge is decreased by 1. Chaoler 3 (x,.,, Yk. 11 A Scan Line y, + 1 Output Pr~rnilives d- \ Scan Line y, Figrtrc 3-38 Two successive scan lines tntersecting a polygon boundary. the x-intersection value xi,, on the upper scan line can be determined from the x-intersection value xk on the preceding scan line as Each successive x intercept can thus be calculated by adding the inverse of the slope and rounding to the nearest integer. An obvious parallel implementation of the fill algorithm is to assign each scan line crossing the polygon area to a separate processor. Edge-intersection cal- culations are then performed independently. 
An obvious parallel implementation of the fill algorithm is to assign each scan line crossing the polygon area to a separate processor. Edge-intersection calculations are then performed independently. Along an edge with slope m, the intersection value x_k for scan line k above the initial scan line can be calculated as

    x_k = x_0 + k/m

In a sequential fill algorithm, the increment of x values by the amount 1/m along an edge can be accomplished with integer operations by recalling that the slope m is the ratio of two integers:

    m = Δy / Δx

where Δx and Δy are the differences between the edge endpoint x and y coordinate values. Thus, incremental calculations of x intercepts along an edge for successive scan lines can be expressed as

    x_{k+1} = x_k + Δx/Δy

Using this equation, we can perform integer evaluation of the x intercepts by initializing a counter to 0, then incrementing the counter by the value of Δx each time we move up to a new scan line. Whenever the counter value becomes equal to or greater than Δy, we increment the current x-intersection value by 1 and decrease the counter by the value Δy. This procedure is equivalent to maintaining integer and fractional parts for the x intercepts and incrementing the fractional part until we reach the next integer value.

As an example of integer incrementing, suppose we have an edge with slope m = 7/3. At the initial scan line, we set the counter to 0 and the counter increment to 3. As we move up to the next three scan lines along this edge, the counter is successively assigned the values 3, 6, and 9. On the third scan line above the initial scan line, the counter now has a value greater than 7. So we increment the x-intersection coordinate by 1 and reset the counter to the value 9 - 7 = 2. We continue determining the scan-line intersections in this way until we reach the upper endpoint of the edge. Similar calculations are carried out to obtain intersections for edges with negative slopes.

We can round to the nearest pixel x-intersection value, instead of truncating to obtain integer positions, by modifying the edge-intersection algorithm so that the increment is compared to Δy/2. This can be done with integer arithmetic by incrementing the counter by the value 2Δx at each step and comparing the counter to Δy. When the counter is greater than or equal to Δy, we increase the x value by 1 and decrement the counter by the value 2Δy. In our previous example with m = 7/3, the counter values for the first few scan lines above the initial scan line on this edge would now be 6, 12 (reduced to -2), 4, 10 (reduced to -4), 2, 8 (reduced to -6), 0, 6, and 12 (reduced to -2). Now x would be incremented on scan lines 2, 4, 6, 9, etc., above the initial scan line for this edge. The only extra calculations required for each edge are 2Δx = Δx + Δx and 2Δy = Δy + Δy.
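The integer stepping just described can be coded directly. In the sketch below, dx and dy are the edge's Δx and Δy values, assumed positive with dx < dy so that the slope magnitude is greater than 1, as in the example; the routine name and the printf output are illustrative only.

/* Sketch of integer-only x-intercept stepping with rounding.  The
   counter is increased by 2*dx at each scan line; when it reaches dy,
   the x intercept moves one pixel and the counter drops by 2*dy.      */

#include <stdio.h>

void integerIntercepts (int x0, int dx, int dy, int numScanLines)
{
   int x = x0, counter = 0, k;

   for (k = 1; k <= numScanLines; k++) {
      counter += 2 * dx;
      if (counter >= dy) {        /* fractional part has reached 1/2 */
         x++;
         counter -= 2 * dy;
      }
      printf ("scan line %d above start: x = %d\n", k, x);
   }
}

Calling integerIntercepts (x0, 3, 7, 9) reproduces the m = 7/3 example: x is incremented on scan lines 2, 4, 6, and 9 above the starting scan line.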
To efficiently perform a polygon fill, we can first store the polygon boundary in a sorted edge table that contains all the information necessary to process the scan lines efficiently. Proceeding around the edges in either a clockwise or a counterclockwise order, we can use a bucket sort to store the edges, sorted on the smallest y value of each edge, in the correct scan-line positions. Only nonhorizontal edges are entered into the sorted edge table. As the edges are processed, we can also shorten certain edges to resolve the vertex-intersection question. Each entry in the table for a particular scan line contains the maximum y value for that edge, the x-intercept value (at the lower vertex) for the edge, and the inverse slope of the edge. For each scan line, the edges are in sorted order from left to right. Figure 3-39 shows a polygon and the associated sorted edge table.

Figure 3-39  A polygon and its sorted edge table, with one edge shortened by one unit in the y direction.

Next, we process the scan lines from the bottom of the polygon to its top, producing, for each scan line crossing the polygon boundaries, an active edge list. The active edge list for a scan line contains all edges crossed by that scan line, with iterative coherence calculations used to obtain the edge intersections.

Implementation of the edge-intersection calculations can also be facilitated by storing Δx and Δy values in the sorted edge table. Also, to ensure that we correctly fill the interior of specified polygons, we can apply the considerations discussed in Section 3-10. For each scan line, we fill in the pixel spans for each pair of x-intercepts, starting from the leftmost x-intercept value and ending at one position before the rightmost x intercept. And each polygon edge can be shortened by one unit in the y direction at the top endpoint. These measures also guarantee that pixels in adjacent polygons will not overlap each other.

The following procedure performs a solid-fill scan conversion for an input set of polygon vertices. For each scan line within the vertical extents of the polygon, an active edge list is set up and edge intersections are calculated. Across each scan line, the interior fill is then applied between successive pairs of edge intersections, processed from left to right.

#include <stdlib.h>                 /* malloc, free */
#include "device.h"                 /* assumed to supply dcPt, setPixel, WINDOW_HEIGHT */

typedef struct tEdge {
   int yUpper;
   float xIntersect, dxPerScan;
   struct tEdge * next;
} Edge;

/* Inserts edge into list in order of increasing xIntersect field. */
void insertEdge (Edge * list, Edge * edge)
{
   Edge * p, * q = list;

   p = q->next;
   while (p != NULL) {
      if (edge->xIntersect < p->xIntersect)
         p = NULL;
      else {
         q = p;
         p = p->next;
      }
   }
   edge->next = q->next;
   q->next = edge;
}

/* For an index, return y-coordinate of next nonhorizontal line. */
int yNext (int k, int cnt, dcPt * pts)
{
   int j;

   if ((k + 1) > (cnt - 1))
      j = 0;
   else
      j = k + 1;
   while (pts[k].y == pts[j].y)
      if ((j + 1) > (cnt - 1))
         j = 0;
      else
         j++;
   return (pts[j].y);
}

/* Store lower-y coordinate and inverse slope for each edge.  Adjust
   and store upper-y coordinate for edges that are the lower member
   of a monotonically increasing or decreasing pair of edges.        */
void makeEdgeRec
   (dcPt lower, dcPt upper, int yComp, Edge * edge, Edge * edges[])
{
   edge->dxPerScan =
      (float) (upper.x - lower.x) / (upper.y - lower.y);
   edge->xIntersect = lower.x;
   if (upper.y < yComp)
      edge->yUpper = upper.y - 1;      /* shorten the edge by one unit */
   else
      edge->yUpper = upper.y;
   insertEdge (edges[lower.y], edge);
}

void buildEdgeList (int cnt, dcPt * pts, Edge * edges[])
{
   Edge * edge;
   dcPt v1, v2;
   int i, yPrev = pts[cnt - 2].y;

   v1.x = pts[cnt-1].x;  v1.y = pts[cnt-1].y;
   for (i = 0; i < cnt; i++) {
      v2 = pts[i];
      if (v1.y != v2.y) {                       /* nonhorizontal edge */
         edge = (Edge *) malloc (sizeof (Edge));
         if (v1.y < v2.y)                       /* up-going edge      */
            makeEdgeRec (v1, v2, yNext (i, cnt, pts), edge, edges);
         else                                   /* down-going edge    */
            makeEdgeRec (v2, v1, yPrev, edge, edges);
      }
      yPrev = v1.y;
      v1 = v2;
   }
}

/* Move the edges starting at this scan line into the active list. */
void buildActiveList (int scan, Edge * active, Edge * edges[])
{
   Edge * p, * q;

   p = edges[scan]->next;
   while (p) {
      q = p->next;
      insertEdge (active, p);
      p = q;
   }
}

/* Fill the pixel spans between successive pairs of intersections. */
void fillScan (int scan, Edge * active)
{
   Edge * p1, * p2;
   int i;

   p1 = active->next;
   while (p1) {
      p2 = p1->next;
      for (i = p1->xIntersect; i < p2->xIntersect; i++)
         setPixel ((int) i, scan);
      p1 = p2->next;
   }
}

void deleteAfter (Edge * q)
{
   Edge * p = q->next;

   q->next = p->next;
   free (p);
}

/* Delete completed edges.  Update 'xIntersect' field for others. */
void updateActiveList (int scan, Edge * active)
{
   Edge * q = active, * p = active->next;

   while (p)
      if (scan >= p->yUpper) {
         p = p->next;
         deleteAfter (q);
      }
      else {
         p->xIntersect = p->xIntersect + p->dxPerScan;
         q = p;
         p = p->next;
      }
}

void resortActiveList (Edge * active)
{
   Edge * q, * p = active->next;

   active->next = NULL;
   while (p) {
      q = p->next;
      insertEdge (active, p);
      p = q;
   }
}
void scanFill (int cnt, dcPt * pts)
{
   Edge * edges[WINDOW_HEIGHT], * active;
   int i, scan;

   for (i = 0; i < WINDOW_HEIGHT; i++) {
      edges[i] = (Edge *) malloc (sizeof (Edge));
      edges[i]->next = NULL;
   }
   buildEdgeList (cnt, pts, edges);
   active = (Edge *) malloc (sizeof (Edge));
   active->next = NULL;

   for (scan = 0; scan < WINDOW_HEIGHT; scan++) {
      buildActiveList (scan, active, edges);
      if (active->next) {
         fillScan (scan, active);
         updateActiveList (scan, active);
         resortActiveList (active);
      }
   }
   /* Free edge records that have been malloc'ed ... */
}

Inside-Outside Tests

Area-filling algorithms and other graphics processes often need to identify interior regions of objects. So far, we have discussed area filling only in terms of standard polygon shapes. In elementary geometry, a polygon is usually defined as having no self-intersections. Examples of standard polygons include triangles, rectangles, octagons, and decagons. The component edges of these objects are joined only at the vertices, and otherwise the edges have no common points in the plane. Identifying the interior regions of standard polygons is generally a straightforward process. But in most graphics applications, we can specify any sequence for the vertices of a fill area, including sequences that produce intersecting edges, as in Fig. 3-40. For such shapes, it is not always clear which regions of the xy plane we should call "interior" and which regions we should designate as "exterior" to the object. Graphics packages normally use either the odd-even rule or the nonzero winding number rule to identify interior regions of an object.

Figure 3-40  Identifying interior and exterior regions for a self-intersecting polygon: (a) odd-even rule, (b) nonzero winding number rule.

We apply the odd-even rule, also called the odd parity rule or the even-odd rule, by conceptually drawing a line from any position P to a distant point outside the coordinate extents of the object and counting the number of edge crossings along the line. If the number of polygon edges crossed by this line is odd, then P is an interior point. Otherwise, P is an exterior point. To obtain an accurate edge count, we must be sure that the line path we choose does not intersect any polygon vertices. Figure 3-40(a) shows the interior and exterior regions obtained from the odd-even rule for a self-intersecting set of edges. The scan-line polygon fill algorithm discussed in the previous section is an example of area filling using the odd-even rule.
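A point test based on the odd-even rule can be sketched as follows. The routine casts a horizontal line toward the right of the test point and toggles a parity flag at each edge crossing. Here dcPt is the vertex type used in the fill procedure above, the routine name is illustrative, and the degenerate case of the line passing through a vertex is not handled; the requirement that the chosen path avoid vertices is simply assumed.

/* Sketch: odd-even (even-odd) inside test for point (px, py) against a
   polygon of cnt vertices.  An edge contributes a crossing when its two
   endpoints straddle the horizontal line y = py and the crossing point
   lies to the right of the test point.                                 */

int insideOddEven (float px, float py, int cnt, dcPt * pts)
{
   int i, j, inside = 0;

   for (i = 0, j = cnt - 1; i < cnt; j = i++) {
      if ((pts[i].y > py) != (pts[j].y > py)) {
         float xCross = pts[j].x + (py - pts[j].y) *
                        (float) (pts[i].x - pts[j].x) / (pts[i].y - pts[j].y);
         if (px < xCross)
            inside = !inside;     /* odd number of crossings: interior */
      }
   }
   return inside;
}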
Another method for defining interior regions is the nonzero winding number rule, which counts the number of times the polygon edges wind around a particular point in the counterclockwise direction. This count is called the winding number, and the interior points of a two-dimensional object are defined to be those that have a nonzero value for the winding number. We apply the nonzero winding number rule to polygons by initializing the winding number to 0 and again imagining a line drawn from any position P to a distant point beyond the coordinate extents of the object. The line we choose must not pass through any vertices. As we move along the line from position P to the distant point, we count the number of edges that cross the line in each direction. We add 1 to the winding number every time we intersect a polygon edge that crosses the line from right to left, and we subtract 1 every time we intersect an edge that crosses from left to right. The final value of the winding number, after all edge crossings have been counted, determines the relative position of P. If the winding number is nonzero, P is defined to be an interior point. Otherwise, P is taken to be an exterior point.

Figure 3-40(b) shows the interior and exterior regions defined by the nonzero winding number rule for a self-intersecting set of edges. For standard polygons and other simple shapes, the nonzero winding number rule and the odd-even rule give the same results. But for more complicated shapes, the two methods may yield different interior and exterior regions, as in the example of Fig. 3-40.

One way to determine directional edge crossings is to take the vector cross product of a vector u along the line from P to a distant point with the edge vector E for each edge that crosses the line. If the z component of the cross product u x E for a particular edge is positive, that edge crosses from right to left and we add 1 to the winding number. Otherwise, the edge crosses from left to right and we subtract 1 from the winding number. An edge vector is calculated by subtracting the starting vertex position for that edge from the ending vertex position. For example, the edge vector for the first edge in the example of Fig. 3-40 is E1 = V2 - V1, where V1 and V2 denote the first two vertex positions in the polygon's vertex list.
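Putting the cross-product observation into code gives a companion to the odd-even sketch above. With the test line chosen as a horizontal ray toward the right, the vector u is (1, 0), so the z component of u x E reduces to the sign of the edge's y extent: an upward-crossing edge adds 1 to the winding number and a downward-crossing edge subtracts 1. As before, dcPt is the vertex type used earlier, the routine name is illustrative, and the line is assumed not to pass through a vertex.

/* Sketch: nonzero winding number test for point (px, py).  Each edge E
   runs from vertex j to vertex i; an edge that crosses the rightward
   ray from the test point contributes +1 if it crosses upward (z
   component of u x E positive) and -1 if it crosses downward.          */

int insideNonzeroWinding (float px, float py, int cnt, dcPt * pts)
{
   int i, j, winding = 0;

   for (i = 0, j = cnt - 1; i < cnt; j = i++) {
      if ((pts[i].y > py) != (pts[j].y > py)) {
         float xCross = pts[j].x + (py - pts[j].y) *
                        (float) (pts[i].x - pts[j].x) / (pts[i].y - pts[j].y);
         if (px < xCross) {
            if (pts[i].y > pts[j].y)
               winding++;         /* edge crosses from right to left */
            else
               winding--;         /* edge crosses from left to right */
         }
      }
   }
   return (winding != 0);         /* interior if the winding number is nonzero */
}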
