CG Module 6: Visible Surface Detection and Animation

Summary

This document provides an overview of computer graphics concepts, focusing on visible surface detection and animation techniques. It covers the relevant algorithms, methods, and principles, introducing key topics such as back-face detection, the depth buffer method, and the core principles of animation.

Full Transcript

Module 6: Visible Surface Detection and Animation

Visible Surface Detection

In a real scene there may be many 3D objects placed randomly, completely or partially hiding other objects, so from any given viewing direction some surfaces are not visible. Identifying only the visible surfaces of a scene is a major concern of computer graphics: to display a realistic scene, it is necessary to render visible surfaces only. Many algorithms have been devised to find visible surfaces efficiently. Visible surface detection algorithms are also known as hidden-line or hidden-surface removal algorithms; their aim is to identify the visible lines or surfaces of a given set of objects from a particular viewing direction.

Classification of Visible Surface Detection Algorithms

Visible surface detection algorithms may be classified as:
1. Object space methods
2. Image space methods

I. Object Space Methods

Object space methods determine the visible surfaces of an object by comparing the entire object, or parts of it, with the other objects in the scene. These algorithms operate in the physical coordinate system. This class of algorithms is faster and performs less computation, and such methods are generally used in line-display algorithms. The following methods belong to this category:
1. Back-face detection
2. Painter's algorithm
3. Roberts' algorithm

II. Image Space Methods

In image space methods, the visibility of an object is decided by a pixel-by-pixel comparison of overlapping objects: the pixel of the object nearest to the viewer is selected for display. These algorithms operate in the screen coordinate system. They are more precise but computationally intensive, and most hidden-line/hidden-surface algorithms use image space methods. The following methods belong to this category:
1. Depth buffer method
2. Area subdivision method
3. Octree method
4. Scan line method
5. Ray tracing

Comparison of Object Space and Image Space Methods

1. Object space methods compare the entire object, or parts of it, with the other objects in the scene; image space methods decide visibility by a pixel-by-pixel comparison of overlapping objects.
2. Object space methods operate in the physical coordinate system; image space methods operate in the screen coordinate system.
3. Object space algorithms are faster and perform less computation; image space algorithms are more precise but computationally intensive.
4. Object space methods are generally used in line-display algorithms; most hidden-line/hidden-surface algorithms use image space methods.
5. Object space methods include back-face detection, the painter's algorithm, and Roberts' algorithm; image space methods include the depth buffer, area subdivision, and octree methods.

Back-Face Detection Method

Object surfaces that are oriented away from the viewer are called back surfaces or back faces. Back-face detection is a fast object-space algorithm based on the inside-outside test for identifying the back faces of a polyhedron. A point (x, y, z) is inside a polygon surface with plane parameters A, B, C and D if

Ax + By + Cz + D < 0

When an inside point is along the line of sight to the surface, the polygon must be a back face: we are inside that face and cannot see the front of it from our viewing position.

Consider the normal vector N to a polygon surface, which has Cartesian components (A, B, C). If V is a vector in the viewing direction from the eye (or camera) position, then this polygon is a back face if

V · N > 0

If object descriptions have been converted to projection coordinates and our viewing direction is parallel to the viewing z_v axis, then V = (0, 0, V_z) and V · N = V_z C, so we only need to consider the sign of C, the z component of the normal vector N. In a right-handed viewing system with the viewing direction along the negative z_v axis, the polygon is a back face if C < 0. We also cannot see any face whose normal has z component C = 0, since our viewing direction grazes that polygon. In general, we can label any polygon as a back face if its normal vector has a z-component value

C ≤ 0
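As a quick illustration, here is a minimal Python sketch of this test; the `surface_normal` helper and its counter-clockwise vertex convention are assumptions made for the example, not part of the text above.

```python
# Back-face test sketch: a polygon with plane normal N = (A, B, C) is a
# back face when V . N > 0 for viewing vector V. With the view direction
# along the negative z_v axis, this reduces to checking the sign of C.

def surface_normal(p0, p1, p2):
    """Normal (A, B, C) of the plane through three counter-clockwise vertices."""
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    return (uy * vz - uz * vy,   # A
            uz * vx - ux * vz,   # B
            ux * vy - uy * vx)   # C

def is_back_face(normal, view=(0.0, 0.0, -1.0)):
    """True if the polygon faces away from the viewer.

    Uses V . N >= 0 so that grazing faces (C = 0) are culled too,
    matching the C <= 0 rule stated above."""
    return sum(v * n for v, n in zip(view, normal)) >= 0

tri = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]    # CCW in the xy-plane, normal = +z
print(is_back_face(surface_normal(*tri)))  # False: C = 1 > 0, faces the viewer
```

In a full pipeline this test is applied per polygon before rasterization; for a closed object it typically culls roughly half of the faces up front.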
Depth Buffer Method

The depth buffer method is also known as the z-buffer method, since object depth is usually measured from the view plane along the z axis of the viewing system. It is an image space approach that compares surface depths at each pixel position on the projection plane. Each surface of a scene is processed separately, one point at a time across the surface. The method is usually applied to scenes containing only polygon surfaces, because depth values can then be computed very quickly and the method is easy to implement.

With object descriptions converted to projection coordinates, each (x, y, z) position on a polygon surface corresponds to the orthographic projection point (x, y) on the view plane, so for each pixel position (x, y) object depths can be compared by comparing z values. If several surfaces lie at varying distances along the orthographic projection line from position (x, y) in the view plane (taken as the x_v y_v plane), the intensity of the closest surface is the one saved at (x, y).

We can implement the depth buffer algorithm in normalized coordinates, so that z values range from 0 at the back clipping plane to z_max at the front clipping plane. The value of z_max can be set either to 1 (for a unit cube) or to the largest value that can be stored on the system.

Two buffer areas are required in this method. A depth buffer stores the depth value for each (x, y) position as surfaces are processed, and the refresh buffer stores the intensity value for each position. Initially, all positions in the depth buffer are set to 0 (minimum depth), and the refresh buffer is initialized to the background intensity. Each surface listed in the polygon tables is then processed, one scan line at a time, calculating the depth (z value) at each (x, y) pixel position. The calculated depth is compared to the value previously stored in the depth buffer at that position; if the calculated depth is greater, the new depth value is stored, and the surface intensity at that position is determined and placed in the same (x, y) location in the refresh buffer.

The depth of a surface at position (x, y) is calculated from the plane equation of the surface:

z = (-Ax - By - D) / C

On scan line y, the next pixel along the line is (x + 1, y), and its depth is

z' = (-A(x + 1) - By - D) / C = z - A/C

The ratio -A/C is constant for each surface, so succeeding depth values across a scan line are obtained from preceding values with a single addition.
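The following is a minimal sketch of the z-buffer loop under the conventions above (depth buffer initialized to 0, larger z meaning closer to the viewer). Representing a polygon's projection as precomputed per-scan-line pixel spans is an assumption made to keep the example short; a real rasterizer would derive those spans from the polygon edges.

```python
# Z-buffer sketch: depth_buffer starts at 0 (farthest); a pixel is
# overwritten only when the new depth is greater (closer), and depths
# along a scan line are updated incrementally with z' = z - A/C.

WIDTH, HEIGHT = 640, 480
BACKGROUND = (0, 0, 0)

depth_buffer = [[0.0] * HEIGHT for _ in range(WIDTH)]
refresh_buffer = [[BACKGROUND] * HEIGHT for _ in range(WIDTH)]

def process_surface(plane, spans, color):
    """plane = (A, B, C, D) with C != 0; spans = [(y, x_start, x_end), ...]
    covering the polygon's projection, one entry per scan line."""
    A, B, C, D = plane
    for y, x_start, x_end in spans:
        # Depth at the left end of the span, from the plane equation.
        z = (-A * x_start - B * y - D) / C
        for x in range(x_start, x_end + 1):
            if z > depth_buffer[x][y]:      # closer than the stored surface
                depth_buffer[x][y] = z
                refresh_buffer[x][y] = color
            z -= A / C                      # incremental update: z' = z - A/C
```

Note that each pixel costs one comparison and one addition, which is why the method scales well even for scenes with many overlapping polygons.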
Area Subdivision Method

Area subdivision algorithms follow the divide-and-conquer strategy of spatial partitioning in the projection plane. This method takes advantage of area coherence in a scene by locating those view areas that represent part of a single surface: the total viewing area is successively divided into smaller and smaller rectangles until each small area is the projection of part of a single visible surface, or of no surface at all.

Warnock's algorithm exploits this area coherence by subdividing each area into four equal squares. At each stage of the recursive subdivision, the relationship between the projection of each polygon and the area of interest is checked against four possible cases:

1. Surrounding polygon: one that completely encloses the area of interest.
2. Overlapping or intersecting polygon: one that is partly inside and partly outside the area.
3. Inside or contained polygon: one that is completely inside the area.
4. Outside or disjoint polygon: one that is completely outside the area.

After checking these four relationships, each case is handled as follows:

1. If all the polygons are disjoint from the area, the background colour is displayed in the area.
2. If there is only one intersecting or only one contained polygon, the area is first filled with the background colour, and then the part of the polygon contained in the area is filled with the polygon's colour.
3. If there is a single surrounding polygon, but no intersecting or contained polygons, the area is filled with the colour of the surrounding polygon.
4. If more than one polygon intersects, is contained in, or surrounds the area, more processing is required. If a surrounding polygon is closer to the viewpoint everywhere in the area than any of the other polygons, the entire area is filled with the colour of that surrounding polygon. If the surrounding polygon is not completely in front of an intersecting polygon, no decision can be made, and Warnock's algorithm subdivides the area to simplify the problem. For example, when we cannot decide which of two overlapping polygons is in front, dividing the area of interest may show that polygon 1 is ahead of polygon 2 in the left area while polygon 2 is ahead of polygon 1 in the right area; the two areas can then be filled with the corresponding polygon colours.

Warnock's algorithm stops subdividing an area only when the problem is simplified, or when the area is a single pixel.
Algorithm:

1. Initialize the area to be the whole screen.
2. Create the list of polygons by sorting them on the z values of their vertices; do not include disjoint polygons in the list, because they are not visible.
3. Find the relationship of each polygon to the area.
4. Perform the visibility decision test:
   a) If all the polygons are disjoint from the area, fill the area with the background colour.
   b) If there is only one intersecting or only one contained polygon, first fill the entire area with the background colour, then fill the part of the polygon contained in the area with the polygon's colour.
   c) If there is a single surrounding polygon, but no intersecting or contained polygons, fill the area with the colour of the surrounding polygon.
   d) If a surrounding polygon is closer to the viewpoint than all other polygons, so that all other polygons are hidden by it, fill the area with the colour of that surrounding polygon.
   e) If the area is the pixel (x, y) and none of a)-d) applies, compute the z coordinate at (x, y) of all polygons in the list and set the pixel to the colour of the polygon closest to the viewpoint.
5. If none of the above tests is true, subdivide the area and go to step 2.
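Below is a compact sketch of this recursion, with polygons simplified to axis-aligned rectangles at constant depth so that the four relationship tests become simple bounds checks. The names `Rect` and `warnock`, and the flat-rectangle simplification, are illustrative assumptions, not part of Warnock's original formulation.

```python
# Warnock-style subdivision sketch over rectangles at constant depth
# (larger depth = closer to the viewer).
from dataclasses import dataclass

@dataclass
class Rect:
    x0: float
    y0: float
    x1: float
    y1: float
    depth: float = 0.0
    color: str = "white"

def disjoint(p, a):
    return p.x1 <= a.x0 or p.x0 >= a.x1 or p.y1 <= a.y0 or p.y0 >= a.y1

def surrounds(p, a):
    return p.x0 <= a.x0 and p.y0 <= a.y0 and p.x1 >= a.x1 and p.y1 >= a.y1

def clip(p, a):
    return Rect(max(p.x0, a.x0), max(p.y0, a.y0),
                min(p.x1, a.x1), min(p.y1, a.y1), p.depth, p.color)

def warnock(area, polygons, fill, background="black", min_size=1.0):
    """fill(area, color) is called once the colour of an area is decided."""
    polys = [p for p in polygons if not disjoint(p, area)]   # step 2
    if not polys:                                # (a) all polygons disjoint
        fill(area, background)
        return
    surrounding = [p for p in polys if surrounds(p, area)]
    if len(polys) == 1 and not surrounding:      # (b) one contained/intersecting
        fill(area, background)
        fill(clip(polys[0], area), polys[0].color)
        return
    if len(polys) == 1:                          # (c) single surrounding polygon
        fill(area, polys[0].color)
        return
    if surrounding:                              # (d) closest surrounding hides rest
        front = max(surrounding, key=lambda p: p.depth)
        if all(front.depth >= p.depth for p in polys):
            fill(area, front.color)
            return
    if area.x1 - area.x0 <= min_size:            # (e) pixel-sized area: pick closest
        fill(area, max(polys, key=lambda p: p.depth).color)
        return
    mx, my = (area.x0 + area.x1) / 2, (area.y0 + area.y1) / 2   # step 5
    for sub in (Rect(area.x0, area.y0, mx, my), Rect(mx, area.y0, area.x1, my),
                Rect(area.x0, my, mx, area.y1), Rect(mx, my, area.x1, area.y1)):
        warnock(sub, polys, fill, background, min_size)
```

The constant-depth simplification is what lets test d) reduce to a single comparison; with true planar polygons, depths would instead be compared at the corners of the area.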
Animation

Computer animation refers to a time sequence of visual changes. Animation is a key concept in entertainment, education, simulation, games, and scientific and engineering studies. Today the scope of animation is not limited to changing the position of an object over time; the concept extends to transformation operations such as scaling and rotation, and to variations in colour, transparency, surface properties, shape, and so on. Animation is achieved by displaying successive frames with minor differences: near-identical frames displayed at a sufficient rate create the illusion of motion in the scene. Computer animation can also be generated by changing the camera position, orientation, movement speed, focal length, lighting, etc.

Traditional Animation Techniques

Animation techniques can be classified as:
1. Conventional animation
2. Computer-based animation

I. Conventional Animation

In the conventional approach, all the frames of a video are drawn by hand and displayed at a rate of at least 24 frames per second. This method takes a tremendous amount of time and effort to create a video; it is tedious, but it gives great control over the animation being created. Conventional animation is not limited by available computing technology, and for high-quality animation it can still be faster than computer-based animation. The result is limited only by the ability of the artist.

II. Computer-Based Animation

Many frames can be calculated instead of drawn, and many scenarios or variations can be tried out quickly. Complex 3D models do not have to be drawn from different views by hand, and there are fewer tedious steps.

Principles of Animation

1. Squash and stretch
2. Anticipation
3. Staging
4. Straight ahead action and pose to pose
5. Follow through and overlapping actions
6. Slow-in and slow-out
7. Arcs
8. Secondary action
9. Timing
10. Exaggeration
11. Solid drawing and solid posing
12. Appeal

1. Squash and stretch: Squash and stretch define the elasticity, volume, and flexibility of a character. They are used for all kinds of objects in a scene, including facial animation, and appear in almost every type of animation, from a bouncing ball to a moving person.

2. Anticipation: Anticipation prepares the viewer for the main action to come, for example starting to run, jump, or kick; the first step of anticipation is to squat. Repetitive use of anticipation can achieve a comic effect. All motion contains a lesser or greater amount of anticipation, which adds a more realistic feel.

3. Staging: When filming a scene, where do you put the camera? Where do the actors go? What do you have them do? The combination of all these choices is what we call staging. It reflects the use of stage elements, pose, camera motion, and action; these elements are essential for demonstrating the attitude, temper, and reactions of a character. The use of various shots, including long shots, medium shots, and close-ups, helps narrate the story to the point. Because of the time limits of a video, the story should be conveyed with a clear agenda and without repetition.

4. Straight ahead action and pose to pose: These are two ways of drawing animation. In straight ahead action, you draw each frame of an action one after another as you go along. With pose to pose, you draw the extremes, that is, the beginning and end drawings of the action, then the middle frame, and then start to fill in the frames in between. Pose to pose gives you more control over the action: you can see early on where your character will be at the beginning and end, instead of hoping the timing works out.

5. Follow through and overlapping actions: When a moving object such as a person comes to a stop, parts of it may continue to move in the same direction because of forward momentum. While travelling in a bus at nearly constant speed, our body is steady: it moves with the bus but seems steady to those inside. On sudden braking, the bus stops but the body keeps moving forward; similarly, when the bus starts with a jerk, the body lags backward. This is follow through and overlap.

6. Slow-in and slow-out: In the real world, objects accelerate as they start moving and slow down before stopping, for example a running person, a car on the road, or a pendulum. To represent this in animation, more frames are drawn at the beginning and end of an action sequence (see the easing sketch after this list).

7. Arcs: In the real world, the motion of an object is rarely a straight line; consider a fan, a pendulum, a ball, hands, or legs. Circular or arc motion is very natural for objects, so it is essential to simulate arc motion to add realism.

8. Secondary action: Secondary actions are gestures that support the main action and add more dimension to character animation. They give more personality and insight into what the character is doing or thinking. For example, reading a newspaper is the primary action; sipping a cup of tea at the same time is a secondary action.

9. Timing: Timing is the number of frames used to perform an action. Fewer frames convey sharp, quick action; more frames convey smooth, slower action. For example, a car and a truck take different amounts of time to make a U-turn.

10. Exaggeration: Exaggeration presents a character's features and actions in an extreme form for comedic or dramatic effect. This can include distortions of facial features, body types, and expressions, as well as the character's movement. Exaggeration is a great way for an animator to increase the appeal of a character and enhance the storytelling.

11. Solid drawing and solid posing: Solid drawing is about making sure that animated forms feel like they are in three-dimensional space. A character's pose should be balanced, with evenly distributed weight and a stable centre of gravity, and it should be expressive, unambiguously communicating thoughts, condition, temper, feelings, etc.

12. Appeal: People remember real, interesting, and engaging characters. Animated characters should be pleasing to look at and have a charismatic aspect, and this applies even to the antagonists of the story. Characters like Tom, Jerry, Mickey Mouse, and Doraemon are so appealing that they have left a great impression on kids and adults alike.
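Referring back to principle 6, here is a small sketch of slow-in/slow-out as an easing curve applied to the interpolation parameter between two key poses; the cubic `ease_in_out` curve and the frame count are illustrative choices, not prescribed by the text.

```python
# Slow-in/slow-out sketch: instead of sampling the motion uniformly,
# remap the interpolation parameter t with an ease-in/ease-out curve so
# that frames bunch up near the start and end of the move.

def ease_in_out(t):
    """Cubic easing: slow near t = 0 and t = 1, fastest in the middle."""
    return 3 * t * t - 2 * t * t * t      # smoothstep, one common choice

def inbetween(p0, p1, t):
    """Linearly interpolate a position between two key poses."""
    return tuple(a + (b - a) * t for a, b in zip(p0, p1))

start, end = (0.0, 0.0), (100.0, 0.0)     # two key positions
frames = 9
for i in range(frames):
    t = i / (frames - 1)
    print(inbetween(start, end, ease_in_out(t)))
```

With `ease_in_out`, consecutive x values are closely spaced near 0 and 100 and widely spaced in the middle, which is exactly the frame distribution the principle calls for; with t used directly, the spacing would be uniform and the motion mechanical.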
Key Framing: Character and Facial Animation

Keyframe animation interpolates between a given pair of frames to generate the intermediate frames. The animator can control how many intermediate frames are generated; more keyframes produce smoother animation. Even a few seconds of animation of a bouncing ball requires many frames, so to generate smooth and realistic animation we should be careful in specifying the position and number of keyframes in the sequence. By specifying only the first and last frames as keyframes, we may end up generating only linear motion of the ball.

Keyframe animation can be applied to motion, shape, colour, and more. Motion animation generates the intermediate frames defining the motion of an object at any given moment. In shape transformation, the intermediate frames describe the transformation of one shape into another. In colour transformation, the in-between frames determine the colour change from the source frame to the target frame.

Fig: Keyframes for a bouncing-ball event.

Approaches for keyframe animation:
1. Shape interpolation
2. Parameter interpolation

Deformation

Deformation is achieved either by repositioning the object or by reshaping it. The first is called rigid body transformation, in which the shape of the object does not change; the second is deformation proper, in which the shape of the object is altered.

1) Rigid body transformation: In rigid body transformation, the object moves along a straight, circular, or arbitrary path while its shape and size remain intact. Translation, rotation, and reflection are examples of rigid body transformations; even after the transformation, the relative positions of the points of the object do not change. Though simple, rigid body transformation does not simulate real objects, which frequently change shape, size, and orientation; it provides only a mathematical abstraction. All of these functions are linear operations and can be represented effectively by matrix multiplication. The transformed coordinates of a point P are computed as

P' = M · P

where M is the corresponding transformation matrix (see the sketch after this section).

2) Shape deformation: In shape deformation, the shape of the object does change, and the relative positions of its points change as well. This is useful for achieving more realistic animation: deformation affects all points independently, so any kind of animation can be achieved thanks to the freedom to deform the shape to any degree. Deformation can be achieved in different ways:
1. Parameterized deformation
2. Lattice deformation
3. Composite deformation
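A minimal sketch of P' = M · P, using 3 x 3 homogeneous coordinates for a 2D point so that rotation and translation compose into a single matrix; the helper names are illustrative assumptions.

```python
# Rigid body transformation sketch: P' = M . P in 2D homogeneous
# coordinates, where M composes a rotation and a translation.
# Lengths and angles (hence shape and size) are preserved.
import math

def mat_mul(a, b):
    """3x3 matrix product (b may also be a 3x1 column vector)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3))
             for j in range(len(b[0]))] for i in range(3)]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

# Rotate a point 90 degrees about the origin, then shift it by (5, 0).
M = mat_mul(translation(5, 0), rotation(math.pi / 2))
P = [[1], [0], [1]]                  # point (1, 0) in homogeneous form
print(mat_mul(M, P))                 # -> approximately [[5], [1], [1]]
```

Because the rotation block of M is orthogonal, distances between transformed points are preserved, which is the defining property of a rigid body transformation.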
Motion Capture

Motion capture character animation is the process of recording an actor's movement and mapping it onto a 3D character in the computer. To achieve this, sensors are attached all over the actor's body; the movement of the sensors is tracked, recorded, and mapped onto the 3D model in real time. After retakes, the appropriate take is selected and the motion is applied to the 3D character. Motion capture is widely used in the film and animation industries: instead of generating individual frames from keyframes in the computer, the entire sequence is derived by tracing the movement of the sensors.

Even when animators create character movement by hand, they refer to video footage, study action on screen, and even watch themselves in a mirror. Creating digital animation by hand is known as "keyframing".

Apart from the film and game industries, sports and athletics also make heavy use of motion capture technology. For training a racer, sensors are attached to the bike and the rider; the motion of these sensors is recorded and analysed, and the racer is instructed to adjust posture accordingly in real time based on the data received from the sensors.
