
INDEX

Week 1
1. Introduction to graphics
2. Historical evolution, issues and challenges
3. Basics of a graphics system
4. Introduction to 3D graphics pipeline

Week 2
5. Introduction and overview on object representation techniques
6. Various boundary representation techniques
7. Spline representation – I
8. Spline representation – II

Week 3
9. Space representation methods
10. Introduction to modeling transformations
11. Matrix representation and composition of transformations
12. Transformations in 3D

Week 4
13. Color computation – basic idea
14. Simple lighting model
15. Shading models
16. Intensity mapping

Week 5
17. Color models and texture synthesis
18. View transformation
19. Projection transformation
20. Window-to-viewport transformation

Week 6
21. Clipping introduction and 2D point and line clipping
22. 2D fill-area clipping and 3D clipping
23. Hidden surface removal – I
24. Hidden surface removal – II

Week 7
25. Scan conversion of basic shapes – I
26. Scan conversion of basic shapes – II
27. Fill area and character scan conversion
28. Anti-aliasing techniques

Week 8
29. Graphics I/O devices
30. Introduction to GPU and shaders
31. Programming with OpenGL
32. Concluding remarks

Computer Graphics
Dr. Samit Bhattacharya
Computer Science and Engineering, Indian Institute of Technology, Guwahati

Lecture 01: Introduction to Graphics

Hello and welcome to the first lecture of the course Computer Graphics. In this lecture we will try to get an overview of the basic idea of graphics and what it means.

(Refer Slide Time: 00:50)

So, let us begin with a very simple, trivial question: what do we do with computers? I think most of you will be able to tell that we do a lot of things. Let us see some examples of the things that we do with a computer.

(Refer Slide Time: 01:07)

The first example is related to a document processing task. Essentially, we are interested in creating a document; let us see what we do there and what we get to see on the screen.

(Refer Slide Time: 01:29)

On the screen I have shown one example of document creation in progress; this is essentially the creation of the slides from which I am delivering the lecture. As you can see, there are many things being shown on the screen. So what are those things, the components that we are seeing on the screen?

(Refer Slide Time: 01:56)

In fact, there are a large number of different things. Since we are talking of document processing activities, the most important components are of course the alphanumeric characters. There are many such characters, the alphabets and the numbers. And how do we enter those characters? By using a keyboard, either physical or virtual.

(Refer Slide Time: 02:32)

But apart from that, there are other equally important components.

(Refer Slide Time: 02:39)

For example, the menu options that we see on the top of the screen, as well as the various icons representing editing tools, also on the top part of the screen. Here, and here, all these components are essentially editing tools and the icons representing those tools.

(Refer Slide Time: 03:19)

We also have the preview slides on the left part, which is another important component.

(Refer Slide Time: 03:28)

So, if you have noted, some of these components are shown as text, like the alphanumeric characters, and the others are shown as images, like those icons.
(Refer Slide Time: 03:41)

So, essentially there is a mix of characters and images that constitutes the interface of a typical document processing system.

(Refer Slide Time: 03:52)

Now, let us see another example, which you may or may not have seen but which is also quite common: a CAD interface, where CAD stands for Computer Aided Design. What I have shown here is one example of such an interface; there are many different systems with different interfaces.

(Refer Slide Time: 04:21)

What do these systems do? Essentially, with such a system someone can design machinery parts, and there are control buttons to perform various operations on those parts.

(Refer Slide Time: 04:46)

As you can see, the overall part, that is, the entire image, is constructed from individual components, like these smaller gears, this cylinder, these cubes. And these smaller components have some specified properties, for example dimensions.

(Refer Slide Time: 05:31)

Typically, engineers use such interfaces to create machinery by specifying individual components and their properties, and they try to assemble them virtually on the screen to check whether there is any problem in the specifications. Clearly, since everything is done virtually, the engineer does not require any physical development of the machinery, so it saves time, cost, and many other things. That is example 2.

(Refer Slide Time: 06:08)

Now let us see one more interesting example of computer graphics, related to visualization, that is, trying to visualize things that are otherwise difficult to visualize. Under visualization we will see a couple of examples. The first one is visualization of a DNA molecule. DNA, which as you all know stands for deoxyribonucleic acid, is essentially your genetic code, present in every cell, and it is not possible to see it with our bare eyes. But it would be good if we could see it somehow, and an application of computer graphics known as visualization makes that possible, as shown here. This type of visualization is known as scientific visualization, where we try to visualize things that occur in nature but that we cannot see, or find difficult to see, otherwise.

(Refer Slide Time: 07:23)

There is another type of visualization; let us see one example. Suppose we want to visualize a computer network and how traffic flows in it. Here by traffic I mean packets, the packets that move through the network. We are not in a position to visualize this with our eyes, but with a computer we can actually create a visualization of the network traffic flow.

(Refer Slide Time: 08:06)

This type of visualization is known as information visualization. Here we are not dealing with natural objects; instead we are dealing with man-made information, and we are trying to visualize that information. So, we have two types of visualization: scientific and information. These are applications of computer graphics that help us perceive and understand things that we otherwise would not be able to perceive.

(Refer Slide Time: 08:55)

So, as I said, each of the examples discussed earlier is an example of the use of computer graphics.
(Refer Slide Time: 08:57)

But these are only three examples. In fact, the spectrum of such applications of computer graphics is huge; everything that we get to see around us involving computers is basically an application of computer graphics, and it is definitely not possible to list all those applications.

(Refer Slide Time: 09:28)

Also, we have to keep in mind that we are talking not only about desktop or laptop screens but about a plethora of other types of displays as well: mobile phones, information kiosks at popular spots such as airports, ATMs, large displays at open-air music concerts, air traffic control panels, even movie screens in theatres. All these are kinds of displays, and whatever is shown on these displays is mostly an application of computer graphics. So we have two things: first, a large number of applications; second, applications on all possible displays.

(Refer Slide Time: 10:26)

And as I have already mentioned, for those who are not very conversant with the inner workings of a computer, whenever we use the term computer, the thing that comes to mind is the display and whatever is being shown on it. Essentially, the display is considered to be the computer by those who are not well accustomed to its inner workings.

(Refer Slide Time: 11:04)

Now, what is the common thing between all these applications? Instances of images that are displayed. Here by image we are referring both to text, the alphanumeric characters, and to actual images, because text is also treated as images, as we shall see in subsequent lectures.

(Refer Slide Time: 11:19)

And these images are constructed from objects, or components of objects, as we discussed for the CAD application, where there are individual objects. These objects are essentially geometric shapes, and to these objects we assign some colors, like the yellow here, the blue here, or the white here. So there are colored geometric objects, which are used to create the overall image.

(Refer Slide Time: 12:08)

Along with that, there is one more thing. When we create, edit, or view a document, we are dealing with alphanumeric characters, and each of these characters is an object. Again, we shall see in detail why characters are considered objects in subsequent lectures. These objects are rendered on the screen with different styles, sizes, and colors, like the typical objects noted in the previous case.

(Refer Slide Time: 12:42)

Similarly, if we are using some drawing application or package, like MS Paint or the drawing tool of MS Word, we deal with other shapes such as circles, rectangles, and curves. These are also objects, and with these objects we create a bigger object, or bigger image.

(Refer Slide Time: 13:12)

Finally, in the case of animation videos or computer games, which involve animation anyway, we often deal with virtual characters. These are artificially created characters, which may or may not be human-like.

(Refer Slide Time: 13:31)

And all these images or their components can be manipulated, because nowadays most graphics systems are interactive. The user can interact with the screen content and manipulate it; for that, input devices are there, such as the mouse, keyboard, joystick, and so on.

(Refer Slide Time: 14:01)

Now, how can a computer do all these things? And what exactly are those things?
Let us recap. Images consist of components, so we need to represent those components; then we need to put them together in the form of an image; we should allow the user to interact with those components or the whole image through input devices; and we should be able to create the perception of motion by moving those images. How can a computer do all of this?

(Refer Slide Time: 14:42)

You have probably already done some basic courses, so you know that computers understand only binary language, the language of 0s and 1s. On the other hand, in computer graphics what we have are letters, numbers, symbols, characters, and these are not 0s and 1s; they are things that we can perceive and understand. So two questions arise.

(Refer Slide Time: 15:23)

The first question is how we can represent such objects in a language that the computer understands and can process.

(Refer Slide Time: 15:39)

The second question is how we can map from the computer's language to something that we can perceive. With the computer's output in 0s and 1s, we would not be able to understand what it means; we want it again in the form of those objects mentioned earlier. So one thing is mapping from our understanding to the computer's language, and the other is mapping from the computer's language back to ours.

(Refer Slide Time: 16:06)

In other words, how can we create, represent, synthesize, and render images on a computer display? This is the fundamental question that we try to answer in computer graphics.

(Refer Slide Time: 16:23)

From this fundamental question we can frame four component questions.

(Refer Slide Time: 16:29)

First, as we have already said, imagery is constructed from constituent parts. So how can we represent those parts? That is the first basic question.

(Refer Slide Time: 16:46)

The second question is how to synthesize the constituent parts to form complete, realistic imagery.

(Refer Slide Time: 17:01)

The third question is how to allow users to manipulate the imagery or its constituents on the screen with the use of input devices.

(Refer Slide Time: 17:22)

And finally, the fourth question is how to create the impression of motion, that is, animation. So these are the four questions: how to represent, how to synthesize, how to interact, and how to create animation.

(Refer Slide Time: 17:43)

Now, in computer graphics we seek answers to these four basic questions.

(Refer Slide Time: 17:47)

Here a few things need to be noted. First of all, when we talk of computer screens, we use the term in a very broad sense, because screens vary greatly nowadays, from small displays to display walls, and these variations indicate corresponding variations in the underlying computing platform. However, we will ignore those differences; when we refer to a computer screen, we will assume that we are referring to all sorts of screens.

(Refer Slide Time: 18:33)

Accordingly, our objective would be to seek efficient solutions to the four basic questions for all possible platforms. For example, displaying something on a mobile phone requires techniques different from displaying something on your desktop, because the underlying hardware may be different.
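Before moving on, here is a minimal sketch, in C++, of what the first question, representing the constituent parts in a form a computer can process, might look like in practice. The type names here are hypothetical, chosen purely for illustration; the actual representation techniques are developed from Week 2 onwards.

```cpp
#include <vector>

struct Color { unsigned char r, g, b; };          // a color as three intensities
struct Point { float x, y; };                      // a 2D location
struct Line  { Point start, end; Color color; };   // a colored geometric primitive

int main() {
    // An "image" described as a collection of colored primitives. Ultimately
    // everything in this structure is stored as 0s and 1s, which is exactly
    // the mapping the first question asks for.
    std::vector<Line> picture = {
        { {0.0f, 0.0f}, {5.0f, 5.0f}, {255, 0, 0} },  // a red line
        { {5.0f, 5.0f}, {9.0f, 2.0f}, {0, 0, 255} },  // a blue line
    };
    (void)picture;  // the data layout itself is the point of this example
    return 0;
}
```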
Coming back to the platform variations: there are differences in CPU speed, memory capacity, power consumption, and so on. So when we propose a solution to one or all of these questions, we should keep these underlying variations in mind.

(Refer Slide Time: 19:23)

Now, in summary, what we can say about computer graphics is that it is the process of rendering static images or animation (which is a sequence of images) on a computer screen, and that too in an efficient way, where efficiency refers to the efficient utilization of the underlying resources.

(Refer Slide Time: 19:48)

In this course we shall learn this process in detail, particularly the stages of the pipeline, where the pipeline refers to the set of stages that make up the whole rendering process, and pipeline implementation, that is, how we implement the stages. This involves a discussion of the hardware and software basics of a graphics system. However, we will not discuss the process of creating animation, which is a vast topic in itself and requires a separate course altogether.

(Refer Slide Time: 20:31)

Just for your information, there is a related term some of you may have heard of, called image processing. In image processing we manipulate images, whereas in computer graphics we synthesize images, and we also synthesize them in a way that gives us a perception of motion, which we call animation. So computer graphics deals with the synthesis of images and animation, whereas image processing deals with the manipulation of already captured images. In many applications the two are linked, but we will not discuss that within the limited scope of this course.

(Refer Slide Time: 21:17)

Whatever we have discussed today you can find in detail in this book; more specifically, you should refer to chapter 1, section 1 for the topics covered today. In the next lecture we will go through the historical evolution of the field, followed by a discussion of the issues and challenges faced by workers in this field. Thank you and goodbye.

Computer Graphics
Professor Dr. Samit Bhattacharya
Department of Computer Science and Engineering, Indian Institute of Technology, Guwahati

Lecture No. 2: Historical Evolution, Issues and Challenges

Hello and welcome to lecture number 2 in the course Computer Graphics. Before we start, let us recap what we learned in the previous lecture.

(Refer Slide Time: 0:40)

In the last lecture, if you recall, we got introduced to the field and talked about the basic idea: what computer graphics is and what it deals with. Today we will discuss the historical evolution of the field, and also the issues and challenges faced by researchers in this area. Knowledge of the historical evolution is always beneficial for a broader understanding of the subject, so we will go into the evolution in some detail, followed by a discussion of the issues and challenges.

(Refer Slide Time: 1:25)

In the early days, when computers had just started appearing, that is, in the 1940s and 50s of the last century, a display constituted a terminal unit capable of showing only characters. So in the earlier days we had displays that showed only characters; there was no way to show anything else.

(Refer Slide Time: 2:05)

Subsequently, the ability to show complex 2D images was introduced; that was a later development.
Now, with the advent of technology, other things changed.

(Refer Slide Time: 2:24)

We now have higher memory capacity and increased processor speeds. Along with those changes, display technology also improved significantly. So we had three broad developments: memory capacity enhancement, processor speed increase, and improvement in display technology.

(Refer Slide Time: 2:56)

All three together made it possible to display complex 3D animations, which are computationally intensive and which assume that we are capable of performing the computations in real time. How computationally intensive these processes are, we will see in subsequent lectures; in fact, that is the core content of this course.

(Refer Slide Time: 3:43)

Now, if we look closer at 3D animation, we will see that there are two aspects: one is the synthesis of frames, and the second is combining the frames and rendering them in a way that generates a perception of motion, the motion effect. Both synthesizing frames and combining and rendering them on the screen to generate motion are complex, resource-intensive processes; they require a lot of hardware resources. These are the main focus areas of present-day computer graphics activity: how to make these processes workable in the modern-day computing environment.

(Refer Slide Time: 4:48)

Now, we keep using the term computer graphics, but it has an origin. The term was first coined by William Fetter of the Boeing Corporation in 1960, that is, 60 years ago.

(Refer Slide Time: 5:09)

Subsequently, Sylvan Chasen of Lockheed Corporation in 1981 proposed four phases of the evolution of the field. What are those phases? The first phase was concept to birth, typically considered to be between 1950 and 1963 and also known as the gestational period. The second phase is the childhood phase, of short duration, 1964 to 1970. Then we have adolescence, again a significant phase, spanning the 1970s to the early 1980s. And then we have adulthood, which started in the early 1980s and is still continuing. So these are the four phases proposed by Sylvan Chasen in 1981: the gestational period, childhood, adolescence, and adulthood. Now let us have a quick look at the major developments that took place in each of these phases.

(Refer Slide Time: 6:31)

Let us start with the first phase, the gestational period between 1950 and 1963, at the early stages of computing. If you are aware of the evolution of computers, you know that the gestational period coincides with the early developmental phases of computing technology itself; that was the phase when the technology evolved. Nowadays we take for granted the availability of interfaces popularly known as graphical user interfaces; we get to see them on almost all of our computer screens, whether we are using desktops, laptops, or even smartphones. But in the gestational period the GUI concept was not there; in fact, nobody was even aware of the possibility of such an interface. It could not even be imagined.

(Refer Slide Time: 7:47)

In that phase, one system was developed called SAGE, which stands for Semi-Automatic Ground Environment.
It was developed for the benefit of the US Air Force, as part of a bigger project, the Whirlwind project, which started in 1945. The SAGE system is an early example, from the gestational period, demonstrating the use of computer graphics.

(Refer Slide Time: 8:39)

What did this system do? The basic idea of the project was to get the positional information of aircraft from radar stations, which is typically the job of a radar network. An operator, like the one shown here, sat in front of a screen, not the traditional screens we are accustomed to, but an early version of a screen.

(Refer Slide Time: 9:21)

On this screen aircraft were shown, and on each aircraft other data, the data received from the radar, was superimposed. So essentially, a geographical region was shown on the screen, and on that region the aircraft information was displayed.

(Refer Slide Time: 9:48)

There was one more aspect of the system: it was, in a sense, an interactive system. The operator could interact with it using an input device called a light gun, or light pen. If an aircraft was shown on the screen, the operator could point the pen at that aircraft to get its identification information.

(Refer Slide Time: 10:30)

So, when the gun was pointed at a plane symbol on the screen, an event was sent to the Whirlwind system, which in turn sent back the details about the plane as text, its identification information, which was then displayed on the operator's screen. Something like this: as you can see, this is a light gun, or light pen; the operator points the pen at the screen where an aircraft symbol is shown, and once the pointing is done, the system sends a message to the overall Whirlwind system, which has all the information, and that information is sent back to the interface to be seen by the operator.

(Refer Slide Time: 11:34)

So, as I said, the SAGE system, part of the Whirlwind project, had traces of interactive graphics, where the interaction was done with light guns or light pens. But it was still not fully interactive in the way we understand interaction in the modern context. The true potential of interactive computer graphics came into the picture after the development of another system, called Sketchpad, by Ivan Sutherland in 1963. The Sketchpad system was part of Ivan Sutherland's doctoral thesis at MIT, and it demonstrated both the idea and the potential of an interactive graphics system.

(Refer Slide Time: 12:46)

Like the SAGE system, in Sketchpad the interaction was done through a light pen, and it was meant for developing engineering drawings directly on a CRT screen. Here the operator did not need to be a passive input provider; active input could be given in the form of creating drawings directly on the screen. An example is shown in this figure: this is the screen, and on the screen the operator is holding a light pen to create a drawing.

(Refer Slide Time: 13:36)

Now, this Sketchpad system actually contains many firsts.
It is widely considered to be the first GUI, although the term GUI was not yet in use at that time. It is also credited with pioneering several concepts of graphical computing, namely how to represent data in memory, how to deal with flexible lines, the ability to zoom in and out, and drawing perfectly straight lines, corners, and joints. These are things we take for granted nowadays, but they were very difficult at the time, and Sketchpad managed to demonstrate that they were possible. Accordingly, Sutherland is widely acknowledged as the grandfather of interactive computer graphics.

(Refer Slide Time: 14:50)

Along with SAGE and Sketchpad, the gestational period also saw the development of many other influential systems.

(Refer Slide Time: 15:03)

During this phase the first computer game, called Spacewar, was developed in 1961 on a PDP-1, an early computing platform.

(Refer Slide Time: 15:25)

IBM also developed the first CAD, or Computer Aided Design, system. Recollect from our previous lecture that these systems are meant to help engineers create mechanical drawings and test various things without actually having to build the system. IBM came up with this first CAD system in 1964, although the work had started in 1959.

(Refer Slide Time: 16:02)

The gestational period was followed by the childhood period, a reasonably short period of only six or seven years. In this period not much that was significantly new happened; rather, further development took place along the lines established in the gestational period, and the earlier ideas were consolidated.

(Refer Slide Time: 16:37)

Then came the adolescence period, mostly confined to the 1970s and the early 1980s. In this phase many new things happened. In 1971 Intel released the first commercial microprocessor, the 4004. As we all know, with the coming of the microprocessor a paradigm shift took place in the way computers were designed, and that in turn impacted the computer graphics field in a significant way by making computation less costly and more affordable.

(Refer Slide Time: 17:32)

As a result, several interesting things happened in this period. Primarily, two types of development took place: techniques for realistic 3D graphics were devised, and several applications were developed, particularly in the entertainment and movie-making fields. As a result of those applications, people started noticing the potential of the field and invested more and more time and money. Both developments were significant in the context of the overall evolution of the field.

(Refer Slide Time: 18:16)

Now, what work was done for realistic 3D image generation? One important development was the work on lighting models, which we will learn about later. These models were meant to assign colors to pixels, and this coloring of pixels, the smallest graphical units on a screen, is very important for giving us the perception of realistic images, as we shall see in detail in later lectures.

(Refer Slide Time: 19:03)

Apart from that, another development took place: texture mapping techniques. Texture is basically the pattern that we get to see on a surface.
So, if we can impose textures on our artificially created object surfaces, that will definitely lead us to more realistic images, and that development took place in the adolescence period. The first notable work was done by Catmull in 1974. As you can see, textures are shown on this object, and because of that we are able to make out that it is a 3D object with certain characteristics. Without texture it would look dull and unrealistic.

(Refer Slide Time: 20:05)

An advanced form of texture mapping was achieved through bump mapping, by Blinn in 1978. As in the example shown here, special types of textures were incorporated on the object surfaces to make them look more real and natural. These are called bumps, hence bump mapping.

(Refer Slide Time: 20:34)

Another development was an advanced technique for creating 3D images called ray tracing, with the first notable work appearing around 1980, in the adolescence period. Using this technique, we can produce realistic 3D images on a 2D screen in a better way than with the other techniques. These techniques were all developed to improve the quality of synthesized images, to make them more realistic and natural. So, to recap, broadly four approaches were developed in this phase: basic work on lighting models, followed by texture mapping, bump mapping, and finally ray tracing methods.

Apart from that, as I mentioned earlier, another strand of development during this phase was the creation of several applications of computer graphics, based on whatever was the state of the art at the time, particularly in entertainment and movie making.

(Refer Slide Time: 22:12)

In 1973 the movie Westworld came out, the first movie to use computer graphics.

(Refer Slide Time: 22:26)

This was followed in 1977 by the movie Star Wars; I think most of you, if not all, are aware of this movie. It became hugely popular throughout the world, and as a result people learned about the potential of computer graphics in a more compelling way.

(Refer Slide Time: 23:01)

The adolescence period was followed by the adulthood period, starting from the early 1980s. The field entered adulthood with the release of the IBM PC in 1981. As we all know, after the advent of the PC, or personal computer, computers became a mass product. Earlier, they had been confined to a few well-educated people at an advanced stage of study, primarily doing research or development work; after the advent of the PC, computers proliferated and became a mass product. And since they had become a mass product, focus shifted to the development of applications appealing to the masses.

(Refer Slide Time: 24:15)

Using computer graphics, lots of such applications were developed, and the focus shifted from graphics for experts to graphics for laymen.

(Refer Slide Time: 24:32)

As a result, we got to see several developments, including the development of GUIs and the associated concepts. In fact, so many developments took place that they gave rise to a new field of study called human-computer interaction, or HCI in short.

(Refer Slide Time: 24:52)

One thing that happened during this phase is that a self-sustaining cycle of development emerged. What is that?
As more and more user-friendly systems emerge, they create more and more interest among people, which in turn brings new enthusiasm and investment in innovative systems. So it is a self-sustaining cycle of development: more applications appeal to more people, people in turn want more, so more investment comes in, and the cycle continues. It is still continuing.

(Refer Slide Time: 25:42)

As a result of this self-sustaining cycle, other associated developments took place. From the CPU we migrated to the GPU, or graphics processing unit, dedicated hardware for graphics. Storage capacity improved significantly, to the point of being able to store and process the large amounts of data required for realistic 3D graphics; we now talk in terms of terabytes and petabytes instead of the kilobytes or megabytes of earlier days. Similarly, display technology has seen huge improvement, from the earliest cathode ray tubes to modern-day touchscreens, display walls, and even better things. All this took place because of the self-sustaining cycle of development.

(Refer Slide Time: 26:42)

So we can say that these technological developments brought a paradigm shift in the field, and with the help of new technology we are now in a position to develop algorithms that generate photorealistic 3D graphics in real time. All of this will form the core subject matter of our discussion in subsequent lectures. Note that these are computation-intensive processes, and it is because of the advancement in technology that such processes have become manageable, possible to implement in real time.

(Refer Slide Time: 27:40)

And since we are now able to do those things, the appeal and applications of computer graphics have increased manifold, and the presence of all these factors implies that the field is growing and will continue to grow in the foreseeable future. So that, in brief, is the evolution of the field: four phases, starting with the gestational period and ending with adulthood, along with the major developments. Now let us shift our focus to another important aspect: what are the issues and challenges that confront workers in this field?

(Refer Slide Time: 28:28)

In the formative stages of the field, the primary concern, as we all know, was the generation of 2D images or 2D scenes.

(Refer Slide Time: 28:39)

But as we have already discussed, that subsequently changed; 2D graphics is no longer the thrust area, and nowadays we are mostly focused on the generation of 3D graphics and animation.

(Refer Slide Time: 29:02)

In the context of 3D graphics and animation, there are three primary concerns related to software development for such systems.

(Refer Slide Time: 29:19)

One is modeling, which essentially means creating and representing object geometry in a 3D world. Here we have to keep in mind that we are talking not only about solid geometric objects but also about phenomena such as the billowing of smoke, rain, and fire, that is, natural events. How to model both objects and phenomena is one concern.

(Refer Slide Time: 29:58)

The second concern is rendering: creating and displaying a 2D image of the 3D objects. Why a 2D image? Because our screen is 2D, so we have to convert the 3D objects into a 2D form.
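As a small illustration of this 3D-to-2D conversion, here is a sketch of the standard perspective projection onto a plane at distance d from a viewer at the origin. This is a common textbook formula offered only as a preview, not the lecture's own derivation; projection transformations are treated properly in Week 5.

```cpp
#include <cstdio>

struct Point3 { float x, y, z; };
struct Point2 { float x, y; };

// Project a 3D point onto the 2D plane z = d (viewer at the origin, looking
// along +z). Similar triangles give x' = d*x/z and y' = d*y/z.
Point2 project(Point3 p, float d) {
    return { d * p.x / p.z, d * p.y / p.z };
}

int main() {
    Point2 q = project({2.0f, 3.0f, 10.0f}, 1.0f);
    std::printf("(%.2f, %.2f)\n", q.x, q.y);  // prints (0.20, 0.30)
    return 0;
}
```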
Rendering thus deals with issues related to displaying the modeled objects on the screen, and there are other related issues involved, namely color and illumination (coloring the pixels on the screen, which involves simulating the optical processes), visible surface determination with respect to the viewer position, texture synthesis (textured patterns on surfaces to mimic realism), the 3D-to-2D transformation, and so on.

(Refer Slide Time: 31:11)

The third major issue related to graphics software is animation: describing how the image changes over time. What does it deal with? It deals with imparting motion to the objects to simulate movement, to give us a perception of movement. The key concerns here are the modeling of motion and the interaction between objects during motion. So the three major issues related to software are modeling of objects, rendering of objects, and creation of animation. Now, there are some hardware-related issues as well.

(Refer Slide Time: 32:06)

Why are those important? Because the quality and cost of the display technology are important concerns; there is always a tradeoff between the two, the quality of the hardware and its cost. We cannot get high quality at low cost, and while building a graphics application we need to keep this tradeoff in mind.

(Refer Slide Time: 32:39)

Along with that, we need to keep in mind the selection of an appropriate interaction device, because nowadays we are talking of interactive computer graphics. The interaction component is important, and it is important to choose an appropriate mode of interaction, or input device, so that the interaction appears intuitive to the user. The user should not be forced to learn complex patterns or operations; it should be as natural as possible.

(Refer Slide Time: 33:20)

Finally, the design of specialized graphics devices to speed up the rendering process is also of utmost importance, because graphics algorithms are computation-intensive, and if we have dedicated hardware to perform those computations, we can expect better performance. The issue is how to design such hardware at an affordable cost; that is the primary concern related to hardware platforms for computer graphics. So, from the hardware point of view, we have the quality-versus-cost tradeoff to keep in mind, along with the type of input device we use and the dedicated graphics hardware we can afford.

(Refer Slide Time: 34:31)

One thing we should note here is that in this course we shall learn how these issues are addressed, but we will not discuss issues related to animation; we will restrict our discussion to the modeling and rendering of 2D images on the screen.

(Refer Slide Time: 34:57)

Whatever we have discussed so far can be found in chapter 1 of the book we are following. You are advised to go through sections 1.1 and 1.2 for more details on the topics covered today. That is all for today; we will meet again in the next lecture. Thank you and goodbye.

Computer Graphics
Professor Dr. Samit Bhattacharya
Department of Computer Science and Engineering, Indian Institute of Technology, Guwahati

Lecture 3: Basics of a Graphics System

Hello and welcome to lecture number 3 in the course Computer Graphics.
Before we go into today's topics, let me briefly recap what we learned in the previous lectures.

(Refer Slide Time: 0:45)

In the first lecture we got a basic introduction to the field: what graphics is and what the main characteristics of the field are. This was followed by a brief discussion of the historical evolution as well as the issues and challenges that confront researchers and workers in this area. These topics we covered in the previous lectures. Today we shall introduce a basic graphics system, so that our subsequent discussions will be easier to understand.

(Refer Slide Time: 1:25)

So, what do we do in computer graphics? The answer is simple: we generate, or synthesize, a 2D image from some scene and display it on a screen. Essentially, generation of images and their display on the screen. Now, how do we do that? In the previous lectures we went into some aspects of this question; now let us try to understand the answer from the perspective of the graphics system.

(Refer Slide Time: 2:00)

If we look at a graphics system, the components that are likely to be there look something like this. We have a host computer, where all the processing takes place. Then we have a display controller, one component of the graphics system; the display controller takes input from the host computer in the form of display commands, and also from the various input devices mentioned earlier that enable us to interact with the screen content. The output of the display controller goes to another component called video memory. The video memory content goes to a third component, called the video controller, which eventually helps to display the image on the display screen. So there are broadly three components unique to a graphics system: the display controller, the video memory, and the video controller. We will have a brief discussion of each of these components for better understanding.

(Refer Slide Time: 3:40)

Let us start with the display controller. The image generation task is performed by the display controller. So when we say that in computer graphics our primary objective is to generate an image, that generation task is performed by the display controller, and it takes input from the CPU of the host computer as well as from external input devices such as the mouse, keyboard, joystick, etc.

(Refer Slide Time: 4:12)

Based on these inputs it generates images, and these images are generated by a multi-stage process involving a lot of computation.

(Refer Slide Time: 4:30)

One concern here is that if all these computations were carried out by the host CPU, it would have very little time to perform other computations. A computer is not meant only for display; it is supposed to perform other activities as well. If the CPU were occupied with only the computations relevant for display, it would not have time for other computations, which would affect the throughput of the system. In such a situation the host computer would not be able to do much except graphics, which is definitely not a desirable situation.

(Refer Slide Time: 5:20)

To avoid such situations and increase the efficiency of the system, the job of rendering, or displaying, is usually carried out by a dedicated component of the system, which probably some or all of us have heard of: the graphics card.
On this card there is a dedicated processor; like the CPU, we have a dedicated processing unit for graphics computing, called the GPU, or Graphics Processing Unit. Later on we will have one lecture on the basic idea of the GPU; for the time being we will just note that there is a unit called the GPU on the graphics card.

(Refer Slide Time: 6:24)

The CPU assigns any graphics rendering task to this separate graphics unit, and we call this unit the display controller. That is of course a generic name; different systems call it different things. Essentially, the display controller performs the multi-stage operations required to create, or synthesize, a 2D image.

(Refer Slide Time: 7:15)

The second component is video memory. The output of the display controller is some representation of the 2D image, and the video memory, which, if we recollect the generic architecture, takes the display controller's output as its input, stores that representation.

(Refer Slide Time: 7:29)

The display controller generates the image in digital format, strings of 0s and 1s, which is expected, because a computer understands and processes information only in terms of 0s and 1s.

(Refer Slide Time: 7:45)

The place where we store it is the video memory, which is a dedicated part of the memory hierarchy. As we all know, in the memory hierarchy of a computing system we have RAM, ROM, secondary storage, and cache at different levels; video memory is also part of that hierarchy, and typically it is situated on the separate graphics unit, or graphics card. It is more popularly called VRAM, or video RAM, a term many or all of you have probably heard. So the display controller generates the image representation, and that representation is stored in video memory.

(Refer Slide Time: 8:48)

Then comes the video controller. Going back to the generic architecture, the video controller is situated here; it takes as input the information stored in video memory and then does something to display the image on the screen.

(Refer Slide Time: 9:13)

What does it do? It essentially converts the digital image, represented in the form of 0s and 1s, into analogue voltages. Why? Because the voltages drive the electromechanical arrangements that ultimately render the image on the screen. The screen is essentially an electromechanical mechanism; to run this mechanism we require voltages, and these voltages are generated by the video controller based on the 0s and 1s stored to represent the image.

(Refer Slide Time: 10:05)

On each display screen we have a basic unit of display, typically called the pixel, and the pixels are typically arranged in the form of a grid or matrix: if I draw a screen, we will have a pixel grid where each cell represents a pixel, essentially a matrix of pixels.

(Refer Slide Time: 10:40)

These pixels are excited by electrical means, and when they are excited they emit light with specific intensities. These intensities give us the sensation of colors, and hence of colored images. So pixels are there on the screen, pixels are excited by electrical means, and after excitation they emit light with the specified intensity, which gives us a sensation of color. If some portion of an image has the color red, the corresponding pixels will emit light with the intensity of red, so that we get the red color sensation.
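As a toy illustration of the digital-to-analogue step just described, the sketch below assumes a simple linear mapping from an 8-bit intensity value to a drive voltage. Real display electronics are considerably more involved, so treat this only as a mental model.

```cpp
#include <cstdio>

// Assumed linear mapping: 0 -> 0 V (pixel dark), 255 -> maxVolts (full brightness).
float toVoltage(unsigned char intensity, float maxVolts) {
    return (intensity / 255.0f) * maxVolts;
}

int main() {
    std::printf("%.2f V\n", toVoltage(128, 5.0f));  // about half of a 5 V drive
    return 0;
}
```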
(Refer Slide Time: 11:30)

The mechanism through which these pixels are excited is the job of the video controller. The video controller is tasked with exciting pixels through electrical means, by converting the digital input signal, 0s and 1s, into analogue voltage signals, which in turn activate the suitable electromechanical mechanism that is part of the controller. So that, in a very broad sense, is what a graphics system looks like: it has three unique components, the display controller, the video memory, and the video controller. The display controller is responsible for creating a digital representation of the image to be displayed, which is stored in the video memory; this image information is then used to excite pixels on the screen, to emit light of specific intensities, to give the sensation of a colored image. This job of exciting the pixels on the screen is done by the video controller. Now, in light of this broad description of a graphics system, let us move to our next topic: types of graphics systems, or graphics devices.

(Refer Slide Time: 13:08)

There are broadly two types of graphics systems, distinguished by the method used to excite the pixels. What are these two types? One is the vector scan device; the other is the raster scan device.

(Refer Slide Time: 13:33)

Let us start with the vector scan device. These devices are also known as random scan, stroke writing, or calligraphic devices.

(Refer Slide Time: 13:47)

In this type of device, an image is represented, or assumed to be represented, as a composition of continuous geometric primitives such as lines and curves. Any image is assumed to be composed of lines and curves, and when we render or display such an image on the screen, we essentially render these basic geometric shapes. We no longer talk about the whole image; instead we talk about the component lines and curves that define it.

(Refer Slide Time: 14:37)

In other words, a vector scan device excites only those pixels of the pixel grid that are part of these primitives. To a vector scan device there is no such concept as a full image; it knows only about the constituent geometric primitives, and it excites the pixels that are part of those primitives.

(Refer Slide Time: 15:10)

An example is shown here: consider the line in the left figure; the corresponding pixels, in a truncated part of the grid, are highlighted in the right figure. To the vector scan device the image is not the line but only this set of pixels; it knows only about these pixels rather than the line, and these pixels are excited to generate the line image. Only these pixels are excited; the others are not. This is important: in a vector scan device we excite only the pixels that are part of the primitives, and the other pixels are not touched.

(Refer Slide Time: 16:04)

As a result, what do we need to do? We need to selectively excite pixels, which is a very tough job requiring high-precision, complex hardware.

(Refer Slide Time: 16:26)

That in turn makes these devices costly, because it takes money to develop such high-precision hardware. Also, because of the selective excitation, vector scan devices are good for rendering wireframes, which are basically outline images.
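A minimal sketch of what "knowing only the primitives" means in practice: the input to a vector device can be thought of as a display list of segments. The structure below is illustrative, assuming a simple endpoint representation; it is not any real device's command format.

```cpp
#include <vector>

struct Segment { float x0, y0, x1, y1; };

int main() {
    // A square outline as a vector device sees it: four segments, nothing more.
    // Pixels that lie on no segment are never visited, let alone excited.
    std::vector<Segment> displayList = {
        {0, 0, 100, 0}, {100, 0, 100, 100}, {100, 100, 0, 100}, {0, 100, 0, 0},
    };
    (void)displayList;
    return 0;
}
```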
For complex scenes involving a lot of filled areas, flicker is visible because of this mechanism of selective excitation, which is not a good thing.

(Refer Slide Time: 17:18)

The other type of graphics device is the raster scan device. In a raster scan device an image is viewed as being represented by the whole pixel grid. Earlier we considered an image to be represented by only a subset of the pixel grid; here we consider the whole grid, not only the selected pixels representing the primitives. So when we render an image on a raster scan device, all the pixels are considered; in the vector scan device we considered only a subset and did not touch the other pixels, but here all pixels are considered. And how do we consider them?

(Refer Slide Time: 18:08)

By considering the pixels in a sequence. What is the typical sequence? Left to right, top to bottom. If we have a grid, we typically start from the left of the top row, move towards the right end, then go to the next row, move towards its right end, and continue in this way until we reach the bottom-right pixel.

(Refer Slide Time: 18:41)

The same thing stated differently: the controller starts with the top-left pixel and checks whether the pixel needs to be excited; that information is stored in memory. If it needs to be excited, the controller excites it; otherwise it leaves it unchanged. Note that either way the pixel is considered for excitation, and action is taken accordingly.

(Refer Slide Time: 19:16)

It then moves to the next pixel on the right and repeats these steps until the last pixel in the row is reached.

(Refer Slide Time: 19:29)

Then the controller considers the first pixel in the next row and repeats the steps, continuing in this manner until the bottom-right pixel of the grid.

(Refer Slide Time: 19:43)

This sequential consideration of pixels is known as scanning; in raster scan devices pixel scanning takes place, and each row of the grid is known as a scan line.

(Refer Slide Time: 20:23)

Let us consider the same example. Earlier we considered only the pixels that are part of this line; now we consider all pixels, starting from the top-left corner, row by row, to the end. Each row is a scan line, and as you can see in the right-hand figure, the white pixels are those that need not be excited: the system considered each of them, found that they need not be excited, and moved on to the next pixel. The filled circles indicate excited pixels, which represent the line; that information was in memory, and the video controller found that these pixels needed to be excited, so it excited them. In the process, it considered all the pixels in the grid and excited only those that needed to be excited.

(Refer Slide Time: 21:42)

The video memory of a raster scan system is more generally known as the frame buffer, where each location corresponds to a pixel. So the size of the frame buffer equals the screen resolution, the size of the pixel grid, which is of course obvious.
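The scanning logic just described can be summarized in a short sketch, assuming a 1-bit frame buffer (one on/off entry per pixel) and a hypothetical excite() function standing in for the hardware's pixel drive; a real video controller does this in hardware, not in software.

```cpp
const int WIDTH = 640, HEIGHT = 480;   // frame buffer size = screen resolution
bool frameBuffer[HEIGHT][WIDTH];       // one on/off entry per pixel

void excite(int row, int col) { /* stand-in for the hardware's pixel drive */ }

void scanFrame() {
    for (int row = 0; row < HEIGHT; ++row) {      // each row is a scan line
        for (int col = 0; col < WIDTH; ++col) {   // left to right within the row
            if (frameBuffer[row][col])            // every pixel is considered,
                excite(row, col);                 // only flagged ones are excited
        }
    }
}
```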
(Refer Slide Time: 22:07)

Now, there is one interesting fact you should be aware of. Display controllers are typically very fast; they work at the speed of the CPU, that is, on a nanosecond scale, so any operation is done in very little time. Video controllers, on the other hand, are typically much, much slower than display controllers, because they involve electromechanical arrangements, which take time to work; their typical speed is on a millisecond scale. Clearly there is a mismatch between the speed at which the display controller can produce output and the speed at which the video controller can take that output as input.

(Refer Slide Time: 23:15)

Assume there is only one video memory, or frame buffer, and the display controller's output is fed directly as input to the video controller through that frame buffer. The output is produced very fast, but it is consumed at a much lower rate, so the output may get overwritten before the entire output is taken by the video controller as input. This may result in a distorted image, because before the current input is processed, the next input is ready and has overwritten the current one. To address this concern, we use multiple frame buffers.

(Refer Slide Time: 24:14)

A single buffer is not sufficient; we require at least two buffers, and when two buffers are used, the scheme is called double buffering. Of course, there are schemes with more than two buffers. In double buffering, one buffer, or video memory, is called primary and the other secondary. The video controller takes input from one of the buffers, typically the primary, while the display controller fills up the other, the secondary. When the video controller finishes reading input from the primary buffer, the primary becomes secondary and the secondary becomes primary; a role reversal takes place, and the process repeats. In this way the problem of overwriting the image information is avoided.

(Refer Slide Time: 25:17)

Another interesting thing to note here is called refreshing. The light emitted from the pixel elements, which gives us the sensation of color, starts decaying over time. It is not the case that the intensity of the emitted light remains the same throughout the display session; over time it decays, the intensity changes, and this leads to fading of the scene after some time. Moreover, the pixels in a scene may have been excited at different points of time, so they may not fade in sync, and this may lead to image distortion.

(Refer Slide Time: 26:29)

To avoid that situation, what is done is to keep exciting the pixels periodically, which is known as refreshing. Whatever the excitation values are, the whole pixel grid is periodically re-excited with those values; it is not a one-time activity. One important consideration here is the refresh rate: the rate at which we should keep refreshing the screen so that the changes are not perceptible to the human eye. The number of times a scene is refreshed per second is known as the refresh rate, expressed in Hz, or Hertz, the unit of frequency. For displays it is typically taken to be 60 Hz; that is, the screen should be refreshed 60 times per second.
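Here is a minimal sketch of double buffering, paced at the 60 Hz refresh rate mentioned above. The buffer layout and function names are assumptions for illustration; real systems swap buffers via the graphics hardware rather than in application code like this.

```cpp
#include <chrono>
#include <thread>
#include <utility>

const int W = 640, H = 480;
bool bufferA[H][W], bufferB[H][W];

void renderScene(bool (*buf)[W])  { /* display controller fills this buffer */ }
void displayFrame(bool (*buf)[W]) { /* video controller reads this buffer  */ }

int main() {
    bool (*primary)[W]   = bufferA;   // read by the (slow) video controller
    bool (*secondary)[W] = bufferB;   // filled by the (fast) display controller
    for (int frame = 0; frame < 600; ++frame) {        // ~10 seconds at 60 Hz
        renderScene(secondary);                        // write one buffer...
        displayFrame(primary);                         // ...while showing the other
        std::swap(primary, secondary);                 // role reversal
        std::this_thread::sleep_for(std::chrono::milliseconds(1000 / 60));
    }
    return 0;
}
```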
(Refer Slide Time: 27:33)

So what are the pros and cons of a raster scan device? Clearly, since we are not trying to excite pixels selectively, we do not require very high precision hardware; scanning is a straightforward job, so low-precision hardware can do it. It is also good for generating complex images, since we consider all the pixels anyway, so it does not lead to flicker, unlike vector scan.

(Refer Slide Time: 28:10)

Due to these benefits, low cost and the ability to generate complex images, most of the displays that we see around us are based on the raster graphics concept. You get to see mostly raster scan devices around you because they are low cost and good at generating complex images.

(Refer Slide Time: 28:43)

These two terms, vector scan device and raster scan device, are from the point of view of hardware. There is a closely related pair of terms you may have heard of: vector graphics and raster graphics.

(Refer Slide Time: 28:58)

These two are not related to any hardware characteristics, unlike the previous terms.

(Refer Slide Time: 29:10)

In the case of vector graphics, what we actually refer to is the way the image is represented. When we talk of a vector graphics image, we are talking of a representation in terms of continuous geometric primitives such as lines and curves. If I say a particular image is a vector graphics image, I mean that I am representing that image in terms of its constituent geometric primitives.

(Refer Slide Time: 29:50)

In the case of raster graphics, the representation is different: as with a raster scan device, we refer to representing the image as the whole pixel grid, with the pixels that are supposed to be excited in an on state and the others in an off state. So if we represent an image as a raster graphics image, the image is stored in the form of the whole pixel grid, where it is indicated which pixels should be in the on state.

(Refer Slide Time: 30:48)

But again, it should be noted that vector graphics and raster graphics are terms indicating the way images are represented; they have nothing to do with the underlying hardware. Even if I represent an image as vector graphics, I can still use a raster scan device to display it, and vice versa: if I represent an image as raster graphics, I can still use a vector scan device to render it. We should always be clear about the distinction between these terms: vector scan device and raster scan device relate to the way scanning takes place at the hardware level, whereas vector graphics and raster graphics describe the way images are represented internally, rather than how they are rendered on actual display hardware.

(Refer Slide Time: 32:00)

Now let us discuss another important topic: color displays. So far we have been implicitly assuming that the pixels are monochromatic, but in reality we get to see images having colors. So how do they work? In a black-and-white display, each pixel may contain one type of element. For example, if you are aware of CRT, or cathode ray tube, displays and their internal mechanism, then you may know that each pixel on a CRT display has a single phosphor dot.
(Refer Slide Time: 33:05)

Consider the illustration shown here for a CRT, or cathode ray tube. Of course, nowadays it is very rare to see such displays, but the CRT is good for pedagogical purposes. The left side shows a typical CRT display, and the right side shows how it works internally. It has a tube within which there are certain arrangements; these arrangements together constitute the video controller component of the generic system that we discussed earlier. So we have the cathode, heater and anode arrangements, then a grid to control the electron flow, then vertical and horizontal deflection plates for deflecting the electron flow. Essentially, the arrangement generates a stream of electrons which hits a point on the screen, a pixel. The phosphor dot that is hit emits light, resulting in different shades of grey depending on the excitation. That, very briefly, is how CRTs work, and other displays work in a broadly similar way, though not in exactly the same way.

(Refer Slide Time: 34:44)

So what happens in the case of a colour image? In that case, each pixel contains more than one type of element. For a CRT, instead of having one phosphor dot per pixel, we can have three types of phosphor dots representing the three primary colours, namely red, green and blue. When excited, each of these phosphor dots generates intensities of its primary colour: the red dot generates red intensities, the green dot green intensities and the blue dot blue intensities. When these intensities are combined, we get the sensation of the desired colour.

(Refer Slide Time: 35:44)

As I said, each element is capable of generating different shades of its colour, and when these shades combine, they give us the desired colour sensation. Schematically, it looks somewhat like this figure, where we have three electron beams hitting the three elements separately. Special arrangements called masks guide the electron beams so that they hit the specific group of dots representing a pixel, like the three shown here, and finally we get the combination of the different shades as the desired colour.

(Refer Slide Time: 36:48)

Now there are two ways to generate such coloured images. Essentially, we want some stored values to guide the excitation of the individual types of elements in a colour display, and there are two ways to do that. One is direct coding: in this case, the individual colour information for each of the red, green and blue elements of a pixel is stored directly in the corresponding frame buffer location. So in the frame buffer itself, we store the information about what the intensities of these individual colours should be. Clearly, that requires a larger frame buffer compared to a black and white frame buffer, because at each location we now store three values instead of one, and the frame buffer should be capable of storing the entire combination of RGB values, which is also called the colour gamut. Later on, we will learn more about the idea of colour gamuts, but the point to note here is that if we go for direct coding, then we require a large frame buffer.
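A quick back-of-the-envelope calculation shows how much larger a direct-coded frame buffer has to be. The sketch below assumes an illustrative 1024 by 768 resolution with one byte per channel; the numbers are examples, not a statement about any particular device.

```c
/* Comparing frame buffer memory: monochrome vs direct-coded RGB.
   Resolution and channel depth are illustrative assumptions. */
#include <stdio.h>

int main(void) {
    long width = 1024, height = 768;
    long mono_bytes = width * height * 1;  /* 1 byte: 256 grey levels   */
    long rgb_bytes  = width * height * 3;  /* 1 byte each for R, G, B   */
    printf("monochrome frame buffer : %ld KB\n", mono_bytes / 1024);
    printf("direct-coded RGB buffer : %ld KB (three times larger)\n",
           rgb_bytes / 1024);
    return 0;
}
```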
(Refer Slide Time: 38:25)

Another way is to use colour lookup tables, where we use a separate lookup table, which is of course a portion of memory. Each entry of the table contains a specific RGB combination, and a frame buffer location contains a pointer to the appropriate entry in the table. So the frame buffer does not store the colour values directly; instead it stores the location in the table which stores the actual values, as illustrated in this figure. As you can see, this frame buffer location stores a pointer to this particular table entry, which stores the values of R, G and B; these are the values used to excite the pixel accordingly.

(Refer Slide Time: 39:19)

Now if we want the CLUT, the colour lookup table scheme, to work, then we have to know in advance the subset of colours that is going to be required in the generation of images. The table cannot store all possible combinations of R, G and B values; it stores only a subset of those combinations, that is, a subset of the entire set or colour gamut, and we must know that subset in advance to make the scheme work. If that assumption does not hold, the method is not going to work. Nowadays, however, the size of the frame buffer is not a problem because memory is cheap, so almost all graphics systems go for the direct coding method. But in earlier generations of graphics systems, when memory was a factor determining the overall cost, the CLUT was much in use. In that period, of course, the screens were not equipped to display all sorts of complex images, and mostly wireframes were displayed, so CLUTs were much more useful. Nowadays we need not bother about the CLUT much, unless there is some specific application, and we can directly go for the direct coding method.
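Here is a minimal sketch of the lookup-table idea in C. The table contents, the four-colour subset and the frame buffer values are all illustrative assumptions; the point is only the indirection from index to RGB triple.

```c
/* Colour lookup table (CLUT) sketch: the frame buffer stores small
   indices; the table stores the actual RGB triples. */
#include <stdio.h>

typedef struct { unsigned char r, g, b; } RGB;

int main(void) {
    RGB clut[4] = {                 /* the colour subset known in advance */
        {  0,   0,   0},            /* 0: black */
        {255,   0,   0},            /* 1: red   */
        {  0, 255,   0},            /* 2: green */
        {255, 255, 255},            /* 3: white */
    };
    unsigned char framebuffer[6] = {0, 1, 1, 2, 3, 0};  /* table indices */

    for (int i = 0; i < 6; i++) {
        RGB c = clut[framebuffer[i]];   /* indirection through the table */
        printf("pixel %d -> (%3u, %3u, %3u)\n", i, c.r, c.g, c.b);
    }
    return 0;
}
```

Notice the memory saving that motivated the scheme: with a 4-entry table, each frame buffer location needs only a 2-bit index instead of a full 24-bit RGB triple.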
(Refer Slide Time: 40:56)

So let us summarise what we have learnt today. We got introduced to a basic graphics system, which consists of three unique components, namely the display controller, the video memory and the video controller. The display controller is tasked with generating the image, which is stored in the video memory and used by the video controller to render it on a computer screen. We also learnt briefly about different types of graphics systems, namely vector scan devices and raster scan devices, and the associated concepts, namely vector graphics, raster graphics, refreshing, frame buffers and so on. We also got some idea of how colour images are generated at the hardware level. These are basic concepts which will be useful in our subsequent discussions. In the next lecture we will get an introduction to the basic processing required to generate a 2D image, which is the job of the display controller. This processing actually consists of a set of stages collectively known as the graphics pipeline, so in the next lecture we will have an introduction to the overall pipeline.

(Refer Slide Time: 42:41)

The topics that I have covered today can be found in chapter 1, section 1.3 of this book. You are also advised to go through the details of the CRT, or cathode ray tube display, mentioned in that section, although I have not covered it here, for a better understanding of the topics. So we will meet again in the next lecture; thank you and goodbye.

Computer Graphics
Dr. Samit Bhattacharya
Computer Science and Engineering
Indian Institute of Technology, Guwahati
Lecture 4
Introduction to 3D Graphics Pipeline

Hello and welcome to lecture number 4 in the course Computer Graphics.

(Refer Slide Time: 00:39)

Before we start, we will briefly recap what we discussed in the previous lectures. We started with a basic introduction to the field, where we discussed the historical evolution as well as the issues and challenges encountered by workers in this field. This was followed by a basic introduction to the graphics system. Whenever we talk about computer graphics, we implicitly refer to some hardware platform on which some software works, and the basic hardware structure or architecture of a graphics system was introduced in one of the previous lectures. Today we are going to introduce the other component of the graphics system, namely the graphics software. Of course, at this stage we will restrict ourselves to a basic introduction; the software stages will be discussed in detail in subsequent lectures.

(Refer Slide Time: 02:06)

So let us recap what we have learned about the generic architecture of a graphics system. As we mentioned in one of our earlier lectures, there are three unique components of a graphics system: the display controller, the video memory and the video controller. What does the display controller do? It takes input from the host computer as well as from external input devices that are used to perform interactive graphics, and based on that input it creates a digital representation of a 2D image. That is the job of the display controller. The representation the controller generates is stored in a memory called the video memory. The content of the video memory is given as input to the third component, the video controller, which takes the memory content as input and generates certain voltage levels to drive the electromechanical arrangements required to ultimately display the image on a computer screen. As you may recollect, we also mentioned that most of these things are done separately, without involving the CPU of the host computer. Typically, computers come with a component called a graphics card, which probably all of you have heard of, and which contains the video memory, the video controller and the display controller components. The processing unit that is typically part of the display controller is known as the GPU, or graphics processing unit; this is separate from the CPU, the main processing unit of the host computer, and the GPU is designed to perform graphical operations.

(Refer Slide Time: 04:48)

Now, in this generic architecture, as we said, the display controller generates the representation of an image. What does that representation contain? It contains colour or intensity values in a specific format, which are ultimately used to generate the particular sensation of colour on the screen. From where are these colour values obtained? Let us go into some details of the process involved in generating them.

(Refer Slide Time: 05:29)

These colour values are obtained by the display processor through computations that are done in stages. So there is a series of computations, and these computations ultimately result in the generation of the colour values.
(Refer Slide Time: 06:00)

Now these stages, the series of steps involved in the generation of the colour values, are together called the graphics pipeline. This is a very important term, and in subsequent lectures we will discuss the stages of the pipeline in detail; that will be the crux of this course.

(Refer Slide Time: 06:30)

But today we are going to introduce the pipeline for our benefit, so that we can understand the later discussion better. So let us get some introductory idea of the pipeline and its stages.

(Refer Slide Time: 06:47)

There are several stages, as I mentioned. The first stage is essentially defining the objects. When we talk of creating a scene or an image, it contains objects, and there needs to be some way to represent these objects in the computer. The activity in which we define the objects that are going to be part of the image constitutes the first stage of the pipeline, called the object representation stage. For example, as you can see in this figure on the screen, we want to generate the image of a cube with colour values as shown on the right-hand part of the screen. This image contains an object, a cube, and on the left-hand side we have defined this cube. When we talk of defining, we mean, as we can understand intuitively, specifying the vertices or edges with respect to some reference frame; in this simple case, the definition is the set of vertices, or the edges as pairs of vertices (a small sketch of such a definition appears at the end of this passage). Of course, a cube is a very simple object; more complex objects may require more complex definitions, more complex ways of representing the objects.

(Refer Slide Time: 08:53)

Accordingly, several representation techniques are available for efficient creation and manipulation of images. Note the term efficient. When we use this term, we refer to the fact that displays differ and the underlying hardware platforms differ. The computational resources we have to display something on a desktop or laptop are likely to be different from those available to display something on a small mobile device or a wearable device screen. Accordingly, our representation techniques should be able to utilise the available resources to the extent possible, and should allow users to manipulate images in an interactive setting. So the efficiency is essentially with respect to the available computing resources and the way to make optimum use of those resources.

(Refer Slide Time: 10:32)

Once we define those objects, they are passed through the subsequent pipeline stages to generate and render images on the screen. So the first stage is defining the objects, and the subsequent stages take these object definitions as input, generate the image representation and render it on the screen.

(Refer Slide Time: 11:00)

What are those subsequent stages? The first of them is modeling transformation, the second stage of the pipeline. As I said, when we define an object, we consider some reference frame with respect to which it is defined. For example, take the cube we saw earlier: to define the cube, we need to define its coordinates, but coordinates with respect to what? There we assume certain reference frames. The reference frames with respect to which the objects are defined are popularly called the local coordinate systems of the objects.
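Here is a minimal sketch of what such a definition might look like for the cube example: eight vertices given in the cube's own local coordinate system, plus twelve edges given as pairs of vertex indices. This is only one possible boundary-style layout, and the unit-cube coordinates are illustrative.

```c
/* Sketch of a cube defined in its local coordinate system:
   8 vertices and 12 edges (as index pairs into the vertex list). */
typedef struct { float x, y, z; } Vertex;

Vertex cube_vertices[8] = {
    {0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0},   /* bottom face */
    {0, 0, 1}, {1, 0, 1}, {1, 1, 1}, {0, 1, 1},   /* top face    */
};

int cube_edges[12][2] = {
    {0, 1}, {1, 2}, {2, 3}, {3, 0},               /* bottom face    */
    {4, 5}, {5, 6}, {6, 7}, {7, 4},               /* top face       */
    {0, 4}, {1, 5}, {2, 6}, {3, 7},               /* vertical edges */
};
```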
So objects are typically defined in their own, local, coordinate systems. Now, multiple objects are put together to create a scene: each object is defined in its own local coordinate system, and when we combine them, we are essentially combining these different reference frames. By combining the different objects, we create a new assembly of objects in a new reference frame, which is typically called the world coordinate system. Take the example shown in this figure. As you can see, there are many objects: some cubes, spheres and other objects such as cylinders. Each of these objects is defined in its own coordinate system. In the whole scene consisting of all the objects, we have assembled those objects from their own coordinate systems, and here again we assume another coordinate system in terms of which this assembly is defined. The coordinate system in which we have assembled them is called the world coordinate system. So there is a transformation, transforming an object from its own coordinate system to the world coordinate system. That transformation is called modeling transformation, the second stage of the graphics pipeline.

(Refer Slide Time: 13:58)

So in the first stage we define the objects, and in the second stage we bring those objects together in the world coordinate system through modeling transformation, which is also sometimes known as geometric transformation. Both terms are used, either modeling transformation or geometric transformation; that is the second stage of the graphics pipeline.

(Refer Slide Time: 14:14)

Once the scene is constructed, the objects need to be assigned colours, which is done in the third stage of the pipeline, called the lighting or illumination stage. Take, for example, the images shown here. In the left figure we have simply the object; in the right figure we have applied colours to the object surfaces. As you can see, the way the colours have been applied makes it clear which surface is closer to the viewer and which surface is further away. In other words, it gives us a sensation of 3D, whereas without colours, as in the figure shown here, that clarity is not there. So to get a realistic image that gives us a sensation of 3D, we have to assign colours. Assignment of colours is the job of the third stage, the lighting or illumination stage.

(Refer Slide Time: 15:36)

As you are probably aware, colour is a psychological phenomenon, and it is linked to the way light behaves, in other words, to the laws of optics. In the third stage, we essentially try to mimic these optical laws: we try to mimic the way we see or perceive colour in the real world, and based on that we assign colours in the synthesized scenes.

(Refer Slide Time: 16:17)

So first we define an object, second we bring objects together to create a scene, and in the third stage we assign colours to the object surfaces in the scene.
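The lighting models themselves come later in the course; purely as a foretaste of what "mimicking an optical law" can mean, here is a sketch of one of the simplest such rules, Lambert's diffuse reflection, where the perceived intensity falls off with the cosine of the angle between the surface normal N and the light direction L. The vectors and coefficients below are illustrative assumptions.

```c
/* Sketch of Lambert's diffuse term: intensity = k_d * I_light * (N . L).
   Not the course's model yet; just an illustration of an optical rule. */
#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

int main(void) {
    Vec3 N = {0, 0, 1};        /* unit surface normal (assumed)       */
    Vec3 L = {0, 0.6, 0.8};    /* unit direction to light (assumed)   */
    double k_d   = 0.7;        /* surface reflectance, in [0, 1]      */
    double light = 1.0;        /* light source intensity              */

    double ndotl = dot(N, L);
    if (ndotl < 0) ndotl = 0;  /* light behind the surface: no effect */
    printf("diffuse intensity = %.2f\n", k_d * light * ndotl);
    return 0;
}
```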
Till this point, everything we were doing was in a 3D setting, in the world coordinate system. But when we get to see an image, the computer screen is 2D, so what we require is a mapping from the 3D world coordinate scene to the 2D computer screen. That mapping is done in the 4th stage, that is, viewing transformation. In this stage we perform several activities which, together, are similar to taking a photograph. Consider yourself a photographer with a camera, capturing a photo of a scene. What do you do? You place the camera near your eye, focus on the object you want to capture, capture it on the camera system, and then see it on the camera display or screen, if you have a digital camera.

(Refer Slide Time: 18:01)

This process of taking a photograph can be analysed mathematically into several intermediate operations, which in themselves form a pipeline, a pipeline within the broader graphics pipeline. So the 4th stage, viewing transformation, is itself a pipeline that is part of the overall graphics pipeline. This pipeline, in which we transform a 3D world coordinate scene to a 2D view plane scene, is called the viewing pipeline.

(Refer Slide Time: 18:50)

What do we do in this pipeline? We first set up a camera coordinate system, also referred to as the view coordinate system. Then the world coordinate scene is transformed to the view coordinate system. This step is called viewing transformation. So we have set up a new coordinate system, the camera coordinate system, and we have transformed the world coordinate scene to the camera coordinate scene.

(Refer Slide Time: 19:30)

From there we make another transformation: we transfer the scene to a 2D view plane. This step is called projection transformation. So we have viewing transformation followed by projection transformation.

(Refer Slide Time: 19:49)

For projection, we define a region in the viewing coordinate space which is called the view volume. For example, in the figure shown here, the frustum is defining a view volume. We want to capture the objects present within this volume; objects outside it we do not want to capture. That is typically what we do when we take a photograph: we select some region of the scene and then capture it. So whichever object is outside the volume will not be projected, and whichever objects are inside will be projected.

(Refer Slide Time: 20:48)

Here we require one additional process: a process to remove the objects that are outside the view volume. Those objects can be fully or partially outside, and in both cases we need to remove them. When an object is fully outside, we remove it completely; when an object is partially outside, we clip the object and keep only the part that is within the view volume, removing the part outside. The overall process is called clipping.
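The simplest instance of this idea is testing a single point against the view volume. The sketch below uses an axis-aligned box rather than the frustum of the figure, purely to keep the test to a few comparisons; the bounds and the test point are illustrative assumptions.

```c
/* Minimal point-clipping sketch against an axis-aligned view volume
   (a box stands in for the frustum here, for simplicity). */
#include <stdbool.h>
#include <stdio.h>

typedef struct { double x, y, z; } Point;

bool inside_view_volume(Point p,
                        double xmin, double xmax,
                        double ymin, double ymax,
                        double zmin, double zmax) {
    return p.x >= xmin && p.x <= xmax &&
           p.y >= ymin && p.y <= ymax &&
           p.z >= zmin && p.z <= zmax;
}

int main(void) {
    Point p = {0.5, 0.25, -2.0};
    /* keep the point only if it lies inside the volume */
    printf("%s\n", inside_view_volume(p, -1, 1, -1, 1, -5, -1)
                   ? "keep" : "clip away");
    return 0;
}
```

Clipping lines and fill areas needs more care than this point test, since a primitive can straddle the boundary; those algorithms are covered in the later clipping lectures.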
(Refer Slide Time: 21:22)

Also, when we are projecting, we consider a viewer position, where the photographer is situated, and the direction in which he or she is looking. With respect to that position, some objects may appear fully visible, some may appear partially visible, and other objects will be invisible, even though all of them may be within the same view volume. For example, with respect to a particular view position, if one object is fully behind another, it will be invisible; if it is partially behind, it will be partially visible; and if the two are not aligned in the same direction, both will be fully visible. We have to take care of this fact as well before projection, which requires some further operations and computations.

(Refer Slide Time: 22:32)

To capture this viewing effect, the operations that we perform are typically called hidden surface removal operations or, equivalently, visible surface detection operations. So to generate a realistic viewing effect, along with clipping, we perform hidden surface removal or visible surface detection.

(Refer Slide Time: 23:06)

After the clipping and hidden surface removal operations, we project the scene onto the view plane, which is a plane defined in the view coordinate system.

(Refer Slide Time: 23:21)

Now, one more transformation remains. Suppose, in the right-hand figure, this is the object projected here on the view plane. The object may be displayed on any portion of the computer screen; it need not be at exactly the same portion as on the view plane. For example, the object may be displayed in a corner of the display. So we differentiate between two concepts here: one is the view plane, which is typically called the window, and the other is the display region on the actual screen, which we call the viewport. So one more transformation remains in the viewing pipeline, namely transferring the content from the window to the viewport. This is called the window-to-viewport transformation (a small numerical sketch of this mapping follows after the summary below).

(Refer Slide Time: 24:44)

In summary, in the 4th stage there are three transformations. First, we transform from the world coordinate scene to the camera or view coordinate scene. Then, from the camera coordinate scene, we perform the projection transformation to the view plane. Then the view plane window is transformed to the viewport. These are the three transformations. Along with them, there are two major operations that we perform: one is clipping, that is, clipping out the objects that lie outside the view volume, and the other is hidden surface removal, which creates a realistic viewing effect with respect to the viewer position. That is the 4th stage. So first we defined objects in the first stage; in the 2nd stage we combined those objects in the world coordinate scene; in the 3rd stage we assigned colours to the object surfaces in the world coordinate scene; and in the 4th stage we transformed the world coordinate scene to the image on the viewport through a series of transformations that form a sub-pipeline within the overall pipeline. The sub-pipeline stages are viewing transformation, projection transformation and window-to-viewport transformation. This sub-pipeline is called the viewing pipeline, which is part of the overall graphics pipeline, and in the 4th stage, along with the viewing pipeline, we also perform the two operations of clipping and hidden surface removal.
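Of the three transformations just summarised, the window-to-viewport step is the easiest to write down concretely: it is a linear mapping of a window point (xw, yw) to a viewport point (xv, yv). The sketch below uses illustrative window and viewport bounds.

```c
/* Window-to-viewport mapping sketch: scale and offset a window point
   into the viewport. All coordinate ranges are illustrative. */
#include <stdio.h>

void window_to_viewport(double xw, double yw,
                        double wxmin, double wxmax,
                        double wymin, double wymax,
                        double vxmin, double vxmax,
                        double vymin, double vymax,
                        double *xv, double *yv) {
    double sx = (vxmax - vxmin) / (wxmax - wxmin);   /* x scale factor */
    double sy = (vymax - vymin) / (wymax - wymin);   /* y scale factor */
    *xv = vxmin + (xw - wxmin) * sx;
    *yv = vymin + (yw - wymin) * sy;
}

int main(void) {
    double xv, yv;
    /* map the window [0,1] x [0,1] onto a 640 x 480 viewport */
    window_to_viewport(0.5, 0.5, 0, 1, 0, 1, 0, 640, 0, 480, &xv, &yv);
    printf("(0.5, 0.5) -> (%.0f, %.0f)\n", xv, yv);  /* prints (320, 240) */
    return 0;
}
```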
(Refer Slide Time: 27:17)

One more stage remains, the 5th stage, which is called scan conversion or rendering. We mentioned earlier that we transform to a viewport. The viewport is an abstract representation of the actual display. If you recollect our discussion on raster displays, the actual display contains a pixel grid, so its locations are discrete; we cannot assume that an arbitrary point has a corresponding point on the screen. For example, if our image has a vertex at the location (1.5, 2.5), on the screen we cannot have such a location, because on screen we have only integer coordinate values, due to the discrete nature of the grid. So we can have pixels located at, say, (1, 1) or (2, 2), but not at real-valued coordinates such as (1.5, 2.5). If we get a vertex in our image located at (1.5, 2.5), then we must map it to integer coordinates. The stage where we perform this mapping is called the scan conversion stage, the 5th and final stage of the pipeline. For example, consider the line shown here, with end points (2, 2) and (7, 5). The intermediate points of the line may not have integer coordinate values, but on the actual display we can have pixels, these circles, only at integer coordinate values. So we have to map the non-integer coordinates to integer coordinates (see the sketch after this passage). That mapping is the job of the 5th, or scan conversion, stage, which is also called rasterization. As you can see, it may lead to some distortion, because due to the mapping we may not get the exact points of the line; instead, we have to satisfy ourselves with approximate points that lie close to the actual line. For example, this pixel here is not exactly on the line, but it is the closest possible pixel with respect to the line.

(Refer Slide Time: 30:19)

So what is the concern? How to minimise the distortions? These distortions have a technical name, the aliasing effect; where this name originated we will discuss later. Our concern is to eliminate or reduce the aliasing effect to the extent possible, so that we do not get to perceive too much distortion. To address this concern, several techniques are used, called anti-aliasing techniques. They are used to make the image look as smooth as possible by reducing the effect of aliasing.
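The rounding at the heart of this stage can be shown with the lecture's own line from (2, 2) to (7, 5). The sketch below, in the spirit of the simplest incremental line scan conversion (the DDA idea), steps along x, computes the exact y on the ideal line, and rounds it to the nearest pixel row; the rounding gap is exactly where the aliasing comes from. It assumes a gentle slope (|slope| <= 1), which holds here.

```c
/* Rounding step of scan conversion for the line (2,2) -> (7,5). */
#include <math.h>    /* lround; link with -lm on some systems */
#include <stdio.h>

int main(void) {
    int x0 = 2, y0 = 2, x1 = 7, y1 = 5;
    double slope = (double)(y1 - y0) / (x1 - x0);   /* 0.6 here */

    for (int x = x0; x <= x1; x++) {
        double y = y0 + (x - x0) * slope;           /* exact y on the line */
        int py = (int)lround(y);                    /* nearest pixel row   */
        printf("line point (%d, %.1f) -> pixel (%d, %d)\n", x, y, x, py);
    }
    return 0;
}
```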
(Refer Slide Time: 31:21)

So let us summarise what we have discussed so far. We described the graphics pipeline, which contains 5 stages. The 1st stage is object representation; the 2nd stage is modeling transformation; the 3rd stage is assigning colours, or lighting; the 4th stage is the viewing pipeline, which is itself a sub-pipeline involving viewing transformation, clipping, hidden surface removal, projection transformation and window-to-viewport transformation; and the final, 5th, stage is scan conversion. So there are broadly 5 stages involved. Each of these stages works with its own reference frames, its own coordinate systems. In stage 1 we deal with the local coordinate systems of the objects; stage 2 transforms from local to world coordinates, so there we deal with the world coordinate system. In stage 3 we again deal with world coordinates: when we assign colours, we assume the objects are defined in the world coordinate system. In stage 4, different coordinate systems are used. The first transformation, viewing transformation, transforms from the world coordinate system to the view or camera coordinate system. Clipping is performed in the view coordinate system, and hidden surface removal is also performed in the view coordinate system. Then we perform the projection transformation, which transforms the content of the 3D view coordinate system to the 2D view coordinate system. And in the window-to-viewport transformation, we transfer from this 2D view coordinate system to the device coordinate system. Finally, in the 5th stage, we transfer from device coordinates to the actual screen coordinate system. Note that the device coordinate system is an abstract, intermediate representation, whereas the screen coordinates are the actual pixel grid: device coordinates are continuous values, whereas screen coordinates are only discrete values in the form of a grid. So this is, in summary, what is in the graphics pipeline. The display controller performs all these stages to finally obtain the intensity values to be stored in the frame buffer or video memory. Now, these stages are performed through software, of course with suitable hardware support. For a programmer of a graphics system, it is not necessary to learn the intricate details of all these stages; they involve lots of theoretical concepts and models. If a graphics programmer gets bogged down in all this theory, most of the time will be consumed by understanding the theory rather than actually developing the system. To address this concern of the programmer, what is done is essentially the development of libraries, graphics libraries.

(Refer Slide Time: 35:17)

So there is a theoretical background involved in generating a 2D image, but the programmer need not implement the stages of the pipeline from scratch to apply that knowledge; that would be too much effort, and a major portion of the development effort would go into understanding and implementing the theoretical stages.

(Refer Slide Time: 35:52)

Instead, the programmer can use what are called application programming interfaces, or APIs, provided by graphics libraries, where these stages are already implemented in the form of various functions; the developer can simply call those functions with arguments in their program to perform certain graphical tasks. There are many such libraries available. Very popular ones are mentioned here: OpenGL, an open source graphics library which is widely used, and DirectX by Microsoft. There are many other commercial, proprietary libraries available, but OpenGL, being open source, is widely accessible and useful in many situations.

(Refer Slide Time: 37:00)

What do these libraries contain? They contain predefined sets of functions which, when invoked with appropriate arguments, perform specific tasks. So the programmer need not know every detail of the underlying hardware platform, namely the processor, memory and OS, to build an application.

(Refer Slide Time: 37:29)

For example, suppose we want to assign colours to an object we have modelled. Do we need to actually implement the optical laws to perform the colouring? Note that implementing the optical laws also requires knowledge of the processors available, the memory available and so on. Instead of acquiring all that knowledge, we can simply use the function glColor3f with arguments r, g, b. This function is defined in OpenGL, the open graphics library, and it assigns a colour to a 3D point. Here we do not need to know details such as how colour is defined in the system, how such information is stored, in which portion of memory it is stored and accessed, how the operating system manages the call, or which processor, CPU or GPU, handles the task.
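To show the call in context, here is a small fragment in classic fixed-function OpenGL that colours the three vertices of a triangle. It is a sketch, not a complete program: it assumes a GL context has already been created by some windowing toolkit (GLUT, GLFW or similar), and it uses the legacy immediate-mode style that matches the glColor3f call discussed here.

```c
/* Legacy (fixed-function) OpenGL sketch: per-vertex colours on a
   triangle, with no concern for how the hardware realises them. */
#include <GL/gl.h>

void draw_triangle(void) {
    glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.0f, 0.0f);     /* r, g, b in [0,1]: red */
        glVertex3f(-0.5f, -0.5f, 0.0f);
        glColor3f(0.0f, 1.0f, 0.0f);     /* green */
        glVertex3f( 0.5f, -0.5f, 0.0f);
        glColor3f(0.0f, 0.0f, 1.0f);     /* blue */
        glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();
}
```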
So all these complicated details can be avoided, and the programmer can simply use this function to assign colour. We will come back to these OpenGL functions in a later part of the course, where we will introduce OpenGL.

(Refer Slide Time: 38:57)

Now, graphics applications such as painting systems, which probably all of you are familiar with, the CAD tools we mentioned in our introductory lectures, video games and animations are all developed using these functions. So it is important to have an understanding of these libraries if you want to make your life simpler as a graphics programmer. We will come back to these library functions later and discuss in detail some popularly used functions in the context of OpenGL. In summary, today we have learned the basic idea of the 3D graphics pipeline and also got an introductory idea of graphics libraries. In subsequent portions of the course, we will discuss all the stages in detail, as well as some more details of the graphics libraries and the graphics hardware.

(Refer Slide Time: 40:26)

That is all for today. Whatever I have discussed today can be found in chapter 1 of the book mentioned here; you are advised to refer to sections 1.4 and 1.5. Thank you and goodbye.

Computer Graphics
Dr. Samit Bhattacharya
Department of Computer Science and Engineering
Indian Institute of Technology Guwahati
Lecture 5
Introduction and Overview on Object Representation Techniques

Hello and welcome to lecture number 5.
