Artificial Intelligence (AI)

Document Details


Uploaded by SmoothPopArt

Airmen Leadership Academy, Pakistan Air Force


Summary

This document discusses the term Artificial Intelligence (AI): its history, definitions and implications. It explores the different approaches to creating AI, such as Good Old-Fashioned AI (GOFAI), Machine Learning and Deep Learning. It also covers the concept of smart materials and their applications, the use of AI in different fields, and the development of computers.

Full Transcript


RESTRICTED

The Term – Artificial Intelligence (AI)

2. The use of the term AI goes back to the 1950s, although the concept is much older. The term was first used by researcher John McCarthy at what is now called the Dartmouth conference. The recent progress in the field and its effect on our world has made the term very popular in the past 05 years.

3. We all have a basic understanding of the term 'intelligence'. In fact, human intelligence has always been a topic of great interest for psychologists, and therefore many definitions exist.

4. The Encyclopaedia Britannica defines intelligence as a mental quality which includes the capabilities to learn from experience, adapt to new situations, understand and handle different concepts, and use knowledge to change one's environment.

Computers and Artificial Intelligence

5. Computers, which can perform complex calculations much faster than humans, have been around since the 1950s. Should we then call computers intelligent machines? We know that computers have great computational power [the capability of performing calculations], but the real question is: can they learn or decide on their own? Computers can only do what we ask them to do, and that too after having been given very specific instructions.

6. So computers, in the general sense, do not fit our definition of intelligent machines. However, with the help of AI, computers can be turned into intelligent machines. Artificially intelligent machines can learn on their own, make independent decisions, and find new ways to solve problems. The Oxford dictionary has the following definition of AI: "AI is the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages".

Implementing AI – How AI Works

7. Now that we have started to build a concept of what AI is, the next big question is: how do we achieve it?
Conventional [traditional] programming and Artificial Intelligence both aim to make things easier for humans. Conventional programming is limited in terms of capabilities because it does only what it has been tasked to do, and that too after very explicit [clear] and error-free instructions or code have been written. AI takes a different route: most of the time it learns from data, just as humans learn from their experiences.

8. Think for a while: how does a child learn? He learns from his experiences. It is for this reason that he is always keen to try out new things. We do not program our children; we let them experience things. Even when we give them instructions, they follow those instructions keeping in mind their experience of the punishment and reward associated with them.

9. To go further we need to investigate the different approaches to achieving AI, i.e. the 'how' part of AI. We shall cover three approaches to AI in this course, i.e.

(a) Good Old-Fashioned AI (GOFAI)
(b) Machine Learning
(c) Deep Learning

10. GOFAI. The classical approach to achieving AI was to write computer programs manually and provide the logic to reach a result. This was known as Good Old-Fashioned AI (GOFAI). GOFAI is the oldest form of AI, and by 1958 scientists were developing computer code based on it.

11. GOFAI makes use of logic and can be very good for situations where rules are clear and unlikely to change. GOFAI can make decisions, but its limitation lies in the fact that it cannot learn new things. Moreover, GOFAI does not require any data to train itself. In contrast, the other forms of AI, such as Machine learning and Deep learning, depend heavily on data.

12. Recall the definition of AI: it is the ability of machines to do things which humans can. In a way, scientists have always tried to copy the human way of thinking and doing things. They have been fascinated by the way Almighty Allah has created us.
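The rule-based style of GOFAI described above can be sketched in a few lines of code: the programmer supplies every rule explicitly, and the system never learns anything new from data. This is only an illustrative sketch; the rules, thresholds and function name below are invented for the example and are not taken from any real system.

```python
# A minimal GOFAI-style (rule-based) sketch. All "intelligence" here is
# hand-written logic; there is no training data and no learning.
# The rules and numbers are hypothetical, purely for illustration.

def classify_airspace_contact(speed_kts, altitude_ft, has_transponder):
    """Decide how to treat a radar contact using fixed, hand-written rules."""
    if has_transponder:
        return "friendly"    # rule 1: transponder present -> friendly
    if speed_kts > 600 and altitude_ft < 5000:
        return "threat"      # rule 2: fast and low -> treat as a threat
    return "unknown"         # no rule matched -> needs human review

print(classify_airspace_contact(650, 3000, False))   # threat
print(classify_airspace_contact(200, 10000, True))   # friendly
```

Note how the sketch shows both the strength and the limitation of GOFAI: the decisions are fast and predictable, but a situation not covered by the written rules can only ever fall through to "unknown"; the program cannot learn a new rule on its own.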
So GOFAI was the earliest attempt to copy humans, and it produced some amazing results.

13. Figure 1 shows Garry Kasparov, the world chess champion, playing against Deep Blue in the year 1997. Deep Blue was a supercomputer developed by IBM using GOFAI. We know that chess has a set of rules. These rules were programmed into Deep Blue. IBM scientists wanted to beat a human champion to show the power of their computers. Deep Blue would use a 'brute force search' strategy to consider the outcomes of millions of possible moves. In fact, it could consider 200 billion positions in 3 minutes. This was possible because computers have great computational power. Despite this computational power, Kasparov had beaten Deep Blue in 1996, but he finally lost to it the next year. Experts say that Kasparov lost hope too early; he could at least have drawn the game. Another reminder that humans are not very consistent in their performance, an area where machines can generally beat us. Human performance and limitations shall be covered in detail in the Quality & Safety module.

Figure 1: Garry Kasparov playing against Deep Blue

14. Another question comes here: how could Kasparov beat Deep Blue in 1996 despite such computational power? The answer may lie in the fact that Kasparov, like all humans, can learn and has intuition, the ability to understand something without logic. GOFAI cannot learn new things. It is rule-based and follows a set of rules. Chess has clear rules on how a piece can be moved, so GOFAI works for chess. GOFAI cannot handle new situations.

15. This is why scientists were still interested in designing machines which can think like humans. Deep Blue could not learn on its own, so very soon scientists were looking for more intelligent AI.

16. Machine Learning (ML). We have already discussed that humans learn from past experiences. What if we provide data, and not coded instructions, to computers to make them learn?
Making machines learn through data is known as Machine Learning or ML. Machine learning is a branch of AI which uses large amounts of data and, after applying a set of mathematical models, tries to find patterns and relationships between different variables / factors. Based on these relationships and patterns, decisions can be made.

17. Do not get intimidated (scared) by the term 'mathematical model'. Mathematical models use mathematics to define real-world objects / phenomena. For example, you are going to Lahore on the motorway at a speed of 100 km/hr and the distance to Lahore is 400 km; so you reach Lahore in 4 hrs. If you want to go further to Multan, which is 200 km extra, you will need 2 more hours. This can easily be written in the form of an equation, and by doing so we have converted a real-life phenomenon into a simple mathematical model:

time = distance / velocity

18. Coming back to Machine learning, we said that it requires large sets of data. Consider the example of Snr Tech Ali, who likes to eat food which is oily and has lots of meat. Moreover, he generally dislikes eating vegetables. At your Mess, the caterer decides to serve Bhindi-Gosht Fry. What are the chances that he would like it? We can see that the answer is not very clear. Ali may like or dislike it.

19. Let us put Ali's choices on a graph as in Figure 2. Bhindi-Gosht, containing some meat and oil / ghee, is somewhere between what he likes and does not like. Now we draw a circle big enough to include the 03 dishes nearest to Bhindi-Gosht. Once we do that, we see that the circle contains 02 dishes which Ali dislikes and 01 which he likes. Based on this, we can say that there is a 66% chance that Ali would dislike Bhindi-Gosht. It is a very simple example, but this is how ML works. In fact, you have just learnt an ML technique known as the K-nearest neighbor (KNN) algorithm. Here the value of k is 3, since the circle was made big enough to encircle the 03 nearest neighbors of Bhindi-Gosht, i.e. our point of interest.
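The circle-drawing exercise above is exactly what the KNN algorithm does: measure distances, take the k nearest points, and hold a majority vote. The following is a minimal sketch in Python. The dishes, their (oil, meat) scores and the like/dislike labels are made-up illustrative values, not real data; the query point is chosen so that its 3 nearest neighbours are two 'dislike' dishes and one 'like', mirroring the 2-to-1 vote in the example.

```python
# A minimal sketch of the k-nearest-neighbour (KNN) idea from the
# Bhindi-Gosht example. All values below are invented for illustration.
import math
from collections import Counter

# (oil content, meat content) on a 0-10 scale -> "like" / "dislike"
training_data = [
    ((9, 9), "like"),     # e.g. a rich meat curry
    ((8, 7), "like"),
    ((2, 0), "dislike"),  # e.g. a plain vegetable dish
    ((3, 2), "dislike"),
    ((4, 2), "dislike"),
]

def knn_predict(point, data, k=3):
    """Label `point` by majority vote among its k nearest neighbours."""
    by_distance = sorted(data, key=lambda item: math.dist(point, item[0]))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Bhindi-Gosht: moderate oil and meat, between the two groups.
# 2 of the 3 nearest neighbours are dislikes, so the vote is "dislike".
print(knn_predict((6, 5), training_data, k=3))
```

Notice that nothing here is a hand-written rule about food; the decision comes entirely from the data, which is the key difference from the GOFAI approach.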
Figure 2: Plot showing Likes and Dislikes data

20. Now imagine if the like / dislike data for your entire course is fed into a computer; when the above algorithm / code is run, B-Mess can decide which dish shall be liked or disliked. B-Mess may not be very worried about what you like or not, but just think how important this data can be for McDonalds or Pizza Hut, where a product failure / success may mean a profit in millions of dollars.

21. This is the reason you are always encouraged by companies such as McDonalds and Pizza Hut to use their applications for ordering food. Through these applications, companies gain user data to make customer profiles. Collectively, these profiles of thousands of people can be the difference between profit and loss.

22. In order to reinforce what we have learnt, we need to go a little deeper and understand the three types of ML, i.e.

(a) Supervised Learning
(b) Unsupervised Learning
(c) Reinforcement Learning

23. Details are as follows: -

(a) Supervised Learning. In case of Supervised learning we help the computer learn by giving names or labels to different categories of data. Consider the example of an AI machine which we want to use to identify different vegetables. First, we train this machine by feeding a large number of vegetable images to it. Each time, we also tell the machine the name (label) of the vegetable. The machine slowly learns what a carrot looks like. Of course, no two carrots would be exactly the same, and the machine will take some time to recognize all sizes, shapes and colors of carrots. Once the machine has been trained, we show it the image of any carrot and the machine should recognize it correctly. The machine can finally identify one vegetable from another despite variations in size, color and shape. We have helped the machine by repeatedly telling it the labels of vegetables during the training phase.

Figure 3: A model of Supervised Learning
In fact, we have been supervising the whole process, so this type of ML is called Supervised learning. It is similar to how children are made to remember the names of different objects.

(b) Unsupervised Learning. In case of Unsupervised learning we just feed the data to the machine, and the computer itself finds out that multiple categories of data exist. It does this by looking at different attributes [properties] of objects such as color, size and shape. It then groups the data and assigns it some label. Consider the same example of a machine which is required to identify various vegetables by looking at them. We feed different images of Brinjals [Bengan], Onions and Bell peppers [Shimla Mirch] to our machine. By repeatedly seeing these images, after a while, it senses that three different types of vegetables are present. It has reached this decision by finding out that certain features such as color, size, shape and leaf pattern are similar for one vegetable but different for another. We have offered no help to the machine in doing this. Unsupervised learning therefore requires less input from humans but takes longer to train the machine. It is like a child who is self-taught [learns by himself]. However, a major advantage is that it is capable of finding unknown patterns in data, which is not possible with supervised machine learning models.

Figure 4: A model of Unsupervised Learning

(c) Reinforcement Learning. Reinforcement learning is partly supervised and partly unsupervised. It works on the principle of feedback. Let's say that we feed the image of a bell pepper to a system which is designed to identify different vegetables, and it says that it is an onion. We provide negative feedback to it, i.e. the answer is wrong. RL differs from Supervised learning in the sense that labelled data is not provided to the machine; only feedback is provided. Based on the feedback, the machine starts giving better results.

24. Deep Learning (DL).
In Machine learning we used data to train the machine. Data is also required in the case of Deep learning. However, here we try to imitate [copy] the working of a human brain.

25. In order to imitate the human brain, we need to know how it works, and this is a question we do not fully understand. The human brain is a complex network of neurons. It has around 86 billion neurons making up neural (made up of neurons) networks which process and store data and help us make decisions, remember, and think. In a similar fashion, DL uses Artificial Neural Networks (ANN). An ANN has layers of neurons or nodes arranged in a way inspired by the human brain. The first layer is called the Input layer and the last the Output layer. In between there can be many layers. It is because of the presence of these multiple layers, and their depth, that we call this arrangement Deep learning.

Figure 5: Real & Artificial Neural Networks

26. How does this neural network work? We shall not go into detail because of its complexity, but it must be kept in mind that the output of one layer acts as an input for the next layer, and the same happens in our brain. For this course we shall consider the set of hidden layers as a black box, i.e. we know the input and output of the box but we do not know what is happening inside it.

AI in our Daily Lives

27. Without our knowledge, AI has silently invaded [entered] almost all aspects of our life. It may be the first time that we are taking an academic course on AI, but we have been using many applications of AI for quite some time, and we may see many more in the near future. A few are discussed below.

(a) Chatbots. These are widely popular among students, because chatbots can do homework for them. They can chat (talk) with you and answer questions. Unlike traditional web browsing, chatbots collect information from data on the internet and combine it to give you the desired output.
For example, if you want to write an essay on Pakistan, you will have to search for Pakistan's economy, culture and history from multiple sources. A shortcut is to use Bing Copilot (Figure 6), a chatbot from Microsoft, which will do the same for you in a matter of seconds. Note the superscripts 1, 2 & 3 which indicate the references from which the information has been collected.

Figure 6: Search results of an essay on Pakistan from Bing Copilot

(b) Virtual Assistants. Pick up a smartphone and tell it to call one of your contacts. Your phone shall dial that number for you. You can also ask Google Maps to find a location for you; it is not necessary that you type the words. How does this happen? Each one of us has a different voice, and we may use different words to give the same command, yet your phone still understands what you want it to do. This is because it has been trained to recognize different voices and words and derive meaning from them, just like humans. Two very famous Virtual assistants are Google Assistant (Android phones) and Siri (Apple iPhones).

(c) Search Engines. Have you ever thought about how the Google search bar automatically finds web pages which you are interested in? This is because it has already created a profile of your previous searches. It also considers the region from which you have logged in and the pages which are popular on that particular day. It also uses AI to understand what you have typed in the search bar. Even with spelling mistakes and poorly constructed phrases, you may often reach your desired results due to AI working in the background.

28. Self-Driving / Autonomous Cars. These cars are not fully autonomous yet, but have advanced features such as lane-keep assist, lane-departure warning, lane-change assist, traffic light recognition and smart summon. These cars use AI and a number of sensors to perform all these functions.
So, with the help of a camera and AI, such a car can read a stop sign by the roadside and judge when to brake if a child suddenly comes in front. One such car is shown in Figure 7. Note the presence of cameras and other sensors on top of the car which help it make decisions through AI.

Figure 7: Autonomous Test Vehicle

29. Weaponized AI. AI can also be used in military applications. The Alpha Dogfight program (USA) is trying to do just that, i.e. replace the pilot in a fighter aircraft with AI. Close combat between fighter aircraft, where one aircraft tries to shoot the other at close range, is called a dogfight. It demands greater skill from the pilot than other missions such as bombing and reconnaissance. M M Alam, a world-famous PAF hero, shot down 5 Indian aircraft in a dogfight. Success finally came as an AI-controlled F-16 beat a human pilot 5-0 in simulation, giving us a clear indication that AI may very soon replace humans in fighter aircraft. In Figure 8, the AI agent (green) by Heron Systems is moving to take its second shot at Colonel Banger (call sign), an experienced F-16 pilot.

Figure 8: AI agent (green) positions to take a shot at Col Banger (yellow)

Stages of Artificial Intelligence

30. There are three stages of AI, based on the degree to which it can copy humans. These are as follows: -

(a) Artificial Narrow Intelligence. The purpose of ANI is to solve only one particular problem, so such systems are a little narrow in what they can do and achieve. Google Assistant and Siri are examples of ANI. Deep Blue, the champion chess computer which beat world champion Garry Kasparov in 1997, is also ANI, since it can only play chess and do nothing else. At present, AI has only reached this level, where it can perform only certain specific tasks.

(b) Artificial General Intelligence.
It is the level at which machines perform as well as humans across a number of domains such as image processing, language processing, reasoning, problem solving and even playing chess. So, if Deep Blue could do all these things besides playing chess, we could say that it had reached AGI. Garry Kasparov, our human champion, despite losing to Deep Blue, can do all these things. AGI is in the experimental stage now.

(c) Artificial Super Intelligence. An ASI system would exceed human capabilities in all domains. These systems would see better, think better, solve better and even play chess better than humans. Some experts say these shall also be able to have consciousness. Consciousness means that machines are self-aware like humans. Animals are intelligent, but they are not self-aware because they do not know the purpose of their existence.

ASI is also only in the theoretical phase now. It is also thought that once machines reach the level of AGI, they would very quickly improve themselves to the next level of ASI. Some experts say that this change from AGI to ASI would happen in a matter of seconds. This is possible because machines do not follow the process of biological evolution. Biological evolution means that humans need a set time to grow, i.e. a child will always take 13 years to be a teenager and have the mental and physical capabilities of a boy of that age. Machines, however, can be built very quickly.

Weak and Strong AI

31. Having come this far, we also need to understand a simple concept of Weak and Strong AI, which revolves around whether the AI follows how the human mind works.

(a) Weak AI. If the goal of the AI is just to produce an intelligent result, or to do something smart but not to do it the way humans do, we can call it weak AI. Here we are only concerned about the results and not the path we use for achieving them.
The system copies human behavior (actions) but cannot tell us about the way humans think.

(b) A good example is IBM's Deep Blue computer, which was designed to play chess. Deep Blue managed to defeat the sitting world champion Garry Kasparov in 1997, for the first time in history. Deep Blue played and won chess, but not the way humans do. We discussed earlier that it used brute force search to consider billions of positions before making a move.

(c) Strong AI. Strong AI focuses on the idea of imitating [copying] the human mind. It not only builds intelligent systems but also tells us about how humans think. Take the example of AlphaZero, which is another chess computer but works differently from Deep Blue. AlphaZero gradually learned chess by playing against itself. It was trained, not programmed like Deep Blue. Like humans, it uses reinforcement learning and neural networks. AlphaZero has the ability to make new and creative moves which have at times surprised both its creators and champion chess players.

Ethical Concerns and Dilemmas

32. AI is a very powerful tool, and as it makes progress, a number of ethical concerns and dilemmas have come up. These are listed below: -

(a) Bias and Fairness: AI systems can inherit biases [favoritism] present in the data used to train them. This can result in unfair results, especially when dealing with sensitive factors like race, gender or ethnicity. Ensuring fairness in AI decision-making, particularly in areas like hiring, lending and criminal justice, is a significant challenge.

(b) Privacy and Surveillance: AI relies on vast amounts of personal data, raising concerns about how this data is collected, stored and used. The use of AI for surveillance, such as facial recognition, can compromise individual privacy rights.

(c) Accountability: Determining who is responsible for AI-generated decisions and actions can be difficult, especially in complex systems.
We can punish humans for their wrong decisions, but punishing machines has no impact on them.

(d) Transparency: Many AI models are "black boxes". We do not fully know how they arrive at their decisions; they just take the decision. This lack of transparency can create problems.

(e) Job Displacement and Economic Impact: The automation of jobs by AI and robotics can lead to job displacement, potentially affecting livelihoods and contributing to economic inequality. This has led governments to slow down the use of AI in certain domains.

(f) Ethical Decision-Making: Developing AI systems capable of making ethical decisions aligned with human values is a challenging problem. Ethical dilemmas arise when AI must make life-or-death decisions, such as in the case of an autonomous vehicle. Consider an autonomous car driving on an icy road in the hills. As it moves around a corner, a child comes in front; if the car brakes suddenly, it can go off the cliff with the passengers, and if it does not, the child gets hurt. What should the car do? This is a dilemma [problem] where even humans get confused.

(g) Intellectual Property and Ownership: Determining the ownership of AI-generated content, inventions and intellectual property is complex and often unclear. Who gets the credit for an essay written by Bing Copilot or ChatGPT?

(h) Deep Fakes and Manipulation: AI can create fake videos and images which can look very real. We have started hearing of such malpractices, which cause huge embarrassment to people, their families and communities. One must stay away from such activities and keep personal photographs, videos and audio away from public access, so that they are not used by someone for illegal purposes.

CHAPTER 2
EXTENDED REALITY

Desired Learning Objectives

❖ After covering this chapter, the students shall be able to:

(a) Explain basic terminologies related to Extended reality.
(b) Explain the concept of Haptics (artificial touch).

(c) Distinguish elements of the Reality-Virtuality continuum.

(d) Recommend applications of Extended reality in the PAF training program.

Experiences & Learning

1. Humans learn from experiences. Observe a child when he is being told a story. He is completely immersed (absorbed) in an imaginary world. Through his imagination, he tries to perceive every possible character in the story. In many cases he derives very important lessons from this imagination. This is how we all have grown up. In fact, imagined or simulated scenarios can help us make quicker decisions in real life.

2. We have all heard about the battle of Yarmouk. Even though we have read and heard about it, it is difficult to understand the exact sequence of events. Suppose you have to research this brilliant piece of military strategy; what are your options? You have two: you can read books or watch a documentary. Watching a documentary seems to be the easier option. It also seems to be more effective. Isn't it?

3. A step further, consider going to Yarmouk and recreating the entire battle. You would only require thousands of men, horses, weapons, uniforms and a huge budget. Despite the magnificence of the moment, of Muslim cavalry charging against the Christian forces, you are unlikely to have the resources for this activity. This is where the concept of Extended reality comes in. By using computers, graphics and software tools, we may recreate the battle virtually at very little cost, and the experience we get may be very close to real.

Extended Reality

4. Extended reality (XR) is a broader term which represents a continuum (continuous space). At one end of this continuum is the physical / real world and at the other end the digital world. The word 'digital' here means unreal, virtual and imaginary. Any time we modify the real world by adding some virtual elements to it, we move into the domain of XR.
Figure 1 illustrates the concept of the Reality-Virtuality continuum.

Figure 1: The Reality-Virtuality Continuum

5. The figure shows a scene containing a lake, a few buildings, a dog and a duck. In the figure to the right, we add some cartoon images, which are virtual objects, in order to augment [enhance / add to] the real scene. This is called Augmented Reality (AR). Further right, we create an entirely virtual scene but retain the image of the dog. This is known as Augmented Virtuality (AV), because we have augmented a virtual environment with a real object, i.e. the dog. The last image to the right is completely virtual and is therefore Virtuality (Virtual Reality (VR)).

6. We shall now go over each of these in detail to better understand the concept.

Virtual Reality

7. Virtual reality is the use of computer technology to simulate primarily vision, creating a 3D environment in which the user interacts with virtual / imaginary objects. VR applications immerse [absorb] the user in a computer-generated environment which simulates reality through interactive devices. These interactive devices receive and send information and include goggles, headsets, gloves or body suits. The video games which we play are a good example of VR.

8. Based on how immersive the simulated environment is, we can say there are 03 types of VR.

(a) Fully Immersive. This type of VR provides a very real experience to the user. It completely absorbs the user in it and cuts him off from the real world. When we are travelling in a car, we are seeing and hearing things and also feeling the vibrations of the car. In order to provide a very realistic experience, we must simulate all these signals to our eyes, ears and body. Generating visual graphics on a screen or producing sounds in a headset is not enough.

Figure 2: A Fully Immersive Simulator
At present, technology is not advanced enough to simulate real-life experiences with 100% accuracy, but with every single day we are getting closer to a 100% immersive VR.

(b) Semi-immersive. Semi-immersive VR is neither fully immersive nor non-immersive; the user remains partially immersed. Consider the example of a driving simulator. The driver sees the car moving on a road, hears the sounds and can feel vibrations and movement in his seat, but he is also aware of his real environment. He can see people coming into the room where the simulator has been placed and can hear their voices as well.

Figure 3: A Semi-immersive Simulator

(c) Non-immersive. This type of VR simulates only a small subset of human senses (usually only one). Computer-Aided Design (CAD) software is used to visualize parts in 3D without physically building them. Such software is non-immersive in nature, since it only presents a visual model which the user can see and inspect.

Figure 4: A Non-immersive Simulator

Augmented Reality / Virtuality

9. Augmented Reality (AR) may be defined as the integration [addition / mixing] of digital information with the real environment. AR either visually changes the user's environment or provides some additional information to him. Unlike Virtual Reality (VR), where everything is fake or simulated, AR is reality as well as simulation. We may also define AR as a view of the real world with an overlay of digital elements on top of it.

10. The term Augmented Reality (AR) was first used by a Boeing researcher, Thomas Caudell, in 1990. One of the first large-scale applications of AR was the aircraft head-up display or HUD. Being in the PAF, we are all familiar with this application of AR. In Figure 5, which shows the HUD of an F-16 in flight, you can see the information being displayed on the HUD while the actual scenery is in the background.
This arrangement helps the pilot make decisions about the flight, by providing information such as aircraft speed, flight path and altitude, without his having to take his eyes away. In the case of AR, the real world is the actual focus, and some additional information is added to increase its realism or enhance its value.

Figure 5: An F-16 HUD with information laid over actual imagery

Mixed Reality

11. Mixed reality (MR), just like its name, mixes both augmented reality and virtual reality. Recall that in VR we only interacted with virtual elements and controlled them, while in AR we only interacted with real objects and digital information was superimposed [added / overlaid] on them. What if we start interacting with both virtual and real objects at the same time? This ability to interact with real and virtual objects at the same time can benefit us greatly in many fields.

12. What we have discussed can best be explained by an example. In Figure 6, three real doctors are studying the human heart. The body lying in front of them is a hologram, a virtual body. These doctors are wearing the HoloLens developed by Microsoft. With the help of the HoloLens, the doctors can see both the hologram and the real environment around them.

Figure 6: Doctors wearing HoloLens, training for a procedure on the human heart

13. The HoloLens is not like a VR headset which blinds us to the real environment. With the movement of their hands, the doctors can rotate the body to view the heart from different angles and its various cutouts. You can see that with the movement of her hands, the doctor on the right has removed the skin and exposed the heart. This is the virtual part of the story. More importantly, they are free to interact with their real environment. They can pick up their instruments and show others how those instruments can be used to perform various procedures, look at each other and even take their phone calls.
So, with MR, we can have one foot in the real world and the other in the virtual.

14. Extended Reality. Virtual Reality, Augmented Reality and Mixed Reality, taken together, are called Extended Reality (XR). In other words, all kinds of simulation come under the umbrella of XR. Because we have already discussed the three, there is little need to discuss XR further, and we leave it at that.

Haptics

15. In all cases of VR, AR and MR, it is required that all human senses be replicated [simulated] to provide a high degree of realism. Simulating the senses of vision and hearing is very much possible, as we have seen in video games. What about the sense of touch? Haptics is the answer to the challenge of simulating real touch.

16. Haptics is a technology which artificially creates sensations of touch through the application of forces, motion or vibration to the user. Game controllers such as joysticks and steering wheels are simple and effective examples of haptics. Consider the example of a driving simulator: as the car goes over a road bump, a vibration motor in the steering wheel provides an artificial feeling of jerk. More advanced ways to provide artificial touch feedback can involve ultrasonic speakers. We have all noticed that when loud music is played in a nearby car, we can feel the vibrations. An intelligent arrangement of many small ultrasonic speakers, such as the one shown in Figure 7, can simulate objects which are physically not there by creating a pattern of forces on our hands. Ultrasonic frequencies are used because sound in this range is inaudible to us; only a faint buzzing is heard by the listener, but the forces are felt.

Figure 7: Arrangement of ultrasonic speakers exerting pressure on a hand

CHAPTER 3
SMART FABRICATION & MATERIALS

Desired Learning Objectives

❖ After covering this chapter, the students shall be able to:

(a) Explain basic terminologies related to 3D and 4D printing and smart materials.
(b) Compare 3D and 4D printing vis-a-vis conventional manufacturing.

(c) Evaluate the potential of smart materials in their own trades.

1. Modern technology has introduced us to new materials and new methods of design and manufacturing. These methods are efficient, quick and cost-effective. We have to understand that, despite the best innovations in AI, computing and simulation technology, we spend most of our time interacting with physical objects and products. These products reach us after undergoing distinct manufacturing processes. The manufacturing processes and the materials used generally have the greatest effect on the cost of these products.

Smart Materials

2. Smart materials (SMs) can be defined as materials able to change their shape or behavior when an external stimulus [agent, factor] acts on them. This external stimulus can be electrical, mechanical, thermal, chemical or magnetic. The change in behavior has to be both controllable and reversible. SMs are also known as intelligent or responsive materials.

3. Smart materials can also be used to build smart structures. Figure 1 shows a model of the Eiffel Tower which bends as temperature rises. As temperature decreases, it regains its original shape. Recalling our definition of smart materials, the structure changes its shape with temperature and then reverses the change when the temperature change is reversed. The behavior is predictable and controllable. An uncontrolled response to a change is not sufficient to call a material smart; moreover, such an uncontrolled response is of no use to us.

Figure 1: A smart Eiffel Tower model changing its shape with changing temperature

4. As per the definition of SMs, changing shape is not the only capability SMs have; they can also change their behavior. Based on their ability to modify shape and behavior, SMs can be divided into different types. A few of these types are discussed below as examples:

5.
Piezoelectric materials. 'Piezo' is derived from a Greek word meaning 'to press' or 'to squeeze'. Piezoelectric materials, when pressed or squeezed, produce an electric current. Conversely (in the opposite sense), when supplied with current, they change their shape. The nature of these materials is such that a large amount of electrical energy produces only a small deformation, but a small deformation can generate a large electric field. This property makes these materials well suited for use as sensors for measuring deformation.

6. Figure 2 shows a load cell, a device for measuring force or weight. It is a metal structure with wire made of piezoelectric material attached at different points. As force is applied, the structure deforms under its effect. Since the material is piezoelectric, it produces a current which can be easily measured. More force means more deformation, and more current produces a higher reading on the scale. This is the science behind weighing scales. Weights from a few milligrams to many tons can be measured this way.

Figure 2: Piezoelectric load cell

7. A common cigarette lighter also uses piezoelectric materials, but in a different way, i.e. to produce a spark. As force from the thumb presses the piezoelectric element, the electrical energy causes a spark which ignites the stored fuel to produce a flame. Keep in mind that piezoelectric materials are one of the oldest types of SMs, so they may not impress you, but a whole new range of SMs has been discovered in recent years, two of which are covered in the following paragraphs.

Figure 3: Piezoelectric lighter

8. Shape Memory Alloys (SMAs). SMAs are smart materials which remember their shape and can return to it when certain conditions apply. A very good and simple example of an SMA is Nitinol, an alloy (mixture of two or more metals) of nickel and titanium.

9.
It is a very strong alloy, but its most special property is that when formed into a specific shape, it memorizes that shape. It is not very difficult to make Nitinol do this: take a Nitinol wire, give it the required shape, heat it to 500ºC, then quench it in cold water, and the job is done. Now twist and deform the wire. To bring it back, simply heat the wire again and it will return to the shape it memorized.

10. Shape Memory Alloys have many applications. An interesting example is the Mars rover, a space vehicle built to explore Mars. It is planned that future Mars rovers shall use wheels made of SMA. The rover has to travel continuously on the Martian surface to collect scientific data, so the wheels have to be very tough. Of course, there are no tire shops on Mars, and a $2.53 billion mission can go to waste if the rover stops moving because of a broken wheel. Figure 4 shows one of the wheels of Curiosity (the vehicle exploring Mars right now, in 2023). It is made of aluminum, and we can see the damage: these wheels started getting damaged after moving just 10 miles. The newly designed SMA wheels are considered much tougher than the older aluminum wheels.

Figure 4: Damaged wheel of Curiosity (current Mars rover) and newly designed SMA wheels for the new rover

11. SMAs are also used in the manufacture of stents, which are used to open up blocked arteries in the human body. These differ from normal stents because they do not require extra devices to expand them. When a Radio Frequency (RF) signal is applied to the stent, it expands to open up the artery and provide relief to the patient.

Figure 5: Smart stents

12. Hydrogels. Hydrogels are materials which can be made to absorb or hold water or other fluids under certain conditions. Under reverse conditions, hydrogels can release fluids such as medicines. This can be very beneficial in medical applications.
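The Nitinol shape-memory cycle from paragraph 9 can be captured in a small toy model. The sketch below is purely illustrative: the class name, transition temperature and shape labels are invented for the example and are not real Nitinol data. The wire keeps whatever deformation we apply while cool, and snaps back to its memorized shape once heated past its transition temperature.

```python
# Toy model of a shape-memory wire (illustrative values only, not real Nitinol data).
class ShapeMemoryWire:
    def __init__(self, memorized_shape: str, transition_temp_c: float = 70.0):
        self.memorized_shape = memorized_shape   # shape "trained" at high temperature
        self.transition_temp_c = transition_temp_c
        self.current_shape = memorized_shape

    def deform(self, new_shape: str) -> None:
        """Twist or bend the cool wire into some other shape."""
        self.current_shape = new_shape

    def heat(self, temp_c: float) -> None:
        """Heating past the transition temperature recovers the memorized shape."""
        if temp_c >= self.transition_temp_c:
            self.current_shape = self.memorized_shape

wire = ShapeMemoryWire("spring")
wire.deform("straight")   # mechanically deformed while cool
wire.heat(40)             # not hot enough: stays deformed
assert wire.current_shape == "straight"
wire.heat(90)             # above transition: original shape recovered
assert wire.current_shape == "spring"
```

The point the model captures is the controllable, reversible response required by our definition of smart materials in paragraph 2: the change is triggered by a specific stimulus (temperature) and can be repeated.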
13. The primary benefit of using hydrogels as drug carriers is that they can first absorb the drug and then release it into the body slowly and steadily under controlled conditions. Because of this steady release, the patient does not require frequent doses. Figure 6 shows the mechanism of drug release from a hydrogel. Various external stimuli, such as pH value, temperature, electricity, magnetism, light, glucose and enzymes, can be used to trigger release of the drug from the hydrogel.

Figure 6: Hydrogels can be used effectively as drug carriers

14. We have covered only a few smart materials, just to give an idea of what these materials are capable of. There is a great variety of other smart materials, which may altogether change our way of living in the near future.

3-Dimensional Printing

15. 3D printing is the construction of a 3D object from a digital file by a process known as additive manufacturing. The digital file is a 3D model of the object, built in special software known as Computer Aided Design (CAD) software. AutoCAD is one such program, well known in Pakistan. Through a series of commands, the user can design various components and view them from various angles. An output file from the software is then fed to a 3D printer, which uses special materials to physically build the object.

Figure 7: CAD model or digital file

16. These days, 3D scanners are also used to copy parts for which reverse engineering is required. Very little effort is required from the user: the scanner scans the part and produces its soft copy. 3D printers and scanners are being used in PAC factories and R&D units of the PAF.

Figure 8: 3D scanner copying a part

17. A 3D printer uses additive manufacturing, a process in which layers of material are deposited, solidified and fused [joined] together to build up the physical part.
The process is much like that of the old dot-matrix printers, which printed dots on paper line after line, leaving blank spaces in between, to build up complete text or pictures. 3D printers work in a similar way, using materials such as melted plastic: instead of ink, they deposit solid material, i.e. plastic, layer after layer. The resulting product has length, width and height along the three dimensions; it is for this reason that the process is called 3D printing.

18. The most significant advantage of 3D printing is that complex parts can be manufactured with less material in relatively little time. Machining processes and molds are also not required, as they are in conventional manufacturing. Lathe and milling machines are common in GE shops at PAF bases. If you happen to visit these shops, you can see that a lot of material is scrapped [wasted] on these machines, because the desired object is cut out from a block of material and the remainder is simply wasted. Figure 9 shows this phenomenon.

Figure 9: Additive and subtractive manufacturing; compare the waste produced

19. Applications. 3D printers have many applications, some of which are as follows:

(a) Rapid prototyping. 3D printing is so fast that the entire process from idea to prototype (1st sample) can take days; earlier it used to take weeks. Keep in mind, however, that 3D printing is still not feasible [suitable] for large-scale commercial production.

(b) Automotive. Car collectors are actively using 3D printing to restore [bring back to original form] old cars whose production lines have been closed. This way they can manufacture the required part rather than searching for it in the old-spares market.

(c) Aviation. Additive manufacturing has become very popular in the aviation industry because strong and lightweight parts can be produced. A very interesting example is the integrated (single-piece) Turbine Center Frame shown in Figure 10.
With a 1-meter diameter, this is the largest component produced by additive manufacturing so far. The component was earlier manufactured by assembling 150 smaller parts. Its mass and cost have been reduced by 30%, and the manufacturing lead time (the time between customer order and product delivery) has come down from 9 months to 10 weeks.

Figure 10: Integrated Turbine Center Frame

(d) Construction. An entire house can be printed through additive manufacturing; this is known as concrete additive manufacturing. Figure 11 shows one such printer at work. The time to print a single small house is generally less than a day.

(e) Healthcare. With the help of biomaterials, it is now possible to print human organs, which can then be implanted in humans. Figure 12 shows an ear grown in the laboratory.

Figure 11: A large concrete printer at work
Figure 12: A lab-printed human ear

(f) Eyewear. Lenses are being made by additive manufacturing. Traditional lenses are cut from blocks of material, 80% of which gets wasted; with 3D printing, that material is not wasted.

4-D Printing (Shape Morphing Systems)

20. In our normal lives we deal with three dimensions, i.e. length, width and height. However, it is possible to consider time as the fourth dimension while designing objects. This may sound a little confusing, but it is the idea behind 4D printing.

21. In simple words, 4D-printed objects are 3D-printed objects which have the ability to morph [change shape] over time. The stimulus [agent / reason] for the change of shape may be a physical factor such as temperature, humidity or light.

22. We achieve this by processing smart materials through a 3D printer. We have already discussed earlier in the chapter that SMs have the ability to change shape. Figure 13 shows a mechanical gripper which is able to pick up a small screw when the temperature is right.

23. Applications.
4D printing is a very new technology and may one day completely revolutionize how we design and build. Because 4D printing uses SMs, much depends on what new smart materials are developed in the coming years. Some potential applications of 4D printing are as follows:

Figure 13: 4D-printed gripper able to morph

(a) Self-adjusting piping systems. Piping systems may one day adjust themselves to changing demand for water at different times. For example, a shower system may be designed such that, in the case of very hot water, the pipe diameter reduces to decrease the flow, protecting the user from burns.

(b) Self-assembly furniture. 4D printing can help us build foldable furniture, which saves space when such furniture has to be transported. Figure 14 shows a flat sheet fitted with a set of hinges made of smart materials; the smart hinges allow the table to be set up when required.

Figure 14: Self-assembly furniture

(c) Medicine. 4D printing can be used to manufacture programmed stents. Stents are used to open up the arteries of heart patients when a blockage exists. These stents can travel to the desired location inside the human body, where they expand to open up the blocked artery. They perform a similar function to the stents we discussed under smart materials; however, in this case they are created by 3D printing.

(d) Aerospace. NASA is developing a 4D material made into a chain mail to shield spacecraft and astronauts from meteorites [pieces of meteors]. It looks as if many square pieces of material have been assembled together, whereas the material was actually 3D printed in one piece. The material can fold itself, change its reflectivity, and protect a spacecraft from heat: one side of the material reflects heat while the other absorbs it. This property makes it very useful for keeping machinery and astronauts warm in dark, cold space.

Figure 15: 4D material for space applications
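The self-adjusting pipe in (a) can be illustrated with a toy model. All the numbers below (temperature thresholds, diameters) are invented for illustration; the point is only that the response is controllable and reversible, as our definition of smart materials requires. Here the diameter shrinks linearly as water temperature rises between two thresholds.

```python
# Toy model of a self-adjusting shower pipe (all numbers are illustrative).
def pipe_diameter_mm(water_temp_c: float,
                     full_diameter_mm: float = 20.0,
                     min_diameter_mm: float = 8.0,
                     safe_temp_c: float = 40.0,
                     max_temp_c: float = 60.0) -> float:
    """Return the pipe diameter: fully open at or below the safe temperature,
    shrinking linearly to a minimum at or above the maximum temperature."""
    if water_temp_c <= safe_temp_c:
        return full_diameter_mm
    if water_temp_c >= max_temp_c:
        return min_diameter_mm
    # Linear interpolation between the two temperature thresholds.
    fraction = (water_temp_c - safe_temp_c) / (max_temp_c - safe_temp_c)
    return full_diameter_mm - fraction * (full_diameter_mm - min_diameter_mm)

assert pipe_diameter_mm(30) == 20.0   # comfortable water: pipe fully open
assert pipe_diameter_mm(70) == 8.0    # scalding water: pipe at minimum
assert pipe_diameter_mm(50) == 14.0   # halfway between thresholds
```

Because the same function applies whether the water is heating up or cooling down, the modeled behavior is reversible: as the water cools back below the safe temperature, the pipe returns to its full diameter.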
RESTRICTED
