Module 7: Other Emerging Technologies

Summary

This document introduces various emerging technologies, focusing on their applications across different sectors. It provides a general overview of topics like nanotechnology, biotechnology, and cloud computing, touching upon their crucial aspects and fundamental concepts.

Full Transcript


Module 7: Other Emerging Technologies

Introduction: In the previous chapter, you studied emerging technologies such as data science, artificial intelligence, the Internet of Things, and augmented reality, along with their ethical issues. In this chapter, you will study other emerging technologies: nanotechnology, biotechnology, blockchain technology, cloud and quantum computing, autonomic computing, computer vision, embedded systems, cybersecurity, and 3D printing.

Learning Outcomes: By the end of the topic, students will be able to:
▪ Explain nanotechnology and its application in different sectors.
▪ Explain biotechnology and its application in different sectors.
▪ Explain blockchain technology and its application.
▪ Gain insights into cloud, quantum, and autonomic computing, their differences, and their applications.
▪ Explain how computer vision works and its application.
▪ Identify and explain embedded systems and their pros and cons.

Learning Content:
▪ Nanotechnology
▪ Biotechnology
▪ Blockchain Technology
▪ Cloud, quantum and autonomic computing
▪ Computer Vision
▪ Embedded Systems

NANOTECHNOLOGY

Nanotechnology is science, engineering, and technology conducted at the nanoscale, which is about 1 to 100 nanometers. Nanoscience and nanotechnology are the study and application of extremely small things and can be used across all the other science fields, such as chemistry, biology, physics, materials science, and engineering. Nanoscale is a term used to describe dimensions between 1 and 100 nanometers (nm), where one nanometer is one-billionth of a meter. The word "nano" comes from the Greek word for "dwarf".

The ideas and concepts behind nanoscience and nanotechnology started with a talk entitled "There's Plenty of Room at the Bottom" by physicist Richard Feynman at an American Physical Society meeting at the California Institute of Technology (Caltech) on December 29, 1959, long before the term nanotechnology was used. Over a decade later, in his explorations of ultraprecision machining, Professor Norio Taniguchi coined the term nanotechnology. It wasn't until 1981, with the development of the scanning tunneling microscope that could "see" individual atoms, that modern nanotechnology began.

FUNDAMENTAL CONCEPTS IN NANOSCIENCE AND NANOTECHNOLOGY

One nanometer is a billionth of a meter, or 10⁻⁹ meters. Examples:
▪ There are 25,400,000 nanometers in an inch.
▪ A sheet of newspaper is about 100,000 nanometers thick.
▪ On a comparative scale, if a marble were a nanometer, then one meter would be the size of the Earth (see the quick check below).

Nanoscience and nanotechnology involve the ability to see and to control individual atoms and molecules. Everything on Earth is made up of atoms: the food we eat, the clothes we wear, the buildings and houses we live in, and our own bodies. But something as small as an atom is impossible to see with the naked eye. In fact, it's impossible to see with the microscopes typically used in high school science classes; the microscopes needed to see things at the nanoscale were invented relatively recently, about 30 years ago. As small as a nanometer is, it is still large compared to the atomic scale. An atom has a diameter of about 0.1 nm, and an atom's nucleus is much smaller, about 0.00001 nm. Atoms are the building blocks of all matter.
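The scale comparisons above can be verified with plain arithmetic. Here is a quick sanity check in Python; the ~1 cm marble diameter is an assumption made for the analogy, not a figure from the text:

```python
# One nanometer is one billionth of a meter.
NANOMETER = 1e-9
INCH = 0.0254  # meters

# Nanometers in an inch: 0.0254 / 1e-9 = 25,400,000.
print(round(INCH / NANOMETER))  # 25400000

# Marble-to-Earth analogy: if a ~1 cm marble stood for 1 nm, the
# magnification factor is 0.01 / 1e-9 = 1e7. Scaling one meter by the
# same factor gives 1e7 m = 10,000 km, roughly the Earth's diameter
# (about 12,742 km).
factor = 0.01 / NANOMETER
print(round(factor * 1.0 / 1000), "km")  # 10000 km
```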
APPLICATIONS OF NANOTECHNOLOGY

Medicine: customized nanoparticles the size of molecules can deliver drugs directly to diseased cells in your body. When this method is perfected, it should greatly reduce the damage that treatments such as chemotherapy do to a patient's healthy cells.

Electronics: nanotechnology offers some answers for how we might increase the capabilities of electronic devices while reducing their weight and power consumption.

Food: nanotechnology has an impact on several aspects of food science, from how food is grown to how it is packaged. Companies are developing nanomaterials that will make a difference not only in the taste of food but also in food safety and the health benefits that food delivers.

Agriculture: nanotechnology could change the entire agriculture and food industry chain, from production to preservation, processing, packaging, transportation, and even waste treatment.

Vehicle manufacturing: much as in aviation, lighter and stronger materials will be valuable for making vehicles that are both faster and safer. Combustion engines will likewise benefit from parts that are more hard-wearing and more resistant to high temperatures.

BIOTECHNOLOGY

At its simplest, biotechnology is technology based on biology: biotechnology harnesses cellular and biomolecular processes to develop technologies and products that help improve our lives and the health of our planet. Examples:
▪ Brewing and baking bread are processes that fall within the concept of biotechnology: the use of yeast (a living organism) to produce the desired product.
▪ One example of modern biotechnology is genetic engineering. Genetic engineering is the process of transferring individual genes between organisms or modifying the genes in an organism to remove or add a desired trait or characteristic.

HISTORY

When Edward Jenner invented vaccines and when Alexander Fleming discovered antibiotics, they were harnessing the power of biotechnology. And, of course, modern civilization would hardly be imaginable without the fermentation processes that gave us beer, wine, and cheese. When he coined the term in 1919, the agriculturalist Karl Ereky described "biotechnology" as "all lines of work by which products are produced from raw materials with the aid of living things." In modern biotechnology, researchers modify DNA and proteins to shape the capabilities of living cells, plants, and animals into something useful for humans. Biotechnologists do this by sequencing, or reading, the DNA found in nature and then manipulating it in a test tube or, more recently, inside living cells.

APPLICATION OF BIOTECHNOLOGY

Agriculture (Green Biotechnology): Biotechnology has contributed a great deal to modifying the genes of organisms, producing genetically modified organisms (GMOs) such as crops, animals, plants, fungi, and bacteria. Genetically modified crops are formed by manipulating DNA to introduce a new trait into the crop, such as pest resistance, insect resistance, or weed resistance.

Medicine (Medicinal Biotechnology): Biotechnology enables the production of genetically engineered insulin, known as Humulin, which helps in the treatment of a large number of diabetes patients. It has also given rise to a technique known as gene therapy. Gene therapy is a technique to correct a genetic defect in an embryo or child; it involves the transfer of a normal gene that works in place of the non-functional gene.

Aquaculture and Fisheries: Biotechnology helps in improving the quality and quantity of fish. Through biotechnology, fish are induced to breed via gonadotropin-releasing hormone.
Environment (Environmental Biotechnology): Environmental biotechnology is used in waste treatment and pollution prevention. It can clean up many wastes more efficiently than conventional methods and greatly reduce our dependence on land-based disposal.

BLOCKCHAIN TECHNOLOGY

A blockchain is, in the simplest of terms, a time-stamped series of immutable records of data that is managed by a cluster of computers not owned by any single entity. Each of these blocks of data (i.e., "block") is secured and bound to the others using cryptographic principles (i.e., the "chain"). Originally, a blockchain was defined as a growing list of records, called blocks, that are linked using cryptography (cryptography is the process of hiding or encoding information so that only the intended recipient can read it). Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data (generally represented as a Merkle tree). Blockchain is a record-keeping technology designed to make it impossible to hack the system or forge the data stored on the blockchain.

"Blocks" on the blockchain are made up of digital pieces of information. Specifically, they have three parts:
1. Blocks store information about transactions, like the date, time, and dollar amount of your most recent purchase from an online shop (e.g., Amazon).
2. Blocks store information about who is participating in transactions.
3. Blocks store information that distinguishes them from other blocks.

When a block stores new data, it is added to the blockchain. Blockchain, as its name suggests, consists of multiple blocks strung together. In order for a block to be added to the blockchain, however, four things must happen:
1. A transaction must occur. Let's continue with the example of an impulsive Amazon purchase. After hastily clicking through multiple checkout prompts, you go against your better judgment and make a purchase.
2. That transaction must be verified. With other public records of information, like the Securities and Exchange Commission, Wikipedia, or your local library, there is someone in charge of vetting new data entries. With blockchain, however, that job is left up to a network of computers. These networks often consist of thousands (or in the case of Bitcoin, about five million) computers spread across the globe.
3. That transaction must be stored in a block. After your transaction has been verified as accurate, it gets the green light. The transaction's dollar amount, your digital signature, and Amazon's digital signature are all stored in a block. There, the transaction will likely join hundreds, or thousands, of others like it.
4. That block must be given a hash. Not unlike an angel earning its wings, once all of a block's transactions have been verified, the block must be given a unique identifying code called a hash. The block is also given the hash of the most recent block added to the blockchain. Once hashed, the block can be added to the blockchain (a minimal code sketch of this mechanism follows below).
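To make steps 3 and 4 concrete, here is a minimal, illustrative sketch in Python. It is not a real consensus protocol: there is no verification network, no digital signatures, and no proof-of-work; the transaction contents are made up, and SHA-256 stands in for "the cryptographic hash function":

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """SHA-256 maps input of any length to a fixed-length digest (64 hex chars)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

chain = []

def add_block(transactions: list) -> dict:
    # Step 3: store the verified transactions in a block.
    # Step 4: give the block its own hash, and include the hash of the
    # most recent block so the blocks form a tamper-evident chain.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"timestamp": time.time(),
             "transactions": transactions,
             "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)
    return block

genesis = add_block([{"from": "network", "to": "genesis", "amount": 0}])
add_block([{"from": "1MF1bhs...", "to": "Amazon", "amount": 1}])

# Tampering with an earlier block changes its recomputed hash, which no
# longer matches the prev_hash stored in the next block:
genesis["transactions"][0]["amount"] = 999
recomputed = block_hash({k: v for k, v in genesis.items() if k != "hash"})
print(recomputed == chain[1]["prev_hash"])  # False
```

In a real blockchain such as Bitcoin, the verification in step 2 is performed by the network through a consensus mechanism (proof-of-work), which this sketch deliberately omits.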
History

The first work on a cryptographically secured chain of blocks was described in 1991 by Stuart Haber and W. Scott Stornetta. They wanted to implement a system in which document timestamps could not be tampered with. In 1992, Bayer, Haber, and Stornetta incorporated Merkle trees into the design, which improved its efficiency by allowing several document certificates to be collected into one block. The first blockchain was conceptualized by a person (or group of people) known as Satoshi Nakamoto in 2008. Nakamoto improved the design in an important way, using a Hashcash-like method to add blocks to the chain without requiring them to be signed by a trusted party. The design was implemented the following year by Nakamoto as a core component of the cryptocurrency bitcoin, where it serves as the public ledger for all transactions on the network. In August 2014, the bitcoin blockchain file, containing records of all transactions that have occurred on the network, reached 20 GB in size. In January 2015, the size had grown to almost 30 GB, and from January 2016 to January 2017, the bitcoin blockchain grew from 50 GB to 100 GB. The words "block" and "chain" were used separately in Satoshi Nakamoto's original paper but were eventually popularized as a single word, blockchain, by 2016.

The Three Pillars of Blockchain Technology

1. Decentralization - In a centralized system, information is stored and controlled by a single entity. In a decentralized network, everyone in the network owns the information, and if you want to interact with your friend you can do so directly, without going through a third party. That was the main ideology behind Bitcoin.
2. Transparency - One of the most interesting and misunderstood concepts in blockchain technology is "transparency." A person's identity is hidden via complex cryptography and represented only by their public address. So, if you were to look up a person's transaction history, you would not see "Bob sent 1 BTC"; instead you would see "1MF1bhsFLkBzzz9vpFYEmvwT2TbyCt7NZJ sent 1 BTC." So, while the person's real identity is secure, you will still see all the transactions that were made by their public address.
3. Immutability - Immutability, in the context of the blockchain, means that once something has been entered into the blockchain, it cannot be tampered with. The blockchain gets this property from the cryptographic hash function. In simple terms, hashing means taking an input string of any length and producing an output of a fixed length.

The blockchain has gained so much admiration because:
▪ It is not owned by a single entity; hence it is decentralized.
▪ The data is cryptographically stored inside.
▪ The blockchain is immutable, so no one can tamper with the data that is inside it.
▪ The blockchain is transparent, so anyone can track the data if they want to.

Application of blockchain
1. The sharing economy - With companies like Uber and Airbnb flourishing, the sharing economy is already a proven success.
2. Crowdfunding - Crowdfunding initiatives like Kickstarter and GoFundMe are doing the advance work for the emerging peer-to-peer economy.
3. Governance - The app Boardroom enables organizational decision-making to happen on the blockchain.
4. Supply chain auditing - Consumers increasingly want to know that the ethical claims companies make about their products are real.
5. File storage - Decentralizing file storage on the internet brings clear benefits. Distributing data throughout the network protects files from getting hacked or lost.
CLOUD AND QUANTUM COMPUTING

Cloud computing is a means of networking remote servers that are hosted on the Internet. Rather than storing and processing data on a local server or a PC's hard drive, one of the following three types of cloud infrastructure is used.

The first type is a public cloud. Here a third-party provider manages the servers, applications, and storage, much like a public utility. Anyone can subscribe to the provider's cloud service, which is usually operated through the provider's own data center.

A business or organization would typically use a private cloud. This might be hosted in their onsite data center, although some companies host through a third-party provider instead. Either way, the computing infrastructure exists as a private network accessible over the Internet.

The third option is a hybrid cloud. Here private clouds are connected to public clouds, allowing data and applications to be shared between them. Private clouds existing alone can be very limiting, and a hybrid offers a business more flexibility. Often a hybrid cloud includes multiple service providers. Hybrids can offer more computing capacity for a business application when demand for it spikes; this sudden expansion into the public cloud is known as cloud bursting. Hybrids also enable applications to keep sensitive client data in a private cloud while connecting to end-user software in a public cloud.

Cloud computing services can focus on infrastructure, web development, or a cloud-based app. These are often regarded as a stack; all are on-demand and pay-as-you-go. Infrastructure as a Service (IaaS) gives you management of the whole deal: servers, web development tools, and applications. Platform as a Service (PaaS) offers a complete web development environment, without the worry of the hardware that runs it. Finally, Software as a Service (SaaS) allows access to cloud-based apps, usually through a web browser interface. SaaS is the top of the stack.

Cloud computing has been around since 2000, yet it is only in the last 10 years that major players like IBM, Amazon, and Google have offered commercially viable, high-capacity networks.

Advantages of cloud computing

Much like with any utility, a business benefits from economies of scale, which means cheap computing power. Because a cloud provider's hardware and software are shared, there is no need for the initial costly capital investment. And it goes much further than that: businesses save on the electricity required 24/7 to power and cool that computing infrastructure. In effect, energy costs are shared. Cloud providers also have vast resources of computing power at their fingertips and can allocate these whenever required with just a few mouse clicks. Because they source on a global scale, they can deliver the precise bandwidth, storage, and power a business needs, when it needs it. The cloud allows you and multiple users to access your data from any location: smartphone, laptop, or desktop, wherever you are, you can access the data you need at any time. With cloud computing, a business processes its data more efficiently, increasing productivity. Maintenance is much cheaper, often free, so reliability is rarely a worry. Cloud computing allows CEOs to focus on running their business.

QUANTUM COMPUTING

Quantum computers truly represent the next generation of computing. Unlike classical computers, they derive their computing power by harnessing the power of quantum physics. Currently, the only organization that provides a quantum computer in the cloud is IBM. It allows free access to anyone who wishes to use its 5-qubit machine, and earlier this year it installed a 17-qubit machine. So far, over 40,000 users have taken advantage of the online service to run experiments. Not to be outdone, Google demonstrated the fastest quantum computer, with 53 qubits: it performed in 200 seconds a computation that would have taken a supercomputer 10,000 years.

What is a qubit, and how many do you need? Qubit is short for quantum bit. With a classical computer, data is stored in tiny transistors that hold a single bit of information: the binary value 1 or 0. With a quantum computer, the data is stored in qubits. Thanks to the mechanics of quantum physics, where subatomic particles obey their own laws, a qubit can exist in two states at the same time. This phenomenon is called superposition. So a qubit can represent 1, 0, or a combination of both; two qubits can hold even more values. The more qubits you add, the more exponentially powerful the computer you are building becomes (the sketch below illustrates this).
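A qubit's state can be written as a pair of amplitudes, one for 0 and one for 1, whose squared magnitudes give the measurement probabilities. The following toy sketch in Python (using NumPy) illustrates superposition and why adding qubits grows the state exponentially; it is a numerical illustration, not a quantum programming framework:

```python
import numpy as np

# A classical bit is exactly 0 or 1. A qubit's state is a length-2
# vector of amplitudes; measuring it yields 0 or 1 with probability
# equal to each amplitude's squared magnitude (superposition).
qubit = np.array([1, 1]) / np.sqrt(2)  # equal superposition of 0 and 1
probs = np.abs(qubit) ** 2
print(probs)  # [0.5 0.5]

# Simulate one measurement of this qubit:
print("measured:", np.random.choice([0, 1], p=probs))

# Describing n qubits takes 2**n amplitudes, which is why each added
# qubit doubles the state space -- the exponential growth noted above.
for n in (1, 2, 10, 53):  # 53 qubits: the Google machine mentioned above
    print(n, "qubits ->", 2 ** n, "amplitudes")
```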
Advantages of quantum computing

Getting a quantum computer to function usefully is an exciting prospect for scientists. Its gargantuan computing power would allow them to crunch very long numbers and make complex calculations that would simply overwhelm classical computers. Accessing a cloud-based quantum computer combines the benefits of both technologies. Quantum computing could help in the discovery of new drugs by unlocking the complex structure of chemical molecules. Other uses include financial trading, risk management, and supply chain optimization. With its ability to handle more complex numbers, data could be transferred over the internet with much safer encryption.

AUTONOMIC COMPUTING (AC)

Autonomic computing (AC) is an approach to addressing the complexity and evolution problems in software systems. It is a self-managing computing model named after, and patterned on, the human body's autonomic nervous system. An autonomic computing system would control the functioning of computer applications and systems without input from the user, in the same way that the autonomic nervous system regulates body systems without conscious input from the individual. The goal of autonomic computing is to create systems that run themselves, capable of high-level functioning while keeping the system's complexity invisible to the user.

Characteristics of Autonomic Systems

An autonomic system can self-configure at runtime to meet changing operating environments, self-tune to optimize its performance, self-heal when it encounters unexpected obstacles during its operation, and, of particular current interest, protect itself from malicious attacks. An autonomic system can self-manage anything, from a single property to multiple properties. Autonomic systems/applications exhibit eight defining characteristics (a toy control loop illustrating the first few follows the list):
▪ Self-Awareness: An autonomic application/system "knows itself" and is aware of its state and its behaviors.
▪ Self-Configuring: An autonomic application/system should be able to configure and reconfigure itself under varying and unpredictable conditions.
▪ Self-Optimizing: An autonomic application/system should be able to detect suboptimal behaviors and optimize itself to improve its execution.
▪ Self-Healing: An autonomic application/system should be able to detect and recover from potential problems and continue to function smoothly.
▪ Self-Protecting: An autonomic application/system should be capable of detecting and protecting its resources from both internal and external attacks and maintaining overall system security and integrity.
▪ Context-Aware: An autonomic application/system should be aware of its execution environment and be able to react to changes in the environment.
▪ Open: An autonomic application/system must function in a heterogeneous world and should be portable across multiple hardware and software architectures. Consequently, it must be built on standard and open protocols and interfaces.
▪ Anticipatory: An autonomic application/system should be able to anticipate, to the extent possible, its needs and behaviors and those of its context, and be able to manage itself proactively.
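To ground the idea, here is a toy self-managing loop in Python. Everything in it (the load probe, the thresholds, the worker count) is invented for illustration; real autonomic systems use a far richer monitor-analyze-plan-execute cycle:

```python
import random

# Toy self-managing loop: the system observes its own state and adapts
# without user input. The metric, thresholds, and "workers" are all
# hypothetical; a real system would monitor real resources.

def read_load() -> float:
    """Stand-in probe for a real metric such as CPU utilization (0..1)."""
    return random.random()

workers = 2  # current resource allocation

for step in range(10):
    load = read_load()                      # monitor: self-awareness
    if load > 0.8:                          # analyze: detect suboptimal state
        workers += 1                        # execute: self-optimize upward
        print(f"step {step}: load {load:.2f} high -> {workers} workers")
    elif load < 0.2 and workers > 1:
        workers -= 1                        # self-configure back down
        print(f"step {step}: load {load:.2f} low -> {workers} workers")
    else:
        print(f"step {step}: load {load:.2f} ok, {workers} workers")
```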
COMPUTER VISION

Computer vision is an interdisciplinary scientific field that deals with how computers can be made to gain a high-level understanding of digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do. Computer vision tasks include methods for acquiring, processing, analyzing, and understanding digital images, and for extracting high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the form of decisions. Another way to define computer vision is through its applications: computer vision is building algorithms that can understand the content of images and use it for other applications.

How computer vision works
1. Acquiring an image: Images, even large sets, can be acquired in real time through video, photos, or 3D technology for analysis.
2. Processing the image: Deep learning models automate much of this process, but the models are often trained by first being fed thousands of labeled or pre-identified images.
3. Understanding the image: The final step is the interpretative step, where an object is identified or classified.

(A minimal code walk-through of these three steps follows the application list below.)

Applications of computer vision
Computer vision is being used today in a wide variety of real-world applications, which include:
▪ Optical character recognition (OCR): reading handwritten postal codes on letters (Figure 7.5a) and automatic number plate recognition (ANPR).
▪ Machine inspection: rapid parts inspection for quality assurance, using stereo vision with specialized illumination to measure tolerances on aircraft wings or auto body parts (Figure 7.5b), or looking for defects in steel castings using X-ray vision.
▪ Retail: object recognition for automated checkout lanes (Figure 7.5c).
▪ Medical imaging: registering pre-operative and intra-operative imagery (Figure 7.5d) or performing long-term studies of people's brain morphology as they age.
▪ Automotive safety: detecting unexpected obstacles such as pedestrians on the street, under conditions where active vision techniques such as radar or lidar do not work well (Figure 7.5e).
▪ Surveillance: monitoring for intruders, analyzing highway traffic (Figure 7.5f), and monitoring pools for drowning victims.
▪ Fingerprint recognition and biometrics: automatic access authentication as well as forensic applications.
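Here is a minimal sketch of the three-step pipeline using Python and OpenCV. The file name is a placeholder, the edge-detector thresholds are arbitrary, and contour counting is only a crude stand-in for the "understanding" step, which in practice is done by a trained model:

```python
import cv2

# Step 1 -- Acquire: load an image from disk (a camera frame obtained
# via cv2.VideoCapture would work the same way). "parts.jpg" is a
# placeholder path, not a file from the text.
image = cv2.imread("parts.jpg")
if image is None:
    raise FileNotFoundError("parts.jpg not found")

# Step 2 -- Process: convert to grayscale and extract edges. Deep
# learning models replace this hand-tuned stage in modern systems,
# but the overall pipeline shape is the same.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, threshold1=100, threshold2=200)

# Step 3 -- Understand: a crude interpretation that counts connected
# outlines, standing in for "an object is identified or classified".
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"found {len(contours)} candidate object outlines")
```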
EMBEDDED SYSTEMS

An embedded system is a controller with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a complete device, often including hardware and mechanical parts. Embedded systems control many devices in common use today: ninety-eight percent of all microprocessors manufactured are used in embedded systems. Modern embedded systems are often based on microcontrollers (i.e., microprocessors with integrated memory and peripheral interfaces), but ordinary microprocessors (using external chips for memory and peripheral interface circuits) are also common, especially in more complex systems. In either case, the processor(s) used may range from general-purpose types to those specialized in a certain class of computations, or even custom designed for the application at hand. A common standard class of dedicated processors is the digital signal processor (DSP).

Advantages of embedded systems
▪ Easily customizable
▪ Low power consumption
▪ Low cost
▪ Enhanced performance

Disadvantages of embedded systems
▪ High development effort
▪ Longer time to market

Basic Structure of an Embedded System
(A software mock of this chain follows the list.)
Sensor − Measures the physical quantity and converts it to an electrical signal which can be read by an observer or by an electronic instrument such as an A-D converter. The sensor stores the measured quantity in memory.
A-D Converter − An analog-to-digital converter converts the analog signal sent by the sensor into a digital signal.
Processor & ASICs − Processors process the data to measure the output and store it in memory.
D-A Converter − A digital-to-analog converter converts the digital data fed by the processor into analog data.
Actuator − An actuator compares the output given by the D-A converter to the actual (expected) output stored in it and stores the approved output.
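As a purely illustrative software mock of the sensor → A-D converter → processor → D-A converter → actuator chain (the 0-5 V range, 10-bit resolution, and clamping rule are invented for the example; on real hardware these stages are circuits):

```python
import random

def sensor() -> float:
    """Physical quantity expressed as an analog voltage (0-5 V)."""
    return random.uniform(0.0, 5.0)

def adc(voltage: float, bits: int = 10, v_ref: float = 5.0) -> int:
    """Quantize the analog signal into a digital code (0..1023 for 10 bits)."""
    return round(voltage / v_ref * (2 ** bits - 1))

def process(code: int) -> int:
    """Dedicated-function logic: e.g., clamp the drive level at 80%."""
    return min(code, int(0.8 * 1023))

def dac(code: int, bits: int = 10, v_ref: float = 5.0) -> float:
    """Convert the processed digital value back to an analog voltage."""
    return code / (2 ** bits - 1) * v_ref

# One pass through the chain per loop iteration, as in a control loop.
for _ in range(3):
    v_in = sensor()
    v_out = dac(process(adc(v_in)))
    print(f"sensor {v_in:.2f} V -> actuator drive {v_out:.2f} V")
```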
