Fifth-Generation Computing
Ecclesiastes 9:10. What do you think?

The primary goal of AI research is to increase our understanding of perceptual, reasoning, learning, linguistic and creative processes. Just as the invention of the internal combustion engine and machines like the airplane brought unprecedented enhancements in mobility, the tools resulting from AI research are extending human intellectual and creative capabilities in ways that our predecessors could only dream about. AI is also, in fact, about the design and implementation of intelligent agents.

1. Neural Networks
2. Data Mining
3. Machine Learning
4. Natural Language Processing
5. Speech Recognition
6. Computer Vision
7. Semantic Web
8. Genetic Algorithms
9. Speech Synthesis

NEURAL NETWORKS

Quick Fun Fact
Q: How many neurons are in the human nervous system?
A: Humans have about 86,000,000,000 (86 billion) neurons.

What are Neural Networks then?
NEURONS + NETWORKS = NEURAL NETWORKS

In artificial intelligence, neural networks can be defined as synthesized computer systems based on the structure of the central nervous system (the biological neural network). An artificial neural network can be defined as "...a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs." (Dr. Robert Hecht-Nielsen)

Architecture of a Neural Network

Applications of Neural Networks
- Disease diagnosis, especially the detection of cancer and heart disease
- Investment management and risk control, including stock market prediction
- Electronic sensory organs
- Character and image recognition
- Brain-computer interfaces
- Malware and virus detection
- Ecosystem evaluation
- Gene recognition
- Knowledge discovery
- Data classification
- Targeted marketing

DATA MINING

Data + Mining

How Much Data? The amount of digital data by 2020 was estimated at 1,208,856,701,851 terabytes.
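The "simple, highly interconnected processing elements" in Hecht-Nielsen's definition above can be illustrated with a minimal hand-built network. The sketch below is illustrative only: the weights are chosen by hand rather than learned, and it wires three sigmoid neurons into a tiny two-layer network that computes XOR, a function no single neuron can compute on its own.

```python
import math

def sigmoid(x):
    # smooth "on/off" activation: maps any number into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # a processing element: weighted sum of inputs, then an activation
    return sigmoid(sum(w * i for w, i in zip(weights, inputs)) + bias)

def xor_network(x1, x2):
    # hidden layer (weights hand-picked for illustration):
    h1 = neuron([x1, x2], [20, 20], -10)   # fires if either input is 1 (OR)
    h2 = neuron([x1, x2], [20, 20], -30)   # fires only if both are 1 (AND)
    # output fires when h1 is on and h2 is off, which is exactly XOR
    return neuron([h1, h2], [20, -20], -10)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(xor_network(a, b)))
```

In a real network these weights would be found by a learning procedure such as backpropagation rather than set by hand; the point here is only the architecture of interconnected neurons.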
Today, we have far more information than we can handle, from business transactions and scientific data to satellite pictures, text reports and military intelligence. How do we sort this huge mass of data, and how do we find patterns and relationships within it? What kind of information are we collecting?

Data mining is basically the processing of information (gotten from the processing of data) into knowledge. It is safe to say that data mining is synonymous with knowledge excavation/extraction.

Data mining can be said to be the process of extracting valid, previously unknown, comprehensible and actionable information from large databases, and using that information to make crucial (business) decisions. It is the analysis of data with the intent to prove a hypothesis or to discover gems of information in the vast quantity of data that exists: looking for patterns in a collection of facts or observations.

Foundation of Data Mining

Data Mining Applications
- Business transactions
- Scientific data
- Medical and personal data
- Games
- Digital media
- The World Wide Web
- Software storehouses
- CAD and engineering data
- Virtual worlds

MACHINE LEARNING

Consider this common situation: you are in your car, speeding away, when you suddenly hear a "funny" noise. To prevent an accident, you slow down, and either stop the car or bring it to the nearest auto mechanic.

Imagine that, instead of driving your good but old car, you were asked to drive a truck. Would you know a "funny" noise from a "normal" one? Probably not, since you have never driven a truck before! While you drove your car during all these years, you effectively learned what your car sounds like, and this is why you were able to identify that "funny" noise.

What is Machine Learning? Machine learning is a branch of artificial intelligence concerned with the construction and study of systems/algorithms that can learn from data.
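The "funny noise" story above is what the simplest learning algorithms formalize: classify a new observation by its similarity to past experience. A minimal sketch using 1-nearest-neighbour follows; the "engine sound" features and numbers are invented purely for illustration.

```python
def nearest_neighbour(train, query):
    # classify a new observation by the closest past example
    # train: list of (feature_vector, label) pairs seen in the past
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], query))[1]

# toy "engine sound" history: (loudness, rattle) -> label, invented values
history = [((0.2, 0.1), "normal"), ((0.3, 0.2), "normal"),
           ((0.9, 0.8), "funny"), ((0.8, 0.9), "funny")]

# a new sound close to past "funny" examples is classified as funny
print(nearest_neighbour(history, (0.85, 0.75)))
```

This mirrors the driver's situation exactly: years of "normal" examples make the anomalous one recognizable, while a first-time truck driver has an empty `history` and cannot classify anything.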
Algorithms Grouped by Similarity

Machine learning algorithms can be classified under supervised, unsupervised and reinforcement learning. Some of the algorithms used for implementation include:
▪ Regression Algorithms
▪ Instance-based Algorithms
▪ Regularization Algorithms
▪ Decision Tree Algorithms
▪ Bayesian Algorithms
▪ Deep Learning Algorithms
▪ Ensemble Algorithms
▪ k-Nearest Neighbour Algorithms

Example: Spam Filtering Agent. The aim is to assign emails to their correct classes.

AGENT ONE (rows: the filter's decision; columns: the correct class)
                        Desired   Spam
Filter decides Desired      189      1
Filter decides Spam          11    799

AGENT TWO (rows: the filter's decision; columns: the correct class)
                        Desired   Spam
Filter decides Desired      200     38
Filter decides Spam           0    762

Is Agent Two worse than Agent One?

NATURAL LANGUAGE PROCESSING

What is Natural Language Processing (NLP)? This is an area that deals with analyzing, understanding and generating the languages that humans use naturally, in order to interface with computers in both written and spoken language. The origin of this field cuts across a number of disciplines: computing and information sciences, artificial intelligence, linguistics, mathematics, electrical and electronic engineering, and psychology. NLP is also called computational linguistics.

A full NLP system would be able to:
- Paraphrase an input text
- Translate the text into another language
- Answer questions about the contents of the text
- Draw inferences from the text

Applications of NLP Today
Siri, released by Apple Inc. in 2011 as a feature of the iPhone 4S, is a computer program that works as an intelligent personal assistant. The feature uses a natural language user interface to answer questions, make recommendations, and perform actions by delegating requests to a set of Web services. Cortana is the corresponding application on Windows; Bixby and Cleverbot are other examples.

SPEECH RECOGNITION

Speech recognition is the ability of a machine or program to receive and interpret dictation or spoken commands.
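The spam-filter question posed above ("Is Agent Two worse than Agent One?") can be settled numerically from the two confusion matrices. The sketch below (counts taken from the tables; the metric names are standard, not from the slides) shows the trade-off: Agent One is more accurate overall, but Agent Two never sends a genuinely desired email to the spam folder.

```python
def evaluate(matrix):
    # matrix[decision][truth] holds counts; classes are "desired" and "spam"
    total = sum(sum(row.values()) for row in matrix.values())
    correct = matrix["desired"]["desired"] + matrix["spam"]["spam"]
    lost_mail = matrix["spam"]["desired"]  # real mail sent to the spam folder
    return correct / total, lost_mail

agent_one = {"desired": {"desired": 189, "spam": 1},
             "spam":    {"desired": 11,  "spam": 799}}
agent_two = {"desired": {"desired": 200, "spam": 38},
             "spam":    {"desired": 0,   "spam": 762}}

print(evaluate(agent_one))  # higher accuracy, but 11 desired emails lost
print(evaluate(agent_two))  # lower accuracy, but no desired email ever lost
```

Which agent is "worse" therefore depends on the cost assigned to each kind of error, which is exactly the point of the slide's question.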
Speech recognition software gives you the ability to navigate your computer and compile documents faster than with a keyboard and mouse. "Say it, don't type it."

The robustness of a recognition system is heavily influenced by its ability to:
- handle the presence of background noise, and
- cope with distortion by the frequency characteristic of the transmission channel (often also described as convolutional "noise", although the term convolutional distortion is preferred).

COMPUTER VISION

Computer vision is a field in computer science and technology that includes methods for acquiring, processing and understanding images and, in general, high-dimensional data from the real world, in order to produce numerical or symbolic information.

Case Study
Computer vision is a very broad topic that is researched daily. It is an interesting and fascinating field because it covers a wide range of topics that are currently revolutionizing the world. Our group therefore focuses on certain fields of computer vision: facial recognition, motion analysis, scene reconstruction and image restoration.

Computer vision can also be viewed as the science and technology of machines that see; in this case, the machine is able to extract from an image the information necessary to solve a task. The extracted image data can take many forms, such as video sequences. Computer vision provides a detailed understanding of an environment, which eases navigation for a robot and supports many other applications.

Applications of Computer Vision
- Navigation systems: e.g. autonomous vehicles (Google driverless cars), mobile robots, automated traffic lights.
- Controlling process systems: e.g. industrial robots used for arranging bottles in a factory, working on motherboard chips, or assembling cars and phones.
- Detecting systems: e.g. people counting in public places, video surveillance, color combination systems.
- Systems for modelling objects or environments: e.g. medical image analysis or topographical modelling, using tools like Unity 3D and others in its category.
- Automatic inspection systems: e.g. in manufacturing applications, where robots monitor and inspect a series of gadgets or items produced over a period of time.

Other Computer Vision Applications
Gesture recognition, face detection, smart offices, character and pattern recognition, robotics, medicine, security, transportation.

CONCEPT OF GENETIC ALGORITHM

A normal genetic algorithm consists of a finite repetition of three steps:
1. Selection of the parent strings
2. Recombination
3. Mutation

WORKFLOW OF GENETIC ALGORITHM

SELECTION STRATEGY COMPARISON IN SOLVING THE TRAVELLING SALESMAN PROBLEM (TSP)

Cross-over and Mutation

SEMANTIC WEB

What is the Semantic Web? The Semantic Web is a collaborative movement led by the international standards body, the World Wide Web Consortium (W3C). Tim Berners-Lee, in his article "The Semantic Web" in Scientific American (2001), explains that the Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation. The Semantic Web basically provides a common framework that allows data to be shared and reused across application, enterprise and community boundaries. We can also say it makes the web smarter.

Why the Development of the Semantic Web? The Semantic Web is being developed to overcome the following problems of the current web:
- Lack of proper structure in the representation of information in current web content.
- Ambiguity of information resulting from poor interconnection of information.
- Lack of automatic information transfer (i.e. no use of agents).
- Inability to deal with the enormous number of users and amount of content while ensuring trust at all levels.
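The three-step loop described under CONCEPT OF GENETIC ALGORITHM (selection of parents, recombination, mutation) can be sketched on a toy task. The example below uses an assumed "OneMax" problem (evolve a bit string of all 1s) rather than the TSP from the slides, and the population size, mutation rate and generation count are arbitrary illustrative choices.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def fitness(bits):
    return sum(bits)  # OneMax: fitness is simply the number of 1s

def select(pop):
    # step 1, selection: tournament of two, the fitter individual wins
    a, b = random.choice(pop), random.choice(pop)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # step 2, recombination: splice two parent strings at a random cut
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(bits, rate=0.05):
    # step 3, mutation: flip each bit with small probability
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(60):  # finite repetition of the three steps
    pop = [mutate(crossover(select(pop), select(pop))) for _ in pop]

best = max(pop, key=fitness)
print(fitness(best))  # close to the optimum of 20
```

For the TSP the same loop applies, but individuals are city orderings, fitness is (negative) tour length, and crossover must be an order-preserving operator; only the problem encoding changes, not the workflow.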
Understanding the Semantic Web
The Semantic Web, as originally envisioned, is a system that enables machines to "understand" and respond to complex human requests based on their meaning. Such an "understanding" requires that the relevant information sources be semantically structured. The Semantic Web involves publishing in languages specifically designed for data, such as:
- Resource Description Framework (RDF)
- Web Ontology Language (OWL)
- Extensible Markup Language (XML)

ONTOLOGY

Another basic component of the Semantic Web is ontology. The term ontology refers to a theory about the nature of existence: about what types of things exist. Ontology enhances the function of the web by providing a simple way to make web searches and queries more accurate.

SPEECH SYNTHESIS

Speech synthesis is the artificial production of human speech. A system used for this purpose is termed a speech synthesizer and can be implemented in software or hardware. Speech synthesis systems are often called text-to-speech (TTS) systems, in reference to their ability to convert text into speech.

Applications for the Blind, Deafened and Vocally Handicapped
One of the most important and useful application fields of speech synthesis is reading and communication aids for the blind. Speech-generating devices (SGDs) have used speech synthesizers to supplement or replace speech for individuals with speech impairments.

Educational Applications
Synthesized speech can be used in educational situations. A computer with a speech synthesizer can teach 24 hours a day, 365 days a year. It can be programmed for special tasks like teaching spelling and pronunciation in different languages. A speech synthesizer can also be used with a word processor to aid proofreading.
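The data languages listed in the Semantic Web section above all build on the same underlying model: information expressed as subject-predicate-object triples. A minimal sketch of that idea follows; the triples and the `query` helper are invented for illustration, and a real system would use an RDF library and SPARQL rather than plain lists.

```python
# a tiny graph of (subject, predicate, object) triples, the RDF data model
triples = [
    ("Siri",    "createdBy", "Apple"),
    ("Cortana", "createdBy", "Microsoft"),
    ("Siri",    "type",      "assistant"),
    ("Cortana", "type",      "assistant"),
]

def query(data, s=None, p=None, o=None):
    # None acts as a wildcard, like a variable in a SPARQL pattern
    return [(ts, tp, to) for ts, tp, to in data
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# "which things are assistants?" - a machine can answer from the structure
print([s for s, _, _ in query(triples, p="type", o="assistant")])
```

Because every fact has explicit, well-defined structure, a program can combine triples from different sources and answer questions no single document states directly, which is the cooperation between computers and people the section describes.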
Applications for Telecommunications and Multimedia
Synthesized speech has been used for decades in all kinds of telephone inquiry systems. A telephone inquiry system is a system that allows interactive information retrieval from an ordinary touch-tone telephone.

Other areas and applications of AI include:
- Expert systems
- Robotics and softbots
- Bioinformatics and precision medicine
- Intelligent multi-agent systems
- Facial recognition systems
- Intelligent tutoring systems
- Gaming
- Nanomedicine
- Search
- Applications in e-commerce and manufacturing

In a noisy environment, try saying: "ICE CREAM" vs "I SCREAM".

Open challenges: causality; creativity; the risk that we could lose control.

More of Fifth-Generation Computing
- The Internet of Things / 5G
- Self-driving cars
- Nanobots / medical diagnostics
- Climate change
- Robots will do the dangerous/odd jobs

Conclusion
AI has only just left its infancy. It has ramifications that yet remain unknown to everyone. More research effort can bring about surprising innovations. There are also results that cannot be foreseen once the computer begins to think for itself. A computer can be used in different ways depending on the user's needs.

"The ignorance of what it feels like to be in a social group creates a certain bias. For as long as a man cannot tell what it feels like to be a woman, not necessarily because of his oblivion to the feelings a woman goes through, but his inability to describe such feelings from experience, and vice versa. A time will come when machines have certain perceptions that humans may not be able to describe from experience, but only machines. This would consequently herald the dawn of social machines, which could lead to man's inability to fully comprehend the reason behind some of the actions executed by machines, thereby leading to social group bias between man and machines (and vice versa), just as it already naturally exists between men and women." (Onuiri E. and Durodola O., 2016)

GENIUS IS 1% INSPIRATION AND 99% PERSPIRATION