Artificial Intelligence - Mega-Trend 4 (PDF)
Summary
This document provides an overview of artificial intelligence (AI). It explains the difference between regular software code and AI's probabilistic code, highlighting the concept of machine learning and providing examples. It mentions the potential of AI, including its applications in language processing and other fields.
Full Transcript
**MEGA-TREND 4: ARTIFICIAL INTELLIGENCE**

This is the scary one, right? With apocalyptic film franchises like 'The Matrix' or 'The Terminator', it's no surprise that many of us see 'Artificial Intelligence' as a kind of Pandora's Box that can only end in the destruction of the human race. In this video, let's take the red pill and see what AI actually IS.

First and foremost, the term 'artificial intelligence' is an unfortunate one - in our minds, it evokes images of some sort of living, intelligent artificial being, with its own thoughts, character, and consciousness. Perhaps one day we'll manage to create such artificial beings, but for now, no matter how 'lifelike' AI seems to be, there is absolutely no trace of any consciousness, thought, or 'life' in any AI -- just to get *this* out of the way first.

So what *IS* AI made of? Simply put -- *software code*, the same kind that your office suite or adventure games are based on. Code that a human programmer has written. In many ways, AI is like *every* other software application. The big difference is this: regular software code is simply a series of steps, or commands, which are executed one after another, in a very clear and predictable manner. The code might be complex, but it's always understandable and predictable -- you'll *always* be able to know WHY the software arrived at the result it did. We call such code "deterministic", which means that it will always follow the exact same set of steps and produce the exact same output for a given input. Think of a calculator - if your input was "four plus five", you, and every single accountant on this planet, would be very frustrated if the calculator's output was not a predictable "9", *every single time*.

In contrast, the type of software code which powers artificial intelligence is not deterministic, but PROBABILISTIC.
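That contrast can be sketched in a few lines of Python. This is a hypothetical illustration, not anything from the video: the weather function, its outcomes, and its 70/30 probabilities are all invented for the sketch.

```python
import random

# Deterministic: the same input always produces the same output,
# following the exact same steps every time -- like a calculator.
def calculator_add(a, b):
    return a + b

# Probabilistic: the output is *sampled* from a probability distribution,
# so two identical calls may return different results.
def weather_guess():
    return random.choices(["sun", "rain"], weights=[0.7, 0.3])[0]

print(calculator_add(4, 5))   # 9, every single time
print(weather_guess())        # "sun" about 70% of the time, "rain" about 30%
```

Nothing about the probabilistic function is "intelligent" -- it simply reports an estimate instead of a certainty, which is the key distinction the transcript draws.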
This means that unlike traditional software, artificial intelligence provides only ESTIMATES -- it predicts only the PROBABILITY of various outcomes rather than offering absolute certainties.

How does AI do this? It also uses software code, but this code is different -- we call this type of code 'Machine Learning'. While traditional computers must rely on human programmers to give them **very specific** instructions on how they should complete a task, Machine Learning enables computers to learn AUTONOMOUSLY, and then make decisions on how to complete a task **based only on what they themselves have learned.**

Let's take a simple example. We'll use machine learning to have an Artificial Intelligence mimic the way the English language works. To do this, we give the AI access to millions upon millions of different English texts, gathered from print media, celebrity magazines, and the internet. It analyzes these texts and learns every recurring pattern it stumbles upon. For example, it learns that almost every time the words "salt and..." appear, they are immediately followed by the word "pepper". Another pattern it learns is that the word sequence "rock and" is often followed by the word "roll". That the words "trial and" are nearly always followed by "error", the sequence "law and" is followed by "order", and so on. Oddly, it also learns that the words "Brad and" are followed by "Angelina" only half the time... the other half of the time, they're followed by "Jennifer". A most curious pattern.

Our AI then feeds all these patterns into its machine learning algorithm. This algorithm creates something called a '**model**' -- a mathematical representation of all the patterns it has learned. And NOW, our AI can use its new model to correctly and creatively interact with **new and unknown data and patterns**. For example, when it is asked to continue a text and the lead-in is "Salt and", it will add "pepper" based on what its model has learned.
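The learn-then-predict loop just described can be sketched as a toy word-pair model in Python. The six-phrase "corpus" is a hypothetical stand-in for those millions of texts, and the counting scheme is a deliberately minimal illustration of the idea -- real language models are vastly larger and more sophisticated, but the principle is the same.

```python
import random
from collections import Counter, defaultdict

# A tiny stand-in for "millions upon millions of English texts".
corpus = [
    "salt and pepper", "rock and roll", "trial and error",
    "law and order", "Brad and Angelina", "Brad and Jennifer",
]

# Learning step: count which word follows each two-word lead-in.
counts = defaultdict(Counter)
for text in corpus:
    first, second, third = text.split()
    counts[f"{first} {second}"][third] += 1

# The "model": the learned continuation probabilities for each lead-in.
model = {
    prefix: {word: n / sum(c.values()) for word, n in c.items()}
    for prefix, c in counts.items()
}

# Using the model: sample a continuation according to its probabilities.
def continue_text(prefix):
    options = model[prefix]
    return random.choices(list(options), weights=list(options.values()))[0]

print(model["salt and"])          # {'pepper': 1.0}
print(continue_text("salt and"))  # always "pepper"
print(continue_text("Brad and"))  # "Angelina" or "Jennifer" -- unpredictable
```

Note that no line of this code says "after 'salt and', write 'pepper'" -- the rule emerges purely from the counts. And because "Brad and" has two equally likely continuations, the model's answer there genuinely cannot be predicted in advance.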
But if it's asked to continue a text that begins with "Brad and", then it MIGHT add "Angelina". Or it MIGHT add "Jennifer" instead -- we can't really predict what it will choose.

How is this different from regular computer code that a programmer has created? Well, in regular computer code, the programmer would have to specifically create instructions so that each time someone types "Salt and", the code adds the word "pepper". In contrast, AI adds "pepper" by itself -- without any explicit instruction from the programmer -- simply because it has learned that the probability of such a correlation is high.

Using these Language Models has tremendous advantages in many situations, especially when the number of possible choices or answers would completely overwhelm regular computer code. If a problem offers only a few possible choices, a human programmer is able to write code which will pick the best solution for the task at hand. But when there are hundreds or thousands of possible choices to be made in a situation (like in a human conversation, or in navigating a busy street), only a Language Model offers the power and flexibility to choose the best answer, based on mathematical probability and what it has learned.

Let's look at some of the things that AI can currently help us with using these models: The most obvious (and impressive) use case has to do with language itself. AI finally created a bridge between human language and stored digital information, and now acts as a powerful and efficient interface between the two. Digital assistants, chatbots, and customer interaction systems can communicate with humans and offer help, information, or guidance. AI can search documents, summarize texts, or even create original content. In medicine, disease diagnostics can be vastly improved thanks to AI's pattern recognition capabilities.
In agriculture, farmers are utilizing AI technology to optimize their crop management practices; transportation and logistics companies use AI to optimize routes and storage. Cybersecurity in the enterprise sector can adapt quickly to new and unknown threats. Retail, business processes, health care, city planning... Any industry which creates lots of data -- and these days it's almost *every* industry -- can benefit from AI's powerful pattern analysis.

But what about this "AI can only predict the PROBABILITY of various outcomes rather than offering absolute certainties" thing we talked about before? I *WANT* my calculator to be perfectly correct, and my medical diagnosis to be absolutely certain! Well, for all the amazing things AI can do, it also has a darker side, or at least one we should be aware of.

You might have heard that AI sometimes 'hallucinates'... What's up with that? An AI hallucination happens when the AI generates false, misleading, or illogical information, but presents it as if it were fact. Hallucinations are caused by limitations and/or biases in the training data and the machine learning algorithms, and they can potentially result in content that is not just wrong but harmful.

Remember how we said that Artificial Intelligence is not conscious, or even actually intelligent? We have to remember that it only APPEARS to be so. AI uses so-called Large Language Models to interact with us and make it seem like it understands us, but although these models are designed to produce fluent and coherent text, they have **no actual understanding** of the underlying reality they are describing. All they do is predict what the next word will be, based on cold mathematical probability, not factual accuracy. They're a bit like... well-trained parrots.
Consider a real-world example: if an AI is trained on data with a consistently skewed reference to, let's say, the gender of certain professions (using sentences like "The doctor promised ***he*** would visit me tomorrow" or "The patient saw the nurse and thanked ***her***."), then, because the AI has only ever seen the word 'doctor' in the same sentence as 'he', it might conclude, with full confidence, that becoming a doctor is not a viable career choice for young women. So - feed the AI complete, unbiased datasets and you'll get objective recommendations. Feed the AI incomplete, racially skewed data and you'll get RACIST decisions. Or decisions that don't take the emotional well-being of humans sufficiently into account.

What about job security? Generative AI -- meaning the type of AI which can generate original content in any medium, whether it's text, art, or music -- is already having a negative impact on the creative industry, with artists, authors, and musicians reporting significant financial losses.

This all IS an undeniable risk of AI. It's what every dystopian sci-fi movie with 'evil machine overlords' is based on. But the response can't be to simply "ban AI". In all of human history, there has NEVER been a revolutionary-but-risky technology that people have just 'put back in the box'. There are ALWAYS enough people who value the potential BENEFIT of an invention over its potential RISK: and for better or worse, THEY will continue to develop it -- no MATTER what other groups may say or decide. And despite all the risks, AI has already proven itself far too useful to put back in that box.

So we'll have to find a middle way -- we can reap the benefits, but at the same time we have to try to regulate -- legally, if necessary -- which areas we want AI to shape and influence.
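Before we get to who should do that regulating: the doctor/nurse skew described a few paragraphs back is easy to make concrete with the same kind of toy counting used earlier. The three-sentence "training set" below is hypothetical and deliberately skewed; the point is that the bias is measurable in the data itself, which is exactly what auditing tools look for.

```python
from collections import Counter

# A hypothetical, deliberately skewed training set:
# "doctor" only ever co-occurs with "he".
skewed_corpus = [
    "the doctor promised he would visit",
    "the doctor said he was busy",
    "the patient thanked the nurse and she smiled",
]

# Count which pronouns appear in sentences that mention "doctor".
pronouns = {"he", "she"}
doctor_pronouns = Counter()
for sentence in skewed_corpus:
    words = sentence.split()
    if "doctor" in words:
        doctor_pronouns.update(w for w in words if w in pronouns)

total = sum(doctor_pronouns.values())
probs = {p: n / total for p, n in doctor_pronouns.items()}
print(probs)  # {'he': 1.0} -- the data encodes a 100% gender skew
```

A model trained on this data would reproduce the skew with full statistical "confidence" -- which is why checking the training data, not just the model's answers, matters.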
People who SEE the risks of AI -- more than anyone else -- NEED to be actively involved in developing the LAWS, the ethical GUIDELINES, the TRANSPARENCY requirements, the individual ACCOUNTABILITY and - above all -- the TOOLS and METHODS that ensure that the DATA which future artificial intelligence learns from is DIVERSE, REPRESENTATIVE, and, not least, anthropologically BENEVOLENT.