Intro to Computational Thinking and Problem Solving Unit-1 Part 1 PDF
Summary
This document provides an introduction to computational thinking and problem solving, covering key concepts such as decomposition, abstraction, and algorithms. It also touches on data representation and the use of computational thinking in fields such as artificial intelligence and game development.
# COMPUTATIONAL THINKING AND IT

## Unit-1 Introduction to Computational Thinking and Problem Solving

### COMPUTATIONAL THINKING (CT)

- Computational Thinking is a set of problem-solving methods that involves expressing problems and their solutions in ways that a computer could also execute.
- It helps us to solve problems.
- Computational Thinking is not a programming language.
- It is not even thinking like a computer.
- It simply enables us to work out exactly what to tell the computer to do.

#### There are four key techniques of Computational Thinking:

1. **Decomposition:** Breaking down a complex problem or system into smaller, more manageable parts.
   - E.g., in a game: deciding where to go and how to complete the level as separate sub-problems.
2. **Abstraction:** Focusing on the important information only, ignoring irrelevant details.
   - E.g., the location of the exit matters; the weather does not.
3. **Pattern Recognition:** Looking for similarities among and within problems.
   - E.g., applying knowledge of previous similar problems.
4. **Algorithms:** Developing a step-by-step solution to the problem, or the rules to follow to solve it.
   - E.g., working out a step-by-step plan of action.

### How is Computational Thinking Used?

Computational thinking has many uses, including:

- **Problem-solving:** Computational Thinking helps people to solve complex problems by breaking them down into smaller, more manageable parts.
- **Coding:** Computational thinking is a structured method for identifying problems and developing strategies for solving them.
- **Data analysis:** It is essential for processing and interpreting large amounts of data to make data-driven decisions.

### Artificial Intelligence and Machine Learning (AI & ML)

- Computational thinking is essential in developing AI algorithms and machine learning models.
- It helps researchers and engineers design intelligent systems that can learn and adapt from data.

### Creativity

- Programming is a creative skill that uses computational principles to create search algorithms, apps, websites, and more.

### Robotics and Automation

- Computational thinking plays a crucial role in designing and programming robots and automated systems.
- It enables them to perform specific tasks accurately and efficiently.

### Game Development

- Game developers use computational thinking to design game mechanics, AI behaviors, and interactive elements that enhance the gaming experience.

### Educational Tool

- Computational Thinking is a tool that teaches students critical thinking, logic, and problem-solving skills.

### Career Opportunities

- Computational thinking is a valuable soft skill in many industries and leadership positions.

### Internet of Things (IoT)

- In the IoT domain, computational thinking is used to connect and control smart devices, allowing them to communicate and interact seamlessly.

### INFORMATION AND DATA

- In Computational Thinking, data is raw facts or observations.
- Data can be stored, processed, and shared in different forms, such as binary, plain-text, human-readable, comma-delimited, and self-describing formats.
- Information, in contrast, is data that has been processed and interpreted to give it meaning.
- **Example:** A credit card number is information that can be encoded as data using methods like magnetic strips or bar codes.
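To make the information/data distinction concrete, here is a minimal Python sketch (illustrative only; the card number and function names are made up for this example) that encodes a digit string, the information, as raw bits, the data, and interprets it back:

```python
def digits_to_bits(number: str) -> str:
    """Encode each decimal digit as a 4-bit binary string (the stored data)."""
    return "".join(format(int(d), "04b") for d in number)

def bits_to_digits(bits: str) -> str:
    """Decode 4-bit groups back into digits (the interpreted information)."""
    return "".join(str(int(bits[i:i + 4], 2)) for i in range(0, len(bits), 4))

card_number = "4111111111111111"   # hypothetical example number
encoded = digits_to_bits(card_number)
print(encoded)                     # raw bit string: data
print(bits_to_digits(encoded))     # interpreted back: information
```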
### CONVERTING INFORMATION INTO DATA

#### Here are some ways to convert information into data:

1. **Digitization:** The process of converting analog information into digital bits, which computers and other devices can process.
2. **Data Conversion:** Transforms data from one format to another to make it compatible with different systems, platforms, or software applications.
3. **Data Mapping:** A common data transformation that changes one data input into its equivalent in another format.
4. **Data Formatting:** Includes changes such as converting data into a particular data type, changing the structure of the data set, or creating the right data model.
5. **Data Validation:** Checks the accuracy, quality, and authenticity of data, typically before moving or migrating it.
6. **Data Extraction:** Obtains information from various sources and transforms it into a structured format.
7. **Aggregation:** Summarizes data to create a new view of it, reducing its size and complexity.

#### NOTE: Data can be categorized into two main types:

1. **Continuous data:** This data spans an infinite range of potential values.
2. **Discrete data:** This data is confined to a finite set of options.

### DATA TYPES

- In this topic, we gain an understanding of how particular kinds of data are represented and explore some of the internal mechanisms of a computer system.
- We commence by examining how numbers, text, colors, images, and audio can be represented in digital form.

#### 1. Number Systems:

- Number systems are methods used to represent numbers in written form.
- Example: the number 5 can be written in different numeral systems: tally marks, the Roman numeral system, the decimal system, and the binary system.

##### Positional Numeral System:

- A positional numeral system must first specify a base, also known as the radix, that describes how many digits exist in that particular system.
- For example, the decimal system has a base of 10 and uses the digits 0 to 9.
- The smallest digit in any positional system is always zero, while the largest is one less than the base.
- The two most common numeral systems are decimal and binary, with binary being widely used in computing because it uses only two digits, 0 and 1.
- Any positive integer greater than one can serve as the base of a positional system.
- However, only a few, such as the binary, octal, decimal, and hexadecimal systems, are commonly used, as shown in the table below.

| NAME        | BASE | DIGITS                                         |
| ----------- | ---- | ---------------------------------------------- |
| Binary      | 2    | 0, 1                                           |
| Octal       | 8    | 0, 1, 2, 3, 4, 5, 6, 7                         |
| Decimal     | 10   | 0, 1, 2, 3, 4, 5, 6, 7, 8, 9                   |
| Hexadecimal | 16   | 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F |

- **The decimal number 925 can be expressed as (9×10^2) + (2×10^1) + (5×10^0).**

| DIGIT | POSITION | POWER OF BASE | MEANING  |
| ----- | -------- | ------------- | -------- |
| 9     | 2        | 10^2          | 9 × 10^2 |
| 2     | 1        | 10^1          | 2 × 10^1 |
| 5     | 0        | 10^0          | 5 × 10^0 |

- Common powers of ten are listed below:

| NAME     | VALUE  |
| -------- | ------ |
| Ten      | 10^1   |
| Hundred  | 10^2   |
| Thousand | 10^3   |
| Million  | 10^6   |
| Billion  | 10^9   |
| Trillion | 10^12  |
| Googol   | 10^100 |

- In conclusion, a positional system relies on a chosen base and on digit positions to represent numbers, with decimal and binary being the most commonly used systems.
- Subscript notation helps clarify the base, and the choice of base determines the value of the number.
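As a small illustration of positional notation, this Python sketch expands an integer digit by digit in a given base and checks the result against Python's built-in conversion (a minimal sketch; the function name is ours):

```python
def positional_value(digits: str, base: int) -> int:
    """Sum digit * base**position, where position counts from the right."""
    total = 0
    for position, digit in enumerate(reversed(digits)):
        total += int(digit, base) * base ** position
    return total

print(positional_value("925", 10))  # 9*10^2 + 2*10^1 + 5*10^0 = 925
print(positional_value("1101", 2))  # 1*8 + 1*4 + 0*2 + 1*1 = 13
print(int("1101", 2))               # built-in check: 13
```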
#### Integers as Binary Bit Strings

- Computing systems represent integers as binary bit strings.
- The binary system is well suited to computers because it has only two values, so a single bit can store one binary digit efficiently.
- Because bit patterns map directly onto binary numerals, storing integers is straightforward.
- Bit patterns for selected values:

| BIT STRING | DECIMAL VALUE |
| ---------- | ------------- |
| 00000000   | 0             |
| 00000001   | 1             |
| 00000010   | 2             |
| 00000011   | 3             |
| 00000100   | 4             |
| 11111110   | 254           |
| 11111111   | 255           |

- An 8-bit string can represent only 256 numbers, since it can exhibit only 2^8 unique patterns.
- This implies that the largest number that can be encoded as an 8-bit binary number is 2^8 − 1, or 255.
- More generally, a binary bit string of length N can encode only the numbers 0 through 2^N − 1.

#### Real Numbers as Binary Bit Strings

- Real numbers, such as 2.31, 2.125, or 3.1415926, can be represented using binary bit strings by extending the positional numeral system to the right of the decimal point, so that positions start at −1 and decrease with distance from the point.
- For example, in base 10, the value 1.625 means (1 × 10^0) + (6 × 10^-1) + (2 × 10^-2) + (5 × 10^-3).
- The same idea works in base 2, given that we can determine the location of the point.
- For example, the meaning of 1.101₂:
  - 1.101₂ = 1×2^0 + 1×2^-1 + 0×2^-2 + 1×2^-3
  - = 1 + 1/2 + 0/4 + 1/8
  - = 1.625

#### Precision as a Source of Error

- One difficulty that arises when encoding real numbers is that an arbitrary number of digits may be needed to the right of the point, even for small numbers.
- For example, writing 1/3 as a decimal produces an infinite sequence of 3s: 0.333333333333333333…
- As a result, most real numbers cannot be encoded exactly by the scheme discussed above, leading to the possibility of rounding error in computer applications.
- Precision measures the accuracy of a stored quantity and is commonly indicated by the number of available bits.
- If a computer uses 16 bits to store real numbers, it is said to have 16-bit precision.

#### Underflow and Overflow as Sources of Error

- These are another kind of error in computing systems.
- Recall that an 8-bit binary string can only hold values between 0 and 255.
- Overflow occurs when a computation produces a value that exceeds the capacity of the available bits.
- For example, if a computer tries to add 1 to the value 255 and store the result as an 8-bit binary string, 256 cannot be represented in 8 bits; depending on the computing system, the stored result may wrap around to 0.
- Underflow occurs when a computation produces a value that is too small in magnitude (i.e., very close to zero) to be encoded by the number of bits available.
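Both sources of error are easy to demonstrate in Python (a minimal sketch; we simulate 8-bit storage with a bit mask, since Python's own integers do not overflow):

```python
# Rounding error: 0.1 has no exact binary representation,
# so the stored sum is only approximately 0.3.
print(0.1 + 0.2)           # 0.30000000000000004
print(0.1 + 0.2 == 0.3)    # False

# Overflow: simulate an 8-bit register by keeping only the low 8 bits.
def store_8bit(value: int) -> int:
    """Model an 8-bit store: discard everything above bit 7."""
    return value & 0xFF

print(store_8bit(255))                     # 255 -> fits
print(store_8bit(255 + 1))                 # 256 wraps around to 0
print(format(store_8bit(255 + 1), "08b"))  # bit string: 00000000
```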
#### 2. TEXT

- All data that is stored in a computing system is encoded as bit strings.
- Even the words that you are now reading are stored in binary form, and this information is then presented to you as text.
- The text you are reading even uses different fonts, with different sizes and weights.
- First, it's important to understand that a textual character, when drawn either on a page of text or on a computer screen, is really just a picture.
- The graphical nature of text can be seen by drawing the letter Q in different font families: each picture of the letter Q is different.

#### Textual characters are usually encoded as integer values, using the encoding schemes discussed earlier.

- Each number is arbitrarily associated with the image that should be used when the character is drawn on a page or shown on a computer screen.
- The associations between numbers and text characters are known collectively as a character encoding scheme.
- **ASCII** is a widely used encoding scheme for English text, in which specific numbers are assigned to English characters, such as uppercase A (number 65) and lowercase a (number 97).
- The ASCII table includes both printable characters, which represent visible symbols, and nonprintable characters, which are commands that text editors or processors follow but that are not displayed as visible symbols.
- Different fonts or typefaces can be applied to present the textual characters, leading to diverse visual representations of the same character.
- Converting binary data into readable text therefore involves a character encoding scheme, to associate numbers with their corresponding text characters, and a font, to determine how those characters appear visually.
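Python's built-in `ord` and `chr` functions expose exactly this number-to-character association (a minimal sketch using the ASCII values quoted above):

```python
# Characters to their ASCII code points and back.
print(ord("A"))   # 65
print(ord("a"))   # 97
print(chr(66))    # 'B'

# A short text as the integer values, and then the bits, that get stored.
message = "Hi"
codes = [ord(c) for c in message]
print(codes)                                     # [72, 105]
print("".join(format(n, "08b") for n in codes))  # 0100100001101001
```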
#### 3. COLORS

- The human visual system perceives color through three biological photosensors called cones, which are sensitive to specific wavelengths corresponding to red, green, and blue light.
- The combined responses of these cones create the perception of a single color, suggesting that color can be understood as a three-dimensional concept.
- In image-processing systems, the RGB color model is widely employed for color representation.
- This model uses red, green, and blue as primary colors, allowing any color to be generated by adjusting the intensity of these three primaries.
- An analogy using flashlights illustrates the idea: different colors can be projected onto a white wall by controlling the intensities of three flashlights emitting red, green, and blue light.
- **In computing systems, colors are typically represented using three integer values, one per primary, each ranging from 0 to 255.**
- **Since each value requires 8 bits to encode, a single color is represented as a 24-bit string comprising the three values.**
- The table below shows how three colors are encoded as 24-bit strings in a computing system, along with their corresponding decimal values.

| BITS                     | COLOR  | DECIMAL (Red, Green, Blue) |
| ------------------------ | ------ | -------------------------- |
| 111111110000000000000000 | Red    | (255, 0, 0)                |
| 000000001111111100000000 | Green  | (0, 255, 0)                |
| 111111111111111100000000 | Yellow | (255, 255, 0)              |

#### 4. Pictures

- The most common encoding of a digital image is a two-dimensional grid of colors.
- Each element of the grid is known as a pixel.
- Each pixel represents a small rectangular region of the image that consists of a single color.
- To store an image, it is converted into a sequence of pixels, and each pixel is associated with a 24-bit string that represents its color.
- The total number of bits required to encode a digital image depends on the number of pixels; additional bits, known as the header, store essential information such as the image's width and height.
- High-definition video frames have dimensions of up to 1920 columns by 1080 rows, resulting in 2,073,600 pixels.
- Encoding such an image requires 49,766,400 bits, equal to about 6 megabytes.
- Digital cameras can capture images of 10 megapixels or more, producing image sizes of around 30 megabytes.
- **For images with limited color variation, such as black-and-white or grayscale images, fewer bits per pixel are employed.**
- Grayscale images can be represented using 8 bits per pixel, while black-and-white images require only a single bit per pixel, since they contain only two colors.
- The bits used for image encoding are arranged linearly in memory, not in the two-dimensional structure seen in the image.
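The 24-bit color encoding and the pixel-count arithmetic above can both be reproduced in a short Python sketch (the function name is ours, for illustration):

```python
def rgb_to_bits(red: int, green: int, blue: int) -> str:
    """Pack three 0-255 intensities into one 24-bit string."""
    return format(red, "08b") + format(green, "08b") + format(blue, "08b")

print(rgb_to_bits(255, 0, 0))    # red    -> 111111110000000000000000
print(rgb_to_bits(255, 255, 0))  # yellow -> 111111111111111100000000

# Storage needed for one uncompressed HD frame at 24 bits per pixel.
columns, rows = 1920, 1080
pixels = columns * rows          # 2,073,600 pixels
bits = pixels * 24               # 49,766,400 bits
print(pixels, bits, bits // 8 / 1_000_000, "MB")  # about 6.2 MB
```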