Introduction to Computer Science Lecture 1 PDF
Document Details
Cairo University
2015
Summary
This document is the first lecture of an Introduction to Computer Science course (INTG005) at Cairo University. It outlines the course content and evaluation methods, introduces basic concepts of data storage and data manipulation, and provides a link to the online course materials.
Full Transcript
Cairo University – Engineering Faculty
Engineering Mathematics & Physics Department
Introduction to Computer Science (INTG005) – Lecture 1
© 2015 Pearson Education Limited

INTG005
Fundamentals of computer science with emphasis on programming and problem solving. The course includes two parts. The first part introduces the forms of data, how they are represented digitally, and how the major hardware components store and operate on such data. The second part introduces computer programming and how software is developed to control and manipulate data. Topics include: introduction to algorithm design and analysis, and computer programming (data types, input and output statements, control structures, functions, arrays, and applications using searching and sorting).

Outline
– Introduction to CS, data storage and manipulation
– Problem solving and the program development process
– C++ elements, syntax, and semantics
– Input and output statements
– Control structures
– User-defined functions: design and use of parameters
– Arrays: storage for large amounts of data

References:
– J. Glenn Brookshear and Dennis Brylow, Computer Science: An Overview, 12th edition, Pearson, 2015.
– Gary J. Bronson, A First Book of C++: Introduction to Programming, 4th edition, Cengage Learning, 2012.

Grading & Evaluation
Semester work (60 pts):
– 10 pts Lab attendance & participation
– 20 pts Quizzes (best 2 of 3 quizzes)
– 10 pts Lab exam
– 20 pts Midterm exam
Final exam (40 pts)

Drive
Lectures and tutorials can be found at:
https://drive.google.com/drive/folders/1H3iT4ET52u8eXIvXOJ9208zATWnjJ19k?usp=sharing

Introduction
Computer science (CS) seeks to build a foundation for topics such as computer design, computer programming, information processing, and algorithmic solutions of problems. Some themes of CS, such as data storage and manipulation, algorithms, and programming, are discussed in this course.

Some Terminology
Algorithm: a set of steps that defines how a task is performed.
Program: a representation of an algorithm.
Programming: the process of developing/encoding a program in a machine-compatible form.
Software: programs and algorithms.
Hardware: equipment/machinery.

Themes of Computer Science
Computing technology affects governments, economics, scientific research, the role of data, communication, and more.
"Seven Big Ideas" that unite computer science: Algorithms, Abstraction, Creativity, Data, Programming, the Internet, and Impact.

Data
Computers can represent any information that can be discretized and digitized. Algorithms process and transform data. Data is driving new discoveries, enabled by massive storage capacities, high-speed networks, etc. We need to explore some questions related to data.

Questions about Data
– How do computers store data about common digital artifacts (numbers, text, images, sounds, and video)?
– How do computers approximate data about analog artifacts in the real world?
– How do computers detect and prevent errors in data?
– What are the ramifications of an ever-growing and interconnected universe of digital data?
Algorithms
CS is the "science of algorithms". It draws from other subjects, including:
– Mathematics
– Engineering
– Psychology
– Business Administration

The central role of algorithms in CS (figure)
Questions about Algorithms (figure)

Programming
Programming is translating human intentions into executable algorithms. Computer hardware is capable of executing only simple/basic algorithmic steps. Abstractions in a programming language allow humans to reason about and encode solutions to complex problems. Abstraction means ignoring internal details and focusing on the design and on how components interact with each other.

Questions about Programming
– How are programs built?
– What kinds of errors can occur in programs?
– How are errors in programs found and repaired?
– What are the effects of errors in modern programs?
– How are programs documented and evaluated?

Data Storage
1. Bits and Their Storage
2. Main Memory
3. Mass Storage
4. Representing Information as Bit Patterns
5. The Binary System
6. Storing Integers
7. Storing Fractions
8. Data and Programming
9. Data Compression
10. Communication Errors

Bits and Bit Patterns
Bit: binary digit (0 or 1). Bit patterns are used to represent information:
– Numbers
– Text characters
– Images
– Sound
– And others

Boolean Operations
Boolean operation: an operation that manipulates one or more true/false values. Specific operations:
– AND
– OR
– XOR (exclusive or)
– NOT

The possible input and output values of the Boolean operations AND, OR, and XOR (figure)

Gates
Gate: a device that computes a Boolean operation.
– Often implemented as a (small) electronic circuit in which the digits 0 and 1 are represented as voltage levels.
– Gates provide the building blocks from which computers are constructed.

Representation of AND, OR, XOR, and NOT gates as well as their input and output values (figure)

Flip-flops
A flip-flop is a fundamental element of computer memory. It is a circuit built from gates that can store one bit.
– One input line is used to set its stored value to 1.
– One input line is used to set its stored value to 0.
– While both input lines are 0, the output does not change, and hence the most recently stored value is preserved.

A simple flip-flop circuit (figure)
Setting the output of a flip-flop to 1 (figure)
Another way of constructing a flip-flop (figure)

VLSI (Very Large-Scale Integration) allows millions of electrical components to be constructed on a chip and is used to create miniature devices containing millions of flip-flops along with their controlling circuitry. These chips are used as building blocks in the construction of computer systems. In some cases, VLSI is used to create an entire computer system on a single chip.
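To make the gate and flip-flop behaviour above concrete, here is a minimal C++ sketch (not part of the slides). It models the Boolean operations as functions and simulates a flip-flop under the assumption of one common gate arrangement (an OR gate feeding an AND gate whose second input is the inverted "reset" line, with the output fed back); only the behaviour described above is guaranteed, not the exact circuit in the textbook figure.

```cpp
#include <iostream>

// Basic Boolean operations, modelled as functions on single bits (0 or 1).
int AND(int a, int b) { return a & b; }
int OR (int a, int b) { return a | b; }
int XOR(int a, int b) { return a ^ b; }
int NOT(int a)        { return a ? 0 : 1; }

// One possible flip-flop arrangement: the stored bit is fed back into an OR
// gate together with the "set" line, and the result is ANDed with the
// inverted "reset" line.  With both inputs 0 the stored bit is preserved.
int flipFlop(int set, int reset, int storedBit) {
    return AND(OR(set, storedBit), NOT(reset));
}

int main() {
    // Truth values of the basic operations for the inputs (1, 0).
    std::cout << AND(1, 0) << OR(1, 0) << XOR(1, 0) << NOT(1) << '\n';  // 0110

    int q = 0;                                        // flip-flop initially storing 0
    q = flipFlop(1, 0, q);  std::cout << q;           // set   -> 1
    q = flipFlop(0, 0, q);  std::cout << q;           // hold  -> 1
    q = flipFlop(0, 1, q);  std::cout << q;           // reset -> 0
    q = flipFlop(0, 0, q);  std::cout << q << '\n';   // hold  -> 0
    return 0;
}
```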
Hexadecimal Notation
Hexadecimal notation:
– A shorthand notation for a long bit pattern (stream).
– Divides a pattern into groups of four bits each.
– Represents each group by a single symbol (0–F).
Example: 10100011 becomes A3 (see the code sketch at the end of this segment).

Main Memory Cells
For the purpose of storing data, a computer contains a large collection of circuits (such as flip-flops), each capable of storing a single bit. This bit reservoir is known as the machine's main memory.
Cell: a unit of main memory (typically 8 bits, which is one byte).
– Most significant bit: the bit at the left (high-order) end of the conceptual row of bits in a memory cell.
– Least significant bit: the bit at the right (low-order) end of the conceptual row of bits in a memory cell.

Main Memory Addresses (figure)

Memory Terminology
Random Access Memory (RAM): memory in which individual cells can be easily accessed in any order.
Dynamic Memory (DRAM): RAM composed of volatile memory.
SDRAM: synchronous DRAM that applies additional techniques to decrease the time needed to retrieve the contents of its memory cells.

Measuring Memory Capacity
The total number of cells is a power of two.
– Kilobyte (KB): 2^10 bytes = 1024 bytes. Example: 3 KB = 3 x 1024 bytes.
– Megabyte (MB): 2^20 bytes = 1,048,576 bytes. Example: 3 MB = 3 x 1,048,576 bytes.
– Gigabyte (GB): 2^30 bytes = 1,073,741,824 bytes. Example: 3 GB = 3 x 1,073,741,824 bytes.
– Terabyte (TB): 2^40 bytes.

Mass Storage
Due to the volatility and limited size of a computer's main memory, most computers have additional memory devices called mass storage (or secondary storage) systems. Such devices include:
– Magnetic disks
– Magnetic tape
– CDs
– DVDs
– Flash drives
– Solid-state disks
Advantages over main memory: less volatility, larger storage capacities, lower cost, and in many cases the media can be removed for archival purposes.

A magnetic disk storage system, e.g. an HDD (figure)

Optical Systems: CD/DVD Storage
Optical discs consist of reflective material covered with a clear protective coating. Information is recorded on them by creating variations in their reflective surfaces, and it can then be retrieved by means of a laser that detects irregularities on the reflective surface of the disc as it spins.
– Compact Disc (CD) capacity: 600–700 MB.
– Multiple layers -> Digital Versatile Disc (DVD) capacity: several GB.

Flash Drives
– No mechanical (slow) motion.
– Flash memory: bits are stored by sending signals directly to the storage medium, causing electrons to be trapped in tiny chambers of silicon dioxide. Data can be held for years without power.
– Excellent for portable, nonvolatile data storage; not sensitive to physical shocks; easy to connect and disconnect.
– Capacity: hundreds of GB => suitable for mass storage (digital cameras and smartphones).
– Repeated erasing slowly damages the media => not suitable for main-memory applications.
– Solid-State Disks (SSDs) are flash-based devices used in the same way as HDDs.
– Secure Digital (SD) cards provide 2 GB of storage in a small chip.
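Before moving on to how information is represented, here is a minimal C++ sketch (not part of the slides) of the hexadecimal shorthand introduced earlier: it splits a bit pattern into groups of four bits and prints one symbol per group, reproducing the 10100011 -> A3 example.

```cpp
#include <iostream>
#include <string>

// Convert a bit pattern (string of '0'/'1', length a multiple of 4) to
// hexadecimal by encoding each group of four bits as a single symbol.
std::string toHex(const std::string& bits) {
    const char symbols[] = "0123456789ABCDEF";
    std::string hex;
    for (std::size_t i = 0; i < bits.size(); i += 4) {
        int value = 0;
        for (std::size_t j = i; j < i + 4; ++j)   // numeric value of this 4-bit group
            value = value * 2 + (bits[j] - '0');
        hex += symbols[value];
    }
    return hex;
}

int main() {
    std::cout << toHex("10100011") << '\n';           // prints A3
    std::cout << toHex("1111111111111111") << '\n';   // prints FFFF (the 16-bit pattern used later)
    return 0;
}
```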
Representing Information
1- Representing Text
Each character (letter, punctuation, etc.) is assigned a unique bit pattern (code).
– ASCII (American Standard Code for Information Interchange): uses patterns of 7 bits to represent most symbols used in written English text. Extended to 8 bits by adding a 0 at the most significant end.
– The International Organization for Standardization (ISO) developed a number of 8-bit extensions to ASCII, each designed to accommodate a major language group. These are still insufficient for all languages and do not allow mixed-language documents.
– Unicode: uses patterns of up to 21 bits to represent the symbols used in languages worldwide; 16 bits cover the world's most commonly used languages.
– Unicode Transformation Format 8-bit (UTF-8): similar to ASCII for basic symbols; UTF-8 can use 24 or 32 bits to represent more symbols, leaving room for future extensions.
– Text editors are programs that read/write files containing simple text encoded in ASCII/Unicode.
– Word processors (e.g. MS Word) add extra data for font, size, and other formatting.

2- Representing Numeric Values
Binary notation: represents a number in base 2. With n bits, we can represent whole (integer) numbers from 0 up to 2^n - 1; e.g. 16 bits can represent numbers from 0 to 65535 (= 1111 1111 1111 1111 = FFFF).
Limitations:
– Overflow: occurs if a value is too big to be represented.
– Truncation: occurs if a value cannot be represented accurately.
Two's complement notation is used to represent negative as well as positive whole numbers. Non-integer (decimal) numbers use floating-point notation.

3- Representing Images
Bitmap techniques: represent an image as a collection of dots (pixels) and encode each pixel.
Pixel (picture element) color:
– Black and white: 1 bit per pixel.
– Gray scale: 8 bits per pixel.
– Red-Green-Blue (RGB) system: 3 bytes (24 bits) per pixel.
– RGB alternative: luminance (brightness) and chrominance: 3 bytes per pixel (1 for luminance and 2 for chrominance, blue and red). Luminance = R + G + B (gray scale).
Bitmaps are not good when re-scaling an image (bigger pixels = grainy appearance).

Vector techniques: represent an image as a set of geometric structures (lines, curves, etc.) using analytic geometry.
– Can be re-sized while maintaining quality.
– Used in scalable fonts (e.g. TrueType fonts in MS Word and Adobe PostScript formats).
– Computer Aided Design (CAD) systems use vector techniques for 3D objects.
– The result can later be converted to a bitmap.

4- Representing Sound
Sampling: sample the amplitude of the sound wave at regular intervals and record the series of values obtained, for instance the series 0, 1.5, 2.0, 1.5, 2.0, 3.0, 4.0, 3.0, 0. A typical sample rate is 8000 samples (numbers) per second. These numeric values are then transmitted over a communication line to the receiving end, where they are used to reproduce the sound. High-fidelity recordings use a higher rate for quality sound (e.g. more than 44,000 samples per second). Each sample requires 16 or 32 bits.

The sound wave represented by the sequence 0, 1.5, 2.0, 1.5, 2.0, 3.0, 4.0, 3.0, 0 (figure)

Alternative encodings:
– Musical Instrument Digital Interface (MIDI): used in music synthesizers and for sounds in video games.
– MIDI stores directions/instructions for reproducing the sound on a synthesizer, so less storage is required, but the sound can differ depending on the type of synthesizer.
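As a rough illustration of the storage implied by the sampling figures above, this small C++ sketch (my own illustration, not from the slides; it assumes single-channel audio) computes how many bytes the quoted sample rates and sample sizes occupy.

```cpp
#include <iostream>

// Bytes needed to store 'seconds' of sound sampled 'rate' times per second,
// with 'bitsPerSample' bits per sample (one channel assumed).
long long soundBytes(long long rate, int bitsPerSample, int seconds) {
    return rate * bitsPerSample / 8 * seconds;
}

int main() {
    // Telephone-quality speech: 8000 samples/second, 16 bits per sample.
    std::cout << soundBytes(8000, 16, 1)   << " bytes per second\n";   // 16000
    // High-fidelity audio, e.g. 44,100 samples/second, 16 bits per sample.
    std::cout << soundBytes(44100, 16, 60) << " bytes per minute\n";   // 5292000
    return 0;
}
```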
The Binary System
The traditional decimal system is based on powers of 10 and uses the digits 0–9. A digit's position determines its quantity (10 ^ position). The binary system is based on powers of 2 and uses the digits 0–1. A digit's position determines its quantity (2 ^ position, starting from 0 for the rightmost digit).

Decoding the binary representation 100101 (figure; 100101 = 32 + 4 + 1 = 37)

Binary Addition
To add two integers represented in binary, we follow the same procedure as in base 10, except that all single-digit sums are computed using the binary facts (0+0=0, 0+1=1, 1+0=1, 1+1=10).
Exercise: perform the binary addition shown on the slide (figure).

Fractions in Binary
We use a "radix point" similar to the decimal point in decimal notation.
– The digits to the left represent the integer (whole) part; each digit's quantity is 2 raised to a positive power (starting from 0).
– The digits to the right represent the fractional part; each digit's quantity is 2 raised to a negative power (starting from -1).
Addition of two binary numbers with fractions works the same way (figure).

Storing Integers
Some notations are compatible with the design of digital circuitry.
Two's complement notation:
– The most popular way to represent integers.
– Uses a fixed number of bits, commonly 32 bits; shorter patterns (e.g. 4 bits) are also used.
– Positive numbers (0, 1, 2, 3, ...): starting with a string of 0s, count upward in binary until the pattern of a single 0 followed by 1s is reached (e.g. 0111...1).
– Negative numbers (-1, -2, -3, ...): starting with a string of 1s, count backward in binary until the pattern of a single 1 followed by 0s is reached (e.g. 1000...0).
– The leftmost bit of a pattern indicates the sign (+/-) of the value; it is often called the "sign bit".

Two's complement notation systems (figure)

To convert back and forth between bit patterns representing positive and negative values of the same magnitude: copy the original pattern from right to left until a 1 has been copied, then complement the remaining bits.

Addition in Two's Complement Notation
Apply the same algorithm used for binary addition, except that all bit patterns, including the answer, have the same length. Any extra bit generated on the left by a final carry must be truncated.
NOTE: addition of any combination of signed numbers can be accomplished using the same algorithm and thus the same circuitry.

Overflow Problem
With 4 bits, only integers from -8 to 7 can be represented. Adding 5 + 4 = 9 falls outside this range (overflow); the computed result will be 5 + 4 = -7 (a wrong result). Overflow can occur only when adding two positive values or two negative values, and it is indicated when the addition of two positive values yields the pattern of a negative value, or when the sum of two negative values appears to be positive. Computers use longer bit patterns (e.g. 32 bits) so that larger values can be manipulated without overflow; with 32 bits, values up to 2,147,483,647 can be represented. If larger values are needed, longer bit patterns can be used, or the units of measure can be changed (e.g. miles instead of inches).
Note: computers can make mistakes, so we must be aware of the dangers involved.
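To make the two's-complement discussion concrete, here is a minimal C++ sketch (not from the slides) that encodes small integers as 4-bit two's-complement patterns, decodes them back, and reproduces the 5 + 4 overflow example in which the 4-bit sum appears as -7.

```cpp
#include <bitset>
#include <iostream>

// Encode an integer in the range -8..7 as a 4-bit two's-complement pattern.
std::bitset<4> encode4(int value) {
    return std::bitset<4>(static_cast<unsigned>(value) & 0xF);
}

// Decode a 4-bit two's-complement pattern back to an integer.
// The leftmost bit is the sign bit; if it is set, the value is negative.
int decode4(std::bitset<4> bits) {
    int value = static_cast<int>(bits.to_ulong());
    return (value >= 8) ? value - 16 : value;
}

int main() {
    std::cout << encode4(5) << ' ' << encode4(-5) << '\n';   // 0101 1011

    // Addition is ordinary binary addition with any carry out of the
    // leftmost bit truncated, so the result is again a 4-bit pattern.
    std::bitset<4> sum = encode4(decode4(encode4(5)) + decode4(encode4(4)));
    std::cout << sum << " decodes to " << decode4(sum) << '\n';  // 1001 decodes to -7 (overflow)
    return 0;
}
```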
Storing Integers: Excess Notation
Excess notation is another means of representing integers; it also uses bit patterns of a fixed length. First write down all the bit patterns of that length in counting order. The first pattern with a 1 as its most significant bit (100...0) represents zero; the following patterns represent 1, 2, 3, ...; and the patterns preceding 100...0 represent -1, -2, -3, ....
The sign bit is reversed compared with two's complement, e.g. in excess 8 the value 5 has the pattern 1101 and -5 has the pattern 0011. Excess notation also suffers from overflow errors.

An excess 8 notation system (figure)
Note: 4 bits are used; the binary interpretation exceeds the excess-notation interpretation by the value 8. The pattern 1000 represents zero.

An excess 4 notation system (figure)
Note: 3 bits are used; the binary interpretation exceeds the excess-notation interpretation by the value 4. The pattern 100 represents zero.

Storing Fractions
Floating-point notation: consists of a sign bit, an exponent field, and a mantissa field.
Note: the mantissa field starts with the leftmost 1 that appears in the binary representation => "normalized form", i.e. the mantissa starts with 1 except for zero.

Floating-point notation examples (figure)

Storing Fractions: Truncation Errors
If the mantissa field is not large enough, the number is represented only approximately, with an error. Approximation errors include truncation errors and round-off errors.
Example: 2 5/8 = 2.625 (decimal) = 10.101 (binary). Sign bit: 0, exponent: 110, mantissa: 10101 (1010 after truncation) => the pattern 01101010. But 01101010 represents 2 1/2, not 2 5/8, so the error is 1/8 = 0.125 (a truncation error).

Using 8 bits: 2 1/2 + 1/8 + 1/8 = 2 1/2 (truncation error = 1/4), but 1/8 + 1/8 + 2 1/2 = 2 3/4 (the correct answer).
– The order in which numbers are added can be important.
– If a very large number is added to a very small number, the small number may be truncated.
– Thus, the general rule for adding multiple values is to add the smaller values together first, in the hope that they accumulate to a value that is significant when added to the larger values.
– In practice, the number of bits used is 32 (single precision, about 7 decimal digits, range about 10^-37 to 10^38) or 64 (double precision, about 15 decimal digits, range about 10^-308 to 10^308).

Data Compression
To reduce the data size when storing or transferring data, "data compression" techniques are required.
– Lossless techniques: preserve the exact information.
– Lossy techniques: lose some information but achieve higher compression ratios; popular for compressing sound and images.
Run-length encoding:
– A lossless technique.
– Suitable if the data consists of long sequences of the same value.
– Example: a data stream of 458 bits (253 1's, 118 0's, 87 1's). Storing "253 1's, 118 0's, 87 1's" is better than storing all 458 bits.
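The run-length idea above can be sketched in a few lines of C++ (my own illustration, not from the slides): instead of storing every bit, we store (count, value) pairs, which reproduces the "253 1's, 118 0's, 87 1's" description of the 458-bit stream.

```cpp
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Run-length encode a bit stream: each run of identical bits becomes a
// (count, bit) pair.
std::vector<std::pair<int, char>> runLengthEncode(const std::string& bits) {
    std::vector<std::pair<int, char>> runs;
    for (char b : bits) {
        if (!runs.empty() && runs.back().second == b)
            ++runs.back().first;          // extend the current run
        else
            runs.push_back({1, b});       // start a new run
    }
    return runs;
}

int main() {
    // The 458-bit stream mentioned above: 253 ones, 118 zeros, 87 ones.
    std::string stream = std::string(253, '1') + std::string(118, '0') + std::string(87, '1');
    for (const auto& run : runLengthEncode(stream))
        std::cout << run.first << " " << run.second << "'s  ";
    std::cout << '\n';                    // prints: 253 1's  118 0's  87 1's
    return 0;
}
```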
Frequency-dependent encoding (Huffman codes):
– A lossless technique.
– The length of the bit pattern used to represent a data item is inversely related to the frequency of the item's use.
– Example: in English, the letters e, t, a, and i are used more frequently than the letters z, q, and x, so space (storage) can be saved by using short bit patterns for e, t, a, and i and longer bit patterns for z, q, and x.
– Shorter representations are obtained compared with uniform (fixed-length) codes.

Relative/differential encoding:
– Used when the stream of data to be compressed consists of units, each of which differs only slightly from the preceding one, e.g. consecutive frames of a motion picture.
– Records the differences between consecutive data units rather than entire units.
– Can be implemented in either lossless or lossy form.

Dictionary encoding:
– The dictionary is a collection of building blocks from which the message is constructed.
– The message/data is encoded as a sequence of references to the dictionary.
– An entire word can be encoded as a single reference to the dictionary rather than as a sequence of individual characters.

Adaptive/dynamic dictionary encoding:
– The dictionary is allowed to change during the encoding process.
– A popular version is Lempel-Ziv-Welch (LZW) encoding.
LZW encoding:
– Start with a dictionary containing the basic building blocks from which the message is constructed.
– As larger units are found, they are added to the dictionary; future occurrences of those units can then be encoded as single, rather than multiple, dictionary references.
– Example: when encoding English text, start with a dictionary containing individual characters, digits, etc. As words are identified, they are added to the dictionary. Thus the dictionary grows as the message is encoded, and as it grows, more words/units in the message can be encoded as single references to the dictionary.
LZW decoding:
– The large dictionary is not needed to decode the message; only the original small dictionary is required. As the decoding process continues, it encounters the same units found during encoding and rebuilds the dictionary.

LZW example: consider the message "xyx xyx xyx xyx".
– Regular encoding: with the dictionary {'x' = 1, 'y' = 2, space = 3}, the message is encoded as 121312131213121.
– LZW encoding: start by encoding single characters ('x' = 1, 'y' = 2, space = 3). Having reached a space, the preceding string of characters forms a word, so the pattern 'xyx' = 4 is added to the dictionary as the fourth entry. The dictionary becomes {'x' = 1, 'y' = 2, space = 3, 'xyx' = 4} and the message is encoded as 121343434.
– LZW decoding: start with the basic dictionary {'x' = 1, 'y' = 2, space = 3} to decode 1213 into "xyx ". Since a space was found, the preceding characters form a new word entry with code 4; add it to the dictionary and continue decoding to get "xyx xyx ", and so on until the whole message "xyx xyx xyx xyx" is recovered.
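Here is a minimal C++ sketch (my own illustration) of the simplified, word-based LZW variant described above, not the general LZW algorithm: it starts from the dictionary {'x' = 1, 'y' = 2, space = 3}, adds each new word to the dictionary the first time a space is reached, and reproduces the encoding 121343434 for the message "xyx xyx xyx xyx".

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Simplified, word-based variant of LZW as described above: single characters
// are in the initial dictionary; a whole word is added to the dictionary the
// first time it is followed by a space (or by the end of the message).
std::vector<int> encode(const std::string& message) {
    std::map<std::string, int> dict = {{"x", 1}, {"y", 2}, {" ", 3}};
    int nextCode = 4;
    std::vector<int> output;
    std::string word;

    auto flushWord = [&](const std::string& w) {
        auto it = dict.find(w);
        if (it != dict.end()) {
            output.push_back(it->second);          // word already known: one code
        } else {
            for (char c : w)                       // unknown word: emit its characters...
                output.push_back(dict[std::string(1, c)]);
            dict[w] = nextCode++;                  // ...then remember it for next time
        }
    };

    for (char c : message) {
        if (c == ' ') {
            flushWord(word);
            word.clear();
            output.push_back(dict[" "]);
        } else {
            word += c;
        }
    }
    if (!word.empty()) flushWord(word);
    return output;
}

int main() {
    for (int code : encode("xyx xyx xyx xyx"))
        std::cout << code;                         // prints 121343434
    std::cout << '\n';
    return 0;
}
```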
Compressing Images
Encoding an image as a bitmap often produces large files, so we need to compress the bitmap.
Graphic Interchange Format (GIF):
– Uses dictionary encoding with 256 entries/colors.
– Reduces pixel colors to only 256 colors (1 byte) instead of full RGB (3 bytes) => a lossy compression technique.
– Adaptive/LZW encoding can be applied for additional compression.
– A "transparent" color can be added to allow the background to show through, which makes GIF good for cartoons.

Compressing Images: JPEG
Joint Photographic Experts Group (JPEG):
– An ISO standard for color images, widely used in the photography industry and in most digital cameras.
– JPEG's baseline standard is the most common. It is designed to exploit the human eye's limitations: the eye is more sensitive to changes in brightness (luminance) than to changes in color (chrominance).
– Chrominance values are averaged over 2x2 pixel squares, reducing the size by a factor of 4: a significant compression without a noticeable loss of image quality.
– The image is divided into 8x8 blocks, and the information in each block is compressed as a unit.
– Other compression techniques can also be applied.
– JPEG's baseline compresses by a factor of 10 to 30 without noticeable loss of quality.

Compressing Images: TIFF
Tagged Image File Format (TIFF):
– A standardized format for storing photographs along with related information such as date, time, and camera settings.
– The image itself is stored as RGB components, with or without compression.
– One variant is designed for compressing images of text documents in facsimile applications, where many white pixels exist; run-length encoding takes advantage of the long strings of white pixels.
– Not widely used in the photography community.

Compressing Audio and Video
The goal is to obtain encodings that allow information to be transmitted over today's communication systems fast enough to provide timely presentation.
Motion Picture Experts Group (MPEG):
– Used in high-definition television broadcasts and video conferencing.
– Encompasses a variety of standards for different applications.
– Video is treated as a sequence of recorded pictures/images. Only some pictures/frames, called I-frames, are encoded in their entirety.
– The pictures between I-frames are encoded using relative encoding techniques.
– I-frames are usually compressed with techniques such as JPEG.

Compressing Audio: MP3
MPEG Layer 3 (MP3):
– The audio compression format within MPEG.
– Achieves significant compression of audio while maintaining near-CD-quality sound by exploiting properties of the human ear, removing details the ear cannot perceive:
– Temporal masking: for a short period after a loud sound, the human ear cannot detect softer sounds that would otherwise be audible, so these sounds can be removed.
– Frequency masking: a sound at one frequency tends to mask softer sounds at nearby frequencies, so these frequencies can be removed.

Communication Errors
The bit pattern retrieved after transmission may not be identical to the original one. Various techniques allow the detection and even the correction of errors; these techniques lie behind the reliability of today's equipment.
Parity bits: transmit each bit pattern with an odd number of 1s; if a pattern arrives with an even number of 1s, an error must have occurred. This is arranged by adding an additional bit, called a "parity bit", to each pattern so that the resulting pattern has an odd number of 1s. Even parity is also used.
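A minimal C++ sketch of the odd-parity scheme just described (not from the slides): the parity bit is chosen so that the extended 9-bit pattern always contains an odd number of 1s, so a pattern that arrives with an even number of 1s signals an error.

```cpp
#include <bitset>
#include <iostream>

// Choose an odd-parity bit for an 8-bit pattern: the extended 9-bit pattern
// must contain an odd number of 1s.
int oddParityBit(std::bitset<8> data) {
    return (data.count() % 2 == 0) ? 1 : 0;
}

// Check a received 9-bit pattern: with odd parity, an even number of 1s
// means at least one bit was changed in transit.
bool hasError(std::bitset<9> received) {
    return received.count() % 2 == 0;
}

int main() {
    std::bitset<8> asciiA("01000001");                   // ASCII code for 'A'
    std::cout << "parity bit: " << oddParityBit(asciiA) << '\n';   // 1 (two 1s so far)

    std::bitset<9> ok("101000001");                      // parity bit prepended, three 1s
    std::bitset<9> bad("101000011");                     // one bit flipped in transit, four 1s
    std::cout << std::boolalpha
              << hasError(ok) << ' ' << hasError(bad) << '\n';     // false true
    return 0;
}
```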
Communication Errors: Parity Bits
Straightforward applications of parity bits fail to detect an even number of errors.
Checkbyte:
– A collection of parity bits for long bit patterns; each bit within the checkbyte is a parity bit associated with a particular collection of bits within the pattern.
Checksums and Cyclic Redundancy Checks (CRC):
– Error detection schemes that use variations of the checkbyte technique.

Error-Correcting Codes
To enhance the reliability of computers, errors should not only be detected but also corrected. We encode symbols such that the difference (Hamming distance) between any two legal patterns is at least 3 (see table). If a single bit is modified in a pattern, the error can be detected because the result will not be a legal pattern; at least 3 bits of any pattern must change before it looks like another legal pattern. When an error is detected, we can also figure out what the original pattern was (i.e. correct the error): a pattern with a single modified bit is a Hamming distance of only one from its original form but at least two from any of the other legal patterns.

To correct an error when decoding a message that was originally encoded using the table, we compare each received pattern with the patterns in the code (table) until we find one that is within a distance of one from the received pattern; we take this to be the correct symbol for decoding. Error-correcting techniques are used extensively to increase the reliability of computing equipment, e.g. CDs and high-capacity magnetic disk drives.
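Returning to the error-correcting codes above, here is a minimal C++ sketch (my own illustration; the code table from the textbook figure is not reproduced, so the three 6-bit patterns below are hypothetical). It measures the Hamming distance between bit patterns and decodes a received pattern by picking the legal code word within distance one, as described.

```cpp
#include <bitset>
#include <iostream>
#include <vector>

// Hamming distance: the number of bit positions in which two patterns differ.
int hammingDistance(std::bitset<6> a, std::bitset<6> b) {
    return static_cast<int>((a ^ b).count());
}

int main() {
    // A hypothetical code in which every pair of code words differs in at
    // least three positions (not the actual table from the textbook).
    std::vector<std::bitset<6>> code = {
        std::bitset<6>("000000"),   // symbol A
        std::bitset<6>("001111"),   // symbol B
        std::bitset<6>("110011")    // symbol C
    };

    std::bitset<6> received("001011");   // one bit of symbol B flipped in transit

    // Decode by choosing the legal pattern within distance one of the received one.
    for (std::size_t i = 0; i < code.size(); ++i)
        if (hammingDistance(received, code[i]) <= 1)
            std::cout << "decoded as symbol " << char('A' + i) << '\n';   // symbol B
    return 0;
}
```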
Data Manipulation
– Computer Architecture
– Machine Language
– Program Execution
– Arithmetic/Logic Instructions
– Communicating with Other Devices
– Programming Data Manipulation
– Other Architectures

Computer Architecture
Central Processing Unit (CPU), or processor:
– Arithmetic/logic unit and control unit.
– Registers for holding data: general-purpose registers and special-purpose registers.
Bus: connects the CPU with memory.
Motherboard: contains the CPU and other components.

CPU and main memory connected via a bus (figure)
Adding two values based on the above design (figure)

The Stored Program Concept
The motherboard provides an arrangement for flexible connections between the CPU, memory, and other devices. A program can be encoded as bit patterns and stored in main memory; from there, the CPU can extract the instructions and execute them. In turn, the program to be executed can be altered easily.

To apply the stored-program concept, the CPU executes an encoded collection of instructions.
Machine instruction: an instruction (or command) encoded as a bit pattern recognizable by the CPU, e.g. ADD, MOVE.
Machine language: the set of all instructions recognized by a machine.

Machine Language Philosophies
Reduced Instruction Set Computing (RISC):
– Few, simple, efficient, less expensive, and fast instructions.
– Examples: PowerPC from Apple/IBM/Motorola, and ARM.
Complex Instruction Set Computing (CISC):
– Copes with increasing software complexity.
– Many, convenient, and powerful instructions.
– Example: Intel.

Machine Instruction Types
1- Data transfer: copy data from one location to another, e.g. LOAD, STORE, ..., in addition to I/O instructions.
2- Arithmetic/logic: use existing bit patterns to compute new bit patterns, e.g. ADD, AND, OR, ....
3- Control: direct the execution of the program, e.g. JUMP.

Adding values stored in memory (figure)
Dividing values stored in memory (figure)
An illustrative machine (figure)
The architecture of the illustrative machine (figure)

Parts of a Machine Instruction
Op-code: specifies which operation to execute.
Operand: gives more detailed information about the operation; the interpretation of the operand varies depending on the op-code.

The composition of an instruction for the machine (figure)
Decoding the instruction 35A7 (figure)
An encoded version of some instructions (figure)

Program Execution
Execution is controlled by two special-purpose registers:
– Program counter: holds the address of the next instruction.
– Instruction register: holds the current instruction.
Machine cycle: fetch, decode, execute.

The machine cycle (figure)
Decoding the instruction B258 (figure)
The program stored in main memory ready for execution (figure)
Performing the fetch step of the machine cycle (figures)

Arithmetic/Logic Operations
Logic: AND, OR, XOR.
– Masking: using AND/OR to select or set particular bits.
– XOR with a pattern of 1s produces the complement.
Rotate and shift:
– Rotate: circular shift (right/left).
– Logical shift (right/left).
– Arithmetic shift: leaves the sign (leftmost) bit unchanged.
– Illustrative machine example: the instruction A501 means "rotate the contents of register 5 to the right by 1 bit."
Arithmetic: add, subtract, multiply, divide.
– The precise action depends on how the values are encoded.
– Two's complement notation: subtraction uses the same circuitry as addition, and multiplication is repeated addition.
– Floating-point notation: first adjust the two numbers to the same exponent, then operate on the mantissas as with integers.
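To illustrate the masking and rotation operations above, here is a small C++ sketch (not from the slides; the illustrative machine itself is not implemented). It masks bits with AND, complements bits with XOR, and rotates an 8-bit register one position to the right, in the spirit of the A501 instruction described above.

```cpp
#include <bitset>
#include <cstdint>
#include <iostream>

// Rotate an 8-bit value to the right by one position: the bit shifted out on
// the right re-enters on the left (a circular shift).
std::uint8_t rotateRight1(std::uint8_t value) {
    return static_cast<std::uint8_t>((value >> 1) | ((value & 1u) << 7));
}

int main() {
    std::uint8_t reg = 0b01101001;

    // Masking with AND keeps only the bits selected by the mask (here the low 4 bits).
    std::uint8_t masked = reg & 0b00001111;
    // XOR with a mask of all 1s complements every bit.
    std::uint8_t complemented = reg ^ 0b11111111;

    std::cout << std::bitset<8>(masked)            << '\n'   // 00001001
              << std::bitset<8>(complemented)      << '\n'   // 10010110
              << std::bitset<8>(rotateRight1(reg)) << '\n';  // 10110100
    return 0;
}
```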
Communicating with Other Devices
How do the CPU and memory communicate with peripheral/external devices (keyboard, screen, mouse, etc.)?
Controller: an intermediary apparatus that handles communication between the computer and a device.
– A controller translates data back and forth between forms compatible with the internal characteristics of the computer and those of the peripheral device to which it is attached.
– Traditionally there were specialized controllers for each type of device.
– More recently, with general-purpose controllers (USB and FireWire), a single controller can handle a variety of devices.

Controllers attached to a machine's bus (figure)

Port: the point/connector at which a device connects to a computer.
Memory-mapped I/O: the CPU communicates with peripheral devices as though they were memory cells/locations.

Direct memory access (DMA): main memory is accessed directly by a controller over the bus. This enhances performance but complicates the design and adds extra traffic on the bus.
Von Neumann bottleneck: insufficient bus speed impedes/delays performance.
Handshaking: a two-way dialog between computer and device; it is the process of coordinating the data transfer and exchanging status information between components.

Communication Media: Two Types of Paths
Parallel communication:
– Several communication paths transfer bits simultaneously (at the same time).
– Rapid but complex.
Serial communication:
– Bits are transferred one after the other over a single communication path.
– Simple and hence popular, e.g. USB and FireWire; high speed over short distances.
– Longer distances use Ethernet connections; greater distances use phone lines (modems) or DSL.

Data Communication Rates
Measurement units:
– bps: bits per second
– Kbps: kilo-bps (1,000 bps)
– Mbps: mega-bps (1,000,000 bps)
– Gbps: giga-bps (1,000,000,000 bps)
Bandwidth: the maximum available rate/capacity for transferring data.
USB 2.0 provides rates of several hundred Mbps (sufficient for most applications).

Programming Data Manipulation
Programming languages shield users from the details of the machine:
– A single Python/C++ statement might map to one, tens, or hundreds of machine instructions.
– The programmer does not need to know whether the processor is RISC or CISC.
– Assigning variables typically involves LOAD, STORE, and MOVE op-codes.

Other Architectures
Technologies to increase throughput:
– Pipelining: overlap the steps of the machine cycle.
– Parallel processing: use multiple processors simultaneously.
  SISD: no parallel processing.
  MIMD: different programs, different data.
  SIMD: same program, different data.

End of Chapter
© 2015 Pearson Education Limited