Computer Programming (01CE1101) Unit 1 Introduction PDF
Marwadi University
Summary
This document is an introductory lecture on computer programming, covering fundamental concepts such as hardware, software, applications of computers, programming languages, algorithms, flowcharts, and number systems, with a focus on computer architecture.
Full Transcript
Department of CE/IT/AI
Computer Programming (01CE1101)
Unit - 1: Introduction

What is a Computer?
A computer is an electronic device that can:
o accept data,
o store data,
o retrieve the stored data,
o process data as desired and when required, and
o output the result in the desired format.
A computer can be defined as a fast electronic calculating machine that accepts digitized input information (data), processes it as per a list of internally stored instructions, and produces the resulting information.

Hardware
Hardware is the physical parts of a computer that carry out the processing of data.

Software
Computer software is a set of instructions for a computer to perform a specific task.

Characteristics of Computer
A computer is a fast and accurate electronic system designed to accept and store input data, process it, and produce output results using the instructions of a stored program.
Speed: a computer executes one instruction at a time; most instructions are carried out in less than a millionth of a second.
Accuracy: since the circuits in a computer are electronic parts that do not suffer wear and tear, the instructions are carried out without any mistakes.
Vast storage: in a computerized system a very large amount of data can be stored.

Applications of Computer
✓ Entertainment (gaming, movies, etc.)
✓ Communication
✓ Text (word) processing
✓ Image processing
✓ Voice recognition
✓ Numerical and data processing

Block Diagram of Computer
A computer has four main components:
✓ Central Processing Unit
✓ Memory Unit
✓ Input devices
✓ Output devices

Central Processing Unit (CPU)
The CPU acts as the computer's brain and is responsible for the overall working of all components of the computer. It consists of two parts:
✓ Arithmetic and Logic Unit (ALU)
✓ Control Unit (CU)
The ALU performs arithmetic operations and compares information for logical decisions.
The control unit is responsible for sending and receiving control signals to and from all components. Under the control of the CU, data comes from the input device(s) into memory, is processed in the ALU, and the result is stored back in memory. The processed results are displayed with the help of an output device.

Memory Unit
The memory unit of the computer is used to store data, instructions for processing data, intermediate results of processing, and the final processed information. The memory of the computer is of two types:
✓ Primary Memory
✓ Secondary Memory

Primary Memory
Primary memory is faster in speed, smaller in size (normally a few megabytes) and costlier. It consists of:
✓ ROM (Read Only Memory)
✓ RAM (Random Access Memory)

ROM: in ROM, the information is pre-recorded into the ROM chip at manufacturing time. It retains data and instructions even when the computer is turned off. It is the permanent memory of the computer, whose contents cannot be modified by an end user. ROM is called non-volatile memory because it never loses its contents. ROM holds instructions that the computer needs to operate, and stores critical programs such as the program that boots the computer. Types of ROM:
✓ Programmable Read Only Memory (PROM)
✓ Erasable Programmable Read Only Memory (EPROM)
✓ Electrically Erasable Programmable Read Only Memory (EEPROM)

RAM: RAM contains all types of intermediate and temporary data to be used by the CPU. In fact, the CPU can process only the data that is present in RAM; any data present in secondary memory needs to be brought into RAM first, and only then can it be used by the CPU. Since data can be both written to and read from this memory, it is called read/write memory. RAM is volatile, meaning that it loses its contents when the computer is turned off or when there is a power failure.
Secondary Memory
Secondary memory is usually a very large amount of memory (in gigabytes), which is comparatively cheaper and slower than primary memory, but permanent in nature. Thus, anything stored in secondary memory remains available even when the computer is switched off.

Input Devices
Input devices accept data and instructions from the user or from another computer system. The most common input devices are:
✓ Keyboard
✓ Mouse
✓ Scanner
✓ Joystick
✓ Microphone
✓ Optical Character Reader (OCR)

Output Devices
Output devices return processed data to the user or to another computer system. The most commonly used output devices are:
✓ Monitor
✓ Printer
✓ Speakers
✓ Plotters

Languages
Programming is a creative process done by programmers to instruct a computer on how to do a task.

Why a programming language? A programming language is a formal constructed language designed to communicate instructions to a machine, particularly a computer. The operations of a computer are controlled by a set of instructions called a computer program. These instructions are written to tell the computer:
✓ What operation to perform
✓ Where to locate data
✓ How to represent results
✓ When to make certain decisions
Communication between two parties, whether they are machines or human beings, always needs a common language or terminology. The language used in the communication of computer instructions is known as the programming language.

The same program (adding two numbers) written in assembly language, C and Java:

Assembly language:
    .model small
    .data
    opr1 dw 1234h
    opr2 dw 0002h
    result dw ?
    .code
    mov ax,@data
    mov ds,ax
    mov ax,opr1
    mov bx,opr2
    clc
    add ax,bx
    mov di,offset result
    mov [di],ax
    mov ah,09h
    mov dx,offset result
    int 21h
    end

C language:
    void main()
    {
        int opr1, opr2, result;
        opr1 = 4660;
        opr2 = 2;
        result = opr1 + opr2;
    }

Java language:
    class Sum
    {
        public static void main(String arg[])
        {
            int opr1, opr2, result;
            opr1 = 4660;
            opr2 = 2;
            result = opr1 + opr2;
        }
    }

So far as programming languages are concerned, they are of two types:
1) Low-level languages
2) High-level languages

Low-level languages
Low-level languages are the machine-level and assembly-level languages. In machine-level language the computer understands only digital numbers, i.e. 0 and 1, so instructions are given to the computer in the form of binary digits, which are difficult to write. Such programs are not portable, are difficult to maintain, and are error prone.

Assembly language, on the other hand, is a modified version of machine-level language, in which instructions are given as English-like words such as ADD, SUM and MOV. It is easier to write and understand, but it is not understood by the machine, so a translator called an assembler is used to translate it into machine-level language. Although the language is a bit easier, the programmer still has to know the low-level details of the machine. In assembly language, data are stored in the computer's registers, which vary between computers; hence assembly language is not portable either.

High-level languages
These languages are machine independent, which means they are portable. Languages in this category include Pascal, COBOL and FORTRAN. High-level languages are not understood directly by the machine, so they need to be translated into machine-level language by a translator.

Translators
A translator is software which is used to translate a high-level language, as well as a low-level language, into machine-level language. There are three types of translators:
✓ Compiler
✓ Interpreter
✓ Assembler
A compiler and an interpreter are used to convert a high-level language into machine-level language. The program written in the high-level language is known as the source program, and the corresponding machine-level language program is called the object program. Both the compiler and the interpreter perform the same task, but their working is different. A compiler reads the program all at once, searches for errors, and lists them. If the program is error free, it is converted into an object program.
When the program size is large, a compiler is preferred. An interpreter, on the other hand, reads the source code one line at a time and converts it to object code; it checks for errors statement by statement and hence takes more time.

Difference between Compiler and Interpreter

Algorithm
The ordered set of instructions required to solve a problem is known as an algorithm. An algorithm is a finite sequence of instructions, each of which has a clear meaning and can be performed with a finite amount of effort in a finite length of time. No matter what the input values may be, an algorithm terminates after executing a finite number of instructions. We represent an algorithm using a pseudo-language that is a combination of the constructs of a programming language together with informal English statements.

Example 1: Write an algorithm/pseudocode to add two numbers.
Step 1: Start
Step 2: Read the two numbers into a, b
Step 3: c = a + b
Step 4: Write/print c
Step 5: Stop

Example 2: Write an algorithm to find out whether a number is odd or even.
Step 1: Start
Step 2: Input number
Step 3: rem = number mod 2
Step 4: If rem = 0 then print "number even" else print "number odd" endif
Step 5: Stop

The characteristics of a good algorithm are:
✓ Precision – the steps are precisely stated (defined).
✓ Uniqueness – the results of each step are uniquely defined and depend only on the input and the results of the preceding steps.
✓ Finiteness – the algorithm stops after a finite number of instructions are executed.
✓ Input – the algorithm receives input.
✓ Output – the algorithm produces output.
✓ Generality – the algorithm applies to a set of inputs.

Flow Chart
A flow chart is a graphical representation of an algorithm or a portion of an algorithm. Flow charts are drawn using certain special-purpose symbols such as rectangles, diamonds, ovals and small circles. These symbols are connected by arrows called flow lines.
(or) The diagrammatic representation of a way to solve a given problem is called a flow chart. A flowchart is very helpful in writing a program and in explaining the program to others.

Flowchart examples (drawn on the slides):
✓ Draw a flowchart to add two numbers entered by the user.
✓ Problem solving using flowchart and algorithm: finding the largest number.

Number Systems
What are the number systems in a computer? Number systems are just symbolic ways to represent numbers; they are the technique used to represent numbers in the computer system architecture. Every value that you save into or get from computer memory has a defined number system.

Base: the base of a number system (or notation) is the number of symbols that we use to represent the numbers.

Different number systems:
✓ Binary number system
✓ Octal number system
✓ Decimal number system
✓ Hexadecimal (hex) number system

Binary Number System
The binary number system has only two digits: 0 and 1. Every number (value) is represented with 0 and 1 in this number system. The base of the binary number system is 2, because it has only two digits.

Octal Number System
The octal number system has eight (8) digits, from 0 to 7. Every number (value) is represented with 0, 1, 2, 3, 4, 5, 6 and 7 in this number system. The base of the octal number system is 8, because it has only 8 digits.

Decimal Number System
The decimal number system has ten (10) digits, from 0 to 9. Every number (value) is represented with 0 to 9 in this number system. The base of the decimal number system is 10, because it has only 10 digits.

Hexadecimal Number System
The hexadecimal number system has sixteen (16) alphanumeric values, from 0 to 9 and A to F. Every number (value) is represented with 0 to 9 and A to F in this number system. The base of the hexadecimal number system is 16, because it has 16 alphanumeric values.
Here A is 10, B is 11, C is 12, D is 13, E is 14 and F is 15.

Table of the number systems, with their C language representation:

Number system | Base | Digits used                     | Example     | C language assignment
Binary        | 2    | 0,1                             | (11110000)2 | int val = 0b11110000;
Octal         | 8    | 0,1,2,3,4,5,6,7                 | (360)8      | int val = 0360;
Decimal       | 10   | 0,1,2,3,4,5,6,7,8,9             | (240)10     | int val = 240;
Hexadecimal   | 16   | 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F | (F0)16      | int val = 0xF0;

(Note: the 0b binary prefix is a common compiler extension, standardized in C23.)

Number System Conversions
Types of conversion:
✓ Decimal number system to other base system
✓ Other base to decimal number system
✓ Other base to other base

Decimal Number System to Other Base
Converting from the decimal number system to any other base is quite easy; you have to follow just two steps:
Step 1: Repeatedly divide the number by the base of the target system — binary (2), octal (8) or hexadecimal (16) — recording the remainder at each step.
Step 2: Write the remainders, from the first remainder as the Least Significant Bit (LSB) to the last remainder as the Most Significant Bit (MSB).
Examples on the slides: decimal to binary conversion, decimal to octal conversion, decimal to hexadecimal conversion.

Other Base System to Decimal Number System
To convert from any other base system to the decimal number system, you have to follow just three steps:
Step 1: Determine the base of the source number system, and the position of each digit counted from the LSB (first digit's position is 0, second digit's position is 1, and so on).
Step 2: Multiply each digit by the source base raised to the power of that digit's position.
Step 3: Add all the values resulting from step 2.
Examples:
✓ Binary into decimal: (10101)2 = 1×2^4 + 0×2^3 + 1×2^2 + 0×2^1 + 1×2^0 = 21
✓ Octal into decimal: (12570)8 = 1×8^4 + 2×8^3 + 5×8^2 + 7×8^1 + 0×8^0 = 5496
✓ Hexadecimal into decimal: (19FDE)16 = 1×16^4 + 9×16^3 + 15×16^2 + 13×16^1 + 14×16^0 = 106462

Units of Measurement of Data
Bit: the smallest unit of memory or instruction that can be given to or stored on a computer. A bit is either a 0 or a 1.
Byte: a group of 8 bits is called a byte.
Combinations of bytes have various names, such as the kilobyte, megabyte, gigabyte and terabyte.

Bits and Byte Representation

Thank You!!