IT 110 Lecture Notes PDF
Summary
These notes cover fundamental computer architecture concepts: digital representation and number systems, signed binary arithmetic, the Little Man Computer, instruction set architectures, CISC and RISC, pipelining, memory and cache, input/output, secondary storage and RAID, and networking basics.
IT 110 Lecture 1

What is a system?
A system is a collection of components linked together and organized in such a way as to be recognizable as a single unit.

What is an architecture?
The fundamental properties, and the patterns of relationships, connections, constraints, and linkages among the components and between the system and its environment, are known collectively as the architecture of the system.

Elements of an information system architecture
o Hardware
o Software
o Data
o People
o Networks

Abstraction of hardware as a programming language
o Input/output
o Arithmetic, logic, and assignment
o Selection, conditional branching (if-then-else, if-goto)
o Looping, unconditional branching (while, for, repeat-until, goto)

IT 110 Lecture 2

Base 10 counting
o Ten one-digit numbers (0-9).
o To expand beyond one digit, add a position on the left, representing the next power of ten.
o Each position represents a power of ten (a positional number system).
o For example: 315,826 = 3 x 10^5 + 1 x 10^4 + 5 x 10^3 + 8 x 10^2 + 2 x 10^1 + 6 x 10^0

Base 2 counting
o Two one-digit numbers (0-1).
o To expand beyond one digit, add a position on the left, representing the next power of two.

  Binary   Decimal
  0        0
  1        1
  10       2
  11       3
  100      4
  101      5
  110      6
  111      7

o For example: (101)2 = 1 x 2^2 + 0 x 2^1 + 1 x 2^0 = 5
o Leading zeros are insignificant, but often written to indicate the number of bits in a quantity. For example: 0110 = 110.

Converting to and from binary

Base 10 to base 2 conversion: repeated division with remainders.
Example: Convert (92)10 to binary.

  92 / 2 = 46  remainder 0
  46 / 2 = 23  remainder 0
  23 / 2 = 11  remainder 1
  11 / 2 =  5  remainder 1
   5 / 2 =  2  remainder 1
   2 / 2 =  1  remainder 0
   1 / 2 =  0  remainder 1

Reading the remainders from the last to the first gives the answer: (1011100)2.

Base 2 to base 10 conversion: repeated multiplication and addition.
Example: Convert (1011100)2 to decimal.

  1 x 2^6 + 0 x 2^5 + 1 x 2^4 + 1 x 2^3 + 1 x 2^2 + 0 x 2^1 + 0 x 2^0
  = 64 + 0 + 16 + 8 + 4 + 0 + 0 = (92)10

Binary is cumbersome
o Long strings of 1's and 0's are hard to read. Group the bits into sets of 3 (octal) or 4 (hexadecimal).

  Base 10   Base 2   Base 8   Base 16
  0         0        0        0
  1         1        1        1
  2         10       2        2
  3         11       3        3
  4         100      4        4
  5         101      5        5
  6         110      6        6
  7         111      7        7
  8         1000     10       8
  9         1001     11       9
  10        1010     12       A
  11        1011     13       B
  12        1100     14       C
  13        1101     15       D
  14        1110     16       E
  15        1111     17       F

Example: Rewrite (110111100101)2 as octal and hexadecimal.
o Group by 3: 110 111 100 101 = (6745)8
o Group by 4: 1101 1110 0101 = (DE5)16

IT 110 Lecture 3

Binary subtraction: important notes
o Always use zero fill to keep your work consistent and your calculations accurate.
o To convert between binary and octal, group the bits in sets of three.
o To convert between binary and hexadecimal, group the bits in sets of four.
o In the binary operations, always work with 8 bits.

*** According to Dr. Ali, there will be no exam question on the following items: ***

1- Signed magnitude:
   a. Write both operands as positive bit patterns.
   b. To negate a number, set the most significant bit to "1", which acts as a negative sign.
   c. Adding the two patterns directly does not give the correct answer for signed subtraction; signed-magnitude numbers cannot be subtracted by simple addition.

2- 1's complement:
   Note that the most significant (leftmost) bit indicates a negative number if it is "1" and a positive number if it is "0".
   a. Flip the bits of the negative number: swap the ones and zeros.
   b. Write the positive operand underneath the negated operand.
   c. Add the two patterns together. If there is a carry out (overflow), add it back into the result (end-around carry); that gives the answer in 1's complement form.

3- 2's complement:
   Find the 2's complement of the negative number (see the sketch at the end of this lecture):
   a. Going from right to left, keep every bit up to and including the first "1", then invert every bit after it.
   b. Write the positive operand underneath the negated operand.
   c. Add the two patterns together. If there is a carry out (overflow), ignore (truncate) it.
   d. If the most significant bit of the result (the 8th bit, not the carry out) is "1", the result is still in negative (2's complement) form; to read its magnitude, take the 2's complement again: going from right to left, invert every bit after the first "1".
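The conversion and complement procedures above can be checked mechanically. Below is a minimal illustrative sketch in Python (Python is not part of the lecture; the function names and example values are the editor's own) mirroring the repeated-division method, the repeated multiplication-and-addition method, and 8-bit 2's-complement arithmetic.

    # Illustrative sketch only, not from the lecture: base conversion and
    # 8-bit two's-complement arithmetic, mirroring the hand methods above.

    def to_binary(n):
        # Decimal -> binary string by repeated division, collecting remainders.
        if n == 0:
            return "0"
        bits = ""
        while n > 0:
            bits = str(n % 2) + bits      # each remainder becomes the next bit, right to left
            n //= 2
        return bits

    def to_decimal(bits):
        # Binary string -> decimal by repeated multiplication and addition.
        value = 0
        for b in bits:
            value = value * 2 + int(b)    # shift left one position, then add the new bit
        return value

    def twos_complement(bits, width=8):
        # Negate an 8-bit pattern: invert every bit, then add 1 (equivalent to the
        # right-to-left rule above), keeping only 'width' bits.
        inverted = "".join("1" if b == "0" else "0" for b in bits.zfill(width))
        return to_binary((to_decimal(inverted) + 1) % (2 ** width)).zfill(width)

    print(to_binary(92))                     # 1011100
    print(to_decimal("1011100"))             # 92
    print(twos_complement(to_binary(45)))    # 11010011, the 8-bit pattern for -45

    # 92 - 45 computed as 92 + (-45): add the patterns, discard the carry out of bit 7.
    total = (92 + to_decimal(twos_complement(to_binary(45)))) % 256
    print(to_binary(total).zfill(8))         # 00101111 = 47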
IT 110 Lecture 4

*** It is important to know these codes and their mnemonics. ***

  Mnemonic   Code   Description
  LDA        5XX    Load calculator with data from box XX
  STO        3XX    Store calculator value in box XX
  ADD        1XX    Add data in box XX to calculator
  SUB        2XX    Subtract data in box XX from calculator
  IN         901    Get input from inbox, put in calculator
  OUT        902    Write calculator total to outbox
  HLT        000    Stop executing
  BRZ        7XX    Zero? Next instruction is in box XX
  BRP        8XX    Positive? Next instruction is in box XX
  BR         6XX    Next instruction is in box XX
  DAT               Data storage reserved

IT 110 Lecture 5

Generations of programming languages
o First generation: programmed directly in binary using wires or switches.
o Second generation: assembly language. Human readable, converted directly to machine code.
o Third generation: high-level languages with while loops, if-then-else, and structured programming. Most programming today, including object-oriented languages.
o Fourth generation: 1990s natural languages, non-procedural, report generation. Use programs to generate other programs. Limited use today.

Translation to machine code
o Key idea: regardless of the language a program is written in, computers only process machine code.
o All non-machine code goes through a translation phase into machine code:
  o Code generators
  o Compilers
  o Assemblers

IT 110 Lecture 6

*** Important: the definitions, the table, and the signal numbers. ***

Bus access: Signals 0, 2, 7, and 12 control which data gets written to the bus.
Control signals: determine the order of operations, access to the bus, loading of registers, etc.
ALU operations: Signals 10 and 11 choose among the addition, subtraction, multiplication, and division operations performed by the ALU.
Selection: Signals 8 and 9 control which of two inputs gets sent to the output.

  Number   Operation        Number   Operation
  0        ACC -> bus       8        ALU -> ACC
  1        Load ACC         9        INC PC
  2        PC -> bus        10       ALU operation
  3        Load PC          11       ALU operation
  4        Load IR          12       Addr -> bus
  5        Load MAR         13       CS
  6        Bus -> MDR       14       R/W
  7        Load MDR

Summary
o The fetch/execute cycle consists of many steps and is implemented in the control unit as microcode.
o Control signals select operations, control access to the bus, and allow data to flow from component to component.
o Adding new instructions means modifying the microprogram in the control unit.

IT 110 Lecture 7

ISA determines instruction formats
o The LMC is a one-address architecture (an accumulator-based machine), e.g., the instruction ADD X.
o There are other instruction set architectures, all classified by the number of explicit operands:
  o 0-address (stack)
  o 1-address (accumulator)
  o 2-address
  o 3-address

*** Address machines: no exam question will be asked on these. ***

0-Address Machines
o All operands for binary operations are implicit on the stack. Only push (input) and pop (output) reference memory.
o e.g., calculating a = a * b + c - d * e

  Code     # Memory Refs
  PUSH A   1
  PUSH B   1
  MUL      0
  PUSH C   1
  PUSH D   1
  PUSH E   1
  MUL      0
  SUB      0
  ADD      0
  POP A    1
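To make the 0-address idea concrete, here is a minimal illustrative sketch in Python (not part of the lecture; the values in "memory" are made up) that executes the stack code above. Only PUSH and POP touch memory; MUL, SUB, and ADD work entirely on the stack.

    # Illustrative sketch only: executing the stack (0-address) code above for
    # a = a * b + c - d * e, with made-up values standing in for memory.

    memory = {"A": 2, "B": 3, "C": 10, "D": 4, "E": 1}
    stack = []

    program = [("PUSH", "A"), ("PUSH", "B"), ("MUL", None),
               ("PUSH", "C"), ("PUSH", "D"), ("PUSH", "E"),
               ("MUL", None), ("SUB", None), ("ADD", None), ("POP", "A")]

    for op, operand in program:
        if op == "PUSH":                  # the only instructions that read memory
            stack.append(memory[operand])
        elif op == "POP":                 # the only instruction that writes memory
            memory[operand] = stack.pop()
        else:                             # binary operations take both operands from the stack
            right, left = stack.pop(), stack.pop()
            stack.append({"ADD": left + right,
                          "SUB": left - right,
                          "MUL": left * right}[op])

    print(memory["A"])                    # 2*3 + 10 - 4*1 = 12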
1-Address Machines
o The accumulator is both a source and the destination. The second source is explicit.
o e.g., calculating a = a * b + c - d * e

  Code       # Memory Refs
  LOAD A     1
  MUL B      1
  ADD C      1
  STORE T1   1
  LOAD D     1
  MUL E      1
  STORE T2   1
  LOAD T1    1
  SUB T2     1
  STORE A    1

2-Address Machines
o Two source addresses for the operands. One source is also the destination.
o e.g., calculating a = a * b + c - d * e

  Code         # Memory Refs
  MOVE T1, A   2
  MUL T1, B    3
  ADD T1, C    3
  MOVE T2, D   2
  MUL T2, E    3
  SUB T1, T2   3
  MOVE A, T1   2

o Note: in MOVE T1, A the first operand (T1) is the destination and the second (A) is the source.

3-Address Machines
o One destination operand and two source operands, all explicit.
o e.g., calculating a = a * b + c - d * e

  Code            # Memory Refs
  MPY T1, A, B    3
  ADD T1, T1, C   3
  MPY T2, D, E    3
  SUB A, T1, T2   3

o Register variant (using registers for the temporaries):

  Code            # Memory Refs
  MPY R1, A, B    2
  ADD R1, R1, C   1
  MPY R2, D, E    2
  SUB A, R1, R2   1

Comparison
*** There will be no exam question on the comparisons. ***
Assume 8 registers (3 bits), 32 op-codes (5 bits), 15-bit addresses, and 16-bit integers. Which ISA accesses memory the least? (The totals are re-derived in the sketch after this lecture's summary.)

  ISA                Instructions               Data refs                 Total
  0-address          10 x 20 bits = 200 bits    6 x 16 bits = 96 bits     296 bits
  1-address          10 x 20 bits = 200 bits    10 x 16 bits = 160 bits   360 bits
  1 1/2-address      7 x 23 bits = 161 bits     6 x 16 bits = 96 bits     257 bits
  2-address          7 x 35 bits = 245 bits     18 x 16 bits = 288 bits   533 bits
  3-address          4 x 50 bits = 200 bits     12 x 16 bits = 192 bits   392 bits
  3-address (regs)   4 x 38 bits = 152 bits     6 x 16 bits = 96 bits     248 bits

Summary
o The instruction set architecture determines the format of instructions (and therefore the assembly language).
o Four basic types with variations:
  o 0-address (stack)
  o 1-address (accumulator)
  o 2-address (register variant is 1 1/2-address)
  o 3-address (with register variant)
o The ISA dramatically affects the number of times memory is accessed.
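The totals in the comparison table are simply instruction bits plus data bits. The short sketch below (Python, purely illustrative and not part of the lecture) re-derives them from the stated assumptions: 5-bit op-codes, 15-bit addresses, 3-bit register numbers, and 16-bit integers.

    # Illustrative sketch only: re-deriving the memory-traffic comparison.
    # Per ISA: (instruction count, bits per instruction, number of data references).
    isas = {
        "0-address":        (10, 20,  6),
        "1-address":        (10, 20, 10),
        "1.5-address":      ( 7, 23,  6),
        "2-address":        ( 7, 35, 18),
        "3-address":        ( 4, 50, 12),
        "3-address (regs)": ( 4, 38,  6),
    }

    for name, (count, instr_bits, data_refs) in isas.items():
        instr_total = count * instr_bits      # bits fetched for the instructions themselves
        data_total = data_refs * 16           # bits moved for 16-bit data operands
        print(f"{name:18s} {instr_total:3d} + {data_total:3d} = {instr_total + data_total} bits")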
IT 110 Lecture 8

*** RISC and CISC are very important; an exam question on them is certain. ***

Definitions
o CISC: Complex Instruction Set Computers.
o RISC: Reduced Instruction Set Computers.

What is CISC?
A type of microprocessor design. CISC processors use more CPU transistors in an effort to maximize code density in memory. Most common microprocessor designs, such as the Intel 80x86 and Motorola 68K series, followed the CISC philosophy. CISC was developed to make compiler development simpler.

CISC attributes
o A 2-operand format, where instructions have a source and a destination.
o Register-to-register, register-to-memory, and memory-to-register commands.
o Variable-length instructions, where the length often varies according to the addressing mode.
o Multi-clock-cycle instructions.

CISC motivation
1- High number of operations (300+).
2- Compilers have less work to do to translate high-level languages into machine code.
3- Large number of instruction formats.
4- Multi-clock-cycle instructions.
5- Fewer registers; more memory access.
6- Large number of transistors and high CPU complexity, and therefore higher CPU prices.

CISC disadvantages
1- Instruction hardware becomes more complex: individual instructions can be any length and take more time to execute, slowing down performance.
2- Many specialized instructions aren't used frequently enough; only about 20% of the available instructions are typically used.
3- More time is needed to examine the condition code bits.

What is RISC?
A type of microprocessor architecture that uses a small, highly optimized set of instructions. The first RISC projects came from IBM, Stanford, and UC Berkeley.

RISC attributes
o One-cycle execution time: RISC processors have a CPI (clocks per instruction) of one cycle. This is achieved through a technique called pipelining.
o Pipelining: a technique that allows the simultaneous execution of parts of instructions.
o A large number of registers.

RISC motivation
1- Lower number of operations (150+).
2- Compilers have more work to do.
3- Small number of instruction formats.
4- All instructions take one cycle.
5- Load/store architecture.
6- Smaller number of transistors and lower CPU complexity, and therefore lower CPU prices.

RISC disadvantages
o By making the hardware simpler, RISC architectures put a greater burden on the software.

*** The comparison between the two processor types is very, very important. ***
o CISC: emphasis on hardware. RISC: emphasis on software.
o CISC: includes multi-clock complex instructions. RISC: single-clock, reduced instructions only.
o CISC: memory-to-memory; "LOAD" and "STORE" are incorporated in instructions. RISC: register-to-register; "LOAD" and "STORE" are independent instructions.
o CISC: small code sizes, high cycles per second. RISC: low cycles per second, large code sizes.
o CISC: transistors used for storing complex instructions. RISC: spends more transistors on memory registers.

IT 110 Lecture 9

General enhancements
o Use RISC-based techniques:
  o Fewer instruction formats and fixed-length instructions → faster decoding.
  o More general-purpose registers → fewer memory accesses.

Clock cycle and instruction cycle
*** The five steps are important. ***
o Most instructions take several clock cycles to execute. The instruction cycle:
  o Fetch the new instruction [IF].
  o Decode the instruction [ID].
  o Execute the instruction [EX].
  o Access memory (if needed) [MEM].
  o Write back to the registers [WB].
o Each stage takes a clock cycle, so complete execution takes 5 cycles.
o Notice that the ALU used in stage 3 is idle in stages 1, 2, 4, and 5. The same can be said for other components if they are all discrete. Underutilized hardware!

Pipeline
*** Very important. ***
o Solution: offset and overlap the instructions in a pipeline.
o By cycle 5, the CPU is executing 5 instructions at once. After this, one instruction completes every cycle.
o Ideally, an n-stage pipelined CPU approaches n times the throughput of a non-pipelined CPU (see the cycle-count sketch at the end of this lecture).
o Problems with pipelining:
  o Dependencies (register interlock): if an instruction needs a result from the immediately preceding instruction, that result won't be written back until WB, but it is needed in EX.
    (Dependencies: some operations depend on others, so a step cannot be executed until the previous step has finished.)
  o Branching: when the instruction being executed is a branch, we can't know whether the branch will be taken until after stage 3, but by that time other instructions are already "in flight."
    (Branching: once instructions start executing, execution may branch, and we do not know whether the branch is needed until after the third stage.)

Summary
o RISC-based CPUs offer general performance enhancements due to simplified formats and single-clock-cycle execution.
o Pipelining allows multiple instructions to be in various stages of execution at once.
o Superscalar processing duplicates pipelines in a single core so that multiple instructions execute simultaneously. (Superscalar combines pipelining with parallelism.)
o Data dependencies and branches are hazards to both pipelining and superscalar architectures.
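A quick way to see the pipelining claim is to count cycles. The sketch below (Python, illustrative only, ignoring the dependency and branch hazards noted above) compares a non-pipelined CPU, where each instruction occupies all 5 stages by itself, with an ideal 5-stage pipeline, where one instruction completes per cycle once the pipeline is full.

    # Illustrative sketch only: ideal cycle counts for the 5-stage pipeline
    # (IF, ID, EX, MEM, WB) versus no pipelining, with no stalls.

    STAGES = 5

    def cycles_unpipelined(n):
        return STAGES * n                 # each instruction runs all 5 stages alone

    def cycles_pipelined(n):
        return STAGES + (n - 1)           # 5 cycles to fill the pipeline, then 1 per instruction

    for n in (5, 100, 1000):
        u, p = cycles_unpipelined(n), cycles_pipelined(n)
        print(f"{n:4d} instructions: {u:5d} vs {p:5d} cycles, speedup {u / p:.2f}x")
        # the speedup approaches 5x (the number of stages) as n grows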
IT 110 Lecture 10

o Within the instruction fetch-execute cycle, the slowest steps are those that require memory access. Therefore, any improvement in memory access can have a major impact on program processing speed.
o The memory in modern computers is usually made up of dynamic random access memory (DRAM) chips. DRAM is inexpensive, and each DRAM chip is capable of storing millions of bits of data.
o Static RAM (SRAM) is an alternative type of random access memory that is two to three times as fast as DRAM. SRAM design requires much more chip real estate than DRAM: 1 or 2 MB of SRAM requires more space than 64 MB of DRAM, and costs more.

Three different approaches are commonly used to enhance the performance of memory:
*** The three approaches to improving memory performance are important, along with the definitions of latency and bandwidth. ***
o Wide path memory access.
o Memory interleaving.
o Cache memory.
All three are used simultaneously in system design.

Wide path memory access
o The simplest means to improve memory access is to widen the data path so as to read or write several bytes or words between the CPU and memory with each access; this technique is known as wide path memory access.
o Accessing memory has high latency but also high bandwidth.
o Latency: the amount of time a round trip takes, i.e., the time from when the CS and R/W signals are asserted until the data is in the MDR.
o Bandwidth: the amount of data that can be returned per unit time.
o Pipelining increases bandwidth (instructions executed per unit time) but not the latency of a single instruction; that is still 5 cycles. Memory bandwidth can be increased in the same manner.
o Requests for memory aren't satisfied 1 byte at a time, but rather 4, 8, or even 16 bytes at a time. This requires a wider bus between the CPU and memory.

Memory interleaving
o Another method for increasing the effective rate of memory access is to divide memory into parts (memory interleaving) so that it is possible to access more than one location at a time.
o Each part has its own address register and data register, and each part is independently accessible. Memory can then accept one read/write request from each part simultaneously.
o Although it might seem that the obvious way to divide up memory would be in consecutive blocks, interleaving spreads consecutive addresses across the parts.

Cache memory
o A different strategy is to position a small amount of high-speed memory, for example SRAM, between the CPU and main storage.
o This high-speed memory is invisible to the programmer and cannot be directly addressed in the usual way by the CPU. Because it represents a "secret" storage area, it is called cache memory.
o Cache memory is the only technique that tries to minimize latency. DRAM has high latency but is inexpensive; SRAM has low latency but is expensive. Use a small amount of expensive SRAM as a buffer in front of the large amount of DRAM.
o Hit: the requested data exists in the cache; very fast.
o Miss: the data is not in the cache; fetch it from memory, copy it into the cache, and then treat it as a hit.

*** Important: the components of a cache entry. ***
Cache entries consist of:
o Tag: the address.
o Data: a copy of the memory contents.
o Dirty bit: indicates whether the data in the cache is newer than the contents of memory.

Cache replacement algorithm
*** Important: the algorithm for evicting data from the cache. ***
o Once the cache fills, a miss will cause an existing line to be replaced. Which one?
  o Least recently used (LRU).
  o First in, first out (FIFO).
  o Least frequently used.
  o Random, etc.

What should happen on a memory write?
*** Important. ***
o Write-through: write to the cache and then immediately write to memory. Safe, simple, slow.
o Write-back: write only to the cache; use the dirty bit to write back to memory when the line is replaced. Complicated, fast. (See the cache sketch at the end of this lecture.)
o Cache coherency gets particularly tricky with multiple cores and multiple levels of cache.

Summary
o Latency is the round-trip time to deliver a single request.
o Bandwidth is the number of requests that can be fulfilled in a unit of time.
o Three ways of improving memory performance:
  o Wide path memory access: increase bandwidth to memory by fetching multiple bytes at a time.
  o Memory interleaving: increase bandwidth to memory by fetching in parallel across blocks.
  o Cache memory: decrease latency to memory by keeping fast copies closer to the CPU. Memory must be kept synchronized with the cache.
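The hit/miss, LRU replacement, dirty bit, and write-back ideas fit together in a few lines of code. The following is a minimal illustrative sketch in Python (not part of the lecture; a tiny fully associative cache with made-up sizes) showing a dirty line being written back to memory only when it is evicted.

    # Illustrative sketch only: a tiny fully associative cache with LRU
    # replacement, a dirty bit, and write-back on eviction.

    from collections import OrderedDict

    class Cache:
        def __init__(self, capacity, memory):
            self.capacity = capacity
            self.memory = memory                   # backing "DRAM": address -> value
            self.lines = OrderedDict()             # tag (address) -> [data, dirty]

        def _touch(self, addr):
            self.lines.move_to_end(addr)           # most recently used moves to the end

        def _load(self, addr):
            if len(self.lines) >= self.capacity:   # miss with a full cache: evict the LRU line
                victim, (data, dirty) = self.lines.popitem(last=False)
                if dirty:                          # write-back: only dirty lines go to memory
                    self.memory[victim] = data
            self.lines[addr] = [self.memory[addr], False]

        def read(self, addr):
            if addr not in self.lines:             # miss: fetch from memory, copy into cache
                self._load(addr)
            self._touch(addr)
            return self.lines[addr][0]

        def write(self, addr, value):
            if addr not in self.lines:
                self._load(addr)
            self.lines[addr] = [value, True]       # write only to the cache; mark the line dirty
            self._touch(addr)

    memory = {a: 0 for a in range(16)}
    cache = Cache(capacity=4, memory=memory)
    cache.write(3, 99)          # goes to the cache only; memory[3] is still 0
    for a in (0, 1, 2, 5):      # enough reads to evict address 3 (the LRU dirty line)
        cache.read(a)
    print(memory[3])            # 99 -- written back when the dirty line was evicted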
IT 110 Lecture 11

I: input. O: output.

Input/output characteristics
o Many orders of magnitude slower than memory.
o Character-based vs. block-based devices.
o Burst vs. steady transfers.

Three approaches to I/O
*** Important. ***
o Programmed I/O.
o Interrupt-driven I/O.
o Direct memory access (DMA).

Programmed I/O
o The CPU is responsible for reading from and writing to devices:
  o A special "input" instruction on the CPU.
  o An I/O data register and an I/O address register.
  o Each device is assigned a unique address.
o Memory-mapped I/O alternative: treat the I/O device as a memory address for reads and writes. This simplifies the programmer interface at the cost of slightly more complicated control circuitry.
o Problems with all programmed I/O:
  o The CPU must check status bits to see if the I/O device is "ready."
  o It uses a polling loop (busy-wait) to send and receive data to devices (see the sketch at the end of this lecture).

Interrupts
o Busy-waiting (polling) wastes resources but needs simpler hardware.
o Alternative: after an I/O request from the CPU, let the I/O device notify the CPU when data is ready to be read. This notification is called an interrupt.
o IRQ stands for Interrupt Request:
  o Each device is assigned an IRQ line (signal).
  o The I/O controller sets its IRQ line high.
  o The CPU detects the IRQ at the beginning of the fetch/execute cycle.
  o The CPU saves the state of the running program and switches to an IRQ handler routine.
  o The routine services the request.
  o Control is returned to the previously running code.
o Problems with interrupt-driven I/O:
  o The CPU is still involved with each interrupt, and each interrupt only transfers a single byte or word.
  o Disk or network transfers may be hundreds or thousands of bytes, and the IRQ handler code may be hundreds of instructions. Still too much overhead.

DMA
*** The diagram and the explanations below are important; both are required. ***
o Direct Memory Access (DMA): add a specialized kind of CPU (a DMA controller) that can transfer data directly from a device to memory.
[Diagram: the numbered steps below trace the data flow among the CPU, DMA controller, device controller, and memory.]
o Requires memory arbitration or dual-ported memory.

How programmed I/O, DMA, and interrupts work together:
❶ The CPU uses programmed I/O to specify the memory address, the operation (read/write), the byte count, and the block location on the disk.
❷ The DMA controller initiates the I/O with the device controller.
❸ The DMA controller receives the data and transfers it to memory.
❹ The DMA controller interrupts the CPU to notify it that the data transfer is complete.
❺ The CPU handles the interrupt. All bytes are in memory for processing.

Summary
o Purely programmed I/O requires special I/O instructions, I/O data and address registers, and polling loops that waste CPU resources.
o Interrupt-driven I/O avoids busy-waiting but is unsuitable for large block transfers due to interrupt-handler execution overhead.
o DMA combines programmed I/O and IRQ handlers with a special controller to transfer large blocks of data efficiently, directly to memory.
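The contrast between a polling loop and interrupt-driven I/O can be simulated in a few lines. The sketch below is purely illustrative (Python; the Device class, cycle counts, and handler are invented for this example and are not part of the lecture): with polling the CPU burns cycles checking a status bit, while with an interrupt-style handler it keeps doing other work until the device signals that it is ready.

    # Illustrative sketch only: polling (busy-wait) versus interrupt-style
    # notification, simulated with plain Python.

    import random

    class Device:
        # A pretend I/O device that becomes ready after a random number of cycles.
        def __init__(self):
            self.cycles_until_ready = random.randint(5, 15)
            self.data = 42

        def tick(self):
            self.cycles_until_ready -= 1
            return self.cycles_until_ready <= 0    # True once the data is ready

    def polled_read(device):
        # Programmed I/O: the CPU spins in a polling loop, checking the status bit.
        wasted_checks = 0
        while not device.tick():                   # busy-wait: nothing useful happens here
            wasted_checks += 1
        return device.data, wasted_checks

    def interrupt_read(device, handler):
        # Interrupt-driven I/O: the CPU keeps doing other work; the handler runs
        # once, when the device signals ready.
        useful_work = 0
        while not device.tick():
            useful_work += 1                       # the CPU executes other instructions meanwhile
        handler(device.data)                       # the IRQ handler services the request
        return useful_work

    data, wasted = polled_read(Device())
    print(f"polling: got {data} after {wasted} wasted status checks")
    work = interrupt_read(Device(), lambda d: print(f"interrupt handler received {d}"))
    print(f"interrupt-driven: CPU did {work} units of other work while waiting")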
IT 110 Lecture 12

Storage hierarchy
*** The diagram and the explanations are important; both are required. ***
o Computer storage is often conceptualized hierarchically, based upon the speed with which data can be accessed.
o Performance is driven by latency and bandwidth.
o The more layers away from the CPU, the higher the latency and the larger the capacity.

At the top of the hierarchy:
o CPU registers: hold data for the short term while processing is taking place.
o Cache memory (SRAM): a small, fast memory that holds current data and instructions. The CPU always attempts to access current instructions and data in cache memory before it looks at conventional memory.
o Conventional memory (main memory, DRAM): the CPU accesses data or instructions in conventional memory if they are not present in the cache.
o Both conventional and cache memory are referred to as primary memory.

Secondary storage
o Except for flash memory, access to secondary storage is significantly slower than primary storage.
o Flash memory uses a special type of transistor that can hold data indefinitely without power.
o The magnetic media used for disk and tape and the optical media used for DVDs and CDs also retain data indefinitely.
o Secondary storage has the additional advantage that it can store massive amounts of data; even though RAM is relatively inexpensive, disk and tape storage is much cheaper still.
o Secondary storage can also be used for offline archiving, for moving data easily from machine to machine, and for offline backup storage.

Magnetic disk technology
*** Very, very important. ***
Terminology:
o Platter: a spinning disc within a drive, made of glass or aluminum and coated with magnetic media.
o Head: floats above the media, reading or writing the magnetically encoded data.
o Track: a ring on a single platter.
o Cylinder: a track across all platters.
o Sector: a wedge-shaped slice of a platter.
o Block: the intersection of a track and a sector.
o Seek time: the time to move the head to the desired track.
o Latency time: the time to rotate the desired sector under the head.
o Transfer time: the time to read a block once seek and latency are accounted for.
o CAV (constant angular velocity): used by hard disk drives; the disk always spins at the same speed. Problem: it wastes space on the outer rings.
o CLV (constant linear velocity): the number of bits passing under the head is constant, so the angular velocity is faster at the inner tracks and slower at the outer tracks.

RAID
*** Very important; know the RAID levels and focus on the diagrams. ***
o Disks often fail because they are at least partly mechanical. RAID (redundant array of independent disks) attempts to improve redundancy and bandwidth.
o RAID combines three primary functions:
  o Mirroring
  o Striping
  o Parity checks
o RAID 0: striping.
o RAID 1: mirroring.
o RAID 5: striping with distributed parity (see the parity sketch at the end of this lecture).
o RAID 10: striping across mirrors.

Summary
o The memory hierarchy shows the inverse relationship between speed and capacity in computing systems.
o Magnetic disks have several kinds of latency: seek time, rotational delay, and transfer time.
o RAID attempts to compensate for latency and failures by employing striping, mirroring, and parity checks.
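RAID 5's parity block is the bytewise XOR of the data blocks in a stripe, which is why any single lost block can be rebuilt from the remaining blocks. Below is a minimal illustrative sketch in Python (not part of the lecture; three made-up data blocks on a four-disk stripe).

    # Illustrative sketch only: how RAID 5's parity lets a stripe survive the
    # loss of any single disk, using bytewise XOR.

    from functools import reduce

    def xor_blocks(blocks):
        # Bytewise XOR of equal-length blocks.
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    # One stripe spread across three data disks; the parity goes on a fourth disk.
    data_blocks = [b"ABCD", b"EFGH", b"IJKL"]
    parity = xor_blocks(data_blocks)

    # Simulate losing disk 1: rebuild its block from the surviving blocks plus parity.
    surviving = [data_blocks[0], data_blocks[2], parity]
    rebuilt = xor_blocks(surviving)
    print(rebuilt == data_blocks[1])   # True: XOR of the rest recovers the lost block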
IT 110 Lecture 13

Elements of a network
*** Very important, along with the definitions. ***
o Protocols: rules about how messages are sent, received, directed, and interpreted, like grammar in a human language.
o Messages: data that is sent and received as part of a communication. Two parts: the protocol header and the data payload. The protocol header is the envelope in which the data is carried.
o Media: the material through which the messages move. Wired: copper (electrical) or fiber (optical). Wireless: any non-conducting material (radio waves).
o Devices: equipment that sends, receives, or directs messages through media. Endpoints or intermediate devices.

OSI and TCP/IP models
o The Open Systems Interconnection Reference Model (OSI) is a theoretical model, developed over many years as a standard by the International Standards Organization (ISO).
o TCP/IP is an older and more practical model, independently developed to meet the needs of the original Internet design, and regularly modified and updated to meet current needs.
o The OSI model does not specify concrete protocols; rather, it specifies the functions that concrete protocols need to implement at each layer.

OSI layers (top to bottom): Application, Presentation, Session, Transport, Network, Data Link, Physical.

o Physical: transmits raw bits as code words or symbols. No knowledge of the data it transmits. Examples: high and low voltages over copper twisted-pair wire, or colors of light in fiber.
o Data Link: groups of bits called frames are sent and received on a single network type. Handles synchronization and collision detection or avoidance. Protocol examples include Ethernet, Token Ring, FDDI, and 802.11 (wireless).
o Network: makes it possible to send units of information (packets) across different kinds of networks (routing). Provides a uniform addressing scheme and network congestion control. Protocol examples include IP (Internet Protocol), IPX (Internetwork Packet Exchange), and ICMP (Internet Control Message Protocol).
o Transport: ensures reliable delivery of packets, error recovery, flow control, congestion control, and multiplexing of the network by several applications at once. Example protocols include TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
o Session: provides enhanced end-to-end session services such as authentication and authorization. Example protocols include PAP (Password Authentication Protocol), NetBIOS, and PPTP (Point-to-Point Tunneling Protocol).
o Presentation: manages the way data is represented and formatted via encryption, compression, serialization, and encodings. Examples include ASCII and XML.
o Application: provides protocols for specific applications. Examples include FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol), SNMP (Simple Network Management Protocol), LDAP (Lightweight Directory Access Protocol), and HTTP (Hypertext Transfer Protocol). Most are defined in RFCs.

OSI and TCP/IP models (continued)
o The TCP/IP model is a real-world protocol stack used for most network communication today.
o Layers separate concerns and build interoperability between different manufacturers.
o Intermediate devices examine headers and reformat protocol data units for the next hop.

*** The differences between the two models are very important. ***

  OSI model                            TCP/IP model
  Application, Presentation, Session   Application
  Transport                            Transport
  Network                              Network
  Data Link                            Data Link
  Physical                             Physical

*** This diagram and its explanation are very, very important; you may be asked to draw it, explain it, or both. ***
[Diagram: encapsulation as data moves down the sending stack, de-capsulation as it moves up the receiving stack. A small encapsulation sketch follows the summary below.]

Network topologies
*** Very, very important. ***
o Logical vs. physical layouts.
[Diagram: logical and physical layouts of bus, star, ring, and mesh topologies.]

Summary
o Networks consist of protocols, messages, media, and devices.
o The OSI model provides seven layers of functionality that are concretely provided in the five layers of TCP/IP.
o As data moves down the layers, it is encapsulated in the lower layer's protocol data unit, and as it moves up, it is de-capsulated.
o Networks can be arranged logically and physically as buses, stars, rings, meshes, or hybrids of these.
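Encapsulation and de-capsulation can be sketched very simply: each layer on the way down wraps the unit from the layer above in its own header, and each layer on the way up strips that header off. The following is an illustrative sketch in Python (not part of the lecture; the headers are plain text markers rather than real protocol fields).

    # Illustrative sketch only: encapsulation going down the stack and
    # de-capsulation coming back up, with headers as simple text markers.

    layers = ["Application", "Transport", "Network", "Data Link"]   # Physical just moves bits

    def encapsulate(payload):
        unit = payload
        for layer in layers:                      # Application wraps first, Data Link last (outermost)
            unit = f"[{layer} header|{unit}]"
        return unit

    def decapsulate(frame):
        unit = frame
        for layer in reversed(layers):            # strip the outermost (Data Link) header first
            prefix = f"[{layer} header|"
            assert unit.startswith(prefix) and unit.endswith("]")
            unit = unit[len(prefix):-1]           # remove this layer's header and hand up the payload
        return unit

    frame = encapsulate("GET /index.html")
    print(frame)                                  # nested headers, Data Link outermost
    print(decapsulate(frame))                     # the original payload, recovered at the receiver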