CS 432: Network Modeling and Simulation - Lecture Notes
Summary
These notes provide a comprehensive overview of network modeling and simulation. They cover the main types of models, such as stochastic and deterministic models, along with modeling principles, wireless model requirements, and the OMNeT++ simulation framework, and briefly touch on practical applications such as handover optimization modeling.
**Topic 1 to 8**

- Networks are diverse and complex.
- Simplifying discussions about networks is therefore necessary.

**Need for NeMS:** understanding the technology landscape, which includes **people**, **technology**, and **relationships**.

**Technology Landscape**

1. **Communications systems**: rapid evolution.
2. **User demands**: the need for **high-performance networks**.
3. **Service providers**: rapid expansion of network infrastructure.

- Network researchers face the protocol war by developing new communications techniques, architectures, and capabilities.
- Equipment vendors release new devices of increasing capability and complexity; technology developers and OEMs are building next-generation (NG) equipment.
- Network designers and developers work out how to satisfy the QoS demands of users.

**Answer**: there are several ways to satisfy these goal seekers:

- Prototyping and empirical testing
- Trial field deployment
- Modeling and Simulation (M&S)
- Analysis

**What is NeMS?**

Network Modeling and Simulation is often treated as a single term.

- **Simulation** is the imitation of the behaviour of a real-world system: a computational re-enactment according to the rules described in a model. Simulations are pieces of computer software that implement algorithms, take inputs, and give outputs.
- **Modeling** is the step that precedes simulation. Together they form an iterative process that approximates real-world systems.
- A **model** is a logical representation of a complex entity, system, phenomenon, or process. In communications, a network model can be an analytical representation or a mathematical form such as a state machine, in closed or approximate form.
- **Computer simulation** is the execution of computer software that reproduces behavior with a certain degree of accuracy to provide visual insight.
- **Computer model**: a template on which a computer program runs.
It has:

- Inputs
- Outputs
- Behaviour

**Network Model Definition**: can be classified as:

- **Descriptive**
- **Analytical**
- **Mathematical**
- **Algorithmic**

**Types of Computer Models**

- **Stochastic vs. deterministic**: stochastic models incorporate randomness, while deterministic models have predictable outcomes.
  - **Deterministic**: produces the same results for a given set of inputs and is reliable in predictable scenarios. Deterministic models are simpler than stochastic models and are often used in engineering designs and scientific experiments.
  - **Stochastic**: incorporates randomness and uncertainty and produces many possible answers and outcomes. Stochastic models are versatile in handling randomness and are often used to forecast stock market fluctuations or unpredictable weather.
- **Continuous vs. discrete**: continuous models consider values that change smoothly; discrete models focus on distinct, separate values.
- **Steady-state vs. dynamic**: steady-state models assume constant conditions; dynamic models account for changes over time.
  - **Dynamic**: a condition where something changes over time. For example, a dynamic simulation model takes into account the rate of mass and energy accumulation within a system, which lets it determine how long the system takes to reach a stable condition. Dynamic simulations are often more detailed and realistic than steady-state simulations.
- **Local vs. distributed**: local models focus on specific, isolated systems; distributed models involve systems spread across multiple locations.
- **Linear vs. nonlinear**: linear models exhibit direct proportionality, while nonlinear models show more complex relationships.
- **Open vs. closed**: open models interact with external systems; closed models operate in isolation.

**Lecture: 5**

**Modeling Principles**

- Model only what you **understand**.
- The **utility of a model** is determined by its ability to mimic a **real-world system**.
- A comprehensive **knowledge of the system** is a prerequisite for effective modeling.

**Wireless Model Requirements**

A wireless model incorporates several key factors:

- Free-space path loss
- Hidden-terminal issues
- Absorption effects

**Understanding Your Model**

Your model encompasses:

- Your **perspective** on the system.
- Your **assumptions** regarding its behavior.
- The **analytics/mathematics** used for analysis.

Example elements: caching server, response time, infinite buffer, queuing theory.

**Modeling Approach**

- **Model what you need and no more** to avoid complexity.

**Types of Models**

- **Underdefined model**:
  - Simplified analysis.
  - Easier simulations.
  - May yield **untrustworthy** results.
- **Overdefined model**:
  - More complex analysis.
  - Long simulation runs.
  - Generally yields more **reliable** results.
  - Increased potential for **error** due to complexity.

Effective modeling requires a balance between simplicity and reliability, ensuring that models are neither overly simplistic nor excessively complex, so that they represent real-world systems accurately.
| Parameter       | Explanation             |
|-----------------|-------------------------|
| BER             | PER, throughput, delay  |
| Throughput      | File transfer delay     |
| Packet overhead | Network efficiency      |

**Bit Error Rate (BER)** and **Packet Error Rate (PER)**

**Lecture: 7**

**A simple use case: one-hop communication network**

Modeled inputs:

| Parameter                      | Explanation              |
|--------------------------------|--------------------------|
| Signal power (dBm)             | Received power, BER, PER |
| Waveform type (analog/digital) | BER, PER                 |
| FEC (e.g., Hamming code)       | BER, PER, ReTX           |

**Retransmission (ReTX)**, **Forward Error Correction (FEC)**

**Lecture: 8**

**Simulation building process**

- **Entities**
  - Wireless computers and their packets (multiple instances)
  - WiFi AP (single instance)
  - Traffic generator (single instance): creates the wireless computers and their packets
- **States**
  - WiFi AP (idle or busy)
  - Each computer generates a number of packets
  - Each packet is successful or failed
- **Events**
  - Wireless computer creation
  - Packet generation
  - Wireless AP activity
  - Packet success/failure
- **Queues**
  - Frames waiting in the output queue of a wireless computer
  - Frames (packets) in the WiFi AP input queue
- **Random realizations**
  - Packet lengths
  - Number of frames per wireless computer
  - Number of wireless computers
  - BER and PER
  - Packet drop ratio in the WiFi AP input queue
- **Distributions**
  - Uniform/Gaussian
  - Packet lengths
  - Number of frames per wireless computer

**Lecture: 9**

**Simulation run**

- Inputs are created/initialized.
- Events of **transmission**, **reception**, and **noise** occur.
- Randomness causes **queues** to behave unpredictably.
- Track **packet successes** and **failures**.
- Compile simulation logs and present the output in the desired formats.

**Components of a Simulator**

- **Self-contained program**: a complete entity that can run independently.
- **Event queue**: manages the events that occur during the simulation.
- **Simulation clock**: tracks the progression of time within the simulation.
- **State variables**: store the current state of the simulation.
- **Event routines**: define the actions taken for each event.
- **Input routine**: handles the initialization of simulation parameters.
- **Report generation routine**: creates output reports from simulation results.
- **Initialization routine**: sets up the simulation environment.
- **Main program**: executes the overall simulation logic.

By contrast, direct measurement of a real system:

- Is efficient and inspires confidence
- Is not abstracted
- Incorporates real workloads
- Maps to the real world
- Is only available for an operational system
- Exhibits behavioral sensitivity
- May make end-to-end measurement impossible

**Types of Simulations**

- **Monte Carlo simulation**: uses random sampling to understand the behavior of a system.
- **Trace-driven**: uses real-world data to drive the simulation.
- **Discrete-event**: focuses on distinct events that occur at specific times.
- **Continuous-event**: models systems where changes occur continuously over time.

**Lecture: 11**

**When to Simulate**

- An analytical model is **not feasible** (the system is too complex).
- An analytical model is **not possible**, or would be too simple to be representative.
- Simulate to **verify analysis**.
- Simulations are unnecessary otherwise.

**When Not to Simulate**

- The analytical model provides a **good-enough representation**.
- Simulation would require **excessive time**.
- Simulation is **expensive**.
- Simulation is **non-scalable**.

**Common Mistakes in Simulation**

- Inappropriate level of detail
- Improper selection of programming language
- Unverified models
- Improper initial conditions
- Short run times
- Poor random number generators
- Inadequate time estimates
- No achievable goals
- Incomplete mix of essential skills
- Inadequate user participation
- Inability to manage the simulation project

**Lecture: 12**

**Inappropriate Levels of Detail**

- Include only what is **relevant**.
- Avoid simulations that are too fine-grained and **computationally heavy**.
- Acknowledge **interdependent parameters** and their complex interplay.
- **Tip**: focus on **necessity** and **sufficiency** in the details.

### Improper Programming Language in Simulation

- **Scope and type of simulation**: determine the best programming-language choice.
- **Programming paradigms**:
  - **Object-oriented vs. procedural**: influences the types/diversity of simulation parameters.
- **Interpreted vs. compiled languages**:
  - **Machine dependence**: affects portability across systems.
  - **Speed**: compiled languages generally offer better performance.

**Unverified Models**

- Programming is non-trivial; semantic mistakes make simulations produce wrong or misleading results.
- Modular verification is a must.

**Improper Initial Conditions**

- The initial condition is often not the steady state, and this is often a late realization.
- Leads to surprisingly wrong results; the simulation may never converge.

**Short Run Times**

- Results may not reflect true steady states.

**Lecture: 13**

**Poor Random Number Generators (RNG)**

- Poor pseudo-random sequences lead to predictable outcomes.
- Incorrect seed values can inadvertently correlate processes.
- Use reputable RNG algorithms.

**Inadequate Time Estimate**

- Time estimates for simulation projects are often unrealistic.
- Implementations can be delayed by unforeseen complexities.
- Assess model complexity accurately.

**No Achievable Goals**

- Goals must be clearly defined for tangible output analysis.
- Include logs and trace files for monitoring.
- Goals affect simulation complexity and implementation feasibility.
**Incomplete Mix of Essential Skills**

Required skills include:

- Domain knowledge
- Statistics
- Programming
- Project management
- Relevant past experience

**Inadequate User Participation**

- Users should be involved throughout all phases: from modeling to implementation, user interface (UI) design, and output analysis.

**Inability to Manage the Simulation Project**

- Simulations are not monolithic and require careful management.
- **Software-engineering tools** help:
  - Multivariate design.
  - Code management.
  - Change tracking.

**Simulation Inaccuracies**

- Over-reliance on simulation can lead to misleading results.
- Link-budget losses may be modeled too statically: sufficient for steady-state analysis but inadequate for dynamic conditions.
- Ignoring lower layers can lose critical details (e.g., bit-level Bit Error Rate (BER) and delay).
- This often leads to incorrect results in dynamic use cases.

### Lecture: 15

### Code Example

```c
/* Height of an object moving under gravity.          */
/* Initial height s and velocity v are constants.     */
#include <stdio.h>

int main(void)
{
    float h, v = 100.0f, s = 1000.0f;   /* ft/s upward, ft */
    int t;
    /* h(t) = s + v*t - (1/2)*32*t^2, with a = -32 ft/s^2 */
    for (t = 0; (h = s + v * t - 16.0f * t * t) > 0.0f; t++)
        printf("t = %d s, h = %.1f ft\n", t, h);
    return 0;
}
```

Development of Systems Simulation:

- Focus on creating effective models to simulate real-world scenarios.
- **Key scenario**: "Still I am not dead yet!"
- **Variables**:
  - **h** = height (feet)
  - **t** = time in motion (seconds)
  - **v** = initial velocity (feet per second; + indicates upward)
  - **s** = initial height (feet)
  - **a** = acceleration (feet per second²)
- **Not available**: the mass of the object and air resistance are not considered.
- **Main function initialization**:
  - `float h, v = 100.0, s = 1000.0;`
  - `int t;`
- **Development process**:
  - **Problem formulation**: identify controllable and uncontrollable inputs.
  - **Data collection & analysis**: determine what data to collect and the volume required; assess cost vs. accuracy trade-offs.
  - **Simulation development**: emphasize coding accuracy: "Codify, codify and codify!"
- **Model validation, verification, & calibration**:
  - **Validation**: ensure the system accurately emulates the real phenomenon.
  - **Verification**: confirm the implementation corresponds correctly to the model.
  - **Calibration**: adjust parameters to align simulated data with real data through tuning.
- **Analysis techniques**:
  - **"What-if" analysis**: evaluate performance measures under varying inputs.
  - **Sensitivity analysis**: analyze the relative importance of parameters to the output, and their interrelations.
- **Life cycle of simulation development**: the iterative process of developing, validating, and refining simulations to ensure accuracy and reliability.

Recommended Text and References

- **NeMS contents cover:**
  - Established **mathematical models**, equations, and forms.
  - Commonly used **simulation tools** and **code reusability**.
  - Understanding their **inter-relationship**.
- **NeMS contents don't cover:**
  - **Mathematical derivations** from scratch.
  - **Programming dexterity**.

Basics of NeMS

- Mohsen Guizani et al., *Network Modeling and Simulation* (John Wiley, 2010).
- Jack Burbank et al., *An Introduction to Network Modeling and Simulation for the Practicing Engineer* (John Wiley, 2011).
- John A. Sokolowski & Catherine M. Banks, *Modeling and Simulation Fundamentals* (John Wiley, 2010).

### Lecture: 17

### OMNeT++ Overview

- **OMNeT++**:
  - Stands for **Objective Modular Network Testbed in C++**.
  - Functions as a **simulation kernel** and a **component-based simulation library**.
  - It is a **framework** rather than a ready-made simulator, designed for creating and simulating any type of network.

### Simulation Kernel

The **simulation kernel** processes an **event queue** in a discrete-event simulation:

1. Events are processed in **timestamp order** (the head of the queue is popped first).
2. Processing an event can generate **new events**, which are inserted back into the queue.
3. The simulation stops when:
   - the event queue is empty,
   - a termination condition is met, or
   - the user ends it manually.

### Debug and Release Modes

- **Debug mode**:
  - No optimizations are applied to the binary.
  - Allows accurate **breakpoint** placement and step-through debugging.
  - Compiled with full **symbolic debug information**.
- **Release mode**:
  - Optimizations are enabled.
  - Generates instructions without debug data.
  - Extensive code may be removed or rewritten.
  - The resulting executable may differ from the code as originally written.
- A **workspace** is a logical collection of projects.
  - Example: a workspace named **p2p** may contain only peer-to-peer applications.

### Lecture: 18

### Design of OMNeT++

The **design of OMNeT++** includes:

*Requirements:*

- Support large-scale simulations.
- Simplify debugging.
- Integrate with standard tools for input/output.
- Combine modeling and analysis.

*Design features:*

- Hierarchical models.
- Reusable components.
- Visualization tools.
- Open data interface.
- Integrated development environment (IDE).

#### Model Structure

- **Modules**:
  - Core components of the model.
  - Communicate through **message passing**.
  - Implemented as C++ files running within the simulation kernel.
- **Module types**:
  - **Simple modules**: active components.
  - **Compound modules**: composed of simpler modules.
  - Module hierarchies may have unlimited levels.
- **Gates**:
  - Input/output interfaces of modules.
  - Facilitate message passing.
  - Linked via connections (e.g., **TPROP**, **RDATA**, **BER**).
- **Channels**:
  - Define connection types with specific properties.
  - Reusable across multiple contexts.
  - Example: standard host communication via an Ethernet cable.

#### Message

A **message** consists of:

- A **timestamp**
- **Arbitrary data**

#### Network

A **network** is a compound module with no external gates.

### Module Parameters

- Used to pass configuration data to simple modules.
- Define the model topology with parameters that can be:
  - **string**, **numeric**, or **boolean**;
  - constants, random numbers, or expressions with references.

**Lecture: 19**

**Internal Architecture**

- OMNeT++ simulation programs have a modular structure.
- The **model component library** comprises the code of **compiled simple and compound modules**.
- **Simulation kernel & SIM class library**:
  - The **simulation kernel** instantiates modules to construct a concrete simulation model.
  - The **SIM class library** addresses common simulation tasks, including:
    - **Random number generation** (various distributions).
    - **Queues** (e.g., FIFO, priority).
    - **Messages**: capable of holding arbitrary data structures.
    - **Routing**: facilitates exploration of the topology and generation of graph data structures.
- **Envir, Cmdenv, and Tkenv libraries**:
  - The simulation operates within an **environment** defined by these libraries.
  - Key functions include:
    - Determining the source of input data.
    - Determining the destination of simulation results.
    - Managing debugging output.
    - Controlling simulation execution.
    - Visualizing the model.

**NED**

- Stands for **Network Description Language**.
- Used to create **network topologies** within OMNeT++.
- Allows graphical topology creation, with the corresponding **NED source code** generated automatically.

**Typical ingredients of a NED description**

- Network definitions
- Compound module definitions
- Simple module declarations

**Network definitions**

- Defined as **compound modules**, which are self-contained simulation models.

**Simple module declarations**

- Specify the interface of modules: **gates** and **parameters**.

**Compound module definitions**

- Include:
  - Declaration of external interfaces (gates and parameters).
  - Definition of submodules and their interconnections.
**Creating a Topology: Example**

- Network name: **My_Network**
- Module name: **My_Module**
- Compound module name: **standardHost**

**Inheritance**

- Modules and channels can be subclassed.
- Derived modules and channels may introduce:
  - New **parameters**.
  - New **gates**.
- Compound modules can additionally add:
  - New parameters.
  - New **connections**.

**Interface Instantiation**

- Module and channel interfaces serve as placeholders for module/channel types; the concrete type is determined at network setup via parameters.
- Example modules:
  - **GenericTCPClientApp** can be derived to form **FTPApp**.
  - **BaseHost** can be combined with **WebClientApp**.
  - Mobility models (e.g., **ConstantSpeedMobility**, **RandomWayPointMobility**) can be used for mobile hosts.
  - **IMobility**: represents mobile host compound modules.
- Separation of concerns is crucial for a cleaner model.

**Packages**

- Address name clashes between different models.
- Simplify the specification of the **NED files** required for specific simulation models.
- `package book.simulations;`
  - A package is a mechanism for organizing classes and files. Here the simulation project inside OMNeT++ is called "**Book**", and this NED file is found in the "**simulations**" folder of the project.

**Lecture: 22**

**INI File Editor**

- Considers all **NED** (Network Description) declarations:
  - **Simple modules**
  - **Compound modules**
  - **Channels**, etc.
- Relates this information directly to the contents of the **INI file**.
- The editor knows which **INI file keys** correspond to which **module parameters**.

**Separation of Model and Experiments**

- It is good practice to separate the different aspects of a simulation for clarity:
  - **Model topology**: defined in the **NED file**; message types are described in the **MSG file**.
  - **Model behavior**: implemented in **C++ code**.
- This separation leads to a cleaner, more organized model.
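The inheritance and package mechanisms described above can be sketched in NED. The parameter and gate names below are hypothetical; only the module names come from the notes:

```ned
package book.simulations;

// A derived simple module adds a new parameter, per the inheritance
// rules above. Parameter/gate names here are illustrative assumptions.
simple GenericTCPClientApp
{
    parameters:
        string serverAddress;
    gates:
        inout tcp;
}

simple FTPApp extends GenericTCPClientApp
{
    parameters:
        int maxTransfers = default(5);   // new parameter added by the subclass
}
```

The `package` line ties the file to the "simulations" folder of the "Book" project, avoiding name clashes with other models.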
**Configuring Simulations**

- To capture the effects of different inputs, use **run-to-run variables**.
- **C++** and **NED code** do not inherently support such variables.
- **INI files** provide the mechanism for specifying simulation parameters (example: **omnetpp.ini**).

**INI File Syntax**

- An **INI file** is essentially a plain **ASCII text file**.
- It consists of **key-value pairs** of the form `<key> = <value>`.

**INI File Editor**

- Lets the user configure simulation models for execution.
- Supports both form-based and source editing.

**Lecture: 23**

### Building Simulation Programs

#### Build process

- The same as building any C/C++ program from source:
  - All C++ sources are compiled into object files.
  - All object files are linked with the necessary libraries to produce an **executable** or a **shared library**.

#### Using the GUI project builder

- **Initial build time** is longer because the project is indexed before it is built.
- **Dependency generation** involves generating makefiles covering **classes**, **functions**, **methods**, **variables**, and **macros**.

#### Using mingwenv

- **Source files** must include the .ned, .msg, .cc, and .h files.
- Set the working directory to the folder containing the source files.
- **Commands**:
  - Run `$ opp_makemake` to create a **Makefile**.
  - Run `$ make` to compile the simulation program.
- **Outcome**: successful execution results in a built simulation program.

#### Running Simulations

- A **simulation run** is initiated by launching the project built via the **Makefile**.

#### OMNeT++ IDE Features

- **Types of runs**:
  - **Single runs**: execute one simulation at a time.
  - **Batch runs**: execute multiple simulations in a single command.
  - **Run numbers**: manage and track multiple executions.
- **Modes**:
  - **Graphical mode (Tkenv)**: visual representation of the simulation.
  - **Command mode (Cmdenv)**: text-based interaction with the simulation.
- **Simulation configuration**: set parameters and the environment for the simulation.
- **Event logs**: record events during the simulation for analysis.
- **Debug support**: tools for troubleshooting and refining simulations.

#### Quick Run

- In the **Project Explorer**, select the desired project.
- Click the **Run** button on the toolbar.
- Behavior varies with the folder structure and available files:
  - **INI file**: if a single **ini** file is present, it is used as the main configuration file.
  - **NED file**: otherwise the IDE scans for available **ini** files to determine the execution parameters.

#### Launch Configuration

**Animation and Tracing in OMNeT++**

- **Animation capabilities**:
  - OMNeT++ supports **animation** of:
    - The **flow of messages** on network charts.
    - **State changes** of nodes displayed in simulations.
  - Animation is **automatic**; no programming is required of the simulation engineer.
  - This is generally sufficient for network simulations, which **rarely** need fully customizable animation features.
- **Simulation tracing**:
  - Simple modules can generate **textual debug (trace)** information using functions like `printf()`.
  - OMNeT++ includes a **module output window**: a dedicated window for displaying output streams, which makes monitoring module execution easier.
- **Running simulations**:
  - Simulations are initiated using the configuration file omnetpp.ini.
  - Location: executed from the /queuenet directory.
  - The configuration supports one or more **ini files**.
  - R = 0 denotes the run number.
  - The configuration contains the directories from which **NED files** are read.
- **Simulation object inspection**:
  - An **object inspector** is a GUI window linked to a simulation object.
  - It displays the **contents** and **properties** of the object.
  - There are three types of inspectors:
    - **Network display**: visual representation of the network.
    - **Log viewer**: shows the logs generated during simulation.
- **Object inspector**: detailed inspection of simulation objects.

**Tkenv: Graphical Runtime Interface**

- Tkenv serves as a graphical interface for running simulations.
- Features include:
  - Network visualization.
  - Message-flow animation.
  - Logs of message flow.
  - Display of textual module logs.
  - Inspectors.
  - Visualization of statistics.
  - Event-log recording.
- **Tkenv in action**:
  - Displays a **timeline** of events.
  - Shows the **Future Events Set (FES)** on a log scale.
  - Combines the network display and log viewer for comprehensive monitoring.

### Organizing and Performing Experiments

#### Need for organizing experiments

- **Repeatable**: fellow researchers should be able to reproduce the results.
- **Unbiased**: results must not be specific to the particular scenario used in the experiment.
- **Rigorous**: scenarios and conditions must be genuinely representative of real-world contexts.
- **Statistically sound**: experimental results must adhere to, and not violate, mathematical principles.

**How to organize experiments?**

- **Model**:
  - The executable representation of the experiment.
  - Comprises **C++ files, external libraries**, and **NED files**.
  - Remains invariant for the purpose of experimentation.
  - The **INI file** is not considered part of the model.
- **Study**:
  - Consists of one or more experiments aimed at investigating a specific phenomenon.
  - Typically involves multiple experiments and one or more models.
- **Experiment**:
  - The exploration of a parameter space using a model.
  - Focuses on one model at a time.
- **Measurement**:
  - A set of simulation runs on the same model with identical parameters.
  - Characterized by the **INI file**, but uses different seed values.
  - May include replications to average out results.
- **Replication**:
  - One repetition of a measurement.
  - Characterized by the specific seed values employed.
- **Run**:
  - A single instance of executing the simulation.
- Identified by specific attributes such as the exact time, date, and computer (host name).

#### Example

- **Handover optimization**: a practical application of the organizational principles outlined above.

### Sequence Charts and Event Log Tables

#### Event Log Tables

- **Event log file**:
  - An eventlog file contains a tabulated log of the messages sent during simulation:
    - Between modules.
    - Self-messages (timers).

#### Event log file creation

- **Command**: to enable logging, set `record-eventlog = true`.
- **Output location**: logs are placed in the **/results** directory.
- **Filename format**: files are named `${configname}-${runnumber}.elog`.

#### Sequence Chart

- Displays event log files in **graphical form**.
- Focuses on the causes and consequences of **events/messages**.
- Aids in understanding complex **simulation models** and verifying desired behavior.

#### Timeline in sequence charts

- Maps **simulation time** onto the horizontal axis.
- Displays the intervals between interesting events, which may differ by orders of magnitude (e.g., MAC vs. higher layers).

**Types of timeline**

- **Linear**: simulation time is proportional to distance measured in pixels.
- **Event number**: event numbers are proportional to pixel distance.
- **Step**: equal distance between subsequent events.
- **Nonlinear**: the distance between events varies nonlinearly with simulation time.

#### Interpreting sequence charts

- **Zero-simulation-time regions**: areas where simulation time is effectively zero.
- **Gutter**: the space in the chart separating different events.
- **Events**: specific occurrences captured within the simulation.
- **Messages**: communications between modules recorded in the log.
- **Displaying module state on axes**: shows the state of modules at different points in time.
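The eventlog settings described above live in the simulation's ini file. A minimal sketch, assuming a network named Tictoc1 (the network name and time limit are illustrative):

```ini
[General]
network = Tictoc1              ; assumed network name
record-eventlog = true         ; writes results/${configname}-${runnumber}.elog
sim-time-limit = 100s          ; illustrative stop condition
```

With this in place, each run produces an .elog file in the results directory that the Sequence Chart tool can open directly.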
### TicToc Tutorial

- **Nodes**:
  - **Tic** and **Toc** are the two nodes in the system.
- **Message passing**:
  - One node (**Tic** or **Toc**) initiates communication by sending a message to the other node.
  - Upon receiving the message, each node is programmed to send it back to the sender.
- **Indefinite communication**:
  - This message passing continues indefinitely until the user stops the process.

#### Creating an empty project

- **Start the OMNeT++ IDE**: open the OMNeT++ Integrated Development Environment.
- **Create a project**: navigate to **File | New | OMNeT++ Project**, enter a name for the project in the dialog that appears, and click **Next**.
- **Select example**: choose the **Tictoc** example located in the **Examples** folder.
- **Completion**: the Tictoc example project has now been created.

#### Opening the NED file

- In the newly created project, open the **simulations** folder within the **Project Explorer**.
- Locate and open the **Tictoc.ned** file for further examination.

#### Understanding Tictoc1.ned

- In the **Project Explorer**, open the **src** folder within the project and open the **Txc.ned** file for analysis.

#### Understanding Txc.cc

- Again in the **Project Explorer**, navigate to the **src** folder and open the **Txc.cc** file to review its contents.
- The **omnetpp.ini** file serves as the configuration file for the simulation environment.
- Compile the project and run it under the **Tkenv** simulation environment to visualize and analyze the TicToc message-passing interactions.
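The first tutorial step can be sketched in NED roughly as follows (a minimal sketch along the lines of the tutorial's tictoc1.ned; the exact file contents in your OMNeT++ version may differ):

```ned
// Two instances of the same simple module, bouncing one message back
// and forth over a pair of delayed connections.
simple Txc1
{
    gates:
        input in;
        output out;
}

network Tictoc1
{
    submodules:
        tic: Txc1;
        toc: Txc1;
    connections:
        tic.out --> { delay = 100ms; } --> toc.in;
        tic.in  <-- { delay = 100ms; } <-- toc.out;
}
```

The C++ behavior lives in Txc1's `handleMessage()`, which simply re-sends whatever message arrives, producing the indefinite tic-toc exchange described above.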
- **Refining graphics**
  - Update **tictoc2.ned**.
  - Add **debugging output** in **Txc2.cc**.
  - Use the **Tkenv output** for visualization.
- **Adding state variables**
  - Introduce a **counter** as a class member of the module.
  - Implement deletion of the message after **10 exchanges**.
  - Files updated: **Txc3.cc**.
- **Adding parameters**
  - Incorporate **input parameters** into simulations.
  - Change the count limit of **10** into a user-defined input.
  - Introduce a **boolean parameter** for module initialization.
  - Files updated: **tictoc4.ned**, **Txc4.cc**, **omnetpp.ini**.
- **Using inheritance**
  - Differentiate between **tic** and **toc** based on **parameter values** and **display string**.
  - Use inheritance to create a simple module and derive from it: **tictoc5.ned**.
- **Modeling processing delay**
  - Introduce **processing delay** into tictoc.
  - Implement a **timer** within the tictoc module to send **"event" messages**.
  - Files updated: **tictoc6.ned**, **txc6.cc**.
- **Random numbers and parameters**
  - Integrate **random numbers** into the simulation, allowing **random packet loss**.
  - Vary the delay from a fixed **1 second** to a **random value**.
  - Files updated: **txc7.cc**, **tictoc7.ned**, **omnetpp.ini**.
- **Timeout and cancelling timers**
  - Moves closer to **real-world protocols**.
  - Implements a **stop-and-wait protocol**.
  - Relevant files: **txc8.cc**, **tictoc8.ned**, **omnetpp.ini**.
- **Retransmitting the same message**
  - Maintain the **original packet** for retransmission rather than creating it anew; tic keeps a copy of the message.
  - Functions added: `generateNewMessage()`, `sendCopyOf(cMessage *msg)`.
  - Files updated: **txc9.cc**, **tictoc9.ned**, **omnetpp.ini**.
- **More than 2 nodes**
  - Create multiple **tic modules** connected in a network.
  - Enable
message generation from one node and random routing
  - Update files: **tictoc10.ned**, **omnetpp.ini**, **txc10.cc**
- **Channels & Inner Type Definitions**
  - Improve connection efficiency with a growing topology
  - Define channels to replicate connections with the same **delay parameter**
  - Update files: **tictoc11.ned**, **omnetpp.ini**, **txc11.cc**
- **Using Two-Way Connections**
  - Simplify connections by using **two-way (inout) gates**
  - Reduce code size by eliminating redundant connections
  - Update files: **tictoc12.ned**, **txc12.cc**, **omnetpp.ini**
- **Defining Message Class**
  - Aim: Increase flexibility by avoiding hardcoded values (e.g., **tic\[3\]**).
  - Incorporate a **random destination** selection mechanism.
  - Key components:
    - **Destination address**
  - **Files involved**: **tictoc13.ned**, **txc13.cc**, **tictoc13.msg**, **omnetpp.ini**
- **Strategy: Avoid Boilerplate Code Writing**
  - Implement strategies to minimize repetitive code.
  - **Txc13.cc** functionality:
    - Outputs the **number of packets sent/received**.
    - Tracks the total number of messages at each node.
  - Related files: **tictoc14.ned**, **txc14.cc**, **tictoc14.msg**, **omnetpp.ini**
  - **Txc14.cc**: Integration with the **Object Inspector in Tkenv** to enhance visualization.
- **Adding Statistics Collection**
  - Importance of gathering network statistics:
    - Especially critical when packets traverse multiple hops.
  - Key statistical measures:
    - **Average hop count**
    - **Maximum and minimum hop counts**
  - Related files: **tictoc15.ned**, **txc15.cc**, **tictoc15.msg**, **omnetpp.ini**
- **Visualizing Output Scalars & Vectors**
  - **OMNeT++** capabilities:
    - Visualize outputs from scalar and vector files.
- Functions include: - **Filtering** - **Processing** - **Displaying** - **Analyzing Results** - Definition of **Simulation Analysis**: - A thorough and often lengthy process to analyze simulation results. - Results are recorded as: - **Scalar values** - **Vector values** - **Histograms** - Application of **statistical methods** to: - Extract relevant information. - Draw conclusions. - **Analysis File (.anf)** - Purpose: Automates analysis steps, including: - Loading result files. - Filtering data. - Transforming data. - **Creating Analysis File** - Utilizing the **Analysis Editor** for efficient file creation. - **Datasets** - Description of input data sets and applied processing. - Visualization as a tree structure showing: - Processing steps and charts. - Nodes functionalities include: - Adding and discarding data. - Processing vectors and scalars. - Selecting operands for operations. - Content management for charts and chart creation. ### Datasets - **Definition**: A dataset describes a set of input data, processing applied to them, and the resulting charts. - **Structure**: Visualized as a tree of processing steps and charts. - **Nodes Functionality**: - Adding and discarding data. - Applying processing to **vectors** and **scalars**. - Selecting operands for operations. - Content management for charts and chart creation. ### Editing Datasets #### Compute Vectors - **Purpose**: Both **Compute Vectors** and **Apply to Vectors** nodes compute new vectors based on existing ones. #### Compute Scalars - **Functionality**: The **Compute Scalars** dataset node adds new scalars computed from other statistics within the dataset. ### Computation Examples #### Bit Rate - **Context**: In a network with multiple source modules generating **Constant Bit Rate (CBR)** traffic. - **Parameters**: - **Packet Length** (in bytes) and **Send Interval** (in seconds) are saved as scalars (pkLen, sendInterval). - **Computation**: - **Node**: Compute Scalar. 
- **Value**: pkLen \* 8 / sendInterval - **Name**: bitrate #### Throughput - **Context**: Sink modules record **rcvdByteCount** scalars; simulation duration is saved globally as **duration** in the top-level module. - **Computation**: - **Value**: 8 \* rcvdByteCount / Network.duration - **Name**: throughput #### Total Received Bytes - **Objective**: Calculate total bytes received in the network. - **Function**: Utilize the **sum()** function. - **Computation**: - **Value**: sum(rcvdByteCount) - **Name**: totalRcvdBytes - **Module**: Network #### Bytes Received by Hosts - **Scope**: Focus on scalars named **rcvdByteCount** recorded from network hosts. - **Computation**: - **Value**: sum(\*\*.host\*.\*\*.rcvdByteCount) - **Name**: totalHostRcvdBytes - **Module**: Network #### Average of Peak Delay - **Context**: Multiple modules record vectors named **end-to-end delay**. - **Goal**: Calculate the average of the peak delays experienced by each module. - **Computation**: - **Functions**: Use **max()** to get peaks followed by **mean()** for averages. - **Value**: mean(max(\'end-to-end delay\')) - **Name**: avgPeakDelay - **Module**: Network Computation Examples 2 Notes ============================ Packet Loss per Client-Server Pair ---------------------------------- - **Clients and Servers**: Involved entities are 3 clients (**cli0**, **cli1**, **cli2**) and 3 servers (**srv0**, **srv1**, **srv2**). - **Datagram Transmission**: Each client sends datagrams to its corresponding server. - **Packet Loss Calculation**: - Computed using the formula: - **Value**: Net.cli\${i={0..2}}.pkSent - Net.srv{i}.pkRcvd - **Name**: **pkLoss** - **Module**: **Net.srv\${i}** Total Number of Transport Packets --------------------------------- - **Input Scalars**: Recorded by different modules; requires matching host variable for **TCP** and **UDP** modules. - **Calculation**: - **Total Transport Packets**: Sum of **TCP** and **UDP** packet counts for each host. 
- **Value**: \${host=\*\*}.udp.pkCount + \${host}.tcp.pkCount
- **Name**: **transportPkCount**
- **Module**: \${host}

Modules with Largest RTT
------------------------

- **Module Functionality**: Various modules record **ping round-trip delays (RTT)**.
- **Objective**: Count modules with **RTT** values exceeding twice the global average.

### Stepwise Calculation:

1. **Step 1**:
   - **Value**: mean(\'rtt:vector\')
   - **Name**: **average**
2. **Step 2**:
   - **Value**: average / mean(\*\*.average)
   - **Name**: **relativeAverage**
3. **Step 3**:
   - **Value**: count(relativeAverage)
   - **Grouping**: value \> 2.0 ? \"Above\" : \"Normal\"
   - **Name**: **num\${group}**
   - **Module**: **Net**

Simulation Models and INET
--------------------------

- **Simulation Model Definition**:
  - **OMNeT++** is a simulation framework, not a ready-made simulation: domain-specific functionality is supplied by simulation models built on top of it.

### Types of Simulation Models:

- **Domain-Specific Functionality**: Frameworks exist for:
  - Wireless Sensor Networks (**WSNs**)
  - Ad-hoc Networks
  - Internet Protocols
  - Performance Modeling
  - Photonic Networks
- **Reusability**: Achieved through OMNeT++\'s modular architecture, allowing easy integration of models.

### Notable Simulation Frameworks:

- **INET Framework**: Standard protocol model library for OMNeT++.
  - **Contents**: Models for **TCP**, **UDP**, **IPv4**, **IPv6**, **OSPF**, **BGP**, wired/wireless link layers, mobility support, and **QoS** (DiffServ, RSVP).
- **OverSim**: Framework for overlay and peer-to-peer network simulation (models include **Chord**, **Kademlia**, **Pastry**).
- **Veins**: Framework for **Inter-Vehicular Communication (IVC)** and road traffic microsimulation.
- **INETMANET**: Fork of the INET framework for mobile ad-hoc networks.
- **MiXiM**: Framework for modeling mobile and fixed wireless networks, WSNs, BANs, vehicular and ad-hoc networks (focus on radio-wave propagation, interference estimation, and wireless MAC protocols).
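The earlier Compute Scalars recipes (bitrate, throughput, totalRcvdBytes) are ordinary arithmetic once the recorded scalars are in hand. A sketch with assumed sample figures standing in for the recorded scalars:

```python
# Recreating the "Compute Scalars" examples outside OMNeT++.
# All input values (pkLen, sendInterval, rcvdByteCount, duration, the
# per-host byte counts) are assumed sample figures, not recorded results.
pkLen = 512          # packet length in bytes
sendInterval = 0.01  # seconds between packets
bitrate = pkLen * 8 / sendInterval          # bits per second

rcvdByteCount = 1_250_000  # bytes recorded by a sink module
duration = 10.0            # simulation duration in seconds
throughput = 8 * rcvdByteCount / duration   # bits per second

per_host = {"host0": 40_000, "host1": 60_000}  # rcvdByteCount per host
totalRcvdBytes = sum(per_host.values())        # the sum() example
```

In the Analysis Editor these same expressions are entered as Compute Scalars nodes; the point here is only that each derived scalar is a simple function of the recorded ones.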
CASTALIA Overview - **CASTALIA**: A simulation framework designed for networks of low-power embedded devices. - **Key Models Offered**: - **Temporal Path Loss**: Simulation of signal degradation over time. - **Fine-Grain Interference**: Detailed modeling of interference among devices. - **RSSI Calculation**: Calculation of Received Signal Strength Indicator. - **Physical Process Model**: Representation of physical processes affecting communication. - **Node Clock Drift**: Simulation of variations in node clock timings. - **MAC Protocols**: Modeling of Medium Access Control protocols. Design Tour of INET - **Purpose**: To understand the workings of ARP in Ethernet environments and explore INET features. - **Key Features Reviewed**: - **Packets**: Data units transmitted over the network. - **Queues**: Structures for managing packet flow. - **Internal Tables**: Data structures used for routing and addressing. ARP Scenario Exploration - **Significance of ARP**: - Not the most crucial protocol but provides insight into network operations. - Relates to key networking concepts: **Ethernet**, **IP**, and higher-layer protocols. - **Scenario Description**: - A client computer initiates a **TCP session** with a server. - **ARP** operations follow, as it learns the **MAC address** for the default router. ARP Usage Diagram - **Simulation Start**: Ethernet autoconfiguration occurs before ARP processes. Entities in ARP Operations - Interaction of various compound modules: - **TCP Host on Ethernet**: The client initiating communication. - **Router**: Facilitates packet forwarding between networks. - **TCP Server**: The destination for the client's requests. - **End-to-End Transmission**: Process of data transfer from client through router to server. Ethernet Compound Module - **Exploration of Ethernet**: Understanding its internal structure and functioning. - **Components**: - **ARP**: Address Resolution Protocol responsible for mapping IP addresses to MAC addresses. 
- **Encap**: Encapsulation of packets for transmission.
- **MAC**: Medium Access Control layer managing device communication.

ARP Packet Structure

- **arpTest.client.eth\[0\].arp**: Reference to the ARP packet structure within the Ethernet module.
- **Inside ARP Packet**:
  - **ARP Packet Class**: Defined by a .msg file, encapsulating the protocol's structure.
  - **Packet Queue**: Contains IP packets awaiting processing.
  - **ARP Cache Build-up**: Mechanism for storing learned MAC addresses for efficient future communication.

Introduction to Top-Down Approach to Modelling and Simulation

**Top-Down Approach to NeMS (Network Modeling and Simulation):**

- **Networks** are complex to design.
- One-time design of a simulation is cumbersome.
- **Top-down approach** involves a phased roll-out of the model-simulate cycle:
  - **Iterative process**.
  - Rolling out the model at every layer to design a network.
- **Design goodness (QoE)** focuses on user-centric aspects.

Strategy

**Rules for Mathematical Reading:**

- **Mathematical modeling**: A representation of an object, system, or idea in a form distinct from the entity itself (Shannon).

**Quantification:**

- The act of counting and measuring that translates human senses, observations, and experiences into numerical sets.
- Facts represented as quantitative data form the basis of science.

**Formalism:**

- Mathematics creates models establishing certain relationships.
- Mathematical statements can be seen as consequences of specific string manipulation rules.

Best Practices for Reading Mathematical Expressions

A\) Understanding math equates to learning a foreign language.\
B) Familiarize yourself with formulas you already comprehend.\
C) Understand the outputs of formulas and their conditions.\
D) Maintain a chart of essential formulas.\
E) Recognize that math can be expressed in various forms while retaining the same meaning.
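Returning to the ARP scenario above: the cache build-up amounts to a table consulted before any request is sent. A toy sketch (the addresses and the `known_hosts` lookup table are invented; INET's ARP model is far more detailed):

```python
# Toy ARP cache: resolve() consults the cache first and "sends a request"
# (simulated here by the invented known_hosts table) only on a miss.
known_hosts = {"10.0.0.1": "0A:AA:00:00:00:01"}  # hypothetical topology

arp_cache = {}      # IP address -> learned MAC address
requests_sent = 0   # how many ARP requests went on the wire

def resolve(ip):
    global requests_sent
    if ip not in arp_cache:          # cache miss: broadcast an ARP request
        requests_sent += 1
        arp_cache[ip] = known_hosts[ip]
    return arp_cache[ip]             # cache hit: no traffic generated

resolve("10.0.0.1")  # first lookup sends an ARP request
resolve("10.0.0.1")  # second lookup is answered from the cache
```

This is why, in the simulation trace, only the client's first packet toward the router triggers an ARP exchange; subsequent packets use the cached MAC address.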
Definitions **Equation:** - A statement indicating that the values of two mathematical expressions are equal, denoted by the "=" sign. **Formula:** - A structured mathematical expression. **Constituents of an Equation:** - Expressions may include: - **Numerical constants** - **Symbolic names** - **Mathematical operators** - **Functions** - **Conditional expressions** Easy Math Writing **2-3-4 Rule:** - Split sentences exceeding: - 2 lines, - 3 verbs, - 4 "long" sentences in a paragraph. **Use of Mnemonics:** - **s** for speed, - **v** for velocity, - **t** for time. Organizational Structure - Organize content into segments to facilitate comfortable reading from beginning to end. - Ensure segments are standalone with: - A definite start, - A definite end. - Segments should be represented linearly for clarity. ### QoE---Usability - **Usability (Ub)**: Defined as the **ease of use** with which network users can access the network and services. - Focus on **ergonomic** and **technological facilitation** to simplify user tasks. - Some design decisions negatively impact usability, e.g., **strict security** measures. - User-friendly choices include: - **WiFi** - **DHCP** #### Understanding Usability - **Usability Models** discussed in Sanjay Kumar Gupta's work emphasize: - **Ub**: Usability as ease of use. - **Ue**: Use effort, indicating the operational state of a system or subsystem. #### Network Reliability and Availability - **Network Components**: - Nodes and Links are susceptible to failures. - **Network Availability**: - Measured as **percent uptime** per defined period (year, month, week, day, hour). - Example: 24/7 operation; if a network is operational for 165 hours in a 168-hour week, availability is **98.21%**. #### Application Perspectives - Applications have varying availability needs: - **Real-time**: Video/Audio - **Commerce**: Non-repudiable transactions - **Non-real-time**: Email #### Key Comparisons - **Availability vs. 
Reliability**:
  - **Reliability**: The system's capability to complete its function accurately, with low error rates and high stability.
  - A system may be available but not necessarily reliable.
- **Availability vs. Capacity**:
  - A system becomes unavailable if it runs out of capacity (e.g., **ATM connection admission control**).
- **Availability vs. Redundancy**:
  - Redundancy is a means to achieve a desired level of availability, not an end goal.
- **Availability vs. Resiliency**:
  - Resiliency measures how well a network can withstand stress and recover from failures.
  - Evaluates the time taken for a system to rebound after failures.

### QoE---Disaster Recovery

- **Amat Victoria Curam**: Latin phrase meaning "Victory Loves Preparation."
- **Disaster Recovery Focus**:
  - Maximize overall IT function survivability against disasters while staying within budget.

#### Redundancy

- **Redundancy**:
  - Essential for disaster preparation, encompassing both **proactive prevention** and **reactive recovery**.
  - Backup facilities are crucial components.

#### Redundancy Allocation Model

- IT functions can be supported by various IT assets, including:
  - **Computing hardware**
  - **Communication links**
  - **IT personnel**
  - Additional infrastructure.
- A redundancy allocation model ensures that an IT function remains operational unless all selected solutions fail simultaneously. As long as one solution persists, the function remains active.

QoE---Specifying Requirements

- **Measurable**: Requirements must be clear and achievable.
- **Availability Metrics**:
  - **Uptime of 99.70%**: Corresponds to **30 minutes downtime** per week.
  - **Uptime of 99.95%**: Corresponds to **5 minutes downtime** per week.
- **Assessment of Availability**:
  - **Calendar Year Availability**:
    - Considerations for downtime on **weekdays vs weekends**.
    - Impact of **project deadlines**.
  - **Availability in Spurts**:
    - Types: **Staggered** vs **One-time**.
- **99.70% uptime** translates to:
  - **30 minutes per week**.
  - **10.80 seconds per hour**.
- Acceptability varies among users; some applications may tolerate more downtime.

QoE---Five Nines Availability

- **Five Nines (99.999%)**:
  - Represents the best-case scenario for availability.
  - Implies about **5 minutes downtime** per year.
- **Critical Questions for Managers**:
  - Is uptime needed **sometime or all the time?**
  - Is **repair time** included or excluded?
  - Can **hot-swaps** be performed during service upgrades?
- **Challenges to Achieving Five Nines**:
  - Hardware manufacturers guarantee 5 Nines, but:
    - Issues from **carrier/power outages**.
    - **Faulty software** in routers and switches.
    - Unexpected spikes in **bandwidth/server usage**.
    - **Configuration problems** and **human errors** (90% of failures).
    - **Security breaches** and software glitches.
- **Redundancy Requirements**:
  - Achieving **99.999% availability** may necessitate **triple redundancy**:
    - One operational, one in **hot standby**, and one in **maintenance**.

QoE---Cost of Downtime

- **Business Impact**:
  - **40% of companies** that shut down for three days fail within **36 months** (source: Contingency Planning and Management magazine).
- **Step-wise Approach to Measure Downtime Cost**:
  - **Identify Business Continuity Components**: People, Property, Systems, Data.
  - **Define What You Protect**: Core competencies in products, services, processes, or methodologies.
  - **Prioritize Business Functions**:
    - Essential functions for sustaining core competencies and related IT infrastructure.
    - **80/20 rule**: 80% of resources restore 20% of systems/applications/data.
  - **Classify Outage Types**: Branch, Regional, Data Center, National outages.
  - **Calculate Cost**:
    - Formula: Frequency x Duration x Hourly Cost = Lost Profits.
    - Example: **90 branch outages** per year, 1.5 hours each, costing **\$300/hour**:
      - **90 x 1.5 x 300 = \$40,500** in lost profits.
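The downtime-cost formula and the uptime percentages quoted in this section can be checked directly (the outage figures are the section's own example):

```python
# Frequency x Duration x Hourly Cost = Lost Profits (the section's example)
frequency = 90      # branch outages per year
duration = 1.5      # hours per outage
hourly_cost = 300   # dollars per hour
lost_profits = frequency * duration * hourly_cost   # 40,500 dollars

# Translating an uptime percentage into downtime per period
def downtime_seconds(uptime_pct, period_seconds):
    return (1 - uptime_pct / 100) * period_seconds

sec_per_hour = downtime_seconds(99.70, 3600)                # ~10.8 s per hour
min_per_week = downtime_seconds(99.70, 7 * 24 * 3600) / 60  # ~30.2 min per week
five_nines_min_per_year = downtime_seconds(99.999, 365 * 24 * 3600) / 60  # ~5.3
```

Running the arithmetic shows why 99.70% uptime is usually quoted per week (about half an hour) and "five nines" per year (about five minutes).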
QoE---MTBF AND MTTR

- **Availability** defined in terms of:
  - **MTBF** (Mean Time Between Failures) & **MTTR** (Mean Time to Repair)
- Key Metrics:
  - **MTBSO** (Mean Time Between Service Outages)
  - **MTTSO** (Mean Time to Recover from Service Outage)
- Typical Values:
  - **MTTF** (Mean Time to Failure): approximately **4000 hours** or **166.7 days**
  - Acceptable **MTTR**: typically **one hour**
- **Availability Formula**:
  - Availability = MTBF / (MTBF + MTTR)
  - Example Calculation: 4000 / (4000 + 1) ≈ 99.98% availability
- Importance:
  - **MTBF** and **MTTR** assess the frequency and duration of service outages.
  - Mean values must be supported with **variance** measurements.
- Difference between **MTTF** and **MTBF**:
  - **MTTF** assumes the system is replaced on failure; **MTBF** assumes the system is repaired.

QoE---Network Performance

- **Definition**:
  - A **composite metric** representing end-to-end network performance.
- Measurement Methods:
  - **Modeled**, **simulated**, and **measured**.
- Each network possesses unique characteristics influencing performance evaluation.

QoE---Optimum Network Utilization

- **Optimum Definition**:
  - Selection of the best element from available alternatives based on specific criteria.
- **Optimum Network Utilization**:
  - Percentage of **bandwidth capacity** used over a specific time period.
  - Characterized as a **time-varying phenomenon** (instantaneous, averaged, weighted).
- Goals and Constraints:
  - Target utilization is typically around **70%**; exceeding this threshold leads to **performance degradation**.
- **WAN vs. LAN**:
  - **WAN link utilization** is more critical than **LAN** due to cost implications (pay per packet).
  - **Compression**, **caching**, and **concatenation** techniques are employed to minimize **WAN utilization**.
- Observations on LANs:
  - Generally over-budgeted (e.g., **Fast Ethernet**).
  - Differences in switch types: **full-duplex** vs. **half-duplex** switches affect performance.
- User activity levels contribute to potential **exceeding utilization** in switch-to-switch communications. QoE---Throughput Notes - **Throughput Definition**: - Quantity of **error-free data transmitted per second**. - **Erroneous transmissions** are considered futile. - Ideally, throughput should match the **capacity** of the medium. - Deviations from ideal throughput indicate limitations in **media type**, **device**, and **network**. - **QoE---Throughput of Devices**: - Device throughput simulations and specifications are **vendor-specific**. - Types of device throughputs: - **Inter-networking devices** report throughput as: - **TCP/IP**: Measured in **packets per second**. - **ATM**: Measured in **cells per second**. - Packet sizes can vary from **53**, **64** to **1518 Bytes**. - **Example---CISCO Devices**: - Use of **traffic generators** and **traffic checkers** in tandem to measure throughput. - Smaller packets yield better **packets per second (pps)** rates. - Cisco claims a throughput of **400 million pps** for the **Cisco Catalyst 6500 switch**. - Claims of throughput may reflect actual **capacity** rather than true throughput. - **QoE---Application Layer Throughput**: - Application layer throughput is defined as: - **Application layer throughput = goodput + badput**. - **Goodput** vs **Badput**: - **Badput** includes retransmissions, headers, etc. - It is a fraction of packets that are **collided** or **lost**. - **Fc = C/N** (where Fc denotes some factor related to capacity and network). - **Factors Affecting Goodput**: - **End-to-end error rates**. - Functions of **protocols** (e.g., handshaking, windows, acknowledgments). - **Protocol parameters**: frame size, retransmission timers. - **pps rate** of networking devices. - Packet loss at networking devices. - **Workstation & Server Performance Factors**: - **Disk-access speed**. - **Disk-caching size**. - **Device driver performance**. 
- **Computer Bus Performance**: capacity and arbitration considerations.
- **Processor (CPU) Performance**.
- **Memory Performance**: access time for real and virtual memory.
- **Operating System Inefficiencies**.
- **Application Inefficiencies or Bugs**.
- **Connotations**:
  - Application layer throughput offers insights into **useful transmissions**.
  - Correlates resource allocation with **physical layer throughput**.

QoE---Accuracy
--------------

- **Definition of Accuracy**:
  - Data sent and received should be identical.
  - Defined as the ratio of error-free frames transmitted to the total number of frames transmitted.
- **Factors Affecting Accuracy**:
  - **Packet Reordering**: Occurs at routers, affecting data integrity.
  - **Power Surges**: E.g., lightning impulses on a 10 Mbps link.
  - **Impedance Mismatch**: Can lead to signal degradation.
  - **Poor Physical Connections**: Decrease transmission reliability.
  - **Failing Devices**: Hardware issues can introduce errors.
  - **Noise**: Electrical machinery can create interference.
- **WAN Links**: Impacted by **Bit Error Rate (BER)** and **Signal-to-Noise Ratio (SNR)**; BER typically ranges from 10\^-5 to 10\^-11.
- **LAN Specifications**: Erroneous frames are measured per 10\^6 Bytes.
- **Shared Ethernet**: Collisions are a primary cause of accuracy degradation.
  - **First 64 Bytes**: Collisions within the first 64 bytes are legal and produce runt frames.
  - **Acceptable Accuracy**: An error rate of roughly 0.1% of frames is generally acceptable.
  - **Late Collisions**: Considered illegal.
- **Accuracy Formula**:
  - Accuracy = \[(Real Value - Error) / Real Value\] \* 100
- **Frequency of Events**:
  - Plays a crucial role in overall accuracy.
  - **Ei**: Event i in the system.
  - **freq(Ei)**: Frequency of event i.
  - **real(Ei)**: Desirable (real) cost of event i.
  - **sim(Ei)**: Simulated (obtained) cost of event i.

QoE---Efficiency
----------------

- **Boiling Water Analogy**: Illustrates concepts related to efficiency.
- **Definition of Efficiency**: - **Application Layer Throughput**: Comprises both **Goodput** and **Badput**. - **Goodput**: Successful data transmission. - **Badput**: Includes retransmissions, headers, and the fraction of packets that are collided or lost. - **Fc = C/N**: Where C is collisions and N is total packets. - **Fc = L/N**: Where L is lost packets. - **Factors Affecting Efficiency**: - **Access Protocols**: - High user activity can reduce efficiency. - Ethernet becomes inefficient at high collision rates. - **Frame Size**: - Larger frames are beneficial for single users on WAN links. - **Serialization Delay**: - Occurs on WAN links, affecting fairness for real-time shorter frames in routers. - **Channel Efficiency**: - If capacity is scaled more slowly than throughput while keeping average response time constant, channel efficiency (utilization) will increase. - **Average Efficiency**: - **E(G)**: Represents average efficiency of network G. - **n**: Total nodes in a network. - **d(i,j)**: Shortest path between node i and neighboring node j. - **QoE (Quality of Experience)**: - Delay is often tolerated, but **jitter** is not. - **Delay**: - Critical for **voice** and **video applications**, especially **interactive** ones. - Other applications, like **Telnet remote echo**, require precise timing. - **Sources of Packet Delay**: - **Propagation Delay**: - Influenced by **media type** and **length**. - **Transmission Delay (Serialization)**: - Example: 1024 Bytes on **T1**. - **Switching Delay**: - Ranges from **5-20 microseconds** for a 64 Bytes frame. - **Router Delay**: - Related to **look-up**, **router architecture**, and **configuration**. - Impacted by software features optimizing packet forwarding (e.g., **NAT**, **IPSEC**, **QoS**, **ACL**). - **Queuing Delay**: - Dependent on **utilization**. - **Formula**: - Queue depth = Utilization / (1 - Utilization). - **Implications of Queuing Delay**: - Varies based on overall system **utilization**. 
- **Delay Variation (Jitter)**: - Amount of time the **average delay** varies. - **Voice**, **video**, and **audio** are sensitive to delay variation. - Balancing efficiency for **high-volume applications** versus **low-volume** is necessary. - **Concept of Jitter Buffer**: - Aims to smooth out jitter. - Acceptable variation should be **1-2%** of the total delay. - **Types of Jitter**: - **Delay Jitter**: - Measures the maximum difference in total delay among packets. - Assumes a perfectly periodic source. - Important for **interactive communication** (e.g., voice and video teleconferencing). - Determines maximum buffer size required at the destination. - **Rate Jitter**: - Measures the difference in packet delivery rates over time. - Analyzes minimal and maximal inter-arrival times. - Relevant for many **real-time applications** (e.g., video broadcasting). - Minor deviations in rate lead to slight quality deterioration. - **Jitter Measurement**: - Essential for ensuring optimal **QoE** in network applications. - **Reference**: - Kay, Rony. "Pragmatic network latency engineering fundamental facts and analysis." cPacket Networks, White Paper (2009): 1-31. QoE---Response Time - **Response Time**: A relative phenomenon defined as the time interval between a request for network service and the corresponding response. - **Measurement Points Locations**: - Referenced in Tim R. Norton's work, "End-To-End Response Time: Where to Measure?" (1999). - **Measurement of Response Time**: - Key literature includes: - Reinder J., Bril, "System Architecture and Networking," TU/e Informatica. - Sjodin, Mikael, and Hans Hansson, "Improved response-time analysis calculations," Real-Time Systems Symposium, 1998. - **Ceiling Function**: - Represents the maximum number of pre-emptions caused by higher priority processes. QoE---Security - **Threat**: Defined as the combination of **Capability** and **Intention**. 
- **Definition of Security**: - Protection of information systems from threats, which include: - **Hardware** - **Software** - Information contained within these systems. - **Avoidance Goals**: - Prevent **Disruption** and **Misdirection** of services provided by the systems. - **Implementation Measures**: - Involves controlling physical access to hardware. - Protection against harm through: - **Network Access** - **Data** - **Code Injection** - **Trusted Computing Base**: - Referenced in the **Rainbow Series** (Orange Book). - Comprises all hardware, firmware, and software components critical to security. - Vulnerabilities within any component can compromise the entire system's security. - **Bell-Lapadula Model**: - Framework for understanding security in information systems. - **Users as Subjects**: Individuals or processes initiating actions. - **Predicates**: Devices and data serving as **Objects** of interaction. - **Process Algebra**: Provides the action (verb) performed by subjects over predicates, establishing the relationship between users and data. QoE---Reconnaissance Attacks - **Definition**: - **Reconnaissance**: A type of computer attack where an intruder engages with a targeted system to gather information about vulnerabilities. - **Types of Reconnaissance**: - **Active Reconnaissance**: Direct interaction with the target system to collect information. - **Port Scanning**: Identifying open ports on a system to find potential vulnerabilities. - **Passive Reconnaissance**: Gathering information without direct interaction, often through publicly available data. - **Sniffing**: Monitoring network traffic to capture data packets. - **War Driving**: Searching for Wi-Fi networks while in motion. - **War Dialing**: Dialing a range of phone numbers to find modems or vulnerable systems. - **Targeted Threat Index (TTI)**: - Proposed by Hardy et al. (2014) to characterize and quantify politically motivated targeted malware. 
- **TTI Factors**: - **Vulnerability of System**: Influenced by several aspects: - **Target Feature Set**: Characteristics of the target system. - **Attacker Methods**: Techniques employed by the attacker. - **Attacker Aggressiveness**: The intensity and persistence of the attack. - **Formula**: TTI = Method \* Implementation. QoE---Security Requirements - **Definition**: - Encompasses all activities, actions, and hardware/software necessary to ensure security. - **Security Principles**: - **Confidentiality**: Ensuring information is not disclosed to unauthorized individuals. - **Integrity**: Maintaining the accuracy and consistency of data. - **Authorization**: Granting access rights to users. - **Authenticity**: Verifying the identity of users and systems. - **Availability**: Ensuring systems are accessible when needed. - **Encryption**: Protecting data by converting it into a secure format. - **Assessing Security Levels**: - Reference: Burchett (2011) on quantifying computer network security. - **Common Vulnerability Scoring System (CVSS)**: Provides a quantitative score for assessing computer security vulnerabilities. QoE---Manageability - **Definition**: - The human effort required to maintain a system at satisfactory operational levels, including: - **Deployment** - **Configuration** - **Upgrading** - **Tuning** - **Backup** - **Failure Recovery** - **Assessing Manageability**: - Candea (2008) discusses quantifying system manageability. - **Manageability Metric**: - Efficiency of management operations measured by the time (Timei) to complete tasks (Taski). - Complexity approximated by the number of discrete steps (Stepsi) needed to complete a task. - **Commentary**: - Manageability inversely related to the duration of management tasks and the complexity involved. - Fewer steps lead to lower exposed complexity, enhancing manageability. - Faster task completion reduces the likelihood of issues. 
- Less management required correlates with easier system management and better overall performance. QoE---DoS Attack Notes **Definition** - **DoS Attack**: An attempt to render a machine or network resource unavailable to its intended users. - **Temporarily**: Access is disrupted for a short duration. - **Indefinitely**: Access may be blocked for an extended period. **Implementation** - **Methodology**: - **Transmit a large number of packets**: Overwhelm the target with excessive traffic. - **TCP SYN attack**: Exploits the TCP handshake process by sending numerous SYN requests. - **Ping attack**: Uses ICMP echo requests to flood the target. - **Server crashing attack**: Inflicts a heavy computational load on the server, leading to failure. **A Simple Attack Analysis** - **Source**: He, Changhua. *Analysis of security protocols for wireless networks*. PhD Diss. Stanford University, 2005. - **Attack Type**: - **TCP SYN flooding DoS attacks**: Utilizes 'n' packets for the attack. - **Countermeasure**: - **Random drop queue 'Q'**: - **Q**: Represents the depth of the queue for managing incoming packets. This method helps mitigate the impact of the attack. **Attack Success Probability** - **Formula**: - **P = 1 − (1 − 1/Q)ⁿ** - **P**: Probability of a successful attack. - **n**: Number of packets used in the attack. - **Q**: Queue depth. **Attack Failure Probability** - **Calculation**: - **Failure Probability**: 1 - P (where P is defined above).
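The success and failure probabilities above can be evaluated directly; a short sketch (the values of n and Q are example choices, not from the source):

```python
# P = 1 - (1 - 1/Q)^n: probability that an attacker's n SYN packets defeat
# a random-drop queue of depth Q (formula quoted above from He, 2005).
def attack_success(n, Q):
    return 1 - (1 - 1 / Q) ** n

p = attack_success(n=100, Q=100)        # ~0.634 for 100 packets, depth 100
attack_failure = 1 - p                  # the complementary probability
p_deep = attack_success(n=100, Q=1000)  # ~0.095: deeper queue, lower success
```

Increasing the queue depth Q (or limiting the attacker's packet budget n) drives the success probability down, which is exactly the rationale for the random-drop countermeasure.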