Data Structures with Python Notes PDF

Summary

These notes provide an introduction to various data structures, along with their characteristics and uses. The document discusses linear data structures such as arrays, linked lists, stacks, and queues, as well as non-linear ones such as trees and graphs. It also touches on hash tables and heaps, highlighting their different applications.

Full Transcript


(S.Y. B.Sc.) CDS-233: Data Structure Using Python

Unit 1: Introduction to Data Structures

1.1 Introduction
Data structures are fundamental constructs in computer science and programming that allow you to organize and store data efficiently. They play a crucial role in solving various computational problems and optimizing the performance of algorithms. Data structures are like containers or organizational schemes for data, and they determine how data can be accessed, manipulated, and stored. Here's a brief introduction to some common data structures:

1. Arrays: Arrays are one of the simplest data structures and consist of a collection of elements, each identified by an index or a key. Elements are stored in contiguous memory locations, making it easy to access them by their index. However, their size is fixed when created, which can be limiting.
2. Linked Lists: A linked list is a data structure made up of nodes, where each node contains data and a reference (or link) to the next node in the sequence. Linked lists can be easily resized and are more flexible than arrays when it comes to dynamic memory allocation.
3. Stacks: Stacks are linear data structures that follow the Last-In-First-Out (LIFO) principle. You can think of them as a collection of items arranged like a stack of plates, where the last item added is the first to be removed. They are often used for managing function calls and keeping track of state in algorithms.
4. Queues: Queues are also linear data structures, but they follow the First-In-First-Out (FIFO) principle. In a queue, the first item added is the first to be removed. They are commonly used for tasks such as task scheduling and managing resources.
5. Trees: Trees are hierarchical data structures consisting of nodes connected by edges. They have a root node at the top and leaf nodes at the bottom. Common types of trees include binary trees, binary search trees, and AVL trees. Trees are used for various purposes, including organizing data, searching, and sorting.
6. Graphs: Graphs are a more general data structure composed of nodes and edges. They can represent complex relationships between data points and are used in various applications like social networks, transportation systems, and computer networks.
7. Hash Tables: Hash tables, or dictionaries, are data structures that use a hash function to map keys to values. They provide efficient key-value pair retrieval and are often used for implementing associative arrays, caches, and databases.
8. Heaps: Heaps are specialized trees used for prioritizing and managing data with a priority order. Binary heaps, for example, are used in algorithms like heap sort and in data structures like priority queues.

1.1.1 Concept of Data Structure
A data structure is a fundamental concept in computer science and programming that refers to the way data is organized and stored in memory or on disk. It provides a means to efficiently store, access, and manipulate data. Data structures are essential for solving a wide range of computational problems and optimizing the use of computer resources such as memory and processing power. Here are some key aspects of the concept of data structures:

1. Organization of Data: Data structures define how data elements are organized and stored. They can be thought of as containers that hold data items and provide operations to interact with them.
2. Data Abstraction: Data abstraction is the reduction of a particular body of data to a simplified representation of the whole. Abstraction, in general, is the process of removing characteristics from something to reduce it to a set of essential elements. Data structures provide a high-level interface for accessing and modifying data, allowing programmers to work with data in a more intuitive and efficient manner.
3. Efficiency: Different data structures are designed to be efficient for specific operations. For example, some are optimized for quick insertion and deletion, while others are optimized for fast retrieval of data. The choice of data structure depends on the specific requirements of the problem you are trying to solve.
4. Common Data Structures: There are various types of data structures, each with its own characteristics and use cases. Some common data structures include arrays, linked lists, stacks, queues, trees, graphs, and hash tables.
- Arrays: Arrays are collections of elements, each identified by an index or a key. They have a fixed size and provide fast access to elements using their indices.
- Linked Lists: Linked lists are chains of nodes where each node contains data and a reference (or link) to the next node in the sequence. They are dynamic in size and are efficient for insertions and deletions.
- Stacks: Stacks are a type of linear data structure that follows the Last-In-First-Out (LIFO) principle. Elements can be pushed onto the stack and popped from the stack.
- Queues: Queues are another linear data structure that follows the First-In-First-Out (FIFO) principle. Elements are added to the rear and removed from the front of the queue.
- Trees: Trees are hierarchical data structures that consist of nodes connected by edges. Common types of trees include binary trees and balanced trees like AVL trees and Red-Black trees.
- Graphs: Graphs are non-linear data structures that consist of nodes (vertices) and edges that connect the nodes. They are used to represent complex relationships between data points.
- Hash Tables: Hash tables use a hash function to map keys to indices in an array, allowing for efficient key-value pair storage and retrieval.
5. Algorithm Design: The choice of data structure often influences the design of algorithms. Different algorithms may be more or less efficient depending on the data structure used.
6. Memory Management: Understanding data structures is crucial for efficient memory management in computer programs. Choosing the right data structure can help minimize memory usage and improve program performance.

1.1.2 Data Type
In computer programming, a data type is a classification or categorization that specifies which type of value a variable can hold. Data types define the nature of data that can be stored and manipulated within a program, and they determine the operations that can be performed on that data. Common data types in programming include:

1. Integer (int): Represents whole numbers without a decimal point, such as 0, 1, 2, 3, ..., 10, and so on.
2. Floating-point (float or double): Represents numbers with a decimal point, including fractions and real numbers. Examples include 3.14, -0.5, or 2.71828.
3. Character (char): Represents a single character, like 'A', 'b', or '7'.
4. String (str): Represents a sequence of characters, such as "Hello, World!" or "12345".
5. Boolean (bool): Represents a binary value, either true or false, used for logical operations.
6. Array: Represents a collection of elements of the same data type, organized in a linear or multi-dimensional structure.
7. Struct (or Structure): Allows you to define a custom data type composed of multiple fields of different data types. Structs group related data together.
8. Enumeration (enum): Defines a set of named values that represent distinct elements within a group. Enums are often used to make code more readable.
9. Object (in object-oriented programming): Represents an instance of a class, which is a blueprint for creating objects. Objects can have both data (attributes) and methods (functions).
10. Pointer: A pointer is a variable that stores the memory address of another variable as its value.
11. Void: Represents the absence of a data type, or the return type of a function that does not return any value.

1.1.2 Data Object
A data object is a region of storage that contains a value or group of values. It is a general term used in computer science and information technology to refer to any entity or structure that can hold, represent, or store data. Data objects can take various forms and serve different purposes, depending on the context and the specific programming or data-management paradigm being used. Here are some common examples of data objects:

1. Variables: In programming, variables are a fundamental type of data object. A variable is an area of memory that stores one item of data, such as a number or a character, and is used to store and manipulate data during program execution.
2. Data Structures: Data structures like arrays, lists, stacks, queues, trees, and graphs are data objects that organize and store data in a specific way to facilitate efficient retrieval, manipulation, and storage.
3. Objects (Object-Oriented Programming): In object-oriented programming, objects are instances of classes that encapsulate both data (attributes) and methods (functions) that operate on that data. Objects are data objects with associated behavior.
4. Database Records: In database management systems, a data object can refer to a record or a row in a database table, which represents a single unit of structured data.
5. Files and Streams: In file systems and I/O operations, files and streams are data objects used to store and manipulate data on disk or in memory.
6. JSON Objects: In data interchange formats like JSON (JavaScript Object Notation), data objects are represented as key-value pairs, making them easy to serialize and deserialize for data exchange between systems.
7. XML Elements: In XML (eXtensible Markup Language), data objects are represented as elements and attributes within a hierarchical structure for storing and transporting structured data.
8. Graph Nodes and Edges: In graph theory and network data structures, nodes and edges are data objects used to represent and model relationships between entities.
9. Images and Multimedia: In multimedia applications, images, audio, and video files are data objects that contain visual or auditory data.
10. Sensors and Sensor Readings: In the context of IoT (Internet of Things) and sensor networks, data objects can represent readings from sensors, such as temperature sensors or motion sensors.

1.1.2 Abstract Data Type (ADT)
An ADT, or Abstract Data Type, is a high-level description of a data structure that specifies the type of data it can store and the operations that can be performed on it, without specifying the actual implementation. An ADT for an array typically includes the following operations:

1. Initialization: Create an empty array with a specified size.
2. Insertion: Add an element to the array at a specific index.
3. Deletion: Remove an element from the array at a specific index.
4. Access: Retrieve the element at a specific index in the array.
5. Update: Modify the element at a specific index in the array.
6. Size: Return the number of elements currently in the array.

1.1.2 Data Structure
Data structures are fundamental concepts in computer science and programming that are used to organize and store data efficiently. Here are some basic terms and concepts related to data structures:

1. Data Structure: A data structure is a way of organizing and storing data to perform operations on it efficiently. It defines the format of the data and the operations that can be performed on it.
2. Element: An element is an individual data item within a data structure. Elements can be of various data types, such as integers, strings, or custom objects.
3. Collection: A collection is a group of elements stored in a data structure. Collections can be homogeneous (all elements are of the same type) or heterogeneous (elements can be of different types).
4. Array: An array is a linear data structure that stores a fixed-size collection of elements of the same data type. Elements in an array are accessed by their index.
5. Linked List: A linked list is a linear data structure in which each element (node) contains a value and a reference (link) to the next element in the list. Linked lists can be singly linked (each node points to the next node) or doubly linked (each node points to both the next and previous nodes).
6. Stack: A stack is a linear data structure that follows the Last-In-First-Out (LIFO) principle. It supports operations like push (to add an element) and pop (to remove the top element).
7. Queue: A queue is a linear data structure that follows the First-In-First-Out (FIFO) principle. It supports operations like enqueue (to add an element to the back) and dequeue (to remove an element from the front).
8. Tree: A tree is a hierarchical data structure that consists of nodes connected by edges. It has a root node at the top and child nodes branching down from the root. Common types of trees include binary trees, binary search trees, and AVL trees.
9. Graph: A graph is a collection of nodes (vertices) and edges that connect pairs of nodes. Graphs can be directed (edges have a direction) or undirected (edges have no direction).
10. Hash Table: A hash table is a data structure that uses a hash function to map keys to values. It provides efficient key-value pair storage and retrieval.
11. Sorting: Sorting is the process of arranging elements in a specific order (e.g., ascending or descending). Common sorting algorithms include bubble sort, selection sort, merge sort, and quicksort.
12. Searching: Searching is the process of finding a specific element within a data structure. Common searching algorithms include linear search and binary search.
13. Time Complexity: Time complexity measures the amount of time an algorithm takes to execute as a function of the input size. It helps evaluate the efficiency of data structures and algorithms.
14. Space Complexity: Space complexity measures the amount of memory space an algorithm uses as a function of the input size. It helps evaluate the memory efficiency of data structures and algorithms.

1.1.3 Need of Data Structure
1. Data structure modification is easy.
2. It requires less time.
3. It saves storage memory space.
4. Data representation is easy.
5. It gives easy access to large databases.

Data structures are fundamental components of computer science and play a crucial role in organizing, storing, and manipulating data efficiently. They are essential for several reasons:

1. Data Organization: Data structures provide a way to organize and structure data in memory or on storage devices. They define how data is stored and how it can be accessed, making it easier to manage and work with large sets of data.
2. Efficiency: Different data structures are optimized for different operations. For example, arrays are efficient for random access, while linked lists are better for insertions and deletions. Choosing the right data structure for a specific task can significantly improve the efficiency of algorithms and reduce time and space complexity.
3. Algorithm Design: Data structures are closely tied to algorithms. Many algorithms are designed around specific data structures to solve particular problems efficiently. For instance, sorting algorithms can be highly optimized when using data structures like heaps or binary search trees.
4. Resource Management: Efficient use of memory is critical in computer programming. Data structures allow you to allocate and manage memory effectively, reducing wastage and preventing memory leaks.
5. Abstraction: Data structures provide abstractions that simplify complex operations. For example, a stack abstracts the concept of Last-In-First-Out (LIFO) behavior, making it easy to implement operations like undo functionality in applications.
6. Real-world Modeling: Data structures can be used to model real-world objects and relationships. For instance, graphs can represent networks, social connections, or dependencies between elements in a system.
7. Data Retrieval: Data structures like hash tables provide fast data retrieval based on keys. This is essential in scenarios like database indexing, caching, and dictionary data structures.
8. Parallel and Distributed Computing: In parallel and distributed computing environments, selecting the appropriate data structure is crucial for efficient communication and synchronization among processes or nodes.
9. Data Serialization: When data needs to be stored on disk or transmitted over a network, data structures play a role in serializing and deserializing data, ensuring that it can be efficiently stored and reconstructed.
10. Code Readability and Maintainability: Well-chosen data structures can make code more readable and maintainable, and reduce the chances of errors.
11. Problem Solving: Data structures are fundamental to solving a wide range of computer science and programming problems. Understanding how to use them effectively is essential for software developers and computer scientists.

1.1.4 Types of Data Structure
Linear data structure: A data structure in which data elements are arranged sequentially or linearly, where each element is attached to its previous and next adjacent elements, is called a linear data structure. Examples of linear data structures are arrays, stacks, queues, and linked lists.
Static data structure: A static data structure has a fixed memory size. It is easier to access the elements in a static data structure. An example of this data structure is an array.
Dynamic data structure: In a dynamic data structure, the size is not fixed. It can be updated during run time, which may be considered efficient with respect to the memory (space) complexity of the code. Examples of this data structure are queues and stacks.
Non-linear data structure: Data structures where data elements are not placed sequentially or linearly are called non-linear data structures. In a non-linear data structure, we cannot traverse all the elements in a single run. Examples of non-linear data structures are trees and graphs.

1.2 Algorithm Analysis
Algorithm analysis in data structures involves studying and evaluating the efficiency and performance characteristics of algorithms when they are applied to specific data structures. This analysis helps in making informed decisions about which algorithm and data structure to use for a particular problem, taking into account factors like time complexity, space complexity, and the expected workload.

1.2.1 Here are some key aspects of algorithm analysis in data structures:
Space Complexity: Space complexity describes how much memory an algorithm consumes based on the input size. It is also expressed using Big O notation (e.g., O(n), O(1)) and helps in assessing the algorithm's memory usage. Low space complexity is desirable in many applications, especially when dealing with limited memory resources.
Time Complexity: Time complexity measures how the running time of an algorithm grows with respect to the size of the input data. It is usually expressed using Big O notation (e.g., O(n), O(log n), O(n^2)) and provides an upper bound on the algorithm's execution time. Analysing time complexity helps in understanding how efficient an algorithm is and whether it is suitable for a specific problem size.

1.2.2 Best-case, Worst-case, Average-case Analysis
Best-case, worst-case, and average-case analysis are common methods used in computer science and algorithm analysis to evaluate the performance of algorithms. These analyses help us understand how an algorithm behaves under different input scenarios and provide insights into its efficiency and behavior.

1. Best-Case Analysis:
- Best-case analysis refers to the scenario in which an algorithm performs at its optimal or most efficient level.
- It assumes that the input data is specifically chosen to make the algorithm perform as well as possible.
- Best-case analysis is often used to establish a lower bound on the algorithm's performance.
- In many cases, the best-case scenario is theoretical and may not occur in practice.
Example: In a sorting algorithm like QuickSort, the best case occurs when each pivot splits the data into two nearly equal halves, resulting in O(n log n) execution time.
2. Worst-Case Analysis:
- Worst-case analysis considers the scenario in which an algorithm performs at its least efficient level.
- It assumes that the input data is chosen to make the algorithm take the longest time to complete.
- Worst-case analysis is crucial in practice because it guarantees that the algorithm will never perform worse than the worst-case scenario.
- It helps in determining the upper bound on an algorithm's time complexity.
Example: In QuickSort with a naive pivot choice (such as always picking the first element), the worst case occurs when the input data is already sorted or sorted in reverse order, causing the algorithm to take O(n^2) time.
3. Average-Case Analysis:
- Average-case analysis takes into account the expected or average performance of an algorithm when the input data is randomly or probabilistically chosen.
- It provides a more realistic view of how an algorithm will behave on typical inputs.
- To perform an average-case analysis, you often need to consider the probability distribution of the input data.
Example: In a searching algorithm like binary search, the average-case analysis considers that each element in the array has an equal probability of being the target value, resulting in an expected time complexity of O(log n).
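To make the best-, worst-, and average-case idea concrete, here is a minimal Python sketch (not part of the original notes; the function name and sample data are illustrative) that counts the comparisons a linear search performs on different inputs:

```python
# Minimal sketch: counting comparisons in a linear search to illustrate
# best-, worst-, and average-case behaviour (illustrative names only).

def linear_search_comparisons(items, target):
    """Return (index, comparisons); index is -1 if target is not found."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = [10, 20, 30, 40, 50]

print(linear_search_comparisons(data, 10))   # best case: 1 comparison
print(linear_search_comparisons(data, 50))   # worst case (present): n comparisons
print(linear_search_comparisons(data, 99))   # worst case (absent): n comparisons
# On average, a successful search examines about (n + 1) / 2 elements, i.e. O(n).
```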
1.2.3 Asymptotic Notation
Asymptotic notation is a mathematical notation used in computer science and mathematics to describe the behavior of functions and algorithms as their input size grows to infinity. It provides a way to analyze the efficiency and performance of algorithms and to make comparisons between them without getting into the specifics of constant factors or low-level details. The most commonly used asymptotic notations are:

1. Big O Notation (O):
- Big O notation, often referred to as "order of", describes the upper bound of an algorithm's time complexity in the worst-case scenario. It provides an upper limit on the growth rate of a function.
- Example: O(n) represents linear time complexity, indicating that an algorithm's running time grows linearly with the size of the input.
2. Omega Notation (Ω):
- Omega notation represents the lower bound of an algorithm's time complexity in the best-case scenario. It provides a lower limit on the growth rate of a function.
- Example: Ω(n^2) represents quadratic time complexity, indicating that an algorithm's running time is at least proportional to the square of the input size.
3. Theta Notation (Θ):
- Theta notation combines both upper and lower bounds, providing a tight bound on the growth rate of an algorithm. It is used to describe the exact behavior of an algorithm.
- Example: Θ(n) represents linear time complexity and indicates that an algorithm's running time grows in direct proportion to the input size.

Unit 2: Array as a Data Structure

2.1 ADT of Array
An Abstract Data Type (ADT) for an array is a conceptual way of defining the behavior and operations associated with an array data structure without specifying the particular implementation details. It defines a set of operations that can be performed on an array and their expected behavior. The most common operations associated with an array ADT include:

1. Create: Creating an array of a specified size.
2. Insert: Inserting an element at a specific index in the array.
3. Delete: Deleting an element from a specific index in the array.
4. Access/Retrieve: Accessing or retrieving an element at a specific index in the array.
5. Update/Modify: Modifying an element at a specific index in the array.
6. Search: Searching for a specific element in the array and returning its index, or indicating that it is not found.
7. Size: Determining the current number of elements in the array.
8. IsEmpty: Checking if the array is empty.
9. IsFull: Checking if the array is full (for fixed-size arrays).
10. Resize: Changing the size of the array, typically to make it larger or smaller.
11. Iterate/Traverse: Iterating through all elements in the array.

Abstract Data Type (ADT) Operations
ADT, or Abstract Data Type, refers to a high-level description of a data structure that defines a set of operations that can be performed on the data and the behavior of those operations. These operations are typically defined without specifying how they are implemented, allowing for various possible implementations depending on the programming language and requirements. Here are some basic operations commonly associated with ADTs:

1. Initialization: This operation is used to create and initialize an instance of the ADT. It sets up the initial state and memory allocation, if necessary, for the data structure.
2. Insertion: Insertion operations allow you to add new elements or data to the ADT. The specific way of inserting data can vary depending on the ADT. For example, in a list ADT, you might have operations like `append`, `prepend`, or `insertAtIndex`.
3. Deletion: Deletion operations remove elements or data from the ADT. Similar to insertion, the way data is deleted depends on the ADT. For example, a list ADT might have operations like `remove`, `removeAtIndex`, or `pop`.
4. Search: The search operation allows you to find and retrieve specific data or elements from the ADT. It may return the data or a reference to the data if found, or indicate that the data is not present.
5. Traversal: Traversal operations enable you to iterate through all the elements or data in the ADT. This is often used to perform operations on each element in the data structure, like printing, counting, or applying a function.
6. Access: Access operations retrieve data from the ADT based on specific criteria, such as accessing an element at a particular index in a list or fetching the value associated with a given key in a dictionary ADT.
7. Size/Length: This operation returns the number of elements or the current size of the ADT. For example, in a list, you might have a `size` or `length` operation.
8. Empty/Full: These operations indicate whether the ADT is empty (contains no elements) or full (has reached its maximum capacity or limit).
9. Clear: The clear operation removes all elements from the ADT, effectively resetting it to an empty state.
10. Update/Modify: Update or modification operations allow you to change the value of an existing element within the ADT.
11. Comparison: Comparison operations can be used to compare two ADT instances for equality or to determine the order of elements, if applicable (e.g., comparing elements in a sorted ADT).
12. Copy/Clone: These operations create a copy or clone of the ADT, possibly with the same or different data, depending on the use case.

2.1.1 Linear Searching: Linear search is a sequential searching algorithm where we start from one end and check every element of the list until the desired element is found. It is the simplest searching algorithm.
Binary Searching: Binary search is an efficient algorithm for finding an item in a sorted list of items. It works by repeatedly dividing in half the portion of the list that could contain the item, until the possible locations are narrowed down to just one.
Sentinel Search: Sentinel linear search is a type of linear search where the element to be searched is placed in the last position (as a sentinel), so all the indices can be checked for the presence of the element without testing for the index-out-of-bounds case on every iteration.
Analysis and comparison: Comparative analysis is the process of comparing items to one another and distinguishing their similarities and differences.

2.2 Sorting
Sorting in data structures is the process of rearranging the data elements of an array or list in order to make them easier to search and retrieve. By sorting, the complexity of searching for a particular item is reduced.
Terminology:
- Internal Sorting: When all the data is placed in main (internal) memory, the sorting is called internal sorting.
- External Sorting: Used when the data does not fit in main memory. One example is the external merge sort algorithm, a k-way merge algorithm: it sorts chunks that each fit in RAM and then merges the sorted chunks together with a k-way merge.
- In-place Sorting: An in-place sorting algorithm uses constant extra space for producing the output (it modifies the given array only). It sorts the list only by modifying the order of the elements within the list.

2.2.1 Comparison-Based Sorting
In comparison-based sorting, elements of an array are compared with each other to find the sorted array.

Bubble Sort: Bubble sort is the simplest sorting algorithm; it works by repeatedly swapping adjacent elements if they are in the wrong order. This algorithm is not suitable for large data sets, as its average-case and worst-case time complexity is quite high.
Insertion Sort: Insertion sort is a simple sorting algorithm that works similarly to the way you sort playing cards in your hands. The array is virtually split into a sorted and an unsorted part. Values from the unsorted part are picked and placed at the correct position in the sorted part.
Selection Sort: Selection sort is a simple comparison-based sorting algorithm. It places one element in its final position in each iteration: you select the smallest element in the unsorted part of the array and move it to the front by swapping it with the first unsorted element.
Quick Sort: Quicksort is a fast sorting algorithm that works by splitting a large array of data into smaller sub-arrays. Each step partitions the input around a pivot element and then sorts the two partitions recursively.
Merge Sort: Merge sort is a sorting algorithm that works by dividing an array into smaller subarrays, sorting each subarray, and then merging the sorted subarrays back together to form the final sorted array.

2.2.2 Non-Comparison-Based Sorting
Sorting algorithms that are not based on comparisons between elements.
Radix Sort: Radix sort is an integer sorting algorithm that sorts data with integer keys by grouping the keys by individual digits that share the same significant position and value (place value).
Counting Sort: Counting sort is a non-comparison-based sorting algorithm that works well when there is a limited range of input values. It is particularly efficient when the range of input values is small compared to the number of elements to be sorted.

2.2.3 Comparison of Sorting Methods
Sorting is a fundamental operation in computer science and is used to arrange a collection of elements in a specific order, such as ascending or descending. There are various sorting algorithms, each with its own strengths and weaknesses. Here's a comparison of some common sorting methods:

1. Bubble Sort:
- Simple and easy to understand.
- Inefficient for large datasets, with a time complexity of O(n^2) in the worst and average cases.
- Suitable for small datasets or nearly sorted data.
2. Selection Sort:
- Also simple and intuitive.
- Inefficient for large datasets, with a time complexity of O(n^2) in all cases.
- Performs poorly even on already partially sorted data.
3. Insertion Sort:
- Efficient for small datasets and almost sorted data.
- Performs well with small, partially sorted datasets.
- Time complexity of O(n^2) in the worst case, but can be faster for small n.
4. Merge Sort:
- A comparison-based, stable sorting algorithm.
- Efficient and guarantees O(n log n) time complexity in all cases.
- Requires additional memory for the merging step, making it less suitable for large datasets with limited memory.
5. Quick Sort:
- Often faster in practice than other O(n log n) algorithms.
- Not stable, meaning the relative order of equal elements may change.
- Has good average-case performance but can perform poorly on already sorted data (with naive pivot selection).
6. Heap Sort:
- Efficient and in-place sorting algorithm.
- Guarantees O(n log n) time complexity in all cases.
- Requires no additional memory and is suitable for large datasets.
- Not as cache-friendly as some other algorithms.
7. Radix Sort:
- Non-comparative sorting algorithm suitable for integers or fixed-length strings.
- Time complexity depends on the number of digits or characters in the data, often linear O(nk), where k is the number of digits/characters.
- Requires additional space for buckets.
8. Counting Sort:
- Non-comparative sorting algorithm for integers within a known range.
- Extremely fast for small integer ranges and can achieve linear time complexity O(n + k), where k is the size of the value range.
- Requires additional space for the counting array.
9. Bucket Sort:
- Non-comparative sorting algorithm that distributes elements into buckets and sorts each bucket.
- Efficient for uniformly distributed data.
- Time complexity depends on the number of buckets and the sorting algorithm used for individual buckets.

Unit 3: Linked List

3.1 List as a Data Structure
A list is a common data structure used in computer science and programming to store a collection of elements. Lists can be implemented in various programming languages and come in different forms, but they typically share some common characteristics:

1. Ordered: Lists maintain the order of elements, which means that the elements are stored in a specific sequence, and you can access them in that order.
2. Mutable: In most programming languages, lists are mutable, which means you can change their content by adding, removing, or modifying elements.
3. Dynamic Size: Lists can grow or shrink in size as needed, allowing you to add or remove elements without needing to specify a fixed size in advance.
4. Heterogeneous: Lists can contain elements of different data types, although many programming languages also allow you to create lists of elements of a specific data type.

Here are some examples of how lists are implemented in different programming languages:
Python: In Python, lists are created using square brackets `[]` and can hold elements of different data types.
JavaScript: In JavaScript, arrays are used to create lists. Arrays can also hold elements of different data types.
Java: In Java, you can create lists using the `ArrayList` class from the Java Collections Framework. You need to specify the data type of the elements when declaring the list.
C++: In C++, you can use arrays or the `std::vector` container to create lists.

3.2 Dynamic Implementation of Linked Lists
A linked list is called a dynamic data structure because it can be used with a data collection that grows and shrinks during program execution. A major advantage of using a linked list over an array is in implementing other data structures such as stacks and queues.

3.3 Types of Linked Lists
There are mainly three types of linked lists: singly linked lists, doubly linked lists, and circular linked lists.
1. Singly linked list: In a singly linked list, each node contains a reference to the next node in the sequence. Traversing a singly linked list is done in a forward direction.
2. Doubly linked list: In a doubly linked list, each node contains references to both the next and previous nodes. This allows for traversal in both forward and backward directions, but it requires additional memory for the backward reference.
3. Circular linked list: In a circular linked list, the last node points back to the head node, creating a circular structure. It can be either singly or doubly linked.

3.4 Operations on Linked Lists
Create - create a linked list (of a specified size).
Traverse - access each element of the linked list.
Insert - add a new element to the linked list.
Deletion - remove existing elements.
Search - find a node in the linked list.
Reverse - reverse the direction of the links between nodes.
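These operations can be illustrated with a minimal Python sketch of a singly linked list (not part of the original notes; the class and method names are illustrative choices):

```python
# Minimal singly linked list sketch (illustrative names, not from the notes).

class Node:
    def __init__(self, data):
        self.data = data      # value stored in this node
        self.next = None      # reference to the next node (None marks the end)

class SinglyLinkedList:
    def __init__(self):
        self.head = None      # create: an empty list

    def insert_at_front(self, data):
        """Insert a new node before the current head (O(1))."""
        node = Node(data)
        node.next = self.head
        self.head = node

    def traverse(self):
        """Visit every node and collect its data (O(n))."""
        values, current = [], self.head
        while current:
            values.append(current.data)
            current = current.next
        return values

    def search(self, target):
        """Return True if target occurs in the list (linear search)."""
        current = self.head
        while current:
            if current.data == target:
                return True
            current = current.next
        return False

lst = SinglyLinkedList()
for value in (3, 2, 1):
    lst.insert_at_front(value)
print(lst.traverse())     # [1, 2, 3]
print(lst.search(2))      # True
```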
3.5 Applications of Linked Lists
Polynomial manipulation: Using a linked list, we can represent polynomials and perform various operations on them, such as adding, subtracting, multiplying, and dividing them.

Unit 4: Stack

4.1 Introduction to Stacks
A stack is a linear data structure that follows a particular order in which operations are performed. The order may be described as LIFO (Last In, First Out) or FILO (First In, Last Out). LIFO implies that the element that is inserted last comes out first, and FILO implies that the element that is inserted first comes out last.

4.2 Representation - Static & Dynamic
Static data structure: In a static data structure, the size of the structure is fixed. The content of the data structure can be modified, but without changing the memory space allocated to it.
Dynamic data structure: In a dynamic data structure, the size of the structure is not fixed and can be modified during the operations performed on it. Dynamic data structures are designed to facilitate changes to the data structure at run time.

4.3 Operations
init() - Initialize: create an empty stack.
push() - Push: adds an element to the top of the stack.
pop() - Pop: removes and returns the element from the top of the stack. After a pop operation, the stack's size decreases by one.
isEmpty() - checks whether the stack is empty. If the stack is empty, it returns true; otherwise, it returns false.
isFull() - checks whether the stack is full. If the stack is full, it returns true; otherwise, it returns false.
peek() - Peek (or Top): returns the element at the top of the stack without removing it. It allows you to examine the top element without altering the stack's contents.

4.4 Applications
String Traversal - String traversal, which involves iterating through the characters of a string one by one, is a fundamental operation in computer science and data structures.
Function Calls - In data structures and algorithms, function calls play a crucial role in implementing various operations and algorithms efficiently; a stack is used to keep track of the active calls.
Infix to Postfix - Converting an infix expression to a postfix expression is a fundamental operation in computer science and data structures. It is often used when evaluating mathematical expressions in programming and plays a crucial role in building calculators, compilers, and other applications.
Infix to Prefix - Converting an infix expression to a prefix expression is a common operation in computer science and data structures. Infix expressions are the typical mathematical expressions we use, where operators are placed between operands, like 2 + 3, while prefix expressions, also known as Polish notation, have operators placed before operands, like + 2 3. Converting infix to prefix notation can be useful in certain applications, such as parsing and evaluating mathematical expressions or when working with expression trees.
Postfix Evaluation - Postfix evaluation, also known as postfix notation or Reverse Polish Notation (RPN), is a way of representing mathematical expressions where operators come after their operands.

Unit 5: Queue

5.1 Introduction
A queue is a fundamental data structure in computer science and programming that is used to store and manage a collection of elements. It follows the "first-in, first-out" (FIFO) principle, which means that the first element added to the queue is the first one to be removed. Queues are widely used in various applications and algorithms, serving as a key tool for managing data in an orderly manner.

Here's a basic introduction to queues:

1. Structure and Characteristics:
- A queue is typically implemented as a linear data structure.
- It has two primary operations: "enqueue" to add an element to the back (also called the "rear" or "tail") of the queue, and "dequeue" to remove an element from the front (also called the "head").
- Queues are often used to represent real-world scenarios where elements need to be processed in the order they arrive, like people waiting in a line or tasks in a printer queue.
2. Usage:
- Queues are commonly used in various applications, such as task scheduling, breadth-first search algorithms, print spooling, and handling requests in web servers.
- They are also used in memory management and CPU scheduling within operating systems.
3. Operations:
- Enqueue: Adds an element to the back of the queue.
- Dequeue: Removes and returns the element from the front of the queue.
- Peek or Front: Returns the element at the front of the queue without removing it.
- isEmpty: Checks if the queue is empty.
- Size: Returns the number of elements in the queue.
4. Types of Queues:
- Linear Queue: A basic queue where elements are added at one end and removed from the other end.
- Circular Queue: A variation of the linear queue in which the front and back of the queue are connected, allowing elements to wrap around when the end of the underlying storage is reached.
- Priority Queue: A queue in which elements are assigned priorities, and the element with the highest priority is dequeued first.
- Double-Ended Queue (Deque): A queue that allows elements to be added and removed from both ends.
5. Implementation:
- Queues can be implemented using arrays or linked lists. The choice of implementation depends on the specific requirements of the application.
6. Complexity:
- Enqueue and dequeue operations in a well-implemented queue typically have a time complexity of O(1), making queues efficient for managing elements in a first-in, first-out order.

5.2 Representation - Static & Dynamic
In computer science and data structures, queues are a common abstract data type used to store and manage a collection of elements. Queues have two primary representations: static and dynamic.

1. Static Queue (array-based): A static queue is typically implemented using an array data structure. In this representation, you specify a fixed size for the queue when it is created. Elements are inserted at the rear of the array and removed from the front.
Advantages:
- Simple to implement and memory-efficient when the size is known in advance.
Disadvantages:
- Limited capacity: once the array is full, you cannot insert more elements without resizing the array.
- Inefficient memory usage: even if the queue is not full, the memory for the fixed-size array is allocated.
2. Dynamic Queue (linked-list-based): A dynamic queue is implemented using a linked list. In this representation, the size of the queue can grow or shrink dynamically as elements are inserted or removed. Each element in the queue is represented by a node, and the nodes are linked together.
Advantages:
- Dynamic size: the queue can grow or shrink as needed, making it suitable for situations where the size of the queue is unknown in advance.
- Efficient memory usage: memory is allocated only for the elements currently in the queue.
Disadvantages:
- Slightly more complex to implement compared to a static array-based queue.
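As a concrete illustration of the dynamic (linked-list-based) representation, here is a minimal Python queue sketch with O(1) enqueue and dequeue (not part of the original notes; the class and method names are illustrative):

```python
# Minimal dynamic (linked-list-based) queue sketch (illustrative names).

class _QNode:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedQueue:
    def __init__(self):
        self.front = None     # dequeue happens at the front
        self.rear = None      # enqueue happens at the rear
        self._size = 0

    def is_empty(self):
        return self.front is None

    def enqueue(self, data):
        """Add an element at the rear (O(1))."""
        node = _QNode(data)
        if self.rear:
            self.rear.next = node
        else:
            self.front = node     # queue was empty
        self.rear = node
        self._size += 1

    def dequeue(self):
        """Remove and return the element at the front (O(1))."""
        if self.is_empty():
            raise IndexError("dequeue from an empty queue")
        node = self.front
        self.front = node.next
        if self.front is None:
            self.rear = None      # queue became empty
        self._size -= 1
        return node.data

    def size(self):
        return self._size

q = LinkedQueue()
for job in ("job1", "job2", "job3"):
    q.enqueue(job)
print(q.dequeue(), q.dequeue(), q.size())   # job1 job2 1
```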
5.3 Queue Operations
In computer science and data structures, a queue is a linear data structure that follows the First-In-First-Out (FIFO) principle. This means that the first element added to the queue is the first one to be removed. Queues are often used in various applications and algorithms, such as scheduling, task management, and breadth-first search. Common operations on a queue data structure include:

1. init(): Initializes an empty queue. In Python, the `__init__` method of a class plays the role of the constructor used in languages such as C++.
2. Enqueue (Push): This operation is used to add an element to the back (or "rear") of the queue. The newly added element becomes the last one in the queue.
3. Dequeue (Pop): This operation is used to remove and retrieve the element from the front (or "head") of the queue. The element at the front of the queue is removed, and the next element in line becomes the new front.
4. IsEmpty: This operation checks if the queue is empty, meaning it contains no elements. It is often used to determine when to stop dequeuing.
5. IsFull: In some implementations, especially fixed-size arrays, you might want to check if the queue is full.
6. Peek (Front): This operation allows you to view the element at the front of the queue without removing it. It is often used to check the next element that will be dequeued.
7. Size (Count): This operation returns the number of elements currently in the queue.

5.4 Types of Queues
1. Simple (linear) queue: A simple queue is a first-in-first-out (FIFO) data structure. Items are added at the rear of the queue and removed from the front of the queue.
2. Circular queue: A circular queue is a queue in which the last position is connected back to the first position, so the underlying storage is used in a circular fashion and freed slots at the front can be reused.
3. Priority queue: A priority queue is a data structure in which elements are treated according to their priority. Elements with higher priority are removed from the queue before elements with lower priority.
4. Double-ended queue (deque): A deque is a data structure that can be used as a queue or a stack. Items can be added or removed from either the front or the rear of the deque.

Unit 6: Trees

6.1 Concepts & Terminology
In computer science and data structures, a tree is a widely used abstract data structure with a hierarchical shape. Trees are composed of nodes connected by edges, and they are often used to represent data in a hierarchical or organized manner. Here are some key concepts and terms associated with trees:

1. Node: A fundamental element of a tree data structure. Each node contains data and can have zero or more child nodes.
2. Root: The topmost node in a tree, from which all other nodes are descended. There is only one root in a tree.
3. Parent: A node that has one or more child nodes connected to it. A parent node is higher in the hierarchy than its children.
4. Child: A node that is directly connected to and descended from another node. Each parent node can have multiple child nodes.
5. Sibling: Nodes that share the same parent are called siblings.
6. Leaf: A node that has no children, meaning it is a node at the end of a branch in the tree.
7. Depth: The depth of a node is the number of edges on the path from the root to that node. The root node has a depth of 0.
8. Height: The height of a node is the maximum depth of any of its descendants. The height of the tree is the height of the root node.
9. Subtree: A subtree is a smaller tree that is part of a larger tree. It consists of a node and all its descendants.

6.2 Binary Tree and Binary Search Tree (BST)
Binary Tree: A tree in which each node has at most two children, referred to as the left child and the right child. Binary trees can be used to implement various data structures like binary search trees.
Binary Search Tree (BST): A binary tree with the property that, for each node, all nodes in its left subtree have values less than the node's value, and all nodes in its right subtree have values greater than the node's value. This property makes BSTs efficient for searching and sorting operations.

6.3 Representation - Static & Dynamic
1. Static representation of trees:
- In a static representation of a tree, the structure of the tree is predetermined and does not change over time.
- The number of nodes and their relationships are fixed and known in advance.
- Static representations are typically used when the tree's structure remains constant throughout its lifetime.
- Examples of static representations include arrays, lists, or other data structures used to create and store a predefined tree structure.
For example, consider a binary search tree (BST) with a static representation using an array:
- The elements are stored in an array, and the position of each element is determined by its relationship with the root node.
- Elements cannot be added or removed dynamically, and the tree's structure remains constant.
2. Dynamic representation of trees:
- In a dynamic representation of a tree, the structure of the tree can change over time. Nodes can be added, removed, or rearranged dynamically.
- Dynamic representations are used when the tree's structure needs to adapt to changing data or requirements.
- Examples of dynamic representations include linked structures like linked lists or various tree data structures like binary trees, AVL trees, or red-black trees.
For example, consider a dynamic representation of a binary search tree (BST):
- Nodes are dynamically added or removed as data is inserted into or deleted from the tree.
- The tree's structure can change in response to data modifications, such as insertions and deletions, to maintain the properties of a BST.

6.4 Operations on Binary Search Trees (BST)
6.4.1 Basic operations on a BST
1. Create: creates an empty tree.
2. Insert: inserts a node into the tree.
3. Delete: deletes a node from the tree.

6.4.2 Tree traversals - recursive and non-recursive (preorder, inorder, postorder)
Tree traversals are methods used to visit and process all the nodes in a binary tree. There are three main types of binary tree traversals: preorder, inorder, and postorder. These traversals can be implemented using both recursive and non-recursive (iterative) algorithms.
1. Preorder Traversal: Preorder traversal visits the current node first, then the left subtree, and finally the right subtree.
2. Inorder Traversal: Inorder traversal visits the left subtree first, then the current node, and finally the right subtree.
3. Postorder Traversal: Postorder traversal visits the left subtree first, then the right subtree, and finally the current node.

6.4.3 Counting leaf, non-leaf & total nodes
Counting leaf nodes, non-leaf nodes, and total nodes in a tree or a binary tree can be done through a simple traversal of the tree structure.
1. Leaf Nodes: Leaf nodes are nodes with no children. You can count them by performing a traversal (e.g., in-order, pre-order, or post-order) of the tree and incrementing the count whenever you encounter a node with no children.
2. Non-Leaf Nodes: Non-leaf nodes are nodes with one or more children. To count non-leaf nodes, you can subtract the number of leaf nodes from the total number of nodes (assuming you have already counted the leaf nodes). So, non-leaf nodes = total nodes - leaf nodes.
3. Total Nodes: You can count the total number of nodes in the tree by performing a traversal (e.g., in-order, pre-order, or post-order) of the tree and incrementing the count for each node you visit.

6.5 Heap Sort
Heap sort is a comparison-based sorting algorithm that operates by transforming an array into a binary heap data structure and then repeatedly removing the maximum element from the heap and inserting it into the sorted part of the array. It is an in-place, unstable sorting algorithm, which means it does not require additional memory for sorting and does not maintain the relative order of equal elements. Here's how the heap sort algorithm works:
1. Heapify: Build a max-heap from the array. This step ensures that the largest element is at the root of the heap.
2. Sorting: After the heap is constructed, the largest element is at the top of the heap (root). Swap this element with the last element in the array. Decrease the heap size by 1, and then call the "heapify" procedure on the reduced heap to maintain the max-heap property. Repeat this process for all elements in the array.
3. Repeat: Continue the process on the reduced heap until the entire array is sorted.
Heap sort has a time complexity of O(n log n) in the worst and average cases, making it efficient for large datasets. However, it is not as commonly used as other sorting algorithms like quicksort or mergesort in practice, due to its slower average performance in terms of constant factors and its non-adaptive nature (it does not take advantage of partially sorted data).

6.6 AVL Tree
An AVL tree, named after its inventors Adelson-Velsky and Landis, is a self-balancing binary search tree. It is designed to maintain a balanced structure, ensuring that the height of the tree remains logarithmic with respect to the number of nodes. This balanced property makes AVL trees efficient for operations like insertion, deletion, and searching, which all have an average time complexity of O(log n). AVL trees are widely used in data structures and algorithms because they provide efficient performance for operations that require maintaining a sorted set of data.

Unit 7: Graph

7.1 Concepts & Terminology
Graphs are fundamental data structures used in computer science and mathematics to represent a wide range of relationships and networks. They consist of two main components: nodes (or vertices) and edges. These nodes and edges are used to model and represent various types of connections and dependencies. Here are some key concepts and terms related to graphs:

1. Node (Vertex): Nodes are the fundamental building blocks of a graph. Each node represents an entity or a data point. In various applications, nodes can represent cities, people, web pages, or any other discrete object.
2. Edge: Edges are the connections or relationships between nodes. They define how nodes are related to each other. In some cases, edges may have associated weights or labels to convey additional information.
3. Directed Graph (Digraph): In a directed graph, each edge has a direction, indicating that the relationship between two nodes is one-way. It is represented by an arrow from one node to another. Directed graphs are used when the relationships have a clear direction, such as in social networks (A follows B) or in road networks (A connects to B via a one-way street).
4. Undirected Graph: In an undirected graph, edges have no direction; they simply represent a bidirectional connection between nodes. If node A is connected to node B, it implies that node B is also connected to node A. This is often used to represent symmetric relationships, like friendships or physical connections.
5. Weighted Graph: A weighted graph is one in which each edge has an associated weight or cost. These weights can represent various attributes, such as distances, costs, or strengths of connections. Weighted graphs are used in applications like finding the shortest path in a road network or a minimum spanning tree in network design.
6. Degree of a Node: The degree of a node is the number of edges connected to it. In an undirected graph, it is the number of neighbors a node has. In a directed graph, there are two degrees: in-degree (number of incoming edges) and out-degree (number of outgoing edges).
7. Path: A path in a graph is a sequence of nodes where each node is connected to the next node by an edge. The length of a path is the number of edges it contains.
8. Cycle: A cycle in a graph is a path that starts and ends at the same node. It forms a closed loop in the graph.
9. Connected Graph: A graph is connected if there is a path between every pair of nodes. If there are disconnected components in a graph, it is not connected.
10. Tree: A tree is a special type of graph that is connected and acyclic, meaning it has no cycles. Trees have a hierarchical structure with one designated node as the root, and every other node has a unique parent.
11. Forest: A forest is a collection of disjoint trees. In other words, it is a graph that consists of multiple disconnected tree components.
12. Graph Traversal: Graph traversal is the process of visiting all nodes in a graph in a systematic manner. Depth-First Search (DFS) and Breadth-First Search (BFS) are common graph traversal algorithms.
13. Adjacency Matrix and Adjacency List: These are two common data structures for representing graphs. An adjacency matrix uses a 2D array to store whether there is an edge between nodes, while an adjacency list uses a list of neighbors for each node.
14. Spanning Tree: A spanning tree of a graph is a subgraph that is a tree and includes all the nodes of the original graph. Spanning trees are used in network design and minimization problems.
15. Eulerian Path/Circuit and Hamiltonian Path/Circuit: These are special paths and cycles in graphs. An Eulerian path/circuit visits each edge exactly once, while a Hamiltonian path/circuit visits each node exactly once.

7.2 Graph Representation
Graphs are a fundamental data structure in computer science and mathematics that represent a set of objects and the relationships between them. They are used to model and solve a wide range of problems, from social networks and transportation systems to data analysis and optimization. Graphs consist of two main components: nodes (vertices) and edges (links). Graphs can be categorized into several types, including:
1. Undirected Graph: In an undirected graph, edges have no direction. If there is an edge between node A and node B, you can travel from A to B and from B to A.
2. Directed Graph (Digraph): In a directed graph, edges have a direction. If there is an edge from node A to node B, you can travel from A to B, but not necessarily from B to A.
3. Weighted Graph: In a weighted graph, each edge has a weight or cost associated with it. These weights can represent distances, costs, or other quantitative measures.
4. Unweighted Graph: In an unweighted graph, all edges are considered equal; there is no associated weight.

7.2.1 Adjacency matrix
An adjacency matrix is a two-dimensional array where the rows and columns represent nodes, and the value at the intersection of row i and column j indicates whether there is an edge between node i and node j. This representation is suitable for dense graphs but may be inefficient for sparse graphs.

7.2.2 Adjacency list
An adjacency list is a collection of lists, where each node's list contains the nodes it is connected to. This representation is more memory-efficient for sparse graphs.

7.2.3 Inverse adjacency list
An inverse adjacency list, often referred to as a reverse adjacency list, is a data structure used in graph theory to represent the reverse relationships between nodes in a directed graph. In a standard adjacency list, you store, for each node, a list of its neighboring nodes to which it has outgoing edges. In an inverse adjacency list, you store, for each node, a list of nodes from which it has incoming edges.

7.2.4 Adjacency multi-list
An adjacency multi-list is a data structure used in graph theory to represent a graph. It is an alternative to the more common adjacency matrix and adjacency list representations. The adjacency multi-list is particularly useful when dealing with graphs that have a mix of directed and undirected edges. In an adjacency multi-list, each vertex in the graph is represented as a node in a linked list. Each node contains two lists:
1. An outgoing edge list: This list contains pointers to the vertices that can be reached from the current vertex. For directed edges, this list includes all the vertices to which the current vertex has outgoing edges.
2. An incoming edge list: This list contains pointers to the vertices that have edges leading to the current vertex. For directed edges, this list includes all the vertices from which edges are coming into the current vertex.
For example:
A -> Outgoing edges: [B], Incoming edges: [B, C]
B -> Outgoing edges: [A, C], Incoming edges: [A, C]
C -> Outgoing edges: [B, A], Incoming edges: [B]

7.3 Traversals - BFS and DFS
Traversal algorithms are used to visit and explore all the nodes in a graph or tree data structure. Two common traversal algorithms are Breadth-First Search (BFS) and Depth-First Search (DFS). These algorithms are commonly applied to various data structures, including trees and graphs. Here's an overview of both BFS and DFS.

Breadth-First Search (BFS):
1. Queue-Based: BFS uses a queue data structure to keep track of the nodes to be visited. It starts at the root node (or a specified starting node) and explores all its neighbours at the current level before moving to the next level.
2. FIFO: In BFS, the nodes are processed in a First-In-First-Out (FIFO) order. This means that all nodes at the current level are visited before moving to the nodes at the next level.
3. Shortest Path: BFS is ideal for finding the shortest path from a source node to a target node in unweighted graphs or trees. It guarantees that you will find the shortest path first, because you explore the neighbours of the source node before moving further away.
4. Memory Usage: BFS may consume more memory than DFS because it needs to store all the nodes at the current level in the queue.
5. Implementation: You can implement BFS using a while loop and a queue data structure.

Depth-First Search (DFS):
1. Stack-Based (or Recursive): DFS uses a stack data structure, either explicitly as a stack or implicitly via the call stack in a recursive implementation. It starts at the root node (or a specified starting node) and explores as far as possible along each branch before backtracking.
2. LIFO: In DFS, the nodes are processed in a Last-In-First-Out (LIFO) order. This means that you explore as deeply as possible along a branch before backtracking.
3. Applications: DFS is often used for tasks like topological sorting, cycle detection, and pathfinding. It is not necessarily suited for finding the shortest path in unweighted graphs because it does not guarantee the shortest path.
4. Memory Usage: DFS typically uses less memory than BFS, especially when implemented using recursion. It only needs to store the current path and backtrack when necessary.
5. Implementation: You can implement DFS using a recursive function or an explicit stack.

7.4 AOV Network and Topological Sort
AOV (Activity On Vertex) networks and topological sorting are concepts often used in the fields of project management and graph theory, respectively. They are closely related, and understanding one can help in understanding the other.
1. AOV Network (Activity On Vertex Network): An AOV network is a type of directed acyclic graph (DAG) that is commonly used to represent and plan activities in a project, such as in project management and critical path analysis. In an AOV network, the vertices (nodes) represent activities or tasks, and the directed edges represent dependencies between activities. Each edge has a weight or duration associated with it, representing the time it takes to complete the activity.
2. Topological Sort: Topological sorting is an algorithm used to order the vertices of a directed acyclic graph (DAG) in such a way that for every directed edge (u, v), vertex u comes before vertex v in the ordering. In the context of an AOV network, a topological sort helps in determining the order in which activities should be executed to ensure that no activity is started before all its dependencies have been completed. It is a critical step in determining the critical path of a project and estimating project completion times.

7.5 Spanning Trees: Prim's and Kruskal's Algorithms
Prim's and Kruskal's algorithms are greedy algorithms that find a minimum spanning tree of a weighted graph. They use different approaches to achieve the same objective:
- Prim's algorithm: starts with a root vertex and expands the tree by including adjacent vertices.
- Kruskal's algorithm: starts with the smallest weighted edge.
Prim's algorithm tends to run faster on dense graphs, while Kruskal's algorithm tends to run faster on sparse graphs.
Prim's algorithm:
- Removes all loops and parallel edges.
- Chooses any arbitrary node as the root node.
- Grows the tree by moving from vertex to adjacent vertex.
- Initializes the key values of all vertices to infinity, except the root's value.
- Updates the key value of a vertex v to the weight w of edge (u, v) whenever w is less than the current key value of v.
Kruskal's algorithm:
- Commences with the smallest weighted edge and considers the edges in increasing order of weight.
- Typically uses a heap (or a sorted edge list) to pick the next smallest edge, and a disjoint-set (union-find) structure to skip edges that would form a cycle.
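To tie the graph representations and traversals together, here is a minimal Python sketch of an adjacency-list graph with BFS and DFS (not part of the original notes; the example graph and function names are illustrative, and the graph is undirected and unweighted):

```python
# Minimal adjacency-list graph with BFS and DFS (illustrative example graph).
from collections import deque

graph = {                      # undirected, unweighted adjacency list
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def bfs(graph, start):
    """Breadth-first traversal using a FIFO queue; visits level by level."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

def dfs(graph, start, visited=None):
    """Recursive depth-first traversal; goes as deep as possible before backtracking."""
    if visited is None:
        visited = set()
    visited.add(start)
    order = [start]
    for neighbour in graph[start]:
        if neighbour not in visited:
            order.extend(dfs(graph, neighbour, visited))
    return order

print(bfs(graph, "A"))   # ['A', 'B', 'C', 'D', 'E']
print(dfs(graph, "A"))   # ['A', 'B', 'D', 'C', 'E']
```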
