Document Details

AstonishingStonehenge

Uploaded by AstonishingStonehenge

Universidad de Manila

Tags

sorting algorithms data structures and algorithms computer science DSA

Summary

This document reviews different sorting algorithms, including Bubble Sort, Selection Sort, Insertion Sort, and Merge Sort. It examines their characteristics, time and space complexities, and provides advantages and disadvantages of each algorithm. The document is a good resource for understanding basic sorting techniques.

Full Transcript


SORTING ALGORITHM: SORT AND MERGE OPERATIONS

GROUP 4
MEMBERS OF GROUP 4: MEDALLA, Maria Isabelle R.; NOORA, Mackenzie Wayne L.; OLIVEROS, Chloe A.; PADIDA, Jamaica Pablita C.; PARILLA, Lymar P.; PESA, Jan Aldridge S.

WHAT IS SORTING?
Sorting is the process of rearranging the elements of an array or list based on a comparison operator. The comparison operator determines the new order of the elements in the data structure, reordering all the elements in either ascending or descending order.

WHAT IS MERGING?
Merging is the process of combining two lists (or data sequences) that hold the same type of data. It combines the items from both lists to create a new list. It is typically used in sorting algorithms to combine sorted lists into a single, sorted list.
Example: List A: [1, 3, 5]; List B: [2, 4, 6]; Merged List: [1, 2, 3, 4, 5, 6]

CHARACTERISTICS OF SORTING ALGORITHMS

Time Complexity
Measures how long an algorithm takes to run and helps compare the efficiency of different sorting methods.
Worst case: the longest time the algorithm takes.
Average case: the typical time for most inputs.
Best case: the shortest time, for the most favorable input.

Big O Notation
Big O notation expresses how fast the running time increases as the input grows. It describes the number of operations relative to the input size n, and it helps compare the efficiency of algorithms. The overall time complexity of an algorithm is obtained by combining the O(...) terms assigned to each step of the function.

Understanding Algorithm Time Complexities
O(1): Constant time. The algorithm's running time does not change with input size. Example: accessing the first element of an array.
O(n): Linear time. The running time increases proportionally (linearly) with input size. Example: searching through an array.
O(log n): Logarithmic time. The running time grows more slowly as the input size increases. Example: binary search, where the search space is halved at each step.
O(n²): Quadratic time. The running time increases with the square of the input size, often seen in less efficient sorting algorithms. Example: selection sort or bubble sort.

Space Complexity
The total space taken by the algorithm with respect to the input size. Space complexity includes both auxiliary space and the space used by the input; auxiliary space is the extra or temporary space used by the algorithm. Space complexity is a concept related to time complexity.
Worst case: the maximum amount of memory the algorithm might need.
Average case: the expected amount of memory used by the algorithm.
Best case: the minimum amount of memory required by the algorithm.

APPLICATIONS OF SORTING ALGORITHMS

Searching algorithms. Example: binary search. Before performing a binary search on an array, the array must be sorted. For instance, finding a number in a list of phone numbers requires sorting as a first step.

Data management. Example: file systems. Files can be sorted by name, size, or date to make it easier to find specific files in an operating system.

Database optimization. Example: preprocessing data for faster queries. In a large dataset, if the data is sorted beforehand, queries that fetch a range of values (e.g., records between two dates) can be answered more efficiently. Example: indexing. Indexes in databases are often sorted, allowing quick access to records using binary search.

Data analysis. Example: sorting for trend detection. Financial analysts sort stock prices by date to observe trends and make predictions. Example: identifying outliers. In large datasets, sorting by value can help detect outliers, such as extremely high or low sales figures in a company's revenue report.
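To make the binary-search application concrete, here is a minimal sketch (not from the slides) of the sort-first, search-second workflow using Python's standard bisect module; the phone-number values are invented placeholders.

```python
import bisect

# Hypothetical data: the list must be sorted before binary search can be used.
phone_numbers = [9175550240, 9175550111, 9175550199, 9175550123]
phone_numbers.sort()                                 # sorting is the necessary first step

target = 9175550199
index = bisect.bisect_left(phone_numbers, target)    # O(log n) lookup on sorted data

if index < len(phone_numbers) and phone_numbers[index] == target:
    print(f"Found {target} at index {index}")
else:
    print(f"{target} is not in the list")
```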
IMPORTANCE OF SORTING ALGORITHMS
Easier searching: enables efficient searches with methods like binary search.
Simplifies data handling: organizes large datasets for better management.
Software: enhances data processing and retrieval.
Conceptual problems: solves complex problems by structuring data effectively.

TYPES OF SORTING TECHNIQUES
Comparison based and non-comparison based.

SORTING ALGORITHMS

BUBBLE SORT (REPORTED BY: OLIVEROS, CHLOE)

Bubble sort is the simplest sorting algorithm; it works by repeatedly swapping adjacent elements if they are in the wrong order. It is not suitable for large data sets because its average and worst-case time complexity is quite high. Link: https://codepen.io/SQLMD/full/RyaLyg/

How it Works
1. Start from the left and compare each pair of adjacent values.
2. For each value, compare it with the next value.
3. If the first value is larger than the second, swap them so that the larger value comes last.
4. Repeat the process until the entire array is sorted.

Time and Space Complexity
Best case: linear time, O(N). Occurs when the array is already sorted.
Average case: quadratic time, O(N²). The algorithm makes about N passes through the list (O(N)), and each pass compares adjacent pairs, about N comparisons per pass (O(N)); the overall complexity is therefore O(N) * O(N) = O(N²).
Worst case: quadratic time, O(N²). Occurs when the elements of the array are arranged in the opposite order.
Space complexity: constant, O(1). It remains constant regardless of the size of the input array being sorted.

Implementation / Code: https://www.programiz.com/online-compiler/8OD8QQVOiKvTp

Advantages
Easy to understand and implement: a simple algorithm with a straightforward approach to sorting.
No additional memory required: operates in place, so it needs no extra space beyond the original array.
Stable sorting algorithm: maintains the relative order of elements with equal values.

Disadvantages
Slow for large datasets: the O(N²) time complexity makes it inefficient for large arrays.
Comparison-based: relies on comparing elements, which can limit efficiency in certain scenarios.
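The slides link to an external bubble sort implementation; as a stand-in, here is a short Python sketch of the swapping pass described above, with the early-exit check that gives the O(N) best case. It follows the general algorithm, not necessarily the exact code behind the link.

```python
def bubble_sort(arr):
    """Repeatedly swap adjacent out-of-order elements until a full pass makes no swaps."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # After each pass, the largest remaining value has bubbled to the end.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:          # already sorted: this is the O(N) best case
            break
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]
```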
SELECTION SORT (REPORTED BY: OLIVEROS, CHLOE)

Selection sort is a simple and efficient sorting algorithm. It works by repeatedly selecting the smallest (or largest) element from the unsorted portion of the list and moving it to the sorted portion of the list.

How it Works
Scan the array to identify the smallest value.
Move the smallest value to the beginning of the unsorted portion of the array.
Repeat this process for each element in the array until the entire array is sorted.

Time and Space Complexity
Time complexity: quadratic time, O(N²). One loop selects the elements of the array one by one (O(N)), and another loop compares each selected element with every other array element (O(N)); the overall complexity is therefore O(N) * O(N) = O(N²).
Space complexity: constant, O(1). It remains constant regardless of the size of the input array being sorted.

Implementation / Code: https://www.programiz.com/online-compiler/2sFBq5TqQIjt8

Advantages
Simple and easy to understand: a straightforward algorithm with an intuitive approach.
Works well with small datasets: efficient for small lists, where its simplicity outweighs its inefficiency.

Disadvantages
Not suitable for large datasets: the O(n²) time complexity makes it inefficient for larger datasets, and performance degrades significantly as the size of the dataset increases.
Not stable: it does not preserve the relative order of items with equal keys, so it is not a stable sorting algorithm.

INSERTION SORT (REPORTED BY: PESA, JAN ALDRIDGE)

Insertion sort is a simple sorting algorithm. It works by iteratively inserting each element of an unsorted list into its correct position in the sorted portion of the list.

How it Works
Start with the second element, assuming the first is sorted.
Compare the second element with the first and swap if it is smaller.
Move to the third element, compare it with the second and first, and swap as needed.
Continue this process, comparing each new element with the sorted elements before it and swapping to maintain order.
Repeat until the entire array is sorted.

Time and Space Complexity
Best case: O(n), if the list is already sorted, where n is the number of elements in the list.
Average case: O(n²), if the list is randomly ordered.
Worst case: O(n²), if the list is in reverse order.
Auxiliary space: O(1). Insertion sort requires O(1) additional space, making it a space-efficient sorting algorithm.

Advantages
Simple and easy to implement.
Stable sorting algorithm.
Efficient for small lists and nearly sorted lists.
Space-efficient.
Adaptive: the number of swaps is directly proportional to the number of inversions. For example, no swapping happens for an already sorted array, and it takes only O(n) time.

Disadvantages
Inefficient for large lists.
Not as efficient as other sorting algorithms (e.g., merge sort, quick sort) for most cases.

MERGE SORT (REPORTED BY: PESA, JAN ALDRIDGE)

Merge sort is a popular sorting algorithm known for its efficiency and stability. It follows the divide-and-conquer approach: it works by recursively dividing the input array into smaller subarrays, sorting those subarrays, and then merging them back together to obtain the sorted array.

How it Works
Merge sort follows the divide-and-conquer approach to sort a given array of elements. Step by step:
Divide: divide the list or array recursively into two halves until it can no longer be divided.
Conquer: each subarray is sorted individually using the merge sort algorithm.
Merge: the sorted subarrays are merged back together in sorted order. The process continues until all elements from both subarrays have been merged.

Time and Space Complexity
Best case: O(n log n), when the array is already sorted or nearly sorted.
Average case: O(n log n), when the array is randomly ordered.
Worst case: O(n log n), when the array is sorted in reverse order.
Auxiliary space: O(n). Additional space is required for the temporary array used during merging.

Advantages
Stability: merge sort is a stable sorting algorithm, which means it maintains the relative order of equal elements in the input array.
Guaranteed worst-case performance: merge sort has a worst-case time complexity of O(N log N), which means it performs well even on large datasets.
Simple to implement: the divide-and-conquer approach is straightforward.
Naturally parallel: subarrays are merged independently, which makes merge sort suitable for parallel processing.

Disadvantages
Space complexity: merge sort requires additional memory to store the merged subarrays during the sorting process.
Not in-place: merge sort is not an in-place sorting algorithm, which means it requires additional memory to store the sorted data. This can be a disadvantage in applications where memory usage is a concern.
Slower than quick sort in general: quick sort is more cache friendly because it works in place.
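The merge sort code itself is not part of the transcript, so the following is a small illustrative Python sketch of the divide, conquer, and merge steps described above; it is not the presenters' exact implementation.

```python
def merge_sort(arr):
    """Divide the list, sort each half recursively, then merge the sorted halves."""
    if len(arr) <= 1:                    # a list of 0 or 1 items is already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])         # conquer: sort each half
    right = merge_sort(arr[mid:])

    merged = []                          # merge: take the smaller front element each time
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps equal elements in order (stable)
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])              # append whatever remains in either half
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]
```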
QUICK SORT (REPORTED BY: MEDALLA, MARIA ISABELLE)

Quick sort is a popular, efficient, and recursive sorting algorithm. It is based on divide and conquer: it picks an element as a pivot and partitions the given array around the picked pivot, placing the pivot in its correct position in the sorted array.

How it Works
1. Choose a pivot.
2. Partition the array around the pivot. After partitioning, every element to the left of the pivot is smaller than every element to the right, and we get the index marking the end of the smaller elements. The left and right parts may not be sorted individually.
3. Recursively call the procedure for the two partitioned left and right subarrays.
4. Recursion stops when a subarray has only one element left.

Time and Space Complexity
Best case: Ω(n log n), occurs when the pivot element divides the array into two equal halves.
Average case: Θ(n log n); on average, the pivot divides the array into two parts, but not necessarily equal ones.
Worst case: O(n²), occurs when the smallest or largest element is always chosen as the pivot (e.g., already sorted arrays).
Auxiliary space: O(n), due to the recursive call stack.

Advantages
It is a divide-and-conquer algorithm, which makes problems easier to solve.
It is efficient on large data sets.
It has low overhead, as it only requires a small amount of memory to function.
It is cache friendly, as we work on the same array to sort and do not copy data to any auxiliary array.
It is the fastest general-purpose algorithm for large data when stability is not required.
It is tail recursive, so tail-call optimization can be applied.

Disadvantages
It has a worst-case time complexity of O(N²), which occurs when the pivot is chosen poorly.
It is not a good choice for small data sets.
It is not a stable sort: if two elements have the same key, their relative order will not be preserved in the sorted output, because quick sort swaps elements according to the pivot's position without considering their original positions.
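As a companion to the steps above, here is a brief Python sketch that uses the last element as the pivot (a Lomuto-style partition). This is one common way to implement the partition step, not necessarily the scheme shown on the missing slides.

```python
def quick_sort(arr, low=0, high=None):
    """Sort arr in place by partitioning around a pivot and recursing on both sides."""
    if high is None:
        high = len(arr) - 1
    if low < high:
        pivot = arr[high]                       # choose a pivot (here: the last element)
        i = low - 1
        for j in range(low, high):              # partition: move smaller elements left
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]   # put the pivot in its final spot
        quick_sort(arr, low, i)                 # recurse on the left part
        quick_sort(arr, i + 2, high)            # recurse on the right part
    return arr

print(quick_sort([10, 7, 8, 9, 1, 5]))          # [1, 5, 7, 8, 9, 10]
```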
HEAP SORT (REPORTED BY: MEDALLA, MARIA ISABELLE)

Heap sort is a comparison-based sorting technique based on the binary heap data structure. It can be seen as an optimization over selection sort: we first find the max (or min) element and swap it with the last (or first) element.

How it Works
First convert the array into a max heap using heapify, then repeatedly delete the root node of the max heap, replace it with the last node, and heapify again; repeat this process while the size of the heap is greater than 1. In detail:
Build a max heap from the given input array.
Repeat the following steps until the heap contains only one element:
Swap the root element of the heap (which is the largest element) with the last element of the heap.
Remove the last element of the heap (which is now in its correct position).
Heapify the remaining elements of the heap.
Once the heap is reduced to a single element, the array is fully sorted.

Visual representation: build a max heap, then sort the array by repeatedly placing the largest element at the end of the unsorted portion.

Time and Space Complexity
Time complexity: O(n log n).
Auxiliary space: O(log n), due to the recursive call stack; auxiliary space can be O(1) for an iterative implementation.
Side note: heap sort is an in-place algorithm. Its typical implementation is not stable, but it can be made stable. It is typically 2-3 times slower than a well-implemented quick sort; the reason for the slowness is a lack of locality of reference.

Advantages
Efficient time complexity: heap sort has a time complexity of O(n log n) in all cases, which makes it efficient for sorting large datasets. The log n factor comes from the height of the binary heap, and it ensures that the algorithm maintains good performance even with a large number of elements.
Memory usage: memory usage can be minimal (by writing an iterative heapify() instead of a recursive one), so apart from what is necessary to hold the initial list of items to be sorted, it needs no additional memory space to work.
Simplicity: it is simpler to understand than other equally efficient sorting algorithms because it does not use advanced computer science concepts such as recursion.

Disadvantages
Costly: heap sort is costly because its constant factors are higher than merge sort's, even though the time complexity is O(n log n) for both.
Unstable: heap sort is unstable; it might rearrange the relative order of equal elements.
Inefficient: heap sort is not very efficient in practice because of the high constants in its time complexity.

BUCKET SORT (REPORTED BY: PADIDA, JAMAICA PABLITA C.)

Bucket sort is a sorting technique that divides the elements into several buckets, where each bucket contains a range of values. Each bucket is then sorted individually (often using another sorting algorithm), and finally the sorted buckets are concatenated to produce the sorted list.

How it Works
1. Distribute elements into buckets: the input array is divided into several buckets, each representing a range of values. For example, if the values range from 0 to 1 and there are 5 buckets, each bucket could represent a range of 0.2 (like [0, 0.2), [0.2, 0.4), etc.).
2. Sort each bucket: after distributing the elements into the appropriate buckets, each bucket is sorted individually. Typically, a simple sorting algorithm like insertion sort is used for this step, because buckets are usually small.
3. Concatenate the sorted buckets: once all the buckets are sorted, the elements from each bucket are combined back into the original array, resulting in a sorted array.

Example
Consider the input array: [0.42, 0.32, 0.33, 0.52, 0.37, 0.47, 0.51]
1. Distribute into buckets (assume 5 buckets with a range of 0.2 each):
Bucket 1 (range [0, 0.2)): empty.
Bucket 2 (range [0.2, 0.4)): [0.32, 0.33, 0.37].
Bucket 3 (range [0.4, 0.6)): [0.42, 0.52, 0.47, 0.51].
Bucket 4 (range [0.6, 0.8)): empty.
Bucket 5 (range [0.8, 1.0)): empty.
2. Sort each bucket:
Bucket 2: sort [0.32, 0.33, 0.37] to get [0.32, 0.33, 0.37].
Bucket 3: sort [0.42, 0.52, 0.47, 0.51] to get [0.42, 0.47, 0.51, 0.52].
3. Concatenate the sorted buckets:
Combine all the sorted buckets: [0.32, 0.33, 0.37, 0.42, 0.47, 0.51, 0.52].
The array is now sorted using Bucket Sort.
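To accompany the worked example above, here is a compact Python sketch of bucket sort for values in [0, 1), using five buckets of width 0.2; Python's built-in sorted() stands in for the per-bucket insertion sort mentioned on the slides, purely to keep the sketch short.

```python
def bucket_sort(values, num_buckets=5):
    """Distribute values in [0, 1) into equal-width buckets, sort each, then concatenate."""
    buckets = [[] for _ in range(num_buckets)]
    for v in values:
        index = min(int(v * num_buckets), num_buckets - 1)   # e.g. 0.42 -> bucket [0.4, 0.6)
        buckets[index].append(v)

    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))    # per-bucket sort (insertion sort also works here)
    return result

data = [0.42, 0.32, 0.33, 0.52, 0.37, 0.47, 0.51]
print(bucket_sort(data))                 # [0.32, 0.33, 0.37, 0.42, 0.47, 0.51, 0.52]
```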
BINGO SORT (REPORTED BY: PADIDA, JAMAICA PABLITA C.)

Bingo sort is a comparison-based sorting technique that repeatedly finds the smallest unsorted element and moves all occurrences of that element to their correct positions. It is somewhat similar to selection sort, but it handles multiple occurrences of the same minimum element more efficiently.

How it Works
1. Find the minimum element: the algorithm begins by identifying the smallest element in the input array. This is the first element to be placed in its correct sorted position.
2. Move all occurrences: after identifying the smallest element, all occurrences of this element are moved to the front of the array. This ensures that the smallest element is in its correct position.
3. Repeat for the remaining elements: the process is repeated for the next smallest element in the unsorted portion of the array. The next minimum is found, and all its occurrences are moved to the next available positions in the sorted portion.
4. Continue until the array is sorted: this process continues until all elements have been placed in their correct positions, resulting in a fully sorted array.

Example
Consider the input array: [5, 4, 8, 5, 4, 8, 7, 4, 4, 4]
1. Find the minimum element (which is 4): move all occurrences of 4 to the front. Updated array: [4, 4, 4, 4, 4, 8, 5, 8, 5, 7]
2. Find the next minimum element (which is 5): move all occurrences of 5 to the next available positions. Updated array: [4, 4, 4, 4, 4, 5, 5, 8, 8, 7]
3. Find the next minimum element (which is 7): move it to the next available position. Final sorted array: [4, 4, 4, 4, 4, 5, 5, 7, 8, 8]
The array is now sorted using Bingo Sort.

COUNTING SORT (REPORTED BY: PARILLA, LYMAR P.)

Counting sort is a non-comparison-based sorting algorithm that operates by counting the occurrences of each unique element in the input array. It is particularly efficient for sorting integers or objects with a limited range of values, as it uses an auxiliary array to store the count of each element, allowing for linear time complexity, O(n + k), where n is the number of elements and k is the range of the input values.

Key Characteristics, Advantages and Disadvantages of Counting Sort: Efficiency and Limitations
Counting sort is highly efficient for sorting integers within a limited range due to its linear time complexity, O(n + k), but it becomes impractical for large ranges of input values (k), as it requires significant additional space for the counting array, making it less suitable for datasets with a wide variety of values.

Step-by-Step Process of Counting Sort
1. Initialization of the count array: create a count array of size equal to the range of input values, initializing all elements to zero. This array will store the frequency of each unique element from the input array.
2. Counting occurrences: traverse the input array and, for each element, increment its corresponding index in the count array. This step effectively counts how many times each value appears in the input data.
3. Cumulative count and output array: convert the count array into a cumulative count array, then fill the output array by placing each element in its correct position based on the cumulative counts, ensuring the sorting is stable.
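Here is a short Python sketch of the three counting sort steps just listed (count, accumulate, place), assuming non-negative integer keys; it is illustrative and not the pseudocode from the original slides.

```python
def counting_sort(arr):
    """Stable counting sort for non-negative integers with a limited range k."""
    if not arr:
        return []
    k = max(arr)                          # range of the input values
    count = [0] * (k + 1)                 # step 1: count array initialized to zero
    for value in arr:                     # step 2: count occurrences of each value
        count[value] += 1
    for i in range(1, k + 1):             # step 3a: convert counts to cumulative counts
        count[i] += count[i - 1]

    output = [0] * len(arr)
    for value in reversed(arr):           # step 3b: place elements; reverse keeps it stable
        count[value] -= 1
        output[count[value]] = value
    return output

print(counting_sort([4, 2, 2, 8, 3, 3, 1]))   # [1, 2, 2, 3, 3, 4, 8]
```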
Pseudocode Structure

Sorting a Small Array
Consider an input array of integers such as [4, 2, 2, 8, 3, 3, 1], which demonstrates the counting sort algorithm's effectiveness in sorting small datasets with repeated elements. The algorithm processes the input by first counting the occurrences of each integer, then calculating cumulative counts, and finally placing each integer in its correct position in the output array, resulting in the sorted array [1, 2, 2, 3, 3, 4, 8].

Sorting with a Limited Range

RADIX SORT (REPORTED BY: PARILLA, LYMAR P.)

Radix sort is a non-comparative integer sorting algorithm that sorts numbers by processing individual digits. It works by grouping numbers based on their individual digits, starting from the least significant digit to the most significant. This method allows radix sort to achieve linear time complexity, O(nk), where n is the number of elements and k is the number of digits in the largest number, making it efficient for sorting large datasets with fixed-length keys.

Comparison with Other Sorting Algorithms

Applications of Radix Sort
Database management systems: radix sort is particularly effective for sorting large datasets, such as in database management systems, where it can efficiently handle large volumes of integer data with fixed lengths.
Digital image processing: in digital image processing, radix sort can be utilized for sorting pixel values, enabling faster image manipulation and analysis, especially in applications requiring real-time processing.
Network packet sorting: radix sort is applied in networking for sorting packets based on header information, facilitating efficient data transmission and routing in high-speed networks and improving overall performance.
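Here is a compact least-significant-digit radix sort sketch in Python for non-negative integers, grouping values digit by digit with a stable pass per digit, as described above; it is an illustration rather than code from the slides.

```python
def radix_sort(arr):
    """LSD radix sort for non-negative integers: one stable pass per decimal digit."""
    if not arr:
        return arr
    exp = 1
    while max(arr) // exp > 0:            # one pass per digit, least significant first
        buckets = [[] for _ in range(10)]
        for value in arr:
            buckets[(value // exp) % 10].append(value)           # group by the current digit
        arr = [value for bucket in buckets for value in bucket]  # stable regrouping
        exp *= 10
    return arr

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```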
COCKTAIL SORT (REPORTED BY: NOORA, MACKENZIE WAYNE)

Cocktail sort is a variant of bubble sort. The bubble sort algorithm always goes through the elements from left to right, moving the largest element to its proper location in the first iteration, the second-largest in the second iteration, and so on. Cocktail sort instead moves through the array in both directions alternately, and it can finish sooner than bubble sort because it skips pointless iterations.

Characteristics of Cocktail Sort
Bidirectional sorting: unlike traditional bubble sort, which only passes through the list in one direction, cocktail sort sorts in both directions alternately.
Two-phase sweeps: a complete iteration consists of two sweeps, one forward and one backward. The largest unsorted element "bubbles up" on the forward pass, while the smallest unsorted element "bubbles down" on the backward pass.
Adaptive: by detecting that no swaps were made during a pass, cocktail sort (an optimized version of bubble sort) can end the process early if the array becomes sorted along the way.

How it Works
First, compare index 0 with index 1 and swap if the first value is larger. Instead of terminating early, the forward pass continues until the biggest number reaches its correct index at the end. Then, rather than moving back to the left to start over, the pointer reverses at the end and compares elements backwards, until the smallest number is placed in its correct spot.

Time and Space Complexity
Best case: O(n), only when the list is already sorted.
Average case: O(n²), the typical performance for random, unsorted lists.
Worst case: O(n²), which happens when the list is in reverse order.
Auxiliary space: O(1), since cocktail sort only needs a fixed amount of additional space.

CYCLE SORT (REPORTED BY: NOORA, MACKENZIE WAYNE)

Cycle sort is an in-place, unstable comparison sort that is theoretically optimal in terms of the total number of writes to the original array, unlike any other in-place sorting algorithm. It is notable for its efficiency in minimizing the number of writes to the original array.

Characteristics of Cycle Sort
In-place sorting: cycle sort sorts the array where it is, avoiding the need for additional array storage.
Not stable: because cycle sort does not maintain the relative order of equal elements, it is not a stable sorting algorithm.

Time and Space Complexity
Best case: O(n²). Even in the best case, where elements are mostly in the correct position, the algorithm still goes through each element to determine its final position.
Average case: O(n²). The average-case complexity is also quadratic since, on average, each element needs to move through the list, leading to numerous comparisons and potential swaps.
Worst case: O(n²). In the worst case, each element may need to be compared with every other element; this scenario occurs when the array is sorted in reverse order.
Auxiliary space: O(1). Cycle sort is an in-place sorting algorithm; it only uses a fixed amount of additional space for variables like item and pos, with no additional data structures, so the space complexity is constant.

How it Works
First, look at index 0 and note the number stored there. Count how many elements in the rest of the array (starting the count at index 1) are less than that element; this count gives the element's correct index, and the two values are swapped.
Second, if the element now at the current index is already in its correct position, move on to the next index and repeat the same counting-and-swapping step.

Visualization: [6, 3, 9, 7, 2, 4, 1, 8, 5]

THANK YOU!

DATA OPERATIONS
Borcelle University
Presented by Boodle Kineme (Isaiah Gabriel L. Rafailes)

DEFINITION
Traversing in data structures means systematically visiting and accessing each element at least once, allowing a user to display or perform operations on every element. Generally, linear data structures offer only one way to traverse the data, while nonlinear data structures have multiple ways to traverse the data.

TYPES OF TRAVERSAL: According to Method; According to Data Structure; Other Types of Traversal. Boodle Kineme (Isaiah Gabriel L.
Rafailes) ACCORDING TO METHOD: ITERATIVE Involves using loops to visit each element or node in a data structure. Boodle Kineme (Isaiah Gabriel L. Rafailes) ACCORDING TO METHOD: RECURSIVE Traversal of elements using functions calling themselves (recursion). Boodle Kineme (Isaiah Gabriel L. Rafailes) KEY DIFFERENCES: ITERATIVE Low Ideal for Complex, Explicit Memory Linear Data Specialized Control Structures Usage Coding RECURSIVE Ideal for High Hierarchial Simpler, Implicit Memory Intuitive Control Data Usage Coding Structures Boodle Kineme (Isaiah Gabriel L. Rafailes) ACCORDING TO DATA STRUCTURE: LINEAR TRAVERSING visiting elements in linear data structures in a sequential, straightforward, or iterative manner. Boodle Kineme (Isaiah Gabriel L. Rafailes) ACCORDING TO DATA STRUCTURE: NONLINEAR TRAVERSING Binary Tree Traversal Graph Traversal Boodle Kineme (Isaiah Gabriel L. Rafailes) BINARY TREE TRAVERSAL Level-order Tree - data structure that has parent and child nodes. Pre-order In-order Binary tree - tree with at most two child nodes per Post-order parent node. Boodle Kineme (Isaiah Gabriel L. Rafailes) LEVEL ORDER the tree’s elements are iterated by level. First on the current level then the next until the end of the tree is level order reached. traversal: 1 2 3 4 5 Boodle Kineme (Isaiah Gabriel L. Rafailes) IN-ORDER 1. The Left part of the subtree. 2. The Current node of the tree. in-order 3. The Right part of the subtree. traversal: 4 2 5 1 3 Boodle Kineme (Isaiah Gabriel L. Rafailes) PRE-ORDER 1. The Current node of the tree. 2. The Left part of the subtree. in-order 3. The Right part of the subtree. traversal: 1 2 4 5 3 Boodle Kineme (Isaiah Gabriel L. Rafailes) POST-ORDER 1. The Left part of the subtree. 2. The Right part of the subtree. post-order 3. The Current node of the tree. traversal: 4 5 2 3 1 Boodle Kineme (Isaiah Gabriel L. Rafailes) GRAPH TRAVERSAL Breadth-first search graph - data structure with nodes containing (BFS) data connected by edges. Depth-first search (DFS) Boodle Kineme (Isaiah Gabriel L. Rafailes) BREADTH-FIRST SEARCH (BFS) Process: iterate over all the elements of a level and then move on to the next level. Order in BFS: A B C D Boodle Kineme (Isaiah Gabriel L. Rafailes) DEPTH-FIRST SEARCH (DFS) Process: iterate completely over the current node and its branch until reaching its end, moving on to the next. Order in DFS: A B D C Boodle Kineme (Isaiah Gabriel L. Rafailes) in-Memory OTHER Random TRAVERSAL Bidirectional TYPES Topological Sort Boodle Kineme (Isaiah Gabriel L. Rafailes) IN-MEMORY RANDOM BIDIRECTIONAL accessing and accesses ability to processing elements in a traverse a data elements stored way that structure in in memory (as doesn't follow both directions opposed to a set pattern such as doubly disk-based to explore linked lists and storage) in a data certain types of systematic way. unbiasedly graphs. Boodle Kineme (Isaiah Gabriel L. Rafailes) TOPOLOGICAL SORT traversal algorithm for ordering Kahn’s vertices in a directed acyclic Algorithm graph (DAG) where for every directed edge u→v, u appears before v. Depth-first search (DFS)- Used in dependency resolution based and scheduling. Boodle Kineme (Isaiah Gabriel L. Rafailes) 1. Calculate in-degree (number of 1. KAHN’S incoming edges) for each vertex. 2. Initialize queue with all vertices having ALGORITHM an in-degree of 0. a. While queue is not empty, remove a Time complexity: vertex and append to topological order. O(V+E) b. For each neighbor of vertex, decrease its in- V - number of degree. 
If the in-degree becomes 0, add it to vertices the queue. E - number of edges. 3. If all vertices are processed and part of topological order, the graph is a DAG. Otherwise, it has a cycle Boodle Kineme (Isaiah Gabriel L. Rafailes) DFS-BASED 1. Perform a DFS traversal of the graph, 1. marking nodes as visited. Time complexity: 2. On reaching the end of a node, push O(V+E) the node onto a stack. V - number of vertices 3. Once all nodes are processed, pop E - number of nodes from the stack to get the edges. topological order. Boodle Kineme (Isaiah Gabriel L. Rafailes) PROS AND CONS ITERATIVE Bad in Unsuited for Controlled Predictable Complexity Recursive Memory Use Problems RECURSIVE Suited for Stack High Simple Memory Recursive Overflow Problems Usage Boodle Kineme (Isaiah Gabriel L. Rafailes) PROS AND CONS LINEAR Limited No Structural Simple Efficient Information LEVEL ORDER Finds High Shortest Natural for Complex Memory Paths in Trees Usage Graphs Boodle Kineme (Isaiah Gabriel L. Rafailes) PROS AND CONS IN-ORDER Sorted Output Ideal for Only Suitable Binary for Binary Trees Trees PRE-ORDER Best for Not Sorted Limited by Simple Tree Implementation Depth of Tree Copying Boodle Kineme (Isaiah Gabriel L. Rafailes) PROS AND CONS POST-ORDER Unsorted Limited by Best for Tree Simple Depth of Tree Deletion DEPTH-FIRST SEARCH Memory Can Get Shortest Good at Path not Efficient Stuck Path Guaranteed Finding Boodle Kineme (Isaiah Gabriel L. Rafailes) PROS AND CONS BREADTH-FIRST SEARCH Ensures Layered High Not for Shortest Path in Exploration Memory Weighted Unweighted Usage Graphs Graphs IN-MEMORY Efficient Memory Not Always Limitations Feasible Simple Boodle Kineme (Isaiah Gabriel L. Rafailes) PROS AND CONS RANDOM Simple Unpredictable Ideal for Inefficient Randomization Time Control BIDIRECTIONAL Efficient Memory Complex to Pathfinding Intensive Implement Boodle Kineme (Isaiah Gabriel L. Rafailes) PROS AND CONS TOPOLOGICAL SORT Sets Order of Not for Dependencies Cyclic Graphs Excels at Directed Complex to Acyclic Implement Graphs Boodle Kineme (Isaiah Gabriel L. Rafailes) USES OF TRAVERSING Searching Aggregating and Sorting Manipulating Data Processing Trees Managing Exploring Memory Graphs Finding Paths Updating Handling Structures Serialization Boodle Kineme (Isaiah Gabriel L. Rafailes) 1 2 3 SEARCHING SORTING DATA AGGREGATION & MANIPULATION Find a specific Arrange elements in Summarize or element or check for a specific order, modify data stored its existence in a involves traversing in a data structure. data structure elements first. requires traversal. Ex. Microsoft Word Ex. Sorting files by Find and Replace Ex. Find in document alphabetical order or function of word by increasing or processors decreasing size Boodle Kineme (Isaiah Gabriel L. Rafailes) 4 5 6 EXPLORING GRAPHS PROCESSING TREES FINDING PATHS Depth-First Search Pre-Order and Post- Breadth-First Search (DFS): Used to Order Traversal: Used (BFS): Used to traverse a graph for various tree explore nodes level deeply, exploring as operations like copying by level from a far as possible or evaluating starting node. before backtracking. expressions. Ex. Finding the most Ex. Maze solving Ex. Youtube time-saving route in algorithms Recommendations Grab rides. Boodle Kineme (Isaiah Gabriel L. Rafailes) 7 8 9 MEMORY MANAGEMENT UPDATING STRUCTURES HANDLING SERIALIZATION traverse memory In trees or lists, blocks to allocate traversal allows Traverse data structures before space efficiently. 
updating or modifying converting them into a specific values. more portable format Ex. Allocation of and vice versa. memory between Ex. Notifications in opened and Messenger Ex. Zipping a file unopened browser directory containing tabs various file types using Winrar. Boodle Kineme (Isaiah Gabriel L. Rafailes) APPLICATIONS OF TRAVERSING Aggregating and File Systems Web Manipulating Crawlers Data Databases GPS & Social Network Naviagation Networks Routing Recommendation Compilers Scheduler Algorithms Systems Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PSEUDOCODE (LINEAR TRAVERSAL - ARRAY Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAM LINEAR TRAVERSAL - ARRAY Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PSEUDOCODE (TRAVERSAL - LINKED LIST) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PSEUDOCODE (TRAVERSAL - LINKED LIST) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PSEUDOCODE (TRAVERSAL - LINKED LIST) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAMS (TRAVERSAL - LINKED LIST) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAMS (TRAVERSAL - LINKED LIST) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAMS (TRAVERSAL - LINKED LIST) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAMS (TRAVERSAL - LINKED LIST) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PSEUDOCODE (TRAVERSAL - STACK) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAMS (TRAVERSAL - STACK) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAMS (TRAVERSAL - STACK) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PSEUDOCODE (TRAVERSAL - QUEUE) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PSEUDOCODE (TRAVERSAL - QUEUE) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAMS (TRAVERSAL - QUEUE) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAMS (TRAVERSAL - QUEUE) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAMS (TRAVERSAL - QUEUE) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PSEUDOCODE (TRAVERSAL - TREE) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PSEUDOCODE (TRAVERSAL - TREE) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PSEUDOCODE (TRAVERSAL - TREE) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PSEUDOCODE (TRAVERSAL - TREE) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PSEUDOCODE (TRAVERSAL - TREE) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAMS (TRAVERSAL - TREE) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAMS (TRAVERSAL - TREE) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAMS (TRAVERSAL - TREE) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAMS (TRAVERSAL - TREE) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAMS (TRAVERSAL - TREE) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PSEUDOCODE (TRAVERSAL - GRAPH) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PSEUDOCODE (TRAVERSAL - GRAPH) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PSEUDOCODE (TRAVERSAL - GRAPH) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAM (TRAVERSAL - GRAPH) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAM (TRAVERSAL - GRAPH) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAM (TRAVERSAL - GRAPH) Boodle Kineme (Isaiah Gabriel L. Rafailes) SAMPLE PYTHON PROGRAM (TRAVERSAL - GRAPH) Boodle Kineme (Isaiah Gabriel L. 
Rafailes) UP NEXT: SEARCHING OPERATION IN DSA Boodle Kineme (Isaiah Gabriel L. Rafailes) ChatGiPiT Matthew pili searching IN Data structure Matthew pili searching The process of finding a specific item in a collection of data, such as an array or list, involves examining each element of the dataset to determine whether it matches the desired value. This search can be performed in several ways Matthew pili types of searching LINEAR SEARCH BINARY SEARCH DFS OR DEPTH-FIRST SEARCH BFS OR BREADTH-FIRST SEARCH INTERPOLATION SEARCH JUMP SEARCH linear search Linear Search, or Sequential Search, is one of the most basic and easy-to-understand searching algorithms. It operates by checking each element in a data collection (such as an array or list) one by one until it finds a match or has looked through the entire collection. Matthew pili linear search The algorithm checks each element in the collection one at a time, considering each as a possible match for the key you’re searching for. If it finds an element that matches the key exactly, the search is successful, and it returns the index of that key. If all elements are examined and none match the key, it concludes with “No match is found.” Matthew pili Binary search Binary Search is a searching algorithm used with sorted arrays. It operates by repeatedly halving the search range, leveraging the array’s sorted order to efficiently narrow down the potential location of the target element. Matthew pili binary search Divide the search space into two halves by finding the middle index “mid.” Compare the middle element of the search space with the key. If the middle element is the key, the search is successful, and the process ends. If the middle element is not the key, determine the next search space: If the key is smaller than the middle element, search the left half. If the key is larger than the middle element, search the right half. Repeat the process until the key is found or the search space is exhausted. Matthew pili Depth First Search Depth-First Search (DFS) is an algorithm for exploring a graph or tree by going as deep as possible along each branch before backtracking. Matthew pili Depth first search Start at the initial node. Mark the node as visited. Visit each adjacent unvisited node: Move to the next node. Repeat the process for this node. Backtrack when there are no more unvisited adjacent nodes. Stop when all nodes are visited or the target node is found. Matthew pili Breadth-First Search BFS is a method used to explore a graph or tree by visiting all the closest nodes first before moving to nodes that are further away. It uses a queue to keep track of which nodes to visit next. Matthew pili Breadth-First Search Start at the beginning node. Mark the node as visited and add it to a queue. While the queue isn't empty: Remove a node from the queue. Visit all its unvisited neighbors, mark them as visited, and add them to the queue. Keep repeating until all nodes are visited or you find what you're looking for. Matthew pili Exponential Search Exponential Search is a search algorithm used to find the range in which a target element lies in a sorted array, after which binary search is applied within that range. It is particularly useful for large, unbounded or infinite arrays. Matthew pili Exponential Search Start at the first element of the array. Compare the target element with the element at the current index. If the target element is greater, increase the index exponentially (i.e., double the index: 1, 2, 4, 8, 16,...). 
Once you find an element greater than or equal to the target, use binary search between the previous and current index. Return the position if found, or "not found" if the target isn't present. Matthew pili Jump Search Jump Search is a searching algorithm used for sorted arrays. It works by jumping ahead by a fixed number of steps (block size) instead of checking each element, like in linear search. When it finds a block where the target element might be, it performs a linear search within that block. Matthew pili Exponential Search Choose a block size (usually the square root of the array's length). Jump ahead by the block size through the array until you find an element larger than or equal to the target. Perform a linear search within the identified block to find the exact position of the target. If the element is found, return its index; otherwise, return "not found." Matthew pili USES of SEARCHING DATA RETRIEVAL INFORMATION RETRIEVAL FILE SEARCH Matthew pili EXAMPLE CODE Matthew pili Linear Search Matthew pili Binary Search Matthew pili Depth first search Matthew pili Breadth-First Search Matthew pili Exponential Search Matthew pili Jump Search Matthew pili linear search PROS CONS Simplicity Inefficiency with Large Flexibility Datasets Direct Approach No Optimization Performance Degradation Matthew pili Binary Search PROS CONS Reduced Comparisons Sorted Data Required Optimal for Sorted Data More Complex Not Suitable for Small Data Matthew pili Depth first search PROS CONS Simple Implementation Possible Infinite Loops Backtracking Matthew pili Breadth-First Search PROS CONS Shortest Path Guarantee High Memory Usage Slower for Deep Trees Complete Exploration Less Memory Efficient Matthew pili Exponential Search PROS CONS Efficient for Large Data Complex Implementation Combines Techniques Less Effective for Small Data Matthew pili Jump Search PROS CONS Fewer Comparisons Block Size Impact Easy to Implement Performance Variability Matthew pili application of SEARCHING SIMPLE CONTACT LIST SEARCH SHOPPING CART ITEM SEARCH FILE SYSTEMS SHORTEST PATH ALGORITHMS PERFORMANCE OPTIMIZATION Matthew pili Thank you for listening Matthew pili EyAy INSERTION INSERTION Insertion means to add an element in the given data structure. We can insert an element at any position in the array like beginning, end or at any given indexed position. Push (Stack) Push is an operation that adds an element to the top of the stack. A stack follows the LIFO (Last In, First Out) principle, meaning the last element added is the first to be removed. Push (Stack) Push is an operation that adds an element to the top of the stack. A stack follows the LIFO (Last In, First Out) principle, meaning the last element added is the first to be removed. Push (Stack) Push (Stack) Simplicity Limited access Efficiency Potential stack overflow Useful in specific LIFO Restriction scenarios Memory Memory Overhead management Enqueue (Queue) Enqueue is an operation used in the Queue data structure, where an element is added to the end (rear) of the queue. A queue follows the First In, First Out (FIFO) principle, meaning the first element added will be the first one to be removed. Enqueue(Queue Push (Stack) Simplicity Limited access Efficiency Potential stack overflow Useful in specific LIFO Restriction scenarios Memory Memory Overhead management Insert(Linked List) Enqueue is an operation used in the Queue data structure, where an element is added to the end (rear) of the queue. 
A queue follows the First In, First Out (FIFO) principle, meaning the first element added will be the first one to be removed. Insert(Linked List) Insert(Linked List) Dynamic Size Memory Overhead Sequential Accesss Efficient Complexity of Insertions and Implementation Deletions Cache Performance No Wasted Space Increase Ease of Complexity for Multi- Implementing level insertion Complex Data Structures Insert(Array) Insertion in an array refers to adding a new element into a specific position within the array. Key Steps 1. Determine the Position: 2. Shift Elements: 3. Insert the Element: 4. Update the Array Size: Insert(Array) Insert(Array) Direct Access Shifting Elements Simplicity Fixed Size Predictable Inefficient Memory Memory Layout Reallocation for Dynamic arrays Memory Efficiency for Static arrays Costly Insertions in the Middle BRAVEN GRAZER N. REYES DELETION BRAVEN GRAZER N. REYES Deletion is an essential operation in data structures that requires removing a specific element from the structures. Using this operation, users have the ability DEFINITION to dynamically modify the data structure. To begin deletion, first identify what element or data needs to be removed, taking it out from the data structure, then reorganizing and assembling the data in order to maintain the structure. Braven Reyes TYPES OF DELETION Braven Reyes When to Use? Head When you want to remove the first inserted element in Deletion a data structure. Pertains to the removal of the first element in a linear data structure. It Where to Use? maintains the order based on the insertion sequence. Using this, the head pointer is updated to point to the next element, therefore removing the Singly linked list first element as the head. Double Linked List Queue Braven Reyes Head Deletion Pros Efficient when used in Linked Lists and Queues Cons Inefficient in arrays May lead to reorganizing the whole data structure Braven Reyes When to Use? Tail When you want to remove a recently added Deletion element in the structure, like in the last item of a stack, or the end node in a linked list Removal of the last element in a linear data structure. In some data Where to Use? structures, it requires traversing the structure to find the last node. Tail deletion is commonly used in data structures that take usage of the LIFO Singly linked list principle. Double Linked List Stack Braven Reyes Tail Deletion Pros Good for structures with LIFO principle Cons Complicated in arrays Braven Reyes When to Use? Point-Based When deleting an element from a specific position Deletion Removal of an element in a specific position or index within a data Where to Use? structure. The position is provided or determined by other operations, mainly by Searching. Array Linked List Binary Search Tree Braven Reyes Point-Based Deletion Pros Useful for direct removal Cons Some structures may require shifting of elements Braven Reyes When to Use? Value-Based Used when you know the value of the element you Deletion want to delete but not its position. Searching and deleting an element based on its value. Requires finding Where to Use? the element first then removing it. Unsorted Array/List Hash Table Binary Search Tree Braven Reyes Value-Based Deletion Pros Easy to use if you don’t care about the element’s position Cons Problem with duplicates Braven Reyes Automatic When to Use? 
Deletion/Garbage When you want to free up memory without Collection manually deleting elements in the data structure Form of automatic deletion wherein the system reclaims memory that is no Where to Use? longer used. This happens in environments with automatic memory management like Java and Python. Heap Memory Smart Pointers Braven Reyes Automatic Deletion/Garbage Collection Pros Easy to use if you don’t care about the element’s position Cons Problem with duplicates Braven Reyes When to Use? Lazy When frequent deletions can lead to expensive Deletion operations such as Rebalancing or Shifting Strategy wherein elements are marked as deleted but not removed Where to Use? immediately, the structure postpones the removal until another operation is in process. Binary Search Trees Heaps Hash Tables Priority Queues Braven Reyes Lazy Deletion Pros No need for immediate rebalancing or restructuring Simplifies deletion logic Cons “Deleted” nodes still take up memory Eventual clean up required Braven Reyes Deletion When to Use? with Balance maintenance is important for the overall data structure’s performance. Rebalancing After removing an element, the structure performs operations to Where to Use? ensure certain balance properties. This process ensures that all future operations remain efficient. AVL Trees Red-Black Trees B-Trees Braven Reyes Deletion with Rebalancing Pros Maintains Efficiency Cons Higher Time Complexity Higher Memory Usage Braven Reyes Binary Tree/Binary Search Tree When to Use? Deletion Binary tree deletion is used when elements need to be removed from a tree while maintaining its order. Removing nodes based on their This type of deletion is applicable in binary trees position and number of children. This ensures that the Tree’s structure is where nodes are structured with respect to a maintained while removing the nodes. parent-child relationship. 
Braven Reyes Leaf Node Deletion Removing a node without any children Pros Simple deletion Fast Operation Cons Still requires to search for the leaf node Braven Reyes One Child Deletion Removing a node with one child and Pros replacing it with that child Restructuring is easy since there is only one child Tree structure remains valid after node deletion Cons Requires searching for the node to delete Braven Reyes Two Children Deletion Removing a node with two children and Pros replacing it with the max or min value in the subtree Maintains correct node ordering Cons Requires finding the predecessor or successor of the node Braven Reyes THANK YOU traversing & searching target index: 2 target found at index: 3 target found at index: 8 Home Service About Us Contact INSERTION AND DELETION Examples Prado, Joshua Home Service About Us Contact INSERTION IN LISTS OUTPUT Prado, Joshua Home Service About Us Contact DELETION IN LISTS OUTPUT Prado, Joshua Home Service About Us Contact DELETION IN LISTS OUTPUT Prado, Joshua Home Service About Us Contact INSERTION AND DELETION IN LISTS Prado, Joshua Home Service About Us Contact INSERTION IN STACKS Prado, Joshua Home Service About Us Contact OUTPUT Prado, Joshua Home Service About Us Contact OUTPUT Prado, Joshua Home Service About Us Contact USE OF APPEND() Prado, Joshua Home Service About Us Contact 20 30 40 50 10 50 40 40 30 30 30 20 20 20 20 10 10 10 10 10 Prado, Joshua Home Service About Us Contact OUTPUT Prado, Joshua Home Service About Us Contact DELETION IN STACKS OUTPUT Prado, Joshua Home Service About Us Contact 50 50 40 40 30 30 20 20 10 10 Prado, Joshua Home Service About Us Contact IMPORTING LIBRARIES QUEUE LISTS INSERTION AND DELETION IN QUEUES Prado, Joshua Home Service About Us Contact IMPORTING LIBRARIES Importing collections.deque offers several advantages due to its design and performance characteristics. deque is optimized for appending and popping elements from both ends. These operations are performed in O(1) time complexity, making deque ideal for use cases where you need to frequently add or remove items from either end of the queue. Prado, Joshua Home Service About Us Contact IMPORTING LIBRARIES OUTPUTS Prado, Joshua Home Service About Us Contact QUEUES FROM LISTS Importing collections.deque offers several advantages due to its design and performance characteristics. deque is optimized for appending and popping elements from both ends. These operations are performed in O(1) time complexity, making deque ideal for use cases where you need to frequently add or remove items from either end of the queue. Prado, Joshua Home Service About Us Contact QUEUES FROM LISTS OUTPUTS: Prado, Joshua Home Service About Us Contact ADDITION AND DELETION IN LINKED LISTS Prado, Joshua Home Service About Us Contact NODE CLASS self - is a reference to the current instance of a class. It is used within class methods to access attributes and methods associated with the instance. When defining instance methods (methods that operate on the instance of the class), self must be the first parameter. This allows the method to operate on the specific object that called it. __init__ - method in Python is a special method caThe __init__ method can take parameters in addition to self. These parameters are used to initialize the object's attributes.lled the initializer or constructor. It is automatically called when a new instance of a class is created. Its primary purpose is to initialize the instance's attributes and set up any necessary state. 
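The node-class code referred to above is not in the transcript; below is a minimal sketch of what such a class typically looks like, matching the self and __init__ description (the attribute names data and next are assumptions).

```python
class Node:
    """A single linked-list node holding a value and a reference to the next node."""
    def __init__(self, data):
        # __init__ runs automatically when Node(...) is created;
        # self refers to the specific instance being initialized.
        self.data = data      # the value stored in this node
        self.next = None      # reference to the next node (None marks the end of the list)

# Example: three nodes linked together, then traversed from the head
head = Node(1)
head.next = Node(2)
head.next.next = Node(3)

current = head
while current:
    print(current.data)
    current = current.next
```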
LINKED LIST CLASS
.head: in the context of a linked list, .head is an attribute commonly used to reference the first node in the list. It serves as the starting point for traversing or manipulating the list.

INSERTION METHOD - INSERT()
(Diagram labels from the slide: self.head, current, current.next, data, node(data).)

DELETION METHOD - DELETE()
The delete() method removes a node from the linked list at the specified index. The index parameter indicates the position of the node to be deleted.
if index >= self.length(): this line checks whether the provided index is greater than or equal to the length of the list. The self.length() method returns the number of nodes in the list. If the index is out of bounds, the method prints an error message and returns early, preventing further processing.
(Diagram labels from the slide: self.head, current_node, last_node, current_node.next, last_node.next; current_node is the node to be deleted.)

DEBUGGING METHODS - LENGTH() & __STR__()
LENGTH(): counts the current nodes of the list by traversing each node starting from self.head.
__STR__(): a special method that is used to define a string representation of an object. It allows you to control what is returned when you use print().

USAGE OF INSERT() & DELETE() METHODS

Presented by Group 1
Top Coding Language: PHP 25, Java 10, GO 22, JavaScript 08, HTML 20, Python 5

Start
Presented by: Rysa Arianne R. Igcasenza

Just like how every delicious dish needs the right ingredients and steps to turn out perfect, every computer needs a well-prepared recipe (or program) to cook up the right results.

Welcome to my shop. Here's your basket. Let's get shopping! Get all the ingredients we need to cook.
Structural Produce, RigidBone, SOY GENERAL, Buggy Beef, Porktability, ClearSlice Cabbage, SlimyPeel, SwiftGrain Rice, FlexFlour, TraceCrumb Bread Crumbs

Features of a Good Programming Language: Documentation, Efficiency, Portability, Structural, Readability, Flexibility, Generality
Recipe: Pork Tonkatsu

Efficiency
Every program requires certain processing time and memory to process the instructions and data.
Structural
To develop a program, the task must be broken down into a number of subtasks. These subtasks are developed independently, and each subtask is able to perform the assigned job without the help of any other subtask. If a program is developed structurally, it becomes more readable, and the testing and documentation process also gets easier.

Documentation
Documentation is one of the most important components of application development. Even if a program is developed following the best programming practices, it will be rendered useless if the end user is not able to fully utilize the functionality of the application. A well-documented application is also useful for other programmers because, even in the absence of the author, they can understand it.

Flexibility
A program should be flexible enough to handle most changes without having to rewrite the entire program. Most programs are developed for a certain period, and they require modifications from time to time.

Readability
The program should be written in such a way that other programmers or users can follow its logic without much effort. If a program is written structurally, it helps the programmers to understand their own program in a better way.
Portability
Portability refers to the ability of an application to run on different platforms (operating systems) with minimal or no changes. Due to rapid development in hardware and software, platform changes are a common phenomenon nowadays. Hence, if a program is developed only for a particular platform, the life span of the program is severely affected.

Generality
Apart from flexibility, the program should also be general. Generality means that if a program is developed for a particular task, then it should also be usable for all similar tasks of the same domain. For example, if a program is developed for a particular organization, then it should suit all other similar organizations.
