Operating System: Purpose, Multiprogramming & Time-Sharing

Questions and Answers

How does multiprogramming increase CPU and I/O utilization?

Multiprogramming overlaps the demands for the CPU and I/O devices from various users, ensuring the CPU always has something to execute and allowing multiple tasks to run on different I/O devices.

In a time-sharing system, what advantage do users experience because of the quick response times?

Users can interact with the computer while programs are running, often without noticing delays, due to the system's rapid switching between tasks.

Explain why ensuring the same degree of security in a time-shared machine as in a dedicated machine is difficult.

Any protection scheme devised by humans can be fallible, and the more complex the scheme, the harder it is to ensure its correct implementation and security.

Why is it important for an operating system to be interrupt-driven?

An interrupt-driven OS can wait for events, signaled by interrupts, allowing it to respond only when necessary, rather than continuously polling for activity.

What is the purpose of a mode bit in the context of privileged instructions, and where is it typically located?

The mode bit distinguishes between user mode and monitor mode; it is held in hardware (in the CPU's status register), and it determines whether privileged instructions may execute.

Why is the Set value of timer instruction typically designated as privileged?

If a user could set the timer's value, they could disrupt system operations, which rely on the timer functioning correctly.

Why must the access I/O device instruction be privileged?

Without privilege control, a user might attempt to access an I/O device incorrectly, potentially damaging the system.

Describe how system calls bridge the gap between user-level processes and the operating system.

System calls are used by user-level processes to request services from the OS, such as I/O operations and resource allocation.

Name three methods for passing parameters from a user program to the OS during a system call.

Parameters can be passed in registers, in a memory location, or on the stack.

What actions does the kernel take during a context switch to ensure proper execution?

The kernel saves the current process's state to its PCB, loads the next process's state from its PCB, and updates the program counter to the next instruction.

In the context of process management, what defines a 'zombie process,' and why is it problematic?

A zombie process has completed execution but still has an entry in the process table to store its exit status for the parent to read; these entries consume system resources and can lead to resource exhaustion.

Explain the key difference between shared memory and message passing as inter-process communication mechanisms.

Shared memory involves direct reading and writing to a designated memory area, requiring synchronization, while message passing involves sending messages through the kernel, which is more secure but may have overhead.

Contrast synchronous and asynchronous communication in the context of inter-process messaging.

Synchronous communication blocks the sender/receiver until the operation completes, while asynchronous communication allows the process to continue execution without waiting.

What are the key distinctions between user-level threads and kernel-supported threads?

User-level threads are managed by a library at the user level and are faster to create, whereas kernel-supported threads are managed directly by the OS and can utilize multiple processors effectively.

Define what is meant by a 'critical section' in concurrent programming.

A critical section is a code segment that must be executed by only one process at a time to avoid race conditions and maintain data integrity.

Explain the 'progress' requirement for solutions to the critical-section problem.

If no process is in a critical section, only processes not in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.

Explain what an atomic instruction is and why atomicity is critical in the context of synchronization primitives such as wait operations.

An atomic instruction is one that executes completely without interruption. If a wait operation is not executed atomically, mutual exclusion may be violated.

What is a spinlock, and explain its advantage and disadvantage as a synchronization mechanism.

A spinlock is a synchronization mechanism where a thread repeatedly checks a lock in a loop until it becomes available. The advantage is very fast locking and unlocking, but the disadvantage is wasted CPU cycles while waiting.

Why are spinlocks primarily used on multiprocessor systems rather than single-processor systems?

On a single-processor system, a spinning thread wastes the CPU cycles that the lock holder needs to finish its critical section; if the waiting thread is not preempted, the holder can never run to release the lock.

Describe the role and function of pthread_mutex_lock(pthread_mutex_t *mutex) in multi-threaded programming.

pthread_mutex_lock is used to acquire a mutex lock before accessing a shared variable, ensuring exclusive access and preventing race conditions.

Explain the primary distinction between short-term, medium-term, and long-term scheduling.

Short-term scheduling selects from ready jobs in memory to allocate the CPU. Medium-term scheduling involves swapping partially run programs in and out of memory. Long-term scheduling determines which jobs are brought into memory for processing.

Why is distinguishing between I/O-bound and CPU-bound programs important for a scheduler?

I/O-bound programs perform small computations before I/O, while CPU-bound programs use the CPU more extensively. Prioritizing I/O-bound programs can better utilize resources.

Why is a transition from the WAITING state to the RUNNING state considered illegal?

The scheduler selects processes to run from the ready queue, so a blocked thread must first be placed in the ready queue before it can be selected to run.

Explain how optimizing response time might reduce overall CPU utilization.

Optimizing response time requires frequent context switches so that interactive tasks run quickly; the overhead of this switching consumes CPU time, reducing the fraction spent on useful work.

Discuss why shortest job first (SJF) and priority-based scheduling algorithms could lead to starvation.

SJF gives priority to processes with the shortest job length, and priority-based algorithms give priority to processes with the highest priority, both potentially causing longer or lower-priority processes to be indefinitely postponed.

What would be the primary disadvantage of implementing duplicated pointers in the ready queue of a Round Robin scheduling algorithm?

With duplicated pointers, context switching has a much larger impact than before, and removing a process from the queue becomes significantly harder, since every pointer to its PCB must be found and deleted.

How could you modify the basic RR algorithm to achieve the same effect without the duplicate pointers?

Give the process a larger time quantum; doubling its quantum has the same effect as holding two pointers to its PCB in the ready queue.

What is the difference between turnaround and waiting time for a process?

Turnaround time is the total time from submission to completion, including both CPU bursts and waiting; waiting time is only the time spent sitting in the ready queue.

Why might the SJF algorithm still be beneficial even if it is not the ideal option for the selected workload?

Because SJF reduces average waiting time by prioritizing short processes in the ready queue, it can still improve a system's overall scheduling performance.

State one instance of when to use each system call function.

int pthread_mutex_lock(pthread_mutex_t *mutex): used before accessing a shared variable, to lock it. int pthread_mutex_unlock(pthread_mutex_t *mutex): used when the program finishes using the shared variable, to unlock it.

Flashcards

Operating System Purposes

Provides an environment for users to execute programs, manages I/O, supervises program execution, and controls computer resources.

Multiprogramming System

A system where several programs reside in memory concurrently, switching among them for efficient processing and minimal idle time.

Advantage of Multiprogramming

Increases CPU and I/O utilization by overlapping demands from various users.

Timesharing/Multitasking

Allows users to perform multiple tasks at once by rapidly switching the CPU among users, providing an interactive system.

Advantage of Time-sharing

Enables users to interact with the computer while programs run with short response times, built upon Multiprogramming.

Implementing Multitasking

Giving each user a small slice of time in round-robin fashion.

Problems in Multi-user Systems

Copying programs/data or using system resources without proper accounting, causing security issues.

OS is Interrupt Driven

The OS waits for events signaled by interrupts when there are no processes, I/O, or users to respond to.

Privileged Instruction

Hardware instruction executable only in monitor/system mode.

User Mode attempting to run Privileged Instruction

Hardware treats it as an illegal instruction and sends a trap to the OS.

Atomic instruction

An instruction that can be executed completely without any interruption.

Zombie Process

A process that completed execution but still has an entry to store its exit status.

Shared Memory vs. Message Passing

Shared memory involves direct reads/writes with required synchronization. Message passing involves sending/receiving messages through the kernel.

Synchronous vs. Asynchronous Communication

Blocks sender/receiver until operation completes vs allowing the process to continue execution.

User-Level Threads

Managed by a library at the user level; faster to create and manage.

Kernel-Supported Threads

Managed directly by the OS; allows separate scheduling and effective use of multiple processors.

Critical Section

A code segment that only one process may execute at a time.

Critical-Section Problem

Making sure when one process is in its critical section, no other process enters.

Requirements for Critical Section Solution

Mutual Exclusion, Progress, Bounded Waiting.

Mutual Exclusion

Ensures only one process accesses shared resource at a time.

Spinlock

A thread repeatedly checks whether the lock is available, rather than blocking and waiting.

Short-Term (CPU) Scheduler

Selects jobs from memory ready to execute, allocates the CPU.

Medium-Term Scheduler

Swapping scheme removes and reinstates partially run programs for time-sharing.

Long-Term (Job) Scheduler

Determines jobs brought into memory for processing.

Waiting -> Running

Illegal: the scheduler selects from the ready threads, so a blocked thread must first be placed in the ready queue.

Running -> Waiting

Running process can become blocked.

Scheduling algorithms

FCFS (First-Come, First-Served), SJF (Shortest Job First)

Study Notes

Purposes of an Operating System

  • Provides a user-friendly environment for program execution.
  • Manages I/O device operations and control.
  • Supervises user program execution, preventing errors.
  • Controls computer resources and provides a base for application creation.

Multiprogramming Systems

  • Several programs reside in memory simultaneously.
  • Enables task switching for efficient processing and reduces idle time.
  • Increases CPU and I/O utilization.
  • Achieves efficiency by overlapping CPU and I/O demands from different users.
  • Maximizes CPU use by having something ready to execute.
  • Increases I/O utilization by running multiple tasks on different I/O devices.

Time-Sharing (Multitasking) Systems

  • Allow users to perform multiple tasks at once using scheduling and multiprogramming.
  • Enables interactive systems for multiple users.
  • Rapidly switches the CPU between users.
  • Programs read inputs from the terminal, with output displayed immediately.
  • Increases CPU and I/O utilization, allows user interaction, and provides short response times.
  • User programs experience quick response times, often in milliseconds, even with concurrent programs.
  • Implemented by giving each user a time slice in round-robin fashion; a job runs until its time slice ends.
  • The CPU also switches away early when the current process must wait for an event, such as I/O, before its slice expires.
  • A context switch occurs whenever a job needs to wait.

Security in Multiprogramming and Time-Sharing Environments

  • Multiple users sharing a system can lead to security issues.
  • Problems include stealing or copying programs/data and unauthorized use of system resources like CPU, memory and disk space.
  • Achieving the same level of security as in a dedicated machine is unlikely due to the fallibility of protection schemes.
  • More complex schemes are harder to fully trust.

Operating Systems as Interrupt-Driven

  • The OS waits for an event (signalled by an interrupt) when idle.
  • This happens when no processes are running, no I/O is being serviced, and no users are interacting.

Privilege Instructions

  • Hardware instructions executable only in monitor/system/supervisory mode.

Implementation of Privilege Instructions

  • Hardware adds a mode bit: '1' for monitor mode, '0' for user mode.
  • This single bit tells the hardware whether privileged instructions may currently execute.
  • The OS sets the bit to user mode before giving control to user programs.
  • Hardware switches the bit to monitor mode during a trap or interrupt.
  • The OS can also set the bit to system mode itself.

Hardware Response to Privilege Instruction Attempts in User Mode

  • Hardware identifies it as an illegal instruction.
  • The hardware sends a trap to the OS.

Privileged Instructions and Consequences of Not Privileging

  • Set Value of Timer: Privileged, or users can change the timer, disrupting correct system operation.
  • Read the Clock: Not privileged.
  • Clear Memory: Privileged, or users could accidentally/intentionally delete memory.
  • Turn-Off Interrupts: Privileged, or users could crash the system by turning off interrupts.
  • Switch from User to Monitor Mode: Privileged; otherwise, all users can access system resources.
  • Modifying Base and Limit Registers: Privileged, or users can increase their memory allocation.
  • Issue a Trap Instruction: Not privileged, as system calls are traps used by user processes.
  • Access I/O Device: Privileged; otherwise novice users may attempt damaging I/O device access.
  • Modify Entries in Device-Status Table: Privileged; modifying OS-managed device info can damage the system.

System Calls

  • Allow user-level processes to request OS services, including I/O instructions.
  • Parameters are passed to the OS in registers, in a block of memory, or on the stack.

Unix System Calls Categories

  • Process Control
  • File Manipulation
  • Device Manipulation
  • Information Maintenance
  • Communication

Context Switch Actions

  • Saving the current process state to its Process Control Block (PCB).
  • Loading the next state into the processor registers from the PCB.
  • Updating the Program Counter to the next instruction for execution.

Information for Context Switching

  • Process State
  • Program Counter
  • CPU Registers
  • CPU Scheduling Information
  • Memory Management Information
  • Accounting Information

Process Creation

  • Code with three fork() calls creates 8 processes.
  • The first fork() duplicates the single process into two; the second fork() doubles those into four.
  • The third fork() doubles the four into eight, since every existing process is duplicated.

Zombie Processes

  • A completed process keeps an entry in the process table to store its exit status for the parent process.
  • Zombies consume system resources, mainly process-table entries, potentially leading to resource exhaustion.
  • They disappear when the parent reads their exit status using wait(), or, if the parent terminates first, when init adopts and reaps them.

Inter-Process Communication (IPC)

  • Cooperating processes use interprocess communication mechanisms.

Shared Memory vs. Message Passing

  • Shared memory: processes read and write directly to a common memory area, which requires synchronization.
  • Message passing: processes send and receive messages through the kernel, which is more secure and easier to manage but has overhead.

OS Responsibilities in IPC

  • Shared memory: OS manages creation, destruction, and synchronization of segments.
  • Message passing: OS handles transmission, reception, message queues, synchronization, process blocking.

Message-Passing Methods

  • Synchronous: Sender/receiver blocked until operation completes.
  • Asynchronous: Process continues execution.
  • Automatic Buffering: OS stores messages until receiver is ready.
  • Explicit Buffering: Processes manage message storage.
  • Send by Copy: Copies entire message ensuring data safety (slow).
  • Send by Reference: Passes reference - faster, but risk of data issues.
  • Fixed-Size Messages: Simplify buffer management - can waste space or limit message size.
  • Variable-Size Messages: More flexible, requires more complex management.

Threads: User-Level vs. Kernel-Supported

  • User-Level Threads:
    • Managed by a library at the user level, not visible to the kernel.
    • Faster to create and manage because they don't involve system calls.
  • Kernel-Supported Threads:
    • Managed directly by the OS, scheduled separately.
    • Can utilize multiple processors effectively for true parallelism.

Threads: Advantages and Disadvantages

  • User-Level Threads:
    • Advantages: Less overhead, faster context switches.
    • Disadvantages: Lack true concurrency on multiprocessor systems, blocking operations can block the whole process.
  • Kernel-Supported Threads:
    • Advantages: Better CPU utilization, true concurrency.
    • Disadvantages: Higher overhead due to system calls for operations and context switches involving the kernel.

Critical Section

  • A code segment that only one process may execute at a time.
  • The critical-section problem: ensure that while one process is executing in its critical section, no other process may execute in its own.

Requirements for Critical-Section Problem Solutions

  • Mutual Exclusion: Only one process at a time may access the shared resource, so no other process can execute in its critical section.
  • Progress: If no process is in its critical section, only processes not in their remainder sections may take part in deciding which enters next, and this decision cannot be postponed indefinitely.
  • Bounded Waiting: There is a bound on how many times other processes may enter their critical sections after a process requests entry and before that request is granted.

Atomic Instructions

  • Defined as instructions that execute completely without interruption.
  • Without atomic execution, mutual exclusion can be violated, which compromises synchronized operations.

Spinlocks

  • Synchronization mechanism where a thread checks a lock in a loop until available instead of blocking.
  • Spinlocks implemented using atomic instructions lead to fast locking/unlocking.
  • These have low overhead for critical sections.
  • Spinlocks can waste CPU cycles in high contention or single-core scenarios.

Use of Spinlocks

  • Solaris, Linux, and Windows 2000 utilize it on multi-processor systems.

Semaphore Code Example

Assume Q is a semaphore initialized to 1:

S1
wait(Q)
S2
signal(Q)
S3

Pthread Functions

  • int pthread_mutex_lock(pthread_mutex_t *mutex);
    • Used before accessing a shared variable, to lock it.
  • int pthread_mutex_unlock(pthread_mutex_t *mutex);
    • Used after finishing with a shared variable, to unlock it.

Scheduling Types

  • Short-term (CPU scheduler): Selects from the ready jobs in memory and allocates the CPU.
  • Medium-term: For swapping in time-sharing systems.
  • Long-term (job scheduler): Determines which jobs are brought to memory.

I/O-Bound vs. CPU-Bound

  • I/O-bound programs: perform a small computation before each I/O request and do not use their entire CPU quantum.
  • CPU-bound programs: use the full CPU quantum without blocking on I/O.
  • Prioritizing I/O-bound programs and letting them go first makes better use of both the CPU and the I/O devices.

Process State Transitions

  • Waiting -> Running is illegal: a blocked thread must first be placed in the ready queue before it can be selected.
  • Running -> Waiting is legal, e.g. when requesting a resource.
  • Ready -> Waiting is illegal: a process can only enter Waiting from Running.

Optimizing Response Time

  • Favoring quick response times forces frequent context switches, and the switching overhead lowers overall CPU utilization.

Result of Shortest Job First and Priority-Based

  • Can result in starvation, denying CPU time to lower-priority processes and to those with long burst times.

Round Robin Algorithm with Two PCB Pointers Effect

  • The process would run twice as many times.

Round Robin Algorithm with Two PCB Pointers Advantages

  • Prioritization of important processes.
  • Prevention of lower-priority processes from being starved.
  • It does not require changes to the scheduling algorithm.

Round Robin Algorithm with Two PCB Pointers Disadvantages

  • Context switching impact increases.
  • Removing processes becomes significantly harder.

Altering the basic Round Robin Algorithm Without Duplicates

  • Increase that process's time quantum; doubling the quantum gives the same extra CPU share as a duplicate pointer.

CPU utilization for Round-Robin When Time Quantum is 1 millisecond

  • Scheduler incurs 0.1ms of context-switching.
  • CPU utilization is about 91%.

CPU utilization for Round-Robin When Time Quantum is 10 milliseconds

  • CPU utilization is about 94%, because I/O-bound tasks still incur a context switch after only 1 millisecond of their quantum.

Sample Calculations of Average Times for Scheduling

  • ( 8 + (12 - 0.4) + (13 - 1)) / 3 = 10.53
  • ( 8 + (9 - 1) + (13 – 0.4)) / 3 = 9.53
  • ((2 - 1) + ( 6 – 0.4 ) + ( 14 - 0)) / 3 = 6.87
