Context Switching and OS Scheduling Algorithms

Contents Page

  1. Context Switching
  2. OS Scheduling and Scheduling Algorithms
  3. CPU Scheduling Algorithms
  4. Why Scheduling Matters
  5. Final Summary

Context Switching

Introduction

Context switching is a fundamental mechanism in modern operating systems that allows multiple processes to share a single CPU. Without context switching, multitasking would not be possible, and computers would be unable to efficiently run multiple applications at once.

A CPU core can only execute one instruction stream at a time. However, an operating system can create the illusion of simultaneous execution by rapidly switching between processes. This mechanism is called context switching.

What is Context Switching?

Context switching occurs when the CPU saves the current process’s state and loads the state of another process. This happens when a process is preempted by an interrupt or a higher-priority process, makes a blocking system call, or exhausts the time slice allocated to it by a preemptive scheduling algorithm.

When a process is stopped mid-execution, the CPU must store:

  • The process’s registers (including the Program Counter)
  • CPU state
  • Memory mappings
  • Stack and heap information

This stored state is saved in the Process Control Block (PCB) of the process. When the process is scheduled to run again, the CPU restores the saved state from the PCB and resumes execution from where it left off.

Steps of Context Switching

  1. Process Execution: The current process runs on the CPU.
  2. Interrupt or System Call: The running process is interrupted by an external event (I/O request, timer, higher-priority process, etc.).
  3. Saving Process State: The OS saves the current process’s context (register values, program counter, etc.) in its PCB.
  4. Choosing the Next Process: The CPU scheduler selects another process from the ready queue.
  5. Loading the New Process: The OS restores the saved state of the selected process from its PCB.
  6. Resuming Execution: The CPU starts executing the new process.

This cycle repeats continuously, enabling multitasking.
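
The steps above can be sketched as a toy simulation in Python. The `PCB` fields, the `cpu_state` dictionary, and the FIFO pick-next rule are simplified stand-ins for the kernel's real data structures and scheduler:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Simplified Process Control Block: just enough state to resume."""
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(current: PCB, ready_queue: deque, cpu_state: dict) -> PCB:
    # Step 3: save the running process's context into its PCB.
    current.program_counter = cpu_state["pc"]
    current.registers = dict(cpu_state["regs"])
    ready_queue.append(current)
    # Step 4: pick the next process (here: simple FIFO choice).
    nxt = ready_queue.popleft()
    # Step 5: restore the chosen process's saved state onto the CPU.
    cpu_state["pc"] = nxt.program_counter
    cpu_state["regs"] = dict(nxt.registers)
    # Step 6: execution would now resume in the new process.
    return nxt

# Usage: two processes sharing one simulated CPU.
queue = deque([PCB(pid=2, program_counter=100)])
cpu = {"pc": 57, "regs": {"r0": 1}}
running = context_switch(PCB(pid=1), queue, cpu)
print(running.pid, cpu["pc"])  # process 2 resumes at its saved PC
```

After the call, process 1's state (program counter 57, its registers) sits safely in its PCB at the back of the ready queue, ready to be restored on a later switch.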

Overhead of Context Switching

Although context switching is essential, it comes at a cost:

  • Time Overhead: Switching from one process to another consumes CPU cycles.
  • Cache Disruptions: The incoming process evicts the previous process’s data from CPU caches (and may force TLB flushes), degrading memory performance until the caches warm up again.
  • Increased Latency: Frequent switching can reduce system responsiveness.

To reduce context switching overhead, the OS optimizes scheduling decisions, adjusts time slices, and prioritizes processes based on workload characteristics.
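
A very rough way to feel this overhead from user space is to force two threads to alternate and time the round trips. The figure this prints includes Python and GIL overhead on top of the kernel switch, so treat it as an illustration, not a measurement of the raw switch cost:

```python
import threading
import time

# Two threads ping-pong on a pair of events, forcing the OS to
# switch between them on every handoff.
N = 5_000
a, b = threading.Event(), threading.Event()

def pinger():
    for _ in range(N):
        a.set()
        b.wait()
        b.clear()

def ponger():
    for _ in range(N):
        a.wait()
        a.clear()
        b.set()

t = threading.Thread(target=ponger)
start = time.perf_counter()
t.start()
pinger()
t.join()
elapsed = time.perf_counter() - start
# Each iteration involves roughly two switches.
print(f"~{elapsed / (2 * N) * 1e6:.1f} microseconds per handoff (very rough)")
```

Even this crude estimate is typically orders of magnitude slower than an ordinary function call, which is why schedulers try to avoid unnecessarily frequent switching.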


OS Scheduling and Scheduling Algorithms

What is CPU Scheduling?

CPU scheduling is the method by which an operating system determines which process in the ready queue should be executed next. Since multiple processes compete for CPU time, the scheduler ensures fair and efficient allocation of CPU resources.

Types of Schedulers

  1. Long-Term Scheduler (Job Scheduler)
    • Decides which processes enter the system for execution.
    • Controls the degree of multiprogramming (i.e., how many processes are in memory).
    • Less frequent execution.
  2. Short-Term Scheduler (CPU Scheduler)
    • Chooses which process in the ready queue will execute next on the CPU.
    • Runs frequently and must be highly efficient.
  3. Medium-Term Scheduler
    • Manages memory by swapping processes in and out.
    • Used in systems with limited memory to optimize performance.

CPU Scheduling Algorithms

Different scheduling algorithms determine how processes are selected for execution. The best algorithm depends on the system’s goals—whether to optimize throughput, reduce latency, or ensure fairness.

1. First-Come, First-Served (FCFS)

  • The simplest scheduling algorithm.
  • Processes are executed in the order they arrive.
  • Pros: Easy to implement, fair (in the sense that no process gets skipped).
  • Cons: Causes the convoy effect, where short jobs wait behind long ones, leading to inefficiencies.

Example:

Process   Arrival Time   Burst Time
P1        0 ms          8 ms
P2        2 ms          4 ms
P3        4 ms          2 ms

Execution Order: P1 → P2 → P3
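
The FCFS timeline for this table can be computed directly; note how P2 and P3 inherit P1's long burst as waiting time (the convoy effect):

```python
# FCFS for the table above: run jobs in arrival order, one at a time.
procs = [("P1", 0, 8), ("P2", 2, 4), ("P3", 4, 2)]  # (name, arrival, burst)

schedule = []
t = 0
for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
    start = max(t, arrival)   # the CPU sits idle until the job arrives
    t = start + burst
    schedule.append((name, start, t, start - arrival))

for name, start, finish, wait in schedule:
    print(f"{name}: start={start} ms, finish={finish} ms, wait={wait} ms")
```

P1 waits 0 ms, but P2 waits 6 ms and P3 waits 8 ms, even though P3 only needs 2 ms of CPU time.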

2. Shortest Job Next (SJN) / Shortest Job First (SJF)

  • The process with the smallest execution time gets the CPU first.
  • Pros: Minimizes average waiting time.
  • Cons: Requires the OS to know burst times in advance, which is rarely possible in practice. It can also starve long processes if short jobs keep arriving.

Example:

Process   Arrival Time   Burst Time
P1        0 ms          8 ms
P2        2 ms          4 ms
P3        4 ms          2 ms

Execution Order (non-preemptive): P1 → P3 → P2

(P1 is the only process in the ready queue at t = 0, so it runs to completion; P3, the shorter of the remaining jobs, then runs before P2. If all three processes arrived at the same time, the order would be P3 → P2 → P1.)
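
A minimal non-preemptive SJF simulation of the same table makes the point concrete: only processes that have already arrived are candidates at each decision point.

```python
# Non-preemptive SJF: at each completion, pick the shortest ready job.
procs = [("P1", 0, 8), ("P2", 2, 4), ("P3", 4, 2)]  # (name, arrival, burst)

order, t, pending = [], 0, list(procs)
while pending:
    ready = [p for p in pending if p[1] <= t]
    if not ready:                       # CPU idle until the next arrival
        t = min(p[1] for p in pending)
        continue
    job = min(ready, key=lambda p: p[2])  # shortest burst among ready jobs
    pending.remove(job)
    t += job[2]
    order.append(job[0])

print(" -> ".join(order))
```

At t = 0 only P1 is ready, so it runs for its full 8 ms; by then both P2 and P3 have arrived, and the shorter P3 goes next.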

3. Round Robin (RR)

  • Each process gets a fixed time slice (quantum).
  • If the process doesn’t finish, it is moved to the end of the queue.
  • Pros: Ensures fairness, prevents long processes from dominating.
  • Cons: Performance depends on the time quantum size—too small leads to excessive context switches, too large reduces responsiveness.

Example:

Process   Arrival Time   Burst Time
P1        0 ms          8 ms
P2        2 ms          4 ms
P3        4 ms          2 ms

Quantum = 2 ms

Execution Order: P1(2) → P2(2) → P1(2) → P3(2) → P2(2) → P1(2) → P1(2)

(This assumes a process arriving during a time slice is queued ahead of the process preempted at the end of that slice. The seven 2 ms slices account for the full 14 ms of total burst time.)
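
The Round Robin timeline can be reproduced with a simple queue simulation. It assumes the convention that a process arriving during a slice is queued ahead of the process preempted at the end of that slice (real kernels vary on this detail):

```python
from collections import deque

# Round Robin with a 2 ms quantum for the table above.
procs = [("P1", 0, 8), ("P2", 2, 4), ("P3", 4, 2)]  # (name, arrival, burst)
quantum = 2

remaining = {name: burst for name, _, burst in procs}
arrivals = sorted(procs, key=lambda p: p[1])
queue, t, i, slices = deque(), 0, 0, []

while queue or i < len(arrivals):
    # Admit everything that has arrived by time t.
    while i < len(arrivals) and arrivals[i][1] <= t:
        queue.append(arrivals[i][0])
        i += 1
    if not queue:                       # CPU idle until the next arrival
        t = arrivals[i][1]
        continue
    name = queue.popleft()
    run = min(quantum, remaining[name])
    t += run
    remaining[name] -= run
    slices.append(f"{name}({run})")
    # Admit arrivals during this slice before requeueing the preempted job.
    while i < len(arrivals) and arrivals[i][1] <= t:
        queue.append(arrivals[i][0])
        i += 1
    if remaining[name] > 0:
        queue.append(name)

print(" -> ".join(slices))
```

Shrinking the quantum adds more slices (and more context switches); growing it toward 8 ms collapses the schedule back into FCFS.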

4. Priority Scheduling

  • Each process is assigned a priority. The CPU executes the process with the highest priority first.
  • Pros: Useful for real-time systems.
  • Cons: Can lead to starvation, where low-priority processes never execute.
  • Solution: Aging, where waiting processes gradually increase in priority.

Example:

Process   Priority   Burst Time
P1        3         8 ms
P2        1         4 ms
P3        2         2 ms

Execution Order: P2 → P3 → P1 (a lower number means a higher priority, so P2 runs first)
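
Priority selection with a simple aging rule can be sketched as follows. The aging scheme here (every passed-over process gains one priority level per scheduling decision) is a deliberately crude illustration, not a real kernel policy:

```python
# Non-preemptive priority scheduling for the table above.
# Lower number = higher priority; aging prevents starvation.
priorities = {"P1": 3, "P2": 1, "P3": 2}

order = []
effective = dict(priorities)
while effective:
    chosen = min(effective, key=lambda n: effective[n])  # best priority wins
    order.append(chosen)
    del effective[chosen]
    for name in effective:    # aging: every waiting process gains priority
        effective[name] -= 1

print(" -> ".join(order))
```

With only three jobs the aging never changes the outcome, but in a long-running system it guarantees that even a priority-3 process eventually reaches the front.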

5. Multilevel Queue Scheduling

  • Divides processes into separate queues based on type (e.g., system processes, interactive processes, batch processes).
  • Each queue has its own scheduling policy.
  • Pros: Allows better organization of different types of processes.
  • Cons: Complex to implement and tune for performance.

Example:

  • Foreground queue: Uses Round Robin for interactive processes.
  • Background queue: Uses FCFS for batch jobs.

6. Multilevel Feedback Queue

  • A variation of multilevel queue scheduling.
  • Processes can move between queues based on behavior.
  • Pros: Adaptive and prevents starvation.
  • Cons: Requires careful tuning of queue policies.
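
A toy two-level feedback queue illustrates the demotion rule: a process that consumes its whole high-priority quantum is assumed to be CPU-bound and dropped to the lower queue. The queue setup, quanta, and burst values below are made up for illustration:

```python
from collections import deque

# Two levels: a Round-Robin high-priority queue (quantum 2) and a
# low-priority queue that runs demoted jobs to completion.
high = deque([("P1", 8), ("P2", 3)])   # (name, remaining burst)
low = deque()
quantum_high = 2
log = []

while high or low:
    if high:                            # high-priority queue runs first
        name, rem = high.popleft()
        run = min(quantum_high, rem)
        log.append(f"{name}@high({run})")
        if rem - run > 0:               # used its full slice: demote
            low.append((name, rem - run))
    else:
        name, rem = low.popleft()
        log.append(f"{name}@low({rem})")  # low queue: run to completion

print(" -> ".join(log))
```

Both jobs outlive their first quantum, so both drop to the low queue; a short interactive job that finished within 2 ms would have stayed at high priority. Real MLFQ implementations also periodically boost everything back to the top queue so demoted processes cannot starve.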

Why Scheduling Matters

Scheduling algorithms impact:

  • System responsiveness: Important for user experience.
  • CPU utilization: Ensures the CPU is not sitting idle.
  • Process fairness: Prevents starvation and ensures balanced execution.

Different systems prioritize different scheduling policies:

  • Real-time systems prioritize predictability (Rate-Monotonic Scheduling, Earliest Deadline First).
  • General-purpose systems balance responsiveness and efficiency (Round Robin, Multilevel Queue).

Final Summary

  • Context switching allows multitasking but has an overhead.
  • Schedulers determine which process gets CPU time and when.
  • Different scheduling algorithms balance fairness, efficiency, and responsiveness.
  • Choosing the right algorithm depends on system needs—some optimize throughput, others prioritize responsiveness.

In modern operating systems, efficient scheduling is crucial for delivering smooth user experiences and maintaining optimal CPU performance.