Load the program & static data (used for initialization) into memory
Allocate runtime stack
Allocate heap => Part of memory used for dynamic allocation
I/O tasks
OS hands off control to main() => Program starts running
Architecture of Process
stack => Local variables, Function arguments, Return values
heap => Dynamically allocated memory
data => Global variables & Static data
text => Compiled code
Attributes of Process
Process Table => Allows identifying each process uniquely; each entry in the table is a Process Control Block (PCB)
PCB
Process ID => Unique Identifier
Program Counter => Next instruction address of the program
Process State => Current state of the process (new, ready, running, waiting, terminated)
Priority => Based on priority a process gets CPU time
Registers => Register values saved during context switching: Stack Pointer (SP), Base Pointer (BP), CR
List of open Files
List of Open Devices
Process State
New => Process being created
Ready => The process is in memory, waiting to be assigned to a processor
Running => Instructions are being executed, CPU is allocated
Waiting => Waiting for I/O
Terminated => The process has finished execution, PCB entry removed from process table
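Example (C) => A minimal sketch of how the PCB attributes and process-state values above could be represented; all field names are illustrative, not any real kernel's layout
    #include <stdio.h>

    /* Illustrative PCB layout -- fields mirror the attributes listed above */
    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

    typedef struct pcb {
        int           pid;             /* unique process identifier          */
        unsigned long program_counter; /* address of the next instruction    */
        proc_state_t  state;           /* current process state              */
        int           priority;        /* decides how it competes for CPU    */
        unsigned long regs[16];        /* registers saved at context switch  */
        unsigned long sp, bp;          /* stack pointer, base pointer        */
        int           open_files[16];  /* open file descriptors              */
    } pcb_t;

    int main(void) {
        pcb_t p = { .pid = 42, .state = NEW, .priority = 5 };
        p.state = READY;               /* admitted: loaded into main memory  */
        printf("pid=%d state=%d\n", p.pid, p.state);
        return 0;
    }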
Process Queues
Job Queue
Processes in new state
Present in secondary memory
Job Scheduler (Long-term scheduler, LTS) picks processes from the pool and loads them into memory for execution; better to choose a mixture of processes (I/O-intensive, CPU-intensive)
Ready Queue
Processes in Ready state
Present in main memory
CPU Scheduler (Short-term scheduler, STS) picks a process from the ready queue and dispatches it to the CPU
Waiting Queue
Processes in Wait state
Degree of multi-programming => The number of processes in the memory
LTS controls degree of multi-programming
Dispatcher => The module of OS that gives control of CPU to a process selected by STS
Swapping
Time-sharing systems may have a Medium-term scheduler (MTS)
Removes processes from memory to reduce the degree of multi-programming
Swap-out and swap-in are done by the MTS
Swapping is necessary to improve process mix or because a change in memory requirements has over-committed available memory, requiring memory to be freed up
Context Switching
Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process
When this occurs, the kernel saves the context of the old process in its PCB (Process Control Block) and loads the saved context of the new process scheduled to run
It is pure overhead, because the system does no useful work while switching
Speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied etc.
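Example (C) => A user-space sketch of the save/restore idea only; a real context switch runs in kernel assembly, and cpu_regs here merely stands in for the CPU's registers
    #include <stdio.h>
    #include <string.h>

    typedef enum { READY, RUNNING } cs_state_t;
    typedef struct { unsigned long regs[16]; cs_state_t state; } cs_pcb_t;

    static unsigned long cpu_regs[16];                 /* stand-in for real CPU registers */

    void context_switch(cs_pcb_t *old, cs_pcb_t *next) {
        memcpy(old->regs, cpu_regs, sizeof cpu_regs);  /* state save of the old process   */
        old->state = READY;                            /* (or WAITING, if it blocked)     */
        memcpy(cpu_regs, next->regs, sizeof cpu_regs); /* state restore of the new one    */
        next->state = RUNNING;                         /* pure overhead: no user work     */
    }

    int main(void) {
        cs_pcb_t a = { .state = RUNNING }, b = { .state = READY };
        context_switch(&a, &b);
        printf("a=%d b=%d\n", a.state, b.state);       /* a=0 (READY), b=1 (RUNNING)      */
        return 0;
    }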
Orphan Process
A process whose parent has terminated while it is still running
Orphan processes are adopted by init process
Init is the first process of OS with PID = 1
sleep 200 & => Example: run this in a shell and then exit the shell; the sleep process becomes an orphan and is adopted by init
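Example (C) => A minimal fork() sketch of an orphan: the parent exits immediately, the child keeps running and (on Linux) is adopted by init/systemd, so getppid() typically reports PID 1 or a subreaper
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {                   /* child */
            sleep(5);                     /* parent has already exited by now  */
            printf("child: new parent pid = %d\n", (int)getppid());
            exit(0);
        }
        printf("parent %d exiting, orphaning child %d\n", (int)getpid(), (int)pid);
        return 0;                         /* parent terminates, child lives on */
    }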
Zombie process/Defunct process
A zombie process is a process whose execution is completed but it still has an entry in the process table
Zombie processes usually occur for child processes, as the parent process still needs to read its child's exit status. Once this is done using the wait() system call, the zombie process is eliminated from the process table. This is known as reaping the zombie process
This happens because the parent process may call wait() on the child much later, while the child terminated much earlier
An entry in the process table can only be removed after the parent process reads the exit status of the child process; hence, the child remains a zombie until its entry is removed from the process table
ps -elf | grep Z => Check for Zombie process
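Example (C) => A minimal sketch of creating and then reaping a zombie; while the parent sleeps, the terminated child shows up with state Z in ps -elf | grep Z
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0)
            exit(0);                     /* child terminates immediately              */

        sleep(30);                       /* parent has not called wait() yet, so the  */
                                         /* child stays a zombie in the process table */
        int status;
        waitpid(pid, &status, 0);        /* reaping: exit status read, entry released */
        printf("reaped child %d, exit status %d\n", (int)pid, WEXITSTATUS(status));
        return 0;
    }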
Process Scheduling
Long Term Scheduler
Short Term Scheduler
Medium Term Scheduler
Goals of CPU scheduling
Maximum CPU utilization
Minimum Turnaround time (TAT)
Minimum Wait-time
Minimum response time
Maximum throughput of system
Process Synchronization => For Parallel Processes, Not Serial Processes
Cooperative Process => Processes share some resources / variables / code / memory with each other => Execution of one can affect the other
Independent Process => Execution of one can't affect the other
Time
Arrival Time (AT) => Time when the process arrives at the ready queue
Burst Time (BT) => The time required by the process for its execution
Completion Time (CT) => Time at which the process terminates
Turn Around Time (TAT) => Time taken from first time process enters ready state till it terminates => TAT = CT - AT = WT + BT
Waiting Time (WT) => Time process spends waiting for CPU => WT = TAT - BT
Response Time (RT) => Time duration between process getting into ready queue and process getting CPU for the first time
RT = Time at which process gets CPU first time - AT
Starvation => Process is waiting for an indefinite amount of time
Time Quantum => Fixed amount of time for which each process is being executed
CPU Scheduling
Non-pre-emptive => Once the CPU has been allocated to a process, the process keeps the CPU until it releases it either by terminating or by switching to the wait state; starvation, as a process with a long burst time may starve processes with shorter burst times; low CPU utilization
First Come First Serve (FCFS)
Criteria = AT => Whichever process comes first in the ready queue will be given CPU first
In this, if one process has a long BT, it has a major effect on the average WT of the other processes; this is called the Convoy effect
Convoy Effect is a situation where many processes, which need a resource only for a short time, are blocked by one process holding that resource for a long time => This causes poor resource management
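Example (C) => A sketch computing CT, TAT = CT - AT and WT = TAT - BT under FCFS for a made-up process set; the long first burst inflates everyone else's WT (convoy effect)
    #include <stdio.h>

    int main(void) {
        /* Processes already ordered by arrival (FCFS); P1 has a long burst */
        int at[] = {0, 1, 2};                 /* arrival times */
        int bt[] = {20, 3, 2};                /* burst times   */
        int n = 3, clock = 0;

        for (int i = 0; i < n; i++) {
            if (clock < at[i]) clock = at[i]; /* CPU idle until the process arrives */
            clock += bt[i];                   /* non-preemptive: run to completion  */
            int ct  = clock;
            int tat = ct - at[i];             /* TAT = CT - AT */
            int wt  = tat - bt[i];            /* WT  = TAT - BT */
            printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, ct, tat, wt);
        }
        return 0;                             /* P2 and P3 wait ~20 units behind P1 */
    }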
Shortest Job First (SJF)
Criteria = BT => Process with least BT will be dispatched to CPU first
Must estimate the BT of each process in the ready queue beforehand; correct estimation of BT is essentially impossible
This also suffers from the convoy effect if the very first process to enter the Ready state has a large BT
Priority Scheduling
Priority is assigned to a process when it is created
SJF is a special case of general priority scheduling with priority inversely proportional to BT
Longest Job First (LJF)
Highest Response Rate Next (HRRN)
Pre-emptive => The CPU is taken away from a process when its time quantum expires, in addition to the process terminating or switching to the wait state; less starvation; high CPU utilization; more overhead (context switches)
Shortest Job First (SJF)
Criteria = BT => Pre-emptive SJF (also called Shortest Remaining Time First) re-checks the remaining BT at every unit of time and runs the process with the least remaining BT
Less starvation
No convoy effect
Priority Scheduling
The currently running job is preempted if a newly arriving job has a higher priority
May cause indefinite waiting (starvation) for lower-priority jobs => They may never get executed; true for both the preemptive and non-preemptive versions
Ageing => Gradually increase the priority of processes that have waited a long time, e.g. increase priority by 1 every 15 minutes
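Example (C) => A small sketch of ageing; the +1 step mirrors the example above and assumes a larger number means higher priority, which is not a fixed rule
    #include <stdio.h>

    /* Bump the priority of every process that is still waiting each time an
       ageing interval (e.g. 15 minutes) elapses, up to some maximum.        */
    void age_waiting(int prio[], const int waiting[], int n, int max_prio) {
        for (int i = 0; i < n; i++)
            if (waiting[i] && prio[i] < max_prio)
                prio[i]++;
    }

    int main(void) {
        int prio[3]    = {1, 5, 9};
        int waiting[3] = {1, 1, 0};           /* third process is running, not aged */
        age_waiting(prio, waiting, 3, 10);
        for (int i = 0; i < 3; i++)
            printf("P%d priority=%d\n", i + 1, prio[i]);
        return 0;
    }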
Round Robin (RR)
Like FCFS but preemptive, Designed for time sharing systems
Criteria = AT + Time Quantum (TQ)
If TQ is small, more will be the context switch (more overhead)
Maintain Running & Ready Queue
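Example (C) => A sketch of Round Robin with a fixed TQ; all processes are assumed to arrive at t = 0 and the burst times are made up
    #include <stdio.h>

    #define TQ 4                                /* time quantum */

    int main(void) {
        int bt[] = {10, 4, 6};                  /* remaining burst times          */
        int n = 3, done = 0, clock = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {       /* cycle the ready queue in order */
                if (bt[i] == 0) continue;
                int run = bt[i] < TQ ? bt[i] : TQ;
                clock += run;                   /* run for at most one quantum    */
                bt[i] -= run;
                if (bt[i] == 0) {
                    done++;
                    printf("P%d completes at t=%d\n", i + 1, clock);
                }                               /* else preempted: back of queue  */
            }
        }
        return 0;                               /* smaller TQ => more switches    */
    }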
Longest Remaining Time First (LRTF)
Lottery Scheduling
Multi-level Scheduling
Multilevel Queue (MLQ)
Ready queue is divided into multiple queues depending upon priority
A process is permanently assigned to one of the queues (inflexible) based on some property of the process: memory size, process priority or process type
Each queue has its own scheduling algorithm, e.g. SP -> RR, IP -> RR & BP -> FCFS
System process (SP) => Created by OS (Highest priority)
Interactive process (IP) (Foreground process) => Needs user input (I/O)
Batch process (BP) (Background process) => Runs silently, no user input required
Scheduling among different sub-queues is implemented as fixed priority preemptive scheduling, Eg, foreground queue has absolute priority over background queue
If an interactive process arrives while a batch process is currently executing, the batch process will be preempted
Problem => Lower-level ready queues are scheduled only after all the processes in the top-level ready queue complete; this causes starvation for lower-priority processes
Convoy effect is present
Multilevel Feedback Queue (MLFQ)
Multiple sub-queues are present
Allows processes to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts (BT). If a process uses too much CPU time, it is moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues. In addition, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue; this form of ageing prevents starvation (see the sketch after the parameter list below)
Less starvation than MLQ
It is flexible
Can be configured to match a specific system design requirement
Number of queues
Scheduling algorithm in each queue
Method to upgrade a process to higher queue
Method to demote a process to lower queue
Which queue a process is pushed into first
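Example (C) => A sketch of the MLFQ movement rules only; the queue count and the demote/promote triggers are made-up design parameters, with queue 0 as the highest priority
    #include <stdio.h>

    #define NQUEUES 3

    /* Demote when a process used its whole quantum (CPU-bound),
       promote when it has waited too long in a low queue (ageing). */
    int demote(int q)  { return q < NQUEUES - 1 ? q + 1 : q; }
    int promote(int q) { return q > 0 ? q - 1 : q; }

    int main(void) {
        int q = 0;
        q = demote(q);    /* used full quantum -> queue 1               */
        q = demote(q);    /* again -> queue 2 (e.g. FCFS batch level)   */
        q = promote(q);   /* waited too long -> aged back up to queue 1 */
        printf("final queue = %d\n", q);
        return 0;
    }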
Multi-Threading
Concurrency
The execution of multiple instruction sequences at the same time; it happens in the operating system when several process threads run in parallel
Thread
Single sequence stream within a process
An independent path of execution in a process
Light-weight process
Used to achieve parallelism by dividing a process's tasks into independent paths of execution
Each thread has its own program counter (PC); the OS fetches and executes the instructions corresponding to that thread's PC
There is a TCB (Thread Control Block), like the PCB, to store and manage a thread's state while performing context switching
A single-CPU system gains little from multi-threading for CPU-bound work, as the two threads still have to context switch on the same core
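Example (C, pthreads; compile with -pthread) => Two threads of one process share a global variable in the data segment while each keeps its own stack and PC; the mutex is needed exactly because memory is shared
    #include <pthread.h>
    #include <stdio.h>

    int shared = 0;                               /* data segment: visible to all threads */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
        (void)arg;
        int local = 0;                            /* lives on this thread's own stack     */
        for (int i = 0; i < 100000; i++) {
            local++;
            pthread_mutex_lock(&lock);            /* shared memory needs synchronization  */
            shared++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %d\n", shared);          /* 200000: both threads updated it      */
        return 0;
    }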
Thread Scheduling
Threads are scheduled for execution based on their priority. Even though threads execute within the runtime, all threads are assigned processor time slices by the operating system
Threads Context Switching
OS saves current state of thread & switches to another thread of same process
Doesn't include switching the memory address space (but the program counter, registers & stack are switched)
Fast switching as compared to process switching
CPU’s cache state is preserved
Benefits of Multi-threading
Responsiveness
Resource sharing
More Economical
Threads allow utilization of multiprocessor architectures to a greater scale and efficiency