Swap space => Part of a hard disk (Secondary memory) that is reserved to act as RAM (Main memory)
VM = RAM + Swap space
Creates an illusion that allows a process larger than the size of MM to be executed
The entire program is not needed at the same time, so we can execute a program that is only partially in memory
A program would no longer be constrained by the amount of physical memory that is available
Increase in CPU utilization and throughput
Benefit both the system and the user
Demand Paging
The least-used pages of a process are kept in secondary memory
A page is copied to the main memory when its demand is made, or page fault occurs
Page replacement algorithms are used to determine the pages which will be replaced
Rather than swapping the entire process into memory, we use a Lazy Swapper
A swapper manipulates entire processes, whereas a Pager is concerned with individual pages of a process
When a process is to be swapped-in, the pager guesses which pages will be used
Valid-invalid bit scheme in the page table is used to distinguish between pages that are in memory and those that are on the disk
Valid-invalid bit 1 means, the associated page is both legal and in memory
Valid-invalid bit 0 means, the page either is not valid (not in the Logical Address Space (LAS) of the process) or is valid but is currently on the disk
Page Fault
If the process tries to access a page that was not brought into memory
Paging hardware noticing invalid bit for a demanded page will cause a trap to the OS
Generate Trap > Context Switching > Check if reference is valid > Find in LAS & put in MM > Update address in PT > Give control back
Effective Memory Access Time (EMAT) = P × (Page Fault Service Time) + (1 − P) × (MM Access Time), where P is the probability of a page fault
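A worked example of the EMAT formula; the 200 ns memory access time, 8 ms service time, and 1-in-1000 fault probability below are assumed illustrative numbers, not figures from these notes:

```python
def emat(p, fault_service_ns, mem_access_ns):
    # EMAT = P * (page fault service time) + (1 - P) * (memory access time)
    return p * fault_service_ns + (1 - p) * mem_access_ns

# Assumed figures: 200 ns memory access, 8 ms (8,000,000 ns) fault service
print(emat(0.001, 8_000_000, 200))  # ~8199.8 ns
```

Even one fault per 1000 references slows effective memory access by roughly 40×, which is why keeping the fault rate low matters so much.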
The procedure to handle the page fault
Check an internal table (in PCB of the process) to determine whether the reference was valid or an invalid memory access
If the reference was invalid, an exception is thrown and the process is terminated
If ref. is valid, pager will swap-in the page
We find a free frame (from free-frame list)
Schedule a disk operation to read the desired page into the newly allocated frame
When disk read is complete, we modify the page table and Restart the instruction that was interrupted by the trap
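The steps above can be sketched as a toy, self-contained simulation; the data structures and helper shapes here are simplified stand-ins, not a real OS implementation:

```python
class PageTableEntry:
    def __init__(self):
        self.valid = False        # valid-invalid bit: 0 = not in memory
        self.frame = None

def handle_fault(page, page_table, legal_pages, free_frames, disk, ram):
    """Validate the reference, allocate a frame, 'read' from disk, update PT."""
    if page not in legal_pages:               # invalid reference -> terminate
        raise MemoryError(f"illegal access to page {page}")
    frame = free_frames.pop()                 # take a frame off the free-frame list
    ram[frame] = disk[page]                   # the scheduled disk read
    page_table[page].valid = True             # update the page table
    page_table[page].frame = frame
    # ...the trapped instruction is then restarted by the hardware/OS

# Toy usage: one legal page on "disk", one free frame (frame 5)
page_table = {0: PageTableEntry()}
ram = {}
handle_fault(0, page_table, legal_pages={0}, free_frames=[5], disk={0: "data"}, ram=ram)
print(page_table[0].valid, page_table[0].frame, ram[5])  # True 5 data
```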
Pure Demand Paging
Start executing a process with no pages in memory
The process immediately faults on its first instruction, and the page is brought into memory
Never bring a page into memory until it is required
We rely on locality of reference to get reasonable performance from demand paging
Advantages of Virtual memory
The degree of multi-programming will be increased
User can run large apps with less real physical memory
Disadvantages of Virtual Memory
The system can become slower as swapping takes time
Thrashing may occur
Page Replacement Algorithms
First In First Out (FIFO)
Frames are allocated to pages in arrival order; the oldest page in memory is replaced
Easy to implement
Performance is not always good
Belady’s anomaly is present
Page faults may increase (hit ratio may decrease) even after increasing the Frame/Allotted Memory, unlike in other algorithms
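A minimal FIFO simulation (the helper name `fifo_faults` is my own) on the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5, which exhibits Belady's anomaly: 4 frames produce more faults than 3.

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with a fixed frame count."""
    memory = deque()                  # oldest page sits at the left end
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()      # evict the oldest page
            memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults: more frames, yet more faults
```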
Optimal Page Replacement
Replace the page that will never be referenced again; otherwise, replace the page that will be referenced farthest in the future
Lowest page-fault rate of all algorithms
Difficult to implement, as the OS would need future knowledge of the reference string, which is practically impossible (similar to SJF scheduling)
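Because a simulator does know the whole reference string, Optimal can still be coded offline as a benchmark to compare other algorithms against; a sketch (function name assumed):

```python
def optimal_faults(refs, frames):
    """Optimal replacement: evict the resident page whose next use is farthest away."""
    memory = []
    faults = 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
        else:
            def next_use(p):
                try:
                    return refs.index(p, i + 1)   # next reference after position i
                except ValueError:
                    return float('inf')           # never referenced again
            victim = max(memory, key=next_use)
            memory[memory.index(victim)] = page
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, 3))  # 7 faults: fewer than FIFO or LRU on this string
```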
Least Recently Used (LRU)
Replace the least recently used Page in the past
We use the recent past as an approximation of the near future and replace the page that has not been used for the longest period
Can be implemented in two ways
Counters
Associate time field with each page table entry
Replace the page with smallest time value
Stack
Keep a stack of page numbers
Whenever a page is referenced, it is removed from the stack & put on the top
By this, the most recently used page is always on top, & the least recently used is always at the bottom
As entries might be removed from the middle of the stack, a doubly linked list can be used
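The stack scheme maps neatly onto Python's `OrderedDict`, which is backed by a doubly linked list internally; a sketch (function name assumed):

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """LRU via an OrderedDict 'stack': MRU at the right end, LRU at the left."""
    memory = OrderedDict()
    faults = 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)          # referenced: move to the top
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)    # evict least recently used
            memory[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))  # 10 faults on the classic string
```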
Counting-based page replacement
Keep a counter of the number of references that have been made to each page (Reference counting)
Types
Least frequently used (LFU)
Actively used pages should have a large reference count
Replace page with the smallest count
Most frequently used (MFU)
Based on the argument that the page with the smallest count was probably just brought in and has yet to be used
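An LFU sketch of the counting idea (tie-breaking is unspecified in the notes; this version breaks ties by insertion order, and real implementations vary):

```python
from collections import Counter

def lfu_faults(refs, frames):
    """LFU: evict the resident page with the smallest reference count."""
    memory = []               # resident pages, in insertion order
    counts = Counter()        # reference count per page
    faults = 0
    for page in refs:
        counts[page] += 1
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                victim = min(memory, key=lambda p: counts[p])
                memory.remove(victim)
            memory.append(page)
    return faults

# Page 1 is referenced three times, so page 2 (count 1) is the victim
print(lfu_faults([1, 1, 1, 2, 3], 2))  # 3 faults
```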
Most Recently Used
Replace the most recently used Page in the past
Thrashing
Swapping out an active page that will be needed right away, because there are no free frames, causes a lot of page faults
This high paging activity is called Thrashing
A system is Thrashing when it spends more time servicing the page faults than executing processes
Technique to Handle Thrashing
Working set model
This model is based on the concept of the Locality Model
If we allocate enough frames to a process to accommodate its current locality, it will fault only when it moves to some new locality. But if the allocated frames are smaller than the size of the current locality, the process is bound to thrash
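The working set over a window Δ can be computed directly from a reference string; the window size and string below are illustrative:

```python
def working_set(refs, t, delta):
    """Pages referenced in the last `delta` references up to and including time t."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 4, 4, 3]
print(working_set(refs, t=6, delta=4))  # {3, 4}: the current locality
```

Under the model, a process should be allocated at least as many frames as its working-set size, or it is bound to thrash.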
Page Fault frequency
When it is too high, the process needs more frames. Conversely, if the page-fault rate is too low, then the process may have too many frames
We establish upper and lower bounds on the desired page fault rate
If the pf-rate exceeds the upper limit, allocate the process another frame; if the pf-rate falls below the lower limit, remove a frame from the process
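The control rule above is simple enough to sketch directly; the 1% / 10% bounds and the function name here are assumed for illustration:

```python
def adjust_allocation(frames, pf_rate, lower=0.01, upper=0.10):
    """Page-fault-frequency control: grow or shrink a process's frame allocation."""
    if pf_rate > upper:
        return frames + 1                 # faulting too often: needs more frames
    if pf_rate < lower and frames > 1:
        return frames - 1                 # faulting rarely: reclaim a frame
    return frames                         # within bounds: leave allocation alone

print(adjust_allocation(4, 0.20))   # 5
print(adjust_allocation(4, 0.005))  # 3
print(adjust_allocation(4, 0.05))   # 4
```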