What is an OS?
Applications
────────────── ← OS provides interface
OS Kernel
────────────── ← OS manages
Hardware (CPU, RAM, Disk, I/O)
Types of OS:

| Type | Description | Example |
|------|-------------|---------|
| Batch | Jobs queued, no interaction | Early IBM |
| Multiprogramming | Multiple jobs in memory | Unix |
| Time-sharing | CPU time slice per user | Linux, Windows |
| Real-time | Hard timing guarantees | RTOS, VxWorks |
| Distributed | Multiple machines, one view | Cloud OS |
| Mobile | Touch, battery-aware | Android, iOS |
Process States:

         fork()          scheduler dispatch
   New ────────► Ready ──────────────► Running ──exit()──► Terminated
                  ▲  ▲                   │   │
  (I/O complete/  │  └─(timer interrupt)─┘   │ (I/O request)
   event)         │                          ▼
                  └─────────────────────── Waiting
PCB (Process Control Block) contains:
Process ID (PID), State, Program Counter
CPU registers, CPU scheduling info
Memory management info
Accounting info (CPU time used)
I/O status info (open files)
Context Switch: Save current process PCB → Load next process PCB. Overhead: few microseconds. Minimize for performance.
Scheduling Criteria: CPU utilization, throughput, turnaround time, waiting time, response time.

FCFS (First-Come, First-Served) example:
Processes: P1(24ms) P2(3ms) P3(3ms), in arrival order
Gantt: |─────P1(0-24)─────|─P2(24-27)─|─P3(27-30)─|
Waiting: P1=0, P2=24, P3=27
Avg Waiting = (0+24+27)/3 = 17ms ← BAD (convoy effect)
SJF (Shortest Job First) example:
Processes: P1(6ms) P2(8ms) P3(7ms) P4(3ms)
Sorted by burst: P4(3) P1(6) P3(7) P2(8)
Gantt: |P4(0-3)|P1(3-9)|P3(9-16)|P2(16-24)|
Avg Waiting = (3+16+9+0)/4 = 7ms ← better
Round Robin (RR) example:
Processes: P1(24ms) P2(3ms) P3(3ms), quantum q=4ms
Gantt: |P1(0-4)|P2(4-7)|P3(7-10)|P1(10-14)|P1(14-18)|P1(18-22)|P1(22-26)|P1(26-30)|
P1 runs in chunks: 4,4,4,4,4,4 = 24ms total
Good for interactive systems, fair
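The RR trace above can be checked with a small simulator (a sketch; the function name is mine, and it assumes all processes arrive at t=0 as in the example):

```c
// Round-robin waiting-time simulation: cycle through the processes,
// giving each at most q ms per turn; everyone else still unfinished
// accumulates waiting time while one process runs.
double rr_avg_wait(int burst[], int n, int q) {
    int rem[8], wait[8], done = 0;              // assumes n <= 8
    for (int i = 0; i < n; i++) { rem[i] = burst[i]; wait[i] = 0; }
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (rem[i] == 0) continue;
            int run = rem[i] < q ? rem[i] : q;  // one quantum (or less)
            for (int j = 0; j < n; j++)
                if (j != i && rem[j] > 0)
                    wait[j] += run;             // others wait meanwhile
            rem[i] -= run;
            if (rem[i] == 0) done++;
        }
    }
    double total = 0;
    for (int i = 0; i < n; i++) total += wait[i];
    return total / n;
}
```

For bursts {24, 3, 3} with q=4 this reproduces the Gantt chart: waits P1=6, P2=4, P3=7, average 17/3 ≈ 5.67ms.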
Priority Scheduling: higher priority runs first. Problems:
- Starvation: low priority may never run
- Solution: Aging — priority increases with wait time
Preemptive: running process can be interrupted
Non-preemptive: runs to completion or yield
Multilevel Queue Scheduling:
Queue 1 (highest): Interactive/Foreground — RR
Queue 2: Batch jobs — FCFS
Queue 3 (lowest): Student processes — FCFS
Processes cannot move between queues
Critical Section code structure:
entry section
critical section (access shared resources)
exit section
remainder section
Requirements:
1. Mutual Exclusion: one process at a time
2. Progress: if CS empty, waiting process gets in
3. Bounded Waiting: a bound exists on how many times others can enter the CS before a waiting process gets its turn
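Peterson's algorithm is the classic two-process software solution satisfying all three requirements (a sketch; on real hardware the volatile accesses would additionally need atomics or memory barriers):

```c
#include <stdbool.h>

// Peterson's algorithm for two processes, i = 0 or 1.
volatile bool flag[2] = {false, false};  // flag[i]: i wants to enter
volatile int turn = 0;                   // whose turn to yield

void enter_cs(int i) {
    int other = 1 - i;
    flag[i] = true;      // announce intent
    turn = other;        // politely let the other go first
    while (flag[other] && turn == other)
        ;                // busy-wait until safe to enter
}

void exit_cs(int i) {
    flag[i] = false;     // leave: the other process may now enter
}
```

Mutual exclusion holds because both processes cannot simultaneously pass the while loop; bounded waiting holds because `turn` alternates, so a waiter is let in after at most one entry by the other process.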
// Binary semaphore (mutex), POSIX <semaphore.h>
sem_t mutex;
sem_init(&mutex, 0, 1); // initial value = 1
// Thread code:
sem_wait(&mutex); // P() — decrement, block if 0
// critical section
sem_post(&mutex); // V() — increment, wake waiter
// Counting semaphore
sem_t slots;
sem_init(&slots, 0, N); // N slots available
// Producer: sem_post(&slots) after adding
// Consumer: sem_wait(&slots) before consuming
// Shared state (N = buffer size). Initialize before starting threads:
// sem_init(&empty, 0, N); sem_init(&full, 0, 0); sem_init(&mutex, 0, 1);
sem_t empty, full, mutex;
int buffer[N]; int in = 0, out = 0;

void producer() {
    while (1) {
        item = produce();
        sem_wait(&empty);   // wait for empty slot
        sem_wait(&mutex);   // lock buffer
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);   // unlock
        sem_post(&full);    // signal item available
    }
}

void consumer() {
    while (1) {
        sem_wait(&full);    // wait for item
        sem_wait(&mutex);
        item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);   // signal slot free
        consume(item);
    }
}
Coffman Conditions (ALL 4 must hold):
1. Mutual Exclusion: resource held exclusively
2. Hold & Wait: hold resource while waiting for another
3. No Preemption: resource only released voluntarily
4. Circular Wait: P1 waits P2, P2 waits P1 (cycle)
Detection: Resource Allocation Graph
Cycle in graph → possible deadlock (for single instance resources = deadlock)
P1 → R1 → P2 → R2 → P1 ← circular wait = deadlock!
Prevention: Break any one condition
Mutual Exclusion: can't always break (printers need it)
Hold & Wait: request all resources at once (or release before new request)
No Preemption: OS can forcibly take resources
Circular Wait: impose ordering on resources (always request R1 before R2)
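Resource ordering in practice, sketched with POSIX mutexes (helper names are mine): every thread that needs both locks takes them in one global order, so a wait cycle cannot form.

```c
#include <pthread.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

// Lock two mutexes in a fixed global order (here: by address),
// regardless of the order the caller passed them in. This breaks
// the circular-wait condition.
void lock_both(pthread_mutex_t *a, pthread_mutex_t *b) {
    if (a > b) { pthread_mutex_t *tmp = a; a = b; b = tmp; }
    pthread_mutex_lock(a);   // lower-addressed resource first
    pthread_mutex_lock(b);
}

void unlock_both(pthread_mutex_t *a, pthread_mutex_t *b) {
    pthread_mutex_unlock(a); // unlock order does not matter
    pthread_mutex_unlock(b);
}
```

Two threads calling `lock_both(&r1, &r2)` and `lock_both(&r2, &r1)` end up acquiring in the same order, so the classic P1/P2 deadlock above cannot happen.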
Banker's Algorithm (Avoidance):
Before granting request:
Check if remaining resources allow safe sequence to exist
(sequence where all processes can finish)
If safe state exists → grant, else → wait
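The safety check at the core of the Banker's algorithm can be sketched like this (the matrix sizes and the test data are assumptions in textbook style, not from the notes):

```c
#include <stdbool.h>
#include <string.h>

#define NP 5   // number of processes (assumed example size)
#define NR 3   // number of resource types

// Is there an order in which every process can obtain its remaining
// need from the available pool and finish (releasing what it holds)?
bool is_safe(int avail[NR], int alloc[NP][NR], int need[NP][NR]) {
    int work[NR];
    bool finished[NP] = {false};
    memcpy(work, avail, sizeof work);
    for (int done = 0; done < NP; ) {
        bool progress = false;
        for (int p = 0; p < NP; p++) {
            if (finished[p]) continue;
            bool can_finish = true;
            for (int r = 0; r < NR; r++)
                if (need[p][r] > work[r]) { can_finish = false; break; }
            if (can_finish) {
                for (int r = 0; r < NR; r++)
                    work[r] += alloc[p][r];  // p finishes, releases all
                finished[p] = true;
                progress = true;
                done++;
            }
        }
        if (!progress) return false;  // nobody can finish -> unsafe
    }
    return true;  // a safe sequence exists
}
```

Before granting a request, the Banker pretends to grant it, runs this check, and only commits if the resulting state is still safe.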
Contiguous Allocation Strategies:
First Fit: allocate first hole that's big enough (fast)
Best Fit: smallest hole that fits (minimizes waste but slow)
Worst Fit: largest hole (leaves large remaining hole)
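A minimal first-fit sketch (function name mine); best fit and worst fit differ only in which hole the scan selects:

```c
// First fit: take the first hole large enough for the request and
// shrink it. Returns the hole index, or -1 if nothing fits.
int first_fit(int holes[], int n, int request) {
    for (int i = 0; i < n; i++) {
        if (holes[i] >= request) {
            holes[i] -= request;  // leftover remains as a smaller hole
            return i;
        }
    }
    return -1;  // no single hole fits: external fragmentation
}
```

E.g. with holes {100, 500, 200, 300, 600}, a 212-byte request lands in the 500-byte hole (index 1), leaving a 288-byte hole behind.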
Paging:
Physical memory → fixed-size frames (e.g., 4KB)
Logical memory → Fixed size pages (same size as frames)
Page Table translates: logical address → physical address
Logical: [page number | offset]
Physical: [frame number | offset]
Translation:
Page 2, offset 100 → Page table[2] = frame 5 → 5×4096 + 100
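The translation above as arithmetic (a sketch with 4KB pages; the page table is modeled as a plain array of frame numbers):

```c
#define PAGE_SIZE 4096  // 4KB pages, as in the example

// Split the logical address into page number and offset, look up the
// frame, and recombine. The offset passes through unchanged.
unsigned translate(unsigned logical, unsigned page_table[]) {
    unsigned page   = logical / PAGE_SIZE;  // high bits
    unsigned offset = logical % PAGE_SIZE;  // low bits
    return page_table[page] * PAGE_SIZE + offset;
}
```

With `page_table[2] = 5`, logical address 2×4096+100 translates to 5×4096+100 = 20580, matching the worked example.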
TLB (Translation Lookaside Buffer):
Cache for page table entries → fast lookup
TLB hit → 1 memory access total
TLB miss → 2 memory accesses (page table + data)
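Those hit/miss counts give an effective access time (EAT). A sketch, counting only RAM accesses as the notes do (the 90% hit ratio and 100ns access time below are illustrative assumptions):

```c
// EAT = hit_ratio * (1 memory access) + miss_ratio * (2 accesses).
double eat(double hit_ratio, double mem_ns) {
    return hit_ratio * mem_ns + (1.0 - hit_ratio) * 2.0 * mem_ns;
}
```

E.g. `eat(0.9, 100.0)` gives 0.9×100 + 0.1×200 = 110ns, only 10% slower than a raw memory access despite the page table living in memory.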
Page fault occurs when page not in RAM:
1. OS gets page fault interrupt
2. Find page on disk (swap space)
3. Find free frame (evict if needed)
4. Load page into frame
5. Update page table
6. Restart the faulting instruction
Page Replacement Algorithms:
FIFO: remove oldest loaded page (simple, Belady's anomaly)
LRU: remove least recently used (good, expensive to implement)
Optimal (OPT): remove page used farthest in future (theoretical best)
Clock (Second Chance): FIFO with reference bit → practical LRU approximation
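A FIFO fault counter makes Belady's anomaly concrete (a sketch; the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 gives 9 faults with 3 frames but 10 faults with 4):

```c
// Count page faults for a reference string under FIFO replacement.
int fifo_faults(int refs[], int n, int nframes) {
    int frames[16];                        // assumes nframes <= 16
    int count = 0, next = 0, faults = 0;   // next = oldest frame
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < count; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (hit) continue;                 // page already resident
        faults++;
        if (count < nframes) {
            frames[count++] = refs[i];     // free frame available
        } else {
            frames[next] = refs[i];        // evict oldest page
            next = (next + 1) % nframes;
        }
    }
    return faults;
}
```

That adding a frame can *increase* faults is exactly why FIFO, despite its simplicity, is rarely used unmodified.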
File Attributes: name, type, size, permissions, timestamps
Access Methods:
Sequential: read records in order (tape, old systems)
Direct/Random: jump to any record by block number
Indexed: index points to record locations
Directory Structure:
Single-level: all files in one directory
Two-level: one directory per user
Tree: subdirectories (Unix/Windows)
Acyclic graph: allow sharing (symlinks)
File Allocation:
Contiguous: fast access, external fragmentation
Linked: no fragmentation, no random access
Indexed: fast random access, index block overhead
FAT (File Allocation Table): variation of linked
Unix inode:
Each file has inode with metadata + block pointers
Direct blocks → small files
Single/Double/Triple indirect → large files
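The pointer arithmetic behind "large files", as a sketch under assumed ext2-style parameters (4KB blocks, 4-byte block pointers, 12 direct pointers, one each of single/double/triple indirect; the notes do not fix these numbers):

```c
// Maximum file size reachable through one inode's block pointers.
unsigned long long inode_max_bytes(void) {
    unsigned long long block = 4096;          // block size in bytes
    unsigned long long ptrs  = block / 4;     // 1024 pointers per block
    unsigned long long blocks = 12            // direct
                              + ptrs          // single indirect
                              + ptrs * ptrs   // double indirect
                              + ptrs * ptrs * ptrs; // triple indirect
    return blocks * block;                    // ~4 TiB with these sizes
}
```

Note how each indirection level multiplies reach by 1024: direct blocks cover 48KB, single indirect adds 4MiB, double adds 4GiB, triple adds 4TiB.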
Q: What is thrashing? A: The CPU spends more time swapping pages than doing actual work because the working set exceeds available RAM. Fixes: add RAM, reduce the degree of multiprogramming, use the working-set model.
Q: What is the difference between preemptive and non-preemptive scheduling? A: Preemptive: the OS can interrupt a running process at any time and give the CPU to another (RR, SRTF). Non-preemptive: the process releases the CPU voluntarily (FCFS, SJF). Preemptive is better for interactive systems.
Q: What are kernel mode and user mode? A: Kernel mode (privileged): OS code runs here with full hardware access. User mode: application code runs with restricted access; hardware is reached only via system calls. Protection: a user program cannot crash the OS.
Complete Operating Systems notes for BCA Sem 5 — Process management, CPU scheduling, Memory management, Deadlocks, File systems, Virtual memory with solved examples and viva Q&A.
42 pages · 2.1 MB · Updated 2026-03-11
The interface between hardware and user programs. Resource management (CPU, memory, I/O), process scheduling, memory management, file system, security. Without an OS, every program would have to manage the hardware directly, which is impractical.
Process: an independent program in execution with its own memory space; context switches are expensive. Thread: a lightweight execution unit inside a process; memory is shared and context switches are fast. One process can contain multiple threads (multi-threading).
Deadlock occurs only when all four Coffman conditions hold together: Mutual Exclusion, Hold & Wait, No Preemption, Circular Wait. Example: P1 holds R1 and waits for R2; P2 holds R2 and waits for R1; neither can proceed.
Paging: fixed-size pages, internal fragmentation, simple management. Segmentation: variable-size logical units (code, data, stack), external fragmentation, programmer-friendly. Modern OSes combine both (on x86, paging dominates).
The illusion of more memory than physical RAM: disk (swap space) is used as an extension of RAM. Demand paging: a page is loaded only when it is actually needed. Page fault: page not in RAM → load it from disk → resume the process.