2nd Week: CST 334
Hello everyone!
This week we covered abstraction, the process API, limited direct execution, CPU scheduling, and finally the Multi-Level Feedback Queue. We also learned how to create a Makefile to make it easier to compile our source files.
We learned that abstraction in the context of CPU virtualization creates the illusion that multiple processes are running simultaneously on separate CPUs. The operating system achieves this through time sharing, where each process is allocated a set amount of time on the CPU before being switched out to allow another process to run. This method ensures efficient multitasking and maximizes CPU utilization by rapidly cycling through processes.
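To see what time sharing looks like from a program's point of view, here is a minimal sketch (assuming a POSIX system; the loop counts are made up for illustration). Two child processes run "at the same time," and depending on how the OS schedules them, their output can interleave even on a single CPU.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Create two children that each print a few lines. Even on one CPU,
     * the OS time-shares between them, so the output may interleave. */
    for (int child = 0; child < 2; child++) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); exit(1); }
        if (pid == 0) {                     /* child process */
            for (int i = 0; i < 5; i++)
                printf("child %d: iteration %d\n", child, i);
            exit(0);
        }
    }
    while (wait(NULL) > 0)                  /* parent waits for both children */
        ;
    return 0;
}
```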
The process API (API stands for Application Programming Interface) is the set of functions that allow user programs to ask the operating system to create and manage processes through system calls. A system call is a function provided by the OS that triggers a switch from user mode to kernel mode, granting the process the higher privilege needed to perform certain operations. Common system calls include fork(), exec(), and exit().
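Here is a small sketch (assuming a POSIX system) that ties these calls together: the parent fork()s a child, the child exec()s another program, and the parent wait()s for it to finish.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                     /* system call: create a child process */
    if (pid < 0) {
        perror("fork failed");
        exit(1);
    } else if (pid == 0) {
        /* child: replace its program image with /bin/ls via exec() */
        char *args[] = {"ls", "-l", NULL};
        execvp(args[0], args);
        perror("exec failed");              /* only reached if exec() fails */
        exit(1);
    } else {
        int status;
        waitpid(pid, &status, 0);           /* parent blocks until the child exits */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}
```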
Limited direct execution is the idea that operating systems let a process run directly on the CPU to maximize performance while enforcing rules to maintain security. This concept includes CPU time sharing, virtualization, direct execution, and restricted operations. Restricted operations happen when a process needs to perform tasks such as I/O requests, access additional resources like memory, or execute privileged instructions. To manage these situations, the OS performs a protected control transfer from user mode to kernel mode. In user mode, a process has limited access to system resources, while in kernel mode, the OS has full access to manage resources securely. This transition happens through system calls, ensuring safe and controlled execution of privileged operations.
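As a small sketch of what that protected transfer looks like in practice, the write() call below is a thin wrapper around a system call: invoking it executes a trap instruction that switches the CPU from user mode to kernel mode, the OS performs the I/O on the process's behalf, and control then returns to user mode.

```c
#include <string.h>
#include <unistd.h>

int main(void) {
    const char *msg = "hello from user mode, written by the kernel\n";
    write(STDOUT_FILENO, msg, strlen(msg));   /* traps into the kernel to do the I/O */
    return 0;
}
```

On Linux, running a program like this under strace shows the underlying system calls as they are made.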
Scheduling in operating systems refers to the method by which processes, also known as jobs, are managed when they enter the system and are waiting to run. Scheduling determines when and how processes are given access to the CPU. Scheduling decisions happen when a job voluntarily gives up the CPU, or at fixed time intervals driven by the timer interrupt, so that processes share the CPU fairly.
The goals of scheduling should include:
- Fairness: Ensuring that each process gets a fair share of CPU time.
- Policy Enforcement: Ensuring that scheduling policies are effectively executed.
- Balance: Distributing the workload evenly across CPUs.
- Throughput: Maximizing the number of jobs that complete execution in a given period.
- Turnaround Time: Reducing the time taken to complete processes from submission to finishing (see the worked example after this list).
- CPU Utilization: Keeping the CPU busy as much as possible to avoid idling.
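To make turnaround time concrete, here is a small sketch with made-up arrival and run times. It computes turnaround = completion - arrival for each job under a simple FIFO policy and then the average.

```c
#include <stdio.h>

/* Hypothetical jobs: arrival and run times are made up for illustration. */
struct job { const char *name; double arrival; double runtime; };

int main(void) {
    struct job jobs[] = {
        {"A", 0.0, 10.0},
        {"B", 0.0, 20.0},
        {"C", 0.0, 30.0},
    };
    int n = sizeof jobs / sizeof jobs[0];
    double now = 0.0, total_turnaround = 0.0;

    /* FIFO: run each job to completion in arrival order. */
    for (int i = 0; i < n; i++) {
        double start = now > jobs[i].arrival ? now : jobs[i].arrival;
        double completion = start + jobs[i].runtime;
        double turnaround = completion - jobs[i].arrival;
        printf("job %s: turnaround = %.1f\n", jobs[i].name, turnaround);
        total_turnaround += turnaround;
        now = completion;
    }
    printf("average turnaround = %.1f\n", total_turnaround / n);
    return 0;
}
```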
Multi-Level Feedback Queue (MLFQ) is a scheduling algorithm that adapts to process behavior to optimize CPU usage and reduce response time. It organizes processes into multiple priority levels, promoting or demoting them based on how often they give up the CPU. I/O-bound processes that frequently relinquish the CPU are kept at higher priorities, while CPU-bound processes are moved to lower-priority queues. This approach ensures fairness, prioritizes interactive tasks, and prevents starvation through mechanisms like aging.
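Here is a simplified sketch of those two rules (the data structures and loop counts are made up, not a full scheduler): a job that uses its whole time slice is demoted one level, a job that yields early keeps its priority, and a periodic boost moves everything back to the top queue so nothing starves.

```c
#include <stdio.h>
#include <stdbool.h>

#define LEVELS 3                 /* number of priority queues; 0 is highest */

/* Hypothetical process record for a simplified MLFQ sketch. */
struct proc {
    const char *name;
    int priority;                /* current queue: 0 is highest */
    bool used_full_slice;        /* did it burn its whole time slice? */
};

/* CPU-bound jobs that use the full slice get demoted one level;
 * interactive jobs that give up the CPU early keep their priority. */
static void after_time_slice(struct proc *p) {
    if (p->used_full_slice && p->priority < LEVELS - 1)
        p->priority++;
}

/* Aging: periodically boost every job back to the top queue
 * so long-running jobs never starve. */
static void priority_boost(struct proc *procs, int n) {
    for (int i = 0; i < n; i++)
        procs[i].priority = 0;
}

int main(void) {
    struct proc procs[] = {
        {"cpu-bound",   0, true},
        {"interactive", 0, false},
    };
    int n = sizeof procs / sizeof procs[0];

    for (int tick = 0; tick < 3; tick++)
        for (int i = 0; i < n; i++)
            after_time_slice(&procs[i]);

    for (int i = 0; i < n; i++)
        printf("%s ends up at priority %d\n", procs[i].name, procs[i].priority);

    priority_boost(procs, n);    /* all jobs back to queue 0 */
    return 0;
}
```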