Q.1.
A.1.
An instruction
cycle (sometimes called a fetch–decode–execute cycle) is
the basic operational process of a computer. It is the process by which a
computer retrieves a program instruction from its memory, determines what actions the instruction dictates, and
carries out those actions. This cycle is repeated continuously by a
computer's central processing unit (CPU), from boot-up to when the computer is shut down.
In
simpler CPUs the instruction cycle is executed sequentially, each instruction
being processed before the next one is started. In most modern CPUs the
instruction cycles are instead executed concurrently, and often in parallel, through an instruction
pipeline: the next instruction starts being
processed before the previous instruction has finished, which is possible
because the cycle is broken up into separate steps.
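As a rough illustration of the cycle, here is a toy fetch-decode-execute loop in C. The opcodes and the single-accumulator machine are invented for the example and do not correspond to any real instruction set.

#include <stdio.h>

/* Toy opcodes, invented for illustration only. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2 };

int main(void)
{
    int memory[] = { OP_LOAD, 5, OP_ADD, 3, OP_HALT }; /* a tiny program */
    int pc = 0, acc = 0, running = 1;

    while (running) {
        int opcode = memory[pc++];                    /* fetch */
        switch (opcode) {                             /* decode */
        case OP_LOAD: acc  = memory[pc++]; break;     /* execute */
        case OP_ADD:  acc += memory[pc++]; break;
        case OP_HALT: running = 0;         break;
        }
    }
    printf("accumulator = %d\n", acc);                /* prints 8 */
    return 0;
}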
In system programming, an interrupt is
a signal to the processor emitted by hardware or software indicating an event
that needs immediate attention. An interrupt alerts the processor to a
high-priority condition requiring the interruption of the current code the
processor is executing. The processor responds by suspending its current
activities, saving its state, and executing a function called an interrupt
handler (or an interrupt service
routine, ISR) to deal with the event. This interruption is temporary, and,
after the interrupt handler finishes, the processor resumes normal activities. There
are two types of interrupts: hardware interrupts and software interrupts.
Hardware
interrupts are used by devices to
communicate that they require attention from the operating system. Internally, hardware interrupts are
implemented using electronic alerting signals that are sent to the processor
from an external device, which is either a part of the computer itself, such as
a disk controller, or an external peripheral. For example, pressing a key on the keyboard or moving
the mouse triggers hardware interrupts that cause the
processor to read the keystroke or mouse position. Unlike the software type
(described below), hardware interrupts are asynchronous and can occur in the middle of instruction
execution, requiring additional care in programming. The act of initiating a
hardware interrupt is referred to as an interrupt
request (IRQ).
A software
interrupt is caused either by an exceptional condition in the
processor itself, or a special instruction in the instruction
set which causes an interrupt when it is
executed. The former is often called a trap or exception and is used for errors or events occurring during
program execution that are exceptional enough that they cannot be handled
within the program itself. For example, a divide-by-zero exception will be
thrown if the processor's arithmetic
logic unit is commanded to divide a
number by zero, since such an operation is undefined. The operating
system will catch this exception and can, for example, choose to abort the offending program.
Software interrupt instructions can function similarly to subroutine calls and are
used for a variety of purposes, such as to request services from device drivers, like
interrupts sent to and from a disk
controller to request reading or
writing of data to and from the disk.
Each
interrupt has its own interrupt handler.
What happens when an interrupt occurs:
1. Foreground code is running, interrupts are enabled.
2. An interrupt event sends an interrupt request to the CPU.
3. After completing the current instruction(s), the CPU begins the interrupt response.
4. It automatically saves the current program counter.
5. It automatically saves some status (depending on the CPU).
6. It jumps to the correct interrupt service routine for this request.
7. The ISR code saves any registers and flags it will modify.
8. The ISR services the interrupt and re-arms it if necessary.
9. The ISR code restores any saved registers and flags.
10. The ISR executes a return-from-interrupt instruction or sequence.
11. The return-from-interrupt instruction restores the automatically-saved status.
12. The return-from-interrupt instruction recovers the saved program counter.
13. Foreground code continues to run from the point where it responded to the interrupt.
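As a rough sketch of the software side of this sequence (steps 7 to 10), here is what a minimal ISR might look like in C. The device registers, their addresses, and the uart_isr name are all hypothetical; on a real platform the handler would also carry a compiler-specific interrupt attribute, noted only in a comment here because it varies between toolchains.

#include <stdint.h>

/* Hypothetical device registers; the addresses are placeholders, not a real chip. */
#define UART_DATA   (*(volatile uint8_t *)0x4000)
#define UART_STATUS (*(volatile uint8_t *)0x4001)

volatile uint8_t last_byte;  /* shared between the ISR and foreground code */

/* On most compilers this function would be marked with a platform-specific
   attribute (e.g. an interrupt/ISR keyword) so that the compiler emits the
   register save/restore and the return-from-interrupt sequence for us. */
void uart_isr(void)
{
    last_byte = UART_DATA;   /* service the interrupt: read the received byte */
    UART_STATUS = 0;         /* re-arm the interrupt: clear the pending flag */
}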
Q.2.(a)
A.2.(a)
CPU-bound :- In computer
science, a computer is CPU-bound (or compute-bound)
when the time for it to complete a task is determined principally by the speed
of the central processor: processor utilization is high, perhaps at 100%
usage for many seconds or minutes. Interrupts generated by peripherals may
be processed slowly, or indefinitely delayed.
The concept of CPU-bounding arose in the era of early computers, when data paths between
computer components were simpler and one could visually observe one
component working while another was idle. Example components were CPUs, tape
drives, hard disks, card-readers, and printers. Computers that predominantly
used peripherals were characterized as I/O bound. Establishing that a
computer is frequently CPU-bound implies that upgrading the CPU or optimizing
code will improve the overall computer performance.
I/O bound :- The I/O bound
state has been identified as a problem in computing almost since its inception.
The Von Neumann architecture, which is employed by many computing devices,
is based on a logically separate central processor unit which
requests data from main memory, processes it and writes back the results. Since data must be moved
between the CPU and memory along a bus which has a limited data
transfer rate, there exists a condition that is known as the Von
Neumann bottleneck. Put simply, this means that the data bandwidth between
the CPU and memory tends to limit the overall speed of computation. In terms of
the actual technology that makes up a computer, the Von Neumann Bottleneck
predicts that it is easier to make the CPU perform calculations faster than it
is to supply it with data at the necessary rate for this to be possible.
Preemptive Scheduling.:- It is the responsibility of CPU scheduler to allot a process to
CPU whenever the CPU is in the idle state. The CPU scheduler selects a process
from ready queue and allocates the process to CPU. The scheduling which takes
place when a process switches from running state to ready state or from waiting
state to ready state is called Preemptive Scheduling.
Non-Preemptive Scheduling:- On the other hand, the
scheduling which takes place when a process terminates or switches from running
to waiting state is called Non-Preemptive Scheduling. The basic difference between preemptive and non-preemptive
scheduling lies in the name itself: in preemptive scheduling the running process
can be preempted and another process scheduled in its place, whereas in
non-preemptive scheduling the running process cannot be preempted and keeps the
CPU until it terminates or blocks.
In general, turnaround time (TAT) means the
amount of time taken to fulfill a request. The concept thus overlaps with lead time and
can be contrasted with cycle time.
In computing, turnaround time is the total time taken between the
submission of a program, process, thread, or task for execution and
the return of the complete output to the user. It may vary for various
programming languages depending on the developer of the software or the
program. Turnaround time may simply deal with the total time it takes for a
program to provide the required output to the user after the program is
started.
Turnaround time is one of the metrics used to evaluate an operating
system's scheduling algorithms.
In case of batch systems, turnaround time will include time taken in
forming batches, batch execution and printing results.
Normalized turnaround time
– Ratio of turnaround time to service time per process. Indicates the relative delay experienced by a process.
Response
time – amount of time it takes from when a request was submitted until the
first response is produced
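As a small worked sketch of these metrics, the following C fragment computes turnaround time, waiting time, and normalized turnaround time from per-process arrival, burst, and completion times; the sample numbers are made up.

#include <stdio.h>

int main(void)
{
    /* Illustrative data: arrival, burst and completion times of 3 processes
       (the completion times here are what FCFS would produce). */
    int arrival[]    = { 0, 1, 2 };
    int burst[]      = { 4, 3, 1 };
    int completion[] = { 4, 7, 8 };
    int i;

    for (i = 0; i < 3; i++) {
        int tat  = completion[i] - arrival[i];  /* turnaround time */
        int wait = tat - burst[i];              /* waiting time */
        printf("P%d: TAT=%d wait=%d normalized TAT=%.2f\n",
               i, tat, wait, (double)tat / burst[i]);
    }
    return 0;
}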
Q.2.(b)
A.2.(b)(i)
(i) First Come First Serve (FCFS)
· Jobs are executed on a first come, first served basis.
· It is a non-preemptive scheduling algorithm.
· Easy to understand and implement.
· Its implementation is based on a FIFO queue.
· Poor in performance, as average wait time is high.
Gantt Chart:
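A minimal C sketch of the FCFS calculation, assuming all processes arrive at time 0 and are served in index order; the burst times are made up.

#include <stdio.h>

int main(void)
{
    int burst[] = { 24, 3, 3 };   /* sample burst times; all arrive at t = 0 */
    int n = 3, t = 0, i;

    for (i = 0; i < n; i++) {     /* serve strictly in arrival order */
        printf("P%d runs %d..%d, waited %d\n", i, t, t + burst[i], t);
        t += burst[i];
    }
    return 0;
}

With these bursts the waiting times are 0, 24 and 27, an average of 17, which illustrates why FCFS performs poorly when a long job arrives first.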
(ii) Shortest Job Next (SJN or SJF)
· This is also known as shortest job first, or SJF.
· This is a non-preemptive scheduling algorithm.
· Best approach to minimize waiting time.
· Easy to implement in batch systems where the required CPU time is known in advance.
· Impossible to implement in interactive systems where the required CPU time is not known.
· The processor should know in advance how much time the process will take.
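A minimal C sketch of non-preemptive SJF under the same assumptions (all processes arrive at time 0, burst times known in advance and made up for the example).

#include <stdio.h>

int main(void)
{
    int burst[] = { 24, 3, 3 };        /* same sample bursts as the FCFS sketch */
    int done[3] = { 0 };
    int n = 3, t = 0, i, j, s;

    for (i = 0; i < n; i++) {          /* repeatedly pick the shortest waiting job */
        s = -1;
        for (j = 0; j < n; j++)
            if (!done[j] && (s < 0 || burst[j] < burst[s]))
                s = j;
        printf("P%d runs %d..%d, waited %d\n", s, t, t + burst[s], t);
        t += burst[s];
        done[s] = 1;
    }
    return 0;
}

For the same bursts the order becomes P1, P2, P0 with waiting times 0, 3 and 6, an average of 3 rather than the 17 of FCFS.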
(iii) Round Robin Scheduling
· Round Robin is a preemptive process scheduling algorithm.
· Each process is provided a fixed time to execute, called a quantum.
· Once a process has executed for the given time period, it is preempted and another process executes for its time period.
· Context switching is used to save the states of preempted processes.
Q.2.(b)(ii)
A.2.(b)(ii)
Gantt Chart for FCFS:
Gantt Chart for SJF:
Gantt Chart for Round Robin:
A.2.(c)
Round-robin algorithms are used by process and network schedulers in computing.
Each process is assigned a time slice in which to run; the time slice is a
fixed interval of CPU time, not the time required to complete the process.
In a round-robin algorithm, the time slice for each process or task is the
same. Moreover, since time slices are granted in a circular order, there is
no priority for any task.
However, the trade-off is that since both big and small tasks are assigned the
same time slices, there are instances of a smaller task getting completed
before the time allocated. In such cases, the next task does not commence until
the allotted time period is over. This, in turn, leads to delay in completion
of the overall process.
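A minimal C sketch of the strict fixed-slot behaviour described above, where the next task does not commence until the allotted slot is over; the burst times and quantum are made up.

#include <stdio.h>

int main(void)
{
    int remaining[] = { 24, 3, 3 };   /* sample burst times */
    int n = 3, quantum = 4, t = 0, left = n, used, i;

    while (left > 0) {
        for (i = 0; i < n; i++) {     /* circular order, no priority */
            if (remaining[i] <= 0) continue;
            used = remaining[i] < quantum ? remaining[i] : quantum;
            printf("P%d runs %d..%d", i, t, t + used);
            if (used < quantum)       /* finished early: rest of the slot idles */
                printf(" (slot idle until %d)", t + quantum);
            printf("\n");
            t += quantum;             /* strict variant: the next task starts
                                         only when the whole slot has elapsed */
            remaining[i] -= used;
            if (remaining[i] == 0) left--;
        }
    }
    return 0;
}

A more common variant advances the clock only by the time actually used, which removes the idle gaps at the cost of variable slot lengths.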
Q.3.
A.3.
Disk Scheduling: On a single computer we can perform many operations at a
time, so management is necessary for all the processes running on the system
at once. With the advent of multiprogramming we can execute many programs at a
time, so for controlling and serving all pending requests the operating system
uses the concept of disk scheduling, which decides the order in which
outstanding disk I/O requests are serviced.
More generally, the time of the CPU is divided among the various processes,
and the scheduler ensures that all the processes make progress: scheduling
specifies at which time which process or request will be served. For sharing
or dividing the total time of the CPU among a number of processes, so that
they can execute concurrently, the following scheduling techniques are used.
1) FCFS or First Come First Served: In this technique jobs or requests are
executed in the order in which they enter the system. The operating system
maintains a queue recording the sequence in which requests arrive, and the CPU
serves them in exactly that order: the request that arrived first is served
first, and requests that arrive later are served in their order of arrival.
2) SSTF or Shortest Seek Time First: In this technique the operating system
searches, among the pending requests, for the one with the shortest seek time,
i.e. the request closest to the current position of the disk head. The pending
jobs are thus effectively arranged in a priority order, the priority of a
request being determined by how little head movement it requires: the closest
request is served first, the head moves there, and the selection is repeated
for the remaining requests.
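A minimal C sketch of SSTF head movement, using a commonly quoted sample request queue and starting head position; both are only illustrative.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int req[] = { 98, 183, 37, 122, 14, 124, 65, 67 }; /* pending cylinder requests */
    int done[8] = { 0 };
    int n = 8, head = 53, total = 0, i, j, s;

    for (i = 0; i < n; i++) {          /* always pick the closest pending request */
        s = -1;
        for (j = 0; j < n; j++)
            if (!done[j] && (s < 0 || abs(req[j] - head) < abs(req[s] - head)))
                s = j;
        total += abs(req[s] - head);
        head = req[s];
        done[s] = 1;
        printf("service cylinder %d\n", head);
    }
    printf("total head movement = %d cylinders\n", total);
    return 0;
}

For this data the service order is 65, 67, 37, 14, 98, 122, 124, 183, for a total head movement of 236 cylinders.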
3) C-SCAN Scheduling: In C-SCAN the pending requests are arranged as if on a
circular list, which has no separate start and end point: the end of the list
wraps around to its beginning. The head scans from the start of the disk
towards the end, servicing requests on the way, and when it reaches the end it
jumps back to the start and begins scanning again. Because new requests keep
arriving while earlier ones are being serviced (for example when a user enters
more data during execution), C-SCAN sweeps over the same range again and
again, so every region of the disk is revisited regularly.
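A sketch of the C-SCAN service order for the same illustrative request queue: requests at or beyond the head are served on the upward sweep, after which the head wraps around and serves the remaining requests from the lowest cylinder up.

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int req[] = { 98, 183, 37, 122, 14, 124, 65, 67 }; /* same sample requests */
    int n = 8, head = 53, i;

    qsort(req, n, sizeof req[0], cmp);   /* order requests by cylinder number */
    for (i = 0; i < n; i++)              /* upward sweep from the head position */
        if (req[i] >= head)
            printf("service cylinder %d\n", req[i]);
    for (i = 0; i < n; i++)              /* wrap around to the lowest request */
        if (req[i] < head)
            printf("service cylinder %d\n", req[i]);
    return 0;
}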
4) LOOK Scheduling: LOOK behaves like a scan of the disk from one end to the
other, except that the head only travels as far as the first and last pending
requests in each direction rather than to the physical ends of the disk,
reversing direction once no requests remain ahead of it.
5) Round Robin: In round-robin scheduling the time of the CPU is divided into
equal portions, each called a quantum. Every process that requests execution
consumes an equal share of CPU time, and after the quantum of the first
process expires the CPU automatically moves on to the next process. The main
problem is that a process is charged its full quantum even when it has little
or nothing left to do, so CPU time can be consumed without useful work being
performed, which is a waste of CPU time.
6) Priority Scheduling: In this technique each process has a priority assigned
to it. Every process is examined according to the total time it will consume,
the number of times it needs input and output, and the number of resources it
requires; these criteria are used to set the priorities of the processes. All
the processes are then arranged according to these criteria and processed by
the CPU in that order.
7) Multilevel Queue: A multilevel queue is used when there are multiple queues
for the various different kinds of processes, since many different types of
work must be performed on a computer at a time. To organise these different
types of work the CPU maintains several queues using this technique: the
processes are collected and grouped according to the kind of function they
request, so that each queue contains only processes of the same type.
Q.4.
A.4.
The Banker's algorithm is a resource allocation and deadlock avoidance
algorithm developed by Edsger Dijkstra that tests for safety by simulating the
allocation of predetermined maximum possible amounts of all resources, and
then makes an "s-state" check to test for possible deadlock conditions for all
other pending activities, before deciding whether allocation should be allowed
to continue. The algorithm was developed in the design process for the THE
operating system.
When a new process enters a system, it must declare the maximum number of
instances of each resource type that it may ever claim; clearly, that number
may not exceed the total number of resources in the system. Also, when a
process gets all its requested resources it must return them in a finite
amount of time.
Goal:- C program for Banker's Algorithm.
Method:- Simple rules of the Banker's algorithm.
Explanation:- Banker's Algorithm is a deadlock avoidance algorithm that checks
for the safe or unsafe state of a system after allocating resources to a
process. When a new process enters the system, it must declare the maximum
number of instances of each resource that it may need. On each request, the
operating system runs the Banker's algorithm to check whether, after
allocating the requested resources, the system would go into a deadlock state.
If yes, it denies the process's request for resources; otherwise it allocates
the resources to that process.
Safe or Unsafe State:- A system is in a safe state if there is some order in
which all of its processes can finish execution; if some process cannot
acquire all of its requested resources, the system is in an unsafe state.
This is a C program for the Banker's algorithm that finds a safe sequence. The
Banker's algorithm schedules processes according to the resources they need
and is very helpful in deadlock handling. The program produces a safe sequence
as output if all the processes can run to completion, and reports an error if
no safe sequence of the processes is available.
#include<stdio.h>
#include<conio.h>

void main()
{
    int ins[5], avail[5], allocated[10][5], need[10][5], MAX[10][5];
    int P[10], output[10];
    int rz, pno, i, j, k = 0, d, t, count, progress;

    clrscr();
    printf("\nEnter the number of resources : ");
    scanf("%d", &rz);

    printf("\nEnter the max instances of each resource\n");
    for (i = 0; i < rz; i++) {
        avail[i] = 0;
        printf("%c = ", (i + 97));
        scanf("%d", &ins[i]);
    }

    printf("\nEnter the number of processes : ");
    scanf("%d", &pno);

    printf("\nEnter the allocation matrix\n     ");
    for (i = 0; i < rz; i++)
        printf(" %c", (i + 97));
    printf("\n");
    for (i = 0; i < pno; i++) {
        P[i] = i;
        printf("P[%d] ", P[i]);
        for (j = 0; j < rz; j++) {
            scanf("%d", &allocated[i][j]);
            avail[j] += allocated[i][j];    /* sum of what is already allocated */
        }
    }

    for (i = 0; i < rz; i++)
        avail[i] = ins[i] - avail[i];       /* available = total - allocated */

    printf("\nEnter the MAX matrix\n     ");
    for (i = 0; i < rz; i++)
        printf(" %c", (i + 97));
    printf("\n");
    for (i = 0; i < pno; i++) {
        printf("P[%d] ", i);
        for (j = 0; j < rz; j++)
            scanf("%d", &MAX[i][j]);
    }
    printf("\n");

    /* Repeatedly scan the remaining processes. A process whose need
       (MAX - allocated) fits within avail can finish; it is appended to the
       safe sequence and its resources are released back into avail. */
    do {
        progress = 0;
        d = -1;
        for (i = 0; i < pno; i++) {
            count = 0;
            t = P[i];
            for (j = 0; j < rz; j++) {
                need[t][j] = MAX[t][j] - allocated[t][j];
                if (need[t][j] <= avail[j])
                    count++;
            }
            if (count == rz) {              /* every need fits: t can finish */
                output[k++] = P[i];
                for (j = 0; j < rz; j++)
                    avail[j] += allocated[t][j];
                progress = 1;
            } else {
                P[++d] = P[i];              /* keep t for the next pass */
            }
        }
        pno = d + 1;                        /* only unfinished processes remain */
    } while (d != -1 && progress);

    if (d != -1)                            /* a pass made no progress: unsafe */
        printf("\tNo safe sequence exists (unsafe state)");
    else {
        printf("\t< ");
        for (i = 0; i < k; i++)
            printf("P[%d] ", output[i]);
        printf(">");
    }
    getch();
}
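As a quick check with a widely used textbook data set (three resources with 10, 5 and 7 instances, five processes, allocation rows {0 1 0}, {2 0 0}, {3 0 2}, {2 1 1}, {0 0 2} and MAX rows {7 5 3}, {3 2 2}, {9 0 2}, {2 2 2}, {4 3 3}), the program prints the safe sequence < P[1] P[3] P[4] P[0] P[2] >.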
Q.5.
A.5.