Thursday 12 October 2017

MCA 4th sem /MCS-041/Solved Assignment/Operating Systems /2017-2018 New

Q.1.
A.1.
An instruction cycle (sometimes called a fetch–decode–execute cycle) is the basic operational process of a computer. It is the process by which a computer retrieves a program instruction from its memory, determines what actions the instruction dictates, and carries out those actions. This cycle is repeated continuously by a computer's central processing unit (CPU), from boot-up to when the computer is shut down.
In simpler CPUs the instruction cycle is executed sequentially, each instruction being processed before the next one is started. In most modern CPUs the instruction cycles are instead executed concurrently, and often in parallel, through an instruction pipeline: the next instruction starts being processed before the previous instruction has finished, which is possible because the cycle is broken up into separate steps.




In system programming, an interrupt is a signal to the processor emitted by hardware or software indicating an event that needs immediate attention. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code the processor is executing. The processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is temporary, and, after the interrupt handler finishes, the processor resumes normal activities. There are two types of interrupts: hardware interrupts and software interrupts.
Hardware interrupts are used by devices to communicate that they require attention from the operating system. Internally, hardware interrupts are implemented using electronic alerting signals that are sent to the processor from an external device, which is either a part of the computer itself, such as a disk controller, or an external peripheral. For example, pressing a key on the keyboard or moving the mouse triggers hardware interrupts that cause the processor to read the keystroke or mouse position. Unlike the software type (described below), hardware interrupts are asynchronous and can occur in the middle of instruction execution, requiring additional care in programming. The act of initiating a hardware interrupt is referred to as an interrupt request (IRQ).
A software interrupt is caused either by an exceptional condition in the processor itself, or by a special instruction in the instruction set which causes an interrupt when it is executed. The former is often called a trap or exception and is used for errors or events occurring during program execution that are exceptional enough that they cannot be handled within the program itself. For example, a divide-by-zero exception will be raised if the processor's arithmetic logic unit is commanded to divide a number by zero, since the result is undefined. The operating system will catch this exception and can choose to abort the program. Software interrupt instructions can function similarly to subroutine calls and are used for a variety of purposes, such as requesting services from device drivers, like interrupts sent to and from a disk controller to request reading or writing of data to and from the disk.
Each interrupt has its own interrupt handler.
What happens when an interrupt occurs:

1. Foreground code is running, interrupts are enabled.
2. An interrupt event sends an interrupt request to the CPU.
3. After completing the current instruction(s), the CPU begins the interrupt response:
4. it automatically saves the current program counter,
5. and automatically saves some status (depending on the CPU),
6. then jumps to the correct interrupt service routine (ISR) for this request.
7. The ISR code saves any registers and flags it will modify.
8. The ISR services the interrupt and re-arms it if necessary.
9. The ISR code restores any saved registers and flags.
10. The ISR executes a return-from-interrupt instruction or sequence.
11. The return-from-interrupt instruction restores the automatically-saved status,
12. and recovers the saved program counter.
13. Foreground code continues to run from the point at which it responded to the interrupt.

Q.2.(a)

A.2.(a)

CPU-bound :- In computer science, a computer is CPU-bound (or compute-bound) when the time it takes to complete a task is determined principally by the speed of the central processor: processor utilization is high, perhaps at 100% for many seconds or minutes, while interrupts generated by peripherals may be processed slowly or indefinitely delayed.
The concept of CPU-bounding arose with early computers, when data paths between computer components were simpler and it was possible to see one component working while another sat idle. Example components were the CPU, tape drives, hard disks, card readers, and printers. Computers whose performance was dominated by their peripherals were characterized as I/O-bound. Establishing that a computer is frequently CPU-bound implies that upgrading the CPU or optimizing the code will improve overall performance.

 I/O bound  :- The I/O bound state has been identified as a problem in computing almost since its inception. The Von Neumann architecture, which is employed by many computing devices, is based on a logically separate central processor unit which requests data from main memory, processes it and writes back the results. Since data must be moved between the CPU and memory along a bus which has a limited data transfer rate, there exists a condition that is known as the Von Neumann bottleneck. Put simply, this means that the data bandwidth between the CPU and memory tends to limit the overall speed of computation. In terms of the actual technology that makes up a computer, the Von Neumann Bottleneck predicts that it is easier to make the CPU perform calculations faster than it is to supply it with data at the necessary rate for this to be possible.

Preemptive Scheduling :- It is the responsibility of the CPU scheduler to allot the CPU to a process whenever the CPU is idle. The CPU scheduler selects a process from the ready queue and allocates the CPU to it. Scheduling that can take place when a process switches from the running state to the ready state (for example, on a timer interrupt) or from the waiting state to the ready state is called preemptive scheduling: the scheduler may take the CPU away from the currently running process.

Non-Preemptive Scheduling :- On the other hand, scheduling that takes place only when a process terminates or switches from running to the waiting state is called non-preemptive scheduling. The basic difference between preemptive and non-preemptive scheduling lies in the names themselves: under preemptive scheduling a running process can be preempted, i.e. forced to give up the CPU; under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps it until it terminates or blocks.
In general, turnaround time (TAT) means the amount of time taken to fulfill a request. The concept thus overlaps with lead time and can be contrasted with cycle time.
In computing, turnaround time is the total time taken between the submission of a program/process/thread/task for execution and the return of the complete output to the user. It varies from program to program and system to system; put simply, it is the total time a program takes to provide the required output to the user after the program is started.
Turnaround time is one of the metrics used to evaluate an operating system's scheduling algorithms.
In case of batch systems, turnaround time will include time taken in forming batches, batch execution and printing results.
Normalized turnaround time – the ratio of turnaround time to service time for a process; it indicates the relative delay experienced by the process.
Response time – the amount of time from when a request is submitted until the first response is produced.
Q.2.(b)
A.2.(b)(i)

(i) First Come First Serve (FCFS)

· Jobs are executed on a first come, first served basis.
· It is a non-preemptive scheduling algorithm.
· Easy to understand and implement.
· Its implementation is based on a FIFO queue.
· Poor in performance, as the average waiting time is high.

Gantt Chart:

(ii)  Shortest Job Next (SJN or SJF)

· This is also known as shortest job first, or SJF.
· This is a non-preemptive scheduling algorithm (a preemptive variant, shortest remaining time first, also exists).
· Best approach to minimize waiting time.
· Easy to implement in batch systems where the required CPU time is known in advance.
· Hard to implement in interactive systems where the required CPU time is not known: the scheduler would need to know in advance how much time each process will take.

(iii) Round Robin Scheduling

· Round Robin is a preemptive process scheduling algorithm.
· Each process is provided a fixed time to execute, called a quantum.
· Once a process has executed for the given time period, it is preempted and another process executes for its time period.
· Context switching is used to save the states of preempted processes.
Q.2.(b)
A.2.(b)(ii)

(i) FCFS:

(ii) SJF:

(iii) Round Robin:
A.2.(c)

Round robin is a scheduling algorithm used by process and network schedulers.

Each process is assigned a time slice (the quantum) in which to run. In a round-robin algorithm, the time slice is the same for every process or task. Moreover, since the time slices are granted in circular order, no task has priority over any other.

However, the trade-off is that since both big and small tasks are assigned the same time slice, a smaller task may be completed before its allotted time is up. In a strictly fixed-slice design the next task does not commence until the allotted time period is over, which in turn delays the completion of the overall workload.


Q.3.

A.3.
Disk Scheduling: On a single computer we can perform many operations at a time, so management is necessary for all the processes running on the system. With multiprogramming, many programs execute at once, and several of them may have disk I/O requests pending at the same time. A disk can service only one request at a time, and moving the read/write head between cylinders is slow, so the operating system must decide the order in which pending requests are serviced; this is the concern of disk scheduling.
A good ordering minimizes the total head movement (and hence the total seek time) and therefore improves overall throughput. The commonly used techniques are listed below; the last three (round robin, priority, and multilevel queue) are general CPU-scheduling policies rather than disk-specific ones.


1) FCFS or First Come First Serve: Requests are serviced in the order in which they arrive. The operating system keeps a FIFO queue containing the sequence in which the requests entered, and the request that arrived first is serviced first. FCFS is simple and fair, but the head may swing back and forth across the disk, so the total seek time can be high.
2) SSTF or Shortest Seek Time First: The pending request closest to the current head position — the one with the shortest seek time — is serviced next. This usually gives much less total head movement than FCFS, but requests far from the head can be starved if nearby requests keep arriving.
3) C-Scan Scheduling: The head sweeps from one end of the disk to the other, servicing requests along the way; when it reaches the end, it returns immediately to the starting end without servicing any requests, and then sweeps again. Treating the cylinders as a circular list — one in which the end of the list wraps around to its start — gives a more uniform wait time than SCAN, since every request waits at most about one full sweep.


4) Look Scheduling: As in SCAN, the head sweeps back and forth across the disk servicing requests, but it travels only as far as the final pending request in each direction before reversing, rather than going all the way to the physical end of the disk.
5) Round Robin: In round-robin scheduling, CPU time is divided into equal slices called quanta. Each process requesting execution consumes at most one quantum of CPU time in turn, after which the CPU automatically moves on to the next process. The main problem is that in a strictly fixed-slice design, a process that finishes (or has nothing to do) before its quantum expires still consumes the full quantum, which wastes CPU time.


6) Priority Scheduling: Each process is assigned a priority. The priority may be determined by examining the total time the process is expected to consume, the number of times it needs input and output, and the number of resources it requires. The processes are ordered by these criteria and then dispatched by the CPU in priority order.
7) Multilevel Queue: A multilevel queue is used when processes of different types must be kept in separate queues, since many different kinds of work are performed on a computer at a time. Processes are partitioned into several queues according to the function they request (for example, system, interactive, and batch work); each queue holds processes of the same type, and the CPU shares its time among the queues.


Q.4.
A.4.
The Banker's algorithm is a resource-allocation and deadlock-avoidance algorithm developed by Edsger Dijkstra. It tests for safety by simulating the allocation of the predetermined maximum possible amounts of all resources, and then makes a safe-state check for all other pending activities before deciding whether an allocation should be allowed to continue.
The algorithm was developed during the design of the THE operating system and was originally described (in Dutch) in EWD108.
When a new process enters the system, it must declare the maximum number of instances of each resource type that it may ever claim; clearly, that number may not exceed the total number of resources in the system. Also, when a process obtains all its requested resources, it must return them in a finite amount of time.

Goal:- A C program for the Banker's algorithm.
Method:- The simple rules of the Banker's algorithm.
Explanation:- Banker's Algorithm is a deadlock-avoidance algorithm that checks whether the system remains in a safe state after allocating resources to a process.
When a new process enters the system, it must declare the maximum number of instances of each resource that it may need. On each request, the operating system runs the Banker's algorithm to check whether, after allocating the requested resources, the system would enter an unsafe state. If so, it denies the request; otherwise it allocates the resources to the process.
Safe or unsafe state:- A system is in a safe state if there is some order in which all processes can finish their execution; if some process can never acquire all its requested resources, the system is in an unsafe state.

This is a C program for the Banker's algorithm that finds a safe sequence. The Banker's algorithm is used to schedule processes according to the resources they need, and it is very helpful in deadlock handling. It produces a safe sequence as output if all the processes can be run to completion, and reports an error if no safe sequence exists.


#include <stdio.h>

int main()
{
    int k = 0, output[10], d, t, i, j, pno, rz, count;
    int ins[5], avail[5], allocated[10][5], need[10][5], MAX[10][5], P[10];

    printf("\n Enter the number of resources : ");
    scanf("%d", &rz);

    printf("\n Enter the max instances of each resource\n");
    for (i = 0; i < rz; i++)
    {
        avail[i] = 0;
        printf("%c = ", (i + 97));          /* resources named a, b, c, ... */
        scanf("%d", &ins[i]);
    }

    printf("\n Enter the number of processes : ");
    scanf("%d", &pno);

    printf("\n Enter the allocation matrix \n     ");
    for (i = 0; i < rz; i++)
        printf(" %c", (i + 97));
    printf("\n");
    for (i = 0; i < pno; i++)
    {
        P[i] = i;
        printf("P[%d]  ", P[i]);
        for (j = 0; j < rz; j++)
        {
            scanf("%d", &allocated[i][j]);
            avail[j] += allocated[i][j];    /* sum of what is already held */
        }
    }

    printf("\n Enter the MAX matrix \n     ");
    for (i = 0; i < rz; i++)
    {
        printf(" %c", (i + 97));
        avail[i] = ins[i] - avail[i];       /* available = total - allocated */
    }
    printf("\n");
    for (i = 0; i < pno; i++)
    {
        printf("P[%d]  ", i);
        for (j = 0; j < rz; j++)
            scanf("%d", &MAX[i][j]);
    }

    /* Repeatedly scan the remaining processes: any process whose need
       can be met from avail runs to completion and releases its
       allocation.  If a full pass finishes no process, the state is
       unsafe and we stop instead of looping forever. */
    while (pno > 0)
    {
        d = -1;                             /* rebuild list of waiting processes */
        for (i = 0; i < pno; i++)
        {
            count = 0;
            t = P[i];
            for (j = 0; j < rz; j++)
            {
                need[t][j] = MAX[t][j] - allocated[t][j];
                if (need[t][j] <= avail[j])
                    count++;
            }
            if (count == rz)                /* every need satisfiable: finish */
            {
                output[k++] = P[i];
                for (j = 0; j < rz; j++)
                    avail[j] += allocated[t][j];
            }
            else
                P[++d] = P[i];              /* still waiting */
        }
        if (d + 1 == pno)                   /* no progress in this pass */
        {
            printf("\n The system is in an unsafe state.\n");
            return 1;
        }
        pno = d + 1;
    }

    printf("\t <");
    for (i = 0; i < k; i++)
        printf(" P[%d] ", output[i]);
    printf(">\n");
    return 0;
}



Q.5.
A.5.

An operating system provides the environment within which programs are executed. To construct such an environment, the system is partitioned into small modules with well-defined interfaces. The design of a new operating system is a major task, and it is very important that the goals of the system be well defined before the design begins. The type of system desired is the foundation for the choices among the various algorithms and strategies that will be necessary.

1 Process Management
The CPU executes a large number of programs. While its main concern is the execution of user programs, the CPU is also needed for other system activities. These activities are called processes. A process is a program in execution. Typically, a batch job is a process, a time-shared user program is a process, and a system task, such as spooling, is also a process. For now, a process may be considered a job or a time-shared program, but the concept is actually more general. In general, a process will need certain resources such as CPU time, memory, files, and I/O devices to accomplish its task. These resources are given to the process when it is created. In addition to the various physical and logical resources that a process obtains when it is created, some initialization data (input) may be passed along. For example, a process whose function is to display the status of a file, say F1, on the screen of a terminal will get the name of the file F1 as input and execute the appropriate program to obtain the desired information. We emphasize that a program by itself is not a process; a program is a passive entity, while a process is an active entity. Two processes may be associated with the same program, but they are nevertheless considered two separate execution sequences. A process is the unit of work in a system. Such a system consists of a collection of processes, some of which are operating-system processes (those that execute system code) and the rest of which are user processes (those that execute user code). All of these processes can potentially execute concurrently. The operating system is responsible for the following activities in connection with process management:
· The creation and deletion of both user and system processes.
· The suspension and resumption of processes.
· The provision of mechanisms for process synchronization.
· The provision of mechanisms for deadlock handling.


2 Memory Management 
Memory is central to the operation of a modern computer system. Memory is a large array of words or bytes, each with its own address. Interaction is achieved through a sequence of reads or writes to specific memory addresses. The CPU fetches from and stores into memory. In order for a program to be executed, it must be mapped to absolute addresses and loaded into memory. As the program executes, it accesses program instructions and data from memory by generating these absolute addresses. Eventually the program terminates, its memory space is declared available, and the next program may be loaded and executed. In order to improve both the utilization of the CPU and the speed of the computer's response to its users, several processes must be kept in memory. There are many different memory-management algorithms, and which is best depends on the particular situation. Selection of a memory-management scheme for a specific system depends upon many factors, but especially upon the hardware design of the system, since each algorithm requires its own hardware support. The operating system is responsible for the following activities in connection with memory management:
· Keeping track of which parts of memory are currently being used and by whom.
· Deciding which processes are to be loaded into memory when memory space becomes available.
· Allocating and deallocating memory space as needed.

3 I/O System

One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from the user. For example, in Unix the peculiarities of I/O devices are hidden from the bulk of the operating system itself by the I/O system. The I/O system consists of:
· A buffer-caching system.
· General device-driver code.
· Drivers for specific hardware devices.
Only the device driver knows the peculiarities of the specific device it serves.

 4 File Management

File management is one of the most visible services of an operating system. Computers can store information in several different physical forms; magnetic tape, disk, and drum are the most common. Each of these devices has its own characteristics and physical organization. For convenient use of the computer system, the operating system provides a uniform logical view of information storage: it abstracts from the physical properties of its storage devices to define a logical storage unit, the file. Files are mapped by the operating system onto physical devices. A file is a collection of related information defined by its creator. Commonly, files represent programs (both source and object forms) and data. Data files may be numeric, alphabetic, or alphanumeric. Files may be free-form, such as text files, or rigidly formatted. In general, a file is a sequence of bits, bytes, lines, or records whose meaning is defined by its creator and user; it is a very general concept. The operating system implements the abstract concept of the file by managing mass-storage devices such as tapes and disks. Files are normally organized into directories to ease their use. Finally, when multiple users have access to files, it may be desirable to control by whom and in what ways files may be accessed. The operating system is responsible for the following activities in connection with file management:
· The creation and deletion of files.
· The creation and deletion of directories.
· The support of primitives for manipulating files and directories.
· The mapping of files onto disk storage.
· The backup of files on stable (non-volatile) storage.

5 Protection System 

The various processes in an operating system must be protected from each other's activities. For that purpose, various mechanisms are used to ensure that the files, memory segments, CPU, and other resources can be operated on only by those processes that have gained proper authorization from the operating system. For example, memory-addressing hardware ensures that a process can execute only within its own address space; the timer ensures that no process can gain control of the CPU without relinquishing it; and no process is allowed to do its own I/O, which protects the integrity of the various peripheral devices. Protection refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system. This mechanism must provide a means for specifying the controls to be imposed, together with some means of enforcement. Protection can improve reliability by detecting latent errors at the interfaces between component subsystems. Early detection of interface errors can often prevent contamination of a healthy subsystem by a subsystem that is malfunctioning. An unprotected resource cannot defend against use (or misuse) by an unauthorized or incompetent user.

