In computer science, Coffman deadlock refers to a specific condition in which two or more processes are each waiting for the other to release a resource, or more than two processes are waiting for resources in a circular chain (see Necessary conditions). Deadlock is a common problem in multiprocessing, where many processes share a specific type of mutually exclusive resource known as a software lock or soft lock. Computers intended for the time-sharing and/or real-time markets are often equipped with a hardware lock (or hard lock) which guarantees exclusive access to processes, forcing serialized access. Deadlocks are particularly troubling because there is no general solution to avoid (soft) deadlocks.

Necessary conditions
There are four necessary and sufficient conditions for a Coffman deadlock to occur, known as the Coffman conditions from their first description in a 1971 article by E. G. Coffman.
Mutual exclusion condition: a resource that cannot be used by more than one process at a time
Hold and wait condition: processes already holding resources may request new resources
No preemption condition: no resource can be forcibly removed from a process holding it; resources can be released only by the explicit action of the process
Circular wait condition: two or more processes form a circular chain where each process waits for a resource that the next process in the chain holds

Prevention
Removing the mutual exclusion condition means that no process may have exclusive access to a resource. This proves impossible for resources that cannot be spooled, and even with spooled resources deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms.
The "hold and wait" condition may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations); this advance knowledge is frequently difficult to satisfy and, in any case, is an inefficient use of resources. Another way is to require processes to release all their resources before requesting all the resources they will need. This too is often impractical. (Such algorithms, such as serializing tokens, are known as the all-or-none algorithms.)

Avoidance
Deadlock can be avoided if certain information about processes is available in advance of resource allocation. For every resource request, the system sees if granting the request will mean that the system will enter an unsafe state, meaning a state that could result in deadlock. The system then only grants requests that will lead to safe states. In order for the system to be able to figure out whether the next state will be safe or unsafe, it must know in advance at any time the number and type of all resources in existence, available, and requested. One well-known algorithm used for deadlock avoidance is the Banker's algorithm, which requires resource usage limits to be known in advance. However, for many systems it is impossible to know in advance what every process will request. This means that deadlock avoidance is often impossible.
Two other algorithms are Wait/Die and Wound/Wait, each of which uses a symmetry-breaking technique. In both these algorithms there exists an older process (O) and a younger process (Y). Process age can be determined by a timestamp at process creation time: smaller timestamps are older processes, while larger timestamps represent younger processes.
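The safety check at the heart of the Banker's algorithm described above can be sketched as follows. This is a minimal illustration, not a production implementation; `allocation[i]` is what process i currently holds, `need[i]` is what it may still request, `available` is the free-resource vector, and all concrete numbers are made up for the example.

```python
# Minimal sketch of the Banker's algorithm safety check: a state is safe
# if some order exists in which every process can run to completion.
def is_safe(available, allocation, need):
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            # process i can run to completion if its remaining need fits in `work`
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # on completion it returns everything it holds
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)  # safe iff a full completion order exists

alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
safe = is_safe([3, 3, 2], alloc, need)    # enough slack: a safe state
unsafe = is_safe([0, 0, 0], alloc, need)  # nothing free: no process can finish
```

The avoidance rule then becomes: grant a request only if the resulting state still passes this check.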
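The two symmetry-breaking rules above can be stated as small decision functions. This is an illustrative sketch; the function names and return strings are invented for the example and do not come from any particular system.

```python
# Timestamps act as ages: a smaller timestamp means an older process.
def wait_die(requester_ts, holder_ts):
    # Wait-Die: an older requester is allowed to wait;
    # a younger requester "dies" (is rolled back and restarted).
    return "wait" if requester_ts < holder_ts else "die"

def wound_wait(requester_ts, holder_ts):
    # Wound-Wait: an older requester "wounds" (preempts) the holder;
    # a younger requester waits.
    return "wound" if requester_ts < holder_ts else "wait"
```

In both schemes only one of the two processes is ever made to block on the other, so a circular wait can never form.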
Scheduling is a key concept in computer multitasking, multiprocessing operating system, and real-time operating system designs. Scheduling refers to the way processes are assigned to run on the available CPUs, since there are typically many more processes running than there are available CPUs. This assignment is carried out by software known as a scheduler and dispatcher.
The scheduler is concerned mainly with:
CPU utilization - to keep the CPU as busy as possible.
Throughput - the number of processes that complete their execution per time unit.
Turnaround - the total time between submission of a process and its completion.
Waiting time - the amount of time a process has been waiting in the ready queue.
Response time - the amount of time it takes from when a request was submitted until the first response is produced.
Fairness - equal CPU time to each thread.
In real-time environments, such as mobile devices for automatic control in industry (for example robotics), the scheduler also must ensure that processes can meet deadlines; this is crucial for keeping the system stable. Scheduled tasks are sent to mobile devices and managed through an administrative back end.
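The waiting-time and turnaround criteria above can be computed for a simple non-preemptive run. The sketch below assumes first-come, first-served order; the tuple layout and field names are choices made for this example only.

```python
# Illustrative sketch: per-job turnaround and waiting time when jobs run
# to completion in arrival order (first-come, first-served).
def fcfs_metrics(jobs):
    """jobs: list of (arrival_time, burst_time) tuples."""
    clock = 0
    metrics = []
    for arrival, burst in jobs:
        start = max(clock, arrival)           # CPU may sit idle until arrival
        completion = start + burst
        turnaround = completion - arrival     # submission -> completion
        waiting = start - arrival             # time spent in the ready queue
        metrics.append({"turnaround": turnaround, "waiting": waiting})
        clock = completion
    return metrics

m = fcfs_metrics([(0, 5), (1, 3), (2, 1)])
# job 0 waits 0; job 1 waits 4; job 2 waits 6 and has turnaround 7
```

Note how the short last job inherits all the waiting caused by the long first one, which is exactly the kind of trade-off the criteria above are meant to expose.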
Scheduling algorithm
In computer science, a scheduling algorithm is the method by which threads, processes, or data flows are given access to system resources (e.g. processor time, communications bandwidth). This is usually done to load balance a system effectively or achieve a target quality of service. The need for a scheduling algorithm arises from the requirement for most modern systems to perform multitasking (execute more than one process at a time) and multiplexing (transmit multiple flows simultaneously).
Round-robin
(RR) is one of the simplest scheduling algorithms for processes in an operating system, which assigns time slices to each process in equal portions and in circular order, handling all processes without priority. Round-robin scheduling is simple, easy to implement, and starvation-free. Round-robin scheduling can also be applied to other scheduling problems, such as data packet scheduling in computer networks.
The name of the algorithm comes from the round-robin principle known from other fields, where each person takes an equal share of something in turn.
RR scheduling involves extensive overhead, especially with a small time unit.
Balanced throughput between FCFS and SJF: shorter jobs are completed faster than in FCFS, and longer processes are completed faster than in SJF.
Fastest average response time; waiting time is dependent on the number of processes, not on average process length.
Because of high waiting times, deadlines are rarely met in a pure RR system.
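The round-robin behavior described above can be sketched in a few lines. This is a toy simulation; the process names and quantum are illustrative, and real schedulers track far more state.

```python
# Round-robin sketch: each process gets an equal time slice in circular
# order, with no priorities; unfinished processes rejoin the queue.
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: cpu_time_needed}. Returns the completion order."""
    queue = deque(bursts.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:                 # finishes within its slice
            order.append(name)
        else:                                    # slice used up; back of the line
            queue.append((name, remaining - quantum))
    return order

finished = round_robin({"A": 5, "B": 2, "C": 3}, quantum=2)
# the short job B finishes first, the long job A last
```

This also shows why RR is starvation-free: every process reappears at the back of the queue after at most one full cycle.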
Batch processing
is the execution of a series of programs ("jobs") on a computer without manual intervention.
Batch jobs are set up so they can be run to completion without manual intervention, so all input data is preselected through scripts or command-line parameters. This is in contrast to "online" or interactive programs, which prompt the user for such input. A program takes a set of data files as input, processes the data, and produces a set of output data files. This operating environment is termed "batch processing" because the input data are collected into batches on files and are processed in batches by the program. Batch processing has these benefits:
It allows sharing of computer resources among many users and programs,
It shifts the time of job processing to when the computing resources are less busy,
It avoids idling the computing resources with minute-by-minute manual intervention and supervision,
By keeping a high overall rate of utilization, it better amortizes the cost of a computer, especially an expensive one.
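The defining property above, all input chosen in advance and no prompting between jobs, can be sketched minimally. The job functions here are made-up stand-ins for real batch programs.

```python
# Minimal batch-run sketch: every job runs to completion against its
# preselected input, with no manual intervention between jobs.
def run_batch(jobs, inputs):
    """jobs: list of callables; inputs: preselected input data, one per job."""
    outputs = []
    for job, data in zip(jobs, inputs):
        outputs.append(job(data))   # no user interaction anywhere in the loop
    return outputs

results = run_batch([sum, max, len], inputs=[[1, 2, 3], [4, 9, 2], "abcd"])
```

Everything the jobs need is supplied up front, so the batch can run unattended, overnight or whenever the machine is otherwise idle.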
A real-time operating system (RTOS) is an operating system (OS) intended for real-time applications. Such operating systems serve application requests in nearly real time. A real-time operating system offers programmers more control over process priorities. An application's process priority level may exceed that of a system process. Real-time operating systems minimize critical sections of system code, so that the application's interruption is nearly immediate. A key characteristic of a real-time OS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; the variability is jitter. A hard real-time operating system has less jitter than a soft real-time operating system. The chief design goal is not high throughput, but rather a guarantee of a soft or hard performance category. A real-time OS that can usually or generally meet a deadline is a soft real-time OS, but if it can meet a deadline deterministically it is a hard real-time OS.
An operating system (OS)
is software, consisting of programs and data, that runs on computers and manages the computer hardware and provides common services for efficient execution of various application software.
For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between application programs and the computer hardware, although the application code is usually executed directly by the hardware and will frequently call the OS or be interrupted by it. Operating systems are found on almost any device that contains a computer, from cellular phones and video game consoles to supercomputers and web servers.
Memory management
A multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already used by another program. Since programs time share, each program must have independent access to memory.
Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager and do not exceed their allocated memory. This system of memory management is almost never seen anymore, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaved program to crash the system.
Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU), which does not exist in all computers.
Virtual memory
The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks. If a program tries to access memory that isn't in its current range of accessible memory, but nonetheless has been allocated to it, the kernel is interrupted in the same way as it would be if the program were to exceed its allocated memory (see the section on memory management). Under UNIX this kind of interrupt is referred to as a page fault. When the kernel detects a page fault, it will generally adjust the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet.
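The lazy-allocation behavior described above can be modeled with a toy pager. This is a simplified illustration only: the class, its fields, and the page numbers are invented for the example, and real hardware page tables are far more involved.

```python
# Toy model of demand paging: a per-process page table maps virtual pages
# to frames, and the "page fault" path lazily assigns a frame the first
# time a page in the program's allocated range is touched.
class TinyPager:
    def __init__(self, allocated_pages):
        self.allocated = set(allocated_pages)  # pages the program may use
        self.page_table = {}                   # virtual page -> frame
        self.next_frame = 0
        self.faults = 0

    def access(self, page):
        if page not in self.allocated:         # outside its range entirely
            raise MemoryError(f"illegal access to page {page}")
        if page not in self.page_table:        # page fault: allocate lazily
            self.faults += 1
            self.page_table[page] = self.next_frame
            self.next_frame += 1
        return self.page_table[page]

p = TinyPager(allocated_pages=[0, 1, 2])
frames = [p.access(0), p.access(1), p.access(0)]  # re-touching page 0: no fault
```

The first touch of each page faults and gets a frame; later touches resolve through the page table, mirroring the kernel behavior described above.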
Multi-user is a term that defines an operating system or application software that allows concurrent access by multiple users of a computer. Time-sharing systems are multi-user systems. Most batch processing systems for mainframe computers may also be considered "multi-user", to avoid leaving the CPU idle while it waits for I/O operations to complete. However, the term "multitasking" is more common in this context. An example is a Unix server where multiple remote users have access (such as via Secure Shell) to the Unix shell prompt at the same time. Another example uses multiple X Window sessions spread across multiple terminals powered by a single machine - this is an example of the use of thin clients.
Multitasking refers to the running of multiple independent computer programs on the same computer, giving the appearance that it is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time-sharing, which means that each program uses a share of the computer's time to execute. An operating system kernel contains a piece of software called a scheduler, which determines how much time each program will spend executing and in which order execution control should be passed to programs. Control is passed to a process by the kernel, which allows the program access to the CPU and memory. Later, control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This so-called passing of control between the kernel and applications is called a context switch.
NTFS
(New Technology File System) is the standard file system of Windows NT, including its later versions Windows 2000, Windows XP, Windows Server 2003, Windows Server 2008, Windows Vista, and Windows 7. NTFS supersedes the FAT file system as the preferred file system for Microsoft's Windows operating systems. NTFS has several improvements over FAT and HPFS (High Performance File System), such as improved support for metadata and the use of advanced data structures to improve performance, reliability, and disk space utilization, plus additional extensions such as security access control lists (ACLs) and file system journaling.
Detection
Often, neither avoidance nor deadlock prevention may be used. Instead, deadlock detection and process restart are used, by employing an algorithm that tracks resource allocation and process states and rolls back and restarts one or more of the processes in order to remove the deadlock. Detecting a deadlock that has already occurred is easily possible, since the resources that each process has locked and/or currently requested are known to the resource scheduler or OS.
Detecting the possibility of a deadlock before it occurs is much more difficult and is, in fact, generally undecidable, because the halting problem can be rephrased as a deadlock scenario. However, in specific environments, using specific means of locking resources, deadlock detection may be decidable. In the general case, it is not possible to distinguish between algorithms that are merely waiting for a very unlikely set of circumstances to occur and algorithms that will never finish because of deadlock. Deadlock detection techniques include, but are not limited to, model checking. This approach constructs a finite state model on which it performs a progress analysis and finds all possible terminal sets in the model; each of these then represents a deadlock.
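Since the OS knows who holds and who requests each resource, detection of an existing deadlock reduces to finding a cycle in the wait-for graph. The sketch below is illustrative; the graph representation and process names are choices made for this example.

```python
# Deadlock detection on a wait-for graph: an edge p -> q means process p
# is waiting for a resource that process q holds. A cycle in this graph
# is exactly the circular-wait condition.
def has_deadlock(wait_for):
    """wait_for: {process: set of processes it waits on}. DFS cycle check."""
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in wait_for.get(node, ()):
            if nxt in on_stack:                    # back edge: found a cycle
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

deadlocked = has_deadlock({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}})  # cycle
healthy = has_deadlock({"P1": {"P2"}, "P2": {"P3"}, "P3": set()})      # chain only
```

Once a cycle is found, recovery proceeds as described above: roll back and restart one of the processes on the cycle to break it.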
What is a Process? A process is a sequential program in execution. The components of a process are the following:
- The object program to be executed (called the program text in UNIX)
- The data on which the program will execute (obtained from a file or interactively from the process's user)
- Resources required by the program (for example, files containing requisite information)
- The status of the process's execution
Process Management
Multiprogramming systems explicitly allow multiple processes to exist at any given time, where only one is using the CPU at any given moment, while the remaining processes are performing I/O or are waiting. The process manager is one of the four major parts of the operating system. It implements the process abstraction. It does this by creating a model for the way the process uses CPU and any system resources. Much of the complexity of the operating system stems from the need for multiple processes to share the hardware at the same time. As a consequence of this goal, the process manager implements CPU sharing (called scheduling), process synchronization mechanisms, and a deadlock strategy. In addition, the process manager implements part of the operating system's protection and security.
Process Management
The operating system manages many kinds of activities, ranging from user programs to system programs like the printer spooler, name servers, file server, etc. Each of these activities is encapsulated in a process. A process includes the complete execution context (code, data, PC, registers, OS resources in use, etc.). It is important to note that a process is not a program. A process is only ONE instance of a program in execution; there may be many processes running the same program. The five major activities of an operating system in regard to process management are:
- Creation and deletion of user and system processes.
- Suspension and resumption of processes.
- A mechanism for process synchronization.
- A mechanism for process communication.
- A mechanism for deadlock handling.

Main-Memory Management
Primary memory or main memory is a large array of words or bytes. Each word or byte has its own address. Main memory provides storage that can be accessed directly by the CPU. That is to say, for a program to be executed, it must be in main memory.
The major activities of an operating system in regard to memory management are:
- Keep track of which parts of memory are currently being used and by whom.
- Decide which processes are loaded into memory when memory space becomes available.
- Allocate and deallocate memory space as needed.
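The bookkeeping activities above can be illustrated with a toy first-fit allocator that tracks which parts of a fixed region are in use and by whom. The class, region size, and owner names are all invented for this sketch.

```python
# Toy first-fit memory allocator: tracks (start, size, owner) blocks in a
# fixed-size region, reusing the first hole large enough for a request.
class FirstFit:
    def __init__(self, size):
        self.size = size
        self.blocks = []               # sorted list of (start, size, owner)

    def allocate(self, size, owner):
        prev_end = 0
        for i, (start, length, _) in enumerate(self.blocks):
            if start - prev_end >= size:          # hole before this block fits
                self.blocks.insert(i, (prev_end, size, owner))
                return prev_end
            prev_end = start + length
        if self.size - prev_end >= size:          # room left at the tail
            self.blocks.append((prev_end, size, owner))
            return prev_end
        return None                               # out of memory

    def free(self, owner):
        self.blocks = [b for b in self.blocks if b[2] != owner]

mem = FirstFit(100)
a = mem.allocate(40, "procA")    # placed at offset 0
b = mem.allocate(40, "procB")    # placed at offset 40
mem.free("procA")                # opens a hole at offset 0
c = mem.allocate(30, "procC")    # first fit reuses the hole at 0
d = mem.allocate(50, "procD")    # no hole large enough remains
```

The failed last request shows external fragmentation: 30 bytes are free in total, split across holes too small to satisfy a 50-byte request.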
File Management
A file is a collection of related information defined by its creator. Computers can store files on disk (secondary storage), which provides long-term storage. Some examples of storage media are magnetic tape, magnetic disk, and optical disk. Each of these media has its own properties, like speed, capacity, data transfer rate, and access method. Files are normally organized into directories to ease their use. These directories may contain files and other directories. The five major activities of an operating system in regard to file management are: 1. The creation and deletion of files. 2. The creation and deletion of directories. 3. The support of primitives for manipulating files and directories. 4. The mapping of files onto secondary storage. 5. The backup of files on stable storage media.
OS Services
Program Execution The purpose of a computer system is to allow the user to execute programs, so the operating system provides an environment where the user can conveniently run programs. The user does not have to worry about memory allocation or multitasking or anything; these things are taken care of by the operating system. Running a program involves allocating and deallocating memory, and CPU scheduling in the case of multiple processes. These functions cannot be given to user-level programs, so user-level programs cannot help the user run programs independently without help from the operating system.
I/O Operations Each program requires input and produces output. This involves the use of I/O. The operating system hides from the user the details of the underlying hardware for the I/O. All the user sees is that the I/O has been performed, without any details. So the operating system, by providing I/O, makes it convenient for users to run programs. For efficiency and protection, users cannot control I/O directly, so this service cannot be provided by user-level programs.
File System Manipulation The output of a program may need to be written into new files, or input taken from some files. The operating system provides this service. The user does not have to worry about secondary storage management: the user gives a command for reading or writing to a file and sees the task accomplished. Thus the operating system makes it easier for user programs to accomplish their tasks. This service involves secondary storage management. The speed of I/O, which depends on secondary storage management, is critical to the speed of many programs, and hence it is best left to the operating system to manage rather than giving individual users control of it. It is not difficult for user-level programs to provide these services, but for the above-mentioned reasons it is best if this service is left with the operating system.
Communications There are instances where processes need to communicate with each other to exchange information. It may be between processes running on the same computer or on different computers. By providing this service, the operating system relieves the user of the worry of passing messages between processes. In cases where the messages need to be passed to processes on other computers through a network, it can be done by user programs. The user program may be customized to the specifics of the hardware through which the message transits and provides the service interface to the operating system.
Error Detection An error in one part of the system may cause malfunctioning of the complete system. To avoid such a situation, the operating system constantly monitors the system to detect errors. This relieves the user of the worry of errors propagating to various parts of the system and causing malfunctioning. This service cannot be allowed to be handled by user programs, because it involves monitoring, and in some cases altering, areas of memory, or deallocating memory for a faulty process, or perhaps relinquishing the CPU of a process that goes into an infinite loop. These tasks are too critical to be handed over to user programs. A user program, if given these privileges, could interfere with the correct (normal) operation of the operating system.
Layered Approach Design In this case the system is easier to debug and modify, because changes affect only limited portions of the code, and the programmer does not have to know the details of the other layers. Information is also kept only where it is needed and is accessible only in certain ways, so bugs affecting that data are limited to a specific module or layer.
Threads
Despite the fact that a thread must execute within a process, a process and its associated threads are different concepts. Processes are used to group resources together; threads are the entities scheduled for execution on the CPU. A thread is a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Within a process, threads allow multiple streams of execution. In many respects, threads are a popular way to improve applications through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel. Like a traditional process (i.e., a process with one thread), a thread can be in any of several states (Running, Blocked, Ready, or Terminated). Each thread has its own stack: since a thread will generally call different procedures, it has a different execution history, which is why a thread needs its own stack. In an operating system that has a thread facility, the basic unit of CPU utilization is a thread. A thread has, or consists of, a program counter (PC), a register set, and a stack space. Threads are not independent of one another in the way processes are; as a result, threads share with the other threads of their task their code section, data section, and OS resources, such as open files and signals.
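The sharing described above can be demonstrated directly. This is a minimal sketch: the thread names and item counts are arbitrary, and the lock is there because shared data needs protection once several threads touch it.

```python
# Threads in one process share data (here, a list), while each thread has
# its own execution state (its locals live on its own stack).
import threading

shared = []                       # shared by all threads in the process
lock = threading.Lock()           # protects the shared data section

def worker(name, items):
    for i in range(items):        # `name` and `i` are per-thread (own stack)
        with lock:
            shared.append((name, i))

threads = [threading.Thread(target=worker, args=(n, 3)) for n in ("T1", "T2")]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = len(shared)               # 2 threads x 3 items each
```

Both threads appended to the same list without any explicit message passing, which is exactly what distinguishes threads of one process from separate processes.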
CPU/Process Scheduling
The assignment of physical processors to processes allows processors to accomplish work. The problem of determining when processors should be assigned, and to which processes, is called processor scheduling or CPU scheduling. When more than one process is runnable, the operating system must decide which one to run first. The part of the operating system concerned with this decision is called the scheduler, and the algorithm it uses is called the scheduling algorithm.
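The back-and-forth between a scheduler and its programs can be sketched with Python generators standing in for context switches. This is an analogy only: real context switches are performed by the kernel on hardware state, and every name below is invented for the example.

```python
# Cooperative scheduling sketch: each `yield` returns control to the
# "kernel" loop, which then resumes the next runnable program.
def program(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"      # give the CPU back to the kernel

def kernel(programs):
    trace = []
    while programs:
        prog = programs.pop(0)        # scheduler picks the next program
        try:
            trace.append(next(prog))  # "context switch" into the program
            programs.append(prog)     # it yielded; requeue it
        except StopIteration:
            pass                      # program finished; drop it
    return trace

trace = kernel([program("A", 2), program("B", 1)])
# execution interleaves: A, B, A
```

Each `next(prog)` plays the role of handing the CPU to a process, and each `yield` plays the role of the mechanism by which control returns to the kernel.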
Goals of Scheduling (objectives)
In this section we try to answer the following question: what does the scheduler try to achieve? Many objectives must be considered in the design of a scheduling discipline. In particular, a scheduler should consider fairness, efficiency, response time, turnaround time, throughput, etc. Some of these goals depend on the system one is using, for example a batch system, interactive system, or real-time system, but there are also some goals that are desirable in all systems.