Assignment A Marks 10
Answer all questions.
1. Explain Flynn's classification of computer architecture using a neat block diagram.
Ans. Flynn's Taxonomy of Computer Architecture: The most popular taxonomy of computer architecture was defined by Flynn in 1966. Flynn's classification scheme is based on the notion of a stream of information. Two types of information flow into a processor:
Instruction. The instruction stream is defined as the sequence of instructions performed by the processing unit.
Data. The data stream is defined as the data traffic exchanged between the memory and the processing unit.
According to Flynn's classification, either of the instruction or data streams can be single or multiple. Computer architecture can be classified into the following four distinct categories:
1) single instruction, single data stream (SISD)
2) single instruction, multiple data streams (SIMD)
3) multiple instruction, single data stream (MISD)
4) multiple instruction, multiple data streams (MIMD).
1) SISD (single instruction, single data) is a term referring to a computer architecture in which a single processor, a uniprocessor, executes a single instruction stream to operate on data stored in a single memory. This corresponds to the von Neumann architecture. SISD is one of the four main classifications defined in Flynn's taxonomy. In this system, classifications are based upon the number of concurrent instruction and data streams present in the computer architecture. According to Michael J. Flynn, SISD can have concurrent processing characteristics. Instruction fetching and pipelined execution of instructions are common examples found in most modern SISD computers.
2) Single instruction, multiple data (SIMD) is a class of parallel computers in Flynn's taxonomy. It describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously. Thus, such machines exploit data-level parallelism. SIMD is particularly applicable to common tasks like adjusting the contrast in a digital image or adjusting the volume of digital audio. Most modern CPU designs include SIMD instructions in order to improve the performance of multimedia use.
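To make the idea concrete, the sketch below scales a block of audio samples by a constant gain using x86 SSE intrinsics, so that a single multiply instruction operates on four floats at once. This is only a minimal sketch: it assumes an x86 CPU with SSE support, and the sample values and gain are made up for the example.

#include <stdio.h>
#include <xmmintrin.h>   /* x86 SSE intrinsics */

int main(void) {
    /* Eight illustrative audio samples; a gain of 0.5 halves the volume. */
    float samples[8] = {0.1f, -0.2f, 0.3f, -0.4f, 0.5f, -0.6f, 0.7f, -0.8f};
    __m128 gain = _mm_set1_ps(0.5f);           /* broadcast one scalar */

    for (int i = 0; i < 8; i += 4) {
        __m128 v = _mm_loadu_ps(&samples[i]);  /* load 4 samples */
        v = _mm_mul_ps(v, gain);               /* one instruction, 4 data points */
        _mm_storeu_ps(&samples[i], v);         /* store 4 results */
    }

    for (int i = 0; i < 8; i++) printf("%.2f ", samples[i]);
    printf("\n");
    return 0;
}

Each multiply here applies the same operation to four data elements at once, which is exactly the single-instruction, multiple-data pattern Flynn's SIMD category describes.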
3) MISD (multiple instruction, single data) is a type of parallel computing architecture where many functional units perform different operations on the same data. Pipeline architectures belong to this type, though a purist might say that the data is different after processing by each stage in the pipeline. Fault-tolerant computers executing the same instructions redundantly in order to detect and mask errors, in a manner known as task replication, may also be considered to belong to this type. Not many instances of this architecture exist, as MIMD and SIMD are often more appropriate for common data-parallel techniques; specifically, they allow better scaling and use of computational resources than MISD does. One prominent example of MISD in computing is the Space Shuttle flight control computers. A systolic array is another example of a MISD structure.
4) MIMD (multiple instruction, multiple data) is a technique employed to achieve parallelism. Machines using MIMD have a number of processors that function asynchronously and independently. At any time, different processors may be executing different instructions on different pieces of data. MIMD architectures may be used in a number of application areas such as computer-aided design/computer-aided manufacturing, simulation, modeling, and as communication switches. MIMD machines can be of either shared memory or distributed memory categories; these classifications are based on how MIMD processors access memory. Shared memory machines may be of the bus-based, extended, or hierarchical type. Distributed memory machines may have hypercube or mesh interconnection schemes. A multi-core CPU is an MIMD machine.
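A minimal sketch of MIMD-style execution on a shared-memory machine, using POSIX threads: two threads run different instruction streams (a sum and a maximum) on different data, asynchronously and independently. The task names and data are invented for the illustration; on Linux, link with -lpthread.

#include <pthread.h>
#include <stdio.h>

/* Instruction stream 1: sum its own data. */
void *sum_task(void *arg) {
    int *a = arg, s = 0;
    for (int i = 0; i < 4; i++) s += a[i];
    printf("sum = %d\n", s);
    return NULL;
}

/* Instruction stream 2: a different computation on different data. */
void *max_task(void *arg) {
    int *b = arg, m = b[0];
    for (int i = 1; i < 4; i++) if (b[i] > m) m = b[i];
    printf("max = %d\n", m);
    return NULL;
}

int main(void) {
    int a[4] = {1, 2, 3, 4}, b[4] = {7, 2, 9, 5};
    pthread_t t1, t2;
    pthread_create(&t1, NULL, sum_task, a);
    pthread_create(&t2, NULL, max_task, b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

On a multi-core CPU the two streams genuinely execute at the same time, which is why the text calls a multi-core CPU an MIMD machine.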
2. Write about the classification of operating systems.
Ans. Classification of Operating Systems
Many operating systems have been designed and developed in the past several decades. They may be classified into different categories depending on their features: (1) multiprocessor, (2) multiuser, (3) multiprogram, (4) multiprocess, (5) multithread, (6) preemptive, (7) reentrant, (8) microkernel, and so forth. These features, and the challenges involved in implementing them, are discussed very briefly in the following subsections.
1.11.1. Multiprocessor Systems
A multiprocessor system is one that has more than one processor on-board in the computer. The processors execute independent streams of instructions simultaneously. They share system buses, the system clock, and the main memory, and may share peripheral devices too. Such systems are also referred to as tightly coupled multiprocessor systems, as opposed to networks of computers (called distributed systems).
A uniprocessor system can execute only one process at any point of real time, though there might be many processes ready to be executed. By contrast, a multiprocessor system can execute many different processes simultaneously at the same real time. However, the number of processors in the system restricts the degree of simultaneous process executions.
In multiprocessor systems, many processes may execute the kernel simultaneously. In uniprocessor systems, concurrency is only achieved in the form of execution interleavings: only one process can make progress in the kernel mode, while others are blocked in the kernel waiting for processor allocation or some events to occur.
There are two primary models of multiprocessor operating systems: symmetric and asymmetric. In a symmetric multiprocessor system, each processor executes the same copy of the resident operating system, takes its own decisions, and cooperates with other processors for smooth functioning of the entire system. In an asymmetric multiprocessor system, each processor is assigned a specific task, and there is a designated master processor that controls activities of the other subordinate processors. The master processor assigns work to subordinate processors.
In multiprocessor systems, many processors can execute operating system programs simultaneously. Consequently, kernel path synchronization is a major challenge in designing multiprocessor operating systems. We need a highly concurrent kernel to achieve real gains in system performance. Synchronization has a much stronger impact on performance in multiprocessor systems than in uniprocessor systems. Many known uniprocessor synchronization techniques are ineffective in multiprocessor systems; multiprocessor systems need very sophisticated, specialized synchronization schemes. Another challenge in symmetric multiprocessor systems is to balance the workload among processors rationally. Multiprocessor operating systems are also expected to be fault tolerant: failures of a few processors should not halt the entire system, a concept called graceful degradation of the system.
1.11.2. Multiuser Systems
A multiuser system is one that can be used by more than one user. The system provides an environment in which many users can use the system at the same time or exclusively at different times. Each user can execute her applications without any concern about what other users are doing in the system. When many users run their applications at the same time, they compete and contend for system resources. The operating system allocates them the resources in an orderly manner.
Security is a major design issue in multiuser operating systems. Each user has a private space in the system where she maintains her programs and data, and the operating system must ensure that this space is visible only to her and to authorized users, and is protected from unauthorized and malicious users. The system needs to arbitrate resource sharing among active users so that nobody is starved of system resources. Multiuser systems may also need an accounting mechanism to keep track of statistics of resource usage by individual users.
1.11.3. Multiprogram Systems
A multiprogram system is one where many application programs can reside in the main memory at the same time (see Fig. 1.18). (By contrast, in uniprogram systems, at most one application program can reside in the main memory.) Applications definitely need to share the main memory, and they may also need to share other system resources among themselves.
Figure 1.18. Multiplexing of the main memory by applications.
Memory management is a major design challenge in multiprogram operating systems. Multiplexing of the main memory is essential to hold multiple applications in it. Different standalone applications should be able to share common subprograms (routines) and data. Processor scheduling (long-term) is another design issue in such systems, as the operating system needs to decide which applications are best to bring into the main memory. Protecting programs from the executions of other programs is another issue in designing such systems.
1.11.4. Multiprocess Systems
A multiprocess system (also known as a multitasking system) is one that executes many processes concurrently (simultaneously or in an interleaved fashion). In a uniprocess system, when the lone process executes a wait operation, the processor would sit idle and waste its time until the process comes out of the wait state. The objective of multiprocessing is to have a process running on the processor at all times, doing purposeful work. Many processes are executed concurrently to improve the performance of the system, and to improve the utilization of system resources such as the processor, the main memory, disks, printers, network interface cards, etc.
Processes may execute the same program (in uniprogram systems) or different programs (in multiprogram systems). They share the processor among themselves in addition to sharing the main memory and I/O devices. The operating system executes processes by switching the processor among them; this switching is called context switching, process switching, or task switching.
Short-term processor scheduling is a major design issue in multiprocess systems. Multiprocess systems need to have schemes for interprocess communication and process synchronization. Protection of one process from another is mandatory in multiprocess systems. These systems, of course, need to provide operations for process creation, maintenance, suspension, resumption, and destruction; a minimal sketch of process creation appears below.
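The following sketch shows the most basic multiprocess operation, process creation, using the POSIX fork() call: after the call two processes exist, and the operating system context-switches the processor between them. It assumes a POSIX system; the printed messages are illustrative.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* duplicate the calling process */
    if (pid == 0) {
        printf("child %d running\n", (int)getpid());
        _exit(0);                       /* child terminates */
    } else if (pid > 0) {
        printf("parent %d running\n", (int)getpid());
        wait(NULL);                     /* parent reaps the child */
    }
    return 0;
}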
1.11.5. Time-sharing Systems
In an interactive system, many users directly interact with the computer from terminals connected to the computer system. They submit small execution requests to the computer and expect results back immediately, after a delay short enough to satisfy their temperament. We need a computer system that supports both multiprogramming and multiprocessing. The processes appear to be executing simultaneously, each at its own speed. This apparent simultaneous execution of processes is achieved by frequently switching the processor from one process to another in a short span of time. Such systems are often called time-sharing systems. Time-sharing is essentially a rapid time-division multiplexing of the processor time among several processes. The switching is so frequent that it almost seems each process has its own processor. A time-sharing system is indeed a multiprocess system, but it switches the processor among processes more frequently. Thus, one additional goal of time-sharing is to help users interact effectively with the system while they run their applications.
3. Discuss different types of Interconnection Networks.
Ans.
4. Explain the conditions for partitioning and parallelism with examples.
Ans. Conditions of Parallelism:
1. Data and resource dependencies: A program consists of several segments, so the ability to execute several program segments in parallel requires that each segment be independent of the other segments. Dependencies between segments of a program may take various forms, such as resource dependence, control dependence, and data dependence. A dependence graph is used to describe the relations: program statements are represented by nodes, and directed edges with different labels show the ordered relations among the statements. After analyzing the dependence graph, it can be shown where opportunities exist for parallelization and vectorization.
Data dependencies: Relations between statements are represented by data dependences. There are five types of data dependencies, given below:
(a) Antidependence: A statement S2 is antidependent on statement S1 if S2 follows S1 in program order and if the output of S2 overlaps the input to S1.
(b) Input dependence: Read and write are input statements; input dependence occurs not because the same variables are involved but because the same file is referenced by both input statements.
(c) Unknown dependence: The dependence relation between two statements cannot be determined in the following situations:
· The subscript of a variable is itself subscripted.
· The subscript does not contain the loop index variable.
· The subscript is nonlinear in the loop index variable.
(d) Output dependence: Two statements are output dependent if they produce the same output variable.
(e) Flow dependence: A statement S2 is flow dependent on a statement S1 if an execution path exists from S1 to S2 and at least one output of S1 feeds in as an input to S2. Short examples of flow, anti, and output dependence appear in the sketch below.
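A minimal C sketch of three of these dependence types on concrete statements; the labels S1..S4 and the values are made up for the example.

#include <stdio.h>

int main(void) {
    int a = 1, b = 2, c, d;

    c = a + b;   /* S1 */
    d = c * 2;   /* S2: flow dependence on S1 (S1's output c is S2's input) */
    a = d - 1;   /* S3: antidependence on S1 (S3 writes a, which S1 reads)  */
    c = a + 5;   /* S4: output dependence on S1 (S1 and S4 both write c)    */

    printf("a=%d c=%d d=%d\n", a, c, d);  /* reordering S1..S4 changes this */
    return 0;
}

Because of these dependences, S1..S4 cannot be freely reordered or run in parallel; a dependence graph over them shows which orderings must be preserved.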
2. Bernstein's conditions: Bernstein revealed a set of conditions under which two processes can execute in parallel. A process is a program that is in execution; a process is an active entity. Actually, it is an abstraction of a program fragment defined at various processing levels. Ii is the input set of process Pi, that is, the set of all input variables needed to execute the process; similarly, the output set Oi consists of all output variables generated after execution of process Pi. Input variables are actually the operands which are fetched from the memory or registers, and output variables are the results to be stored in working registers or memory locations.
Let us consider two processes P1 and P2 with input sets I1 and I2 and output sets O1 and O2. The two processes P1 and P2 can execute in parallel, denoted P1 || P2, if and only if they are independent and do not create confusing results, that is, if and only if
I1 ∩ O2 = ∅, I2 ∩ O1 = ∅, and O1 ∩ O2 = ∅.
A minimal sketch of checking these conditions appears below.
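A minimal sketch of checking Bernstein's conditions in C; the variable-set representation and the two example processes are invented for the illustration.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* True if the two sets of variable names share no element. */
static bool disjoint(const char *a[], int na, const char *b[], int nb) {
    for (int i = 0; i < na; i++)
        for (int j = 0; j < nb; j++)
            if (strcmp(a[i], b[j]) == 0)
                return false;
    return true;
}

int main(void) {
    /* P1: c = a + b  ->  I1 = {a, b}, O1 = {c} */
    const char *I1[] = {"a", "b"}; const char *O1[] = {"c"};
    /* P2: e = a * d  ->  I2 = {a, d}, O2 = {e} */
    const char *I2[] = {"a", "d"}; const char *O2[] = {"e"};

    /* Bernstein: P1 || P2 iff I1 and O2, I2 and O1, O1 and O2 are disjoint. */
    bool parallel = disjoint(I1, 2, O2, 1) &&
                    disjoint(I2, 2, O1, 1) &&
                    disjoint(O1, 1, O2, 1);
    printf("P1 || P2: %s\n", parallel ? "yes" : "no");
    return 0;
}

Here the shared input a is harmless because both processes only read it, so the check prints "yes"; if P2 instead wrote to a or c, one of the three intersections would be non-empty and the processes could not run in parallel.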
3. Software Parallelism: Software parallelism is defined by the control and data dependency of programs. The degree of parallelism is revealed in the program profile or in the program flow graph. Software parallelism is a function of algorithm, programming style, and compiler optimization. Program flow graphs show the pattern of simultaneously executable operations. Parallelism in a program varies during the execution period.
4. Hardware Parallelism: Hardware parallelism is defined by hardware multiplicity and machine hardware. It is a function of cost and performance trade-offs. It displays the resource utilization patterns of simultaneously executable operations, and it also indicates the performance of the processor resources. One method of identifying parallelism in hardware is by means of the number of instructions issued per machine cycle.
5. What are the Characteristics of CISC and RISC Architecture?
Ans.
Assignment B Marks 10
Answer all questions.
1. What is a pipeline computer? Explain the principles of pipelining.
Ans.
2. Discuss SIMD Architecture in detail with its variants.
Ans.
3. What is a Vector Processor? Compare Vector and Stream Architecture.
Ans.
4. Read the case study given below and answer the questions given at the end.
Case Study
The key to higher performance in microprocessors for a broad range of applications is the ability to exploit fine-grain, instruction-level parallelism. Some methods for exploiting fine-grain parallelism include:
1. Pipelining
2. Multiple processors
3. Superscalar implementation
4. Specifying multiple independent operations per instruction.
Pipelining is now universally implemented in high-performance processors. Little more can be gained by improving the implementation of a single pipeline. Using multiple processors improves performance for only a restricted set of applications. Superscalar implementations can improve performance for all types of applications. Superscalar means the ability to fetch, issue to execution units, and complete more than one instruction at a time. Superscalar implementations are required when architectural compatibility must be preserved, and they will be used for entrenched architectures with legacy software, such as the x86 architecture that dominates the desktop computer market.
Specifying multiple operations per instruction creates a very-long-instruction-word architecture, or VLIW. A VLIW implementation has capabilities very similar to those of a superscalar processor (issuing and completing more than one operation at a time) with one important exception: the VLIW hardware is not responsible for discovering opportunities to execute multiple operations concurrently. For the VLIW implementation, the long instruction word already encodes the concurrent operations. This explicit encoding leads to dramatically reduced hardware complexity compared to a high-degree superscalar implementation of a RISC or CISC. The big advantage of VLIW, then, is that a highly concurrent (parallel) implementation is much simpler and cheaper to build than equivalently concurrent RISC or CISC chips. VLIW is a simpler way to build a superscalar microprocessor.
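To make the contrast with superscalar concrete, here is a toy C sketch of a VLIW bundle: each long instruction word explicitly encodes one operation per functional-unit slot, so the hardware never has to discover the parallelism at run time. The slot layout, opcodes, and register file are all invented for the illustration.

#include <stdio.h>

typedef enum { NOP, ADD, MUL, LOAD } Op;

/* One long instruction word: three issue slots filled by the compiler. */
typedef struct { Op alu0, alu1, mem; } Bundle;

int main(void) {
    int r[8] = {0, 1, 2, 3, 4, 5, 6, 7};    /* toy register file */
    Bundle program[2] = {
        { ADD, MUL, LOAD },  /* three independent ops issue together */
        { ADD, NOP, NOP },   /* compiler inserts NOPs for idle slots */
    };

    for (int pc = 0; pc < 2; pc++) {
        Bundle b = program[pc];
        /* All slots of one bundle "execute" in the same cycle. */
        if (b.alu0 == ADD) r[0] = r[1] + r[2];
        if (b.alu1 == MUL) r[3] = r[4] * r[5];
        if (b.mem == LOAD) r[6] = r[7];      /* stand-in for a memory load */
        printf("cycle %d: r0=%d r3=%d r6=%d\n", pc, r[0], r[3], r[6]);
    }
    return 0;
}

The scheduling decisions live in the program array, that is, in the compiler's output, which is why the case study says the explicit encoding dramatically reduces hardware complexity.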
Questions:
1. Why do we need VLIW Architecture?
Ans.
2. Compare VLIW with CISC and RISC.
Ans.
3. Discuss the software-instead-of-hardware implementation advantages of VLIW.
Ans.
Assignment C Marks 10
Answer all questions. Tick mark (√) the most appropriate answer.
1. Multi-computers are--
a) Distributed address space accessible by local processors.
b) Simultaneous access to shared variables can produce inconsistent results.
c) Requires message filtering for more than 1 computer.
d) Share the common memory.
Ans: A
2. Multi-Processors are--
a) Share the common memory.
b) Systems contain multiple processors on a single machine.
c) Consists of a number of processors accessing other processors.
d) Multiprocessor implementation for non-embedded systems.
Ans: A
3. Multivector is--
a) A manufacturer of Process Control
b) An element of a vector space V.
c) Unique High-Expression (HExTM) technology platforms
d) Pair End Reads. Faster, Easier.
4. SIMD computers are--
a) Computer consists of limited identical processors.
b) A modern supercomputer is almost always a cluster of MIMD machines.
c) Single instruction with multiple data.
d) General instruction in computer.
Ans: C
5. Program Partitioning and Scheduling: Lines are defined as those lines which are coplanar and do not intersect, is the condition of--
a) Partitioning
b) Parallelism
c) Scheduling
d) Multiprocessing
Ans: D
6. VLSI stands for--
a) Very Large Scale Integration
b) Variable length serial mask.
c) Virtual limit of sub interface.
d) Very last stack instruction.
Ans: A
7. Which parallel algorithm is used for multiprocessors--
a) SIMD
b) VLSI
c) APST
d) NDPL
Ans: B
8. The IEEE standard backplane bus specification is for--
a) Multilevel architectures
b) Multiprocessor architectures
c) Multipath architectures.
d) Multiprogramming architecture.
Ans: B
9. Hierarchical memory system technology uses--
a) Cache memory
b) Memory sticks.
c) HDD.
d) Virtual memory.
Ans: A
10. An arbitration protocol governs the--
a) Interrupt
b) I/O
c) H/w
d) Parity.
Ans: A
11. In which of the following is the order of program execution explicitly stated in user programs?
a) Program Flow Mechanisms
b) Control Flow mechanism
c) Data Flow mechanism
d) Reduction flow mechanism
Ans: A
12. Shared memory, program counter, and control sequencer are features of--
a) Data Flow
b) Program Flow
c) Control Flow mechanism
d) Reduction flow mechanism
Ans: C
13. ……… Instruction address(es) effectively replaces the program counter in a control flow machine.
a) Dataflow Architecture
b) Demand-Driven Mechanisms
c) Data Reduction Mechanism.
d) Reduction mechanism.
14. APT is--
a) Advanced processor technology
b) Advertise poster trend.
c) Addition part of tech.
d) Actual planning of tech.
Ans: A
15. Addressing Modes on the Y86 are used in--
a) ISA
b) APT
c) VLSI
d) ISMD
Ans: A
16. The crossbar switch was most popular from--
a) 1950 to 1980
b) 1980 to 2000
c) 1970 to 1990
d) 1960 to 2000
Ans: A
17. A memory shared by many processors to communicate among themselves is termed as--
a) Multiport memory
b) Multiprocessor memory.
c) Multilevel memory.
d) Multidevice memory.
Ans: B
18. A switching system for accessing memory modules in a multiprocessor is called--
a) Combining n/w.
b) Combining processors.
c) Combining devices.
d) Combining cables.
Ans: B
19. What does the following diagram show?
a) Process Hierarchy
b) Memory Hierarchy
c) Accessing of Memory
d) CPU connections.
Ans: B
20. …….. is the first supercomputer produced by India.
a) PARAM
b) Intel 5000
c) Super India
d) None of these
Ans: A
21. SIMD stands for--
a) Single Instruction Multiple Data stream
b) Synchronous Instruction Multiple Data stream
c) Single Interface Multiple Data stream
d) Single Instruction Multiple Data signal
Ans: A
22. SISD stands for--
a) Single Instruction Several Data stream.
b) Single Instruction Single Data stream
c) Single Instruction Several Document stream
d) None of these
Ans: A
23. MIMD stands for--
a) Multiple Instruction Multiple Data stream.
b) Multiple Instruction Meta Data stream.
c) Multiple Instruction Modular Data stream.
d) None of these
Ans: A
24. MISD stands for--
a) Multiple Instruction Single Data stream.
b) More Instruction Single Data stream.
c) Multiple Instruction Simple Data stream.
d) None of these
Ans: A
25. Which one is true about MISD?
a) Is not a practically existing model.
b) Practically existing model.
c) Meta instruction single data
d) All above are true
Ans: B
26. SISD is the example of--
a) Distributed parallel processor system.
b) Sequential system
c) Multiprocessing system
d) None of these
Ans: B
27. VLIW stands for--
a) Variable length instruction wall
b) Very large instruction word
c) Very long instruction word
d) None of these
Ans: C
28. RISC stands for--
a) Rich instruction set computers.
b) Rich instruction serial computers.
c) Real instruction set computers.
d) None of these
Ans: D
29. SIMD has following component--
a) PE
b) CU
c) AU
d) All of Above
Ans: B
30. CISC stands for--
a) Complex instruction set computers
b) Complete instruction set computers
c) Core instruction set computers
d) None of these
Ans: A
31. The ALU and control unit of most of the microcomputers are combined and manufactured on a single silicon chip. What is it called?
a) Monochip
b) Microprocessor
c) ALU
d) Control unit
Ans: B
32. Which of the following registers is used to keep track of the address of the memory location where the next instruction is located?
a) Memory Address Register
b) Memory Data Register
c) Instruction Register
d) Program Register
Ans: A
33. A complete microcomputer system consists of--
a) Microprocessor
b) Memory
c) Peripheral equipment
d) All of above
Ans: D
34. The CPU performs the operations of--
a) Data transfer
b) Logic operation
c) Arithmetic operation
d) All of above
Ans: D
35. Pipelining strategy is called implementing--
a) Instruction execution
b) Instruction prefetch
c) Instruction decoding
d) Instruction manipulation
Ans: B
36. What is the function of the control unit in a CPU?
a) To transfer data to primary storage
b) To store program instruction
c) To perform logic operations
d) To decode program instruction
Ans: A
37. Pipeline implements--
a) Fetch instruction
b) Decode instruction
c) Fetch operand
d) Calculate operand
e) Execute instruction
f) All of above
Ans: F
38. Memory access in RISC architecture is limited to instructions like--
a) CALL and RET
b) PUSH and POP
c) STA and LDA
d) MOV and JMP
Ans: C
39. The most common addressing techniques employed by a CPU are--
a) Immediate
b) Direct
c) Indirect
d) Register
e) All of the above
Ans: E
40. A shared memory SIMD model is …….. than a distributed memory model.
a) More complex
b) Less complex
c) Equally complex
d) Can't say
Ans: B