Multiple Choice Questions & Answers on Parallel Computing
1. During the execution of instructions, a copy of the instruction is placed in the ______.
a) register
b) ram
c) system heap
d) cache
Answer: cache
2. Pipe-lining is a unique feature of _______.
a) risc
b) cisc
c) isa
d) iana
Answer: risc
3. If a processor clock is rated as 1250 million cycles per second, then its clock period is ________ .
a) 1.9 * 10 ^ -10 sec
b) 1.6 * 10 ^ -9 sec
c) 1.25 * 10 ^ -10 sec
d) 8 * 10 ^ -10 sec
Answer: 8 * 10 ^ -10 sec
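Worked check (an illustrative Python one-liner; the clock period is simply the reciprocal of the clock frequency):

    period = 1 / 1250e6  # 1250 million cycles/sec -> 8e-10 sec (0.8 ns)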
4. A processor performing fetch or decode of a different instruction during the execution of another instruction is called ______.
a) super-scaling
b) pipe-lining
c) parallel computation
d) none of these
Answer: pipe-lining
5. Both the CISC and RISC architectures have been developed to reduce the ______.
a) cost
b) time delay
c) semantic gap
d) all of the above
Answer: semantic gap
6. A collection of lines that connects several devices is called __________
a) bus
b) peripheral connection wires
c) both a and b
d) internal wires
Answer: bus
7. CPU does not perform the operation ________
a) data transfer
b) logic operation
c) arithmetic operation
d) all of the above
Answer: data transfer
8. The PC (Program Counter) is also called ___________
a) instruction pointer
b) memory pointer
c) data counter
d) file pointer
Answer: instruction pointer
9. If memory addresses refer to successive memory words, the machine is called ________
a) word addressable
b) byte addressable
c) bit addressable
d) terabyte addressable
Answer: word addressable
10. A complete microcomputer system consists of __________
a) microprocessor
b) memory
c) peripheral equipment
d) all of the above
Answer: all of the above.
11. A microprogram written as string of 0's and 1's is a ___________
a) symbolic microinstruction
b) binary microinstruction
c) symbolic micro-program
d) binary micro-program
Answer: binary micro-program
12. In CISC architecture most of the complex instructions are stored in _____.
a) register
b) diodes
c) cmos
d) transistors
Answer: transistors
13. Data hazards occur when ___________
a) greater performance loss
b) pipeline changes the order of read/write access to operands
c) some functional unit is not fully pipelined
d) machine size is limited
Answer: pipeline changes the order of read/write access to operands
14. The average number of steps taken to execute a set of instructions can be made less than one by following _______.
a) isa
b) pipe-lining
c) super-scaling
d) sequential
Answer: super-scaling
15. A pipeline is like ________
a) an automobile assembly line
b) house pipeline
c) both a and b
d) a gas line
Answer: an automobile assembly line
16. As of 2000, the reference system used to measure the performance of a system is _____.
a) ultra sparc 10
b) sun sparc
c) sun ii
d) none of these
Answer: ultra sparc 10
17. The Sun micro systems processors usually follow _____ architecture.
a) cisc
b) isa
c) ultra sparc
d) risc
Answer: risc
18. As of 2000, the reference system used to find the SPEC rating is built with a _____ processor.
a) intel atom sparc 300mhz
b) ultra sparc-iii 300mhz
c) amd neutrino series
d) asus a series 450 mhz
Answer: ultra sparc-iii 300mhz
19. The access time of memory is ________ the time required for performing any single CPU operation.
a) longer than
b) shorter than
c) negligible than
d) same as
Answer: longer than
20. The cost of parallel processing is primarily determined by:
a) time complexity
b) switching complexity
c) circuit complexity
d) none of the above
Answer: circuit complexity
21. An instruction used to provide a small delay in a program is
a) lda
b) nop
c) bea
d) none of the above
Answer: nop
22. A characteristic of the RISC (Reduced Instruction Set Computer) instruction set is
a) three instructions per cycle
b) two instructions per cycle
c) one instruction per cycle
d) none of the above
Answer: one instruction per cycle
23. Parallel processing may occur
a) in the instruction stream
b) in the data stream
c) both (a) and (b)
d) none of the above
Answer: both (a) and (b)
24. In daisy-chaining priority method, all the devices that can request an interrupt are connected in
a) parallel
b) serial
c) random
d) none of the above
Answer: serial
25. Two processors A and B have clock frequencies of 700 MHz and 900 MHz respectively. Suppose A can execute an instruction with an average of 3 steps and B can execute with an average of 5 steps. For the execution of the same instruction, which processor is faster?
a) a
b) b
c) both take the same time
d) insufficient information
Answer: a
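Worked check (an illustrative Python sketch; time per instruction is the average step count divided by the clock frequency):

    time_a = 3 / 700e6  # ~4.29e-9 sec per instruction on A
    time_b = 5 / 900e6  # ~5.56e-9 sec per instruction on B
    # time_a < time_b, so processor A is faster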
26. Which one of the following is a characteristic of CISC (Complex Instruction Set Computer)
a) fixed format instructions
b) variable format instructions
c) instructions are executed by hardware
d) none of the above
Answer: variable format instructions
27. For a given FINITE number of instructions to be executed, which architecture of the processor provides for a faster execution?
a) isa
b) ansa
c) super-scalar
d) all of the above
Answer: super-scalar
28. The clock rate of the processor can be improved by,
a) improving the ic technology of the logic circuits
b) reducing the amount of processing done in one step
c) using the overclocking method
d) all of the above
Answer: all of the above
29. The ultimate goal of a compiler is to,
a) reduce the clock cycles for a programming task.
b) reduce the size of the object code.
c) be versatile.
d) be able to detect even the smallest of errors.
Answer: reduce the clock cycles for a programming task.
30. An optimizing Compiler does,
a) better compilation of the given piece of code.
b) takes advantage of the type of processor and reduces its process time.
c) does better memory management.
d) both a and c
Answer: takes advantage of the type of processor and reduces its process time.
31. SPEC stands for,
a) standard performance evaluation code.
b) system processing enhancing code.
c) system performance evaluation corporation.
d) standard processing enhancement corporation.
Answer: system performance evaluation corporation.
32. If the instruction Add R1,R2,R3 is executed in a system which is pipe-lined, then the value of S is (where S is a term of the basic performance equation)
a) 3
b) ~2
c) ~1
d) 6
Answer: ~1
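For reference, the basic performance equation is usually written as T = (N x S) / R, where T is the program execution time, N the number of instructions executed, S the average number of steps (cycles) per instruction, and R the clock rate; with an ideal pipeline one instruction completes almost every cycle, so the effective S approaches 1.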
33. CISC stands for,
a) complete instruction sequential compilation
b) computer integrated sequential compiler
c) complex instruction set computer
d) complex instruction sequential compilation
Answer: complex instruction set computer
34. The devices connected to a microprocessor can use the data bus:
a) all the time
b) at regular interval of time
c) only when it’s sending or receiving data
d) when the microprocessor is reset
Answer: only when it’s sending or receiving data
35. Intel 8080 microprocessor has an instruction set of 91 instructions. The opcode to implement this instruction set should be at least
a) 3 bits long
b) 5 bits long
c) 7 bits long
d) 9 bits long
Answer: 7 bits long
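Worked check (an illustrative Python sketch; an n-bit opcode can encode 2**n distinct instructions, so we need the smallest n with 2**n >= 91):

    import math
    bits = math.ceil(math.log2(91))  # 2**6 = 64 < 91 <= 128 = 2**7, so bits == 7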
36. Intel Pentium CPU is a
a) risc based
b) cisc based
c) both of the above
d) none of the above
Answer: cisc based
37. Dynamic RAMs are best suited to
a) slow system
b) large system
c) one bit system
d) none of the above
Answer: large system
38. A modem is used to link up two computers via
a) telephone line
b) dedicated line
c) both of the above
d) none of the above
Answer: both of the above
39. The maximum integer which can be stored in an 8-bit accumulator is
a) 112
b) 200
c) 255
d) 224
Answer: 255
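Worked check (illustrative Python; an n-bit register holds unsigned values 0 .. 2**n - 1):

    max_value = 2**8 - 1  # 255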
40. In a system with a 16-bit address bus, what is the maximum number of 1K-byte memory devices it could contain?
a) 16
b) 64
c) 256
d) 65536
Answer: 64
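Worked check (illustrative Python; a 16-bit address bus spans 2**16 byte addresses, and each 1K-byte device occupies 1024 of them):

    devices = 2**16 // 1024  # 65536 / 1024 == 64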
41. A peripheral is
a) any drives installed in the computer
b) tapedrive connected to a computer
c) any physical device connected to the computer
d) none of the above
Answer: any physical device connected to the computer
42. Which of the following memories in a computer is volatile?
a) ram
b) rom
c) eprom
d) all
Answer: ram
43. Scalability refers to a parallel system’s (hardware and/or software) ability
a) To demonstrate a proportionate increase in parallel speedup with the removal of some processors
b) To demonstrate a proportionate increase in parallel speedup with the addition of more processors
c) To demonstrate a proportionate decrease in parallel speedup with the addition of more processors
d) None of these
Answer: To demonstrate a proportionate increase in parallel speedup with the addition of more processors
44. Uniform Memory Access (UMA) refers to
a) Here all processors have equal access and access times to memory
b) Here if one processor updates a location in shared memory, all the other processors know about the update.
c) Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories
d) None of these
Answer: Here all processors have equal access and access times to memory
45. Parallel computing can include
a) Single computer with multiple processors
b) Arbitrary number of computers connected by a network
c) Combination of both A and B
d) None of these
Answer: Combination of both A and B
46. In shared memory
a) Changes in a memory location effected by one processor do not affect all other processors.
b) Changes in a memory location effected by one processor are visible to all other processors
c) Changes in a memory location effected by one processor are randomly visible to all other processors.
d) None of these
Answer: Changes in a memory location effected by one processor are visible to all other processors
47. Collective communication
a) It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
b) It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
c) It allows tasks to transfer data independently from one another.
d) None of these
Answer: It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
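As a concrete illustration (a minimal sketch assuming the mpi4py package, which is not part of the question bank), a broadcast is a typical collective operation:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    # every rank in the group participates; rank 0 supplies the data
    data = {'step': 1} if comm.Get_rank() == 0 else None
    data = comm.bcast(data, root=0)  # all ranks now hold the same dict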
48. Point-to-point communication refers to
a) It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
b) It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
c) It allows tasks to transfer data independently from one another.
d) None of these
Answer: It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
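By contrast, a point-to-point exchange involves exactly one sender and one receiver (again a minimal mpi4py sketch, run with e.g. mpiexec -n 2):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    if comm.Get_rank() == 0:
        comm.send({'payload': 42}, dest=1, tag=0)   # producer
    elif comm.Get_rank() == 1:
        msg = comm.recv(source=0, tag=0)            # consumer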
49. Shared Memory is
a) A computer architecture where all processors have direct access to common physical memory
b) It refers to network based memory access for physical memory that is not common.
c) Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as through a shared memory bus or over a network; however, the actual event of data exchange is commonly referred to as communications regardless of the method employed
d) None of these
Answer: A computer architecture where all processors have direct access to common physical memory
50. Data dependence is
a) Involves only those tasks executing a communication operation
b) It exists between program statements when the order of statement execution affects the results of the program.
c) It refers to the practice of distributing work among tasks so that all tasks are kept busy all of the time. It can be considered as minimization of task idle time.
d) None of these
Answer: It exists between program statements when the order of statement execution affects the results of the program.
51. Non-Uniform Memory Access (NUMA) is
a) Here all processors have equal access and access times to memory
b) Here if one processor updates a location in shared memory, all the other processors know about the update.
c) Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories
d) None of these
Answer: Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories
52. In the threads model of parallel programming
a) A single process can have multiple, concurrent execution paths
b) A single process can have single, concurrent execution paths.
c) A multiple process can have single concurrent execution paths.
d) None of these
Answer: A single process can have multiple, concurrent execution paths
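A minimal Python sketch of the threads model (one process, several concurrent execution paths sharing the process's memory):

    import threading

    def worker(n):
        print(f"thread {n} running in the same address space")

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()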
53. Here a single program is executed by all tasks simultaneously. At any moment in time, tasks can be executing the same or different instructions within the same program. These programs usually have the necessary logic programmed into them to allow different tasks to branch or conditionally execute only those parts of the program they are designed to execute.
a) Single Program Multiple Data (SPMD)
b) Multiple Program Multiple Data (MPMD)
c) Von Neumann Architecture
d) None of these
Answer: Single Program Multiple Data (SPMD)
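An SPMD sketch (assuming mpi4py for illustration): every rank runs the same file, and branching on the rank lets each task execute only its part of the program:

    from mpi4py import MPI

    rank = MPI.COMM_WORLD.Get_rank()
    if rank == 0:
        print("rank 0: coordinating")           # one branch of the same program
    else:
        print(f"rank {rank}: computing its share")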
54. It is the simultaneous use of multiple compute resources to solve a computational problem
a) Parallel computing
b) Single processing
c) Sequential computing
d) None of these
Answer: Parallel computing
55. Synchronous communication operations refer to
a) Involves only those tasks executing a communication operation
b) It exists between program statements when the order of statement execution affects the results of the program.
c) It refers to the practice of distributing work among tasks so that all tasks are kept busy all of the time. It can be considered as minimization of task idle time.
d) None of these
Answer: Involves only those tasks executing a communication operation
56. Parallel Overhead is
a) Observed speedup of a code which has been parallelized, defined as the ratio of wall-clock time of serial execution to wall-clock time of parallel execution
b) The amount of time required to coordinate parallel tasks. It includes factors such as: Task start-up time, Synchronizations, Data communications.
c) Refers to the hardware that comprises a given parallel system - having many processors
d) None of these
Answer: The amount of time required to coordinate parallel tasks. It includes factors such as: Task start-up time, Synchronizations, Data communications.
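Worked illustration (plain Python; observed speedup is the serial wall-clock time divided by the parallel wall-clock time, and overhead is what keeps it below the ideal):

    t_serial   = 100.0                # seconds, hypothetical measurements
    t_parallel = 30.0                 # on 4 processors
    speedup = t_serial / t_parallel   # ~3.33x observed, vs 4x ideal
    # the shortfall comes from parallel overhead: task start-up,
    # synchronization, and data communication time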
57. These computers use the stored-program concept. Memory is used to store both program and data instructions, and the central processing unit (CPU) gets instructions and/or data from memory. The CPU decodes the instructions and then sequentially performs them.
a) Single Program Multiple Data (SPMD)
b) Flynn’s taxonomy
c) Von Neumann Architecture
d) None of these
Answer: Von Neumann Architecture
58. Latency is
a) Partitioning in which the data associated with a problem is decomposed. Each parallel task then works on a portion of the data.
b) Partitioning in which the focus is on the computation that is to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
c) It is the time it takes to send a minimal (0 byte) message from one point to other point
d) None of these
Answer: It is the time it takes to send a minimal (0 byte) message from one point to other point
59. How many bits will be adequate to encode an individual character in the Devanagari script?
a) 12
b) 16
c) 64
d) 10
Answer: 10
60. Which of the following buses is used to transfer data from main memory to a peripheral device?
a) dma bus
b) output bus
c) data bus
d) all of the above
Answer: data bus
61. To provide increased memory capacity for the operating system, the
a) virtual memory is created
b) cache memory is increased
c) memory for os is reserved
d) additional memory is installed
Answer: virtual memory is created
62. CD-R/W is
a) input device only
b) output device only
c) both of the above
d) none of the above
Answer: both of the above
63. Which of the following requires large computer memory?
a) imaging
b) graphics
c) voice
d) all of the above
Answer: all of the above
64. Which major development led to the production of microcomputers?
a) magnetic disks
b) floppy disks
c) logic gates
d) integrated circuits
Answer: integrated circuits
65. Asynchronous communications
a) It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
b) It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
c) It allows tasks to transfer data independently from one another.
d) None of these
Answer: It allows tasks to transfer data independently from one another.
66. In immediate addressing, the operand is placed
a) in the cpu register
b) after opcode in the instruction
c) in the memory
d) in the stack
Answer: after opcode in the instruction
67. Synchronous communications
a) It requires some type of “handshaking” between tasks that are sharing data. This can be explicitly structured in code by the programmer, or it may happen at a lower level unknown to the programmer.
b) It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
c) It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
d) It allows tasks to transfer data independently from one another.
Answer: It requires some type of “handshaking” between tasks that are sharing data. This can be explicitly structured in code by the programmer, or it may happen at a lower level unknown to the programmer.
68. The 16-bit registers in the 8085 are
a) general purpose register
b) accumulator
c) stack pointer and program counter
d) all of the above
Answer: stack pointer and program counter
69. The minimum number of stages in instruction pipelining is
a) 4
b) 2
c) 3
d) 6
Answer: 2
70. Systems that do not have parallel processing capabilities are
a) sisd
b) simd
c) mimd
d) all of the above
Answer: sisd
71. Domain Decomposition
a) Partitioning in which the data associated with a problem is decomposed. Each parallel task then works on a portion of the data.
b) Partitioning in which the focus is on the computation that is to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
c) It is the time it takes to send a minimal (0 byte) message from point A to point B
d) None of these
Answer: Partitioning in which the data associated with a problem is decomposed. Each parallel task then works on a portion of the data.
72. The word size of the microprocessor refers to
a) the amount of information that can be stored in a byte
b) the amount of information that can be stored in a cycle
c) the number of machine operations performed in a second
d) the maximum length of an english word that can be input to a computer
Answer: the amount of information that can be stored in a cycle
73. How many address lines are needed to address each memory location in a 2048 x 4 memory chip?
a) 10
b) 11
c) 8
d) 12
Answer: 11
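Worked check (illustrative Python; n address lines select among 2**n locations, and the chip has 2048 of them):

    import math
    lines = math.ceil(math.log2(2048))  # 2**11 == 2048, so 11 lines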
74. The pipelining strategy is called ________
a) instruction execution
b) instruction prefetch
c) instruction decoding
d) instruction manipulation
Answer: instruction prefetch
75. The concept of pipelining is most effective in improving performance if the tasks being performed in different stages:
a) require different amount of time
b) require about the same amount of time
c) require different amount of time with time difference between any two tasks being same
d) require different amount with time difference between any two tasks being different
Answer: require about the same amount of time
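Worked illustration (plain Python; the pipeline clock must accommodate the slowest stage, so unequal stage times waste the faster stages):

    balanced   = [10, 10, 10, 10]   # ns per stage, hypothetical
    unbalanced = [4, 4, 4, 28]      # same 40 ns of total work
    # steady state: one result per cycle, cycle time = slowest stage
    print(max(balanced))    # 10 ns per result
    print(max(unbalanced))  # 28 ns per result -- pipelining helps far less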
76. Which algorithm is a better choice for pipelining?
a) small algorithm
b) hash algorithm
c) merge-sort algorithm
d) quick-sort algorithm
Answer: merge-sort algorithm
77. The expression 'delayed load' is used in the context of
a) processor-printer communication
b) memory-monitor communication
c) pipelining
d) none of the above
Answer: pipelining
78. CISC stands for
a) computer instruction set complement
b) complete instruction set complement
c) computer indexed set components
d) complex instruction set computer
Answer: complex instruction set computer
79. Among the following, the iconic feature of the RISC machine is
a) reduced number of addressing modes
b) increased memory size
c) having a branch delay slot
d) all of the above
Answer: having a branch delay slot
80. Out of the following, which is not a CISC machine?
a) ibm 370/168
b) vax 11/780
c) intel 80486
d) motorola a567
Answer: motorola a567
81. How many bits are there in a single byte?
a) 8
b) 16
c) 4
d) 32
Answer: 8
82. Processors of all computers, whether micro, mini or mainframe, must have
a) alu
b) primary storage
c) control unit
d) all of above
Answer: all of above
83. What is the control unit's function in the CPU?
a) to transfer data to primary storage
b) to store program instruction
c) to perform logic operations
d) to decode program instruction
Answer: to decode program instruction
84. What is meant by a dedicated computer?
a) which is used by one person only
b) which is assigned to one and only one task
c) which does one kind of software
d) which is meant for application software only
Answer: which is assigned to one and only one task
85. Which of the following codes used in present-day computing was developed by IBM Corporation?
a) ascii
b) hollerith code
c) baudot code
d) ebcdic code
Answer: ebcdic code
86. When a subroutine is called, the address of the instruction following the CALL instruction is stored in/on the
a) stack pointer
b) accumulator
c) program counter
d) stack
Answer: stack
87. Interrupts which are initiated by an instruction are
a) internal
b) external
c) hardware
d) software
Answer: software
88. Memory access in RISC architecture is limited to instructions
a) call and ret
b) push and pop
c) sta and lda
d) mov and jmp
Answer: sta and lda
89. From where are interrupts generated?
a) central processing unit
b) memory chips
c) registers
d) i/o devices
Answer: i/o devices
90. The output of a gate is low when at least one of its inputs is low. This is true for
a) and gate
b) or gate
c) nand gate
d) nor gate
Answer: and gate
91. Which one of the following is most suitable to make a parity checker?
a) and gate
b) or gate
c) exclusive- or gate
d) none of the above
Answer: exclusive- or gate
92. What is the minimum number of flip-flops required in a counter to count 100 pulses?
a) five
b) seven
c) ten
d) hundred
Answer: seven
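Worked check (illustrative Python; n flip-flops give a counter with 2**n states, so we need the smallest n with 2**n >= 100):

    import math
    flip_flops = math.ceil(math.log2(100))  # 2**6 = 64 < 100 <= 128 = 2**7, so 7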
93. For an RS flip-flop constructed with NAND gates and inputs R=1 and S=1, the state is
a) memory state
b) set state
c) reset state
d) unused state
Answer: unused state
94. The advantage of a RISC processor over a CISC processor is that
a) the hardware architecture is simpler
b) an instruction can be executed in one cycle
c) less number of registers accommodate in chip
d) parallel execution capabilities
Answer: an instruction can be executed in one cycle
95. Which of the following is true about interrupts?
a) they are generated when memory cycles are stolen
b) they are used in place of data channels
c) they can be generated by arithmetic operation
d) they can indicate completion of an i/o operation
Answer: they can indicate completion of an i/o operation
96. Which of the architectures is power efficient?
a) cisc
b) risc
c) isa
d) iana
Answer: risc
97. Parallel Execution
a) A sequential execution of a program, one statement at a time
b) Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time
c) A program or set of instructions that is executed by a processor.
d) None of these
Answer: Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time
98. Serial Execution
a) A sequential execution of a program, one statement at a time
b) Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time
c) A program or set of instructions that is executed by a processor.
d) None of these
Answer: A sequential execution of a program, one statement at a time
99. Distributed Memory
a) A computer architecture where all processors have direct access to common physical memory
b) It refers to network based memory access for physical memory that is not common
c) Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as through a shared memory bus or over a network; however, the actual event of data exchange is commonly referred to as communications regardless of the method employed
d) None of these
Answer: It refers to network based memory access for physical memory that is not common
100. Massively Parallel
a) Observed speedup of a code which has been parallelized, defined as the ratio of wall-clock time of serial execution to wall-clock time of parallel execution
b) The amount of time required to coordinate parallel tasks. It includes factors such as: Task start-up time, Synchronizations, Data communications.
c) Refers to the hardware that comprises a given parallel system - having many processors
d) None of these
Answer: Refers to the hardware that comprises a given parallel system - having many processors
101. Fine-grain Parallelism is
a) In parallel computing, it is a qualitative measure of the ratio of computation to communication
b) Here relatively small amounts of computational work are done between communication events
c) Relatively large amounts of computational work are done between communication/synchronization events
d) None of these
Answer: Here relatively small amounts of computational work are done between communication events
102. In shared memory:
a) Here all processors access all memory as a global address space
b) Here all processors have individual memory
c) Here some processors access all memory as a global address space and some do not
d) None of these
Answer: Here all processors access all memory as a global address space
103. In shared memory
a) Multiple processors can operate independently but share the same memory resources
b) Multiple processors can operate independently but do not share the same memory resources
c) Multiple processors can operate independently but some do not share the same memory resources
d) None of these
Answer: Multiple processors can operate independently but share the same memory resources
104. In designing a parallel program, one has to break the problem into discrete chunks of work that can be distributed to multiple tasks. This is known as
a) Decomposition
b) Partitioning
c) Compounding
d) Both a and b
Answer: Both a and b
105. Functional Decomposition:
a) Partitioning in which the data associated with a problem is decomposed. Each parallel task then works on a portion of the data.
b) Partitioning in which the focus is on the computation that is to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
c) It is the time it takes to send a minimal (0 byte) message from point A to point B
d) None of these
Answer: Partitioning in which the focus is on the computation that is to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
106. Granularity is
a) In parallel computing, it is a qualitative measure of the ratio of computation to communication
b) Here relatively small amounts of computational work are done between communication events
c) Relatively large amounts of computational work are done between communication/synchronization events
d) None of these
Answer: In parallel computing, it is a qualitative measure of the ratio of computation to communication
107. Coarse-grain Parallelism
a) In parallel computing, it is a qualitative measure of the ratio of computation to communication
b) Here relatively small amounts of computational work are done between communication events
c) Relatively large amounts of computational work are done between communication/synchronization events
d) None of these
Answer: Relatively large amounts of computational work are done between communication/synchronization events
108. Cache Coherent UMA (CC-UMA) is
a) Here all processors have equal access and access times to memory
b) Here if one processor updates a location in shared memory, all the other processors know about the update.
c) Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories
d) None of these
Answer: Here if one processor updates a location in shared memory, all the other processors know about the update.
109. It distinguishes multi-processor computer architectures according to how they can be classified along the two independent dimensions of Instruction and Data. Each of these dimensions can have only one of two possible states: Single or Multiple.
a) Single Program Multiple Data (SPMD)
b) Flynn’s taxonomy
c) Von Neumann Architecture
d) None of these
Answer: Flynn’s taxonomy
110. These applications typically have multiple executable object files (programs). While the application is being run in parallel, each task can be executing the same or a different program as other tasks. All tasks may use different data
a) Single Program Multiple Data (SPMD)
b) Multiple Program Multiple Data (MPMD)
c) Von Neumann Architecture
d) None of these
Answer: Multiple Program Multiple Data (MPMD)
111. Load balancing is
a) Involves only those tasks executing a communication operation
b) It exists between program statements when the order of statement execution affects the results of the program.
c) It refers to the practice of distributing work among tasks so that all tasks are kept busy all of the time. It can be considered as minimization of task idle time.
d) None of these
Answer: It refers to the practice of distributing work among tasks so that all tasks are kept busy all of the time. It can be considered as minimization of task idle time.
112. Which of the following is not a granularity type?
a) coarse grain
b) large grain
c) medium grain
d) fine grain
Answer: large grain