Multiple Choice Questions & Answers on Data Compression

1. The development of data compression algorithms for a variety of data can be divided into ______ phases.
a) 2
b) 3
c) 4
d) 5 
Answer: 2

2. Based on the requirements of reconstruction, data compression schemes can be divided into ______ broad classes.
a) 3
b) 4
c) 2
d) 5 
Answer: 2
3. How many printable characters does the ASCII character set consist of?
a) 128
b) 100
c) 98
d) 90
Answer: 100

4. The unary code for 4 is
a) 11100
b) 11110
c) 00001
d) 00011
Answer: 11110
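
A quick check, assuming the common convention in which the unary code for n is n ones followed by a terminating zero (the helper below is purely illustrative):

    def unary(n):
        # n ones followed by a single terminating zero
        return "1" * n + "0"

    print(unary(4))  # prints 11110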

5. An alphabet consists of the letters A, B, C and D. The probabilities of occurrence are P(A) = 0.4, P(B) = 0.1, P(C) = 0.2 and P(D) = 0.3. The Huffman code is
a) A = 0 B = 111 C = 110 D = 10
b) A = 0 B = 11 C = 10 D = 111
c) A = 0 B = 111 C = 11 D = 101
d) A = 01 B = 111 C = 110 D = 10
Answer: A = 0 B = 111 C = 110 D = 10
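
For reference, the greedy construction can be sketched in a few lines of Python using the standard heapq module (the function names here are illustrative, not from any particular library). Depending on how ties are broken, the 0/1 labels inside a merge may be swapped, but the codeword lengths, and hence optimality, are unchanged:

    import heapq
    from itertools import count

    def huffman(probs):
        # Greedy Huffman construction: repeatedly merge the two least probable nodes.
        order = count()  # tie-breaker so the heap never has to compare tree tuples
        heap = [(p, next(order), sym) for sym, p in probs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            p1, _, a = heapq.heappop(heap)
            p2, _, b = heapq.heappop(heap)
            heapq.heappush(heap, (p1 + p2, next(order), (a, b)))
        codes = {}
        def walk(node, prefix):
            if isinstance(node, tuple):        # internal node: recurse into both children
                walk(node[0], prefix + "0")
                walk(node[1], prefix + "1")
            else:                              # leaf: an alphabet symbol
                codes[node] = prefix or "0"
        walk(heap[0][2], "")
        return codes

    print(huffman({"A": 0.4, "B": 0.1, "C": 0.2, "D": 0.3}))
    # -> {'A': '0', 'D': '10', 'B': '110', 'C': '111'}
    # Same codeword lengths (1, 2, 3, 3) as option (a); B and C swap labels here.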

6. A Huffman code has A = 1, B = 000, C = 001, D = 01, with P(A) = 0.4, P(B) = 0.1, P(C) = 0.2, P(D) = 0.3. The average number of bits per letter is
a) 2.0 bits
b) 2.1 bits
c) 3.0 bits
d) 1.9 bits
Answer: 1.9 bits
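
Checking the keyed answer: the average codeword length is 0.4(1) + 0.1(3) + 0.2(3) + 0.3(2) = 0.4 + 0.3 + 0.6 + 0.6 = 1.9 bits per letter.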

7. Suppose storing an image made up of a square array of 256×256 pixels requires 65,536 bytes. The image is compressed and the compressed version requires 16,384 bytes. Then the compression ratio is ______.
a) 1:4
b) 4:1
c) 1:2
d) 2:1
Answer: 4:1
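
As a check: 65,536 / 16,384 = 4, so the original requires four times as many bytes as the compressed version, i.e. a compression ratio of 4:1.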

8. How many bits are needed for standard encoding if the size of the character set is X?
a) X + 1
b) log(X)
c) X²
d) 2X
Answer: log(X)
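
More precisely, a fixed-length (standard) encoding needs ⌈log2(X)⌉ bits per symbol; for example, a character set of size X = 256 needs log2(256) = 8 bits.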

9. What is the running time of the Huffman algorithm if the priority queue is implemented using linked lists?
a) O(log(C))
b) O(C log(C))
c) O(C²)
d) O(C)
Answer: O(C²)

10. The running time of the Huffman encoding algorithm is
a) O(N log(C))
b) O(C log(C))
c) O(C)
d) O(log(C))
Answer: O(C log(C))
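
The reasoning behind the two answers above: Huffman coding performs C − 1 merge steps. With a binary-heap priority queue each extract-min and insert costs O(log C), giving O(C log C) overall, whereas an unsorted linked list makes finding the two smallest probabilities an O(C) scan at every step, giving O(C²).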

11. Into how many broad categories can audio and video services be divided?
a) Two
b) Three
c) Four
d) None of the above 
Answer: Three

12. Data compression means to ______ the file size.
a) Increase
b) Decrease
c) Can’t say
d) None of the above
Answer: Decrease

13. Information theory was given by
a) Claude von Regan
b) Claude Elwood Shannon
c) Claude Monet
d) Claude Debussy 
Answer: Claude Elwood Shannon

14. Data compression usually works by ______.
a) Deleting random bits of data
b) Finding repeating patterns 
c) Both (a) and (b) 
d) None of these 
Answer: Finding repeating patterns 

15. Which is a type of data compression?
a) Resolution
b) Zipping
c) Inputting
d) Caching 
Answer: Zipping

16. Why is data compressed?
a) To optimise the data
b) To reduce secondary storage space
c) To reduce packet congestion on networks
d) Both (b) and (c) 
Answer: Both (b) and (c) 

17. ______ compression eliminates data that is not noticeable, while ______ compression does not eliminate any data.
a) Lossless, lossy
b) Lossy, lossless
c) Both (a) and (b)
d) None of these 
Answer: Lossy, lossless

18. ______ compression is generally used for applications that cannot tolerate any difference between the original and reconstructed data.
a) Lossy
b) Lossless
c) Both
d) None of these 
Answer: Lossless

19. If the fidelity or quality of a reconstruction is ______, then the difference between the reconstruction and the original is ______.
a) High, small
b) Small, small
c) High, high
d) None of the above
Answer: High, small

20. What is compression?
a) To compress something by pressing it very hard
b) To minimize the time taken for a file to be downloaded
c) To reduce the size of data to save space
d) To convert one file to another 
Answer: To reduce the size of data to save space

21. Lossy techniques are generally used for the compression of data that originate as analog signals, such as
a) Speech
b) Video
c) Both speech and video
d) None of these 
Answer: Both speech and video 

22. Which of the following is true of lossy and lossless compression techniques?
a) Lossless compression is only used in situations where lossy compression techniques can’t be used
b) Lossy compression is best suited for situations where some loss of detail is tolerable, especially if it will not be detectable by a human
c) Both lossy and lossless compression techniques will result in some information being lost from the original file
d) Neither lossy nor lossless compression can actually reduce the number of bits needed to represent a file
Answer: Lossy compression is best suited for situations where some loss of detail is tolerable, especially if it will not be detectable by a human

23. Which of the following would not be suitable for Lossy Compression?
a) Speech
b) Video
c) Text
d) Image
Answer: Text

24. Which of the following is not in a compressed format?
a) MP3
b) Bitmap
c) MPEG
d) JPEG
Answer: Bitmap

25. The unit of information depends on the base of the log. If we use log base 2, the unit is ______; if we use log base e, the unit is ______; and if we use log base 10, the unit is ______.
a) Hartleys, nats, bits
b) Hartleys, bits, nats
c) Bits, nats, hartleys
d) Bits, hartleys, nats
Answer: Bits, nats, hartleys
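
These units come from the self-information formula i(A) = -log_b P(A): base b = 2 gives bits, b = e gives nats, and b = 10 gives hartleys. For example, an event with probability 1/2 carries -log2(1/2) = 1 bit of information.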

26. Which method is used to compress data made up of any combination of symbols?
a) Run- length encoding
b) Huffman encoding
c) Lempel Ziv encoding
d) JPEG encoding 
Answer: Run- length encoding

27. How many passes does lossy compression frequently make?
a) One pass
b) Two pass
c) Three pass
d) Four pass 
Answer: Two pass

28. What are the essential conditions for a good error control coding technique?
a) Better error correcting capability
b) Maximum transfer of information in bits/sec
c) Faster coding & decoding methods
d) All of the above 
Answer: All of the above 

29. What is compression ratio?
a) The ratio of the number of bits required to represent the data before compression to the number of bits required to represent the data after compression
b) The ratio of the number of bits required to represent the data after compression to the number of bits required to represent the data before compression
c) The ratio of the number of bits required to represent the data after reconstruction to the number of bits required to represent the data before reconstruction
d) The ratio of the number of bits required to represent the data before reconstruction to the number of bits required to represent the data after reconstruction
Answer: The ratio of the number of bits required to represent the data before compression to the number of bits required to represent the data after compression

30. Self-information should be ______.
a) Negative
b) Positive
c) Both
d) None of these 
Answer: Positive
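
This follows from the definition: since a probability P(A) is at most 1, i(A) = -log P(A) is never negative, and the less likely the event, the larger its self-information.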

31. A code in which no codeword is a prefix of another codeword is called a
a) Prefix code
b) Parity code
c) Convolutional code
d) Block code
Answer: Prefix code

32. The set of binary sequences is called a ______, and the individual members of the set are called ______.
a) Codewords, code
b) Code, codewords
c) Block code 
d) None of these 
Answer: Code, codewords

33. Full form of ASCII is
a) American Standard Code for Information Intercaste
b) American Standard Codewords for Information Interchange
c) American Standard Code for Information Interchange
d) American System Code for Information Interchange
Answer: American Standard Code for Information Interchange
 
34. A composite source model is a combination or composition of several sources. How many of the sources are active at any given time?
a) All
b) Only one
c) Only first three
d) None of these 
Answer: Only one

35. For models used in lossless compression, we use a specific type of Markov process called a
a) Continuous time Markov chain
b) Discrete time Markov chain
c) Constant time Markov chain
d) None of the above 
Answer: Discrete time Markov chain

36. A Markov model is often used when developing coding algorithms for
a) Speech
b) Image
c) Both
d) None of these 
Answer: Both

37. Which of the following compression type is supported by SQL Server 2014?
a) Row
b) Column
c) Both row and column
d) None of the mentioned 
Answer: Both row and column

38. Point out the correct statement:
a) The details of data compression are subject to change without notice in service packs or subsequent releases
b) Compression is not available for system tables
c) If you specify a list of partitions or a partition that is out of range, an error will be generated
d) All of the mentioned
Answer: All of the mentioned

39. In which type of Data Compression, the integrity of the data is preserved?
a) Lossy Compression
b) Lossless Compression
c) Both of the above
d) None of the above 
Answer: Lossless Compression

40. Which of the following are Lossless methods?
a) Run-length
b) Huffman
c) Lempel Ziv
d) All of the above
Answer: All of the above

41. Which of the following are lossy methods?
a) JPEG
b) MPEG
c) MP3
d) All of the above 
Answer: All of the above 

42. The sequence of bits assigned to a symbol is called a
a) code word
b) word
c) byte
d) nibble
Answer: code word

43. The Huffman procedure is based on observations regarding optimum prefix codes, which is/are
a) In an optimum code, symbols that occur more frequently (have a higher probability of occurrence) will have shorter codewords than symbols that occur less frequently
b) In an optimum code, the two symbols that occur least frequently will have the same length
c) Both (a) and (b)
d) None of these
Answer: Both (a) and (b)

44. Huffman codes are ______ codes and are optimum for a given model (set of probabilities).
a) Parity
b) Prefix
c) Convolutional code
d) Block code 
Answer: Prefix

45. The best algorithmic approach for constructing Huffman codes is the
a) Brute force algorithm
b) Divide and conquer algorithm
c) Greedy algorithm
d) Exhaustive search 
Answer: Greedy algorithm

46. The redundancy is zero when
a) The probabilities are positive powers of two
b) The probabilities are negative powers of two
c) Both
d) None of the above 
Answer: The probabilities are negative powers of two
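
For example, with probabilities 1/2, 1/4 and 1/4 (i.e. 2^-1, 2^-2, 2^-2), Huffman coding assigns codeword lengths of exactly -log2 P (1, 2 and 2 bits), so the average length equals the entropy of 1.5 bits/symbol and the redundancy is zero.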

47. Unit of redundancy is
a) bits/second
b) symbol/bits
c) bits/symbol
d) none of these 
Answer: bits/symbol

48. Which bit is reserved as a parity bit in an ASCII set?
a) Sixth
b) Seventh
c) Eighth
d) Ninth 
Answer: Eighth

49. In the Tunstall code, all codewords are of ______ length. However, each codeword represents a ______ number of letters.
a) different, equal
b) equal, different
c) Both
d) none of these
Answer: equal, different

50. In Huffman coding, the data always occur in the
a) Leaves
b) Roots
c) Left sub trees
d) None of these
Answer: Leaves

51. Which of the following is not a part of the channel coding?
a) rectangular code
b) Checksum checking
c) Hamming code
d) Huffman code 
Answer: Huffman code

52. Tunstall coding is a form of entropy coding used for
a) Lossless data compression
b) Lossy data compression
c) Both
d) None of these 
Answer: Lossless data compression

53. Applications of Huffman coding include
a) Text compression
b) Audio compression
c) Lossless image compression
d) All of the above
Answer: All of the above

54. Information is ______.
a) data
b) meaningful data
c) raw data
d) Both (a) and (b)
Answer: meaningful data

55. The main advantage of a Tunstall code is that
a) Errors in codewords do not propagate
b) Errors in codewords propagate
c) The disparity between frequencies
d) None of these 
Answer: Errors in codewords do not propagate

56. The basic idea behind Huffman coding is to
a) compress data by using fewer bits to encode less frequently occurring characters
b) compress data by using fewer bits to encode more frequently occurring characters
c) compress data by using more bits to encode more frequently occurring characters
d) expand data by using fewer bits to encode more frequently occurring characters
Answer: compress data by using fewer bits to encode more frequently occurring characters

57. Huffman coding is an encoding algorithm used for
a) lossless data compression
b) broadband systems
c) files greater than 1 Mbit
d) lossy data compression
Answer: lossless data compression

58. A Huffman encoder takes a set of characters with fixed length and produces a set of characters of
a) random length
b) fixed length
c) variable length
d) constant length 
Answer: variable length

59. Which of the following is the first phase of JPEG?
a) DCT Transformation
b) Quantization
c) Data Compression
d) None of the above
Answer: None of the above

60. Data compression involves
a) Compression only
b) Reconstruction only
c) Both compression and reconstruction
d) None of the above 
Answer: Both compression and reconstruction

61. According to Claude Elwood Shannon's second theorem, it is not feasible to transmit information over the channel with ______ error probability when the transmission rate exceeds the channel capacity, no matter which coding technique is used.
a) Large
b) May be large or small
c) Unpredictable
d) Small 
Answer: Small

62. The difference between the entropy and the average length of the Huffman code is called
a) Rate
b) Redundancy
c) Power
d) None of these 
Answer: Redundancy
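
As an illustration, reusing the alphabet and Huffman codeword lengths from question 6, the redundancy can be computed directly (a minimal sketch; the variable names are illustrative):

    import math

    p = {"A": 0.4, "B": 0.1, "C": 0.2, "D": 0.3}
    length = {"A": 1, "B": 3, "C": 3, "D": 2}    # Huffman codeword lengths

    entropy = -sum(pi * math.log2(pi) for pi in p.values())  # about 1.846 bits/symbol
    avg_len = sum(p[s] * length[s] for s in p)                # 1.9 bits/symbol
    print(round(avg_len - entropy, 3))                        # redundancy, about 0.054 bits/symbol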

63. An optimal code will always be represented by a full tree?
a) True
b) False
Answer: True

64. Data compression and encryption both work on binary data.
a) True
b) False
Answer: True
