
Entropy Coding of Video Encoded by Compressive Sensing



Presentation Transcript


  1. Entropy Coding of Video Encoded by Compressive Sensing Yen-Ming Mark Lai, University of Maryland, College Park, MD (ylai@amsc.umd.edu) Razi Haimi-Cohen, Alcatel-Lucent Bell Labs, Murray Hill, NJ (razi.haimi-cohen@alcatel-lucent.com) August 11, 2011

  2. Why is entropy coding important?

  3. Transmission is digital. [Diagram: a bitstream (101001001) is sent through the channel and the same bitstream is received.]

  4. [Diagram: the same channel carrying the bitstream 101001001 versus the much shorter bitstream 1101.]

  5. System block diagram: Input video → Break video into blocks → Take compressed sensed measurements → Quantize measurements → Arithmetic encode → channel → Arithmetic decode → L1 minimization → Deblock → Output video

  6. Statistics of Compressed Sensed blocks

  7. [Example: a CS measurement is formed as a signed (±) sum of pixel values. Input pixels are integers between 0 and 255; the resulting CS measurements are integers between -1275 and 1275.]
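
A minimal sketch (not the authors' code) of how a single measurement of this kind can be formed: a ±1-weighted sum of pixel values, so that pixels in [0, 255] summed five at a time give integers in [-1275, 1275], matching the range quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=5)           # input pixels: integers in [0, 255]
signs = rng.choice([-1, 1], size=pixels.size)   # +/-1 pattern, e.g. one row of a sensing matrix
measurement = int(signs @ pixels)               # one CS measurement
print(pixels, signs, measurement)               # |measurement| <= 5 * 255 = 1275
```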

  8. Since the CS measurements already contain noise from pixel quantization, there is no point in quantizing them more finely than that noise: choose the quantization step to be at most the standard deviation of the pixel-quantization noise as it appears in the CS measurements (in the slide's formula, N is the total number of pixels in the video block).
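
A hedged sketch of how such a step could be computed, assuming unit pixel quantization with uniform noise of variance 1/12 and measurements formed as ±1 sums of N pixels; these modeling assumptions are mine, not transcribed from the talk.

```python
import math

def normalized_quantization_step(n_pixels: int) -> float:
    """Std dev of accumulated pixel-quantization noise in one measurement,
    used here as the largest sensible quantization step for measurements."""
    pixel_noise_var = 1.0 / 12.0                    # variance of uniform noise on [-1/2, 1/2]
    measurement_noise_var = n_pixels * pixel_noise_var
    return math.sqrt(measurement_noise_var)

print(normalized_quantization_step(64 * 64 * 4))    # one 16,384-pixel block -> ~36.95
```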

  9. We call this minimal quantization step the “normalized” quantization step

  10. What to do with values outside the range of the quantizer? [Diagram: the quantizer range covers the bulk of the distribution; large values outside it occur rarely.]

  11. CS measurements are “democratic” ** Each measurement carries the same amount of information, regardless of its magnitude ** “Democracy in Action: Quantization, Saturation, and Compressive Sensing,” Jason N. Laska, Petros T. Boufounos, Mark A. Davenport, and Richard G. Baraniuk (Rice University, August 2009)

  12. What to do with values outside the range of the quantizer? Discard them; the PSNR loss is small since such values occur rarely. [Diagram: quantizer range with the rare out-of-range values marked for discarding.]
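
A minimal sketch of this discard policy, with the quantizer range expressed as an assumed number of standard deviations; the threshold and helper names are placeholders, not the authors' parameters.

```python
import numpy as np

def quantize_and_discard(measurements, step, k_std=3.0):
    """Uniformly quantize measurements; drop those beyond k_std standard deviations."""
    limit = k_std * measurements.std()
    keep = np.abs(measurements) <= limit            # True for in-range values
    indices = np.round(measurements[keep] / step)   # quantizer bin indices of kept values
    return indices.astype(int), keep

rng = np.random.default_rng(1)
m = rng.normal(0.0, 10.0, size=1000)
bins, kept = quantize_and_discard(m, step=1.0)
print(f"discarded {np.count_nonzero(~kept)} of {m.size} measurements")
```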

  13. Simulations

  14. A 2-second video (60 frames, 288 × 352 pixels) broken into 8 blocks. [Diagram of the block partition.]

  15. [Three plots of PSNR versus bit rate.]

  16. Processing Time • 6 cores, 100 GB RAM • 80 simulations (5 ratios, 4 steps, 4 ranges) • 22 hours total • 17 minutes per simulation • 8.25 minutes per second of video

  17. Results

  18. Fraction of CS measurements outside the quantizer range [plot; 2.7 million CS measurements in total]

  19. Fraction of CS measurements outside quantizer range

  20. How often do large values occur theoretically? [Diagram: standard normal curve with 34.13% of the area within one standard deviation on either side of the mean, 13.59% between one and two standard deviations, 2.14% between two and three, and 0.135% in each tail beyond three.]
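
The tail fractions quoted above are the standard Gaussian areas; a quick check using only the error function:

```python
from math import erf, sqrt

def two_sided_tail(k: float) -> float:
    """P(|X - mu| > k*sigma) for a Gaussian random variable X."""
    return 1.0 - erf(k / sqrt(2.0))

for k in (1, 2, 3):
    print(k, f"{two_sided_tail(k):.5f}")  # ~0.31731, ~0.04550, ~0.00270 (0.135% per tail at 3 sigma)
```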

  21. How often do large values occur in practice? Over 2.7 million CS measurements, 0.037% were observed, compared with a theoretical 0.135%.

  22. What to do with large values outside the range of the quantizer? Discard them; the PSNR loss is small since such values occur rarely. [Diagram: quantizer range with the rare out-of-range values marked for discarding.]

  23. Discarding values comes at a bit rate cost. [Diagram: encoded bitstreams with the segments corresponding to discarded measurements marked "discard".]

  24. Bits Per Measurement, Bits Per Used Measurement

  25. Bits Per Measurement, Bits Per Used Measurement [plot, annotated at 9.4 bits]
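
A hedged illustration of how these two rates can differ; the definitions below (total coded bits over all measurements versus over only the kept measurements) and the numbers are assumptions for illustration, not transcribed from the talk.

```python
def bit_rates(total_bits: int, n_measurements: int, n_discarded: int):
    """Illustrative numbers only; returns (bits per measurement, bits per used measurement)."""
    bits_per_measurement = total_bits / n_measurements
    bits_per_used = total_bits / (n_measurements - n_discarded)
    return bits_per_measurement, bits_per_used

print(bit_rates(total_bits=25_000_000, n_measurements=2_700_000, n_discarded=1_000))
```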

  26. Best compression (entropy) of a quantized Gaussian variable X: arithmetic coding is a viable option!
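
A sketch of the bound this slide refers to: the entropy of a uniformly quantized Gaussian, computed exactly over bins and via the standard high-rate approximation h(X) - log2(step). The sigma and step values are illustrative, not the talk's parameters.

```python
from math import erf, sqrt, log2, pi, e

def gaussian_cdf(x, sigma):
    return 0.5 * (1.0 + erf(x / (sigma * sqrt(2.0))))

def quantized_gaussian_entropy(sigma, step, n_bins=2001):
    """Entropy in bits of a zero-mean Gaussian quantized with a uniform step."""
    h = 0.0
    for k in range(-(n_bins // 2), n_bins // 2 + 1):
        p = gaussian_cdf((k + 0.5) * step, sigma) - gaussian_cdf((k - 0.5) * step, sigma)
        if p > 0.0:
            h -= p * log2(p)
    return h

sigma, step = 10.0, 1.0
print(quantized_gaussian_entropy(sigma, step))          # exact sum over quantizer bins
print(0.5 * log2(2 * pi * e * sigma ** 2) - log2(step)) # high-rate approximation
```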

  27. Fix the quantization step, vary the standard deviation: faster arithmetic encoding, fewer measurements

  28. PSNR versus Bit Rate (10 x step size)

  29. Fixed bit rate, what should we choose? [Plot annotations: 18.5 minutes, 121 bins; 2.1 minutes, 78 bins; 2.1 minutes, 78 bins.]

  30. Fix the standard deviation, vary the quantization step: increased arithmetic coding efficiency, but more error

  31. PSNR versus Bit Rate (2 std dev)

  32. Fixed PSNR, which to choose? [Plot with options labeled Bachelor's, Master's, PhD.]

  33. Demo

  34. Future Work • Tune the decoder: take quantization noise into account; make use of out-of-range measurements • Improve the computational efficiency of the arithmetic coder

  35. Questions?

  36. Supplemental Slides (Overview of system)

  37. System block diagram with side information: Input video → Break video into blocks → Take compressed sensed measurements → Quantize measurements → Arithmetic encode → channel → Arithmetic decode → L1 minimization → Deblock → Output video. For each block the encoder also sends: 1) the output of the arithmetic encoder, 2) mean and variance, 3) DC value, 4) sensing matrix identifier

  38. Supplemental Slides (Statistics of CS Measurements)

  39. “News” Test Video Input • Block specifications: 64 width, 64 height, 4 frames (16,384 pixels) • Input: 288 width, 352 height, 4 frames (30 blocks) • Sampling matrix: Walsh-Hadamard • Compressed sensed measurements: 10% of total pixels = 1638 measurements
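
A quick arithmetic check of these block and measurement counts (assuming the frame is tiled by 64 × 64 blocks, counting partial blocks):

```python
import math

block_w, block_h, block_frames = 64, 64, 4
frame_h, frame_w = 288, 352

pixels_per_block = block_w * block_h * block_frames                    # 16,384 pixels
blocks = math.ceil(frame_h / block_h) * math.ceil(frame_w / block_w)   # 5 * 6 = 30 blocks
measurements_per_block = round(0.10 * pixels_per_block)                # 10% ratio -> 1638

print(pixels_per_block, blocks, measurements_per_block)
```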

  40. Histograms of Compressed Sensed Samples (blocks 1-5)

  41. Histograms of Compressed Sensed Samples (blocks 6-10)

  42. Histograms of Compressed Sensed Samples (blocks 11-15)

  43. Histograms of Compressed Sensed Samples (blocks 21-25)

  44. Histograms of Compressed Sensed Samples (blocks 26-30)

  45. Histograms of Compressed Sensed Samples (blocks 16-20)

  46. Histograms of Standard Deviation and Mean (all blocks)

  47. Supplemental Slides (How to Quantize)

  48. Given a discrete random variable X, the fewest number of bits per symbol (the entropy) needed to encode X is H(X) = −Σ_x p(x) log2 p(x). For a continuous random variable X with density f, the differential entropy is h(X) = −∫ f(x) log2 f(x) dx.

  49. Differential entropy of a Gaussian: h(X) = (1/2) log2(2πeσ²), a function of the variance σ². The Gaussian maximizes differential entropy for a fixed variance, i.e. h(X') <= h(X) for all X' with the same variance.

  50. Approximate the quantization noise as i.i.d. with a uniform distribution on [−w/2, w/2], where w is the width of the quantization interval. Then its variance is w²/12, which adds to the variance from the initial (pixel) quantization noise.
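
Putting the last three slides together, the standard relations they rely on can be written out; these are textbook results, stated here as a sketch consistent with slide 26's quantized-Gaussian entropy rather than a transcription of the slides.

```latex
% Uniform quantization noise of step width w:
\operatorname{Var}(U) = \frac{w^2}{12}, \qquad U \sim \mathrm{Unif}\left[-\tfrac{w}{2}, \tfrac{w}{2}\right]

% Differential entropy of a Gaussian (maximal for a given variance):
h(X) = \tfrac{1}{2}\log_2\!\left(2\pi e\,\sigma^2\right), \qquad X \sim \mathcal{N}(0,\sigma^2)

% High-rate approximation for the entropy of X quantized with step w:
H\!\left(Q_w(X)\right) \approx h(X) - \log_2 w = \tfrac{1}{2}\log_2\!\left(\frac{2\pi e\,\sigma^2}{w^2}\right)
```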
