Entropy Coding of Video Encoded by Compressive Sensing
Yen-Ming Mark Lai, University of Maryland, College Park, MD (ylai@amsc.umd.edu)
Razi Haimi-Cohen, Alcatel-Lucent Bell Labs, Murray Hill, NJ (razi.haimi-cohen@alcatel-lucent.com)
August 11, 2011

Why is entropy coding important?
Transmission is digital: a bitstream (e.g., 101001001) is sent over the channel. [Figure: entropy coding shortens the transmitted bitstream, e.g., 101001001 becomes 1101, before it enters the channel.]
Pipeline: input video → break video into blocks → take compressed sensed measurements → quantize measurements → arithmetic encode → channel → arithmetic decode → L1 minimization → deblock → output video
[Figure: a CS measurement is a ±1-signed sum of pixel values, i.e. the inner product of one ±1 row of the sensing matrix with the pixel vector.] Input pixels are integers between 0 and 255; CS measurements are integers between -1275 and 1275.
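As a concrete sketch of this measurement step (random illustrative data, not the authors' code), one measurement is a ±1-signed sum of pixel values:

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=9)   # pixel values, integers in [0, 255]
signs = rng.choice([-1, 1], size=9)     # one row of a +/-1 sensing pattern
measurement = int(signs @ pixels)       # integer-valued CS measurement

# With 9 pixels in [0, 255], the measurement lies in [-9*255, 9*255]
```

The slides' stated range of -1275 to 1275 follows similarly from how many pixels, and with what scaling, enter each measurement.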
Since CS measurements contain noise from pixel quantization, quantize at most to the standard deviation of this noise. Modeling each pixel's quantization error as uniform with variance 1/12, the standard deviation of this noise in a CS measurement is sqrt(N/12), where N is the total number of pixels in the video block.
We call this minimal quantization step the “normalized” quantization step
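A minimal sketch of computing this normalized step, assuming the uniform pixel-quantization noise model from the previous slide (the function name is mine):

```python
import math

def normalized_quantization_step(n_pixels: int) -> float:
    """Std of accumulated pixel-quantization noise in one CS measurement.

    Assumes each pixel's quantization error is i.i.d. uniform on
    [-1/2, 1/2] (variance 1/12), so a +/-1-weighted sum over n_pixels
    pixels has variance n_pixels / 12.
    """
    return math.sqrt(n_pixels / 12.0)

# Example: one 64 x 64 x 4 block (16,384 pixels) from the test setup
step = normalized_quantization_step(16384)   # ~36.95
```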
What to do with values outside the range of the quantizer? [Figure: large values that rarely occur fall outside the quantizer range.]
CS measurements are “democratic”: each measurement carries the same amount of information, regardless of its magnitude.*
* “Democracy in Action: Quantization, Saturation, and Compressive Sensing,” Jason N. Laska, Petros T. Boufounos, Mark A. Davenport, and Richard G. Baraniuk (Rice University, August 2009)
What to do with values outside the range of the quantizer? Discard them: the PSNR loss is small since out-of-range values occur rarely.
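A sketch of this discard policy on synthetic Gaussian measurements (illustrative values, not the paper's data): clip the quantizer to ±k standard deviations and drop whatever falls outside.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0
measurements = rng.normal(0.0, sigma, size=2_700_000)

k = 3.0                                   # quantizer half-range, in sigmas
inside = np.abs(measurements) <= k * sigma
fraction_outside = 1.0 - inside.mean()    # rare: ~0.27% for k = 3
kept = measurements[inside]               # only in-range values are encoded
```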
[Figure: a 2-second video (60 frames, 288 × 352 pixels) is broken into 8 blocks.]
[Plots: PSNR vs. bit rate.]
Processing Time • 6 cores, 100 GB RAM • 80 simulations (5 ratios, 4 steps, 4 ranges) • 22 hours total • 17 minutes per simulation • 8.25 minutes per second of video
[Plot: fraction of the 2.7 million CS measurements falling outside the quantizer range.]
How often do large values occur theoretically? [Figure: standard normal density. Each band between the mean and ±1σ contains 34.13%; between 1σ and 2σ, 13.59%; between 2σ and 3σ, 2.14%; beyond 3σ, 0.135% per tail.]
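The band percentages in the figure are standard normal probabilities; they can be recomputed from the error function (a quick check, not part of the slides):

```python
import math

def Phi(x: float) -> float:
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

p_0_1 = Phi(1) - Phi(0)   # ~0.3413: mean to 1 sigma
p_1_2 = Phi(2) - Phi(1)   # ~0.1359: 1 to 2 sigma
p_2_3 = Phi(3) - Phi(2)   # ~0.0214: 2 to 3 sigma
p_tail = 1.0 - Phi(3)     # ~0.00135: one tail beyond 3 sigma
```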
How often do large values occur in practice? Of 2.7 million CS measurements, 0.037% fell outside the quantizer range, versus 0.135% expected theoretically.
What to do with large values outside the range of the quantizer? Discard them: the PSNR loss is small since out-of-range values occur rarely.
Discarding values comes at a bit-rate cost. [Figure: encoded bitstreams in which the bits spent on discarded measurements are wasted.]
Best compression (entropy) of a quantized Gaussian variable X: for a quantization step w small relative to σ, H(X) ≈ h(X) − log2(w) = (1/2) log2(2πeσ²) − log2(w) bits. Arithmetic coding is a viable option!
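The high-rate approximation above can be checked empirically; this sketch (my parameter choices, not the paper's) quantizes Gaussian samples and compares the measured entropy with the closed form:

```python
import math
import numpy as np

sigma, w = 10.0, 1.0   # example std and quantization step, w << sigma

# Closed form: H(quantized X) ~ h(X) - log2(w)
H_approx = 0.5 * math.log2(2 * math.pi * math.e * sigma**2) - math.log2(w)

# Empirical entropy of quantized Gaussian samples
rng = np.random.default_rng(2)
q = np.round(rng.normal(0.0, sigma, size=1_000_000) / w).astype(int)
_, counts = np.unique(q, return_counts=True)
p = counts / counts.sum()
H_emp = float(-(p * np.log2(p)).sum())   # close to H_approx when w << sigma
```

An arithmetic coder approaches this entropy in practice, which is why it is a viable option here.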
Fix quantization step, vary standard deviation: faster arithmetic encoding, fewer measurements
Fixed bit rate: what should we choose? [Figure: e.g., 18.5 minutes of encoding with 121 bins vs. 2.1 minutes with 78 bins.]
Fix standard deviation, vary quantization step: increased arithmetic coding efficiency, but more quantization error
Fixed PSNR: which should we choose?
Future Work
• Tune the decoder to take quantization noise into account and to make use of out-of-range measurements
• Improve computational efficiency of the arithmetic coder
Full pipeline: input video → break video into blocks → take compressed sensed measurements → quantize measurements → arithmetic encode → channel → arithmetic decode → L1 minimization → deblock → output video. For each block, the encoder transmits: 1) the output of the arithmetic encoder, 2) mean and variance, 3) the DC value, and 4) a sensing matrix identifier.
“News” Test Video Input
• Block specifications: 64 width, 64 height, 4 frames (16,384 pixels)
• Input: 288 width, 352 height, 4 frames (30 blocks)
• Sampling matrix: Walsh-Hadamard
• Compressed sensed measurements: 10% of total pixels = 1638 measurements
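A quick arithmetic check of the block specification (values taken from the slide):

```python
block_pixels = 64 * 64 * 4              # 64 width x 64 height x 4 frames
measurement_ratio = 0.10                # 10% of total pixels
n_measurements = round(measurement_ratio * block_pixels)
# block_pixels is 16,384 and n_measurements is 1638, as stated on the slide
```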
Given a discrete random variable X, the fewest bits needed on average to encode X (the entropy) is H(X) = −Σ p(x) log2 p(x). For a continuous random variable X with density f, the differential entropy is h(X) = −∫ f(x) log2 f(x) dx.
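A direct implementation of the discrete entropy formula (the example distributions are mine):

```python
import math

def entropy_bits(probs) -> float:
    """H(X) = -sum p(x) log2 p(x), in bits; skips zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

H_fair = entropy_bits([0.5, 0.5])     # a fair coin needs 1 bit
H_biased = entropy_bits([0.9, 0.1])   # a biased coin needs ~0.469 bits
```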
Differential entropy of a Gaussian: for X ~ N(μ, σ²), h(X) = (1/2) log2(2πeσ²), a function of the variance only. The Gaussian maximizes differential entropy for fixed variance, i.e. h(X') <= h(X) for all X' with the same variance.
Approximate the quantization noise as i.i.d. uniform on [−w/2, w/2], where w is the width of the quantization interval; its variance is then w²/12. For pixel quantization (w = 1), each CS measurement accumulates variance N/12 from the initial quantization noise.