
4C8


Presentation Transcript


  1. 4C8 Image Compression

  2. Lossy Compression Effective bit rate = 8 bits/pixel (original image) vs. effective bit rate ≈ 1 bit/pixel (compressed image, approx)

  3. Signal Energy

  4. Lossy Transform Coding (block diagram: only the quantisation stage is lossy; the transform and entropy-coding stages are lossless)

  5. Energy Compaction with Xforms

  6. The Haar Xform LoLo Hi-Lo Lo-Hi Hi-Hi
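The four bands above can be computed directly on 2x2 blocks of the image. A minimal NumPy sketch, not from the slides; the orthonormal 1/2 scaling and the orientation naming of Hi-Lo vs. Lo-Hi are assumptions:

```python
import numpy as np

def haar_level1(x):
    """One level of the 2-D Haar transform on 2x2 blocks.

    Returns the four half-size bands. The 1/2 scaling is an assumed
    orthonormal normalisation (it preserves signal energy); the course
    handout may scale differently.
    """
    a = x[0::2, 0::2].astype(float)  # top-left of each 2x2 block
    b = x[0::2, 1::2].astype(float)  # top-right
    c = x[1::2, 0::2].astype(float)  # bottom-left
    d = x[1::2, 1::2].astype(float)  # bottom-right
    lolo = (a + b + c + d) / 2       # Lo-Lo: local average
    hilo = (a - b + c - d) / 2       # Hi-Lo: horizontal detail (assumed naming)
    lohi = (a + b - c - d) / 2       # Lo-Hi: vertical detail (assumed naming)
    hihi = (a - b - c + d) / 2       # Hi-Hi: diagonal detail
    return lolo, hilo, lohi, hihi
```

With this scaling the sum of squared coefficients equals the sum of squared pixels, which is what makes the energy-compaction comparison on the previous slide meaningful.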

  7. Implementation Details • When calculating the Haar transform of the image, the mid-grey value represents 0 (except for the Lo-Lo band). • Colour images are processed by treating each colour channel as a separate greyscale image. • If the YUV colour space is used, subsampling of the U and V channels is likely.

  8. Quantisation • After we transform the image, we quantise the transform coefficients. • The step size is chosen by perceptual evaluation. • We can assign different step sizes to the different bands. • We can use different step sizes for the different colour channels. • We will consider a uniform step size, Qstep, for each band for now.
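The uniform quantiser described above can be written as rounding to bin indices, with reconstruction at the bin centres. A minimal sketch, assuming a mid-tread quantiser (a zero-centred bin), which matches the uniform-step description:

```python
import numpy as np

def quantise(coeffs, qstep):
    """Uniform mid-tread quantisation: map each coefficient to an
    integer bin index (bin 0 is centred on zero)."""
    return np.round(coeffs / qstep).astype(int)

def reconstruct(indices, qstep):
    """Inverse quantisation: each index maps back to its bin centre."""
    return indices * qstep
```

Only `quantise` loses information; `reconstruct` is a fixed relabelling of the bin indices.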

  9. Entropy Qstep = 15

  10. Entropy Qstep = 15 • Calculating the overall entropy is trickier. • Each coefficient in a band represents 4 pixel locations in the original image. • So bits/pixel = (bits/coefficient)/4. • The entropy of the transformed and quantised Lenna image then follows by summing this over the bands.
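The bookkeeping on this slide can be sketched as follows; a NumPy illustration, not code from the course, with the per-band first-order entropy estimated from the histogram of quantised values:

```python
import numpy as np

def band_entropy(band):
    """First-order entropy in bits/coefficient, estimated from the
    band's histogram of (quantised) coefficient values."""
    _, counts = np.unique(band, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def level1_bits_per_pixel(bands):
    """Each level-1 coefficient represents 4 pixels, so each band
    contributes (bits/coefficient)/4 to the overall bit rate."""
    return sum(band_entropy(b) / 4 for b in bands)
```

For four level-1 bands this implements exactly the bits/pixel = (bits/coefficient)/4 rule stated above.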

  11. Mistake in Fig. 5 of handout: the red dashed line is the histogram, and the blue bars represent the “entropies” (i.e. -p*log2(p)), not vice versa.

  12. Multilevel Haar Xform

  13. Calculating the Entropy for Level 2 of the transform • One Level 1 coefficient represents 4 pixels • One Level 2 coefficient represents 16 pixels Total Entropy = 1.70 bits/pixel Qstep = 15

  14. Multilevel Haar Xform

  15. Calculating the Entropy for Level 3 of the transform • One Level 1 coefficient represents 4 pixels • One Level 2 coefficient represents 16 pixels • One Level 3 coefficient represents 64 pixels Qstep = 15

  16. Calculating the Entropy for Level 3 of the transform • One Level 1 coefficient represents 4 pixels • One Level 2 coefficient represents 16 pixels • One Level 3 coefficient represents 64 pixels Total Entropy = 1.62 bits/pixel Qstep = 15
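The per-level weighting above generalises directly: a level-L coefficient represents 4**L pixels, so its band contributes its per-coefficient entropy divided by 4**L. A small sketch (the function name and the dict layout are my own):

```python
def total_bits_per_pixel(band_entropies):
    """band_entropies maps level -> list of per-coefficient entropies
    (bits) for the bands retained at that level. A level-L coefficient
    represents 4**L pixels, so each band contributes entropy / 4**L."""
    return sum(h / 4 ** level
               for level, hs in band_entropies.items()
               for h in hs)
```

For a 1-level transform this reduces to the divide-by-4 rule from slide 10; for deeper transforms the Lo-Lo band is only counted at the deepest level.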

  17. Multilevel Haar Xform Qstep = 15

  18. Measuring Performance • Compression Efficiency – Entropy • Reconstruction Quality – Subjective Analysis (comparing quantisation alone against the Haar transform followed by quantisation)

  19. Reconstruction Qstep = 15

  20. Reconstruction Qstep = 30

  21. Reconstruction Qstep = 30 Original Quantised Haar Transform + Quantisation

  22. Laplacian PDFs We assume that the histograms are derived from a continuous Laplacian PDF quantised along the intensity (x) axis. This will give us a theoretical expression for the entropy with respect to the step size and the standard deviation of the image.

  23. GOAL – estimate a theoretical value for the entropy of one of the subbands. For a Laplacian PDF with parameter x0, the standard deviation is sqrt(2)*x0, so we can estimate x0 for the band by finding the standard deviation of the coefficient values.
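The theoretical entropy from slides 23-25 can be checked numerically. A sketch (the function name and the kmax series cutoff are my own) that sums the bin probabilities of a quantised Laplacian: bin 0 integrates the PDF over [-Q/2, Q/2], and bin k integrates over [(k-1/2)Q, (k+1/2)Q]:

```python
import numpy as np

def laplacian_entropy(sigma, qstep, kmax=1000):
    """Entropy (bits/coefficient) of a Laplacian source quantised with
    uniform step qstep. For p(x) = exp(-|x|/x0) / (2*x0) the standard
    deviation is sigma = sqrt(2)*x0."""
    x0 = sigma / np.sqrt(2)
    k = np.arange(1, kmax + 1)
    # P(bin 0): integral of p(x) over [-Q/2, Q/2]
    p0 = 1.0 - np.exp(-qstep / (2 * x0))
    # P(bin +k) = P(bin -k): integral over one tail [(k-1/2)Q, (k+1/2)Q]
    pk = 0.5 * (np.exp(-(k - 0.5) * qstep / x0)
                - np.exp(-(k + 0.5) * qstep / x0))
    probs = np.concatenate(([p0], pk, pk))  # bin 0 plus both tails
    probs = probs[probs > 0]                # drop underflowed bins
    return -np.sum(probs * np.log2(probs))
```

Doubling the step size reduces the predicted entropy, matching the behaviour seen with Qstep = 15 vs. Qstep = 30 in the reconstruction slides.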

  24. For the zero bin: x1 = 0, x2 = Q/2. For bin k: x1 = (k-1/2)Q, x2 = (k+1/2)Q

  25. See Handout for Missing Steps Here

  26. The measured entropy is less than what we would expect for a Laplacian PDF. This is because the actual histogram decays faster than an exponential.

  27. Practical Entropy Coding

  28. Huffman Coding
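A compact Huffman-table builder using a heap, as a sketch of the idea (tie-breaking, and therefore the exact codewords, is arbitrary; only the code lengths are guaranteed):

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman code table from a symbol -> probability dict.
    Returns symbol -> bitstring."""
    # Each heap entry: [weight, tiebreak index, partial code table].
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)  # two least probable subtrees
        w2, _, c2 = heapq.heappop(heap)
        for s in c1:
            c1[s] = "0" + c1[s]          # prepend 0 to one subtree
        for s in c2:
            c2[s] = "1" + c2[s]          # prepend 1 to the other
        c1.update(c2)
        heapq.heappush(heap, [w1 + w2, count, c1])
        count += 1
    return heap[0][2]
```

Because codeword lengths are integers, a symbol with probability well above 0.5 still costs 1 bit, which is exactly the inefficiency discussed on slide 30.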

  29. Practical Results

  30. The code is inefficient because level 0 has a probability >> 0.5 (0.8 approx). Remember the ideal code length is -log2(pk). So if pk = 0.8, the ideal code length is -log2(0.8) ≈ 0.32 bits. However, the minimum code length we can use for a symbol is 1 bit. Therefore, we need to find a new way of coding level 0 – use run-length coding.

  31. RLC

  32. RLC coding to create “events” 13 -5 1 0 0 0 0 0 0 -1 0 0 0 0 0 0 0 0 0 0 0 Define the max run of zeros as 8, and code runs of 1, 2, 4 and 8 zeros. Here we have 4 non-zero “events”, plus: 1 x run-of-4-zeros event, 2 x run-of-2-zeros events, 1 x run-of-8-zeros event, 1 x run-of-1-zero event.
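The event construction on this slide can be sketched as a greedy split of each zero run into the allowed run lengths, largest first (the function name is my own):

```python
def rlc_events(coeffs, runs=(8, 4, 2, 1)):
    """Convert a coefficient sequence into RLC 'events': each non-zero
    value is its own event, and each run of zeros is greedily split
    into the allowed run lengths, largest first."""
    events, i, n = [], 0, len(coeffs)
    while i < n:
        if coeffs[i] != 0:
            events.append(("value", coeffs[i]))
            i += 1
        else:
            j = i
            while j < n and coeffs[j] == 0:
                j += 1                     # find end of the zero run
            run = j - i
            for r in runs:                 # greedy split: 11 -> 8 + 2 + 1
                while run >= r:
                    events.append(("zeros", r))
                    run -= r
            i = j
    return events
```

On the slide's sequence this yields the 4 non-zero events and the runs 4, 2, 8, 2, 1 listed above.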

  33. Practical Results

  34. Synchronisation Say we have a source with symbols A, B and C, and we wish to encode the message ABBCCBCABAA using the following code table: A = 0, B = 10, C = 11. The coded message is therefore 010101111101101000. Q. What is the decoded message if the 6th bit in the stream is corrupted? I.e. we receive 010100111101101000

  35. Synchronisation • 010100111101101000 • The decoded stream is ABBACCACABAA. • The problem is that 1 bit error causes subsequent symbols to be decoded incorrectly as well. • The stream is said to have lost synchronisation. • A solution is to periodically insert synchronisation symbols into the stream (e.g. one at the start of each row). This limits how far errors can propagate.
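A tiny decoder reproduces the experiment; the code table A = 0, B = 10, C = 11 is inferred from the slide's bitstream rather than stated explicitly in the transcript:

```python
CODE = {"0": "A", "10": "B", "11": "C"}  # inferred from the slide's example

def decode(bits, table=CODE):
    """Decode a prefix-code bitstream symbol by symbol: grow a buffer
    until it matches a codeword, then emit and reset."""
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in table:   # prefix-free, so the first match is the symbol
            out.append(table[buf])
            buf = ""
    return "".join(out)
```

Flipping the 6th bit shifts the codeword boundaries, so every symbol after the error position comes out wrong until the decoder happens to resynchronise.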

  36. Summary
