
Lossy Compression


Presentation Transcript


  1. Lossy Compression Trevor McCasland, Arch Kelley

  2. How Do I Lossy Compress? • Goal: reduce the size of stored files and data while retaining all necessary perceptual information • Used to create an encoded copy of the original data with a (much) smaller size • Compression Ratio = Uncompressed Size / Compressed Size • Typical compression ratios: GIF 4:1-10:1; JPEG (low) 10:1-20:1; JPEG (mid) 30:1-50:1; JPEG (high) 60:1-100:1; PNG 10-30% smaller than GIF
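The ratio is a simple quotient of the two sizes; a minimal sketch with made-up byte counts chosen to land in the typical mid-range listed above:

```python
# Compression ratio = uncompressed size / compressed size.
# Both sizes below are hypothetical examples.
uncompressed = 3_000_000  # e.g. a raw 1000x1000 RGB image (3 bytes/pixel)
compressed = 100_000      # size after lossy encoding
ratio = uncompressed / compressed
print(f"{ratio:.0f}:1")   # 30:1
```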

  3. Visual Example • The following JPEGs are compressed with different ratios: 1:1 (low), 1:10 (low), 1:30 (mid), 1:30 with 5X zoom

  4. Lossy Vs. Lossless- The Difference • Lossy: Original Data -> Lossy Compressor -> Compressed Data • Reverse: Compressed Data -> Decompressor -> Altered Data (not the same as the original!) • Lossless: Original Data -> Lossless Compressor -> Compressed Data • Reverse: Compressed Data -> Decompressor -> Original Data (exactly the same!) • Lossy is best used for data that we can afford to irreversibly alter (images, audio, video)

  5. Lossy Compression Tools • Common techniques used in lossy compression methods include: • Color spacing / chroma downsampling (images) • Quantization • Discrete cosine transform (DCT) • Zigzag ordering and run-length encoding • Entropy coding (Huffman coding)

  6. JPEG Process

  7. The Human Eye- Observations • 1) Image contents change slowly across the image, i.e., it is unusual for intensity values to vary widely several times in a small area, for example, within an 8x8 image block • 2) Humans are much less likely to notice the loss of very high spatial frequency components than the loss of lower frequency components • High frequencies can be thrown out without noticeable change to the image

  8. The Human Eye- Observations • 3) Visual accuracy in distinguishing closely spaced lines is much greater for black and white than for color • Taken together, these three observations can be used to compress images in a way that minimizes the loss of visual quality while significantly reducing file size

  9. Color Spacing • A color space defines the boundaries within which an image's colors can be represented • Examples: RGB, YCbCr, YPbPr, YUV • RGB is the most basic color space

  10. Color Spacing • A pixel’s color is determined by its RGB (red, green, blue) value • Eg. R=30, G=100, B=50 • Image formats using lossy compression often convert this data into a format that separates luminance (brightness) and chrominance (hue) • Eg. Y = (R + G + B) / 3, Cb = B - Y, Cr = R - Y • [Figure: data of the original image (top) is separated into luminance data (left) and chrominance data (right)] • Operation takes O(1) time for each 8x8 block and is done on n blocks => running time O(n)
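The per-pixel conversion can be sketched directly from the slide's simplified formulas (the function name is mine, and real JPEG encoders use weighted ITU-R luma coefficients rather than a plain average):

```python
def rgb_to_luma_chroma(r, g, b):
    """Split an RGB pixel into luminance and two chrominance values,
    using the simplified formulas Y = (R+G+B)/3, Cb = B-Y, Cr = R-Y."""
    y = (r + g + b) / 3   # luminance (brightness)
    cb = b - y            # blue-difference chrominance
    cr = r - y            # red-difference chrominance
    return y, cb, cr

# The slide's example pixel:
print(rgb_to_luma_chroma(30, 100, 50))  # (60.0, -10.0, -30.0)
```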

  11. Chroma Downsampling • Used to reduce the number of values used to represent the chrominance of a pixel • Specific color spaces allow for chroma downsampling • Throw out portions of the chrominance (color) data in a group of pixels to reduce the total space used • Use the chroma from one part of the group to display the other part of the group • Source of data loss (a ‘lossy’ step)

  12. Chroma Downsampling • Different patterns exist for disposing of chrominance (the U and V values in the figures) for portions of pixel groups • Most common is 4:2:2 • 4:4:4 is pointless because no data is discarded (no downsampling occurs) • [Figures: 4:2:2, 4:1:1, 4:2:1, and 4:4:4 sampling patterns]
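A minimal sketch of 4:2:2-style horizontal subsampling on one row of chroma values (the helper names are mine; real codecs may average each pixel pair rather than simply dropping a sample):

```python
def downsample_422(chroma_row):
    """Keep one chroma sample per horizontal pixel pair (4:2:2 pattern)."""
    return chroma_row[::2]

def upsample_422(samples):
    """Reuse each stored sample for both pixels of the pair it covers."""
    out = []
    for s in samples:
        out.extend([s, s])
    return out

row = [10, 12, 40, 42, 90, 88]      # chroma values for 6 adjacent pixels
kept = downsample_422(row)          # [10, 40, 90] -- half the chroma data
print(upsample_422(kept))           # [10, 10, 40, 40, 90, 90]
```

The round trip shows exactly where the loss happens: the discarded samples (12, 42, 88) cannot be recovered.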

  13. Discrete Cosine Transform • The DCT's main purpose is to remove the redundancy of neighboring pixels to provide compression • The transform maps the correlated data to uncorrelated coefficients, giving high compaction of information

  14. Discrete Cosine Transform • 1D DCT-II: X[k] = Σ_{n=0}^{N-1} x[n]·cos[(π/N)·(n + 1/2)·k] • 1D IDCT-II (a scaled DCT-III): x[n] = X[0]/N + (2/N)·Σ_{k=1}^{N-1} X[k]·cos[(π/N)·(n + 1/2)·k] • x[n] = original value, X[k] = transformed value, N = number of columns, k = transform coefficient index
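A naive sketch of the DCT-II / inverse pair above (unscaled, O(N²) per transform; production codecs use normalized, FFT-style factorizations):

```python
import math

def dct_ii(x):
    """1D DCT-II: X[k] = sum_{n=0}^{N-1} x[n] * cos((pi/N)*(n + 1/2)*k)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct_ii(X):
    """Inverse: x[n] = X[0]/N + (2/N) * sum_{k=1}^{N-1} X[k] * cos((pi/N)*(n + 1/2)*k)."""
    N = len(X)
    return [X[0] / N + (2 / N) * sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                                     for k in range(1, N))
            for n in range(N)]

x = [8.0, 16.0, 24.0, 32.0]
restored = idct_ii(dct_ii(x))   # round-trips back to the original samples
```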

  15. Discrete Cosine Transform • 2D DCT: create an intermediate sequence by computing the 1D-DCT on each row • Then compute the 1D-DCT on each column of the result
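The row-then-column procedure can be sketched as follows (the naive 1D transform is repeated here so the block is self-contained):

```python
import math

def dct1d(x):
    """Naive 1D DCT-II (unscaled)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def dct2d(block):
    """2D DCT: 1D-DCT on each row, then 1D-DCT on each column of the result."""
    rows = [dct1d(row) for row in block]              # step 1: transform rows
    cols = [dct1d(list(col)) for col in zip(*rows)]   # step 2: transform columns
    return [list(r) for r in zip(*cols)]              # back to row-major order

# A flat (constant) block concentrates all its energy at coefficient (0,0)
flat = [[1.0] * 2 for _ in range(2)]
print(dct2d(flat))  # the (0,0) entry is 4; the others are (numerically) 0
```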

  16. Discrete Cosine Transform • In matrix form, the DCT coefficient matrix U is computed from the input matrix A as U = D·A·Dᵀ, where D is the matrix whose rows are the 1D DCT basis vectors

  17. Discrete Cosine Transform • High information compaction at (0,0): most of the block's energy collects in the top-left (DC) coefficient

  18. Inverse Discrete Cosine Transform • Each basis function is multiplied by its coefficient, and the results are summed to reconstruct the image block

  19. Discrete Cosine Transform • The naive 1D DCT has complexity O(n²); running it 2n times to build a 2D DCT gives complexity O(n³) • We can do better by replacing the O(n²) DCT algorithm with one factored similarly to a Fast Fourier Transform, which has O(n log n) complexity • O(2n · n log n) = O(n² log n)

  20. Quantization • Reduce the range of values used to represent image/audio data • Similar to chroma downsampling, but applied to a full array of values • Achieved by dividing the DCT coefficient matrix element-wise by a constant ‘quantization matrix’ and rounding the results • Quantization matrix can be user-defined • Can adjust quantization level (throw out more or less data) by altering the matrix

  21. Quantization Example • XQ(n,m) = round(X(n,m) / Q(n,m)), where X is the DCT coefficient matrix and Q is the quantization matrix • [The original slide shows example 8x8 matrices for X, Q, and XQ] • Time complexity: O(n²) where n = number of columns and rows (n² O(1) operations)
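A sketch of the round(X/Q) step on a 2x2 corner; the X coefficients are made up for illustration, while the Q entries happen to match the top-left corner of the standard JPEG luminance quantization table:

```python
def quantize(X, Q):
    """Divide DCT coefficients element-wise by the quantization matrix, then round.
    Small high-frequency coefficients round to zero -- this is the data loss."""
    return [[round(x / q) for x, q in zip(x_row, q_row)]
            for x_row, q_row in zip(X, Q)]

X = [[-415, -30],   # hypothetical DCT coefficients
     [4, -22]]
Q = [[16, 11],      # top-left of the standard JPEG luminance table
     [12, 12]]
print(quantize(X, Q))  # [[-26, -3], [0, -2]]
```

Note how the small coefficient 4 rounds to 0: quantization turns most of the block into zeroes, which the later zigzag/RLE steps exploit.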

  22. Quantization Details • Main source of data loss in lossy compression algorithms • Only works on data represented using frequencies • Encoder uses different quantization levels for different frequencies based on human vision preferences • Usually results in a much smaller file size (a typical JPEG compression ratio is 30:1)

  23. Final Steps: Zig-Zag Ordering • Reorder the quantized matrix into a one-dimensional sequence by reading the 8x8 block along anti-diagonals: -26, -3, 0, -3, -3, -6, 2, 4, 1, -4, 1, 1, 5, 1, 2, -1, 1, -1, 2, 0, 0, 0, 0, 0, -1, -1, 0, 0, … (all remaining entries zero) • Stored as [-26, -3, 0, -3, …, -1, 0, 0] • Do not store zeroes after the final non-zero element in the zigzag sequence
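The anti-diagonal traversal can be sketched for any square block (JPEG uses 8x8; a 3x3 block keeps the example short):

```python
def zigzag(block):
    """Read an NxN block along anti-diagonals, alternating direction,
    so low-frequency (top-left) values come first and the trailing
    zeroes cluster at the end."""
    n = len(block)
    out = []
    for s in range(2 * n - 1):                      # s = i + j, one diagonal per s
        idx = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            idx.reverse()                           # even diagonals run upward
        out.extend(block[i][j] for i, j in idx)
    return out

print(zigzag([[1, 2, 6],
              [3, 5, 7],
              [4, 8, 9]]))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```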

  24. Final Steps: Run-Length Encoding • Simple method of storing similar data using a single value and a run length • Ex: WWWWWWWWWWWWBWWWWWWWWWWWWBBBWWWWWWWWWWWWWWWWWWWWWWWWBWWWWWWWWWWWWWW becomes 12W1B12W3B24W1B14W Time complexity: O(n) where n=# of characters in string
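The run-length step above can be sketched in a few lines using the slide's own example string:

```python
from itertools import groupby

def rle(s):
    """Collapse each run of identical characters into a count plus the character."""
    return "".join(f"{len(list(g))}{c}" for c, g in groupby(s))

data = "W" * 12 + "B" + "W" * 12 + "B" * 3 + "W" * 24 + "B" + "W" * 14
print(rle(data))  # 12W1B12W3B24W1B14W
```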

  25. Final Steps: Entropy Coding • RLE string is entropy coded to add an extra layer of compression and further reduce file size • Entropy coding is a lossless method • Common algorithm used is Huffman coding • [Figure: example Huffman tree] • *Not essential to understanding lossy compression
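Since entropy coding is a side note here, just a compact sketch of greedy Huffman code construction: repeatedly merge the two least-frequent subtrees with a min-heap until one tree remains (input text and function name are mine):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Assign shorter bit strings to more frequent symbols."""
    # Each heap entry: (total frequency, tiebreaker, {symbol: code-so-far}).
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}   # left branch
        merged.update({s: "1" + code for s, code in c2.items()})  # right branch
        heapq.heappush(heap, (f1 + f2, i2, merged))
    return heap[0][2]

codes = huffman_codes("aaaabbc")
# 'a' (most frequent) receives the shortest code, 'c' (rarest) the longest
```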

  26. Applications • JPEG • Lossy image file type that follows the process exactly • MPEG-2 • Uses chroma downsampling and different quantization values to adjust the level of compression • Streaming video lets users adjust quality • MP3 • Quantization removes frequencies that humans can’t hear by rounding them to zero • Many, many more

  27. Obligatory Question Slide • Questions? • -Where does the loss of data actually occur? • -Why do highly compressed images look ‘blocky’? • -What flaws appear in highly compressed audio? • -How long does it take to learn all of this? • O(nⁿ) time
