1. Chapter 6 Image Compression
2. The Necessity of Image Compression
4. Purpose of Image Compression: saving storage space; saving transfer time; easier processing and lower cost.
5. Image Compression Coding
6. General compression system model
7. Three basic data redundancies:
Coding redundancy
Interpixel redundancy
Psychovisual redundancy
8. Data Redundancies
9. Objective fidelity criteria
10. Subjective fidelity criteria
11. Huffman Coding
Huffman coding is an entropy encoding algorithm used for lossless data compression.
Huffman coding chooses the representation for each symbol so that the result is a prefix code in which the most common symbols are expressed with shorter bit strings than those used for less common symbols. A method was later found that constructs the code in linear time when the input probabilities are already sorted.
12. Huffman Coding Algorithm
The simplest construction algorithm uses a priority queue in which the node with the lowest probability is given the highest priority:
(1) Create a leaf node for each symbol and add it to the priority queue.
(2) While there is more than one node in the queue, remove the two nodes of highest priority (lowest probability), create a new internal node with these two nodes as children and with probability equal to the sum of their probabilities, and add the new node back to the queue.
(3) Repeat step 2 until only one node remains; this node is the root and the tree is complete.
13. Example of Huffman Coding
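A minimal Python sketch of the priority-queue construction from the previous slide, applied to a made-up four-symbol source (the symbols and probabilities are illustrative, not from the slides):

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code from a {symbol: probability} map using a priority queue."""
    # Each heap entry is (probability, tie_breaker, tree), where tree is either
    # a symbol (leaf) or a (left, right) pair (internal node).
    heap = [(p, i, sym) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)    # lowest probability
        p2, _, right = heapq.heappop(heap)   # next lowest probability
        heapq.heappush(heap, (p1 + p2, counter, (left, right)))
        counter += 1

    codes = {}
    def assign(node, prefix):
        if isinstance(node, tuple):          # internal node: recurse into children
            assign(node[0], prefix + "0")
            assign(node[1], prefix + "1")
        else:                                # leaf: record the accumulated bit string
            codes[node] = prefix or "0"
    assign(heap[0][2], "")
    return codes

# Hypothetical four-symbol source; the resulting code lengths are 1, 2, 3 and 3 bits
print(huffman_code({"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}))
```

The tie-breaker counter only keeps the heap comparisons well defined when two nodes have equal probability; any consistent rule yields a valid (if not identical) prefix code.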
14. Run Length Encoding
Run-length encoding (RLE) is a very simple form of data compression in which runs of data (sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original run. This is most useful on data that contains many such runs: for example, relatively simple graphic images such as icons, line drawings, and animations.
15. Example of RLE Let us take a hypothetical single scan line, with B representing a black pixel and W representing white:
WWWWWWWWWWWWBWWWWWWWWWWWWBBBWWWWWWWWWWWWWWWWWWWWWWWWBWWWWWWWWWWWWWW
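A minimal Python sketch of run-length encoding applied to that scan line:

```python
from itertools import groupby

def rle_encode(line):
    """Replace each run of identical symbols with its length followed by the symbol."""
    return "".join(f"{len(list(run))}{symbol}" for symbol, run in groupby(line))

scan_line = "WWWWWWWWWWWWBWWWWWWWWWWWWBBBWWWWWWWWWWWWWWWWWWWWWWWWBWWWWWWWWWWWWWW"
print(rle_encode(scan_line))   # -> 12W1B12W3B24W1B14W
```

The 67-character scan line is represented by 18 characters; a decoder simply expands each count/symbol pair back into the original run.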
16. Predictive Coding
17. The Principle of Predictive Coding
The system consists of an encoder and a decoder, each containing an identical predictor. As each successive pixel of the input image is introduced to the encoder, the predictor generates the anticipated value of that pixel based on some number of past inputs. The output of the predictor is rounded to the nearest integer, and only the difference between the actual pixel and this prediction needs to be coded.
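A minimal sketch (Python, made-up pixel values, and a simple previous-pixel predictor standing in for the predictor described above) of an encoder/decoder pair that share an identical predictor:

```python
def encode(pixels, predictor=lambda prev: prev):
    """Encoder: emit the prediction error for each pixel."""
    errors, prev = [], 0
    for p in pixels:
        prediction = int(round(predictor(prev)))  # predictor output rounded to nearest integer
        errors.append(p - prediction)             # small values on smooth image rows
        prev = p
    return errors

def decode(errors, predictor=lambda prev: prev):
    """Decoder: the identical predictor, driven by the reconstructed pixels."""
    pixels, prev = [], 0
    for e in errors:
        prediction = int(round(predictor(prev)))
        prev = prediction + e
        pixels.append(prev)
    return pixels

row = [100, 102, 103, 103, 105, 110]
e = encode(row)
print(e)            # [100, 2, 1, 0, 2, 5]
print(decode(e))    # the original row is recovered exactly (lossless)
```

Because the errors are stored exactly, this variant is lossless; the lossy variants below (delta modulation, DPCM) quantize the difference before transmitting it.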
18. Predictive coding model I
19. Predictive coding model II
20. Delta Modulation I
21. Delta Modulation II
22. Differential Pulse Code Modulation Differential Pulse Code Modulation (DPCM) compares two successive analog amplitude values, quantizes and encodes the difference, and transmits the differential value.
23. The principle of DPCM
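A minimal Python sketch of the DPCM idea: the difference between each sample and the previous reconstructed sample is quantized and transmitted. The sample values and the quantizer step size are made up for illustration:

```python
def dpcm_encode(samples, step=4):
    """Quantize the difference to the previous reconstructed sample,
    so the encoder and decoder stay in step."""
    codes, recon = [], 0
    for s in samples:
        q = round((s - recon) / step)   # coarse uniform quantizer (made-up step size)
        codes.append(q)
        recon = recon + q * step        # the value the decoder will also compute
    return codes

def dpcm_decode(codes, step=4):
    out, recon = [], 0
    for q in codes:
        recon = recon + q * step
        out.append(recon)
    return out

samples = [0, 10, 23, 31, 30, 28]
codes = dpcm_encode(samples)
print(codes)                # small integers instead of full amplitudes
print(dpcm_decode(codes))   # close to, but not exactly, the input (lossy)
```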
24. Transform Coding I
Transform coding is a type of data compression for "natural" data like audio signals or photographic images.
The coding is typically lossy: quantizing the transform coefficients produces a lower-quality copy of the original input.
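A minimal numpy sketch of the transform-coding idea: transform a small block, quantize the coefficients (the step where information is lost), then invert. The 4x4 block and the quantization step of 10 are made-up values:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix, the kind of transform JPEG-style coders use."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2 / n)

# Made-up 4x4 block from a smooth image region
block = np.array([[52, 55, 61, 66],
                  [53, 56, 62, 67],
                  [55, 58, 63, 68],
                  [58, 60, 65, 70]], dtype=float)

C = dct_matrix(4)
coeffs = C @ block @ C.T               # forward 2-D transform: energy gathers in the top-left
quantized = np.round(coeffs / 10)      # coarse quantization: this is the lossy step
restored = C.T @ (quantized * 10) @ C  # inverse transform of the dequantized coefficients
print(quantized)                       # mostly zeros away from the DC (top-left) coefficient
print(np.round(restored))              # close to, but not identical to, the original block
```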
25. Transform Coding II
26. Transform Coding III
27. Joint Photographic Experts Group (JPEG)
The name "JPEG" stands for Joint Photographic Experts Group, the name of the committee that created the standard. The group was organized in 1986 and issued a standard in 1992, which was approved in 1994 as ISO 10918-1. JPEG is a commonly used method of compression for photographic images. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality.
28. Steps of JPEG
The image is subdivided into pixel blocks of size 8 x 8, which are processed left to right, top to bottom. As each 8 x 8 block or subimage is encountered, its 64 pixels are level shifted by subtracting the quantity 2^(n-1), where 2^n is the maximum number of gray levels.
The 2-D discrete cosine transform of the block is then computed and quantized.
The zigzag pattern is used to reorder the quantized coefficients into a 1-D sequence.
The DC coefficients are coded using DPCM (differential pulse code modulation).
The nonzero AC coefficients are coded using run-length encoding.
Finally, the resulting symbols are entropy coded.
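A minimal numpy/scipy sketch of the first steps listed above (level shift, 2-D DCT, quantization, and zigzag reordering); the 8x8 block and the single quantization step are made up for illustration, whereas real JPEG uses an 8x8 table of quantization steps:

```python
import numpy as np
from scipy.fft import dctn

# Made-up 8-bit 8x8 block: n = 8 bits, so the level shift is 2^(n-1) = 128
rng = np.random.default_rng(0)
block = rng.integers(100, 140, size=(8, 8)).astype(float)

shifted = block - 128                            # level shift by 2^(n-1)
coeffs = dctn(shifted, norm="ortho")             # 2-D discrete cosine transform
quantized = np.round(coeffs / 16).astype(int)    # single made-up step instead of a table

def zigzag(b):
    """Reorder an 8x8 block of coefficients into the 1-D zigzag sequence."""
    order = sorted(((r, c) for r in range(8) for c in range(8)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))
    return [b[r, c] for r, c in order]

seq = zigzag(quantized)
dc, ac = seq[0], seq[1:]
# The DC coefficient would be DPCM-coded against the previous block's DC value;
# the AC sequence, dominated by runs of zeros, is run-length coded and then entropy coded.
print(dc, ac[:10])
```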
29. JPEG Model
30. JPEG2000
JPEG 2000 is a wavelet-based image compression standard. It was created by the Joint Photographic Experts Group committee in 2000 with the intention of superseding their original discrete cosine transform-based JPEG standard (issued in 1992).
31. MPEG
MPEG (the Moving Picture Experts Group) produced early standards for lossy compression of video and audio.
Development of the first MPEG standard began in May 1988.
32. MPEG-1 MPEG-1 was designed to compress VHS-quality raw digital video and CD audio down to 1.5 Mbit/s (26:1 and 6:1 compression ratios respectively) without excessive quality loss, making Video CDs, digital cable/satellite TV and digital audio broadcasting (DAB) possible.
The MPEG-1 standard consists of the following five parts: Systems, Video, Audio, Conformance testing, and Reference software.
33. MPEG-2 MPEG-2 describes a combination of lossy video compression and lossy audio compression methods which permit storage and transmission of movies using currently available storage media and transmission bandwidth. MPEG-2 is widely used as the format of digital television signals that are broadcast by terrestrial (over-the-air), cable, and direct broadcast satellite TV systems.
The MPEG-2 Audio section enhances MPEG-1 audio by allowing the coding of audio programs with more than two channels. This method is backwards compatible, allowing MPEG-1 audio decoders to decode the two main stereo components of the presentation.
33. MPEG-4
MPEG-4 provides the following functionalities: improved coding efficiency; the ability to encode mixed media data; error resilience to enable robust transmission; and the ability to interact with the audio-visual scene generated at the receiver.
MPEG-4 was aimed primarily at low bit-rate video communications; however, its scope was later expanded to be much more of a multimedia coding standard.
35. MPEG-7
MPEG-7 is a multimedia content description standard. The description is associated with the content itself, to allow fast and efficient searching for material that is of interest to the user.
The objectives of MPEG-7 are: to provide fast and efficient searching, filtering, and content identification; to describe the main features of the content; to cover a broad range of applications; to describe audiovisual information (audio, voice, video, images, graphics, and 3D models); to describe how objects are combined in a scene; and to keep the description independent of the information itself.
36. MPEG-21
MPEG-21 is based on two essential concepts: the Digital Item, the fundamental unit of distribution and transaction, and the Users who interact with Digital Items.
Digital Items can be considered the kernel of the Multimedia Framework, and Users can be considered those who interact with them inside the Multimedia Framework.