
Computer Vision – Compression(1)


Presentation Transcript


  1. Computer Vision – Compression(1) Hanyang University Jong-Il Park

  2. Image Compression • The problem of reducing the amount of data required to represent a digital image • Underlying basis: removal of redundant data • Mathematical viewpoint: transforming a 2-D pixel array into a statistically uncorrelated data set

  3. Topics to be covered • Fundamentals • Basic concepts of source coding theorem • Practical techniques • Lossless coding • Lossy coding • Optimum quantization • Predictive coding • Transform coding • Standards • JPEG • MPEG • Recent issues

  4. History of image compression • Theoretic foundation • C. E. Shannon's work in the 1940s • Analog compression • Aiming at reducing video transmission bandwidth → bandwidth compression • E.g. subsampling methods, subcarrier modulation… • Digital compression • Owing to the development of ICs and computers • Early 70s: facsimile transmission – 2-D binary image coding • Academic research in the 70s to 80s • Rapidly matured around 1990 → standardization such as JPEG, MPEG, H.263, …

  5. Data redundancy • Data vs. information • Data redundancy • Relative data redundancy • Three basic redundancies • Coding redundancy • Interpixel redundancy • Psychovisual redundancy
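
As a minimal illustration of the quantities named above (the byte counts below are hypothetical, not from the slides), the compression ratio C_R = n1/n2 and the relative data redundancy R_D = 1 - 1/C_R can be computed directly:

```python
def compression_ratio(n1: int, n2: int) -> float:
    # C_R = n1 / n2: data units before vs. after compression
    return n1 / n2

def relative_redundancy(n1: int, n2: int) -> float:
    # R_D = 1 - 1/C_R: fraction of the original data that is redundant
    return 1.0 - 1.0 / compression_ratio(n1, n2)

# A 256x256 8-bit image (65,536 bytes) compressed to 16,384 bytes:
print(compression_ratio(65536, 16384))    # 4.0 (i.e. 4:1)
print(relative_redundancy(65536, 16384))  # 0.75 -> 75% of the data is redundant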

  6. Coding redundancy • Code: a system of symbols used to represent a body of information or set of events • Code word: a sequence of code symbols • Code length: the number of symbols in each code word • Average number of bits: L_avg = Σk P(rk) l(rk)

  7. Eg. Coding redundancy • Reduction by variable-length coding (see the sketch below)
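
A small sketch of the reduction; the gray-level probabilities and code lengths below are illustrative stand-ins for the slide's table, not its actual numbers. Giving short codewords to likely levels lowers L_avg:

```python
# Hypothetical gray-level probabilities with a fixed 3-bit code and a
# variable-length prefix code that gives short words to likely levels.
probs  = [0.4, 0.3, 0.1, 0.1, 0.06, 0.04]
fixed  = [3, 3, 3, 3, 3, 3]    # l(r_k) for the natural binary code
varlen = [1, 2, 3, 4, 5, 5]    # l(r_k) for a valid prefix code

def avg_length(p, lengths):
    # L_avg = sum_k P(r_k) * l(r_k)
    return sum(pk * lk for pk, lk in zip(p, lengths))

print(avg_length(probs, fixed))   # 3.0 bits/pixel
print(avg_length(probs, varlen))  # 2.2 bits/pixel: coding redundancy removed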

  8. Correlation • Cross correlation • Autocorrelation
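
As one way to see autocorrelation as a measure of interpixel redundancy, here is a sketch (NumPy assumed; the synthetic random-walk image is hypothetical, not the slide's example):

```python
import numpy as np

def normalized_autocorr(img: np.ndarray, delta: int) -> float:
    """Autocorrelation between pixels `delta` columns apart, computed on the
    zero-mean image and normalized so that the value at delta = 0 is 1."""
    x = img.astype(float)
    x -= x.mean()
    a0 = np.mean(x * x)
    ad = np.mean(x[:, :-delta] * x[:, delta:])
    return ad / a0

# Smooth synthetic image (a random walk along each row): neighboring pixels
# are strongly correlated, and the correlation decays as delta grows.
rng = np.random.default_rng(0)
img = np.cumsum(rng.standard_normal((64, 256)), axis=1)
print(normalized_autocorr(img, 1))     # close to 1
print(normalized_autocorr(img, 100))   # smaller: correlation falls off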

  9. Eg. Correlation

  10. Interpixel redundancy • Spatial redundancy • Geometric redundancy • Interframe redundancy

  11. Eg. Interpixel redundancy

  12. Eg. Run-length coding
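
A minimal run-length coder for one binary image row, as a sketch (illustrative only; practical facsimile standards such as CCITT Group 3 additionally entropy-code the run lengths):

```python
def run_length_encode(bits):
    """Encode a 1-D binary sequence as (value, run length) pairs."""
    runs, prev, count = [], bits[0], 0
    for b in bits:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

def run_length_decode(runs):
    return [v for v, n in runs for _ in range(n)]

row = [0]*12 + [1]*5 + [0]*20 + [1]*3
encoded = run_length_encode(row)
print(encoded)                          # [(0, 12), (1, 5), (0, 20), (1, 3)]
assert run_length_decode(encoded) == row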

  13. Psychovisual redundancy

  14. Image compression models • Communication model • Source encoder and decoder

  15. Basic concepts in information theory • Self-information: I(E) = -log P(E) • Source alphabet A and symbols • Probability of the events z • Ensemble (A, z) • Entropy (= uncertainty): H(z) = -Σj P(aj) log P(aj) • Channel alphabet B • Channel matrix Q
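
These two definitions translate directly into code; a minimal sketch (base-2 logarithms, so the units are bits):

```python
import math

def self_information(p: float) -> float:
    # I(E) = -log2 P(E): unlikely events carry more information
    return -math.log2(p)

def entropy(probs) -> float:
    # H(z) = -sum_j P(a_j) log2 P(a_j), the average self-information
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(self_information(0.5))   # 1.0 bit
print(entropy([2/3, 1/3]))     # ~0.918 bits/symbol (the source used on slide 19)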

  16. Mutual information and capacity • Equivocation: H(z|v), the uncertainty about z that remains after observing the output v • Mutual information: I(z,v) = H(z) - H(z|v) • Channel capacity C • Minimum possible I(z,v) = 0 • The maximum of I(z,v) over all possible choices of source probabilities in z is the channel capacity

  17. Eg. Binary Symmetric Channel (BSC) [diagram: inputs 0 and 1 with source probabilities pbs and 1-pbs; each bit is received correctly with probability 1-pe and flipped with probability pe] • Entropy • Channel capacity • Mutual information
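
A numeric check of the BSC example, as a sketch (`pbs` is taken to be the source probability of sending a 0 and `pe` the channel error probability, following the diagram):

```python
import math

def Hb(p: float) -> float:
    """Binary entropy function H_b(p), in bits."""
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

def bsc_mutual_information(pbs: float, pe: float) -> float:
    """I(z, v) = H(v) - H(v|z) for the BSC: the source sends 0 with
    probability pbs, and the channel flips each bit with probability pe."""
    p1 = pbs * pe + (1 - pbs) * (1 - pe)   # probability that a 1 is received
    return Hb(p1) - Hb(pe)

pe = 0.1
print(bsc_mutual_information(0.5, pe))  # I is maximized at pbs = 0.5 ...
print(1 - Hb(pe))                       # ... where it equals C = 1 - H_b(pe)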

  18. Noiseless coding theorem • Shannon's first theorem for a zero-memory source • It is possible to make L'avg/n arbitrarily close to H(z) by coding infinitely long extensions of the source • Efficiency = nH(z)/L'avg • Eg. extension coding • Extension coding → better efficiency

  19. Eg. Extension coding • Code A (one codeword per source symbol): efficiency = 0.918/1.0 = 0.918 • Code B (one codeword per symbol pair, the 2nd extension): efficiency = (0.918×2)/1.89 = 0.97 → better efficiency
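
The slide's numbers can be reproduced as follows; a sketch assuming the classic two-symbol source with P = {2/3, 1/3}, where the pair-code lengths {1, 2, 3, 3} are one valid prefix code consistent with L'avg = 17/9 ≈ 1.89:

```python
import math
from itertools import product

# Zero-memory source with A = {a1, a2}, P = {2/3, 1/3}; H(z) ~ 0.918 bits.
probs = {"a1": 2/3, "a2": 1/3}
H = -sum(p * math.log2(p) for p in probs.values())

# Code A: one codeword per symbol (1 bit each).
Lavg_A = sum(probs[s] * 1 for s in probs)
print(H / Lavg_A)            # efficiency = 0.918/1.0 = 0.918

# Code B: one codeword per symbol pair (2nd extension of the source);
# the lengths {1, 2, 3, 3} satisfy the Kraft inequality, so a prefix
# code with these lengths exists.
len_B = {("a1", "a1"): 1, ("a1", "a2"): 2, ("a2", "a1"): 3, ("a2", "a2"): 3}
Lavg_B = sum(probs[s1] * probs[s2] * len_B[(s1, s2)]
             for s1, s2 in product(probs, repeat=2))
print(2 * H / Lavg_B)        # efficiency = (2 x 0.918)/1.89 ~ 0.97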

  20. Noisy coding theorem • Shannon's second theorem for a zero-memory channel: for any R < C, there exists an integer r and a code of block length r and rate R such that the probability of a block decoding error is arbitrarily small. • Rate-distortion theory: the source output can be recovered at the decoder with an arbitrarily small probability of error provided that the channel capacity satisfies C ≥ R(D) + ε. [diagram: rate-distortion plane; operating points on or above the R(D) curve are feasible, points below it are never feasible]
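
For concreteness, a sketch of one case where R(D) has a closed form (not covered on this slide): a binary memoryless source under Hamming distortion, where R(D) = H_b(p) - H_b(D):

```python
import math

def Hb(p: float) -> float:
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

def rate_distortion_binary(p: float, D: float) -> float:
    """R(D) for a Bernoulli(p) source under Hamming (bit-error) distortion:
    R(D) = H_b(p) - H_b(D) for 0 <= D < min(p, 1-p), and 0 beyond that."""
    return Hb(p) - Hb(D) if D < min(p, 1 - p) else 0.0

for D in (0.0, 0.05, 0.1, 0.25):
    print(D, rate_distortion_binary(0.5, D))
# Tolerating more distortion D lowers the required rate, so a channel of
# smaller capacity C >= R(D) still allows near-error-free recovery.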

  21. Using mappings to reduce entropy • 1st-order estimate of entropy > 2nd-order estimate of entropy > 3rd-order estimate of entropy … • The (estimated) entropy of a properly mapped image (e.g. a “difference source”) is in most cases smaller than that of the original image source. How to implement it? That is the topic of the next lecture!
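
A quick demonstration of this effect, as a sketch on synthetic data (NumPy assumed; the ramp image is hypothetical, chosen only to make the interpixel redundancy obvious):

```python
import numpy as np

def first_order_entropy(img: np.ndarray) -> float:
    """1st-order entropy estimate from the histogram of pixel values."""
    _, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Smooth synthetic "image": a horizontal ramp plus mild noise.
rng = np.random.default_rng(0)
ramp = np.tile(np.linspace(0, 255, 256), (64, 1))
img = (ramp + rng.standard_normal((64, 256))).round().astype(np.int16)

# Difference mapping: keep the first column, store horizontal differences
# f(x, y) - f(x, y-1) elsewhere. The mapping is exactly invertible.
diff = np.empty_like(img)
diff[:, 0] = img[:, 0]
diff[:, 1:] = img[:, 1:] - img[:, :-1]

print(first_order_entropy(img))   # large: many gray levels, roughly uniform
print(first_order_entropy(diff))  # much smaller: differences cluster near 1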
