Thesis on video compression


Compression is useful because it helps reduce the consumption of expensive resources, such as disk space or transmission bandwidth. On the downside, compressed data must be decompressed to be viewed or heard, and this extra processing may be detrimental to some applications. For instance, a compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be viewed while it is being decompressed. Decompressing the video in full before watching it is always an option, but it is inconvenient and requires storage space for the uncompressed video.

The design of data compression schemes therefore involves trade-offs among several factors, including the degree of compression, the amount of distortion introduced (when a lossy compression scheme is used), and the computational resources required to compress and decompress the data.


The task of compression is to code the image data into a compact form, minimizing both the number of bits in the representation and the distortion caused by the compression. Lossless image compression algorithms are generally used for images that are documents and in cases where lossy compression is not applicable. Lossless algorithms are especially important for systems transmitting and archiving medical data, because lossy compression of medical images used for diagnostic purposes is, in many countries, forbidden by law. In such systems the images or volume slices are kept in memory, since mass storage turns out to be too slow; a fast lossless image compression algorithm can then virtually increase the memory capacity, allowing larger sets of data to be processed.

An image may be defined as a rectangular array of pixels. A pixel of a grayscale image is a nonnegative integer interpreted as the intensity (brightness, luminosity) of the image at that location. When the pixel intensities are in the range [0, 2^N − 1], we say that the image has a bit depth of N, or that it is an N-bit image. Typical grayscale images have bit depths from 8 to 16 bits.
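As a small illustration of the bit-depth definition, the following sketch (assuming NumPy; the array and the bit depth are invented for the example) checks that every pixel of an image lies in the range [0, 2^N − 1]:

    import numpy as np

    def is_valid_n_bit_image(image: np.ndarray, n_bits: int) -> bool:
        # Every intensity of an N-bit image must lie in [0, 2**n_bits - 1].
        max_value = (1 << n_bits) - 1
        return bool((image >= 0).all() and (image <= max_value).all())

    # Hypothetical 12-bit grayscale image stored in 16-bit integers.
    image = np.random.randint(0, 2**12, size=(512, 512), dtype=np.uint16)
    print(is_valid_n_bit_image(image, 12))   # True
    print(is_valid_n_bit_image(image, 8))    # almost certainly False for random 12-bit data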

Grayscale image compression algorithms are used as a basis for color image compression algorithms and for algorithms that compress other 2-dimensional data characterized by a similar smoothness. These algorithms are also used for volumetric, 3-dimensional data. Sometimes such data is compressed as a set of 2-dimensional images using regular image compression algorithms.

Other possibilities include preprocessing the volumetric data before compressing it as a set of 2-dimensional images, or using algorithms designed exclusively for volumetric data; the latter are usually derived from regular image compression algorithms. We could also use a universal algorithm to compress images, i.e., treat the image as a one-dimensional sequence of symbols.


For a universal algorithm such a sequence is hard to compress. Universal algorithms are usually designed for alphabet sizes not exceeding 2^8 and do not directly exploit the following features of image data: images are 2-dimensional, intensities of neighboring pixels are highly correlated, and images contain noise added during the acquisition process (the latter feature makes dictionary compression algorithms perform worse than statistical ones on image data [1]).

Modern grayscale image compression algorithms employ techniques used in universal statistical compression algorithms. However, prior to statistical modeling and entropy coding, the image data is transformed to make it easier to compress. If frequently occurring elements are represented by short codes and rarely occurring elements by long codes, then the block of data needs less memory than if all elements were represented by codes of identical length.
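To make the variable-length coding idea concrete, here is a minimal Huffman-code sketch in Python (illustrative only; it is not the transform or entropy coder discussed in this thesis), in which frequent symbols receive short codewords:

    import heapq
    from collections import Counter

    def huffman_code(data):
        # Build a prefix code that gives frequent symbols short codewords.
        freq = Counter(data)
        # Each heap entry: (frequency, tie-breaker, {symbol: codeword-so-far})
        heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        if len(heap) == 1:                      # degenerate one-symbol input
            (_, _, codes), = heap
            return {s: "0" for s in codes}
        tie = len(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in c1.items()}
            merged.update({s: "1" + c for s, c in c2.items()})
            heapq.heappush(heap, (f1 + f2, tie, merged))
            tie += 1
        return heap[0][2]

    codes = huffman_code("aaaaabbbcc")
    print(codes)   # {'a': '0', 'c': '10', 'b': '11'} -- the most frequent symbol gets the shortest code

For this sample string, coding 'a' with 1 bit and 'b', 'c' with 2 bits each costs 5·1 + 3·2 + 2·2 = 15 bits, compared with 20 bits if all three symbols used fixed 2-bit codes.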

Significant loss can often be tolerated by the human visual system without interfering with perception of the scene content. In most cases, the digital input to the compression algorithm is itself an imperfect representation of the real-world scene. This is certainly true when the image sample values are quantized versions of real-valued quantities. Lossless compression is usually incapable of achieving the high compression required by many storage and distribution applications.

The term lossy is used in an abstract sense; it does not mean randomly lost pixels, but rather the loss of a quantity such as a frequency component, or perhaps the loss of noise. The fundamental question of lossy compression methods is where to lose information.

Nevertheless, lossless compression is often applied in medical applications, because in such images all information is significant and lossy compression is intolerable. Lossless compression is also applied in cases where it is difficult to determine how to introduce an acceptable loss that would increase compression. In palettized color images, for example, a small error in the numeric sample value may have a drastic effect on the color representation.


Finally, lossless compression may be appropriate in applications where the image is to be extensively edited and recompressed, so that the accumulation of errors from multiple lossy compression operations may become unacceptable. In the definitions of lossless and lossy compression, it is assumed that the original image is in digital form.


Digital images are used for compression, but the source may exist in analog form in the real world; therefore, a loss in image quality already takes place during digitization, when the picture is converted from an analog to a digital representation. For simplicity, the digitization phase is skipped here, and images are assumed to already be in digital form.

Compression efficiency and distortion. For our purposes an image is a two-dimensional sequence of sample values x[n1, n2], with finite size N1 and N2 in the vertical and horizontal directions respectively, so that 0 ≤ n1 < N1 and 0 ≤ n2 < N2. The sample value x[n1, n2] of the source image is the intensity at location [n1, n2] and, for a B-bit image, can take the values 0 ≤ x[n1, n2] ≤ 2^B − 1. The compressor maps the image to a compressed bit-stream c, and the objective is to keep its length ||c|| as small as possible.

In the absence of any compression, we require N1 · N2 · B bits to represent the image sample values. Let us define the compression ratio as (N1 · N2 · B) / ||c||, where ||c|| is the length, in bits, of the compressed representation c.
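A direct reading of these definitions, with hypothetical numbers (the variable names below are ours, not the thesis notation):

    def compression_ratio(n1, n2, bits_per_sample, compressed_bits):
        # Ratio of the raw image size (N1 * N2 * B bits) to the compressed size.
        return (n1 * n2 * bits_per_sample) / compressed_bits

    def bit_rate(n1, n2, compressed_bits):
        # Average number of bits spent per stored pixel.
        return compressed_bits / (n1 * n2)

    # A hypothetical 512x512, 8-bit image compressed to 65,536 bytes:
    c_bits = 65_536 * 8
    print(compression_ratio(512, 512, 8, c_bits))   # 4.0
    print(bit_rate(512, 512, c_bits))               # 2.0 bits per pixel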


Bit-rate is the most obvious measure of compression efficiency; it gives the average number of bits per stored pixel of the image. For lossy compression, bit-rate is a more meaningful performance measure for image compression systems, since the least significant bits of high bit-depth imagery can often be discarded without significant visual distortion.

The average number of bits spent in representing each image sample is often a more meaningful measure of compression performance, because it is independent of the precision with which the original samples were represented. If the image is displayed or printed at a fixed physical size regardless of the number of samples, then the more meaningful measure is the total size of the bit-stream. Such a situation is typical for lossy compression; the bit-rate is a meaningful measure only when N1 and N2 are proportional to the physical dimensions at which the image is to be printed or displayed.

Compression algorithms are also evaluated by a distortion measure, i.e., by how much the reconstructed image differs from the original. The more distortion we allow, the smaller the compressed representation can be. The primary goal of lossy compression is to minimize the number of bits required to represent an image with an allowable level of distortion. The measure of distortion is therefore an important feature of lossy compression. Formally, distortion is calculated between the original image, x[n1, n2], and the reconstructed image, x̂[n1, n2].
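As an illustration, two commonly used distortion measures, the mean squared error and the peak signal-to-noise ratio, can be computed as follows (a sketch assuming NumPy; these particular measures are given as examples and are not necessarily the ones adopted later in the thesis):

    import numpy as np

    def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
        # Mean squared error between x[n1, n2] and its reconstruction.
        diff = original.astype(np.float64) - reconstructed.astype(np.float64)
        return float(np.mean(diff ** 2))

    def psnr(original: np.ndarray, reconstructed: np.ndarray, bit_depth: int = 8) -> float:
        # Peak signal-to-noise ratio in dB for a B-bit image.
        peak = (1 << bit_depth) - 1
        error = mse(original, reconstructed)
        return float("inf") if error == 0 else 10.0 * np.log10(peak ** 2 / error)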


The quantitative distortion of the reconstructed image is measured by a numerical distortion metric computed between the original and the reconstructed image. For the evaluation of a compression method, the following measures are also applied: speed of compression, robustness against transmission errors, and the memory requirements of the algorithm. To estimate the efficiency of the algorithms in this thesis, we will use the bit-rate. According to this approach, the compression process consists of two separate parts: modeling and coding.


Modeling assigns probabilities to the symbols, and coding produces a bit sequence from these probabilities. The decompression scheme is symmetrical to the compression scheme, and the same model is used by both the coder and the decoder. The common principle of all compression methods applies here: if frequently occurring elements are represented by short codes and rarely occurring elements by long codes, then the block of data needs less memory than if all elements were represented by codes of equal length.
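The requirement that the coder and the decoder drive identical models can be sketched as follows (a simplified order-0 adaptive model; the actual entropy coder is omitted and the class is invented for illustration):

    class AdaptiveModel:
        # Order-0 adaptive model: identical copies on both sides stay in sync
        # because they are updated with the same symbols in the same order.
        def __init__(self, alphabet):
            self.counts = {s: 1 for s in alphabet}   # start with uniform counts

        def probability(self, symbol):
            return self.counts[symbol] / sum(self.counts.values())

        def update(self, symbol):
            self.counts[symbol] += 1

    message = "abracadabra"
    encoder_model = AdaptiveModel(set(message))
    decoder_model = AdaptiveModel(set(message))

    for symbol in message:
        p_enc = encoder_model.probability(symbol)   # probability the coder codes with
        p_dec = decoder_model.probability(symbol)   # the decoder must predict the same value
        assert p_enc == p_dec
        encoder_model.update(symbol)
        decoder_model.update(symbol)

Because both copies see the same symbols in the same order, the decoder reproduces exactly the probabilities the coder used, which is what allows the bit sequence to be decoded.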

The essence of this theorem is that an element s_i with probability p(s_i) is most advantageously represented by a code of length −log2 p(s_i) bits. If during the coding process the code lengths equal exactly −log2 p(s_i) bits, then the length of the coded bit stream is minimal over all possible compression methods.

Such a value, −log2 p(s_i), is the information content of an element s_i of the alphabet; averaged over the alphabet it gives the entropy. Here the source alphabet is the set of all distinct elements occurring in the source image. The entropy rate of a random process provides a lower bound on the average number of bits that must be spent in coding each of its outcomes, and this bound may be approached arbitrarily closely as the complexity of the coding scheme is allowed to grow without bound.

The entropy of the source is H = −Σ p(s_i) · log2 p(s_i), where the sum runs over i = 1, …, k and k is the number of elements, or symbols, in the alphabet. This value is the entropy of the probability distribution. In order to achieve a good compression rate, accurate probability estimation is needed. The more accurately the probabilities of symbol occurrences are estimated, the more closely the code lengths correspond to the optimal ones, and the better the compression. Since the model is responsible for the probability estimation of each symbol, statistical modeling is one of the most important tasks in data compression.
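A direct computation of this entropy for an arbitrary symbol sequence (a small sketch; the function name is ours) looks like this:

    import math
    from collections import Counter

    def entropy(symbols) -> float:
        # H = -sum_i p(s_i) * log2 p(s_i), the lower bound in bits per symbol.
        counts = Counter(symbols)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    # Flattened pixel values (or any symbol sequence) can be fed in directly:
    print(entropy("aaaaabbbcc"))   # about 1.49 bits per symbol

For the string used in the Huffman sketch above, the entropy is about 1.49 bits per symbol, while the Huffman code spends 1.5 bits per symbol, illustrating how closely a good code can approach this bound.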
