Some Theory

Compression of video and audio signals
Video compression
At present, several compression methods are in use. The oldest of them (JPEG, H.261, MPEG-2) operate on groups of pixels (blocks covering a small area, for example 8 x 8 pixels). These methods do not analyze the actual structure of the image. This generation of compressors can be used at lower resolutions, provided the transmission speed is not less than 64 kbps.
More effective compression is offered by the second generation of algorithms. These algorithms (e.g. MPEG-4) carry out a kind of scene analysis before applying the actual compression procedure. They recognize that the image can be divided into fragments in a more effective way: pixels with similar features can be grouped together, and this significantly improves the compression process. To guide this analysis, a model of human perception is employed (HVS, the Human Visual System).
A typical second-generation encoder consists of the following modules: segmentation, motion estimation, and contour and texture coding. The system offers a compromise between the compression rate and the quality of the image after decompression.
Video compression standards: H.261 and H.263
The H.261 system was created for ISDN networks with 64 kbps bandwidth. The standard allows compression ratios from 100:1 to 2000:1; however, the higher the compression ratio, the lower the quality of the decompressed image. The codec works in two modes: intra-frame (INTRA) and inter-frame (INTER).
In INTRA mode, the frame is divided into blocks of 8 x 8 pixels, which are then compressed using the Discrete Cosine Transform (similarly to JPEG).
In INTER mode, only the changes that have occurred between successive frames are saved. Note, however, that at least once every 132 frames the compression has to be carried out in INTRA mode.
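The INTRA path described above (block splitting, 2-D DCT, quantization) can be sketched as follows. This is a minimal illustration of transform coding, not the actual H.261 quantizer or entropy coder; the block values and quantization step are made up:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis; rows are the cosine basis vectors.
    c = np.array([[np.cos(np.pi * k * (2 * i + 1) / (2 * n)) for i in range(n)]
                  for k in range(n)]) * np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

C = dct_matrix(8)

def code_block(block, step=16):
    """INTRA-style coding of one 8x8 block: forward 2-D DCT, uniform
    quantization (the lossy step), then dequantization and inverse DCT."""
    coeffs = C @ block @ C.T
    quantized = np.round(coeffs / step)   # this is what would be entropy-coded
    return C.T @ (quantized * step) @ C   # decoder-side reconstruction

block = np.arange(64, dtype=float).reshape(8, 8)  # toy gradient "image"
reconstructed = code_block(block, step=16)
```

With a coarser quantization step, more coefficients collapse to zero, which is exactly what the subsequent entropy coding exploits to shrink the bit stream.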
The H.263 standard was developed to provide video compression for low-bandwidth links of about 28.8 to 33.6 kbps.
H.263 is a modernized version of H.261. In comparison with the original, half-pixel motion compensation has replaced full-pixel motion compensation, the header has been reduced, the detection and transmission of error corrections have been dropped, and new optional modes of operation have been added.
More information about H.261 and H.263
MPEG-4 compression
MPEG-4 is the newest video compression standard and is quickly becoming popular, also for video streaming. In comparison with the standards presented above, it integrates more effective methods for handling multimedia data.
The new approach consists in relating the data content to its role in human perception and extracting so-called audio-visual objects (AVOs). Solutions have been integrated that allow scalability of the data representation in several dimensions: space, time, quality, and computational complexity.
The standard has been adapted to many types of telecommunication networks and can operate at rates from 64 to 1024 kbps.
The MPEG-4 concept allows the transmission of one or several audio-visual (AV) objects from sender to receiver; before the transmission begins, however, an initial exchange of information between the encoder and the decoder has to take place. In this way, suitable algorithm classes and the tools essential for effective use of the link can be agreed upon. The coder also protects the audio-visual objects against transmission errors.
Audio compression
The data stream produced by audio compression is significantly smaller than the video data stream. Nevertheless, audio is normally compressed as well in order to minimize the total bandwidth.
A simple reduction of the link's bandwidth can be achieved by lowering the bit rate: reducing the sampling frequency and reducing the amplitude resolution (bits per sample).
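As a back-of-the-envelope illustration of how sampling frequency and amplitude resolution together determine bandwidth, consider the raw PCM bit rate. The function names and sample values below are illustrative, not taken from any particular codec:

```python
def pcm_bit_rate(sample_rate_hz, bits_per_sample, channels=1):
    """Raw (uncompressed) PCM bit rate in bits per second."""
    return sample_rate_hz * bits_per_sample * channels

def decimate(samples, factor):
    # Crude sampling-frequency reduction: keep every `factor`-th sample.
    # A real system low-pass filters first to prevent aliasing.
    return samples[::factor]

def requantize(sample, old_bits, new_bits):
    # Amplitude-resolution reduction: discard the low-order bits.
    return sample >> (old_bits - new_bits)

print(pcm_bit_rate(44_100, 16, channels=2))  # 1411200 bps: CD audio
print(pcm_bit_rate(8_000, 8))                # 64000 bps: one ISDN channel
```

Halving either the sampling frequency or the number of bits per sample halves the required bandwidth, which is why both knobs appear in the text above.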
Depending on the transmission type, various sampling frequencies are used:

Sampling frequency    Typical application
 8.00 kHz             Telephony
16.00 kHz             Multimedia communication
22.05 kHz             Personal computers
32.00 kHz             Digital radio and television
44.10 kHz             CD-Audio
48.00 kHz             DAT cassette recorders, HDTV
96.00 kHz             -

Typical sampling frequencies
In telecommunications, 12- to 14-bit linear quantization is applied to code voice; the samples are then usually companded to 8 bits per sample (A-law or mu-law).
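The 8-bit telephone coding mentioned above relies on companding: quiet samples get finer resolution than loud ones. Below is a sketch of the smooth mu-law characteristic; note that deployed G.711 codecs use a piecewise-segment approximation of this curve, not the formula itself:

```python
import math

MU = 255.0  # standard mu value for 8-bit telephony

def mulaw_compress(x):
    """Map a linear sample x in [-1, 1] to [-1, 1], expanding small amplitudes."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mulaw_expand(y):
    """Inverse of mulaw_compress."""
    return math.copysign((math.exp(abs(y) * math.log1p(MU)) - 1.0) / MU, y)

def encode(x, bits=8):
    # Quantize the companded value to `bits` bits instead of 13-14 linear bits.
    levels = 2 ** (bits - 1) - 1
    return round(mulaw_compress(x) * levels)

def decode(code, bits=8):
    levels = 2 ** (bits - 1) - 1
    return mulaw_expand(code / levels)
```

For a quiet sample such as 0.01, the 8-bit companded round trip stays within about 0.1% of full scale, which is what makes 8 bits sufficient for speech.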
Besides methods that cause some deterioration in sound quality, there are also compression methods that have practically no audible influence on the original signal.
An example of such compression is ADPCM (Adaptive Differential Pulse Code Modulation), which predicts the current sample from the preceding ones, exploiting the strong correlation between successive samples of an audio signal, and encodes only the prediction error. Strictly speaking, ADPCM is lossy, because the prediction residual is quantized, but the distortion it introduces is usually inaudible.
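A toy differential coder along these lines is sketched below, with a fixed quantization step for clarity; a real ADPCM codec adapts the step size to the signal, which is the "adaptive" part. The sample values are made up:

```python
def adpcm_encode(samples, step=4):
    """Predict each sample as the previous reconstructed sample and
    transmit only the quantized difference (the prediction residual)."""
    codes, pred = [], 0
    for s in samples:
        code = round((s - pred) / step)  # quantized residual, sent to decoder
        codes.append(code)
        pred += code * step              # track the decoder's reconstruction
    return codes

def adpcm_decode(codes, step=4):
    out, pred = [], 0
    for code in codes:
        pred += code * step
        out.append(pred)
    return out

samples = [0, 3, 8, 14, 18, 20, 19, 15]
rec = adpcm_decode(adpcm_encode(samples))
# Each reconstructed sample lies within step/2 of the original.
```

Because successive samples are correlated, the residuals are small numbers that need far fewer bits than the samples themselves.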
Wavelet compression. This compression is based on wavelet algorithms. Wavelets are mathematical functions that take non-zero values only on some finite interval. Through a suitable mathematical transformation, any function can be represented in terms of such wavelets.
The input signal to be compressed is first filtered to remove components imperceptible to the human eye; it is then divided into fragments, and each fragment is represented by an appropriate function.
This is a lossy method, and the compressed images are characterized by blurred edges. In comparison with the JPEG and MPEG methods, however, wavelet compression does not distort the structure of objects on the screen with block artifacts. Wavelet methods are still being developed and will probably form the basis of new compressors. Interestingly, the method has been used by the FBI in a fingerprint recognition system, because it performed better in tests than JPEG.
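The idea can be illustrated with the simplest wavelet, the Haar wavelet: one decomposition level splits a signal into pairwise averages (coarse shape) and pairwise differences (detail), and compression zeroes the small details. This sketch is one-dimensional and single-level; image codecs apply several 2-D levels. The signal and threshold are made up:

```python
def haar_forward(signal):
    """One Haar decomposition level: pairwise averages and half-differences."""
    avg = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return avg, detail

def haar_inverse(avg, detail):
    out = []
    for a, d in zip(avg, detail):
        out += [a + d, a - d]
    return out

def haar_compress(signal, threshold):
    """Lossy step: zero the detail coefficients below the threshold."""
    avg, detail = haar_forward(signal)
    return avg, [d if abs(d) > threshold else 0.0 for d in detail]

sig = [10, 12, 14, 12, 100, 102, 98, 96]
# Without thresholding the transform is perfectly invertible;
# with thresholding, small fluctuations are smoothed away.
```

Discarding a detail coefficient spreads the error over its whole support instead of over a rigid 8x8 block, which is why wavelet artifacts look like blur rather than blocking.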
The wavelet compression method is available, alongside MJPEG, to users of the Hicap digital recording system. According to the manufacturer, with MJPEG compression 10 GB of disc space allows about 4.5 hours of live material to be recorded, whereas the wavelet method extends this to 9 hours. If better quality is needed, however, MJPEG compression should be used.