lloyds

Optimize quantization parameters using the Lloyd algorithm

Syntax

[partition,codebook] = lloyds(trainingset,initcodebook)
[partition,codebook] = lloyds(trainingset,length)
[partition,codebook] = lloyds(trainingset,...,tol)
[partition,codebook] = lloyds(trainingset,...,tol,plotflag)
[partition,codebook,distor] = lloyds(...)
[partition,codebook,distor,reldistor] = lloyds(...)

Description

[partition,codebook] = lloyds(trainingset,initcodebook) optimizes the scalar quantization parameters partition and codebook for the training data in the vector trainingset. initcodebook, a vector of length at least 2, is the initial guess of the codebook values. The output codebook is a vector of the same length as initcodebook. The output partition is a vector whose length is one less than the length of codebook.
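For illustration only, a minimal sketch of this syntax; the Gaussian training data and the four-level initial guess below are arbitrary choices, not taken from the original page:

trainingset = randn(1,1000);        % illustrative training data
initcodebook = [-1 -0.3 0.3 1];     % initial guess of four codebook values
[partition,codebook] = lloyds(trainingset,initcodebook);
% codebook has length 4, matching initcodebook; partition has length 3.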

For a description of the formats of partition and codebook, see Representing Quantization Parameters or the reference page for quantiz.

[partition,codebook] = lloyds(trainingset,length) is the same as the first syntax, except that the scalar argument length indicates the size of the vector codebook. This syntax does not include an initial codebook guess.

[partition,codebook] = lloyds(trainingset,...,tol) is the same as the two syntaxes above, except that tol replaces 10^-7 in condition 1 of the algorithm description below.

[partition,codebook] = lloyds(trainingset,...,tol,plotflag) is the same as the syntax above, except that it also plots the original signal, the optimized partition, and the optimized codebook in a figure. The value of plotflag is not important.
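A sketch of the tol and plotflag forms, reusing the trainingset defined above; the codebook length 4, the tolerance 1e-6, and the plotflag value 1 are arbitrary:

[partition,codebook] = lloyds(trainingset,4,1e-6);     % custom stopping tolerance
[partition,codebook] = lloyds(trainingset,4,1e-6,1);   % any plotflag value triggers the plot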

[partition,codebook,distor] = lloyds(...) returns the final mean square distortion in the variable distor.

[partition,codebook,distor,reldistor] = lloyds(...) returns a value reldistor that is related to the algorithm's termination. If the algorithm stops under condition 1 of the Algorithm section below, reldistor is the relative change in distortion between the last two iterations. Under condition 2, reldistor is the same as distor.
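As a sketch, requesting all four outputs (the codebook length 8 and the tolerance are illustrative):

tol = 1e-7;
[partition,codebook,distor,reldistor] = lloyds(trainingset,8,tol);
% If the iteration stopped under condition 1, reldistor is below tol;
% if it stopped under condition 2, reldistor equals distor.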

Examples

The code below optimizes the quantization parameters for a sinusoidal transmission via a 3-bit channel. Since the typical data is sinusoidal, trainingset is a sampled sine wave. Since the channel can transmit 3 bits at a time, lloyds prepares a codebook of length 2^3 = 8.
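The example code itself is missing from this extract; a minimal reconstruction matching the description above (a sampled sine wave as training data, a 2^3-level codebook, and quantiz to apply the result and measure the distortion; the sampling times are an assumption) could be:

t = 0:0.1:2*pi;                                 % sample times (illustrative spacing)
sig = sin(t);                                   % training data: a sampled sine wave
[partition,codebook] = lloyds(sig,2^3);         % optimize an 8-level quantizer
[index,quants,distor] = quantiz(sig,partition,codebook);   % apply it and measure distortion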

Algorithm

lloyds uses an iterative process to try to minimize the mean square distortion. The optimization ends when either of the following conditions holds (a generic sketch of such an iteration follows the list):

  1. The relative change in distortion between iterations is less than 10^-7, or
  2. The distortion is less than eps*max(trainingset), where eps is MATLAB's floating-point relative accuracy.
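The following is not the toolbox implementation, only a generic sketch of a Lloyd iteration that applies the two stopping rules above; the training data, initial codebook, and iteration cap are all illustrative, and quantiz is used for the nearest-neighbor assignment:

trainingset = randn(1,1000);          % illustrative training data
codebook = [-1 -0.3 0.3 1];           % illustrative initial codebook (sorted ascending)
tol = 1e-7;                           % default relative-change tolerance
lastdistor = Inf;
for iter = 1:1000
    % Decision boundaries midway between adjacent codebook values.
    partition = (codebook(1:end-1) + codebook(2:end))/2;
    % Assign each sample to a region and measure the mean square distortion.
    [~,quants,distor] = quantiz(trainingset,partition,codebook);
    % Centroid step: move each codebook value to the mean of its region.
    for k = 1:length(codebook)
        members = trainingset(quants == codebook(k));
        if ~isempty(members)
            codebook(k) = mean(members);
        end
    end
    % Stopping rules: small relative change in distortion, or tiny distortion.
    if abs(lastdistor - distor)/distor < tol || distor < eps*max(trainingset)
        break
    end
    lastdistor = distor;
end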

See Also
compand, dpcmopt, quantiz

References

S. P. Lloyd. "Least Squares Quantization in PCM." IEEE Transactions on Information Theory. Vol. IT-28, March 1982, 129-137.

J. Max. "Quantizing for Minimum Distortion." IRE Transactions on Information Theory. Vol. IT-6, March 1960, 7-12.

