Quantization

Input-output characteristic of a scalar quantizer

[Figure: staircase characteristic x̂ = Q(x). The output axis shows the M representative levels …, x̂_{q-1}, x̂_q, x̂_{q+1}, …; the input axis shows the M-1 decision thresholds …, t_q, t_{q+1}, t_{q+2}, … and the input signal x. Sometimes the convention x̂_q = Q(x) is used for the representative levels.]
Example of a quantized waveform

[Figure: original and quantized signal (top), quantization error (bottom).]
Lloyd-Max scalar quantizer

Problem: For a signal x with given PDF f_X(x), find a quantizer with M representative levels such that

    d = MSE = E[(X - X̂)²] → min.

Solution: Lloyd-Max quantizer [Lloyd, 1957] [Max, 1960]

Necessary (but not sufficient) conditions:

M-1 decision thresholds exactly half-way between representative levels:

    t_q = (x̂_{q-1} + x̂_q) / 2,   q = 1, 2, …, M-1

M representative levels in the centroid of the PDF between two successive decision thresholds:

    x̂_q = ∫_{t_q}^{t_{q+1}} x f_X(x) dx / ∫_{t_q}^{t_{q+1}} f_X(x) dx,   q = 0, 1, …, M-1
Iterative Lloyd-Max quantizer design

1. Guess initial set of representative levels x̂_q, q = 0, 1, …, M-1.
2. Calculate decision thresholds

       t_q = (x̂_{q-1} + x̂_q) / 2,   q = 1, 2, …, M-1

3. Calculate new representative levels

       x̂_q = ∫_{t_q}^{t_{q+1}} x f_X(x) dx / ∫_{t_q}^{t_{q+1}} f_X(x) dx,   q = 0, 1, …, M-1

4. Repeat 2. and 3. until no further distortion reduction.
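The iteration above can be sketched numerically. This is a minimal illustration for a standard Gaussian PDF (the 4-level example that follows); the midpoint-rule grid on [-6, 6], the grid size, and the fixed iteration count are implementation assumptions, not part of the algorithm statement.

```python
import math

def gaussian_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def lloyd_max(levels, pdf=gaussian_pdf, lo=-6.0, hi=6.0, n=8000, iters=60):
    dx = (hi - lo) / n
    xs = [lo + (i + 0.5) * dx for i in range(n)]
    fs = [pdf(x) for x in xs]
    t = []
    for _ in range(iters):
        # Step 2: thresholds half-way between neighbouring levels.
        t = [(a + b) / 2.0 for a, b in zip(levels, levels[1:])]
        bounds = [lo] + t + [hi]
        # Step 3: each level moves to the centroid of its interval.
        levels = []
        for q in range(len(bounds) - 1):
            num = sum(x * f for x, f in zip(xs, fs)
                      if bounds[q] <= x < bounds[q + 1]) * dx
            den = sum(f for x, f in zip(xs, fs)
                      if bounds[q] <= x < bounds[q + 1]) * dx
            levels.append(num / den)
    return levels, t

levels, thresholds = lloyd_max([-3.0, -1.0, 1.0, 3.0])
print([round(v, 2) for v in levels])      # close to [-1.51, -0.45, 0.45, 1.51]
print([round(v, 2) for v in thresholds])  # close to [-0.98, 0.0, 0.98]
```

The fixed iteration count stands in for the distortion-based stopping rule of step 4; for this symmetric starting point the iteration settles well before 60 passes.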
Example of use of the Lloyd algorithm (I)

X zero-mean, unit-variance Gaussian r.v. Design a scalar quantizer with 4 quantization indices with minimum expected distortion D*.

Optimum quantizer, obtained with the Lloyd algorithm:
Decision thresholds: -0.98, 0, 0.98
Representative levels: -1.51, -0.45, 0.45, 1.51
D* = 0.12 (SNR = 9.30 dB)

[Figure: quantizer characteristic with decision boundaries and reconstruction levels.]
Example of use of the Lloyd algorithm (II)

Convergence:
Initial quantizer A: decision thresholds -3, 0, 3
Initial quantizer B: decision thresholds -½, 0, ½

[Figures: quantization function, distortion D, and SNR_OUT [dB] versus iteration number for both initializations; SNR_OUT,final = 9.2978 dB for A and 9.298 dB for B.]

After 6 iterations, in both cases, (D - D*)/D* < 1%.
Example of use of the Lloyd algorithm (III)

X zero-mean, unit-variance Laplacian r.v. Design a scalar quantizer with 4 quantization indices with minimum expected distortion D*.

Optimum quantizer, obtained with the Lloyd algorithm:
Decision thresholds: -1.13, 0, 1.13
Representative levels: -1.83, -0.42, 0.42, 1.83
D* = 0.18 (SNR = 7.54 dB)

[Figure: quantizer characteristic with thresholds and representative levels.]
Example of use of the Lloyd algorithm (IV)

Convergence:
Initial quantizer A: decision thresholds -3, 0, 3
Initial quantizer B: decision thresholds -½, 0, ½

[Figures: quantization function, distortion D, and SNR [dB] versus iteration number; SNR_final = 7.5415 dB for both initializations.]

After 6 iterations, in both cases, (D - D*)/D* < 1%.
Lloyd algorithm with training data

1. Guess initial set of representative levels x̂_q, q = 0, 1, …, M-1.
2. Assign each sample x_i in training set T to the closest representative x̂_q:

       B_q = { x ∈ T : Q(x) = x̂_q },   q = 0, 1, …, M-1

3. Calculate new representative levels

       x̂_q = (1/|B_q|) Σ_{x ∈ B_q} x,   q = 0, 1, …, M-1

4. Repeat 2. and 3. until no further distortion reduction.
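A sketch of the training-data variant above on synthetic Gaussian samples; the sample size, the seed, and the fixed iteration count (instead of the distortion-based stopping rule of step 4) are illustrative assumptions.

```python
import random

random.seed(0)
T = [random.gauss(0.0, 1.0) for _ in range(20000)]

levels = [-3.0, -1.0, 1.0, 3.0]                 # step 1: initial guess
for _ in range(30):
    # Step 2: nearest-representative partition B_q of the training set.
    cells = [[] for _ in levels]
    for x in T:
        q = min(range(len(levels)), key=lambda k: (x - levels[k]) ** 2)
        cells[q].append(x)
    # Step 3: each level becomes the mean (centroid) of its cell.
    levels = [sum(c) / len(c) for c in cells]

print([round(v, 2) for v in levels])  # close to [-1.51, -0.45, 0.45, 1.51]
```

Up to sampling noise, the levels approach the PDF-based solution of the previous slide; this is k-means clustering in one dimension.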
Lloyd-Max quantizer properties

Zero-mean quantization error:

    E[X - X̂] = 0

Quantization error and reconstruction decorrelated:

    E[(X - X̂) X̂] = 0

Variance subtraction property:

    σ²_X̂ = σ²_X - E[(X - X̂)²]
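A numerical check of the three properties (a sketch, not part of the slides), using the 4-level Gaussian Lloyd-Max quantizer quoted earlier: thresholds ±0.98, 0 and levels ±0.45, ±1.51. The integration grid and range are assumptions.

```python
import math

t = [-0.98, 0.0, 0.98]          # decision thresholds
r = [-1.51, -0.45, 0.45, 1.51]  # representative levels

def Q(x):
    for q, thr in enumerate(t):
        if x < thr:
            return r[q]
    return r[-1]

def pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

n, lo, hi = 40000, -8.0, 8.0
dx = (hi - lo) / n
xs = [lo + (i + 0.5) * dx for i in range(n)]
e_err   = sum((x - Q(x)) * pdf(x) for x in xs) * dx         # E[X - X^]
e_cross = sum((x - Q(x)) * Q(x) * pdf(x) for x in xs) * dx  # E[(X - X^) X^]
var_rec = sum(Q(x) ** 2 * pdf(x) for x in xs) * dx          # var of X^ (zero mean)
mse     = sum((x - Q(x)) ** 2 * pdf(x) for x in xs) * dx    # E[(X - X^)^2]
print(abs(e_err) < 1e-3, abs(e_cross) < 1e-2)       # True True
print(abs(var_rec + mse - 1.0) < 1e-2)              # True: sums to sigma_X^2 = 1
```

The small residuals come from the rounded thresholds and levels; with the exact Lloyd-Max solution all three properties hold exactly.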
High rate approximation

Approximate solution of the Max quantization problem, assuming high rate and a smooth PDF [Panter, Dite, 1951]:

    Δx(x) · f_X(x)^(1/3) ≈ const

where Δx(x) is the distance between two successive representative levels and f_X(x) is the probability density function of x.

Approximation for the quantization error variance:

    d = E[(X - X̂)²] ≈ (1 / (12 M²)) · ( ∫ f_X(x)^(1/3) dx )³

with M the number of representative levels.
High rate approximation (cont.)

High-rate distortion-rate function for the scalar Lloyd-Max quantizer:

    d(R) = ε² σ²_X 2^(-2R)   with   ε² = (1 / (12 σ²_X)) · ( ∫ f_X(x)^(1/3) dx )³

Some example values for ε²:

    uniform:    1
    Laplacian:  9/2 = 4.5
    Gaussian:   √3·π/2 ≈ 2.721
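A quick numerical check (a sketch) of the Gaussian entry above: evaluating ε² = (1/(12 σ²_X)) (∫ f_X(x)^(1/3) dx)³ for the unit-variance Gaussian should reproduce √3·π/2 ≈ 2.721. The integration range and grid are assumptions.

```python
import math

def pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

# Midpoint-rule integral of f_X^(1/3); f^(1/3) decays as exp(-x^2/6),
# so a wide range [-20, 20] keeps the truncation error negligible.
n, lo, hi = 100000, -20.0, 20.0
dx = (hi - lo) / n
integral = sum(pdf(lo + (i + 0.5) * dx) ** (1.0 / 3.0) for i in range(n)) * dx
eps2 = integral ** 3 / 12.0          # sigma_X^2 = 1 for this PDF
print(round(eps2, 3))  # 2.721
```

The closed form follows from ∫ f^(1/3) dx = (6π)^(1/2) (2π)^(-1/6) for the standard Gaussian, whose cube over 12 is exactly √3·π/2.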
High rate approximation (cont.)

Partial distortion theorem: each interval makes an (approximately) equal contribution to the overall mean-squared error:

    Pr[t_i ≤ X < t_{i+1}] · E[(X - X̂)² | t_i ≤ X < t_{i+1}]
      ≈ Pr[t_j ≤ X < t_{j+1}] · E[(X - X̂)² | t_j ≤ X < t_{j+1}]   for all i, j

[Panter, Dite, 1951], [Fejes Toth, 1959], [Gersho, 1979]
Entropy-constrained scalar quantizer

The Lloyd-Max quantizer is optimum for fixed-rate encoding; how can we do better for variable-length encoding of the quantizer index?

Problem: For a signal x with given pdf f_X(x), find a quantizer with rate

    R = H(X̂) = - Σ_{q=0}^{M-1} p_q log₂ p_q

such that d = MSE = E[(X - X̂)²] → min.

Solution: Lagrangian cost function

    J = d + λR = E[(X - X̂)²] + λ H(X̂) → min.
Iterative entropy-constrained scalar quantizer design

1. Guess initial set of representative levels x̂_q, q = 0, 1, …, M-1, and corresponding probabilities p_q.
2. Calculate M-1 decision thresholds

       t_q = (x̂_{q-1} + x̂_q)/2 + λ (log₂ p_{q-1} - log₂ p_q) / (2 (x̂_q - x̂_{q-1})),   q = 1, 2, …, M-1

3. Calculate M new representative levels and probabilities:

       x̂_q = ∫_{t_q}^{t_{q+1}} x f_X(x) dx / ∫_{t_q}^{t_{q+1}} f_X(x) dx,
       p_q = ∫_{t_q}^{t_{q+1}} f_X(x) dx,   q = 0, 1, …, M-1

4. Repeat 2. and 3. until no further reduction in Lagrangian cost.
Lloyd algorithm for entropy-constrained quantizer design based on training set

1. Guess initial set of representative levels x̂_q, q = 0, 1, …, M-1, and corresponding probabilities p_q.
2. Assign each sample x_i in training set T to the representative x̂_q minimizing the Lagrangian cost

       J(x_i, q) = |x_i - x̂_q|² - λ log₂ p_q,
       B_q = { x ∈ T : Q(x) = x̂_q }

3. Calculate new representative levels and probabilities:

       x̂_q = (1/|B_q|) Σ_{x ∈ B_q} x,   p_q = |B_q| / |T|,   q = 0, 1, …, M-1

4. Repeat 2. and 3. until no further reduction in the overall Lagrangian cost

       J = Σ_i |x_i - Q(x_i)|² - λ Σ_i log₂ p_{q(x_i)}
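The training-set steps above can be sketched as follows on synthetic Gaussian data; the value of λ, the sample size, the seed, and the fixed iteration count (in place of the Lagrangian-cost stopping rule) are illustrative assumptions.

```python
import math
import random

random.seed(1)
T = [random.gauss(0.0, 1.0) for _ in range(20000)]
lam = 0.1                                       # Lagrangian multiplier

levels = [-2.0, -1.0, 0.0, 1.0, 2.0]            # step 1: initial levels
probs = [1.0 / len(levels)] * len(levels)       # ... and probabilities

def best_index(x):
    # Step 2: minimize J(x, q) = (x - x^_q)^2 - lam * log2(p_q).
    return min(range(len(levels)),
               key=lambda q: (x - levels[q]) ** 2 - lam * math.log2(probs[q]))

for _ in range(30):
    cells = [[] for _ in levels]
    for x in T:
        cells[best_index(x)].append(x)
    # Step 3: new centroids and empirical probabilities.  Empty cells
    # are dropped, which can happen for large lambda.
    levels = [sum(c) / len(c) for c in cells if c]
    probs = [len(c) / len(T) for c in cells if c]

rate = -sum(p * math.log2(p) for p in probs)                    # H(X^)
mse = sum((x - levels[best_index(x)]) ** 2 for x in T) / len(T)
print(len(levels), round(rate, 2), round(mse, 2))
```

Compared with the plain Lloyd algorithm, the only changes are the biased nearest-neighbour rule in step 2 and the probability update in step 3; larger λ trades distortion for a lower entropy of the index.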
Example of the EC Lloyd algorithm (I)

X zero-mean, unit-variance Gaussian r.v. Design an entropy-constrained scalar quantizer with rate R ≈ 2 bits and minimum distortion D*.

Optimum quantizer, obtained with the entropy-constrained Lloyd algorithm:
11 intervals (in [-6, 6]), almost uniform
D* = 0.09 (SNR = 10.53 dB), R = 2.0035 bits (compare to the fixed-length example)

[Figure: thresholds and representative levels.]
Example of the EC Lloyd algorithm (II)

Same Lagrangian multiplier λ used in all experiments.
Initial quantizer A: 15 intervals (>11) in [-6, 6], all of the same length
Initial quantizer B: only 4 intervals (<11) in [-6, 6], all of the same length

[Figures: quantization function, SNR [dB], and rate R [bit/symbol] versus iteration number. Initialization A converges to SNR_final = 10.5363 dB, R_final = 2.0044 bit/symbol; initialization B only reaches SNR_final = 8.9452 dB, R_final = 1.7723 bit/symbol.]
Example of the EC Lloyd algorithm (III)

X zero-mean, unit-variance Laplacian r.v. Design an entropy-constrained scalar quantizer with rate R ≈ 2 bits and minimum distortion D*.

Optimum quantizer, obtained with the entropy-constrained Lloyd algorithm:
21 intervals (in [-10, 10]), almost uniform
D* = 0.07 (SNR = 11.38 dB), R = 2.0023 bits (compare to the fixed-length example)

[Figure: decision thresholds and representative levels.]
Example of the EC Lloyd algorithm (IV)

Same Lagrangian multiplier λ used in all experiments.
Initial quantizer A: 25 intervals (>21 and odd) in [-10, 10], all of the same length
Initial quantizer B: only 4 intervals (<21) in [-10, 10], all of the same length

[Figures: quantization function, SNR [dB], and rate R [bit/symbol] versus iteration number. Initialization A converges to SNR_final = 11.407 dB, R_final = 2.0063 bit/symbol; initialization B only reaches SNR_final = 7.4111 dB, R_final = 1.6186 bit/symbol.]

Convergence in cost is faster than convergence of the decision thresholds.
High-rate results for EC scalar quantizers

For MSE distortion and high rates, uniform quantizers (followed by entropy coding) are optimum [Gish, Pierce, 1968].

Distortion and entropy for a smooth PDF and fine quantizer interval Δ:

    d = Δ²/12,    H(X̂) = h(X) - log₂ Δ

Distortion-rate function:

    d(R) = (1/12) 2^(2h(X)) 2^(-2R)

which is a factor πe/6, or 1.53 dB, from the Shannon Lower Bound

    D(R) = (1/(2πe)) 2^(2h(X)) 2^(-2R)
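The two high-rate formulas can be checked numerically (a sketch): apply a fine uniform mid-tread quantizer, step Δ = 0.05, to a standard Gaussian, for which h(X) = ½ log₂(2πe). The step size and integration grid are assumptions.

```python
import math

step = 0.05

def pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

n, lo, hi = 100000, -8.0, 8.0
dx = (hi - lo) / n
probs = {}
d = 0.0
for i in range(n):
    x = lo + (i + 0.5) * dx
    q = math.floor(x / step + 0.5)               # mid-tread quantizer index
    probs[q] = probs.get(q, 0.0) + pdf(x) * dx   # interval probability p_q
    d += (x - q * step) ** 2 * pdf(x) * dx       # distortion integral

H = -sum(p * math.log2(p) for p in probs.values())
h = 0.5 * math.log2(2.0 * math.pi * math.e)      # differential entropy h(X)
print(round(d / (step * step / 12.0), 2))        # ~ 1:  d = step^2 / 12
print(round(H - (h - math.log2(step)), 2))       # ~ 0:  H = h - log2(step)
```

Both high-rate identities hold to within the O(Δ²) error expected for a smooth PDF.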
Comparison of high-rate performance of scalar quantizers

High-rate distortion-rate function:

    d(R) = ε² σ²_X 2^(-2R)

Scaling factor ε²:

    PDF        Shannon LowBd     Lloyd-Max         Entropy-coded
    Uniform    6/(πe) ≈ 0.703    1                 1
    Laplacian  e/π ≈ 0.865       9/2 = 4.5         e²/6 ≈ 1.232
    Gaussian   1                 √3·π/2 ≈ 2.721    πe/6 ≈ 1.423
Deadzone uniform quantizer

[Figure: staircase characteristic, quantizer output x̂ versus quantizer input x, with a widened zero interval (deadzone) around the origin.]
Embedded quantizers

Motivation: "scalability", i.e., decoding of compressed bitstreams at different rates (with different qualities).

Nested quantization intervals: Q0 ⊃ Q1 ⊃ Q2. In general, only one quantizer of such a nested family can be optimum (exception: uniform quantizers).

Example: Lloyd-Max quantizers for a Gaussian PDF. The optimal 2-bit quantizer has decision thresholds -0.98, 0, 0.98, while the optimal 3-bit quantizer has decision thresholds -1.75, -1.05, -0.50, 0, 0.50, 1.05, 1.75. Since ±0.98 do not appear among the 3-bit thresholds, the 2-bit and 3-bit optimal quantizers are not embeddable; embedded quantizers therefore incur a performance loss.

[Figure: binary index assignment for the nested 1-, 2-, and 3-bit quantizers.]
Information theoretic analysis

"Successive refinement": embedded coding at multiple rates without loss relative to the R-D function:

    E[d(X, X̂₁)] ≤ D₁,   I(X; X̂₁) = R(D₁)
    E[d(X, X̂₂)] ≤ D₂,   I(X; X̂₂) = R(D₂)

Successive refinement with distortions D₁ and D₂ ≤ D₁ can be achieved if and only if there exists a conditional distribution

    f_{X̂₁,X̂₂|X}(x̂₁, x̂₂ | x) = f_{X̂₂|X}(x̂₂ | x) · f_{X̂₁|X̂₂}(x̂₁ | x̂₂)

i.e., the Markov chain condition X → X̂₂ → X̂₁ holds.   [Equitz, Cover, 1991]
Embedded deadzone uniform quantizers

[Figure: nested quantizers Q0, Q1, Q2 with a deadzone around x = 0 and step sizes Δ, 2Δ, 4Δ; interval widths 2, 4, 8 around the deadzone.]

Supported in JPEG-2000 (with a general deadzone width) for quantization of wavelet coefficients.
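A sketch of the embedding property for deadzone uniform quantizers of the kind used for wavelet coefficients in JPEG-2000: dropping k low bits of the index magnitude for step Δ yields exactly the index for step 2^k·Δ. The sample inputs below are illustrative.

```python
def deadzone_index(x, step):
    # Deadzone quantizer: the interval (-step, step) maps to index 0,
    # so the zero bin is twice as wide as all other bins.
    q = int(abs(x) / step)                # magnitude of the index
    return -q if x < 0.0 else q

delta = 1.0
for x in [0.3, 1.7, 3.9, 6.2, -5.1, -0.4]:
    q0 = abs(deadzone_index(x, delta))    # finest quantizer Q0
    s0 = -1 if x < 0.0 else 1
    # Coarser embedded quantizers Q1 (step 2*delta) and Q2 (step 4*delta)
    # are obtained by shifting out the low index bits:
    assert deadzone_index(x, 2.0 * delta) == s0 * (q0 >> 1)
    assert deadzone_index(x, 4.0 * delta) == s0 * (q0 >> 2)
print("embedding verified for the sample inputs")
```

This is why the nested deadzone quantizers Q0 ⊃ Q1 ⊃ Q2 support bit-plane-wise embedded decoding: each dropped index bit halves the rate granularity while the coarser quantizer remains a deadzone uniform quantizer.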
Vector quantization

LBG algorithm

Lloyd algorithm generalized for VQ [Linde, Buzo, Gray, 1980]; alternate between:
- Best representative vectors for given partitioning of the training set
- Best partitioning of the training set for given representative vectors

Assumption: fixed code word length. Code book unstructured: full search.
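A minimal 2-D LBG sketch on synthetic data (an illustration, not the original experiments): the two alternating steps above, with the codebook size, sample size, seed, and fixed iteration count as assumptions.

```python
import random

random.seed(2)
T = [(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(5000)]

def dist2(u, v):
    return (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2

codebook = random.sample(T, 4)      # 4 code vectors: fixed 2-bit code words
for _ in range(25):
    # Best partitioning of the training set for the given code vectors
    # (full search over the unstructured code book).
    cells = [[] for _ in codebook]
    for v in T:
        cells[min(range(len(codebook)),
                  key=lambda q: dist2(v, codebook[q]))].append(v)
    # Best (centroid) representative vectors for the given partitioning.
    codebook = [(sum(u[0] for u in c) / len(c),
                 sum(u[1] for u in c) / len(c)) for c in cells if c]

d = sum(min(dist2(v, c) for c in codebook) for v in T) / len(T)
print(round(d, 2))  # mean-squared error per 2-D vector
```

Like the scalar Lloyd algorithm, LBG only guarantees a local optimum; in practice the codebook is often grown by splitting, and the iteration is stopped when the distortion no longer decreases.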
Design of entropy-coded vector quantizers

Extended LBG algorithm for entropy-coded VQ [Chou, Lookabaugh, Gray, 1989].

Lagrangian cost function: solve the unconstrained rather than the constrained problem:

    J = d + λR = E[‖X - X̂‖²] + λ H(X̂) → min.

Unstructured code book: full search for the minimum of

    J(x_i, q) = ‖x_i - x̂_q‖² - λ log₂ p_q

The most general coder structure: any source coder can be interpreted as VQ with VLC!
Lattice vector quantization

[Figure: two-dimensional lattice of representative vectors (amplitude 1 vs. amplitude 2), with one quantization cell and the source pdf overlaid.]
8D VQ of a Gauss-Markov source (r = 0.95)

[Figure: SNR [dB] (0 to 18) versus rate [bit/sample] (0 to 1.0); curves for LBG with variable CWL, E8 lattice, LBG with fixed CWL, and scalar quantization.]

8D VQ of a memoryless Laplacian source

[Figure: SNR [dB] (0 to 9) versus rate [bit/sample] (0 to 1.0); curves for E8 lattice, LBG with variable CWL, scalar quantization, and LBG with fixed CWL.]