EE 451
Example of Hard- and Soft-Decision Decoding
Spring 2016
Consider a simple (2,1) linear, binary repetition code used as an error control code (ECC). Since the minimum distance is 2, only one of the two 1-bit error patterns can be corrected. The resulting probability of decoding error is then $P_e = \varepsilon(1-\varepsilon) + \varepsilon^2 = \varepsilon$, which is no better than uncoded transmission, and is in fact worse because with the rate-1/2 code the information transmission rate is 1/2 bit per channel use.

Now consider binary PAM signaling over an additive white Gaussian noise channel using the same (2,1) block code. At the output of the matched filter/correlator receiver the model is
$$\mathbf{y} = \mathbf{a} + \mathbf{w},$$
where $\mathbf{a} = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}$ is the transmitted signal point, $\mathbf{w} = \begin{bmatrix} w_1 \\ w_2 \end{bmatrix}$ is zero-mean Gaussian noise with covariance $\sigma_w^2 I$, where $\sigma_w^2 = N_0/2$, and $\mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}$ is the pair of samples from the output of the matched filter (or correlators). The block diagram is shown below, indicating the distinction between separate PAM detection followed by ECC decoding, and combined detection and decoding (the soft-decision case).
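As a quick check of the BSC result $P_e = \varepsilon(1-\varepsilon) + \varepsilon^2 = \varepsilon$ above, here is a minimal Python sketch; the choice of $(0,1)$ as the correctable coset leader is mine (either 1-bit pattern could be chosen):

```python
# Minimal check of the (2,1) repetition-code result P_e = eps on a BSC.
# Assumption (mine): the standard-array decoder corrects the coset leader
# (0,1), so the other 1-bit pattern (1,0) and the 2-bit pattern (1,1)
# both cause decoding errors.
eps = 0.01  # example BSC crossover probability (arbitrary value)

p_uncorrected_single = eps * (1 - eps)  # pattern (1,0) occurs
p_double = eps ** 2                     # pattern (1,1) occurs
p_decode_error = p_uncorrected_single + p_double

# Both print (approximately) 0.01: no gain over uncoded transmission.
print(p_decode_error, eps)
```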
The figures below show the "noise clouds" about the two different 2-dimensional signal points
$$\mathbf{a} \in \left\{ \begin{bmatrix} \Delta \\ \Delta \end{bmatrix}, \begin{bmatrix} -\Delta \\ -\Delta \end{bmatrix} \right\}, \quad \text{with } \Delta = 1.$$
The first figure shows the decoding regions for hard-decision decoding of each bit. The second shows the decoding regions for the soft-decision decoding of each 2-bit codeword. In the figure, there appear to be about 6 hard-decision decoding errors and one soft-decision decoding error for the particular realization of 100 iid additive white Gaussian noise vectors added to each of the two signal points (with the SNR = 6 dB).
Figure 1. Hard-decision detection regions.
Figure 2. Soft-decision detection/decoding regions (SNR ($E_b/N_0$) = 6 dB).

The simulation is repeated with 500 AWGN vectors added to each signal point, and the result is shown below. Note that there are far fewer decoding errors with the soft-decision decoding, even using this simple code that, for the BSC, offers no reduction in probability of bit error.
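The handout does not include the simulation code; the following NumPy sketch reproduces the experiment under stated assumptions (the 6 dB figure is taken as $E_b/N_0$ with $E_b = 2E_s$ for the rate-1/2 code; the seed and variable names are mine):

```python
import numpy as np

# Monte Carlo sketch of the experiment behind Figures 1-3: two antipodal
# 2-D signal points (+-Delta, +-Delta) in white Gaussian noise.
rng = np.random.default_rng(0)  # arbitrary seed
delta = 1.0
n_trials = 500                  # noise vectors per signal point
ebn0 = 10 ** (6 / 10)           # Eb/N0 = 6 dB (linear)

# With Es = delta^2 per dimension and the rate-1/2 code, each information
# bit uses two channel symbols, so Eb = 2*Es and
# sigma_w^2 = N0/2 = Eb / (2 * Eb/N0).
sigma_w = np.sqrt(2 * delta**2 / (2 * ebn0))

hard_errors = soft_errors = 0
for sign in (+1.0, -1.0):
    a = sign * delta * np.ones((n_trials, 2))           # transmitted points
    y = a + sigma_w * rng.standard_normal((n_trials, 2))

    # Hard decision: detect each coordinate separately; count received
    # vectors that leave their quadrant in Figure 1 (at least one bit
    # detected in error).
    hard_errors += np.sum(np.any(np.sign(y) != sign, axis=1))

    # Soft decision: choose the closer of the two 2-D codeword points,
    # which reduces to thresholding y1 + y2 at zero.
    soft_errors += np.sum(np.sign(y.sum(axis=1)) != sign)

print("hard-decision errors:", hard_errors)
print("soft-decision errors:", soft_errors)
```

Counting vectors that leave their quadrant reproduces the hard-decision errors visible in Figure 1, while thresholding $y_1 + y_2$ implements the soft-decision regions of Figure 2.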
Figure 3. Comparison of hard- and soft-decision detection/decoding regions.

Analysis
For sample-by-sample (hard-decision) decoding, first each PAM symbol is individually detected and mapped to a decoded codeword bit; then the error control code (ECC) decoder forms the decoded information bit. The maximum likelihood (ML) detection rule for binary PAM is to map the noisy (matched filter output) sample $y_i$ to the closest (in Euclidean distance) symbol, $\hat{a}_i$. This rule is simply:
$$\hat{a}_i = \begin{cases} \Delta, & \text{if } y_i \ge 0; \\ -\Delta, & \text{if } y_i < 0. \end{cases}$$
When applied to two consecutive PAM symbols, this results in the (2-dimensional) detection regions shown in Figure 1. From previous work, the binary PAM bit error probability is $P_b = Q\!\left(\sqrt{E_s/\sigma_w^2}\right)$. For the hard-decision decoding, the BSC then has crossover probability $\varepsilon = P_b$, and the ECC decoder then yields bit error probability $P_e = \varepsilon = P_b$. That is, there is no reduction in bit error rate by using the (2,1) code.
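A concrete sketch of this two-step path follows; the coset-leader choice $(0,1)$ and the function names are mine, not from the handout:

```python
# Sketch of the hard-decision path for the (2,1) code.
def detect_bit(y, delta=1.0):
    """ML detection for binary PAM: closest of {+delta, -delta} to y."""
    return delta if y >= 0 else -delta

def decode_info_bit(c1, c2):
    """Standard-array decoding of the (2,1) repetition code.

    Codewords are 00 and 11; the coset {01, 10} has leader 01, so
    received 01 decodes to 00 and received 10 decodes to 11.  In every
    case the decoded information bit equals the first received bit.
    """
    return c1

y1, y2 = 0.3, -0.2                             # example noisy samples (made up)
bits = [detect_bit(y) > 0 for y in (y1, y2)]   # map +delta -> 1, -delta -> 0
print(int(decode_info_bit(*bits)))             # -> 1
```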
For the combined ECC/PAM detection/decoding (the soft-decision decoding), the pair of received PAM symbols corresponding to the ECC codeword bits is viewed together as a 2-dimensional vector. The ML detector then selects the (2-dimensional) symbol $\hat{\mathbf{a}}$ that maximizes $p(\mathbf{y} \mid \mathbf{a})$. Since there are just two binary codewords in the code, there are just two 2-dimensional symbol vectors to consider in the maximization. Using the assumption that the PAM channel noise samples are white and Gaussian, the ML decision rule is:
Choose $\hat{\mathbf{a}} = \begin{bmatrix} -\Delta \\ -\Delta \end{bmatrix}$ if
$$\frac{1}{2\pi\sigma_w^2} \exp\!\left( \frac{-\left[(y_1 - (-\Delta))^2 + (y_2 - (-\Delta))^2\right]}{2\sigma_w^2} \right) > \frac{1}{2\pi\sigma_w^2} \exp\!\left( \frac{-\left[(y_1 - \Delta)^2 + (y_2 - \Delta)^2\right]}{2\sigma_w^2} \right);$$
otherwise, choose $\hat{\mathbf{a}} = \begin{bmatrix} \Delta \\ \Delta \end{bmatrix}$. Since the dependence on the symbols is only in the exponent, the ML decision rule becomes: choose $\hat{\mathbf{a}} = -\boldsymbol{\Delta}$ if $\|\mathbf{y} - (-\boldsymbol{\Delta})\|^2 < \|\mathbf{y} - \boldsymbol{\Delta}\|^2$; otherwise choose $\hat{\mathbf{a}} = \boldsymbol{\Delta}$, where $\boldsymbol{\Delta} = \begin{bmatrix} \Delta \\ \Delta \end{bmatrix}$. The ML decision regions for the soft-decision detection are shown in Figure 2, and a comparison of the two cases is shown in Figure 3.
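A minimal implementation of this minimum-distance rule is sketched below (the function name is mine); expanding the squared norms shows the rule reduces to the sign of the correlation $y_1 + y_2$:

```python
import numpy as np

# Minimum-distance (soft-decision) ML decoding for the (2,1) code.
def soft_decode(y, delta=1.0):
    """Return the ML 2-D symbol estimate for the received pair y."""
    y = np.asarray(y, dtype=float)
    a_plus = np.array([delta, delta])        # codeword 11
    a_minus = -a_plus                        # codeword 00
    d_plus = np.sum((y - a_plus) ** 2)       # squared distance to +Delta
    d_minus = np.sum((y - a_minus) ** 2)     # squared distance to -Delta
    return a_minus if d_minus < d_plus else a_plus

print(soft_decode([0.3, -0.2]))  # -> [1. 1.], since y1 + y2 > 0
```

For the same example samples $(0.3, -0.2)$, the soft decoder maps directly to the codeword point $+\boldsymbol{\Delta}$, whereas per-bit hard decisions would have produced the detected pattern $(1,0)$.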
Let $D_{-\boldsymbol{\Delta}}$ and $D_{\boldsymbol{\Delta}}$ be the ML decision regions for the two respective (2-dimensional) symbols. The probability of detection error, given transmission of symbol $-\boldsymbol{\Delta} = \begin{bmatrix} -\Delta \\ -\Delta \end{bmatrix}$, is then
$$P_{e \mid -\boldsymbol{\Delta}} = \iint_{\mathbf{y} \notin D_{-\boldsymbol{\Delta}}} p(\mathbf{y} \mid -\boldsymbol{\Delta}) \, d\mathbf{y}.$$
Taking advantage of the circular symmetry of the 2-dimensional Gaussian probability density function, this integral is evaluated as
$$P_{e \mid -\boldsymbol{\Delta}} = Q\!\left( \frac{d_{\min}}{2\sigma_w} \right),$$
where $d_{\min} = 2\Delta\sqrt{2}$ is the distance between the two (2-dimensional) symbols. By symmetry, the probability of detection error, conditioned on transmitting the other symbol, is the same. The energy per symbol per dimension is still $E_s = \Delta^2$, so the soft-decision probability of (2-dimensional) symbol error becomes
$$P_e = Q\!\left( \sqrt{2E_s/\sigma_w^2} \right).$$
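A short numerical check of the two expressions, using $Q(x) = \tfrac{1}{2}\operatorname{erfc}(x/\sqrt{2})$ (the SNR values below are arbitrary examples):

```python
from math import erfc, sqrt

# Numerical comparison of the hard- and soft-decision error expressions.
def Q(x):
    return 0.5 * erfc(x / sqrt(2))

for snr_db in (4, 6, 8, 10):        # Es/sigma_w^2 in dB
    snr = 10 ** (snr_db / 10)
    p_hard = Q(sqrt(snr))           # P_b = Q(sqrt(Es/sigma_w^2))
    p_soft = Q(sqrt(2 * snr))       # P_e = Q(sqrt(2*Es/sigma_w^2))
    print(f"{snr_db:2d} dB: hard {p_hard:.2e}, soft {p_soft:.2e}")
```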
Note that this implies that it takes a factor of two (3 dB) less signal-to-noise ratio for soft-decision detection/decoding to achieve the same bit error probability as the hard-decision decoding case.

In summary, by using soft-decision decoding, the minimum distance between (multidimensional) symbols is increased, and hence the probability of ML decoding error is reduced. This is accomplished at the expense of bandwidth: in the example considered, a rate-1/2 code is used, so it takes two channel transmissions to send one information bit.

Problem 1. Suppose a (3,1) repetition code is used with the binary PAM signaling. Find the bit error probability for hard-decision and soft-decision decoding.

Problem 2. Suppose an (n,1) repetition code is used with the binary PAM signaling. Find the bit error probability for hard-decision and soft-decision decoding.

Problem 3. An (8,4) linear binary code (a shortened Hamming code) with $d_{\min} = 4$ is used with binary PAM. Find (approximate) bit error probability expressions for hard-decision and soft-decision decoding. What is the effective coding gain of the soft-decision decoding over the hard-decision decoding?