International Journal of Electronics, Electrical and Computational System IJEECS ISSN 2348-117X Volume 4, Special Issue March 2015
Performance Analysis of Turbo Codes Over AWGN Channel Shalini Bahel and Rajdeep Singh Department of Electronics Technology, Guru Nanak Dev University, Amritsar, India.
Abstract—Turbo codes have been an active research topic in recent years, offering performance very close to the theoretical Shannon limit. Turbo codes have been implemented in 3G mobile systems, deep-space communications and satellite communications. In this paper, the turbo code encoding and decoding principles are introduced. Turbo encoders with different component codes are designed. Turbo codes are simulated over the AWGN channel, and the effects of the number of decoding iterations, the number of memory elements, the type of interleaver and the choice of decoding algorithm on turbo code performance are discussed. Simulation results show that, at the same signal-to-noise ratio, turbo code performance improves with an increase in the number of iterations and memory elements.

Keywords—Turbo codes; interleaver; iterative decoding; log likelihood ratio; bit error rate

I. INTRODUCTION

Shannon introduced the concept of channel capacity in 1948, defining the limit to the amount of data that can be transmitted over any channel [1]. Since then, attaining this maximum theoretical channel capacity has been the goal of many mobile communications researchers. Channel coding adds redundancy to the information message to increase reliability, and the channel coding theorem states that reliable communication can be achieved even at low Eb/N0. Capacity-approaching codes were subsequently introduced, renewing interest in the coding area. Turbo codes were first introduced in 1993 by Berrou, Glavieux, and Thitimajshima, achieving a BER of 10^-5 using a rate-1/2 code over an additive white Gaussian noise (AWGN) channel with BPSK modulation at an Eb/N0 of 0.7 dB [2-3]. This scheme relies on an iterative soft decoding process. Two recursive systematic convolutional (RSC) encoders are concatenated in parallel, separated by an interleaver, to increase the average code weight of the output prior to transmission.
The receiver has two soft-input soft-output (SISO) decoders in serial concatenation. The turbo decoder has two main algorithm families: MAP (maximum a posteriori) and SOVA (soft output Viterbi algorithm). Although SOVA is less complex than MAP, it has poorer performance, so here we implement MAP and its derivatives, Log-MAP and Max-Log-MAP [4]. This paper presents the structure of the turbo encoder with different component codes, the interleaver design and the principle of turbo decoding, and discusses the main decoding algorithms. With the help of MATLAB, simulation results are discussed for the number of iterations, the number of memory elements, the type of interleaver used and the type of decoding algorithm used.
II. TURBO ENCODER

A. Component Codes
The turbo encoder is built by concatenating two RSC encoders with an interleaver in between. For an input sequence u of N bits, the overall turbo encoder produces a codeword by combining this message sequence, which is often called the systematic sequence u, with the two
parity sequences p1 and p2, which are produced by the two identical component encoders. The systematic outputs from both RSC encoders need not both be transmitted, because they are identical to each other (although ordered differently) and to the turbo code input [5-6]. Thus the overall code rate becomes R = 1/3. Figure 1 shows the general encoding structure for the classical rate-1/3 turbo encoder. A turbo encoder is generally defined by its generator polynomials in the form (1, g1/g2, g1/g2). A (1, 5/7, 5/7) encoder simply means that g1 = [101] and g2 = [111] in binary, where a '1' denotes a connection between the corresponding memory element and the XOR operator. The encoding of a turbo code is described by the state diagram for the given generator polynomials. The contents of the memory define the state of the encoder at a particular time instant; if the overall encoder memory is denoted by m, then 2^m is the total number of states. The output of the encoder and the next state depend on the previous state and the current input.
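As an illustration of the (1, 5/7, 5/7) definition above, the following Python sketch implements one rate-1/2 RSC component encoder with feedback polynomial g2 = 111 and feedforward polynomial g1 = 101. The function name and the shift-register convention are our own; the paper specifies only the polynomials.

```python
# Sketch of one RSC component encoder of the (1, 5/7, 5/7) turbo code.
# g2 = (1, 1, 1) is the feedback polynomial, g1 = (1, 0, 1) the feedforward one;
# state[0] holds the newest register bit (an assumed convention).

def rsc_encode(bits, g1=(1, 0, 1), g2=(1, 1, 1)):
    """Return (systematic, parity) sequences for one RSC encoder."""
    m = len(g2) - 1          # number of memory elements (here m = 2, so 4 states)
    state = [0] * m          # shift-register contents
    systematic, parity = [], []
    for u in bits:
        # feedback bit: input XORed with the g2 taps on the register
        fb = u
        for i in range(m):
            fb ^= g2[i + 1] & state[i]
        # parity output: g1 taps on (feedback bit, register)
        p = g1[0] & fb
        for i in range(m):
            p ^= g1[i + 1] & state[i]
        systematic.append(u)
        parity.append(p)
        state = [fb] + state[:-1]   # shift the feedback bit into the register
    return systematic, parity

sys_bits, par_bits = rsc_encode([1, 0, 1, 1])
```

A full rate-1/3 turbo encoder would run a second copy of this encoder on the interleaved input and keep only its parity stream.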
Figure 1: Block diagram of a general rate-1/3 turbo encoder.

The state diagram is a graph whose nodes represent the encoder states and whose arrows represent the state transitions [8]. Figure 2 shows the state diagram of the [7, 5]8 RSC encoder. The state diagram can also be expanded into a trellis diagram, in which all state transitions are shown at each time step so as to retain the time dimension; the trellis diagram is very useful for describing the behavior of the decoder. Figure 3 shows the trellis diagram of the [7, 5]8 RSC encoder. The solid lines in the diagram represent the state transitions when the input is '1', whereas the dotted lines represent the state transitions when the input is '0'.
Figure 2: State Diagram of [7, 5]8 RSC Encoder.
Figure 3: Trellis representation of [7, 5]8 RSC Encoder.
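The edges drawn in the state and trellis diagrams of figures 2 and 3 can also be enumerated programmatically. The sketch below does this for the [7, 5]8 RSC encoder (m = 2, four states); the state encoding (bit 1 = newest register bit) is an assumed convention.

```python
# Enumerate the state transitions of the [7, 5]_8 RSC encoder: for each of the
# four states and each input bit u, compute the feedback bit (g2 = 111 taps),
# the parity output (g1 = 101 taps) and the next state.

def transitions():
    edges = []
    for state in range(4):            # state = (s1 s0), s1 = newest register bit
        s1, s0 = (state >> 1) & 1, state & 1
        for u in (0, 1):
            fb = u ^ s1 ^ s0          # feedback: g2 = 111 taps input and both bits
            p = fb ^ s0               # parity:  g1 = 101 taps fb and the oldest bit
            nxt = (fb << 1) | s1      # shift fb into the register
            edges.append((state, u, nxt, (u, p)))
    return edges

for s, u, nxt, out in transitions():
    print(f"state {s:02b} --u={u}/out={out}--> state {nxt:02b}")
```

Each state has exactly two outgoing edges (one per input bit), which is what the trellis of figure 3 depicts over time.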
B. The Interleaver

Interleaving is the rearranging of the order in which the data is read. The bits of the codeword are scrambled by the interleaver using a scheme that is also known to the decoder; long bursts of errors introduced in the transmitted data stream are thus broken up by the de-interleaver at the receiver. The main purpose of the interleaver is to increase the weight of the output codeword: if the bits of a low-weight codeword are rearranged, a high-weight codeword can be produced, so the likelihood of low-weight turbo codewords is reduced. The interleaver size of a turbo code affects its bit error rate performance, which improves with increasing interleaver size at the cost of higher complexity. Some interleaver designs are as follows: the random interleaver, general block interleaver, matrix interleaver and helical scan interleaver [7-9].

III. DECODING PRINCIPLE OF TURBO CODES

A. Decoding Structure of Turbo Codes

In a turbo decoder, the outputs of one decoder are fed to the inputs of the other decoder in an iterative fashion, so a hard-output decoder is not usable. The turbo decoder therefore uses soft-input/soft-output (SISO) decoders. The rate R = 1/3 turbo decoder uses two SISO decoders that work iteratively, each sharing with the other the a priori information it produces; this shared information is also called extrinsic information [10]. The decoding scheme of turbo codes is shown in figure 4.
Figure 4: The decoding scheme of turbo codes.

The received codeword has three parts: the original systematic bits (u) and the two parity streams (p1, p2). At the start there is no previous extrinsic information, so it is taken as zero; therefore only the systematic bits and the parity from the first component encoder are given to the first component decoder. The output of the first component decoder is a posteriori information, from which the extrinsic information is extracted by subtracting the systematic input and the a priori input. This output is passed through the interleaver before being applied to the second component decoder, whose other two inputs are the parity from the second component encoder and the interleaved version of the systematic bits. The extrinsic information from the second component decoder is extracted in the same way and provided to the first component decoder after passing through the de-interleaver, to match it with the original data order. The turbo decoder repeats this process for a fixed number of iterations, after which hard decisions are made on the de-interleaved output of the second component decoder [8].

B. Log Likelihood Ratio (LLR)

The log likelihood ratio (LLR) is the soft representation of the information bits. For a data bit uk, the LLR L(uk) is defined as the logarithm of the ratio of the probability that uk = +1 to the probability that uk = -1 [10-11], i.e. the ratio between the a priori probabilities:

L(uk) = ln [ P(uk = +1) / P(uk = -1) ]   (1)
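A small numeric illustration of the LLR in equation (1): the sign of L(uk) gives the hard decision and its magnitude gives the reliability. The function name is ours.

```python
# LLR of a binary symbol: L(u_k) = ln( P(u_k=+1) / P(u_k=-1) ),
# with P(u_k=-1) = 1 - P(u_k=+1).
import math

def llr(p_plus):
    return math.log(p_plus / (1.0 - p_plus))

print(llr(0.5))   # no prior preference: LLR = 0
print(llr(0.9))   # positive: u_k = +1 more likely, large magnitude = confident
print(llr(0.1))   # negative: u_k = -1 more likely
```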
Rather than this unconditional LLR, the conditional LLR L(uk | y) is what is commonly used in decoding. It is based on the ratio of a posteriori probabilities, where y is the received codeword:

L(uk | y) = ln [ P(uk = +1 | y) / P(uk = -1 | y) ]   (2)

IV. DECODING ALGORITHMS OF TURBO CODES

There are four main decoding algorithms: MAP, Log-MAP, Max-Log-MAP and SOVA. SOVA is 0.7 dB inferior to Log-MAP at a BER of 10^-4, and Max-Log-MAP lies in between [4]. Log-MAP and Max-Log-MAP are simplified versions of the MAP algorithm that operate in the logarithmic domain, turning many multiplications into additions, a desirable property because of the reduced computational complexity [12]. Max-Log-MAP is the simplest algorithm but is inferior to Log-MAP, as it approximates the log-likelihood function with a maximum function after some mathematical manipulation.

A. Maximum A Posteriori (MAP) Algorithm

The BCJR or MAP algorithm was originally presented coincidentally in both [13] and [14]; a modified version of it is used by each of the component decoders of the MAP turbo decoder. Consider N data bits u = (u1, u2, …, uk, …, uN) used by the RSC encoder to produce a state sequence s0→k = (s0, s1, …, sk) (where s0→k represents the state sequence from time 0 to k). The LLR computed by the MAP algorithm is
L(uk | y) = ln [ Σ(s',s):uk=+1 αk-1(s') γk(s', s) βk(s) ] − ln [ Σ(s',s):uk=-1 αk-1(s') γk(s', s) βk(s) ]   (3)

where αk-1(s') is the forward estimate of the probability of state s' at time slot k-1, γk(s', s) is the branch metric probability, and βk(s) is the backward estimate of the probability of state s at time slot k. The forward state metric is computed recursively as

αk(s) = Σs' αk-1(s') γk(s', s),   with α0(0) = 1 and α0(s) = 0 for s ≠ 0   (4)

and the backward state metric as

βk-1(s') = Σs βk(s) γk(s', s),   with βN(0) = 1 and βN(s) = 0 for s ≠ 0.   (5)

The branch metric probability is expressed as

γk(s', s) = P(yk | xk) · P(xk | sk = s, sk-1 = s') · P(sk = s | sk-1 = s')   (6)

where P(yk | xk) is the channel transition probability, P(xk | sk = s, sk-1 = s') is the probability of the original bit, and P(sk = s | sk-1 = s') is the state transition probability, equal to 1/2 for equiprobable binary data [11, 15-16].

B. Max-Log-MAP Algorithm

The MAP algorithm is considered too complex because of its large number of additions and multiplications and the resulting delay. Max-Log-MAP modifies the MAP algorithm [17]. The system can be simplified
by employing the logarithms of the probabilities defined in equations (4), (5) and (6), thus transforming multiplications into summations. This gives:

Ak(s) = ln αk(s)   (7)
Bk(s) = ln βk(s)   (8)
Γk(s', s) = ln γk(s', s)   (9)

Using the simple approximation

ln(e^a + e^b + e^c + …) ≈ max(a, b, c, …),   (10)

the forward state metric is expressed as

Ak(s) = max s' [Ak-1(s') + Γk(s', s)],   with A0(0) = 0 and A0(s) = −∞ for s ≠ 0,   (11)

and the backward state metric as

Bk-1(s') = max s [Bk(s) + Γk(s', s)],   with BN(0) = 0 and BN(s) = −∞ for s ≠ 0.   (12)

The LLR value can then be expressed as

L(uk | y) = max (s',s):uk=+1 [Γk(s', s) + Ak-1(s') + Bk(s)] − max (s',s):uk=-1 [Γk(s', s) + Ak-1(s') + Bk(s)]   (13)

Ak(s) and Bk(s) are thus approximated in Max-Log-MAP by a maximization operation, and the Max-Log-MAP performance is consequently worse than that of MAP [15].

C. Log-MAP Algorithm

Max-Log-MAP is sub-optimal because it uses the approximation (10). Log-MAP uses the Jacobian logarithm to recover the lost accuracy, at the cost of an increase in complexity [4, 15]:

ln(e^a + e^b) = max(a, b) + ln(1 + e^−|a−b|) = max(a, b) + fc(|a − b|)   (14)

where fc(|a − b|) is a correction function. In Log-MAP, Ak(s) and Bk(s) are computed with the maximization operation plus this additional correction term, so the performance is very close to that of the MAP algorithm. In terms of BER performance, the ordering of the algorithms is MAP > Log-MAP > Max-Log-MAP [4, 15-16].

V. SIMULATION RESULTS

Simulations are performed over the additive white Gaussian noise (AWGN) channel using binary phase shift keying (BPSK) modulation. The simulation results are presented as graphs showing the BER performance at different values of signal-to-noise ratio (SNR). The BER performance of the rate-1/3 turbo code is analyzed using the MAP algorithm. Unless stated otherwise, all simulations use four decoding iterations and N = 10^6 message bits.
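The difference between Log-MAP and Max-Log-MAP is exactly the correction term fc of equation (14), which can be checked numerically. A small sketch (the function name max_star is ours):

```python
# Jacobian logarithm: max*(a, b) = max(a, b) + ln(1 + e^{-|a-b|}) computes
# ln(e^a + e^b) exactly (Log-MAP); Max-Log-MAP keeps only the max(a, b) term.
import math

def max_star(a, b):
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

a, b = 2.0, 1.5
exact = math.log(math.exp(a) + math.exp(b))
print(exact - max(a, b))      # Max-Log-MAP error: at most ln 2, largest when a = b
print(exact - max_star(a, b)) # Log-MAP error: zero for two terms
```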
The turbo code performance is analyzed on the basis of four different factors, as follows.

A. Number of Decoding Iterations

Turbo decoding algorithms are based on iterative decoding. The BER performance improves as the number of decoding iterations increases, but only up to a limit, after which the BER curves saturate. The BER performance of the rate R = 1/3 (1, 5/7, 5/7) turbo code using the random interleaver is shown in figure 5. The decoding algorithm used is MAP, and four decoding iterations are considered.
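The BPSK-over-AWGN measurement loop underlying all of these BER curves can be sketched as follows. The sketch is uncoded for brevity (the paper's curves wrap the same channel loop around the rate-1/3 turbo encoder/decoder), and the function name and parameters are illustrative.

```python
# Monte Carlo BER of BPSK over an AWGN channel at a given Eb/N0 (in dB).
import math
import random

def ber_bpsk_awgn(eb_n0_db, n_bits=200_000, seed=1):
    rng = random.Random(seed)
    eb_n0 = 10 ** (eb_n0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * eb_n0))   # noise std for unit-energy BPSK
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        tx = 1.0 if bit else -1.0            # BPSK mapping: 1 -> +1, 0 -> -1
        rx = tx + rng.gauss(0.0, sigma)      # AWGN channel
        errors += (rx > 0) != bit            # hard decision at the receiver
    return errors / n_bits

# theory: uncoded BER = Q(sqrt(2 Eb/N0)), about 0.0786 at Eb/N0 = 0 dB
print(ber_bpsk_awgn(0.0))
```

Sweeping eb_n0_db over a range and plotting BER on a log scale reproduces the waterfall-style curves shown in the figures.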
Figure 5: Turbo code (1, 5/7, 5/7) performance using the random interleaver over the AWGN channel.

B. The Choice of Decoding Algorithm

Three algorithms are implemented in this paper, and the BER performance of MAP is compared with that of Log-MAP and Max-Log-MAP. Log-MAP has essentially the same performance as MAP but lower complexity, because all calculations are done on a logarithmic scale. Max-Log-MAP has the least complexity of the three but the worst performance: the simulation results show a 0.2 dB degradation in the performance of Max-Log-MAP relative to MAP and Log-MAP. The performance comparison of all three algorithms for the rate-1/3 turbo code (1, 5/7, 5/7) after the 4th decoding iteration, using the random interleaver over the AWGN channel, is shown in figure 6.
Figure 6: Performance comparison of different algorithms for the (1, 5/7, 5/7) turbo code after the 4th iteration.

C. Number of Memory Elements or Constraint Length

The constraint length (K) is a very important factor in designing turbo codes; it is the maximum number of output bits that can be affected by a single input bit. In general, if m is the number of memory elements in one component encoder, then the constraint length is K = m + 1. The number of memory elements in the component encoder therefore plays a very important role in the performance of turbo codes, which improves as the number of memory elements increases. Figure 7 shows the turbo code performance with the number of memory elements ranging from 1 to 4, using the random interleaver.
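The relation K = m + 1 above, together with the 2^m state count from section II, gives the trellis sizes of the encoders compared in figure 7:

```python
# Constraint length and state count for the memory sizes swept in figure 7.
for m in range(1, 5):
    print(f"m = {m}: constraint length K = {m + 1}, trellis states = {2 ** m}")
```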
Figure 7: Turbo code performance using the random interleaver over the AWGN channel for different numbers of memory elements.

D. Interleaver Design

Here the performance of four different interleavers is compared using different numbers of bits: the random interleaver, general block interleaver, matrix interleaver and helical scan interleaver. The performance of turbo codes improves as the size of the interleaver increases. The performance of the turbo code (1, 5/7, 5/7) with the different interleavers after the 4th decoding iteration is shown in figures 8 and 9 for N = 4096 and N = 65536 bits, respectively. It is clear from the simulation results that the general block interleaver performs best when the message size is small, whereas the random interleaver performs best when the message size is very large.
Figure 8: Performance Comparison of Turbo code (1, 5/7, 5/7) using different interleavers after 4th iteration for N=4096 bits.
Figure 9: Performance Comparison of Turbo code (1, 5/7, 5/7) using different interleavers after 4th iteration for N=65536 bits.
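Two of the interleaver designs compared in this section can be sketched in a few lines: a seeded random interleaver (a pseudo-random permutation the decoder can regenerate from the same seed) and a matrix/block interleaver (write row-wise, read column-wise). Function names are ours; any permutation is inverted by the same de-interleaver.

```python
# Interleaving as an index permutation: the interleaver applies perm, the
# de-interleaver inverts it.
import random

def random_interleaver(n, seed=0):
    perm = list(range(n))
    random.Random(seed).shuffle(perm)   # seeded, so reproducible at the decoder
    return perm

def matrix_interleaver(rows, cols):
    # write row by row, read column by column
    return [r * cols + c for c in range(cols) for r in range(rows)]

def interleave(bits, perm):
    return [bits[i] for i in perm]

def deinterleave(bits, perm):
    out = [0] * len(bits)
    for k, i in enumerate(perm):
        out[i] = bits[k]
    return out

bits = [1, 0, 0, 1, 1, 1]
perm = matrix_interleaver(2, 3)                          # a 2x3 block
assert deinterleave(interleave(bits, perm), perm) == bits  # round trip
```

A burst of adjacent channel errors in the interleaved stream maps back to well-separated positions after de-interleaving, which is what breaks up error bursts for the component decoders.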
VI. CONCLUSION

Turbo codes are high-performance error-correcting codes whose performance approaches the Shannon limit at low SNR. Their performance depends on a number of parameters, including the memory size or constraint length, the generator polynomials, the interleaver type and size, the number of decoding iterations and the type of decoding algorithm used. Turbo code performance improves with an increase in the number of memory elements, and increasing the number of decoding iterations also improves performance. The BER performance of turbo codes improves as the interleaver size increases; the random interleaver performs better for large amounts of input data, whereas the general block interleaver performs better for small amounts. The MAP algorithm has the best performance but the highest complexity, Max-Log-MAP has the worst performance but the least complexity, and Log-MAP is very close in performance to MAP with intermediate complexity.

REFERENCES
[1] Shannon C. E., "A Mathematical Theory of Communication", Bell Syst. Tech. Journal, Vol. 27, pp. 379-423 (July) and pp. 623-656 (October), 1948.
[2] Berrou C., Glavieux A. and Thitimajshima P., "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes (1)", Proc. IEEE Int. Conf. on Communications, pp. 1064-1070, May 1993.
[3] Berrou C. and Glavieux A., "Near Optimum Error Correcting Coding and Decoding: Turbo-Codes", IEEE Transactions on Communications, Vol. 44, No. 10, pp. 1261-1271, 1996.
[4] Robertson P., Villebrun E. and Hoeher P., "A Comparison of Optimal and Sub-optimal MAP Decoding Algorithms Operating in the Log Domain", Proc. ICC '95, June 1995.
[5] Battail G., Berrou C. and Glavieux A., "Pseudo-Random Recursive Convolutional Coding for Near-Capacity Performance", Proc. GLOBECOM 1993, pp. 23-27, Dec. 1993.
[6] Divsalar D. and Pollara F., "On the Design of Turbo Codes", The Telecommunications and Data Acquisition Progress Report, Jet Propulsion Laboratory, CIT, Vol. 42-123, pp. 99-123, Nov. 1995.
[7] Käsper E., "Turbo Codes", www.tkk.fi/~pat/coding/essays/turbo.pdf, cited 15 June 2012.
[8] Burr A., "Turbo-Codes: The Ultimate Error Control Codes?", Electronics and Communication Engineering Journal, Vol. 13, Issue 4, pp. 155-165, 2001.
[9] Achiba R., Mortazavi M. and Fizell W., "Turbo Code Performance and Design Trade-offs", IEEE Proceedings on Communications, 2000.
[10] Hagenauer J., Offer E. and Papke L., "Iterative Decoding of Binary Block and Convolutional Codes", IEEE Trans. Inf. Theory, Vol. 42, No. 2, March 1996.
[11] Abrantes S. A., "From BCJR to Turbo Decoding: MAP Algorithms Made Easier", online paper, April 2004.
[12] Hebbes L., Malyan R. R. and Ball P., "Computational Complexities and the Relative Performance of Turbo Codes", IEEE Proceedings on Communications, 2001.
[13] Bahl L. R., Cocke J., Jelinek F. and Raviv J., "Optimal Decoding of Linear Codes for Minimising Symbol Error Rate", IEEE Trans. Info. Theory, Vol. IT-20, pp. 284-287, March 1974.
[14] McAdam P. L., Welch L. and Weber C., "M.A.P. Bit Decoding of Convolutional Codes", Abstracts of Papers, Int. Symp. Info. Theory, p. 91, January 1972.
[15] Sadjadpour H. R., "Maximum A Posteriori Decoding Algorithms for Turbo Codes", Proceedings of SPIE, Vol. 4045, 2000.
[16] Benedetto S., Montorsi G., Divsalar D. and Pollara F., "Soft-Output Decoding Algorithms in Iterative Decoding of Turbo Codes", The Telecommunications and Data Acquisition Progress Report, Jet Propulsion Laboratory, CIT, Vol. 42-124, pp. 63-87, Feb. 1996.
[17] Erfanian J. A., Pasupathy S. and Gulak G., "Reduced Complexity Symbol Detectors with Parallel Structures for ISI Channels", IEEE Trans. Comm., Vol. 42, pp. 1661-1671, February 1994.