Friday, April 12, 2013

Efficient implementation of a Viterbi decoder

Since there is already a lot of material available on the web explaining the Viterbi decoder, I will not waste time explaining it again. The Viterbi algorithm consists mainly of three units:

  1. Branch Metric Unit (BMU)
  2. Path Metric Unit (PMU)
  3. Traceback Unit (TU)
The most time-consuming operation is the Add-Compare-Select (ACS) operation, which is part of the PMU. Traceback is comparatively less time-consuming.
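To make the ACS operation concrete, here is a minimal C sketch of one ACS step for a 4-state (constraint length K = 3) trellis. This is my illustration, not the uploaded code; it assumes the common shift-register convention where the two predecessors of state s are (s >> 1) and (s >> 1) | (NSTATES >> 1).

```c
#include <stdint.h>

#define NSTATES 4  /* example: constraint length K = 3 */

/* One Add-Compare-Select step: for each next state, add the branch
 * metric to each predecessor's path metric, compare the two sums,
 * and select the survivor. decision[s] records which predecessor
 * won, which the traceback unit later follows. */
void acs_step(const uint32_t pm_in[NSTATES],
              const uint32_t bm[NSTATES][2], /* branch metric per (next state, predecessor) */
              uint32_t pm_out[NSTATES],
              uint8_t decision[NSTATES])
{
    for (int s = 0; s < NSTATES; s++) {
        /* Two predecessors of state s under the assumed trellis. */
        int p0 = (s >> 1);
        int p1 = (s >> 1) | (NSTATES >> 1);
        uint32_t m0 = pm_in[p0] + bm[s][0]; /* add */
        uint32_t m1 = pm_in[p1] + bm[s][1];
        if (m0 <= m1) {                     /* compare */
            pm_out[s] = m0;                 /* select */
            decision[s] = 0;
        } else {
            pm_out[s] = m1;
            decision[s] = 1;
        }
    }
}
```

The add, compare, and select for each state are independent, which is exactly why DSPs with parallel add/subtract or dedicated ACS instructions execute this loop so efficiently.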

I wrote a small C program for a hard-decision Viterbi decoder for a rate-1/2 convolutional code. The code is structured so that it can be ported easily to DSP processors that have parallel instruction support (such as performing an add and a subtract in the same cycle) or built-in instructions for carrying out the ACS and traceback operations.

For a soft-decision decoder, further optimization is possible in the calculation of the branch metrics, which I will discuss in the next post. To summarize, this code has an optimized implementation of the PMU and TU parts of the Viterbi decoder.
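The traceback unit mentioned above can also be sketched briefly. Again this is my illustration under the same assumed 4-state trellis (next_state = ((prev << 1) | input_bit) & (NSTATES - 1)), not the uploaded code; under that convention the input bit of the transition into state s is simply s & 1.

```c
#include <stdint.h>

#define NSTATES 4  /* example: constraint length K = 3 */

/* Traceback: starting from the chosen final state, follow the stored
 * ACS decision bits backwards to recover the decoded sequence.
 * decisions[t][s] says which of the two predecessors of state s
 * survived at time t. */
void traceback(const uint8_t (*decisions)[NSTATES], int nsteps,
               int final_state, uint8_t bits_out[])
{
    int s = final_state;
    for (int t = nsteps - 1; t >= 0; t--) {
        bits_out[t] = (uint8_t)(s & 1);   /* input bit of the transition into s */
        s = (decisions[t][s] == 0)
                ? (s >> 1)                        /* predecessor p0 */
                : ((s >> 1) | (NSTATES >> 1));    /* predecessor p1 */
    }
}
```

Note that traceback is a simple chain of table lookups with no arithmetic, which is why it costs so much less than the ACS loop.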

The C code is uploaded here.

Friday, August 28, 2009

Binary Step-size LMS algorithm

Any discussion of adaptive algorithms is incomplete without mention of the world's most popular and simplest one, the Least Mean Square (LMS) algorithm. It was introduced by Widrow and his first Ph.D. student, Ted Hoff.

I was trying out modifications of the LMS algorithm so that it converges faster and the mean square error is also smaller. One of the drawbacks of LMS is that it has only one controllable parameter, the step size "mu", whose value is the most critical design choice with respect to convergence. So I wanted to implement LMS in such a way that the step size adapts to the error occurring in each iteration.
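For reference, one iteration of the standard LMS update is w(n+1) = w(n) + mu * e(n) * x(n), with e(n) = d(n) - w'(n) x(n). A minimal C sketch (my illustration, not the uploaded MATLAB files; NTAPS and the variable names are my own):

```c
#include <stddef.h>

#define NTAPS 4  /* example filter length */

/* One LMS iteration: filter the input regressor x through the weight
 * vector w, compute the a-priori error against the desired sample d,
 * and update the weights with fixed step size mu. Returns e(n). */
double lms_step(double w[NTAPS], const double x[NTAPS],
                double d, double mu)
{
    double y = 0.0;
    for (size_t i = 0; i < NTAPS; i++)
        y += w[i] * x[i];                 /* y(n) = w'(n) x(n) */
    double e = d - y;                     /* e(n) = d(n) - y(n) */
    for (size_t i = 0; i < NTAPS; i++)
        w[i] += mu * e * x[i];            /* w(n+1) = w(n) + mu e(n) x(n) */
    return e;
}
```

The single fixed mu in the last line is exactly the parameter the binary step-size idea below replaces.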

What I came up with is the Binary Step-size LMS (BS-LMS) algorithm. Here, we have two step sizes calculated from two values, delta and deviation. When the error increases from its previous value, the step size is (delta + deviation); when the error decreases from its previous value, the step size is (delta - deviation). I implemented an adaptive equalizer using the BS-LMS algorithm and found that it converges faster than the LMS algorithm.
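The step-size rule described above can be sketched as a small helper (my illustration with hypothetical names, not the uploaded MATLAB code; it compares error magnitudes, since adaptive-filter errors are signed):

```c
#include <math.h>

/* Binary step-size selection: if the error magnitude grew relative to
 * the previous iteration, take the larger step (delta + deviation) to
 * adapt harder; if it shrank, take the smaller step (delta - deviation)
 * to refine. delta and deviation are the two design parameters,
 * with 0 < deviation < delta. */
double bs_lms_mu(double err, double prev_err,
                 double delta, double deviation)
{
    return (fabs(err) > fabs(prev_err)) ? (delta + deviation)
                                        : (delta - deviation);
}
```

The returned value would simply replace the fixed mu in the ordinary LMS weight update.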

Moreover, consider the Normalized LMS (NLMS) algorithm, where the step size is (delta / energy of the input signal); NLMS converges faster than LMS. Combining the binary step-size concept with NLMS, I found that the convergence rates of BS-NLMS and NLMS are nearly equal; however, the mean square error from BS-NLMS is smaller than that from NLMS.
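For completeness, here is the NLMS normalization as a C sketch (again my illustration, not the uploaded MATLAB files; the small eps guarding the division is a standard practical addition, not something from the post):

```c
#include <stddef.h>

#define NTAPS 4  /* example filter length */

/* One NLMS iteration: identical to LMS except that the step size is
 * normalized by the instantaneous input energy,
 * mu(n) = delta / (eps + x'x). Returns the a-priori error e(n). */
double nlms_step(double w[NTAPS], const double x[NTAPS],
                 double d, double delta)
{
    const double eps = 1e-8;  /* guards against a near-zero regressor */
    double y = 0.0, energy = 0.0;
    for (size_t i = 0; i < NTAPS; i++) {
        y += w[i] * x[i];
        energy += x[i] * x[i];
    }
    double e = d - y;
    double mu = delta / (eps + energy);
    for (size_t i = 0; i < NTAPS; i++)
        w[i] += mu * e * x[i];
    return e;
}
```

In the BS-NLMS variant, delta itself would be switched between the two binary values depending on whether the error grew or shrank.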

In the above figure it may be noted that the mean square error for the binary step-size algorithms is smaller than that of their single-step-size counterparts: the MSE of BS-LMS is smaller than that of LMS, and that of BS-NLMS is smaller than that of NLMS. This is advantageous when we need to maintain high precision in our equalizers.

I have uploaded the required MATLAB files here.

Everyone is invited to check my findings and post comments for improvement.

Tuesday, August 25, 2009

One of Netbeans Tutorials

Netbeans IDE is an open-source environment for developing Java applications. I put up a tutorial on the Netbeans wiki about making a simple plugin there. Have a look at it.

Here is the link to the plugin.