Friday, August 28, 2009

Binary Step-size LMS algorithm

Any discussion of adaptive algorithms is incomplete without a mention of the world's most popular and simplest one, the Least Mean Square (LMS) algorithm. It was introduced by Bernard Widrow and his first Ph.D. student, Ted Hoff.

I was trying out modifications of the LMS algorithm so that it converges faster and the mean square error is also smaller. One of the drawbacks of LMS is that it has only one controllable parameter, the step size "mu", whose value is the most critical design choice with respect to convergence. So, I wanted to implement LMS in such a way that the step size adapts to the error occurring in each iteration.
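For reference, the standard LMS weight update (the Widrow-Hoff rule) is

    w(n+1) = w(n) + mu * e(n) * x(n),    where e(n) = d(n) - w(n)' * x(n),

with x(n) the input vector, d(n) the desired response, and mu the fixed step size that controls both the convergence speed and the steady-state error.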

What I came up with is the Binary Step-size LMS (BS-LMS) algorithm. Here, two step sizes are derived from two values, delta and deviation: when the error increases from its previous value, the step size is (delta + deviation), and when the error decreases from its previous value, the step size is (delta - deviation). I implemented an adaptive equalizer using BS-LMS and found that it converges faster than the LMS algorithm.
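For illustration, here is a minimal sketch of the BS-LMS update in Python/NumPy (the actual files are in MATLAB). Comparing |e(n)| against the previous iteration's |e(n-1)|, the initial value of the previous error, and the function signature are assumptions of this sketch:

import numpy as np

def bs_lms(x, d, num_taps, delta=0.05, deviation=0.02):
    # Binary Step-size LMS: the step size switches between
    # (delta + deviation) and (delta - deviation) depending on whether
    # the error grew or shrank relative to the previous iteration.
    w = np.zeros(num_taps)            # adaptive filter weights
    e = np.zeros(len(x))              # error signal e(n) = d(n) - y(n)
    prev_abs_err = np.inf             # previous |e|; initial value is an assumption
    for n in range(num_taps - 1, len(x)):
        x_vec = x[n - num_taps + 1 : n + 1][::-1]   # newest sample first
        e[n] = d[n] - w @ x_vec                     # instantaneous error
        if abs(e[n]) > prev_abs_err:
            mu = delta + deviation    # error increased: take a larger step
        else:
            mu = delta - deviation    # error decreased: take a smaller step
        w = w + mu * e[n] * x_vec     # standard LMS weight update
        prev_abs_err = abs(e[n])
    return w, e

Note that delta must be larger than deviation so that both step sizes stay positive.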

Moreover, consider the NLMS (Normalized LMS) algorithm, where the step size at each iteration is delta divided by the energy of the input signal; NLMS converges faster than LMS. Combining the binary step-size concept with NLMS, I found that the convergence rates of BS-NLMS and NLMS are nearly equal; however, the mean square error resulting from BS-NLMS is smaller than that from NLMS.
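In the same spirit, a sketch of BS-NLMS: the binary step size is divided by the energy of the current input vector at each iteration. The small eps regularizer is an addition of this sketch, not part of the original description:

import numpy as np

def bs_nlms(x, d, num_taps, delta=0.5, deviation=0.2, eps=1e-8):
    # BS-NLMS: binary switching as in BS-LMS, with the step size
    # normalized by the energy of the current input vector (NLMS-style).
    w = np.zeros(num_taps)
    e = np.zeros(len(x))
    prev_abs_err = np.inf
    for n in range(num_taps - 1, len(x)):
        x_vec = x[n - num_taps + 1 : n + 1][::-1]
        e[n] = d[n] - w @ x_vec
        base = delta + deviation if abs(e[n]) > prev_abs_err else delta - deviation
        mu = base / (eps + x_vec @ x_vec)   # normalize by input energy
        w = w + mu * e[n] * x_vec
        prev_abs_err = abs(e[n])
    return w, e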

[Figure: mean square error curves for LMS, BS-LMS, NLMS, and BS-NLMS]
In the above figure, it may be noted that the mean square error for the binary step-size algorithms is lower than that of their single step-size counterparts: the MSE from BS-LMS is smaller than that from LMS, and the MSE from BS-NLMS is smaller than that from NLMS. This is advantageous when high precision must be maintained in the equalizer.

I have uploaded the required MATLAB files here.

Everyone is invited to check my findings and post comments for improvement.
