Binary Step Size Based LMS Algorithms (BS-LMS) (Scripts) Publisher's description
from Samir Mishra
I was trying out modifications of the LMS algorithm so that it will converge faster and the mean square error will also be smaller.
One known drawback of LMS is that it has only one controllable parameter, the step size "mu", whose value is the most critical design choice with respect to convergence. So I wanted to implement LMS in such a way that the step size adapts to the error at each iteration.
What I came up with is the Binary Step-size LMS (BS-LMS) algorithm. Here, two step sizes are derived from two values, delta and deviation. When the error increases from its previous value, the step size is (delta + deviation); when the error decreases from its previous value, the step size is (delta - deviation). I implemented an adaptive equalizer using the BS-LMS algorithm and found that it converges faster than the standard LMS algorithm.
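The binary step-size rule above can be sketched as follows. This is an illustrative Python sketch, not the author's MATLAB scripts: the function name, signal setup, and filter length are assumptions; only delta, deviation, and the switch on the error come from the description.

```python
def bs_lms(x, d, taps, delta, deviation):
    """Adapt a FIR filter so that its output tracks the desired signal d.

    Per the description: the step size switches to (delta + deviation)
    when the squared error grows relative to the previous iteration,
    and to (delta - deviation) when it shrinks.
    """
    w = [0.0] * taps                  # filter weights, zero-initialized
    prev_err_sq = float("inf")        # previous squared error
    errors = []
    for n in range(taps - 1, len(x)):
        window = x[n - taps + 1:n + 1][::-1]   # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = d[n] - y
        # binary step-size selection
        mu = delta + deviation if e * e > prev_err_sq else delta - deviation
        # standard LMS weight update with the selected step size
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
        prev_err_sq = e * e
        errors.append(e)
    return w, errors
```

For example, identifying a 2-tap system d[n] = 0.6 x[n] + 0.3 x[n-1] with delta = 0.05 and deviation = 0.02 drives the weights toward [0.6, 0.3] and the error toward zero.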
Moreover, consider the NLMS (Normalized LMS) algorithm, where the step size is always (delta / energy of the input signal); NLMS converges faster than LMS. Combining the binary step-size concept with NLMS, I found that the convergence rates of BS-NLMS and NLMS are nearly equal; however, the mean square error from BS-NLMS is smaller than that from NLMS.
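The BS-NLMS combination can be sketched in the same way: the binary choice between (delta + deviation) and (delta - deviation) is divided by the instantaneous energy of the input window, as in NLMS. Again a hedged illustration, not the published code; the eps guard against a zero-energy window and all names beyond delta and deviation are assumptions.

```python
def bs_nlms(x, d, taps, delta, deviation, eps=1e-8):
    """Binary step-size NLMS: binary rule on top of energy normalization."""
    w = [0.0] * taps
    prev_err_sq = float("inf")
    errors = []
    for n in range(taps - 1, len(x)):
        window = x[n - taps + 1:n + 1][::-1]   # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = d[n] - y
        # binary selection, as in BS-LMS
        base = delta + deviation if e * e > prev_err_sq else delta - deviation
        # NLMS normalization by the input window energy
        energy = sum(xi * xi for xi in window)
        mu = base / (energy + eps)
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
        prev_err_sq = e * e
        errors.append(e)
    return w, errors
```

With normalized steps, delta is chosen on a 0-to-2 scale (e.g. delta = 0.5, deviation = 0.2) rather than as a raw LMS step size.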
From the figures it may be noted that the mean square error for the binary step-size based algorithms is smaller than for their single step-size counterparts: the MSE from BS-LMS is smaller than that from LMS, and that from BS-NLMS is smaller than that from NLMS. This is advantageous when high precision must be maintained in the equalizer.
System Requirements: MATLAB 7 (R14)
Program Release Status: New Release
Program Install Support: Install and Uninstall