# The Royal Consortium for DSP

## Noise Cancellation Project: Design Approach and Procedure

12/21/96

### Filter Classifications

A review of the literature enabled us to determine that adaptive filtering systems fall into two broad categories. One category, of which the Kalman filter is a representative example, requires knowledge of the state space of the system to which the filter is being applied. The second category, of which the LMS filter is a representative example, requires only an input signal and some reference signal in order to work.

Filters can also be classified as to whether they are linear and whether they are time invariant. Non-linear filters were determined to be beyond the scope of this project. The LMS filter is a time-varying, linear filter, which makes it relatively straightforward to implement and understand. Strictly speaking, the filter is not actually linear, but its final output behaves as though the filter were linear.

### Description of the LMS Filter

The first step in constructing an adaptive filter is to understand the construction of the filter itself. The figure below is a typical construction of an LMS adaptive filter.

GRAPHIC TO BE INSERTED HERE (LMS Filter Schematic)

It can be seen that the LMS filter looks remarkably similar to a standard FIR filter. The input signal passes through a sequence of delays, each of which has a coefficient associated with it. These coefficients are the "weights" of the adaptive filter. The difference between the FIR and LMS filters is that in an LMS adaptive filter, after each sample is processed, the output signal is compared to some reference signal and, via some algorithm, the weights are updated.
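To make the structure concrete, here is a small sketch (in Python rather than Matlab, with made-up weights, purely for illustration): a tapped delay line with fixed weights is simply an FIR filter, and the adaptive version differs only in that the weights change after every sample.

```python
# Hypothetical 3-tap FIR filter: the weights here are fixed, whereas an
# LMS filter would adjust them after every sample.
weights = [0.5, 0.3, 0.2]            # the tap "weights"
delay_line = [0.0, 0.0, 0.0]         # newest sample first
outputs = []
for sample in [1.0, 0.0, 0.0, 0.0]:  # a unit impulse
    delay_line = [sample] + delay_line[:-1]   # shift the new sample in
    outputs.append(sum(w * x for w, x in zip(weights, delay_line)))
# the impulse response that comes out is just the weights themselves
```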

### The Weight Updating Algorithm

Having gained insight into the general operation of the simplest implementation of an adaptive filter, the next area of investigation became the algorithm used to update the weights. While many, many different schemes have been concocted, it is interesting to note that the least mean squares (LMS) algorithm is still the most widely used because it is computationally efficient and works well for a wide variety of problems.

The algorithm is sometimes referred to as the "steepest descent algorithm". We begin by examining the equations in the diagram below.

INSERT GRAPHIC OF EQUATIONS DESCRIBING LEAST SQUARES

By the first equation it can be seen that we obtain an output, 'y', by convolving the input with the filter weights. The error in the output signal is defined as the difference between the output signal and some reference signal, or "desired" signal, 'd'. The gradient of this error amount (squared) determines the direction in which the weights must be adjusted in order to minimize the error.
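Written out in standard notation (our reading of the equations; w is the vector of filter weights and u the vector of delayed input samples held in the filter):

```latex
y(n) = \mathbf{w}^{T}(n)\,\mathbf{u}(n)            % output: input convolved with the weights
e(n) = d(n) - y(n)                                 % error against the desired signal d
\hat{\nabla}\,e^{2}(n) = -2\,e(n)\,\mathbf{u}(n)   % gradient estimate of the squared error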

The actual adjustment to the weights is then determined by taking the previous value of the weights and adding the gradient quantity scaled by some factor, mu. It turns out that there is no precise mathematical definition of what the value of mu should be; it controls the rate of convergence of the adaptive filter. Large values cause the filter to converge rapidly (sometimes within a few samples), while smaller values cause the filter to converge slowly. While it would seem that large values make the most sense, large values of mu, in the presence of a signal with widely varying noise levels, can cause the filter to oscillate or ring about its desired convergence point. In one early experiment, our filter oscillated to such an extent that it exceeded the floating-point capacity of the machine within a few samples.
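The behavior described above can be seen in a small simulation (sketched in Python rather than Matlab, with invented signals and step sizes): a modest mu drives the error toward zero, while an overly large mu produces exactly the sort of runaway oscillation we encountered.

```python
import math

def lms(u, d, n_taps, mu):
    """Minimal LMS pass over real-valued signals; returns the error sequence."""
    w = [0.0] * n_taps                 # filter weights
    buf = [0.0] * n_taps               # delay line, newest sample first
    errs = []
    for n in range(len(u)):
        buf = [u[n]] + buf[:-1]
        y = sum(wk * xk for wk, xk in zip(w, buf))        # filter output
        e = d[n] - y                                      # error vs. reference
        w = [wk + mu * e * xk for wk, xk in zip(w, buf)]  # steepest-descent step
        errs.append(e)
    return errs

u = [math.sin(0.3 * n) for n in range(1500)]          # input: a sinusoid
d = [0.7 * u[n] + 0.2 * (u[n - 1] if n > 0 else 0.0)  # reference: a filtered
     for n in range(1500)]                            # copy of the input
stable = lms(u, d, 4, 0.05)              # error decays smoothly toward zero
unstable = lms(u[:80], d[:80], 4, 5.0)   # rings about the convergence point and diverges
```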

### Input Signals, Reference Signals, and Overview of Operation

The input signal is, as its name implies, the primary input to the filter or the filtering system. Depending upon the filter application (see below), the input signal may or may not contain noise. The input signal is also referred to as the primary signal.

The reference signal is the signal (or an approximation to it) that is to be removed from the primary or input signal stream. For example, in a noise cancellation application, the reference signal must be something akin to the noise to be removed from the primary signal.

Depending upon the nature of the filter application, the reference signal is either generated within the system as a result of filter operation, or it is applied from an external source. How the reference signal is derived is explained in the next section.

### Filtering Systems

Having determined the construction of an adaptive filter and its weighting algorithm, the next question was, "Well, how does one actually implement such a filter? Just what is an input signal and a reference signal?" A further review of the literature at hand showed that there are four possible implementations of an adaptive filter, as shown in the diagram below.

INSERT DIAGRAM OF FOUR SYSTEM SCHEMATICS

Type I - Identification Filter

Let us assume that we have some "plant" that provides an unknown impulse response. The signal that feeds the plant is also fed to the filter. Note that "noise" is not an issue here. The outputs of the filter and the plant are subtracted from one another. The result is the "reference" or "desired" signal, which is sent back to the filter to cause it to adjust its weights. When the difference is zero, the reference signal is zero, implying that there is nothing to be removed from the input; thus, the filter will no longer adjust its weights. Also, if the difference is zero, it follows that the impulse response of the filter must be identical to that of the plant.
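A sketch of the Type I arrangement (in Python, with an invented 3-tap FIR standing in for the unknown plant): both the plant and the filter see the same input, the difference of their outputs drives the weight update, and the weights settle to the plant's impulse response.

```python
import random

rng = random.Random(0)               # deterministic, spectrally rich test input
plant = [0.3, -0.5, 0.2]             # "unknown" plant: a 3-tap FIR impulse response

n_taps, mu = 5, 0.1                  # the filter is longer than the plant on purpose
w = [0.0] * n_taps                   # adaptive weights
buf = [0.0] * n_taps                 # shared delay line, newest sample first
for _ in range(2000):
    x = rng.uniform(-1.0, 1.0)       # same input feeds both plant and filter
    buf = [x] + buf[:-1]
    plant_out = sum(h * s for h, s in zip(plant, buf))
    filt_out = sum(wk * s for wk, s in zip(w, buf))
    e = plant_out - filt_out         # the difference drives the adaptation
    w = [wk + mu * e * s for wk, s in zip(w, buf)]
# w now matches the plant; the two extra taps settle near zero
```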

Type II - Inverse Modelling Filter

Let us again assume that some input is fed into a plant of unknown impulse response. However, now the output of the plant is fed into the filter, and the filter output is subtracted from the input, which has been delayed to match the delay it underwent in the plant. Again, the difference is the reference signal, which is sent to the filter to adjust the weights. Note now, though, that if the reference signal goes to zero, the output of the filter must be identical to the input signal. Thus, the filter now has the inverse impulse response of the plant.
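The Type II arrangement can be sketched similarly (Python again; the invented plant simply delays its input one sample and halves it, so its inverse is a pure gain of 2).

```python
import random

rng = random.Random(1)
n_taps, mu = 3, 0.5
w = [0.0] * n_taps                 # adaptive weights
buf = [0.0] * n_taps               # delay line fed by the PLANT OUTPUT
u_prev = 0.0                       # the previous raw input, u(n-1)
for _ in range(2000):
    x = rng.uniform(-1.0, 1.0)     # current raw input, u(n)
    plant_out = 0.5 * u_prev       # plant: one-sample delay and a gain of 0.5
    d = u_prev                     # input delayed to match the plant's delay
    buf = [plant_out] + buf[:-1]
    y = sum(wk * s for wk, s in zip(w, buf))
    e = d - y                      # the difference drives the weight update
    w = [wk + mu * e * s for wk, s in zip(w, buf)]
    u_prev = x
# the filter converges to the plant's inverse: a pure gain of 2
```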

Type III - Predictive Filter

The input, after being delayed by one sample, is sent to the filter. The output of the filter is then subtracted directly from the undelayed input. In essence, the filter is always one sample behind the input; if its reference signal is to go to zero, the filter must "guess" what the next sample will be so that it can generate the correct output.

These types of filters are used in an attempt to remove random noise from a signal. The LMS filter, relying upon statistical methods, will not work for random noise, and hence will not be found in this application.
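For a deterministic signal, by contrast, the predictive arrangement works well. A sketch (Python, with an invented sinusoidal input, which obeys a fixed two-term recurrence and is therefore perfectly predictable):

```python
import math

n_taps, mu = 4, 0.1
w = [0.0] * n_taps                 # predictor weights
buf = [0.0] * n_taps               # delay line holding PAST samples only
errs = []
u_prev = 0.0
for n in range(1000):
    u = math.sin(0.25 * n)         # a deterministic, hence predictable, signal
    buf = [u_prev] + buf[:-1]      # the filter never sees the current sample
    pred = sum(wk * s for wk, s in zip(w, buf))
    e = u - pred                   # prediction error: the reference signal
    w = [wk + mu * e * s for wk, s in zip(w, buf)]
    errs.append(e)
    u_prev = u
# once converged, the filter "guesses" each next sample almost exactly
```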

Type IV - Noise Cancellation

The last type of filter system requires that two separate signals be fed to it. The primary signal does not go through the filter at all (an odd thought, one might suppose, when the whole purpose is to filter noise from the primary signal). Instead, the reference signal is fed into the filter, and the filter output is subtracted from the primary signal. Let us assume for a moment that the reference signal is an exact duplicate of the noise found in the primary signal. It is clear that the subtraction will leave an error which is exactly the wanted signal. Because this signal must have minimum power (all of the reference signal is removed), the filter weights will remain constant so long as the reference signal remains constant.

Suppose now that the reference signal is not an exact duplicate of the noise in the primary signal. It may be shifted in time (delayed, which means a phase shift in the frequency domain), it may have a different amplitude, and it may be at a slightly different frequency. Now, in order to minimize power in the output signal, the filter must adapt to make the reference signal match the noise in the primary signal as closely as possible. This can occur only if the noise in the primary signal and the reference signal are statistically correlated, and if the noise and the wanted signal in the primary channel are NOT correlated.
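The whole Type IV arrangement can be sketched as follows (Python; the "voice" and the noise are invented sinusoids, with the noise in the primary channel shifted in phase and amplitude relative to the reference, much as in our experiments). Note that the error is both the feedback to the filter and the system output.

```python
import math

n_taps, mu = 4, 0.02
w = [0.0] * n_taps
buf = [0.0] * n_taps                  # delay line on the REFERENCE signal
clean_tail, resid_tail = [], []
for n in range(1200):
    clean = math.sin(0.05 * n)                 # the wanted "voice"
    noise = 0.8 * math.sin(0.3 * n + 0.6)      # noise as it appears in the primary
    primary = clean + noise                    # primary channel: never filtered
    ref = math.sin(0.3 * n)                    # reference: correlated with the noise
    buf = [ref] + buf[:-1]
    y = sum(wk * s for wk, s in zip(w, buf))   # filter's estimate of the noise
    e = primary - y                            # residual: output AND feedback
    w = [wk + mu * e * s for wk, s in zip(w, buf)]
    if n >= 1000:                              # collect samples after convergence
        clean_tail.append(clean)
        resid_tail.append(e)
# the residual tracks the clean signal; the correlated noise is removed
```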

### Implementation of the Adaptive Filter

Not having a readily programmable DSP chip lying around, we elected instead to implement our filtering system in Matlab. This, of course, removed the possibility of any real-time processing, but as a side benefit we were able to examine inputs and outputs in great detail.

The Matlab function we wrote, which implements the adaptive filter, is shown below.

function [err,output,tap_wts] = adapt(len,u,d)

% LMS Active Adaptive Filter Routine
% copyright 1996, Ian Gravagne, all rights reserved
%
% [err,y,W] = ADAPT(len,u,d)
%
% len : the desired FIR filter length
% u : a vector of input values to the adaptive filter
% d : a vector of desired, or reference, values. This vector must be
% the same length as the input vector. If "d" is not specified,
% it will be assumed to be 0 (i.e. the filter will converge to
% zero output).
% err : err = d - y where y is the output of the FIR filter.
% y : the output of the filter when u is applied.
% W : a column vector describing the weights of the FIR filter.
%

% set up filter for zero or non-zero output

if nargin == 2,
    d = zeros(1,length(u));
    W = ones(len,1);
else
    W = zeros(len,1);
end

U = zeros(len,1);

% convolve the filter and input. Update the weights in the filter
for n = 1:length(u)
    U = [u(n);U(1:length(U)-1)];
    y(n) = W'*U;
    e(n) = d(n) - y(n);
    W = W + .025*U*conj(e(n));
end

if nargout == 1,
    err = e;
else
    err = e;
    output = y;
    tap_wts = W;
end

Given the preceding explanations of filter operation, it should be clear to the reader that the function takes two inputs (the primary and reference signals) and produces an output signal. In this routine, weights are recomputed after every sample; it is also possible to recompute weights after 'n' samples, in what is called a 'block' weighting routine. An example of this type of function will be found in the Results and Discussion Section.

### Experimenting With Adaptive Filters

Having become slightly knowledgeable about LMS adaptive filters, the next step was to try a number of different experiments to better understand filter operation. Two large questions faced us at this point: how long should an adaptive filter be, and what values of mu should we use?

We ran the following series of signal simulations using Matlab:

• We recorded a voice and then overlaid a simple 60 Hz sine wave onto the voice and used the LMS filter to remove it.
• We recorded the sound at the tailpipe of a Chevrolet truck and then used the LMS filter to make the output signal quiet, in an attempt to mimic the concept of a sound-cancelling muffler.
These simple simulations gave us information about the operation of the filter and, more importantly, gave us insight into the role that phase shift and amplitude differences play in noise-cancelling schemes. As a result of these experiments we implemented the following simulations:
• We recorded the electrical response signals of a moving muscle, signals contaminated with 60 Hz noise that was shifted in phase and amplitude by its attenuation in the body and the recording instruments. We then used an unshifted reference sine wave to remove the noise.
• We created a "plant", an analog impulse response, by using Matlab's ODE23 facilities to solve a differential equation system. We then used the LMS filter in a Type I construction to mimic the plant response.
• We created a "real world" noise cancellation simulation. We recorded a "noise" (actually, it was 2 Live Crew... so noise might be a pejorative term). We then phase shifted this noise (non-linearly) and changed its amplitude at various frequencies. We then added this new noise to the voice of a speaker and proceeded to remove it using the LMS adaptive filter.
Each of these experiments and their output is described in detail in the Results and Discussion Section of this web document.

11/05/96

The project will begin with a review of available reference material, both to flesh out the basics of the project and to better define its scope. We have made the following group assignments:

• Wayne Herbert - Contact Digisonix to gather information on the noise attenuation algorithms used. Also, begin a review of the text, "Adaptive Filter Theory", by Simon Haykin.
• Ian Gravagne - Examination of models available for both communications applications and sound applications. Provide a "back of the envelope" design for the Matlab model to be used.
• Ann Ramos - Review the material in "Introduction to Applied Mathematics", by Gilbert Strang, to garner information covering least squares filters and other adaptive methods.
• Kevin Speller - Develop a survey of noise cancellation methods and techniques in use in industry, contact those with techniques relevant to the project.
• Pat Friel - Develop a survey of other papers/texts on the subject of noise cancellation beginning with Rich B's paper sources.
Preliminary reports on each of these topics will be due on Wednesday, November 12, 1996.

10/31/96
