IEEE ANTENNAS AND WIRELESS PROPAGATION LETTERS, VOL. 4, 2005

Beamforming Using Support Vector Machines

M. Martínez Ramón, Nan Xu, and C. G. Christodoulou, Fellow, IEEE

Manuscript received July 14, 2005; revised September 27, 2005. The authors are with the Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, NM 87106 USA. Digital Object Identifier 10.1109/LAWP.2005.860196.

Abstract— Support vector machines (SVMs) have improved generalization performance over other classical optimization techniques. Here, we introduce an SVM-based approach for linear array processing and beamforming. The development of a modified cost function is presented, and it is shown how it can be applied to the problem of linear beamforming. Finally, comparison examples are included to show the validity of the new minimization approach.

Index Terms— Beamforming, complex support vector machines, support vector machines (SVMs).

I. INTRODUCTION

Support vector machines (SVMs) have shown several advantages in prediction, regression, and estimation over some of the classical approaches in a wide range of applications, due to their improved generalization capabilities. Here, we introduce the basic framework of the SVM approach as applied to linear array processing.

Array signal processing involves complex signals, for which a complex-valued formulation of the SVM is needed. We introduce this formulation by incorporating the real and imaginary parts of the error into the primal optimization and then proceeding as usual to solve a complex-valued constrained optimization problem. The resulting algorithm is a natural counterpart of the real-valued support vector regressor, which can be immediately applied to array signal processing. The adjustment of the parameters of the cost function leads to improved robustness of the method in the presence of any additional noise in the signal.

We apply the newly developed formulation to the optimization of the beamforming of a six-element array antenna as a proof of concept. Several examples illustrate the advantage of SVMs over minimum mean square error (MMSE)-based algorithms, due to their improved generalization ability. The first examples compare the behavior of both algorithms in an environment in which interfering signals are close to the desired ones, thus producing non-Gaussian noise. The last example illustrates the improved generalization ability of the SVM when small data sets are used for training, which is common in several communication applications.

II. THE SUPPORT VECTOR APPROACH AND THE COST FUNCTION

Let $\mathbf{x}[n]$ be the spatially sampled data. A linear beamformer can be written as

$$d[n] = \mathbf{w}^{T}\mathbf{x}[n] + e[n] \qquad (1)$$

where $\mathbf{x}[n]$ is the vector of elements of the array output, $d[n]$ is the desired output, and $e[n]$ is the output error. The coefficients $\mathbf{w}$ are usually estimated through the minimization of a certain cost function on $e[n]$.

The SVM approach can be applied to the adjustment of this model. The main idea of SVMs is to obtain the solution which has the minimum norm of $\mathbf{w}$. Due to the minimization of the weight vector norm, the solution will be regularized in the sense of Tikhonov [1], improving the generalization performance. For the problem not to be trivial, the minimization has to be subject to the constraints

$$d[n] - \mathbf{w}^{T}\mathbf{x}[n] \le \varepsilon + \xi_{n}, \qquad -\,d[n] + \mathbf{w}^{T}\mathbf{x}[n] \le \varepsilon + \xi'_{n}, \qquad \xi_{n}, \xi'_{n} \ge 0 \qquad (2)$$

where $\xi_{n}$ and $\xi'_{n}$ are the "slack" variables or losses. The optimization is intended to minimize a cost function over these variables. The parameter $\varepsilon$ is used to allow those $\xi_{n}$ or $\xi'_{n}$ for which the error is less than $\varepsilon$ to be zero.
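As a concrete illustration of the model (1) and the slack variables in (2), the short Python sketch below (not part of the paper; the data, dimensions, and parameter values are placeholders) evaluates the errors produced by a fixed real-valued weight vector and the corresponding slacks. Samples whose error magnitude stays below epsilon receive zero slack and therefore contribute nothing to the cost.

# Illustrative sketch of (1)-(2): slack variables for a fixed weight vector.
# All names and values are placeholders, not the authors' settings.
import numpy as np

rng = np.random.default_rng(0)
N, K = 50, 6                                 # snapshots, array elements (real-valued toy case)
X = rng.standard_normal((N, K))              # rows are spatial samples x[n]
w = rng.standard_normal(K)                   # candidate beamformer weights
d = X @ w + 0.1 * rng.standard_normal(N)     # desired output with additive noise

eps = 0.05                                   # epsilon-insensitive zone
e = d - X @ w                                # output errors e[n]
xi = np.maximum(0.0, e - eps)                # slack for errors above +eps
xi_p = np.maximum(0.0, -e - eps)             # slack for errors below -eps

# Errors with |e[n]| < eps produce zero slack; only the remaining samples
# (the candidate support vectors) enter the cost function.
print("fraction inside the eps-tube:", np.mean((xi == 0) & (xi_p == 0)))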
Minimizing a cost function over these slack variables is equivalent to the minimization of the so-called ε-insensitive or Vapnik loss function [2], given by

$$L_{\varepsilon}(e[n]) = \begin{cases} 0, & |e[n]| < \varepsilon \\ |e[n]| - \varepsilon, & |e[n]| \ge \varepsilon \end{cases} \qquad (3)$$

The functional to be minimized is then

$$L_{p} = \frac{1}{2}\|\mathbf{w}\|^{2} + C \sum_{n} (\xi_{n} + \xi'_{n}) \qquad (4)$$

subject to (2), where $C$ is the tradeoff between the minimization of the norm (to improve the generalization ability) and the minimization of the errors [2].

The optimization of the above constrained problem through Lagrange multipliers $\alpha_{n}$, $\alpha'_{n}$ leads to the dual formulation [3]

$$L_{d} = \frac{1}{2}(\boldsymbol{\alpha} - \boldsymbol{\alpha}')^{T}\mathbf{R}\,(\boldsymbol{\alpha} - \boldsymbol{\alpha}') - (\boldsymbol{\alpha} - \boldsymbol{\alpha}')^{T}\mathbf{d} + \varepsilon\,\mathbf{1}^{T}(\boldsymbol{\alpha} + \boldsymbol{\alpha}') \qquad (5)$$

to be minimized with respect to $\boldsymbol{\alpha}$, $\boldsymbol{\alpha}'$, with $0 \le \alpha_{n}, \alpha'_{n} \le C$. It involves the Gram matrix $\mathbf{R}$ of the dot products of the data vectors, $R_{mn} = \mathbf{x}[m]^{T}\mathbf{x}[n]$. This matrix may be singular, thus producing an ill-conditioned problem. To avoid this numerical inconvenience, a small diagonal $\gamma\mathbf{I}$ is added to the matrix prior to the numerical optimization.

We present here a modified derivation of the SVM regressor which leads to a more convenient equivalent cost function (Fig. 1)

$$L_{R}(e) = \begin{cases} 0, & |e| < \varepsilon \\ \dfrac{1}{2\gamma}\left(|e| - \varepsilon\right)^{2}, & \varepsilon \le |e| \le e_{C} \\ C\left(|e| - \varepsilon\right) - \dfrac{\gamma C^{2}}{2}, & |e| \ge e_{C} \end{cases} \qquad (6)$$

where $e_{C} = \varepsilon + \gamma C$.

Fig. 1. Cost function applied to the SVM beamformer.

This cost function provides a functional that is numerically regularized by the matrix $\gamma\mathbf{I}$. As can be seen, the cost function is quadratic for the data which produce errors between $\varepsilon$ and $e_{C}$, and linear for errors above $e_{C}$. Thus, one can adjust the parameters $\gamma$ and $C$ so that a quadratic cost is applied to the samples which are mainly affected by thermal noise (i.e., for which the quadratic cost is maximum likelihood). The linear cost is then applied to the samples which are outliers [4], [5]. With a linear cost, the contribution of an outlier to the solution does not depend on its error value, but only on its sign, thus avoiding the bias that a quadratic cost function produces.

Finally, we generalize the derivation to the complex-valued case, as is necessary for array processing.

III. SUPPORT VECTOR MACHINE BEAMFORMER

The output vector of a $K$-element array receiving $M$ signals can be written in matrix notation as

$$\mathbf{x}[n] = \mathbf{A}\mathbf{s}[n] + \mathbf{g}[n] \qquad (7)$$

where

$$\mathbf{A} = [\mathbf{a}(\theta_{1}) \cdots \mathbf{a}(\theta_{M})], \qquad \mathbf{s}[n] = [s_{1}[n] \cdots s_{M}[n]]^{T} \qquad (8)$$

are, respectively, the steering matrix (whose columns are the steering vectors $\mathbf{a}(\theta_{i})$) and the vector of received signals, and $\mathbf{g}[n]$ is the noise present at the output of each array element. The output vector $\mathbf{x}[n]$ is linearly processed to obtain the desired output $d[n]$. The expression for the output of the array processor is

$$y[n] = \mathbf{w}^{H}\mathbf{x}[n] \qquad (9)$$

where $\mathbf{w}$ is the weight vector of the array and $e[n] = d[n] - y[n]$ is the estimation error.

For a set of observed samples of $\{\mathbf{x}[n], d[n]\}$, and when nonzero empirical errors are expected, the functional to be minimized is

$$L_{p} = \frac{1}{2}\|\mathbf{w}\|^{2} + \sum_{n} L(e[n]). \qquad (10)$$

Thus, according to the error cost function (6), we have to minimize

$$L_{p} = \frac{1}{2}\|\mathbf{w}\|^{2} + \sum_{n}\left[L_{R}(\xi_{n}) + L_{R}(\xi'_{n}) + L_{R}(\zeta_{n}) + L_{R}(\zeta'_{n})\right] \qquad (11)$$

subject to

$$\mathrm{Re}\!\left(d[n] - \mathbf{w}^{H}\mathbf{x}[n]\right) \le \varepsilon + \xi_{n}, \qquad -\mathrm{Re}\!\left(d[n] - \mathbf{w}^{H}\mathbf{x}[n]\right) \le \varepsilon + \xi'_{n},$$
$$\mathrm{Im}\!\left(d[n] - \mathbf{w}^{H}\mathbf{x}[n]\right) \le \varepsilon + \zeta_{n}, \qquad -\mathrm{Im}\!\left(d[n] - \mathbf{w}^{H}\mathbf{x}[n]\right) \le \varepsilon + \zeta'_{n},$$
$$\xi_{n}, \xi'_{n}, \zeta_{n}, \zeta'_{n} \ge 0 \qquad (12)$$

where $\xi_{n}$ and $\xi'_{n}$ stand for the positive and negative errors in the real part of the output, respectively, and $\zeta_{n}$ and $\zeta'_{n}$ represent the corresponding errors for the imaginary part. Note that each error is either negative or positive and, therefore, only one of the losses takes a nonzero value; that is, either $\xi_{n}$ or $\xi'_{n}$ (either $\zeta_{n}$ or $\zeta'_{n}$) is null. This constraint can be written as $\xi_{n}\xi'_{n} = 0$, $\zeta_{n}\zeta'_{n} = 0$. Finally, as in other SVM formulations, the parameter $C$ can be seen as a tradeoff factor between the empirical risk and the structural risk.

It is possible to transform the minimization of the primal functional (11), subject to the constraints in (12), into the optimization of the dual or Lagrange functional. First, we introduce the constraints into the primal functional by means of Lagrange multipliers, obtaining the following primal-dual functional:

$$L_{pd} = \frac{1}{2}\|\mathbf{w}\|^{2} + \sum_{n}\left[L_{R}(\xi_{n}) + L_{R}(\xi'_{n}) + L_{R}(\zeta_{n}) + L_{R}(\zeta'_{n})\right]
- \sum_{n}\alpha_{n}\left[\varepsilon + \xi_{n} - \mathrm{Re}(e[n])\right]
- \sum_{n}\alpha'_{n}\left[\varepsilon + \xi'_{n} + \mathrm{Re}(e[n])\right]$$
$$\qquad\qquad - \sum_{n}\beta_{n}\left[\varepsilon + \zeta_{n} - \mathrm{Im}(e[n])\right]
- \sum_{n}\beta'_{n}\left[\varepsilon + \zeta'_{n} + \mathrm{Im}(e[n])\right]
- \sum_{n}\left(\mu_{n}\xi_{n} + \mu'_{n}\xi'_{n} + \nu_{n}\zeta_{n} + \nu'_{n}\zeta'_{n}\right) \qquad (13)$$

with the dual variables or Lagrange multipliers constrained to $\alpha_{n} \ge 0$, $\alpha'_{n} \ge 0$, $\beta_{n} \ge 0$, $\beta'_{n} \ge 0$, $\mu_{n} \ge 0$, $\mu'_{n} \ge 0$, $\nu_{n} \ge 0$, $\nu'_{n} \ge 0$, and with $\xi_{n}, \xi'_{n}, \zeta_{n}, \zeta'_{n} \ge 0$, where $e[n] = d[n] - \mathbf{w}^{H}\mathbf{x}[n]$. Note that the cost function (6) has two active segments, a quadratic one and a linear one. The following constraints must also be fulfilled: (14).
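One practical route to solving the complex-valued problem (11)-(12) with an off-the-shelf real-valued solver is to stack the real and imaginary constraints into a single augmented real regression problem, since Re(w^H x) and Im(w^H x) are both linear in the stacked vector [Re(w); Im(w)]. The sketch below is not the authors' implementation: it uses scikit-learn's LinearSVR with the plain ε-insensitive loss rather than the modified cost (6), and the array geometry, angles, signal model, and parameter values are assumptions made only for illustration.

# Minimal sketch: complex linear SVM beamformer via real/imaginary stacking.
# Not the paper's algorithm; scenario and parameters are illustrative only.
import numpy as np
from sklearn.svm import LinearSVR

rng = np.random.default_rng(1)
K, N = 6, 50                                  # array elements, training snapshots
aoa = np.deg2rad([10.0, -30.0, 45.0])         # desired user plus two interferers (assumed)
A = np.exp(-1j * np.pi * np.outer(np.arange(K), np.sin(aoa)))   # ULA, half-wavelength spacing

s = np.sign(rng.standard_normal((3, N))) + 0j                   # BPSK sources
x = A @ s + 0.1 * (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N)))
d = s[0]                                                        # desired user's symbols

# Re(w^H x) = theta . [xr; xi] and Im(w^H x) = theta . [xi; -xr], theta = [Re(w); Im(w)]
xr, xi = x.real.T, x.imag.T                                     # each of shape (N, K)
F = np.vstack([np.hstack([xr, xi]),                             # rows enforcing the real part
               np.hstack([xi, -xr])])                           # rows enforcing the imaginary part
t = np.concatenate([d.real, d.imag])

svr = LinearSVR(epsilon=0.01, C=10.0, fit_intercept=False, max_iter=50_000)
svr.fit(F, t)
w = svr.coef_[:K] + 1j * svr.coef_[K:]                          # recovered complex weights

y = np.conj(w) @ x                                              # beamformer output w^H x[n], as in (9)
print("training symbol error rate:", np.mean(np.sign(y.real) != s[0].real))

Because the squared norm of the stacked vector [Re(w); Im(w)] equals the squared norm of w, the weight-norm regularization of Section II carries over unchanged to this augmented real problem.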
The errors produced by the thermal noise will fall in the quadratic area of the cost function, whereas the outliers produced by the interfering signals will fall in the linear area.

We calculated the BER performance of the LS and SVM beamformers for noise levels from 0 to 15 dB. Each BER value has been evaluated by averaging the results of 100 independent trials. The results can be seen in Fig. 2.

Fig. 2. BER performance for example 1. SVM (continuous line) and regular LS (dashed line) beamformers.

In the next case, the desired signals come from two angles of arrival (AOA), the second one at 0.25, with amplitudes 1 and 0.3, while the interfering signals come from three AOAs, the last two at 0.2 and 0.3, all with amplitude 1 (see Fig. 3). In this last example, the interfering signals are much closer to the desired ones, thus biasing the LS solution. The better performance of the SVM is due to its improved robustness against the non-Gaussian outliers produced by the interfering signals.

Fig. 3. BER performance for example 2. SVM (continuous line) and regular LS (dashed line) beamformers.

B. Robustness Against Overfitting

One advantage of SVMs is that their generalization ability is controlled by the regularization imposed by the minimization of the weight vector norm. We highlight this fact by calculating the BER for different numbers of training samples. The results are shown in Fig. 4.

Fig. 4. BER performance against the number of training samples. SVM (continuous line) and regular LS (dashed line) beamformers.

V. CONCLUSION

We introduce in this work a way to apply the SVM framework to linear array beamforming. SVMs have a clear advantage over MMSE-based algorithms in those cases in which small data sets are available for training and where non-Gaussian noise is present, due to the fact that the generalization ability of the machine is controlled. In order to make the algorithm adequate for array processing purposes, we first apply an alternative cost function which is suitable for problems in which there are Gaussian noise and other non-Gaussian sources, such as multiuser interference, which may produce outliers in the signal. Also, this cost function provides a natural way to explain the numerical regularization present in any SVM. Ongoing work addresses the application of nonlinear SVMs to beamforming and detection of AOA.

REFERENCES

[1] A. Tikhonov and V. Arsenin, Solutions of Ill-Posed Problems. New York: Winston, 1977.
[2] V. Vapnik, Statistical Learning Theory, Adaptive and Learning Systems for Signal Processing, Communications, and Control. New York: Wiley, 1998.
[3] A. Smola and B. Schölkopf, "A tutorial on support vector regression," Royal Holloway College, Univ. London, U.K., NeuroCOLT Tech. Rep. NC-TR-98-030, 1998.
[4] P. J. Huber, "Robust statistics: A review," Ann. Math. Statist., vol. 43, pp. 1041–1067, 1972.
[5] K.-R. Müller, A. Smola, G. Rätsch, B. Schölkopf, J. Kohlmorgen, and V. Vapnik, "Predicting time series with support vector machines," in Advances in Kernel Methods—Support Vector Learning, B. Schölkopf, C. Burges, and A. Smola, Eds. Cambridge, MA: MIT Press, 1999, pp. 243–254.
[6] A. Navia-Vázquez, F. Pérez-Cruz, A. Artés-Rodríguez, and A. Figueiras-Vidal, "Weighted least squares training of support vector classifiers leading to compact and adaptive schemes," IEEE Trans. Neural Netw., vol. 12, no. 5, pp. 1047–1059, Sep. 2001.
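As a closing illustration, the sketch below mirrors the spirit of the Monte Carlo protocol described above (100 independent trials per noise level, comparing the SVM beamformer against the classical least-squares solution). The scenario, angles, amplitudes, SNR definition, and solver settings are illustrative assumptions and do not reproduce the paper's exact examples or figures.

# Sketch of the evaluation protocol: BER of LS versus SVM beamformers,
# averaged over 100 independent trials per noise level. Illustrative only.
import numpy as np
from sklearn.svm import LinearSVR

rng = np.random.default_rng(2)
K, N_train, N_test = 6, 50, 2000
aoa = np.deg2rad([10.0, -30.0, 45.0])          # desired user plus two interferers (assumed)
A = np.exp(-1j * np.pi * np.outer(np.arange(K), np.sin(aoa)))

def snapshots(n, sigma):
    # BPSK sources plus circular complex Gaussian noise of standard deviation sigma
    s = np.sign(rng.standard_normal((3, n))) + 0j
    g = sigma * (rng.standard_normal((K, n)) + 1j * rng.standard_normal((K, n))) / np.sqrt(2)
    return A @ s + g, s[0]

def fit_svm(x, d, eps=0.01, C=10.0):
    # Real/imaginary stacking trick with a standard epsilon-insensitive linear SVR
    xr, xi = x.real.T, x.imag.T
    F = np.vstack([np.hstack([xr, xi]), np.hstack([xi, -xr])])
    t = np.concatenate([d.real, d.imag])
    svr = LinearSVR(epsilon=eps, C=C, fit_intercept=False, max_iter=50_000).fit(F, t)
    return svr.coef_[:K] + 1j * svr.coef_[K:]

def fit_ls(x, d):
    # Classical least-squares solution of min_w sum_n |d[n] - w^H x[n]|^2
    return np.linalg.solve(x @ x.conj().T, x @ d.conj())

for snr_db in (0, 5, 10, 15):
    sigma = 10 ** (-snr_db / 20)
    ber = {"LS": [], "SVM": []}
    for _ in range(100):                              # 100 independent trials
        x_tr, d_tr = snapshots(N_train, sigma)
        x_te, d_te = snapshots(N_test, sigma)
        for name, w in (("LS", fit_ls(x_tr, d_tr)), ("SVM", fit_svm(x_tr, d_tr))):
            y = np.conj(w) @ x_te                     # beamformer output on test data
            ber[name].append(np.mean(np.sign(y.real) != d_te.real))
    print(snr_db, "dB:", {k: float(np.mean(v)) for k, v in ber.items()})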