Progress In Electromagnetics Research, PIER 46, 265–312, 2004

A GENERAL FRAMEWORK FOR CONSTRAINT MINIMIZATION FOR THE INVERSION OF ELECTROMAGNETIC MEASUREMENTS

T. M. Habashy and A. Abubakar
Schlumberger-Doll Research, 36 Old Quarry Road, Ridgefield, CT 06877-4108, USA

Abstract—In this paper we develop a general framework for the inversion of electromagnetic measurements in cases where a parametrization of the unknown configuration is possible. Because of the ill-posed nature of this nonlinear inverse scattering problem, such a parametrization is needed when the available measurement data are limited and measurements are carried out from only a limited set of transmitter-receiver positions (limited data diversity). By carrying out this parametrization, the number of unknown model parameters that need to be inverted becomes manageable, and hence Newton-based approaches can be used to advantage over gradient-based approaches. To guarantee error reduction during the optimization process, the iterative step is adjusted using a line-search algorithm. Furthermore, unlike most Newton-based approaches available in the literature, we enhance the Newton-based approaches presented in this paper by constraining the inverted model parameters with nonlinear transformations. This constraint forces the reconstructed model parameters to lie within their physical bounds. To deal with cases where the measurements are redundant or lack sensitivity to certain model parameters, causing non-uniqueness of the solution, the cost function to be minimized is regularized by adding a penalty term. A crucial aspect of this approach is how to determine the regularization parameter, which sets the relative importance of the misfit between the measured and predicted data and the penalty term.
We review different approaches to determining this parameter and propose a robust and simple way of choosing this regularization parameter with the aid of a recently developed multiplicative regularization analysis. By combining all the techniques mentioned above we arrive at an effective and robust parametric inversion algorithm. As numerical examples we present results of electromagnetic inversion at induction frequencies in a deviated borehole configuration.

Contents
1 Introduction
2 The Cost Function
3 Constrained Minimization
4 Normalization of the Vector of Residuals
5 The Newton Minimization Approach
  5.1 Case (1): G is Singular
  5.2 Case (2): G is Nonsingular
6 A Modified Newton Method
7 The Gauss-Newton Minimization Approach
8 The Steepest-Descent Method
9 Line Searches
10 The Choice of the Lagrange Multiplier
11 Update Formulas for the Hessian
  11.1 The Rank-One Matrix Update
  11.2 The Rank-Two Matrix Updates
    11.2.1 The Powell-Symmetric-Broyden (PSB) Update
    11.2.2 The Davidon-Fletcher-Powell (DFP) Update
    11.2.3 The Broyden-Fletcher-Goldfarb-Shanno (BFGS) Update
12 Criterion for Terminating the Iteration
13 Regularization
  13.1 L1-Norm
  13.2 Maximum Entropy
14 The Weighted Least Squares Minimization in the Framework of Stochastic Estimation
  14.1 Preliminaries
  14.2 The Fisher Information Matrix
  14.3 The Estimator's Covariance Matrix and the Cramer-Rao Lower Bound
15 Numerical Examples
  15.1 Example 1
  15.2 Example 2
16 Conclusions
Acknowledgment
Appendix A. Nonlinear Transformations for Constrained Minimization
  A.1 First Transformation
  A.2 Second Transformation
  A.3 Third Transformation
References

1. INTRODUCTION

In inverse scattering problems, one aims to determine the shape, location and material parameters of an object from measurements of the (scattered) wavefield, when a number of waves are generated such that they successively illuminate the domain of interest.
Since the wavefield itself depends on the material parameters, the inverse scattering problem is essentially nonlinear. For simple forward scattering models (mostly one-dimensional cases), a forward solver can generate a database containing a full collection of scattered-field data for a number of possible realizations of the scattering object, and one can then select the particular realization with the best fit to the actually measured data. In practice, however, the execution of such an enumerative inversion method is often impossible due to the large number of discrete unknown variables. Improvements of these Monte Carlo methods, see e.g. [1], are obtained when certain probabilities are taken into account, e.g., as in genetic algorithms [2]. The advantages of these inversion schemes are their simplicity and the fact that the algorithm will not become trapped in a local minimum. However, the major disadvantage is the large number of forward solutions that must be generated and stored. Furthermore, for a realistic full three-dimensional scattering problem (including multiple scattering), the generation of a full class of forward solutions is at present not feasible.

We therefore opt for an iterative approach, in which the model parameters are updated iteratively by a consistent minimization of the misfit between the measured data and the predicted data. Since the updates of the model parameters are determined from the misfit, precautions have to be taken to overcome the ill-posed nature of the inverse problem and to avoid the local minima of the nonlinear problem. Two- and three-dimensional inverse scattering problems have been studied with various optimization methods, in particular deterministic gradient-type methods. From the large body of literature we list only a few works, without any intention of a review: [3–11, 14, 15].
These gradient-type methods are very attractive in cases where one deals with a huge number of unknown parameters (pixel-like/tomographic inversion) and a large number of measurement data collected from (ideally) all illumination angles is available (enough data diversity). When the number of available measurement data is limited and the data are collected from only a limited set of transmitter-receiver positions (limited data diversity), a parametrization of the problem is needed in order to arrive at a reliable solution. Such cases arise when one deals with ground-penetrating radar, geophysical borehole applications and the non-destructive evaluation of corrosion in metal pipes. By carrying out this parametrization, the number of unknown model parameters that need to be inverted becomes manageable. In other words, after this parametrization we end up with a system that is not underdetermined. For this parametric inversion the Newton-based approaches are more appropriate than gradient-based methods because of their faster convergence rates.

In this paper we first review the Newton-based methods by putting them in a general framework. Then, in order to enhance the performance of these Newton-based methods, we constrain them with nonlinear transformations. This constraint forces the reconstructed model parameters to lie within their physical bounds. To guarantee the error-reducing nature of the algorithm, we adjust the iterative step using a line-search algorithm. In this way we prevent the algorithm from jumping back and forth between two successive iterates. Furthermore, since the problem is essentially highly nonlinear, some regularization is definitely needed. The main difficulty in adding a regularization term is how to determine the appropriate regularization parameter, or Lagrange multiplier. In this paper we review a number of ways of choosing this regularization parameter and propose a new way of choosing it for the Newton-based methods.
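To illustrate the kind of bound constraint such nonlinear transformations provide: an unconstrained variable is mapped to a physical parameter that can never leave its prescribed interval, so the Newton update can operate without explicit bound checks. The sine-based mapping sketched below is one common choice and is an assumption here for illustration; the paper's own transformations are given in Appendix A.

```python
import numpy as np

def to_physical(c, x_min, x_max):
    """Map an unconstrained variable c to a physical parameter in [x_min, x_max].

    For any real c, sin(c) lies in [-1, 1], so the result always stays inside
    its physical bounds -- the Newton step on c needs no bound checks.
    """
    return x_min + 0.5 * (x_max - x_min) * (1.0 + np.sin(c))

def to_unconstrained(x, x_min, x_max):
    """Inverse mapping (principal branch) from physical to unconstrained variable."""
    return np.arcsin(2.0 * (x - x_min) / (x_max - x_min) - 1.0)

# Even extreme unconstrained values map to in-bounds physical parameters:
c = np.array([-50.0, 0.3, 1e4])
x = to_physical(c, x_min=1.0, x_max=100.0)  # hypothetical conductivity bounds
assert np.all((x >= 1.0) & (x <= 100.0))
```

A mapping of this sort is what forces the reconstructed parameters to remain physically meaningful at every iteration.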
This new approach to choosing the regularization parameter is inspired by the work of [12, 13] on multiplicative regularization for gradient-type algorithms. Our analysis shows that, by posing our cost functional as a multiplicative functional, the regularization parameter can be chosen proportional to the original cost functional. This provides us with a robust way to determine the weight of the regularization term. Finally, we also point out the relationship between the Newton-based minimization methods and the Bayesian inference approach. As numerical examples, we present inversion results from electromagnetic data at induction frequencies in a single-well deviated borehole configuration.

2. THE COST FUNCTION

We define the vector of residuals $\mathbf{e}(\mathbf{x})$ as the vector whose $j$-th element is the residual error (also referred to as the data mismatch) of the $j$-th measurement. The residual error is defined as the difference between the measured and the predicted normalized responses:

\mathbf{e}(\mathbf{x}) = \begin{bmatrix} e_1(\mathbf{x}) \\ \vdots \\ e_M(\mathbf{x}) \end{bmatrix} = \begin{bmatrix} S_1(\mathbf{x}) - m_1 \\ \vdots \\ S_M(\mathbf{x}) - m_M \end{bmatrix} = \mathbf{S}(\mathbf{x}) - \mathbf{m}, \qquad (1)

where $M$ is the number of measurements, $m_j$ is the normalized observed response (measured data) and $S_j(\mathbf{x})$ is the corresponding normalized simulated response as predicted by the vector of model parameters $\mathbf{x}$:

\mathbf{x}^T = [x_1 \; x_2 \; \cdots \; x_{N-1} \; x_N] = (\mathbf{y} - \mathbf{y}_R)^T, \qquad (2)

where $N$ is the number of unknowns and the superscript $T$ denotes transposition. We represent the vector of model parameters $\mathbf{x}$ as the difference between the vector of actual parameters $\mathbf{y}$ and a reference/background model $\mathbf{y}_R$. The reference model includes all a priori information on the model parameters, such as that derived from independent measurements.
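The residual vector of Eq. (1) can be sketched directly in code. The forward model below is a toy linear map chosen purely for illustration (an assumption, not the paper's electromagnetic simulator):

```python
import numpy as np

def residuals(x, forward_model, m):
    """Vector of residuals e(x) = S(x) - m, as in Eq. (1).

    x             : model parameters relative to the reference model, x = y - y_R
    forward_model : callable returning the M normalized predicted responses S(x)
    m             : array of the M normalized measured responses
    """
    return forward_model(x) - m

# Toy linear forward model: M = 3 measurements, N = 2 unknowns.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
S = lambda x: A @ x
m = np.array([5.0, 11.0, 17.0])  # data generated by x_true = [1, 2]

e = residuals(np.array([1.0, 2.0]), S, m)
assert np.allclose(e, 0.0)  # an exact fit gives zero residual
```

In the actual inversion, each call to `forward_model` is a full electromagnetic simulation, which is why keeping the number of unknowns small through parametrization matters.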
We pose the inversion as the minimization of the following cost (or objective) function $C(\mathbf{x})$ [20]:

C(\mathbf{x}) = \frac{1}{2} \left\{ \mu \left[ \| \mathbf{W}_d \cdot \mathbf{e}(\mathbf{x}) \|^2 - \chi^2 \right] + \| \mathbf{W}_x \cdot (\mathbf{x} - \mathbf{x}_p) \|^2 \right\} \qquad (3)

The scalar factor $\mu$ ($0 < \mu < \infty$) is a Lagrange multiplier. Its inverse is called the regularization parameter or damping coefficient. It is a tradeoff parameter determining the relative importance of the two terms of the cost function. The determination of $\mu$ will produce an estimate of the model $\mathbf{x}$ that has a finite minimum weighted norm (away from a prescribed model $\mathbf{x}_p$) and that globally misfits the data to within a prescribed value $\chi$ (determined from a priori estimates of the noise in the data). The second term of the cost function is included to regularize the optimization problem. It safeguards against cases in which the measurements are redundant or lack sensitivity to certain model parameters, causing non-uniqueness of the solution. It also suppresses any possible magnification of errors in the parameter estimates due to the noise that is unavoidably present in the measurements. Such error magnifications may result in undesirably large variations in the model parameters, which may cause instabilities in the inversion. $\mathbf{W}_x^T \cdot \mathbf{W}_x$ is
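Eq. (3) transcribes directly into code. For brevity the weighting matrices $\mathbf{W}_d$ and $\mathbf{W}_x$ default to the identity here, which is an assumption for illustration; the paper discusses their role separately.

```python
import numpy as np

def cost(x, e, mu, chi, x_p, W_d=None, W_x=None):
    """Cost function of Eq. (3):
       C(x) = 1/2 * ( mu * (||W_d e(x)||^2 - chi^2) + ||W_x (x - x_p)||^2 )

    mu  : Lagrange multiplier; its inverse is the regularization parameter
    chi : prescribed data-misfit level, from a priori noise estimates
    x_p : prescribed model toward which the penalty term pulls the estimate
    """
    W_d = np.eye(len(e)) if W_d is None else W_d
    W_x = np.eye(len(x)) if W_x is None else W_x
    data_term = np.linalg.norm(W_d @ e) ** 2 - chi ** 2
    penalty = np.linalg.norm(W_x @ (x - x_p)) ** 2
    return 0.5 * (mu * data_term + penalty)

# With zero residual and x = x_p, only the -chi^2 offset survives:
val = cost(np.zeros(2), e=np.zeros(3), mu=2.0, chi=1.0, x_p=np.zeros(2))
assert np.isclose(val, -1.0)  # 0.5 * (2.0 * (0 - 1^2) + 0) = -1
```

Note the tradeoff the code makes explicit: a large $\mu$ weights the data misfit heavily, while a small $\mu$ lets the penalty term dominate and keeps the estimate close to $\mathbf{x}_p$.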