
A Unified Framework for Uncertainty, Compatibility Analysis, and Data Fusion for Multi-Stereo 3-D Shape Estimation

IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 59, NO. 11, NOVEMBER 2010

Mariolino De Cecco, Marco Pertile, Luca Baglivo, Massimo Lunardelli, Francesco Setti, and Mattia Tavernini

Abstract—This paper describes the uncertainty analysis performed for the reconstruction of a 3-D shape. Multiple stereo systems are employed to measure a 3-D surface with superimposed colored markers. The procedure comprises a detailed uncertainty analysis of all measurement phases, and the evaluated uncertainties are employed to perform a compatibility analysis of points acquired by different stereo pairs. The compatible acquired markers are statistically merged in order to obtain the measurement of a 3-D shape and an evaluation of the associated uncertainty. Both the compatibility analysis and the measurement merging are based on the evaluated uncertainty.

Index Terms—Multiple stereo systems, uncertainty evaluation, 3-D shape reconstruction.

I. INTRODUCTION

Three-dimensional shape reconstruction using vision systems is a technique widely used to reconstruct spatial objects. Its flexibility has fostered the development of a significant number of algorithms and methods that are available in the literature. However, the increasing resolution of CCD and CMOS sensors is bringing actual measurement performance toward limits that were not attainable until a few years ago. In 3-D vision metrology, calibration, matching, and uncertainty estimation need to be developed at even more reliable levels in order to follow the increased resolution with a comparable level of accuracy. Conversely, increasing computation power is enabling these applications in real-time computing (RTC).

As regards the hardware setup, one main distinction is that between multi-camera and multi-stereo. Multiple stereo systems reconstruct shapes by associating camera couples.
The use of multiple pairs of cameras allows the reconstruction of different portions that are visible to each pair and partially overlapping. Compared with the multi-camera procedure, the multi-stereo approach allows a better match between the two views, which are commonly very close to each other. However, the short baseline is prone to high depth uncertainty. In order to increase shape accuracy, the different parts can be matched using Iterative Closest Point (ICP) methods [11], [12]. Then, for each point, a compatibility analysis can be performed with its neighbors in order to fuse the estimates coming from the different couples.

Manuscript received July 30, 2009; revised September 25, 2009; accepted October 7, 2009. Date of current version October 13, 2010. The Associate Editor coordinating the review process for this paper was Dr. Alessandro Ferrero. M. De Cecco, L. Baglivo, M. Lunardelli, F. Setti, and M. Tavernini are with the Department of Mechanical and Structural Engineering, University of Trento, 38100 Trento, Italy. M. Pertile is with the Department of Mechanical Engineering, University of Padova, 35131 Padova, Italy. Color versions of one or more of the figures in this paper are available online. Digital Object Identifier 10.1109/TIM.2010.2060930

This paper describes a method for uncertainty estimation, compatibility analysis, and position fusion of 3-D points reconstructed from multi-stereo systems. The algorithm can be used in RTC.

Several methods can be used to match the information of different cameras: shape detection, edge detection, correlation analysis, marker matching in the two views, etc. For instance, [1] describes a method for surface reconstruction that employs a Lagrangian polynomial for surface initialization and a quadratic variation method to improve the results. [2] recovers a first approximation of the shape through the object silhouettes seen by the multiple cameras.
Then, the shape is improved by a carving approach, employing local correlation analyses between the images taken by different cameras. This approach is based on the hypothesis that, if a 3-D point belongs to the object surface, its projections into the different cameras that really see it will be closely correlated. [3] presents a method for the spatial grouping of 3-D points viewed by multiple stereo systems. The grouping algorithm comprises a 3-D space-compressing step in order to map the 3-D points into a space of even density, which allows easier grouping by a neighborhood approach; a subsequent decompressing step preserves the adjacencies of the compressed space and helps the fusion of the grouped points seen by the different cameras.

One of the most important aspects of the multi-stereo approach is the fusion of data coming from different stereo pairs. The process of merging images requires techniques that decide whether points should be merged or not; if they are to be merged, the result must account for each estimate.

One drawback of the previously mentioned approaches is that they do not evaluate the uncertainty of the reconstructed object. If a multiple stereo system is used to perform measurements, a region of confidence of the measured 3-D points or objects should definitely be evaluated to a desired level of confidence. The method described in this paper develops a symbolic uncertainty estimation that merges the measurements performed with different stereo pairs and yields the uncertainty associated with the measured quantities.

Each step of the measurement process is affected by uncertainty, which propagates to the final 3-D estimates.
Uncertainty has different sources, such as noise in image acquisition, defocusing, intrinsic and extrinsic calibration of the parameters of each camera, triangulation, the choice of the points to be merged, and the merging method for the final fusion of 3-D parts.

In [4], an uncertainty analysis is presented for a binocular stereo reconstruction, but it does not describe a method to compare and fuse the measurements of different stereo pairs. In addition, in our method, the covariance of the parameters estimated during the calibration phase is obtained by means of a Monte Carlo simulation, thereby avoiding linearization. Moreover, the correlation between the different parameters is analyzed in depth, giving rise to a covariance that can be considered sparse but not simply diagonal, as in [4].

A method that takes uncertainty into consideration in order to choose the best combination of camera pairs for stereo triangulation is described in [5]. In this case, however, the uncertainties associated with the intrinsic and extrinsic camera calibration parameters are not taken into account, and a simplified geometrical uncertainty estimation and a propagation algorithm that makes use of scalar instead of vector quantities are employed. In this way, cross-correlations between the different sources of uncertainty are neglected. The resulting criterion for camera-pair association considers the relative position between the cameras and each point, thus leading to association zones within the field of view and neglecting the uncertainty associated with the matched point lying on the CCD of each stereo pair. The latter depends on the relative view of the object seen by each camera, i.e., on the orientation of the surface tangent to the object with respect to the camera.
In our method, the contribution of each camera pair is used for the final result.

This paper presents a detailed uncertainty analysis performed by applying the general method described in the Guide to the Expression of Uncertainty in Measurement (GUM) [6] and its Supplement 1 [7]. The described procedure also includes a statistical compatibility analysis performed before the fusion of the different stereo pairs.

The experimental verification used to assess achievable results is based on the acquisition of markers superimposed on the shape to be reconstructed by means of pairs of cameras. The centroid of each marker is detected in each camera, and matching is performed by means of epipolar geometry. Depth evaluation is performed for each pair. Then, the compatibility of the points measured by different stereo pairs is verified. Lastly, the fusion of compatible points is performed in a common reference frame for all cameras. Uncertainty evaluation associates a covariance matrix with each 3-D point reconstructed by each stereo pair. The information contained in the uncertainty ellipsoid is the basis for verifying compatibility, merging compatible points, and estimating their uncertainty. In such a process, each point reconstructed in 3-D space is not only identified by its coordinates, but is also associated with an uncertainty ellipsoid deriving from the whole reconstruction process.

This information is also available for other purposes, e.g., surface interpolation, which can examine not only the position of a point but also its uncertainty. This can significantly improve interpolation and therefore shape reconstruction. Uncertainty can also improve ICP registration of point clouds by giving confidence to each point according to its accuracy [11].
Bundle adjustment would also give more confidence to accurate reprojections, thus leading to a more accurate reconstruction of structure and relative pose (or motion, depending on the application) [15].

In the following sections, the study method is first described (Sections II–V) with a detailed uncertainty analysis, and is then applied to the measurement of a 3-D shape with superimposed markers. Experimental results are presented in Section VI. These results are an important extension of those presented in [16], as they include a bundle adjustment and an ICP method in order to verify the overall proposed method.

II. STEREO CAMERA MODEL

As described in [8], the stereo system comprises two cameras, 1 and 2. Each camera has a corresponding frame of reference with its z-axis aligned with the optical axis. Considering the model of each camera, the generic position of a point feature within the field of view of both cameras may be written as

$$^{i}\mathbf{X} = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \lambda_i \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} = \lambda_i\,\mathbf{x}_i \qquad (1)$$

where i is 1 or 2, depending on which camera is considered; ^1X (or ^2X) is the point position expressed in frame 1 (or 2) associated with camera 1 (or 2); x_1 (or x_2) is the projection of the point ^1X (or ^2X) with an ideal camera aligned like camera 1 (or 2) and with a focal length equal to 1 (in length units); and λ_i ∈ R+ is a scalar parameter associated with the depth of the point.

Each camera is characterized by a set of intrinsic parameters that are evaluated during camera calibration, as described below, and that define the functional relationship between the projection x_i, expressed in length units, and the projection x'_i, expressed in pixels (x'_i and y'_i are the column number and row number, respectively, from the upper left corner of the sensor). An ideal pinhole camera yields the following direct model.
 x  =  f  m  · s · y i  +  x  0 y  =  − f  m  · x i  +  y  0 ⇔ x  i  =  x  i y  i 1  =  0  f  m  · s x  0 ,i − f  m  0  y  0 ,i 0 0 1  x i y i 1  = K · x i .  (2)The inverse model becomes  x  =  y  i − f  m +  y  0 ,i f  m y  =  x  i f  m · s  −  x  0 ,i f  m · s ⇔ x i =  x i y i 1  =  0  −  1 f  m y  0 ,i f  m 1 f  m · s  0  − x  0 ,i f  m · s 0 0 1  ·  x  i y  i 1  = K − 1 · x  i  (3)where  f  m  =  f   · Sx ;  s  =  Sy/Sx ; and  Sx  =  pixels/lengthunit  along the  x  axis (not  x  );  Sy  =  pixels/length unit  along  2836 IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 59, NO. 11, NOVEMBER 2010 the  y  axis (not  y  );  f   is the focal length in length units; x  0 ,i  and  y  0 ,i  are the distances (in pixel columns and rows,respectively) between the upper left corner and the principalpoint (intersection of the optical axis with the sensor).III. T RIANGULATION When both cameras of the stereo system are calibrated, the3-D position of a feature point in space may be measured usinga triangulation algorithm. In this paper, the algorithm of themiddle point is used for triangulation. In theory, when a pointfeature in space X   is acquired by both cameras, the preimagelines that project point X   into the sensors should intersect inpoint  X   itself. In practice, due to measurement uncertainty,the lines do not intersect. Thus, the algorithm, starting fromprojected points  x  i , finds 3-D points  X 1 ,s  and  X 2 ,s  with theminimum distance belonging to the preimage lines of cameras1 and 2, respectively. Points  X 1 ,s  and  X 2 ,s  define a segmentorthogonal to the two skew preimage lines. Middle point X m of this segment is selected as the measured 3-D point of thefeature.Equation (1) can be used to find two generic points, X 1  and X 2 . 
Each of these points belongs to the corresponding preimage line and is expressed in the reference frame associated with the corresponding camera:

$$^{1}\mathbf{X}_1 = \lambda_1\,\mathbf{x}_1, \qquad ^{2}\mathbf{X}_2 = \lambda_2\,\mathbf{x}_2. \qquad (4)$$

Expressing both points in the world reference frame yields

$$^{w}\mathbf{X}_1 = \lambda_1\,^{w}_{1}\mathbf{R}\,\mathbf{x}_1 + {}^{w}\mathbf{P}_{01}, \qquad ^{w}\mathbf{X}_2 = \lambda_2\,^{w}_{2}\mathbf{R}\,\mathbf{x}_2 + {}^{w}\mathbf{P}_{02} \qquad (5)$$

where ^w_iR is the rotation matrix from frame i to the world frame and ^wP_{0i} is the origin of frame i expressed in the world frame.

In order to find the points X_{1,s} and X_{2,s} with minimum distance, the following cost function g is defined and then minimized by imposing the gradient [∂g/∂λ_1  ∂g/∂λ_2] equal to zero:

$$g = \left\| {}^{w}\mathbf{X}_1 - {}^{w}\mathbf{X}_2 \right\|^2 = \left({}^{w}\mathbf{X}_1 - {}^{w}\mathbf{X}_2\right)^T \left({}^{w}\mathbf{X}_1 - {}^{w}\mathbf{X}_2\right)$$

where superscript T indicates matrix transposition. Defining ^wP_{12} = ^wP_{01} − ^wP_{02}, which is the origin of frame 1 with reference to the origin of frame 2 and expressed in the world frame, and using the composition of rotation matrices yield

$$g = \lambda_1^2\,\mathbf{x}_1^T\mathbf{x}_1 - 2\lambda_1\lambda_2\,\mathbf{x}_1^T\,{}^{1}_{2}\mathbf{R}\,\mathbf{x}_2 - 2\lambda_1\,\mathbf{x}_1^T\,{}^{1}\mathbf{P}_{21} - 2\lambda_2\,\mathbf{x}_2^T\,{}^{2}\mathbf{P}_{12} + \lambda_2^2\,\mathbf{x}_2^T\mathbf{x}_2 + {}^{w}\mathbf{P}_{12}^T\,{}^{w}\mathbf{P}_{12} \qquad (6)$$

where ^1P_{12} = ^1_wR ^wP_{12} = −^1P_{21}; ^2P_{12} = ^2_wR ^wP_{12}; ^1P_{21} is the origin of frame 2 with reference to the origin of frame 1 and expressed in frame 1; ^2P_{12} is the origin of frame 1 with reference to the origin of frame 2 and expressed in frame 2; ^1_wR (or ^2_wR) is the rotation matrix from the world frame to frame 1 (or 2); and ^1_2R = ^2_1R^T is the rotation matrix from frame 2 to frame 1.

Taking partial derivatives and setting the gradient to zero yield the following equation system:

$$\frac{\partial g}{\partial\lambda_1} = 2\lambda_1\,\mathbf{x}_1^T\mathbf{x}_1 - 2\lambda_2\,\mathbf{x}_1^T\,{}^{1}_{2}\mathbf{R}\,\mathbf{x}_2 - 2\,\mathbf{x}_1^T\,{}^{1}\mathbf{P}_{21} = 0$$

$$\frac{\partial g}{\partial\lambda_2} = -2\lambda_1\,\mathbf{x}_1^T\,{}^{1}_{2}\mathbf{R}\,\mathbf{x}_2 + 2\lambda_2\,\mathbf{x}_2^T\mathbf{x}_2 - 2\,\mathbf{x}_2^T\,{}^{2}\mathbf{P}_{12} = 0.$$

The solutions of this system are the values λ_{1,s} and λ_{2,s} that define the minimum-distance segment between the two skew preimage lines:

$$\lambda_{1,s} = \frac{\left(\mathbf{x}_1^T\,{}^{1}_{2}\mathbf{R}\,\mathbf{x}_2\right)\left(\mathbf{x}_2^T\,{}^{2}\mathbf{P}_{12}\right) + \left(\mathbf{x}_2^T\mathbf{x}_2\right)\left(\mathbf{x}_1^T\,{}^{1}\mathbf{P}_{21}\right)}{\left(\mathbf{x}_1^T\mathbf{x}_1\right)\left(\mathbf{x}_2^T\mathbf{x}_2\right) - \left(\mathbf{x}_1^T\,{}^{1}_{2}\mathbf{R}\,\mathbf{x}_2\right)^2} \qquad (7)$$

$$\lambda_{2,s} = \frac{\left(\mathbf{x}_1^T\mathbf{x}_1\right)\left(\mathbf{x}_2^T\,{}^{2}\mathbf{P}_{12}\right) + \left(\mathbf{x}_1^T\,{}^{1}_{2}\mathbf{R}\,\mathbf{x}_2\right)\left(\mathbf{x}_1^T\,{}^{1}\mathbf{P}_{21}\right)}{\left(\mathbf{x}_1^T\mathbf{x}_1\right)\left(\mathbf{x}_2^T\mathbf{x}_2\right) - \left(\mathbf{x}_1^T\,{}^{1}_{2}\mathbf{R}\,\mathbf{x}_2\right)^2}. \qquad (8)$$

Thus, the extreme points X_{1,s} and X_{2,s} of the minimum-distance segment are

$$^{w}\mathbf{X}_{1,s} = \lambda_{1,s}\,^{w}_{1}\mathbf{R}\,\mathbf{x}_1 + {}^{w}\mathbf{P}_{01}, \qquad ^{w}\mathbf{X}_{2,s} = \lambda_{2,s}\,^{w}_{2}\mathbf{R}\,\mathbf{x}_2 + {}^{w}\mathbf{P}_{02}$$

and the middle point associated with the point feature is

$$\mathbf{X}_m = \frac{\mathbf{X}_{1,s} + \mathbf{X}_{2,s}}{2}. \qquad (9)$$

IV. UNCERTAINTY ANALYSIS

In the triangulation algorithm previously described, the triangulated point X_m is computed from the values of the following quantities: 1) x'_i (i = 1, 2), the projections of the 3-D point X in cameras 1 and 2, which are presumed to be known from the measurement (the evaluation of the uncertainty of x'_i is described in Section IV-B); 2) f_{m,i}, s_i, x'_{0,i}, and y'_{0,i}, the intrinsic calibration parameters of the cameras; 3) ^wP_{0i}, the origin positions of the camera frames; and 4) ^w_iR, the rotation matrices from camera frame i to the world frame. ^wP_{0i} and ^w_iR are the extrinsic calibration parameters of the cameras. Both intrinsic and extrinsic parameters, with their uncertainties, are evaluated by the calibration, as described in Section IV-A. Each rotation matrix can be conveniently expressed by a set of three Euler angles defining rotations around three different axes: α_i, β_i, and γ_i. Thus, for each camera, the extrinsic parameters are the three components of ^wP_{0i} and the three Euler angles α_i, β_i, and γ_i.
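The midpoint triangulation of (4)–(9) can be transcribed almost literally. The sketch below assumes the projections have already been mapped to normalized homogeneous coordinates [x_i, y_i, 1] via (3), and that the extrinsics (R_i, P_0i) map camera frame i to the world frame; the synthetic scene at the end is illustrative only:

```python
import numpy as np

def midpoint_triangulate(x1, x2, R1, P01, R2, P02):
    """Middle-point triangulation of Section III.
    x1, x2 : normalized homogeneous projections [x_i, y_i, 1]
    R_i, P0_i : rotation and origin of camera frame i in the world frame."""
    R12 = R1.T @ R2               # rotation from frame 2 to frame 1
    wP12 = P01 - P02              # origin of frame 1 w.r.t. frame 2, world frame
    P21_1 = -(R1.T @ wP12)        # origin of frame 2 in frame 1
    P12_2 = R2.T @ wP12           # origin of frame 1 in frame 2
    a, c = x1 @ x1, x2 @ x2
    b = x1 @ R12 @ x2
    d, e = x1 @ P21_1, x2 @ P12_2
    den = a * c - b ** 2          # common denominator of (7)-(8)
    lam1 = (b * e + c * d) / den  # lambda_{1,s}, eq. (7)
    lam2 = (a * e + b * d) / den  # lambda_{2,s}, eq. (8)
    X1s = lam1 * (R1 @ x1) + P01  # extremes of the minimum-distance segment
    X2s = lam2 * (R2 @ x2) + P02
    return 0.5 * (X1s + X2s)      # middle point, eq. (9)

# Synthetic check: exact projections make the preimage lines intersect at X.
X_true = np.array([0.2, 0.1, 3.0])
R1, P01 = np.eye(3), np.zeros(3)
th = np.deg2rad(-10.0)            # camera 2: small rotation about y, baseline 0.5
R2 = np.array([[np.cos(th), 0.0, np.sin(th)],
               [0.0,        1.0, 0.0],
               [-np.sin(th), 0.0, np.cos(th)]])
P02 = np.array([0.5, 0.0, 0.0])

def project(R, P):
    Xc = R.T @ (X_true - P)       # point in camera frame, eq. (1)
    return Xc / Xc[2]

Xm = midpoint_triangulate(project(R1, P01), project(R2, P02), R1, P01, R2, P02)
assert np.allclose(Xm, X_true, atol=1e-9)
```

With noisy projections the two preimage lines become skew, and the returned midpoint is the X_m whose uncertainty is analyzed in Section IV.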
A. Calibration Uncertainty

The parameters previously defined in the camera model are estimated through camera calibration. The procedure used is similar to that proposed by Tsai [9], with a planar target that translates orthogonally to itself, generating a 3-D grid of calibration points. In the first step, the parameters are obtained by a pseudo-inverse solution of a least-squares problem employing points of the calibration volume and image points. After this first estimation of the intrinsic and extrinsic parameters, an iterative optimization is performed in order to minimize the errors between the acquired image points and the projections of the 3-D calibration points on the image plane with the estimated parameters.

Before the calibration algorithm can be applied, optical radial distortions are estimated and compensated by rectifying the distorted images. The radial distortion coefficients are estimated by compensating the curvature induced by radial distortion on the calibration grid [10].

The camera parameter uncertainties are evaluated by propagating the uncertainties of the 3-D calibration points and those of the image points [4], [9]. A Monte Carlo simulation is used.

There are various reasons for the deviation between the measured image points and the projection of the 3-D calibration points on the image plane: 1) simplification of the camera model; 2) camera resolution; 3) dimensional accuracy of the calibration grid; and 4) geometrical and dimensional accuracy of the grid translation. As the motion of the grid used to generate the calibration volume is not perfectly orthogonal to the optical axis of the camera, a bias is induced in the uncertainty distribution of the grid points, so that the uncertainty becomes nonsymmetric. Two more parameters are therefore introduced to characterize the horizontal and vertical deviations from orthogonality.
The first parameter, α_R, considers the angle between the translation direction and the grid rows; in the ideal motion of the grid, the value of α_R is 90°. The second parameter, β_C, considers the angle between the translation direction and the grid columns; the ideal value here is also 90°.

Summarizing, the calibration routine consists of four steps.

1) Estimation and compensation of optical radial distortions.
2) First estimation of the parameters that define the imperfection of the 3-D calibration target (α_R, β_C). These values are obtained by iteratively minimizing the deviation between the measured image coordinates of the calibration points and those reconstructed by projecting the volume points. The principal point is assumed to lie in the middle of the image. With this assumption, once the systematic deviations from orthogonality have been compensated by α_R and β_C, the extrinsic and intrinsic parameters can be derived from a pseudo-inverse solution [9].
3) Final iterative optimization of all camera parameters (including the principal points), iteratively minimizing the deviation between the measured image points and those reconstructed by projecting the 3-D calibration points. This step supplies the final estimation of the intrinsic and extrinsic parameters. The standard deviation σ of the residuals after the projection is combined with the resolution uncertainty and is used to evaluate the uncertainty associated with the image points used in step 4.
4) Lastly, through a Monte Carlo simulation, the uncertainties of the image points (as evaluated in step 3) and of the 3-D calibration points are propagated in order to evaluate the uncertainty of the calibration parameters.

Fig. 1. Example of a badly segmented elliptical marker and the fitted one (on the left), and distribution Δb (on the right).

Steps 2 and 3 usually require 5–6 iterations each, and always fewer than 10 iterations, while the Monte Carlo simulation in step 4 uses 10^5 iterations.
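Step 4 can be sketched on a deliberately reduced calibration model: here the only parameters are f_m and x'_0 (one row of (2), with s = 1 assumed known), so a linear least-squares fit plays the role of the full calibration, and the parameter covariance is the sample covariance of the re-fitted estimates. All numbers are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Known normalized coordinates of the calibration points (assumed values).
y_norm = np.linspace(-0.2, 0.2, 25)
f_true, x0_true = 1500.0, 320.0
x_pix_meas = f_true * y_norm + x0_true     # ideal pixel columns, (2) with s = 1

sigma_px = 0.3                             # image-point std from step 3
A = np.column_stack([y_norm, np.ones_like(y_norm)])

# Monte Carlo propagation: perturb the observations, re-estimate the
# parameters, and take the sample covariance of the estimates.
draws = []
for _ in range(10_000):                    # the paper uses 1e5 iterations
    noisy = x_pix_meas + rng.normal(0.0, sigma_px, x_pix_meas.shape)
    params, *_ = np.linalg.lstsq(A, noisy, rcond=None)
    draws.append(params)
U_cal = np.cov(np.array(draws).T)          # covariance of [f_m, x'_0]
```

The real step 4 perturbs all image and grid points and re-runs the full iterative fit, but the structure is the same: no linearization of the calibration model is needed, which is the advantage claimed over [4].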
B. Matching Uncertainty

Point X in 3-D space is defined as the centroid of a circular marker; for this reason, the determination of the projection x'_i of point X on the CCD is always affected by uncertainty. First, digitalization and subsequent binarization of the image deform the circular shape into a polygonal shape, and the centroids of these two shapes are not the same. Second, the marker, which was originally a circle, is deformed in order to adhere to the surface of the target; as a first approximation, the deformed marker is expressed as an ellipse. Third, due to perspective effects, an ellipse that is not perpendicular to the optical axis of the camera is projected onto the CCD as an ovoid.

A simplified model of the perspective geometry identifies each marker projection as an ellipse. This ellipse can be fitted by the covariance matrix of the distribution of the pixels recognized as marker. The projected marker can then be compared with the corresponding covariance ellipse, and the 2-D distance between the two boundaries (Δb) can be computed as a function of the angle α*. This distribution, Δb(α*) = b_real(α*) − b_fit(α*), is a map R → R². Fig. 1 shows an example of a badly segmented elliptical marker, the fitted one, and the distribution Δb.

Two parameters, D_C and C_b, which express the "difference" between the projected ovoid and the estimated ellipse, can be computed: D_C = [x_ΔC  y_ΔC]^T ∈ R² is the displacement between the centroids of the segmented marker and the fitted ellipse, and C_b is the covariance matrix of the distribution Δb. The uncertainty of the centroid of the segmented marker is represented by a covariance matrix that is a function of these two parameters:

$$\mathbf{U}_{meas} = f(\mathbf{D}_C, \mathbf{C}_b) = a \begin{bmatrix} x_{\Delta C} & 0 \\ 0 & y_{\Delta C} \end{bmatrix} + b\,\mathbf{C}_b.$$

The larger the difference between the projected ovoid and the estimated ellipse, the larger the uncertainty associated with the computed centroid.
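The assembly of U_meas can be sketched as follows. The absolute values on the centroid displacement are an assumption added here to keep the diagonal nonnegative (the excerpt does not state how signed displacements are handled), and the numbers are hypothetical:

```python
import numpy as np

def marker_covariance(D_C, C_b, a, b):
    """U_meas = a * diag(|x_DC|, |y_DC|) + b * C_b (Section IV-B).
    D_C : centroid displacement between segmented marker and fitted ellipse
    C_b : covariance of the boundary-distance distribution Delta_b
    a, b: weights; the paper calibrates them with grids of circular markers.
    The abs() is an assumption, not stated in the paper."""
    x_dc, y_dc = np.abs(np.asarray(D_C, dtype=float))
    return a * np.diag([x_dc, y_dc]) + b * np.asarray(C_b, dtype=float)

# Hypothetical values: half-pixel centroid shift, sub-pixel boundary scatter.
U_meas = marker_covariance(D_C=[0.5, 0.2],
                           C_b=[[0.04, 0.01], [0.01, 0.09]],
                           a=0.1, b=1.0)
```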
The parameters a and b in this function are evaluated by a calibration procedure that uses a grid of circular photolithographic markers. This grid is moved to a set of known positions and orientations, and the computed centroid of the segmented marker is compared with the projected reference on the CCD. In order to obtain a large set of views, two kinds of grid are used: the first is a planar surface, and the second is the lateral surface of a cylinder.

C. Uncertainty Propagation

The uncertainty evaluation for the triangulated point X_m of each stereo pair becomes an uncertainty propagation problem that employs the functional model between the input and output quantities:

$$\mathbf{X}_m = f\left(\mathbf{x}'_i, f_{m,i}, s_i, x'_{0,i}, y'_{0,i}, {}^{w}\mathbf{P}_{0i}, \alpha_i, \beta_i, \gamma_i\right), \qquad i = 1, 2. \qquad (10)$$

Several uncertainty propagation methods are known. Each of them is based on a theory (i.e., probability, possibility, or evidence theory), which expresses uncertainty by corresponding suitable means (i.e., probability density functions, fuzzy variables, or random-fuzzy variables). In this paper, according to the GUM [6], the uncertainty is analyzed within probability theory and is expressed by probability density functions (PDFs). In order to calculate the propagated uncertainty of the triangulated position X_m, taking into account the contributions of all uncertainty sources, the method based on the propagation formula of the GUM [6] is used. This method is selected instead of, for example, the Monte Carlo propagation approach, to increase computing speed and to allow RTC implementation.
The propagation formula uses the sensitivity coefficients obtained from the linearization of the mathematical model. This method is based on the hypothesis that a probability distribution, assumed or experimentally determined, can be associated with every uncertainty source considered, and that a corresponding standard uncertainty can be obtained from the probability distribution.

The GUM proposes a formula for the calculation of the uncertainty to be associated with the output quantities X_m, obtainable as an indirect measurement of all input quantities:

$$\mathbf{U}_{out} = \mathbf{c}\,\mathbf{U}_{in}\,\mathbf{c}^T$$

where U_in ∈ R^{24×24} is the covariance matrix associated with the input quantities, which number 24 in this application; U_out ∈ R^{3×3} is the covariance matrix associated with the output quantities, which are the three components of X_m; and c ∈ R^{3×24} is the matrix of the sensitivity coefficients, obtained from the partial derivatives of f() with respect to the input variables:

$$c_{i,j} = \frac{\partial f_i}{\partial\,\mathrm{input}_j}.$$

In this application, the following assumptions are made.

a) The 24 scalar input quantities are considered in this order: x'_1^T, x'_2^T, f_{m,1}, s_1, x'_{0,1}, y'_{0,1}, f_{m,2}, s_2, x'_{0,2}, y'_{0,2}, α_1, β_1, γ_1, ^wP_{01}^T, α_2, β_2, γ_2, ^wP_{02}^T.
b) The two components of x'_i of each camera are assumed to be cross-correlated and not correlated with any other input quantity.
c) The intrinsic calibration parameters of each camera are assumed to be cross-correlated and not correlated with the corresponding parameters of the other camera or any other input quantity.
d) The extrinsic calibration parameters of each camera are assumed to be cross-correlated among themselves and not correlated with the corresponding parameters of the other camera and all other input quantities.

These assumptions allow us to build the 24 × 24 covariance matrix of the scalar input quantities by placing six reduced-dimension covariance matrices along the diagonal of U_in and assigning zero values to all other elements of U_in. The six matrices are the following: 1) U_meas,1 ∈ R^{2×2}, associated with measurement x'_1 of camera 1; 2) U_meas,2 ∈ R^{2×2}, associated with measurement x'_2 of camera 2; 3) U_int,1 ∈ R^{4×4}, associated with the intrinsic parameters f_{m,1}, s_1, x'_{0,1}, and y'_{0,1} of camera 1; 4) U_int,2 ∈ R^{4×4}, associated with the intrinsic parameters f_{m,2}, s_2, x'_{0,2}, and y'_{0,2} of camera 2; 5) U_ext,1 ∈ R^{6×6}, associated with the extrinsic parameters α_1, β_1, γ_1, and ^wP_{01}^T of camera 1; and 6) U_ext,2 ∈ R^{6×6}, associated with the extrinsic parameters α_2, β_2, γ_2, and ^wP_{02}^T of camera 2.

The propagation model between the input and output quantities described in Section III, although not very simple, has the advantage of being explicit. Thus, it is possible to compute the sensitivity coefficients explicitly as symbolic expressions, and it is not necessary to evaluate them numerically, which often happens with complex applications.

V. COMPATIBILITY ANALYSIS

In non-ideal conditions, the stereo systems at different positions provide different measurements of the same feature (such as the center of mass of a colored spot on a surface). Each measurement comes with its uncertainty, and a fusion process can combine them into a single best estimate with the associated fused uncertainty. Before points measured by different stereo systems can be fused, it is necessary to state whether they are associated with the same feature or, statistically speaking, whether they belong to the same distribution. A compatibility analysis of the measured points is therefore performed. A compatibility test on two points, X_1 and X_2 with covariances C_1 and C_2, is based on the consideration that the difference X_1 − X_2 is distributed with zero mean and covariance C_1 + C_2.
On the Gaussian assumption, the Mahalanobis distance (MD)

$$D^2 = (\mathbf{X}_1 - \mathbf{X}_2)^T(\mathbf{C}_1 + \mathbf{C}_2)^{-1}(\mathbf{X}_1 - \mathbf{X}_2)$$

has a χ² distribution with a number of degrees of freedom ν equal to the dimension of the vectors X. Once a confidence level α has been chosen, the two points are stated to be compatible if D² ≤ χ²(ν, α).

Let X_{i,m} be the i-th 3-D point measured by stereo system m with covariance C_{i,m}. The analysis is made up of the following steps: from the measured point sets of stereo systems m and n, each X_{i,m} is associated with the point X_{j,n} of system n having the minimum MD to X_{i,m}; if the compatibility test is passed, the association is accepted and the associated couple is fused, yielding the best estimate

$$\mathbf{X}^{*}_{k,mn} = \mathbf{C}_{j,n}\left(\mathbf{C}_{i,m} + \mathbf{C}_{j,n}\right)^{-1}\mathbf{X}_{i,m} + \mathbf{C}_{i,m}\left(\mathbf{C}_{i,m} + \mathbf{C}_{j,n}\right)^{-1}\mathbf{X}_{j,n}.$$
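The compatibility test and the fusion rule above can be sketched together. The χ² threshold is hard-coded for ν = 3 and α = 95%; the fused covariance C_1(C_1 + C_2)^{-1}C_2 is the standard result for this linear combination and is an addition here, since the excerpt stops at the fused point:

```python
import numpy as np

CHI2_3DOF_95 = 7.815  # chi-square 95% quantile for nu = 3 degrees of freedom

def compatible(X1, C1, X2, C2, thresh=CHI2_3DOF_95):
    """Mahalanobis compatibility test of Section V:
    accept when D^2 = (X1-X2)^T (C1+C2)^(-1) (X1-X2) <= chi2(nu, alpha)."""
    d = np.asarray(X1) - np.asarray(X2)
    D2 = d @ np.linalg.inv(np.asarray(C1) + np.asarray(C2)) @ d
    return D2 <= thresh

def fuse(X1, C1, X2, C2):
    """Covariance-weighted fusion of a compatible pair (last equation above).
    The fused covariance C1 (C1+C2)^(-1) C2 is the standard linear-fusion
    result, not stated in the excerpt."""
    S_inv = np.linalg.inv(np.asarray(C1) + np.asarray(C2))
    X = C2 @ S_inv @ X1 + C1 @ S_inv @ X2
    return X, C1 @ S_inv @ C2

# Two measurements of the same marker from stereo pairs m and n (made-up data):
X1, C1 = np.array([0.10, 0.00, 1.00]), 0.01 * np.eye(3)
X2, C2 = np.array([0.11, 0.00, 1.00]), 0.01 * np.eye(3)
if compatible(X1, C1, X2, C2):
    X_fused, C_fused = fuse(X1, C1, X2, C2)  # equal covariances -> midpoint
```

With equal covariances the fused point reduces to the arithmetic mean and the fused covariance to half of either input, which is a quick sanity check of the weighting.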