
Restoration of Multiple Images with Motion Blur in Different Directions

Alex Rav-Acha    Shmuel Peleg
School of Computer Science and Engineering
The Hebrew University of Jerusalem
91904 Jerusalem, ISRAEL

Abstract

Images degraded by motion blur can be restored when several blurred images are given and the direction of motion blur in each image is different. Given two motion-blurred images, the best restoration is obtained when the directions of motion blur in the two images are orthogonal. Motion blur in different directions is common, for example, in the case of small hand-held digital cameras, due to fast hand trembling and the light weight of the camera. Restoration examples are given on simulated data as well as on images with real motion blur.

1 Introduction

Blurred images can be restored when the blur function is known [1]. Restoration of a single motion-blurred image without prior knowledge of the blur function is much harder. Early deblurring methods treated blurs that can be characterized by a regular pattern of zeros in the frequency domain, such as uniform motion blur [9]. More recent methods deal with a wider range of blurs, but require strong assumptions on the image model: for example, that the image is spatially isotropic [12], or that it can be modeled as an autoregressive process [7]. A summary and analysis of many methods for "blind deconvolution" can be found in [5]. When the image motion is constant for the entire imaging period, the motion blur can be inferred from motion analysis and used for restoration [11, 10, 2, 6]. Unfortunately, the assumption of constant motion during the entire imaging process does not hold in many cases of motion blur. For example, analysis of images taken with small digital cameras shows that consecutive images covering the same scene have different motion blur.
In particular, the direction of motion blur differs from one image to another due to trembling of the hand. In [8] the image restoration algorithm included an estimation of the PSF (Point Spread Function) from two images. However, it assumes a pure translation between the images, and uses the locations of singularities in the frequency domain, which are not stable. In this paper we describe how different images, each degraded by motion blur in a different direction, can be used to generate a restored image. It is assumed that the motion blur can be described by a convolution with a one-dimensional kernel. No knowledge is necessary regarding the actual motion blur other than its direction, which is pre-computed either by one of the existing methods [9, 12] or using the scheme offered in this paper. The relative image displacements can be image translations and image rotations.

2 A Model for Motion Blur

Let g denote the observed image, degraded by a motion blur with a one-dimensional kernel m = (m_0, ..., m_{K-1}) at an angle α. Let f be the original image. We assume that g was degraded in the following way:

    g(x, y) = (f *_α m)(x, y) = Σ_{i=0}^{K-1} m_i · f(x + i·cos α, y + i·sin α)

This assumption is valid when the motion blur is uniform for the entire image. Otherwise, the image can be divided into regions having an approximately constant motion blur. For a discrete image f, interpolation is used to estimate the gray levels at non-integer locations.

3 Deblurring in the Spatial Domain

Using two images for deblurring requires alignment between them. However, accurate alignment can be made only by accounting for the motion blur, as seen in Fig. 1. This section describes the algorithm for deblurring two images degraded by motion blur in the spatial domain.
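As a concrete illustration of the blur model of Sect. 2, the following sketch (not the authors' code; the function name, bilinear interpolation, and wraparound borders are our own assumptions) applies a 1-D kernel m along an angle α:

```python
import numpy as np

def directional_blur(f, kernel, alpha):
    """Apply g(x, y) = sum_i m_i * f(x + i*cos(alpha), y + i*sin(alpha)).

    Non-integer sample locations are handled by bilinear interpolation,
    and the borders wrap around (an implementation choice, not from the
    paper).  Arrays are indexed as f[y, x].
    """
    h, w = f.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    g = np.zeros_like(f, dtype=float)
    for i, m_i in enumerate(kernel):
        xs = xx + i * np.cos(alpha)        # sample location along the
        ys = yy + i * np.sin(alpha)        # blur direction
        x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
        fx, fy = xs - x0, ys - y0          # bilinear weights
        pix = lambda yi, xi: f[np.mod(yi, h), np.mod(xi, w)]
        g += m_i * ((1 - fx) * (1 - fy) * pix(y0, x0)
                    + fx * (1 - fy) * pix(y0, x0 + 1)
                    + (1 - fx) * fy * pix(y0 + 1, x0)
                    + fx * fy * pix(y0 + 1, x0 + 1))
    return g
```

For α = 0 or α = 90° the kernel taps fall on integer pixels, and the operation reduces to a weighted sum of shifted copies of f.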
Both the alignment between the images and the deblurring are done simultaneously.

Proceedings of the Fifth IEEE Workshop on Applications of Computer Vision (WACV'00), 0-7695-0813-8/00 $17.00 © 2000 IEEE

Figure 1. With motion blur the correspondence between images is fuzzy. It can be described by the convolution matrix that turns the left image into the right image.

In the first sub-section we assume that one image is not blurred. In practice we do not need to restore a blurred image when the original image is given. However, we present this case since it is used as a basis for the deblurring method described in the following sub-section. The last sub-section describes a method for the recovery of the motion blur directions.

3.1 Deblurring an Image Using the Original Image

Let f and g be two input images. g is a motion-blurred image obtained from f as follows:

(i) f′ is a warped version of f:

    f′(x, y) = f(x + p(x, y), y + q(x, y))    (1)

(ii) g is a degradation of f′ by a motion blur with kernel m and direction α:

    g = f′ *_α m

It can be shown [4] that the desired displacement (p(x, y), q(x, y)) between images g and f′ minimizes the following error function in the region of analysis R:

    Err(p, q) = Σ_{(x,y)∈R} (p·f_x + q·f_y + f_t)²    (2)

where the partial derivatives are as follows:

    f_x = ∂f/∂x,   f_y = ∂f/∂y,   f_t = f − g *_α m⁻¹    (3)

We assume that the motion blur operation is invertible and can be approximated by a convolution with a discrete kernel denoted m⁻¹. In practice, a one-dimensional vector with 16 to 32 elements was found to be sufficient for deblurring. 2-D parametric transformations are used as an approximation for the image motion. This approximation is valid when the differences in depth are small relative to the distances from the camera.
Since the direction α of the motion blur is pre-computed, the resulting minimization equations are linear, and minimization is performed over the deblurring kernel (m⁻¹) and the image displacement parameters. We used one of the following models for image displacement:

1. Translation: 2 motion parameters, p(x, y) = a, q(x, y) = b. In order to minimize Err(p, q), its derivatives with respect to a and b are set to zero. This yields two linear equations for each image point in the K + 2 unknowns: K is the size of the deblurring kernel, and the two additional parameters represent the translation.

2. Translation & Rotation: This model of motion is described by the following equations:

       p(x, y) = (cos β − 1)·x − sin β·y + a
       q(x, y) = sin β·x + (cos β − 1)·y + b

   For small rotations we use the approximations cos β ≈ 1 and sin β ≈ β to obtain linear equations with 3 parameters: p(x, y) = a − β·y, q(x, y) = b + β·x. For each image point we get three linear equations with K + 3 unknowns.

3. More complicated models for image displacement can be used, e.g. an affine motion or a homography.

The computation framework is based on multiresolution and iterations, using a Gaussian pyramid, similar to the framework described in [3], with some main differences:

- The deblurring kernel is computed as well as the motion parameters.
- The number of parameters varies throughout the different levels of the pyramid, since the deblurring of upper levels of the Gaussian pyramid can be represented by smaller kernels.
- Different kernels, related by a convolution with a shifted delta-impulse, are equivalent in the above framework. Therefore, the motion component parallel to the motion-blur direction cannot converge. In order to handle this, the iterations are repeated until convergence only in the motion component perpendicular to the direction of the motion blur.
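To make the translation model concrete, here is a minimal least-squares sketch (our own illustration, not the paper's implementation) for blur direction α = 0. Following the remark above that the motion component parallel to the blur direction is ambiguous, only the perpendicular (vertical) translation b is estimated together with the K-tap kernel; the function name, wraparound shifts, and central-difference derivatives are assumptions:

```python
import numpy as np

def estimate_kernel_and_shift(f, g, K=16):
    """Jointly estimate a K-tap horizontal deblurring kernel h and a
    vertical translation b by linear least squares, from the linearized
    equation  b*f_y + (f - g*h) = 0  at every pixel (cf. Eq. (2)).

    g*h is written as a sum of horizontally shifted copies of g, with
    wraparound borders; arrays are indexed as f[y, x].
    """
    f_y = (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / 2.0
    # Design matrix: one column for f_y, then K shifted copies of g
    # (the convolution g*h expanded tap by tap).
    cols = [f_y.ravel()]
    cols += [-np.roll(g, -i, axis=1).ravel() for i in range(K)]
    A = np.stack(cols, axis=1)
    sol, *_ = np.linalg.lstsq(A, -f.ravel(), rcond=None)
    return sol[0], sol[1:]  # translation b, kernel taps h
```

Each pixel contributes one linear equation in the K + 1 unknowns, matching the structure (though not the full K + 2 parameterization) of model 1 above.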
3.2 Deblurring Two Blurred Images

Let g_1 and g_2 be two images degraded by motion blur in different directions. The following steps are done in order to restore the original image:

1. The blur directions are calculated as described in the next sub-section, or alternatively, using one of the existing methods (for example [9, 12]).

2. Deblur g_1 using the method of Sect. 3.1, the known direction of blur, and g_2 as the original image. The deblurring is done with a one-dimensional kernel in the same direction as the motion blur. Call the deblurred image f_1^(1).

3. Deblur g_2 using f_1^(1) as the original image, giving f_2^(1).

4. Repeat steps 2 and 3, always using the latest versions of f_1^(i) and f_2^(i), until convergence.

The principle ensuring the convergence of the images to the original image is that 1-D blurs in different directions are independent, with the exception of degenerate cases. Two images having motion blur in different directions preserve the information of the original image. A more theoretical treatment of the convergence properties is given in the next section.

3.3 Recovery of Motion Blur Directions

Most existing methods cope with the problem of recovering the motion blur directions either by assuming a constant velocity during the entire imaging process, or by assuming certain properties of the image model or of the motion blur operator: for example, that the image is spatially isotropic [12] or that the motion blur kernel is uniform [9]. Our aim is to recover the directions of the motion blur using information from two images, while avoiding the constant-velocity assumption.

Each iteration of the algorithm described in the previous sub-sections deblurs the original image. An error in the estimation of the direction of the motion blur reduces the deblurring effect.
In the extreme case, using the direction of the motion blur of the second image as an estimate for the motion blur direction of the first one will cause the opposite effect, i.e., it will blur the image with the motion blur kernel of the second image. One can use this phenomenon to recover the motion blur direction by enumerating over the angles of the motion blur. For each angle, a single iteration of the method described in the first sub-section can be applied, and the angle which gives the strongest deblurring effect is the angle of the motion blur.

We propose to use this strategy with two exceptions:

- It is preferable to work on the lower-resolution levels of the Gaussian pyramids of the images. The accuracy achieved in this way is high enough to obtain the direction of the motion blur, and the computation is faster.
- The sharpness of the recovered image as a function of the estimated direction is approximately monotonic. Thus, a partial search can be used.

4 Frequency-Domain Algorithm

4.1 Direct Motion-Blur Reconstruction

In this section we prove the convergence of the deblurring algorithm in the frequency domain. This algorithm is equivalent to the spatial-domain algorithm described in the previous section. For simplicity, we deal only with the case of two input images in which the two directions of the motion blur are perpendicular, and the motion between the two images is a pure translation. In this case, the two input images g_1 and g_2 are observed by two systems modeled as:

    g_1 = m_1 * f    (4)
    g_2 = m_2 * f    (5)

where g_1 and g_2 are images degraded by horizontal and vertical motion blur respectively. The displacement between the two images is already expressed by the convolutions, which represent both the motion blur and the image displacement. Let F be the Discrete Fourier Transform (DFT) of the original image f.
Let G_1 and G_2 be the DFTs of the input images g_1 and g_2 respectively, and let M_1 and M_2 be the DFTs of the motion-blur kernels m_1 and m_2 respectively. Relations (4) and (5) are equivalent to:

    G_1 = M_1 · F    (6)
    G_2 = M_2 · F    (7)

All the Fourier transforms described in this section are two-dimensional. However, since each motion blur kernel (m_1 or m_2) is one-dimensional by definition, it has a uniform frequency response along the direction perpendicular to the direction of the kernel. In other words, M_1 is uniform along the y coordinate, and M_2 is uniform along the x coordinate.

The method described in Sect. 3 finds a horizontal blur kernel h that minimizes the l₂-norm error function ‖g_1 * h − g_2‖². Since minimizing the l₂-norm error function in the spatial and frequency domains is equivalent, we wish to find a horizontal blur kernel h whose Fourier transform H minimizes the error function

    ‖H · G_1 − G_2‖²    (8)

Since H is uniform along the y coordinate, we will refer to it as a one-dimensional vector, i.e. H(u, v) = H(u) for all 1 ≤ v ≤ N, where u and v stand for the x and y coordinates respectively. For each column we minimize the expression

    Σ_{v=1}^{N} |H(u)·G_1(u, v) − G_2(u, v)|²    (9)

When Σ_{v=1}^{N} |G_1(u, v)|² ≠ 0, this minimum is achieved for:

    H(u) = [Σ_{v=1}^{N} G_2(u, v) · Ḡ_1(u, v)] / [Σ_{v=1}^{N} |G_1(u, v)|²]    (10)

with Ḡ denoting the complex conjugate of G. This blur is a weighted average of the u-th row in G_2, which minimizes its l₂ distance to the respective row in G_1. The reconstruction of the second image using the first one is symmetrical up to an exchange of the x and y directions.
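Equation (10) can be exercised numerically. The sketch below is our own illustration (the function name and axis layout are assumptions; images are indexed as [y, x], so the per-column sums of Eq. (10) run over axis 0). When g_2 is the unblurred original and M_1 has no zeros, H = 1/M_1 and the reconstruction is exact:

```python
import numpy as np

def direct_deblur(g1, g2):
    """Estimate H(u) by Eq. (10) and return the reconstruction h * g1.

    g1 is assumed horizontally blurred, so H varies along the x-frequency
    axis (axis 1) and is uniform along axis 0.
    """
    G1, G2 = np.fft.fft2(g1), np.fft.fft2(g2)
    num = (G2 * np.conj(G1)).sum(axis=0)   # sum over v of G2 * conj(G1)
    den = (np.abs(G1) ** 2).sum(axis=0)    # sum over v of |G1|^2
    H = num / den                          # Eq. (10)
    return np.real(np.fft.ifft2(H[None, :] * G1))
```

Circular (wraparound) convolution is implicit in the use of the DFT.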
4.2 Iterative Reconstruction

Similarly to the spatial-domain approach, the algorithm can be enhanced by iteratively updating the first image using the second one, and vice versa. Each iteration reduces the motion blur effect upon the images, which in turn enables better results in the following iteration. The iterative algorithm is derived from Eq. (10), and can be summarized using the following equations:

    F_1^(0) = G_1    (11)
    F_2^(0) = G_2    (12)

    F_1^(n+1)(u, v) = F_1^(n)(u, v) · [Σ_v F_2^(n)(u, v) · F̄_1^(n)(u, v)] / [Σ_v |F_1^(n)(u, v)|²]    (13)

    F_2^(n+1)(u, v) = F_2^(n)(u, v) · [Σ_u F_1^(n+1)(u, v) · F̄_2^(n)(u, v)] / [Σ_u |F_2^(n)(u, v)|²]    (14)

4.3 A Convergence Proof Sketch

It can be shown that the transformation relating the DFTs of the blur kernels in two consecutive steps is linear. Moreover, it can be described by a stochastic matrix A with non-negative elements:

    A(i, j) = [1 / Σ_l |F(l, i)|²] · Σ_{l=1}^{N} [|F(l, i)|² · |F(l, j)|² / Σ_k |F(l, k)|²]    (15)

where A(i, j) is the element in the j-th column and the i-th row of A. A is a probability matrix, and thus describes a contraction mapping. One can conclude that the all-ones vector is an eigenvector of A with eigenvalue 1, and that there is no vector with a larger eigenvalue. If rank(A − I) = N − 1, there is no other eigenvector with eigenvalue 1, and we obtain

    lim_{n→∞} M_2^(n) = lim_{n→∞} A^n · M_2^(0) = c · 1

and equivalently

    lim_{n→∞} F_2^(n) = lim_{n→∞} M_2^(n) · F = c · F

with exponential convergence, where c is a constant. The convergence of F_1^(n) to F follows immediately from the convergence of F_2^(n) to F.
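The iterative scheme of Eqs. (11)-(14) is easy to simulate, which also illustrates the convergence claim of this section: both spectra approach c·F. The following is our own sketch under the section's assumptions (horizontal blur on g_1, vertical on g_2, any translation absorbed into the kernels, circular convolution, arrays indexed [y, x]):

```python
import numpy as np

def iterative_deblur(g1, g2, n_iter=30):
    """Iterate Eqs. (13)-(14).  Each step applies the least-squares blur
    estimate of Eq. (10): a per-column sum (axis 0) updates the
    horizontally blurred F1, then a per-row sum (axis 1) updates the
    vertically blurred F2 using the already-updated F1.
    """
    F1, F2 = np.fft.fft2(g1), np.fft.fft2(g2)
    for _ in range(n_iter):
        H1 = (F2 * np.conj(F1)).sum(axis=0) / (np.abs(F1) ** 2).sum(axis=0)
        F1 = F1 * H1[None, :]              # Eq. (13)
        H2 = (F1 * np.conj(F2)).sum(axis=1) / (np.abs(F2) ** 2).sum(axis=1)
        F2 = F2 * H2[:, None]              # Eq. (14)
    return np.real(np.fft.ifft2(F1)), np.real(np.fft.ifft2(F2))
```

On generic images the residual blur spectra M_1^(n), M_2^(n) flatten toward a constant, so the restored images match the original up to scale; a normalized correlation therefore makes a scale-invariant check of convergence.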
4.4 Failure Points

As shown in the previous sub-section, the condition for convergence is that rank(A − I) = N − 1, where A is the matrix relating the DFTs of the blur kernels in two consecutive steps. A simple case where this condition does not hold is when A = I. This happens when the image includes only parallel diagonal lines. In this case, applying motion blur in the x and y directions yields the same degraded images, and thus there is no information for recovery.

5 Examples

We have implemented both the spatial-domain and the frequency-domain methods, and tested them on simulated and real cases of motion blur. The images with real motion blur were restored in the spatial domain using a 2-D image displacement model describing rotations and translations. The iterations described in Sect. 3 converged after a few steps.

5.1 Restoration from Synthetic Blur

The images in Fig. 2(a)-(c) were obtained by blurring the original image of Fig. 2(d) using a Gaussian-like motion blur. The direction of the motion blur is vertical in Fig. 2(a), horizontal in Fig. 2(b) and diagonal in Fig. 2(c). From a comparison of Fig. 2(e) and 2(f) it is clear that the images are better recovered when the directions of motion blur in the two images are orthogonal. The frequency-domain method enables the recovery of images degraded by a wide blur kernel, but limits the motion between the two images to pure translation.

5.2 Restoration from Real Motion Blur

The images shown in Fig. 3 were taken by a camera moving relative to a flat poster. The motion blur in Fig. 3(a) and Fig. 3(b) was obtained by vertical and horizontal motions respectively. Fig. 3(c) and Fig. 3(d) show a clear enhancement of each of the images. Due to the small rotation between the images, any method assuming only a pure translation would have failed for this sequence. Fig. 4 shows how using different estimates of the motion blur direction for Fig. 3(a) and Fig. 3(b) affects the enhancement of the images.
These diagrams can be used to recover the directions of the motion blur from two images. For practical reasons the 3rd level of the Gaussian pyramid was used instead of the original images. As can be seen from 4(a) and 4(b), the horizon-
