Approach to Metric and Discrimination of Blur Based on Its Invariant Features

Saqib Yousaf
School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
sy_isb@hotmail.com

Shiyin Qin
School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
qsy@buaa.edu.cn

Abstract: Blur metrics are used in a broad range of applications to quantify the amount of blur, especially in images. Spatially varying blur due to defocus or camera shake is hard to estimate, and we observe that existing blur metrics do not perform well for images with very few or very many features. In this work, we present contrast-based blur invariant features, named CBIF, which exploit the information available at different contrast levels. We then use CBIF together with the local standard deviation to formulate a no-reference objective blur metric that shows better results than existing blur metrics. The proposed metric can further be modified for perceptual quality assessment, using a scheme that correlates better with human blur perception, and it can be adapted to assess blur in the presence of Gaussian noise. The proposed metric is monotonic as well as accurate, even for severely blurred images. Comparison with the subjective scores of the CSIQ and LIVE image databases validates the superiority of the proposed metric over existing ones, and its applicability to the assessment of JPEG distortions is also demonstrated. Finally, the higher sensitivity of CBIF to blur-affected regions is used to obtain a blur likelihood map, which in turn drives blur segmentation.

Keywords: blur metric, blur segmentation, blur invariance

I. INTRODUCTION

Image quality assessment (IQA) algorithms have become very important in recent years with the advent of new display, panel, camera and mobile technologies. They usually aim to meet a promised Quality of Service (QoS) while improving the end user's Quality of Experience (QoE) [1]. Objective IQA algorithms are classified as full-reference (FR), reduced-reference (RR) or no-reference (NR), depending on how much reference information is available [2]. FR objective metrics such as peak signal-to-noise ratio (PSNR), mean square error (MSE) and mean absolute error (MAE) use the original reference [3]. RR metrics use features extracted from the original reference, while NR metrics serve applications where no reference is available. Although FR metrics like PSNR, MSE and MAE are simple and widely accepted, they are unable to predict perceived visual quality, especially when the noise is not additive [3]. The structural similarity (SSIM) index [4] is an FR metric that aims to model human subjective evaluation, which takes the form of Mean Opinion Scores (MOS) obtained from experiments on human subjects. Objective IQA metrics can therefore be divided into two broad types: signal fidelity measures and perceptual quality metrics. In the NR scenario, perceptual quality metrics are particularly important, since in most real-time environments no reference image is available. In this paper, we deal with both fidelity and perceptual metrics for NR-IQA of blur distortions.
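For orientation only, the classical FR fidelity measures named above follow directly from their textbook definitions; the following is a minimal NumPy sketch of MSE and PSNR, not part of the method proposed in this paper:

```python
import numpy as np

def mse(ref, img):
    """Mean square error between a reference and a test image."""
    ref, img = np.asarray(ref, dtype=float), np.asarray(img, dtype=float)
    return float(np.mean((ref - img) ** 2))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; peak = 2**n - 1 for an n-bit image."""
    e = mse(ref, img)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```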
Blur is a major factor affecting image quality and is unavoidable in many cases. It can be uniform or spatially variant, and generally arises from misfocused optics, camera shake or atmospheric turbulence. It also degrades performance in many computer vision tasks such as segmentation and object recognition. Blur metrics are commonly used to quantify blur and are important in many applications, including autofocus, super-resolution and sharpness enhancement; they can also be used to detect blur manipulation or image tampering. Various approaches to blur metrics have been proposed, based on kurtosis [5], derivatives [6], edge width [7], histograms [8], the power spectrum [9], wavelets [10] and local contrast via 2-D analytic filters [11]. One of the first works on blur metrics is by Marichal et al. [8], which uses the histogram of DCT coefficients but is limited to frames in the compressed domain. Marziliano et al. [7] use the inflection points delimiting an edge to measure edge spread. Beyond edges, texture information [12] together with a pooling strategy may also be used [13] to model the human visual system (HVS) for blur assessment. In [14], a probabilistic support vector machine (SVM) and a perceptually motivated pooling strategy yield an effective blur metric. In [15], the idea of just noticeable blur (JNB) is introduced; it is refined in [16] by incorporating a visual attention model, so that areas of the image most likely to be noticed by humans receive more weight than others, though the improvement is not very significant. In [17], JNB is combined with a cumulative probability of blur detection (CPBD); the CPBD algorithm calculates the percentage of edges at which blur cannot be detected. Crete et al. [18] exploit the discrimination between different levels of blur perceptible in the same picture to propose an effective blur metric, and a contrast-based blur metric recently proposed in [12] shows good correlation with human blur perception. There has thus been considerable effort to find reliable features and HVS models for blur assessment, and the present work is another step in this quest.

In many applications, a blur likelihood map is desired instead of a single blur metric [19], [20], [21], e.g., content-based image retrieval, image enhancement, restoration, segmentation and object detection. On the other hand, blur invariant feature descriptors such as local phase quantization (LPQ) [22] have already proved useful for applications like object recognition [23]. To our knowledge, however, there has been no effort to combine the concept of blur invariance with blur metrics. Blur segmentation [19] can also aid the extremely ill-posed problem of non-uniform blind deconvolution. In this paper, we aim to find more reliable features that yield a blur metric as well as a blur likelihood map leading to blur segmentation. Section II describes the contrast-based blur invariant features named CBIF. Section III proposes the blur metric based on CBIF. Section IV uses CBIF to obtain a blur likelihood map, which is then used for blur segmentation. Section V presents the algorithms, Section VI provides results and their analysis, and Section VII concludes.
II. CONTRAST-BASED BLUR INVARIANT FEATURES (CBIF)

Linear contrast stretching is a point operation that maps the range of intensity values to a desired range by linear interpolation, saturating pixel values beyond the minimum and maximum limits. If the saturation is relaxed, values below zero and above the maximum intensity level can be obtained. The information carried by images at different contrast levels is useful for blur analysis.

A. Effect of Contrast Stretching on Blur

To illustrate, consider an n-bit one-dimensional blurry signal g(x), as shown in Fig. 1(a). Its contrast-stretched signal g_{c_l} with contrast limits [c_l, 1 - c_l], where c_l < 0.5, is

    g_{c_l}(x) = \frac{\bar{g}(x) - c_l}{(1 - c_l) - c_l} \times 2^n,    (1)

where \bar{g}(x) = 1 - g(x)/2^n is the rescaled version of the original signal; the subtraction from 1 is performed to enhance blur sensitivity. Taking the absolute difference from the input signal g gives

    d_{c_l}(x) = |g(x) - g_{c_l}(x)|.    (2)

Subtracting d_{c_l}(x) from its maximum value yields a useful variable \psi_{c_l}(x):

    \psi_{c_l}(x) = \max_x d_{c_l}(x) - d_{c_l}(x).    (3)

In Fig. 1(a), both g_{c_l} and \psi_{c_l} are shown for three values of the contrast limit c_l; the shaded area marks the blur-affected regions of the signal. The difference between the maximum and minimum values of the stretched signal g_{c_l} increases as c_l increases. Regions other than the blur-affected ones are unchanged in terms of variation, although their values do change. Moreover, \psi_{c_l} is almost zero everywhere except in the blur-affected areas. The same property can be seen for an 8-bit blurry image in Fig. 2: the intensity range shown in the colorbar grows as c_l increases, and the blur-affected regions appear white, corresponding to the peaks of the 1-D signal in Fig. 1(a). Notably, regions less affected by blur, such as the top of the table cover and the bookshelf, appear dark in \psi_{c_l}.

Fig. 1 (a) g_{c_l} and \psi_{c_l} for three c_l for an 8-bit signal; (b) a signal g with different blurs; (c) local standard deviation for g; (d) effect of blur size on F and F_v.

Fig. 2 g_{c_l} and \psi_{c_l} for three contrast limits for a blurred image.

Fig. 3 Effect of blur size on F, \sigma and F_v for an image (original, 3-pixel and 13-pixel blur); the mean of F stays nearly constant as blur grows, while the mean of F_v drops sharply.

B. Blur Invariant Feature Extraction

The key bright features in \psi_{c_l} can be enhanced by averaging \psi_{c_l} over different contrast limits c_l to give F, referred to here as the contrast-based blur invariant features (CBIF). For three contrast limits,

    F(x) = \frac{1}{3} [\psi_{c_1}(x) + \psi_{c_2}(x) + \psi_{c_3}(x)].    (4)

From Fig. 1(d) and Fig. 3, we see that the heights and positions of the peaks of F are almost unchanged despite changes in blur size; only the widths of the peaks increase with blur size. Owing to this property, the mean of F is almost unchanged across blur sizes, and prominent details such as eyes and nose remain visible even in an extremely blurred image. CBIF is therefore blur insensitive and distinctive, and somewhat different from the usual edges. The invariance of the mean of F under varying blur has been verified on the LIVE [24], CSIQ [25] and USC-SIPI [26] image databases.
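To make the construction concrete, the following NumPy sketch is one possible reading of Eqs. (1)-(4) for an 8-bit grayscale image; the eight contrast limits come from Algorithm 1 in Section V, and all function and variable names here are ours rather than the paper's:

```python
import numpy as np

# Lower contrast limits c_l from Algorithm 1; each defines a range [c_l, 1 - c_l].
CONTRAST_LIMITS = (0.45, 0.40, 0.35, 0.30, 0.25, 0.20, 0.15, 0.10)

def cbif(g, bits=8, limits=CONTRAST_LIMITS):
    """Contrast-based blur invariant features F of Eqs. (1)-(4)."""
    g = np.asarray(g, dtype=float)
    peak = 2.0 ** bits
    g_bar = 1.0 - g / peak                             # inverted, rescaled signal
    psi = []
    for c in limits:
        g_c = (g_bar - c) / (1.0 - 2.0 * c) * peak     # unsaturated stretch, Eq. (1)
        d = np.abs(g - g_c)                            # Eq. (2)
        psi.append(d.max() - d)                        # Eq. (3)
    return np.mean(psi, axis=0)                        # average over limits, Eq. (4)
```

Because the stretch is deliberately left unsaturated, g_c may fall below 0 or exceed 2^bits - 1, which is exactly the relaxation Section II.A describes.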
III. BLUR METRIC BASED ON CBIF

To derive a blur metric for a signal or image g from our blur invariant feature CBIF (F), we need to find the regions of F that are locally variant with blur, denoted F_v. To obtain these features, we first calculate the standard deviation in a local (2n+1) \times (2n+1) neighborhood around each location of F, giving the blur-variant \sigma shown in Fig. 1(c) and Fig. 3:

    \sigma(i, j; n) = \sqrt{ \frac{1}{(2n+1)^2} \sum_{k=i-n}^{i+n} \sum_{l=j-n}^{j+n} [F(k, l) - \mu(i, j; n)]^2 },    (5)

where n is an integer and \mu is the local mean of F:

    \mu(i, j; n) = \frac{1}{(2n+1)^2} \sum_{k=i-n}^{i+n} \sum_{l=j-n}^{j+n} F(k, l).    (6)

We then form a binary image B marking the values of \sigma above a threshold v:

    B = \begin{cases} 0, & \text{if } \sigma < v \\ 1, & \text{if } \sigma \ge v \end{cases}    (7)

The blur-dependent features F_v are obtained by keeping only those locations of F that are nonzero in B:

    F_v(i, j) = F(i, j) \cdot B(i, j).    (8)

Thus F_v is the component of F containing only the locally variant entries. As shown in Fig. 3, the F_v features and their mean value decrease as blur size increases. Since F_v is derived from F, its mean is always less than the mean of F, so the ratio of their sums over the whole image domain \Omega lies between 0 and 1 and is observed to be a good measure of the blur or focus of the image:

    r = \sum_{\Omega} F_v \big/ \sum_{\Omega} F.    (9)

Subtracting this ratio from 1 gives the CBIF-based non-perceptual objective blur metric M_{OB}, which grows from 0 to 1 as the blur increases:

    M_{OB} = 1 - r.    (10)

The objective blur metric M_{OB} is a non-perceptual fidelity measure, since it compares the locally variant features F_v against the whole image. Extensive analysis of subjective scores for various image datasets shows that human blur perception does not consider global conditions: the "Turtle" image in Fig. 4, for instance, has a partially out-of-focus background yet still receives a high perceptual subjective score. Humans evaluate its quality by observing only the sharp turtle while disregarding the out-of-focus background; the same holds for images with large uniform areas.

A. Perceptual Blur Assessment

F_v represents the portions of F whose local variation \sigma exceeds a certain threshold v. If we take two thresholds, v_1 and v_2, and modify Eq. (10) as

    M_P = 1 - \sum_{\Omega} F_{v_1} \big/ \sum_{\Omega} F_{v_2},    (11)

the new blur metric correlates better with the human visual system. In our experiments we used v_1 = 6 and v_2 = 3, which give quite remarkable results for 8-bit images. Using two thresholds lets the blur metric focus only on the area of interest rather than the whole image, which is exactly how humans perceive blur. The objective and perceptual blur metrics for several images are shown in Fig. 4; the large difference between them for the second and third images is due to the out-of-focus background.

Fig. 4 Blur segmentation for different types of images. First row: original image with the perceptual and objective blur metrics, respectively. Second row: \psi and the blur likelihood map \chi (top-left corner). Third row: segmentation results (cyan: invalid, blue: out of focus, yellow: sharp, red: motion blur).
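Continuing the sketch above, Eqs. (5)-(11) amount to a local standard deviation followed by thresholding. Here scipy's uniform_filter stands in for the box averages of Eqs. (5)-(6); the border handling and function names are our choices, not specified by the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(F, n):
    """Local standard deviation over a (2n+1) x (2n+1) window, Eqs. (5)-(6)."""
    size = 2 * n + 1
    mu = uniform_filter(F, size)                        # local mean, Eq. (6)
    mu_sq = uniform_filter(F * F, size)
    return np.sqrt(np.maximum(mu_sq - mu * mu, 0.0))    # Eq. (5)

def m_ob(F, n=1, v=6.0):
    """Objective blur metric M_OB of Eq. (10); n=1 is the 3 x 3 window (s = 3)."""
    F_v = F * (local_std(F, n) >= v)                    # Eqs. (7)-(8)
    return 1.0 - F_v.sum() / F.sum()                    # Eqs. (9)-(10)

def m_p(F, n=1, v1=6.0, v2=3.0):
    """Perceptual blur metric M_P of Eq. (11), thresholds v1 > v2."""
    sigma = local_std(F, n)
    return 1.0 - (F * (sigma >= v1)).sum() / (F * (sigma >= v2)).sum()
```

With the cbif() sketch from Section II, m_p(cbif(img)) should then score a sharp image near 0 and a severely blurred one near 1.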
B. Adding Antinoise Capability to the Blur Metric

Robustness against noise is sometimes also desired of a blur metric. We suggest the following modification to form two further blur metrics, \tilde{M}_{OB} and \tilde{M}_P, which are robust to additive Gaussian noise. First, the approximation subband (LL) is extracted n times using the discrete stationary wavelet transform: the given image serves as input the first time, and thereafter the LL image from the previous iteration is used as input. CBIF and its locally variant components are then derived from this LL image by the same procedure as before. Importantly, the blur metric does not change severely as the number of iterations n increases; \tilde{M}_{OB} and \tilde{M}_P are therefore not very sensitive to n, while the noise does decrease as n grows.
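One possible implementation of this preprocessing uses PyWavelets. The paper does not name a wavelet family, so the Haar wavelet below is an assumption, as is the rescaling that keeps the LL subband in the original intensity range; pywt.swt2 also requires even image dimensions, hence the padding:

```python
import numpy as np
import pywt

def swt_ll(img, n=3, wavelet="haar"):
    """Extract the LL (approximation) subband n times with a one-level
    stationary wavelet transform, as described in Section III.B."""
    ll = np.asarray(img, dtype=float)
    for _ in range(n):
        pr, pc = ll.shape[0] % 2, ll.shape[1] % 2
        if pr or pc:                                  # swt2 needs even dimensions
            ll = np.pad(ll, ((0, pr), (0, pc)), mode="edge")
        (cA, _details), = pywt.swt2(ll, wavelet, level=1)
        ll = cA / 2.0   # undo the 2-D Haar low-pass DC gain of 2 (assumption)
    return ll
```

The antinoise metrics are then simply m_ob(cbif(swt_ll(img))) and m_p(cbif(swt_ll(img))).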
IV. BLUR CLASSIFICATION AND OTHER APPLICATIONS

Since CBIF inherently recognizes blur-affected regions quite efficiently, without local gradients or complex learning techniques, its advantages can naturally be exploited to generate a blur likelihood map, which is useful in a variety of applications.

A. Blur Likelihood Map

The blur likelihood map shown in the second row of Fig. 4 is derived from F by first replacing each pixel value with its local standard deviation \sigma_F(i, j; 5), as in Eq. (5). The map \chi is then obtained by averaging \sigma_F over small local windows, as in Eq. (6) (e.g., 15 \times 15 pixels), which removes isolated points:

    \chi(i, j) = \mu_{\sigma_F}(i, j; 7), \quad \text{with } \sigma_F = \sigma_F(i, j; 5),    (12)

where \mu_{\sigma_F} denotes the local mean of Eq. (6) computed on \sigma_F. In \chi, sharp regions take higher values than uniform and blurred regions, as shown in Fig. 4. Invalid, i.e., smooth, regions have a very small local standard deviation in g and carry no information about blur or sharpness. To find them, we first compute \chi_g from the image g itself:

    \chi_g(i, j) = \mu_{\sigma_g}(i, j; 2), \quad \text{with } \sigma_g = \sigma_g(i, j; 5).    (13)

After identifying the invalid regions as those with small values of \chi_g, blur segmentation can be performed by simple thresholding of \chi.

B. Blur Segmentation

A threshold T = 27 for blur segmentation was found to work well for all types of images; some results are shown in Fig. 4. Motion blur is a different type of blur, and to detect it we use the approach of Liu et al. [27] based on local autocorrelation congruency; Fig. 4 shows the detection of motion blur in the "motor cycle" image. The advantage of our blur segmentation is again the simplicity of the approach, relying on plain thresholds that have been tested on a variety of images. Classification follows these rules:

    \chi \ge T \to \text{sharp region}, \quad \chi < T \to \text{blurred region}, \quad \chi_g < 3 \to \text{invalid}.    (14)

Image segmentation for blur detection has recently attracted research attention; in [19] it is shown that estimating non-uniform blur can be cast as a segmentation problem.

V. ALGORITHMS

The main procedure for computing the proposed blur invariant feature CBIF is summarized in Algorithm 1.

Algorithm 1: Finding the blur invariant feature CBIF
Input: image g
1. Find eight images g_{c_1}, ..., g_{c_8} with different contrast ranges (Section II.A), e.g., [0.45, 0.55], [0.40, 0.60], ..., [0.10, 0.90].
2. Find the eight \psi_{c_l} using Eq. (3).
3. Average them to get CBIF, F(x), using Eq. (4).
return CBIF

In step 1, the eight contrast ranges were selected after extensive experiments; the results, however, do not depend heavily on this choice. The details of calculating the CBIF-based blur metrics are outlined in Algorithm 2.

Algorithm 2: CBIF-based blur metrics M_{OB}, M_P, \tilde{M}_{OB} or \tilde{M}_P
Inputs: image g, window size s, standard deviation thresholds, number of iterations n (for \tilde{M}_{OB} or \tilde{M}_P only)
1. Denoise the image using n iterations and replace g with the denoised image (Section III.B) (only for \tilde{M}_{OB} or \tilde{M}_P).
2. Find CBIF, F(x), using Algorithm 1.
3. Find the local standard deviation in an s \times s pixel block around each location to get \sigma using Eq. (5).
4. Find the binary image B using Eq. (7).
5. Find F_{v_1} (and F_{v_2} for the perceptual metric) using Eq. (8).
6. Find the blur metric using Eq. (10) or Eq. (11), depending on whether the objective or the perceptual metric is required.
return M_{OB}, M_P, \tilde{M}_{OB} or \tilde{M}_P

The parameters s, v and n are selected as 3, 6 and 3, respectively, after analysis on a variety of natural images and blur sizes. For document images, a higher value of v, e.g., 11, is recommended.

Algorithm 3: Blur segmentation
Input: image g
1. Find CBIF, F(x), using Algorithm 1.
2. Blur likelihood map \chi (Section IV.A):
   a. Find \sigma_F(i, j; 5) using Eq. (5).
   b. Find \chi using Eq. (6).
3. Blur segmentation (Section IV.B):
   a. Find \chi_g using Eq. (13).
   b. Invalid regions: \chi_g < 3.
   c. Sharp regions: \chi \ge T.
   d. Blurred regions: \chi < T.
4. Find motion segmentation (Section IV.B).
5. Label each pixel with its region type to obtain the matrix L.
return L
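Putting the earlier sketches together, Algorithm 3 can be prototyped as below. The window sizes follow our reading of Eqs. (12)-(13) (an 11 x 11 window for the local standard deviation, then 15 x 15 and 5 x 5 averaging), the motion-blur step of Liu et al. [27] is omitted, and cbif() and local_std() are the functions sketched in Sections II and III:

```python
import numpy as np
from scipy.ndimage import uniform_filter

INVALID, BLURRED, SHARP = 0, 1, 2        # region labels for the output matrix L

def blur_segmentation(g, T=27.0, t_invalid=3.0):
    """Sketch of Algorithm 3: per-pixel labelling by Eq. (14)."""
    g = np.asarray(g, dtype=float)
    F = cbif(g)
    chi = uniform_filter(local_std(F, 5), 15)     # blur likelihood map, Eq. (12)
    chi_g = uniform_filter(local_std(g, 5), 5)    # smoothness of g itself, Eq. (13)

    L = np.where(chi >= T, SHARP, BLURRED)        # threshold chi, Eq. (14)
    L[chi_g < t_invalid] = INVALID                # smooth regions carry no blur info
    return L, chi
```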
VI. DEMONSTRATION AND VERIFICATION

For the performance evaluation of our CBIF-based blur metrics, images from the LIVE [24], CSIQ [25] and USC-SIPI [26] databases are used. Fig. 5 shows the behavior of M_P, M_{OB} and three other blur metrics, Crete [18], JNBM [15] and CPBD [17], as blur size increases for different images. CPBD is an objective quality metric, while the other two are perceptual metrics. Both M_P and M_{OB} correlate well with image blurriness and behave monotonically, whereas the other metrics fail to behave monotonically. The non-perceptual blur metrics CPBD and M_{OB} behave differently from the perceptual metrics, but CPBD is inconsistent and non-monotonic. For feature-rich images like "Mandrill", our curve rises more sharply than for the "Vegetables" image, which is the expected behavior, since small features are affected easily as blur size increases. The performance of the antinoise metrics \tilde{M}_P and \tilde{M}_{OB} in the presence of noise is shown for the "Child" image and is much better than that of the other metrics.

Fig. 5 Proposed blur metrics M_P and M_{OB} with increasing blur size, compared with Crete, CPBD and JNBM (metric scale: sharp (0) to blurry (1); panels: "Vegetables", "Mandrill", "Child", and "Child noisy" with 7 dB Gaussian noise).

To demonstrate perceptual assessment, the CSIQ and LIVE databases are used. Their difference mean opinion scores (DMOS), based on subjective scores, are calculated by scaling and shifting the difference scores. Fig. 6 shows the cumulative histogram of the absolute difference between various blur metrics and the DMOS. On CSIQ our blur metric gives better perceptual performance, and on LIVE it also performs reasonably well. Moreover, in the absence of noise, M_P and \tilde{M}_P behave almost identically. Also, since CPBD is a non-perceptual blur metric, the M_{OB} curve lies closer to CPBD. For the perceptual assessment of JPEG distortions, we obtained promising results, as shown in Fig. 6. To test blur segmentation, we used a variety of images from image databases and photo-sharing websites such as "xaxor.com"; some results are shown in Fig. 4.

Fig. 6 Cumulative histograms (success percent) of the absolute difference from the DMOS for various blur metrics: CSIQ database for blur distortions, LIVE database for blur distortions, and LIVE database for JPEG distortions.

A. Example: Document Images

Owing to the widespread use of smartphones, taking photos of document pages is becoming normal practice as an alternative to scanning. An effective blur metric that determines the sharpness of a document image in real time can be useful; it can also be applied after acquisition, in blind deconvolution for image restoration. Our proposed blur metric was therefore also analyzed on document images. Fig. 7 shows that for a sharp document image our perceptual blur metric based on CBIF behaves very well and predicts a value of 0.01. However, the …