DIGITAL IMAGE PROCESSING BASED ON SPATIAL PROCESSING

School of Electronics Engineering, Tianjin University of Technology and Education
Michael Daniel Nachipyangu (麦迪), MSc Signal and Information Processing, October 2013

Abstract

This paper introduces digital image processing techniques based on spatial processing, which comprises intensity transformations and spatial filtering using smoothing and sharpening spatial filters. The results of these techniques are also presented.

1. INTRODUCTION

1.1 Image

From a mathematical point of view, an image is a function of two real variables, for example a(x, y), with a as the amplitude (e.g. brightness) of the image at the real coordinate position (x, y). Further, an image may be considered to contain sub-images, sometimes referred to as regions of interest (ROIs) or simply regions. This concept reflects the fact that images frequently contain collections of objects, each of which can be the basis for a region.

1.2 Digital Image Processing

Digital image processing is the use of computer tools, typically computer algorithms, to perform processes on digital images. As a subfield of digital signal processing, digital image processing has many advantages over analog image processing: it allows a much wider range of algorithms to be applied to the input data, and it can avoid problems such as the build-up of noise and signal distortion during processing.

1.3 Processes Which Can Be Done in Image Processing

- Geometric transformations such as enlargement, reduction, and rotation
- Color corrections such as brightness and contrast adjustments, quantization, or conversion to a different color space
- Registration (or alignment) of two or more images
- Combination of two or more images, e.g. into an average, blend, difference, or image composite
- Interpolation and recovery of a full image from a RAW image format
- Segmentation of the image into regions
- Image editing and digital retouching
- Extending dynamic range by combining differently exposed images

1.4 Applications of Image Processing

- Photography and printing
- Satellite image processing
- Medical image processing
- Face detection, feature detection, face identification
- Microscope image processing

2. SPATIAL PROCESSING

The term spatial domain refers to the image plane itself; spatial-domain methods are based on direct manipulation of the pixels in an image. The two principal categories of spatial processing are intensity transformation and spatial filtering. Intensity transformations operate on single pixels of an image, for example for contrast manipulation and image thresholding. Spatial filtering performs operations such as image sharpening by working in a neighborhood of every pixel in an image. The name filter refers to accepting or rejecting certain components; taking frequency as the component to be filtered, for example, a filter that passes low frequencies is called a lowpass filter. The effect produced by a lowpass filter is to blur (smooth) an image. We can accomplish a similar smoothing directly on the image itself by using spatial filters, also called spatial masks.

2.1 Smoothing Spatial Filters

Smoothing filters are used for blurring and for noise reduction. The output of a linear smoothing filter is the average of the pixels contained in the neighborhood of the filter mask, so these filters are sometimes called averaging filters. The value of every pixel in the image is replaced by the average of the intensity levels in the neighborhood defined by the filter mask; this results in an image with reduced sharp transitions in intensity levels. A 3×3 spatial averaging filter in which all coefficients are equal, sometimes called a box filter, is:

1/9 ×  1 1 1
       1 1 1
       1 1 1

A mask with unequal coefficients yields a so-called weighted average. Below is an example of a smoothed image:

[Figure: original image (left) and smoothed image (right)]
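To make the box filter concrete, here is a minimal sketch using NumPy and SciPy; the synthetic test image, noise level, and 3×3 kernel size are illustrative assumptions rather than anything from the paper:

```python
import numpy as np
from scipy import ndimage

def box_filter(image, size=3):
    """Smooth an image by replacing each pixel with the mean of its
    size x size neighborhood (a box / averaging filter)."""
    kernel = np.ones((size, size)) / (size * size)  # e.g. 1/9 * ones(3, 3)
    # 'reflect' padding avoids darkened borders at the image edge
    return ndimage.convolve(np.asarray(image, dtype=float), kernel, mode="reflect")

# Example on a synthetic image with sharp transitions plus noise
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = 255.0                # bright square on dark background
img += rng.normal(0, 20, img.shape)      # additive Gaussian noise
smooth = box_filter(img, size=3)         # noise reduced, edges slightly blurred
```

Larger masks (5×5, 7×7, ...) reduce noise more aggressively, at the cost of further blurring the true edges.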
2.2 Piecewise-Linear Transformation

A piecewise-linear transformation changes the intensity values of a grayscale image according to the needs of the problem. It is called piecewise because the range of intensity levels is divided into intervals, each mapped separately, and linear because the equation used on each interval is a linear equation, as in the example below. One practical piecewise-linear function is the contrast-stretching transformation. Contrast stretching is the process of expanding the range of intensity levels in an image so that it spans the full intensity range of the display device. The transformation is given by:

v = α·u,               0 ≤ u < a
v = β·(u − a) + v_a,   a ≤ u < b
v = γ·(u − b) + v_b,   b ≤ u ≤ L

where u is the gray-level value of the input image, v is the transformed gray-level value of the output image, a and b are the interval boundaries, v_a and v_b are the output values at those boundaries, L is the maximum gray level, and α, β, γ are the slopes of the three segments.

[Figure: original image (left) and its piecewise-linear transform (right)]
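Here is a minimal sketch of the contrast-stretching transform above, assuming 8-bit gray levels (L = 255); the breakpoints a, b, v_a, v_b are illustrative values, not from the paper:

```python
import numpy as np

def contrast_stretch(u, a, b, v_a, v_b, L=255):
    """Piecewise-linear contrast stretching (assumes 0 < a < b < L).

    u: input gray levels (array) in [0, L]
    (a, v_a), (b, v_b): breakpoints of the transform
    The slopes alpha, beta, gamma follow from the breakpoints.
    """
    alpha = v_a / a
    beta = (v_b - v_a) / (b - a)
    gamma = (L - v_b) / (L - b)
    u = np.asarray(u, dtype=float)
    v = np.where(u < a, alpha * u,
        np.where(u < b, beta * (u - a) + v_a,
                        gamma * (u - b) + v_b))
    return np.clip(v, 0, L)

# Stretch the mid-range [64, 192] across most of the output range
u = np.arange(256)
v = contrast_stretch(u, a=64, b=192, v_a=16, v_b=240)
```

Choosing a < b with v_a < v_b keeps the mapping monotonic, so the ordering of gray levels in the input is preserved in the output.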
2.3 Edge Detection and Image Segmentation

Image segmentation is the process of dividing an image into multiple parts. It is typically used to identify objects or other relevant information in digital images. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image. Each pixel in a region is similar with respect to some characteristic or computed property, such as color, intensity, or texture, while adjacent regions differ significantly with respect to the same characteristic(s). When applied to a stack of images, as is typical in medical imaging, the resulting contours can be used to create 3D reconstructions with the help of interpolation algorithms such as Marching cubes.

There are many different ways to perform image segmentation, including:

- Thresholding methods, such as Otsu's method (a minimal sketch follows below)
- Clustering methods, such as k-means and principal component analysis
- Transform methods, such as the watershed transform
- Texture methods, such as texture filters

The following are some practical applications of image segmentation:

- Content-based image retrieval
- Machine vision
- Medical imaging: locating tumors and other pathologies, measuring tissue volumes, diagnosis and study of anatomical structure
- Object detection: pedestrian detection, face detection, brake light detection, locating objects in satellite images (roads, forests, crops, etc.)
- Recognition tasks: face recognition, fingerprint recognition, iris recognition
- Traffic control systems
- Video surveillance

The figure below shows a sample segmented image:

[Figure: original image (left) and segmented image (right)]
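As a concrete illustration of the thresholding methods listed above, here is a minimal sketch of Otsu's method, which picks the threshold that maximizes the between-class variance of the gray-level histogram; the 8-bit range and the synthetic bimodal test image are assumptions for the demo:

```python
import numpy as np

def otsu_threshold(image):
    """Return the gray level maximizing between-class variance
    (Otsu's method) for an 8-bit grayscale image."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()                  # gray-level probabilities
    omega = np.cumsum(prob)                   # class-0 probability P(level <= t)
    mu = np.cumsum(prob * np.arange(256))     # cumulative mean
    mu_total = mu[-1]
    # Between-class variance for every candidate threshold t
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)        # ignore empty classes
    return int(np.argmax(sigma_b2))

# Segment a synthetic bimodal image into foreground and background
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 10, 5000)])
img = np.clip(img, 0, 255).astype(np.uint8).reshape(100, 100)
t = otsu_threshold(img)   # lands between the two modes, near 120
mask = img > t            # boolean two-class segmentation
```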
Edge detection is the name for a set of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges. The same problem of finding discontinuities in a 1D signal is known as step detection, and the problem of finding signal discontinuities over time is known as change detection. Edge detection is a fundamental tool in image processing, machine vision, and computer vision, particularly in the areas of feature detection and feature extraction.

The main goals of edge detection are as follows:

- Produce a line drawing of a scene from an image of that scene.
- Extract important features from the edges of an image (e.g., corners, lines, curves).
- Supply these features to higher-level computer vision algorithms (e.g., recognition).

There are four steps of edge detection:

1. Smoothing: suppress as much noise as possible, without destroying the true edges.
2. Enhancement: apply a filter to enhance the quality of the edges in the image (sharpening).
3. Detection: determine which edge pixels should be discarded as noise and which should be retained (usually by thresholding).
4. Localization: determine the exact location of an edge (sub-pixel resolution may be required for some applications).
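Here is a minimal sketch of the first three steps of this pipeline, using Gaussian smoothing and the Sobel operators from SciPy; the sigma and threshold values are illustrative assumptions to be tuned per image:

```python
import numpy as np
from scipy import ndimage

def edge_map(image, sigma=1.0, thresh=50.0):
    """Steps 1-3 of the edge detection pipeline above.

    sigma and thresh are illustrative values; tune them per image.
    """
    # Step 1 (smoothing): Gaussian blur suppresses noise
    smooth = ndimage.gaussian_filter(np.asarray(image, dtype=float), sigma)
    # Step 2 (enhancement): Sobel gradients emphasize intensity discontinuities
    gx = ndimage.sobel(smooth, axis=1)   # horizontal derivative
    gy = ndimage.sobel(smooth, axis=0)   # vertical derivative
    magnitude = np.hypot(gx, gy)         # gradient magnitude
    # Step 3 (detection): keep only strong edge responses
    return magnitude > thresh

# Example: the boundary of a bright square stands out against the background
img = np.zeros((64, 64))
img[16:48, 16:48] = 255.0
edges = edge_map(img)    # True along the square's edges
```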