Project Proposal Summary

1. Subject: Cloud to Street project proposal
2. Topic: Restoration of cloud- and shadow-covered regions in satellite imagery
3. Objective: To restore the image regions hidden under clouds and shadows in satellite imagery, so that the flood probability of each pixel can be estimated at the resolution of the respective satellite.
4. Time Period: Fall 2019

About Me

Name: Gaurav Pradeep Pendhari

I am a graduate student at NYU Tandon School of Engineering, majoring in Computer Engineering. I hold a bachelor's degree in Electrical Engineering and have a solid background in computer science, software engineering, and programming. I am fascinated by the power of deep learning and will do my best to explore what it can do.

Contacts: Email: , Phone: +(917)-769-9340, LinkedIn:

A few important points related to the project:
a. I am familiar with TensorFlow and PyTorch.
b. I have been using Python for 2 years and have implemented the YOLOv1 and DQN algorithms using machine learning frameworks.
c. I am familiar with design patterns and software architecture.
d. I have good knowledge of C++/Python and many programming tools.
e. I am experienced with Linux and AWS tools.
f. I am hard-working, responsible, and always willing to contribute to the community.

Problem Description

Optical satellite sensors leave gaps in observed flood maps wherever clouds or cloud shadows cover part of the image. The goal is to fill these gaps by reconstructing the hidden regions with a deep learning model and feeding the reconstruction into the pipeline that classifies the likelihood of "flood pixels under clouds."

Implementation Plan

The first task is to correctly identify the missing pixel values under clouds and shadows, and then process those images to estimate the probability that each pixel is flooded. This amounts to filling in missing pixels of an image, a task often referred to as image inpainting or completion.
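To make the inpainting setup concrete, the sketch below (a minimal illustration, not part of the proposal's pipeline; the grayscale array shapes and the white-fill convention are my assumptions) builds the masked input an inpainting model would receive: cloudy pixels are replaced with white, and the mask records which pixels must be reconstructed.

```python
import numpy as np

def make_inpainting_input(image, cloud_mask, fill_value=255):
    """Build the (masked image, mask fraction) pair an inpainting model consumes.

    image:      (H, W) grayscale array, values in [0, 255]
    cloud_mask: (H, W) boolean array, True where clouds/shadows hide the ground
    Returns the image with hidden pixels set to `fill_value` (white spots)
    and the fraction of pixels the model must reconstruct.
    """
    masked = image.copy()
    masked[cloud_mask] = fill_value        # white out the unknown region
    missing_fraction = cloud_mask.mean()   # share of pixels to inpaint
    return masked, missing_fraction

# Toy 4x4 "satellite tile" with a 2x2 cloud in the top-left corner.
tile = np.arange(16, dtype=np.uint8).reshape(4, 4)
cloud = np.zeros((4, 4), dtype=bool)
cloud[:2, :2] = True

masked_tile, frac = make_inpainting_input(tile, cloud)
print(frac)               # 0.25 -- a quarter of the tile is hidden
print(masked_tile[0, 0])  # 255 -- cloudy pixels are white
```

The untouched pixels stay as they were, so the model (and the later flood classifier) only ever has to synthesize values inside the masked region.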
The core challenge of image inpainting lies in synthesizing visually realistic and semantically plausible pixels for the missing regions that are coherent with the existing ones. Rapid progress in deep convolutional neural networks (CNNs) and generative adversarial networks (GANs) has inspired recent work to formulate inpainting as a conditional image generation problem, in which high-level recognition and low-level pixel synthesis are combined in a convolutional encoder-decoder network. However, purely CNN-based approaches often create boundary artifacts, distorted structures, and blurry textures inconsistent with the surrounding areas. This is due to the ineffectiveness of CNN models at correlating distant contextual information with the hole regions.

To overcome these problems, I propose to use the approach "Generative Image Inpainting with Contextual Attention" [1]. The paper suggests a two-stage network for predicting missing pixel values: stage one makes an initial coarse prediction, and stage two takes the coarse prediction as input and produces a refined result. The coarse network is trained with a reconstruction loss, while the refinement network is trained with reconstruction as well as GAN losses. Figure 1 (from [1], University of Illinois at Urbana-Champaign and Adobe Research) illustrates this two-stage inpainting approach.

In our problem:
1. Data pre-processing for training: We artificially introduce white spots into clean (cloud-free) images and use these pairs as training input to the pipeline. Datasets of images at different resolutions are kept separate.
2.
Training: In this step, the images with artificially added clouds and shadows (as white spots) are used to learn the parameters of the model. Parameters are trained separately for each resolution, and the trained parameters are saved for inference.
3. Data pre-processing for testing: At test time, we must identify the actual clouds and shadows in each image and mask them with white spots. This can be done either with standard computer vision algorithms or with convolutional network layers of our own. As in training, this is done separately for each resolution, and the datasets for different resolutions are kept separate.
4. Post-processing the network's output: The reconstructed images are fed into the pipeline that predicts the per-pixel flood probability for the flood maps.
5. Processing time: Training can take around 120-150 GPU hours per model on a GTX 1080. This is a rough estimate based on my experience and the information shared in [1].

Implications of the approach:

Pros:
1. The paper [1] publishes examples of this approach (example figures from [1], University of Illinois at Urbana-Champaign and Adobe Research), which demonstrate the model's ability to recover true values of missing pixels and the feasibility of the approach on comparable problems.
2. The approach will be more useful on scaled-down versions of high-resolution satellite images, which yields better white-spot predictions.
3. This approach can be implemented within 4-6 months (tentative, at 20 hours per week) and is very scalable.

Cons:
1. There is no natural mid-project deliverable for a status check.
2. It uses a separate pipeline for images at each resolution. Ideally there should be just one pipeline and one set of parameters for all images.
3.
I do not have experience deploying models in production, so I cannot assess the feasibility of this approach in a production environment.

Skills and Experience: I can implement this model myself, though I may require some help in areas involving the actual flood mapping, since I have no prior experience in that domain.

References:
1. Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Li, Thomas S. Huang. Generative Image Inpainting with Contextual Attention. arXiv:1801.07892v2 [cs.CV], 21 Mar 2018.
2. Guilin Liu, Fitsum A. Reda, Kevin J. Shih, Ting-Chun Wang, Andrew Tao, Bryan Catanzaro. Image Inpainting for Irregular Holes Using Partial Convolutions. arXiv:1804.07723v2 [cs.CV], 15 Dec 2018.
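The synthetic-hole pre-processing described in step 1 of the implementation plan can be sketched as follows (a minimal illustration; the number of spots, their size range, and the white-fill value are my assumptions, not values from the proposal): random white rectangles are punched into a clean image to create the (corrupted, target) pair used to train the inpainting network.

```python
import numpy as np

def add_white_spots(image, n_spots=3, max_size=8, rng=None):
    """Simulate cloud gaps: punch random white rectangles into a clean image.

    Returns (corrupted, target), where `target` is the untouched original --
    the pair an inpainting network trains on in step 1 of the plan.
    """
    rng = np.random.default_rng(rng)
    corrupted = image.copy()
    h, w = image.shape
    for _ in range(n_spots):
        sh = int(rng.integers(1, max_size + 1))   # spot height
        sw = int(rng.integers(1, max_size + 1))   # spot width
        top = int(rng.integers(0, h - sh + 1))
        left = int(rng.integers(0, w - sw + 1))
        corrupted[top:top + sh, left:left + sw] = 255
    return corrupted, image

clean = np.zeros((32, 32), dtype=np.uint8)   # stand-in for a cloud-free tile
corrupted, target = add_white_spots(clean, rng=0)
print((corrupted == 255).any())   # True: some pixels were whited out
print((target == 255).any())      # False: the target stays clean
```

Because the holes are generated on the fly, the same clean tile can yield many distinct training pairs, and the generator would be run separately per resolution to match the plan's per-resolution datasets.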