Deep Learning for Multi-Site Solar Observations: from Quality Estimation to Image Reconstruction
02.05.2019
Main category
Natural Sciences (Astrophysics and Astronomy)
Abstract
In recent years, deep learning has had a major impact across many disciplines: new state-of-the-art results have been achieved in image processing, speech recognition, decision making, and many other domains. Deep learning has proven to be robust and capable of solving complex tasks. In contrast to other algorithms, it benefits from large amounts of data, and its inference times are typically short. These properties make deep learning of interest for tasks within the SOLARNET H2020 project as well. We will discuss the potential benefits and applications of deep learning for SPRING. Adequate data filtering and the merging of simultaneous observations from multiple observing sites require a suitable measure of single-site image quality. Recent approaches based on neural networks have been shown to perform well on image quality assessment. For full-disk solar observations, additional effects such as local atmospheric and seeing conditions complicate the situation. We propose a customized neural network for image quality assessment. As a baseline we use the quality estimation currently in operation at the Kanzelhöhe Observatory for Solar and Environmental Research (KSO), which is based on a combination of parameters describing local and global properties extracted from each recorded image. The dataset will consist of manually annotated H-alpha images from 2008 to 2019, covering a wide range of solar activity conditions. The advantage of this approach is that additional observing sites can be included with reduced effort by reusing the pre-trained neural network.

Additionally, we are investigating reconstruction and homogenization methods to compensate for local seeing conditions. This can be accomplished by a neural network that translates between high- and low-quality images. The architecture is based on generative adversarial networks (GANs). We use high-quality images as conditional input to a generator network that creates realistic low-quality solar images; in parallel, a second generator is trained to reproduce the original image. With this approach, a dataset of paired ground-truth and degraded images is created. This will be of further use for artificial scenarios of multi-site observations with mixed qualities and for estimating the performance of reconstruction algorithms. To enforce the generation of realistic artificial low-quality images, a discriminator network is used to identify the differences between low- and high-quality images. With further training on the full augmented dataset, this network serves as an image quality classifier.
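The single-site quality estimation step could, for instance, be framed as a regression from a full-disk H-alpha image to a scalar quality score. The following PyTorch snippet is a minimal sketch under that assumption; the model name `QualityRegressor`, the layer sizes, the 1024x1024 input, and the score range [0, 1] are illustrative choices, not the network proposed in the presentation.

```python
# Minimal sketch (assumption): a CNN regressor mapping a single-channel
# full-disk H-alpha frame to a scalar quality score in [0, 1], which could be
# trained against KSO-style quality labels.
import torch
import torch.nn as nn

class QualityRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling keeps the model independent of image size
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x))

# Usage with a dummy batch of two 1024x1024 frames:
model = QualityRegressor()
scores = model(torch.randn(2, 1, 1024, 1024))  # tensor of shape (2, 1), values in [0, 1]
```

Such a regressor could be fine-tuned on images from additional observing sites, which is the reuse of a pre-trained network mentioned in the abstract.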
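The degradation and reconstruction setup described above resembles a cycle-consistent GAN. The sketch below shows one possible reading of it: one generator degrades a high-quality image, a second generator reconstructs the original, and a discriminator judges the synthetic low-quality images. The simple convolutional layers, the names `G_degrade`, `G_restore`, `D_low`, the loss choices, and the cycle weight of 10 are all assumptions for illustration, not the architecture used in the presentation.

```python
# Sketch of a cycle-consistent degradation/reconstruction loop (assumed setup).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.InstanceNorm2d(cout), nn.ReLU())

class Generator(nn.Module):
    """Image-to-image translator (high-quality -> degraded, or the reverse)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Patch discriminator for low-quality images; after training it can double as a quality classifier."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G_degrade, G_restore, D_low = Generator(), Generator(), Discriminator()
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

high = torch.randn(1, 1, 256, 256)   # placeholder high-quality frame
fake_low = G_degrade(high)           # synthetic seeing-degraded image
recon = G_restore(fake_low)          # cycle back to the original

d_out = D_low(fake_low)
adv_loss = bce(d_out, torch.ones_like(d_out))  # generator tries to fool the discriminator
cyc_loss = l1(recon, high)                     # cycle-consistency with the ground truth
g_loss = adv_loss + 10.0 * cyc_loss            # weighting is an arbitrary illustration value
```

Running the degradation generator over an archive of high-quality frames would yield the paired ground-truth/degraded dataset mentioned in the abstract.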
Language
English