Advances in Remote Sensing Image Enhancement and Classification

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 15 August 2024

Special Issue Editor

Associate Professor, School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, China
Interests: image fusion; remote sensing image enhancement; classification; quality assessment

Special Issue Information

Dear Colleagues,

Remote sensing technology plays a crucial role in acquiring information about the Earth's surface using sensors mounted on satellites or aircraft. The acquired images often require enhancement and classification techniques to extract meaningful information. This Special Issue explores recent advances in remote sensing image enhancement and classification, focusing on these two main streams.

Remote sensing image acquisition is generally affected by various kinds of degradation, such as noise, geometric distortion, and blur (motion, atmospheric turbulence, out-of-focus). Image enhancement has therefore become one of the central issues in the development of remote sensing. Enhancement research explores fusion, denoising, and imaging-hardware design to improve the quality of images, enabling the extraction of more comprehensive and accurate knowledge.

Turning to image classification, researchers have explored various techniques to extract meaningful information from enhanced remote sensing images, applying advanced algorithms and machine learning to classify remote sensing data accurately.

This Special Issue aims to develop state-of-the-art technologies for remote sensing image enhancement and classification.

Furthermore, with the help of these new data and technologies, the applications of remote sensing data can also be improved and expanded. Authors are sincerely invited to contribute their research results on cutting-edge technologies, novel applications, and evaluation methods for remote sensing classification and enhancement, including, but not limited to, the following topics:

For enhancement:

  1. Fusion-based enhancement (multi-modal, multi-temporal, multi-source, multi-sensor, etc.);
  2. Super-resolution-based enhancement;
  3. Denoising-based enhancement;
  4. Quality assessment for enhanced images;
  5. Design of imaging sensor/system.

For classification, there are even more research lines, such as:

  1. Hyperspectral image classification;
  2. New image classification architectures;
  3. New datasets for remote sensing image classification with deep learning;
  4. Remote sensing image processing and pattern recognition;
  5. Scene classification;
  6. Image or data fusion/fusion classification;
  7. Target detection/change detection.

If you would like more information or need any advice, please contact the Special Issue Editor, Penelope Wang, directly at <[email protected]>.

Dr. Xu Li
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (2 papers)


Research

19 pages, 10524 KiB  
Article
VELIE: A Vehicle-Based Efficient Low-Light Image Enhancement Method for Intelligent Vehicles
by Linwei Ye, Dong Wang, Dongyi Yang, Zhiyuan Ma and Quan Zhang
Sensors 2024, 24(4), 1345; https://doi.org/10.3390/s24041345 - 19 Feb 2024
Abstract
In Advanced Driving Assistance Systems (ADAS), Automated Driving Systems (ADS), and Driver Assistance Systems (DAS), RGB camera sensors are extensively utilized for object detection, semantic segmentation, and object tracking. Despite their popularity due to low costs, RGB cameras exhibit weak robustness in complex environments, particularly underperforming in low-light conditions, which raises a significant concern. To address these challenges, multi-sensor fusion systems or specialized low-light cameras have been proposed, but their high costs render them unsuitable for widespread deployment. On the other hand, improvements in post-processing algorithms offer a more economical and effective solution. However, current research in low-light image enhancement still shows substantial gaps in detail enhancement on nighttime driving datasets and is characterized by high deployment costs, failing to achieve real-time inference and edge deployment. Therefore, this paper leverages the Swin Vision Transformer combined with a gamma transformation integrated U-Net for the decoupled enhancement of initial low-light inputs, proposing a deep learning enhancement network named Vehicle-based Efficient Low-light Image Enhancement (VELIE). VELIE achieves state-of-the-art performance on various driving datasets with a processing time of only 0.19 s, significantly enhancing high-dimensional environmental perception tasks in low-light conditions.
(This article belongs to the Special Issue Advances in Remote Sensing Image Enhancement and Classification)
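The gamma transformation mentioned in the abstract is a standard brightening step that VELIE integrates into its network. The following is a purely illustrative sketch of that generic component, not the authors' implementation; the function name and default gamma value are our own assumptions.

```python
import numpy as np

def gamma_enhance(image, gamma=0.45):
    """Brighten a low-light image via a gamma transformation.

    `image` is a float array scaled to [0, 1]. With gamma < 1, dark
    pixel values are lifted more strongly than bright ones, which is
    why the transform is a common front end for low-light enhancement.
    """
    image = np.clip(image, 0.0, 1.0)
    return np.power(image, gamma)

dark = np.array([0.05, 0.2, 0.8])   # synthetic low-light pixel values
bright = gamma_enhance(dark)
```

In a learned pipeline such as the one the abstract describes, gamma would typically be predicted or tuned per image rather than fixed as here.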

25 pages, 15056 KiB  
Article
Remote Sensing Retrieval of Cloud Top Height Using Neural Networks and Data from Cloud-Aerosol Lidar with Orthogonal Polarization
by Yinhe Cheng, Hongjian He, Qiangyu Xue, Jiaxuan Yang, Wei Zhong, Xinyu Zhu and Xiangyu Peng
Sensors 2024, 24(2), 541; https://doi.org/10.3390/s24020541 - 15 Jan 2024
Abstract
In order to enhance the retrieval accuracy of cloud top height (CTH) from MODIS data, neural network models were employed based on Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) data. Three types of methods were established using MODIS inputs: cloud parameters, calibrated radiance, and a combination of both. From a statistical standpoint, models with combination inputs demonstrated the best performance, followed by models with calibrated radiance inputs, while models relying solely on calibrated radiance had poorer applicability. This work found that cloud top pressure (CTP) and cloud top temperature played a crucial role in CTH retrieval from MODIS data. However, within the same type of models, there were slight differences in the retrieved results, and these differences were not dependent on the quantity of input parameters. Therefore, the model with fewer inputs using cloud parameters and calibrated radiance was recommended and employed for individual case studies. This model produced results closest to the actual cloud top structure of the typhoon and exhibited similar cloud distribution patterns when compared with the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) CTHs from a climatic statistical perspective. This suggests that the recommended model has good applicability and credibility in CTH retrieval from MODIS images.
(This article belongs to the Special Issue Advances in Remote Sensing Image Enhancement and Classification)
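The retrieval described above maps MODIS-derived inputs (e.g., cloud top pressure and calibrated radiance) to a CALIOP-derived CTH target. The sketch below illustrates that supervised-regression setup in its simplest linear form on synthetic data; the paper uses neural networks and real satellite products, so all values and the assumed CTH-pressure relation here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for MODIS-derived inputs
ctp = rng.uniform(200.0, 1000.0, 500)      # cloud top pressure (hPa)
radiance = rng.uniform(0.1, 1.0, 500)      # calibrated radiance (arbitrary units)

# Synthetic "CALIOP" target: CTH (km) decreases with pressure, plus noise
cth = 12.0 - 0.01 * ctp + 1.5 * radiance + rng.normal(0.0, 0.2, 500)

# Fit a linear retrieval model by least squares
X = np.column_stack([np.ones_like(ctp), ctp, radiance])
coef, *_ = np.linalg.lstsq(X, cth, rcond=None)

pred = X @ coef
rmse = np.sqrt(np.mean((pred - cth) ** 2))
```

A neural network, as in the paper, replaces the linear map with a learned nonlinear one, but the input/target structure is the same.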
