 
 
Review

Emerging Technologies for Automation in Environmental Sensing: Review

by Shekhar Suman Borah 1, Aaditya Khanal 2 and Prabha Sundaravadivel 1,*
1 Department of Electrical and Computer Engineering, The University of Texas at Tyler, Tyler, TX 75799, USA
2 Department of Chemical Engineering, The University of Texas at Tyler, Tyler, TX 75799, USA
* Author to whom correspondence should be addressed.
Submission received: 19 March 2024 / Revised: 11 April 2024 / Accepted: 19 April 2024 / Published: 22 April 2024

Abstract:
This article explores the impact of automation on environmental sensing, focusing on advanced technologies that revolutionize data collection, analysis, and monitoring. The International Union of Pure and Applied Chemistry (IUPAC) defines automation as the integration of hardware and software components into modern analytical systems. Advancements in electronics, computer science, and robotics drive the evolution of automated sensing systems, overcoming traditional limitations in manual data collection. Environmental sensor networks (ESNs) address challenges in weather constraints and cost considerations, providing high-quality time-series data, although issues in interoperability, calibration, communication, and longevity persist. Unmanned Aerial Systems (UASs), particularly unmanned aerial vehicles (UAVs), play an important role in environmental monitoring due to their versatility and cost-effectiveness. Despite challenges in regulatory compliance and technical limitations, UAVs offer detailed spatial and temporal information. Pollution monitoring faces challenges related to high costs and maintenance requirements, prompting the exploration of cost-efficient alternatives. Smart agriculture encounters hurdles in data integration, interoperability, device durability in adverse weather conditions, and cybersecurity threats, necessitating privacy-preserving techniques and federated learning approaches. Financial barriers, including hardware costs and ongoing maintenance, impede the widespread adoption of smart technology in agriculture. Integrating robotics, notably underwater vehicles, proves indispensable in various environmental monitoring applications, providing accurate data in challenging conditions. This review details the significant role of transfer learning and edge computing, which are integral components of robotics and wireless monitoring frameworks.
These advancements help overcome challenges in environmental sensing, underscoring the ongoing need for research and innovation to enhance monitoring solutions. Several state-of-the-art frameworks and datasets are analyzed to provide a comprehensive review of the basic steps involved in automating environmental sensing applications.

1. Introduction

Automated sensing systems optimize processes, minimizing human intervention, ensuring uniformity, and facilitating remote surveillance. According to the International Union of Pure and Applied Chemistry (IUPAC), “automation” denotes mechanization with systematic control, emphasizing the coordination of sequential manipulations [1]. Modern analytical systems exhibit varying degrees of mechanization, incorporating both hardware and software components. However, the interpretation of “automation” in the analytical systems literature lacks consistency. Advancements in electronics, computer science, and robotics continue to propel the evolution of automated sensing systems [2].
Traditionally, data collection was manual, restricted by weather and cost, limiting coverage. A discussion about traditional methods and the transformative role of environmental sensor networks (ESNs) is reported in [3]. These wireless networks enhance data observation, providing high-quality time-series data. Yet, challenges like interoperability, calibration, communication, and longevity hinder the broad use of ESNs and robots in ecosystem sciences. Solving these problems is crucial for the widespread use of environmental robots.
Researchers are focusing on developing smart sensors that can measure accurately in demanding environments such as hospitals, space, and weather stations. These sensors help monitor machines and devices, but challenges remain, especially with wireless sensors: tracking errors, reliability, and operation in harsh conditions. Communication failures can lead to delays and signal loss, and limited coverage area constrains overall performance. Solving these problems is important for building a network that is both dependable and affordable for Industry 4.0 [4].
The challenges in environmental sensing systems include high costs and maintenance requirements for traditional monitoring instruments, creating a need for cost-efficient solutions. While expensive systems provide accurate data, low-cost alternatives often compromise accuracy. Obtaining three-dimensional air pollution statistics poses challenges due to cost and power consumption. Passive monitoring systems lack the flexibility and quality offered by active monitoring. Limited flexibility and scalability in existing systems hinder adaptability to changes in sensing node classes. Power consumption, particularly in outdoor sensor networks, remains a concern, requiring sustainable energy sources and ongoing research efforts [5,6].
Monitoring water pollution through sensor networks offers a simplified approach. The primary challenge lies in detecting pollution, often determined by whether monitored values surpass defined thresholds. Distinguishing natural background concentrations from those originating at specific pollution sources is complex for sensor nodes. To address this, methods based on statistical hypothesis testing, as presented in [7,8], aim to enhance accuracy and reliability in identifying water pollution, thereby advancing robust monitoring systems.
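As a rough illustration of the distinction such hypothesis-testing methods draw (this is a generic one-sided z-test sketch with invented values, not the specific method of [7,8]), the snippet below contrasts a fixed-threshold alarm with a test on the mean of a window of sensor readings:

```python
import numpy as np

def threshold_alarm(readings, limit):
    """Naive detection: flag if any single reading exceeds a fixed limit."""
    return np.any(np.asarray(readings) > limit)

def hypothesis_test_alarm(readings, baseline_mean, baseline_std, z_crit=2.326):
    """One-sided z-test on a window of readings: is the window mean
    significantly above the natural background concentration?
    z_crit = 2.326 corresponds to a significance level of about 0.01."""
    readings = np.asarray(readings, dtype=float)
    z = (readings.mean() - baseline_mean) / (baseline_std / np.sqrt(len(readings)))
    return z > z_crit

# Hypothetical background: mean 5.0, std 1.0 (arbitrary concentration units).
clean = np.array([4.8, 5.1, 4.9, 5.2, 5.0, 4.7, 5.3, 5.0])
polluted = clean + 1.2   # small sustained shift from a pollution source

print(bool(threshold_alarm(polluted, 7.0)))            # False: no single spike
print(bool(hypothesis_test_alarm(clean, 5.0, 1.0)))    # False: background only
print(bool(hypothesis_test_alarm(polluted, 5.0, 1.0))) # True: shift detected
```

The point of the contrast: a threshold tuned high enough to avoid false alarms misses a small but sustained shift, while the test on the window mean detects it.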
In smart agriculture, data from various sources need to work together seamlessly. These data, coming from farms, animal industries, and businesses, often come in different formats, making integration a challenge. Achieving interoperability—ensuring diverse data types can work together—is crucial for enhancing the value of widely distributed agricultural data [9]. For successful communication among different devices, interconnected and interoperable devices play a vital role in improving the overall efficiency of the system through cross-technology communication [10]. As smart agriculture advances, addressing these challenges becomes crucial for maximizing the benefits of diverse and distributed data.
Setting up a smart farming system in extensive open areas also comes with major challenges. Devices such as IoT nodes, wireless sensors, and machinery must withstand harsh weather conditions, including heavy rain, extreme temperatures, humidity, and strong winds, which can damage electronic components or disrupt their operation [11]. One way to handle this is to build strong, durable enclosures for these expensive devices, protecting them from real-world conditions and ensuring they keep working reliably [12]. This approach focuses on keeping critical equipment safe so that it can perform in tough agricultural settings.
Smart farming, dispersed and vulnerable, encounters cyber threats such as eavesdropping and denial-of-service attacks, risking privacy and system integrity. Addressing these concerns requires implementing privacy-preserving techniques and federated learning approaches [13]. Securing smart agriculture is essential for safeguarding sensitive data and ensuring the reliable operation of these advanced farming systems.
Adopting smart technology in agriculture also faces cost challenges. Setting up smart systems involves significant expenses for hardware such as robots, drones, and networks, and skilled labor for installation and operation adds further costs. Subscriptions for networks and software, even when free options exist, bring ongoing expenses, and essential maintenance increases costs further. Overcoming these financial barriers is essential for successful adoption in agriculture [14,15].
In various environmental monitoring applications, robots are proving effective in overcoming limitations inherent in traditional methodologies. Scientists increasingly rely on robotic systems as indispensable tools for data collection, providing fresh insights into our planet’s environmental processes. From exploring deep oceans to tracking algal blooms, monitoring climate variables, and studying remote volcanoes, today’s robots play diverse roles [16].
New technological developments have made Unmanned Aerial Systems (UASs) a strong choice for monitoring. They can capture the information needed for different tasks with a small investment. These systems are versatile, adaptable, and flexible, outperforming traditional manned airborne systems or satellites, and they can be deployed quickly and repeatedly to gather detailed information about specific areas [17]. UASs cannot cover as much area as satellites, but they provide highly detailed information in both space and time that satellites cannot match. Advancements in UAS and sensor technologies have broadened applications, enabling multispectral, hyperspectral, thermal, SAR, and LiDAR sensing. Recent UAS uses include land cover mapping, vegetation monitoring, precision farming, atmospheric observations, disaster mapping, soil erosion assessment, and change detection [18,19,20,21]. However, using UAVs in construction comes with challenges, including regulatory compliance, technical limitations, data processing issues, proper training, and safety concerns.
In recent times, there has been an increase in using underwater robots, such as Autonomous Underwater Vehicles (AUVs) [22,23], Remotely Operated Vehicles (ROVs) [24], and Autonomous Surface Vehicles (ASVs) [25] when on the water’s surface, to assess water quality. These robots, armed with various sensors, can monitor water quality in oceans, rivers, and lakes.
Various underwater robots, such as REMUS and AutoSub [26,27], were developed for ocean measurement. Initially, they could operate only briefly (e.g., 7 h for REMUS and 12 h for AutoSub). However, they set the stage for research in designing integrated systems, navigating without GPS, and planning missions, which allowed later vehicles to gather accurate environmental data in tough conditions for longer periods [28].
An Autonomous Underwater Vehicle (AUV) is a robot that operates underwater without human control, performing tasks such as exploration, data collection, and inspection of the ocean floor. Equipped with specialized tools and sensors, AUVs are valuable in fields such as marine science, oceanography, and underwater environmental assessment [29].
Scientists developed a system [30] using a remotely operated vehicle (ROV) to check water quality near hydropower sites. The robot carries a dissolved oxygen sensor, moves with a solar-powered system, and can be controlled through a website showing real-time data on Google Earth. Tested at McNary Dam, it effectively checks water quality at various depths and locations, offering a flexible way to monitor challenging aquatic environments.
The historical context of ground robotics traces back to the development of the first mobile robots in the 1960s and 1970s, such as Nilsson’s Shakey and the Laboratory of Systems Analysis and Architecture’s Hilare robot [31]. Since then, the field has witnessed significant growth, with rovers and minibots playing important roles in modern robotics. Rovers, exemplified by notable systems like AIR-K and ROSI, showcase advancements in autonomous ground inspection and monitoring. These robots employ tracked wheels, sensors like cameras and gas sensors, and varying maturity levels to execute operations and observations in emergencies [32]. Additionally, minibots, with their tracked wheels and laser scanner sensors, offer versatility in ground exploration [33]. This recent surge has led to applications in agriculture, industry, the military, medicine, and logistics, underlining the need for robust security solutions in robotics [34]. Ground robotic systems are also gaining prominence in disaster search-and-rescue missions, where autonomous mobile robotics research plays an important role [35,36]. The integration of advanced technologies, including artificial intelligence, computer vision, and sensor fusion, is explored to enhance the autonomy and efficiency of these robotic systems [37].
Therefore, this paper discusses the significant impact of automation on environmental sensing, focusing on the integration of advanced technologies such as Unmanned Aerial Systems (UASs) and environmental sensor networks (ESNs). It explores how these technologies overcome traditional limitations in data collection and analysis. This paper examines challenges in pollution monitoring, smart agriculture, and cybersecurity in environmental sensing. It also addresses financial barriers to adopting smart agricultural technology and stresses the importance of ongoing research and innovation in this area. Additionally, the integration of robotics, especially underwater vehicles, is highlighted for its crucial role in enhancing environmental monitoring applications. Overall, this article emphasizes the importance of automation and robotics in tackling complexities and advancing efficient, accurate, and scalable environmental monitoring solutions. Figure 1 shows the integration and synergies between environmental sensing automation, robotics, and edge computing technologies.
This paper is structured as follows: Section 2 outlines the methodologies employed in formulating this review article. Section 3 introduces emerging technologies such as deep learning, transfer learning, and edge computing, crucial for achieving automation in environmental sensing frameworks. Section 4 addresses the challenges associated with implementing these automated frameworks. Section 5 explores the future of automation in environmental remote sensing. Finally, Section 6 provides the conclusion of the review article.

2. Methodology

The literature review on automation for environmental monitoring involves thoroughly exploring diverse sources, including esteemed academic journals, conference proceedings, and scientific publications, employing rigorous selection criteria centered on relevance, recency, and credibility. Keywords like “environmental monitoring”, “transfer learning”, “automation”, “sensors”, “UAVs”, “robotics”, “IoT”, “edge computing”, and “deep learning” guide the identification of pertinent articles, revealing significant advancements such as unmanned aerial vehicles (UAVs) and underwater robots for data collection, the integration of wireless sensor networks for real-time monitoring, and the application of machine learning algorithms for data analysis. Furthermore, the emergence of edge computing for decentralized data processing is noted. Challenges like interoperability, security, and ethical considerations surrounding autonomous systems in environmental monitoring are also underscored. This synthesis of the literature offers profound insights into cutting-edge technologies, trends, and future trajectories in automation for environmental monitoring, with a focal point on enhancing efficiency, accuracy, and sustainability in data collection and analysis. This comprehensive review was conducted through a thorough exploration of databases such as Scopus, IEEE Xplore, and Google Scholar, alongside scrutiny of reputable publishers and journals, including MDPI, Elsevier, Sensors, and IOPscience, reflecting a systematic and exhaustive approach. Initially yielding 200 results, the process was refined to 128 publications through the classification procedure. This foundational work will not only expand current understanding but also lay the groundwork for future research and development in this crucial domain.

3. Emerging Technologies in Environmental Sensing

3.1. Leveraging Transfer Learning for Enhanced Environmental Sensing

In today’s world, investing resources in data assemblage is often impractical: relevant data are frequently inaccessible, expensive, and challenging to gather. Consequently, scientists and researchers have turned to a more practical approach, namely the transfer of learning or knowledge between tasks [38]. This perspective has encouraged transfer learning (TL) as a way to enhance machine learning (ML) by reusing previously gathered data and knowledge. Put simply, TL is the improvement of learning on a new task through the transfer of knowledge from a related task that has already been learned, which lowers reliance on large amounts of target-domain data for building target learners [39,40]. In TL, negative transfer occurs when the adopted approach reduces overall performance; producing positive transfer while avoiding negative transfer between related tasks is therefore essential.
Figure 2 depicts the transfer learning (TL) workflow, which involves pre-training a model on a large dataset for a related task, extracting relevant features, and adapting it to a new task by fine-tuning its parameters or using its features as input to a new model. This process accelerates learning and improves performance on the new task, much as existing skills aid in learning new ones. For instance, just as knowing how to play the violin can help someone learn the piano faster, transfer learning carries knowledge from one task over to facilitate learning in another, making adaptation quicker and more effective. TL falls into two categories: homogeneous and heterogeneous transfer learning [40]. Homogeneous transfer learning handles situations in which the source and target share identical feature spaces; otherwise, the problem is heterogeneous transfer learning. Heterogeneous transfer learning is more complex than homogeneous transfer learning because it requires feature space adaptation [41].
Transfer learning in computer vision makes use of pre-trained models and datasets to address new tasks effectively. By utilizing a pre-trained model as a foundational framework, known as a backbone model, practitioners minimize the need for extensive new data and annotations, thereby enhancing performance on the target task. This strategy streamlines the learning process, mitigating challenges such as data scarcity and annotation expense. Drawing parallels with human learning, where skills transfer across contexts, pre-trained neural networks such as VGG, ResNet, Inception, and MobileNet are widely employed as starting points. These models, accessible via platforms like TensorFlow Model Garden and PyTorch Hub, facilitate efficient model adaptation [42,43,44,45,46]. Both local and cloud-based training strategies offer benefits, with local training granting control over configurations and cloud-based training providing scalability and potentially expedited processing times. Table 1 summarizes the key features of various neural network architectures.
Transfer learning hinges on three questions: what, how, and when knowledge should be transferred. Depending on how the source and target settings differ, transfer learning can be classified into inductive, transductive, and unsupervised variants [47]. When the tasks differ but the source and target domains are related, it is called inductive learning; when the tasks are identical but the feature spaces or distributions differ, it is called transductive learning; and when no labeled data are available for training, it is called unsupervised learning. Common transfer learning methods include fine-tuning, domain adaptation, multi-task learning, feature extraction, training a model from scratch, and the direct use of a pre-trained model. Table 2 presents an overview of publicly accessible datasets frequently utilized for transfer learning investigations in the field of remote sensing.
Environmental sensing involves various tools and processing techniques to characterize the environment, including hyperspectral monitoring, atmospheric propagation, pollution monitoring, temperature and humidity monitoring, air quality measurement, soil monitoring, wind speed and direction monitoring, rainfall, radiation, gas, water pressure, forest fire detection, landslide detection, etc. [48]. For environmental studies, various satellites have also been widely employed [49]. The emergence of sensing methods like Synthetic-Aperture Radar (SAR) is beneficial for forest monitoring and disaster management due to its capability to penetrate via clouds and work effectively in all weather conditions [50]. Light Detection and Ranging (LiDAR) can capture 3D data and has been employed for harvest and topographic mapping [51,52]. Nowadays, unmanned aerial vehicles (UAVs) mounted with high-resolution sensors have seen vast applications in precision agriculture [53,54] as they provide high flexibility and permit real-time data acquisition. Therefore, the advancement of sensing technologies revolutionized how someone observes the Earth’s surface and delivered strategies for comprehending environmental transformations that may not be noticeable from the ground.
Table 2. Commonly used datasets for transfer learning in remote sensing.
Ref. | Dataset Name | Location | Purpose | Number of Images | Number of Classes | Bands | Resolution | Pixel Count
[55] | DeepWeeds | Australia | Weed detection and classification | 17,509 | 8 | RGB | NA | 40.3 B
[56] | Crop/Weed Field Image Dataset (CWFID) | Global | Crop and weed discrimination | 60 | 2 | RGB | NA | 0.08 B
[57] | Aerial Image Data Set (AID) | Global | Land use classification from aerial images | 10,000 | 30 | RGB | 8 m/0.5 m | NA
[58] | Agriculture-Vision Dataset | USA | Field anomaly pattern segmentation | 94,986 | 9 | RGB and Near-Infrared (NIR) | 10/15/20 cm/px | 22.6 B
[59] | Satlas | Global/USA | Land use classification from satellite imagery | 856,000 | 137 | RGB and Near-Infrared (NIR) | 10 m/1 m | 5 M

3.2. Harnessing the Power of Deep Learning in Environmental Sensing

The Fourth Industrial Revolution generally concentrates on technology-driven “industrialization, smart and intelligent systems”, in which deep learning (DL) [60] has evolved into one of the vital technologies. A frontier of AI, DL builds on artificial neural networks (ANNs) with nonlinear activation functions, training systems to process data in a manner inspired by the human brain. DL is widely applied in areas ranging from automated driving and medical devices to facial recognition, digital assistants, fraud detection, healthcare, and cybersecurity [61]. Hence, its capacity for automation and for learning from prior knowledge can transform the globe and our daily lives.
Figure 3 depicts the deep learning workflow, which includes data preparation, model design, training, validation, testing, and deployment stages. It begins with data collection and preprocessing, followed by the design of a neural network architecture tailored to the task. The model is then trained using labeled data to minimize error, validated for generalization to unseen examples, and tested on new data. Upon successful validation, the trained model is deployed for real-world applications.
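A deliberately minimal stand-in for this workflow, using a single-layer model (logistic regression) and synthetic data in place of a deep network and real sensor data, might look like the following; all values and the labeling rule are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

# 1. Data preparation: synthetic 4-feature "sensor readings" with two
#    classes (e.g. normal vs anomalous), split into train/val/test.
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # hypothetical labeling rule
X_train, X_val, X_test = X[:200], X[200:250], X[250:]
y_train, y_val, y_test = y[:200], y[200:250], y[250:]

# 2. Model design: a single-layer network, the simplest neural stand-in.
w = np.zeros(4)
b = 0.0

def predict_proba(X, w, b):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# 3. Training: gradient descent on the cross-entropy loss.
for epoch in range(500):
    p = predict_proba(X_train, w, b)
    grad_w = X_train.T @ (p - y_train) / len(y_train)
    grad_b = np.mean(p - y_train)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# 4-5. Validation and testing: accuracy on held-out splits.
def accuracy(X, y, w, b):
    return np.mean((predict_proba(X, w, b) > 0.5) == y)

print(accuracy(X_val, y_val, w, b))    # typically near 1.0 on this easy data
print(accuracy(X_test, y_test, w, b))
```

A real deep learning pipeline replaces step 2 with a multi-layer architecture and step 3 with mini-batch optimizers, but the prepare/design/train/validate/test/deploy skeleton is the same.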
DL has emerged as a universal learning approach due to its capability to perform in nearly all application domains. Figure 4 represents the various DL applications in environmental sensing. DL applications are found in object detection and recognition [62], predictive maintenance [63], agriculture [64], robotic surgery [65], medical applications [66], autonomous driving [67], etc. By training neural networks with large quantities of labeled data, robots can recognize and categorize objects in their surroundings with increased precision. Also, by analyzing sensor data, predictive maintenance algorithms can foresee when a robot’s components may fail, permitting proactive repairs or replacements. DL has the prospect of revolutionizing the farming industry by enabling more efficient crop production, autonomously guiding and managing tasks such as planting, harvesting, and spraying, and lowering labor expenses in agribusiness [68].
Traditional farming incorporates agroforestry, crop rotation, intercropping, polycultures, and water harvesting practices. Before scientific advancements, traditional agriculture relied on simple tools, organic fertilizers, indigenous knowledge, and cultural beliefs. However, traditional practices can deplete soil nutrients, with methods like slash and burn reducing soil organic matter; they also drive deforestation in tropical rainforests and lead to soil erosion, removing fertile topsoil that takes decades to replenish [69].
Deep learning, revolutionary across industries, transforms agriculture by integrating innovative solutions into traditional farming practices. Employing deep learning with drone technology is crucial for convenient crop monitoring through high-quality, high-resolution image capture [70]. This technology facilitates the identification of field advancements and quality assessment. For instance, using images from drone technology, agriculturalists can ascertain the readiness of crops for harvesting. The integration of this technology in agriculture has revolutionized farming. Researchers are particularly motivated to explore the applications of deep learning in enhancing efficiency, especially in farming, harvesting, and yield predictions.
Precise fruit counting is vital for growers as it allows for yield estimation, facilitating effective yard management. Employing automated fruit detection and algorithms, as outlined in [71], becomes instrumental in optimizing agricultural production and streamlining the harvest process.
In [72], the researchers proposed an automated yield estimation method using efficient robotic agricultural techniques. Achieving a high 91% accuracy with Inception-ResNet, the approach eliminates the need for an extensive dataset, benefiting farmers in efficient and precise fruit counting and decision-making. Undesirable plants, known as weeds, can hamper crop production by competing for resources. DL techniques, detailed in [73,74], provide efficient weed identification, diminishing the need for weedicides and tackling herbicide resistance. Through the utilization of SVMs and CNNs, DL aids in classifying weeds, alleviating farmers’ workload, and ultimately improving crop yields. Therefore, while deep learning has the potential to revolutionize agriculture, addressing robustness, interpretability, data modality integration, and implementing few-shot learning is crucial. Further research in these areas is necessary for unleashing the full power of deep learning in agriculture.
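As a toy illustration of SVM-based weed classification (a generic scikit-learn sketch, not the pipelines of [73,74]), the snippet below separates synthetic “crop” and “weed” feature vectors; the feature names and class statistics are invented for the example:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical hand-crafted features per plant image:
# [greenness index, leaf area, texture contrast]
crops = rng.normal(loc=[0.8, 4.0, 0.2], scale=0.1, size=(100, 3))
weeds = rng.normal(loc=[0.6, 1.5, 0.5], scale=0.1, size=(100, 3))

X = np.vstack([crops, weeds])
y = np.array([0] * 100 + [1] * 100)   # 0 = crop, 1 = weed

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0)

# RBF-kernel SVM: the classic shallow baseline that CNN-based weed
# detectors are compared against.
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # 1.0 on these well-separated synthetic classes
```

In practice the hand-crafted features would be replaced by CNN feature maps learned directly from field imagery, which is where the deep learning approaches cited above improve on SVM pipelines.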
Forecasting air pollution provides reliable data on upcoming pollution levels, assisting in the efficient implementation of air pollution control measures and enabling proactive planning. Air pollution dynamics are commonly influenced by factors such as temperature, humidity, wind direction, wind speed, snowfall, and rainfall, which complicates predicting changes in air pollutant concentrations. Global concerns over air pollution stem from its adverse effects on human health, climate, agriculture, ecosystems, and visibility [75,76,77]. Particulate matter with an aerodynamic diameter of less than 2.5 μm (PM2.5) and ozone (O3) are major contributors to premature mortality globally, especially PM2.5 pollution. Outdoor fine particulate and ozone air pollution are estimated to cause 8.34 million excess deaths annually, with O3 pollution accounting for nearly half a million deaths [78]. Table 3 summarizes various studies addressing air and water pollution monitoring and mitigation strategies, listing for each study its location, the model or approach utilized, the performance measure used to evaluate effectiveness, and the key findings or outcomes.
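To make the forecasting task concrete, a deliberately simple autoregressive baseline on synthetic hourly PM2.5 data (the daily cycle and noise level are invented for the example; real forecasters add the meteorological covariates mentioned above and nonlinear models) can be sketched as:

```python
import numpy as np

# Hypothetical hourly PM2.5 series: a daily cycle plus noise.
rng = np.random.default_rng(1)
t = np.arange(24 * 30)                       # 30 days of hourly data
pm25 = 35 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

def make_lagged(series, n_lags):
    """Build (X, y): each row of X holds the previous n_lags values,
    y is the next value."""
    X = np.column_stack([series[i:len(series) - n_lags + i]
                         for i in range(n_lags)])
    return X, series[n_lags:]

X, y = make_lagged(pm25, n_lags=24)          # one day of history per sample
split = len(y) - 48                          # hold out the last two days
X_tr, y_tr, X_te, y_te = X[:split], y[:split], X[split:], y[split:]

# Ordinary least squares: next-hour PM2.5 as a linear function of the lags.
A_tr = np.column_stack([X_tr, np.ones(len(X_tr))])
coef, *_ = np.linalg.lstsq(A_tr, y_tr, rcond=None)
pred = np.column_stack([X_te, np.ones(len(X_te))]) @ coef

rmse = np.sqrt(np.mean((pred - y_te) ** 2))
print(float(rmse))  # typically close to the injected noise level
```

Such a linear lag model is the usual baseline that the deep forecasters in Table 3 are measured against.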

3.3. Edge Computing in Environmental Sensing

The surge in data and processing reveals drawbacks in cloud-based big data processing. Challenges include real-time constraints caused by the growing volume of data transmitted from edge devices, which jeopardizes privacy and causes delays. Figure 5 presents the edge computing workflow, which involves real-time data processing; data caching, buffering, and optimization; and decision-making, followed by storage and communication for data transmission, ultimately enabling iterative refinement for greater efficiency and effectiveness. The growing number of smart devices leads to substantial energy consumption in data centers, hampering efforts to boost energy efficiency. These issues underscore the need for alternative solutions to address the evolving demands of our intelligent society. As human needs continually evolve and smart societies advance, intelligence becomes an integral part of our daily routines, as seen in smart homes, autonomous vehicles, transportation, cameras, and industrial manufacturing components. The rapid progression of IoT, AI, big data, and cloud computing has given rise to edge computing, a distributed computing paradigm that places network, computing, storage, and application capabilities close to the physical data sources, aiming to achieve faster network service responses by reducing latency and bandwidth utilization [90].
Before edge computing, traditional cloud computing centralized data transfer to the cloud center. The concept originated in 2006, when Google’s then-CEO introduced it at a search engine conference. With Google’s influence, cloud computing became a robust platform with distributed computing, load balancing, and virtualization. Despite these strengths, the rise of the IoT challenges cloud computing: increased device data strains bandwidth for time-sensitive and real-time systems [91,92], affecting load management, real-time performance, bandwidth, energy consumption, and data security [93,94,95]. The escalating number of mobile and IoT devices strains network bandwidth, resulting in delays in data delivery. Consequently, ref. [96] introduces a strategy for selecting high-performance cloudlets within edge computing frameworks. Edge computing improves scalability in IoT platforms by decentralizing data processing, reducing latency, optimizing bandwidth, and increasing security. By moving computation nearer to the data source, edge computing enables instantaneous decision-making, diminishes data transmission expenses, and enhances system robustness.
Therefore, edge computing expands on cloud computing, offering distinct features. Cloud computing excels in comprehending the entirety, processing extensive data, and contributing to non-real-time data processing, such as in business decision-making. In contrast, edge computing focuses on the local context, proving more effective in small-scale, real-time intelligent analysis, especially in meeting the immediate needs of local businesses. In applications involving intelligence, cloud computing is better suited for the centralized processing of voluminous data, while edge computing can be applied for compact intelligent analysis and localized services [97].
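The local-analysis idea can be illustrated with a small sketch: a hypothetical edge node condenses one hour of per-second sensor readings into per-minute summary statistics before transmission, shrinking the payload sent to the cloud (all values and the summary format are invented for the example):

```python
import json

def edge_summarize(readings, window=60):
    """Edge node: reduce raw per-second readings to per-window summaries
    before transmission, instead of forwarding every sample to the cloud."""
    summaries = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        summaries.append({
            "n": len(chunk),
            "min": min(chunk),
            "max": max(chunk),
            "mean": round(sum(chunk) / len(chunk), 2),
        })
    return summaries

# One hour of per-second temperature samples (hypothetical slow drift).
raw = [20.0 + 0.001 * i for i in range(3600)]
summaries = edge_summarize(raw)

raw_bytes = len(json.dumps(raw).encode())
edge_bytes = len(json.dumps(summaries).encode())
print(len(summaries))           # 60 one-minute summaries
print(raw_bytes > edge_bytes)   # True: far fewer bytes leave the edge node
```

The cloud side then works only with the summaries (or receives raw data on demand when a summary looks anomalous), which is exactly the latency-and-bandwidth trade described above.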
Figure 5. Edge computing workflow [97].
Smart agriculture envisions leveraging diverse Information and Communication Technologies (ICTs) to enhance productivity in a sustainable and economically feasible way. Edge computing [98] offers the farming community a way to access and use smart agriculture services more efficiently.
In [99], the authors presented a budget-friendly solution for the large-scale monitoring of environmental parameters using flying IoT and edge–cloud computing. Implemented and evaluated on a working farm in Medenine, Tunisia, the system helps farmers, government entities, and manufacturers anticipate environmental conditions across the farm’s vast area, contributing to improved crop productivity and cost-effective, timely farm management.
An advanced, cooperative, and hierarchical UAV-WSN system for intelligent crop monitoring in precision agriculture was proposed in [100]. The system demonstrated both resilience and efficiency, with improved performance and optimized trajectories, allowing for the effective utilization of limited resources within the ground sensor network.
The effectiveness of edge computing for scalable data analytics is demonstrated in [101]. Using a Raspberry Pi as both a base station and an edge node, the system predicts cherry tomato growth states; the predictions are then transmitted to a central cloud server for consolidation, model integration, and analysis, leading to yield predictions. This approach minimizes data traffic and gives farmers the ability to secure and selectively share their data.
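To make this edge/cloud split concrete, the sketch below mimics the pattern with a deliberately simplified stand-in: each edge node applies a trivial trend rule in place of a real growth-state model, and the cloud consolidates the resulting labels into a farm-level estimate. All thresholds, labels, and weights here are hypothetical illustrations, not values from the cited system.

```python
def edge_predict_growth(leaf_area_series):
    """Hypothetical edge-side rule standing in for an on-device model:
    classify growth state from the recent trend in leaf area."""
    trend = leaf_area_series[-1] - leaf_area_series[0]
    return "vigorous" if trend > 0.5 else "slow"

def cloud_consolidate(predictions, base_yield=2.0):
    """Cloud-side consolidation: combine per-node growth states into a
    single farm-level yield estimate (weights are illustrative only)."""
    boost = sum(1 for p in predictions if p == "vigorous") / len(predictions)
    return base_yield * (1 + 0.5 * boost)

# Three edge nodes, each holding its own local leaf-area measurements.
nodes = [[3.0, 3.2, 3.9], [2.8, 2.9, 3.0], [4.1, 4.4, 5.0]]
preds = [edge_predict_growth(series) for series in nodes]  # only labels go upstream
estimate = cloud_consolidate(preds)
```

The raw measurement series never leave the nodes; only short labels are transmitted, which is how the approach minimizes data traffic.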
Amid the rapid proliferation of IoT devices, edge computing has grown considerably in significance as a viable alternative to cloud computing, encompassing the technologies that enable computation at the network’s edge. In one study [102], the researchers built a prototype for pollution evaluation using an Arduino board and the IBM Watson IoT platform; with edge computing, their model reduces the computational load on the cloud. Merging low-cost sensors with Wireless Sensor Networks (WSNs) yields a comprehensive system: such systems improve the detection of air pollutant dispersion and enable individuals, including community users, to assess their personal exposure to pollutants through wearable sensor nodes [103]. Various methodologies exist for such air quality assessment.
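One common edge-side tactic for reducing the computational and network load on the cloud is report-by-exception, illustrated below. The deadband value and the simulated PM2.5 readings are assumptions chosen for demonstration, not parameters from [102].

```python
def report_by_exception(readings, deadband=2.0):
    """Edge-side filter: forward a reading to the cloud only when it
    departs from the last reported value by more than the deadband,
    cutting upstream traffic while preserving significant changes."""
    reported = []
    last = None
    for t, value in enumerate(readings):
        if last is None or abs(value - last) > deadband:
            reported.append((t, value))
            last = value
    return reported

pm25 = [12.0, 12.4, 11.8, 19.5, 20.1, 12.2, 12.0]  # simulated PM2.5 stream
to_cloud = report_by_exception(pm25)  # 7 readings reduced to 3 reports
```

The filter forwards the initial value and the two sharp excursions while suppressing small fluctuations, so the cloud still sees every meaningful pollution event.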
Edge computing addresses issues such as delay and network instability in cloud computing, with applications in smart transportation, urban settings, and water conservancy. The study in [104] efficiently predicted ecological water demand by combining edge computing and GIS; ref. [105] minimized service response times through mobile edge computing; and ref. [106] proposed a framework integrating blockchain and edge computing to promote water-saving practices intelligently. Table 4 shows how edge computing techniques are applied across different domains in agriculture and environmental monitoring. From livestock management to environmental sensing, edge computing is used to reduce latency, offload computation, and manage data traffic, underscoring its crucial role in improving efficiency and effectiveness across sectors from agriculture to wildlife protection.

3.4. Evaluating Advantages and Addressing Limitations

This section explores three key technologies in environmental sensing: transfer learning, deep learning, and edge computing. Table 5 delineates their distinctive advantages, applicability, and potential limitations in diverse environmental sensing scenarios, marking a paradigm shift toward smarter and more sustainable monitoring.

4. Challenges and Solutions in Implementing Automation

The successful implementation of automation in environmental sensing and monitoring faces several obstacles. First, automated sensors and monitoring systems must collect data accurately and precisely: changes in environmental conditions, sensor drift, and calibration issues can all degrade measurements, so regular calibration and maintenance are necessary. Second, integrating data from different sources, such as remote sensors, satellite images, and ground-based stations, is difficult because of differences in data formats, resolution, and quality; effective data-integration strategies are needed to create a unified picture of environmental conditions. Third, the upfront and ongoing costs of automation technologies can be prohibitive for many organizations, especially in resource-limited settings, so identifying cost-effective solutions that do not compromise data quality is essential. Keeping automated sensors and systems running smoothly also requires regular upkeep, including sensor calibration, software updates, and hardware repairs, and managing maintenance costs while minimizing downtime is a significant challenge. Deploying automated monitoring systems in remote or hazardous areas raises further difficulties: access to power, network connectivity, and infrastructure for data storage and transmission are fundamental considerations. Finally, scaling up automated monitoring to cover larger areas or handle more data can strain existing resources and infrastructure, and ensuring that a system scales while remaining performant and reliable is a complex task.
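As an example of the routine calibration work these systems require, a standard two-point recalibration corrects linear sensor drift by re-deriving the sensor's gain and offset from two reference measurements. The numeric values below are illustrative, not taken from any deployed system.

```python
def two_point_calibration(raw_low, raw_high, ref_low, ref_high):
    """Derive gain and offset so that corrected = gain * raw + offset
    maps the sensor's readings at two reference points onto the truth."""
    gain = (ref_high - ref_low) / (raw_high - raw_low)
    offset = ref_low - gain * raw_low
    return gain, offset

# A drifted sensor reads 5.2 in a 0-unit reference bath and 105.2 at
# a 100-unit standard; derive the correction and apply it in the field.
gain, offset = two_point_calibration(5.2, 105.2, 0.0, 100.0)
corrected = gain * 55.2 + offset
```

Here the drift is a pure offset, so the derived gain is 1 and the correction simply subtracts the 5.2-unit bias; in general both gain and offset drift are compensated.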
To overcome these challenges, experts from fields such as engineering, data science, environmental science, and economics need to collaborate. Working together, researchers, practitioners, policymakers, and industry players can devise innovative solutions and remove obstacles to automation in environmental sensing and monitoring. Automation technologies, combined with advanced methods such as machine learning (ML), deep learning (DL), transfer learning, and edge computing, offer promising ways to enhance environmental monitoring. ML algorithms can analyze extensive environmental data, identify patterns and anomalies, and predict changes, which supports early warning systems for natural disasters or pollution events [106]. DL techniques, especially Convolutional Neural Networks (CNNs), excel at image recognition, making them valuable for tasks such as monitoring wildlife habitats or accurately identifying pollution sources in satellite imagery or drone footage. Transfer learning adapts pre-trained models to specific environmental monitoring tasks by transferring knowledge from one domain to another, speeding up model training and improving performance, especially when labeled environmental data are scarce. Edge computing, by bringing computation closer to data sources, enables real-time processing and analysis with reduced latency and bandwidth requirements, making it well suited to air quality monitoring or wildlife tracking in remote areas. Sensor fusion techniques integrate data from multiple sensors to improve the reliability and depth of monitoring data, offering a more comprehensive understanding of environmental conditions. Finally, deploying IoT-enabled sensors creates interconnected systems that continuously monitor environmental parameters in real time, providing valuable insights for environmental management and decision-making [112,113,114].
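A minimal version of the early-warning idea is a z-score test against a historical baseline. Real deployments would use far richer models; the threshold below is only a conventional default, and the readings are invented for illustration.

```python
from statistics import mean, stdev

def zscore_alerts(history, new_readings, threshold=3.0):
    """Flag readings whose z-score against the historical baseline
    exceeds the threshold -- a minimal early-warning rule."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in new_readings if abs(x - mu) / sigma > threshold]

# Stable historical baseline, then one reading far outside it.
history = [10.0, 10.5, 9.8, 10.2, 9.9, 10.1, 10.4, 9.7]
alerts = zscore_alerts(history, [10.3, 14.0, 9.9])
```

Only the 14.0 reading is flagged; values consistent with the baseline pass silently, which keeps alert fatigue low.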
Engaging citizens in environmental monitoring through crowdsourcing initiatives empowers communities to contribute data and observations, enhancing traditional monitoring efforts and enabling broader coverage and increased spatial resolution [115]. Using these solutions in environmental monitoring can help us understand nature better and improve how we conserve and manage it.

5. Future of Automation in Environmental Remote Sensing

The adoption of IoT technology and smart sensors in agriculture has revolutionized farming practices, enabling farmers to monitor environmental conditions, crop health, and soil quality in real time, with gains in both productivity and sustainability [116]. A review of wearable devices for environmental monitoring [117] discusses key pollutants, sensor types, and applications, emphasizing the move toward fully wearable technology that links pollution levels with personal data to estimate individual exposure, despite persistent challenges in standardizing air quality assessments. This market is expected to reach $508.64 billion by 2032, reflecting substantial growth in adoption and innovation. The market for IoT sensors, in particular, is projected to grow rapidly, from $11.1 billion in 2022 to $29.6 billion by 2026 [118], driven largely by the expanding use of IoT technology for environmental monitoring.
Advancements in technology, particularly the integration of unmanned aerial vehicles (UAVs) and artificial intelligence (AI) with machine learning (ML) algorithms, have revolutionized environmental monitoring and management [119]. UAVs equipped with sophisticated sensors collect high-resolution data over vast areas, enabling comprehensive surveillance of environmental parameters such as air and water quality, biodiversity, and land use. These data are then processed using AI and ML algorithms to perform predictive modeling for environmental changes. By analyzing historical and real-time data, these algorithms can forecast future environmental trends, anticipate potential threats such as natural disasters or habitat degradation, and inform proactive mitigation strategies [120]. This synergy between UAVs, AI, ML, and predictive modeling represents a powerful tool for addressing environmental challenges and promoting the sustainable stewardship of our planet.
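As a toy stand-in for the predictive modeling described above, the sketch below fits a least-squares line to a short environmental time series and extrapolates one step ahead. The NDVI values are hypothetical, and a real pipeline would use far more sophisticated models on UAV-derived data.

```python
def linear_forecast(series, steps_ahead=1):
    """Fit y = a + b*t by ordinary least squares and extrapolate
    steps_ahead past the end of the series."""
    n = len(series)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(series) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, series)) / \
        sum((t - t_mean) ** 2 for t in ts)
    a = y_mean - b * t_mean
    return a + b * (n - 1 + steps_ahead)

ndvi = [0.52, 0.55, 0.58, 0.61, 0.64]  # hypothetical vegetation index trend
next_value = linear_forecast(ndvi)     # extrapolated value for the next epoch
```

On this perfectly linear series, the fit recovers the 0.03-per-step trend exactly; on real data the same pattern (fit historical trend, project forward) underlies the forecasting step, however complex the model.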

6. Conclusions

In conclusion, automation and advanced technologies have brought about transformative changes in environmental sensing and monitoring. From environmental sensor networks (ESNs) to Unmanned Aerial Systems (UASs) and smart agriculture, automation plays a crucial role in overcoming traditional limitations and enhancing data collection and analysis. Despite challenges such as cost, maintenance, and scalability, collaborative efforts across various fields are essential to address these obstacles and advance automation in environmental monitoring. Emerging technologies like machine learning (ML), deep learning (DL), transfer learning, and edge computing offer promising solutions to enhance monitoring accuracy and efficiency. Furthermore, the future of automation in environmental remote sensing looks promising, with the integration of the IoT, smart sensors, unmanned aerial vehicles (UAVs), and artificial intelligence (AI) expected to revolutionize environmental monitoring and management. By harnessing the power of automation and technological innovations, we can better understand and protect our environment for future generations.

Author Contributions

All authors contributed equally to the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by VentureWell Grant ID: 25827-22, Title: Robotic Things for Sustainable Smart Cities—An Entrepreneurial Engineering Perspective, and by the Procter & Gamble (P&G) 2022/23 U.S. Higher Education Grant titled Data-driven Approach to Problem Solving in Engineering Education Using Greenhouse Gas Mitigation Experiments and Simulations. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding institutions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. McNaught, A.D.; Wilkinson, A. Compendium of Chemical Terminology: IUPAC Recommendations, 2nd ed.; Blackwell Scientific Publications: Hoboken, NJ, USA, 1997. [Google Scholar]
  2. Raju, C.M.; Elpa, D.P.; Urban, P.L. Automation and Computerization of (Bio)sensing Systems. ACS Sens. 2024, 9, 1033–1048. [Google Scholar] [CrossRef]
  3. Ballesteros-Gómez, A.; Rubio, S. Recent Advances in Environmental Analysis. Anal. Chem. 2011, 83, 4579–4613. [Google Scholar] [CrossRef] [PubMed]
  4. Majstorovic, V.; Stojadinovic, S.; Zivkovic, S.; Djurdjanovic, D.; Jakovljevic, Z.; Gligorijevic, N. Cyber-physical Manufacturing Metrology Model (CPM3) for Sculptured Surfaces–Turbine Blade Application. Procedia CIRP 2017, 63, 658–663. [Google Scholar] [CrossRef]
  5. Yi, W.Y.; Lo, K.M.; Mak, T.; Leung, K.S.; Leung, Y.; Meng, M.L. A Survey of Wireless Sensor Network Based Air Pollution Monitoring Systems. Sensors 2015, 15, 31392–31427. [Google Scholar] [CrossRef] [PubMed]
  6. Gao, Y.; Dong, W.; Guo, K.; Liu, X.; Chen, Y.; Liu, X.; Bu, J.; Chen, C. Mosaic: A Low-Cost Mobile Sensing System for Urban Air Quality Monitoring. In Proceedings of the IEEE INFOCOM 2016—The 35th Annual IEEE International Conference on Computer Communications, San Francisco, CA, USA, 10–14 April 2016; pp. 1–9. [Google Scholar]
  7. Luo, X.; Yang, J. Water Pollution Source Detection in Wireless Sensor Networks. In Proceedings of the 2015 IEEE International Conference on Information and Automation, Lijiang, China, 8–10 August 2015; pp. 2311–2315. [Google Scholar]
  8. Luo, X.; Yang, J. Water Pollution Detection Based on Hypothesis Testing in Sensor Networks. J. Sens. 2017, 2017, 3829894. [Google Scholar] [CrossRef]
  9. Aydin, S.; Aydin, M.N. Semantic and Syntactic Interoperability for Agricultural Open-Data Platforms in the Context of IoT Using Crop-Specific Trait Ontologies. Appl. Sci. 2020, 10, 4460. [Google Scholar] [CrossRef]
  10. He, Y.; Guo, J.; Zheng, X. From Surveillance to Digital Twin: Challenges and Recent Advances of Signal Processing for Industrial Internet of Things. IEEE Signal Process. Mag. 2018, 35, 120–129. [Google Scholar] [CrossRef]
  11. Farooq, M.S.; Riaz, S.; Abid, A.; Abid, K.; Naeem, M.A. A Survey on the Role of IoT in Agriculture for the Implementation of Smart Farming. IEEE Access 2019, 7, 156237–156271. [Google Scholar] [CrossRef]
  12. Villa-Henriksen, A.; Edwards, G.T.; Pesonen, L.A.; Green, O.; Sørensen, C.A.G. Internet of Things in arable farming: Implementation, applications, challenges and potential. Biosyst. Eng. 2020, 191, 60–84. [Google Scholar] [CrossRef]
  13. Xu, G.; Li, H.; Liu, S.; Yang, K.; Lin, X. VerifyNet: Secure and Verifiable Federated Learning. IEEE Trans. Inf. Forensics Secur. 2019, 15, 911–926. [Google Scholar] [CrossRef]
  14. Sinha, B.B.; Dhanalakshmi, R. Recent advancements and challenges of Internet of Things in smart agriculture: A survey. Futur. Gener. Comput. Syst. 2022, 126, 169–184. [Google Scholar] [CrossRef]
  15. Caffaro, F.; Cavallo, E. The Effects of Individual Variables, Farming System Characteristics and Perceived Barriers on Actual Use of Smart Farming Technologies: Evidence from the Piedmont Region, Northwestern Italy. Agriculture 2019, 9, 111. [Google Scholar] [CrossRef]
  16. Bogue, R. The role of robots in environmental monitoring. Ind. Robot. Int. J. Robot. Res. Appl. 2023, 50, 369–375. [Google Scholar] [CrossRef]
  17. Pajares, G. Overview and Current Status of Remote Sensing Applications Based on Unmanned Aerial Vehicles (UAVs). Photogramm. Eng. Remote Sens. 2015, 81, 281–330. [Google Scholar] [CrossRef]
  18. Singh, K.K.; Frazier, A.E. A meta-analysis and review of unmanned aircraft system (UAS) imagery for terrestrial applications. Int. J. Remote Sens. 2018, 39, 5078–5098. [Google Scholar] [CrossRef]
  19. Akar, O. Mapping land use with using Rotation Forest algorithm from UAV images. Eur. J. Remote Sens. 2017, 50, 269–279. [Google Scholar] [CrossRef]
  20. von Bueren, S.K.; Burkart, A.; Hueni, A.; Rascher, U.; Tuohy, M.P.; Yule, I.J. Deploying four optical UAV-based sensors over grassland: Challenges and limitations. Biogeosciences 2015, 12, 163–175. [Google Scholar] [CrossRef]
  21. Ludovisi, R.; Tauro, F.; Salvati, R.; Khoury, S.; Mugnozza, G.S.; Harfouche, A. UAV-Based Thermal Imaging for High-Throughput Field Phenotyping of Black Poplar Response to Drought. Front. Plant Sci. 2017, 8, 1681. [Google Scholar] [CrossRef]
  22. Paull, L.; Saeedi, S.; Seto, M.; Li, H. AUV Navigation and Localization: A Review. IEEE J. Ocean. Eng. 2013, 39, 131–149. [Google Scholar] [CrossRef]
  23. Li, D.; Du, L. AUV Trajectory Tracking Models and Control Strategies: A Review. J. Mar. Sci. Eng. 2021, 9, 1020. [Google Scholar] [CrossRef]
  24. Aguirre-Castro, O.A.; Inzunza-González, E.; García-Guerrero, E.E.; Tlelo-Cuautle, E.; López-Bonilla, O.R.; Olguín-Tiznado, J.E.; Cárdenas-Valdez, J.R. Design and Construction of an ROV for Underwater Exploration. Sensors 2019, 19, 5387. [Google Scholar] [CrossRef] [PubMed]
  25. Peng, Z.; Wang, J.; Wang, D.; Han, Q.-L. An Overview of Recent Advances in Coordinated Control of Multiple Autonomous Surface Vehicles. IEEE Trans. Ind. Inform. 2020, 17, 732–745. [Google Scholar] [CrossRef]
  26. Allen, B.; Stokey, R.; Austin, T.; Forrester, N.; Goldsborough, R.; Purcell, M.; von Alt, C. REMUS: A Small Low-Cost AUV: System Description, Field Trials, Performance Results. In Proceedings of the Oceans 97. MTS/IEEE Conference Proceedings, Halifax, NS, Canada, 6–9 October 1997; pp. 994–1000. [Google Scholar]
  27. Wynn, R.B.; Huvenne, V.A.I.; Le Bas, T.P.; Murton, B.J.; Connelly, D.P.; Bett, B.J.; Ruhl, H.A.; Morris, K.J.; Peakall, J.; Parsons, D.R.; et al. Autonomous Underwater Vehicles (AUVs): Their past, present and future contributions to the advancement of marine geoscience. Mar. Geol. 2014, 352, 451–468. [Google Scholar] [CrossRef]
  28. Yuh, J. Design and Control of Autonomous Underwater Robots: A Survey. Auton. Robot. 2000, 8, 7–24. [Google Scholar] [CrossRef]
  29. Yang, Y.; Xiao, Y.; Li, T. A Survey of Autonomous Underwater Vehicle Formation: Performance, Formation Control, and Communication Capability. IEEE Commun. Surv. Tutor. 2021, 23, 815–841. [Google Scholar] [CrossRef]
  30. Salalila, A.; Martinez, J.; Elsinghorst, R.; Hou, H.; Yuan, Y.; Deng, Z.D. Realtime and Autonomous Water Quality Monitoring System Based on Remotely Operated Vehicle. In Proceedings of the Global Oceans 2020: Singapore—U.S. Gulf Coast, Biloxi, MS, USA, 5–30 October 2020; pp. 1–5. [Google Scholar]
  31. Raj, R.; Kos, A. A Comprehensive Study of Mobile Robot: History, Developments, Applications, and Future Research Perspectives. Appl. Sci. 2022, 12, 6951. [Google Scholar] [CrossRef]
  32. Wijayathunga, L.; Rassau, A.; Chai, D. Challenges and Solutions for Autonomous Ground Robot Scene Understanding and Navigation in Unstructured Outdoor Environments: A Review. Appl. Sci. 2023, 13, 9877. [Google Scholar] [CrossRef]
  33. Bruzzone, L.; Nodehi, S.E.; Fanghella, P. Tracked Locomotion Systems for Ground Mobile Robots: A Review. Machines 2022, 10, 648. [Google Scholar] [CrossRef]
  34. Yaacoub, J.-P.A.; Noura, H.N.; Salman, O.; Chehab, A. Robotics cyber security: Vulnerabilities, attacks, countermeasures, and recommendations. Int. J. Inf. Secur. 2022, 21, 115–158. [Google Scholar] [CrossRef]
  35. Loganathan, A.; Ahmad, N.S. A systematic review on recent advances in autonomous mobile robot navigation. Eng. Sci. Technol. Int. J. 2023, 40, 101343. [Google Scholar] [CrossRef]
  36. Chitikena, H.; Sanfilippo, F.; Ma, S. Robotics in Search and Rescue (SAR) Operations: An Ethical and Design Perspective Framework for Response Phase. Appl. Sci. 2023, 13, 1800. [Google Scholar] [CrossRef]
  37. Rayhan, A. Artificial Intelligence in Robotics: From Automation to Autonomous Systems. 2023. Available online: https://www.researchgate.net/profile/Abu-Rayhan-11/publication/372589771_ARTIFICIAL_INTELLIGENCE_IN_ROBOTICS_FROM_AUTOMATION_TO_AUTONOMOUS_SYSTEMS/links/64bf8f01b9ed6874a543348b/ARTIFICIAL-INTELLIGENCE-IN-ROBOTICS-FROM-AUTOMATION-TO-AUTONOMOUS-SYSTEMS.pdf (accessed on 27 February 2024).
  38. Torrey, L.; Shavlik, J. Transfer Learning. In Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques; ACM: New York, NY, USA, 2010; pp. 242–264. [Google Scholar]
  39. Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A Comprehensive Survey on Transfer Learning. Proc. IEEE 2019, 109, 43–76. [Google Scholar] [CrossRef]
  40. Weiss, K.; Khoshgoftaar, T.M.; Wang, D.D. A survey of transfer learning. J. Big Data 2016, 3, 1345–1459. [Google Scholar] [CrossRef]
  41. Day, O.; Khoshgoftaar, T.M. A survey on heterogeneous transfer learning. J. Big Data 2017, 4, 29. [Google Scholar] [CrossRef]
  42. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Li, F. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
  43. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  44. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  45. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  46. Chen, C.; Guo, Y.; Tian, F.; Liu, S.; Yang, W.; Wang, Z.; Wu, J.; Su, H.; Pfister, H.; Liu, S. A Unified Interactive Model Evaluation for Classification, Object Detection, and Instance Segmentation in Computer Vision. IEEE Trans. Vis. Comput. Graph. 2024, 30, 76–86. [Google Scholar] [CrossRef]
  47. Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
  48. Ma, Y.; Chen, S.; Ermon, S.; Lobell, D.B. Transfer learning in environmental remote sensing. Remote Sens. Environ. 2024, 301, 113924. [Google Scholar] [CrossRef]
  49. Marshall, M.; Belgiu, M.; Boschetti, M.; Pepe, M.; Stein, A.; Nelson, A. Field-level crop yield estimation with PRISMA and Sentinel-2. ISPRS J. Photogramm. Remote Sens. 2022, 187, 191–210. [Google Scholar] [CrossRef]
  50. Liu, Y.; Zhao, W.; Chen, S.; Ye, T. Mapping Crop Rotation by Using Deeply Synergistic Optical and SAR Time Series. Remote Sens. 2021, 13, 4160. [Google Scholar] [CrossRef]
  51. Shan, J.; Toth, C.K. Topographic Laser Ranging and Scanning: Principles and Processing; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  52. Di Tommaso, S.; Wang, S.; Vajipey, V.; Gorelick, N.; Strey, R.; Lobell, D.B. Annual Field-Scale Maps of Tall and Short Crops at the Global Scale Using GEDI and Sentinel-2. Remote Sens. 2023, 15, 4123. [Google Scholar] [CrossRef]
  53. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Hartling, S.; Esposito, F.; Fritschi, F.B. Soybean yield prediction from UAV using multimodal data fusion and deep learning. Remote Sens. Environ. 2020, 237, 111599. [Google Scholar] [CrossRef]
  54. Sun, C.; Zhou, J.; Ma, Y.; Xu, Y.; Pan, B.; Zhang, Z. A review of remote sensing for potato traits characterization in precision agriculture. Front. Plant Sci. 2022, 13, 871859. [Google Scholar] [CrossRef] [PubMed]
  55. Olsen, A.; Konovalov, D.A.; Philippa, B.; Ridd, P.; Wood, J.C.; Johns, J.; Banks, W.; Girgenti, B.; Kenny, O.; Whinney, J.; et al. DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning. Sci. Rep. 2019, 9, 2058. [Google Scholar] [CrossRef] [PubMed]
  56. Haug, S.; Ostermann, J. A Crop/Weed Field Image Dataset for the Evaluation of Computer Vision Based Precision Agriculture Tasks. In Proceedings of the Computational Science and Its Applications—ICCSA 2020, Cagliari, Italy, 1–4 July 2020; Springer: Cham, Switzerland, 2014; pp. 105–116. [Google Scholar]
  57. Xia, G.-S.; Hu, J.; Hu, F.; Shi, B.; Bai, X.; Zhong, Y.; Zhang, L.; Lu, X. AID: A Benchmark Data Set for Performance Evaluation of Aerial Scene Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3965–3981. [Google Scholar] [CrossRef]
  58. Chiu, M.T.; Xu, X.; Wei, Y.; Huang, Z.; Schwing, A.G.; Brunner, R.; Khachatrian, H.; Karapetyan, H.; Dozier, I.; Rose, G.; et al. Agriculture-Vision: A Large Aerial Image Database for Agricultural Pattern Analysis. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 2825–2835. [Google Scholar]
  59. Bastani, F.; Wolters, P.; Gupta, R.; Ferdinando, J.; Kembhavi, A. Satlas: A largescale, multi-task dataset for remote sensing image understanding. arXiv 2022, arXiv:2211.15660. [Google Scholar]
  60. Hinton, G.E.; Osindero, S.; Teh, Y.-W. A Fast Learning Algorithm for Deep Belief Nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef] [PubMed]
  61. Sarker, I.H. Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions. SN Comput. Sci. 2021, 2, 420. [Google Scholar] [CrossRef] [PubMed]
  62. Bai, Q.; Li, S.; Yang, J.; Song, Q.; Li, Z.; Zhang, X. Object Detection Recognition and Robot Grasping Based on Machine Learning: A Survey. IEEE Access 2020, 8, 181855–181879. [Google Scholar] [CrossRef]
  63. Lee, W.J.; Wu, H.; Yun, H.; Kim, H.; Jun, M.B.; Sutherland, J.W. Predictive Maintenance of Machine Tool Systems Using Artificial Intelligence Techniques Applied to Machine Condition Data. Procedia CIRP 2019, 80, 506–511. [Google Scholar] [CrossRef]
  64. Linaza, M.T.; Posada, J.; Bund, J.; Eisert, P.; Quartulli, M.; Döllner, J.; Pagani, A.; Olaizola, I.G.; Barriguinha, A.; Moysiadis, T.; et al. Data-Driven Artificial Intelligence Applications for Sustainable Precision Agriculture. Agronomy 2021, 11, 1227. [Google Scholar] [CrossRef]
  65. Panesar, S.; Cagle, Y.; Chander, D.; Morey, J.; Fernandez-Miranda, J.; Kliot, M. Artificial Intelligence and the Future of Surgical Robotics. Ann. Surg. 2019, 270, 223–226. [Google Scholar] [CrossRef]
  66. He, S.; Leanse, L.G.; Feng, Y. Artificial intelligence and machine learning assisted drug delivery for effective treatment of infectious diseases. Adv. Drug Deliv. Rev. 2021, 178, 113922. [Google Scholar] [CrossRef] [PubMed]
  67. Elallid, B.B.; Benamar, N.; Hafid, A.S.; Rachidi, T.; Mrani, N. A Comprehensive Survey on the Application of Deep and Reinforcement Learning Approaches in Autonomous Driving. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 7366–7390. [Google Scholar]
  68. Attri, I.; Awasthi, L.K.; Sharma, T.P.; Rathee, P. A review of deep learning techniques used in agriculture. Ecol. Inform. 2023, 77, 102217. [Google Scholar] [CrossRef]
  69. Singh, R.; Singh, G.S. Traditional agriculture: A climate-smart approach for sustainable food production. Energy Ecol. Environ. 2017, 2, 296–316. [Google Scholar] [CrossRef]
  70. Kashyap, P. Machine Learning for Decision-Makers: Cognitive Computing Fundamentals for Better Decision Making; Apress: Bangalore, India, 2017; pp. 227–228. [Google Scholar]
  71. Chen, S.W.; Shivakumar, S.S.; Dcunha, S.; Das, J.; Okon, E.; Qu, C.; Taylor, C.J.; Kumar, V. Counting Apples and Oranges with Deep Learning: A Data-Driven Approach. IEEE Robot. Autom. Lett. 2017, 2, 781–788. [Google Scholar] [CrossRef]
  72. Rahnemoonfar, M.; Sheppard, C. Deep Count: Fruit Counting Based on Deep Simulated Learning. Sensors 2017, 17, 905. [Google Scholar] [CrossRef] [PubMed]
  73. Yashwanth, M.; Chandra, M.L.; Pallavi, K.; Showkat, D.; Kumar, P.S. Agriculture Automation Using Deep Learning Methods Implemented Using Keras. In Proceedings of the 2020 IEEE International Conference for Innovation in Technology (INOCON), Bangalore, India, 6–8 November 2020. [Google Scholar]
  74. Mishra, A.M.; Gautam, V. Weed Species Identification in Different Crops Using Precision Weed Management: A Review. Proc. CEUR Workshop 2021, 2786, 180–194. [Google Scholar]
  75. Cohen, A.J.; Brauer, M.; Burnett, R.; Anderson, H.R.; Frostad, J.; Estep, K.; Balakrishnan, K.; Brunekreef, B.; Dandona, L.; Dandona, R.; et al. Estimates and 25-year trends of the global burden of disease attributable to ambient air pollution: An analysis of data from the Global Burden of Diseases Study 2015. Lancet 2017, 389, 1907–1918. [Google Scholar] [CrossRef]
  76. Myhre, G.; Shindell, D. Anthropogenic and Natural Radiative Forcing. In Climate Change 2013: The Physical Science Basis; Stocker, T.F., Qin, D., Plattner, G.-K., Tignor, M.M.B., Allen, S.K., Boschung, J., Nauels, A., Xia, Y., Bex, V., Midgley, P.M., Eds.; Cambridge University Press: Cambridge, UK; New York, NY, USA, 2013; pp. 659–740. [Google Scholar]
  77. Fuhrer, J.; Martin, M.V.; Mills, G.; Heald, C.L.; Harmens, H.; Hayes, F.; Sharps, K.; Bender, J.; Ashmore, M.R. Current and future ozone risks to global terrestrial biodiversity and ecosystem processes. Ecol. Evol. 2016, 6, 8785–8799. [Google Scholar] [CrossRef]
  78. Shindell, D.; Faluvegi, G.; Nagamoto, E.; Parsons, L.; Zhang, Y. Reductions in premature deaths from heat and particulate matter air pollution in South Asia, China, and the United States under decarbonization. Proc. Natl. Acad. Sci. USA 2024, 121, e2312832120. [Google Scholar] [CrossRef]
  79. Castelli, M.; Clemente, F.M.; Popovič, A.; Silva, S.; Vanneschi, L. A Machine Learning Approach to Predict Air Quality in California. Complexity 2020, 2020, 8049504. [Google Scholar] [CrossRef]
  80. Abimannan, S.; Chang, Y.; Lin, C.Y. Air Pollution Forecasting Using LSTM-Multivariate Regression Model. In International Conference on Internet of Vehicles; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  81. Yan, L.; Zhou, M.; Wu, Y. Long Short-Term Memory Model for Analysis and Forecast of PM2.5. In International Conference on Cloud Computing and Security; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  82. Dua, R.D.; Madaan, D.M.; Mukherjee, P.M.; Lall, B.L. Real Time Attention Based Bidirectional Long Short-Term Memory Networks for Air Pollution Forecasting. In Proceedings of the 2019 IEEE Fifth International Conference on Big Data Computing Service and Applications (BigDataService), Newark, CA, USA, 4–9 April 2019; pp. 151–158. [Google Scholar]
  83. Tsai, Y.-T.; Zeng, Y.-R.; Chang, Y.-S. Air pollution forecasting using RNN with LSTM. In Proceedings of the 2018 IEEE 16th International Conference on Dependable, Autonomic and Secure Computing, 16th International Conference on Pervasive Intelligence and Computing, 4th International Conference on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech), Athens, Greece, 12–15 August 2018; pp. 1074–1079. [Google Scholar] [CrossRef]
  84. Hassan, S.; Haq, I.U. Pervasive Pollution Problems Caused by Plastics and its Degradation. Int. J. Online Biomed. Eng. 2019, 15, 29–39. [Google Scholar] [CrossRef]
  85. Kim, S.; Lee, J.M.; Lee, J.; Seo, J. Deep dust: Predicting Concentrations of Fine Dust in Seoul Using LSTM. arXiv 2019, arXiv:1901.10106. [Google Scholar]
  86. Zhang, L.; Yang, Y.; Li, Y.; Qian, Z.; Xiao, W.; Wang, X.; Rolling, C.A.; Liu, E.; Xiao, J.; Zeng, W.; et al. Short-term and long-term effects of PM2.5 on acute nasopharyngitis in 10 communities of Guangdong, China. Sci. Total. Environ. 2019, 688, 136–142. [Google Scholar] [CrossRef]
  87. De Stefano, C.; Ferrigno, L.; Fontanella, F.; Gerevini, L. Evolutionary Computation to Implement an IoT-Based System for Water Pollution Detection. SN Comput. Sci. 2022, 3, 1–15. [Google Scholar] [CrossRef]
  88. AlZubi, A.A. IoT-Based Automated Water Pollution Treatment Using Machine Learning Classifiers. Environ. Technol. 2022, 45, 2299–2307. [Google Scholar] [CrossRef] [PubMed]
  89. Kang, S.-H.; Jeong, I.-S.; Lim, H.-S. A deep learning-based biomonitoring system for detecting water pollution using Caenorhabditis elegans swimming behaviors. Ecol. Inform. 2024, 80, 102482. [Google Scholar] [CrossRef]
  90. Bai, Y.-Y.; Huang, Y.-H.; Chen, S.-Y.; Zhang, J.; Li, B.-Q.; Wang, F.-Y. Cloud-edge Intelligence: Status Quo and Future Prospective of Edge Computing Approaches and Applications in Power System Operation and Control. Acta Autom. Sinica 2020, 46, 397. [Google Scholar]
  91. Belej, O. The Cryptography of Elliptical Curves Application for Formation of the Electronic Digital Signature. In Advances in Computer Science for Engineering and Education II. ICCSEEA 2019; Hu, Z., Petoukhov, S., Dychka, I., He, M., Eds.; Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2020; Volume 938. [Google Scholar] [CrossRef]
  92. Hadeed, W.; Abdullah, D.B. Load Balancing Mechanism for Edge-Cloud-Based Priorities Containers. Int. J. Wirel. Microw. Technol. 2022, 12, 1–9. [Google Scholar] [CrossRef]
  93. Stojmenovic, I. Fog computing: A cloud to the ground support for smart things and machine-to-machine networks. In Proceedings of the 2014 Australasian Telecommunication Networks and Applications Conference (ATNAC), Southbank, VIC, Australia, 26–28 November 2014; pp. 117–122. [Google Scholar]
  94. Yangui, S.; Ravindran, P.; Bibani, O.; Glitho, R.H.; Ben Hadj-Alouane, N.; Morrow, M.J.; Polakos, P.A. A platform as-a-service for hybrid cloud/fog environments. In Proceedings of the 2016 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN), Rome, Italy, 13–15 June 2016; pp. 1–7. [Google Scholar]
  95. Zhu, X.; Chan, D.S.; Prabhu, M.S. Improving video performance with edge servers in the fog computing architecture. Intel Technol. J. 2015, 19, 202–224. [Google Scholar]
  96. Alakberov, R.G. Clustering Method of Mobile Cloud Computing According to Technical Characteristics of Cloudlets. Int. J. Comput. Netw. Inf. Secur. 2022, 14, 75–87. [Google Scholar] [CrossRef]
  97. Cao, K.; Liu, Y.; Meng, G.; Sun, Q. An Overview on Edge Computing Research. IEEE Access 2020, 8, 85714–85728. [Google Scholar] [CrossRef]
  98. Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge Computing: Vision and Challenges. IEEE Internet Things J. 2016, 3, 637–646. [Google Scholar] [CrossRef]
  99. Alsamhi, S.H.; Shvetsov, A.V.; Kumar, S.; Shvetsova, S.V.; Alhartomi, M.A.; Hawbani, A.; Rajput, N.S.; Srivastava, S.; Saif, A.; Nyangaresi, V.O. UAV Computing-Assisted Search and Rescue Mission Framework for Disaster and Harsh Environment Mitigation. Drones 2022, 6, 154. [Google Scholar] [CrossRef]
  100. Popescu, D.; Stoican, F.; Stamatescu, G.; Ichim, L.; Dragana, C. Advanced UAV–WSN System for Intelligent Monitoring in Precision Agriculture. Sensors 2020, 20, 817. [Google Scholar] [CrossRef] [PubMed]
  101. Park, J.; Choi, J.-H.; Lee, Y.-J.; Min, O. A layered features analysis in smart farm environments. In Proceedings of the International Conference on Big Data and Internet of Thing (BDIOT2017), London, UK, 20–22 December 2017; ACM: London, UK, 2017; pp. 169–173. [Google Scholar]
  102. Idrees, Z.; Zou, Z.; Zheng, L. Edge Computing Based IoT Architecture for Low-Cost Air Pollution Monitoring Systems: A Comprehensive System Analysis, Design Considerations & Development. Sensors 2018, 18, 3021. [Google Scholar] [CrossRef] [PubMed]
  103. Hojaiji, H.; Goldstein, O.; King, C.E.; Sarrafzadeh, M.; Jerrett, M. Design and calibration of a wearable and wireless research grade air quality monitoring system for real-time data collection. In Proceedings of the 2017 IEEE Global Humanitarian Technology Conference (GHTC), San Jose, CA, USA, 19–22 October 2017. [Google Scholar]
  104. Jan, F.; Min-Allah, N.; Düştegör, D. IoT Based Smart Water Quality Monitoring: Recent Techniques, Trends and Challenges for Domestic Applications. Water 2021, 13, 1729. [Google Scholar] [CrossRef]
  105. Abbas, N.; Zhang, Y.; Taherkordi, A.; Skeie, T. Mobile edge computing: A survey. IEEE Internet Things J. 2017, 5, 450–465. [Google Scholar] [CrossRef]
  106. Li, Y.; Xie, J.; Jiang, R.; Yan, D. Application of edge computing and GIS in ecological water requirement prediction and optimal allocation of water resources in irrigation area. PLoS ONE 2021, 16, e0254547. [Google Scholar] [CrossRef] [PubMed]
  107. Jukan, A.; Carpio, F.; Masip, X.; Ferrer, A.J.; Kemper, N.; Stetina, B.U. Fog-to-cloud computing for farming: Low-cost technologies, data exchange, and animal welfare. Computer 2019, 52, 41–51. [Google Scholar] [CrossRef]
  108. Romli, M.A.; Daud, S.; Raof, R.A.A.; Ahmad, Z.A.; Mahrom, N. Aquaponic growbed water level control using fog architecture. J. Phys. Conf. Ser. 2018, 1018, 012014. [Google Scholar] [CrossRef]
  109. Avgeris, M.; Spatharakis, D.; Dechouniotis, D.; Kalatzis, N.; Roussaki, I.; Papavassiliou, S. Where there is fire there is Smoke: A Scalable edge computing framework for early fire detection. Sensors 2019, 19, 639. [Google Scholar] [CrossRef] [PubMed]
  110. Singh, S.K.; Carpio, F.; Jukan, A. Improving animal-human cohabitation with machine learning in fiber-wireless networks. J. Sens. Actuator Netw. 2018, 7, 35. [Google Scholar] [CrossRef]
  111. Ferrández-Pastor, F.J.; García-Chamizo, J.M.; Nieto-Hidalgo, M.; Mora-Martínez, J. Precision Agriculture Design Method Using a Distributed Computing Architecture on Internet of Things Context. Sensors 2018, 18, 1731. [Google Scholar] [CrossRef]
  112. Yadav, G.; Sundaravadivel, P.; Kesavan, L. Affect-Learn: An IoT-based Affective Learning Framework for Special Education. In Proceedings of the 2020 IEEE 6th World Forum on Internet of Things (WF-IoT), New Orleans, LA, USA, 2–16 June 2020; pp. 1–5. [Google Scholar]
  113. Sundaravadivel, P.; Fitzgerald, A.; Indic, P. i-SAD: An Edge-Intelligent IoT-Based Wearable for Substance Abuse Detection. In Proceedings of the 2019 IEEE International Symposium on Smart Electronic Systems (iSES) (Formerly iNiS), Rourkela, India, 16–18 December 2019; pp. 117–122. [Google Scholar]
  114. Sallah, A.; Sundaravadivel, P. Tot-Mon: A Real-Time Internet of Things Based Affective Framework for Monitoring Infants. In Proceedings of the 2020 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), Limassol, Cyprus, 6–8 July 2020; pp. 600–601. [Google Scholar]
  115. Polineni, S.; Shastri, O.; Bagchi, A.; Gnanakumar, G.; Rasamsetti, S.; Sundaravadivel, P. MOSQUITO EDGE: An Edge-Intelligent Real-Time Mosquito Threat Prediction Using an IoT-Enabled Hardware System. Sensors 2022, 22, 695. [Google Scholar] [CrossRef] [PubMed]
  116. Rajak, P.; Ganguly, A.; Adhikary, S.; Bhattacharya, S. Internet of Things and smart sensors in agriculture: Scopes and challenges. J. Agric. Food Res. 2023, 14, 100776. [Google Scholar] [CrossRef]
  117. Bernasconi, S.; Angelucci, A.; Aliverti, A. A Scoping Review on Wearable Devices for Environmental Monitoring and Their Application for Health and Wellness. Sensors 2022, 22, 5994. [Google Scholar] [CrossRef]
  118. MarketWatch. Global Sensor Technology Market Size 2021|Leading Players—ABB Ltd., Honeywell International Inc., Texas Instruments Incorporated, Siemens AG, STMicroelectronics N.V. Available online: https://www.alliedmarketresearch.com/sensor-market (accessed on 19 March 2024).
  119. Anand, A.; Bharath, M.Y.; Sundaravadivel, P.; Roselyn, J.P.; Uthra, R.A. On-device Intelligence for AI-enabled Bio-inspired Autonomous Underwater Vehicles (AUVs). IEEE Access 2024. [Google Scholar] [CrossRef]
  120. Vignesh Babu, J.V.; Roselyn, P.; Sundaravadivel, P. Multi-objective genetic algorithm-based energy management system considering optimal utilization of grid and degradation of battery storage in microgrid. Energy Rep. 2023, 9, 5992–6005. [Google Scholar] [CrossRef]
Figure 1. Integration of automation, robotics, and edge computing in environmental sensing.
Figure 2. Transfer learning workflow [38].
Figure 3. Deep learning workflow [60].
Figure 4. Deep learning applications in environment sensing.
Table 1. Description of various neural network architectures.

| Architecture | Description | Applications |
|---|---|---|
| MobileNetV2 | Lightweight architecture for mobile and embedded vision applications; uses depthwise separable convolutions and inverted residuals for efficiency. | Mobile and embedded vision apps |
| Inception-ResNet | Hybrid architecture combining elements from Google’s Inception and ResNet; incorporates inception modules for parallel processing of filter sizes. | Image classification |
| DenseNet201 | Builds upon the DenseNet architecture with dense connections between layers; facilitates feature reuse and improved gradient flow. | Image classification, object detection, image segmentation |
| VGG-16 | Proposed by the Visual Geometry Group (VGG) at the University of Oxford; consists of 16 layers, including 13 convolutional layers and 3 fully connected layers. | Image classification |
| EfficientNetV2S | Part of the EfficientNet family, optimized for resource-constrained environments; employs compound scaling to balance network depth, width, and resolution. | Resource-constrained environments |
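To make Table 1’s efficiency claim for MobileNetV2 concrete, the following sketch compares the parameter count of a standard convolution with that of a depthwise separable convolution, the factorization the MobileNet family relies on. The kernel size and channel counts are illustrative assumptions, not values taken from any cited study.

```python
# Parameter-count comparison: standard vs. depthwise separable convolution.
# Standard convolution: k * k * c_in * c_out weights.
# Depthwise separable: k * k * c_in (depthwise) + c_in * c_out (1x1 pointwise).

def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 32, 64  # illustrative layer shape
std = standard_conv_params(k, c_in, c_out)        # 18432 weights
sep = depthwise_separable_params(k, c_in, c_out)  # 2336 weights
print(std, sep, round(std / sep, 1))  # roughly 7.9x fewer parameters
```

For this assumed layer shape, the separable form needs roughly an eighth of the weights, which is why such architectures suit embedded environmental-sensing hardware.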
Table 3. Summary of various studies addressing air and water pollution monitoring and mitigation strategies.

| Ref. | Location | Model/Approach | Performance Measure | Key Findings |
|---|---|---|---|---|
| [79] | California | Support Vector Regression with radial basis function | Accuracy | Precise prediction of pollutants |
| [80] | N/A | LSTM, LSTM-MVR | Forecasting | LSTM-MVR superior in forecasting |
| [81] | N/A | Random Forest, Encoder–Decoder, LSTM | Precision | LSTM showed enhanced precision for PM2.5 correlation |
| [82] | Delhi | BiLSTM-A | Forecasting | Promising foresight for NO2, PM10, and PM2.5 |
| [83] | Taiwan | RNN with LSTM | RMSE | Comparable RMSE values for PM2.5 prediction |
| [84] | South Korea | RNN with GRU | Forecasting | Successful PM10 and PM2.5 forecasting |
| [85] | Seoul | N/A | MSE | Fine dust forecasting achieved with MSE below 10.7% |
| [86] | N/A | Various factors considered | N/A | Pollution mitigation in water bodies and air |
| [87] | N/A | Evolutionary IoT-based approach | PCA, SVM, KNN | Detection of water pollution with sensors and algorithms |
| [88] | N/A | Automated IoT-driven water quality analysis | N/A | Introduction of an automated water quality analysis system |
| [89] | N/A | Deep learning for biomonitoring | N/A | Detection of water pollution through Caenorhabditis elegans swimming behaviors |
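Several of the studies summarized in Table 3 report RMSE on sliding-window pollutant forecasts. The sketch below shows the two generic steps such pipelines share: framing a PM2.5 time series into (window, next value) training pairs and scoring predictions with RMSE. It is a stdlib-only illustration with made-up numbers, not a reimplementation of any cited model.

```python
import math

def make_windows(series, width):
    """Split a time series into (input window, next value) training pairs."""
    return [(series[i:i + width], series[i + width])
            for i in range(len(series) - width)]

def rmse(predicted, actual):
    """Root-mean-square error, the performance measure reported in Table 3."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

pm25 = [12.0, 15.0, 14.0, 18.0, 21.0, 19.0, 22.0]  # hypothetical hourly PM2.5
pairs = make_windows(pm25, width=3)
# Naive persistence baseline: predict the last value of each window.
preds = [window[-1] for window, _ in pairs]
truth = [target for _, target in pairs]
print(len(pairs), round(rmse(preds, truth), 3))
```

An LSTM or GRU model would replace the persistence baseline here; the windowing and the RMSE scoring stay the same.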
Table 4. Utilization of edge computing techniques in agricultural and environmental domains.

| Ref. | Application Theme | Domain | Edge Computing Techniques |
|---|---|---|---|
| [107] | Animal welfare | Livestock management | Latency reduction |
| [108] | Aquafarming | Aquaponics | Offloading computation |
| [109] | Forestry | Environmental monitoring and fire detection | Offloading computation |
| [110] | Safety | Wildlife protection | Reduction of latency, computational load, and data traffic |
| [111] | Smart farming | Environmental sensing | Flexible layered IoT-assisted PA architecture |
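The “offloading computation” entries in Table 4 ultimately reduce to a latency comparison: run a task on the constrained edge node, or pay the transmission cost and run it on a faster remote machine. A minimal sketch of that decision rule follows, with illustrative (assumed) device parameters rather than measurements from any cited system.

```python
def local_time(cycles, edge_hz):
    """Execution time if the task stays on the edge device."""
    return cycles / edge_hz

def offload_time(cycles, data_bits, bandwidth_bps, cloud_hz):
    """Transmission delay plus execution time on the remote server."""
    return data_bits / bandwidth_bps + cycles / cloud_hz

def should_offload(cycles, data_bits, bandwidth_bps, edge_hz, cloud_hz):
    """Offload only when the end-to-end remote latency beats local execution."""
    return offload_time(cycles, data_bits, bandwidth_bps, cloud_hz) < \
           local_time(cycles, edge_hz)

# Hypothetical numbers: a 2-gigacycle task, 1 MB payload, 10 Mbit/s uplink,
# 100 MHz microcontroller vs. 3 GHz server.
print(should_offload(cycles=2e9, data_bits=8e6,
                     bandwidth_bps=10e6, edge_hz=100e6, cloud_hz=3e9))
```

With these assumed numbers the remote path wins (about 1.5 s versus 20 s locally), but a small task with a large payload flips the decision, which is why practical frameworks evaluate it per task.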
Table 5. Advantages and disadvantages of technologies in environmental sensing.

Transfer Learning
Advantages:
- Efficient data collection: enhances machine learning (ML) data collection by leveraging knowledge from previous tasks, reducing the need for extensive target-domain data.
- Positive transfer: avoids negative transfer, improving overall performance.
- Categorization: homogeneous and heterogeneous transfer learning strategies cater to situations with identical or different attributes.
Limitations:
- Negative transfer risk: the approach may lead to negative transfer, diminishing overall performance.
- Complexity: heterogeneous transfer learning is more intricate, requiring feature-space adaptation.
- Classification challenges: inductive, transductive, and unsupervised learning face variations in source and target datasets.

Deep Learning
Advantages:
- Broad application domains: applicable across various domains, including object detection, predictive maintenance, agriculture, robotic surgery, and autonomous driving.
- Automation capability: contributes to transformative advancements in agriculture and other industries.
- Precision in agriculture: facilitates precise fruit counting, optimizing production and harvest processes.
Limitations:
- Robustness challenges: needs to address challenges related to robustness, interpretability, and data modality integration.
- Data efficiency: achieving high accuracy may require extensive datasets, impacting efficiency.
- Privacy concerns: applications may raise privacy concerns, especially in smart agriculture and autonomous systems.

Edge Computing
Advantages:
- Real-time analysis: enables real-time intelligent analysis by processing data in proximity to physical surroundings, reducing latency and bandwidth utilization.
- Energy efficiency: minimizes energy consumption in data centers, boosting energy efficiency.
- Localized services: focuses on local context, effective for small-scale, real-time intelligent analysis and localized services.
Limitations:
- Privacy and security: poses challenges related to privacy and security, especially with amplified data transmission from edge devices.
- Energy consumption: growing numbers of smart devices can lead to substantial energy consumption in data centers, impacting efficiency.
- Load management: may face challenges in load management, real-time performance, and bandwidth during extensive data processing.
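The transfer-learning rows of the table hinge on one mechanism: reuse a representation learned on a source task and train only a small head on the target task. The toy sketch below freezes a stand-in “pretrained” feature map and fits a single scalar weight on a handful of target samples by least squares. It is a stdlib-only illustration of the idea under assumed data, not a specific method from the review.

```python
def pretrained_feature(x):
    """Stand-in for a frozen representation learned on a source task."""
    return x * x

def fit_head(xs, ys):
    """Least-squares fit of one scalar weight on top of the frozen feature."""
    feats = [pretrained_feature(x) for x in xs]
    return sum(f * y for f, y in zip(feats, ys)) / sum(f * f for f in feats)

# Small target dataset: y = 3 * x^2, noise-free for illustration.
xs = [1.0, 2.0, 3.0]
ys = [3.0, 12.0, 27.0]
w = fit_head(xs, ys)
print(w)  # 3.0 — only one parameter was trained; the feature map was reused
```

Because only the head is trained, three target samples suffice here; that is the data-efficiency advantage the table describes, and negative transfer corresponds to the frozen feature being a poor match for the target task.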

Borah, S.S.; Khanal, A.; Sundaravadivel, P. Emerging Technologies for Automation in Environmental Sensing: Review. Appl. Sci. 2024, 14, 3531. https://0-doi-org.brum.beds.ac.uk/10.3390/app14083531
