Journal Browser


Human Activity Recognition Using Sensors and Machine Learning

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Wearables".

Deadline for manuscript submissions: closed (31 January 2024) | Viewed by 25610

Special Issue Editors


E-Mail Website
Guest Editor
Department of Computer Science, Aalborg University, DK-9220 Aalborg, Denmark
Interests: data mining; deep learning; sensor-based human activity recognition
Special Issues, Collections and Topics in MDPI journals

E-Mail Website
Guest Editor
School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia
Interests: Internet of Things; pervasive and ubiquitous computing; data mining and machine learning applications with a focus on Internet of Things analytics; recommender systems; human activity recognition; brain–computer interface
Special Issues, Collections and Topics in MDPI journals

E-Mail Website
Guest Editor
School of Minerals & Energy Resources Engineering, University of New South Wales, Sydney, NSW 2052, Australia
Interests: data mining; machine learning; IoT data analytics

E-Mail Website
Guest Editor
Department of Computer Science, Aalborg University, 9220 Aalborg, Denmark
Interests: deep learning; mobile computing; pervasive computing; Internet of Things; brain–computer interface; health informatics
Special Issues, Collections and Topics in MDPI journals

Special Issue Information

Dear Colleagues,

The recent advances in hardware and acquisition devices have accelerated the deployment of the Internet of Things, enabling myriad applications of human activity recognition. Human activity recognition is a time series classification task that involves predicting user behavior from sensor data. The task is challenging in real-world applications due to inherent issues and practical problems that vary across scenarios. The foremost inherent issue is how to filter noisy sensor data and extract high-quality features for better recognition performance. Practical problems include lightweight algorithms for wearable devices, modelling human behavior with less annotated data, learning to recognize complex activities, and continually learning patterns from streaming data.

Recently, we have witnessed compelling evidence from successful investigations of machine learning for activity recognition. While machine learning has proven effective and achieves state-of-the-art performance, the growing number of related studies indicates a considerable demand, in both academic and industrial communities, for more advanced machine learning algorithms that tackle these challenges and further improve recognition performance. It is therefore vital and timely to offer an opportunity to report progress in human activity recognition using sensors and machine learning. The research foci of this Special Issue include theoretical study, model design, development, and advanced applications of machine learning algorithms on sensor-based activity data.

Dr. Kaixuan Chen
Dr. Lina Yao
Dr. Chaoran Huang
Dr. Dalin Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • supervised learning
  • semi-supervised learning
  • unsupervised learning
  • active learning
  • transfer learning
  • online learning
  • imbalanced learning
  • representation learning
  • ensemble methods
  • automated machine learning (AutoML)
  • data segmentation
  • explainable AI

Published Papers (13 papers)


Research

29 pages, 7125 KiB  
Article
A Wearable Inertial Sensor Approach for Locomotion and Localization Recognition on Physical Activity
by Danyal Khan, Naif Al Mudawi, Maha Abdelhaq, Abdulwahab Alazeb, Saud S. Alotaibi, Asaad Algarni and Ahmad Jalal
Sensors 2024, 24(3), 735; https://doi.org/10.3390/s24030735 - 23 Jan 2024
Viewed by 1163
Abstract
Advancements in sensing technology have expanded the capabilities of both wearable devices and smartphones, which are now commonly equipped with inertial sensors such as accelerometers and gyroscopes. Initially, these sensors were used to advance device features, but now they can be used for a variety of applications. Human activity recognition (HAR) is an interesting research area with many applications, such as health monitoring, sports, fitness, and medical purposes. In this research, we designed an advanced system that recognizes different human locomotion and localization activities. The data were collected from raw sensors that contain noise. In the first step, noise was removed with a Chebyshev type 1 filter, and the signal was then segmented using Hamming windows. After that, features were extracted for the different sensors. To select the best features for the system, the recursive feature elimination method was used. We then used the SMOTE data augmentation technique to address the imbalanced nature of the Extrasensory dataset. Finally, the augmented and balanced data were fed to a long short-term memory (LSTM) deep learning classifier for classification. The datasets used in this research were Real-World HAR, Real-Life HAR, and Extrasensory. The presented system achieved accuracies of 89% on Real-Life HAR, 85% on Real-World HAR, and 95% on the Extrasensory dataset, outperforming the available state-of-the-art methods. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Sensors and Machine Learning)
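
The denoising-and-segmentation front end this abstract describes can be sketched in a few lines. The filter order, cutoff, sampling rate, and window parameters below are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np
from scipy.signal import cheby1, filtfilt

def denoise_and_segment(signal, fs=50, cutoff=10.0, win_len=128, overlap=0.5):
    """Low-pass the raw signal with a Chebyshev type 1 filter, then cut it
    into Hamming-weighted sliding windows (sketch; parameters assumed)."""
    # Zero-phase filtering avoids shifting the activity signal in time.
    b, a = cheby1(N=4, rp=0.5, Wn=cutoff / (fs / 2), btype="low")
    clean = filtfilt(b, a, signal)
    step = int(win_len * (1 - overlap))
    hamming = np.hamming(win_len)
    windows = [clean[i:i + win_len] * hamming
               for i in range(0, len(clean) - win_len + 1, step)]
    return np.array(windows)

# A synthetic accelerometer-like trace: slow oscillation plus noise.
acc = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * np.random.randn(1000)
segments = denoise_and_segment(acc)
print(segments.shape)  # (14, 128)
```

Each row of `segments` would then be passed to feature extraction and, after feature selection and SMOTE balancing, to the LSTM classifier.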

27 pages, 8620 KiB  
Article
Human Walking Direction Detection Using Wireless Signals, Machine and Deep Learning Algorithms
by Hanan Awad Hassan Ali and Shinnazar Seytnazarov
Sensors 2023, 23(24), 9726; https://doi.org/10.3390/s23249726 - 9 Dec 2023
Cited by 1 | Viewed by 1308
Abstract
The use of wireless signals for device-free activity recognition and precise indoor positioning has gained significant popularity recently. By taking advantage of the characteristics of the received signals, it is possible to establish a mapping between these signals and human activities. Existing approaches for detecting human walking direction have encountered challenges in adapting to changes in the surrounding environment or different people. In this paper, we propose a new approach that uses the channel state information of received wireless signals, a Hampel filter to remove outliers, a discrete wavelet transform to remove noise and extract the important features, and, finally, machine and deep learning algorithms to identify the walking direction for different people and in different environments. Through experimentation, we demonstrate that our approach achieved accuracy rates of 92.9%, 95.1%, and 89% in detecting human walking directions for untrained data collected from the classroom, the meeting room, and both rooms, respectively. Our results highlight the effectiveness of our approach across people of different genders and heights and across environments, and its use of machine and deep learning algorithms enables low-cost deployment and device-free detection of human activities in indoor environments. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Sensors and Machine Learning)
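
The Hampel outlier-removal step mentioned above is straightforward to sketch. The window size and threshold here are assumed values, not necessarily those used in the paper:

```python
import numpy as np

def hampel(x, window=5, n_sigmas=3):
    """Replace a sample with its local median when it deviates from that
    median by more than n_sigmas scaled MADs (illustrative sketch)."""
    y = x.copy()
    k = 1.4826  # scales the MAD to the std of a Gaussian
    for i in range(len(x)):
        lo, hi = max(0, i - window), min(len(x), i + window + 1)
        med = np.median(x[lo:hi])
        mad = k * np.median(np.abs(x[lo:hi] - med))
        if mad > 0 and np.abs(x[i] - med) > n_sigmas * mad:
            y[i] = med
    return y

csi = np.sin(np.linspace(0, 4 * np.pi, 100))  # stand-in for a CSI stream
csi[42] += 10.0                               # inject a single spike
filtered = hampel(csi)
```

After filtering, the injected spike at index 42 is pulled back to the level of its neighborhood, while smooth samples are left untouched; the cleaned stream would then go to the wavelet-transform stage.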

22 pages, 2556 KiB  
Article
More Reliable Neighborhood Contrastive Learning for Novel Class Discovery in Sensor-Based Human Activity Recognition
by Mingcong Zhang, Tao Zhu, Mingxing Nie and Zhenyu Liu
Sensors 2023, 23(23), 9529; https://doi.org/10.3390/s23239529 - 30 Nov 2023
Cited by 2 | Viewed by 848
Abstract
Human Activity Recognition (HAR) systems have made significant progress in recognizing and classifying human activities using sensor data from a variety of sensors. Nevertheless, they have struggled to automatically discover novel activity classes within massive amounts of unlabeled sensor data without external supervision. This restricts their ability to classify new activities of unlabeled sensor data in real-world deployments where fully supervised settings are not applicable. To address this limitation, this paper presents the Novel Class Discovery (NCD) problem, which aims to classify new class activities of unlabeled sensor data by fully utilizing existing activities of labeled data. To address this problem, we propose a new end-to-end framework called More Reliable Neighborhood Contrastive Learning (MRNCL), which is a variant of the Neighborhood Contrastive Learning (NCL) framework commonly used in the visual domain. Compared to NCL, our proposed MRNCL framework is more lightweight and introduces an effective similarity measure that can find more reliable k-nearest neighbors of an unlabeled query sample in the embedding space. These neighbors contribute to contrastive learning to facilitate the model. Extensive experiments on three public sensor datasets demonstrate that the proposed model outperforms existing methods on the NCD task in sensor-based HAR, achieving better clustering performance on new activity class instances. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Sensors and Machine Learning)
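
The neighbor-selection step at the heart of neighborhood contrastive learning — pick the most similar embeddings from a memory bank to serve as positives for an unlabeled query — can be illustrated independently of the paper's model. The cosine measure and the tiny bank below are assumptions for illustration, not the authors' code:

```python
import numpy as np

def reliable_neighbors(query, bank, k=3):
    """Return indices of the k most cosine-similar memory-bank embeddings;
    these would serve as positives in the contrastive loss (sketch)."""
    q = query / np.linalg.norm(query)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    sims = b @ q                    # cosine similarity to every bank entry
    return np.argsort(-sims)[:k]    # indices of the k highest similarities

bank = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [-1.0, 0.0]])
print(reliable_neighbors(np.array([1.0, 0.05]), bank, k=2))  # [0 1]
```

MRNCL's contribution, per the abstract, is making this neighbor choice more reliable; the retrieval skeleton itself looks like the above.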

18 pages, 6318 KiB  
Article
A Wearable Force Myography-Based Armband for Recognition of Upper Limb Gestures
by Mustafa Ur Rehman, Kamran Shah, Izhar Ul Haq, Sajid Iqbal and Mohamed A. Ismail
Sensors 2023, 23(23), 9357; https://doi.org/10.3390/s23239357 - 23 Nov 2023
Cited by 1 | Viewed by 836
Abstract
Force myography (FMG) represents a promising alternative to surface electromyography (EMG) in the context of controlling bio-robotic hands. In this study, we built upon our prior research by introducing a novel wearable armband based on FMG technology, which integrates force-sensitive resistor (FSR) sensors housed in newly designed casings. We evaluated the sensors’ characteristics, including their load–voltage relationship and signal stability during the execution of gestures over time. Two sensor arrangements were evaluated: arrangement A, featuring sensors spaced at 4.5 cm intervals, and arrangement B, with sensors distributed evenly along the forearm. The data collection involved six participants, including three individuals with trans-radial amputations, who performed nine upper limb gestures. The prediction performance was assessed using support vector machines (SVMs) and k-nearest neighbor (KNN) algorithms for both sensor arrangements. The results revealed that the developed sensor exhibited non-linear behavior, and its sensitivity varied with the applied force. Notably, arrangement B outperformed arrangement A in classifying the nine gestures, with an average accuracy of 95.4 ± 2.1% compared to arrangement A’s 91.3 ± 2.3%. The utilization of the arrangement B armband led to a substantial increase in the average prediction accuracy, demonstrating an improvement of up to 4.5%. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Sensors and Machine Learning)

14 pages, 1757 KiB  
Article
TinyFallNet: A Lightweight Pre-Impact Fall Detection Model
by Bummo Koo, Xiaoqun Yu, Seunghee Lee, Sumin Yang, Dongkwon Kim, Shuping Xiong and Youngho Kim
Sensors 2023, 23(20), 8459; https://doi.org/10.3390/s23208459 - 14 Oct 2023
Cited by 2 | Viewed by 1579
Abstract
Falls represent a significant health concern for the elderly. While studies on deep learning-based pre-impact fall detection have been conducted to mitigate fall-related injuries, additional efforts are needed for embedding in microcontroller units (MCUs). In this study, ConvLSTM, the state-of-the-art model, was benchmarked, and we attempted to make it lightweight by leveraging features from the image-classification models VGGNet and ResNet while maintaining performance for wearable airbags. The models were developed and evaluated using data from young subjects in the KFall public dataset based on an inertial measurement unit (IMU), leading to the proposal of TinyFallNet based on ResNet. The proposed model exhibits higher accuracy than the benchmarked ConvLSTM (98.00% vs. 97.37%) while requiring less memory (0.70 MB vs. 1.58 MB). Additionally, data on the elderly from the fall data of the FARSEEING dataset and activities of daily living (ADLs) data of the KFall dataset were analyzed for algorithm validation. This study demonstrated the applicability of image-classification models to pre-impact fall detection using IMUs and showed that additional tuning for lightweighting is possible due to the different data types. This research is expected to contribute to the lightweighting of IMU-based deep learning models and the development of applications based on IMU data. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Sensors and Machine Learning)

14 pages, 3370 KiB  
Article
Conformer-Based Human Activity Recognition Using Inertial Measurement Units
by Sowmiya Seenath and Menaka Dharmaraj
Sensors 2023, 23(17), 7357; https://doi.org/10.3390/s23177357 - 23 Aug 2023
Cited by 3 | Viewed by 1454
Abstract
Human activity recognition (HAR) using inertial measurement units (IMUs) is gaining popularity due to its ease of use, accurate and reliable measurements of motion and orientation, and its suitability for real-time IoT applications such as healthcare monitoring, sports and fitness tracking, video surveillance and security, smart homes and assistive technologies, human–computer interaction, workplace safety, and rehabilitation and physical therapy. IMUs are widely used as they provide precise and consistent measurements of motion and orientation, making them an ideal choice for HAR. This paper proposes a Conformer-based HAR model that employs attention mechanisms to better capture the temporal dynamics of human movement and improve the recognition accuracy. The proposed model consists of convolutional layers, multiple Conformer blocks with self-attention and residual connections, and classification layers. Experimental results show that the proposed model outperforms existing models such as CNN, LSTM, and GRU. The attention mechanisms in the Conformer blocks have residual connections, which can prevent vanishing gradients and improve convergence. The model was evaluated using two publicly available datasets, WISDM and USCHAD, and achieved accuracies of 98.1% and 96%, respectively. These results suggest that Conformer-based models offer a promising approach for HAR using IMUs. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Sensors and Machine Learning)

15 pages, 4937 KiB  
Article
Identification and Classification of Human Body Exercises on Smart Textile Bands by Combining Decision Tree and Convolutional Neural Networks
by Bonhak Koo, Ngoc Tram Nguyen and Jooyong Kim
Sensors 2023, 23(13), 6223; https://doi.org/10.3390/s23136223 - 7 Jul 2023
Cited by 2 | Viewed by 1351
Abstract
In recent years, human activity recognition (HAR) has gained significant interest from researchers in the sports and fitness industries. In this study, the authors have proposed a cascaded method including two classifying stages to classify fitness exercises, utilizing a decision tree as the first stage and a one-dimensional convolutional neural network as the second stage. The data acquisition was carried out by five participants performing exercises while wearing an inertial measurement unit (IMU) sensor attached to a wristband on their wrists. However, only data acquired along the z-axis of the IMU accelerometer were used as input to train and test the proposed model, to simplify the model and optimize the training time while still achieving good performance. To examine the efficiency of the proposed method, the authors compared the performance of the cascaded model and the conventional 1D-CNN model. The obtained results showed an overall improvement in the accuracy of exercise classification by the proposed model, which was approximately 92%, compared to 82.4% for the 1D-CNN model. In addition, the authors suggested and evaluated two methods to optimize the clustering outcome of the first stage in the cascaded model. This research demonstrates that the proposed model, with advantages in terms of training time and computational cost, is able to classify fitness workouts with high performance. Therefore, with further development, it can be applied in various real-time HAR applications. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Sensors and Machine Learning)
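
The two-stage idea — a cheap coarse classifier routes each window to a cluster-specific fine classifier — can be sketched with toy stand-ins for both stages. The energy and sign rules below are invented for illustration; the paper uses a decision tree for stage one and a 1D-CNN for stage two:

```python
import numpy as np

def cascade_predict(x, stage1, stage2_models):
    """Two-stage cascade: a coarse classifier picks a cluster, then that
    cluster's dedicated model makes the fine-grained call (sketch)."""
    cluster = stage1(x)
    return cluster, stage2_models[cluster](x)

# Toy stand-ins: stage 1 splits on signal energy, stage 2 refines per cluster.
stage1 = lambda x: int(np.mean(x ** 2) > 1.0)
stage2 = {0: lambda x: "static" if np.mean(x) >= 0 else "transition",
          1: lambda x: "run" if np.max(x) > 2 else "walk"}
print(cascade_predict(np.array([0.5, 1.5, 2.5]), stage1, stage2))  # (1, 'run')
```

The cascade's appeal, as the abstract notes, is cost: the expensive model only ever sees the subset of windows routed to it.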

15 pages, 6993 KiB  
Article
Human Activity Recognition Using Attention-Mechanism-Based Deep Learning Feature Combination
by Morsheda Akter, Shafew Ansary, Md. Al-Masrur Khan and Dongwan Kim
Sensors 2023, 23(12), 5715; https://doi.org/10.3390/s23125715 - 19 Jun 2023
Cited by 2 | Viewed by 3763
Abstract
Human activity recognition (HAR) performs a vital function in various fields, including healthcare, rehabilitation, elder care, and monitoring. Researchers are using mobile sensor data (i.e., accelerometer, gyroscope) by adapting various machine learning (ML) or deep learning (DL) networks. The advent of DL has enabled automatic high-level feature extraction, which has been effectively leveraged to optimize the performance of HAR systems. In addition, the application of deep-learning techniques has demonstrated success in sensor-based HAR across diverse domains. In this study, a novel methodology for HAR was introduced, which utilizes convolutional neural networks (CNNs). The proposed approach combines features from multiple convolutional stages to generate a more comprehensive feature representation, and an attention mechanism was incorporated to extract more refined features, further enhancing the accuracy of the model. The novelty of this study lies in the integration of feature combinations from multiple stages as well as in proposing a generalized model structure with CBAM modules. This leads to a more informative and effective feature extraction technique by feeding the model with more information in every block operation. This research used spectrograms of the raw signals instead of extracting hand-crafted features through intricate signal processing techniques. The developed model has been assessed on three datasets, including KU-HAR, UCI-HAR, and WISDM datasets. The experimental findings showed that the classification accuracies of the suggested technique on the KU-HAR, UCI-HAR, and WISDM datasets were 96.86%, 93.48%, and 93.89%, respectively. The other evaluation criteria also demonstrate that the proposed methodology is comprehensive and competent compared to previous works. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Sensors and Machine Learning)

22 pages, 6391 KiB  
Article
Prediction of Joint Angles Based on Human Lower Limb Surface Electromyography
by Hongyu Zhao, Zhibo Qiu, Daoyong Peng, Fang Wang, Zhelong Wang, Sen Qiu, Xin Shi and Qinghao Chu
Sensors 2023, 23(12), 5404; https://doi.org/10.3390/s23125404 - 7 Jun 2023
Cited by 2 | Viewed by 1550
Abstract
Wearable exoskeletons can help people with mobility impairments by improving their rehabilitation. As electromyography (EMG) signals occur before movement, they can be used as input signals for the exoskeletons to predict the body’s movement intention. In this paper, the OpenSim software is used to determine the muscle sites to be measured, i.e., rectus femoris, vastus lateralis, semitendinosus, biceps femoris, lateral gastrocnemius, and tibialis anterior. The surface electromyography (sEMG) signals and inertial data are collected from the lower limbs while the human body is walking, going upstairs, and going uphill. The sEMG noise is reduced by a wavelet-threshold-based complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) reduction algorithm, and the time-domain features are extracted from the noise-reduced sEMG signals. Knee and hip angles during motion are calculated using quaternions through coordinate transformations. The random forest (RF) regression algorithm optimized by cuckoo search (CS), shortened as CS-RF, is used to establish the prediction model of lower limb joint angles from sEMG signals. Finally, root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R2) are used as evaluation metrics to compare the prediction performance of the RF, support vector machine (SVM), back propagation (BP) neural network, and CS-RF. The evaluation results of CS-RF are superior to those of the other algorithms under the three motion scenarios, with optimal metric values of 1.9167, 1.3893, and 0.9815, respectively. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Sensors and Machine Learning)
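
Time-domain sEMG features of the kind this pipeline extracts are quick to compute. The set below (mean absolute value, RMS, waveform length, zero crossings) is a representative choice common in the sEMG literature, not necessarily the paper's exact feature list:

```python
import numpy as np

def time_domain_features(seg):
    """Common sEMG time-domain features for one windowed segment (sketch)."""
    return {
        "mav": np.mean(np.abs(seg)),                    # mean absolute value
        "rms": np.sqrt(np.mean(seg ** 2)),              # root mean square
        "wl": np.sum(np.abs(np.diff(seg))),             # waveform length
        "zc": int(np.sum(np.diff(np.sign(seg)) != 0)),  # zero crossings
    }

feats = time_domain_features(np.array([1.0, -1.0, 2.0, -2.0]))
print(feats["zc"])  # 3
```

One such feature vector per muscle channel and window would form the regression input to CS-RF.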

23 pages, 4785 KiB  
Article
Deep SE-BiLSTM with IFPOA Fine-Tuning for Human Activity Recognition Using Mobile and Wearable Sensors
by Shaik Jameer and Hussain Syed
Sensors 2023, 23(9), 4319; https://doi.org/10.3390/s23094319 - 27 Apr 2023
Cited by 3 | Viewed by 1833
Abstract
Pervasive computing, human–computer interaction, human behavior analysis, and human activity recognition (HAR) fields have grown significantly. Deep learning (DL)-based techniques have recently been used effectively to predict various human actions using time series data from wearable sensors and mobile devices. The management of time series data remains difficult for DL-based techniques despite their excellent performance in activity detection, with persistent problems such as heavily biased data and difficult feature extraction. For HAR, an ensemble of Deep SqueezeNet (SE) and bidirectional long short-term memory (BiLSTM) with an improved flower pollination optimization algorithm (IFPOA) is designed in this research to construct a reliable classification model from wearable sensor data. The significant features are extracted automatically from the raw sensor data by the multi-branch SE-BiLSTM. The model can learn both short-term dependencies and long-term features in sequential data thanks to SqueezeNet and BiLSTM. The different temporal local dependencies are captured effectively by the proposed model, enhancing the feature extraction process. The hyperparameters of the BiLSTM network are optimized by the IFPOA. The model performance is analyzed using three benchmark datasets: MHEALTH, KU-HAR, and PAMAP2. The proposed model achieved accuracies of 99.98%, 99.76%, and 99.54% on the MHEALTH, KU-HAR, and PAMAP2 datasets, respectively, outperforming other approaches and delivering results competitive with state-of-the-art techniques. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Sensors and Machine Learning)

23 pages, 2173 KiB  
Article
Validity of Two Consumer Multisport Activity Tracker and One Accelerometer against Polysomnography for Measuring Sleep Parameters and Vital Data in a Laboratory Setting in Sleep Patients
by Mario Budig, Riccardo Stoohs and Michael Keiner
Sensors 2022, 22(23), 9540; https://doi.org/10.3390/s22239540 - 6 Dec 2022
Cited by 7 | Viewed by 3798
Abstract
Two commercial multisport activity trackers (Garmin Forerunner 945 and Polar Ignite) and the accelerometer ActiGraph GT9X were evaluated in measuring vital data, sleep stages and sleep/wake patterns against polysomnography (PSG). Forty-nine adult patients with suspected sleep disorders (30 males/19 females) completed a one-night PSG sleep examination followed by a multiple sleep latency test (MSLT). Sleep parameters, time in bed (TIB), total sleep time (TST), wake after sleep onset (WASO), sleep onset latency (SOL), awake time (WASO + SOL), sleep stages (light, deep, REM sleep) and the number of sleep cycles were compared. Both commercial trackers showed high accuracy in measuring vital data (HR, HRV, SpO2, respiratory rate), r > 0.92. For TIB and TST, all three trackers showed medium to high correlation, r > 0.42. Garmin had significant overestimation of TST, with MAE of 84.63 min and MAPE of 25.32%. Polar also had an overestimation of TST, with MAE of 45.08 min and MAPE of 13.80%. ActiGraph GT9X results were inconspicuous. The trackers significantly underestimated awake times (WASO + SOL) with weak correlation, r = 0.11–0.57. The highest MAE was 50.35 min and the highest MAPE was 83.02% for WASO for Garmin and ActiGraph GT9X; Polar had the highest MAE of 21.17 min and the highest MAPE of 141.61% for SOL. Garmin showed significant deviations for sleep stages (p < 0.045), while Polar only showed significant deviations for sleep cycle (p = 0.000), r < 0.50. Garmin and Polar overestimated light sleep and underestimated deep sleep, Garmin significantly, with MAE up to 64.94 min and MAPE up to 116.50%. Both commercial trackers, Garmin and Polar, did not detect any daytime sleep at all during the MSLT test. The use of the multisport activity trackers for sleep analysis can only be recommended for general daily use and for research purposes. If precise data on sleep stages and parameters are required, their use is limited. The accuracy of the vital data measurement was adequate. Further studies are needed to evaluate their use for medical purposes, inside and outside of the sleep laboratory. The accelerometer ActiGraph GT9X showed overall suitable accuracy in detecting sleep/wake patterns. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Sensors and Machine Learning)
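
The agreement metrics reported above (MAE and MAPE of tracker estimates against the PSG reference) are computed as follows; the total-sleep-time numbers in the example are hypothetical, not values from the study:

```python
import numpy as np

def mae_mape(est, ref):
    """Mean absolute error and mean absolute percentage error of estimates
    against a reference measurement (sketch)."""
    est, ref = np.asarray(est, float), np.asarray(ref, float)
    mae = np.mean(np.abs(est - ref))
    mape = 100.0 * np.mean(np.abs(est - ref) / np.abs(ref))
    return mae, mape

tst_psg = [400.0, 380.0]      # hypothetical total sleep time (min) from PSG
tst_tracker = [450.0, 420.0]  # hypothetical tracker estimates of the same nights
mae, mape = mae_mape(tst_tracker, tst_psg)
print(round(mae, 1), round(mape, 1))  # 45.0 11.5
```

A positive mean signed error on top of these would indicate the systematic TST overestimation the study observed for both trackers.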

25 pages, 1607 KiB  
Article
Matched Filter Interpretation of CNN Classifiers with Application to HAR
by Mohammed M. Farag
Sensors 2022, 22(20), 8060; https://doi.org/10.3390/s22208060 - 21 Oct 2022
Cited by 5 | Viewed by 2155
Abstract
Time series classification is an active research topic due to its wide range of applications and the proliferation of sensory data. Convolutional neural networks (CNNs) are ubiquitous in modern machine learning (ML) models. In this work, we present a matched filter (MF) interpretation of CNN classifiers accompanied by an experimental proof of concept using a carefully developed synthetic dataset. We exploit this interpretation to develop an MF CNN model for time series classification comprising a stack of a Conv1D layer followed by a GlobalMaxPooling layer acting as a typical MF for automated feature extraction and a fully connected layer with softmax activation for computing class probabilities. The presented interpretation enables developing superlight highly accurate classifier models that meet the tight requirements of edge inference. Edge inference is emerging research that addresses the latency, availability, privacy, and connectivity concerns of the commonly deployed cloud inference. The MF-based CNN model has been applied to the sensor-based human activity recognition (HAR) problem due to its significant importance in a broad range of applications. The UCI-HAR, WISDM-AR, and MotionSense datasets are used for model training and testing. The proposed classifier is tested and benchmarked on an android smartphone with average accuracy and F1 scores of 98% and 97%, respectively, which outperforms state-of-the-art HAR methods in terms of classification accuracy and run-time performance. The proposed model size is less than 150 KB, and the average inference time is less than 1 ms. The presented interpretation helps develop a better understanding of CNN operation and decision mechanisms. The proposed model is distinguished from related work by jointly featuring interpretability, high accuracy, and low computational cost, enabling its ready deployment on a wide set of mobile devices for a broad range of applications. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Sensors and Machine Learning)
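
The matched-filter reading of a Conv1D + GlobalMaxPooling stack — slide a learned template over the series and keep the peak response — can be demonstrated in a few lines. The template and signals below are toy values, not weights from the paper's model:

```python
import numpy as np

def matched_filter_score(signal, template):
    """Cross-correlate the signal with a template and keep the peak -- what
    one Conv1D kernel followed by global max pooling computes (sketch)."""
    scores = np.correlate(signal, template, mode="valid")
    return scores.max()

template = np.array([1.0, 2.0, 1.0])              # a "learned" motif
sig_with = np.array([0.0, 1.0, 2.0, 1.0, 0.0])    # contains the motif
sig_without = np.array([0.0, 0.1, 0.0, -0.1, 0.0])
print(matched_filter_score(sig_with, template) >
      matched_filter_score(sig_without, template))  # True
```

A bank of such kernels yields one peak score per template, and the softmax layer then weighs those scores into class probabilities, which is why the resulting model can stay so small.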

14 pages, 2671 KiB  
Article
Faster Deep Inertial Pose Estimation with Six Inertial Sensors
by Di Xia, Yeqing Zhu and Heng Zhang
Sensors 2022, 22(19), 7144; https://doi.org/10.3390/s22197144 - 21 Sep 2022
Cited by 3 | Viewed by 2436
Abstract
We propose a novel pose estimation method that can predict the full-body pose from six inertial sensors worn by the user. This method avoids problems encountered by vision-based approaches, such as occlusion and expensive deployment. We address several complex challenges. First, we use the SRU network structure instead of the bidirectional RNN structure used in previous work to reduce the computational effort of the model without losing accuracy. Second, our model matches the best results of previous work without requiring joint position supervision. Finally, since sensor data tend to be noisy, we use SmoothLoss to reduce the impact of inertial sensor noise on pose estimation. The faster deep inertial poser model proposed in this paper can perform online inference at 90 FPS on a CPU. We reduced the impact of each error by more than 10% and increased the inference speed by 250% compared to the previous state of the art. Full article
(This article belongs to the Special Issue Human Activity Recognition Using Sensors and Machine Learning)
