Article

Identification of Myofascial Trigger Point Using the Combination of Texture Analysis in B-Mode Ultrasound with Machine Learning Classifiers

1
Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
2
KITE Research Institute, Toronto Rehabilitation Institute, University Health Network, Toronto, ON M5G 2A2, Canada
3
Department of Physical Medicine and Rehabilitation, Dokuz Eylul University, Izmir 35340, Turkey
*
Author to whom correspondence should be addressed.
Submission received: 6 November 2023 / Revised: 5 December 2023 / Accepted: 12 December 2023 / Published: 16 December 2023
(This article belongs to the Special Issue Advanced Acoustic Sensing Technology)

Abstract

Myofascial pain syndrome is a chronic pain disorder characterized by myofascial trigger points (MTrPs). Quantitative ultrasound (US) techniques can be used to discriminate MTrPs from healthy muscle. In this study, 90 B-mode US images of upper trapezius muscles were collected from 63 participants (left and/or right side(s)). Four texture feature approaches (individually and in combination), focused on identifying spots and edges, were employed to explore the discrimination between three groups: active MTrPs (n = 30), latent MTrPs (n = 30), and healthy muscle (n = 30). Machine learning (ML) and one-way analysis of variance were used to investigate the discrimination ability of the different approaches. Statistically significant results were seen in almost all examined features for each texture feature approach; in contrast, the ML techniques struggled to produce robust discrimination. The ML techniques showed that two texture features (i.e., correlation and mean) within the combination of texture features were most important in classifying the three groups. This discrepancy between traditional statistical analysis and ML techniques prompts the need for further investigation of texture-based approaches in US for the discrimination of MTrPs.

1. Introduction

Chronic pain (e.g., myofascial pain syndrome (MPS)) affects nearly one hundred million adults in the United States, with an annual cost between USD 560 and 635 billion [1]. MPS is one of the most prevalent musculoskeletal pain disorders, occurring in every age group, and has been associated with primary pain conditions, including osteoarthritis, disc syndrome, tendinitis, migraines, and spinal dysfunction [2]. Myofascial trigger points (MTrPs) can be used to characterize MPS. These can be split into two types: active MTrPs (A-MTrPs), which are spontaneously painful nodules, and latent MTrPs (L-MTrPs), which are nodules that are only painful when palpated.
MTrPs have been classically defined as a “hyperirritable spot” in skeletal muscle that is associated with a hypersensitive palpable nodule in a taut band [3]. The diagnostic criteria for MPS involve physical screening, but studies have shown that the manual detection of MTrPs is unreliable [4]. Quantitative techniques can help improve the detection of MTrPs.
Ultrasound (US) is an attractive modality for this problem as it has been used to identify MTrPs [4,5,6]. It is a non-invasive way to assess muscles, tendons, and ligaments [7,8,9] and is relatively low cost. Doppler and elastography US have been used to visualize and distinguish MTrPs from normal tissue [8,10,11,12]. Unfortunately, not all clinical US machines are equipped with elastography capabilities, and these approaches require comprehensive training to use and interpret. Brightness mode (B-mode) US, on the other hand, is readily available in most clinics and hospitals and would be the preferred option for diagnosing and screening musculoskeletal disorders if possible.
However, B-mode US has high variability in echo intensity depending on the operator, model, and more. Thus, texture features have been used to mitigate this issue and have been widely used to discriminate variables in B-mode US images. Texture features play a vital role in radiomics, providing information such as muscle fiber orientation, normal anatomy, and the extent of adipose, fibrous, and other connective tissues within muscle [10]. Previous studies have suggested that the muscle fibers within the MTrPs in the affected zone and the muscle fibers in the surrounding regions have different orientations in comparison with normal skeletal muscle [10].
Although texture feature analysis of US images has been explored to distinguish MTrPs in affected muscle from normal tissue [8,11,12], there is currently no “gold standard” to detect the presence of MTrPs within B-mode US images. Previous research has used various methods of analyzing texture to tackle this problem, such as using entropy characteristics [11], gray-level co-occurrence matrices (GLCM), blob analysis, local binary pattern (LBP), and statistical analysis [12]. A comprehensive review paper on texture analysis or classification categorized these techniques into four main categories [13]:
  • Transform-Based: Transform-based techniques employ a set of predefined filters or kernels to extract texture information from an image. Common filters include Gabor filters and LBP [14,15]. These filters highlight certain frequency components or local variations in pixel values, making them suitable for tasks where patterns are characterized by specific spatial frequencies or orientations.
  • Structural: Structural techniques focus on describing the spatial arrangement and relationships between different elements in an image. They often involve identifying and characterizing specific patterns or structures within the texture (e.g., GLCM). These methods are valuable for capturing details related to texture regularity, directionality, or organization.
  • Statistical: Statistical methods involve the analysis of various statistical properties of pixel intensities within an image or a region of interest (ROI). Common statistical features include entropy, contrast, correlation, homogeneity, energy, mean, and variance. These metrics quantify the distribution and variation of pixel values, providing insights into the texture’s overall properties, such as roughness, homogeneity, or randomness.
  • Model-Based: Model-based methods involve fitting mathematical or statistical models to patterns in an image. These models can be simple, such as a parametric distribution (i.e., Gaussian distribution or Markov random fields), or more complex, such as deep learning models like convolutional neural networks. Model-based approaches are versatile and can capture intricate texture patterns, making them increasingly popular for texture analysis.
Of these categories, we focused on features that may better describe spots, edges, and patterns, because a variety of studies describe MTrPs as “knots” in the muscle. In clinical examination (e.g., US screening), MTrPs have been described as a hyperechoic band, a hypoechoic elliptical region, or simply an echo architecture different from that of the surrounding muscle tissue [16,17].
For many clinicians and investigators, the finding of one or more MTrPs is required to assure the diagnosis of MPS. However, there remains a lack of optimal methods for characterizing these muscle structures, and achieving an objective characterization of MTrPs has the potential to enhance their localization and diagnosis. This can facilitate the development of clinical measures [15]. One of the leading challenges in the classification of B-mode US images is that they may vary in scale, view, or intensity. For these reasons, various approaches attempt to address these challenges.
Gabor filters can be used to detect directionality and are often used to reveal lines and edges in an image [18]. They can also be used to determine the structure and visual content contained within an image [13]. Previously, Gabor filters have been used to enhance fiber orientation and detect edges in US images [19,20]. LBP is another approach that has been used to characterize skeletal muscle composition in patients with MPS compared with healthy participants [8,21]. Most of these approaches have been used for image processing and statistical analysis, but classification may benefit greatly from the incorporation of machine learning (ML).
ML approaches may enhance classification as they are able to autonomously learn patterns and relationships from data [22,23,24]. ML is focused on making predictions as accurate as possible, while traditional statistical models are aimed at inferring relationships between variables [24]. ML offers advantages in terms of flexibility and scalability when contrasted with conventional statistical methods, allowing its utilization across various tasks like diagnosing, classifying, and predicting survival. Nevertheless, it is crucial to assess and compare the accuracy of muscle characterization through traditional statistical methods and ML within the context of clinical screening [25]. Supervised ML algorithms (e.g., neural networks (NNs), decision trees (DTs), etc.) can generalize from training data to make accurate predictions or classifications on new, unseen data. Their adaptability allows them to handle diverse domains and tasks, making them invaluable tools for tasks ranging from image recognition to medical diagnosis, enhancing efficiency and precision in decision-making processes [22].
Thus, this study delves into the utilization of various texture feature approaches and ML techniques to classify and characterize MTrPs in US images. We investigate different texture feature approaches (i.e., LBP, Gabor, the SEGL method, and their combination with statistical texture features) extracted from US images to classify MTrPs. We further employ various ML techniques, as well as traditional statistical analysis, to explore the effectiveness of the extracted features in characterizing and classifying muscle as A-MTrP, L-MTrP, or healthy.

2. Materials and Methods

2.1. Participants

Participants (n = 63) were recruited from the musculoskeletal/pain specialty outpatient clinic at the Toronto Rehabilitation Institute. The upper trapezius muscle of all participants was examined. All participants underwent a physical examination by a trained clinician on our team (BD), who determined the presence or absence of MTrPs (i.e., A-MTrPs and L-MTrPs) in the upper trapezius muscle according to the standard clinical criteria defined by Travell and Simons [3] and through visual confirmation on B-mode US. Participants who demonstrated no symptoms or history related to neuromuscular disease, based on diagnostic criteria, were included in this study. Each participant’s muscle(s) (right and/or left) was labeled as A-MTrPs (n = 30), L-MTrPs (n = 30), or healthy control (n = 30) (Table 1).
All subjects gave their informed consent for inclusion before they participated in the study and their upper trapezius muscles were included in our study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Institutional Review Board of the University Health Network (UHN) (protocol code 15-9488).

2.2. Ultrasound Acquisition Protocol and Pre-Processing

US videos were acquired using a US system (SonixTouch Q+, Ultrasonix Medical Corporation, Richmond, BC, Canada) with a linear ultrasonic transducer of 6–15 MHz and a depth set to 2.5 cm. The acquisition settings including time gain compensation, depth, and sector size were held constant across all participants. Acquisition was performed by an experienced sonographer with the participant sitting upright in a chair with their arms relaxed on their sides and forearms resting on their thighs. The transducer was placed on the skin in the center of the trapezius muscle (i.e., the midpoint of the muscle belly between the C7 spinous process and the acromioclavicular joint) with enough gel to cover the entire surface (Figure 1). A ten-second video (sampling frequency: 30 frames/second) of the trapezius muscle from each side per participant was recorded by moving the transducer towards the acromioclavicular joint (parallel to the orientation of the muscle fibers) at approximately 1 cm/s, generating 300 images per participant for analysis (Figure 1). While recording the video, the researcher manipulated the transducer’s position to reduce artifacts and mitigate muscle distortion caused by the transducer, such as applying downward pressure. From each video, 4 unique frames/images were manually selected out of 300 B-mode images. These selected images captured various sections of the muscle (i.e., lateral to medial) and were used to validate the presence or absence of MTrPs evident in the video (Figure 2A). Images from each side of a participant (e.g., left and/or right trapezius) were treated as independent sites (Table 1).
ROIs of the muscle (i.e., the region between the upper trapezius muscle’s superior and inferior fascia) were manually extracted from the acquired images via visual localization. These muscle ROIs were further analyzed using the following texture features (Figure 3).

2.3. Texture Feature Analyses

I. Local Binary Patterns. LBP, a rotationally invariant feature, is one of the most popular texture feature analysis operators [26]. It can evaluate the local spatial patterns and contrast of grayscale images. This technique calculates eigenvalues for the different patterns in an image, such as edges and corners within a neighborhood. LBP was calculated for every B-mode image using the following equation below (Equation (1)).
$$ LBP_{P,R} = \sum_{p=0}^{P-1} s\left(g_p - g_c\right) 2^p, \qquad s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0, \end{cases} \tag{1} $$
where $P$ is the number of pixels within the neighborhood (within a circle of radius $R = 1$), $g_p$ is the intensity of the $p$th neighboring pixel, $g_c$ is the intensity of the center pixel, and $s(x)$ is the thresholding function that yields the binary code.
In our study, a 3 by 3 neighborhood was used, and its central pixel intensity was compared with its surrounding eight neighbor pixels [27]. If the neighboring pixel intensity was below that of the central pixel, it was labeled 0; otherwise, it was assigned the value 1. The resultant binary matrix was then multiplied element-wise by a fixed weight matrix and summed, with the sum replacing the central pixel (i.e., the LBP measure). This produced one of 256 (2⁸) possible patterns.
LBP was calculated across the entire ROI; pixels on the outer border of the ROI (which did not have eight neighbors) were replaced with the next closest pixel values (Figure 2C).
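The 3 × 3 LBP computation described above can be sketched in Python as follows. This is a minimal illustration, not the exact implementation used in the study: edge padding here is an approximation of the nearest-pixel border handling, and the function name is illustrative.

```python
import numpy as np

def lbp_3x3(img):
    """8-neighbor LBP: threshold each neighbor against the center
    pixel, weight the resulting bits by 2^p, and sum (Equation (1))."""
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, 1, mode="edge")  # stand-in border handling
    # Neighbor offsets, paired with fixed weights 2^0 .. 2^7.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    out = np.zeros(img.shape, dtype=int)
    for p, (dy, dx) in enumerate(offsets):
        neighbor = padded[1 + dy:1 + dy + img.shape[0],
                          1 + dx:1 + dx + img.shape[1]]
        out += (neighbor >= img).astype(int) << p  # s(g_p - g_c) * 2^p
    return out  # each value is one of the 256 possible patterns
```

On a constant image every neighbor equals the center, so s(·) = 1 for all eight bits and every pixel maps to pattern 255.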
II. Gabor Feature. Gabor filtering was introduced by Daugman and used in pattern analysis applications [28,29,30]. The Gabor filter-based features are directly extracted from the gray-level images (i.e., B-mode images) and compute a measure of “energy” in a window around each pixel in each response image. In the spatial domain, a two-dimensional Gabor filter is a Gaussian kernel function modulated by a complex sinusoidal plane wave, defined as (Equation (2)) [31]:
$$ G(x, y) = \exp\left(-\frac{1}{2}\left[\frac{x'^2}{\sigma_1^2} + \frac{y'^2}{\sigma_2^2}\right]\right) \cos\left(2\pi f x' + \varphi\right), \tag{2} $$
$$ x' = x \sin\theta + y \cos\theta, \qquad y' = x \cos\theta - y \sin\theta, $$
where $f$ is the spatial frequency of the wave at angle $\theta$ with the x-axis, $\sigma_1$ and $\sigma_2$ are the standard deviations of the 2-D Gaussian envelope, and $\varphi$ is the phase.
Gabor features were calculated using the Gabor feature extraction function created by Haghighat et al. [32] in MATLAB (2023a, The MathWorks, Natick, MA, USA). Forty Gabor filters were calculated at 5 frequency scales for eight orientations (i.e., θ: 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°), producing 40 Gabor feature images for each B-mode US image (Figure 2B).
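A filter bank matching this layout (5 frequency scales × 8 orientations = 40 filters) can be sketched as below. The specific frequencies, σ values, and kernel size are illustrative assumptions, not the values used by Haghighat et al.'s function.

```python
import numpy as np

def gabor_kernel(f, theta, sigma1=2.0, sigma2=2.0, size=15, phi=0.0):
    """Spatial-domain Gabor kernel per Equation (2): a Gaussian
    envelope modulated by a cosine wave at frequency f and angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.sin(theta) + y * np.cos(theta)   # rotated coordinates
    yr = x * np.cos(theta) - y * np.sin(theta)
    envelope = np.exp(-0.5 * (xr**2 / sigma1**2 + yr**2 / sigma2**2))
    return envelope * np.cos(2 * np.pi * f * xr + phi)

# 5 frequency scales x 8 orientations -> 40 filters, as in the study.
frequencies = [0.05, 0.1, 0.2, 0.3, 0.4]            # assumed scales
orientations = np.deg2rad(np.arange(0, 360, 45))    # 0, 45, ..., 315 deg
bank = [gabor_kernel(f, th) for f in frequencies for th in orientations]
```

Convolving a B-mode image with each kernel yields one response image per filter; the "energy" in a window around each pixel of each response forms the feature.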
III. SEGL Method. SEGL stands for statistical, edge, GLCM, and LBP and was proposed by Fekri Ershad S. for textual analysis [33]. It is a feature extraction method that combines statistical, edge, GLCM, and LBP features. First, LBP is calculated from the input image. Then, GLCM is calculated on the resultant LBP image in which the edge feature is then calculated before calculating the statistical features.
GLCM was proposed by Haralick and Shanmugam [34]. GLCM provides information about how often a pixel with the intensity value i occurs in a specific spatial relationship to a pixel with the value j. In this study, GLCM was calculated along 8 directions (i.e., θ: 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°) with an empirically determined distance (offset = one pixel).
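A minimal GLCM computation for one offset direction can be sketched as follows (a plain-Python loop for clarity; the function name is illustrative):

```python
import numpy as np

def glcm(img, dy, dx, levels):
    """Count how often gray level i co-occurs with gray level j at
    offset (dy, dx), then normalize to joint probabilities P_ij."""
    img = np.asarray(img)
    h, w = img.shape
    mat = np.zeros((levels, levels), dtype=float)
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                mat[img[y, x], img[y2, x2]] += 1
    return mat / mat.sum()

# Eight directions at a one-pixel offset, as used in this study.
offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
           (0, -1), (1, -1), (1, 0), (1, 1)]
```

For example, in a 2 × 2 image whose columns are levels 0 and 1, every horizontal pair is (0, 1), so the normalized matrix concentrates all its mass in entry P[0, 1].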
Edge detection is the process of localizing pixel intensity transitions that have been used to extract information in the image via object recognition, target tracking, segmentation, etc. It is defined by a discontinuity in gray-level values or a boundary between two regions with relatively distinct gray-level values [35]. The Canny edge detection method was used, as previous literature has shown that the Sobel edge detection method cannot produce smooth and thin edges compared to the Canny method [36].
Finally, the seven statistical features, described below (part IV), were calculated over the edge-detected images. This resulted in 56 features (8 directions × 7 statistical features).
IV. Statistical Feature. Statistical features were used to measure the image variation. In our study, 7 statistical features of entropy, energy, mean, contrast, homogeneity, correlation, and variance were computed. A summary of these statistical features is provided below (Equations (3)–(9)).
  • Entropy: shows the degree of randomness of pixel intensities within an image (Equation (3)) [7,30,34]:
$$ \mathrm{Entropy}: \; a_1 = -\sum_{i,j=0}^{N-1} P_{i,j} \ln\left(P_{i,j}\right), \tag{3} $$
  • Contrast: measures the local contrast of an image (Equation (4)):
$$ \mathrm{Contrast}: \; a_2 = \sum_{i,j=0}^{N-1} P_{i,j} \, (i - j)^2, \tag{4} $$
  • Correlation: provides a correlation between the two pixels in a pixel pair (Equation (5)):
$$ \mathrm{Correlation}: \; a_3 = \sum_{i,j=0}^{N-1} P_{i,j} \, \frac{(i - \mu)(j - \mu)}{\sigma^2}, \tag{5} $$
  • Homogeneity: measures the local homogeneity of a pixel pair (Equation (6)):
$$ \mathrm{Homogeneity}: \; a_4 = \sum_{i,j=0}^{N-1} \frac{P_{i,j}}{1 + (i - j)^2}, \tag{6} $$
  • Energy: measures the number of repeated pairs (Equation (7)):
$$ \mathrm{Energy}: \; a_5 = \sum_{i,j=0}^{N-1} \left(P_{i,j}\right)^2, \tag{7} $$
  • Mean (Equation (8)):
$$ \mathrm{Mean}: \; a_6 = \sum_{i,j=0}^{N-1} i \, P_{i,j}, \tag{8} $$
  • Variance (Equation (9)):
$$ \mathrm{Variance}: \; a_7 = \sum_{i,j=0}^{N-1} P_{i,j} \, (i - \mu)^2, \tag{9} $$
where $P_{i,j}$ is the value at position (i, j) in the output image, $\mu$ and $\sigma$ are, respectively, the mean and standard deviation of all $P_{i,j}$ values in the output image, and $N$ is the number of gray levels in the output image.
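Taken together, Equations (3)–(9) can be computed from a normalized co-occurrence matrix in a few lines. This sketch assumes P already sums to 1; a small epsilon guards the logarithm at zero entries (an implementation choice, not stated in the paper).

```python
import numpy as np

def glcm_stats(P):
    """Seven statistical features (Equations (3)-(9)) from a
    normalized co-occurrence matrix P."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    mean = (i * P).sum()                     # Equation (8)
    var = (P * (i - mean) ** 2).sum()        # Equation (9)
    eps = np.finfo(float).tiny               # guard log(0)
    return {
        "entropy": -(P * np.log(P + eps)).sum(),
        "contrast": (P * (i - j) ** 2).sum(),
        "correlation": (P * (i - mean) * (j - mean)).sum() / var,
        "homogeneity": (P / (1 + (i - j) ** 2)).sum(),
        "energy": (P ** 2).sum(),
        "mean": mean,
        "variance": var,
    }
```

For a uniform 2 × 2 matrix (all entries 0.25), the mean is 0.5, the variance 0.25, and the correlation vanishes, which is a quick sanity check on the formulas.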

2.4. Classification Techniques, Training, and Evaluation

The features calculated from each approach (Table 2) were used to train a variety of ML models to discriminate muscle with MTrPs (A-MTrPs and L-MTrPs) from healthy muscle. The ML models were implemented in Python using the Scikit-learn library. These models were logistic regression (LR) [37], decision tree (DT) [38], random forest (RF) [39], k-nearest neighbors (kNN) [40], naive Bayes (NB) [41], support vector machine (SVM) [42], and artificial neural networks (NNs) [43,44]. These models were chosen because they are common in the literature [45], have different strengths, and could easily be implemented. Each model used the library's default parameters, except for selected hyperparameters (e.g., the number of neighbors in kNN), which were tuned using grid search (Table 3).
The NN was a single hidden-layer network (512) with a dropout layer (50% of nodes dropped). All activation functions were Rectified Linear Units. The output layer was a 3-node output with an activation function of SoftMax. The NN was trained for 250 epochs with an early stopping criterion of 7 epochs of no improvement in the validation loss. The learning rate was the default set by Keras and was decreased by a factor of 0.1 after 3 epochs of no improvement to a minimum learning rate = 0.00001.
Input to all classifiers were the features from each approach as seen in Table 2. A leave-one-site-out approach was used due to the low number of images to better evaluate performance. The remaining examples were used for training (i.e., LR, DT, RF, k-NN, NB, and SVM) with the exception of the NN approach, where they were split into 75% training and 25% validation sets. For example, a training set would consist of 356 images (89 sites × 4 US images), and a test set would consist of 4 images. In the case of the NN, the training and validation sets would consist of 268 and 88 images (67 sites × 4 US images; 22 sites × 4 US images), respectively. Performance was evaluated using classification accuracy, F1-score, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), which were calculated via the function statsOfMeasure in MATLAB.
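The fold structure described above can be sketched as pure index bookkeeping (site labels here are synthetic; real code would pair each index with its feature vector and label):

```python
import numpy as np

def leave_one_site_out(site_ids):
    """Yield (train_idx, test_idx) pairs, holding out all images
    from one site per fold, as in the paper's evaluation."""
    site_ids = np.asarray(site_ids)
    for site in np.unique(site_ids):
        test = np.where(site_ids == site)[0]
        train = np.where(site_ids != site)[0]
        yield train, test

# 90 sites x 4 images each -> 90 folds of 356 train / 4 test images.
sites = np.repeat(np.arange(90), 4)
folds = list(leave_one_site_out(sites))
```

Grouping by site (rather than by image) prevents frames from the same muscle video appearing in both the training and test sets.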

2.5. Ensemble Approaches, Feature Importance, and Statistical Analysis

The ML techniques were further investigated using an ensemble approach. The best-performing trained classifier for each technique (e.g., kNN, SVM, etc.) was selected based on the mean performance across all 4 feature approaches (i.e., B-mode, LBP, Gabor, and SEGL), as shown with asterisks (*) in Table 3. These selected classifiers were then used to perform a majority vote for the classification task. This was implemented via the function majorityvote in MATLAB.
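A minimal majority-vote combiner over the selected classifiers might look like the following Python sketch (the tie-breaking rule of preferring the smallest label is an assumption; MATLAB's majorityvote may behave differently):

```python
import numpy as np
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions by majority vote.

    predictions: 2-D array-like, rows = classifiers, columns = samples.
    Ties are broken by the smallest label (assumed convention).
    """
    predictions = np.asarray(predictions)
    voted = []
    for col in predictions.T:           # one column per sample
        counts = Counter(col.tolist())
        # Iterate labels in ascending order so max() keeps the
        # smallest label among ties.
        best = max(sorted(counts), key=counts.get)
        voted.append(best)
    return np.array(voted)
```

For three classifiers predicting [0, 1, 2], [0, 1, 1], and [1, 2, 1] on three samples, the vote yields [0, 1, 1].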
In addition, to determine which features were more important toward the classification task, we examined the classification performance of using a single statistical feature (e.g., entropy, mean, etc.) and removing a single feature (from the set of 7).
For the single statistical feature case, we took the features from all approaches (as seen in Table 2) and used only the statistical feature of interest. In cases where there were more than 7 features (i.e., Gabor and SEGL), the mean values were used (i.e., 40 entropy features converted into a single mean entropy feature for the Gabor approach). This resulted in a vector of 4 values (i.e., entropy feature from the 4 approaches).
For the removal of a single feature case, the same procedure was used except that the features that were not removed were used as inputs (i.e., vector of 24 values (6 statistical features × 4 approaches)). Statistical analysis was performed on each feature using a one-way analysis of variance (ANOVA) to compare the 3 groups: A-MTrPs, L-MTrPs, and healthy control.
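The per-feature one-way ANOVA can be reproduced with SciPy. The arrays below are synthetic stand-ins for the measured feature values of the three groups (n = 30 each), not the study's data:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
a_mtrp = rng.normal(1.0, 0.2, 30)    # synthetic feature values
l_mtrp = rng.normal(1.1, 0.2, 30)
healthy = rng.normal(1.3, 0.2, 30)

# One-way ANOVA across the three groups for a single feature;
# p < 0.05 would be flagged as statistically significant.
f_stat, p_value = f_oneway(a_mtrp, l_mtrp, healthy)
```

In the study this test was run separately for each feature of each approach, yielding the significance flags reported in Table 6.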

3. Results

Table 4 shows the classification accuracy (%), F1-score, sensitivity, specificity, PPV, and NPV for the best parameter of each ML technique, with the bolded values showing the best performance for each parameter for each approach (B-mode, LBP, Gabor feature, and SEGL).
Figure 4 shows the confusion matrices of the ML techniques for each approach (B-mode, LBP, Gabor feature, and SEGL), each approach with the majority vote, a single statistical feature, and the removal of a single statistical feature. For each analysis, the ML classifier with the parameter that presented the best performance is shown in Table 5.
Table 5 shows the classification accuracy (%), F1-score, sensitivity, specificity, PPV, and NPV for the ensemble approach and the effects of using a single statistical feature and the removal of a single statistical feature. The highest performance can be seen with the “correlation” feature (accuracy = 53.33%, F1-score = 0.4861) and the removal of variance (accuracy = 51.67%, F1-score = 0.518).
Table 6 shows the results of the statistical analysis (mean and standard deviation) of all four approaches (B-mode, SEGL, Gabor, and LBP) between all 3 groups: A-MTrPs, L-MTrPs, and healthy controls with bolded values showing statistical significance. Statistical differences (p < 0.05) were seen for almost all features for all approaches except in B-mode (i.e., entropy, contrast, and energy) and Gabor (i.e., mean and correlation).

4. Discussion

Our study investigated the effectiveness of combining texture features derived from US images that focused on edges and spots for the purposes of discriminating muscles with MTrPs from healthy muscle.
Our findings indicate that a combined approach did not achieve a high level of accuracy in distinguishing between A-MTrPs, L-MTrPs, and healthy muscle. The combined approach showed slightly better performance (in majority votes) for the B-mode and SEGL methods compared to the LBP and Gabor features (49.44% and 49.44% vs. 47.22% and 48.89%, respectively). We hypothesized that structural and statistical approaches, and a combination of them, could better classify muscle with MTrPs from healthy muscle. However, the overall accuracies obtained from these combination approaches fell within a similar range, from 43.33% to 53.33%. These results are comparable to another study that compared texture features to a CNN approach [46]. Their F1-scores ranged from 0.383 to 0.477 for their texture approaches (i.e., first-order statistical, LBP, and blob analysis) when classifying these three groups using an NN. The present study shows better performance in the texture feature approach, which may be attributed to the combined ensemble approach and to features that focus on structural information (i.e., spots and edges).
Additionally, when a simple ensemble approach using majority voting was used, almost no improvements were observed across the different approaches (i.e., SEGL classification accuracy: 48.05% to 49.44%, LBP classification accuracy: 48.89% to 47.22%, B-mode classification accuracy: 53.06% to 49.44%, and Gabor classification accuracy: 48.33% to 48.89%).
It is worth mentioning that, while the PPV and classification accuracy showed only an approximately 50% ability to distinguish MTrPs (i.e., A-MTrPs and L-MTrPs) from healthy muscle, the specificity and NPV were almost 75%. This may help provide clinicians with more certainty in identifying the absence of MTrPs.
Statistical analysis showed no statistically significant differences in “correlation” and “mean” for the Gabor feature approach (p = 0.0857 and p = 0.2338). This could be attributed to the fact that the Gabor feature measures the gray level of US images [47], and similar means and standard deviations were seen in the A-MTrP and L-MTrP groups, as indicated in Table 6. These findings align with previous studies that have reported muscle with MTrPs to exhibit anisotropy [10].
While the statistical analysis revealed statistically significant differences in most features among the three groups, the ML techniques could not classify the three groups sufficiently. This may be because the feature values overlap considerably among the three groups, as seen in Table 6.
The result of our traditional statistical analysis agrees with the results seen in previous literature [8]. One study using LBP and blob analysis demonstrated statistically significant results between healthy individuals and patients with MPS (p < 0.001) [8]. Based on this, they suggested that a combination of texture features (i.e., LBP and blob area and count) can be used to describe differences between individuals with MPS and healthy individuals using a principal component analysis. However, this study grouped individuals with both A-MTrPs and L-MTrPs into the group of individuals with MPS. Koh et al. demonstrated better performance in classifying MTrPs (i.e., A-MTrP and L-MTrP grouped) from healthy muscle compared to the three-group case (i.e., A-MTrP, L-MTrP, and healthy muscle) [46]. These studies within the literature plus the results seen in this study suggest that MTrPs can be distinguished from healthy muscle but may not be sufficient for discrimination between the two types of MTrPs (i.e., A-MTrPs from L-MTrPs).
Notably, the ‘correlation’ and ‘mean’ features demonstrated better discriminatory ability than the other features, yielding accuracies of 53.33% and 52.5%, respectively. Unsurprisingly, when these features were removed, the accuracies decreased to the lowest values of 49.17% and 50.83%, respectively, suggesting that these features carry significant weight in the classification performance.
Overall, MTrPs have been identified and labeled as hypoechoic (dark grey) nodules in US images in previous literature [48,49]. However, recent research has proposed the identification of MTrPs as large hypoechoic contracture knots, which also exhibit smaller hyperechoic “speckles” within the hypoechoic contracture knot [50,51]. The presence of these “speckles” can affect the structural information of MTrPs within the muscle ROI and interfere with the characterization of muscle with MTrPs using texture feature analysis. For instance, entropy is capable of describing homogeneity and randomness in the observed patterns in US images, while LBP depicts the structural elements (spots, edges, etc.) of US backscatter. Consequently, the presence of different patterns within muscles affected by MTrPs may lead to variations in the values of calculated texture features within each group, thereby reducing the predictive power of the ML techniques.
Another aspect to consider is the relationship between the US image and the clinical scenario. The existing literature has proposed certain clinical criteria for MTrPs, but these criteria have not been clearly associated with specific US abnormalities. Currently, most researchers in this field concur that MTrPs are a physical entity that exhibits a spherical or elliptical shape, but this has not been thoroughly investigated [52]. Therefore, it is crucial to determine characteristics that can identify the MTrP in ultrasound, which can then be exploited for classification purposes.
In addition, defining the “border zone” that separates this region from the surrounding normal muscle is necessary, as previous literature has suggested that this border or transition zone may provide more valuable information than the lesion (i.e., hypoechoic contracture knot) itself [53]. Moreover, in cases where a patient experiences pain but does not present with MTrPs, it is uncertain whether there is an ‘at-risk’ area that later transforms into a visually defined spherical/elliptical MTrP.
To the best of our knowledge, this study represents the first attempt to investigate the combination of texture features focusing on information that represents the known representation of MTrP in US for discriminating muscle with MTrPs from healthy muscle. Previous studies have primarily relied on traditional statistical methods as opposed to ML approaches [12,54]. This study focused on a data-driven process relying less on user knowledge to achieve more precise predictions. This helps to avoid the mistake of using an inappropriate statistical model on the dataset, which could limit accuracy [24].
It is worth mentioning that the proposed approach of using a combination of texture features may be a potential tool for discriminating and characterizing muscular structural information in various medical fields. For example, one study used entropy and energy features of LBP images to quantitatively assess the spastic biceps brachii muscle in post-stroke patients [55]. Additionally, another study used the angular second moment, contrast, and homogeneity features calculated from a GLCM of US images of the quadriceps to measure muscle texture (pattern) under the effects of neuromuscular electrical stimulation in order to characterize individuals with lower back pain [56]. Thus, it is likely that the proposed approach could be used to interpret the uniformity of muscle patterns and abnormalities in other applications (e.g., rehabilitation).

Limitations

One limitation of this study lies in the proper definition and localization of the region of MTrPs within the US images from the muscle for the analysis. While the literature agrees that MTrPs present as hypoechoic structures in US, it is uncertain what area around these regions constitutes the MTrP. Thus, the entire ROI of the muscle was used for analysis to ensure no information was missed, but this may not be an optimal approach.
Furthermore, while hypoechoic images are generally associated with hyperperfused areas and hyperechoic images with hypoperfused areas [57], it is important to acknowledge the possibility of image artifacts, such as anisotropy, in our patient population. Hypoechoic areas may also arise from acoustic shadowing behind calcifications, lymph nodes, and certain pathological conditions. However, in this study, the manual selection of images aimed to alleviate the presence of any artifacts.

5. Conclusions

In conclusion, this paper examined texture features, individually and in combination (i.e., statistical features computed on B-mode, Gabor-filtered, and LBP images, and the SEGL method), for the classification of A-MTrPs, L-MTrPs, and healthy muscle. The focus was on capturing structural information such as edges, spots, and other relevant features. In contrast to traditional statistical analysis (e.g., ANOVA), the employed ML classification techniques did not achieve high classification performance, likely due to the substantial overlap observed in the statistical feature values between the groups (maximum reported accuracy of 53.33%). Nevertheless, our ML algorithms performed best when no MTrPs were present (i.e., identifying healthy muscle; true negatives). The results were still well above chance, suggesting that these groups may be distinguishable, but further investigation is required to improve either the features or the classification technique.
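The per-approach classifier outputs reported in Table 5 were also combined by a majority vote. A minimal sketch of such a vote follows; the labels are hypothetical, and the first-seen tie-breaking rule is an assumption (the study does not state how ties were resolved).

```python
from collections import Counter

def majority_vote(predictions):
    """Return the most common class label among several classifiers'
    predictions; ties are broken by first appearance order (assumed rule)."""
    counts = Counter(predictions)
    best = max(counts.values())
    for label in predictions:      # first-seen order breaks ties
        if counts[label] == best:
            return label

# Hypothetical per-approach predictions for one muscle image
votes = ["A-MTrP", "healthy", "A-MTrP", "L-MTrP"]
print(majority_vote(votes))  # A-MTrP
```

With four voters (B-mode, LBP, Gabor, SEGL), 2–2 ties are possible, which is why the tie rule matters for reproducing ensemble results.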
Therefore, this study highlights the need to explore the potential of extracting advanced texture features in combination with non-traditional statistical analysis for effectively identifying MTrPs from healthy muscle. Such endeavors can contribute to the development of more robust diagnostic criteria based on US image characteristics. The findings from these future studies hold promise for the development of improved mechanisms to aid in the accurate identification and diagnosis of MTrPs.
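The traditional statistical baseline referred to above, a one-way ANOVA, reduces to the F ratio of between-group to within-group mean squares. A minimal pure-Python sketch follows, using illustrative numbers rather than the study's measurements.

```python
def one_way_anova_f(*groups):
    """F statistic of a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three illustrative "feature value" groups (made-up numbers)
print(one_way_anova_f([1, 2, 3], [2, 3, 4], [3, 4, 5]))  # 3.0
```

In practice `scipy.stats.f_oneway` would be used; the point here is that the F ratio summarizes group separation in a way that can be significant even when per-sample class overlap defeats an ML classifier.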

Author Contributions

Conceptualization: F.S.Z., R.G.L.K., K.M. and D.K.; data curation: D.K.; formal analysis: F.S.Z. and R.G.L.K.; funding acquisition: D.K.; investigation: B.D. and D.K.; methodology: F.S.Z. and R.G.L.K.; project administration: F.S.Z., R.G.L.K., K.M. and D.K.; resources: B.D. and D.K.; software: F.S.Z. and R.G.L.K.; supervision: K.M. and D.K.; validation: F.S.Z., R.G.L.K., K.M. and D.K.; writing—original draft: F.S.Z.; writing—review and editing: F.S.Z., R.G.L.K., K.M. and D.K. All authors have read and agreed to the published version of the manuscript.

Funding

Banu Dilek was supported by the Tubitak 2219 grant program (grant number: 1059B192100800).

Institutional Review Board Statement

This individual protocol was approved by the Institutional Review Board of the University Health Network (UHN) (protocol code 15-9488).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data collected and analyzed in this study are available from the corresponding authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

List of Abbreviations (Alphabetical Order)

A-MTrPs: Active Myofascial Trigger Points
DT: Decision Tree
GLCM: Gray-Level Co-occurrence Matrices
KNN: K-nearest Neighbors
L-MTrPs: Latent Myofascial Trigger Points
LBP: Local Binary Pattern
LR: Logistic Regression
ML: Machine Learning
MPS: Myofascial Pain Syndrome
MTrPs: Myofascial Trigger Points
NB: Naive Bayes
NPV: Negative Predictive Value
NN: Neural Network
PPV: Positive Predictive Value
ROI: Region of Interest
RF: Random Forest
SD: Standard Deviation
SEGL: Statistical + Edge + Gray-Level Co-occurrence Matrices + Local Binary Pattern
SVM: Support Vector Machine
US: Ultrasound

References

  1. Gaskin, D.J.; Richard, P. The Economic Costs of Pain in the United States. J. Pain 2012, 13, 715–724. [Google Scholar] [CrossRef] [PubMed]
  2. Suputtitada, A. Myofascial Pain Syndrome and Sensitization. Phys. Med. Rehabil. Res. 2016, 1, 1–4. [Google Scholar] [CrossRef]
  3. Simons, D.; Travell, J.G.; Simons, L. Travell & Simons’ Myofascial Pain and Dysfunction: The Trigger Point Manual, 2nd ed.; Williams & Wilkins: Baltimore, MD, USA, 1999; Volume 1. [Google Scholar]
  4. Hsieh, C.-Y.J.; Hong, C.-Z.; Adams, A.H.; Platt, K.J.; Danielson, C.D.; Hoehler, F.K.; Tobis, J.S. Interexaminer Reliability of the Palpation of Trigger Points in the Trunk and Lower Limb Muscles. Arch. Phys. Med. Rehabil. 2000, 81, 258–264. [Google Scholar] [CrossRef] [PubMed]
  5. Gerwin, R.D.; Shannon, S.; Hong, C.Z.; Hubbard, D.; Gevirtz, R. Interrater Reliability in Myofascial Trigger Point Examination. Pain 1997, 69, 65–73. [Google Scholar] [CrossRef] [PubMed]
  6. Rathbone, A.T.L.; Grosman-Rimon, L.; Kumbhare, D.A. Interrater Agreement of Manual Palpation for Identification of Myofascial Trigger Points: A Systematic Review and Meta-Analysis. Clin. J. Pain 2017, 33, 715–729. [Google Scholar] [CrossRef] [PubMed]
  7. Kumbhare, D.; Shaw, S.; Grosman-Rimon, L.; Noseworthy, M.D. Quantitative Ultrasound Assessment of Myofascial Pain Syndrome Affecting the Trapezius: A Reliability Study: A. J. Ultrasound Med. 2017, 36, 2559–2568. [Google Scholar] [CrossRef]
  8. Kumbhare, D.; Shaw, S.; Ahmed, S.; Noseworthy, M.D. Quantitative Ultrasound of Trapezius Muscle Involvement in Myofascial Pain: Comparison of Clinical and Healthy Population Using Texture Analysis. J. Ultrasound 2020, 23, 23–30. [Google Scholar] [CrossRef]
  9. Mourtzakis, M.; Wischmeyer, P. Bedside Ultrasound Measurement of Skeletal Muscle. Curr. Opin. Clin. Nutr. Metab. Care 2014, 17, 389–395. [Google Scholar] [CrossRef]
  10. Bird, M.; Le, D.; Shah, J.; Gerber, L.; Tandon, H.; Destefano, S.; Sikdar, S. Characterization of Local Muscle Fiber Anisotropy Using Shear Wave Elastography in Patients with Chronic Myofascial Pain. In Proceedings of the 2017 IEEE International Ultrasonics Symposium (IUS), Washington, DC, USA, 6–9 September 2017; IEEE Computer Society: Washington, DC, USA, 2017. [Google Scholar] [CrossRef]
  11. Turo, D.; Otto, P.; Shah, J.P.; Heimur, J.; Gebreab, T.; Zaazhoa, M.; Armstrong, K.; Gerber, L.H.; Sikdar, S. Ultrasonic Characterization of the Upper Trapezius Muscle in Patients with Chronic Neck Pain. Ultrason. Imaging 2013, 35, 173–187. [Google Scholar] [CrossRef]
  12. Kumbhare, D.A.; Ahmed, S.; Behr, M.G.; Noseworthy, M.D. Quantitative Ultrasound Using Texture Analysis of Myofascial Pain Syndrome in the Trapezius. Crit. Rev. Biomed. Eng. 2018, 46, 1–30. [Google Scholar] [CrossRef]
  13. Armi, L.; Fekri-Ershad, S. Texture Image Analysis and Texture Classification Methods—A Review. arXiv 2019, arXiv:1904.06554. [Google Scholar] [CrossRef]
  14. Ojala, T.; Pietikäinen, M.; Harwood, D. A Comparative Study of Texture Measures with Classification Based on Featured Distributions. Pattern Recognit. 1996, 29, 51–59. [Google Scholar] [CrossRef]
  15. Jain, A.K. Unsupervised Texture Segmentation Using Gabor Filters. Pattern Recognit. 1991, 24, 1167–1186. [Google Scholar] [CrossRef]
  16. Thomas, K.; Shankar, H. Targeting Myofascial Taut Bands by Ultrasound. Curr. Pain Headache Rep. 2013, 17, 349. [Google Scholar] [CrossRef] [PubMed]
  17. Kumbhare, D.A.; Elzibak, A.H.; Noseworthy, M.D. Assessment of Myofascial Trigger Points Using Ultrasound. Am. J. Phys. Med. Rehabil. 2016, 95, 72–80. [Google Scholar] [CrossRef] [PubMed]
  18. Daugman, J.G. Complete Discrete 2-D Gabor Transforms by Neural Networks for Image Analysis and Compression. IEEE Trans. Acoust. Speech Signal Process. 1988, 36, 1169. [Google Scholar] [CrossRef]
  19. Udomhunsakul, S. Edge Detection in Ultrasonic Images Using Gabor Filters. In 2004 IEEE Region 10 Conference TENCON 2004; IEEE: Piscataway, NJ, USA, 2004; Volume 1, pp. 175–178. [Google Scholar] [CrossRef]
  20. Zhou, Y.; Zheng, Y.P. Longitudinal Enhancement of the Hyperechoic Regions in Ultrasonography of Muscles Using a Gabor Filter Bank Approach: A Preparation for Semi-Automatic Muscle Fiber Orientation Estimation. Ultrasound Med. Biol. 2011, 37, 665–673. [Google Scholar] [CrossRef]
  21. Paris, M.T.; Mourtzakis, M. Muscle Composition Analysis of Ultrasound Images: A Narrative Review of Texture Analysis. Ultrasound Med. Biol. 2021, 47, 880–895. [Google Scholar] [CrossRef]
  22. Sarker, I.H. Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Comput. Sci. 2021, 2, 160. [Google Scholar] [CrossRef]
  23. Gobeyn, S.; Mouton, A.M.; Cord, A.F.; Kaim, A.; Volk, M.; Goethals, P.L.M. Evolutionary Algorithms for Species Distribution Modelling: A Review in the Context of Machine Learning. Ecol. Modell. 2019, 392, 179–195. [Google Scholar] [CrossRef]
  24. Ley, C.; Martin, R.K.; Pareek, A.; Groll, A.; Seil, R.; Tischer, T. Machine Learning and Conventional Statistics: Making Sense of the Differences. Knee Surg. Sports Traumatol. Arthrosc. 2022, 30, 753–757. [Google Scholar] [CrossRef] [PubMed]
  25. Rajula, H.S.R.; Verlato, G.; Manchia, M.; Antonucci, N.; Fanos, V. Comparison of Conventional Statistical Methods with Machine Learning in Medicine: Diagnosis, Drug Development, and Treatment. Medicina 2020, 56, 455. [Google Scholar] [CrossRef] [PubMed]
  26. Texture Classification Based on Random Threshold Vector Technique. Available online: https://www.researchgate.net/publication/242611926_Texture_Classification_Based_on_Random_Threshold_Vector_Technique (accessed on 20 August 2023).
  27. Ojala, T.; Pietikäinen, M.; Mäenpää, T. Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  28. Haghighat, M.; Zonouz, S.; Abdel-Mottaleb, M. Identification Using Encrypted Biometrics. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Proceedings of the Computer Analysis of Images and Patterns: 15th International Conference, CAIP 2013, York, UK, 27–29 August 2013; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8048, pp. 440–448. [Google Scholar] [CrossRef]
  29. Daugman, J.G. Uncertainty Relation for Resolution in Space, Spatial Frequency, and Orientation Optimized by Two-Dimensional Visual Cortical Filters. J. Opt. Soc. Am. A 1985, 2, 1160. [Google Scholar] [CrossRef] [PubMed]
  30. Gdyczynski, C.M.; Manbachi, A.; Hashemi, S.M.; Lashkari, B.; Cobbold, R.S.C. On Estimating the Directionality Distribution in Pedicle Trabecular Bone from Micro-CT Images. Physiol. Meas. 2014, 35, 2415–2428. [Google Scholar] [CrossRef]
  31. Vazquez-Fernandez, E.; Dacal-Nieto, A.; Martin, F.; Torres-Guijarro, S. Entropy of Gabor Filtering for Image Quality Assessment. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Proceedings of the International Conference Image Analysis and Recognition, Berlin, Heidelberg, Varzim, Portugal, 21–23 June 2010; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6111, pp. 52–61. [Google Scholar] [CrossRef]
  32. CloudID: Trustworthy Cloud-Based and Cross-Enterprise Biometric Identification|Request PDF. Available online: https://www.researchgate.net/publication/279886437_CloudID_Trustworthy_cloud-based_and_cross-enterprise_biometric_identification (accessed on 16 August 2023).
  33. Ershad, S.F. Texture Classification Approach Based on Combination of Edge & Co-Occurrence and Local Binary Pattern. arXiv 2012, arXiv:1203.4855. [Google Scholar]
  34. Haralick, R.M.; Dinstein, I.; Shanmugam, K. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef]
  35. Kumar, R.; Arthanari, M.; Sivakumar, M. Image Segmentation Using Discontinuity-Based Approach. Int. J. Multimed. Image Process. 2011, 1, 72–78. [Google Scholar] [CrossRef]
  36. Othman, Z.; Haron, H.; Kadir, M.A. Comparison of Canny and Sobel Edge Detection in MRI Images. Comput. Sci. Biomech. Tissue Eng. Group Inf. Syst. 2009, 133–136. [Google Scholar]
  37. Hosmer, D.W.; Lemeshow, S.; Sturdivant, R.X. Applied Logistic Regression, 3rd ed.; Wiley Series in Probability and Statistics; John Wiley and Sons: Hoboken, NJ, USA, 1989; 528p. [Google Scholar]
  38. Quinlan, J.R. Induction of Decision Trees. Mach. Learn. 1986, 1, 81–106. [Google Scholar] [CrossRef]
  39. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  40. Cover, T.M.; Hart, P.E. Nearest Neighbor Pattern Classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
  41. Lindley, D.V. Fiducial Distributions and Bayes’ Theorem. J. R. Stat. Soc. Ser. B 1958, 20, 102–107. [Google Scholar] [CrossRef]
  42. Kecman, V. Support Vector Machines—An Introduction. In Support Vector Machines: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2005; pp. 1–47. [Google Scholar] [CrossRef]
  43. McCulloch, W.S.; Pitts, W. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  44. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Representations by Back-Propagating Errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  45. Uddin, S.; Khan, A.; Hossain, M.E.; Moni, M.A. Comparing Different Supervised Machine Learning Algorithms for Disease Prediction. BMC Med. Inform. Decis. Mak. 2019, 19, 1–16. [Google Scholar] [CrossRef]
  46. Koh, R.G.L.; Dilek, B.; Ye, G.; Selver, A.; Kumbhare, D. Myofascial Trigger Point Identification in B-Mode Ultrasound: Texture Analysis Versus a Convolutional Neural Network Approach. Ultrasound Med. Biol. 2023, 49, 2273–2282. [Google Scholar] [CrossRef]
  47. Gao, X.; Sattar, F.; Venkateswarlu, R. Corner Detection of Gray Level Images Using Gabor Wavelets. In Proceedings of the 2004 International Conference on Image Processing, 2004, ICIP’04, Singapore, 24–27 October 2004; Volume 4, pp. 2669–2672. [Google Scholar] [CrossRef]
  48. Mazza, D.F.; Boutin, R.D.; Chaudhari, A.J. Assessment of Myofascial Trigger Points via Imaging: A Systematic Review. Am. J. Phys. Med. Rehabil. 2021, 100, 1003–1014. [Google Scholar] [CrossRef]
  49. Duarte, F.C.K.; West, D.W.D.; Linde, L.D.; Hassan, S.; Kumbhare, D.A. Re-Examining Myofascial Pain Syndrome: Toward Biomarker Development and Mechanism-Based Diagnostic Criteria. Curr. Rheumatol. Rep. 2021, 23, 69. [Google Scholar] [CrossRef]
  50. Dommerholt, J.; Gerwin, R.D. Contracture Knots vs. Trigger Points. Comment on Ball et al. Ultrasound Confirmation of the Multiple Loci Hypothesis of the Myofascial Trigger Point and the Diagnostic Importance of Specificity in the Elicitation of the Local Twitch Response. Diagnostics 2022, 12, 321. Diagnostics 2022, 12, 2365. [Google Scholar] [CrossRef]
  51. Ball, A.; Perreault, T.; Fernández-De-las-peñas, C.; Agnone, M.; Spennato, J. Ultrasound Confirmation of the Multiple Loci Hypothesis of the Myofascial Trigger Point and the Diagnostic Importance of Specificity in the Elicitation of the Local Twitch Response. Diagnostics 2022, 12, 321. [Google Scholar] [CrossRef] [PubMed]
  52. Sikdar, S.; Shah, J.P.; Gebreab, T.; Yen, R.-H.; Gilliams, E.; Danoff, J.; Gerber, L.H. Novel Applications of Ultrasound Technology to Visualize and Characterize Myofascial Trigger Points and Surrounding Soft Tissue. Arch. Phys. Med. Rehabil. 2009, 90, 1829–1838. [Google Scholar] [CrossRef] [PubMed]
  53. Lefebvre, F.; Meunier, M.; Thibault, F.; Laugier, P.; Berger, G. Computerized Ultrasound B-Scan Characterization of Breast Nodules. Ultrasound Med. Biol. 2000, 26, 1421–1428. [Google Scholar] [CrossRef] [PubMed]
  54. Özçakar, L.; Merve Ata, A.; Kaymak, B.; Kara, M.; Kumbhare, D. Ultrasound Imaging for Sarcopenia, Spasticity and Painful Muscle Syndromes. Curr. Opin. Support Palliat. Care 2018, 12, 373–381. [Google Scholar] [CrossRef]
  55. Liu, P.-T.; Wei, T.-S.; Ching, C.T.-S. Quantitative Ultrasound Texture Analysis to Assess the Spastic Muscles in Stroke Patients. Appl. Sci. 2020, 11, 11. [Google Scholar] [CrossRef]
  56. Qiu, S.; Zhao, X.; Xu, R.; Xu, L.; Xu, J.; He, F.; Qi, H.; Zhang, L.; Wan, B.; Ming, D. Ultrasound Image Analysis on Muscle Texture of Vastus Intermedius and Rectus Femoris Under Neuromuscular Electrical Stimulation. J. Med. Imaging Health Inform. 2015, 5, 342–349. [Google Scholar] [CrossRef]
  57. Ihnatsenka, B.; Boezaart, A.P. Ultrasound: Basic Understanding and Learning the Language. Int. J. Shoulder Surg. 2010, 4, 55–62. [Google Scholar] [CrossRef]
Figure 1. US transducer location from upper trapezius muscle (x = C7, y = acromion).
Figure 2. (A) An example of a B-mode US image from a participant with active MTrP. The red arrows show the MTrP, a hypoechoic region. (B) An example of a corresponding Gabor-filtered image (at θ = 0 degree) from the same participant. (C) An example of a corresponding LBP from the same participant.
Figure 3. This chart shows a summary of the methods that were used for feature extraction. The red color connections represent the SEGL method, a combination of statistical, edge, and gray-level co-occurrence matrices (GLCM), and local binary pattern (LBP). Note: The numbers in each circle represent each approach.
Figure 4. Confusion matrices of the ML algorithms with the best performance for (A) each approach (B-mode, LBP, Gabor feature, and SEGL) and each approach with the majority vote; (B) a single statistical feature; and (C) the removal of a single statistical feature for discriminating the three groups: A-MTrPs, L-MTrPs, and healthy controls.
Table 1. Number of participants’ muscles in each group.
Group | Number of Sites
A-MTrPs | 30
L-MTrPs | 30
Healthy Control | 30
Note: Number of sites shows the number of each participant’s left and/or right muscle.
Table 2. The summary of texture feature approaches for each image (LBP, Gabor feature, SEGL method, and statistical features).
Approach | Number of Features
I. LBP | 7
II. Gabor Feature | 280 (40 × 7)
III. SEGL Method | 56 (8 × 7)
IV. Statistical Features | 7
Table 3. The following ML classifier techniques with their associated parameters were used. * Shows the best accuracy performance for each classifier technique.
Classifier Technique | Hyperparameters
K-nearest neighbors (KNN) [40] | n_neighbors = 3, 5 *, 7
Decision tree (DT) [38] | criterion = ‘gini’ *, ‘entropy’, ‘log_loss’
Random forest (RF) [39] | criterion = ‘gini’ *, ‘entropy’, ‘log_loss’
Logistic regression (LR) [37] | C = 0.1, 1, 10 *
Naive Bayes (NB) [41] | Gaussian (var_smoothing = 1.0, 10^-5, 10^-9 *)
Support vector machine (SVM) [42] | C = 0.1, 1, 10 *
Artificial neural network (NN) [43,44] |
Table 4. Classification accuracy (%), F1-score, sensitivity, specificity, positive prediction values (PPVs), and negative prediction values (NPVs) for the best parameter of each ML technique for each approach (SEGL, LBP, B-mode, and Gabor).
Approach | ML Technique, Parameter | Accuracy (%) | F1-Score | Sensitivity | Specificity | PPV | NPV
SEGL Method | SVM, C = 10 | 48.05 | 0.4806 | 0.4806 | 0.7403 | 0.4814 | 0.7398
 | LR, C = 10 | 46.39 | 0.4639 | 0.4639 | 0.7319 | 0.4644 | 0.7317
 | DT, criterion = ‘gini’ | 41.67 | 0.4168 | 0.4167 | 0.7083 | 0.4258 | 0.7056
 | RF, criterion = ‘log_loss’ | 45.56 | 0.4556 | 0.4556 | 0.7278 | 0.4703 | 0.7228
 | KNN, n_neighbors = 5 | 43.33 | 0.4333 | 0.4333 | 0.7167 | 0.4315 | 0.7157
 | NB, Gaussian, var_smoothing = 1.0 | 45.56 | 0.4556 | 0.4556 | 0.7278 | 0.4505 | 0.7081
 | NN | 44.44 | 0.4444 | 0.4444 | 0.7222 | 0.4467 | 0.7215
LBP | SVM, C = 10 | 44.17 | 0.4417 | 0.4417 | 0.7208 | 0.4447 | 0.7200
 | LR, C = 1.0 | 45.56 | 0.4556 | 0.4556 | 0.7278 | 0.4629 | 0.7249
 | DT, criterion = ‘gini’ | 40.28 | 0.4028 | 0.4028 | 0.7014 | 0.3968 | 0.7026
 | RF, criterion = ‘log_loss’ | 45.28 | 0.4528 | 0.4528 | 0.7264 | 0.4555 | 0.7256
 | KNN, n_neighbors = 3 | 48.89 | 0.4894 | 0.4889 | 0.7444 | 0.4879 | 0.7447
 | NB, Gaussian, var_smoothing = 10^-5 | 40.00 | 0.4000 | 0.4000 | 0.7000 | 0.4138 | 0.6896
 | NN | 43.33 | 0.4333 | 0.4333 | 0.7167 | 0.4445 | 0.7132
B-mode | SVM, C = 0.1 | 52.22 | 0.5222 | 0.5222 | 0.7611 | 0.5278 | 0.7372
 | LR, C = 1.0 | 45.83 | 0.4583 | 0.4583 | 0.7292 | 0.4710 | 0.7234
 | DT, criterion = ‘gini’ | 44.17 | 0.4417 | 0.4417 | 0.7208 | 0.4450 | 0.7196
 | RF, criterion = ‘gini’ | 49.72 | 0.4868 | 0.4972 | 0.7486 | 0.5088 | 0.7431
 | KNN, n_neighbors = 5 | 50.83 | 0.5083 | 0.5083 | 0.7542 | 0.5108 | 0.7534
 | NB, Gaussian, var_smoothing = 1.0 | 53.06 | 0.5306 | 0.5306 | 0.7653 | 0.5355 | 0.7460
 | NN | 46.94 | 0.4694 | 0.4694 | 0.7347 | 0.4858 | 0.7283
Gabor Filter | SVM, C = 10 | 48.33 | 0.4848 | 0.4889 | 0.7444 | 0.4945 | 0.7424
 | LR, C = 10 | 45.00 | 0.4500 | 0.4500 | 0.7245 | 0.4515 | 0.7245
 | DT, criterion = ‘gini’ | 45.00 | 0.4500 | 0.4500 | 0.7250 | 0.4542 | 0.7237
 | RF, criterion = ‘log_loss’ | 46.67 | 0.4667 | 0.4667 | 0.7333 | 0.4777 | 0.7297
 | KNN, n_neighbors = 5 | 47.22 | 0.4722 | 0.4722 | 0.7361 | 0.4757 | 0.7350
 | NB, Gaussian, var_smoothing = 10^-5 | 43.61 | 0.4361 | 0.4361 | 0.7181 | 0.4341 | 0.7183
 | NN | 43.06 | 0.4306 | 0.4306 | 0.7153 | 0.4343 | 0.7141
Note: The bolded numbers represent the best performance for each approach.
Table 5. The classification accuracy (%), F1-score, sensitivity, specificity, PPV, and NPV for the ML techniques: for each approach (B-mode, LBP, Gabor feature, and SEGL) with the majority vote (highlighted in orange), a single statistical feature (highlighted in green), and the removal of a single statistical feature (highlighted in blue). (SVM: support vector machine, LR: logistic regression, KNN: K-nearest neighbors, DT: decision tree).
Approach/Feature | Accuracy (%) | F1-Score | Sensitivity | Specificity | PPV | NPV
SEGL Method (Majority Vote) | 49.44 | 0.4731 | 0.4944 | 0.7472 | 0.5034 | 0.7384
LBP (Majority Vote) | 47.22 | 0.4582 | 0.4722 | 0.7361 | 0.4703 | 0.7311
B-Mode (Majority Vote) | 49.44 | 0.4786 | 0.4944 | 0.7472 | 0.5078 | 0.7397
Gabor Filter (Majority Vote) | 48.89 | 0.4855 | 0.4889 | 0.7444 | 0.4922 | 0.7429
Entropy (SVM, C = 10) | 43.33 | 0.4248 | 0.4333 | 0.7167 | 0.4472 | 0.7125
Energy (LR, C = 0.1) | 48.06 | 0.4614 | 0.4806 | 0.7403 | 0.4993 | 0.7309
Contrast (SVM, C = 1) | 49.72 | 0.4831 | 0.4972 | 0.7486 | 0.5082 | 0.7415
Correlation (SVM, C = 1) | 53.33 | 0.4861 | 0.5333 | 0.7667 | 0.5250 | 0.7485
Variance (KNN, K = 3) | 49.17 | 0.4050 | 0.4917 | 0.7458 | 0.4901 | 0.7462
Homogeneity (LR, C = 0.1) | 46.67 | 0.4508 | 0.4667 | 0.7333 | 0.4882 | 0.7258
Mean (SVM, C = 10) | 52.50 | 0.5100 | 0.5250 | 0.7625 | 0.5359 | 0.7551
Without Entropy (SVM, C = 10) | 50.83 | 0.5070 | 0.5083 | 0.7542 | 0.5107 | 0.7535
Without Energy (SVM, C = 10) | 50.28 | 0.5014 | 0.5028 | 0.7514 | 0.5053 | 0.7507
Without Contrast (SVM, C = 10) | 50.28 | 0.5014 | 0.5028 | 0.7514 | 0.5051 | 0.7507
Without Correlation (DT, criterion = ‘gini’) | 49.17 | 0.4868 | 0.4917 | 0.7458 | 0.4983 | 0.7435
Without Variance (LR, C = 10) | 51.67 | 0.5180 | 0.5167 | 0.7583 | 0.5135 | 0.7590
Without Homogeneity (SVM, C = 10) | 51.11 | 0.5090 | 0.5111 | 0.7556 | 0.5149 | 0.7545
Without Mean (SVM, C = 10) | 50.83 | 0.5088 | 0.5083 | 0.7542 | 0.5071 | 0.7544
Note: The bolded numbers represent the best performance for the ML algorithms in each category.
Table 6. The results of the statistical analysis (mean and standard deviation (SD)) of all four approaches (B-mode, SEGL, Gabor, and LBP) between all 3 groups: A-MTrPs, L-MTrPs, and healthy controls.
Feature | Approach | p-Value | Mean (A-MTrPs) | SD (A-MTrPs) | Mean (Healthy) | SD (Healthy) | Mean (L-MTrPs) | SD (L-MTrPs)
Entropy | Gabor | 2.32 × 10^-2 | 7.30 × 10^4 | 1.58 × 10^4 | 6.23 × 10^4 | 2.00 × 10^4 | 7.40 × 10^4 | 1.77 × 10^4
 | SEGL | 1.70 × 10^-2 | 7.67 × 10^2 | 2.96 × 10^2 | 5.34 × 10^2 | 3.04 × 10^2 | 7.19 × 10^2 | 3.72 × 10^2
 | B-mode | 6.88 × 10^-1 | 6.19 | 4.17 × 10^-1 | 5.72 | 4.12 × 10^-1 | 6.10 | 4.58 × 10^-1
 | LBP | 1.00 × 10^-3 | 5.37 | 1.75 × 10^-1 | 5.36 | 2.47 × 10^-1 | 5.36 | 1.81 × 10^-1
Energy | Gabor | 1.00 × 10^-3 | 9.36 × 10^8 | 1.83 × 10^8 | 1.13 × 10^9 | 2.87 × 10^8 | 9.31 × 10^8 | 2.05 × 10^8
 | SEGL | 1.38 × 10^-2 | 2.27 × 10^8 | 9.94 × 10^7 | 1.86 × 10^8 | 4.25 × 10^7 | 2.17 × 10^8 | 6.56 × 10^7
 | B-mode | 6.38 × 10^-2 | 1.73 × 10^8 | 9.50 × 10^7 | 9.28 × 10^7 | 5.42 × 10^7 | 1.47 × 10^8 | 9.62 × 10^7
 | LBP | 1.40 × 10^-3 | 1.40 × 10^9 | 2.57 × 10^8 | 1.56 × 10^9 | 4.37 × 10^8 | 1.37 × 10^8 | 2.68 × 10^8
Mean | Gabor | 2.34 × 10^-1 | 1.23 × 10^2 | 1.58 | 1.24 × 10^2 | 1.26 | 1.23 × 10^2 | 1.64 × 10^4
 | SEGL | 1.38 × 10^-2 | 9.74 × 10^3 | 4.13 × 10^3 | 6.47 × 10^3 | 4.10 × 10^7 | 9.17 × 10^3 | 5.20 × 10^3
 | B-mode | 4.20 × 10^-3 | 4.72 × 10^1 | 1.73 × 10^1 | 3.08 × 10^1 | 1.03 × 10^1 | 4.31 × 10^1 | 1.80 × 10^1
 | LBP | 1.40 × 10^-3 | 1.17 × 10^2 | 7.90 | 1.09 × 10^2 | 1.00 × 10^1 | 1.16 × 10^2 | 9.89
Contrast | Gabor | 3.80 × 10^-3 | 4.13 × 10^11 | 5.36 × 10^10 | 4.62 × 10^11 | 7.85 × 10^10 | 4.11 × 10^11 | 5.95 × 10^10
 | SEGL | 2.04 × 10^-2 | 7.65 × 10^6 | 3.53 × 10^6 | 4.95 × 10^6 | 3.57 × 10^6 | 7.09 × 10^6 | 4.42 × 10^6
 | B-mode | 1.90 × 10^-1 | 1.59 × 10^11 | 5.37 × 10^10 | 1.23 × 10^11 | 4.22 × 10^10 | 1.41 × 10^11 | 5.05 × 10^10
 | LBP | 2.01 × 10^-2 | 3.98 × 10^11 | 4.96 × 10^10 | 4.20 × 10^11 | 7.70 × 10^10 | 3.93 × 10^11 | 4.90 × 10^10
Homogeneity | Gabor | 1.70 × 10^-3 | 4.58 × 10^4 | 9.30 × 10^3 | 5.54 × 10^4 | 1.44 × 10^4 | 4.57 × 10^4 | 1.05 × 10^4
 | SEGL | 1.02 × 10^-2 | 39.76 | 1.18 × 10^1 | 2.9 × 10^1 | 1.20 × 10^1 | 3.8 × 10^1 | 1.52 × 10^1
 | B-mode | 3.24 × 10^-2 | 1.74 × 10^4 | 6.61 × 10^3 | 1.30 × 10^4 | 5.60 × 10^3 | 1.54 × 10^4 | 6.69 × 10^3
 | LBP | 3.14 × 10^-2 | 4.30 × 10^4 | 8.12 × 10^3 | 4.82 × 10^4 | 1.40 × 10^4 | 4.11 × 10^4 | 8.63 × 10^3
Correlation | Gabor | 8.56 × 10^-2 | 2.27 × 10^8 | 9.94 × 10^7 | 1.86 × 10^8 | 4.26 × 10^7 | 2.17 × 10^8 | 6.56 × 10^7
 | SEGL | 3.10 × 10^-2 | 1.39 × 10^9 | 1.89 × 10^8 | 1.51 × 10^9 | 2.56 × 10^8 | 1.39 × 10^9 | 1.43 × 10^8
 | B-mode | 3.00 × 10^-4 | 1.05 × 10^7 | 2.71 × 10^7 | 7.01 × 10^7 | 5.83 × 10^7 | 1.90 × 10^7 | 3.14 × 10^7
 | LBP | 2.00 × 10^-5 | 5.08 × 10^6 | 7.12 × 10^5 | 3.88 × 10^6 | 1.33 × 10^6 | 4.86 × 10^6 | 9.44 × 10^5
Variance | Gabor | 6.00 × 10^-4 | 2.71 × 10^7 | 5.72 × 10^6 | 3.18 × 10^7 | 6.45 × 10^6 | 2.63 × 10^7 | 4.96 × 10^6
 | SEGL | 1.40 × 10^-2 | 6.30 × 10^2 | 2.66 × 10^2 | 4.19 × 10^2 | 2.64 × 10^2 | 5.93 × 10^2 | 3.35 × 10^2
 | B-mode | 4.70 × 10^-3 | 3.06 × 10^7 | 1.40 × 10^7 | 1.97 × 10^7 | 1.01 × 10^7 | 2.80 × 10^7 | 1.13 × 10^7
 | LBP | 1.70 × 10^-3 | 5.94 × 10^8 | 1.12 × 10^8 | 7.04 × 10^8 | 1.83 × 10^8 | 5.93 × 10^8 | 1.36 × 10^8
Note: The bolded numbers represent the statistical significance (p < 0.05).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Shomal Zadeh, F.; Koh, R.G.L.; Dilek, B.; Masani, K.; Kumbhare, D. Identification of Myofascial Trigger Point Using the Combination of Texture Analysis in B-Mode Ultrasound with Machine Learning Classifiers. Sensors 2023, 23, 9873. https://doi.org/10.3390/s23249873

