Article

Short-Term Load Forecasting Based on Optimized Random Forest and Optimal Feature Selection

IT—Instituto de Telecomunicações, University of Beira Interior, 6201-001 Covilhã, Portugal
* Author to whom correspondence should be addressed.
Submission received: 6 March 2024 / Revised: 10 April 2024 / Accepted: 12 April 2024 / Published: 18 April 2024
(This article belongs to the Topic Short-Term Load Forecasting)

Abstract

Short-term load forecasting (STLF) plays a vital role in ensuring the safe, efficient, and economical operation of power systems. Accurate load forecasting provides numerous benefits for power suppliers, such as cost reduction, increased reliability, and informed decision-making. However, STLF is a complex task due to various factors, including non-linear trends, multiple seasonality, variable variance, and significant random interruptions in electricity demand time series. To address these challenges, advanced techniques and models are required. This study focuses on the development of an efficient short-term power load forecasting model using the random forest (RF) algorithm. RF combines regression trees through bagging and random subspace techniques to improve prediction accuracy and reduce model variability. The algorithm constructs a forest of trees using bootstrap samples and selects random feature subsets at each node to enhance diversity. Hyperparameters such as the number of trees, minimum sample leaf size, and maximum features for each split are tuned to optimize forecasting results. The proposed model was tested using historical hourly load data from four transformer substations supplying different campus areas of the University of Beira Interior, Portugal. The training data were from January 2018 to December 2021, while the data from 2022 were used for testing. The results demonstrate the effectiveness of the RF model in forecasting short-term hourly and one day ahead load and its potential to enhance decision-making processes in smart grid operations.

1. Introduction

Electrical load forecasting is a fundamental aspect for power suppliers, as it ensures the safe, effective, and economical operation of a power system. This forecasting practice is categorized into four types based on forecasting time: very short-term, short-term, medium-term, and long-term load forecasting [1]. Short-term load forecasting (STLF) is particularly important because it covers a forecast horizon of a few hours to a few days, allowing for the planning of generation resources, the optimization of power flow on the transmission grid, and the successful trading of power in the markets [2].
The complexity of STLF derives from several factors, including non-linear trends, multiple seasonality, variable variance, significant random interruptions, and variable daily profiles in electricity demand time series. Profiles have become more complex over the years. Together with the upcoming green revolution in power systems, they will certainly present added challenges [3], requiring advanced models to capture the hard non-linear characteristics and deliver accurate load predictions [4].
Accurate load forecasting offers numerous advantages for electric utilities, such as reduced operational and maintenance costs, increased reliability, and informed decisions for future development [5]. In addition, accurate load forecasting is essential in competitive power markets, where electricity prices are driven by demand, directly affecting the financial performance of market participants [6].
Improving the accuracy of short-term load forecasting is a challenging task but is of great value in ensuring safe and stable power system operation. Even a small increase in forecast accuracy can lead to substantial profits, making accurate, high-speed forecasting crucial for economic load dispatch in power systems [7]. Thus, the development of an efficient short-term load forecasting model is very important for smart grid operations and enhances decision-making processes [8,9]. This is particularly true for more disaggregated loads (as studied in this work), which usually constitute a more substantial challenge with higher forecasting errors [10].

1.1. Literature Review: Forecasting Models

A vast array of research exists on the prediction of time series data, encompassing various fields and presenting a multitude of proposed approaches. These methodologies strive to effectively capture the connections between future values of the predicted variable (output) and past data related to various influencing factors, which may include the predicted variable itself (input) [11].
In the past, conventional models such as autoregressive integrated moving average (ARIMA) or exponential smoothing were widely regarded as effective techniques for capturing the internal dependencies within the forecast variable. These models were part of the hard computing paradigm [12], and, to mitigate some of their shortcomings, they have been effectively integrated into assembled STLF models [13]. The adoption of artificial intelligence (AI) techniques has become increasingly prevalent in various fields, including in forecasting models [14], a trend that is also driven by data proliferation, including in this field, where increasingly disaggregated levels of load time series and related variables are now available for customers, researchers, etc. The characteristics of these time series determine different input selection and processing approaches, which are both key for the training and fine-tuning of these models [15].
In the context of power systems, AI can help operators maximize profit by forecasting electric load and electricity prices. Conventional machine learning models such as support vector machines [16], regression models [17], random forest (RF) [1], decision trees [18], and extreme gradient boosting (XGBoost) [19], as well as AI models such as artificial neural networks (ANNs) [20], are commonly used to solve electrical engineering problems, in particular electricity load forecasting. Different types of artificial neural networks have been explored, including feedforward and recurrent neural networks (e.g., Elman networks, Jordan networks, long short-term memory (LSTM), and gated recurrent unit (GRU) neural networks) and convolutional neural networks [21,22]. In addition, many authors prefer to use hybrid models, combining two or more models [23,24,25,26].
Given the complexities of electricity demand time series, machine learning (ML) methods are the preferred class of electricity demand forecasting method [27]. Random forest (RF), an ensemble learning method for both regression and classification problems, was introduced for regression problems in [28] and has since become one of the most popular ML models. The RF algorithm has been applied efficiently and extensively in several works involving long-, medium-, short-, and ultra-short-term forecasting, including recent applications to electricity load forecasting. In [29], a hybrid RF–MGF–RSM model is proposed for short-term forecasting, combining random forest with an averaging function. In [30], medium-term forecasting models for isolated power systems are developed, exploring machine learning methods such as random forest and XGBoost. In contrast, ref. [31] proposes a hybrid feature selection algorithm for short-term forecasting, integrating genetic algorithms and random forest. Meanwhile, ref. [32] presents an ultra-short-term forecasting model that combines LSTM and random forest. The authors in [33] developed a long-term forecasting system based on knowledge and data, using machine learning regression methods and inference based on fuzzy logic. These recent works highlight a variety of approaches and forecasting horizons for electricity load, from the short to the long term, exploring different machine learning techniques and models.

1.2. Contributions

In a framework where consumers are expected to manage their electrical loads more actively, it is important to tackle the issue of STLF at a more decentralized level, including high-consumption public buildings. These entities can, thereby, best understand their consumption patterns, adjust their activities, consider different dynamic pricing options, and define a self-consumption plan with the possibility of storage [34]. This more proactive role requires accurate STLF as a crucial task for a better decision-making process. Therefore, in this work, we will delve into the main issues surrounding this goal, specifically by acknowledging that a one-size-fits-all approach is not the correct path and that we should pursue a more tailored input data selection coupled with forecasting model fine-tuning.
The main contributions of this paper can be summarized as follows: (i) the selection of features combined with exogenous variables, such as calendar (hour and weekday) and temperature, allowed us to find an optimal and customized combination for each case study; this analysis proved that, regardless of the forecasting model used, it is essential to explore several types of input variables to find the most significant ones; (ii) the study of the correlation between weekdays showed that, although certain weekdays are highly correlated, grouping them will not necessarily improve accuracy; (iii) hyperparameter tuning showed that there is no single optimal parameterization for all consumption profiles, i.e., each case study obtained the best results from different parameter settings, proving that hyperparameter tuning is fundamental to improving results; (iv) contrary to expectation, training with a rolling forecast showed benefits in only one case study; in the others, introducing recent information and removing older information did not improve the results, showing that the benefit of a rolling forecast depends on the consumption profile; (v) since there is no consensus in the literature on an optimal forecasting model for STLF, this paper used the RF model, which in most cases produced more accurate forecasts than the baseline models.

1.3. Paper Structure

In this paper, Section 2 presents the proposed forecast model and the input data selection, as well as the exogenous variables used for training. Section 3 presents a brief statistical analysis of the data and provides the results of various experiments, including similar-weekday analysis, hyperparameter tuning, rolling forecast, and a comparison with baseline models. Lastly, Section 4 presents the main conclusions.

2. Proposed Forecast Model

2.1. Input Data Selection

Electricity load time series express random fluctuations, trends, and seasonality on an annual, weekly, and daily basis. The daily load profiles are dependent on the day of the week and the hour of the day and vary throughout the year. These components will depend on system size and structure, as well as weather conditions. In this study, the STLF model produced hourly forecasts one day ahead (hour t of day i ). For this purpose, the predictors used as input for the forecast model should be the most relevant variables, selected from recent history. Furthermore, data processing is fundamental to reducing forecast errors.
To obtain the most accurate forecast possible, the time series input patterns should be the most representative of the time series. Thoughtful data selection allows for maximizing the accuracy and reliability of the analysis or model. Furthermore, by selecting relevant data, it is possible to reduce noise and bias in the results, optimize the use of computational resources, and facilitate scalability and generalization of the model. This leads to deeper insights, more reliable decisions, and a better understanding of the behavior of the electrical system over time. Table 1 shows different input patterns with different combinations of relevant lags. The settings for some of these input patterns were inspired by [35].
Depending on the input pattern, different information can be introduced into the model. Unlike p2, which only offers insights into daily seasonality, pattern p1 contains comprehensive details about the weekly sequence leading up to the forecasted day i; notably, p1 captures both daily and weekly seasonality. Patterns p3 and p4 consider the same hour t that will be forecasted for the next day i, but one and three weeks earlier, respectively, expressing only weekly seasonality.
Pattern p5 considers only the hour t of the seven most recent days with the same weekday as the forecasted day i. For example, if the weekday to be forecasted is a Sunday, the sequence will be the hour t of the seven preceding Sundays. Pattern p6 selects some of the lags generally considered the most relevant and, in addition, the previous and subsequent hours; for example, for lag 24, it also considers hours 23 and 25. This pattern reduces the number of relevant lags while prioritizing recent information from up to a week ago.
Cross-patterns p7 and p8 are combinations of patterns p2 + p3 and p2 + p4, respectively, which consider daily as well as weekly seasonality. In [35], the authors demonstrated that combining daily and weekly patterns in STLF produces more favorable results than using individual daily or weekly patterns for forecasting.
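As an illustration, the lag sets behind a few of these patterns can be written out explicitly. The lists below are assumptions inferred from the descriptions in the text, not copied from Table 1, so the exact lags may differ from the paper's definitions.

```python
# Hypothetical lag sets (hours before the forecasted hour) for some of the
# input patterns described above; inferred from the text, not from Table 1.
p2 = list(range(24, 48))          # previous day's 24 hours: daily seasonality
p3 = [168]                        # same hour, one week earlier
p4 = [168, 336, 504]              # same hour, one to three weeks earlier
p6 = [23, 24, 25, 47, 48, 49,
      167, 168, 169]              # key lags plus the adjacent hours
p7 = sorted(set(p2 + p3))         # cross-pattern p2 + p3: daily + weekly
p8 = sorted(set(p2 + p4))         # cross-pattern p2 + p4

print(len(p7), len(p8))  # 25 27
```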
Figure 1 shows the sequence of input patterns p1, p2, p3, p4, p5, and p6; Figure 2 shows the sequence of input cross-patterns p7 (p2 + p3) and p8 (p2 + p4); and Figure 3 shows the representation of these input patterns.
To explore multiple prediction performances, we used three different variants of input data for model training. Variant 1 only included load information, namely an input pattern (p1–p8) and the target (24 h ahead). Variant 2 also included calendar variables, namely the hour of the day (hour = 1, …, 24) and the day of the week (dweek = Sunday (1), …, Saturday (7)), both categorical variables. Variant 3 added the ambient temperature in Celsius (temp (°C)) as an exogenous variable. Figure 4 represents these variants schematically.
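A minimal sketch of assembling the three training variants from an hourly load series is given below. Pattern p3 (the load at the same hour one week before the target) stands in for the paper's full pattern set, the load and temperature series are synthetic, and the column names are illustrative rather than the authors'.

```python
# Hedged sketch: building Variants 1-3 from an hourly load series.
# p3 stands in for the paper's input patterns; data are synthetic.
import numpy as np
import pandas as pd

idx = pd.date_range("2018-01-01", periods=24 * 60, freq="h")  # 60 days hourly
rng = np.random.default_rng(1)
load = 50 + 10 * np.sin(idx.hour / 24 * 2 * np.pi) + rng.normal(size=len(idx))
df = pd.DataFrame({"load": load}, index=idx)

df["target"] = df["load"].shift(-24)      # value 24 h ahead (the forecast target)
df["p3_lag168"] = df["load"].shift(144)   # load 168 h before the target time

# Variant 1: load lags only
v1 = df[["p3_lag168", "target"]].dropna()

# Variant 2: + calendar categoricals
df["hour"] = df.index.hour + 1            # 1..24, as in the text
df["dweek"] = df.index.dayofweek + 1      # 1..7 (pandas codes Mon=1 here;
                                          # the paper codes Sun=1..Sat=7)
v2 = df[["p3_lag168", "hour", "dweek", "target"]].dropna()

# Variant 3: + ambient temperature (synthetic annual cycle)
df["temp"] = 15 + 8 * np.sin(idx.dayofyear / 365 * 2 * np.pi)
v3 = df[["p3_lag168", "hour", "dweek", "temp", "target"]].dropna()
```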

2.2. Forecasting Model: Random Forest (RF)

Random forest (RF) is an ensemble algorithm based on a decision tree model (CART—classification and regression trees) [36]. The main idea of the RF model is to create a random combination of regression trees, using techniques such as bagging [37] and random subspace [38]. In bagging, each tree in the forest is built from a bootstrap sample (sampling with replacement) of the original dataset. The random subspace technique, in addition to bagging, randomly selects a certain number of features to be sampled, increasing the diversity among trees, since it creates random subsets of the complete predictor space (more specifically, a random predictor subset is selected at each tree node). When combined, these techniques improve prediction accuracy and reduce model variability.
The RF algorithm uses a bootstrap sample ( B k ) of the same size as the training data for each of the K trees ( k = 1, …, K ). For each sample, a tree is constructed by recursively partitioning the input space at each node until a minimum sample leaf size is reached. At each node, splits are based on n features chosen at random from among the total number of features ( N ). The best split is chosen by maximizing the reduction of the mean square error (MSE) among all split candidates and cut points. After all K trees are constructed in this way, the RF forecast can be calculated by (1) [39]:
$\hat{f}(x) = \frac{1}{K} \sum_{k=1}^{K} T_k(x) \quad (1)$
where x is the input pattern.
Some hyperparameters can be set to improve the forecasting results, such as the number of trees ( K ), the minimum sample leaf size ( m ), and the maximum number of features to select at random for each split ( n ). For regression problems, the literature recommends setting n to the total number of features ( N ) divided by 3, because as n decreases, the correlation between trees decreases and, with it, the variance of the ensemble mean also decreases.
The best values for the hyperparameters will always depend on each problem. In this case study, these parameters were initially set as K = 300, m = 1, and n = N/3. Pseudocode of the random forest algorithm is described in Algorithm 1.
Algorithm 1: Pseudocode of the random forest algorithm.
Input:
  Training data.
  Number of trees K.
  Minimum leaf size m.
  Number of features to select at random for each split n.
Procedure:
  for k = 1 to K:
    Select a bootstrap sample at random, of the same size as the training data.
    Repeat the following steps for each terminal node, until the minimum node size m is reached:
      Step 1: Select n features at random from all features.
      Step 2: From these n randomly selected features, choose the best feature that splits the data to compose the current node.
      Step 3: Split the node into two child nodes.
Output:
  Ensemble of trees {T_k}, k = 1, 2, …, K.
  Forecast of point x: $\hat{f}(x) = \frac{1}{K} \sum_{k=1}^{K} T_k(x)$
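As a sketch, Algorithm 1 corresponds closely to scikit-learn's `RandomForestRegressor`; the mapping of the paper's hyperparameters onto the library's parameter names (K to `n_estimators`, m to `min_samples_leaf`, n to `max_features`) is an assumption here, and the data are synthetic, not the paper's.

```python
# Illustrative sketch of the RF in Algorithm 1 using scikit-learn.
# Assumed mapping: K -> n_estimators, m -> min_samples_leaf, n -> max_features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 15))              # 500 samples, N = 15 features
y_train = 3 * X_train[:, 0] + rng.normal(size=500)

rf = RandomForestRegressor(
    n_estimators=300,      # K: number of trees (initial setting in the text)
    min_samples_leaf=1,    # m: minimum sample leaf size
    max_features=15 // 3,  # n = N/3, the regression rule of thumb cited above
    bootstrap=True,        # bagging: one bootstrap sample per tree
    random_state=0,
)
rf.fit(X_train, y_train)
y_hat = rf.predict(X_train[:5])  # prediction = average of the K trees' outputs
```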

3. Simulation Study

3.1. Statistical Analysis of Data

Historical hourly load data (active power demand [kW]) from four transformer substations that supply the distribution grid of four campuses of the University of Beira Interior (UBI), located in Covilhã (Portugal), were used to test the proposed forecast model. Four case studies were chosen: the Sciences and Humanities campus (represented in this paper by POL1), the Mathematics and Informatics campus (POL2), the Engineering campus (POL3), and the Health Sciences campus (POL4). The data comprised the period from January 2018 to December 2022. Figure 5 represents the average hourly consumption per month and per year for each campus. The data from 2018 to 2021 were used for training, and the data from 2022 were used for testing.
Figure 6 presents the descriptive statistics of the load data from the training and testing periods. Null power data were removed from the database, but POL3 still showed very low minimum load values of 1 kW. The mean values were similar for both periods, except for POL3, which had a lower mean value in the test period. Clearly, POL4 has the highest consumption among all campuses. The standard deviation (SD) was higher in the testing period only for POL4; the same was true for the maximum values. Statistically, the test period resembled the training period, showing that the data were consistent for testing the proposed forecast model.

3.2. Results for All Configurations of Input Patterns and Training Modes

The forecast error was calculated using the mean absolute percentage error (MAPE) and root mean square error (RMSE). These error metrics can be calculated by (2) and (3), respectively. Table 2 and Table 3 show the MAPE and RMSE, respectively, for input patterns p 1 p 8 and different training variants.
Variant 1, which only used load data, achieved the worst result for each input pattern tested in all case studies. The highest errors for Variant 1 were observed with patterns p2 and p5, and the lowest errors were observed with pattern p8. Variant 2, which adds calendar information (hour and weekday), reduced the error in all cases; it presented the lowest errors when combined with pattern p6 for POL1 and pattern p8 for POL3. In some cases, Variant 3, which adds temperature information, reduced the error further; for this variant, the best options were pattern p8 for POL2 and pattern p7 for POL4. The RMSE values for POL4 were greater than for the other case studies due to its relatively higher consumption compared to the other campuses, as shown in Figure 6.
$\mathrm{MAPE}(\%) = \frac{100}{n_o} \sum_{i=1}^{n_o} \left| \frac{R_i - F_i}{R_i} \right| \quad (2)$

$\mathrm{RMSE}(\mathrm{kW}) = \sqrt{\frac{1}{n_o} \sum_{i=1}^{n_o} \left( R_i - F_i \right)^2} \quad (3)$
where R_i is the real value, F_i is the forecast value, and n_o is the total number of observations.
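The two error metrics in Eqs. (2) and (3) can be sketched directly; the numbers below are toy values for illustration only.

```python
# Sketch of the MAPE and RMSE metrics of Eqs. (2)-(3); toy values only.
import numpy as np

def mape(real, forecast):
    """Mean absolute percentage error, in percent."""
    real = np.asarray(real, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((real - forecast) / real))

def rmse(real, forecast):
    """Root mean square error, in the units of the data (kW here)."""
    real = np.asarray(real, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.sqrt(np.mean((real - forecast) ** 2)))

R = [100.0, 120.0, 80.0]   # real loads (kW)
F = [90.0, 126.0, 80.0]    # forecasted loads (kW)
print(round(mape(R, F), 2))  # 5.0
print(round(rmse(R, F), 2))  # 6.73
```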
Based on the results, a different combination is recommended for each case study. The best combination for POL1 and POL3 is Variant 2 with patterns p6 and p8, respectively, while for POL2 and POL4, the best combination is Variant 3 with patterns p8 and p7, respectively. For POL1 and POL3, where temperature data are not required, the load profiles are less sensitive to temperature variations, either due to specific characteristics of the data or because the forecasting method effectively captured the variation in electricity demand without the need for temperature data. In fact, the correlation values between temperature and electricity demand were relatively low, with values of −0.1 for POL1 and −0.2 for POL3. On the other hand, despite a correlation of −0.2, POL2 showed greater sensitivity to climate variations, which is why the forecasting model favored a combination that includes temperature. In the case of POL4, the best combination confirmed the expectations given by the correlation between temperature and electricity demand, with a significant value of 0.6, which is why the temperature data were included. This suggests that clear variations in ambient temperature, such as a hotter summer or a colder winter, directly influence the amount of electricity consumed. Figure 7 illustrates the best combination among all results for each case study. These combinations were used in the following subsections.

3.3. Results for Best Configuration: Similar Weekday

In addition to seasonality, energy consumption among university campuses is similar across weekdays. Certain days are highly correlated, such as Saturday and Sunday (weekend), when academic activity is reduced and consequently energy consumption is lower.
To improve the forecast results, an analysis of the correlation between the days of the week was performed using Pearson's coefficient (ρ). Figure 8 shows the correlation matrix for each case study. There is clearly a high correlation between certain days of the week. Consequently, the coding of the calendar variable dweek (initially 1 for Sunday, 2 for Monday, and so on) was changed to 1 for Saturday and Sunday; 2 for Monday and Tuesday; and 3 for Wednesday, Thursday, and Friday.
Contrary to expectation, the use of similar weekdays did not prove relevant, as prediction error did not decrease when compared to the initial result. Table 4 shows the MAPE and RMSE values for both cases.

3.4. Tuning the Hyperparameters

Hyperparameter tuning is a critical step in the development of machine learning models. The selection of appropriate hyperparameters can profoundly impact model performance, influencing its ability to generalize well to unseen data and achieve optimal predictive accuracy. Failure to select hyperparameters accurately or adequately can lead to sub-optimal model performance, even with state-of-the-art algorithms. Therefore, understanding and effectively addressing the hyperparameter tuning process are imperative to ensuring model effectiveness and reliability. The authors in [40] address challenges such as overfitting and parameter selection inherent to standard methods based on artificial neural networks (ANNs).
To improve the accuracy of the forecast, the parameters K , m , and n were adjusted. Details of each of these parameters are presented below, highlighting the advantages and disadvantages of setting them:
  • The number of trees ( K ): This parameter defines the number of trees in the forest. It is an essential parameter that significantly influences the model’s performance. Increasing the number of trees offers several advantages, including reducing the variability of the model, leading to more consistent prediction errors and improving prediction accuracy. This increased robustness allows the model to better capture complex patterns in the data and generalize well to unseen instances, improving its overall forecasting capabilities. However, setting K too low can result in underfitting, where the model fails to adequately capture essential patterns in the data, leading to sub-optimal prediction performance. On the other hand, setting K too high can lead to longer training times without significant improvements in prediction accuracy, potentially wasting computational resources. It is therefore crucial to find the optimal value for K , striking a balance between model complexity, computational efficiency and prediction performance.
  • The minimum sample leaf size ( m ): This parameter controls the minimum size of the leaves in the tree. It is a fundamental parameter in the construction of decision trees. Adjusting the minimum size of the leaves in the tree appropriately is essential in reducing uncertainty in decision-making. A significant advantage of setting this parameter is the ability to avoid forming excessively deep trees, which, although they may have small biases, also tend to have high variability. However, this setting can result in models with a larger bias and a loss of fine detail in the data. It is therefore essential to find a balance when adjusting this parameter, in order to mitigate uncertainty without compromising the model’s ability to capture the complexity of the data and avoid overfitting.
  • The maximum number of features selected randomly for each split ( n ): This parameter is crucial in random forest algorithms. It determines the maximum number of features to be considered when splitting a node during the construction of each tree in the forest. Adjusting this parameter has significant implications for the performance and robustness of the model. One of the main advantages of adjusting this parameter is the introduction of randomness in the process of building the trees. Limiting the number of features considered at each split reduces the correlation between the trees, making the model more robust and less prone to overfitting the training data. This is especially important to ensure that the model generalizes well to new data. Furthermore, adjusting this parameter allows for a customized adjustment of the model according to the specific characteristics of the dataset and the demands of the problem at hand. This provides flexibility in model tuning, allowing for a more precise adaptation to the problem’s needs. However, an improper choice of the value of n can lead to bias in the model. If n is too small, underfitting may occur, as the model may not be able to capture the complexity of the data. On the other hand, if n is too large, the model may become excessively complex, leading to overfitting. Therefore, it is essential to adjust this parameter carefully, seeking a balance between reducing overfitting and maintaining the model’s bias. Experimentation and careful adjustment are necessary to find the ideal value of n that optimizes the model’s performance and ensures good generalization to new data.
This subsection aims to provide transparency about the methodology used to adjust hyperparameters in this study. The exhaustive search method was applied to check all possible values of each hyperparameter within a given range.
Two different methods were used to optimize these hyperparameters. In one case, each hyperparameter was optimized individually, keeping the other hyperparameters constant, namely number of trees ( K ) = 300; minimum leaf size ( m ) = 1; and number of features to sample ( n ) = N / 3 , where N is the total number of features. In the second case, the hyperparameters were optimized simultaneously.
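The simultaneous (exhaustive) search over m and n with K held fixed can be sketched as a small grid search. The data, grid values, and the reduced tree count (50 instead of 300, for speed) are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of the exhaustive search over (m, n) with K fixed.
# Synthetic data, a small grid, and K=50 (not the paper's 300) for brevity.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 14))              # N = 14 features (as for POL1)
y = 2 * X[:, 0] + rng.normal(size=300)
X_tr, y_tr = X[:240], y[:240]               # training split
X_te, y_te = X[240:], y[240:]               # validation split

best_err, best_params = np.inf, None
for m in [1, 5, 10]:                        # candidate minimum leaf sizes
    for n in [2, 5, 14]:                    # candidate max features (n <= N)
        rf = RandomForestRegressor(n_estimators=50, min_samples_leaf=m,
                                   max_features=n, random_state=0)
        rf.fit(X_tr, y_tr)
        err = float(np.sqrt(np.mean((y_te - rf.predict(X_te)) ** 2)))  # RMSE
        if err < best_err:
            best_err, best_params = err, (m, n)

print("best (m, n):", best_params)
```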

3.4.1. Tuning the Hyperparameters: Individually

The range for the number of trees ( K ) was 1 to 300, and for minimum leaf size ( m ), it was 1 to 20, for all case studies. The number of features to sample ( n ) was 1 to N , where these values differed among case studies: 14 for POL1; 47 for POL2; 46 for POL3; and 33 for POL4. Figure 9 shows the impact of hyperparameters on the forecasting error (MAPE) for each case study. Regarding the number of maximum features to select at random for each split ( n ), the optimal value was different for each case study: 13 for POL1; 46 for POL2; 38 for POL3; and 12 for POL4. For minimum leaf size ( m ), the optimal value also differed among case studies: 1 for POL1; 10 for POL2; 2 for POL3; and 5 for POL4. Except for POL2, small values of m were preferred.
The MAPE remained essentially constant as the number of trees increased. To confirm this, the standard deviation (SD) of the MAPE was calculated for K between 100 and 300. The SD of the MAPE was low: 0.04 for POL1; 0.09 for POL3; and 0.02 for POL2 and POL4. In contrast, for the interval below 99 trees, the SD was higher: 0.88, 0.20, 0.95, and 0.64 for POL1 to POL4, respectively. Therefore, in the simultaneous optimization of the hyperparameters, the number of trees was kept fixed at K = 300.
Table 5 shows the MAPE values for the best configuration for each individually tuned hyperparameter. POL1 showed the best configuration when only K was optimized; however, the MAPE was the same as that obtained with the initial configuration (K = 300). POL2 and POL4, initially with m = 1, obtained the lowest MAPE when the value of m was optimized, reducing the MAPE by 2% and 0.7%, respectively. POL3 showed the best result when optimizing the value of n separately, obtaining the most significant reduction in MAPE (3.1%) compared with the MAPE of 26.89 obtained with the initial n = 15. Table 6 shows the MAPE values for the combinations of the best hyperparameter values tuned individually and for the same tuned values of m and n with K = 300. The MAPE value slightly decreased for POL1 and POL3 and remained the same for POL2 and POL4.

3.4.2. Tuning the Hyperparameters: Simultaneously

In this subsection, the number of trees was kept constant at K = 300 and the MAPE was calculated for all possible combinations of m and n values within the ranges defined previously. Figure 10 shows, in three dimensions, the resulting MAPE for all combinations in each case study. Table 7 shows the MAPE and RMSE values for the initial and best configurations. Tuning the hyperparameters in a combined way reduced the MAPE by 3.8% for POL3 (the most significant reduction), by 3.3% for POL2, by 2.6% for POL1, and by 1.1% for POL4. The RMSE also decreased with tuned hyperparameters, except for POL1.
The bars shown in Figure 11 represent, for each case study, the absolute percentage difference in the hourly mean between the forecast and real values for the testing period. The percentage difference is calculated as the absolute value of 1 minus the ratio between the hourly average of the forecasted value and that of the real value. The colored areas behind the bars show the mean percentage difference across all hours. Overall, POL4 had the lowest average percentage error, of just 1.8%, representing an absolute error of 3.28 kW. Although POL1 had the highest average percentage error, of 2.6%, in absolute terms it presented an error of 1.27 kW, behind POL2 with 1.61 kW and ahead of POL3, which had the smallest absolute average error of just 1.19 kW.

3.5. Results of Training Using Monthly Rolling

A rolling forecast was used in this subsection. A rolling forecast predicts the load over a continuous period based on historical data. Unlike static training and testing periods that forecast the load for a fixed time frame (e.g., January to December), a rolling forecast is regularly updated throughout the year to reflect any changes. In other words, the forecast is made for a certain month; after being observed, that month is added to the training period as new information, while the oldest month is dropped from the training period.
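The monthly rolling scheme can be sketched as a sliding window over month labels. The 12-month window length is an illustrative assumption (the paper trains on 2018 to 2021); model fitting is omitted and only the scheduling logic is shown.

```python
# Sketch of the monthly rolling scheme: forecast one month, then slide the
# training window forward (add the newest month, drop the oldest).
# Window length of 12 months is an assumption for illustration.
months = [f"2021-{m:02d}" for m in range(1, 13)] + \
         [f"2022-{m:02d}" for m in range(1, 13)]

window = 12                          # rolling training window length (months)
schedule = []
for j in range(window, len(months)):
    train = months[j - window:j]     # training period for this step
    test = months[j]                 # month to forecast
    # fit the model on `train` and forecast `test` here
    schedule.append((train[0], train[-1], test))

print(schedule[0])   # ('2021-01', '2021-12', '2022-01')
print(schedule[-1])  # ('2021-12', '2022-11', '2022-12')
```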
Table 8 shows the MAPE and RMSE values for each month of 2022 trained using rolling forecasting. The mean value of all these months was compared with the forecast error without rolling training, using a fixed period for training, named Annual. Except for POL2, predictions were more accurate without rolling training.
POL1 presented MAPE values that were on average 19.2% higher than the annual MAPE value. The highest MAPE value occurred in April, a 105.9% increase relative to the annual value, while the lowest occurred in September, 19.8% below the annual MAPE. For POL2, training with the rolling approach reduced the mean MAPE by 4.1% and the mean RMSE by 5.1% compared to the annual values; only three months had MAPE and RMSE values higher than the annual ones. POL3 had a 280.1% increase in MAPE compared to the annual value in April, and a 42.2% reduction in October. As with POL1 and POL2, April was the worst month for predictions, which can be explained by April’s many holidays and reduced activity; POL4 was the exception. Between May and September, POL4 had MAPE values well above the mean, and training with rolling increased its mean MAPE by 16% and its mean RMSE by just 1.7%. February was the only month that presented MAPE and RMSE values lower than the annual values for all case studies. Figure 12 shows the real load data and the forecasted values through this period, for all case studies.

3.6. Baseline Models Comparison

As a benchmark, five other well-known methods were used to predict the load 24 h ahead: the persistence method [41]; the autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models [42]; artificial neural networks (ANNs) [43]; and extreme gradient boosting (XGBoost) [44]. Table 9 presents the main parameter settings for these baseline models, chosen to ensure their efficiency, along with other key aspects of their implementation. All these models were tested using the same training and testing data as the RF model.
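As an illustration, the simplest of these baselines (persistence) together with the two error metrics used throughout the paper can be sketched as follows (a minimal sketch with function names of our own; the paper does not give its exact implementation):

```python
import numpy as np

def persistence_forecast(load, horizon=24):
    """Persistence baseline: the forecast for hour t is the observed load at t - horizon,
    so the series shifted by `horizon` aligns with real values from index `horizon` on."""
    load = np.asarray(load, dtype=float)
    return load[:-horizon]

def mape(real, forecast):
    """Mean absolute percentage error, in percent."""
    real, forecast = np.asarray(real, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((real - forecast) / real)) * 100)

def rmse(real, forecast):
    """Root mean square error, in the units of the load (kW here)."""
    real, forecast = np.asarray(real, float), np.asarray(forecast, float)
    return float(np.sqrt(np.mean((real - forecast) ** 2)))
```

For a 24 h horizon, `mape(load[24:], persistence_forecast(load))` gives the persistence error reported in Table 10.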
The forecast results obtained by the RF model proposed in this paper were compared with those obtained by these baseline models, as shown in Table 10. The MAPE and RMSE values clearly show the benefits of the proposed forecasting model: for the testing dataset, the RF model outperformed most baseline models in all case studies. Figure 13 shows the percentage difference between the MAPE and RMSE values of each tested model and those of the proposed RF model.
The worst performance was given by the ARMA model for POL1, with a MAPE value of 70.03, a degradation of 394.6% relative to the proposed RF model. In terms of RMSE, the worst performance came from the ARIMA model, also for POL1. POL1 had the worst results for almost all tested models compared to the other case studies. POL2 performed best with the ANN model, which reduced the MAPE by 7.3% and the RMSE by 4.4% relative to the RF model. For POL3 and POL4, the RMSE values of the XGBoost model were very close to those of the RF model, with less than 1% degradation.

4. Conclusions

Power systems are undergoing an unprecedented transformation towards an environment where the consumer is empowered with a more proactive role, both by actively managing their electrical load and by engaging in self-consumption. In other words, the advent of data-driven solutions for consumers, the significant cost reduction of photovoltaic technology, and the forthcoming maturation of modular energy storage solutions are shifting the old paradigm of an “inelastic consumer” towards a “flexible prosumer”, who is more prone to actively match their electrical load with off-peak pricing tariffs and/or with their own power production. Public buildings/services with significant loads (such as hospitals, universities, and city halls, among others) will certainly be at the forefront of this transformation.
Reliable forecasting models constitute a key component of this proactive approach towards load management, guiding the decision-making process of consumers regarding their volatile electrical load. Short-term load forecasting represents the most important horizon in the exploration of this new paradigm and has therefore been the focus of researchers. Despite the countless approaches in the literature to individual and simultaneous load forecasting, there is no firm consensus regarding these regression-type modelling problems. The limitations of any single approach demand not only comparing popular methods but also tailoring each model to the individual characteristics of the problem under study. Both sensitivity to data selection/processing and the fine-tuning of model hyperparameters are a crucial part of any proposed STLF work.
In the present study, the electrical load from four university campuses (with different profiles and seasonality levels) was used to test the validity of a tailored forecasting approach. We began with the often-disregarded tasks of input selection and feature selection. Eight different input load patterns (sequences of relevant lags) were chosen for testing, each exploring a different philosophy in terms of the targeted seasonalities; i.e., some explore only daily or weekly insights, while others target both. Thus, some patterns favored the autocorrelation function and continuity between the relevant lags, while others favored the lags with spikes in the partial autocorrelation function. This in turn leads to different input sizes. Then, recognizing the well-documented importance of exogenous variables, particularly calendar-type variables, three different input variants were assembled for each of the eight input patterns, resulting in a total of 24 different input data selections for each of the four case studies.
With the input data prepared and an initial forecasting model based on an RF defined, we then trained the models with four years of data and determined which input data selection ensured the best forecasting accuracy over an entire year for each STLF case study. As expected, the results revealed an overall preference for the input data selections that targeted a mixture of daily and weekly relevant lags, with a preference for sequences where only certain correlation spikes were present (p8, p7 and p6). Additionally, Variant 2 (which includes weekday and hour as exogenous variables) ensured the lowest errors for POL1 and POL3. In contrast, Variant 3 revealed greater forecasting accuracy for POL2 and POL4, underlining the benefits of considering temperature alongside the previous two calendar exogenous variables.
The results reveal significant changes in the MAPE and RMSE values according to input data selection, thus confirming the consensus regarding the use of typical exogenous variables for the considered STLF case studies. Nevertheless, the inferences from correlation analyses need to be confirmed in terms of the forecasting model, since the tested similar-weekdays approach (reducing the input range of the weekday variable) did not improve forecasting accuracy. Testing all individual input data selections allowed us to pinpoint which input data selection to target in the subsequent hyperparameter optimization of the forecasting model for each STLF case study. This process was divided into two stages. First, the influence of hyperparameters was studied individually, highlighting some clear trends in the most influential hyperparameters and their ranges. In particular, we observed that, after a sharp decline, the forecasting error remained (almost) constant as the number of trees increased; that, except for POL2, a smaller minimum leaf size was generally preferred; and that, regarding the features to sample, larger values were beneficial, with exponential reductions in forecasting error. Second, with these inferences, we proceeded to simultaneously optimize the hyperparameters, with the number of trees fixed at its upper bound. This revealed non-negligible forecasting improvements for all case studies in comparison with the results obtained by the initial RF model for the best input data selection configuration. A monthly rolling training and testing approach was also tested to check whether it produced better results than the static approach. However, except for POL2, it did not improve the overall annual MAPE and RMSE values.
Nevertheless, it allowed us to identify the most problematic testing months and to conjecture about the reasons behind some of these larger errors, namely a connection with holidays and intra-year seasonalities, which, interestingly, were not present in every university campus load profile.
To complete the work, a baseline comparison with other well-established models was performed to underscore the relative superiority of the proposed approach, based on an optimized RF model with tailored input data. Considering all the case studies, XGBoost was the closest baseline model, with average MAPE and RMSE percentage differences of 5.2% and 1.6%, respectively. The ANN was the only model able to outperform the proposed RF model (for POL2), although, on average across all case studies, its MAPE and RMSE were worse by 12.0% and 11.2%, respectively.
Finally, these results have validated the proposed forecasting approach by comprehensively studying the influence of the selected exogenous variables, correlation spikes and the model hyperparameters on the fine-tuning of the RF-based load forecasting. We thus fulfilled our goals of taking advantage of the historical data and delivering a relatively simple-to-implement and accurate forecasting model that can be used by institutions like universities to better predict their loads, adapt their patterns, and adjust their energy contracts, and, in the near future, to position themselves as active prosumers to take full advantage of the green transition.

5. Future Works

Future research could explore the application of the proposed method for different types of applications, such as industrial or residential load forecasting. The extension of the model to encompass a broader range of load profiles and consumption patterns would provide valuable insights into the adaptability and robustness of the forecasting approach. Additionally, investigating the integration of real-time data sources, such as weather forecasts and market pricing, could enhance the model’s accuracy and applicability in dynamic energy environments. Furthermore, the exploration of hybrid forecasting models, combining the random forest algorithm with other advanced techniques such as artificial neural networks or XGBoost, could lead to improved forecasting performance across diverse energy consumption scenarios. Lastly, the potential integration of the proposed forecasting model into smart grid systems and demand response programs warrants further investigation to assess its effectiveness in supporting grid stability and facilitating efficient energy management.

Author Contributions

Conceptualization, B.M.; methodology, B.M., P.B. and S.M.; validation, B.M., P.B. and M.d.R.C.; formal analysis, B.M., P.B., J.P., M.d.R.C. and S.M.; investigation, B.M., P.B., J.P., M.d.R.C. and S.M.; resources, M.d.R.C. and S.M.; writing—original draft preparation, B.M.; writing—review and editing, B.M., P.B. and M.d.R.C.; visualization, B.M., P.B., J.P., M.d.R.C. and S.M.; supervision, M.d.R.C. and S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

Bianca Magalhães gives her special thanks to the Fundação para a Ciência e a Tecnologia (FCT), Portugal, for the Ph.D. grant (2023.02678.BDANA).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Veeramsetty, V.; Reddy, K.R.; Santhosh, M.; Mohnot, A.; Singal, G. Short-Term Electric Power Load Forecasting Using Random Forest and Gated Recurrent Unit. Electr. Eng. 2022, 104, 307–329. [Google Scholar] [CrossRef]
  2. Holderbaum, W.; Alasali, F.; Sinha, A. Short Term Load Forecasting (STLF). Lect. Notes Energy 2023, 85, 13–56. [Google Scholar] [CrossRef]
  3. Pinheiro, M.G.; Madeira, S.C.; Francisco, A.P. Short-Term Electricity Load Forecasting—A Systematic Approach from System Level to Secondary Substations. Appl. Energy 2023, 332, 120493. [Google Scholar] [CrossRef]
  4. Akhtar, S.; Shahzad, S.; Zaheer, A.; Ullah, H.S.; Kilic, H.; Gono, R.; Jasiński, M.; Leonowicz, Z. Short-Term Load Forecasting Models: A Review of Challenges, Progress, and the Road Ahead. Energies 2023, 16, 4060. [Google Scholar] [CrossRef]
  5. Yang, D.; Guo, J.; Li, Y.; Sun, S.; Wang, S. Short-Term Load Forecasting with an Improved Dynamic Decomposition-Reconstruction-Ensemble Approach. Energy 2023, 263, 125609. [Google Scholar] [CrossRef]
  6. Leal, P.; Castro, R.; Lopes, F. Influence of Increasing Renewable Power Penetration on the Long-Term Iberian Electricity Market Prices. Energies 2023, 16, 1054. [Google Scholar] [CrossRef]
  7. Xia, Y.; Wang, J.; Wei, D.; Zhang, Z. Combined Framework Based on Data Preprocessing and Multi-Objective Optimizer for Electricity Load Forecasting. Eng. Appl. Artif. Intell. 2023, 119, 105776. [Google Scholar] [CrossRef]
  8. Dewangan, F.; Abdelaziz, A.Y.; Biswal, M. Load Forecasting Models in Smart Grid Using Smart Meter Information: A Review. Energies 2023, 16, 1404. [Google Scholar] [CrossRef]
  9. Alquthami, T.; Zulfiqar, M.; Kamran, M.; Milyani, A.H.; Rasheed, M.B. A Performance Comparison of Machine Learning Algorithms for Load Forecasting in Smart Grid. IEEE Access 2022, 10, 48419–48433. [Google Scholar] [CrossRef]
  10. Groß, A.; Lenders, A.; Schwenker, F.; Braun, D.A.; Fischer, D. Comparison of Short-Term Electrical Load Forecasting Methods for Different Building Types. Energy Inform. 2021, 4, 13. [Google Scholar] [CrossRef]
  11. Wang, F.; Li, K.; Zhou, L.; Ren, H.; Contreras, J.; Shafie-khah, M.; Catalão, J.P.S. Daily Pattern Prediction Based Classification Modeling Approach for Day-Ahead Electricity Price Forecasting. Int. J. Electr. Power Energy Syst. 2019, 105, 529–540. [Google Scholar] [CrossRef]
  12. Pourdaryaei, A.; Mohammadi, M.; Karimi, M.; Mokhlis, H.; Illias, H.A.; Kaboli, S.H.A.; Ahmad, S. Recent Development in Electricity Price Forecasting Based on Computational Intelligence Techniques in Deregulated Power Market. Energies 2021, 14, 6104. [Google Scholar] [CrossRef]
  13. Bento, P.M.R.; Pombo, J.A.N.; Calado, M.R.A.; Mariano, S.J.P.S.; Rodrigues, F.; Calado, J.M.F. Stacking Ensemble Methodology Using Deep Learning and ARIMA Models for Short-Term Load Forecasting. Energies 2021, 14, 7378. [Google Scholar] [CrossRef]
  14. Shi, J.; Li, C.; Yan, X. Artificial Intelligence for Load Forecasting: A Stacking Learning Approach Based on Ensemble Diversity Regularization. Energy 2023, 262, 125295. [Google Scholar] [CrossRef]
  15. Du, J.; Cao, H.; Li, Y.; Yang, Z.; Eslamimanesh, A.; Fakhroleslam, M.; Mansouri, S.S.; Shen, W. Development of hybrid surrogate model structures for design and optimization of CO2 capture processes: Part I. Vacuum pressure swing adsorption in a confined space. Chem. Eng. Sci. 2024, 283, 119379. [Google Scholar] [CrossRef]
  16. Rao, C.; Zhang, Y.; Wen, J.; Xiao, X.; Goh, M. Energy Demand Forecasting in China: A Support Vector Regression-Compositional Data Second Exponential Smoothing Model. Energy 2023, 263, 125955. [Google Scholar] [CrossRef]
  17. Luo, J.; Hong, T.; Gao, Z.; Fang, S.C. A Robust Support Vector Regression Model for Electric Load Forecasting. Int. J. Forecast. 2023, 39, 1005–1020. [Google Scholar] [CrossRef]
  18. Vardhan, B.V.S.; Khedkar, M.; Srivastava, I.; Thakre, P.; Bokde, N.D. A Comparative Analysis of Hyperparameter Tuned Stochastic Short Term Load Forecasting for Power System Operator. Energies 2023, 16, 1243. [Google Scholar] [CrossRef]
  19. Tran, N.T.; Giang Tran, T.T.; Nguyen, T.A.; Lam, M.B.; City, C.M.; Minh, H.C. A New Grid Search Algorithm Based on XGBoost Model for Load Forecasting. Bull. Electr. Eng. Inform. 2023, 12, 1857–1866. [Google Scholar] [CrossRef]
  20. Tarmanini, C.; Sarma, N.; Gezegin, C.; Ozgonenel, O. Short Term Load Forecasting Based on ARIMA and ANN Approaches. Energy Rep. 2023, 9, 550–557. [Google Scholar] [CrossRef]
  21. Donnelly, J.; Daneshkhah, A.; Abolfathi, S. Physics-informed neural networks as surrogate models of hydrodynamic simulators. Sci. Total Environ. 2024, 912, 168814. [Google Scholar] [CrossRef]
  22. Jiang, L.; Hu, G. A Review on Short-Term Electricity Price Forecasting Techniques for Energy Markets. In Proceedings of the 2018 15th International Conference on Control, Automation, Robotics and Vision, ICARCV 2018, Singapore, 18–21 November 2018; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2018; pp. 937–944. [Google Scholar]
  23. Li, S.; Kong, X.; Yue, L.; Liu, C.; Khan, M.A.; Yang, Z.; Zhang, H. Short-Term Electrical Load Forecasting Using Hybrid Model of Manta Ray Foraging Optimization and Support Vector Regression. J. Clean. Prod. 2023, 388, 135856. [Google Scholar] [CrossRef]
  24. Yin, C.; Mao, S. Fractional Multivariate Grey Bernoulli Model Combined with Improved Grey Wolf Algorithm: Application in Short-Term Power Load Forecasting. Energy 2023, 269, 126844. [Google Scholar] [CrossRef]
  25. Zhang, D.; Wang, S.; Liang, Y.; Du, Z. A Novel Combined Model for Probabilistic Load Forecasting Based on Deep Learning and Improved Optimizer. Energy 2023, 264, 126172. [Google Scholar] [CrossRef]
  26. Ran, P.; Dong, K.; Liu, X.; Wang, J. Short-Term Load Forecasting Based on CEEMDAN and Transformer. Electr. Power Syst. Res. 2023, 214, 108885. [Google Scholar] [CrossRef]
  27. Imani, M.H.; Bompard, E.; Colella, P.; Huang, T. Forecasting Electricity Price in Different Time Horizons: An Application to the Italian Electricity Market. IEEE Trans. Ind. Appl. 2021, 57, 5726–5736. [Google Scholar] [CrossRef]
  28. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  29. Fan, G.F.; Zhang, L.Z.; Yu, M.; Hong, W.C.; Dong, S.Q. Applications of Random Forest in Multivariable Response Surface for Short-Term Load Forecasting. Int. J. Electr. Power Energy Syst. 2022, 139, 108073. [Google Scholar] [CrossRef]
  30. Matrenin, P.; Safaraliev, M.; Dmitriev, S.; Kokin, S.; Ghulomzoda, A.; Mitrofanov, S. Medium-Term Load Forecasting in Isolated Power Systems Based on Ensemble Machine Learning Models. Energy Rep. 2022, 8, 612–618. [Google Scholar] [CrossRef]
  31. Srivastava, A.K.; Pandey, A.S.; Houran, M.A.; Kumar, V.; Kumar, D.; Tripathi, S.M.; Gangatharan, S.; Elavarasan, R.M. A Day-Ahead Short-Term Load Forecasting Using M5P Machine Learning Algorithm along with Elitist Genetic Algorithm (EGA) and Random Forest-Based Hybrid Feature Selection. Energies 2023, 16, 867. [Google Scholar] [CrossRef]
  32. Fang, Z.; Yang, Z.; Peng, H.; Chen, G. Prediction of Ultra-Short-Term Power System Based on LSTM-Random Forest Combination Model. J. Phys. Conf. Ser. 2022, 2387, 012033. [Google Scholar] [CrossRef]
  33. Kalhori, M.R.N.; Emami, I.T.; Fallahi, F.; Tabarzadi, M. A Data-Driven Knowledge-Based System with Reasoning under Uncertain Evidence for Regional Long-Term Hourly Load Forecasting. Appl. Energy 2022, 314, 118975. [Google Scholar] [CrossRef]
  34. Kabeyi, M.J.B.; Olanrewaju, O.A. Smart grid technologies and application in the sustainable energy transition: A review. Int. J. Sustain. Energy 2023, 42, 685–758. [Google Scholar] [CrossRef]
  35. Dudek, G. A Comprehensive Study of Random Forest for Short-Term Load Forecasting. Energies 2022, 15, 7547. [Google Scholar] [CrossRef]
  36. Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Taylor & Francis: Abingdon, UK, 2017; pp. 1–358. [Google Scholar] [CrossRef]
  37. Breiman, L. Bagging Predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef]
  38. Ho, T.K. The Random Subspace Method for Constructing Decision Forests. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 832–844. [Google Scholar] [CrossRef]
  39. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning; Springer Science & Business Media: New York, NY, USA, 2009. [Google Scholar] [CrossRef]
  40. Nurullah, C.; Güneş, F.; Koziel, S.; Pietrenko-Dabrowska, A.; Belen, M.A.; Mahouti, P. Deep-learning-based precise characterization of microwave transistors using fully-automated regression surrogates. Sci. Rep. 2023, 13, 1445. [Google Scholar] [CrossRef]
  41. Ghayekhloo, M.; Azimi, R.; Ghofrani, M.; Menhaj, M.B.; Shekari, E. A Combination Approach Based on a Novel Data Clustering Method and Bayesian Recurrent Neural Network for Day-Ahead Price Forecasting of Electricity Markets. Electr. Power Syst. Res. 2019, 168, 184–199. [Google Scholar] [CrossRef]
  42. Mishra, S.; Prasad, K.; Tigga, A.M. Electrical Price Prediction Using Machine Learning Algorithms. In Machine Learning Algorithms and Applications in Engineering; CRC Press: Boca Raton, FL, USA, 2023; pp. 255–270. ISBN 9781003104858. [Google Scholar]
  43. Nascimento, J.; Pinto, T.; Vale, Z. Electricity Price Forecast for Futures Contracts with Artificial Neural Network and Spearman Data Correlation. Adv. Intell. Syst. Comput. 2019, 801, 12–20. [Google Scholar] [CrossRef]
  44. Bitirgen, K.; Filik, Ü.B. Electricity Price Forecasting Based on XGBooST and ARIMA Algorithms. BSEU J. Eng. Res. Technol. 2020, 1, 7–13. [Google Scholar]
Figure 1. Illustration of the sequence of input patterns p1, p2, p3, p4, p5 and p6 in the time series.
Figure 2. Illustration of the sequence of input patterns p7 (p2 + p3) and p8 (p2 + p4) in the time series.
Figure 3. Representation of input patterns.
Figure 4. Scheme of the variants for training the forecast model.
Figure 5. Average hourly load data for each campus for the period January 2018 to December 2022.
Figure 6. Descriptive statistics of load data from the training and testing period for each case study.
Figure 7. Best combination of training variant and input pattern for each case study.
Figure 8. Correlation matrix between the weekdays for each case study.
Figure 9. Tuning the hyperparameters individually for each case study.
Figure 10. Tuning the hyperparameters simultaneously for each case study.
Figure 11. Bar plot: mean hourly absolute percentage error (difference between the forecast and real hourly load values) for the entire testing period. Area plot: mean yearly absolute percentage error.
Figure 12. Real and forecasted load data in February for all case studies using monthly rolling.
Figure 13. Percentage difference of MAPE and RMSE compared with the proposed RF model.
Table 1. Input patterns with different combinations of relevant lags.

Input Pattern | Description of Input Pattern | Sequence of Relevant Lags | Set Size
p1 | The sequence is composed of demands from the 168 h preceding hour t of day i. | Seq1 = [1, …, 168] | 168
p2 | The sequence is composed of demands from the 24 h preceding hour t of day i. | Seq2 = [1, …, 24] | 24
p3 | The sequence is composed of demands at hour t of the 7 consecutive days preceding the forecast day i. | Seq3 = [24, 48, 72, 96, 120, 144, 168] | 7
p4 | The sequence is composed of demands at hour t of the 21 consecutive days preceding the forecast day i. | Seq4 = [24, 48, 72, 96, 120, 144, 168, …, 504] | 21
p5 | The sequence is composed of demands at hour t of the 7 consecutive same weekdays preceding the forecast day i. | Seq5 = [168, 336, 504, …, 1176] | 7
p6 | The sequence is composed of demands at some specific hours preceding the forecast day i. | Seq6 = [1, 2, 3, 23, 24, 25, 47, 48, 49, 167, 168, 169] | 12
p7 | The sequence is a cross-pattern combining p2 and p3. | Seq7 = [1, …, 24, 48, 72, 96, 120, 144, 168] | 30
p8 | The sequence is a cross-pattern combining p2 and p4. | Seq8 = [1, …, 24, 48, 72, 96, 120, 144, 168, …, 504] | 44
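Assembling an input matrix from one of these lag sequences can be sketched as follows (a hypothetical helper of our own, not the authors' code; p7 is used as an example):

```python
import numpy as np

def build_lag_matrix(load, lags):
    """Build an input matrix where row t contains load[t - lag] for each given lag,
    starting at the first index where all lags are available."""
    load = np.asarray(load, dtype=float)
    max_lag = max(lags)
    rows = [[load[t - lag] for lag in lags] for t in range(max_lag, len(load))]
    return np.asarray(rows)

# Cross-pattern p7: the previous 24 h plus the same hour on each of the previous 7 days
seq7 = list(range(1, 25)) + [48, 72, 96, 120, 144, 168]   # set size 30
```

The corresponding target vector is simply `load[max(lags):]`, aligning each row with the hour being forecasted.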
Table 2. MAPE values for all combinations of training variants and input patterns for each case study.

Case Study | Variant | p1 | p2 | p3 | p4 | p5 | p6 | p7 | p8
POL1 | V1 | 16.61 | 40.46 | 17.23 | 15.90 | 34.83 | 23.15 | 17.14 | 15.57
POL1 | V2 | 15.84 | 20.17 | 15.47 | 15.58 | 20.35 | 14.54 | 15.40 | 15.20
POL1 | V3 | 15.83 | 19.20 | 15.65 | 15.44 | 20.60 | 14.89 | 15.39 | 15.12
POL2 | V1 | 7.35 | 9.75 | 7.61 | 7.29 | 12.07 | 8.69 | 7.34 | 6.88
POL2 | V2 | 7.10 | 7.88 | 7.25 | 7.27 | 9.03 | 7.50 | 6.95 | 6.75
POL2 | V3 | 7.07 | 7.80 | 7.19 | 7.20 | 9.00 | 7.61 | 6.89 | 6.64
POL3 | V1 | 30.29 | 44.52 | 35.66 | 30.36 | 48.50 | 37.74 | 31.31 | 28.03
POL3 | V2 | 28.12 | 28.19 | 30.59 | 29.63 | 34.00 | 27.84 | 27.63 | 26.89
POL3 | V3 | 28.27 | 27.19 | 30.58 | 29.36 | 32.80 | 27.51 | 27.54 | 26.93
POL4 | V1 | 12.10 | 16.77 | 13.78 | 13.17 | 22.77 | 14.90 | 12.03 | 11.78
POL4 | V2 | 11.68 | 11.60 | 12.60 | 12.75 | 16.78 | 11.64 | 11.22 | 11.38
POL4 | V3 | 11.49 | 11.42 | 12.09 | 12.14 | 14.43 | 11.40 | 11.08 | 11.16
The best values are highlighted in bold.
Table 3. RMSE values for all combinations of training variants and input patterns for each case study.

Case Study | Variant | p1 | p2 | p3 | p4 | p5 | p6 | p7 | p8
POL1 | V1 | 13.45 | 26.67 | 15.27 | 14.50 | 26.70 | 18.51 | 13.48 | 13.21
POL1 | V2 | 13.02 | 15.16 | 14.57 | 14.29 | 18.21 | 12.70 | 13.18 | 13.07
POL1 | V3 | 12.95 | 14.56 | 14.39 | 14.32 | 18.01 | 12.72 | 13.20 | 13.04
POL2 | V1 | 7.03 | 9.56 | 7.39 | 7.04 | 10.92 | 8.61 | 7.12 | 6.68
POL2 | V2 | 6.78 | 7.93 | 6.97 | 7.00 | 8.76 | 7.30 | 6.76 | 6.57
POL2 | V3 | 6.76 | 7.85 | 6.88 | 6.93 | 8.51 | 7.28 | 6.70 | 6.48
POL3 | V1 | 13.73 | 19.17 | 14.76 | 13.61 | 20.91 | 16.62 | 13.55 | 12.78
POL3 | V2 | 13.12 | 13.61 | 13.83 | 13.45 | 16.25 | 13.04 | 12.68 | 12.55
POL3 | V3 | 13.10 | 13.29 | 13.80 | 13.43 | 15.66 | 12.95 | 12.64 | 12.65
POL4 | V1 | 35.04 | 47.44 | 39.03 | 37.52 | 57.18 | 42.67 | 34.95 | 34.39
POL4 | V2 | 33.91 | 34.40 | 36.41 | 36.56 | 46.63 | 34.07 | 32.67 | 33.21
POL4 | V3 | 33.43 | 33.87 | 34.41 | 34.32 | 40.32 | 33.44 | 32.28 | 32.58
The best values are highlighted in bold.
Table 4. Comparison of MAPE and RMSE when used for training on similar weekdays.

Case Study | Best Combination | MAPE (Initial) | MAPE (Similar d_week) | RMSE (Initial) | RMSE (Similar d_week)
POL1 | V2—p6 | 14.54 | 17.18 | 12.70 | 13.45
POL2 | V3—p8 | 6.64 | 6.71 | 6.48 | 6.55
POL3 | V2—p8 | 26.89 | 27.97 | 12.55 | 12.82
POL4 | V3—p7 | 11.08 | 11.82 | 32.28 | 34.58
Table 5. MAPE values for the best configurations for each tuned hyperparameter individually.

Case Study | Tuned K (K, m, n, MAPE) | Tuned m (K, m, n, MAPE) | Tuned n (K, m, n, MAPE)
POL1 | 192, 1, 5, 14.54 | 300, 1, 5, 14.59 | 300, 1, 13, 15.23
POL2 | 66, 1, 16, 6.68 | 300, 10, 16, 6.51 | 300, 1, 46, 6.63
POL3 | 265, 1, 15, 26.68 | 300, 2, 15, 26.74 | 300, 1, 38, 26.06
POL4 | 171, 1, 11, 11.05 | 300, 5, 11, 11.00 | 300, 1, 12, 11.04
The best MAPE values are highlighted in bold, and the tuned hyperparameter values are in italics.
Table 6. MAPE values for the best combinations of the hyperparameter values tuned individually.

Case Study | Tuned K (K, m, n, MAPE) | K = 300 (K, m, n, MAPE)
POL1 | 192, 1, 13, 14.32 | 300, 1, 13, 14.30
POL2 | 66, 10, 46, 6.45 | 300, 10, 46, 6.45
POL3 | 265, 2, 38, 26.16 | 300, 2, 38, 26.14
POL4 | 171, 5, 12, 11.01 | 300, 5, 12, 11.01
The best MAPE values are highlighted in bold, and the tuned hyperparameter values are in italics.
Table 7. MAPE and RMSE values for the best configuration for simultaneously tuned hyperparameters.

Case Study | Initial (K, m, n, MAPE, RMSE) | Tuned (K, m, n, MAPE, RMSE)
POL1 | 300, 1, 5, 14.54, 12.70 | 300, 9, 12, 14.16, 12.89
POL2 | 300, 1, 16, 6.64, 6.48 | 300, 15, 28, 6.42, 6.34
POL3 | 300, 1, 15, 26.89, 12.55 | 300, 4, 44, 25.88, 12.45
POL4 | 300, 1, 11, 11.08, 32.28 | 300, 5, 15, 10.96, 32.20
The best values are highlighted in bold.
Table 8. MAPE and RMSE values for training using monthly rolling.

Case Study | Metric | Jan | Feb | Mar | Apr | May | June | July | Aug | Sept | Oct | Nov | Dec | Mean | Annual
POL1 | MAPE | 15.51 | 12.99 | 14.21 | 29.16 | 15.35 | 17.37 | 14.81 | 18.88 | 11.36 | 15.29 | 15.10 | 22.55 | 16.88 | 14.16
POL1 | RMSE | 13.99 | 12.06 | 16.06 | 21.36 | 9.52 | 13.50 | 9.54 | 7.85 | 7.11 | 11.76 | 13.63 | 21.42 | 13.15 | 12.89
POL2 | MAPE | 5.46 | 5.87 | 7.45 | 7.93 | 4.55 | 6.36 | 5.65 | 5.82 | 4.60 | 5.76 | 6.07 | 8.32 | 6.15 | 6.42
POL2 | RMSE | 6.20 | 6.27 | 7.82 | 7.86 | 4.45 | 5.74 | 5.55 | 4.68 | 4.07 | 5.06 | 6.04 | 8.46 | 6.02 | 6.34
POL3 | MAPE | 15.47 | 17.08 | 17.31 | 98.38 | 31.41 | 19.68 | 25.37 | 27.37 | 15.61 | 14.96 | 31.28 | 26.90 | 28.40 | 25.88
POL3 | RMSE | 10.40 | 9.92 | 12.13 | 17.88 | 14.82 | 13.07 | 15.07 | 8.77 | 7.12 | 11.05 | 16.07 | 18.29 | 12.88 | 12.45
POL4 | MAPE | 7.60 | 8.06 | 8.28 | 8.56 | 16.40 | 18.26 | 16.75 | 16.82 | 19.18 | 10.49 | 10.24 | 11.89 | 12.71 | 10.96
POL4 | RMSE | 19.58 | 19.76 | 20.41 | 22.70 | 47.57 | 54.85 | 54.59 | 41.52 | 43.69 | 22.60 | 21.69 | 24.02 | 32.75 | 32.20
The best values are highlighted in bold.
Table 9. Main parameter settings for the comparison models.

Method | Main Parameters
Persistence | The forecast value was defined as the load of the previous 24 h.
ARMA | For this model, p = 1 and q = 1. Based on the augmented Dickey–Fuller (ADF) test, the model was used without non-seasonal differencing (d = 0).
ARIMA | For this model, p = 1, d = 1 and q = 1, where p is the order of the autoregressive (AR) term, d is the order of non-seasonal differencing and q is the order of the moving average (MA) term. These parameters were set according to the autocorrelation function (ACF), the partial autocorrelation function (PACF) and the ADF test to verify the stationarity of the series.
ANN | The first layer size was set to 29 with rectified linear unit activation, the last layer of size one used a sigmoid activation function, and the solver was Adam. The fraction of training data used for validation was set at 33%. The ANN was trained 20 times, and the results shown are the mean of all evaluations.
XGBoost | Number of estimators = 800, learning rate = 0.01, subsample = 0.7, and colsample_bytree = 0.7.
Table 10. MAPE and RMSE values for the testing period for all models compared.

Method | POL1 MAPE | POL1 RMSE | POL2 MAPE | POL2 RMSE | POL3 MAPE | POL3 RMSE | POL4 MAPE | POL4 RMSE
RF (proposed) | 14.16 | 12.89 | 6.42 | 6.34 | 25.88 | 12.45 | 10.96 | 32.20
Persistence | 46.19 | 36.81 | 10.77 | 12.90 | 61.17 | 28.83 | 23.35 | 66.04
ARMA | 70.03 | 34.49 | 11.06 | 11.06 | 55.87 | 24.02 | 34.38 | 80.12
ARIMA | 37.72 | 41.19 | 9.69 | 11.56 | 42.57 | 28.33 | 25.96 | 97.78
ANN | 19.37 | 14.53 | 5.95 | 6.06 | 27.20 | 13.03 | 11.80 | 33.58
XGBoost | 15.80 | 13.30 | 6.90 | 6.65 | 26.30 | 12.47 | 11.43 | 32.46
The best values are highlighted in bold.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Magalhães, B.; Bento, P.; Pombo, J.; Calado, M.d.R.; Mariano, S. Short-Term Load Forecasting Based on Optimized Random Forest and Optimal Feature Selection. Energies 2024, 17, 1926. https://0-doi-org.brum.beds.ac.uk/10.3390/en17081926