Article

RIS-Assisted Robust Beamforming for UAV Anti-Jamming and Eavesdropping Communications: A Deep Reinforcement Learning Approach

1 College of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China
2 The Sixty-Third Research Institute, National University of Defense Technology, Nanjing 210007, China
3 School of Information Science and Engineering, Southeast University, Nanjing 214135, China
* Authors to whom correspondence should be addressed.
Submission received: 14 September 2023 / Revised: 18 October 2023 / Accepted: 30 October 2023 / Published: 1 November 2023

Abstract

The reconfigurable intelligent surface (RIS) has been widely recognized as a rising paradigm for physical layer security due to its potential to substantially adjust the electromagnetic propagation environment. In this regard, this paper adopted an RIS deployed on an unmanned aerial vehicle (UAV) to enhance information transmission while defending against both jamming and eavesdropping attacks. Furthermore, an innovative deep reinforcement learning (DRL) approach is proposed to optimize the power allocation of the base station (BS) and the discrete phase shifts of the RIS. Specifically, considering the imperfect channel state information (CSI) of the illegitimate nodes, we first reformulated the non-convex and non-conventional original problem into a Markov decision process (MDP) framework. Subsequently, a noisy dueling double-deep Q-network with prioritized experience replay (Noisy-D3QN-PER) algorithm was developed with the objective of maximizing the achievable sum rate while ensuring the fulfillment of the security requirements. Finally, the numerical simulations showed that our proposed algorithm outperformed the baselines in terms of both the achievable system rate and the transmission protection level.

1. Introduction

Recently, the advancement of next-generation wireless communications has led to exponential growth in data transmission and connected nodes [1]. However, owing to the open nature of wireless channels, wireless communications are increasingly susceptible to active jamming and passive eavesdropping [2,3]. Against this backdrop, the academic community has studied various techniques to combat jamming and eavesdropping attacks, e.g., power control [4], frequency hopping [5], artificial-noise-aided beamforming [6], and cooperative relaying schemes [7]. However, power control cannot handle high-power jamming attacks, and frequency hopping consumes additional spectrum resources. On the other hand, releasing artificial noise consumes extra power, and employing relays may incur additional hardware costs [4,5,6,7].
The above-mentioned shortcomings have motivated a new paradigm called the reconfigurable intelligent surface (RIS) [8]. This technology has recently been regarded as a promising solution for enhancing the power/spectral efficiency of wireless communication systems [8,9,10,11]. Specifically, the RIS consists of a large number of passive elements, whose reflection coefficients can be dynamically adjusted according to the needs of different communication scenarios to increase the received signal power or significantly reduce the impact of interference in the network [9,10,11]. Therefore, the RIS has garnered extensive research attention in the domain of secure communications [12,13,14,15,16,17,18,19,20,21,22,23,24,25]. However, in the face of increasingly complex electromagnetic environments, there is an urgent need for highly efficient and reliable beamforming algorithms for RIS-aided secure communications.

1.1. Related Works

In recent years, several fundamental technical challenges of RIS-assisted secure communication systems have been addressed [12,13,14]. In [12], a joint beamforming scheme was proposed to protect secure transmission from eavesdropping attacks, where several optimization algorithms were applied, including alternating optimization (AO) and semidefinite relaxation (SDR). To maximize the secrecy rate of the RIS-assisted Gaussian multiple-input multiple-output (MIMO) channel, the authors in [14] used the AO algorithm to jointly optimize the transmit covariance at the transmitter and the phase shift coefficients at the RIS and further proposed a minorization–maximization (MM) algorithm to obtain a locally optimal phase shift. However, these works assumed that the base station (BS) can acquire the ideal channel state information (CSI) of all nodes, which is impractical due to the uncooperative relationship between the BS and the illegitimate nodes. To tackle this issue, robust algorithms have been developed to jointly optimize the active beamforming and the passive reflecting beamforming to secure the wireless transmission against jamming attacks when the CSI of the illegitimate nodes is not completely known at the BS [15,16,17]. In addition, the authors in [18] iteratively solved an energy-efficient secure transmission problem with a probabilistic outage constraint by low-complexity first-order algorithms in the presence of imperfect information about the eavesdropper's channel state.
In practical communication environments, such as densely populated areas with clusters of buildings, the links between the RIS and various nodes may be blocked by obstacles. Unmanned aerial vehicles (UAVs) have been widely used in complex communication networks due to their low cost and flexible maneuverability [19,20,21,22,23]. In addition, when the RIS is mounted on a UAV, the attenuation of the ground-to-air channel is much lower than that of the terrestrial channel, which can significantly reduce the energy loss of passive reflection. The authors in [21] utilized UAVs carrying reflective surfaces to facilitate power delivery to intelligent devices while simultaneously transmitting information. Liu et al. used an AO framework to study a multi-controllable system for RIS-aided UAV communication [22]. In [23], the authors studied the secrecy problem of RIS-based integrated satellite–UAV relay networks with multiple eavesdroppers.
An obvious challenge is that traditional convex optimization algorithms may be less efficient for large-scale communication systems. Besides, the coefficient adjustment of a practical RIS is discrete, which renders traditional algorithms inapplicable. Benefiting from the rapid development of artificial intelligence (AI), reinforcement learning (RL) has attracted much interest for beamforming design in RIS-assisted wireless communication systems [24,25,26,27,28,29,30], as it can effectively deal with large-scale discrete RIS coefficients. The authors in [24] proposed a passive phase shift design to maximize the downlink received signal-to-noise ratio based on deep reinforcement learning (DRL). In [25], DRL and extremum seeking control were incorporated for the model-free control of the RIS. In response to increased network demand and interference challenges from nearby UAV cells, a direct collaborative-communication-enabled multi-agent decentralized double-deep Q-network (CMAD-DDQN) approach facilitates direct collaboration among UAVs, optimizing their 3D flight trajectories to maximize energy efficiency while outperforming existing methods by up to 85% [26]. However, these works did not explore the security issues of AI in RIS-enhanced communication systems. In [27,28], the authors proposed secure DRL-based beamforming methods for protecting RIS-assisted wireless communications from active jamming or passive eavesdropping. Furthermore, in order to maximize the energy efficiency of multi-UAV-assisted wireless coverage, the authors in [29] proposed a cooperative multi-agent decentralized double-deep Q-network (MAD-DDQN) approach, but the algorithm cannot be directly applied to optimize the reflecting beamforming of the RIS. To the best of our knowledge, no existing work has considered the DRL-based design of RIS-assisted secure transmission strategies in the presence of both jammers and eavesdroppers under imperfect CSI conditions.

1.2. Contributions

In this paper, we aimed to delve into the anti-jamming and anti-eavesdropping problems in an RIS-assisted UAV transmission system and introduce an innovative robust DRL-based approach to design discrete RIS coefficients in the presence of imperfect CSI from illegitimate nodes. Our principal contributions are summarized as follows:
  • Considering the illegitimate nodes’ imperfect CSI, the joint optimization problem of power allocation at the BS and reflecting beamforming at the RIS is formulated to maximize the achievable system rate, while ensuring fulfillment of the security requirements.
  • To cope with the non-convex and non-conventional optimization problem, we first used a robust method to process the imperfect CSI; subsequently, the optimization problem was reformulated into a Markov decision process (MDP) framework. Then, a noisy dueling double-deep Q-network with prioritized experience replay (Noisy-D3QN-PER) algorithm with safety performance awareness is proposed, where the D3QN improves upon the DQN, the NoisyNet encourages exploration to avoid falling into local optima, and the PER accelerates convergence.
  • The numerical results indicated that the Noisy-D3QN-PER algorithm outperformed conventional approaches in improving the safety performance protection level and achievable sum rate. For example, the proposed algorithm improved the system rate and transmission protection level by 27.43% and 11.11%, respectively, compared to the conventional DQN of the benchmark scheme.

2. System Model and Problem Formulation

2.1. System Model

Figure 1 depicts the secure transmission scenario under consideration, wherein a BS aided by a fixed aerial RIS-UAV seeks to establish dependable links with K single-antenna users in the presence of a smart jammer and a single-antenna eavesdropper. Here, we assumed that the BS and the jammer are equipped with $N$ and $N_J$ antennas, respectively, and that the RIS deployed on the UAV has $L$ reflecting units. For ease of exposition, we denote the channels between the BS and the RIS-UAV, the smart jammer and the RIS-UAV, the BS and the $k$-th user, the RIS-UAV and the $k$-th user, the smart jammer and the $k$-th user, the BS and the eavesdropper, and the RIS-UAV and the eavesdropper by $\mathbf{G}_{BR}\in\mathbb{C}^{L\times N}$, $\mathbf{G}_{JR}\in\mathbb{C}^{L\times N_J}$, $\mathbf{h}_{BU,k}^{H}\in\mathbb{C}^{1\times N}$, $\mathbf{h}_{RU,k}^{H}\in\mathbb{C}^{1\times L}$, $\mathbf{h}_{JU,k}^{H}\in\mathbb{C}^{1\times N_J}$, $\mathbf{h}_{BE}^{H}\in\mathbb{C}^{1\times N}$, and $\mathbf{h}_{RE}^{H}\in\mathbb{C}^{1\times L}$, respectively. Due to the cooperation between the legitimate nodes, we assumed that the CSI of the legitimate channels $\mathbf{G}_{BR}$, $\mathbf{h}_{BU,k}$, and $\mathbf{h}_{RU,k}$ is accurately available at the BS. However, since illegitimate nodes will not collaborate with the BS to perform channel estimation, we took the practical assumption into account that the CSI of the illegitimate channels, namely $\mathbf{G}_{JR}$, $\mathbf{h}_{JU,k}$, $\mathbf{h}_{BE}$, and $\mathbf{h}_{RE}$, cannot be perfectly obtained. To elaborate, considering a more-practical and more-general situation, rather than using a statistical or bounded uncertainty model [15], we characterized the illegitimate CSI by given angle-based ranges, i.e.,
$$\Delta_{J,G}=\left\{\mathbf{G}_{JR}\,\middle|\,\theta_{G}^{J,R}\in\big[\theta_{G,L}^{J,R},\theta_{G,U}^{J,R}\big],\ \varphi_{G}^{J,R}\in\big[\varphi_{G,L}^{J,R},\varphi_{G,U}^{J,R}\big],\ g_{G}^{J}\in\big[g_{G,L}^{J},g_{G,U}^{J}\big]\right\},$$
$$\Delta_{J,h}=\left\{\mathbf{h}_{JU,k}\,\middle|\,\theta_{k}^{J}\in\big[\theta_{k,L}^{J},\theta_{k,U}^{J}\big],\ \varphi_{k}^{J}\in\big[\varphi_{k,L}^{J},\varphi_{k,U}^{J}\big],\ g_{k}^{J}\in\big[g_{k,L}^{J},g_{k,U}^{J}\big],\ \forall k\in\mathcal{K}\right\},$$
$$\Delta_{E}=\left\{\mathbf{h}_{i}\,\middle|\,\theta_{i}^{E}\in\big[\theta_{i,L}^{E},\theta_{i,U}^{E}\big],\ \varphi_{i}^{E}\in\big[\varphi_{i,L}^{E},\varphi_{i,U}^{E}\big],\ g_{i}^{E}\in\big[g_{i,L}^{E},g_{i,U}^{E}\big],\ i\in\{\mathrm{BE},\mathrm{RE}\}\right\},$$
where $\Delta_{J}=\Delta_{J,h}\cup\Delta_{J,G}$, $\theta_{L}$ and $\theta_{U}$ denote the minimum and maximum vertical angles of departure/arrival (AoD/AoA), $\varphi_{L}$ and $\varphi_{U}$ denote the minimum and maximum horizontal angles of departure/arrival, and $g_{L}$ and $g_{U}$ denote the lower and upper limits of the channel gain amplitude, respectively.
Let $s_k$ denote the information symbol transmitted to the $k$-th user, satisfying $\mathbb{E}[s_k]=0$ and $\mathbb{E}\big[|s_k|^2\big]=1$. Before transmission, $s_k$ is multiplied by the beamforming vector $\mathbf{w}_k\in\mathbb{C}^{N\times 1}$ satisfying $\|\mathbf{w}_k\|^2=1$. Consequently, the total transmitted signal at the BS can be written as $\mathbf{x}=\sum_{k=1}^{K}\sqrt{P_k}\,\mathbf{w}_k s_k$, where $P_k$ denotes the transmit power allocated to the $k$-th user. Meanwhile, the smart jammer endeavors to disrupt the legitimate communication by transmitting the jamming signal $\mathbf{w}_J s_J\in\mathbb{C}^{N_J\times 1}$. The RIS receives the superimposed signals and imposes the phase-shift matrix $\boldsymbol{\Phi}=\mathrm{diag}\big(\beta_1 e^{j\phi_1},\ldots,\beta_l e^{j\phi_l},\ldots,\beta_L e^{j\phi_L}\big)$ on them, where $\phi_l\in[0,2\pi]$ and $\beta_l\in[0,1]$ represent the phase shift and the amplitude of the $l$-th RIS reflective element, respectively. Hence, the received signals at the $k$-th user and the eavesdropper can be, respectively, expressed as
$$y_{U,k}=\bar{\mathbf{h}}_{BU,k}\sqrt{P_k}\,\mathbf{w}_k s_k+\sum_{i\neq k}\bar{\mathbf{h}}_{BU,k}\sqrt{P_i}\,\mathbf{w}_i s_i+\bar{\mathbf{h}}_{JU,k}\mathbf{w}_J s_J+n_{U,k},$$
$$y_{E}=\bar{\mathbf{h}}_{BE}\sqrt{P_k}\,\mathbf{w}_k s_k+\sum_{i\neq k}\bar{\mathbf{h}}_{BE}\sqrt{P_i}\,\mathbf{w}_i s_i+n_{E},$$
where $\bar{\mathbf{h}}_{BU,k}=\mathbf{h}_{RU,k}^{H}\boldsymbol{\Phi}\mathbf{G}_{BR}+\mathbf{h}_{BU,k}^{H}$, $\bar{\mathbf{h}}_{JU,k}=\mathbf{h}_{RU,k}^{H}\boldsymbol{\Phi}\mathbf{G}_{JR}+\mathbf{h}_{JU,k}^{H}$, and $\bar{\mathbf{h}}_{BE}=\mathbf{h}_{RE}^{H}\boldsymbol{\Phi}\mathbf{G}_{BR}+\mathbf{h}_{BE}^{H}$. The symbol $n_{U,k}\sim\mathcal{CN}\big(0,\sigma_{U,k}^{2}\big)$ represents the additive white Gaussian noise (AWGN) at the $k$-th user, and $n_{E}\sim\mathcal{CN}\big(0,\sigma_{E}^{2}\big)$ is the AWGN at the eavesdropper. Hence, the achievable rate of the $k$-th user and the corresponding wiretap rate at the eavesdropper can be, respectively, expressed as
$$R_{U,k}=\log_{2}\left(1+\frac{P_k\big|\tilde{\mathbf{h}}_{BU,k}\mathbf{w}_k\big|^{2}}{\sum_{i\neq k}P_i\big|\tilde{\mathbf{h}}_{BU,k}\mathbf{w}_i\big|^{2}+\big|\tilde{\mathbf{h}}_{JU,k}\mathbf{w}_J\big|^{2}+\sigma_{U,k}^{2}}\right),$$
$$R_{E,k}=\log_{2}\left(1+\frac{P_k\big|\tilde{\mathbf{h}}_{BE}\mathbf{w}_k\big|^{2}}{\sum_{i\neq k}P_i\big|\tilde{\mathbf{h}}_{BE}\mathbf{w}_i\big|^{2}+\sigma_{E}^{2}}\right).$$
The secrecy rate of the k-th user can be written as
$$R_{\mathrm{sec},k}=\big[R_{U,k}-R_{E,k}\big]^{+},$$
where $[z]^{+}=\max(z,0)$.
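To make the rate expressions above concrete, the following sketch evaluates the user rate, wiretap rate, and secrecy rate for a toy instance with randomly drawn Rayleigh channels and random unit-modulus RIS phases; all dimensions, powers, and noise levels are illustrative assumptions and do not correspond to the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and powers (assumptions, not the paper's settings)
N, N_J, L, K = 8, 4, 16, 2
P = np.array([0.5, 0.5])            # per-user transmit power (W), assumed
sigma2_U, sigma2_E = 1e-9, 1e-9     # noise powers, assumed

def cplx(*shape):
    """Circularly symmetric complex Gaussian entries (unit variance)."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# Channels (Rayleigh placeholders for the geometric channels in the paper)
G_BR = cplx(L, N)
G_JR = cplx(L, N_J)
h_BU = cplx(K, N)
h_RU = cplx(K, L)
h_JU = cplx(K, N_J)
h_BE = cplx(N)
h_RE = cplx(L)

# RIS reflection matrix with unit-modulus elements and random phases
Phi = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, L)))

# Composite channels, e.g. h_bar_BU,k = h_RU,k^H Phi G_BR + h_BU,k^H
h_bar_BU = h_RU.conj() @ Phi @ G_BR + h_BU.conj()   # K x N
h_bar_JU = h_RU.conj() @ Phi @ G_JR + h_JU.conj()   # K x N_J
h_bar_BE = h_RE.conj() @ Phi @ G_BR + h_BE.conj()   # N

# Unit-norm beamformers and jamming vector (placeholders)
W = cplx(K, N)
W /= np.linalg.norm(W, axis=1, keepdims=True)
w_J = cplx(N_J)
w_J /= np.linalg.norm(w_J)

R_U, R_E = np.zeros(K), np.zeros(K)
for k in range(K):
    sig = P[k] * abs(h_bar_BU[k] @ W[k]) ** 2
    intra = sum(P[i] * abs(h_bar_BU[k] @ W[i]) ** 2 for i in range(K) if i != k)
    jam = abs(h_bar_JU[k] @ w_J) ** 2
    R_U[k] = np.log2(1 + sig / (intra + jam + sigma2_U))
    sig_e = P[k] * abs(h_bar_BE @ W[k]) ** 2
    intra_e = sum(P[i] * abs(h_bar_BE @ W[i]) ** 2 for i in range(K) if i != k)
    R_E[k] = np.log2(1 + sig_e / (intra_e + sigma2_E))

R_sec = np.maximum(R_U - R_E, 0.0)   # [x]^+ = max(x, 0)
print("R_U:", R_U, "R_E:", R_E, "R_sec:", R_sec)
```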

2.2. Problem Formulation

Our objective is to maximize the achievable sum rate through jointly optimizing the transmit power allocation $\{P_k\}_{k\in\mathcal{K}}$ and the reflecting beamforming matrix $\boldsymbol{\Phi}$ under imperfect CSI of the illegitimate nodes, while meeting the worst-case secrecy/achievable rate constraints. As such, the optimization problem can be formulated as
$$\begin{aligned}
\mathcal{F}:\ &\max_{\{P_k\}_{k\in\mathcal{K}},\,\boldsymbol{\Phi}}\ \min_{\Delta_J}\ \sum_{k\in\mathcal{K}}R_{U,k},\\
\text{s.t.}\quad &C_1:\ \min_{\Delta_E}R_{\mathrm{sec},k}\geq R_{\mathrm{sec},k}^{\min},\ \forall k\in\mathcal{K},\\
&C_2:\ \min_{\Delta_J}R_{U,k}\geq R_{k}^{\min},\ \forall k\in\mathcal{K},\\
&C_3:\ \sum_{k=1}^{K}P_k\leq P_{\max},\\
&C_4:\ \big|\beta_l e^{j\phi_l}\big|=1,\ 0\leq\phi_l\leq 2\pi,\ \forall l\in\mathcal{L},
\end{aligned}$$
where $R_{\mathrm{sec},k}^{\min}$ and $R_{k}^{\min}$ represent the minimum secrecy rate and the target rate of the $k$-th user, respectively. Constraint $C_3$ restricts the power allocation due to the limited energy supply at the BS, where $P_{\max}$ is the BS's maximum transmit power. Note that, owing to the non-convexity of both the objective function and the constraints, (9) is a non-convex and non-trivial problem. Many traditional optimization methods, such as the SDR and AO algorithms, solve such problems on a per-time-slot basis, thereby ignoring the correlation between consecutive instants; moreover, the phase adjustment of practical RIS elements is discrete, which renders these traditional methods inapplicable. In addition, in the scenario considered here, the jammer is intelligent and can change its unknown jamming strategy in real-time. To enable real-time optimization from the perspective of long-term benefits, instead of solving this problem directly by mathematical programming, we propose a robust DRL-based approach that constantly interacts with the environment containing the eavesdropper and the smart jammer to learn the optimal solution.

3. DRL-Based Algorithm Design

3.1. Robust Channel Processing

As stated in Section 2, the imperfect CSI introduces infinitely many non-convex terms into both the objective function and the constraints. With this in mind, according to the works [28,29,30], the equivalent worst-case CSI of the illegitimate channels obtained through the discretization method is given, respectively, by
$$\tilde{\mathbf{G}}_{JR}=\frac{1}{Q_1 Q_2}\sum_{i_1=1}^{Q_1}\sum_{i_2=1}^{Q_2}\mathbf{G}_{JR}(i_1,i_2),$$
$$\tilde{\mathbf{h}}_{M}=\frac{1}{Q_1 Q_2}\sum_{i_1=1}^{Q_1}\sum_{i_2=1}^{Q_2}\mathbf{h}_{M}(i_1,i_2),$$
where $M\in\{\mathrm{BE},\mathrm{RE},\{\mathrm{JU},k\}\}$ with corresponding channel dimensions $M_N\in\{N,L,N_J\}$, and $\mathbf{G}_{JR}(i_1,i_2)$, $\mathbf{h}_{JU,k}(i_1,i_2)$, $\mathbf{h}_{BE}(i_1,i_2)$, and $\mathbf{h}_{RE}(i_1,i_2)$ denote the discrete CSI obtained by uniformly discretizing all the angles in the sets $\Delta_J$ and $\Delta_E$, respectively, i.e.,
$$\theta(i_1)=\theta_{L}+(i_1-1)\,\frac{\theta_{U}-\theta_{L}}{Q_1-1},\quad i_1=1,\ldots,Q_1,$$
$$\varphi(i_2)=\varphi_{L}+(i_2-1)\,\frac{\varphi_{U}-\varphi_{L}}{Q_2-1},\quad i_2=1,\ldots,Q_2,$$
where $Q_1$ and $Q_2$ are the numbers of samples of $\theta$ and $\varphi$, respectively. Further details are omitted for brevity and can be found in [31,32].
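As a rough illustration of the discretization step above, the sketch below builds a uniform grid over an assumed angle range, forms a simple uniform linear array (ULA) steering vector for each grid point, and averages them into an equivalent channel vector. The steering-vector model, array size, angle bounds, and unit gain amplitude are assumptions made for illustration only; the actual channel construction follows [31,32].

```python
import numpy as np

def ula_steering(n_ant, theta, phi, spacing=0.5):
    """Simple ULA steering vector with half-wavelength spacing (assumed model).
    theta: vertical angle, phi: horizontal angle (radians)."""
    k = np.arange(n_ant)
    return np.exp(1j * 2 * np.pi * spacing * k * np.sin(theta) * np.cos(phi))

def worst_case_channel(n_ant, theta_lo, theta_hi, phi_lo, phi_hi, Q1=8, Q2=8):
    """Average the discretized channels over a uniform (theta, phi) grid,
    mimicking the equivalent worst-case CSI construction described above."""
    thetas = np.linspace(theta_lo, theta_hi, Q1)   # theta(i1), i1 = 1..Q1
    phis = np.linspace(phi_lo, phi_hi, Q2)         # phi(i2),  i2 = 1..Q2
    h = np.zeros(n_ant, dtype=complex)
    for th in thetas:
        for ph in phis:
            h += ula_steering(n_ant, th, ph)
    return h / (Q1 * Q2)

# Example: equivalent jammer-to-user channel over an assumed angular uncertainty cone
h_JU_tilde = worst_case_channel(
    n_ant=4,
    theta_lo=np.deg2rad(30), theta_hi=np.deg2rad(50),
    phi_lo=np.deg2rad(-10), phi_hi=np.deg2rad(10))
print(h_JU_tilde.shape, np.round(h_JU_tilde[:2], 3))
```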

3.2. Overview of DRL

DRL amalgamates the feature-extraction capability of deep learning (DL) with the decision-making capability of RL. It comprises two fundamental constituents: the agent and the environment. The agent continuously improves its strategy by receiving feedback through interactions with the environment so as to maximize its return. This learning process is described as an MDP [33]. The MDP framework can be defined by a tuple $\langle\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R}\rangle$. Herein, $\mathcal{S}$ represents the state space, i.e., the set of observations characterizing the environment; $\mathcal{A}$ denotes the set of potential actions; $\mathcal{P}$ is the state transition probability, denoting the distribution of the next state $s_{t+1}$ given the action $a_t$ taken in the current state $s_t$; and $\mathcal{R}$ is the immediate reward, which provides the quality evaluation $r_t(s_t,a_t)$ of the state–action pair $(s_t,a_t)$. At each time step $t$, the agent obtains the state $s_t\in\mathcal{S}$ from the environment and executes an action $a_t\in\mathcal{A}$ according to the policy $\pi(a_t|s_t)=\Pr(A_t=a_t|S_t=s_t)$. Subsequently, the environment transitions to a new state $s_{t+1}$ with probability $\mathcal{P}(s_{t+1}|s_t,a_t)=\Pr(S_{t+1}=s_{t+1}|S_t=s_t,A_t=a_t)$; in the meantime, the agent receives the immediate reward $r_t\in\mathcal{R}$. The agent aims at learning a strategy that maximizes the long-term reward, i.e., the cumulative discounted future reward $U_t=\sum_{\tau=0}^{\infty}\gamma^{\tau}R_{t+\tau+1}$, where $\gamma\in(0,1)$ is the discount factor. Therefore, the tuples $(s_1,a_1,r_1,s_2,\ldots,s_{t-1},a_{t-1},r_{t-1},s_t)$ constitute the trajectory in an episode used for the iterative updating of the agent.
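As a small worked example of the cumulative discounted reward $U_t$, the following snippet accumulates a finite reward trajectory backwards; the reward values and discount factor are arbitrary.

```python
def discounted_return(rewards, gamma=0.9):
    """U_t = sum_{tau >= 0} gamma^tau * r_{t+tau+1}, computed backwards
    over a finite trajectory."""
    u = 0.0
    for r in reversed(rewards):
        u = r + gamma * u
    return u

print(discounted_return([1.0, 0.5, 2.0], gamma=0.9))  # 1.0 + 0.9*0.5 + 0.81*2.0 = 3.07
```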
To accommodate the proposed algorithm in our problem, we first reformulated Problem (9) into an MDP framework. The corresponding elements of the MDP problem are specified as follows:
State $\mathcal{S}$: The state $s_t$ fed back from the RIS-UAV-assisted communication system is given as
$$s_t=\Big\{\{h_k(t)\}_{k\in\mathcal{K}},\ h_e(t),\ \{R_{U,k}(t-1)\}_{k\in\mathcal{K}}\Big\},$$
where $h_k(t)$ and $h_e(t)$ denote the composite channel coefficients of the $k$-th user and the eavesdropper, respectively.
Action $\mathcal{A}$: Based on the current state $s_t$, the agent makes a coordinated decision on the phase shifts at the RIS and the power allocation at the BS. Hence, the action $a_t$ at each time step $t$ is given as
$$a_t=\Big\{\{\Delta\phi_l\}_{l\in\mathcal{L}},\ \{\Delta P_k\}_{k\in\mathcal{K}}\Big\},$$
where $\Delta\phi_l\in\{-\tfrac{\pi}{4},0,\tfrac{\pi}{4}\}$ is the phase-shift increment of the $l$-th reflection element and $\Delta P_k\in\{-\tilde{p},0,\tilde{p}\}$ is the increment of the $k$-th user's transmit power.
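Because each RIS element chooses one of three phase increments and each user one of three power increments, a flat discrete action index can be decoded into the increment vector by a base-3 expansion, as in the sketch below. This encoding, as well as the element count and the step size $\tilde{p}$, are hypothetical and only illustrate the structure of the action space.

```python
import numpy as np

PHASE_STEPS = np.array([-np.pi / 4, 0.0, np.pi / 4])   # candidate phase increments
L_RIS, K_USERS = 4, 2                                   # small sizes for illustration
P_STEP = 0.1                                            # assumed power step (W)
POWER_STEPS = np.array([-P_STEP, 0.0, P_STEP])

def decode_action(index):
    """Map a flat index in [0, 3**(L_RIS + K_USERS)) to per-element increments."""
    digits = []
    for _ in range(L_RIS + K_USERS):
        digits.append(index % 3)
        index //= 3
    digits = np.array(digits)
    d_phi = PHASE_STEPS[digits[:L_RIS]]   # phase-shift increments per RIS element
    d_pow = POWER_STEPS[digits[L_RIS:]]   # power increments per user
    return d_phi, d_pow

d_phi, d_pow = decode_action(100)
print(d_phi, d_pow)
```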
Reward $\mathcal{R}$: Our goal is not only to maximize the achievable rate, but also to satisfy the system safety performance requirements. Therefore, we designed a composite reward function expressed as
$$r_t=\underbrace{\sum_{k\in\mathcal{K}}R_{U,k}}_{\text{basic}}\;-\;\underbrace{\sum_{k\in\mathcal{K}}\rho_1 p_{U,k}-\sum_{k\in\mathcal{K}}\rho_2 p_{E,k}}_{\text{penalty}},$$
where
$$p_{E,k}=\begin{cases}1, & \text{if } R_{\mathrm{sec},k}<R_{\mathrm{sec},k}^{\min},\ k\in\mathcal{K},\\ 0, & \text{otherwise},\end{cases}$$
$$p_{U,k}=\begin{cases}1, & \text{if } R_{U,k}<R_{U,k}^{\min},\ k\in\mathcal{K},\\ 0, & \text{otherwise}.\end{cases}$$
In (16), the base reward is the sum of the rates of all users; when the constraints in (17) or (18) are not satisfied, a penalty term is added to steer the agent's behavioral strategy closer to our requirements. The coefficients $\rho_1$ and $\rho_2$ are positive constants.
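A direct transcription of this composite reward with per-user indicator penalties might look as follows; the rate values and thresholds below are placeholders, while $\rho_1=\rho_2=2$ matches the simulation settings reported in Section 4.

```python
import numpy as np

def composite_reward(R_U, R_sec, R_U_min, R_sec_min, rho1=2.0, rho2=2.0):
    """r_t = sum_k R_U,k - rho1 * sum_k 1[R_U,k < R_U,k^min]
                         - rho2 * sum_k 1[R_sec,k < R_sec,k^min]."""
    R_U, R_sec = np.asarray(R_U), np.asarray(R_sec)
    p_U = (R_U < R_U_min).astype(float)      # rate-constraint violations
    p_E = (R_sec < R_sec_min).astype(float)  # secrecy-constraint violations
    return R_U.sum() - rho1 * p_U.sum() - rho2 * p_E.sum()

# Two users: the second violates both constraints, so two penalties apply.
print(composite_reward(R_U=[2.1, 0.6], R_sec=[0.9, 0.2], R_U_min=1.0, R_sec_min=0.5))
# 2.7 - 2*1 - 2*1 = -1.3
```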
With DRL, a well-known function measuring the expected return obtained when the agent executes action $a_t$ in state $s_t$ under the policy $\pi$ is the action value function $Q$:
$$Q_{\pi}(s_t,a_t;\mathbf{w})=\mathbb{E}\big[U_t\,\big|\,S_t=s_t,A_t=a_t\big],$$
where $\mathbf{w}$ represents the parameters of the deep neural networks (DNNs). In the learning process, the agent intends to find the optimal policy $\pi^{*}$. Thus, the optimal $Q$ function is expressed as
$$Q^{*}(s,a)=\max_{\pi}Q_{\pi}(s,a),\quad \forall s\in\mathcal{S},\ a\in\mathcal{A}.$$
To approach the above optimum, the $Q$ function can be iteratively approximated by updating the parameter $\mathbf{w}$ with the temporal difference (TD) algorithm:
$$\mathbf{w}_{t+1}=\mathbf{w}_{t}-\alpha\nabla_{\mathbf{w}}L(\mathbf{w}),$$
where $\alpha\in(0,1)$ is the learning rate for the update of $\mathbf{w}$ and $\nabla_{\mathbf{w}}L(\mathbf{w})$ is the gradient of the loss function $L(\mathbf{w})$ with respect to $\mathbf{w}$, which is given by
$$L(\mathbf{w})=\Big(r_t+\gamma\max_{a\in\mathcal{A}}Q(s_{t+1},a;\mathbf{w})-Q(s_t,a_t;\mathbf{w})\Big)^{2},$$
where $r_t+\gamma\max_{a\in\mathcal{A}}Q(s_{t+1},a;\mathbf{w})$ is the TD target value.
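In PyTorch-style code, the TD target and squared loss above can be sketched as follows, assuming a generic fully connected Q-network; the layer sizes and the name q_net are assumptions for illustration.

```python
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 6, 9, 0.9

# A generic Q-network: state -> Q(s, a) for every discrete action (assumed architecture)
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

def td_loss(s, a, r, s_next):
    """Squared TD error with the max-operator target of the vanilla DQN."""
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)           # Q(s_t, a_t; w)
    with torch.no_grad():
        td_target = r + gamma * q_net(s_next).max(dim=1).values    # r + gamma * max_a Q(s_{t+1}, a; w)
    return ((td_target - q_sa) ** 2).mean()

# Toy batch
s = torch.randn(4, state_dim)
s_next = torch.randn(4, state_dim)
a = torch.randint(0, n_actions, (4,))
r = torch.randn(4)
print(td_loss(s, a, r, s_next).item())
```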

3.3. Joint Power Allocation and Reflecting Beamforming Using Noisy-D3QN-PER

Prevailing reinforcement learning techniques, such as Q-learning, the policy gradient, and the deep Q-network (DQN), have demonstrated notable accomplishments in diverse control tasks. However, regarding the safety beamforming policy requirements discussed in Section 2, the policy gradient algorithm is inadequate for addressing Problem (9), as it involves continuous action space optimization and may converge to suboptimal solutions [34]. Furthermore, although the DQN performs well in environments characterized by high-dimensional continuous state spaces and discrete action spaces, it remains plagued by several inherent limitations, which adversely affect algorithmic efficacy [35]. Therefore, the Noisy-D3QN-PER algorithm was developed to deal with the challenges in this paper, as shown in Figure 2; it can overcome the constraints of the aforementioned methods and significantly enhance the attainable performance.
It is noteworthy that a significant disadvantage inherent to the DQN algorithm is the overestimation of the $Q$ function value. This issue is primarily attributable to two factors: first, the maximization operation causes the target value to overestimate the true value; second, bootstrapping propagates this bias. To address this issue, the double-DQN was adopted in the algorithm [36]. We applied another neural network, i.e., the target network $Q_{\pi}(s_t,a_t;\mathbf{w}')$, whose architecture is identical to that of the primary network, but whose parameter $\mathbf{w}'$ differs from $\mathbf{w}$. Specifically, the primary network is used to choose the action that maximizes the output of the $Q$ function, $a^{*}=\arg\max_{a\in\mathcal{A}}Q(s_{t+1},a;\mathbf{w})$, and then the target network calculates the TD target value $r_t+\gamma Q(s_{t+1},a^{*};\mathbf{w}')$ with the selected action. Thus, the primary network parameter is updated with the following loss function:
$$L(\mathbf{w})=\Big(r_t+\gamma Q\big(s_{t+1},a^{*};\mathbf{w}'\big)-Q(s_t,a_t;\mathbf{w})\Big)^{2}.$$
Subsequently, the target network parameter $\mathbf{w}'$ is updated by copying $\mathbf{w}$ at regular intervals.
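The decoupling of action selection (primary network) from action evaluation (target network) can be sketched as below; primary_net, target_net, and the layer sizes are hypothetical placeholders.

```python
import copy
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 6, 9, 0.9
primary_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = copy.deepcopy(primary_net)   # w' starts as a copy of w

def double_dqn_target(r, s_next):
    """r + gamma * Q(s_{t+1}, a*; w') with a* = argmax_a Q(s_{t+1}, a; w)."""
    with torch.no_grad():
        a_star = primary_net(s_next).argmax(dim=1, keepdim=True)   # selection: primary net
        q_next = target_net(s_next).gather(1, a_star).squeeze(1)   # evaluation: target net
    return r + gamma * q_next

def sync_target():
    """Periodic hard update of the target network parameters."""
    target_net.load_state_dict(primary_net.state_dict())

r = torch.randn(4)
s_next = torch.randn(4, state_dim)
print(double_dqn_target(r, s_next))
```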
To further enhance the algorithm's performance, we incorporated the dueling layer [37], resulting in the dueling double-DQN (D3QN). The core concept of the dueling layer is the decomposition of the optimal action value $Q^{*}$ into the optimal state value $V^{*}$ and the optimal advantage $D^{*}$. The optimal advantage function is defined as
$$D^{*}(s,a)\triangleq Q^{*}(s,a)-V^{*}(s).$$
The benefit of modeling the state value function and the advantage function separately is that, in some situations, the agent only needs to assess the value of the state and does not care about the differences caused by different actions. More specifically, in the optimization problem considered here, the state values differ greatly, whereas the action values in the same state differ only slightly; letting the agent focus on the differences in the advantage values of different actions makes the algorithm converge more stably. As shown in Figure 3, the dueling layer comprises two distinct neural networks: $D(s,a;\mathbf{w}_D)$ approximates the optimal advantage function $D^{*}(s,a)$, and $V(s;\mathbf{w}_V)$ approximates the optimal state value function $V^{*}(s)$. The corresponding optimal action value function can then be approximated by the following neural network:
$$Q(s,a;\mathbf{w})\triangleq V(s;\mathbf{w}_V)+D(s,a;\mathbf{w}_D)-\operatorname*{mean}_{a\in\mathcal{A}}D(s,a;\mathbf{w}_D),$$
where $\mathbf{w}\triangleq(\mathbf{w}_V;\mathbf{w}_D)$ and the mean term $\operatorname*{mean}_{a\in\mathcal{A}}D(s,a;\mathbf{w}_D)$ stabilizes the parameters during training, since at each iteration the update of $V(s;\mathbf{w}_V)$ also affects the action values of the other actions.
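A minimal dueling head implementing $Q=V+D-\operatorname{mean}_a D$ could look like this; the hidden width and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Q(s, a) = V(s) + D(s, a) - mean_a D(s, a)."""
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s; w_V)
        self.advantage = nn.Linear(hidden, n_actions)  # D(s, a; w_D)

    def forward(self, s):
        x = self.feature(s)
        v = self.value(x)                              # (batch, 1)
        d = self.advantage(x)                          # (batch, n_actions)
        return v + d - d.mean(dim=1, keepdim=True)     # broadcast V over actions

q = DuelingHead(state_dim=6, n_actions=9)
print(q(torch.randn(4, 6)).shape)   # torch.Size([4, 9])
```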
In addition, RL faces an exploration–exploitation dilemma that greatly affects the performance of the algorithm: by gathering sufficient information, the agent can achieve the optimal long-term strategy at the expense of some short-term benefits. To attain a good tradeoff between exploration and exploitation, several basic strategies have been proposed, such as Boltzmann exploration and the $\varepsilon$-greedy policy. However, these methods rely only on action dithering, which results in a low exploration rate, especially in complex and unstable environments. Therefore, we adopt the NoisyNet technique to improve the exploration efficiency, i.e., adding parameterized noise to the DNN layers [38]. Specifically, as shown in Figure 4, the weight parameter $\mathbf{w}$ of the DNN is replaced with
$$\mathbf{w}=\boldsymbol{\mu}+\boldsymbol{\sigma}\circ\boldsymbol{\xi},$$
where $\boldsymbol{\mu}$ and $\boldsymbol{\sigma}$ are learnable parameters denoting the mean and standard deviation, respectively, and $\boldsymbol{\xi}\sim\mathcal{N}(0,1)$ is the noise. Here, $\circ$ denotes the element-wise (Hadamard) product, i.e.,
$$w_{ij}=\mu_{ij}+\sigma_{ij}\,\xi_{ij}.$$
Hence, the $Q$ function is written as
$$\tilde{Q}(s,a,\boldsymbol{\xi};\boldsymbol{\mu},\boldsymbol{\sigma})\triangleq Q(s,a;\boldsymbol{\mu}+\boldsymbol{\sigma}\circ\boldsymbol{\xi}).$$
The loss function can be further rewritten as
$$L(\boldsymbol{\mu},\boldsymbol{\sigma})=\Big(r_t+\gamma\,\tilde{Q}\big(s_{t+1},a^{*},\boldsymbol{\xi}';\boldsymbol{\mu}',\boldsymbol{\sigma}'\big)-\tilde{Q}\big(s_t,a_t,\boldsymbol{\xi};\boldsymbol{\mu},\boldsymbol{\sigma}\big)\Big)^{2},$$
where $a^{*}=\arg\max_{a\in\mathcal{A}}\tilde{Q}(s_{t+1},a,\boldsymbol{\xi}';\boldsymbol{\mu},\boldsymbol{\sigma})$ and the noise value $\boldsymbol{\xi}'$ is drawn independently of $\boldsymbol{\xi}$. During training, noise is added to the trainable parameters, forcing the algorithm to minimize the error in the presence of parameter perturbations, i.e., to tolerate disturbances of the parameters. It does not matter if the parameters are not strictly equal to the mean; as long as they lie in the neighborhood of the mean, the agent's prediction remains reasonable. Therefore, the NoisyNet not only enhances exploration, but also improves the robustness of the algorithm.
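A noisy linear layer realizing $\mathbf{w}=\boldsymbol{\mu}+\boldsymbol{\sigma}\circ\boldsymbol{\xi}$ can be sketched as below; for brevity, it samples independent Gaussian noise per weight rather than the factorized noise of [38], and the initialization constants are assumptions.

```python
import math
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer whose weights are w = mu + sigma * xi with fresh noise each forward pass."""
    def __init__(self, in_features, out_features, sigma0=0.017):
        super().__init__()
        bound = 1.0 / math.sqrt(in_features)
        self.w_mu = nn.Parameter(torch.empty(out_features, in_features).uniform_(-bound, bound))
        self.w_sigma = nn.Parameter(torch.full((out_features, in_features), sigma0))
        self.b_mu = nn.Parameter(torch.empty(out_features).uniform_(-bound, bound))
        self.b_sigma = nn.Parameter(torch.full((out_features,), sigma0))

    def forward(self, x):
        # w_ij = mu_ij + sigma_ij * xi_ij, with xi resampled at every call
        w = self.w_mu + self.w_sigma * torch.randn_like(self.w_sigma)
        b = self.b_mu + self.b_sigma * torch.randn_like(self.b_sigma)
        return nn.functional.linear(x, w, b)

layer = NoisyLinear(6, 9)
print(layer(torch.randn(4, 6)).shape)   # torch.Size([4, 9])
```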
Experience replay is often utilized in the classical DQN to store and uniformly sample experience transitions, which helps reuse experiences and break the correlation of experience transition sequences. However, due to the uncertainty of the jammer's strategy, different transitions have different importance, and uniform sampling may be ineffective. Hence, we adopted prioritized experience replay (PER) to make the algorithm learn more efficiently and converge faster [39]. PER samples each transition non-uniformly, with a priority proportional to its TD error. Therefore, the sampling probability of transition $j$ is given by
$$P(j)=\frac{|\delta_j|^{\alpha}}{\sum_{n}|\delta_n|^{\alpha}},$$
where $\delta_j$ denotes the TD error of transition $j$ and $\alpha$ adjusts the importance of the priority. In addition, the loss function needs to be multiplied by importance-sampling weights to counteract the bias caused by the non-uniform sampling probabilities. Thus, the parameters of the proposed algorithm are updated over a mini-batch of transitions:
$$\boldsymbol{\sigma}_{t+1}=\boldsymbol{\sigma}_{t}-\alpha_{\sigma}\nabla_{\boldsymbol{\sigma}}\frac{1}{m}\sum_{j=1}^{m}\big(N\cdot P(j)\big)^{-\varpi}L_{j}(\boldsymbol{\mu},\boldsymbol{\sigma}),$$
$$\boldsymbol{\mu}_{t+1}=\boldsymbol{\mu}_{t}-\alpha_{\mu}\nabla_{\boldsymbol{\mu}}\frac{1}{m}\sum_{j=1}^{m}\big(N\cdot P(j)\big)^{-\varpi}L_{j}(\boldsymbol{\mu},\boldsymbol{\sigma}),$$
where $\alpha_{\sigma}$ and $\alpha_{\mu}$ are the learning rates, $m$ is the mini-batch size, $N$ is the number of samples in the buffer, and $\varpi\in[0,1]$ is a hyperparameter that determines the extent to which PER affects the convergence result.
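The proportional sampling probabilities and importance-sampling weights can be sketched as follows with a flat buffer (practical implementations use a sum-tree); here beta plays the role of the exponent $\varpi$, and the values of alpha and beta are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def per_sample(td_errors, batch_size, alpha=0.6, beta=0.4, eps=1e-6):
    """Proportional prioritized sampling: P(j) ~ |delta_j|^alpha, with
    importance-sampling weights (N * P(j))^(-beta) to correct the bias."""
    priorities = (np.abs(td_errors) + eps) ** alpha
    probs = priorities / priorities.sum()                 # P(j)
    idx = rng.choice(len(td_errors), size=batch_size, p=probs)
    weights = (len(td_errors) * probs[idx]) ** (-beta)    # importance-sampling weights
    weights /= weights.max()                              # normalize for stability
    return idx, weights

td_errors = rng.standard_normal(100)
idx, w = per_sample(td_errors, batch_size=8)
print(idx, np.round(w, 3))
```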
The detailed training process of the Noisy-D3QN-PER algorithm is shown in Algorithm 1. At the beginning of the training, we sample new channel realizations and randomly choose the phase shifts and power allocation to compute the initial state $s_0$. Since the NoisyNet is inherently random, exploration is encouraged. Based on the current state $s_t$, the $\varepsilon$-greedy policy is implemented to select action $a_t$, after which the feedback reward $r_t$ and the next state $s_{t+1}$ are received. The transition $(s_t,a_t,r_t,s_{t+1})$ is saved in the experience replay buffer $\mathcal{D}$. After enough experience transitions have been stored, the training of the primary networks starts: mini-batch transitions are selected according to the PER principle and fed into the neural networks to compute the loss function according to Equation (29). Then, the parameters of the primary networks are updated by the Adam optimizer according to Equations (31) and (32), and the target network copies the parameters of the primary networks every $T_{NET}$ time steps. In addition, each time experience transitions are sampled, the priorities of the selected transitions are updated with the new TD errors.
Algorithm 1 Noisy-D3QN-PER algorithm
Require: environment simulator, experience replay buffer $\mathcal{D}$, learning rates $\alpha_{\sigma}$ and $\alpha_{\mu}$, mini-batch size $m$.
1: Initialize: experience replay buffer $\mathcal{D}$ with size $D$, mini-batch size $m$, primary network parameters $(\boldsymbol{\mu},\boldsymbol{\sigma})$, target network parameters $(\boldsymbol{\mu}'=\boldsymbol{\mu},\boldsymbol{\sigma}'=\boldsymbol{\sigma})$.
2: for each episode = 1, 2, …, $N_{\mathrm{epi}}$ do
3:   Perceive an initial system state $s$.
4:   for each step = 1, 2, …, $T$ do
5:     Select action $a_t$ using the $\varepsilon$-greedy policy, i.e., select the action that yields the largest action value with probability $1-\varepsilon$, or randomly select from all possible actions with probability $\varepsilon$.
6:     Receive an instantaneous reward $r_t$, and obtain the next state $s_{t+1}$.
7:     Store the experience transition $(s_t,a_t,r_t,s_{t+1})$.
8:     if $|\mathcal{D}|\geq m$ then
9:       Sample mini-batch transitions based on PER using (30), and then update the priority of each selected transition based on its TD error.
10:      Calculate the loss function for the mini-batch according to (29).
11:      Perform gradient descent, and update the parameters of the primary networks using (31) and (32).
12:      if $t \bmod T_{NET}=0$ then
13:        The target network copies the parameters of the primary networks.
14:      end if
15:    end if
16:  end for
17: end for
Ensure: joint power allocation and RIS phase-shift design strategy.

4. Simulation Results

This section presents an evaluation of the Noisy-D3QN-PER algorithm. We varied the maximum transmit power $P_{\max}$ between 10 dBm and 30 dBm. The numbers of antennas at the BS and the jammer were $N=N_J=64$, and the number of users was $K=2$. The fixed deployment height of the RIS-UAV was 100 m. The minimum secrecy rate and the target data rate were $R_{\mathrm{sec},k}^{\min}=0.5$ bits/s/Hz and $R_{k}^{\min}=1$ bits/s/Hz, respectively. The background noise power at each user and at the eavesdropper was set to $\sigma_{U,k}^{2}=\sigma_{E}^{2}=-90$ dBm. All involved neural networks were fully connected. The learning rates $\alpha_{\sigma}$ and $\alpha_{\mu}$ were both set to 0.001. The initial exploration rate $\varepsilon$ was 1 and was then linearly annealed to 0.1. The parameters $\rho_1$ and $\rho_2$ in (12) were set to $\rho_1=\rho_2=2$. The replay buffer size was $D=100{,}000$, and the mini-batch size was $m=32$. In addition, the jammer selected its transmit power between 10 dBm and 30 dBm according to its own jamming strategy, which was not accessible to the BS. Besides, we chose three conventional approaches as benchmarks, namely the classical DQN, the DDQN, and the optimal transmit power allocation (PA) without the RIS. All of the displayed results are averaged over 100 independent runs.
Figure 5 shows the average reward of the Noisy-D3QN-PER algorithm and the benchmark algorithms. It can be observed that, in the initial phase of training, the algorithms obtained approximately the same reward. However, after 100 episodes of training, the Noisy-D3QN-PER algorithm achieved significantly higher gains and faster convergence compared to the benchmark algorithms. This is because the prioritized experience replay and the dueling layer included in the proposed algorithm were better able to adapt to the dynamic and complex interference environment. Specifically, the dueling layer helps analyze the state bias caused by the unknown jammer power and unknown location information, and the NoisyNet encourages the exploration of more reflecting beamforming strategies for higher long-term benefits. Moreover, it can be observed that both the DDQN and the proposed algorithm outperformed the classical DQN, which suggests that the DDQN can effectively mitigate the overestimation problem.
Figure 6 shows the achievable sum rate with varying maximum transmit power $P_{\max}$. Here, we set $L=64$. As expected, the proposed algorithm outperformed the other approaches. This is because the dueling layer, by modeling the advantage function and the state value function separately, can better focus on states that are less correlated with the current strategy–action relationship and better predict the jammer's strategy when the transmit power changes. Besides, the NoisyNet prevents the proposed algorithm from becoming stuck at undesired suboptimal solutions. It can also be observed that the three RIS-UAV-assisted approaches obtained a much higher achievable rate than the approach without the RIS, which indicates that deploying the RIS-UAV can efficiently enhance the security performance. To elaborate, the system can enhance the desired signals at the users and suppress the jamming signal by adjusting the reflecting beamforming at the RIS.
To further highlight the security performance enhancement of the proposed algorithm, the security requirement satisfaction probability (the probability that the rate constraints are satisfied [27,28]) of the different approaches is shown in Figure 7. It is evident from the figure that the security performance of the optimal PA without the RIS approach cannot be guaranteed when $P_{\max}$ is low, and the security protection level improved only once $P_{\max}$ was raised to a certain value. In contrast, the approaches with the RIS-UAV obtained satisfactory performance at all values of $P_{\max}$, which further confirms the superiority of deploying the RIS-UAV in wireless communication systems. Furthermore, it is noteworthy that the proposed algorithm achieved the best result compared to the other conventional approaches. This can be explained by the fact that the comparison approaches usually fell into suboptimal solutions that only increased the achievable sum rate while ignoring the security performance requirement. In contrast, owing to the adopted NoisyNet and the security-aware reward function, the proposed Noisy-D3QN-PER algorithm can explore strategies that strike a desirable balance between the security performance and the achievable rate.

5. Conclusions

This paper delved into the joint optimization of power allocation and reflecting beamforming for secure RIS-UAV-assisted communication with imperfect CSI. Specifically, the original optimization problem was formulated as an MDP and solved by the Noisy-D3QN-PER algorithm, in which the agent estimates the unknown jamming strategy by constantly interacting with the environment, quickly adapts to the dynamic environment, and finally obtains the optimal policy that maximizes the achievable rate while meeting the system security performance requirements; this provides technical support for realizing intelligent RIS-assisted robust beamforming systems. The numerical results confirmed the superiority of the proposed Noisy-D3QN-PER algorithm over existing conventional approaches in improving the achievable sum rate and the system security performance. Although the proposed method can effectively resist jamming attacks under CSI uncertainty, it still requires knowledge of the variation range of the interference. Future work will focus on two aspects: first, studying anti-jamming methods that require no interference information; second, exploring AI interpretability to improve the trustworthiness and effectiveness of AI methods.

Author Contributions

Conceptualization, C.Z., C.L., Y.L. and X.Y.; methodology, C.L. and Y.L.; validation, C.Z., C.L. and Y.L.; formal analysis, C.Z., C.L., Y.L. and X.Y.; investigation, C.Z., C.L., Y.L. and X.Y.; resources, C.Z., C.L., Y.L. and X.Y.; data curation, C.Z., C.L., Y.L. and X.Y.; writing—original draft preparation, C.Z.; writing—review and editing, C.L. and Y.L.; supervision, C.L. and Y.L.; project administration, C.L.; funding acquisition, C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Due to institutional data privacy requirements, our data is unavailable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cao, H.; Du, J.; Zhao, H.; Luo, D.X.; Kumar, N.; Yang, L.; Yu, F.R. Resource-Ability Assisted Service Function Chain Embedding and Scheduling for 6G Networks With Virtualization. IEEE Trans. Veh. Technol. 2021, 70, 3846–3859. [Google Scholar] [CrossRef]
  2. Mukherjee, A.; Fakoorian, S.A.A.; Huang, J.; Swindlehurst, A.L. Principles of Physical Layer Security in Multiuser Wireless Networks: A Survey. IEEE Commun. Surv. Tutorials 2014, 16, 1550–1573. [Google Scholar] [CrossRef]
  3. Zou, Y.; Zhu, J.; Wang, X.; Hanzo, L. A Survey on Wireless Security: Technical Challenges, Recent Advances, and Future Trends. Proc. IEEE 2016, 104, 1727–1765. [Google Scholar] [CrossRef]
  4. Feng, S.; Haykin, S. Cognitive Risk Control for Anti-Jamming V2V Communications in Autonomous Vehicle Networks. IEEE Trans. Veh. Technol. 2019, 68, 9920–9934. [Google Scholar] [CrossRef]
  5. Liang, L.; Cheng, W.; Zhang, W.; Zhang, H. Mode Hopping for Anti-Jamming in Radio Vortex Wireless Communications. IEEE Trans. Veh. Technol. 2018, 67, 7018–7032. [Google Scholar] [CrossRef]
  6. Yan, S.; Yang, N.; Land, I.; Malaney, R.; Yuan, J. Three Artificial-Noise-Aided Secure Transmission Schemes in Wiretap Channels. IEEE Trans. Veh. Technol. 2018, 67, 3669–3673. [Google Scholar] [CrossRef]
  7. Mayouche, A.; Spano, D.; Tsinos, C.G.; Chatzinotas, S.; Ottersten, B. Learning-Assisted Eavesdropping and Symbol-Level Precoding Countermeasures for Downlink MU-MISO Systems. IEEE Open J. Commun. Soc. 2020, 1, 535–549. [Google Scholar] [CrossRef]
  8. Liaskos, C.; Nie, S.; Tsioliaridou, A.; Pitsillides, A.; Ioannidis, S.; Akyildiz, I. A New Wireless Communication Paradigm through Software-Controlled Metasurfaces. IEEE Commun. Mag. 2018, 56, 162–169. [Google Scholar] [CrossRef]
  9. Huang, C.; Zappone, A.; Alexandropoulos, G.C.; Debbah, M.; Yuen, C. Reconfigurable Intelligent Surfaces for Energy Efficiency in Wireless Communication. IEEE Trans. Wirel. Commun. 2019, 18, 4157–4170. [Google Scholar] [CrossRef]
  10. Hu, S.; Rusek, F.; Edfors, O. Beyond Massive MIMO: The Potential of Data Transmission With Large Intelligent Surfaces. IEEE Trans. Signal Process. 2018, 66, 2746–2758. [Google Scholar] [CrossRef]
  11. Wu, Q.; Zhang, R. Intelligent Reflecting Surface Enhanced Wireless Network via Joint Active and Passive Beamforming. IEEE Trans. Wirel. Commun. 2019, 18, 5394–5409. [Google Scholar] [CrossRef]
  12. Cui, M.; Zhang, G.; Zhang, R. Secure Wireless Communication via Intelligent Reflecting Surface. IEEE Wirel. Commun. Lett. 2019, 8, 1410–1414. [Google Scholar] [CrossRef]
  13. Shen, H.; Xu, W.; Gong, S.; He, Z.; Zhao, C. Secrecy Rate Maximization for Intelligent Reflecting Surface Assisted Multi-Antenna Communications. IEEE Commun. Lett. 2019, 23, 1488–1492. [Google Scholar] [CrossRef]
  14. Dong, L.; Wang, H.-M. Secure MIMO Transmission via Intelligent Reflecting Surface. IEEE Wirel. Commun. Lett. 2020, 9, 787–790. [Google Scholar] [CrossRef]
  15. Sun, Y.; An, K.; Luo, J.; Zhu, Y.; Zheng, G.; Chatzinotas, S. Intelligent Reflecting Surface Enhanced Secure Transmission Against Both Jamming and Eavesdropping Attacks. IEEE Trans. Veh. Technol. 2021, 70, 11017–11022. [Google Scholar] [CrossRef]
  16. Sun, Y.; An, K.; Zhu, Y.; Zheng, G.; Wong, K.K.; Chatzinotas, S.; Yin, H.; Liu, P. RIS-Assisted Robust Hybrid Beamforming Against Simultaneous Jamming and Eavesdropping Attacks. IEEE Trans. Wirel. Commun. 2022, 21, 9212–9231. [Google Scholar] [CrossRef]
  17. Sun, Y.; An, K.; Luo, J.; Zhu, Y.; Zheng, G.; Chatzinotas, S. Outage Constrained Robust Beamforming Optimization for Multiuser IRS-Assisted Anti-Jamming Communications With Incomplete Information. IEEE Internet Things J. 2022, 9, 13298–13314. [Google Scholar] [CrossRef]
  18. Li, Z.; Wang, S.; Wen, M.; Wu, Y.C. Secure Multicast Energy-Efficiency Maximization With Massive RISs and Uncertain CSI: First-Order Algorithms and Convergence Analysis. IEEE Trans. Wirel. Commun. 2022, 21, 6818–6833. [Google Scholar] [CrossRef]
  19. Guo, K.; An, K. On the Performance of RIS-Assisted Integrated Satellite-UAV-Terrestrial Networks With Hardware Impairments and Interference. IEEE Wirel. Commun. Lett. 2022, 11, 131–135. [Google Scholar] [CrossRef]
  20. Wu, W.; Zhou, F.; Wang, B.; Wu, Q.; Dong, C.; Hu, R.Q. Unmanned Aerial Vehicle Swarm-Enabled Edge Computing: Potentials, Promising Technologies, and Challenges. IEEE Wirel. Commun. 2022, 29, 78–85. [Google Scholar] [CrossRef]
  21. Mei, C.; Fang, Y.; Qiu, L. Dual Based Optimization Method for IRS-Aided UAV-Enabled SWIPT System. In Proceedings of the 2022 IEEE Wireless Communications and Networking Conference (WCNC), Austin, TX, USA, 10–13 April 2022; pp. 890–895. [Google Scholar]
  22. Liu, Z.; Zhao, S.; Wu, Q.; Yang, Y.; Guan, X. Joint Trajectory Design and Resource Allocation for IRS-Assisted UAV Communications With Wireless Energy Harvesting. IEEE Commun. Lett. 2022, 26, 404–408. [Google Scholar] [CrossRef]
  23. Zhou, F.; Li, X.; Alazab, M.; Jhaveri, R.H.; Guo, K. Secrecy Performance for RIS-Based Integrated Satellite Vehicle Networks With a UAV Relay and MRC Eavesdropping. IEEE Trans. Intell. Veh. 2023, 8, 1676–1685. [Google Scholar] [CrossRef]
  24. Feng, K.; Wang, Q.; Li, X.; Wen, C.K. Deep Reinforcement Learning Based Intelligent Reflecting Surface Optimization for MISO Communication Systems. IEEE Wirel. Commun. Lett. 2020, 9, 745–749. [Google Scholar] [CrossRef]
  25. Wang, W.; Zhang, W. Intelligent Reflecting Surface Configurations for Smart Radio Using Deep Reinforcement Learning. IEEE J. Sel. Areas Commun. 2022, 40, 2335–2346. [Google Scholar] [CrossRef]
  26. Omoniwa, B.; Galkin, B.; Dusparic, I. Communication-enabled deep reinforcement learning to optimise energy-efficiency in UAV-assisted networks. Veh. Commun. 2023, 43, 100640. [Google Scholar] [CrossRef]
  27. Yang, H.; Xiong, Z.; Zhao, J.; Niyato, D.; Xiao, L.; Wu, Q. Deep Reinforcement Learning-Based Intelligent Reflecting Surface for Secure Wireless Communications. IEEE Trans. Wirel. Commun. 2021, 20, 375–388. [Google Scholar] [CrossRef]
  28. Yang, H.; Xiong, Z.; Zhao, J.; Niyato, D.; Wu, Q.; Poor, H.V.; Tornatore, M. Intelligent Reflecting Surface Assisted Anti-Jamming Communications: A Fast Reinforcement Learning Approach. IEEE Trans. Wirel. Commun. 2021, 20, 1963–1974. [Google Scholar] [CrossRef]
  29. Omoniwa, B.; Galkin, B.; Dusparic, I. Optimizing Energy Efficiency in UAV-Assisted Networks Using Deep Reinforcement Learning. IEEE Wirel. Commun. Lett. 2022, 11, 1590–1594. [Google Scholar] [CrossRef]
  30. Thanh, P.D.; Giang, H.T.H.; Hong, I.-P. Anti-Jamming RIS Communications Using DQN-Based Algorithm. IEEE Access 2022, 10, 28422–28433. [Google Scholar] [CrossRef]
  31. Sun, Y.; An, K.; Zhu, Y.; Zheng, G.; Wong, K.K.; Chatzinotas, S.; Ng, D.W.K.; Guan, D. Energy-Efficient Hybrid Beamforming for Multilayer RIS-Assisted Secure Integrated Terrestrial-Aerial Networks. IEEE Trans. Commun. 2022, 70, 4189–4210. [Google Scholar] [CrossRef]
  32. Sun, Y.; Zhu, Y.; An, K.; Zheng, G.; Chatzinotas, S.; Wong, K.K.; Liu, P. Robust Design for RIS-Assisted Anti-Jamming Communications With Imperfect Angular Information: A Game-Theoretic Perspective. IEEE Trans. Veh. Technol. 2022, 71, 7967–7972. [Google Scholar] [CrossRef]
  33. Picard, R.W.; Papert, S.; Bender, W.; Blumberg, B.; Breazeal, C.; Cavallo, D.; Machover, T.; Resnick, M.; Roy, D.; Strohecker, C. Affective Learning—A Manifesto. BT Technol. J. 2004, 22, 253–269. [Google Scholar] [CrossRef]
  34. Silver, D.; Lever, G.; Heess, N.; Degris, T.; Wierstra, D.; Riedmiller, M. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 387–395. [Google Scholar]
  35. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; Riedmiller, M. Playing atari with deep reinforcement learning. arXiv 2013, arXiv:1312.5602. [Google Scholar]
  36. Van Hasselt, H.; Guez, A.; Silver, D. Deep Reinforcement Learning with Double Q-learning. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; pp. 2094–2100. [Google Scholar]
  37. Wang, Z.; Schaul, T.; Hessel, M.; Hasselt, H.; Lanctot, M.; Freitas, N. Dueling network architectures for deep reinforcement learning. Proc. Mach. Learn. Res. 2016, 48, 1995–2003. [Google Scholar]
  38. Fortunato, M.; Azar, M.G.; Piot, B.; Menick, J.; Osband, I.; Graves, A.; Mnih, V.; Munos, R.; Hassabis, D.; Pietquin, O.; et al. Noisy Networks for Exploration. arXiv 2017, arXiv:1706.10295. [Google Scholar]
  39. Schaul, T.; Quan, J.; Antonoglou, I.; Silver, D. Prioritized Experience Replay. arXiv 2015, arXiv:1511.05952. [Google Scholar]
Figure 1. System model.
Figure 2. The process of the Noisy-D3QN-PER algorithm.
Figure 3. Dueling layer.
Figure 4. NoisyNet.
Figure 5. Average reward of the Noisy-D3QN-PER algorithm and other comparison approaches.
Figure 6. Achievable sum rate with varying maximum transmit power $P_{\max}$.
Figure 7. System security requirement satisfaction probability versus the maximum transmit power $P_{\max}$.
