Farshad Zeinali
Publications
Meeting the capacity demands of fifth-generation (5G) vehicle-to-everything (V2X) networks poses significant challenges. To address them, this paper utilizes New Radio (NR) and New Radio Unlicensed (NR-U) networks to develop a vehicular heterogeneous network (HetNet). We propose a framework, named joint BS assignment and resource allocation (JBSRA), for mobile V2X users, and also consider coexistence schemes based on a flexible duty cycle (DC) mechanism for unlicensed bands. Our objective is to maximize the average throughput of vehicles while guaranteeing the throughput of WiFi users. In simulations based on deep reinforcement learning (DRL) algorithms such as deep deterministic policy gradient (DDPG) and deep Q-network (DQN), our proposed framework outperforms existing solutions that rely on a fixed DC or on schemes that ignore unlicensed bands.
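The flexible duty-cycle idea can be sketched in a few lines. This is a minimal illustration, not the paper's JBSRA framework: it assumes a simple airtime split in which WiFi receives a (1 - alpha) fraction of the unlicensed band, and solves for the largest NR-U share alpha that keeps WiFi above a required throughput floor. All names and numbers are illustrative.

```python
def max_duty_cycle(r_wifi, wifi_min):
    """Largest NR-U duty-cycle fraction that preserves the WiFi guarantee.

    Under a simple airtime split, WiFi gets a (1 - alpha) share, so its
    throughput is (1 - alpha) * r_wifi; requiring this to stay at or above
    wifi_min gives alpha <= 1 - wifi_min / r_wifi.
    """
    if wifi_min > r_wifi:
        return 0.0  # the WiFi floor is infeasible even with no NR-U airtime
    return 1.0 - wifi_min / r_wifi


# Illustrative numbers: WiFi alone achieves 60 Mbps and must keep >= 15 Mbps;
# NR-U alone would achieve 100 Mbps on the unlicensed band.
alpha = max_duty_cycle(r_wifi=60.0, wifi_min=15.0)
nru_rate = alpha * 100.0  # NR-U throughput under the chosen duty cycle
```

In the paper this trade-off is not solved in closed form but learned jointly with BS assignment by the DRL agents; the sketch only shows the constraint a flexible DC must respect.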
Vehicles are already fitted with light-emitting diodes (LEDs); within a vehicle-to-everything (V2X) network, these LEDs can be leveraged to form a visible light communication (VLC) network. In this article, we explore energy efficiency (EE)- and age of information (AoI)-aware design in a cluster-based VLC V2X system. Through cellular vehicle-to-everything (C-V2X) communication technology, vehicle clusters provide cooperative awareness messages (CAMs) to their members and communicate safety-critical messages to the road-side unit (RSU). The purpose of this study is to evaluate the impact of a rising number of vehicles on EE and AoI, as well as the effect of increasing the intra-cluster gap on AoI, in order to maximize EE while minimizing AoI. To solve the EE problem under quality of service (QoS) and power constraints, we employ a multi-agent reinforcement learning (MARL) mechanism. The simulations show an acceptable improvement in the system's performance.
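The AoI metric used here has a simple intuition worth making concrete. The sketch below is a generic textbook formula, not the paper's system model: for periodic status updates (such as CAMs) delivered with a roughly constant delay, the age follows a sawtooth between `delay` and `period + delay`, so the time-average AoI is `delay + period / 2`. Parameter names are illustrative.

```python
def average_aoi(period, delay):
    """Time-average age of information for periodic updates.

    Assumes updates are generated every `period` seconds and each arrives
    after a constant `delay`; the AoI sawtooth then rises linearly from
    `delay` to `period + delay`, giving a mean of delay + period / 2.
    """
    return delay + period / 2.0


# Illustrative: 100 ms CAM period with a 5 ms delivery delay.
aoi = average_aoi(period=0.100, delay=0.005)  # seconds
```

This is why the abstract's intra-cluster gap matters: a longer gap stretches the update period of cluster members, which raises average AoI linearly even when per-message delay is unchanged.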
In this paper, we present a quality of service (QoS)-aware, priority-based spectrum management scheme to guarantee the minimum required bit rate of vertical sector players (VSPs) in 5G and beyond networks, including the sixth generation (6G). VSPs are treated as spectrum lessees so as to optimize the overall spectrum efficiency of the network from the perspective of the mobile network operator (MNO), which acts as the spectrum licensee and auctioneer. We exploit a modified Vickrey-Clarke-Groves (VCG) auction mechanism to allocate the spectrum to the VSPs, where QoS and the truthfulness of bidders serve as two key parameters for prioritizing VSPs. Simulations employ deep deterministic policy gradient (DDPG) as a deep reinforcement learning (DRL)-based algorithm. Simulation results demonstrate that deploying the DDPG algorithm yields significant advantages. In particular, the efficiency of the proposed spectrum management scheme is about 85%, compared to 35% for traditional auction methods.
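The VCG mechanism's incentive for truthful bidding can be seen in its simplest single-item form, the Vickrey (second-price) auction: the winner pays the externality it imposes on others, i.e. the second-highest bid. This sketch shows only that baseline rule, not the paper's modified VCG with QoS-based prioritization; the function name is illustrative.

```python
def vickrey_single_item(bids):
    """Second-price auction: highest bidder wins, pays the second-highest bid.

    Because the payment does not depend on the winner's own bid, bidding
    one's true valuation is a dominant strategy -- the property the paper's
    modified VCG mechanism builds on when scoring bidder truthfulness.
    """
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner = order[0]
    payment = bids[order[1]] if len(bids) > 1 else 0.0
    return winner, payment


# Illustrative: three VSPs bid for one spectrum block.
winner, payment = vickrey_single_item([3.0, 7.0, 5.0])
```

The paper's scheme extends this idea to multiple VSPs and spectrum blocks and adds QoS-driven prioritization, with DDPG handling the resulting allocation dynamics.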
Mounting a reconfigurable intelligent surface (RIS) on an unmanned aerial vehicle (UAV) holds promise for improving traditional terrestrial network performance. Unlike conventional methods deploying the passive RIS on UAVs, this study delves into the efficacy of an aerial active RIS (AARIS). Specifically, the downlink transmission of an AARIS network is investigated, where the base station (BS) leverages rate-splitting multiple access (RSMA) for effective interference management and benefits from the support of an AARIS for jointly amplifying and reflecting the BS's transmit signals. Considering both the nontrivial energy consumption of the active RIS and the limited energy storage of the UAV, we propose an innovative element selection strategy for optimizing the on/off status of active RIS elements, which adaptively manages the system's power consumption. To this end, a resource management problem is formulated, aiming to maximize the system energy efficiency (EE) by jointly optimizing the transmit beamforming at the BS, the element activation, the phase shift and the amplification factor at the active RIS, the RSMA common data rate at users, as well as the UAV's trajectory. Due to the dynamic nature of the UAV and user mobility, a deep reinforcement learning (DRL) algorithm is designed for resource allocation, utilizing meta-learning to adaptively handle fast time-varying system dynamics. According to simulations, integrating meta-learning yields a notable 36% increase in the system EE. Additionally, substituting AARIS for fixed terrestrial active RIS results in a 26% EE enhancement.
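The core trade-off behind the element selection strategy can be made concrete with a toy EE calculation. This is a minimal sketch under simplified assumptions, not the paper's optimization model: EE is taken as sum rate over total power, and each switched-on active-RIS element adds a fixed amplification power, so turning off elements that contribute little rate raises EE. All names and numbers are illustrative.

```python
def system_ee(rates, p_bs, mask, p_elem):
    """Toy energy efficiency (bits/Joule) under active-RIS element selection.

    Assumes total power = BS power + p_elem per switched-on element
    (mask entry 1 = on, 0 = off); switched-off elements draw no
    amplification power. Real models also include circuit power terms.
    """
    total_power = p_bs + p_elem * sum(mask)
    return sum(rates) / total_power


# Illustrative: two users' rates (bit/s), 10 W at the BS, 0.5 W per
# active element, with 3 of 4 RIS elements switched on.
ee_partial = system_ee(rates=[1e6, 2e6], p_bs=10.0, mask=[1, 0, 1, 1], p_elem=0.5)
ee_full = system_ee(rates=[1e6, 2e6], p_bs=10.0, mask=[1, 1, 1, 1], p_elem=0.5)
```

If disabling an element leaves the sum rate unchanged, EE strictly improves; the paper's DRL agent learns this on/off pattern jointly with beamforming, phase shifts, and the UAV trajectory rather than evaluating it in closed form.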