Professor Rahim Tafazolli FREng
Academic and research departments
Institute for Communication Systems, School of Computer Science and Electronic Engineering.
About
Biography
Rahim Tafazolli is Regius Professor of Electronic Engineering, Professor of Mobile and Satellite Communications, and Founder and Director of 5GIC, 6GIC and ICS (Institute for Communication Systems) at the University of Surrey. He has over 30 years of experience in digital communications research and teaching. He has authored and co-authored more than 1000 research publications and is regularly invited to deliver keynote talks and distinguished lectures at international conferences and workshops.
Research
Indicators of esteem
Professor Tafazolli was awarded the 28th KIA Laureate Award in 2015 for his contribution to communications technology.
Laureates of the Khwarizmi International Award (KIA) of Iran are selected from internationally distinguished scientists and researchers whose contributions to the advancement of science and technology are confirmed by the Iranian Research Organisation of Science and Technology (IROST) scientific committee.
Publications
In vehicle-to-infrastructure (V2I) networks, a cluster of multi-antenna access points (APs) can collaboratively conduct transmitter beamforming to provide data services (e.g., eMBB or URLLC). The collaboration between APs effectively forms a networked linear antenna-array with extra-large aperture (i.e., network-ELAA), where the wireless channel exhibits spatial non-stationarity. The major contribution of this work lies in the analysis of beamforming gain and radio coverage for network-ELAA non-stationary Rician channels, considering the AP clustering. Assuming that: 1) the total transmit power is fixed and evenly distributed over APs, and 2) the beam is formed only based on the line-of-sight (LoS) path, it is found that the beamforming gain is a concave function of the cluster size. The optimum size of the AP cluster varies with the user's location, the channel uncertainty and the data service. A user located farther from the ELAA requires a larger cluster size. URLLC is more sensitive to channel uncertainty than eMBB and thus requires a larger cluster size to mitigate the channel fading effect and extend the coverage. Finally, it is shown that the network-ELAA can offer significant coverage extension (50% or more in most cases) compared with the single-AP scenario.
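The concavity of the beamforming gain in the cluster size can be reproduced with a short Monte Carlo experiment. The sketch below is illustrative only and not the paper's model: it assumes a linear array of 64 single-antenna APs, a path-loss exponent of 3.5, a fixed Rician K-factor, LoS-only beamforming weights, and a fixed total power split evenly over the cluster; all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_beamforming_gain(n_aps, user_pos, k_rician=10.0, trials=2000):
    """Average receive power when the n_aps nearest APs beamform on the LoS
    path only. Total transmit power is fixed at 1 and split evenly over the
    cluster, so adding a distant AP dilutes the power given to strong links."""
    ap_pos = np.stack([np.linspace(0, 500, 64), np.zeros(64)], axis=1)  # linear ELAA
    d = np.linalg.norm(ap_pos - user_pos, axis=1)
    cluster = np.argsort(d)[:n_aps]              # cluster = nearest APs
    beta = d[cluster] ** -3.5                    # link-wise path loss (non-stationary)
    gains = []
    for _ in range(trials):
        los = np.exp(1j * rng.uniform(0, 2 * np.pi, n_aps))
        nlos = (rng.standard_normal(n_aps) + 1j * rng.standard_normal(n_aps)) / np.sqrt(2)
        h = np.sqrt(beta) * (np.sqrt(k_rician / (k_rician + 1)) * los
                             + np.sqrt(1 / (k_rician + 1)) * nlos)
        w = los.conj() / np.sqrt(n_aps)          # LoS-only beam, even power split
        gains.append(abs(h @ w) ** 2)
    return np.mean(gains)

# Gain first grows with the cluster, then falls as weak far-away links
# keep stealing transmit power: concave in the cluster size.
user = np.array([250.0, 100.0])
for n in (1, 2, 4, 8, 16, 32, 64):
    print(n, mean_beamforming_gain(n, user))
```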
In this paper, a novel nonlinear precoding (NLP) technique, namely constellation-oriented perturbation (COP), is proposed to tackle the scalability problem inherent in conventional NLP techniques. The basic concept of COP is to apply vector perturbation (VP) in the constellation domain instead of the symbol domain, as is done in conventional techniques. By this means, the computational complexity of COP is made independent of the size of multi-antenna (i.e., MIMO) networks; instead, it is related to the size of the symbol constellation. Through a widely linear transform, it is shown that COP has its complexity flexibly scalable in the constellation domain to achieve a good complexity-performance tradeoff. Our computer simulations show that COP offers performance comparable to that of optimum VP in small MIMO systems. Moreover, it significantly outperforms current sub-optimum VP approaches (such as degree-2 VP) in large MIMO systems whilst maintaining much lower computational complexity.
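For context, the conventional symbol-domain VP baseline that COP improves upon can be sketched in a few lines. The brute-force search below makes the scalability problem visible: its cost grows exponentially with the number of users, which is exactly what a constellation-domain method avoids. All parameters (tau = 4, search set {-1, 0, 1}) are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

def vp_precode(H, s, tau=4.0, search=(-1, 0, 1)):
    """Symbol-domain vector perturbation: min over l of ||H^-1 (s + tau*l)||^2.

    Brute force over complex-integer perturbations l, so complexity grows
    exponentially with the antenna count -- the scalability problem that a
    constellation-domain method such as COP is designed to avoid."""
    Hinv = np.linalg.inv(H)
    best, best_x = np.inf, None
    for re in product(search, repeat=len(s)):
        for im in product(search, repeat=len(s)):
            l = np.array(re) + 1j * np.array(im)
            x = Hinv @ (s + tau * l)
            p = np.real(x.conj() @ x)
            if p < best:
                best, best_x = p, x
    return best_x / np.sqrt(best)        # normalise to unit transmit power

K = 2                                    # small MIMO: brute force still tractable
H = (rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))) / np.sqrt(2)
s = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)     # QPSK symbols
x = vp_precode(H, s)
print(np.round(H @ x, 3))  # receiver sees a scaled s + tau*l; modulo-tau removes l
```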
Holographic-type Communication (HTC) is widely deemed an emerging type of augmented reality (AR) media which offers Internet users deeply immersive experiences. In contrast to traditional video content transmission, the characteristics and network requirements of HTC have been much less studied in the literature. Due to the high bandwidth requirements and various limitations of today's HTC platforms, large-scale HTC streaming has never been systematically attempted and comprehensively evaluated until now. In this paper, we introduce a novel HTC-based teleportation platform leveraging cloud-based remote production functions, supported by newly proposed adaptive frame buffering and end-to-end signalling techniques against network uncertainties, which for the first time is able to provide assured user experiences at the public Internet scale. Through real-life experiments based on strategically deployed cloud sites for remote production functions, we demonstrate the feasibility of supporting assured user performance for such applications at the global Internet scale.
This paper aims to handle the joint transmitter and noncoherent receiver design for multiuser multiple-input multiple-output (MU-MIMO) systems through deep learning. Given the deep neural network (DNN) based noncoherent receiver, the novelty of this work mainly lies in the multiuser waveform design at the transmitter side. According to the signal format, the proposed deep learning solutions can be divided into two groups. One group is called pilot-aided waveform, where the information-bearing symbols are time-multiplexed with the pilot symbols. The other is called learning-based waveform, where the multiuser waveform is partially or even completely designed by deep learning algorithms. Specifically, if the information-bearing symbols are directly embedded in the waveform, it is called a systematic waveform; otherwise, it is called a non-systematic waveform, where no artificial design is involved. Simulation results show that the pilot-aided waveform design outperforms the conventional zero-forcing receiver with least-squares (LS) channel estimation on small-size MU-MIMO systems. By exploiting the time-domain degrees of freedom (DoF), the learning-based waveform design further improves the detection performance by at least 5 dB in the high signal-to-noise ratio (SNR) range. Moreover, it is found that the traditional weight initialization method may cause a training imbalance among different users in the learning-based waveform design. To tackle this issue, a novel weight initialization method is proposed which provides a balanced convergence performance with no complexity penalty.
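The conventional baseline mentioned above (LS channel estimation followed by zero-forcing detection) is compact enough to sketch directly. The snippet below is a minimal illustration with assumed dimensions (2 users, 4 receive antennas) and orthogonal time-multiplexed pilots; it is the benchmark the learned waveforms are compared against, not the paper's DNN.

```python
import numpy as np

rng = np.random.default_rng(2)
K, M, snr_db = 2, 4, 15                 # users, receive antennas, per-pilot SNR
sigma = 10 ** (-snr_db / 20)

H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

# Time-multiplexed orthogonal pilots (one slot per user), then data symbols.
P = np.eye(K, dtype=complex)            # pilot matrix over K slots
s = (rng.choice([-1, 1], K) + 1j * rng.choice([-1, 1], K)) / np.sqrt(2)  # QPSK data

noise = lambda shape: sigma * (rng.standard_normal(shape)
                               + 1j * rng.standard_normal(shape)) / np.sqrt(2)
Y_pilot = H @ P + noise((M, K))
y_data = H @ s + noise(M)

H_ls = Y_pilot @ np.linalg.inv(P)       # least-squares channel estimate
s_zf = np.linalg.pinv(H_ls) @ y_data    # zero-forcing detection
print(np.round(s_zf, 2), "vs", np.round(s, 2))
```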
This paper presents a parallel computing approach that is employed to reconstruct original information bits from a non-recursive convolutional codeword in noise, with the goal of reducing the decoding latency without compromising the performance. This goal is achieved by cutting a received codeword into a number of sub-codewords (SCWs) and feeding them into a two-stage decoder. At the first stage, SCWs are decoded in parallel using the Viterbi algorithm or, equivalently, the brute-force algorithm. A major challenge arises in determining the initial state of the trellis diagram for each SCW, which is uncertain for all but the first; this results in multiple decoding outcomes for every SCW. To eliminate, or more precisely exploit, the uncertainty, a Euclidean-distance minimization algorithm is employed to merge neighboring SCWs; this is called the merging stage, which can also run in parallel. Our work reveals that the proposed two-stage decoder is optimal and has its latency growing logarithmically, instead of linearly as for the Viterbi algorithm, with respect to the codeword length. Moreover, it is shown that the decoding latency can be further reduced by employing artificial neural networks for the SCW decoding. Computer simulations are conducted for two typical convolutional codes, and the results confirm our theoretical analysis.
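The two-stage structure can be illustrated on a toy code. The sketch below uses the rate-1/2, constraint-length-3 (7,5) code, hard decisions and a Hamming metric in place of the paper's soft Euclidean metric, and keeps, for every sub-codeword and every start state, the best candidate per end state; merging is then a short dynamic program over boundary states. Everything here (segment length, code, error pattern) is an assumed toy configuration.

```python
import numpy as np
from itertools import product

G = [(1, 1, 1), (1, 0, 1)]            # rate-1/2, constraint-length-3 code (7,5 octal)

def step(state, bit):
    """One encoder step: returns (two output bits, next state)."""
    reg = (bit,) + state
    out = tuple(sum(g * r for g, r in zip(gen, reg)) % 2 for gen in G)
    return out, (bit, state[0])

def encode(bits):
    state, out = (0, 0), []
    for b in bits:
        o, state = step(state, b)
        out += o
    return out

def decode_segment(rx, start, L):
    """Stage 1: brute-force decode one SCW from a given start state,
    keeping the best (distance, input bits) per reachable end state."""
    best = {}
    for bits in product((0, 1), repeat=L):
        state, out = start, []
        for b in bits:
            o, state = step(state, b)
            out += o
        d = sum(a != b for a, b in zip(out, rx))
        if state not in best or d < best[state][0]:
            best[state] = (d, bits)
    return best

def two_stage_decode(rx, L, n_seg):
    states = list(product((0, 1), repeat=2))
    # Every (segment, start-state) pair is independent -> stage 1 parallelises.
    tab = [{s: decode_segment(rx[2*L*j:2*L*(j+1)], s, L) for s in states}
           for j in range(n_seg)]
    # Stage 2: merge neighbours by total-distance minimisation over the
    # boundary states (a parallel tree reduction in the paper; a short
    # forward pass here for brevity).
    dp = {(0, 0): (0, ())}            # encoder starts in the all-zero state
    for j in range(n_seg):
        nxt = {}
        for s, (cost, msg) in dp.items():
            for end, (d, bits) in tab[j][s].items():
                if end not in nxt or cost + d < nxt[end][0]:
                    nxt[end] = (cost + d, msg + bits)
        dp = nxt
    return min(dp.values())[1]

msg = tuple(int(b) for b in np.random.default_rng(3).integers(0, 2, 12))
rx = encode(msg)
rx[5] ^= 1                            # one channel error in the coded bits
print(two_stage_decode(rx, L=4, n_seg=3) == msg)
```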
Online advertising has become the backbone of the Internet economy by revolutionizing business marketing. It provides a simple and efficient way for advertisers to display their advertisements to specific individual users, and over the last couple of years has contributed to an explosion in the income stream for several Web-based businesses. For example, Google's income from advertising grew 51.6% between 2016 and 2018, to $136.8 billion. This exponential growth in advertising revenue has motivated fraudsters to exploit the weaknesses of the online advertising model to make money, and researchers to discover new security vulnerabilities in the model, to propose countermeasures and to forecast future trends in research. Motivated by these considerations, this paper presents a comprehensive review of the security threats to online advertising systems. We begin by introducing the motivation for online advertising systems, explain how they differ from traditional advertising networks, introduce terminology, and define the current online advertising architecture. We then devise a comprehensive taxonomy of attacks on online advertising to raise awareness among researchers about the vulnerabilities of the online advertising ecosystem. We discuss the limitations and effectiveness of the countermeasures that have been developed to secure entities in the advertising ecosystem against these attacks. To complete our work, we identify some open issues and outline some possible directions for future research towards improving security methods for online advertising systems.
Software-Defined Networking (SDN) has found applications in different domains, including wired and wireless networks. The SDN controller has a global view of the network topology, which is vulnerable to topology poisoning attacks, e.g., link fabrication and host-location hijacking. Adversaries can leverage these attacks to monitor flows or drop them. Current defence systems such as TopoGuard and TopoGuard+ can detect such attacks. In this paper, we introduce the Link Latency Attack (LLA), which can successfully bypass the defence mechanisms of these systems. In LLA, the adversary can add a fake link into the network and corrupt the controller's view of the network topology. This can be accomplished by compromising the end hosts, without the need to attack the SDN-enabled switches. We develop a Machine Learning-based Link Guard (MLLG) system to provide the required defence against LLA. We test the performance of our system using an emulated network on Mininet, and the obtained results show an accuracy of 98.22% in detecting the attack. Interestingly, MLLG improves the detection accuracy of TopoGuard+ by 16%.
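A minimal sketch of the supervised-detection idea follows, under stated assumptions: the paper's feature set is not reproduced here, so the snippet invents two hypothetical per-link features (mean control-path latency and jitter), generates synthetic labelled data, and trains a random-forest classifier. It illustrates the shape of an MLLG-style detector, not the actual system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
n = 2000

# Synthetic per-link features: a fabricated link relayed through compromised
# hosts tends to show higher and more jittery control-path latency.
genuine = np.column_stack([rng.normal(0.8, 0.2, n), rng.normal(0.05, 0.02, n)])
fake = np.column_stack([rng.normal(2.5, 0.8, n), rng.normal(0.4, 0.15, n)])
X = np.vstack([genuine, fake])          # [mean latency ms, jitter ms]
y = np.r_[np.zeros(n), np.ones(n)]      # 0 = genuine link, 1 = LLA

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("detection accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```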
This paper provides the mobility and coverage evaluation of the New Radio (NR) Physical Downlink Control Channel (PDCCH) for Point-to-Multipoint (PTM) use cases, e.g., eMBMS (evolved Multimedia Broadcast Multicast Services). The evaluation methodology is based on analyses and link-level simulations where the channel model includes AWGN, TDL-A, TDL-C as well as a modified 0 dB echo to model different PTM scenarios. The final version of this work aims to provide insightful guidelines on the delay/echo tolerance of the NR PDCCH in terms of mobility and coverage. In this paper, it is observed that under the eMBMS scenario, i.e., an SFN channel, due to the time-domain granularity of pilots distributed inside the PDCCH region, the system can support very high user speeds/Doppler with a relatively low requirement on the transmit Signal/Carrier-to-Noise Ratio (SNR/CNR). On the other hand, the system falls short in coverage due to the low frequency-domain granularity of pilots, which affects the channel estimation accuracy.
5G New Radio (NR) Release 15 was specified in June 2018. It introduces numerous changes and potential improvements for physical layer data transmissions, although only point-to-point (PTP) communications are considered. In order to use physical data channels such as the Physical Downlink Shared Channel (PDSCH), it is essential to guarantee a successful transmission of control information via the Physical Downlink Control Channel (PDCCH). Taking into account these two aspects, in this paper we first analyze the PDCCH processing chain in NR PTP as well as in the state-of-the-art Long Term Evolution (LTE) point-to-multipoint (PTM) solution, i.e., evolved Multimedia Broadcast Multicast Service (eMBMS). Then, via link-level simulations, we compare the performance of the two technologies, observing the Bit/Block Error Rate (BER/BLER) for various scenarios. The objective is to identify the performance gap brought by physical layer changes in NR PDCCH as well as to provide insightful guidelines on the control channel configuration towards NR PTM scenarios.
In this work, we provide the first attempt to evaluate the error performance of Rate-Splitting (RS) based transmission strategies with constellation-constrained coding/modulation. The considered scenario is an overloaded multigroup multicast, where RS can mitigate the inter-group interference and thus achieve a better max-min fair group rate than conventional transmission strategies. We bridge the RS-based rate optimization with modulation-coding scheme selection, and implement them in a developed transceiver framework with either a linear or a non-linear receiver, the latter equipped with a generalized sphere decoder. Coded bit error rate simulations demonstrate that, while the conventional strategies suffer from an error floor in the considered scenario, the RS-based strategy delivers superior performance even with low-complexity receiver techniques. The proposed analysis, transceiver framework and evaluation methodology provide a generic baseline solution to validate the effectiveness of the RS-based system design in practice.
Integrating Low Earth Orbit (LEO) satellites with terrestrial network infrastructures to support ubiquitous Internet service coverage has recently received increasing research momentum. One distinct challenge is the frequent topology change caused by the constellation behaviour of LEO satellites. In the context of software-defined networking (SDN), the controller function that is originally required to control the conventional data plane fulfilled by terrestrial SDN switches will need to expand its responsibility to cover their counterparts in space, namely LEO satellites used for data forwarding. As such, seamless integration of the fixed control plane on the ground and the mobile data plane fulfilled by constellation LEO satellites becomes a distinct challenge. In this paper, we propose the Virtual Data-Plane Addressing (VDPA) scheme, which leverages IP addresses to represent virtual switches at fixed space locations that are periodically instantiated by the nested LEO satellites traversing them in a predictable manner. With such a scheme, the changing data-plane network topology incurred by the LEO satellite constellation can be made completely agnostic to the control plane on the ground, thus enabling a native approach to supporting seamless communication between the two planes. Our testbed-based experiment results demonstrate the technical feasibility of the proposed VDPA-based flow rule manipulation mechanism in terms of data plane performance.
Live holographic teleportation is an emerging media application that allows Internet users to communicate with each other in a fully immersive manner. One distinct feature of such an application is the capability of simultaneously teleporting multiple objects from different network locations to the receiver's field of view, mimicking the effect of group-based communications in a common physical space. In this case, teleportation frames from individual sources need to be stringently synchronized in order to assure user Quality of Experience (QoE) in terms of avoiding the perception of motion misalignment at the receiver side. In this paper, we carry out systematic performance evaluations on how different Internet path conditions may affect the teleportation frame synchronisation performance. Based on this, we present a lightweight, edge-computing based scheme that is able to achieve controllable frame synchronisation operations for multi-source based teleportation applications at the Internet scale.
The introduction of the Bitcoin cryptocurrency has inspired businesses and researchers to investigate the technical aspects of blockchain and DLT systems. However, today's blockchain technologies still have distinct limitations in scalability and flexibility in terms of large size and dynamic reconfigurability. Sharding appears to be a promising solution to scale out the blockchain system horizontally by dividing the entire network into multiple shards or clusters. However, the flexibility and reconfigurability of these clusters need further research. In this paper, we propose two efficient mechanisms to enable flexible dynamic re-clustering of the blockchain network, including blockchain cluster merging and splitting operations. Such mechanisms offer a solution for specific application scenarios such as microgrids and other edge-based applications where clusters of autonomous systems potentially require structure reconfigurations. The proposed mechanisms offer three-stage procedures to merge and split multiple clusters. Based on our simulation experiments, we show that the proposed merging and splitting operations based on the proof-of-work (PoW) consensus algorithm can be optimized to reduce the merging time considerably (by a factor of roughly 1/22000 based on 100 blocks), which effectively reduces the overall merging and splitting completion time, interruption time and required computation power.
A cell-free massive multiple-input multiple-output (MIMO) uplink is investigated in this paper. We address a power allocation design problem that considers two conflicting metrics, namely the sum rate and fairness. Different weights are allocated to the sum rate and fairness of the system, based on the requirements of the mobile operator. The knowledge of the channel statistics is exploited to optimize power allocation. We propose to employ large-scale fading (LSF) coefficients as the input of a twin delayed deep deterministic policy gradient (TD3) algorithm. This enables us to solve the non-convex sum rate-fairness trade-off optimization problem efficiently. Then, we exploit a use-and-then-forget (UatF) technique, which provides a closed-form expression for the achievable rate. The sum rate-fairness trade-off optimization problem is subsequently solved through a sequential convex approximation (SCA) technique. Numerical results demonstrate that the proposed algorithms outperform conventional power control algorithms in terms of both the sum rate and the minimum user rate. Furthermore, the TD3-based approach can increase the median sum rate by 16%-46% and the median minimum user rate by 11%-60% compared to the proposed SCA-based technique. Finally, we investigate the complexity and convergence of the proposed scheme.
Quantization characterizes the analogue-to-digital converters (ADCs) in massive MIMO systems. The design of the quantization function, i.e., the quantization thresholds, depends on the quantization step, which adapts to changes in transmit power and noise variance. Since the objective of utilizing low-resolution ADCs is to reduce the cost of massive MIMO, we question whether an adaptive-threshold quantization function is necessary at all. It is found that when maximum-likelihood (ML) detection is employed, fixing the quantization thresholds of low-resolution ADCs does not cause significant performance loss. Moreover, such a fixed-threshold quantization function requires no knowledge of the signal power, which reduces the hardware cost of ADCs. Simulations comparing fixed-threshold and adaptive-threshold quantization under various operating conditions are presented in this paper.
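The fixed- versus adaptive-threshold comparison can be prototyped in a few lines. The sketch below is an assumed toy setting, not the paper's system: a scalar 4-PAM symbol through AWGN, a 2-bit quantizer whose three thresholds either track the signal amplitude or stay fixed, and ML detection computed from the Gaussian probabilities of each quantizer bin.

```python
import numpy as np
from math import erf, sqrt, inf

rng = np.random.default_rng(5)
Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))     # standard normal CDF

def ser(amp, thresholds, n_sym=20000, sigma=1.0):
    """Symbol error rate of ML detection of 4-PAM after a 2-bit quantizer."""
    levels = amp * np.array([-3, -1, 1, 3])
    edges = [-inf] + list(thresholds) + [inf]
    s = rng.choice(levels, n_sym)
    q = np.digitize(s + sigma * rng.standard_normal(n_sym), thresholds)
    # ML over quantizer bins: P(bin | level) from the Gaussian CDF.
    lik = np.array([[Phi((edges[b + 1] - lv) / sigma) - Phi((edges[b] - lv) / sigma)
                     for b in range(4)] for lv in levels])
    s_hat = levels[np.argmax(lik[:, q], axis=0)]
    return np.mean(s_hat != s)

for amp in (0.5, 1.0, 2.0, 4.0):
    adaptive = ser(amp, amp * np.array([-2, 0, 2]))   # thresholds track power
    fixed = ser(amp, np.array([-2.0, 0.0, 2.0]))      # thresholds never move
    print(f"amp={amp}: adaptive SER={adaptive:.3f}, fixed SER={fixed:.3f}")
```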
Detecting high-resolution signals at binary-array receivers can yield a stochastic-resonance phenomenon. It is found, through mathematical means, that the error probability of maximum-likelihood detection (MLD) forms a convex function of the SNR, and the optimum operating SNR increases monotonically with the signal resolution. This phenomenon encourages the use of MIMO at higher SNRs and SIMO at lower SNRs in terms of the error probability, since the former typically has a higher signal resolution than the latter. This observation also motivates a fundamental rethinking of whether to use MIMO or SIMO for wireless communications given binary-array receivers. In fact, there are a number of arguable advantages for SIMO, including wider coverage, higher point-to-point throughput, as well as lower complexity of the MLD. All of these are extensively investigated in this paper through both analytical work and computer simulations.
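The stochastic-resonance effect itself is easy to reproduce numerically. In the assumed toy model below (not the paper's setup), a 4-PAM symbol is observed through K independent one-bit (sign) quantizers and detected by ML: with too little noise all signs agree and the amplitude information is lost, with too much noise the signs are random, so the error probability is minimised at an interior noise level.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(6)
Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))     # standard normal CDF

def mld_error(sigma, K=16, n_sym=20000):
    """Error rate of ML detection of a 4-PAM symbol from K one-bit observations."""
    levels = np.array([-3.0, -1.0, 1.0, 3.0])
    s = rng.choice(levels, n_sym)
    ones = ((s[:, None] + sigma * rng.standard_normal((n_sym, K))) > 0).sum(1)
    p1 = np.clip([Phi(lv / sigma) for lv in levels], 1e-12, 1 - 1e-12)
    # Log-likelihood of observing `ones` positive signs out of K, per level.
    ll = (np.log(p1)[:, None] * ones[None, :]
          + np.log(1 - p1)[:, None] * (K - ones)[None, :])
    s_hat = levels[np.argmax(ll, axis=0)]
    return np.mean(s_hat != s)

# Error probability is U-shaped in the noise level: stochastic resonance.
for sigma in (0.1, 0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"sigma={sigma}: P(error)={mld_error(sigma):.3f}")
```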
Semantic communication is a new paradigm for information transmission that integrates the essential meaning (semantics) of the message into the communication process. However, as in classical wireless communications, the open nature of wireless channels poses security risks for semantic communications. In this paper, we characterize information-theoretic limits for the secure transmission of a semantic source over a wiretap channel. Under separate secrecy and distortion constraints for the semantics and the observed data, we present general inner and outer bounds on the rate-distortion-equivocation region. We also specialize the general region to the case of a Gaussian source and a Gaussian wiretap channel and provide numerical evaluations.
Advancements in satellite technology have made direct satellite-to-device connectivity a viable solution for ensuring global access. This method is designed to provide Internet connectivity to remote, rural, or underserved areas where traditional cellular or broadband networks are lacking or insufficient. This survey provides an in-depth review of multi-satellite Multiple Input Multiple Output (MIMO) systems as a potential solution for addressing the link budget challenge in direct satellite-to-device communication. Special attention is given to works considering multi-satellite MIMO systems, both with and without satellite collaboration; in this context, collaboration refers to sharing data between satellites to improve the performance of the system. The survey starts by highlighting the industry's views on the importance of enabling direct satellite-to-device communications. It then explains several fundamental aspects of satellite communications (SatComs), which are vital prerequisites for investigating multi-satellite MIMO systems. These aspects encompass satellite orbits, the structure of satellite systems, SatCom links, including the inter-satellite links (ISLs) which facilitate satellite cooperation, satellite frequency bands, satellite antenna design, and satellite channel models, which should be known or estimated for effective data transmission to and from multiple satellites. Furthermore, this survey distinguishes itself by providing more comprehensive insights than other surveys: it delves into Orthogonal Time Frequency Space (OTFS) within the channel model section, details ISL noise and the ISL channel model, and extends the ISL section by thoroughly investigating hybrid FSO/RF ISLs. Finally, analytical comparisons of simulation results from these works are presented to highlight the advantages of employing multi-satellite MIMO systems.
This paper presents wideband channel measurements for indoor Non-Line-of-Sight (NLoS) fixed links operating at sub-Terahertz (sub-THz) frequencies (92-110 GHz) with single scattered reflections. Seemingly "smooth" surfaces generate substantial internal multipath, causing frequency-selective scattered reflections owing to the short wavelength. Unlike specular reflections, no single well-defined reflection point aligns the incidence angle (θi) and reflection angle (θr) to form the lowest path loss. This means that the scattering effect of the reflective surface motivates a search for the best angular alignment of the Receiver (Rx) antenna given a specific angle from the Transmitter (Tx) antenna. The wideband effective Radar Cross Section (RCS) is derived and computed based on the bi-static radar equation, which specifically accounts for the effect of alignment angle on the defined frequency-selective RCS. The measurements reveal alignment-variant angular scattering from large-scale discontinuities such as wall partitions, television screens and metallic reinforcement studs, offering valuable insights for the design and deployment of future wireless communication systems operating in the sub-THz band.
Exploiting ultra-wide bandwidths is a promising approach to achieve the terabits per second (Tbps) data rates required to unlock emerging mobile applications like mobile extended reality and holographic telepresence. However, conventional digital systems are unable to exploit such bandwidths efficiently. In particular, the power consumption of ultra-fast, high-precision digital-to-analogue and analogue-to-digital converters (DACs/ADCs) for ultra-wide bandwidths becomes impractical. At the same time, achieving ultra-fast digital signal processing becomes extremely challenging in terms of power consumption and processing latency due to the complexity of state-of-the-art processing algorithms (e.g., "soft" detection/decoding) and the fact that the increased sampling rates challenge the speed capabilities of modern digital processors. To overcome these bottlenecks, there is a need for signal processing solutions that can, ideally, avoid DACs/ADCs while minimizing both the power consumption and processing latency. One potential approach in this direction is to design systems that do not require DACs/ADCs and perform all the corresponding processing directly in the analogue domain. Despite existing attempts to develop individual components of the transceiver chain in the analogue domain, as we discuss in detail in this work, the feasibility of complete analogue processing in ultra-fast wireless systems is still an open research topic. In addition, existing analogue-based approaches have inferior spectrum utilization compared to digital approaches, partly due to their inability to exploit recent advances in digital systems such as "soft" detection/decoding. In this context, we also discuss the challenges related to performing "soft" detection/decoding directly in the analogue domain, as recently proposed by the DigiLogue processing concept, and we show with a simple example that analogue-based "soft" detection/decoding is feasible and can achieve the same error performance as digital approaches with more than 37× power savings. In addition, we discuss several challenges related to the design of ultra-fast, fully analogue wireless receivers that can perform "soft" processing directly in the analogue domain, and we suggest research directions to overcome these challenges.
An important issue for cellular network operators is how to maintain radio coverage so that any issues can be addressed before they impact user services. This is particularly important in dense small cell network deployment scenarios such as vehicular networks. Antenna electrical tilt is a key factor in this, as unintended deviations from the planned value can adversely affect coverage and service reliability. We propose a novel method to detect antenna tilt anomalies using existing data sources without the need for additional hardware to be deployed in the radio access network. Our approach goes beyond previous techniques by using federated unsupervised learning based on polar coordinates, together with a geometrical transformation to normalise data across multiple sites. By using this approach to combine scarce training data from multiple cells, we can achieve detection accuracy in excess of 95% in a way that minimises training data size as well as computing power and memory usage.
Among the existing satellite types, Low Earth Orbit (LEO) satellites provide short round-trip delays and are becoming increasingly important. Due to their low orbital profile, LEO satellites can provide high-speed, low-latency network services with no dead zones for ground users. However, as the number of satellites continues to increase, frequency bands, as non-renewable resources, will seriously restrict the future development of the space-earth integrated network. In this paper, a flexible spectrum sharing and cooperative service method is proposed to address the co-linear interference caused by LEO satellites while they pass through the coverage area of the GEO beam, and to allow the LEO satellites to provide services for multiple LEO ground users. In our proposed scheme, through continuous power allocation optimization, we ensure that the service of LEO satellites will not reduce the service quality of the GEO beam. By taking full advantage of the cooperation between LEO satellites, their quality of service can be significantly improved. Simulation results show that our proposed scheme converges quickly and that the transmission efficiency and stability of the system can be guaranteed.
The increasing complexity of communication systems, following the advent of heterogeneous technologies, services and use cases with diverse technical requirements, provides a strong case for the use of artificial intelligence (AI) and data-driven machine learning (ML) techniques in studying, designing and operating emerging communication networks. At the same time, the access and ability to process large volumes of network data can unleash the full potential of a network orchestrated by AI/ML to optimise the usage of available resources while keeping both CapEx and OpEx low. Driven by these new opportunities, the ongoing standardisation activities indicate strong interest to reap the benefits of incorporating AI and ML techniques in communication networks. For instance, 3GPP has introduced the network data analytics function (NWDAF) at the 5G core network for the control and management of network slices, and for providing predictive analytics, or statistics about past events, to other network functions, leveraging AI/ML and big data analytics. Likewise, at the radio access network (RAN), the O-RAN Alliance has already defined an architecture to infuse intelligence into the RAN, where closed-loop control models are classified based on their operational timescale, i.e., real-time, near-real-time, and non-real-time RAN intelligent control (RIC). Different from the existing related surveys, in this review article we group the major research studies in the design of model-aided ML-based transceivers following the breakdown suggested by the O-RAN Alliance. At the core and the edge networks, we review the ongoing standardisation activities in intelligent networking and the existing works cognisant of the architecture recommended by 3GPP and ETSI. We also review the existing trends in ML algorithms running on low-power micro-controller units, known as TinyML. We conclude with a summary of recent and currently funded projects on intelligent communications and networking. This review reveals that the telecommunication industry and standardisation bodies have been mostly focused on non-real-time RIC, data analytics at the core and the edge, AI-based network slicing, and vendor inter-operability issues, whereas most recent academic research has focused on real-time RIC. In addition, intelligent radio resource management and aspects of intelligent control of the propagation channel using reflecting intelligent surfaces have captured the attention of ongoing research projects.
Our earlier work has demonstrated that a sufficiently trained recurrent neural network (RNN) can effectively detect base station performance degradations. We encountered a performance limit, however: the accuracy gain diminishes as the RNN deepens. In this paper, we investigate the performance limit of a well-trained RNN by visualising its processes and modelling its internal operation. We first illustrate that inputs following a certain probability density undergo transformation in the RNN. By linearising the RNN process, we then develop a linear model to analyse the transformation. Using the model, we not only unveil insights into RNN operational behaviour, but are also able to explain the effect of diminishing gains in deeper RNNs. Finally, we validate our model and demonstrate its ability to accurately predict the performance of a well-trained RNN.
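The linearisation step can be illustrated concretely. The sketch below (an assumed toy, not the paper's model) expands one tanh RNN step to first order around an operating point: the Jacobian of tanh is diagonal, so small deviations in the hidden state and input are transformed by a purely linear map, which is the kind of linear surrogate the analysis above builds on.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 8, 4
W = rng.standard_normal((n, n)) / np.sqrt(n)
U = rng.standard_normal((n, m)) / np.sqrt(m)

def rnn_step(h, x):
    return np.tanh(W @ h + U @ x)

# Linearise around an operating point (h0, x0): d tanh(z)/dz = 1 - tanh(z)^2.
h0, x0 = rng.standard_normal(n) * 0.1, rng.standard_normal(m) * 0.1
z0 = W @ h0 + U @ x0
J = np.diag(1 - np.tanh(z0) ** 2)      # diagonal Jacobian of the nonlinearity

def linear_step(h, x):
    return np.tanh(z0) + J @ (W @ (h - h0) + U @ (x - x0))

dh, dx = 0.01 * rng.standard_normal(n), 0.01 * rng.standard_normal(m)
exact = rnn_step(h0 + dh, x0 + dx)
approx = linear_step(h0 + dh, x0 + dx)
print("max linearisation error:", np.abs(exact - approx).max())
```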
This paper proposes a secure transmission scheme for reconfigurable intelligent surface (RIS) aided non-terrestrial cooperative networks (NTCN), where a practical phase-dependent model is considered in which the RIS reflection amplitudes change with the corresponding discrete phase shifts. Moreover, we employ a full-duplex transmission scheme at the relay nodes to reduce the long-range signal loss and improve the security between the satellite and the relay node. To solve the complex non-convex joint RIS reflection coefficient and relay selection optimization problem, we propose the deep cascade correlation learning (DCCL) algorithm to enhance optimization efficiency. Simulation results show that the proposed DCCL-based method significantly improves the secrecy capacity compared to random relay selection and RIS coefficient methods.
This letter presents experimental angularly resolved measurements and a model framework for characterizing continuous wideband reflection coefficients at sub-terahertz frequencies between 92 and 110 GHz. Surfaces that appear "smooth" but with internal features comparable to the wavelength have been shown to cause frequency-selective scattering. An nth-degree polynomial regression model is employed to quantify non-linear multi-path scattering that cannot be described by a best-fit line, with least-squares regression applied to find the best-fitting polynomial and hence the coefficients. The obtained coefficients are then validated against the delay-domain statistics of the propagation channel, demonstrating the proposed model's good agreement with measurements and its efficiency in reproducing angle-dependent reflection coefficients for use in ray-tracing tools.
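The polynomial-regression idea maps directly onto a standard least-squares fit. The snippet below is purely illustrative: it synthesises assumed angle-dependent reflection magnitudes (the numbers are invented, not measured data) and fits polynomials of increasing degree with np.polyfit, which is the least-squares machinery the letter's model framework relies on.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic angle-dependent reflection-coefficient magnitudes (dB): a smooth
# lobe around the best alignment angle plus measurement scatter.
angles = np.linspace(10, 80, 60)                       # Rx alignment angle (deg)
true_db = -8 - 0.004 * (angles - 42) ** 2              # invented ground truth
meas_db = true_db + rng.normal(0, 1.0, angles.size)    # measurement scatter

# Least-squares fit of an nth-degree polynomial; in practice the degree would
# be chosen by held-out error rather than in-sample fit to avoid over-fitting.
for n in (1, 2, 4, 8):
    coeffs = np.polyfit(angles, meas_db, n)
    rmse = np.sqrt(np.mean((np.polyval(coeffs, angles) - meas_db) ** 2))
    print(f"degree {n}: in-sample RMSE = {rmse:.2f} dB")
```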
IEEE 802.11ax Spatial Reuse (SR) is a new category in the IEEE 802.11 family, aiming at improving the spectrum efficiency and network performance in dense deployments. The main, and perhaps the only, SR technique in that amendment is the Basic Service Set (BSS) Color. It aims at increasing the number of concurrent transmissions in a specific area, based on a newly defined Overlapping BSS/Preamble-Detection (OBSS/PD) threshold and the Received Signal Strength Indication (RSSI) from Overlapping BSSs (OBSSs). In this paper, we propose a Control OBSS/PD Sensitivity Threshold (COST) algorithm for adjusting the OBSS/PD threshold based on the interference level and the RSSI from the associated recipient(s). In contrast to the Dynamic Sensitivity Control (DSC) algorithm that was proposed for setting OBSS/PD, COST is fully aware of any changes in OBSSs and can be applied to any IEEE 802.11ax node. Simulation results in various scenarios show a clear performance improvement of up to 57% gain in throughput over a conservative fixed OBSS/PD for the legacy BSS Color and DSC.
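A threshold-adjustment rule of this kind can be sketched in a few lines. The function below is not the COST algorithm itself (whose exact update rule is defined in the paper); it only illustrates the general shape under assumed inputs: pick a threshold between the measured OBSS interference and the recipient RSSI, clamped to the 802.11ax OBSS/PD range of -82 to -62 dBm, with an assumed 10 dB margin.

```python
def obss_pd_threshold(rssi_own_dbm, interference_dbm, margin_db=10.0):
    """Illustrative OBSS/PD selection, not the paper's COST rule.

    Stay `margin_db` below the weakest intended-link RSSI so own frames are
    never ignored, while staying above the measured OBSS interference so
    OBSS frames can be ignored and concurrent transmissions proceed.
    The result is clamped to the 802.11ax range [-82, -62] dBm."""
    candidate = min(rssi_own_dbm - margin_db, interference_dbm + margin_db)
    return max(-82.0, min(-62.0, candidate))

# A node with a -55 dBm link to its recipient and -80 dBm OBSS interference
# can raise its threshold above the interference and transmit concurrently.
print(obss_pd_threshold(rssi_own_dbm=-55.0, interference_dbm=-80.0))  # -> -70.0
```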
A novel high-isolation, monostatic, circularly polarized (CP) simultaneous transmit and receive (STAR) anisotropic dielectric resonator antenna (DRA) is presented. The proposed antenna is composed of two identical but orthogonally positioned annular sectoral anisotropic dielectric resonators. Each CP resonator consists of alternating stacked dielectric layers of relative permittivities 2 and 15 and is excited by a coaxial probe from the two opposite ends to produce left- and right-hand CP. Proper element spacing and a square absorber placed between the resonators maximize Tx/Rx isolation. Such a structure provides an in-band full-duplex (IBFD) CP-DRA system. Measurement results exhibit Tx/Rx isolation better than 50 dB over the desired operating bandwidth (5.87–5.97 GHz) with peak gains of 5.49 and 5.08 dBic for Ports 1 and 2, respectively.
High mobility scenarios arise in applications such as low earth orbit (LEO) satellite and vehicle-to-everything (V2X) communications. A standardized approach to dealing with high mobility scenarios is to use flexible sub-frame structures with a higher pilot density in the time domain, which leads to reduced spectrum efficiency. We propose a supplementary algorithm to improve multiple-antenna receiver performance in high mobility scenarios for a given sub-frame structure, compared to the conventional 3GPP pilot- and data-based interference rejection receivers. The main feature of high mobility (non-stationary) scenarios is that different symbols in the desired signal sub-frame may be received under different propagation and/or interference conditions. Recently, we addressed a non-stationary interference rejection scenario in a slowly varying propagation environment with asynchronous (intermittent) interference by developing an interference rejection combining algorithm, where the pilot-based estimate of the interference-plus-noise covariance matrix is regularized by the data-based estimate of the covariance matrix. In this paper, we: 1) extend the data-regularized solution to general high mobility scenarios, and 2) demonstrate its efficiency compared to the conventional pilot- and data-based receivers for different sub-frame formats in uplink transmissions in the LEO satellite scenario with high residual Doppler frequency, with and without hardware impairments.
Decentralized joint transmit power and beamforming selection for multiple-antenna wireless ad hoc networks operating in a multi-user interference environment is considered. An important feature of the considered environment is that altering the transmit beamforming pattern at some node generally creates more significant changes to the interference scenarios of neighboring nodes than varying the transmit power. Based on this premise, a good neighbor algorithm is formulated such that, at the sensing node, a new beamformer is selected only if it needs less than a given fraction of the transmit power required for the current beamformer; otherwise, the node keeps the current beamformer and achieves the performance target only by means of power adaptation. Equilibrium performance and convergence behavior of the proposed algorithm, compared to the best response and regret matching solutions, are demonstrated by means of semi-analytic Markov chain performance analysis for small-scale networks and simulations for large-scale networks.
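The decision rule at each node is compact enough to state directly. The sketch below is an assumed paraphrase of the rule described above, with invented beam names and power values; the algorithm's equilibrium and convergence analysis is in the paper.

```python
def good_neighbor_update(current_bf, candidates, required_power, gamma=0.5):
    """Decentralised beamformer update at a sensing node (illustrative only).

    `required_power[b]` is the locally estimated transmit power needed to
    meet the node's performance target with beamformer b. A new beamformer
    is adopted only if it needs less than `gamma` times the power of the
    current one; otherwise the node keeps its beam and adapts power alone,
    limiting the interference-pattern churn inflicted on neighbours."""
    best = min(candidates, key=lambda b: required_power[b])
    if required_power[best] < gamma * required_power[current_bf]:
        return best, required_power[best]
    return current_bf, required_power[current_bf]

# Invented powers (mW) to meet the target with each beam pattern.
power = {"beam_a": 40.0, "beam_b": 25.0, "beam_c": 12.0}
print(good_neighbor_update("beam_a", power.keys(), power, gamma=0.5))
# beam_c needs 12 < 0.5 * 40 -> switch; beam_b alone (25 > 20) would not.
```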
As over-the-top (OTT) applications such as video dominate global mobile traffic, conventional content delivery techniques such as caching no longer suffice to cope with mobile network users' requirements due to, e.g., fluctuating radio conditions. In legacy 4G networks, the mobile network operator (MNO) and OTT service providers (OSPs) are logically decoupled from each other, preventing them from sharing necessary context and enabling in-network context-aware intelligence. The recently standardized 5G network architecture's softwarized and virtualized nature opens up new opportunities for flexible deployment of MNO- and OSP-operated network functions. In this work, we first extend the current 5G standard to enable third-party stakeholders to deploy their own user-plane functions (UPFs) within the MNO infrastructure. Based on this, we propose a service function chaining (SFC) framework within the 5G core network, which allows the MNO to dynamically determine the optimal set of UPFs that each flow should traverse based on their real-time contexts. The proposed framework has been implemented in a testbed network. Through realistic experiments, we demonstrate that the UPF deployment strategy plays a crucial role in the resulting SFC performance, and that our proposed scheme can achieve performance close to the benchmark. Furthermore, we establish recommendations on best practices for UPF deployment strategies in 5G networks.
This paper proposes Modulation-based Non-Orthogonal Multiple Access (M-NOMA) and Modulation-based cooperative NOMA (CM-NOMA), which greatly enhance the performance of existing NOMA. Among the various techniques that improve the stability and reliability of NOMA, M-NOMA is particularly well suited to adoption in 5G communication systems. M-NOMA exploits the modulation itself to create orthogonality between users; this simple approach not only simplifies the system but also improves overall performance. In M-NOMA, the base station (BS) maps near and far users onto separate components of the modulation constellation; in this paper, the BS modulates near users on the real component and far users on the imaginary component of the QPSK constellation. M-NOMA reduces the symbol error rate (SER), inter-cell and inter-cluster interference, latency, and computational complexity in terms of the number of SIC operations. Simulation results show the superior performance of M-NOMA over conventional NOMA in terms of SER, data rate and interference, and compare the SER of CM-NOMA, M-NOMA, C-NOMA and NOMA. We also derive expressions for the SINR and achievable data rate of M-NOMA.
The recent paradigm shift towards the transmission of large numbers of mutually interfering information streams, as in the case of aggressive spatial multiplexing, combined with requirements for very low processing latency despite the frequency plateauing of traditional processors, initiates a need to revisit the fundamental maximum-likelihood (ML) and, consequently, the sphere-decoding (SD) detection problem. This work presents the design and VLSI architecture of MultiSphere: the first method to massively parallelize the tree search of large sphere decoders in a nearly-concurrent manner, without compromising their maximum-likelihood performance, and while keeping the overall processing complexity comparable to that of highly-optimized sequential sphere decoders. For a 10×10 MIMO spatially multiplexed system with 16-QAM modulation and 32 processing elements, our MultiSphere architecture can reduce latency by 29× against well-known sequential SDs, approaching the processing latency of linear detection methods, without compromising ML optimality. In MIMO multicarrier systems targeting exact ML decoding, MultiSphere achieves processing latency and hardware efficiency that are orders of magnitude improved compared to approaches employing one SD per subcarrier. In addition, for 16×16 both "hard"- and "soft"-output MIMO systems, approximate MultiSphere versions are shown to achieve similar error-rate performance to state-of-the-art approximate SDs with similar parallelization properties, by using only one tenth of the processing elements, and to achieve up to approximately 9× increased energy efficiency.
It is well documented that the achievable throughput of MIMO systems that employ linear beamforming can significantly degrade when the number of concurrently transmitted information streams approaches the number of base-station antennas. To increase the number of supported streams, and therefore the achievable net throughput, non-linear beamforming techniques have been proposed. These beamforming approaches are typically evaluated via simulations or via simplified over-the-air experiments that are sufficient for validating their basic principles, but they neither provide insights about potential practical challenges when trying to adopt such approaches in a standards-compliant framework, nor do they provide any indication of the achievable performance when they are part of a standards-compliant protocol stack. In this work, for the first time, we evaluate non-linear beamforming in a 3GPP standards-compliant framework, using our recently proposed SWORD research platform. SWORD is a flexible, open-for-research, software-driven platform that enables the rapid evaluation of advanced algorithms without the extensive hardware optimizations that can prevent promising algorithms from being evaluated in a standards-compliant stack. We show that in an indoor environment, vector perturbation-based non-linear beamforming can provide up to 46% throughput gains compared to linear approaches for 4×4 MIMO systems, while it can still provide gains of nearly 10% even if the number of base-station antennas is doubled.
The ubiquitous availability of wireless networks and devices provides a unique opportunity to leverage the corresponding communication signals to enable wireless sensing applications. In this article, we develop a new framework for environment sensing by opportunistic use of mmWave communication signals. The proposed framework is based on a mixture of conventional and Neural Network (NN) signal processing techniques for simultaneous counting and localization of multiple targets in the environment in a bi-static setting. In this framework, multi-modal delay, Doppler and angular features are first derived from the Channel State Information (CSI) estimated at the receiver, and then a transformer-based NN architecture exploiting attention mechanisms, called CSIformer, is designed to extract the most effective features for sensing. We also develop a novel post-processing technique based on Kullback-Leibler (KL) minimization to transfer knowledge between the counting and localization tasks, thereby simplifying the NN architecture. Our numerical results show accurate counting and localization capabilities that significantly outperform existing works based on purely conventional signal processing techniques, as well as NN-based approaches. The simulation codes are available at: https://github.com/University-of-Surrey-Mahdi/Attention-on-the-Preambles-Sensing-with-mmWave-CSI.
Signal detection in large multiple-input multiple-output (large-MIMO) systems presents greater challenges compared to conventional massive MIMO for two primary reasons. First, large-MIMO systems lack favorable propagation conditions, as they do not require a substantially greater number of service antennas relative to user antennas. Second, the wireless channel may exhibit spatial non-stationarity when an extremely large aperture array (ELAA) is deployed in a large-MIMO system. In this paper, we propose a scalable iterative large-MIMO detector named ANPID, which simultaneously delivers 1) close-to-maximum-likelihood detection performance, 2) low computational complexity (i.e., square-order in the number of transmit antennas), 3) fast convergence, and 4) robustness to the spatial non-stationarity in ELAA channels. ANPID incorporates a damping demodulation step into stationary iterative (SI) methods and alternates between two distinct demodulated SI methods. Simulation results demonstrate that ANPID fulfills all four features concurrently and outperforms existing low-complexity MIMO detectors, especially in highly-loaded large-MIMO systems.
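To make the idea of adding a demodulation step to a stationary iterative method concrete, here is a minimal sketch, not the ANPID algorithm itself: damped Jacobi iterations on the regularised normal equations, with each iterate nudged toward the nearest QPSK point. The dimensions, damping factors and the lightly-loaded setting are all assumptions chosen so the toy converges; ANPID's alternation between two demodulated SI methods is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(9)

def damped_si_detect(H, y, sigma2, iters=20, omega=0.7, rho=0.3):
    """Damped Jacobi iterations on the regularised normal equations, with
    each iterate pulled toward the nearest QPSK point (a minimal reading
    of "damping demodulation"; not the full ANPID algorithm)."""
    A = H.conj().T @ H + sigma2 * np.eye(H.shape[1])
    b = H.conj().T @ y
    d_inv = 1.0 / np.diag(A).real
    x = np.zeros(H.shape[1], dtype=complex)
    for _ in range(iters):
        x = x + omega * d_inv * (b - A @ x)              # stationary-iterative step
        q = (np.sign(x.real) + 1j * np.sign(x.imag)) / np.sqrt(2)
        x = (1 - rho) * x + rho * q                      # damped demodulation step
    return (np.sign(x.real) + 1j * np.sign(x.imag)) / np.sqrt(2)

K, M = 4, 16                                             # lightly loaded toy case
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2 * M)
s = (rng.choice([-1, 1], K) + 1j * rng.choice([-1, 1], K)) / np.sqrt(2)
y = H @ s + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
print("symbol errors:", int(np.sum(damped_si_detect(H, y, 0.005) != s)))
```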
In this paper, a novel spatially non-stationary fading channel model is proposed for multiple-input multiple-output (MIMO) systems with an extremely-large aperture service-array (ELAA). The proposed model incorporates three key factors which cause the channel spatial non-stationarity: 1) link-wise path loss; 2) shadowing effect; 3) line-of-sight (LoS)/non-LoS state. With appropriate parameter configurations, the proposed model can be used to generate computer-simulated channel data that matches the published measurement data from practical ELAA-MIMO channels. Given such appealing results, the proposed fading channel model is employed to study the cumulative distribution function (CDF) of ELAA-MIMO channel capacity. For all of our studied scenarios, it is unveiled that the ELAA-MIMO channel capacity obeys the skew normal distribution. Moreover, the channel capacity is also found to be close to the Gaussian or Weibull distribution, depending on users' geo-location and distribution. More specifically, for single-user equivalent scenarios or multiuser scenarios with short user-to-ELAA distances (e.g., 1 m), the channel capacity is close to the Gaussian distribution; for others, it is close to the Weibull distribution. Finally, the proposed channel model is also employed to study the impact of channel spatial non-stationarity on linear MIMO receivers through computer simulations. The proposed fading channel model is available at https://github.com/ELAA-MIMO/non-stationary-fading-channel-model .
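The three non-stationarity factors can be sketched for a single user in a few lines. The snippet below is a simplified illustration with assumed parameters (array geometry, path-loss exponent, shadowing spread, correlation length, LoS probability), not the released model at the repository above, which should be preferred for any real use.

```python
import numpy as np

rng = np.random.default_rng(10)

def elaa_channel(n_ant=256, spacing=0.05, user=(10.0, 5.0),
                 pl_exp=3.5, shadow_db=4.0, k_rician=10.0, corr_len=16):
    """Spatially non-stationary channel to a single user, illustrating:
    1) link-wise path loss: each element has its own distance to the user;
    2) shadowing: log-normal, correlated along the array via a moving average;
    3) LoS/NLoS state: a per-element draw, smoothed the same way so that
       contiguous parts of the aperture see the user in LoS."""
    x = np.arange(n_ant) * spacing                      # element positions (m)
    d = np.hypot(x - user[0], user[1])                  # link-wise distances
    path_loss = d ** (-pl_exp)
    kernel = np.ones(corr_len) / corr_len               # spatial correlation
    shadow = 10 ** (shadow_db * np.convolve(
        rng.standard_normal(n_ant + corr_len - 1), kernel, "valid") / 10)
    los_mask = np.convolve(rng.random(n_ant + corr_len - 1) < 0.5,
                           kernel, "valid") > 0.5       # blockwise LoS state
    k = np.where(los_mask, k_rician, 0.0)               # NLoS -> pure Rayleigh
    los = np.exp(-2j * np.pi * d / 0.1)                 # LoS phase (0.1 m wavelength)
    nlos = (rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant)) / np.sqrt(2)
    small = np.sqrt(k / (k + 1)) * los + np.sqrt(1 / (k + 1)) * nlos
    return np.sqrt(path_loss * shadow) * small

h = elaa_channel()
print(h.shape, np.round(10 * np.log10(np.abs(h[:5]) ** 2), 1))
```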
Emerging dual-functional radar communication (RadCom) systems promise to revolutionize wireless systems by enabling radar sensing and communication on a shared platform, thereby enhancing spectral efficiency. However, the high transmit power required for efficient radar operation poses risks by potentially exceeding the electromagnetic field (EMF) exposure limits enforced by regulations. To address this challenge, we propose an EMF-aware signalling design that enhances RadCom system performance while complying with EMF constraints. Our approach considers exposure levels not only experienced by network users but also in sensitive areas such as schools and hospitals, where the exposure must be further reduced. First, we model the exposure metric for the users and the sectors that encounter sensitive areas. Then, we design the waveform by exploiting the trade-off between radar and communication while satisfying the exposure constraints. We reformulate the problem as a convex optimization program and solve it in closed form using Karush–Kuhn–Tucker (KKT) conditions. The numerical results demonstrate the feasibility of developing a robust RadCom system with low electromagnetic (EM) radiation.
Substrate integrated waveguide (SIW) technology is employed to design a uniform long-slot leaky-wave antenna (LWA) in the millimeter-wave (mmWave) band. The structure is then loaded with a 3D-printed sinusoidal periodic pattern of photopolymer VeroClear dielectric. This creates a periodic LWA, making it possible to select a desired higher-order space harmonic to form the beam and tilt it towards the direction of interest. To showcase the practicality of the method, the dielectric pattern is designed such that the beam is tilted to the backward quadrant at θm = −15° at f = 35 GHz. The structure is fabricated and the measured S-parameters show good agreement with the simulated results. Index Terms: leaky-wave antenna (LWA), 3D printing, substrate integrated waveguide (SIW), periodic structure.
In conventional hybrid beamforming approaches, the number of radio-frequency (RF) chains is the bottleneck on the achievable spatial multiplexing gain. Recent studies have overcome this limitation by increasing the update rate of the RF beamformer. This paper presents a framework to design and evaluate such approaches, which we refer to as agile RF beamforming, from theoretical and practical points of view. In this context, we consider the impact of the number of RF chains, phase-shifter speed, and phase-shifter resolution on the design of agile RF beamformers. Our analysis and simulations indicate that even an RF-chain-free transmitter can provide promising performance compared with fully-digital systems and significantly outperform conventional hybrid beamformers. We then show that the phase shifter's limited switching speed can result in signal aliasing, in-band distortion, and out-of-band emissions. We introduce performance metrics and approaches to measure such effects and compare the performance of the proposed agile beamformers using the Gram-Schmidt orthogonalization process. Although this paper aims to present a generic framework for deploying agile RF beamformers, it also presents extensive performance evaluations in communication systems in terms of adjacent channel leakage ratio, sum-rate, power efficiency, error vector magnitude, and bit-error rate.
Due to the rise of energy efficiency (EE) as a system performance evaluation criterion, the EE-spectral efficiency (SE) trade-off is becoming a key tool for gaining insight into how to efficiently design future communication systems. As far as the single-input single-output (SISO) Rayleigh fading channel is concerned, the EE-SE trade-off has been accurately approximated in the past, but only in the low-SE regime. In this paper, we propose a novel and more generic closed-form approximation (CFA) of this EE-SE trade-off which is very accurate for any SE value. We compare our CFA with two existing CFAs and show its greater accuracy over a wider range of SE. As an application, we use our CFA to study the variation of the EE-SE trade-off when a realistic power model is assumed, and to compare the energy consumption of SISO against a 2×2 multiple-input multiple-output (MIMO) system over the Rayleigh fading channel.
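For orientation, the underlying trade-off is easiest to see in the idealized AWGN case, where the capacity formula can be inverted in closed form. The relation below is only this illustrative baseline, not the paper's CFA (which targets the Rayleigh fading channel, where the expectation over fading prevents such a simple inversion):

```latex
% Baseline EE-SE relation over AWGN (illustrative only; the paper's CFA
% targets the Rayleigh fading channel and a realistic power model).
% Inverting C/B = \log_2(1 + P/(N_0 B)) for the transmit power gives
\[
  P(\mathrm{SE}) = \bigl(2^{\mathrm{SE}} - 1\bigr) N_0 B,
  \qquad
  \mathrm{EE}(\mathrm{SE}) = \frac{\mathrm{SE}\, B}{P(\mathrm{SE}) + P_c},
\]
% where B is the bandwidth, N_0 the noise spectral density and P_c the
% circuit power of the realistic power model. EE decays as SE grows because
% the required transmit power rises exponentially with SE, which is the
% trade-off a closed-form approximation captures for the fading case.
```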
A novel dual band-notched printed monopole for ultra-wideband (UWB) applications is presented. The antenna element consists of a microstrip line-fed metal patch loaded with three dielectric resonators (DRs) to achieve impedance matching and bandwidth enhancement. To realize the dual band-notched characteristics, a slot etched in the patch, two L-shaped parasitic strips connected to the DRs, and slots in the ground plane are used. The two key issues in the antenna concern the loading DRs, which are in very close proximity to the slots in the ground plane, and the introduction of the two notch bands. Measurements demonstrate good radiation performance with an ultra-wide impedance bandwidth from 2.54 GHz to 13.64 GHz and two band-notches at 3.22-4.06 GHz and 5.04-6.04 GHz. Outside the notches, the antenna achieves a high and stable gain of about 5 dBi over the rest of the UWB band, between approximately 6 and 14 GHz.
This work aims to handle the joint transmitter and noncoherent receiver optimization for multiuser single-input multiple-output (MU-SIMO) communications through unsupervised deep learning. It is shown that MU-SIMO can be modeled as a deep neural network with three essential layers, which include a partially-connected linear layer for joint multiuser waveform design at the transmitter side, and two nonlinear layers for the noncoherent signal detection. The proposed approach demonstrates remarkable MU-SIMO noncoherent communication performance in Rayleigh fading channels.
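To make the three-layer structure concrete, here is a minimal, illustrative PyTorch sketch, assuming per-user linear encoders as the partially-connected transmit layer and a toy real-valued fading model; the layer widths, noise level, and autoencoder-style loss are our assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MUSIMONet(nn.Module):
    def __init__(self, n_users=4, bits_per_user=2, n_rx=8, samples_per_user=4):
        super().__init__()
        self.n_users, self.bpu, self.n_rx = n_users, bits_per_user, n_rx
        # Partially-connected linear layer: each user encodes only its own
        # bits, giving a block-diagonal transmit weight matrix overall.
        self.encoders = nn.ModuleList(
            nn.Linear(bits_per_user, samples_per_user, bias=False)
            for _ in range(n_users))
        # Two nonlinear layers performing noncoherent detection (no CSI input).
        self.detector = nn.Sequential(
            nn.Linear(n_rx * samples_per_user, 128), nn.ReLU(),
            nn.Linear(128, n_users * bits_per_user))

    def forward(self, bits):  # bits: (batch, n_users, bits_per_user) in {0, 1}
        tx = torch.stack([enc(bits[:, u].float())
                          for u, enc in enumerate(self.encoders)], dim=1)
        # Illustrative real-valued Rayleigh-like fading, unknown to the receiver.
        h = torch.randn(bits.size(0), self.n_rx, self.n_users)
        rx = torch.einsum('bru,bus->brs', h, tx)
        rx = rx + 0.1 * torch.randn_like(rx)  # additive receiver noise
        return self.detector(rx.flatten(1)).view(-1, self.n_users, self.bpu)

net = MUSIMONet()
bits = torch.randint(0, 2, (32, 4, 2))
loss = nn.BCEWithLogitsLoss()(net(bits), bits.float())  # end-to-end training loss
```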
In emerging on-demand and live surveillance video applications, end users may actively change content resolutions, which may trigger sudden and potentially substantial changes in data rate requirements. Traditional IP-based static paths may not be able to seamlessly handle such changes of user intent in video applications, and hence may lead to potential user QoE deterioration. In this paper, we propose an SRv6-enabled SDN framework that allows on-the-fly change of video delivery paths (when necessary) upon the detection of dynamic user intent on different video resolutions. This is typically achieved through offline definition of possible user intent scenarios on specific video resolutions, which can be captured by an edge computing based intent framework before the path switching action is triggered. We demonstrate a use case of a 4K video quality switch on an implemented framework, and the results show substantially reduced resolution switching delay upon user intent during ongoing video sessions.
This paper designs an efficient distributed intrusion detection system (DIDS) for Internet of Things (IoT) data traffic. The proposed DIDS is implemented at IoT network gateways and edge sites to detect and raise alarms on anomalous traffic data. We implement different machine learning (ML) algorithms to classify the traffic as benign or malicious. We perform an in-depth parametric study of the models using multiple real-time IoT datasets to enable the model deployment to be consistent with the demands of the specific IoT network. Specifically, we develop a decentralized method using federated learning (FL) for collecting data from IoT sensor nodes to address the data privacy issues associated with centralizing data at the gateway DIDS. We propose two poisoning attacks on the perception layer of these IoT networks that use generative adversarial networks (GANs) to show how threats arising from the unpredictable authenticity of the IoT sensors can be triggered. To address such attacks, we design an appropriate defence algorithm, implemented at the gateways, that helps separate anomalous from benign data and preserves the system's robustness. The proposed defence algorithm successfully classifies anomalies with high accuracy, demonstrating the system's immunity against poisoning attacks. We confirm that the Random Forest classifier performs best across all ML key performance indicators (KPIs) and can be implemented at the edge to reduce false alarm rates.
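As an illustration of the classification step, the following sketch trains a Random Forest on synthetic per-flow features, assuming scikit-learn; the feature set and data are illustrative stand-ins for the real IoT datasets used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Illustrative per-flow features: [packet rate, mean packet size, flow duration]
X_benign = rng.normal([50, 300, 2.0], [10, 60, 0.5], size=(500, 3))
X_attack = rng.normal([400, 80, 0.2], [80, 20, 0.1], size=(500, 3))
X = np.vstack([X_benign, X_attack])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))  # gateway-side detection accuracy
```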
Wiley 5G Ref: The Essential 5G Reference Online is a large-scale, fully comprehensive digital reference, released in 2020 to coincide with the commercial deployment of 5G. This essential resource maps out a solid vision of emerging technologies widely foreseen to be adopted by 5G mobile systems, based on current business trends, proven technologies, and the latest international research. Wiley 5G Ref offers a user-friendly format that provides the user with article-level, in-depth technical surveys on all aspects of 5G solutions, architectures, technologies and standards for researchers, practitioners and students in information and communication engineering, computer science and engineering, and telecommunication networking.
In this paper, federated learning (FL) over wireless networks is investigated. In each communication round, a subset of devices is selected to participate in the aggregation with limited time and energy. In order to minimize the convergence time, global loss and latency are jointly considered in a Stackelberg game based framework. Specifically, age of information (AoI) based device selection is considered at leader-level as a global loss minimization problem, while sub-channel assignment, computational resource allocation, and power allocation are considered at follower-level as a latency minimization problem. By dividing the follower-level problem into two sub-problems, the best response of the follower is obtained by a monotonic optimization based resource allocation algorithm and a matching based sub-channel assignment algorithm. By deriving the upper bound of convergence rate, the leader-level problem is reformulated, and then a list based device selection algorithm is proposed to achieve Stackelberg equilibrium. Simulation results indicate that the proposed device selection scheme outperforms other schemes in terms of the global loss, and the developed algorithms can significantly decrease the time consumption of computation and communication.
Network densification with small cell deployment is being considered as one of the dominant themes in the fifth generation (5G) cellular system. Despite the capacity gains, such deployment scenarios raise several challenges from the mobility management perspective. The small cell size, which implies a short cell residence time, will increase the handover (HO) rate dramatically. Consequently, HO latency will become a critical consideration in the 5G era. The latter requires an intelligent, fast and light-weight HO procedure with minimal signalling overhead. In this direction, we propose a memory-full context-aware HO scheme with mobility prediction to achieve the aforementioned objectives. We consider a dual connectivity radio access network architecture with logical separation between control and data planes because it offers relaxed constraints in implementing the predictive approaches. The proposed scheme predicts future HO events along with the expected HO time by combining radio frequency performance with physical proximity and the user context in terms of speed, direction and HO history. To minimise the processing and storage requirements whilst improving the prediction performance, a user-specific prediction triggering threshold is proposed. The prediction outcome is utilised to perform advance HO signalling whilst suspending the periodic transmission of measurement reports. Analytical and simulation results show that the proposed scheme provides promising gains over the conventional approach.
In this paper, hybrid relaying schemes are investigated in the two-way relay channel, where the relay node is able to adaptively switch between different forwarding schemes based on the current channel state and its decoding status, thus providing more flexibility as well as improved performance. The analysis is conducted from the energy efficiency perspective for two transmission protocols, distinguished by whether or not the direct link between the two main communicating nodes (the source and destination nodes, and vice versa, since the communication is two-way) is exploited. A realistic power model taking the circuitry power consumption of all involved nodes into account is employed. The energy efficiency is optimized in terms of consumed energy per bit subject to a Quality of Service (QoS) constraint. Numerical results show that the hybrid schemes achieve the highest energy efficiency due to their capability of adapting to channel variations, and that the protocol exploiting the direct link is more energy efficient.
Physical layer security (PLS) technologies are expected to play an important role in next-generation wireless networks by providing secure communication to protect critical and sensitive information from illegitimate devices. In this paper, we propose a novel secure communication scheme where the legitimate receiver uses full-duplex (FD) technology to transmit jamming signals with the assistance of a simultaneous transmitting and reflecting reconfigurable intelligent surface (STAR-RIS), which can operate under the energy splitting (ES) model and the mode switching (MS) model, to interfere with the undesired reception by the eavesdropper. We aim to maximize the secrecy capacity by jointly optimizing the FD beamforming vectors, the amplitudes and phase shift coefficients for the ES-RIS, and the mode selection and phase shift coefficients for the MS-RIS. With the above optimization, the proposed scheme can concentrate the jamming signals on the eavesdropper while simultaneously eliminating the self-interference (SI) at the desired receiver. To tackle the coupling effect of multiple variables, we propose an alternating optimization algorithm to solve the problem iteratively. Furthermore, we handle the non-convexity of the problem by the successive convex approximation (SCA) scheme for the beamforming optimization, the amplitude and phase shift optimization for the ES-RIS, as well as the phase shift optimization for the MS-RIS. In addition, we adopt semi-definite relaxation (SDR) and a Gaussian randomization process to overcome the difficulty introduced by the binary nature of the mode optimization of the MS-RIS. Simulation results validate the performance of our proposed schemes as well as the efficacy of adopting both types of STAR-RIS in enhancing secure communications when compared to traditional self-interference cancellation technology.
The emergence of the Internet of Things (IoT) has led to the production of huge volumes of real-world streaming data. We need effective techniques to process IoT data streams and to gain insights and actionable information from real-world observations and measurements. Most existing approaches are application or domain dependent. We propose a method which determines how many different clusters can be found in a stream based on the data distribution. After selecting the number of clusters, we use an online clustering mechanism to cluster the incoming data from the streams. Our approach remains adaptive to drifts by adjusting itself as the data changes. We benchmark our approach against state-of-the-art stream clustering algorithms on data streams with data drift. We show how our method can be applied in a use case scenario involving near real-time traffic data. Our results make it possible to cluster, label and interpret IoT data streams dynamically according to the data distribution, enabling large volumes of dynamic data to be processed online and adaptively based on the current situation. We show how our method adapts itself to changes, and demonstrate how the number of clusters in a real-world data stream can be determined by analysing the data distributions.
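The online clustering step can be sketched with a simple sequential k-means update, assuming the number of clusters k has already been chosen from the data distribution; the constant step size below is one illustrative way to remain adaptive to drift.

```python
import numpy as np

class OnlineKMeans:
    def __init__(self, k, dim, lr=0.05):
        self.centers = np.random.default_rng(0).normal(size=(k, dim))
        self.lr = lr  # constant step: older points fade, so centers track drift

    def update(self, x):
        # Assign the incoming stream item to its nearest cluster center.
        j = np.argmin(np.linalg.norm(self.centers - x, axis=1))
        # Move the winning center a small step towards the new observation.
        self.centers[j] += self.lr * (x - self.centers[j])
        return j  # cluster label assigned to this stream item

stream = np.random.default_rng(1).normal(size=(1000, 2))  # illustrative stream
model = OnlineKMeans(k=3, dim=2)
labels = [model.update(x) for x in stream]
```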
In this paper, we propose a novel position-based routing protocol designed to anticipate the characteristics of an urban VANET environment. The proposed algorithm utilizes the prediction of the node's position and navigation information to improve the efficiency of the routing protocol in a vehicular network. In addition, we use information about link layer quality, in terms of SNIR and MAC frame error rate, to further improve the efficiency of the proposed routing protocol. This in particular helps to decrease end-to-end delay. Finally, a carry-n-forward mechanism is employed as a repair strategy in sparse networks. It is shown that use of this technique increases the packet delivery ratio, but also increases end-to-end delay, and is therefore not recommended for QoS-constrained services. Our results suggest that, compared with GPSR, our proposal demonstrates better performance in the urban environment.
This paper proposes an analytical model for the throughput of the enhanced distributed channel access (EDCA) mechanism in the IEEE 802.11p medium-access control (MAC) sublayer. Features of EDCA such as the different contention windows (CW) and arbitration interframe space (AIFS) for each access category (AC), as well as internal collisions, are taken into account. The analytical model is suitable for both the basic access and the request-to-send/clear-to-send (RTS/CTS) access modes. Unlike most existing 3-D or 4-D Markov-chain-based analytical models for IEEE 802.11e EDCA, the proposed analytical model is explicitly solvable without high computational complexity and applies to the four access categories of traffic in IEEE 802.11p. The proposed model can be used for large-scale network analysis and validation of network simulators under saturated traffic conditions. Simulation results are given to demonstrate the accuracy of the analytical model. In addition, we investigate the service differentiation capabilities of the IEEE 802.11p MAC sublayer.
In this paper, an 8×8 Multiple Input Multiple Output (MIMO) antenna design for Fifth Generation (5G) sub-6 GHz smartphone applications is presented. The antenna elements are based on a folded quarter-wavelength monopole operating at 3.4-3.8 GHz. Isolation between antenna elements is provided through physical distancing. The fabricated antenna prototype's outer casing is made from Rogers RO4003C, with dimensions based on future 5G-enabled phones. Measured results show an operating bandwidth of 3.32 to 3.925 GHz (S11 < -6 dB) with a transmission coefficient < -14.7 dB. A high total efficiency for an antenna array of 70-85.6% is also obtained. The design is suitable for MIMO communications, exhibiting an Envelope Correlation Coefficient (ECC) < 0.014. To conclude, a Specific Absorption Rate (SAR) model has been constructed and presented, showing the user's effect on the antenna's S-parameter results. The amount of power absorbed by the head and hand during operation has also been simulated.
Multi-user multiple-input, multiple-output (MU-MIMO) designs can substantially increase the achievable throughput and connectivity capabilities of wireless systems. However, existing MU-MIMO deployments typically employ linear processing that, despite its practical benefits, can leave capacity and connectivity gains unexploited. On the other hand, traditional non-linear processing solutions (e.g., sphere decoders) promise improved throughput and connectivity capabilities, but can be impractical in terms of processing complexity and latency, and with questionable practical benefits that have not been validated in actual system realizations. At the same time, emerging new Open Radio Access Network (Open-RAN) designs call for physical layer (PHY) processing solutions that are also practical in terms of realization, even when implemented purely on software. This work demonstrates the gains that our highly efficient, massively parallelizable, non-linear processing (MPNL) framework can provide, both in the uplink and downlink, when running in real-time and over-the-air, using our new 5G-New Radio (5G-NR) and Open-RAN compliant, software-based PHY. We showcase that our MPNL framework can provide substantial throughput and connectivity gains, compared to traditional, linear approaches, including increased throughput, the ability to halve the number of base-station antennas without any performance loss compared to linear approaches, as well as the ability to support a much larger number of users than base-station antennas, without the need for any traditional Non-Orthogonal Multiple Access (NOMA) techniques, and with overloading factors that can be up to 300%.
In existing energy-efficient clustering algorithms for Wireless Sensor Networks (WSNs), individual nodes usually experience significant differences in lifetime. The issue of some nodes depleting their energy earlier than others is usually referred to as the hot-spot issue in WSNs, and it dramatically shortens the stable operation period of a network, during which all nodes remain alive with residual energy. This paper addresses the hot-spot issue by equalizing individual node lifetimes throughout the network. The probability of a node becoming a cluster-head (CH) in this algorithm depends on the node's distance to the sink and is subject to individual node-lifetime equalization. When selecting CHs, the residual node energy is considered as well. Performance evaluation illustrates the effectiveness of our algorithm in terms of extending the stable operation period of clustered WSNs.
This article presents a multilayer mobility management scheme for All-IP networks where local mobility movements (micro-mobility) are handled separately from global movements (macro-mobility). Furthermore, a hybrid scheme is proposed to handle macro-mobility (Mobile IP for non-real-time services and SIP for real-time services). The interworking between micro-mobility and macro-mobility is implemented at an entity called the enhanced mobility gateway. Both qualitative and quantitative results have demonstrated that the performance of the proposed mobility management is better than that of existing schemes. Furthermore, a context transfer solution for AAA is proposed to enhance the multilayer mobility management scheme by avoiding the additional delay introduced by AAA security procedures.
In this paper, a novel low-complexity and spectrally efficient modulation scheme for visible light communication (VLC) is proposed. Our new spatial quadrature modulation (SQM) is designed to efficiently adapt traditional complex modulation schemes to VLC, i.e., converting multi-level quadrature amplitude modulation (M-QAM) to real-unipolar symbols, making it suitable for transmission over light intensity. The proposed SQM relies on the spatial domain to convey the orthogonality and polarity of the complex signals, rather than mapping bits to symbols as in existing spatial modulation (SM) schemes. A detailed symbol error analysis of SQM is derived and validated with link-level simulation results. Using the simulation and derived results, we also provide a performance comparison between the proposed SQM and SM. Simulation results demonstrate that SQM can achieve a better symbol error rate (SER) and/or data rate performance compared to the state of the art in SM; for instance, an Eb/N0 gain of at least 5 dB at an SER of 10^-4.
The Internet of Things (IoT) has become a new enabler for collecting real-world observation and measurement data from the physical world. The IoT allows objects with sensing and network capabilities (i.e. Things and devices) to communicate with one another and with other resources (e.g. services) in the digital world. The heterogeneity, dynamicity and ad-hoc nature of the underlying data and services published by most IoT resources make accessing and processing the data and services a challenging task. The IoT demands distributed, scalable, and efficient indexing solutions for large-scale distributed IoT networks. We describe a novel distributed indexing approach for IoT resources and their published data. The index structure is constructed by encoding the locations of IoT resources into geohashes and then building a quadtree on the minimum bounding box of the geohash representations. This allows resources with similar geohashes to be aggregated and reduces the size of the index. We have evaluated our proposed solution on a large-scale dataset, and our results show that the proposed approach can efficiently index and enable discovery of IoT resources with 65% better response time than a centralised approach and with a high success rate (around 90% in the first few attempts).
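The geohash step of the index can be sketched as follows; the quadtree over the minimum bounding box is omitted here, but grouping resources by a shared geohash prefix illustrates the aggregation the approach relies on (locations and names are illustrative).

```python
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, precision=6):
    """Standard geohash: interleave longitude/latitude bisection bits."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    code, ch, bit, even = [], 0, 0, True
    while len(code) < precision:
        rng, val = (lon_rng, lon) if even else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        ch <<= 1
        if val >= mid:
            ch |= 1
            rng[0] = mid
        else:
            rng[1] = mid
        even, bit = not even, bit + 1
        if bit == 5:  # 5 bits -> one base-32 character
            code.append(_BASE32[ch])
            ch, bit = 0, 0
    return "".join(code)

# Aggregate nearby IoT resources under a shared geohash prefix.
resources = {"sensor-a": (51.24, -0.59), "sensor-b": (51.25, -0.60)}
index = {}
for name, (lat, lon) in resources.items():
    index.setdefault(geohash(lat, lon, 4), []).append(name)
print(index)  # nearby resources typically share a key, shrinking the index
```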
The parameters of Physical (PHY) layer radio frame for 5th Generation (5G) mobile cellular systems are expected to be flexibly configured to cope with diverse requirements of different scenarios and services. This paper presents a frame structure and design which is specifically targeting Internet of Things (IoT) provision in 5G wireless communication systems. We design a suitable radio numerology to support the typical characteristics, that is, massive connection density and small and bursty packet transmissions with the constraint of low cost and low complexity operation of IoT devices. We also elaborate on the design of parameters for Random Access Channel (RACH) enabling massive connection requests by IoT devices to support the required connection density. The proposed design is validated by link level simulation results to show that the proposed numerology can cope with transceiver imperfections and channel impairments. Furthermore, results are also presented to show the impact of different values of guard band on system performance using different subcarrier spacing sizes for data and random access channels, which show the effectiveness of the selected waveform and guard bandwidth. Finally, we present system level simulation results that validate the proposed design under realistic cell deployments and inter-cell interference conditions.
Software-Defined Networking (SDN) is a promising paradigm of computer networks, offering a programmable and centralised network architecture. However, although such a technology supports the ability to dynamically handle network traffic based on real-time and flexible traffic control, SDN-based networks can be vulnerable to dynamic change of flow control rules, which causes transmission disruption and packet loss in SDN hardware switches. This problem can be critical because the interruption and packet loss in SDN switches can bring additional performance degradation for SDN-controlled traffic flows in the data plane. In this paper, we propose a novel robust flow control mechanism referred to as Priority-based Flow Control (PFC) for dynamic but disruption-free flow management when it is necessary to change flow control rules on the fly. PFC minimizes the complexity of flow modification process in SDN switches by temporarily adapting the priority of flow rules in order to substantially reduce the time spent on control-plane processing during run-time. Measurement results show that PFC is able to successfully prevent transmission disruption and packet loss events caused by traffic path changes, thus offering dynamic and lossless traffic control for SDN switches.
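The make-before-break idea behind PFC can be sketched as below. The `switch` object and its `install_rule`/`remove_rule` methods are hypothetical illustrative names, not a real SDN controller API.

```python
def update_flow_rule(switch, match, new_actions, old_priority):
    """Replace a flow rule without a window in which no rule matches."""
    # 1. Install the replacement rule at a temporarily higher priority, so
    #    packets immediately follow it while the old rule is still present.
    switch.install_rule(match=match, actions=new_actions,
                        priority=old_priority + 1)
    # 2. Remove the old rule; traffic never sees an empty table entry.
    switch.remove_rule(match=match, priority=old_priority)
    # 3. Re-install the new rule at the normal priority and retire the
    #    temporary one, restoring the flow table to its steady state.
    switch.install_rule(match=match, actions=new_actions,
                        priority=old_priority)
    switch.remove_rule(match=match, priority=old_priority + 1)
```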
Data discovery for sensor data in an M2M network uses probabilistic models, such as Gaussian Mixture Models (GMMs), to represent attributes of the sensor data. The parameters of the probabilistic models can be provided to a discovery server (DS) that responds to queries concerning the sensor data. Since the parameters are compressed compared to the attributes of the sensor data itself, this can simplify the distribution of discovery data. A hierarchical arrangement of discovery servers can also be used, with multiple levels of discovery servers where higher-level discovery servers use more generic probabilistic models.
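A minimal sketch of the idea, assuming scikit-learn's GaussianMixture as the probabilistic model: the gateway fits a GMM to sensor attributes and ships only its parameters, and the discovery server answers a query by likelihood. The data and match threshold are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Gateway side: fit a GMM to raw sensor attributes (e.g. temperature, humidity).
readings = np.random.default_rng(0).normal([21.0, 45.0], [1.5, 5.0], (1000, 2))
gmm = GaussianMixture(n_components=2, random_state=0).fit(readings)

# The compressed "discovery data": weights, means and covariances only.
summary = (gmm.weights_, gmm.means_, gmm.covariances_)

# Discovery server side: does this gateway plausibly hold data near the query?
query = np.array([[21.5, 44.0]])  # attribute values of interest
log_likelihood = gmm.score_samples(query)[0]
print("match" if log_likelihood > -10.0 else "no match")
```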
Considering a densely populated area where a mobile device with a single RF chain shares its message with a set of mobile devices through a narrowband mmWave channel, an analogue-beam splitting approach is proposed to achieve a good capacity and coverage trade-off. The proposed approach aims at maximizing the capacity of the mmWave multicast channel through antenna-element grouping and adaptive phase shifting, which takes into account the inter-beam interference. When receivers are randomly distributed on a circle centered at the transmitter, according to the uniform distribution, it is found that the impact of inter-beam interference on the channel capacity can be negligibly small, and thus the analogue-beam splitting approach can be largely simplified in practice. Computer simulations are carried out to support our theoretical study and demonstrate the considerable advantages of the proposed analogue-beam splitting approach.
In this paper, we extend a well-developed quantization scheme to the block-fading relay system using compress-and-forward and propose a new achievable-rate-based quantization scheme (ARBQS). A new signal combination scheme with lower complexity is also proposed accordingly. Based on the scalar quantizer obtained, a vector quantizer with a Trellis coded quantization (TCQ) scheme is provided. While many quantization schemes have concentrated on minimizing quantization distortion, our simulation results indicate that the new scheme achieves better performance in both the AWGN and block-fading cases without distortion minimization, while simultaneously achieving higher compression efficiency and reduced complexity.
Network scenarios beyond 3G assume the cooperation of operators with wireless access networks of different technologies in order to improve scalability and provide enhanced services to their mobile customers. While the selection of an optimised delivery path in such scenarios with multiple access networks is already a challenging task for unicast delivery, the problem becomes more severe for multicast services, where a potentially large group of heterogeneous receivers has to be served simultaneously via shared resources. In this paper we study the problem of selecting the optimal bearer paths for multicast services with groups of heterogeneous receivers in wireless networks with overlapping coverage. We propose an algorithm for bearer selection with different optimisation goals, demonstrating the existing tradeoff between user preference and resource efficiency.
It has been pointed out that Slepian-Wolf (SW) coding is efficient for compressing data with side information available at the receiver. However, most papers assume that the compressed information is perfectly known to the receiver. In this paper, we consider the more practical assumption that the channel between the relay and the destination is not perfect and error protection needs to be implemented. Accordingly, a soft Slepian-Wolf decoding structure is proposed. The new structure not only supports soft Slepian-Wolf decoding within one level, but also allows soft information passing between different levels. We also consider the relationship between the codes for error protection and the codes for compression, and propose a joint decoding and decompressing algorithm to further improve the performance.
The most common use of formal verification methods and tools so far has been in identifying whether livelock and/or deadlock situations can occur during protocol execution, process, or system operation. In this work we aim to show that an additional, equally important and useful application of formal verification tools can be in protocol design and protocol selection in terms of performance-related metrics. This can be achieved by using the tools in a rather different context compared to their traditional use: that is, not only as model checking tools to assess the correctness of a protocol in terms of the absence of livelock and deadlock situations, but rather as tools capable of building profiles of protocol operations, assessing their performance, and identifying operational patterns and possible bottleneck operations. This process can provide protocol designers with insight into the protocols' behavior and guide them towards further protocol design optimizations. It can also assist network operators and service providers in selecting the most suitable protocol for specific network and service configurations. We illustrate these principles by showing how formal verification tools can be applied in this protocol profiling and performance assessment context using some existing protocols as case studies.
Energy savings are becoming a global trend, hence the importance of energy efficiency (EE) as an alternative performance evaluation metric. This paper proposes an EE based resource allocation method for the broadcast channel (BC), where a linear power model is used to characterize the power consumed at the base station (BS). Having formulated our EE based optimization problem and objective function, we utilize standard convex optimization techniques to show the concavity of the latter, and thus, the existence of a unique globally optimal energy-efficient rate and power allocation. Our EE based resource allocation framework is also extended to incorporate fairness, and provide a minimum user satisfaction in terms of spectral efficiency (SE). We then derive the generic equation of the EE contours and use them to get insights about the EE-SE trade-off over the BC. The performances of the aforementioned resource allocation schemes are compared for different metrics against the number of users and cell radius. Results indicate that the highest EE improvement is achieved by using the unconstrained optimization scheme, which is obtained by significantly reducing the total transmit power. Moreover, the network EE is shown to increase with the number of users and decrease as the cell radius increases.
Most of the wireless systems such as the long term evolution (LTE) adopt a pilot symbol-aided channel estimation approach for data detection purposes. In this technique, some of the transmission resources are allocated to common pilot signals which constitute a significant overhead in current standards. This can be traced to the worst-case design approach adopted in these systems where the pilot spacing is chosen based on extreme condition assumptions. This suggests extending the set of the parameters that can be adaptively adjusted to include the pilot density. In this paper, we propose an adaptive pilot pattern scheme that depends on estimating the channel correlation. A new system architecture with a logical separation between control and data planes is considered and orthogonal frequency division multiplexing (OFDM) is chosen as the access technique. Simulation results show that the proposed scheme can provide a significant saving of the LTE pilot overhead with a marginal performance penalty.
The fifth-generation (5G) mobile communication technology, with higher capacity and data rates, ultra-low device-to-device (D2D) latency, and massive device connectivity, will greatly promote the development of vehicular ad hoc networks (VANETs). Meanwhile, new challenges such as security, privacy and efficiency arise. In this article, a hybrid D2D message authentication (HDMA) scheme is proposed for 5G-enabled VANETs, in which a novel group signature-based algorithm is used for mutual authentication in vehicle-to-vehicle (V2V) communication. In addition, a pre-computed lookup table is adopted to reduce the computation overhead of the modular exponentiation operation. Security analysis shows that HDMA is robust against various security attacks, and performance analysis also points out that the authentication overhead of HDMA is lower than that of some traditional schemes, thanks to the pre-computed lookup table, in V2V and vehicle-to-infrastructure (V2I) communication.
The Self-Organizing Network (SON) has been seen as one of the promising areas for saving OPerational EXpenditure (OPEX) and bringing real efficiency to wireless networks. Though studies in the literature concern local interaction and distributed structures for SON, its coherent behaviour has not yet been well studied. We consider a target-following regime and propose a novel goal-attainment approach using a Similarity Measure (SM) for the Coverage & Capacity Optimization (CCO) use-case in SON. The methodology is based on a self-optimization algorithm which optimizes the multiple objective functions of UE throughput and fairness using a performance measure computed as the SM between target and measured KPIs. After a certain number of epochs, the optimum results are used in the adjustment and updating modules of the goal attainment. To investigate the proposed approach, a downlink LTE simulation has been set up. In a scenario including a congested cell with a hotspot, joint optimization of the antenna tilt/azimuth parameters using a 3D beam pattern is considered. The final CDF results show a noticeable migration of hot-spot UEs to higher throughputs, while no UE is made worse off.
The general tendency to deliver Open Radio Access Network (Open-RAN) solutions by means of software-based, or even cloud-native, realizations drives the development community to fully capitalize on software architectures, even for the computationally demanding 5G physical layer (PHY) processing. However, software solutions are typically orders of magnitude less efficient than dedicated hardware in terms of power consumption and processing speed. Consequently, realizing highly-efficient, massive multiple-input multiple-output (mMIMO) solutions in software, while exploiting the wide 5G transmission bandwidths, becomes extremely challenging and requires the massive parallelization of the PHY processing tasks. In this work, for the first time, we show that massively parallel software solutions are capable of meeting the processing requirements of 5G New Radio (NR), still, with a significant increase in the corresponding power consumption. In this context, we quantify this power consumption overhead, both in terms of Watts and carbon emissions, as a function of the concurrently transmitted information streams, of the base-station antennas, and of the utilized bandwidth. We show that the computational power consumption of such PHY processing is no longer negligible and that, for mMIMO solutions supporting a large number of information streams, it can become comparable to the power consumption of the Radio Frequency (RF) chains. Finally, we discuss how a shift towards non-linear PHY processing can significantly boost energy efficiency, and we further highlight the importance of energy-aware digital signal processing design in future PHY processing architectures.
A cell-free massive multiple-input multiple-output (MIMO) system is considered, where the access points (APs) are linked to a central processing unit (CPU) via limited-capacity fronthaul links. It is assumed that only the quantized version of the weighted signals is available at the CPU. The achievable rate of a limited-fronthaul cell-free massive MIMO system with local minimum mean square error (MMSE) detection is studied. We examine the assumption of uncorrelated quantization distortion, which is commonly used in the literature, and show that this assumption does not affect the validity of the insights obtained in our work. To investigate this, we compare the uplink per-user rate with different system parameters for two different scenarios: 1) the exact uplink per-user rate, and 2) the uplink per-user rate while ignoring the correlation between the inputs of the quantizers. Finally, we present the conditions under which the quantization distortions across APs can be assumed to be uncorrelated.
The success of the deployment of GPRS will be significantly influenced by the introduction of efficient and variable QoS management and supporting mechanisms. Although QoS profiles for a number of GPRS service classes have been specified by ETSI, implementation issues play a major role in achieving that. This includes QoS management in the areas of traffic scheduling, traffic shaping and call admission control techniques. QoS in GPRS is defined as the collective effect of service performances, which determines the degree of satisfaction of a user of the service. QoS enables the differentiation between provided services. Increasing demand and the limited bandwidth available for mobile communication services require efficient use of radio resources among diverse services. In future wireless packet networks, it is anticipated that a wide variety of data applications, ranging from WWW browsing to Email, and real-time services like packetized voice and videoconferencing, will be supported with varying levels of QoS. Therefore there is a need for packet and service scheduling schemes that effectively provide QoS guarantees and are also simple to implement. This paper describes a novel dynamic admission control and scheduling technique based on genetic algorithms, focusing on static and dynamic parameters of service classes. The performance of this technique on a GPRS system is evaluated against data services and also a traffic mix comprising voice and data.
It is well-established that transmitting at full power is the most spectral-efficient power allocation strategy for point-to-point (P2P) multi-input multi-output (MIMO) systems; however, can this strategy be energy efficient as well? In this letter, we address the most energy-efficient power allocation policy for symmetric P2P MIMO systems by accurately approximating in closed form their optimal transmit power when a realistic MIMO power consumption model is considered. In most cases, being energy efficient implies a reduction in transmit and overall consumed powers at the expense of a lower spectral efficiency.
In this paper, the problem of drone-assisted collaborative learning is considered. In this scenario, a swarm of intelligent wireless devices trains a shared neural network (NN) model with the help of a drone. Using its sensors, each device records samples from its environment to gather a local dataset for training. The training data is severely heterogeneous, as various devices have different amounts of data and sensor noise levels. The intelligent devices iteratively train the NN on their local datasets and exchange the model parameters with the drone for aggregation. For this system, the convergence rate of collaborative learning is derived while considering data heterogeneity, sensor noise levels, and communication errors, and then the drone trajectory that maximizes the final accuracy of the trained NN is obtained. The proposed trajectory optimization approach is aware of both the devices' data characteristics (i.e., local dataset size and noise level) and their wireless channel conditions, and significantly improves the convergence rate and final accuracy in comparison with baselines that only consider data characteristics or channel conditions. Compared to state-of-the-art baselines, the proposed approach achieves an average 3.85% and 3.54% improvement in the final accuracy of the trained NN on benchmark datasets for image recognition and semantic segmentation tasks, respectively. Moreover, the proposed framework achieves a significant speedup in training, leading to average savings of 24% and 87% in drone hovering time, communication overhead, and battery usage for these two tasks, respectively.
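The drone-side aggregation step can be sketched as a FedAvg-style weighted average; weighting purely by local dataset size is an illustrative simplification of the data-characteristics awareness described above.

```python
import numpy as np

def aggregate(params, n_samples):
    """Weighted average of device model parameters collected by the drone."""
    weights = np.asarray(n_samples, dtype=float)
    weights /= weights.sum()  # devices with more local data count more
    return sum(w * p for w, p in zip(weights, params))

# Illustrative flattened parameter vectors from three heterogeneous devices.
device_params = [np.random.default_rng(i).normal(size=10) for i in range(3)]
local_sizes = [100, 400, 50]  # very different local dataset sizes
global_model = aggregate(device_params, local_sizes)
```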
In this paper, an ultra-wideband Dielectric Resonator Antenna (DRA) is proposed. The proposed antenna is based on an isosceles triangular DRA (TDRA), which is fed from the base side using a 50Ω probe. For bandwidth enhancement and improvement of the radiation characteristics, a partially cylindrical hole is etched from the base side, which brings the probe feed closer to the center of the TDRA. The dielectric resonator (DR) is located over an extended conducting ground plane. This technique significantly enhances the antenna's bandwidth from 48.8% to 80% (5.29-12.35 GHz), although the radiation characteristics remained the main challenge: the basic antenna exhibits negative gain, as low as -13.8 dBi, over a wide portion of the bandwidth from 7.5 GHz to 10.5 GHz. Using this technique improves the antenna gain to over 1.6 dBi across the whole bandwidth, with a peak gain of 7.2 dBi.
Utilizing holography theory, a bidirectional wideband leaky wave antenna in the millimetre wave (mmW) band is presented. The antenna includes a printed pattern of continuous metallic strips on a 99.5% alumina sheet, and a surface wave launcher (SWL) to produce the initial reference waves on the substrate. To achieve a bidirectional radiation pattern, the fundamental TE mode is excited by applying a Vivaldi antenna (as the SWL). The proposed holographic-based leaky wave antenna (HLWA) is fabricated and tested, and the measured results are aligned with the simulated ones. The antenna has a 22.6% fractional bandwidth with respect to the central frequency of 30 GHz. The interference pattern is designed to generate a 15 deg backward-tilted bidirectional radiation pattern with respect to the normal of the hologram sheet. The frequency scanning property of the designed HLWA is also investigated.
With the huge number of broadband users, automated network management is of great interest to service providers. A major challenge is automated monitoring of user Quality of Experience (QoE), where Artificial Intelligence (AI) and Machine Learning (ML) models provide powerful tools to predict user QoE from basic protocol indicators such as Round Trip Time (RTT), retransmission rate, etc. In this paper, we introduce an effective feature selection method along with the corresponding classification algorithms to address this challenge. The simulation results show a prediction accuracy of 78% on the benchmark ITU ML5G-PS-012 dataset, an 11% improvement over the state-of-the-art result, whilst reducing the model complexity at the same time. Moreover, we show that the local area network round trip time (LAN RTT) during daytime and midweek is the most prominent factor affecting user QoE.
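The shape of such a pipeline can be sketched with scikit-learn; mutual-information-based selection and a Random Forest are illustrative stand-ins, not necessarily the exact method and classifier used in the paper, and the data below is synthetic.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 10))  # 10 protocol-level indicators (RTT, etc.)
# Synthetic QoE label driven by a couple of the indicators.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=800) > 0).astype(int)

model = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=4)),  # keep 4 best features
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
]).fit(X, y)
print("selected features:", model.named_steps["select"].get_support().nonzero()[0])
```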
Orthogonal frequency division multiplexing (OFDM) with index modulation (IM) (OFDM-IM), which employs the activated sub-carrier indices to convey information, exhibits higher energy efficiency and a lower peak-to-average power ratio (PAPR) than conventional OFDM systems. To further improve the throughput of discrete Fourier transform (DFT) based OFDM-IM (DFT-OFDM-IM), discrete cosine transform (DCT) based OFDM-IM (DCT-OFDM-IM) can be employed with double the subcarriers given the same bandwidth. However, one of the main disadvantages of DCT-OFDM-IM is its lack of the circular convolution property over a dispersive channel. To address this issue, an enhanced DCT-OFDM-IM (EDCT-OFDM-IM) system has been proposed by introducing a symmetric prefix and suffix at the transmitter and a pre-filter at the receiver, leading to better performance than DFT-OFDM-IM in terms of bit error rate (BER). However, due to its special structure, it is difficult to derive an accurate average bit error probability (ABEP) upper bound, which is essential for performance evaluation. In this paper, a tight ABEP upper bound is derived using the moment-generating function (MGF). Our theoretical analysis is validated by simulation results and proven to be very accurate. Consequently, the advantages of the EDCT-OFDM-IM system over the classic OFDM-IM system are further demonstrated analytically.
The launch of the StarLink Project has recently stimulated a new wave of research on integrating Low Earth Orbit (LEO) satellite networks with the terrestrial Internet infrastructure. In this context, one distinct technical challenge to be tackled is the frequent topology change caused by the constellation behaviour of LEO satellites. Frequent change of the peering IP connection between the space and terrestrial Autonomous Systems (ASes) inevitably disrupts the Border Gateway Protocol (BGP) routing stability at the network boundaries which can be further propagated into the internal routing infrastructures within ASes. To tackle this problem, we introduce the Geosynchronous Network Grid Addressing (GNGA) scheme by decoupling IP addresses from physical network elements such as a LEO satellite. Specifically, according to the density of LEO satellites on the orbits, the IP addresses are allocated to a number of stationary "grids" in the sky and dynamically bound to the interfaces of the specific satellites moving into the grids along time. Such a scheme allows static peering connection between a terrestrial BGP speaker and a fixed external BGP (e-BGP) peer in the space, and hence is able to circumvent the exposure of routing disruptions to the legacy terrestrial ASes. This work-in-progress specifically addresses a number of fundamental technical issues pertaining to the design of the GNGA scheme.
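The grid-binding idea can be sketched as follows: the sky is divided into fixed cells, each owning a stable address, and whichever satellite currently covers a cell is bound to that address. The grid resolution and address format below are illustrative assumptions.

```python
def grid_index(lat, lon, lat_cells=18, lon_cells=36):
    """Map a satellite sub-point (degrees) to a stationary grid cell."""
    row = min(int((lat + 90.0) / (180.0 / lat_cells)), lat_cells - 1)
    col = min(int((lon + 180.0) / (360.0 / lon_cells)), lon_cells - 1)
    return row * lon_cells + col

def grid_address(lat, lon):
    # Stable per-grid address: the terrestrial BGP speaker peers with this
    # fixed address, not with any individual (moving) satellite.
    return f"fd00:5a7::{grid_index(lat, lon):x}"

# As satellites move, the same address is simply re-bound to whichever
# satellite enters the cell, so the terrestrial peering never changes.
print(grid_address(10.0, 120.5))
```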
In this paper, we investigate the downlink transmission in integrated terrestrial-satellite networks, whereby the same spectrum is shared between the two systems and mutual interference must therefore be carefully mitigated. We address this challenging issue by unifying the terrestrial beamformer design and satellite user scheduling into the same optimization framework. This nontrivial problem is decomposed into two subproblems: one deals with the terrestrial beamformer design to control the interference from the terrestrial base stations to the satellite users, while the other optimizes the scheduling of the satellite users following the framing structure of the DVB-S2X standard for satellite communication systems. A deep clustering user scheduling scheme is developed to group suitable satellite users into the same frame using the channel state information as the input feature. Finally, a joint iterative algorithm is designed to maximize the sum rate of all users in the integrated system. We conduct extensive simulations to show the effectiveness of the proposed scheme.
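As a simplified stand-in for the deep clustering network, plain k-means on CSI feature vectors illustrates how users with similar channels can be grouped into the same frame; the CSI dimensions and frame count below are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
csi = rng.normal(size=(60, 16))  # 60 satellite users, 16-dim CSI feature each
n_frames = 6                      # DVB-S2X-style frames to fill

labels = KMeans(n_clusters=n_frames, n_init=10, random_state=0).fit_predict(csi)
frames = {f: np.flatnonzero(labels == f) for f in range(n_frames)}
for f, users in frames.items():
    print(f"frame {f}: users {users}")  # co-scheduled users have similar CSI
```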
The elasticity of transmission control protocol (TCP) traffic complicates attempts to provide performance guarantees to TCP flows. The existence of different types of networks and environments on the connections’ paths only aggravates this problem. In this paper, simulation is the primary means for investigating the specific problem in the context of bandwidth on demand (BoD) geostationary satellite networks. Proposed transport-layer options and mechanisms for TCP performance enhancement, studied in the single connection case or without taking into account the media access control (MAC)-shared nature of the satellite link, are evaluated within a BoD-aware satellite simulation environment. Available capabilities at MAC layer, enabling the provision of differentiated service to TCP flows, are demonstrated and the conditions under which they perform efficiently are investigated. The BoD scheduling algorithm and the policy regarding spare capacity distribution are two MAC-layer mechanisms that appear to be complementary in this context; the former is effective at high levels of traffic load, whereas the latter drives the differentiation at low traffic load. When coupled with transport layer mechanisms they can form distinct bearer services over the satellite network that increase the differentiation robustness against the TCP bias against connections with long round-trip times. We also explore the use of analytical, fixed-point methods to predict the performance at transport level and link level. The applicability of the approach is mainly limited by the lack of analytical models accounting for prioritization mechanisms at the MAC layer and the nonuniform distribution of traffic load among satellite terminals.
The vehicular ad hoc network (VANET) is a platform for exchanging information between vehicles and everything, to enhance the driver's driving experience and improve traffic conditions. The reputation system plays an essential role in judging whether to communicate with a target vehicle based on other vehicles' feedback. However, existing reputation systems ignore the privacy protection of feedback providers. Additionally, traditional VANETs based on wireless sensor networks (WSNs) have limited power, storage, and processing capabilities, which cannot meet real-world demands in a practical VANET deployment. Thus, we integrate cloud computing with VANET and propose a privacy-preserving protocol of vehicle feedback (PPVF) for cloud-assisted VANET. In cloud-assisted VANET, we integrate homomorphic encryption and data aggregation technology to design the PPVF scheme, in which, with the assistance of the roadside units (RSUs), the cloud service provider (CSP) obtains the total number of vehicles with the corresponding parameters in the feedback for reputation calculation, without violating individual feedback privacy. Simulation results and security analysis confirm that PPVF achieves effective privacy protection for vehicle feedback with an acceptable computational and communication burden. Besides, the RSU is capable of handling 1999 messages every 300 ms, so PPVF maintains a low message loss rate even as the number of vehicles in the communication domain increases.
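The additively homomorphic aggregation step can be sketched with the python-paillier (`phe`) package as a stand-in for the scheme used in PPVF; the 0/1 feedback votes are illustrative.

```python
from phe import paillier  # pip install phe

# CSP generates the keypair; only the public key is distributed to vehicles.
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Vehicles encrypt individual feedback (1 = positive) under the CSP public key.
encrypted_votes = [public_key.encrypt(v) for v in [1, 0, 1, 1]]

# The RSU aggregates ciphertexts without learning any individual vote,
# thanks to the additive homomorphism of Paillier encryption.
encrypted_total = sum(encrypted_votes[1:], encrypted_votes[0])

# Only the CSP can decrypt, and it sees just the aggregate for reputation use.
print("positive feedback count:", private_key.decrypt(encrypted_total))
```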
When channel state information (CSI) is not available at the transmitter, outage events may happen, and Automatic Repeat reQuest (ARQ) is implemented to ensure reliable transmission in such cases. In this paper, we consider a three-node relay system with a hybrid relay scheme, where the relay, based on its decoding status, can switch between decode-and-forward (DF) and compress-and-forward (CF) adaptively. We note that CSI is required when CF is deployed, and address practical implementation issues by enhancing the feedback channel from the destination to the relay to convey a few extra bits (only 2 bits in this paper) in addition to the ACK/NACK bit, and propose a new ARQ scheme. The modified scheme allows the relay to utilize the various relay schemes more flexibly according to its decoding status and the extra feedback bits. ARQ strategies with hybrid relay schemes exhibit superior performance over direct transmission and pure DF, especially when the relay is close to the destination.
In this paper, empty substrate integrated waveguide (ESIW) technology is applied to design long-slot leaky-wave antennas (LWAs). First, a uniform-aperture structure is presented and its limitations on forming the beam are studied. Then, a sinusoidal curve is employed to modify the geometry of the guided-wave structure, dividing the slot into a number of segments and making a periodic aperture. After that, a method is proposed to regulate the guided waves inside the ESIW. To this end, a modulation function is derived to simultaneously determine the local amplitude and segment length of the physical sinusoidal curve at each individual point on the structure. This results in manipulating the phase constant (β) and leakage rate (α) across the aperture, which ultimately controls both the tilt angle and side-lobe level (SLL) of the constructed beam. The slot is placed on the centerline of the broad wall of the ESIW in order to reduce the cross polarization. The structure is designed to operate at 35 GHz with an SLL of -30 dB and a backward tilt angle of θm = 20 deg. Finally, the proposed LWA is simulated and a fabricated design is measured. Good agreement is observed between the theoretical, simulated, and measured performance of the antenna.
This letter addresses energy-efficient design in multi-user, single-carrier uplink channels by employing multiple decoding policies. The comparison metric used in this study is based on average energy efficiency contours, where an optimal rate vector is obtained based on four system targets: Maximum energy efficiency, a trade-off between maximum energy efficiency and rate fairness, achieving energy efficiency target with maximum sum-rate and achieving energy efficiency target with fairness. The transmit power function is approximated using Taylor series expansion, with simulation results demonstrating the achievability of the optimal rate vector, and negligible performance difference in employing this approximation.
A reconfigurable metamaterial-inspired unit cell is proposed that can be reconfigured to behave either as a perfect magnetic conductor (PMC) or as a perfect electric conductor (PEC) and its application to waveguide miniaturisation is demonstrated. The unit cell is designed to operate in the sub-6 GHz band at 3.6 GHz with a PMC bandwidth of ≈ 150 MHz and has a simple construction that makes the design easy to fabricate. The phase response of the reconfigurable unit cell is presented and a prototype design of a miniaturised waveguide using the proposed unit cell is also proposed. The performance and field distribution of the waveguide are analysed which demonstrate the existence of a pass-band spanning ≈ 160 MHz below the cutoff frequency and the presence of a quasi TEM mode.
In this paper, a correlation matching pursuit (CMP) procedure is proposed to handle the vector perturbation (VP) problem for nonlinear precoding (NLP) in downlink multiuser multi-antenna (i.e., MU-MIMO) systems. Basically, CMP consists of two sub-procedures, namely correlation matching and correlation pursuit. The former takes charge of the direct calculation of the perturbation integers through a set of established convex optimization problems; the latter is responsible for selecting the perturbation vector used to update the precoded vector. Iterative execution of both procedures allows CMP to be modelled as a search tree consisting of multiple nodes and paths. The sequence correlation between the precoded vector (node vector) and perturbation vectors (path vectors) is shown to be crucial to performance optimality and is thus used as the metric for the path search. Given that the single-path search is suboptimal, we propose multi-path schemes that exploit the path diversity and thus further improve the performance. Complexity analysis and computer simulations demonstrate that CMP-based NLP algorithms serve as low-cost VP solutions with significantly lower processing latency and, at the same time, performance comparable to prior art.
Millimeter wave (mmWave) systems with effective beamforming capability play a key role in fulfilling the high data-rate demands of current and future wireless technologies. Hybrid analog-digital beamformers have been identified as a cost-effective and energy-efficient solution for deploying such systems. Most existing hybrid beamforming architectures rely on a sub-connected phase shifter network with a large number of antennas. Such approaches, however, cannot fully exploit the advantages of large arrays. On the other hand, current fully-connected beamformers accommodate only a small number of antennas, which substantially limits their beamforming capabilities. In this paper, we present a mmWave hybrid beamformer testbed with a fully-connected network of phase shifters and adjustable attenuators and a large number of antenna elements. To our knowledge, this is the first platform that connects two RF inputs from the baseband to a 16×8 antenna array, and it operates at 26 GHz with a 2 GHz bandwidth. It provides a wide scanning range of ±60° and the flexibility to control both the phase and the amplitude of the signals between each of the RF chains and the antennas. This beamforming platform can be used in both short- and long-range communications, with linear equivalent isotropically radiated power (EIRP) variation between 10 dBm and 60 dBm. In this paper, we present the design, calibration procedures and evaluation of this complex system, as well as discussions of the critical factors to consider for practical implementation.
This letter highlights the combined advantages of Open Radio Access Network (O-RAN) and distributed Artificial Intelligence (AI) in network slicing. O-RAN's virtualization and disaggregation techniques enable efficient resource allocation, while AI-driven networks optimize performance and decision-making. We propose a federated Deep Reinforcement Learning (DRL) approach to offload dynamic RAN disaggregation to edge sites to enable local data processing and faster decision-making. Our objective is to optimize dynamic RAN disaggregation by maximizing resource utilization and minimizing reconfiguration overhead. Through performance evaluation, our proposed approach surpasses the distributed DRL approach in the training phase. By modifying the learning rate, we can influence the variance of rewards and enhance the convergence of training. Moreover, fine-tuning the reward function's weighting factor enables us to attain the targeted network Key Performance Indicators (KPIs).
This paper aims to analyze the stochastic performance of a multiple-input multiple-output (MIMO) integrated sensing and communication (ISAC) system in a downlink scenario, where a base station (BS) transmits a dual-functional radar-communication (DFRC) signal matrix, serving the purpose of transmitting communication data to the user while simultaneously sensing the angular location of a target. The channel between the BS and the user is modeled as a random channel with Rayleigh fading distribution, and the azimuth angle of the target is assumed to follow a uniform distribution. We use a maximum ratio transmission (MRT) beamformer to share resources between sensing and communication (S&C) and observe the trade-off between them. We derive the approximate probability density function (PDF) of the signal-to-noise ratio (SNR) for both the user and the target. Subsequently, leveraging the obtained PDF, we derive expressions for the user's rate outage probability (OP), as well as the OP for the Cramer-Rao lower bound (CRLB) of the angle of arrival (AOA). In our numerical results, we demonstrate the trade-off between S&C, confirmed with simulations.
The mutual information (MI) of a multiple-input multiple-output (MIMO) system over a Rayleigh fading channel is known to asymptotically follow a normal probability distribution. In this paper, we first prove that the MI of a distributed MIMO (DMIMO) system is also asymptotically equivalent to a Gaussian random variable (RV) by deriving its moment generating function (MGF) and showing its equivalence with the MGF of a Gaussian RV. We then derive an accurate closed-form approximation of the outage probability for the DMIMO system by using the mean and variance of the MI and show the uniqueness of its formulation. Finally, several applications of our analysis are presented.
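As a quick numerical illustration of this kind of result, the sketch below (with arbitrary antenna counts, SNR, and target rate, and an i.i.d. Rayleigh channel rather than the paper's DMIMO setup) compares the empirical outage probability against the Gaussian approximation built from the sample mean and variance of the mutual information.

    import numpy as np
    from math import erf, sqrt

    # Sketch: once the mean and variance of the MI are known, the outage
    # probability is approximated by the Gaussian CDF at the target rate R.
    rng = np.random.default_rng(1)
    nt, nr, snr, trials, R = 4, 4, 10.0, 50_000, 10.0
    mi = np.empty(trials)
    for i in range(trials):
        H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        mi[i] = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T)).real
    mu, sigma = mi.mean(), mi.std()
    gauss_op = 0.5 * (1 + erf((R - mu) / (sigma * sqrt(2))))   # Phi((R - mu) / sigma)
    print(f"empirical OP = {np.mean(mi < R):.4f}, Gaussian approximation = {gauss_op:.4f}")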
This paper presents a fully-transparent and novel frequency selective surface (FSS) that can be deployed instead of conventional glass to reduce the penetration loss encountered by millimeter wave (mmWave) frequencies in typical outdoor-to-indoor (O2I) communication scenarios. The presented design uses a 0.035 mm thick layer of indium tin oxide (ITO), which is a transparent conducting oxide (TCO) deposited on the surface of the glass, thereby ensuring the transparency of the structure. The paper also presents a novel unit cell that has been used to design the hexagonal lattice of the FSS structure. The dispersion and transmission characteristics of the proposed design are presented and compared with conventional glass. The presented FSS can be used for both the 26 GHz and 28 GHz bands of the mmWave spectrum and offers a lower transmission loss as compared to conventional glass without any considerable impact on the aesthetics of the building infrastructure.
A novel high-isolation dual-polarized in-band full-duplex (IBFD) dielectric resonator antenna (DRA) for satellite communications using a decoupling structure is proposed. Good isolation between the transmit and receive ports is achieved by placing two identical linearly polarized resonators orthogonal to each other. Each resonator consists of a main rectangular dielectric resonator with a dielectric constant of 10 and is loaded with a thin dielectric slab of a lower permittivity of 5 to further broaden the matching bandwidth. The isolation is further improved by loading an absorber and etching several slots in the ground plane. Finally, the proposed DRA is fabricated and measured to validate the concepts. Measurement results show high isolation of more than 50 dB over the desired operating bandwidth from 23.04 GHz to 24.08 GHz (Ka-band) with a peak gain of about 8.93 dBi and 8.09 dBi for Port 1 and Port 2, respectively. In addition, the proposed IBFD DRA provides 11.87 GHz and 4.84 GHz isolation bandwidths over 25 dB and 30 dB, respectively, making it a potential candidate for mm-wave terrestrial applications.
In the next generation of communication networks (i.e. 6G), metamaterial-based antenna designs, such as Reconfigurable Intelligent Surfaces (RIS), will be critical for improving wireless communication systems. This paper investigates the ergodic capacity of RIS-aided multiple-input multiple-output (MIMO) systems in the presence of a direct link between the transmitter and receiver. We obtain an exact expression for the ergodic capacity of the cooperative MIMO-RIS system (along with the corresponding probability density function of the cooperative MIMO-RIS channel) assuming that the receiver is capable of treating the RIS and direct-link contributions in the received signal separately. Furthermore, we demonstrate that in the absence of this capability, the resulting formula is a tight upper bound that becomes increasingly tighter with greater numbers of RIS elements. In addition, we present a simplified capacity expression for large numbers of RIS elements, which provides further insights into the behaviour of the cooperative MIMO-RIS capacity. To gain more insights, we also include a high-SNR approximation. Our simulation results confirm the correctness of our expressions and illustrate how the SNR and the number of RIS elements impact the ergodic capacity.
This paper proposes a novel unipolar transceiver for visible light communication (VLC) using orthogonal waveforms. The main advantage of our proposed scheme over most of the existing unipolar schemes in the literature is that the polarity of the real-valued orthogonal frequency division multiplexing (OFDM) sample determines the pulse shape of the continuous-time signal; thus, the unipolar conversion is performed directly in the analog instead of the digital domain. Therefore, our proposed scheme does not require any direct current (DC) biasing or clipping, as is the case with existing schemes in the literature. The bit error rate (BER) performance of our proposed scheme is analytically derived and its accuracy is verified using Matlab simulations. Simulation results also substantiate the potential performance gains of our proposed scheme against state-of-the-art OFDM-based systems in VLC, indicating that the absence of DC shift and clipping in our scheme supports more reliable communication and outperforms the asymmetrically clipped optical OFDM (ACO-OFDM), DC optical OFDM (DCO-OFDM) and unipolar OFDM (U-OFDM) schemes. For instance, our scheme outperforms ACO-OFDM by at least 3 dB (in terms of signal-to-noise ratio) at a target BER of 10
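The polarity-to-pulse-shape idea can be sketched in a few lines. The toy transmitter below is only illustrative (the IFFT size, pulse shapes, and oversampling factor are our own assumptions, not the paper's design): it builds a real-valued OFDM frame via Hermitian-symmetric subcarrier mapping, then lets the sign of each sample select one of two orthogonal half-interval pulses, so the emitted intensity signal is unipolar without DC bias or clipping.

    import numpy as np

    # Illustrative sketch: Hermitian-symmetric mapping gives a real-valued OFDM
    # frame; each sample's sign then selects one of two orthogonal half-symbol
    # pulses, keeping the transmitted intensity signal non-negative.
    rng = np.random.default_rng(2)
    N = 16                                   # IFFT size (assumed)
    qpsk = (rng.choice([-1, 1], N // 2 - 1) + 1j * rng.choice([-1, 1], N // 2 - 1)) / np.sqrt(2)
    X = np.zeros(N, dtype=complex)
    X[1:N // 2] = qpsk
    X[N // 2 + 1:] = np.conj(qpsk[::-1])     # Hermitian symmetry -> real x[n]
    x = np.fft.ifft(X).real
    L = 8                                    # samples per pulse interval (assumed)
    p_pos = np.r_[np.ones(L // 2), np.zeros(L // 2)]   # pulse for positive samples
    p_neg = np.r_[np.zeros(L // 2), np.ones(L // 2)]   # orthogonal pulse for negatives
    tx = np.concatenate([abs(s) * (p_pos if s >= 0 else p_neg) for s in x])
    assert tx.min() >= 0.0                   # unipolar: ready for intensity modulation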
In this paper, we present a novel distributed Inter-Cell Interference Coordination (ICIC) scheme for interference-limited heterogeneous cellular networks (HetNets). We reformulate our problem in such a way that it can be decomposed into a number of small sub-problems, which can be solved independently through an iterative subgradient method. The proposed dual decomposition method can also address problems with binary-valued variables. The proposed algorithm is compared with several reference schemes in terms of cell-edge and total cell throughput.
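For readers unfamiliar with the machinery, a generic dual-decomposition loop looks as follows. This toy sketch (our own example, not the paper's ICIC formulation) relaxes one coupling constraint with a price lambda, solves each sub-problem in closed form, and updates the price by a diminishing-step subgradient method.

    import numpy as np

    # Toy dual-decomposition sketch: maximize sum_i log(1 + x_i) s.t. a @ x <= c.
    # Relaxing the coupling constraint with price lambda decouples the problem;
    # each sub-problem x_i = argmax log(1 + x) - lambda * a_i * x has a closed form.
    a, c = np.array([1.0, 2.0, 1.5]), 4.0
    lam = 1.0
    for k in range(200):
        x = np.maximum(1.0 / (lam * a + 1e-9) - 1.0, 0.0)         # per-cell sub-problem
        lam = max(0.0, lam + 0.1 / np.sqrt(k + 1) * (a @ x - c))  # subgradient price update
    print(f"dual price {lam:.3f}, allocation {np.round(x, 2)}")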
The concept of Ultra Dense Networks (UDNs) is often seen as a key enabler of the next generation of mobile networks. The massive number of BSs in UDNs represents a challenge in deployment, and there is a need to understand the performance behaviour and benefit of a network when BS locations are carefully selected. This can be of particular importance to network operators who deploy their networks in large indoor open spaces such as exhibition halls, airports or train stations, where the locations of BSs often follow a regular pattern. In this paper we study the downlink performance of UDNs for a regular network produced by careful BS site selection and compare it to an irregular network with random BS placement. We first develop an analytical model to describe the performance of regular networks, showing performance behaviour in many respects similar to that of the irregular networks widely studied in the literature. We also show the potential performance gain resulting from proper site selection. Our analysis further reveals the interesting finding that even for over-densified regular networks, non-negligible system performance can still be achieved.
In this paper, we propose a data cell outage detection scheme for heterogeneous networks (HetNets) with separated control and data planes. We consider a HetNet where the Control Network Layer (CNL) provides ubiquitous network access while the Data Network Layer (DNL) provides high-data-rate transmission to low-mobility User Terminals (UTs). Furthermore, network functionalities such as paging and system information broadcast are provided by the CNL to all active UTs; hence, the CNL is aware of all active UT associations. Based on this observation, we divide our data cell outage detection scheme into a trigger phase and a detection phase. In the former, the CNL monitors all UT-data base station associations and triggers detection when irregularities occur in the association, while the latter utilizes a grey prediction model on the UTs' reference signal received power (RSRP) statistics to determine the existence of an outage. The simulation results indicate that the proposed scheme can detect the data cell outage problem in a reliable manner.
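A grey prediction model of the GM(1,1) type, as used in the detection phase, can be sketched compactly. In the toy example below, the RSRP history, the residual threshold, and the outage decision rule are all our own assumptions for illustration: the model predicts the next sample from a short history, and a large residual against the actual measurement flags a possible outage.

    import numpy as np

    # GM(1,1) grey-prediction sketch (illustrative parameters): predict the next
    # RSRP magnitude; a large prediction residual suggests a possible outage.
    def gm11_predict(x0):
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                        # accumulated generating operation
        z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        k = len(x0)
        x1_next = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x1_curr = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
        return x1_next - x1_curr                  # inverse AGO -> next x0 sample

    rsrp_mag = [92.0, 93.1, 92.7, 94.0, 93.5]     # |RSRP| history in dB (i.e., -92 dBm, ...)
    pred, measured = gm11_predict(rsrp_mag), 110.0  # sudden drop to -110 dBm
    print(f"predicted -{pred:.1f} dBm, measured -{measured:.1f} dBm,"
          f" outage suspected: {abs(measured - pred) > 10}")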
Integrating information and communication technologies into the power generation, transmission and distribution system provides a new concept called the Smart Grid (SG). The wide variety of devices connected to the SG communication infrastructure generates heterogeneous data with different Quality of Service (QoS) requirements and communication technologies. An Intrusion Detection System (IDS) is a surveillance system that monitors the traffic flow over the network, seeking any abnormal behaviour to detect possible intrusions or attacks against the SG system. The distributed nature of power and data in the SG increases the complexity of analysing the QoS and user requirements. Thus, a Big Data-aware distributed IDS is required to deal with the malicious behaviour of the network. Motivated by this, we design a distributed IDS that deals with anomalous big data and imposes the proper defence algorithm to alert the SG. This paper first proposes a new smart meter (SM) architecture including a distributed IDS model (SM-IDS). Secondly, we implement SM-IDS using supervised ML algorithms. Finally, a distributed IDS model is introduced using federated learning. Numerical results confirm that a Neighbourhood Area Network IDS (NAN-IDS) can help decrease the smart meters' energy and resource consumption. SM-IDS achieves an accuracy of 84.31% with a detection rate of 74.69%, while NAN-IDS provides an accuracy of 87.40% and a detection rate of 86.73%.
This paper presents empirically-based large-scale propagation path loss models for small cell fifth generation (5G) cellular systems in the millimeter-wave bands, based on practical propagation channel measurements at 26 GHz, 32 GHz, and 39 GHz. To characterize path loss in these frequency bands for 5G small cell scenarios, extensive wideband and directional channel measurements have been performed on the campus of the University of Surrey. Close-in (CI) reference and 3GPP path loss models have been studied, and large-scale fading characteristics have been obtained and presented.
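The close-in (CI) reference-distance model mentioned above has a single path-loss exponent n anchored to the 1 m free-space loss, PL(d) = FSPL(1 m) + 10 n log10(d) + X_sigma, and n is typically fitted by least squares. The sketch below uses synthetic measurements at 26 GHz (all values are illustrative, not from the measurement campaign) to show the fit.

    import numpy as np

    # CI model sketch: fit the path-loss exponent n by least squares from
    # synthetic path-loss samples; X_sigma is lognormal shadowing in dB.
    c, f = 3e8, 26e9                                   # 26 GHz band
    fspl_1m = 20 * np.log10(4 * np.pi * 1.0 * f / c)   # free-space loss at d0 = 1 m
    rng = np.random.default_rng(3)
    d = np.array([10, 20, 50, 100, 200], dtype=float)  # Tx-Rx distances (m)
    pl_meas = fspl_1m + 10 * 2.1 * np.log10(d) + rng.normal(0, 4.0, d.size)
    w = 10 * np.log10(d)
    n_hat = np.sum((pl_meas - fspl_1m) * w) / np.sum(w ** 2)   # least-squares exponent
    sigma = np.std(pl_meas - fspl_1m - n_hat * w)
    print(f"fitted path-loss exponent n = {n_hat:.2f}, shadowing sigma = {sigma:.1f} dB")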
The uplink of a cell-free massive multiple-input multiple-output (MIMO) system with maximum-ratio combining (MRC) and zero-forcing (ZF) schemes is investigated. A power allocation optimization problem is considered, where two conflicting metrics, namely the sum rate and fairness, are jointly optimized. As there is no closed-form expression for the achievable rate in terms of the large-scale fading (LSF) components, the sum rate-fairness trade-off optimization problem cannot be solved using known convex optimization methods. To alleviate this problem, we propose two new approaches. For the first approach, a use-and-then-forget scheme is utilized to derive a closed-form expression for the achievable rate. Then, the fairness optimization problem is iteratively solved through the proposed sequential convex approximation (SCA) scheme. For the second approach, we exploit the LSF coefficients as inputs of a twin delayed deep deterministic policy gradient (TD3) agent, which efficiently solves the non-convex sum rate-fairness trade-off optimization problem. Next, the complexity and convergence properties of the proposed schemes are analyzed. Numerical results demonstrate the superiority of the proposed approaches over conventional power control algorithms in terms of the sum rate and minimum user rate for both the ZF and MRC receivers. Moreover, the proposed TD3-based power control achieves better performance than the proposed SCA-based approach as well as the fractional power scheme.
Along with spectral efficiency (SE), energy efficiency (EE) is becoming one of the key performance evaluation criteria for communication systems. These two conflicting criteria can be linked through their trade-off. The EE-SE trade-off for the multi-input multi-output (MIMO) Rayleigh fading channel has been accurately approximated in the past, but only in the low-SE regime. In this paper, we propose a novel and more generic closed-form approximation of this trade-off which exhibits greater accuracy for a wider range of SE values and antenna configurations. Our expression is utilized here for analytically assessing the EE gain of MIMO over a single-input single-output (SISO) system for two different types of power consumption models (PCMs): the theoretical PCM, where only the transmit power is considered as consumed power; and a more realistic PCM accounting for the fixed consumed power and amplifier inefficiency. Our analysis reveals the large mismatch between theoretical and practical MIMO vs. SISO EE gains; in theory the EE gain increases both with the SE and with the number of antennas, which indicates that MIMO is a promising EE enabler, whereas it remains small and decreases with the number of transmit antennas when a realistic PCM is considered.
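The impact of the power consumption model on the EE-SE picture is easy to reproduce qualitatively. The sketch below (all powers, efficiencies, the path gain, and the crude array-gain model are our own assumptions, not the paper's expressions) contrasts a theoretical PCM, which counts only radiated power, with a realistic PCM that adds per-antenna overhead and amplifier inefficiency.

    import numpy as np

    # Sketch: EE under two PCMs. Fixed per-antenna power and PA inefficiency
    # erode the MIMO EE gain that the theoretical PCM promises.
    B, N0, g = 10e6, 4e-14, 1e-10             # bandwidth (Hz), noise power (W), path gain
    p_tx = 0.1                                 # radiated power (W)
    p_fix_ant, eta = 5.0, 0.35                 # per-antenna overhead (W), PA efficiency
    for nt in (1, 4):                          # SISO vs 4 antennas (crude array gain)
        se = np.log2(1 + nt * p_tx * g / N0)               # bit/s/Hz
        ee_theory = B * se / p_tx                          # theoretical PCM
        ee_real = B * se / (nt * p_fix_ant + p_tx / eta)   # realistic PCM
        print(f"nt={nt}: SE={se:.1f} b/s/Hz, EE(theory)={ee_theory:.2e} b/J, EE(real)={ee_real:.2e} b/J")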
Energy consumption of sensor nodes is a key factor affecting the lifetime of wireless sensor networks (WSNs). Prolonging network lifetime requires not only energy-efficient operation, but also even dissipation of energy among sensor nodes. On the other hand, spatial and temporal variations in sensor activities create energy imbalance across the network. Therefore, routing algorithms should make an appropriate trade-off between energy efficiency and energy consumption balancing to extend the network lifetime. In this paper, we propose a Distributed Energy-aware Fuzzy Logic based routing algorithm (DEFL) that simultaneously addresses energy efficiency and energy balancing. Our design captures network status through appropriate energy metrics and maps them into corresponding cost values for the shortest-path calculation. We adopt a fuzzy logic approach for the mapping in order to incorporate human reasoning. We compare the network lifetime performance of DEFL with other popular solutions including MTE, MDR and FA. Simulation results demonstrate that the network lifetime achieved by DEFL exceeds the best of all tested solutions under various traffic load conditions. We further numerically compute the upper-bound performance and show that DEFL performs near the upper bound.
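To make the fuzzy mapping concrete, here is a minimal sketch with a two-rule base of our own invention (the actual DEFL rule base and energy metrics differ): triangular memberships turn a node's residual-energy ratio and recent drain rate into a link cost usable in a shortest-path calculation.

    # Minimal fuzzy-logic sketch (hypothetical rule base, not the DEFL rules).
    def tri(x, a, b, c):
        """Triangular membership on [a, c] peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def link_cost(residual, drain):
        low_e, high_e = tri(residual, -0.01, 0.0, 0.6), tri(residual, 0.4, 1.0, 1.01)
        low_d, high_d = tri(drain, -0.01, 0.0, 0.6), tri(drain, 0.4, 1.0, 1.01)
        # Rules: low energy OR high drain -> costly; high energy AND low drain -> cheap.
        w_costly = max(low_e, high_d)
        w_cheap = min(high_e, low_d)
        return (w_costly * 1.0 + w_cheap * 0.1) / (w_costly + w_cheap + 1e-9)

    print(link_cost(residual=0.9, drain=0.1))   # healthy node -> low cost (~0.1)
    print(link_cost(residual=0.2, drain=0.8))   # stressed node -> high cost (~1.0)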
This letter presents a new posterior Cramér-Rao bound (PCRB) for inertial-sensor-enhanced mobile positioning, which performs hybrid data fusion of parameters including position estimates, pedestrian step size, pedestrian heading, and knowledge of the random walk motion model. Moreover, a non-matrix closed form of the PCRB is derived without position estimates. Finally, our numerical results show that when the accuracy of the step size and heading measurements is high enough, the knowledge of the random walk model becomes redundant.
In this paper, we investigate the uplink transmission of a single-antenna handheld user to a cluster of satellites. Taking advantage of the inter-satellite links, the satellites can cooperate with each other to jointly detect the received signal. We examine a scenario in which the satellite cluster lacks access to the instantaneous channel state information (CSI). Thus, using only statistical CSI, we design the joint detection by minimizing the mean square error (MSE). We calculate the ergodic capacity using the properties of the Wishart matrix, and then for low-SNR scenarios we provide a closed-form approximation for it. Our numerical results demonstrate the effectiveness of the detection scheme, along with the proximity of the approximation to the actual ergodic capacity. Considering a mega-constellation with 3168 satellites in low Earth orbit (LEO), we show that a capacity of more than 10 MB/sec can be achieved even if only the statistical CSI is known to the receiver, and a capacity of up to 38 MB/sec can be achieved with perfect instantaneous CSI.
Node clustering has been widely studied in recent years for Wireless Sensor Networks (WSN) as a technique to form a hierarchical structure and prolong network lifetime by reducing the number of packet transmissions. Cluster Heads (CH) are elected in a distributed way among sensors, but are often highly overloaded, and therefore re-clustering operations should be performed to share the resource-intensive CH role. Existing protocols involve periodic network-wide re-clustering operations that are performed simultaneously, which requires global time synchronisation. To address this issue, some recent studies have proposed asynchronous node clustering for networks with direct links from CHs to the data sink. However, for large-scale WSNs, multihop packet delivery to the sink is required since long-range transmissions are costly for sensor nodes. In this paper, we present an asynchronous node clustering protocol designed for multihop WSNs, considering dynamic conditions such as residual node energy levels and unbalanced data traffic loads caused by packet forwarding. Simulation results demonstrate that it is possible to achieve similar levels of lifetime extension by re-clustering a multihop WSN via independently made decisions at CHs, without the need for time synchronisation required by existing synchronous protocols.
There has been keen interest in detecting abrupt sequential changes in streaming data obtained from sensors in Wireless Sensor Networks (WSNs) for Internet of Things (IoT) applications such as fire/fault detection, activity recognition and environmental monitoring. Such applications require (near) online detection of instantaneous changes. This paper proposes an Online, adaptive Filtering-based Change Detection (OFCD) algorithm. Our method is based on a convex combination of two decoupled Least Mean Square (LMS) windowed filters with differing sizes. Both filters are applied independently to data streams obtained from sensor nodes such that their convex combination parameter is employed as an indicator of abrupt changes in mean values. An extension of our method based on a cooperative scheme between multiple sensors (COFCD) is also presented. It improves both the convergence and the steady-state accuracy of the convex weight parameter. Our experiments show that our approach can be applied in distributed networks in an online fashion. It also provides better performance and lower complexity compared with the state of the art for both single and multiple sensors.
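The core of the convex-combination idea can be condensed as follows. In this toy sketch (the step sizes, sigmoid parametrisation, and synthetic data are our own assumptions, not the paper's exact OFCD), a fast and a slow mean tracker are mixed by a weight lambda adapted to minimise the combined error; lambda swinging towards the fast filter right after t = 500 is the change indicator.

    import numpy as np

    # Convex combination of two adaptive mean trackers: the mixing weight lambda
    # is adapted by stochastic gradient descent on the combined squared error.
    rng = np.random.default_rng(4)
    x = np.r_[rng.normal(0.0, 1.0, 500), rng.normal(4.0, 1.0, 500)]   # mean shift at t=500
    mu_f = mu_s = 0.0          # fast and slow filter estimates
    a = 0.0                    # auxiliary variable, lambda = sigmoid(a)
    for t, s in enumerate(x):
        mu_f += 0.2 * (s - mu_f)    # fast LMS step
        mu_s += 0.01 * (s - mu_s)   # slow LMS step
        lam = 1.0 / (1.0 + np.exp(-a))
        e = s - (lam * mu_f + (1 - lam) * mu_s)
        a += 0.5 * e * (mu_f - mu_s) * lam * (1 - lam)   # gradient step on lambda
        if t in (499, 510, 530):
            print(f"t={t}: lambda={lam:.2f}")   # lambda -> 1 right after the change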
We study the cognitive interference channel where an additional node (a relay) is present. In our model the relay's operation is causal rather than strictly causal, i.e., the relay's transmit symbol depends not only on its past but also on its current received symbol. We derive outer bounds for the discrete and Gaussian cases in very strong interference. A scheme for achievability based on instantaneous amplify-and-forward relaying is proposed for this model. The inner and outer bounds coincide for the special case of very strong interference.
Session Initiation Protocol (SIP) is an application-layer signalling protocol used in the IP-based UMTS network for establishing multimedia sessions. With a satellite component identified to play an integral role in UMTS, there is a need to support SIP-based session establishment over Satellite-UMTS (S-UMTS) as well. Due to the inherent characteristics of SIP, the transport of SIP over an unreliable wireless link with a large propagation delay is inefficient. To improve the session setup performance, a link-layer retransmission based on the Radio Link Control acknowledgement mode (RLC-AM) mechanism is utilised. However, the current UMTS RLC-AM procedure is found to cause undesirable redundant retransmissions when applied over the satellite. As such, this paper proposes an enhancement to the RLC protocol through a timer-based retransmission scheme. Simulation results reveal that not only can the system capacity be improved through this redundant retransmission avoidance scheme, but better system performance in terms of session setup delay and failure is also gained.
This paper presents a new multi-channel MAC protocol for Vehicular Ad Hoc Networks, namely Asynchronous Multi-Channel MAC (AMCMAC). AMCMAC supports simultaneous transmissions on different service channels, as well as allowing other nodes to make rendezvous with their provider/receiver or broadcast emergency messages on the control channel. We compare the performance of the proposed protocol with that of IEEE 1609.4 and the Asynchronous Multichannel Coordination Protocol (AMCP), in terms of throughput on the control and service channels, channel utilization, and the penetration rate of successfully broadcast emergency messages. We demonstrate that AMCMAC outperforms IEEE 1609.4 and AMCP in terms of system throughput by increasing the utilization of the control and service channels. In addition, AMCMAC mitigates both the multi-channel hidden terminal and missing receiver problems which occur in asynchronous multichannel MAC protocols.
This paper provides an efficient key management scheme for large scale personal networks (PN) and introduces the Certified PN Formation Protocol (CPFP) based on a personal public key infrastructure (personal PKI) concept and Elliptic Curve Cryptography (ECC) techniques.
Paving the way for future 5G technologies requires overcoming the spectrum crunch, which is one of the major challenges impeding the growth of wireless technology. The issue becomes more pronounced when we consider IoT, where billions of devices require connectivity. This article motivates the need for exploring new spectrum opportunities with reference to the requirements of IoT networks. Millimeter wave (mmWave) spectrum is considered a panacea for overcoming the spectrum crunch, providing the much-needed breathing space for introducing new applications that require higher rates. A network based on CDSA could further improve performance by utilizing mmWave-based DBSs. The CBS operates on the sub-6 GHz single band, while the DBS possesses a dual-band capability. This article presents a new dimension to spectrum heterogeneity by utilizing a dual-band approach at the DBS. One of the unique aspects of this work is the analysis of a joint radio resource allocation algorithm based on LDD, and we compare the proposed algorithm with the maxRx, DSA and JPRA algorithms. The analysis is further expanded by showing the interplay between the utilization of licensed and unlicensed mmWave resources and how dynamic spectrum management could help in their efficient utilization.
This paper presents the design of a Ka-band reflectarray antenna, intended for LEO satellite communications, which operates at 27 GHz. The phase tuning mechanism relies on variable-size patches capable of achieving a 360-degree phase range, which enables the incoming wave to be scattered in any specific direction. In particular, the reflectarray antenna, which has a square shape of 30 cm per side, is constituted by 50 × 50 radiating patch elements printed on a planar substrate of "Rogers TMM4" material. With a directivity of 27.41 dBi, this configuration is able to generate a pencil beam in the direction perpendicular to the reflecting plane.
The Terahertz (THz) band (0.3-3.0 THz) spans a great portion of the Radio Frequency (RF) spectrum that is mostly unoccupied and unregulated. It is a potential candidate for application in Sixth-Generation (6G) wireless networks, as it is capable of satisfying the high data rate and capacity requirements of future wireless communication systems. Profound knowledge of the propagation channel is crucial in communication system design, yet for the THz band such knowledge is still in its infancy, as channel modeling at THz frequencies has been mostly limited to characterizing fixed Point-to-Point (PtP) scenarios up to 300 GHz. Provided the technology matures enough and models adapt to the distinctive characteristics of the THz wave, future wireless communication systems will enable a plethora of new use cases and applications to be realized, in addition to delivering higher spectral efficiencies that would ultimately enhance the Quality-of-Service (QoS) to the end user. In this paper, we provide an insight into THz channel propagation characteristics, measurement capabilities, and modeling techniques for 6G communication applications, along with guidelines and recommendations that will aid future characterization efforts in the THz band. We survey the most recent and important measurement campaigns and modeling efforts found in the literature, based on the use cases and system requirements identified. Finally, we discuss the challenges and limitations of measurement and modeling at such high frequencies and contemplate the future research outlook toward realizing the 6G vision.
This paper surveys the literature relating to the application of machine learning to fault management in cellular networks from an operational perspective. We summarise the main issues as 5G networks evolve, and their implications for fault management. We describe the relevant machine learning techniques through to deep learning, and survey the progress which has been made in their application, based on the building blocks of a typical fault management system. We review recent work to develop the abilities of deep learning systems to explain and justify their recommendations to network operators. We discuss forthcoming changes in network architecture which are likely to impact fault management and offer a vision of how fault management systems can exploit deep learning in the future. We identify a series of research topics for further study in order to achieve this.
In this paper, unsupervised deep learning solutions for multiuser single-input multiple-output (MU-SIMO) coherent detection are extensively investigated. According to the way the channel state information at the receiver side (CSIR) is utilized, deep learning solutions are divided into two groups. One group is called equalization-and-learning, which utilizes the CSIR for channel equalization and then employs deep learning for multiuser detection (MUD). The other is called direct learning, which directly feeds the CSIR, together with the received signal, into deep neural networks (DNN) to conduct the MUD. It is found that the direct-learning solutions outperform the equalization-and-learning solutions due to their better exploitation of the sequence detection gain. On the other hand, the direct-learning solutions are not scalable to the size of SIMO networks, as current DNN architectures cannot efficiently handle strong co-channel interference. Motivated by this observation, we propose a novel direct-learning approach which combines the merits of feedforward DNNs and parallel interference cancellation. It is shown that the proposed approach trades complexity for learning scalability, and the complexity can be managed thanks to the parallel network architecture.
In this paper, we analyze the block error rate (BLER) and bit error rate (BER) of soft decode-and-forward (SDF) using distributed Turbo codes, which was recently proposed to mitigate the error propagation caused by decoding errors at the relay node. The union bound (UB) in fading channels is derived and compared with simulation results. In order to obtain tight bounds for the block-fading case, the limit-before-average technique is used. Furthermore, we extend our analysis to the space-time cooperation framework. The analysis and simulations show that the derived bound is very tight.
Device-to-device (D2D) communication is considered an important traffic offloading mechanism for future cellular networks. Coupled with proactive device caching, it offers huge potential for capacity and coverage enhancements. In order to ensure maximum capacity enhancement, the number of nodes for direct communication needs to be identified. In this paper, we derive an analytic expression that relates the number of D2D nodes (i.e., the D2D user density) to the average coverage probability of a reference D2D receiver. Using stochastic geometry and a Poisson point process, we introduce a retention probability within the cooperation region and a shortest-distance-based selection criterion to precisely quantify the interference due to D2D pairs in the coverage area. The simulation setup and numerical evaluation validate the closed-form expression.
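Expressions of this kind are easy to cross-check by simulation. The Monte-Carlo sketch below (densities, path-loss exponent, and link distance are illustrative; it ignores noise and the paper's retention/selection refinements) draws interferers from a PPP on a disc and estimates the reference D2D receiver's coverage probability P(SIR > theta).

    import numpy as np

    # PPP coverage sketch: Rayleigh-faded reference link of length d0 against
    # co-channel interferers placed uniformly on a disc of radius R.
    rng = np.random.default_rng(5)
    alpha, d0, theta, R, trials = 4.0, 25.0, 1.0, 500.0, 20_000
    for lam in (1e-5, 5e-5, 1e-4):                  # interferer density (nodes/m^2)
        cov = 0
        for _ in range(trials):
            n = rng.poisson(lam * np.pi * R ** 2)
            r = R * np.sqrt(rng.random(n))          # uniform distances on the disc
            I = np.sum(rng.exponential(1.0, n) * r ** -alpha) if n else 0.0
            S = rng.exponential(1.0) * d0 ** -alpha # Rayleigh fading on the D2D link
            cov += (S > theta * I)
        print(f"density {lam:.0e}: coverage = {cov / trials:.3f}")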
Hot spots in a wireless sensor network emerge as locations under heavy traffic load. Nodes in such areas quickly deplete energy resources, leading to disruption in network services. This problem is common for data collection scenarios in which Cluster Heads (CH) have a heavy burden of gathering and relaying information. The relay load on CHs especially intensifies as the distance to the sink decreases. To balance the traffic load and the energy consumption in the network, the CH role should be rotated among all nodes and the cluster sizes should be carefully determined at different parts of the network. This paper proposes a distributed clustering algorithm, Energy-efficient Clustering (EC), that determines suitable cluster sizes depending on the hop distance to the data sink, while achieving approximate equalization of node lifetimes and reduced energy consumption levels. We additionally propose a simple energy-efficient multihop data collection protocol to evaluate the effectiveness of EC and calculate the end-to-end energy consumption of this protocol; yet EC is suitable for any data collection protocol that focuses on energy conservation. Performance results demonstrate that EC extends network lifetime and achieves energy equalization more effectively than two well-known clustering algorithms, HEED and UCR.
The choice of a suitable waveform is a key factor in the design of the 5G physical layer. New waveforms must be capable of supporting a greater density of users and higher data throughput, and should provide more efficient utilization of the available spectrum to support the 5G vision of "everything everywhere and always connected" with a "perception of infinite capacity". Although orthogonal frequency division multiplexing (OFDM) has been adopted as the transmission waveform in wired and wireless systems for years, it has several limitations that make it unsuitable for use in the future 5G air interface. In this chapter, we investigate and analyse alternative waveforms that are promising candidate solutions to address the challenges of diverse applications and scenarios in 5G.
In this paper, we consider multigroup multicast transmissions with different types of service messages in an overloaded multicarrier system, where the number of transmitter antennas is insufficient to mitigate all inter-group interference. We show that employing a rate-splitting based multiuser beamforming approach enables a simultaneous delivery of the multiple service messages over the same time-frequency resources in a non-orthogonal fashion. Such an approach, taking into account transmission power constraints which are inevitable in practice, outperforms classic beamforming methods as well as current standardized multicast technologies, in terms of both spectrum efficiency and the flexibility of radio resource allocation.
Despite years of physical-layer research, the capacity enhancement potential of relays is limited by the additional spectrum required for Base Station (BS)-Relay Station (RS) links. This paper presents a novel distributed solution by exploiting a system level perspective instead. Building on a realistic system model with impromptu RS deployments, we develop an analytical framework for tilt optimization that can dynamically maximize spectral efficiency of both the BS-RS and BS-user links in an online manner. To obtain a distributed self-organizing solution, the large scale system-wide optimization problem is decomposed into local small scale subproblems by applying the design principles of self-organization in biological systems. The local subproblems are non-convex, but having a very small scale, can be solved via standard nonlinear optimization techniques such as sequential quadratic programming. The performance of the developed solution is evaluated through extensive simulations for an LTE-A type system and compared against a number of benchmarks including a centralized solution obtained via brute force, that also gives an upper bound to assess the optimality gap. Results show that the proposed solution can enhance average spectral efficiency by up to 50% compared to fixed tilting, with negligible signaling overheads. The key advantage of the proposed solution is its potential for autonomous and distributed implementation.
One of the major challenges of cellular-network-based localization techniques is the lack of hearability between mobile terminals (MTs) and base stations (BSs), so the number of available anchors is limited. To solve the hearability problem, previous works assume that some of the MTs have their location information via the Global Positioning System (GPS). These located MTs can be utilized to find the location of an un-located MT without a GPS receiver. However, performance is still limited by the number of located MTs available for cooperation. This paper considers a practical scenario in which hearability is only possible between an MT and its home BS. Only one located MT, together with the home BS, is utilized to find the location of the un-located MT. A hybrid cooperative localization approach is proposed to combine time-of-arrival and received-signal-strength-based fingerprinting techniques. It is shown in simulations that the proposed hybrid approach outperforms the stand-alone time-of-arrival or received-signal-strength-based fingerprinting techniques in the considered scenario. It is also found that the proposed approach offers better accuracy with larger distances between the located MT and the home BS.
The multiuser selection scheduling concept has recently been proposed in the literature in order to increase the multiuser diversity gain and overcome the significant feedback requirements of opportunistic scheduling schemes. The main idea is that reducing the feedback overhead saves per-user power that could potentially be added to the data transmission. In this work, we propose to integrate the principle of multiuser selection with the proportional fair scheduling scheme. This is aimed especially at power-limited, multi-device systems in non-identically distributed fading channels. For the performance analysis, we derive closed-form expressions for the outage probabilities and the average system rate of delay-sensitive and delay-tolerant systems, respectively, and compare them with full-feedback multiuser diversity schemes. The discrete rate region is analytically presented, where the maximum average system rate can be obtained by properly choosing the number of partial devices. We jointly optimize the number of partial devices and the per-device power saving in order to maximize the average system rate under the power requirement. Through our results, we finally demonstrate that the proposed scheme, leveraging the saved feedback power for the data transmission, can outperform full-feedback multiuser diversity under non-identically distributed Rayleigh fading of the devices' channels.
A novel reconfigurable dielectric resonator antenna (DRA) employing a T-shaped microstrip-fed structure to excite the dielectric resonator is presented. By carefully adjusting the location of the inverted U-shaped slot, the switches, and the length of the arms, the proposed antenna can support WLAN wireless systems. In addition, the presented DRA is suitable for cognitive radio because it can switch between wideband and narrowband operation. The proposed reconfigurable DRA consists of a Rogers substrate with a relative permittivity of 3 and a size of 20 mm × 30 mm × 0.75 mm, and a dielectric resonator (DR) with a thickness of 9 mm and an overall size of 18 mm × 18 mm. Moreover, the antenna has been fabricated and tested, and the measured results show good agreement with the simulated ones. The measured and simulated results demonstrate the reconfigurability of the proposed DRA, which provides dual-mode operation and three different resonance frequencies as a result of switching the position of the arms.
Due to dynamic wireless network conditions and heterogeneous mobile web content complexities, web-based content services in mobile network environments always suffer from long loading times. The new HTTP/2.0 protocol adopts only a single TCP connection, but recent research reveals that in real mobile environments, web downloading over a single connection experiences long idle times and low bandwidth utilization, in particular with dynamic network conditions and varying web page characteristics. In this paper, by leveraging the Mobile Edge Computing (MEC) technique, we present the Mobile Edge Hint (MEH) framework to enhance mobile web downloading performance. Specifically, the mobile edge collects and caches the meta-data of frequently visited web pages and keeps monitoring the network conditions. Upon receiving requests for these popular web pages, the MEC server hints back to the HTTP/2.0 clients the optimized number of TCP connections that should be established for downloading the content. From test results on a real LTE testbed equipped with MEH, we observed up to 34.5% time reduction, and in the median case the improvement is 20.5% compared to the plain over-the-top (OTT) HTTP/2.0 protocol.
Amplify-and-forward (AF) is one of the most popular and simple approaches for transmitting information over a cooperative multi-input multi-output (MIMO) relay channel. In cooperative communication, relays are employed for improving the coverage or enhancing the spectral efficiency, especially of cell-edge users. However, in a multi-cell context, the use of relays will also lead to an increase in the level of interference experienced by cell-edge users of neighboring cells. In this paper, two novel precoding schemes are proposed for mitigating this adverse effect of cooperative communication. They are designed by taking into account the effect of interference coming from neighboring cells, i.e. other-cell interference (OCI), in order to maximize the sum rate of cell-edge users. Our novel OCI-aware precoding schemes are compared against non-OCI-aware techniques, and results show the large performance gain in terms of sum rate that our schemes can achieve, especially for large numbers of users and/or antennas in the multi-cell system.
Smart meters (SM) with wireless capabilities are one of the most meaningful applications of the Internet of Things. Standards like Zigbee have found a niche in wirelessly transmitting data on energy usage to the user and the supplier via these meters and communication hubs. There are still certain difficulties, notably in delivering wireless connectivity to meters situated in difficult-to-reach locations such as basements or deep indoors. To solve this issue, this paper investigates the usage of mesh networks at 868 MHz, particularly to increase coverage, and proposes an additional mounted antenna to significantly increase outdoor coverage while providing the necessary coverage extension for hard-to-reach indoor locations. Extensive measurements were made in Newbury in both suburban and open environments for validation and for the delivery of a simple statistical model for the 868 MHz band in United Kingdom conurbations. The results presented in this paper estimate that mesh networks at 868 MHz can accommodate large areas comprising several SMs with the proposed coverage extension method. With our findings and proposed methods on mesh connectivity, only 1% of UK premises will require mesh radios to achieve the desired coverage.
Numerous low-complexity iterative algorithms have been proposed to offer the performance of linear multiple-input multiple-output (MIMO) detectors while bypassing the channel matrix inverse. These algorithms exhibit fast convergence in well-conditioned MIMO channels. However, in the emerging MIMO paradigm utilizing extremely large aperture arrays (ELAA), the wireless channel may become ill-conditioned because of spatial non-stationarity, which results in a considerably slower convergence rate for these algorithms. In this paper, we propose a novel ELAA-MIMO detection scheme that leverages user-wise singular value decomposition (UW-SVD) to accelerate the convergence of these iterative algorithms. By applying UW-SVD, the MIMO signal model can be converted into an equivalent form featuring a better-conditioned transfer function. Then, existing iterative algorithms can be utilized to recover the transmitted signal from the converted signal model with accelerated convergence towards zero-forcing performance. Our simulation results indicate that the proposed UW-SVD scheme can significantly accelerate the convergence of iterative algorithms in spatially non-stationary ELAA channels. Moreover, the computational complexity of the UW-SVD is minor in comparison with the inherent complexity of the iterative algorithms.
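The following sketch conveys why better conditioning helps, though it is not the UW-SVD algorithm itself: it runs a fixed-step Richardson iteration towards the ZF solution on an ill-conditioned Gram matrix, with and without a simple symmetric Jacobi preconditioner. Improving the conditioning of the transfer function, which UW-SVD does user-wise via per-user SVDs, is what cuts the iteration count.

    import numpy as np

    # Sketch (not the UW-SVD algorithm itself): Richardson iteration towards the
    # ZF solution; a symmetric Jacobi preconditioner improves the Gram matrix
    # conditioning and sharply reduces the iteration count.
    rng = np.random.default_rng(6)
    m, k = 128, 16
    H = rng.standard_normal((m, k)) * np.linspace(1.0, 0.05, k)  # ill-conditioned columns
    y = H @ rng.choice([-1.0, 1.0], k)
    G, b = H.T @ H, H.T @ y
    x_zf = np.linalg.solve(G, b)

    def richardson_iters(A, rhs, x_ref):
        lam = np.linalg.eigvalsh(A)
        step = 2.0 / (lam[0] + lam[-1])               # optimal fixed step size
        z = np.zeros_like(rhs)
        for it in range(1, 20_001):
            z = z + step * (rhs - A @ z)
            if np.linalg.norm(z - x_ref) < 1e-6 * np.linalg.norm(x_ref):
                break
        return it

    d = 1.0 / np.sqrt(np.diag(G))                     # Jacobi preconditioner D^{-1/2}
    Gp = d[:, None] * G * d[None, :]
    print("plain Richardson:     ", richardson_iters(G, b, x_zf), "iterations")
    print("Jacobi-preconditioned:", richardson_iters(Gp, d * b, x_zf / d), "iterations")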
A low-complexity interference cancellation scheme using a modified suboptimum search algorithm, in conjunction with a primary stage of reduced-rank linear (RRL) multiuser detection for the mobile uplink, is proposed. The initial stage is improved through mathematical analysis using the Gershgorin theorem from linear algebra, and the resulting RRLG detector is introduced. The complexity of the initial stage is evaluated and compared to the recently reported low-complexity Fourier interference cancellation method. Depending on the value of the spreading factor of the active users in the system, RRLG outperforms the Fourier algorithm in terms of complexity. The structure of the RRLG method and the suboptimum search algorithm are well matched and work together without incurring a high level of complexity. Considering the power profile of the users in the suboptimum search algorithm led to even lower complexity while keeping performance almost unchanged. The performance of the structure is obtained via simulations and compared to the partial parallel interference cancellation (PPIC) method. A good performance improvement is achieved in the low-SNR region, which is difficult to attain with conventional multiuser detectors and is important since actual systems are likely to operate in that region. All the techniques and modifications introduced in this work treat complexity as a key concern, which makes them suitable for industrial implementation. Another important feature is that the techniques operate on canonical matrix formulations of the system, so they can be applied to MC-CDMA and MIMO systems as well.
This paper investigates a secure wireless-powered integrated service system with full-duplex self-energy recycling. Specifically, an energy-constrained information transmitter (IT), wirelessly powered by a power station (PS), broadcasts two types of services to all users: a multicast service intended for all users, and a confidential unicast service subscribed to by only one user while being protected from any unsubscribed users and an eavesdropper. Our goal is to jointly design the optimal input covariance matrices for the energy beamforming, the multicast service, the confidential unicast service, and the artificial noises from the PS and the IT, such that the secrecy-multicast rate region (SMRR) is maximized subject to the transmit power constraints. Due to the non-convexity of the SMRR maximization (SMRRM) problem, we employ a semidefinite programming-based two-level approach to solve it and find all of its Pareto optimal points. In addition, we extend the SMRRM problem to the imperfect channel state information case, where a worst-case SMRRM formulation is investigated. Moreover, we exploit the optimized transmission strategies for the confidential service and energy transfer by analyzing their rank-one profiles. Finally, numerical results are provided to validate our proposed schemes.
This letter presents a novel opportunistic cooperative positioning approach for orthogonal frequency-division multiple access (OFDMA) systems. The basic idea is to allow idle mobile terminals (MTs) to opportunistically estimate the arrival timing of the training sequences sent for uplink synchronization by active MTs. The major advantage of the proposed approach over the state of the art is that the positioning-related measurements among MTs are performed without additional training overhead. Moreover, the Cramér-Rao lower bound (CRLB) is utilized to derive the positioning accuracy limit of the proposed approach, and the numerical results show that the proposed approach can improve the accuracy of non-cooperative approaches given a priori stochastic knowledge of the clock bias among idle MTs.
Wireless mesh networks with a delay-throughput tradeoff are considered a practical wireless network solution for providing community broadband Internet access services. An important aspect of such network design lies in the capability to simultaneously support multiple independent mesh connections at the intermediate mobile stations. The intermediate mobile stations act as routers by combining network packets with forwarding, a scenario usually known as multiple coding unicasts. The problem of efficient network design for such applications, based on multipath network coding with delay control on packet servicing, is considered. The simulated solution involves a joint consideration of wireless media access control (MAC) and network-layer multipath selection. Rather than considering general wireless mesh networks, the focus here is on a relatively small-scale mesh network with multiple sources and multiple sinks suitable for multihop wireless backhaul applications within the WiMAX standard.
The fifth generation (5G) wireless communication networks are being deployed worldwide from 2020 and more capabilities are in the process of being standardized, such as mass connectivity, ultra-reliability, and guaranteed low latency. However, 5G will not meet all requirements of the future in 2030 and beyond, and sixth generation (6G) wireless communication networks are expected to provide global coverage, enhanced spectral/energy/cost efficiency, better intelligence level and security, etc. To meet these requirements, 6G networks will rely on new enabling technologies, i.e., air interface and transmission technologies and novel network architecture, such as waveform design, multiple access, channel coding schemes, multi-antenna technologies, network slicing, cell-free architecture, and cloud/fog/edge computing. Our vision on 6G is that it will have four new paradigm shifts. First, to satisfy the requirement of global coverage, 6G will not be limited to terrestrial communication networks, which will need to be complemented with non-terrestrial networks such as satellite and unmanned aerial vehicle (UAV) communication networks, thus achieving a space-air-ground-sea integrated communication network. Second, all spectra will be fully explored to further increase data rates and connection density, including the sub-6 GHz, millimeter wave (mmWave), terahertz (THz), and optical frequency bands. Third, facing the big datasets generated by the use of extremely heterogeneous networks, diverse communication scenarios, large numbers of antennas, wide bandwidths, and new service requirements, 6G networks will enable a new range of smart applications with the aid of artificial intelligence (AI) and big data technologies. Fourth, network security will have to be strengthened when developing 6G networks. This article provides a comprehensive survey of recent advances and future trends in these four aspects. Clearly, 6G with additional technical requirements beyond those of 5G will enable faster and further communications to the extent that the boundary between physical and cyber worlds disappears.
In mobile ad hoc networks (MANETs), accurate throughput-constrained Quality of Service (QoS) routing and admission control have proven difficult to achieve, mainly due to node mobility and contention for channel access. In this paper we propose a solution to those problems, utilising the Dynamic Source Routing (DSR) protocol for basic routing. Our design considers the throughput requirements of data sessions and how these are affected by protocol overheads and contention between nodes. Furthermore, in contrast to previous work, the time wasted at the MAC layer by collisions and channel access delays is also considered. Simulation results show that in a stationary scenario with high offered load, and at the cost of increased routing overhead, our protocol more than doubles the session completion rate (SCR) and reduces average packet delay by a factor of seven compared to classic DSR. Even in a highly mobile scenario, it can double the SCR and cut average packet delay to a third.
Mobile Ad Hoc Networks: Challenges and Solutions for Providing Quality of Service Assurances, Lajos Hanzo (II.) and Rahim Tafazolli, University of Surrey, ...
This paper addresses the problem of joint backhaul and access link optimization in dense small cell networks, with special focus on the time division duplexing (TDD) mode of operation in backhaul and access link transmission. We propose a framework for joint radio resource management in which we systematically decompose the problem into backhaul and access links. To simplify the analysis, the procedure is tackled in two stages. In the first stage, the joint optimization problem is formulated for a point-to-point scenario where each small cell is associated with a single user. It is shown that the optimization can be decomposed into separate power and subchannel allocation in both backhaul and access links, where a set of rate-balancing parameters in conjunction with the duration of transmission governs the coupling across both links. Moreover, a novel algorithm is proposed based on grouping the cells to achieve rate-balancing across different small cells. In the second stage, the problem is generalized to multi-access small cells, where each small cell is associated with multiple users. The optimization is similarly decomposed into separate sub-channel and power allocation by employing auxiliary slicing variables. It is shown that algorithms similar to those of the first stage are applicable, with slight changes, with the aid of the slicing variables. Additionally, for the special case of line-of-sight backhaul links, simplified expressions for sub-channel and power allocation are presented. The developed concepts are evaluated by extensive simulations in different case studies, from full orthogonalization to dynamic clustering and full reuse in the downlink, and it is shown that the proposed framework provides significant improvement over the benchmark cases.
Volunteer computing is an Internet-based distributed computing paradigm in which volunteers share their spare resources to handle large-scale tasks. However, computing devices in a Volunteer Computing System (VCS) are highly dynamic and heterogeneous in terms of their processing power, monetary cost, and data transfer latency. To ensure both high Quality of Service (QoS) and low cost for different requests, all of the available computing resources must be used efficiently. Task scheduling is an NP-hard problem that is considered one of the main critical challenges in a heterogeneous VCS. In this article, we therefore design two task scheduling algorithms for VCSs, named Min-CCV and Min-V. The main goal of the proposed algorithms is to jointly minimize the computation, communication, and delay-violation cost for Internet of Things (IoT) requests. Our extensive simulation results show that the proposed algorithms are able to allocate tasks to volunteer fog/cloud resources more efficiently than the state-of-the-art. Specifically, our algorithms improve the task deadline satisfaction rate to around 99.5% and decrease the total cost by between 15% and 53% in comparison with the genetic-based algorithm.
Seamless and ubiquitous coverage are key factors for future cellular networks. Although capacity and data rates are the main topics under discussion when envisioning the Fifth Generation (5G) and beyond of mobile communications, network coverage remains one of the major issues, since coverage quality highly impacts system performance and end-user experience. The increasing number of base stations and user terminals is anticipated to negatively impact network coverage due to increasing interference. Furthermore, the "ubiquitous coverage" use cases, including rural and isolated areas, present a significant challenge for mobile communication technologies. This survey presents an overview of the concept of coverage, highlighting the ways it is studied and measured and how it impacts network performance. Additionally, an overview of the most important key performance indicators influenced by coverage, which may affect the envisioned use cases with respect to throughput, latency, and massive connectivity, is provided. Moreover, the main existing developments and deployments expected to augment network coverage, in order to meet the requirements of the emerging systems, are presented, along with implementation challenges.
This paper investigates a multiple intelligent reflecting surfaces (IRSs) aided integrated terrestrial-satellite network (ITSN), where the IRSs are deployed to cooperatively assist users with low channel gain in the co-existing transmission system. In such a network, coordinated beamforming and a frame-based transmission scheme are considered for the terrestrial network and the satellite network, respectively. We aim at maximizing the weighted sum rate (WSR) of all users by jointly designing the frame-based beamforming at the small base stations (SBSs) and the phase shifts at the IRSs, subject to the individual maximum SBS transmit power constraints and the IRSs' reflection constraints. This non-convex problem is first decomposed via the fractional programming (FP) technique applied to the objective function; the transmit beamforming vectors and the reflective phase-shift matrix are then optimized alternately. A block coordinate descent (BCD) method is proposed to obtain a stationary solution. Simulation results verify the effectiveness of the proposed algorithm compared with different benchmark schemes.
The effect of the vehicle body's proximity on the radiation pattern when a radar antenna is mounted on an autonomous car is analysed. Two directional radiation patterns with different specifications are placed at different locations of a realistic car body model. The simulation is performed using the ray-tracing method at 77 GHz, the standard frequency for self-driving applications. It is shown that to obtain a robust radar sensor, it is preferable for the antenna radiation pattern to have relatively higher gain and lower side-lobe level (SLL) rather than narrower half-power beamwidth (HPBW) and higher front-to-back (F/B) ratio. Both academia and industry can benefit from this study.
This letter proposes an innovative energy-efficient Radio Access Network (RAN) disaggregation and virtualization method for Open RAN (O-RAN) that effectively addresses the challenges posed by dynamic traffic conditions. The energy consumption is first formulated as a multi-objective optimization problem and then solved by integrating the Advantage Actor-Critic (A2C) algorithm with a sequence-to-sequence model, owing to the sequential nature of RAN disaggregation and its long-term dependencies. According to the results, our proposed solution for dynamic Virtual Network Function (VNF) splitting outperforms approaches that do not involve VNF splitting, significantly reducing energy consumption: it achieves savings of up to 56% and 63% for business and residential areas, respectively, under dynamic traffic conditions.
Recent telecommunication paradigms, such as big data, the Internet of Things (IoT), ubiquitous edge computing (UEC), and machine learning, are confronted with a tremendous number of complex applications that have different priorities and resource demands. These applications usually consist of a set of virtual machines (VMs) with some predefined traffic load between them. The efficiency of a cloud data center (CDC), as a prominent component in UEC, significantly depends on the efficiency of the VM placement algorithm applied. However, VM placement is an NP-hard problem, and thus there is practically no optimal solution for this problem. Motivated by this, we propose a priority-, power- and traffic-aware approach for efficiently solving the VM placement problem in a CDC. Our approach aims to jointly minimize power consumption, network consumption and resource wastage in a multi-dimensional and heterogeneous CDC. To evaluate the performance of the proposed method, we compared it to the state-of-the-art on a fat-tree topology under various experiments. Results demonstrate that the proposed method is capable of reducing the total network consumption by up to 29%, the power consumption by up to 18%, and the resource wastage by up to 68%, compared to the second-best results.
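A greedy flavour of such joint placement can be sketched briefly. The heuristic below is our own toy construction (two hosts, arbitrary weights, and a simplistic wastage measure, none of which come from the paper): each VM is placed on the feasible host minimising a weighted sum of extra power, cross-host traffic, and residual-resource imbalance.

    # Toy greedy VM placement (illustrative, not the paper's algorithm).
    vms = [{"cpu": 4, "mem": 8}, {"cpu": 2, "mem": 4}, {"cpu": 8, "mem": 8}]
    traffic = {(0, 1): 5.0, (1, 2): 1.0}             # traffic load between VM pairs
    hosts = [{"cpu": 16, "mem": 32, "used": []} for _ in range(2)]   # two hosts for brevity

    def cost(h, i, vm):
        power = 0.0 if hosts[h]["used"] else 1.0     # cost of powering on an idle host
        # Traffic of VM i towards VMs already placed on the other host.
        cross = sum(w for (a, b), w in traffic.items()
                    if (a == i) != (b == i) and any(v in hosts[1 - h]["used"] for v in (a, b)))
        # Imbalance between leftover CPU and memory as a crude wastage proxy.
        waste = abs((hosts[h]["cpu"] - vm["cpu"]) - (hosts[h]["mem"] - vm["mem"]) / 2)
        return 1.0 * power + 0.5 * cross + 0.1 * waste

    for i, vm in enumerate(vms):
        feasible = [h for h in range(len(hosts))
                    if hosts[h]["cpu"] >= vm["cpu"] and hosts[h]["mem"] >= vm["mem"]]
        best = min(feasible, key=lambda h: cost(h, i, vm))
        hosts[best]["cpu"] -= vm["cpu"]; hosts[best]["mem"] -= vm["mem"]
        hosts[best]["used"].append(i)

    print([h["used"] for h in hosts])   # chatty VMs end up consolidated together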
While many studies have concentrated on theoretical analysis of relay-assisted compress-and-forward (CF) systems, little effort has yet been devoted to the construction and evaluation of a practical system. In this paper, a practical CF system incorporating an error-resilient multilevel Slepian-Wolf decoder is introduced, and a novel iterative processing structure is proposed which allows information exchange between the Slepian-Wolf decoder and the forward error correction decoder of the main source message. In addition, a new quantization scheme is incorporated to avoid the complexity of reconstructing the relay signal at the final decoder of the destination. The results demonstrate that the iterative structure not only reduces the decoding loss of the Slepian-Wolf decoder, but also improves the decoding performance of the main message from the source.
Exploiting path diversity to enhance communication reliability is a key desired property in the Internet. While the existing routing architecture is reluctant to adopt changes, overlay routing has been proposed to circumvent the constraints of native routing by employing intermediary relays. However, selfish inter-domain relay placement may violate local routing policies at intermediary relays and thus affect their economic costs and performance. With the recent advance of the concept of network virtualization, it is envisioned that virtual networks should be provisioned in cooperation with infrastructure providers in a holistic view, without compromising their profits. In this paper, the problem of policy-aware virtual relay placement is first studied to investigate the feasibility of provisioning policy-compliant multipath routing via virtual relays for inter-domain communication reliability. By evaluation on a real domain-level Internet topology, it is demonstrated that policy-compliant virtual relaying can achieve a protection gain against single link failures similar to that of its selfish counterpart. It is also shown that the presented heuristic placement strategies perform well, approaching the optimal solution.
This paper studies adaptive power allocation among sub-carriers in MC-CDMA. Due to the intrinsic nature of MC-CDMA, carrier-based power allocation schemes cause Multiple Access Interference (MAI) enhancement and hence fail at higher system loads. We propose a Band Based Dynamic Link Adaptation (BBDLA) scheme that preserves orthogonality among users by spreading each user's signal only over a band of N adjacent sub-carriers (N < N_sc) lying within the coherence bandwidth (B_c) of the channel. This allows band-based power allocation without causing any MAI. However, with only N orthogonal users supported on a particular band, BBDLA is essentially a hybrid of FDMA and MC-CDMA, where bands and transmit powers are optimally assigned to users by the base station in accordance with their channel state. Optimal band allocation for BBDLA is found to be computationally intractable, so a sub-optimal heuristic approach is proposed with equal power distribution among all assigned bands for each user. The effect of B_c on the choice of N is studied, and BBDLA with suitably chosen N is shown to outperform other published carrier-based power allocation schemes while maintaining almost single-user BER performance up to 62% of full system loading.
Energy is a critical resource in the design of wireless networks since wireless devices are usually powered by batteries. Without new approaches for energy saving, 4G mobile users will relentlessly be searching for power outlets rather than network access, becoming once again bound to a single location. To avoid this so-called 4G "energy trap" and to help wireless devices become more environmentally friendly, there is a clear need for disruptive strategies that address all aspects of power efficiency, from the user devices through to the core infrastructure of the network, and how these devices and equipment interact with each other. The ICT-C2POWER project is the vehicle that will address these issues through cognitive techniques and cooperation. The C2POWER case study is to research, develop and demonstrate energy-saving technologies for multi-standard wireless mobile devices, exploiting the combination of cognitive radio and cooperative strategies, while still enabling the required performance in terms of data rate and QoS to support active applications. Copyright © 2010 The authors.
In the new fascinating era of 5G, new communication requirements pose diverse challenges to existing networks, both in terms of technologies and business models. Among the essential categories of innovative 5G mobile network services is enhanced Mobile Broadband (eMBB), which mainly aims to fulfil users' demand for an increasingly digital lifestyle and focuses on services with high bandwidth requirements. In this paper we discuss eMBB as the first commercial use of 5G technology. We then focus on the context of the 5G-DRIVE research project between the EU and China, and identify essential features of the respective eMBB trials, which constitute one of the project's core activities. In addition, we discuss proposed scenarios and KPIs for assessing the scheduled experimental work, based on similar findings from other research and/or standardization activities.
This chapter has presented methodologies for designing, implementing, and evaluating a wide-bandwidth mm-wave fully connected hybrid beamforming metrological testbed with a large antenna array. The focus has been on testbed design, calibration procedures, experimental evaluations, and the critical factors to consider for practical implementation. If RF harmonic and spurious signal issues are avoided, the testbed could be set up to work between 25 and 30 GHz with 2-GHz instantaneous bandwidth. Each of the phase shifters and attenuators in the mm-wave fully connected hybrid beamformer has six separate DIO control bits. Apart from describing the calibration procedures for the phases and amplitudes of the fully connected hybrid beamformer system, the linearity, phase, and attenuation performance of the beamformer system between 25.5 and 26.5 GHz has been evaluated, as well as the beamforming and link performance of a 128-element planar phased array at 26 GHz, where the measured radiation patterns with and without amplitude tapering are compared.
A novel high-isolation, monostatic, circularly polarized (CP) simultaneous transmit and receive (STAR) anisotropic dielectric resonator antenna (DRA) is presented. The proposed antenna is composed of two identical but orthogonally positioned annular sectoral anisotropic dielectric resonators. Each CP resonator consists of alternately stacked dielectric layers with relative permittivities of 2 and 15 and is excited by a coaxial probe from the two opposite ends to obtain left- and right-hand CP. Proper element spacing and a square absorber placed between the resonators maximize Tx/Rx isolation. Such a structure provides an in-band full-duplex (IBFD) CP-DRA system. Measurement results exhibit Tx/Rx isolation better than 50 dB over the desired operating bandwidth (5.87 to 5.97 GHz), with peak gains of 5.49 and 5.08 dBic for Ports 1 and 2, respectively.
In this paper, selection criteria for Forward Error Correction (FEC) codes, in particular convolutional codes, are evaluated for a novel air interface scheme called Low Density Signature Orthogonal Frequency Division Multiple Access (LDS-OFDM). To this end, the mutual information transfer characteristics of the turbo Multiuser Detector (MUD) are investigated using Extrinsic Information Transfer (EXIT) charts. LDS-OFDM uses a low-density signature structure for spreading the data symbols in the frequency domain. This technique benefits from frequency diversity in addition to its ability to support more parallel data streams than the number of subcarriers (overloaded condition). The turbo MUD couples the data-symbol detector of the LDS scheme with the users' FEC decoders through the message passing principle.
By exploiting massive spectrum resources, millimeter wave (mmWave) communications significantly improve the offloading capability of future mobile edge computing (MEC) systems, which is, however, constrained by the blockage problem in dynamic environments. In this paper, we study the resource allocation problem for the conceived mmWave MEC system with a dynamic offloading process, in which the UEs are mobile and have imperfect knowledge of incoming offloading tasks. By introducing a multi-objective Markov decision process (MOMDP), the resource allocation problem is modeled as the simultaneous minimization of delay and energy consumption, jointly considering multi-beam assignment (mBA) and beamwidth and power optimization (BPO). To tackle this problem, we propose a matching-aided-learning (MaL) resource allocation scheme, with the aid of a learnable-weight-based attention mechanism (LW-AM) for adapting to the dynamic offloading process. In particular, our MaL scheme comprises a many-to-one matching (M2O-M) based mBA algorithm and a deep deterministic policy gradient (DDPG) based BPO algorithm, which are executed iteratively and converge within a relatively low number of iterations. The simulation results show the practical value of the proposed MaL scheme, which approaches the performance of a benchmark scheme with perfect knowledge of the offloading tasks.
In this paper, a mathematical model is proposed to govern the phase distribution on a reconfigurable intelligent surface (RIS) for anomalously reflecting the beam towards the directions of interest. To this end, two operational modes are defined with respect to the reflected pattern. In the first mode, the RIS is configured to form multiple reflected beams toward the directions of interest, each capable of being controlled independently. In the second mode, the RIS provides a wide reflected beam. For each mode, a cost function is derived, and a genetic algorithm (GA) is then employed as the optimization method to enhance the reflected-pattern characteristics. To validate the practicality of the method, the proposed model is applied to a fabricated RIS to assess its performance in a real-world outdoor scenario. In the first mode, an asymmetric dual-beam reflected pattern is obtained and tested with tilt angles of θ0=60° and θ1=135°. Furthermore, a wide reflected beam is generated in the second mode with a half-power beamwidth of θHPBW=30° and a tilt angle of θ0=75°. In both modes, the measured data are well aligned with the simulated results.
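A minimal genetic algorithm of the kind described can be sketched as below; the fitness here is the array factor of a 1-bit, 32-element RIS line toward a single target angle, and all parameters (population size, mutation rate, element spacing) are illustrative assumptions rather than the paper's settings.

import numpy as np

# GA sketch: evolve 1-bit RIS phase patterns to maximize the array factor
# toward a target reflection angle. Illustrative parameters throughout.
N, POP, GENS = 32, 40, 200
d = 0.5                                   # element spacing in wavelengths
theta0 = np.deg2rad(60.0)                 # desired reflection angle

def array_factor(bits, theta):
    phases = np.pi * bits                 # 1-bit cells: phase 0 or pi
    steer = 2 * np.pi * d * np.arange(N) * np.cos(theta)
    return np.abs(np.sum(np.exp(1j * (phases + steer))))

rng = np.random.default_rng(1)
pop = rng.integers(0, 2, size=(POP, N))
for _ in range(GENS):
    fit = np.array([array_factor(ind, theta0) for ind in pop])
    pop = pop[np.argsort(fit)[::-1]]      # sort by fitness, best first
    elite = pop[:POP // 2]                # keep the better half
    kids = []
    for _ in range(POP - len(elite)):     # single-point crossover + mutation
        a, b = elite[rng.integers(len(elite), size=2)]
        cut = rng.integers(1, N)
        child = np.concatenate([a[:cut], b[cut:]])
        child[rng.random(N) < 0.02] ^= 1
        kids.append(child)
    pop = np.vstack([elite, kids])

fit = np.array([array_factor(ind, theta0) for ind in pop])
print("best array factor:", fit.max(), "of an ideal", N)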
The sheer volume of IIoT malware is one of the most serious security threats in today's interconnected world, with new types of advanced persistent threats and advanced forms of obfuscation. This paper presents a robust Federated Learning-based architecture called Fed-IIoT for detecting Android malware applications in IIoT. Fed-IIoT consists of two parts: i) the participant side, where the data are triggered by two dynamic poisoning attacks based on a generative adversarial network (GAN) and a Federated Generative Adversarial Network (FedGAN); and ii) the server side, which monitors the global model and shapes a robust collaborative training model by avoiding anomalies in aggregation via a GAN network (A3GAN) and by adjusting two GAN-based countermeasure algorithms. One of the main advantages of Fed-IIoT is that devices can safely participate in the IIoT and efficiently communicate with each other, with no privacy issues. We evaluate our solution through experiments on various features using three IoT datasets. The results confirm the high accuracy rates of our attack and defence algorithms and show that the A3GAN defensive approach preserves the robustness of data privacy for Android mobile users, achieving about 8% higher accuracy than existing state-of-the-art solutions.
Cloud-envisioned Cyber-Physical Systems (CCPS) is a practical technology that relies on the interaction among cyber elements, such as mobile users, to transfer data in cloud computing. In CCPS, cloud storage applies data deduplication techniques to save storage and bandwidth for real-time services. In this infrastructure, data deduplication eliminates duplicate data to increase the performance of the CCPS application; however, it incurs security threats and privacy risks. Several studies have addressed this area, yet they suffer from a lack of security, high performance, and applicability. Motivated by this, we propose a message Lock Encryption with neVer-decrypt homomorphic EncRyption (LEVER) protocol between the uploading CCPS user and cloud storage to reconcile encryption and data deduplication. Interestingly, LEVER is the first brute-force-resilient encrypted deduplication with only cryptographic two-party interactions.
Database-aided user association, where users are associated with data base stations (BSs) based on a database which stores their geographical location with signal-to-noise-ratio tagging, will play a vital role in futuristic cellular architectures with separated control and data planes. However, such an approach can lead to inaccurate user-data BS association as a result of inaccuracies in the positioning technique, and thus to sub-optimal performance. In this paper, we investigate the impact of the database-aided user association approach on the average spectral efficiency (ASE). We model the data-plane base stations using their fluid-model equivalent and derive the ASE for a channel model with pathloss only and for one in which shadowing is incorporated. Our results show that the ASE in database-aided networks degrades as the accuracy of the user positioning technique decreases. Hence, system specifications for database-aided networks must take into account inaccuracies in positioning techniques.
This paper investigates the impact of deploying Mobile Femtocells (MFemtocells) in LTE networks. We investigate the access delay, capacity, and feedback signalling overhead required for the implementation of opportunistic scheduling in LTE cellular networks. We particularly study the impact of deploying MFemtocell stations on the signalling overhead of opportunistic scheduling. Our system-level simulation results indicate that deploying MFemtocells can improve spectral efficiency by reducing the amount of feedback signalling. © 2011 IEEE.
This paper proposes an analytical model for the throughput of the Enhanced Distributed Channel Access (EDCA) mechanism in the IEEE 802.11p MAC sub-layer. EDCA features such as different Contention Windows (CW) and Arbitration Interframe Spaces (AIFS) for each Access Category (AC), as well as internal collisions, are taken into account. The analytical model is suitable for both the basic access mode and the Request-To-Send/Clear-To-Send (RTS/CTS) access mode. The proposed analytical model is validated against simulation results to demonstrate its accuracy.
The multiple access (MA) technique is a major building block of cellular systems. Through the MA technique, users can simultaneously access the physical medium and share the finite resources of the system, such as spectrum, time and power. Due to the rapid growth in demand for data applications in mobile communications, there has been extensive research to improve the efficiency of cellular systems, and a significant part of this effort focuses on developing and optimizing MA techniques. As a result, many MA techniques have been proposed over the years, and some have already been adopted in cellular system standards, such as Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), Orthogonal Frequency Division Multiple Access (OFDMA) and Code Division Multiple Access (CDMA). Many factors determine the efficiency of an MA technique, such as spectral efficiency, low-complexity implementation and low envelope fluctuations. Broadly, MA techniques can be categorized into orthogonal and non-orthogonal MA. In orthogonal MA techniques, the signal dimension is partitioned and allocated exclusively to the users, and there is no Multiple Access Interference (MAI). In non-orthogonal MA techniques, all users share the entire signal dimension, and MAI arises; thus, a more complicated receiver is required to deal with the MAI compared to orthogonal transmission. Non-orthogonal MA is more practical in the uplink, because the base station can afford the Multiuser Detection (MUD) complexity; for the downlink, orthogonal MA is more suitable due to the limited processing power of the user equipment. Many non-orthogonal MA techniques have been overlooked due to their implementation complexity. Evidently, recent advancements in signal processing have opened up new possibilities for developing more sophisticated and efficient MA techniques, and more advanced MA techniques have been proposed lately. However, in order to adopt these new MA techniques in mobile communication systems, many challenges and opportunities need to be studied.
Information-centric networking (ICN) is an emerging networking paradigm that places content identifiers, rather than host identifiers, at the core of the mechanisms and protocols used to deliver content to end-users. Such a paradigm allows routers enhanced with content-awareness to play a direct role in the routing and resolution of content requests from users, without any knowledge of the specific locations of hosted content. However, to facilitate good network traffic engineering and satisfactory user QoS, content routers need to exchange advanced network knowledge to assist them with their resolution decisions. In order to maintain the location-independence tenet of ICNs, such knowledge (known as context information) needs to be independent of the locations of servers. To this end, we propose CAINE (Context-Aware Information-centric Network Ecosystem), which enables context-based operations to be intrinsically supported by the underlying ICN routing and resolution functions. Our approach maintains the location-independence philosophy of ICNs by associating context information directly with content, rather than with physical entities such as servers and network elements in the content ecosystem, while ensuring scalability. Through simulation, we show that, based on such location-independent context information, CAINE is able to facilitate traffic engineering in the network without imposing a significant control signalling burden.
Spectrum sensing is one of the key enabling techniques for advanced radio technologies such as cognitive radio and ALOHA. This paper presents a novel non-cooperative spectrum sensing approach that achieves a good trade-off between latency, reliability and computational complexity. Our main idea is to exploit the first-order cyclostationarity of the primary user's signal to reduce the noise-uncertainty problem inherent in the conventional energy detection approach. It is shown that the proposed approach is suitable for detecting the primary user's activity in the interweave paradigm of cognitive spectrum sharing, where the active primary user periodically sends a training sequence. Computer simulations are carried out for the typical IEEE 802.11g system. It is observed that the proposed approach outperforms both energy detection and the second-order cyclostationarity approach when the observation period exceeds 10 frames, corresponding to 0.56 ms. ©2010 IEEE.
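The flavour of such a detector can be illustrated with a short simulation: the statistic below folds the received signal at the known training-sequence period so that the periodic mean (a first-order cyclostationary feature) rises above the noise floor, without needing the noise variance that plain energy detection depends on. The waveform, period and SNR values are illustrative assumptions, not the paper's IEEE 802.11g setup.

import numpy as np

# Fold the signal at the known period: the periodic (first-order) component
# adds coherently across frames while the noise averages out.
def folded_mean_stat(y, period):
    frames = len(y) // period
    folded = y[:frames * period].reshape(frames, period).mean(axis=0)
    return np.mean(np.abs(folded) ** 2)

def energy_stat(y):
    return np.mean(np.abs(y) ** 2)    # needs the noise level to threshold

rng = np.random.default_rng(0)
period, frames = 80, 12
training = rng.choice([-1.0, 1.0], size=period)       # known pilot pattern
signal = np.tile(training, frames)
noise = (rng.standard_normal(signal.size)
         + 1j * rng.standard_normal(signal.size)) / np.sqrt(2)

for snr_db in (-10, -5, 0):
    amp = 10 ** (snr_db / 20)
    active, idle = amp * signal + noise, noise
    print(f"{snr_db:>4} dB: active {folded_mean_stat(active, period):.3f}"
          f" vs idle {folded_mean_stat(idle, period):.3f}")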
In this paper, we propose a rate-adaptive bit and power loading approach for OFDM-based relaying communications. The cooperative relay operates in the half-duplex amplify-and-forward mode, and the source and the relay have separate power constraints. Maximum-ratio combining is employed at the destination to maximize the received SNR. Assuming perfect channel knowledge at all nodes, the proposed approach maximizes the throughput (the number of bits/symbol) under the given power constraint and target link performance. Unlike the water-filling method, the proposed approach does not need an iterative loading process, and can still offer near-optimum performance. Computer simulations are used to test the proposed approach in various scenarios with respect to the relay location and the distributed power allocation. © 2008 IEEE.
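For contrast with the proposed non-iterative scheme, the classic iterative (greedy, Hughes-Hartogs-style) bit-loading baseline is sketched below; it repeatedly grants one extra bit to the subcarrier requiring the least incremental power. The SNR-gap model and the numbers are illustrative assumptions.

import numpy as np

# Greedy bit loading: with the gap approximation P(b) = gamma*(2**b - 1)/g,
# the incremental power to go from b to b+1 bits on a subcarrier with
# gain g is gamma * 2**b / g.
def greedy_bit_loading(gains, total_power, gamma=4.0, max_bits=10):
    bits = np.zeros(len(gains), dtype=int)
    used = 0.0
    inc = lambda i: gamma * 2.0 ** bits[i] / gains[i]
    while True:
        cand = [i for i in range(len(gains)) if bits[i] < max_bits]
        if not cand:
            break
        i = min(cand, key=inc)                   # cheapest next bit
        if used + inc(i) > total_power:
            break                                # budget exhausted
        used += inc(i)
        bits[i] += 1
    return bits, used

gains = np.array([1.0, 0.5, 0.2, 0.05])          # per-subcarrier SNR gains
print(greedy_bit_loading(gains, total_power=100.0))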
Energy efficiency (EE) is becoming an important performance indicator for ensuring both the economic and environmental sustainability of the next generation of communication networks. Equally, cooperative communication is an effective way of improving communication system performance. In this paper, we propose a near-optimal energy-efficient joint resource allocation algorithm for multi-hop multiple-input-multiple-output (MIMO) amplify-and-forward (AF) systems. We first show how to simplify the multivariate unconstrained EE-based problem, based on the fact that this problem has a unique optimal solution, and then solve it by means of a low-complexity algorithm. We compare our approach with classic optimization tools in terms of energy efficiency as well as complexity, and the results indicate the near-optimality and low complexity of our approach. As an application, we use our approach to compare the EE of multi-hop MIMO-AF systems with MIMO systems, and our results show that the former outperforms the latter mainly when the direct link quality is poor.
This article investigates the unexplored potential of vortex or orbital angular momentum (OAM) beams using low-cost, high-gain dielectric reflectarray antennas (RAs) in the terahertz (THz) band. It proposes a paradigm to enable 3D beam-steering or OAM multiplexing with a single structure via tilted OAM beams. That, in turn, requires reaching the maximal attainable angles, either to send multiple beams to different receivers or to focus OAM beams of different modes in a desired direction. For this reason, two concepts are addressed in this work: (i) generating a single 3D-steered OAM beam and (ii) producing multiple off-centered OAM beams with different modes. A volumetric unit cell is adopted and accurately tuned across the aperture to steer the generated beams towards the desired direction(s). The proposed paradigm can be utilized to produce RAs with beam-steering or OAM multiplexing capabilities as candidates for THz indoor communications.
This work introduces MultiSphere, a method to massively parallelize the tree search of large sphere decoders in a nearly-independent manner, without compromising their maximum-likelihood performance, and while keeping the overall processing complexity at the level of highly optimized sequential sphere decoders. MultiSphere employs a novel sphere decoder tree partitioning which can adjust to the transmission channel with a small latency overhead. It also utilizes a new method to distribute nodes to parallel sphere decoders, and a new tree traversal and enumeration strategy which minimizes redundant computations despite the nearly-independent parallel processing of the subtrees. For an 8 × 8 MIMO spatially multiplexed system with 16-QAM modulation and 32 processing elements, MultiSphere can achieve a latency reduction of more than an order of magnitude, approaching the processing latency of linear detection methods, while its overall complexity can be even smaller than that of well-known sequential sphere decoders. For 8 × 8 MIMO systems, MultiSphere's tree partitioning method can achieve the processing latency of other partitioning schemes using half the processing elements. In addition, it is shown that for a multi-carrier system with 64 subcarriers, performing sequential detection across subcarriers with MultiSphere using 8 processing elements to parallelize detection achieves a smaller processing latency than parallelizing the detection process by using a single processing element per subcarrier (64 in total).
As soon as 2020, network densification and spectrum extension will be the dominant themes for supporting enormous capacity and massive connectivity [1]. However, this approach may not guarantee wide-area coverage due to the poor propagation capabilities of high frequency bands. In addition, energy efficiency and signalling overhead will become critical considerations in ultra-dense deployment scenarios. This calls for a futuristic two-layer RAN architecture with dual connectivity, where the high frequency bands are used for data services, complemented by a coverage layer at conventional cellular bands [2]. This separation of control and data planes will enable a transition from always-on to always-available systems and could result in order-of-magnitude savings in energy and signalling overhead.
In this paper, we consider the radio resource allocation problem for the uplink OFDMA system. Existing algorithms have been derived under the assumption of Gaussian inputs, owing to the closed-form expression of their mutual information. For the sake of practicality, we consider a system with Finite Symbol Alphabet (FSA) inputs and solve the problem by capitalizing on the recently revealed relationship between mutual information and Minimum Mean-Square Error (MMSE). We first relax the problem to formulate it as a convex optimization problem, and then derive the optimal solution via decomposition methods. The optimal solution serves as an upper bound on the system performance. Due to the complexity of the optimal solution, a low-complexity suboptimal algorithm is proposed. Numerical results show that the presented suboptimal algorithm achieves performance very close to the optimal solution and outperforms existing suboptimal algorithms. Furthermore, using our proposed algorithm, significant power savings can be achieved in comparison with the case where Gaussian inputs are assumed.
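The Gaussian-input baseline that the paper moves beyond is the familiar water-filling allocation; a compact bisection implementation is sketched below for orientation (illustrative gains and budget, not the paper's FSA-aware algorithm).

import numpy as np

# Water-filling: p_k = max(0, mu - 1/g_k), with the water level mu found by
# bisection so that the total power meets the budget.
def water_filling(gains, budget, iters=60):
    lo, hi = 0.0, budget + 1.0 / gains.min()     # hi surely overfills
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / gains).sum() > budget:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, lo - 1.0 / gains)

gains = np.array([2.0, 1.0, 0.5, 0.1])
p = water_filling(gains, budget=4.0)
print(p, p.sum())                       # stronger subchannels get more power
print(np.log2(1.0 + gains * p))         # resulting per-subchannel rates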
The cumulative distribution function (CDF) of a non-central χ²-distributed random variable (RV) is often used when measuring the outage probability of communication systems. For ultra-reliable low-latency communication (URLLC), it is important but mathematically challenging to determine the outage threshold for an extremely small outage target. This motivates us to investigate lower bounds of the outage threshold, and it is found that the one derived from the Chernoff inequality (named Cher-LB) is the most effective lower bound. This finding is associated with three rigorously established properties of the Cher-LB with respect to the mean, variance, reliability requirement, and degrees of freedom of the non-central χ²-distributed RV. The Cher-LB is then employed to predict the beamforming gain in URLLC for both conventional multi-antenna (MIMO) systems under a first-order Markov time-varying channel and reconfigurable intelligent surface (RIS) systems. It is exhibited that, with the proposed Cher-LB, the pessimistic prediction of the beamforming gain is made sufficiently accurate for guaranteed reliability as well as transmit-energy efficiency.
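For orientation, the standard lower-tail Chernoff argument that such a bound builds on can be written out as follows (a generic reconstruction; the paper's exact bound and constants may differ). For X ∼ χ²_k(λ) with moment generating function M_X(t) = (1 − 2t)^{−k/2} exp(λt/(1 − 2t)), valid for t < 1/2, one has for any t > 0

\Pr\{X \le x\} \;\le\; e^{tx}\, M_X(-t) \;=\; (1+2t)^{-k/2} \exp\!\Big(tx - \frac{\lambda t}{1+2t}\Big).

Minimizing the right-hand side over t > 0, setting it equal to the outage target ε, and solving for x yields a computable threshold x_Cher(ε) satisfying Pr{X ≤ x_Cher} ≤ ε, i.e., a lower bound on the true ε-outage threshold x_ε = F_X^{-1}(ε).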
Network-enabled sensing and actuation devices are key enablers for connecting real-world objects to the cyber world. The Internet of Things (IoT) consists of the network-enabled devices and communication technologies that allow connectivity and integration of physical objects (Things) into the digital world (Internet). Enormous amounts of dynamic IoT data are collected from Internet-connected devices. IoT data usually consists of multi-variate streams that are heterogeneous, sporadic, multi-modal and spatio-temporal; it can be disseminated with different granularities and can have diverse structures, types and qualities. Dealing with the data deluge from heterogeneous IoT resources and services imposes new challenges on the indexing, discovery and ranking mechanisms needed to build applications that require on-line access and retrieval of ad-hoc IoT data. However, existing IoT data indexing and discovery approaches are complex or centralised, which hinders their scalability. The primary objective of this paper is to provide a holistic overview of the state of the art on indexing, discovery and ranking of IoT data, paving the way for researchers to design, develop, implement and evaluate techniques and approaches for on-line large-scale distributed IoT applications and services.
This paper describes a mechanism for forwarding secure state information associated with communication sessions between middleboxes belonging to different Radio Access Networks (RANs). The transfer of state information among RANs can support service integrity and continuity by maintaining a mobile user's multimedia sessions, which might otherwise be dropped, and can also minimize security vulnerabilities. The paper demonstrates how the context transfer protocol can be employed to forward certain security information from the old to the new middlebox, supporting multimedia session maintenance during mobility while at the same time notifying the previous middlebox to close unnecessary open ports for improved security. A number of test scenarios are used to demonstrate how middleboxes can intervene in multimedia sessions during mobility, and to show how context transfer can improve the performance of multimedia session re-establishment as well as enhance middlebox security. Copyright 2006 ACM.
Open Radio Access Networks (O-RANs) have revolutionized the telecom ecosystem by bringing intelligence into the disaggregated RAN and implementing functionalities as Virtual Network Functions (VNFs) through open interfaces. However, dynamic traffic conditions in real-life O-RAN environments may require VNF reconfigurations during run-time, which introduce additional overhead costs and traffic instability. To address this challenge, we propose a multi-objective optimization problem that simultaneously minimizes VNF computational costs and the overhead of periodic reconfigurations. Our solution uses constrained combinatorial optimization with deep reinforcement learning, where an agent minimizes a penalized cost function derived from the proposed optimization problem. The evaluation of our proposed solution demonstrates significant enhancements, achieving up to a 76% reduction in VNF reconfiguration overhead with only a slight increase, of up to 23%, in computational costs. In addition, compared to the most robust O-RAN system that does not require VNF reconfigurations, namely Centralized RAN (C-RAN), our solution offers up to 76% savings in bandwidth while showing up to 27% overprovisioning of CPU.
The requirement for low operating and deployment costs of cellular networks motivates the need for self-organisation, and self-organising networks are fast becoming a necessity for reducing operational costs. One key issue in this context is self-organised coverage estimation, which is performed based on signal strength measurements and the reported position information of system users. In this paper, the effect of inaccurate position estimation on self-organised coverage estimation is investigated. We derive the signal reliability expression (i.e. the probability of the received signal being above a certain threshold) and the cell coverage expressions, taking the error in position estimation into consideration. This is done for both the shadowing and non-shadowing channel models. The accuracy of the modified reliability and cell coverage probability expressions is also numerically verified for both cases.
Conventional mobility management schemes tend to hit the core network with increased signaling load as cell sizes shrink and user mobility speeds increase. To mitigate this problem, the research community has proposed various intelligent mobility management schemes that take advantage of the predictability of users' mobility patterns. However, most of the proposed solutions focus only on active-state signaling (i.e., handover signaling), while proposals for improving idle-state signaling have been limited and have not been well received by industrial practitioners. This paper first surveys the major shortcomings of existing proposals for idle-mode mobility management and then proposes a new architecture, namely predictive mobility management (PrMM), to mitigate the identified challenges. An analytical framework is developed and a closed-form solution for the expected signaling overhead of PrMM is presented. The results of numerical evaluations confirm that, depending on user mobility and network configuration, the efficiency of PrMM can surpass the long term evolution (LTE) 4G signaling scheme by over 90%. Analysis of the results shows that the best performance is achieved for highly dense paging areas and lower cell-crossing rates.
The cumulative distribution function (CDF) of a non-central χ²-distributed random variable (RV) is often used when measuring the outage probability of communication systems. For adaptive transmitters, it is important but mathematically challenging to determine the outage threshold for an extreme target outage probability (e.g., 10⁻⁵ or less). This motivates us to investigate lower bounds of the outage threshold, and it is found that the one derived from the Chernoff inequality (named Cher-LB) is the most effective lower bound. The Cher-LB is then employed to predict the multi-antenna transmitter beamforming gain in ultra-reliable and low-latency communication under a first-order Markov time-varying channel. It is exhibited that, with the proposed Cher-LB, the pessimistic prediction of the beamforming gain is made sufficiently accurate for guaranteed reliability as well as transmit-energy efficiency.
Signal detection in large multiple-input multiple-output (large-MIMO) systems presents greater challenges compared to conventional massive-MIMO for two primary reasons. First, large-MIMO systems lack favorable propagation conditions, as they do not require a substantially greater number of service antennas relative to user antennas. Second, the wireless channel may exhibit spatial non-stationarity when an extremely large aperture array (ELAA) is deployed in a large-MIMO system. In this paper, we propose a scalable iterative large-MIMO detector named ANPID, which simultaneously delivers 1) close-to-maximum-likelihood detection performance, 2) low computational complexity (i.e., square order in the number of transmit antennas), 3) fast convergence, and 4) robustness to the spatial non-stationarity in ELAA channels. ANPID incorporates a damping demodulation step into stationary iterative (SI) methods and alternates between two distinct demodulated SI methods. Simulation results demonstrate that ANPID fulfills all four features concurrently and outperforms existing low-complexity MIMO detectors, especially in highly-loaded large-MIMO systems.
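To make the "stationary iterative" ingredient concrete, the sketch below runs a plain Jacobi iteration on the MMSE normal equations; it is a generic baseline under illustrative assumptions (i.i.d. channel, QPSK), not ANPID itself, whose damping-demodulation step and alternation between two SI methods are omitted.

import numpy as np

# Jacobi stationary iteration solving (H^H H + sigma2 I) x = H^H y,
# the linear system behind MMSE detection.
def jacobi_mmse(H, y, sigma2, iters=50):
    A = H.conj().T @ H + sigma2 * np.eye(H.shape[1])
    b = H.conj().T @ y
    D = np.diag(A).real                   # diagonal of A (real, positive)
    x = b / D                             # scaled matched-filter start
    for _ in range(iters):
        x = (b - A @ x + D * x) / D       # x <- D^{-1} (b - (A - D) x)
    return x

rng = np.random.default_rng(0)
n_tx, n_rx = 8, 32
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2 * n_rx)
s = rng.choice([-1.0, 1.0], n_tx) + 1j * rng.choice([-1.0, 1.0], n_tx)
noise = 0.05 * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
x = jacobi_mmse(H, H @ s + noise, sigma2=0.005)
print(np.sign(x.real) + 1j * np.sign(x.imag))   # recovered QPSK signs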
In this paper, a reflecting metasurface is proposed to control the reflection angle by manipulating the chemical potential (CP) of graphene. The surface can operate in three anomalous reflection modes, for θ = 45°, 60° and 75°, while illuminated with a normally incident electromagnetic wave (EMW). Moreover, by tuning the chemical potential of the graphene sheets, the proposed surface can switch off the reflection mode of operation by absorbing the incident EM power.
Recently, an upsurge of interest has been observed in providing multimedia-on-demand (MoD) services to mobile users over wireless networks. Nevertheless, due to the rapidly varying nature of mobile networks and the scarcity of radio resources, commercial implementation is still limited. This paper presents an efficient group-based multimedia-on-demand (GMoD) service model over multicast-enabled wireless infrastructures, where users requesting the same content are grouped and served simultaneously with a single multicast stream. The grouping is fulfilled through a process named "batching". An analytical model is derived to analyse a timeout-based batching scheme with respect to the tradeoff between user blocking probability and reneging probability. Based on the deduced analytical model, an optimal timeout-based batching scheme is proposed to dynamically identify the optimal tradeoff point that maximizes the system satisfaction ratio for a given system status. The proposed scheme is evaluated by means of simulation and compared with two basic batching schemes (timeout-based, size-based) and two hybrid ones (combined-for-profit, combined-for-loss). The simulation results demonstrate that the proposed approach ensures significant gains in terms of user satisfaction ratio, with low reneging and blocking probabilities.
Energy consumption has become an increasingly important aspect of wireless communications, from both an economic and an environmental point of view. New enhancements are being placed on mobile networks to reduce the power consumption of both mobile terminals and base stations. This paper studies the achievable rate region of AWGN broadcast channels under Time-division, Frequency-division and Superposition coding, and locates the optimal energy-efficient rate pair according to a comparison metric based on the average energy efficiency of the system. In addition to the transmit power, circuit power and signalling power are also incorporated in the energy efficiency function. Simulation results verify that the Superposition coding scheme achieves the highest energy efficiency in an ideal but unrealistic scenario where the signalling power is zero. With moderate signalling power, the Frequency-division scheme is the most energy-efficient, with Superposition coding and Time-division becoming second and third best. Conversely, when the signalling power is high, both the Time-division and Frequency-division schemes outperform Superposition coding. On the other hand, the Superposition coding scheme also incorporates rate-fairness into the system, allowing both users to transmit whilst maximising the energy efficiency.
A batch Kalman-based blind adaptive multiuser detection (K-BA-MUD) scheme with multiple receiver (Rx) antennas is investigated for asynchronous CDMA systems in the uplink direction. In this paper, we consider two receiver structures: the Independent and the Cooperative structure. Previous results had stated that the Cooperative structure always outperforms the Independent one. However, with a limited number of samples available for signal detection, we need to establish how cooperative the processing should be for that statement to hold. To this end, we propose the Partially Cooperative structure, which relaxes the Identifiability Condition (IC) of a single-Rx-antenna K-BA-MUD. It is concluded that the proposed structure outperforms the Fully Cooperative one whenever the number of samples is small and the IC is not violated. Finally, by reducing the size of the steering vector, we also reduce the computational complexity of updating the detector parameters.
A Ka-band inset-fed microstrip patch linear antenna array is presented for fifth generation (5G) applications in different countries. The bandwidth is enhanced by stacking parasitic patches on top of each inset-fed patch. The array employs 16 elements in a new H-plane configuration. The radiating patches and their feed lines are arranged in an alternating out-of-phase 180-degree rotating sequence to decrease the mutual coupling and improve the radiation pattern symmetry. A 24.4% measured bandwidth (24.35 to 31.13 GHz) is achieved with −15 dB reflection coefficients and 20 dB mutual coupling between the elements. With uniform amplitude distribution, a maximum broadside gain of 19.88 dBi is achieved. Scanning the main beam to 49.5° from broadside achieves 18.7 dBi gain with a −12.1 dB sidelobe level (SLL). These characteristics are in good agreement with the simulations, rendering the antenna a good candidate for 5G applications.
Deep learning is driving a radical paradigm shift in wireless communications, all the way from the application layer down to the physical layer. Despite this, there is an ongoing debate as to what additional value artificial intelligence (or machine learning) could bring, particularly to physical layer design, and what penalties there may be. These questions motivate a fundamental rethinking of wireless modem design in the artificial intelligence era. Through several physical-layer case studies, we argue for a significant role that machine learning could play, for instance in parallel error-control coding and decoding, channel equalization, interference cancellation, as well as multiuser and multiantenna detection. In addition, we discuss the fundamental bottlenecks of machine learning as well as their potential solutions in this paper.
Meta-material-based antenna designs, such as the large intelligent surface (LIS), are expected to be a game changer in future wireless cellular systems, since they provide a simple yet effective means of drastically improving the wireless propagation environment. This paper investigates the ergodic capacity of LIS-aided multiple input multiple output (MIMO), a.k.a. MIMO-LIS, systems. To this end, the derivation of the probability density function (pdf) of the cascaded channel, i.e. the transmitter-to-LIS-to-receiver channel, is studied. Moreover, both a high signal-to-noise ratio (SNR) asymptotic expression and closed-form approximations of this ergodic capacity are provided. Monte-Carlo simulations graphically validate the correctness and accuracy of our various expressions for different antenna configurations. Furthermore, our performance analysis shows that the MIMO-LIS system outperforms both MIMO-AF and MIMO systems (by more than 60% and 15%, respectively, at a 30 dB SNR) from an ergodic capacity point of view, which confirms that the LIS can be beneficial for improving the propagation environment.
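A quick Monte-Carlo check of the kind such an analysis is validated against can be sketched as follows; the i.i.d. Rayleigh cascaded-channel model, dimensions and SNR below are illustrative assumptions, not the paper's exact setup.

import numpy as np

# Monte-Carlo ergodic capacity of the cascaded Tx-to-LIS-to-Rx channel
# G = H2 diag(phi) H1 with random LIS phases and equal power allocation:
# C = E[ log2 det(I + (snr/nt) G G^H) ].
rng = np.random.default_rng(0)
nt, n_lis, nr = 4, 64, 4
snr = 10 ** (30 / 10)                       # 30 dB

def crandn(*shape):
    return (rng.standard_normal(shape)
            + 1j * rng.standard_normal(shape)) / np.sqrt(2)

caps = []
for _ in range(2000):
    H1 = crandn(n_lis, nt)                  # Tx -> LIS hop
    H2 = crandn(nr, n_lis) / np.sqrt(n_lis) # LIS -> Rx hop (normalized)
    phi = np.exp(1j * rng.uniform(0, 2 * np.pi, n_lis))
    G = H2 @ (phi[:, None] * H1)            # apply diagonal phase matrix
    sign, logdet = np.linalg.slogdet(np.eye(nr) + (snr / nt) * G @ G.conj().T)
    caps.append(logdet / np.log(2))
print("ergodic capacity ~", round(float(np.mean(caps)), 2), "bit/s/Hz")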
With increased wireless connectivity and embedded sensors, vehicles are becoming more intelligent, offering Internet access, telematics, and advanced driver assistance systems. Along with all the benefits, connectivity to the public network and automotive control systems introduces new threats and security risks to connected and autonomous driving systems. Therefore, it is highly critical to design robust security mechanisms to protect the system from potential attacks and security vulnerabilities. An intrusion detection system (IDS) is a promising solution for detecting and identifying attacks and malicious behaviour within the network. This paper proposes a two-layer IDS mechanism that exploits machine learning (ML) solutions for collaborative attack detection between an on-vehicle IDS module and an IDS platform developed at a mobile edge computing (MEC) server. The results illustrate that the proposed solution can reduce communication latency and energy consumption by up to 80% while maintaining a high level of detection accuracy.
Femtocells are becoming a promising solution for coping with the explosive growth of mobile broadband usage in cellular networks. While each femtocell only covers a small area, a massive deployment is expected in the near future, forming networked femtocells. An immediate challenge is to provide seamless mobility support for networked femtocells with minimal support from mobile core networks. In this paper, we propose efficient local mobility management schemes for networked femtocells based on X2 traffic forwarding under the 3GPP Long Term Evolution Advanced (LTE-A) framework. Instead of implementing the path switch operation at a core network entity for each handover, a local traffic forwarding chain is constructed to use the existing Internet backhaul and the local path between the local anchor femtocell and the target femtocell for ongoing session communications. Both analytical studies and simulation experiments are conducted to evaluate the proposed schemes and compare them with the original 3GPP scheme. The results indicate that the proposed schemes can significantly reduce the signaling cost and relieve the processing burden of mobile core networks at a reasonable distributed cost for local traffic forwarding. In addition, the proposed schemes enable fast session recovery to adapt to the self-deployment nature of femtocells.
The widely accepted OFDMA air interface technology has recently been adopted in most mobile standards by the wireless industry. However, similar to other frequency-time multiplexed systems, their performance is limited by inter-cell interference. To address this performance degradation, interference mitigation can be employed to maximize the potential capacity of such interference-limited systems. This paper surveys key issues in mitigating interference and gives an overview of the recent developments of a promising mitigation technique, namely, interference avoidance through inter-cell interference coordination (ICIC). By using optimization theory, an ICIC problem is formulated in a multi-cell OFDMA-based system and some research directions in simplifying the problem and associated challenges are given. Furthermore, we present the main trends of interference avoidance techniques that can be incorporated in the main ICIC formulation. Although this paper focuses on 3GPP LTE/LTE-A mobile networks in the downlink, a similar framework can be applied for any typical multi-cellular environment based on OFDMA technology. Some promising future directions are identified and, finally, the state-of-the-art interference avoidance techniques are compared under LTE-system parameters.
This paper proposes a novel graph-based multicell scheduling framework to efficiently mitigate downlink inter-cell interference in OFDMA-based small cell networks. We define a graph-based optimization framework based on the interference condition between any two users in the network, assuming they are served on similar resources. Furthermore, we prove that the proposed framework obtains a tight lower bound for the conventional weighted sum-rate maximization problem in practical scenarios. Thereafter, we decompose the optimization problem into dynamic graph-partitioning-based subproblems across different subchannels and provide an optimal solution using a branch-and-cut approach. Subsequently, due to the high complexity of this solution, we propose heuristic algorithms that display near-optimal performance. At the final stage, we apply cluster-based resource allocation per subchannel to find the candidate users with maximum total weighted sum-rate. A case study on networked small cells is also presented, with simulation results showing a significant improvement over state-of-the-art multicell scheduling benchmarks in terms of outage probability as well as average cell throughput.
A polarization-insensitive circular reflectarray antenna (RA) for long-distance wireless communications is investigated. By combining patches, dipoles, and rings, a polarization-insensitive unit cell is achieved. With a phase variation of around 314° between 30 GHz and 32 GHz, a circular reflectarray with a radius of 400 mm is built. Simulation results indicate a maximum realized gain of 27.6 dB at 30 GHz.
Conventional cellular systems are dimensioned according to a worst case scenario, and they are designed to ensure ubiquitous coverage with an always-present wireless channel irrespective of the spatial and temporal demand of service. A more energy conscious approach will require an adaptive system with a minimum amount of overhead that is available at all locations and all times but becomes functional only when needed. This approach suggests a new clean slate system architecture with a logical separation between the ability to establish availability of the network and the ability to provide functionality or service. Focusing on the physical layer frame of such an architecture, this paper discusses and formulates the overhead reduction that can be achieved in next generation cellular systems as compared with the Long Term Evolution (LTE). Considering channel estimation as a performance metric whilst conforming to time and frequency constraints of pilots spacing, we show that the overhead gain does not come at the expense of performance degradation.
Cognitive radio has emerged as a promising paradigm to improve spectrum usage efficiency and to cope with the spectrum scarcity problem by dynamically detecting and re-allocating white spaces in licensed radio bands to unlicensed users. However, cognitive radio may cause extra energy consumption because it relies on new and additional technologies and algorithms. The main objective of this work is to enhance the energy efficiency, defined as bits/Joule/Hz, of the proposed cellular cognitive radio network (CRN). In this paper, a typical frame structure of a secondary user (SU) is considered, consisting of sensing and data transmission slots. We analyze and derive the expression for the energy efficiency of the proposed CRN as a function of the sensing and data transmission durations. The optimal frame structure for maximum bits per joule is investigated under practical network traffic environments. The impact of the optimal sensing time and frame length on the achievable energy efficiency, throughput and interference is investigated and verified by simulation results, compared with the relevant state of the art. Our analytical results are in perfect agreement with the empirical results and provide useful insights on how to select the sensing length and frame length subject to the network environment and required network performance. © 2014 IEEE.
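The shape of this sensing-versus-transmission tradeoff can be sketched numerically as below; the frame model is the standard one (sense for tau, transmit for T − tau), but the detection-quality curve and all the power and rate numbers are toy assumptions, not the paper's derived expressions.

import numpy as np

# Sweep the sensing time tau inside a frame of length T: longer sensing
# improves detection but shortens transmission, so bits-per-joule peaks
# at an interior tau.
T, C = 0.1, 6.0                       # frame length [s], rate [bit/s/Hz]
p_sense, p_tx = 0.1, 1.0              # sensing / transmit powers [W]
taus = np.linspace(0.001, 0.05, 200)
p_detect = 1.0 - np.exp(-200.0 * taus)         # toy detection-quality curve
bits = (T - taus) * C * p_detect               # expected useful bits (per Hz)
energy = taus * p_sense + (T - taus) * p_tx    # energy spent per frame [J]
ee = bits / energy                             # bits per joule (per Hz)
best = np.argmax(ee)
print(f"best sensing time ~ {taus[best] * 1e3:.1f} ms, EE ~ {ee[best]:.2f}")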
A novel low-density signature (LDS) structure is proposed for the transmission and detection of symbol-synchronous communication over the memoryless Gaussian channel. Given N as the processing gain, under this new arrangement users' symbols are spread over N chips but only d_v < N of those chips contain nonzero values. The spread symbols are then uniquely interleaved so that the received signal, sampled at chip rate, contains contributions from only d_c < K users, where K denotes the total number of users in the system. Furthermore, a near-optimum chip-level iterative soft-in-soft-out (SISO) multiuser decoding (MUD) scheme, based on the message passing algorithm (MPA), is proposed to approximate optimum detection by efficiently exploiting the LDS structure. Given β = K/N as the system loading, our simulations suggest that the proposed system, alongside the proposed detection technique, can achieve an overall performance in the AWGN channel that is close to single-user performance even when the system has 200% loading, i.e., β = 2. Its robustness against the near-far effect and its performance behaviour, which is very similar to that of optimum detection, are demonstrated in this paper. In addition, the complexity required for detection is now exponential in d_c instead of K, as in the conventional code division multiple access (CDMA) structure employing the optimum multiuser detector.
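The sparsity pattern at the heart of the scheme is easy to visualize: below is a minimal construction of a regular N × K signature matrix in which every user occupies d_v chips and every chip carries d_c users, at 200% loading (β = K/N = 2). The consecutive-chip interleaving is a simple illustrative pattern, not the paper's optimized design.

import numpy as np

# Regular LDS matrix: K users over N chips, d_v nonzero chips per user
# (column) and d_c = K * d_v / N users per chip (row).
def lds_matrix(N=6, K=12, d_v=2):
    S = np.zeros((N, K))
    for k in range(K):
        chips = [(k + j) % N for j in range(d_v)]   # d_v distinct chips
        S[chips, k] = 1.0 / np.sqrt(d_v)            # unit-energy signature
    return S

S = lds_matrix()
print((S != 0).sum(axis=0))   # d_v = 2 nonzero chips per user
print((S != 0).sum(axis=1))   # d_c = 4 users per chip, despite beta = 2

The low per-chip degree d_c is what makes the chip-level message-passing detector affordable: its complexity grows exponentially in d_c rather than in K.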
Nowadays, dense network deployment is considered one of the most effective strategies to meet the capacity and connectivity demands of the fifth generation (5G) cellular system. Among several challenges, energy consumption will be a critical consideration in the 5G era. In this direction, base station on/off operation, i.e., sleep mode, is an effective technique to mitigate the excessive energy consumption of ultra-dense cellular networks. However, current implementations of this technique are unsuitable for dynamic networks with fluctuating traffic profiles due to coverage constraints, quality-of-service requirements and hardware switching latency. To this end, we propose an energy/load proportional approach for 5G base stations with control/data plane separation. The proposed approach depends on multi-step sleep mode profiling and predicts the base station vacation time in advance. Such a prediction enables selecting the best sleep mode strategy whilst minimizing the effect of base station activation/reactivation latency, resulting in significant energy saving gains.
In this paper, we study the Gaussian Cognitive Z-interference channel (GCZIC) and its multiuser extension, the Gaussian Cognitive Z-broadcast interference channel (GBZIC). We review some known capacity results and bounds for the GCZIC for various levels of interference. We derive a new, improved inner bound for the CZIC under conditions which intersect with those for which the capacity is not known. We then derive capacity results and bounds for the CBZIC when the broadcast component of the channel is a degraded broadcast channel.
In this paper, we extend the analysis of two-receiver broadcast channels with random parameters to the three-receiver case. Specifically, we base our work on Nair and El Gamal's results for the three-receiver discrete memoryless multilevel broadcast channel and assume that state information is available non-causally at the transmitter. We provide an achievable rate region for this setting and acknowledge its importance in the study of multiuser cognitive radio configurations.
A Scalable Resource Allocation (ScRA) algorithm is developed to improve mobile network radio resource utilization [1]. Traditional mobile network dimensioning is based on the "busy hour" traffic intensity, and this Static Resource Allocation (StRA) methodology does not seem able to provide efficient radio resource utilization for the present and future multi-service environment, with its expected spatially and temporally varying loads. This hinders the introduction of wireless IP-based services, for which demand is rapidly increasing. This paper provides an extended analysis by incorporating single-slot FIFO and single-slot Round Robin (RR), blocked-call cleared (BCC) and blocked-call delayed (BCD) strategies in the ScRA scheme. By employing the ScRA scheme in an example GSM and GPRS network, we specifically investigate and evaluate the system throughput for both the circuit- and packet-switched networks. The findings show that the single-slot FIFO ScRA and single-slot RR ScRA schemes yield no difference in system throughput. On the other hand, when BCD is implemented in the ScRA scheme, there is a significant throughput gain.
Along with spectral efficiency (SE), energy efficiency (EE) is becoming one of the main performance evaluation criteria in communications. These two conflicting criteria can be linked through their trade-off. As far as MIMO is concerned, a closed-form approximation of the EE-SE trade-off has recently been proposed and has proved useful for analyzing the impact of using multiple antennas on the EE. In this paper, we use this closed-form approximation for assessing and comparing the EE gain of MIMO over SISO systems when different power consumption models (PCMs) are considered at the transmitter. The EE of a communication system is closely related to its power consumption. In theory, only the transmit power is counted as consumed power, whereas in a practical setting the consumed power is the sum of two terms: the fixed consumed power, which accounts for cooling, processing, etc., and the variable consumed power, which varies as a function of the transmit power. Our analysis unveils the large mismatch between the theoretical and practical EE gains of MIMO over SISO: in theory, the EE gain increases both with the SE and the number of antennas, and hence the potential of MIMO for EE improvement appears very large in comparison with SISO; on the contrary, the EE gain is small and decreases as the number of transmit antennas increases when realistic PCMs are considered.
Software-defined networking (SDN) enables centralized control of a network of programmable switches by dynamically updating flow rules. This paves the way for dynamic and autonomous control of the network. In order to apply a suitable set of policies to the correct set of traffic flows, SDN needs input from traffic classification mechanisms. Today, there is a variety of classification algorithms in machine learning. However, recent studies have found that using an arbitrary algorithm does not necessarily provide the best classification outcome on a dataset, and therefore a framework called an ensemble, which combines individual algorithms to improve classification results, has gained traction. In this paper, we propose the application of an ensemble algorithm as a machine learning pre-processing tool that classifies ingress network traffic so that SDN can pick the right set of traffic policies. Performance evaluation results show that this ensemble classifier achieves robust performance across all tested traffic types.
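A minimal version of this pre-processing stage can be put together with scikit-learn's VotingClassifier, as sketched below; the synthetic features stand in for real flow statistics (e.g., packet sizes, inter-arrival times), and the base learners are illustrative choices rather than the paper's exact ensemble.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for labelled flow records: 12 features, 3 traffic classes.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Soft voting averages the base learners' class probabilities, so the
# ensemble can be more robust than any single misconfigured classifier.
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("knn", KNeighborsClassifier())],
    voting="soft")
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))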
This patent is based on our novel data discovery mechanism for large-scale, highly distributed and heterogeneous data networks. Managing Big Data harvested from IoT environments is an example application.
Smart Grid (SG) is the revolutionised power network characterised by a bidirectional flow of energy and information between customers and suppliers. The integration of power networks with information and communication technologies enables pervasive control, automation and connectivity from the energy generation power plants down to the consumption level. However, the development of wireless communications, the increased level of autonomy, and the growing softwarisation and virtualisation trends have expanded the attack susceptibility and threat surface of SGs. Besides, with real-time information flow and online energy consumption control systems, protecting customers' privacy and preserving their confidential data in the SG are critical issues to be addressed. In order to prevent potential attacks and vulnerabilities in evolving power networks, the need for further study of security and privacy mechanisms is reinforced. In addition, there has recently been an ever-increasing use of machine intelligence and Machine Learning (ML) algorithms in different components of the SG. ML models are currently the mainstream for attack detection and threat analysis. However, despite these algorithms' high accuracy and reliability, ML systems are also vulnerable to a group of malicious activities called adversarial ML (AML) attacks. Throughout this paper, we survey and discuss new findings and developments in existing security issues and privacy breaches associated with the SG, as well as novel threats embedded within power systems due to the development of ML-based applications. Our survey builds multiple taxonomies and tables to express the relationships of various variables in the field. Our final section identifies the implications of emerging technologies, future communication systems, and advanced industries for the security and privacy issues of the SG.
Multi-band and multi-tier network densification is considered the most promising solution to overcome the capacity crunch problem of cellular networks. In this direction, small cells (SCs) are being deployed within the macro cell (MC) coverage to off-load some of the users associated with the MCs. This deployment scenario raises several problems. Among others, signalling overhead and mobility management will become critical considerations. Frequent handovers (HOs) in ultra-dense SC deployments could lead to a dramatic increase in signalling overhead. This suggests a paradigm shift towards a signalling-conscious cellular architecture with smart mobility management. In this regard, the control/data separation architecture (CDSA) with dual connectivity is being considered for the future radio access. Taking the CDSA as the radio access network (RAN) architecture, we quantify the reduction in HO signalling w.r.t. the conventional approach. We develop analytical models which compare the signalling generated during various HO scenarios in CDSA-based and conventionally deployed networks. New parameters are introduced which, when optimally set, can significantly reduce the HO signalling load. The derived model includes HO success and HO failure scenarios, along with specific derivations for continuous and non-continuous mobility users. Numerical results show promising CDSA gains in terms of savings in HO signalling overhead.
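A toy message-count model (the counts are illustrative assumptions, not 3GPP figures or the paper's analytical model) conveys why data-plane-only small-cell changes under CDSA cut signalling:

```python
# Conventional networks: every SC change triggers a full HO procedure.
# CDSA with dual connectivity: SC changes under the same macro controller
# need only a lightweight data-plane update; full HOs occur only at macro
# (control-plane) boundaries. All message counts below are assumptions.
FULL_HO_MSGS = 12      # assumed messages per full HO (preparation + execution)
DATA_PLANE_MSGS = 4    # assumed messages per CDSA data-plane-only SC change
N_SC_CHANGES = 50      # SC boundary crossings along a user trajectory
N_MC_CHANGES = 3       # macro boundary crossings along the same trajectory

conventional = FULL_HO_MSGS * (N_SC_CHANGES + N_MC_CHANGES)
cdsa = DATA_PLANE_MSGS * N_SC_CHANGES + FULL_HO_MSGS * N_MC_CHANGES
print(f"conventional: {conventional} msgs, CDSA: {cdsa} msgs, "
      f"saving: {100 * (1 - cdsa / conventional):.1f}%")
```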
Open Radio Access Network (O-RAN) improves the flexibility and programmability of the 5G network by applying Software-Defined Networking (SDN) principles. O-RAN defines a near-real-time Radio Intelligent Controller (RIC) to decouple the RAN functionalities into the control and user planes. Although the O-RAN security group offers several countermeasures against threats, the RIC is still prone to attacks. In this letter, we introduce a novel attack, named Bearer Migration Poisoning (BMP), that misleads the RIC into triggering a malicious bearer migration procedure. The adversary aims to change the user-plane traffic path and cause significant network anomalies such as routing blackholes. A remarkable feature of BMP is that even a weak adversary with only two compromised hosts can launch the attack without compromising the RIC, RAN components, or applications. Based on our numerical results, the attack increases the signalling cost by approximately 10 times. Our experimental results show that the attack degrades the downlink and uplink throughput to nearly 0 Mbps, seriously impacting the service quality and end-user experience.
The spatially incoherent radiators used in visible light communication (VLC) constrain the optical carrier to be driven only by a real electrical sub-carrier, which cannot be quadrature modulated as in classic RF-based systems. This restriction, in turn, severely limits the transmission throughput of VLC systems. To overcome this technical challenge, we propose a novel coherent transmission scheme for VLC, in which the optical carrier, although treated as a purely amplitude-modulated carrier, becomes capable of transmitting two-dimensional (2D) symbols (e.g., quadrature-modulated symbols). The ability of our new coherent transmission scheme to transmit 2D symbols is validated through analytical symbol error rate derivation and Matlab simulations. Results show that our scheme can improve both the spectral and energy efficiency of VLC systems, i.e., by either doubling the spectral efficiency or achieving more than 45% energy efficiency improvement when compared to its existing counterparts.
To flexibly support diverse communication requirements (e.g., throughput, latency, massive connectivity) for the next generation of wireless communications, one viable solution is to divide the system bandwidth into several service subbands, each for a different type of service. In such a multi-service (MS) system, each service has its own optimal frame structure, while the services are isolated by subband filtering. In this paper, a framework for the MS system is established based on subband filtered multi-carrier (SFMC) modulation. We consider both single-rate (SR) and multi-rate (MR) signal processing as two different MS-SFMC implementations, each having different performance and computational complexity. By comparison, the SR system outperforms the MR system in terms of performance, while the MR system has significantly lower computational complexity than the SR system. Numerical results show the effectiveness of our analysis and the proposed systems. The proposed SR and MR MS-SFMC systems provide guidelines for frame structure optimization and algorithm design in next-generation wireless systems.
Intelligent reflecting surface (IRS) is a novel technology to manipulate wireless propagation channels via smart and controllable signal reflection. In this paper, we investigate an IRS-aided integrated terrestrial-satellite network (ITSN) system, where the IRS is deployed to assist the co-existing transmissions of the terrestrial small base stations (SBSs) and the satellite. Because of the spectrum sharing in the ITSN, the interference between the two systems should be carefully mitigated. Our objective is to maximize the weighted sum rate (WSR) of all users by jointly optimizing the frame-based coordinated transmit beamforming vectors at the SBSs, the phase shift matrix at the IRS, and the frame user scheduling, subject to SBSs' individual power constraints and unit modulus constraints of phase shifters. To this end, we first adopt the agglomerative hierarchical clustering (AHC) method to schedule the satellite users to different frames. Then the block coordinate descent (BCD) algorithm is proposed, which alternately optimizes the transmit beamforming vectors and the reflective phase shift matrix. In particular, the optimal transmit beamforming vectors are obtained via the fractional programming (FP) technique. Meanwhile, two efficient algorithms, i.e., the Riemannian manifold (RM) and the successive convex approximation (SCA), are proposed for the phase shift optimization. Finally, simulation results are provided to demonstrate the performance gain of our schemes over other benchmark schemes.
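To make the alternating structure concrete, here is a stripped-down BCD sketch for a single SBS serving one user via an IRS, where both blocks happen to have closed forms (MRT for the beamformer, phase alignment for the reflector); channels and sizes are random placeholders, and the paper's FP/RM/SCA machinery for the multi-user WSR problem is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 32                                   # SBS antennas, IRS elements
h_d = rng.normal(size=M) + 1j * rng.normal(size=M)          # SBS -> user (direct)
G = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))  # SBS -> IRS
h_r = rng.normal(size=N) + 1j * rng.normal(size=N)          # IRS -> user
P, sigma2 = 1.0, 1.0

theta = np.zeros(N)                 # phase shifts (unit modulus by construction)
for it in range(5):
    # Block 1: MRT beamformer for the effective channel under current phases.
    c = h_d + G.conj().T @ (np.exp(-1j * theta) * h_r)
    w = np.sqrt(P) * c / np.linalg.norm(c)
    # Block 2: align every reflected component with the direct one (closed
    # form for one user; the multi-user case needs FP/RM/SCA as in the paper).
    direct = h_d.conj() @ w
    refl = h_r.conj() * (G @ w)
    theta = np.angle(direct) - np.angle(refl)
    rate = np.log2(1 + abs(direct + np.exp(1j * theta) @ refl) ** 2 / sigma2)
    print(f"BCD iteration {it}: rate = {rate:.3f} bit/s/Hz")  # non-decreasing
```

Because each block is optimal given the other, the achieved rate is monotonically non-decreasing over iterations, which is the convergence property BCD relies on.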
A Ka-band inset-fed microstrip patch linear antenna array is presented for fifth generation (5G) applications in different countries. The bandwidth is enhanced by stacking parasitic patches on top of each inset-fed patch. The array employs 16 elements in a new H-plane configuration. The radiating patches and their feed lines are arranged in an alternating out-of-phase 180-degree rotating sequence to decrease the mutual coupling and improve the radiation pattern symmetry. A 24.4% measured bandwidth (24.35 to 31.13 GHz) is achieved with reflection coefficients below -15 dB and 20 dB of isolation between the elements. With uniform amplitude distribution, a maximum broadside gain of 19.88 dBi is achieved. Scanning the main beam to 49.5° from broadside achieved 18.7 dBi gain with a -12.1 dB sidelobe level (SLL). These characteristics are in good agreement with the simulations, making the antenna a good candidate for 5G applications.
Radio resource management (RRM) for future fifth-generation (5G) heterogeneous networks (HetNets) has emerged as a critical area due to the increased density of small-cell networks and radio access technologies. Recent research has mostly concentrated on resource management, including spectrum utilization and interference mitigation, but the complexity of these schemes has been given little attention. This paper provides an overview of the issues arising in future 5G systems and highlights their importance. The different approaches used in recently published surveys to categorize RRM schemes are discussed, and our survey method is presented. We report on a survey of recently studied HetNet RRM schemes, with a focus on the joint optimization of radio resource allocation with other mechanisms. These RRM schemes are subcategorized according to their optimization metrics and qualitatively analyzed and compared. An analysis of the complexity of RRM schemes in terms of implementation and computation is presented. Several potential directions for future RRM research in 5G HetNets are also identified.
Low Earth Orbit (LEO) satellites are becoming increasingly important among existing satellite communication systems. Due to their low orbits, LEO satellites can provide high-speed, low-latency, and ubiquitous services for ground users. However, as the number of satellites continues to increase, frequency bands, as non-renewable resources, will seriously restrict the future evolution of LEO networks. In this paper, a flexible spectrum sharing and cooperative service method is proposed to address the collinear interference caused when LEO satellites pass through the coverage area of a GEO beam. By using continuous power allocation optimization, our scheme ensures that the service of the LEO satellites will not degrade the service quality of the GEO beam. Meanwhile, by taking full advantage of the cooperation between LEO satellites, their service quality can be significantly improved. Simulation results show that our proposed algorithm converges quickly, and that both the transmission efficiency and the stability of the system can be guaranteed.
We derive the uplink system model for In-band and Guard-band narrowband Internet of Things (NB-IoT). The results reveal that the actual channel frequency response (CFR) is not a simple Fourier transform of the channel impulse response, due to sampling rate mismatch between the NB-IoT user and Long Term Evolution (LTE) base station. Consequently, a new channel equalization algorithm is proposed based on the derived effective CFR. In addition, the interference is derived analytically to facilitate the co-existence of NB-IoT and LTE signals. This work provides an example and guidance to support network slicing and service multiplexing in the physical layer.
Orthogonal relay-based cooperative communication enjoys distributed spatial diversity gain at the price of spectral efficiency. This work aims at improving the spectral efficiency of orthogonal opportunistic decode-and-forward (DF) relaying through the employment of a novel adaptive modulation scheme. The proposed scheme allows the source and relay to transmit information in different modulation formats, while a MAP receiver is employed at the destination for diversity combining. Given the individual power constraints and target bit-error-rate (BER), the proposed scheme can significantly improve the spectral efficiency in comparison with non-adaptive DF relaying and adaptive direct transmission.
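The rate-adaptation step can be illustrated with a standard Gray-coded square M-QAM BER approximation (a generic sketch, not the paper's MAP-receiver analysis; the SNR is taken per bit and the thresholds are illustrative): pick the largest constellation whose predicted BER still meets the target.

```python
import math

def qfunc(x):
    # Gaussian tail function Q(x).
    return 0.5 * math.erfc(x / math.sqrt(2))

def mqam_ber(m, snr_per_bit):
    # Standard approximation for Gray-coded square M-QAM bit error rate.
    k = math.log2(m)
    return (4 / k) * (1 - 1 / math.sqrt(m)) * qfunc(
        math.sqrt(3 * k * snr_per_bit / (m - 1)))

def select_modulation(snr_db, target_ber=1e-3, candidates=(4, 16, 64, 256)):
    snr = 10 ** (snr_db / 10)
    best = None
    for m in candidates:
        if mqam_ber(m, snr) <= target_ber:
            best = m   # keep the highest-order constellation meeting the target
    return best        # None -> no candidate meets the BER target

for snr_db in (5, 10, 15, 20, 25):
    print(snr_db, "dB ->", select_modulation(snr_db))
```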
In this letter, we study the beamforming design in a lens-antenna-array-based joint multicast-unicast millimeter-wave massive MIMO system, where simultaneous wireless information and power transfer at the users is considered. First, we develop a beam selection scheme based on the structure of the lens antenna array, and then zero-forcing precoding is adopted to cancel the inter-unicast interference among users. Next, we formulate a sum-rate maximization problem by jointly optimizing the unicast power, multicast beamforming and power splitting ratio, subject to the maximum transmit power constraint at the base station and the minimum harvested energy for each user. By employing the successive convex approximation technique, we transform the original optimization problem into a convex one and propose an iterative algorithm to solve it. Finally, simulations are conducted to verify the effectiveness of the proposed schemes.
Coordinated multi-point (CoMP) transmission is one of the key features of Long Term Evolution-Advanced (LTE-A) and a promising concept for interference mitigation in 5th generation (5G) and beyond densely deployed wireless networks. Due to the cost of coordination among many transmission points (TPs), the radio access network (RAN) needs to be clustered into smaller groups of TPs for coordination. In this paper, we develop a novel, load-aware clustering model by employing a merge/split concept from coalitional game theory. A load-aware utility function is introduced to maximize both spectral efficiency (SE) and load balancing (LB) objectives. We show that the proposed load-aware clustering model dynamically adapts to the network load conditions, providing high SE in low-load conditions and better load distribution with significantly fewer unsatisfied users in over-load conditions, while keeping SE at comparable levels when compared to a greedy clustering model. Simulation results show that the proposed solution can reduce the number of unsatisfied users due to over-load conditions by 68.5% when compared to the greedy clustering algorithm. Furthermore, we analyze the stability of the proposed solution and prove that it converges to a stable partition in both homogeneous network (HN) and random network (RN) scenarios, with and without hotspots. In addition, we show the convergence of our algorithm to a unique clustering solution with the best possible payoff when such a solution exists.
In this paper, a novel spatially non-stationary channel model is proposed for link-level computer simulations of massive multiple-input multiple-output (mMIMO) with extremely large aperture array (ELAA). The proposed channel model allows a mix of non-line-of-sight (NLoS) and LoS links between a user and service antennas. The NLoS/LoS state of each link is characterized by a binary random variable, which obeys a correlated Bernoulli distribution. The correlation is described in the form of an exponentially decaying window. In addition, the proposed model incorporates shadowing effects which are non-identical for NLoS and LoS states. It is demonstrated, through computer emulation, that the proposed model can capture almost all spatially non-stationary fading behaviors of the ELAA-mMIMO channel. Moreover, it has a low implementational complexity. With the proposed channel model, Monte-Carlo simulations are carried out to evaluate the channel capacity of ELAA-mMIMO. It is shown that the ELAA-mMIMO channel capacity has considerably different stochastic characteristics from the conventional mMIMO due to the presence of channel spatial non-stationarity.
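One simple way to realise such correlated Bernoulli LoS/NLoS states (a sketch under assumed parameters, using a Gaussian copula with an exponentially decaying correlation window; the paper's exact construction may differ) is:

```python
import numpy as np
from scipy.stats import norm

def los_states(n_antennas, p_los=0.6, decay_window=8.0, seed=0):
    """Draw LoS(1)/NLoS(0) states along the array via a Gaussian copula."""
    rng = np.random.default_rng(seed)
    idx = np.arange(n_antennas)
    # Exponentially decaying correlation between antennas m and n.
    corr = np.exp(-np.abs(idx[:, None] - idx[None, :]) / decay_window)
    g = rng.multivariate_normal(np.zeros(n_antennas), corr)
    # Thresholding keeps each marginal Bernoulli(p_los) while neighbouring
    # antennas stay correlated, producing clustered LoS/NLoS runs.
    return (g < norm.ppf(p_los)).astype(int)

print("".join(map(str, los_states(64))))  # LoS/NLoS pattern along the array
```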
Although Geostationary-Equatorial-Orbit (GEO) satellites have achieved significant success in conducting space missions, they cannot meet the 5G latency requirements due to their large distance from the Earth's surface. Therefore, Low-Earth-Orbit (LEO) satellites arise as a potential solution for the latency problem. Nevertheless, integrating the 5G terrestrial networks with LEO satellites puts an increased burden on the satellites' limited budget, which stems from their miniature sizes, restricted weights, and the small available surface for solar harvesting in the presence of additional required equipment. This paper aims to design the Electrical Power System (EPS) for 5G LEO satellites and investigate altitudes that meet the latency and capacity requirements of 5G applications. In this regard, accurate solar irradiance determination for the nadir-orientation scenario, Multi-Junction (MJ) solar cell modeling, backup battery type and number, and the design of highly efficient converters are addressed. Accordingly, the power budgeting of the 5G LEO satellite can be achieved by defining the maximum generated power and determining the satellite's subsystem power requirements for 5G missions. In the sequel, the measured and simulated values of the electrical V-I characteristics of an MJ solar panel are compared to validate the model, using a Clyde Space solar panel that reaches a maximum power generation of approximately 1 W at I_MPP = 0.426 A and V_MPP = 2.35 V. Moreover, a synchronous boost converter circuit is designed based on commercial off-the-shelf elements.
In asynchronous (intermittent) interference scenarios, the set of co-channel interference sources over the data interval may differ from that over the training interval, typically with extra interference sources present over the data interval. Under such conditions, a conventional adaptive beamformer designed over the training interval may lose its efficiency when applied to the data interval. In this paper, we address the problem by 1) formulating a family of second-order-statistics adaptive beamformers regularized by the covariance matrix estimated over the data interval; 2) proposing a maximum likelihood methodology for optimization of the combined (mixed) covariance matrix, based on maximizing the product of a likelihood ratio that checks the accuracy of the recovered training signals and a likelihood ratio on the equality of the eigenvalues in the subspace complementary to the signal subspace defined over the data interval; and 3) demonstrating the efficiency and robustness of the proposed solution, as a linear adaptive beamformer and as an initialization for an iterative beamformer with projections onto the finite alphabet, in different asynchronous interference scenarios, compared with the basic training- and data-based interference rejection combining receivers.
This paper studies the joint optimization of convergence time in federated learning over wireless networks (FLOWN). We consider the criterion and protocol for the selection of participating devices in FLOWN under an energy constraint and derive its impact on device selection. In order to improve the training efficiency, age-of-information (AoI) enables FLOWN to assess the freshness of gradient updates among participants. Aiming to speed up convergence, we jointly investigate global loss minimization and latency minimization in a Stackelberg-game-based framework. Specifically, we formulate global loss minimization as a leader-level problem for reducing the number of required rounds, and latency minimization as a follower-level problem to reduce the time consumption of each round. By decoupling the follower-level problem into two sub-problems, resource allocation and sub-channel assignment, we obtain the follower's optimal strategy through monotonic optimization and matching theory. At the leader level, we derive an upper bound on the convergence rate, subsequently reformulate the global loss minimization problem, and propose a new age-of-update (AoU) based device selection algorithm. Simulation results indicate the superior performance of the proposed AoU-based device selection scheme in terms of convergence rate, as well as efficient utilization of the available sub-channels.
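The AoU-driven selection step can be sketched as follows (a toy loop with invented energy costs and budgets; the paper couples this with the Stackelberg resource allocation, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(1)
n_devices, n_rounds, k_select = 20, 5, 5
aou = np.zeros(n_devices)                  # rounds since each device last updated
energy = rng.uniform(0.5, 1.0, n_devices)  # remaining energy budget per device
cost_per_round = 0.1                       # assumed energy cost of participating

for t in range(n_rounds):
    eligible = np.flatnonzero(energy >= cost_per_round)
    # Highest-AoU eligible devices first: stalest updates are most informative.
    chosen = eligible[np.argsort(-aou[eligible])[:k_select]]
    aou += 1                               # everyone ages by one round...
    aou[chosen] = 0                        # ...except freshly aggregated devices
    energy[chosen] -= cost_per_round
    print(f"round {t}: selected {sorted(chosen.tolist())}")
```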
Current technological advancements in Software-Defined Networking (SDN) can provide efficient solutions for smart grids (SGs). An SDN-based SG promises to enhance the efficiency, reliability and sustainability of the communication network. However, new security breaches can be introduced with this adaptation. A layer of defence against insider attacks can be established using a machine learning based intrusion detection system (IDS) located in the SDN application layer. Conventional centralised practices violate the user data privacy aspect; thus, distributed or collaborative approaches can be adopted so that attacks can be detected and actions taken. This paper proposes a new SDN-based SG architecture, highlighting the existence of IDSs in the SDN application layer. We implemented a new smart meter (SM) collaborative intrusion detection system (SM-IDS) by adapting the split learning methodology. Finally, a comparison of federated learning and split learning neighbourhood area network (NAN) IDSs was made. Numerical results showed a five-class classification accuracy of over 80.3% and an F1-score of 78.9 for an SM-IDS adopting the split learning technique, while the split learning NAN-IDS exhibited an accuracy of over 81.1% and an F1-score of 79.9.
Haptic communications represent a vast area of research focused on integrating the sense of touch into digital sensory experiences. Achieving effective haptic communication requires meticulous design and implementation of all subsystems. As implied by the term, the two primary subsystems are haptic and communication. Haptics refers to replicating the touch sensation in various applications such as augmented reality, virtual reality and teleoperation, and communication involves optimising network structures to transmit and receive haptic information alongside other sensory data. In this survey paper, we discuss both haptic interfaces and network requirements simultaneously. For haptic interfaces, we comprehensively explore the mechanisms of touch perception, haptic sensing, and haptic feedback. We delve into haptic sensing by examining state-of-the-art sensors and approaches to capture data related to touch, such as pressure, force, and motion, and to translate these physical interactions into digital data that a haptic system can interpret and respond to. Subsequently, we discuss various methods of achieving haptic feedback, including different mechanical actuators and electrical stimulation. We also investigate the incorporation of artificial intelligence in this field, proposing new areas where it could enhance system performance. Additionally, we address open challenges and future research directions, covering critical issues related to privacy, data transmission, cybersickness, the performance and wearability of haptic interfaces, integrated systems, power supply and the evaluation of these devices. Through this interdisciplinary approach, which merges haptic feedback, haptic sensing, and communication, our paper aims to inspire further research and development, ultimately advancing technology and enhancing haptic experiences.
Current iterative multiple-input multiple-output (MIMO) detectors suffer from slow convergence when the wireless channel is ill-conditioned. The ill-conditioning is mainly caused by spatial correlation between channel columns corresponding to the same user equipment, known as intra-user interference. In addition, in the emerging MIMO systems using an extremely large aperture array (ELAA), spatial non-stationarity can make the channel even more ill-conditioned. In this paper, user-wise singular value decomposition (UW-SVD) is proposed to accelerate the convergence of iterative MIMO detectors. Its basic principle is to perform SVD on each user's sub-channel matrix to eliminate intra-user interference. Then, the MIMO signal model is effectively transformed into an equivalent signal (e-signal) model, comprising an e-channel matrix and an e-signal vector. Existing iterative algorithms can be used to recover the e-signal vector, which undergoes post-processing to obtain the signal vector. It is proven that the e-channel matrix is better conditioned than the original MIMO channel for spatially correlated (ELAA-)MIMO channels. This implies that UW-SVD can accelerate current iterative algorithms, which is confirmed by our simulation results. Specifically, it can speed up convergence by up to 10 times in both uncoded and coded systems.
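One plausible reading of the transform (a numpy sketch under assumed dimensions; the correlation used to mimic intra-user interference is invented) is to stack the per-user left singular bases into the e-channel and keep V_k S_k^{-1} for post-processing:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rx, n_users, n_ant = 64, 4, 4   # receive antennas, users, antennas per user
# Correlated per-user sub-channels (one user's antennas are spatially close),
# which makes the stacked channel H ill-conditioned via intra-user interference.
Hs = []
for _ in range(n_users):
    common = rng.normal(size=(n_rx, 1)) + 1j * rng.normal(size=(n_rx, 1))
    Hs.append(common + 0.3 * (rng.normal(size=(n_rx, n_ant))
                              + 1j * rng.normal(size=(n_rx, n_ant))))
H = np.hstack(Hs)

Us, posts = [], []
for Hk in Hs:
    Uk, sk, Vhk = np.linalg.svd(Hk, full_matrices=False)  # H_k = U_k S_k V_k^H
    Us.append(Uk)
    posts.append(Vhk.conj().T @ np.diag(1.0 / sk))  # x_k = V_k S_k^-1 z_k later
A = np.hstack(Us)   # e-channel: y = A z, with e-signal z_k = S_k V_k^H x_k

print("cond(H) =", np.linalg.cond(H))   # ill-conditioned original model
print("cond(A) =", np.linalg.cond(A))   # better-conditioned e-channel
```

Any iterative detector is then run on y = A z, and each user's symbols are recovered as posts[k] @ z_k, which is where the convergence speed-up would come from under this reading.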
This paper presents an analysis of the performance of an ultra-dense network (UDN) with and without cell cooperation from the perspective of network information theory. We propose a UDN performance metric called Total Average Geometry Throughput, which is independent of the user distribution, scheduler, etc. This performance metric is analyzed in detail for UDNs with and without cooperation. The numerical results from the analysis show that, under the studied system model, the total average geometry throughput reaches its maximum when the inter-cell distance is around 6-8 meters, both with and without cell cooperation. Cell cooperation can significantly reduce inter-cell interference but not remove it completely. With cell cooperation and an optimum number of cooperating cells, the maximum performance gain can be achieved. Furthermore, the results also imply that there is an optimum aggregate transmission power when considering the energy cost per bit.
Extremely large aperture array (ELAA) is a promising multiple-input multiple-output (MIMO) technique for next generation mobile networks. In this paper, we propose two novel approaches to accelerate the convergence of current iterative MIMO detectors in ELAA channels. Both approaches exploit the static components of the ELAA channel, namely the line-of-sight (LoS) paths and the deterministic non-LoS (NLoS) components that arise due to channel hardening. Specifically, these static channel components are utilized in two ways: as preconditioning matrices for general iterative algorithms, and as initialization for quasi-Newton (QN) methods. Simulation results show that the proposed approaches converge significantly faster than current iterative MIMO detectors, especially under strong LoS conditions with a high Rician K-factor. Furthermore, QN methods with the proposed initialization matrix consistently achieve the best convergence performance while maintaining low complexity.
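A sketch of the first idea (the preconditioning use), under simplifying assumptions: the known static component of a Rician channel is used to build an approximate inverse of the expected Gram matrix, which then preconditions Richardson-style iterations of an MMSE detector. The channel construction, K-factor and sizes are illustrative, and convergence of plain Richardson relies on the LoS part dominating:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rx, n_tx, k_factor, sigma2 = 256, 8, 10.0, 0.1
H_los = np.exp(1j * 2 * np.pi * rng.random((n_rx, n_tx)))    # known static part
H_nlos = (rng.normal(size=(n_rx, n_tx))
          + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)  # random part
H = (np.sqrt(k_factor / (k_factor + 1)) * H_los
     + np.sqrt(1 / (k_factor + 1)) * H_nlos)

x_true = (rng.choice([-1.0, 1.0], n_tx) + 1j * rng.choice([-1.0, 1.0], n_tx)) / np.sqrt(2)
noise = np.sqrt(sigma2 / 2) * (rng.normal(size=n_rx) + 1j * rng.normal(size=n_rx))
y = H @ x_true + noise

A = H.conj().T @ H + sigma2 * np.eye(n_tx)     # MMSE normal equations A x = b
b = H.conj().T @ y
# Preconditioner: inverse of the *expected* Gram matrix, built only from the
# static component and channel statistics (E[H_nlos^H H_nlos] = n_rx * I).
M = np.linalg.inv(k_factor / (k_factor + 1) * H_los.conj().T @ H_los
                  + (n_rx / (k_factor + 1) + sigma2) * np.eye(n_tx))

x = np.zeros(n_tx, complex)
for it in range(8):
    x = x + M @ (b - A @ x)                    # preconditioned Richardson step
    print(f"iter {it}: residual = {np.linalg.norm(b - A @ x):.3e}")
```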
The design of efficient wireless fronthaul connections for future heterogeneous networks incorporating emerging paradigms such as cloud radio access network (C-RAN) has become a challenging task that requires the most effective utilization of fronthaul network resources. In this paper, we propose to use distributed compression to reduce the fronthaul traffic in uplink Coordinated Multi-Point (CoMP) for C-RAN. Unlike the conventional approach where each coordinating point quantizes and forwards its own observation to the processing centre, these observations are compressed before forwarding. At the processing centre, the decompression of the observations and the decoding of the user message are conducted in a successive manner. The essence of this approach is the optimization of the distributed compression using an iterative algorithm to achieve maximal user rate with a given fronthaul rate. In other words, for a target user rate the generated fronthaul traffic is minimized. Moreover, joint decompression and decoding is studied and an iterative optimization algorithm is devised accordingly. Finally, the analysis is extended to multi-user case and our results reveal that, in both dense and ultra-dense urban deployment scenarios, the usage of distributed compression can efficiently reduce the required fronthaul rate and a further reduction is obtained with joint operation.
MIMO mobile systems, with a large number of antennas at the base-station side, enable the concurrent transmission of multiple, spatially separated information streams and, therefore, improved network throughput and connectivity in both uplink and downlink transmissions. Traditionally, such MIMO transmissions adopt linear base-station processing, which translates the MIMO channel into several single-antenna channels. While such approaches are relatively easy to implement, they can leave on the table a significant amount of unexploited MIMO capacity and connectivity capability. Recently proposed non-linear base-station processing methods claim this unexplored capacity and promise substantially increased network throughput and connectivity. Still, to the best of the authors' knowledge, non-linear base-station processing methods have not only not yet been adopted by actual systems, but have not even been evaluated in a standard-compliant framework involving all the necessary algorithmic modules required by a practical system. In this work, for the first time, we incorporate and evaluate non-linear base-station processing in a 3GPP standard environment. We outline the required research platform modifications and verify that significant throughput gains can be achieved, both in indoor and outdoor settings, even when the number of base-station antennas is much larger than the number of transmitted information streams. We then identify missing algorithmic components that need to be developed to make non-linear base-station processing practical, and discuss future research directions towards potentially transformative next-generation (i.e., 6G) mobile systems and base-stations that explore currently unexploited non-linear processing gains.
With the recent development of Device-to-Device (D2D) communication technologies, mobile devices will no longer be treated as pure "terminals"; they can become an integral part of the network in specific application scenarios. In this paper, we introduce a novel scheme that uses D2D communications to enable data relay services in partial Not-Spots, where a client without local network access may require data relay by other devices. Depending on the specific social application scenarios that can leverage the D2D technology, we consider tailored algorithms in order to achieve optimised data relay service performance on top of our proposed network-coordinated communication framework. The approach is to exploit the network's knowledge of its local user mobility patterns in order to identify the best helper devices to participate in data relay operations. This framework also comes with our proposed helper selection optimization algorithm based on the reactive predictability of individual users. According to our simulation analysis, based on both theoretical mobility models and real human mobility data traces, the proposed scheme is able to flexibly support different service requirements in specific social application scenarios.
Power consumption in Information and Communication Technology (ICT) accounts for 10% of the total energy consumed in industrial countries, and according to the latest measurements this share has been increasing rapidly in recent years. In the literature, a variety of schemes have been proposed to save energy in operational communication networks. In this paper, we propose a novel optimization algorithm for the network virtualization environment that puts the maximum number of physical links to sleep during off-peak hours, while still guaranteeing connectivity and off-peak bandwidth availability for the parallel virtual networks supported on top. Simulation results based on the GÉANT network topology show that our algorithm is able to put a notable number of physical links to sleep during off-peak hours while still satisfying the bandwidth demands of ongoing traffic sessions in the virtual networks.
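The greedy flavour of such link sleeping can be sketched with networkx (a toy topology and a deliberately crude per-demand feasibility check, not the paper's GÉANT instance or its optimization algorithm):

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 10), (1, 2, 10), (2, 3, 10), (3, 0, 10),
                           (0, 2, 10), (1, 3, 10)], weight="capacity")
demands = [(0, 2, 4), (1, 3, 4)]          # (src, dst, off-peak bandwidth)

def feasible(graph, demands):
    # Crude check: each demand routed alone on a shortest path within capacity
    # (ignores capacity sharing between demands; a real check would not).
    for s, t, bw in demands:
        if not nx.has_path(graph, s, t):
            return False
        path = nx.shortest_path(graph, s, t)
        if any(graph[u][v]["capacity"] < bw for u, v in zip(path, path[1:])):
            return False
    return True

asleep = []
for u, v, data in sorted(G.edges(data=True), key=lambda e: e[2]["capacity"]):
    G.remove_edge(u, v)
    if nx.is_connected(G) and feasible(G, demands):
        asleep.append((u, v))             # link can sleep during off-peak hours
    else:
        G.add_edge(u, v, **data)          # restore the link, it is still needed
print("links put to sleep:", asleep)
```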
In previous works the cognitive interference channel with unidirectional destination cooperation has been studied. In this model the cognitive receiver acts as a relay of the primary user's message and its operation is assumed to be strictly causal. In this paper we study the same channel model with a causal rather than a strictly causal relay, i.e. the relay's transmit symbol depends not only on its past but also on its current received symbol. We propose an outer bound for the discrete memoryless channel which is later used to compute an outer bound for the Gaussian channel. We also propose an achievable scheme based on instantaneous amplify-and-forward relaying that meets the outer bound in the very strong interference regime.
Underlay cognitive beamforming allows secondary transmitters to suppress interference to the primary users whilst maintaining their own quality of service. This paper investigates the joint power and interference trade-off inherent in the underlay cognitive beamforming scheme. It is shown that the problem of interest leads to a non-convex optimization problem, which can be resolved by employing second-order cone programming. It is theoretically proved that enforcing zero interference to the primary user does not always lead to system optimality; moreover, we exhibit two conditions under which the interference should be treated as noise in order to maximize the sum-rate of the considered beamforming system.
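The SOCP formulation can be sketched with cvxpy under stated assumptions (random placeholder channels; power minimisation under an SNR target and an interference cap, which is one common casting of the underlay trade-off rather than the paper's exact problem):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 8                                               # secondary transmit antennas
h = rng.normal(size=n) + 1j * rng.normal(size=n)    # secondary Tx -> secondary Rx
g = rng.normal(size=n) + 1j * rng.normal(size=n)    # secondary Tx -> primary Rx
gamma, sigma, eps = 10.0, 1.0, 0.05                 # SNR target, noise, leakage cap

w = cp.Variable(n, complex=True)
constraints = [
    # Phase-rotation trick makes the SNR target a convex (linear) constraint.
    cp.real(h.conj() @ w) >= np.sqrt(gamma) * sigma,
    cp.imag(h.conj() @ w) == 0,
    # Second-order-cone cap on the interference leaked to the primary user;
    # eps > 0 allows the non-zero-interference optimum discussed in the paper.
    cp.abs(g.conj() @ w) <= np.sqrt(eps),
]
prob = cp.Problem(cp.Minimize(cp.sum_squares(w)), constraints)
prob.solve()
print("transmit power:", prob.value,
      "leaked interference:", abs(g.conj() @ w.value) ** 2)
```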
New modified 2 × 2 and 3 × 3 series-fed patch antenna arrays with beam-steering capability are designed and fabricated for 28-GHz millimeter-wave applications. In the designs, the patches are connected to each other continuously and in a symmetric 2-D format using high-impedance microstrip lines. In the first design, a 3-D beam-scanning range of ±25° and good radiation and impedance characteristics were attained by using only one phase shifter. In the second, a new mechanism is introduced to reduce the number of feed ports and the related phase shifters (from the default number 2N to the reduced number N + 1 in the serial feed, here N = 3), and hence the cost, complexity, and size of the design. Here, good scanning performance over a range of ±20°, an acceptable sidelobe level, and a gain of 15.6 dB are obtained. These features allow the use of additional integrated circuits to improve the gain and performance. A comparison to the conventional array without modification is also made. The measured and simulated results and discussions are presented.
Interference forwarding has been shown to be beneficial in the interference channel with a relay as it enlarges the strong interference region, allowing the decoding of the interference at the receivers for larger ranges of the channel gains. In this work we demonstrate the benefit of adding a relay to the cognitive interference channel. We pay special attention to the effect of interference forwarding in this configuration. Two setups are presented. In the first, the interference forwarded by the relay is the primary user's signal, and in the second, this is the cognitive user's signal. We characterise the capacity regions of these two models in the case of strong interference. We show that as opposed to the first setup, in the second setup the capacity region is enlarged, compared to the capacity region of the cognitive interference channel, when the relay does not help the intended receiver.
Energy efficiency has become an important aspect of wireless communication, both economically and environmentally. This letter investigates the energy efficiency of downlink AWGN channels by employing multiple decoding policies. The overall energy efficiency of the system is based on the bits-per-joule metric, where energy efficiency contours are used to locate the optimal operating points based on the system requirements. Our novel approach uses a linear power model to define the total power consumed at the base station, encompassing the circuit and processing power, and amplifier efficiency, and ensures that the best energy efficiency value can be achieved whilst satisfying other system targets such as QoS and rate-fairness.
A device-to-device (D2D) ultra reliable low latency communications (URLLC) network is investigated in this paper. Specifically, a D2D transmitter opportunistically accesses the radio resource provided by a cellular network and directly transmits short packets to its destination. A novel performance metric is adopted for finite block-length codes. We quantify the maximum achievable rate for the D2D network, subject to a probabilistic interference power constraint based on imperfect channel state information (CSI). First, we perform a convexity analysis which reveals that the finite block-length rate for the D2D pair in short-packet transmission is not always concave. To address this issue, we propose two effective resource allocation schemes using a successive convex approximation (SCA)-based iterative algorithm. To gain more insights, we exploit the monotonicity of the average finite block-length rate. By capitalizing on this property, an optimal power control policy is proposed, followed by closed-form expressions and approximations for the optimal average power and the maximum achievable average rate in the finite block-length regime. Numerical results are provided to confirm the effectiveness of the proposed resource allocation schemes and validate the accuracy of the derived theoretical results.
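The finite block-length rate underlying this kind of analysis is commonly evaluated with the normal approximation R ≈ C − sqrt(V/n)·Q⁻¹(ε); the sketch below computes it for a complex AWGN channel with illustrative parameters:

```python
import numpy as np
from scipy.stats import norm

def fbl_rate(snr_linear, blocklength, error_prob):
    c = np.log2(1 + snr_linear)                                 # Shannon capacity
    v = (snr_linear * (snr_linear + 2)
         / (2 * (snr_linear + 1) ** 2)) * np.log2(np.e) ** 2    # channel dispersion
    return c - np.sqrt(v / blocklength) * norm.isf(error_prob)  # norm.isf = Q^-1

snr = 10 ** (10 / 10)                                           # 10 dB, illustrative
for n in (100, 200, 500, 1000):
    print(f"n={n}: R = {fbl_rate(snr, n, 1e-5):.3f} bit/channel use")
```

The gap to capacity shrinks as the block length n grows, which is exactly why short-packet URLLC transmission needs the finite block-length metric rather than the Shannon limit.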
A compact, dual-band wearable antenna for off-body communication operating in both the 2.45 and 5.8 GHz industrial, scientific, and medical (ISM) bands is presented. The antenna is a printed monopole on an FR4 substrate with a modified loaded ground plane to make the antenna profile compact. The antenna's radiation characteristics have been optimized with the antenna placed close to the human forearm. The fabricated antenna, operating on the forearm, has been measured to verify the simulation results.
The design of iterative linear precoding has recently been challenged by extremely large aperture array (ELAA) systems, where conventional preconditioning techniques can hardly improve the channel condition. In this paper, it is proposed to regularize the extreme singular values to improve the channel condition by deducting a rank-one matrix from the Wishart matrix of the channel. Our analysis proves the feasibility of reducing the largest singular value, or of increasing multiple small singular values, with a rank-one matrix when the singular value decomposition of the channel is available. Given this feasibility, we propose a low-complexity approach in which an approximation of the regularization matrix is obtained from the statistical properties of the channel. It is demonstrated through simulation results that the proposed low-complexity approach significantly outperforms current preconditioning techniques in terms of the reduced number of iterations, both in ELAA systems and in symmetric multi-antenna (i.e., MIMO) systems with i.i.d. Rayleigh fading channels.
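The core deduction step can be sketched as follows (assumed sizes and an artificially ill-conditioned channel; deducting (λ_max − λ_2)·uu^H is one simple instance of the idea, shown for the SVD-available case rather than the paper's low-complexity statistical approximation):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(16, 64)) + 1j * rng.normal(size=(16, 64))
H[0] *= 4                                  # exaggerate one direction -> ill-conditioned
W = H @ H.conj().T                         # Wishart matrix of the channel

eigval, eigvec = np.linalg.eigh(W)         # ascending eigenvalues
u, lam_max, lam_2nd = eigvec[:, -1], eigval[-1], eigval[-2]
# Rank-one deduction pulls the largest eigenvalue down to the second largest,
# leaving the rest of the spectrum (and the smallest eigenvalue) untouched.
W_reg = W - (lam_max - lam_2nd) * np.outer(u, u.conj())

print("condition number before:", lam_max / eigval[0])
print("condition number after :", np.linalg.cond(W_reg))
```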
In time-division-duplexing (TDD) massive multiple-input multiple-output (MIMO) systems, channel reciprocity is exploited to overcome the overwhelming pilot training and feedback overhead. However, in practical scenarios, imperfections in channel reciprocity, mainly caused by radio-frequency mismatches among the antennas at the base station side, can significantly degrade the system performance and might become a performance-limiting factor. In order to compensate for these imperfections, we present and investigate two new calibration schemes for TDD-based massive multi-user MIMO systems, namely relative calibration and inverse calibration. In particular, the design of the proposed inverse calibration takes into account the compound effect of channel reciprocity error and channel estimation error. We further derive closed-form expressions for the ergodic sum rate, assuming maximum ratio transmission with the compound effect of both errors. We demonstrate that the inverse calibration scheme outperforms the traditional relative calibration scheme. The analytical results are also verified by simulated illustrations.
Cooperative communication is an effective approach for increasing the spectral efficiency and/or the coverage of cellular networks, as well as reducing the cost of network deployment. However, it remains to be seen how energy efficient it is. In this paper, we assess the energy efficiency of the conventional Amplify-and-Forward (AF) scheme in an in-building relaying scenario. This scenario simplifies the mutual information formulation of the AF system and allows us to express its channel capacity with a simple and accurate closed-form approximation. In addition, a framework for the energy efficiency analysis of the AF system is introduced, which includes a power consumption model and an energy efficiency metric, i.e., the bit-per-joule capacity. This framework, along with our closed-form approximation, is utilized for assessing both the channel and bit-per-joule capacities of the AF system in an in-building scenario. Our results indicate that transmitting with maximum power is not energy efficient, and that the AF system is more energy efficient than point-to-point communication at low transmit powers and signal-to-noise ratios.
Soaring capacity and coverage demands dictate that future cellular networks need to migrate soon toward ultra-dense networks. However, network densification comes with a host of challenges that include compromised energy efficiency, complex interference management, cumbersome mobility management, burdensome signaling overheads, and higher backhaul costs. Interestingly, most of the problems that beleaguer network densification stem from one common feature of legacy networks, i.e., the tight coupling between the control and data planes, regardless of their degree of heterogeneity and cell density. Consequently, in the wake of 5G, the control and data planes separation architecture (SARC) has recently been conceived as a promising paradigm with the potential to address most of the aforementioned challenges. In this survey, we review the various proposals that have been presented in the literature so far to enable SARC. More specifically, we analyze how and to what degree the various SARC proposals address the four main challenges in network densification, namely: energy efficiency, system-level capacity maximization, interference management, and mobility management. We then focus on two salient features of future cellular networks that have not yet been adopted in legacy networks at a wide scale and thus remain a hallmark of 5G, i.e., coordinated multipoint (CoMP) and device-to-device (D2D) communications. After providing the necessary background on CoMP and D2D, we analyze how SARC can act as a major enabler for CoMP and D2D in the context of 5G. This article thus serves as both a tutorial and an up-to-date survey on SARC, CoMP, and D2D. Most importantly, this survey provides an extensive outlook on the challenges and opportunities that lie at the crossroads of these three mutually entangled emerging technologies.
Until recently, link adaptation and resource allocation for communication systems relied extensively on spectral efficiency as an optimization criterion. With the emergence of energy efficiency (EE) as a key system design criterion, resource allocation based on EE is becoming of great interest. In this paper, we propose an optimal EE-based resource allocation method for the scalar broadcast channel (BC-S). We introduce our EE framework, which includes an EE metric as well as a realistic power consumption model for the base station, and utilize this framework for formulating our EE-based optimization problem subject to power as well as fairness constraints. We then prove the convexity of this problem and compare our EE-based resource allocation method against two other methods, i.e., one based on sum-rate and one based on fairness optimization. Results indicate that our method provides a large EE improvement in comparison with the two other methods by significantly reducing the total consumed power. Moreover, they show that near-optimal EE and average fairness can be simultaneously achieved over the BC-S channel.
The introduction of the Bitcoin cryptocurrency has inspired businesses and researchers to investigate the technical aspects of blockchain and DLT systems. However, today's blockchain technologies still have distinct limitations in scalability and flexibility in terms of large-scale and dynamic reconfigurability. Sharding appears to be a promising solution to scale out the blockchain system horizontally by dividing the entire network into multiple shards or clusters. However, the flexibility and reconfigurability of these clusters need further research and investigation. In this paper, we propose two efficient mechanisms to enable flexible dynamic re-clustering of the blockchain network, including blockchain cluster merging and splitting operations. Such mechanisms offer a solution for specific application scenarios such as microgrids and other edge-based applications where clusters of autonomous systems potentially require structural reconfiguration. The proposed mechanisms offer three-stage procedures to merge and split multiple clusters. Based on our simulation experiments, we show that the proposed merging and splitting operations based on the proof-of-work (PoW) consensus algorithm can be optimized to reduce the merging time considerably (by a factor of about 1/22000 based on 100 blocks), which effectively reduces the overall merging and splitting completion time, interruption time and required computation power.
Integrating Low Earth Orbit (LEO) satellites with terrestrial network infrastructures to support ubiquitous Internet service coverage has recently received increasing research momentum. One fundamental challenge is the frequent topology change caused by the constellation behaviour of LEO satellites. In the context of Software-Defined Networking (SDN), the controller function that originally controls the conventional data plane fulfilled by terrestrial SDN switches will need to expand its responsibility to cover its counterparts in space, namely the LEO satellites used for data forwarding. As such, seamless integration of the fixed control plane on the ground and the mobile data plane fulfilled by constellations of LEO satellites becomes a distinct challenge. For the very first time in the literature, we propose in this paper the Virtual Data-Plane Addressing (VDPA) scheme, which leverages IP addresses to represent virtual switches at fixed space locations that are periodically instantiated by the LEO satellites traversing them in a predictable manner. With such a scheme, the changing data-plane network topology incurred by LEO satellite constellations can be made completely agnostic to the control plane on the ground, thus enabling a native approach to supporting seamless communication between the two planes. Our simulation results demonstrate the superiority of the proposed VDPA-based flow rule manipulation mechanism in terms of control plane performance.
Hybrid systems, where more than one transmission scheme is used within the same cluster, can improve spectral efficiency for the system as a whole and, more importantly, for cell-edge users. In this paper, we propose a frequency reuse method that groups the users into two classes: critical and non-critical users. Each user group is served with a transmission scheme, where the most vulnerable users are served by transmission schemes that avoid, make use of, or orthogonalise the interference. These schemes include cooperative maximal ratio transmission and the non-cooperative orthogonal and non-orthogonal schemes. Radio resource allocation is studied and a solution is given for maximal ratio transmission and interference alignment. Simulation results are given showing the performance of each scheme when all users are considered critical and a single scheme is used, as well as the performance of our proposed frequency reuse scheme when different percentages of users are considered critical.
Current trends in developing cost-effective and energy-efficient wireless systems operating at millimeter-wave (mm-wave) frequencies with large-scale phased array antennas, to fulfil the high data-rate demands of 5G and beyond, have driven the need to explore hybrid beamforming technologies. This paper presents an experimental study of a wide-bandwidth millimeter-wave fully-connected hybrid beamformer system that operates at 26 GHz with 128 antenna elements arranged in a 16 x 8 planar array, 6-bit phase shifters, 6-bit attenuators and two separate radio frequency (RF) channels, each capable of fully independent beamforming. The linearity, phase, and attenuation performance of the beamformer system between 25.5 GHz and 26.5 GHz is evaluated, as well as the beamforming performance of the 128-element planar phased array at 26 GHz, where the measured radiation patterns with and without amplitude tapering are compared.
This paper considers cooperative localization in cellular networks. In this scenario, several located mobile terminals (MTs) are employed as reference nodes to find the location of an un-located MT. The located MTs send training sequences in the uplink, and the un-located MT then performs distance estimation using received-signal-strength techniques. The localization accuracy of the un-located MT is characterized in terms of the squared position error bound (SPEB) [1]. By taking into account the imperfect a priori location knowledge of the located MTs, the SPEB is derived in closed form. The closed form indicates that the effect of the imperfect location knowledge on the SPEB is equivalent to an increase in the variance of the distance estimation. Moreover, based on the obtained closed form, an MT selection scheme is proposed to decrease the number of located MTs sending training sequences, and thus reduce the training overhead for localization. The simulation results show that the proposed scheme can reduce the training overhead at the cost of accuracy, and that, with the same training overhead, the accuracy of the proposed scheme is better than that of random selection.
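The RSS ranging step can be sketched with a log-distance path-loss model (parameters are illustrative assumptions; the shadowing term is what inflates the distance-estimation variance entering the SPEB):

```python
import numpy as np

rng = np.random.default_rng(0)
p0_dbm, d0_m, path_loss_exp, shadowing_db = -40.0, 1.0, 3.0, 4.0

def rss_sample(distance_m):
    # Log-distance path loss plus log-normal shadowing.
    return (p0_dbm - 10 * path_loss_exp * np.log10(distance_m / d0_m)
            + rng.normal(0, shadowing_db))

def estimate_distance(rss_dbm):
    # Invert the deterministic part of the path-loss model.
    return d0_m * 10 ** ((p0_dbm - rss_dbm) / (10 * path_loss_exp))

true_d = 50.0
estimates = [estimate_distance(rss_sample(true_d)) for _ in range(5)]
print("true 50 m, estimates:", np.round(estimates, 1))
```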
The ability of the reflectarray antenna (RA) to perform orbital angular momentum (OAM) beam-steering with low divergence angles at the fifth generation (5G) millimetre-wave (mmWave) bands is demonstrated. To provide steered OAM beams, it is necessary to regulate the scatterers' geometries smoothly throughout the focal area to follow the required twisted distribution. The traditional numerical method for phase compensation is modified to enable the 3D scanning property of OAM beams, so that feeder blockage can be avoided and high-gain steered OAM beams produced. Likewise, the inherent beam divergence of OAM beams can be reduced by examining the most satisfactory phase distribution of the scatterers through fitting the focal length. The simulated radiation pattern is validated by the measured radiation pattern of the fabricated RA in the frequency range between 28.5 and 31.5 GHz.
This paper examines the uplink transmission of a single-antenna handheld user to a cluster of satellites, with a focus on utilizing the inter-satellite links to enable cooperative signal detection. Two cases are studied: one with full CSI and the other with partial CSI between satellites. The two cases are compared in terms of capacity, overhead, and bit error rate. Additionally, the impact of channel estimation error is analyzed in both designs, and robust detection techniques are proposed to handle channel uncertainty up to a certain level. The performance of each case is demonstrated, and a comparison is made with conventional satellite communication schemes where only one satellite can connect to a user. The results of our study reveal that the proposed constellation, with a total of 3168 satellites in orbit, can enable a capacity of 800 Mbit/s through the cooperation of 12 satellites with an occupied bandwidth of 500 MHz. In contrast, conventional satellite communication approaches with the same system parameters yield a significantly lower capacity of less than 150 Mbit/s for the nearest satellite.
In this paper, an ultra-wideband Terahertz (THz) channel measurement campaign in the 500-750 GHz frequency band is presented. Power levels received from signal transmission by reflection off 14 different materials were measured in an indoor environment under Non-Line-of-Sight (NLoS) conditions between the Transmitter (Tx) and Receiver (Rx), and compared to power levels received in Line-of-Sight (LoS) transmission. Frequency up-converters were used to transmit the signal using 26 dBi horn antennas at the Tx and Rx sides, and the signal was measured using a Vector Network Analyzer (VNA). From the data collected, the signal losses due to absorption and diffuse scattering from the rough surface of each Material Under Test (MUT) are calculated. The power delay profile (PDP) is presented, where multipath clustering due to diffuse scattering is observed for materials with high frequency selectivity, while less scattering and mostly specular reflection is shown for materials with low frequency selectivity.
Physical layer security (PLS) technologies have attracted much attention in recent years for their potential to provide information-theoretically secure communications. Artificial Noise (AN)-aided transmission is considered one of the most practicable PLS technologies, as it is believed to realize secure transmission independent of the eavesdropper's channel status. In this paper, we reveal that AN transmission does in fact depend on the eavesdropper's channel condition, by introducing our proposed attack method based on a supervised-learning algorithm which uses the modulation scheme, available from known packet preamble and/or header information, as the supervisory signal of the training data. Numerical simulation results, with comparison to conventional clustering methods, show that our proposed method improves the success probability of the attack from 4.8% to at most 95.8% for QPSK modulation. This implies that transmissions to cell-edge receivers with low-order modulation can be cracked if the eavesdropper's channel is good enough, e.g., by employing more antennas than the transmitter. This work brings new insights into the effectiveness of AN schemes and provides useful guidance for the design of robust PLS techniques for practical wireless systems.
Numerous low-complexity iterative algorithms have been proposed to deliver the performance of linear multiple-input multiple-output (MIMO) detectors while bypassing the channel matrix inverse. These algorithms exhibit fast convergence in well-conditioned MIMO channels. However, in the emerging MIMO paradigm utilizing extremely large aperture arrays (ELAA), the wireless channel may become ill-conditioned because of spatial non-stationarity, which results in a considerably slower convergence rate for these algorithms. In this paper, we propose a novel ELAA-MIMO detection scheme that leverages user-wise singular value decomposition (UW-SVD) to accelerate the convergence of these iterative algorithms. By applying UW-SVD, the MIMO signal model can be converted into an equivalent form featuring a better-conditioned transfer function. Existing iterative algorithms can then be utilized to recover the transmitted signal from the converted signal model, with accelerated convergence towards zero-forcing performance. Our simulation results indicate that the proposed UW-SVD scheme can significantly accelerate the convergence of iterative algorithms in spatially non-stationary ELAA channels. Moreover, the computational complexity of the UW-SVD is minor compared with the inherent complexity of the iterative algorithms.
Unlike conventional cellular systems, the fifth generation (5G) and beyond includes intrinsic support for vertical industries with diverse service requirements. Industrial process automation, with autonomous fault detection and prediction, optimised operations, and proactive control, can be considered one of the key verticals of 5G and beyond. Such applications equip industrial plants with a reasoning "sixth sense" for optimised operations and fault avoidance. In this direction, we introduce an inter-disciplinary approach integrating wireless sensor networks with machine-learning-enabled industrial plants as a step towards developing this sixth-sense technology, i.e., the reasoning ability. We develop a modular system that can be adapted to vertical-specific elements. Without loss of generality, exemplary use cases are developed and presented, including a fault detection/prediction scheme in a wireless communication network with sensors and actuators to enable the sixth-sense technology with guaranteed service load requirements. The proposed schemes and modelling approach are implemented in a real chemical plant for testing purposes, and a high fault detection and prediction accuracy is achieved, coupled with an optimised sensor density analysis.
As future communication systems move toward the terahertz (THz) spectrum, with much higher speeds and greater densification, enhanced security utilizing novel concepts will be inevitable, particularly in automated identification systems. This paper proposes a novel spatial-domain technique based on vortex beams generated by metasurface structures for efficient chipless identification (ID) sensors, where the information is saved in the vortex modes. This approach offers high security strength in automated systems compared to existing solutions, including time-based and frequency-based sensors. Furthermore, combining these vortex-based sensors with conventional ones substantially increases the stored information capacity. To verify the idea, we present an uneven dielectric metasurface (UDM) that generates distinct vortex modes to encode different information. The proposed sensor is designed using an equivalent transmission line (TL) model, and the extracted results exhibit good agreement between the simulation and theoretical approaches. Furthermore, it is demonstrated that the transmitted information capacity can be notably enhanced by using a sensor tag with simultaneous multiple modes, and also by using more than one tag simultaneously.
Energy efficiency (EE) is growing in importance as a system design criterion for power-unlimited systems such as cellular systems. Equally, resource allocation is a well-known method for improving the performance of the latter. In this paper, we propose two novel coordinated resource allocation strategies for jointly optimizing the resources of three sectors/cells in an energy-efficient manner in the downlink of multi-cell/sector systems. Given that this optimization problem is non-convex, it can only be solved optimally using a high-complexity exhaustive search. Here, we propose two practical approaches for allocating resources in a low-complexity manner. We then compare our novel approaches against existing non-coordinated and coordinated ones in order to highlight their benefit. Our results indicate that our first approach performs best in terms of EE but with a low level of fairness in the user rate allocation, whereas our second approach provides a good trade-off between EE and fairness. Overall, base station selection, i.e. allowing only one sector to transmit at a time, is a very energy-efficient approach when the sleeping power is considered in the base station power model.
The low earth orbit (LEO) satellite network is undergoing rapid development, driven by the maturing of satellite communications and rocket launch technologies and the demand for a globally covering network. However, current satellite communication networks are constrained by limited transmit signal power, resulting in the use of large, energy-consuming ground terminals to provide additional gain. This paper proposes a novel technology called distributed beamforming to address these challenges and support direct communications from LEO satellites to smartphones. The proposed distributed beamforming technique is based on the superposition of electromagnetic (EM) waves and aims to enhance the received signal strength. Furthermore, we utilize EM wave superposition to increase the link budget and characterise the coverage pattern formed by the distributed antenna array, which is affected by the array structure and the transmitter parameters. In addition, the impact of Doppler frequency shift and time misalignment on the performance of distributed beamforming is investigated. Numerical results show that the enhancement of the received power depends on the angle formed by the radiated beams and can be up to the square of the number of beams; namely, a maximum enhancement of 6 dB could be obtained using two satellites and a maximum increase of 12 dB through four satellites, which provides a clear guideline for the design of distributed beamforming for future satellite communications.
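The square-law figure quoted above follows directly from coherent field superposition: N perfectly phase-aligned unit-amplitude waves sum to amplitude N, hence power N², i.e. a gain of 20·log10(N) dB, which is about 6 dB for two satellites and 12 dB for four. A two-line check of that idealised (perfect-alignment) assumption:

    import numpy as np

    for n in (1, 2, 4):
        field = np.exp(1j * np.zeros(n)).sum()      # n perfectly phase-aligned unit fields
        print(n, "sources:", 20 * np.log10(abs(field)), "dB power gain")  # 0, ~6.02, ~12.04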
It is foreseen that the next generation of cellular networks will integrate relaying, or multihop, schemes. In a multihop cellular architecture, users are not only able to communicate directly with the base station (BS) but can also use relay stations to relay their data to the BS. In such an architecture, a relayed user may hand over to another relay station during its communication: this process is called inter-relay handoff. The main objective of this paper is to study how frequently inter-relay handoff occurs and its impact on relaying system performance. To this end, different algorithms for deciding when a user should perform an inter-relay handover are proposed and tested through a dynamic system-level simulator. We compare the capacity gain of the different algorithms with conventional cellular networks using the UMTS FDD mode. The results show that with an appropriate inter-relay handoff scheme, an uplink capacity gain of 35% is readily achievable.
Their inherent broadcasting capabilities over very large geographical areas make satellite systems one of the most effective vehicles for multicast service delivery. Recent advances in spotbeam antennas and high-power platforms further accentuate the suitability of satellite systems as multicasting tools. The focus of this article is reliable multicast service delivery via geostationary satellite systems. Starburst MFTP is a feedback-based multicast transport protocol that is distinct from other such protocols in that it defers the retransmission of lost data until the end of the transmission of the complete data product. In contrast to other multicast transport protocols, the MFTP retransmission strategy does not interrupt fresh data transmission with retransmissions of older segments. Thanks to this feature, receivers enjoying favourable channel conditions do not suffer unnecessarily extended transfer delays because of receivers that experience bad channel conditions. Existing research studies on MFTP's performance over satellite systems assume fixed-capacity satellite uplink channels dedicated to individual clients on the return link. Such fixed-assignment uplink access mechanisms are considered too wasteful of uplink resources for the sporadic and thin feedback traffic generated by MFTP clients. Indeed, such mechanisms may prematurely limit the scalability of MFTP as the multicast client population grows. In contrast, the reference satellite system considered in this article employs demand-assignment multiple access (DAMA) with contention-based request signalling on the uplink. DAMA MAC (Medium Access Control) protocols in satellite systems are well known in the literature for their improved resource utilisation and scalability. Moreover, DAMA forms the basis of the uplink access mechanisms in prominent satellite networks such as Inmarsat's BGAN (Broadband Global Area Network), and return link specifications such as ETSI DVB-RCS. However, in comparison with fixed-assignment uplink access mechanisms, DAMA protocols may introduce unpredictable delays for MFTP feedback messages on the return link. Collisions among capacity requests on the contention channel, temporary lack of capacity on the reservation channel, and random transmission errors on the uplink are the potential causes of such delays. This article presents the results of a system-level simulation analysis of MFTP over a DAMA GEO satellite system with contention-based request channels. Inmarsat's BGAN system was selected as the reference architecture for the analyses. The simulator implements the full interaction between the MFTP server and MFTP clients overlaid on top of the Inmarsat BGAN uplink access mechanism. The analyses aim to evaluate and optimise MFTP performance in the Inmarsat BGAN system in terms of transfer delay and system throughput as a function of available capacity, client population size, data product size, channel error characteristics, and MFTP protocol settings.
In recent years, 5G resilient networks have gained significant attention in the wireless industry. The prime concern of commercial networks is to maximize network capacity to increase revenue. However, in disaster situations, when cell sites are down due to outages, coverage rather than capacity becomes predominant. In this paper, we propose a game-theory-based optimal resource allocation scheme aiming to maximize the sum rate and coverage probability for uplink transmissions in disaster situations. The proposed hierarchical game-theoretic framework optimizes the uplink performance in a multitier heterogeneous network with pico base stations and femto access points overlaid under a macro base station. The test simulations are based on a real-time data set obtained over a predefined amount of time. The data statistics are then manipulated to create practical disaster situations. The solution of the noncooperative game is obtained using a pure-strategy Nash equilibrium. We perform simulations with different failure rates, and the results show that the proposed scheme improves the sum rate and outage probability by a significant margin both with and without a disaster scenario.
A key requirement for ease of migration from legacy to ambient networks is the elimination of dependencies between functionalities. Currently, in designs for QoS and mobility in IP networks, there is an apparent strong coupling between the two functions. One way to reduce this coupling is to remove the need to hold QoS state within the network. This can be done if applications are able to manage the QoS parameters themselves. The basic idea is that the application is responsible for keeping the network in a congestion-free state in order to minimize the loss and delay it experiences, much as TCP does today for best-effort traffic. On-line estimation of end-to-end packet loss can be used to monitor wireless link performance and to help adaptive applications make better use of network resources. Existing work has focused on measuring and modeling packet loss in the Internet, but most of these techniques do not address end-to-end path performance. This paper proposes an on-line striped packet-pair probing approach to estimate the packet loss rate that applications may suffer. The approach uses a second-order Markov chain to model the loss rate and loss burstiness. To reduce the computational complexity, we employ maximum entropy to estimate the parameters. The paper validates the model against existing loss traces.
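As a rough illustration of the modelling step (not the paper's maximum-entropy estimator), the sketch below fits a second-order Markov chain to a synthetic bursty loss trace, recovering both the overall loss rate and the burstiness as conditional loss probabilities. The hidden Gilbert-style generator and all probabilities are assumed purely for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    # synthesize a bursty loss trace from a simple hidden good/bad (Gilbert-style) process
    good, trace = True, []
    for _ in range(100_000):
        if good:
            trace.append(rng.random() < 0.01)     # 1% loss in the good state
            good = rng.random() > 0.02            # leave the good state w.p. 0.02
        else:
            trace.append(rng.random() < 0.5)      # 50% loss in the bad state
            good = rng.random() < 0.3             # return to the good state w.p. 0.3
    trace = np.array(trace, dtype=int)

    # fit a second-order Markov chain: P(loss | previous two outcomes)
    counts = np.zeros((2, 2, 2))
    for a, b, c in zip(trace, trace[1:], trace[2:]):
        counts[a, b, c] += 1
    p_loss = counts[..., 1] / counts.sum(-1)

    print("overall loss rate:", trace.mean())
    print("P(loss | prev two outcomes):")
    print(np.round(p_loss, 3))                    # burstiness: P(loss|1,1) >> P(loss|0,0)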
The Distributed Mobility Management (DMM) protocol was proposed to address the shortcomings of centralized mobility management protocols. In DMM, unlike centralized protocols, flows can be routed optimally in the network through dynamic IP address allocation, which can yield lower signaling and packet delivery costs. In this paper, we introduce an SDN-based mobility management service with multiple controllers called Hierarchical Software Defined Distributed Mobility Management (HSD-DMM). In contrast to standard DMM, which is proposed for flat architectures, HSD-DMM uses a dynamic anchor-point selection method for each flow in a hierarchical mobile network architecture. The main criterion in selecting an anchor point is packet delivery cost reduction, based on the collaboration of multiple controllers responsible for different tiers of the hierarchy. Numerical analysis results reveal that HSD-DMM can decrease signaling and packet delivery costs compared to an SDN-based standard DMM solution.
Small satellites in Low Earth Orbit (LEO) attract much attention from both industry and academia. The latest production and launch technologies constantly drive the development of LEO constellations. However, wideband signals, beyond simple text messages, cannot be transmitted directly from an LEO satellite to a standard mobile cellular phone due to the insufficient link budget. Current LEO constellation networks have to use an extra ground device to receive the signal from the satellite first and then forward it to the User Equipment (UE). To achieve direct network communications between LEO satellites and UEs, in this paper we propose a novel distributed beamforming technology, based on the superposition of electromagnetic (EM) waves radiated from multiple satellites, that can significantly enhance the link budget. EM full-wave simulation and Monte Carlo simulation results are provided to verify the effectiveness of the proposed method. The simulation results show a nearly 6 dB enhancement using two radiation sources and an almost 12 dB enhancement using four sources. The received power enhancement can be double the diversity gain of Multiple-Input Single-Output (MISO). Furthermore, other practical application challenges, such as synchronization and the Doppler effect, are also discussed.
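To see how phase misalignment (e.g., residual synchronization or Doppler error) erodes the 6 dB / 12 dB figures quoted above, one can Monte Carlo the superposed field with random per-source phase errors. The Gaussian phase-error model and the standard deviations below are illustrative assumptions, not the paper's simulation setup.

    import numpy as np

    rng = np.random.default_rng(8)

    def mean_gain_db(n_src, phase_std, trials=50_000):
        """Average power gain of n_src superposed unit fields with Gaussian phase errors."""
        ph = rng.normal(0.0, phase_std, size=(trials, n_src))
        field = np.exp(1j * ph).sum(axis=1)          # coherent E-field superposition
        return 10 * np.log10(np.mean(np.abs(field) ** 2))

    for std in (0.0, 0.3, 1.0):                      # phase-error std in radians
        print("std=%.1f rad: 2 sources %.2f dB, 4 sources %.2f dB"
              % (std, mean_gain_db(2, std), mean_gain_db(4, std)))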
This paper addresses the problem of opportunistic spectrum access in support of mission-critical ultra-reliable and low-latency communications (URLLC). Given the need to support short-packet transmissions in URLLC scenarios, a new capacity metric in the finite blocklength regime is introduced, as traditional performance metrics such as ergodic capacity and outage capacity are no longer applicable. We focus on an opportunistic spectrum access system in which the secondary user (SU) opportunistically occupies the frequency resources of the primary user (PU) and transmits reliable short packets to its destination. An achievable-rate maximization problem is then formulated for the SU in support of URLLC services, subject to a probabilistic received-power constraint at the PU receiver and imperfect channel knowledge of the SU-PU link. To tackle this problem, an optimal power allocation policy is proposed. Closed-form expressions are then derived for the maximum achievable rate in the finite blocklength regime, the approximate transmission rate at high signal-to-noise ratios (SNRs), and the optimal average power. Numerical results validate the accuracy of the proposed closed-form expressions and further reveal the impact of channel estimation error, block error probability, finite blocklength and the received-power constraint.
A novel approach for implementing opportunistic scheduling without explicit feedback channels is proposed in this paper; it exploits existing ARQ signals instead of feedback channels to reduce implementation complexity. Monte Carlo simulation results demonstrate the efficacy of the proposed approach in harvesting multiuser diversity gain. The proposed approach enables the implementation of opportunistic scheduling in a variety of wireless networks, such as IEEE 802.11, that lack feedback facilities for collecting partial channel state information from users.
Vehicular Ad-Hoc Networks (VANETs) are a critical component of Intelligent Transportation Systems (ITS), which apply advanced information processing, communications, sensing, and control technologies in an integrated manner to improve the functionality and safety of transportation systems, providing drivers with timely information on road and traffic conditions and achieving smooth traffic flow on the roads. Recently, the security of VANETs has attracted major attention due to the possible presence of malicious elements and of messages altered by channel errors during transmission. In order to provide reliable and secure communications, Intrusion Detection Systems (IDSs) can serve as a second defensive wall after prevention-based approaches such as encryption. This chapter first presents the state-of-the-art literature on intrusion detection in VANETs. Next, the detection of illicit wireless transmissions from the physical layer perspective is investigated, assuming the presence of regular ongoing legitimate transmissions. Finally, a novel cooperative intrusion detection scheme from the MAC sub-layer perspective is discussed.
A beam-steering (up to 36 degrees), high-gain (20.5 dBi) Leaky-Wave Antenna (LWA) is presented at 26 GHz for enhanced data rates in millimeter wave (mm-wave) 5G systems in dynamic environments. A low loss (
Most of the existing distributed beamforming algorithms for relay networks require global channel state information (CSI) at the relay nodes, and their overall computational complexity is high. In this paper, a new class of adaptive algorithms is proposed which can achieve a globally optimum solution by employing only local CSI. A reference-signal-based (RSB) scheme is first derived, followed by a constant modulus (CM) based scheme for when the reference signal is not available. Considering an individual transmit power constraint at each relay node, the corresponding constrained adaptive algorithms are also derived as an extension. An analysis of the overhead and step-size range of the derived algorithms is then provided, and the excess mean square error (EMSE) for the RSB case is studied based on the energy conservation method. As demonstrated by our simulation results, the proposed algorithms achieve better performance with very low computational complexity and can be implemented on low-cost, low-processing-power devices.
Live holographic teleportation is an emerging media application that allows Internet users to communicate in a fully immersive environment. One distinguishing feature of such an application is the ability to teleport multiple objects from different network locations into the receiver's field of view at the same time, mimicking the effect of group-based communications in a common physical space. In this case, live teleportation frames originating from different sources must be precisely synchronised at the receiver side to ensure a user experience free of perceptible motion misalignment. For the very first time in the literature, we quantify the motion misalignment between remote sources with different network contexts in order to justify the necessity of such frame synchronisation operations. Motivated by this, we propose HoloSync, a novel edge-computing-based scheme capable of achieving controllable frame synchronisation performance for multi-source holographic teleportation applications. We carry out systematic experiments on a real system with the HoloSync scheme, evaluating frame synchronisation performance in specific network scenarios and its sensitivity to different control parameters.
Authentication protocols are powerful tools for ensuring confidentiality, an important feature of the Internet of Things (IoT). The Denial-of-Service (DoS) attack is one of the significant threats to availability, another essential feature of IoT, as it deprives users of services by consuming the energy of IoT nodes. On the other hand, computational intelligence algorithms can be applied to solve such issues in the network and cyber domains. Motivated by this, this article links these concepts. To do so, we analyze two lightweight authentication protocols, present a DoS attack inspired by users' misbehavior, and suggest a received-signal-strength-based solution, which is easy to compute, applicable to resisting different kinds of vulnerabilities in Internet protocols, and feasible for practical implementation. We implement it in two scenarios for locating attackers, investigate the effect of IoT devices' internal error on localization, and propose an optimization problem for finding the exact location of attackers, which is efficiently solvable by computational intelligence algorithms such as TLBO. Besides, we analyze the solutions for unreliable results from accurate devices and provide a solution that detects attackers with less than 12 cm error and a false alarm probability of 0.7%.
In this paper, a cooperative iterative water-filling approach is investigated for the two-user Gaussian interference channel. State-of-the-art approaches only maximize each user's own rate and always model interference as noise. Our proposed approach establishes user cooperation through the sharing of network side information. It iteratively maximizes the sum-rate of both users subject to distributed power constraints, optimally treating interference as either message or noise. Three efficient priority-based rate-sharing schemes between the two users are also investigated. Numerical evaluations are performed in a frequency-selective environment. It is observed that the proposed approach offers significant performance improvement in comparison with conventional iterative water-filling approaches.
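For reference, the conventional iterative water-filling baseline that the proposed cooperative scheme is compared against can be sketched in a few lines: each user repeatedly water-fills its power over the subchannels, treating the other user's current interference as noise. The channel gains, power budgets, and cross-gain scaling below are assumed for illustration only.

    import numpy as np

    def waterfill(inv_gains, p_total):
        """Water-filling: p_k = max(0, mu - inv_gains[k]) with sum(p) = p_total (bisection on mu)."""
        lo, hi = 0.0, p_total + inv_gains.max()
        for _ in range(60):
            mu = 0.5 * (lo + hi)
            p = np.maximum(0.0, mu - inv_gains)
            lo, hi = (mu, hi) if p.sum() < p_total else (lo, mu)
        return np.maximum(0.0, 0.5 * (lo + hi) - inv_gains)

    rng = np.random.default_rng(2)
    K, P = 16, 10.0                              # subchannels, per-user power budget (assumed)
    h = rng.exponential(size=(2, K))             # direct-link power gains
    g = 0.3 * rng.exponential(size=(2, K))       # cross-link (interference) gains

    p = np.zeros((2, K))
    for _ in range(50):                          # iterate until the power profiles settle
        for u in range(2):
            interf = 1.0 + g[1 - u] * p[1 - u]   # noise + other user's current interference
            p[u] = waterfill(interf / h[u], P)

    rates = [np.log2(1 + h[u] * p[u] / (1.0 + g[1 - u] * p[1 - u])).sum() for u in range(2)]
    print("per-user rates (bits/symbol):", rates)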
In this letter, a dual-band 8×8 multiple-input multiple-output (MIMO) antenna operating in the sub-6 GHz spectrum for future 5G MIMO smartphone applications is presented. The design consists of a fully grounded plane with closely spaced orthogonal pairs of antennas placed symmetrically along the long edges and at the corners of the smartphone. The orthogonal pairs are connected by a 7.8 mm short neutral line for mutual coupling reduction in both bands. Each antenna element consists of a folded monopole with dimensions 17.85 × 5 mm² and can operate over 3100-3850 MHz in the low band and 4800-6000 MHz in the high band (|S11| < −10 dB). The fabricated antenna prototype is tested and offers good performance in terms of Envelope Correlation Coefficient (ECC), Mean Effective Gain (MEG), total efficiency and channel capacity. Finally, the effects of the user on the antenna and the Specific Absorption Rate (SAR) are also presented.
The IEEE 802.15.4 protocol is widely adopted as the MAC sub-layer standard for wireless sensor networks, with two available modes: beacon-enabled and non-beacon-enabled. The non-beacon-enabled mode is simpler and does not require time synchronisation; however, it lacks an explicit energy saving mechanism that is crucial for its deployment on energy-constrained sensors. This paper proposes a distributed sleep mechanism for non-beacon-enabled IEEE 802.15.4 networks which provides energy savings to energy-limited nodes. The proposed mechanism introduces a sleep state that follows each successful packet transmission. Besides energy savings, the mechanism produces a traffic shaping effect that reduces the overall contention in the network, effectively improving the packet delivery ratio. Based on the traffic arrival rate and the level of network contention, a node can adjust its sleep period to achieve the highest packet delivery ratio. Performance results obtained by ns3 simulations validate these improvements as compared to the IEEE 802.15.4 standard.
Previous work on cooperative localization in cellular networks usually assumes that a centralized processor (CP) is available for location estimation. This paper considers cooperative localization in a distributed base station (BS) scenario, where there is no CP and the distributed BSs are responsible for location estimation. In this scenario, Global Positioning System (GPS)-enabled mobile terminals (MTs), i.e., located MTs, are employed as reference nodes. Several located MTs can then help to find the location of an un-located MT by estimating their distances to the un-located MT using received-signal-strength techniques. Two localization approaches are proposed: the first requires only one BS to collect all the assistance information and estimate the location; the second distributes the location estimation task across several BSs. The communication overhead between distributed BSs is investigated for both approaches. Moreover, by taking into account the effect of imperfect location knowledge of the located MTs, the accuracy limits of both approaches are derived. The simulation results show that, compared with the first approach, the second approach can reduce the communication overhead between distributed BSs at the cost of accuracy.
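A minimal sketch of the common first step in such schemes: estimating each anchor-to-target distance by inverting a log-distance path-loss model from the received signal strength, then trilaterating with linearized least squares. The path-loss exponent, reference power, anchor layout, and 2 dB shadowing below are assumptions for illustration; the paper's distributed estimation across BSs is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(3)
    n_pl, P0 = 3.5, -40.0                             # path-loss exponent, RSS at 1 m (assumed)
    anchors = np.array([[0, 0], [50, 0], [0, 50], [50, 50]], float)  # located MTs (metres)
    target = np.array([18.0, 27.0])                   # un-located MT (ground truth)

    d_true = np.linalg.norm(anchors - target, axis=1)
    rss = P0 - 10 * n_pl * np.log10(d_true) + rng.normal(0, 2, 4)    # 2 dB shadowing
    d_hat = 10 ** ((P0 - rss) / (10 * n_pl))          # invert the path-loss model

    # linearized least squares: subtract the first anchor's circle equation from the others
    A = 2 * (anchors[1:] - anchors[0])
    b = (d_hat[0] ** 2 - d_hat[1:] ** 2
         + (anchors[1:] ** 2).sum(1) - (anchors[0] ** 2).sum())
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("estimate:", est, " error (m):", np.linalg.norm(est - target))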
Flexibly supporting multiple services, each with different communication requirements and frame structures, has been identified as one of the most significant and promising characteristics of next-generation and beyond wireless communication systems. However, integrating multiple frame structures with different subcarrier spacings in one radio carrier may result in significant inter-service-band interference (ISBI). In this paper, a framework for multi-service (MS) systems is established based on a subband filtered multi-carrier (SFMC) system. Subband filtering implementations for both asynchronous and generalized synchronous (GS) MS-SFMC systems are proposed. Based on the GS-MS-SFMC system, the system model with ISBI is derived and a number of properties of ISBI are given. In addition, low-complexity ISBI cancelation algorithms are proposed by precoding the information symbols at the transmitter. For the asynchronous MS-SFMC system in the presence of transceiver imperfections, including carrier frequency offset, timing offset and phase noise, a complete analytical system model is established in terms of the desired signal, inter-symbol interference, inter-carrier interference, ISBI and noise. Thereafter, new channel equalization algorithms are proposed taking these errors and imperfections into account. Numerical analysis shows that the analytical results match the simulation results, and that the proposed ISBI cancelation and equalization algorithms significantly improve system performance in comparison with existing algorithms.
Cognitive radio provides a feasible solution for alleviating the lack of spectrum resources by enabling secondary users to access the unused spectrum dynamically. Spectrum sensing and learning, as the fundamental function for dynamic spectrum sharing in 5G evolution and 6G wireless systems, have been research hotspots worldwide. This paper reviews classic narrowband and wideband spectrum sensing and learning algorithms. The sub-sampling framework and recovery algorithms based on compressed sensing theory and their hardware implementation are discussed under the trend of high channel bandwidth and large capacity to be deployed in 5G evolution and 6G communication systems. This paper also investigates and summarizes the recent progress in machine learning for spectrum sensing technology.
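The sub-Nyquist recovery idea reviewed above can be demonstrated with the simplest greedy reconstruction, orthogonal matching pursuit (OMP), which recovers a sparse spectrum-occupancy vector from far fewer random measurements than Nyquist samples. The dimensions and the Gaussian measurement matrix below are textbook assumptions, not a specific algorithm from the survey.

    import numpy as np

    rng = np.random.default_rng(9)
    n, m, k = 256, 64, 5                         # Nyquist length, measurements, sparsity (assumed)
    A = rng.normal(size=(m, n)) / np.sqrt(m)     # random sub-sampling / measurement matrix
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # sparse occupied bands
    y = A @ x                                    # compressed measurements

    # orthogonal matching pursuit: greedily pick the atom best correlated with the residual
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef             # residual orthogonal to chosen atoms

    x_hat = np.zeros(n)
    x_hat[support] = coef
    print("support recovered:", sorted(support) == sorted(np.flatnonzero(x).tolist()))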
Recent research on Frequency Reuse (FR) schemes for OFDM/OFDMA-based cellular networks (OCN) suggests that a single fixed FR cannot optimally cope with the spatiotemporal dynamics of traffic and cellular environments in a spectrally and energy efficient way. To address this issue, this paper introduces a novel Self-Organizing framework for adaptive Frequency Reuse and Deployment (SO-FRD) for future OCN, including both cellular (e.g. LTE) and relay-enhanced cellular networks (e.g. LTE-Advanced). An optimization problem is first formulated to find the optimal frequency reuse factor, number of sectors per site, and number of relays per site. The goal is cast as an adaptive utility function which incorporates three major system objectives: 1) spectral efficiency; 2) fairness; and 3) energy efficiency. An appropriate metric for each of the three constituent objectives of the utility function is then derived. A solution is provided by evaluating these metrics through a combination of analysis and extensive system-level simulations for all feasible FRDs. The proposed SO-FRD framework uses this flexible utility function to switch to the particular FRD strategy that suits the system's current state according to a predefined or self-learned performance criterion. The proposed metrics capture the effect of all major optimization parameters, such as the frequency reuse factor, the number of sectors and relays per site, and adaptive coding and modulation. Based on the results obtained, interesting insights into the trade-offs among these factors are also provided.
In this paper, we investigate the design of a radio resource control (RRC) protocol in the framework of Long-Term Evolution (LTE) of the 3rd Generation Partnership Project regarding the provision of low-cost/complexity and low-energy-consumption machine-type communication (MTC), which is an enabling technology for the emerging paradigm of the Internet of Things. Due to the nature and envisaged battery-operated long-life operation of MTC devices without human intervention, energy efficiency becomes extremely important. This paper elaborates the state-of-the-art approaches toward addressing the challenge of low-energy-consumption operation of MTC devices, and proposes a novel RRC protocol design, namely semi-persistent RRC state transition (SPRST), where the RRC state transition is no longer triggered by incoming traffic but depends on pre-determined parameters based on the traffic pattern obtained by exploiting the network memory. The proposed RRC protocol can easily co-exist with the legacy RRC protocol in LTE. The design criterion of SPRST is derived and the signalling procedure is investigated accordingly. Based upon the simulation results, it is shown that SPRST significantly reduces both the energy consumption and the signalling overhead while at the same time guaranteeing the quality-of-service requirements.
5G New Radio (NR) Release 15 was specified in June 2018. It introduces numerous changes and potential improvements for physical layer data transmission, although only point-to-point (PTP) communications are considered. In order to use physical data channels such as the Physical Downlink Shared Channel (PDSCH), it is essential to guarantee the successful transmission of control information via the Physical Downlink Control Channel (PDCCH). Taking into account these two aspects, in this paper we first analyze the PDCCH processing chain in NR PTP as well as in the state-of-the-art Long Term Evolution (LTE) point-to-multipoint (PTM) solution, i.e., the evolved Multimedia Broadcast Multicast Service (eMBMS). Then, via link-level simulations, we compare the performance of the two technologies, observing the Bit/Block Error Rate (BER/BLER) for various scenarios. The objective is to identify the performance gap introduced by the physical layer changes in the NR PDCCH, and to provide insightful guidelines on control channel configuration towards NR PTM scenarios.
Wi-Fi sensing has become an attractive option for non-invasive monitoring of human activities and vital signs. This paper explores the feasibility of using state-of-the-art commercial off-the-shelf (COTS) devices for Wi-Fi sensing applications, particularly respiration monitoring and motion detection. We utilize the Intel AX210 network interface card (NIC) to transmit Wi-Fi signals in both the 2.4 GHz and 6 GHz frequency bands. Our experiments rely on channel frequency response (CFR) and received signal strength indicator (RSSI) data, which are processed using a moving average algorithm to extract human behavior patterns. The experimental results demonstrate the effectiveness of our approach in capturing and representing human respiration and motion patterns. Furthermore, we compare the performance of Wi-Fi sensing across different frequency bands, highlighting the advantages of using higher frequencies for improved sensitivity and clarity. Our findings showcase the practicality of using COTS devices for Wi-Fi sensing and lay the groundwork for the development of non-invasive, contactless sensing systems. These systems have potential applications in various fields, including healthcare, smart homes, and the Metaverse.
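A toy version of the described processing chain: a slow periodic component (respiration) buried in noisy RSSI samples is smoothed with a moving average, and the breathing rate is read off the spectral peak. The sampling rate, breathing frequency, and noise level are assumed values, not measurements from the paper's AX210 setup.

    import numpy as np

    fs, dur = 20.0, 60.0                             # 20 Hz sampling, 60 s capture (assumed)
    t = np.arange(0, dur, 1 / fs)
    rng = np.random.default_rng(4)
    # synthetic RSSI: 0.25 Hz respiration ripple (15 breaths/min) plus measurement noise
    rssi = -45 + 0.5 * np.sin(2 * np.pi * 0.25 * t) + rng.normal(0, 0.4, t.size)

    w = int(fs)                                      # 1-second moving-average window
    smooth = np.convolve(rssi, np.ones(w) / w, mode="valid")

    spec = np.abs(np.fft.rfft(smooth - smooth.mean()))
    freqs = np.fft.rfftfreq(smooth.size, 1 / fs)
    band = (freqs > 0.1) & (freqs < 0.5)             # plausible respiration band (6-30 bpm)
    print("estimated rate: %.1f breaths/min" % (60 * freqs[band][spec[band].argmax()]))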
In this work, we deploy a one-day-ahead prediction algorithm using a deep neural network for a fast-response BESS in an intelligent energy management system (I-EMS) called SIEMS. The main role of the SIEMS is to maintain the state of charge at high rates based on one-day-ahead information about solar power, which depends on meteorological conditions. The remaining power is supplied by the main grid for sustained power streaming between the BESS and end-users. Considering the use of information and communication technology components in microgrids, the main objective of this paper is the hybrid microgrid's performance under cyber-physical adversarial attacks. The fast gradient sign, basic iterative, and DeepFool methods, investigated here for the first time in power systems such as smart grids and microgrids, are used to produce perturbations of the training data.
The upcoming 6G technology is expected to operate in near-field (NF) radiating conditions thanks to high frequencies and electrically large antenna arrays. While several studies have already addressed this possibility, it is worth noting that NF models introduce heightened complexity, the justification for which is not always evident in terms of performance improvements. Therefore, this paper delves into the implications of the disparity between NF and far-field (FF) models for communication, localization, and sensing systems. Such disparity might degrade performance metrics like localization accuracy, sensing reliability, and communication efficiency. Through an exploration of the effects arising from mismatches between NF and FF models, this study seeks to illuminate the challenges confronting system designers and offer valuable insights into the balance between model accuracy, which typically comes at high complexity, and achievable performance. To substantiate our perspective, we also include a numerical performance assessment confirming the repercussions of the mismatch between NF and FF models.
A hybrid technique is proposed to manipulate the field distribution in a substrate integrated waveguide (SIW) H-plane horn to enhance its radiation characteristics. The technique comprises two cascaded steps to govern the guided waves in the structure. The first step is to correct the phase of the fields and form a quasi-uniform distribution in the flare section so that the gain increases and the sidelobe level (SLL) decreases. This is obtained by loading the structure with a novel modulated metal-via lens. Field expansion on the radiating aperture of the SIW H-plane horn generates backward surface waves on both broad walls, which increases the backlobe. In the second step, these backward surface waves are recycled and directed forward with the aid of holography theory. This is achieved by adding a couple of dielectric slabs with holographic-based patterns of metallic strips on both broad walls. With this step, the backlobe is reduced and the endfire gain is further increased. Using the proposed technique, the structure is designed and fabricated to operate at f = 30 GHz, simultaneously improving the measured values of gain to 11.65 dBi, H-plane SLL to −17.94 dB, and front-to-back ratio to 17.02 dB.
A simple-structure probe-fed multiple-input multiple-output (MIMO) dielectric resonator antenna (DRA) is designed for sub-6 GHz applications with a reduced inter-element spacing (< 0.5λ). A 4-element rectangular DRA is positioned in a compact space, verifying the proposed DRA's potential for MIMO applications. Each element consists of two dielectric resonators with different permittivities of 5 and 10, excited by a coaxial probe. The measurement results reveal that the proposed MIMO DRA provides an envelope correlation coefficient (ECC) of less than 0.01 with good MIMO performance.
A new single-fed circularly polarized dielectric resonator antenna (CP-DRA) without beam squint is presented. The DRA comprises an S-shaped dielectric resonator (SDR) with a metalized edge and two rectangular dielectric resonator (RDR) blocks. A horizontal extension section is applied to the SDR, and a vertical section is placed parallel to the metallic edge. A vertical coaxial probe is used to excite the SDR and the vertical RDR blocks through an S-shaped metal element and a small rectangular metal strip. The two added RDRs, which form an L-shaped DR, improve the radiation characteristics and compensate for the beam squint errors. A wideband CP performance is achieved due to the excitation of several orthogonal modes such as [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text]. The experimental results demonstrate an impedance bandwidth of approximately [Formula: see text] (3.71-7.45 GHz) and a 3-dB axial-ratio (AR) bandwidth of about [Formula: see text] (3.72-6.53 GHz) with a stable broadside beam achieving a measured peak gain of about [Formula: see text]. Furthermore, a 100% correction in beam squint value from [Formula: see text] to [Formula: see text] with respect to the antenna boresight is achieved.
Cooperative transmission can be used in a multicell scenario where base stations are connected to a central processing unit. This cooperation can improve fairness for users with bad channel conditions, i.e., critical users. This paper looks into using cooperative transmission alongside the orthogonal OFDM scheme to improve fairness through careful selection of critical users, together with resource allocation and resource division between the two schemes. A solution for power and subcarrier allocation is provided, together with a solution for the selection of the critical users. Simulation results are provided to show the fairness achieved by the proposed critical-user selection, resource allocation, and resource division methods under the stated assumptions.
The random access (RA) mechanism of Long Term Evolution (LTE) networks is prone to congestion when a large number of devices attempt RA simultaneously, due to the limited set of preambles. If each RA attempt is made by transmitting multiple consecutive preambles (codewords) picked from a subset of preambles, as proposed in [1], the collision probability can be significantly reduced. Selection of an optimal preamble set size [2] can maximise the RA success probability in the presence of a trade-off between codeword ambiguity and code collision probability, depending on load conditions. In light of this finding, this paper provides an adaptive algorithm, called Multipreamble RA, to dynamically determine the preamble set size under different load conditions, using only the minimum necessary uplink resources. This provides a high RA success probability and makes it possible to isolate different network service classes by separating the whole preamble set into subsets, each associated with a different service class; a technique that cannot be applied effectively in LTE due to increased collision probability. This motivates the idea that preamble allocation could be implemented as a virtual network function, called vPreamble, as part of a radio access network (RAN) slice. The parameters of a vPreamble instance can be configured and modified according to the load conditions of the service class it is associated with.
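The trade-off this adaptive algorithm navigates can be reproduced with a short Monte Carlo: devices picking length-L codewords from a preamble subset of size K collide only if two devices draw the identical codeword, so shrinking K while increasing L can still enlarge the codeword space K^L and cut collisions. The device count and the (K, L) pairs below are illustrative, not the optimised values from the cited works [1], [2].

    import numpy as np

    rng = np.random.default_rng(5)

    def collision_prob(n_dev, n_pre, length, trials=20_000):
        """P(at least two of n_dev devices pick the same length-`length` codeword)."""
        hits = 0
        for _ in range(trials):
            words = [tuple(rng.integers(0, n_pre, length)) for _ in range(n_dev)]
            hits += len(set(words)) < n_dev
        return hits / trials

    for n_pre, length in [(64, 1), (16, 2), (8, 3)]:   # codeword spaces: 64, 256, 512
        print("K=%2d, L=%d -> collision prob %.3f"
              % (n_pre, length, collision_prob(30, n_pre, length)))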
This paper proposes a two-stage algorithm to address spectrum sharing between two Universal Mobile Telecommunication System (UMTS) operators. The two-stage algorithm uses both a genetic algorithm and load balancing techniques. The first stage uses the genetic algorithm to optimize the allocation when the traffic correlation is low. The second stage uses a load balancing scheme in the highly correlated traffic region. The simulation results show that significant spectrum sharing gains, of up to 26% and 20% respectively, can be obtained on the two networks using the proposed algorithm.
A circular reflectarray antenna (RA) for generating Orbital Angular Momentum (OAM) modes in the Terahertz (THz) band is introduced. An interlaced unit cell is proposed to reach a phase variation of 328° from 185 GHz to 188 GHz. Combining RA, OAM, and THz technologies in one structure can be utilized to meet the future requirements of 6G networks, owing to the additional degree of freedom that OAM beams provide for data multiplexing in short-distance wireless communication.
Minimization of drive test (MDT) has recently been standardized by 3GPP as a key self-organizing network (SON) feature. MDT allows coverage to be estimated at the base station (BS) using user equipment (UE) measurement reports, with the objective of eliminating the need for drive tests. However, most MDT-based coverage estimation methods recently proposed in the literature assume that the UE position is known at the BS with 100% accuracy, an assumption that does not hold in reality. In this paper we develop an analytical model that allows the quantification of the error in MDT-based autonomous coverage estimation (ACE) as a function of the error in UE as well as BS positioning. Our model also allows characterization of the error in ACE as a function of the standard deviation of shadowing.
This paper presents a novel mechanism which increases mobile terminal battery performance. It supports a cell reselection algorithm which decides which cell the user equipment (UE) camps on when in idle mode (i.e., with no active radio connection to the mobile network). The study is based on real 3G UTRA network measurements. The authors propose a technique to reduce UE current consumption in idle mode based on dynamic optimization of the Sintrasearch neighbour-cell measurement threshold. The system analysis covers both UTRA and E-UTRA (Long Term Evolution, LTE) technology.
Low Density Signature-Orthogonal Frequency Division Multiplexing (LDS-OFDM) has recently been introduced as an efficient multiple access technique. In this paper, we focus on the subcarrier and power allocation scheme for the uplink LDS-OFDM system. Since the resource allocation problem is non-convex due to the discrete nature of subcarrier allocation, the complexity of finding the optimal solution is extremely high. We propose a heuristic subcarrier and power allocation algorithm to maximize the weighted sum-rate. The simulation results show that the proposed algorithm can significantly increase the spectral efficiency of the system. Furthermore, it is shown that the LDS-OFDM system can achieve an outage probability much lower than that of an OFDMA system.
In this paper, we evaluate the performance of Multicarrier-Low Density Spreading Multiple Access (MC-LDSMA) as a multiple access technique for mobile communication systems. The MC-LDSMA technique is compared with current multiple access techniques, OFDMA and SC-FDMA. The performance is evaluated in terms of cubic metric, block error rate, spectral efficiency and fairness. The aim is to investigate the expected gains of using MC-LDSMA in the uplink for next generation cellular systems. The simulation results of the link and system-level performance evaluation show that MC-LDSMA has significant performance improvements over SC-FDMA and OFDMA. It is shown that using MC-LDSMA can considerably reduce the required transmission power and increase the spectral efficiency and fairness among the users.
Energy efficiency (EE) is one of the main design criteria for the current and next generation of communication systems. Meanwhile, the reflective intelligent surface (RIS) is foreseen to be a key enabler of the next generation of communication systems by facilitating the propagation of radio frequency signals and, in turn, possibly improving spectral efficiency (SE) and/or EE. This paper investigates both the EE and SE of a multi-hop multi-antenna RIS-aided communication system through their fundamental trade-off. To this end, we first provide a generic and accurate closed-form approximation (CFA) of the SE (ergodic capacity) for multi-hop multi-antenna RIS-aided systems and verify its accuracy through simulations for various numbers of antennas/phase shifters and hops. Based on this expression, we then derive a novel and accurate CFA of the fundamental EE-SE trade-off for multi-hop multi-antenna RIS-aided systems. We subsequently use our CFA to analyse the variation of the EE as a function of the number of antennas/phase shifters and hops under a realistic power consumption model. It turns out that increasing the number of hops is more energy efficient than increasing the number of antennas/phase shifters, and that multi-hop communication with RIS is not necessarily always more energy efficient than classic multi-antenna communication, as is, for instance, the case in a simple device-to-device communication scenario.
One major advantage of the cloud/centralized radio access network (C-RAN) is the ease of implementing multicell coordination mechanisms to improve the system spectrum efficiency (SE). Theoretically, a large number of cooperative cells leads to a higher SE; however, it may also cause significant delay due to the extra channel state information (CSI) feedback and the joint-processing computation needed at the cloud data center, which is likely to result in performance degradation. In order to investigate the impact of delay on the throughput gains, we divide the network into multiple clusters of cooperative small cells and formulate a throughput optimization problem. We model various delay factors and the sum-rate of the network as functions of the cluster size, treating it as the main optimization variable. For our analysis, we consider both base stations' and users' geometric locations as random variables, for both linear and planar network deployments. The output SINR (signal-to-interference-plus-noise ratio) and ergodic sum-rate are derived based on a homogeneous Poisson point process (PPP) model. The sum-rate optimization problem in terms of the cluster size is formulated and solved. Simulation results show that the proposed analytical framework can be utilized to accurately evaluate the performance of practical cloud-based small cell networks employing clustered cooperation.
Energy efficiency (EE) is a key design criterion for the next generation of communication systems. Equally, cooperative communication is known to be very effective for enhancing the performance of such systems. This paper proposes a novel approach for maximizing the EE of multiple-input multiple-output (MIMO) relay-based nonregenerative cooperative communication systems by optimizing both the source and relay precoders when both relay and direct links are considered. We prove that the corresponding optimization problem is at least strictly pseudo-convex, i.e. has a unique solution, when the relay precoding matrix is known, and that its Lagrangian can be lower and upper bounded by strictly pseudo-convex functions when the source precoding matrix is known. Accordingly, we derive EE-optimal source and relay precoding matrices that are jointly optimized through alternating optimization. We also provide a low-complexity alternative to the EE-optimal relay precoding matrix that exhibits close-to-optimal performance at significantly reduced complexity. Simulation results show that our joint source and relay precoding optimization can improve the EE of MIMO amplify-and-forward (AF) systems by up to 50% when compared to direct/relay-link-only precoding optimization.
This paper studies the optimum user selection scheme in hybrid-duplex device-to-device (D2D) cellular networks. We derive an analytical integral-form expression of the cumulative distribution function (CDF) of the received signal-to-noise-plus-interference ratio (SINR) at the D2D node, based on which a closed form of the outage probability is obtained. Analysis shows that the proposed user selection scheme achieves the best SINR at the D2D node while the interference to the base station is limited to a pre-defined level. Hybrid-duplex D2D can be switched between half and full duplex according to the residual self-interference, to enhance the throughput of the D2D pair. Simulation results are presented to validate the analysis.
Multi-access edge computing for mobile computing-task offloading is driving the extreme utilization of available degrees of freedom (DoF) for ultra-reliable low-latency downlink communications. The fundamental aim of this work is to find latency-constrained transmission protocols that can achieve a very low outage probability (e.g., 0.001%). Our investigation is mainly based upon the Polyanskiy-Poor-Verdú formula for finite-blocklength coded channel capacity, which is extended from the quasi-static fading channel to the frequency-selective channel. Moreover, the use of a suitable duplexing mode is also critical to downlink reliability. Specifically, time-division duplexing (TDD) outperforms frequency-division duplexing (FDD) in terms of frequency diversity gain. On the other hand, FDD has the advantage of more temporal DoF in the downlink, which can be exchanged for spatial diversity gain through the use of space-time coding. A numerical study is carried out to compare the reliability of FDD and TDD under various latency constraints.
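The finite-blocklength capacity that underpins this analysis is commonly evaluated through the normal approximation R ≈ C − sqrt(V/n)·Q⁻¹(ε) + log2(n)/(2n); the sketch below computes it for the AWGN channel with the standard dispersion term. This is the textbook AWGN form of the Polyanskiy-Poor-Verdú result, not the paper's frequency-selective extension, and the SNR and blocklengths are illustrative.

    import numpy as np
    from statistics import NormalDist

    def ppv_rate(snr, n, eps):
        """Normal approximation to the maximal rate (bits/channel use)
        at blocklength n and block error probability eps, AWGN channel."""
        C = np.log2(1 + snr)                                  # Shannon capacity
        V = (1 - 1 / (1 + snr) ** 2) * np.log2(np.e) ** 2     # channel dispersion
        qinv = NormalDist().inv_cdf(1 - eps)                  # Q^{-1}(eps)
        return C - np.sqrt(V / n) * qinv + np.log2(n) / (2 * n)

    for n in (100, 500, 2000):                                # short packets -> long blocks
        print("n=%4d: %.3f bits/use (capacity %.3f)" % (n, ppv_rate(10.0, n, 1e-5), np.log2(11)))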
Dynamic spectrum allocation (DSA) seeks to exploit the variations in the loads of various radio-access networks to allocate the spectrum efficiently. Here, a spectrum manager implements DSA by periodically auctioning short-term spectrum licenses. We solve analytically the problem of the operator of a CDMA cell populated by delay-tolerant terminals operating at various data rates on the downlink, representing users with dissimilar "willingness to pay" (WtP). WtP is the most a user would pay for a correctly transferred information bit. The operator finds a revenue-maximising internal pricing and a service priority policy, along with a bid for spectrum. Our clear and specific analytical results apply to a wide variety of physical layer configurations. The optimal operating point can be easily obtained from the frame-success-rate function. At the optimum (with a convenient time scale), a terminal's contribution to revenues is the product of its WtP and its data rate, and the product of its WtP and its channel gain determines its service priority ("revenue per Hertz"). Assuming a second-price auction, the operator's optimal bid for a certain spectrum band equals the sum of the individual revenue contributions of the additional terminals that could be served if the band were won.
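The pricing structure described, priority ordered by WtP × channel gain, per-terminal revenue WtP × rate, and a bid equal to the revenue of the extra terminals the new band would admit, can be mocked up in a few lines. Every number below (WtP range, rates, capacities) is an invented illustration, not data from the paper.

    import numpy as np

    rng = np.random.default_rng(6)
    N = 10
    wtp  = rng.uniform(1, 5, N) * 1e-6                # $ per correctly delivered bit (assumed)
    rate = rng.choice([64e3, 144e3, 384e3], N)        # terminal data rates in bit/s (assumed)
    gain = rng.exponential(1.0, N)                    # channel gains

    order = np.argsort(-(wtp * gain))                 # service priority: WtP x channel gain
    revenue = wtp * rate                              # per-second contribution: WtP x rate

    base_cap, extra_cap = 4, 3                        # terminals servable without / with new band
    bid = revenue[order[base_cap:base_cap + extra_cap]].sum()
    print("second-price bid for the extra band ($/s): %.4f" % bid)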
This letter presents a systematic method for regulating the response of an artificial impedance surface. The method is based on governing the dispersion diagram to control the depth of modulation so that a meaningful pattern of scatterers is obtained on the structure. The method is applied to a holographic-based large reflective metasurface to achieve a dual-beam radiation pattern with tilt angles of ±45° in the azimuth plane at f = 3.5 GHz. The structure is fabricated and the measured data concur with the simulation results.
An intelligent reflective surface (IRS) is presented to compensate for path loss and enhance the coverage of 5G networks in the mm-wave band. A π-shaped element with variable-sized dipoles, distributed so as to maintain a phase-length curve spanning over 340° in the range of 23-27 GHz, is addressed in this work. The proposed structure can be an ideal candidate for the 5G mm-wave band n258.
This paper proposes a novel carrier frequency offset (CFO) estimation method for generalized MC-CDMA systems in unknown frequency-selective channels utilizing hidden pilots. It is established that the CFO is identifiable in the frequency domain by employing cyclic statistics (CS) and linear regression (LR) algorithms. We show that the CS-based estimator is capable of reducing the normalized CFO (NCFO) to a small error value. Then, the LR-based estimator can be employed to offer a more accurate estimate by removing the residual quantization error left after the CS-based estimator.
This paper proposes a framework for spectrum sharing between multiple Universal Mobile Telecommunication System (UMTS) operators in the UMTS extension band. An algorithm is proposed, and its performance is investigated under uniform and non-uniform traffic conditions. Accounting for the impact of call setup messages on the overall performance, the results show that DSA gains in the region of 7% and 2% can be obtained under uniform and non-uniform traffic conditions, respectively.
This paper investigates the downlink handover (soft/softer/hard) performance of Wideband Code Division Multiple Access (WCDMA) based 3rd generation Universal Mobile Telecommunication System (UMTS), as it is known that the downlink capacity of UMTS is very sensitive to the extent of the overlap area between adjacent cells and the power margin between them. Factors influencing the handover performance, such as the correlation between the multipath radio channels of the two links, the limited number of Rake fingers in a handset, and imperfect channel estimation, which cannot be modeled adequately in system-level simulations, are investigated via link-level simulations. It is also shown that the geometry factor has an influence on the handover performance and exhibits a threshold value (which depends on the correlation between the multipath channels associated with the two links in a handover) above which the performance starts degrading. The variation of the handover gain with the closed-loop power control (CLPC) step size and space-time transmit diversity (STTD) is also quantified. These comprehensive results can be used as guidelines for more accurate coverage and capacity planning of UMTS networks.
Autonomous monitoring of key performance indicators, which are obtained from measurement reports, is well established as a necessity for enabling self-organising networks. However, these reports are usually tagged with geographical location information obtained from positioning techniques and therefore prone to errors. In this paper, we investigate the impact of position estimation errors on the cell coverage probability estimated from autonomous coverage estimation (ACE). We derive novel and accurate expressions for the actual cell coverage probability of such a scheme while considering: errors in user equipment (UE) location only; and errors in both UE and base station locations. We present generic expressions for a channel modelled with path loss and shadowing, and much simplified expressions for the path-loss-dominant channel model. Our results reveal that the ACE scheme will be suboptimal as long as there are errors in the reported geographical location information. Hence, appropriate coverage margins must be considered when utilising ACE.
Network virtualization has been recognized as a promising solution for enabling the rapid deployment of customized services by building multiple Virtual Networks (VNs) on a shared substrate network. While various VN embedding schemes have been proposed to allocate substrate resources to each VN request, little work has been done on providing backup mechanisms in case of substrate network failures. In a virtualized infrastructure, a single substrate failure will affect all the VNs sharing that resource. Provisioning a dedicated backup network for each VN is not efficient in terms of substrate resource utilization. In this paper, we investigate the problem of shared backup network provisioning for VN embedding and propose two schemes: a shared on-demand and a shared pre-allocation backup scheme. Simulation experiments show that both proposed schemes make better utilization of substrate resources than the dedicated backup scheme without sharing, while each has its own advantages.
Designers of smart environments based on radio frequency identification (RFID) devices face a challenging task in building secure mutual authentication protocols. These systems are classified into two major categories: traditional closed-loop systems and open-loop systems. To the best of our knowledge, all of the mutual authentication protocols previously introduced for these two categories rely on a centralized database and fail to address decentralized mutual authentication and the attacks related to it. Leveraging blockchain, a distributed ledger technology, we propose in this paper two decentralized mutual authentication protocols for IoT systems. Our first scheme applies to traditional closed-loop RFID systems (called CLAB), and the second one applies to open-loop RFID systems (called OLAB). Meanwhile, we examine the security of the Chebyshev chaotic map-based authentication algorithm and confirm that this algorithm is unprotected against tag and reader impersonation attacks. Likewise, we present denial-of-service (DoS), tag impersonation, and reader impersonation attacks against the Chebyshev chaotic map-based protocol when employed in open-loop IoT networks. Moreover, we discover a full secret recovery attack against a recent RFID mutual authentication protocol based on blockchain. Finally, we use the BAN-logic method to verify the security properties of our CLAB and OLAB proposals.
The technology of using massive transmit-antennas to enable ultra-reliable single-shot transmission (URSST) is challenged by imperfect transmitter-side channel knowledge (i.e., CSIT). When the imperfection mainly stems from channel time-variation, the outage probability of matched filter (MF) transmitter beamforming is investigated based on a first-order Markov model of the aged CSIT. With a fixed transmit-power, the transmitter-side uncertainty of the instantaneous signal-to-noise ratio (iSNR) is mathematically characterized. In order to guarantee the outage probability for every single shot, a transmit-power adaptation approach is proposed to satisfy a pessimistic iSNR requirement, which is predicted using the Chernoff lower bound of the beamforming gain. Our numerical results demonstrate a remarkable transmit-power efficiency when compared with power control approaches using other lower bounds. In addition, a combinatorial approach of MF beamforming and grouped space-time block code (G-STBC) is proposed to further mitigate the detrimental impact of the CSIT uncertainty. It is shown, through both theoretical analysis and computer simulations, that the combinatorial approach can further improve the transmit-power efficiency with a good tradeoff between the outage probability and the latency.
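For intuition, the following is a minimal Monte Carlo sketch of the power-adaptation idea, assuming the standard first-order (Gauss-Markov) aged-CSIT model; an empirical quantile of the beamforming gain stands in for the paper's Chernoff lower bound, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not the paper's values)
M = 128              # transmit antennas
rho = 0.95           # temporal correlation of the aged CSIT (Gauss-Markov)
snr_target = 10.0    # required iSNR (linear)
eps = 1e-3           # target outage probability per shot
n_mc = 1_000_000

# Outdated CSIT and the MF beamformer built from it
h_hat = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
w = h_hat / np.linalg.norm(h_hat)

# True channel: h = rho * h_hat + sqrt(1 - rho^2) * e, so the projection
# onto w is a deterministic term plus a unit-variance complex Gaussian.
a = rho * (h_hat @ w.conj())
z = (rng.standard_normal(n_mc) + 1j * rng.standard_normal(n_mc)) / np.sqrt(2)
gain = np.abs(a + np.sqrt(1 - rho**2) * z) ** 2   # beamforming gain per shot

# Adapt the transmit power to a pessimistic gain prediction; the empirical
# eps-quantile is used here where the paper uses a Chernoff lower bound.
g_pess = np.quantile(gain, eps)
P = snr_target / g_pess
print(f"adapted power: {10 * np.log10(P):+.2f} dB relative to unit power")
print(f"empirical outage: {np.mean(P * gain < snr_target):.2e} (target {eps:.0e})")
```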
A circular reflectarray antenna (RA) for generating Orbital Angular Momentum (OAM) modes in the Terahertz (THz) band is introduced. An interlaced unit cell is proposed to reach a phase variation of 328° from 185 GHz to 188 GHz. Combining RA, OAM, and THz technologies in one structure can be utilized to reach the future requirements of 6G networks. That is due to the additional degree of freedom that OAM beams can provide for data multiplexing in short-distance wireless communication.
In a critical infrastructure such as the Smart Grid (SG), providing security of the system and privacy of consumers are significant challenges to be considered. SG developers adopt Machine Learning (ML) algorithms within the Intrusion Detection System (IDS) to monitor traffic data and network performance. This visibility safeguards the SG from possible intrusions or attacks that may disrupt the system. However, it requires access to residents' consumption information, which is a severe threat to their privacy. In this paper, we present a novel method to detect abnormalities in a large-scale SG while preserving the privacy of users. We design a Federated IDS (FIDS) architecture using Federated Learning (FL) in a 5G environment for the SG metering network. In this way, we design a Federated Deep Neural Network (FDNN) model that protects customers' information and provides supervisory management for the whole energy distribution network. Simulation results for a real-time dataset demonstrate a reasonable improvement of the proposed FDNN model over state-of-the-art algorithms. The FDNN achieves approximately 99.5% accuracy, 99.5% precision/recall, and 99.5% F1-score when compared with standard classification algorithms.
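As a rough illustration of the federated training loop behind an FDNN-style IDS, the sketch below runs FedAvg-style aggregation over synthetic local datasets; the linear least-squares "model", the client data, and all hyperparameters are hypothetical stand-ins for the paper's deep network and smart-meter traffic data.

```python
import numpy as np

rng = np.random.default_rng(2)

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few local gradient-descent steps on a least-squares loss."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

d, n_clients = 8, 10
w_true = rng.standard_normal(d)         # shared underlying model (synthetic)
data = []
for _ in range(n_clients):
    X = rng.standard_normal((100, d))   # hypothetical local features
    y = X @ w_true + 0.1 * rng.standard_normal(100)
    data.append((X, y))

# Federated rounds: raw data never leaves a client, only model weights do
w_global = np.zeros(d)
for _ in range(20):
    local_ws = [local_update(w_global.copy(), X, y) for X, y in data]
    w_global = np.mean(local_ws, axis=0)    # server-side FedAvg aggregation
print("recovery error:", float(np.linalg.norm(w_global - w_true)))
```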
This paper presents a novel design of trapped microstrip-ridge gap waveguide using partially filled air gaps in a conventional microstrip-ridge gap waveguide. The proposed method offers a practical solution that avoids cumbersome assembly processes for standalone high-frequency circuits employing the low temperature co-fired ceramics technology, which supports buried cavities. To show the practicality of the proposed approach, the propagation characteristics of both the trapped microstrip and the microstrip-ridge gap waveguide are compared first. Then, a right-angle bend is introduced, followed by the design of a power divider. These components are used to feed a linear 4-element array antenna. The proposed array has a bandwidth of 13 GHz, from 64 to 76 GHz, and provides a realized gain of over 10 dBi and a total efficiency of about 80% throughout the operational band. The antenna is an appropriate candidate for the upper bands of WiGig (63.72 to 70.2 GHz) and the FCC-approved 70 GHz band (71 to 76 GHz).
Wireless interfaces, remote control schemes, and increased autonomy have enlarged the attack surface of vehicular networks. As powerful monitoring entities, intrusion detection systems (IDS) must be updated and customised to respond to emerging network requirements. Since server-based monitoring schemes are prone to significant privacy concerns, privacy-preserving learning methods such as federated learning (FL) have received considerable attention in the design of IDS. To improve the efficiency and scalability of the original FL, this paper proposes a novel collaborative hierarchical federated IDS, named CHFL, for the vehicular network. In the CHFL model, a group of vehicles assisted by vehicle-to-everything (V2X) communication technologies can exchange intrusion detection information collaboratively in a private format. Each group nominates a leader, and the leading vehicle serves as the intermediary in the second-level detection system of the hierarchical federated model. The leader communicates directly with the server to transmit and receive model updates for its nearby end vehicles. By reducing the number of direct communications to the server, our proposed system reduces network uplink traffic and queuing-processing latency. In addition, CHFL improves the prediction loss and the accuracy of the whole system, achieving an accuracy of 99.10% compared with 97.01% for the original FL.
In this paper, we investigate the reflection properties of different interior surfaces in the 92-110 GHz sub-Terahertz (THz) band. The measurements were conducted in an indoor environment by placing the surface in a specular configuration between the Transmitter (Tx) and Receiver (Rx) and collecting a large set of data by offsetting the Tx and Rx in parallel to the surface. The measurements were performed using an Agilent N5230A Vector-Network-Analyzer (VNA). In particular, we present a statistical analysis in the frequency domain to show how frequency-selective each surface reflection is and how consistent this behaviour is across the whole data set. We introduce the Power-Delay-Profile (PDP) to characterize the multipath behaviour of the channel and calculate the Root-Mean-Square (RMS) delay spread. The measurement results provide useful insight for future propagation work towards the development of indoor communication systems at sub-THz frequencies.
In this paper, a high flat-gain waveguide-fed aperture antenna is proposed. Two layers of FR4 dielectric acting as superstrates are placed in front of the aperture to enhance the bandwidth and the gain of the antenna. Moreover, a conductive shield, connected to the edges of the ground plane and surrounding the aperture and superstrates, is applied to the proposed structure to improve its radiation characteristics. The proposed antenna has been simulated with HFSS and optimized through a parametric study, with the following results. A maximum gain of 13.0 dBi and a 0.5-dB gain bandwidth of 25.9% (8.96-11.63 GHz) have been achieved. The 3-dB gain bandwidth of the proposed antenna is 40.7% (8.07-12.20 GHz), over which it also has a suitable reflection coefficient.
The evolution of network technologies has witnessed a paradigm shift toward open and intelligent networks, with the Open Radio Access Network (O-RAN) architecture emerging as a promising solution. O-RAN introduces disaggregation and virtualization, enabling network operators to deploy multi-vendor and interoperable solutions. However, managing and automating the complex O-RAN ecosystem presents numerous challenges. To address this, machine learning (ML) techniques have gained considerable attention in recent years, offering promising avenues for network automation in O-RAN. This paper presents a comprehensive survey of the current research efforts on network automation using ML in O-RAN. We begin by providing an overview of the O-RAN architecture and its key components, highlighting the need for automation. Subsequently, we delve into O-RAN support for ML techniques. The survey then explores challenges in network automation using ML within the O-RAN environment, followed by a review of existing research studies discussing the application of ML algorithms and frameworks for network automation in O-RAN. The survey further discusses research opportunities by identifying important aspects where ML techniques can be beneficial.
Digital metasurfaces have opened unprecedented ways to accomplish novel electromagnetic devices thanks to their simple manipulation of electromagnetic waves. However, metasurfaces leveraging phase-only or amplitude-only modulation restrict the full-functionality control of such devices. Herein, digital graphene-based metasurfaces engineering both wavefront amplitude and phase are proposed for the first time to tackle this challenge in the terahertz (THz) band. The concept and its significance are verified using a reprogrammable multi-focal meta-lens based on a 2/2-bit digital unit cell with independent control of 2-bit states of amplitude and phase individually. Moreover, we introduce a novel method to directly transmit digital information over multiple channels via the reprogrammable digital metasurface. Since these metasurfaces are composed of digital building blocks, the digital information can be directly modulated onto the metasurface by selecting specific digital sequences and sending them to predetermined receivers distributed across the focal points. Following that, a multi-channel THz high-speed communication system and its application to build three-dimensional wireless agile interconnection are demonstrated. The presented method provides a new architecture for wireless communications without using the complicated components of conventional systems. This work motivates versatile meta-devices in many applications envisioned for the THz frequencies, which will play a vital role in modern communications.
In space-air-ground integrated networks (SAGIN), receivers experience diverse interference from both satellite and terrestrial transmitters. The heterogeneous structure of SAGIN poses challenges for traditional interference management (IM) schemes to effectively mitigate interference. To address this, a novel UAV-RIS-aided IM scheme is proposed for SAGIN, where different types of channel state information (CSI), including no CSI, instantaneous CSI, and delayed CSI, are considered. According to the type of CSI, interference alignment, beamforming, and space-time precoding are designed at the satellite and terrestrial transmitter side, while the UAV-RIS is introduced to assist the interference elimination process. Additionally, the degrees of freedom (DoF) obtained by the proposed IM scheme are discussed in depth for the case where the number of antennas on the satellite side is insufficient. Simulation results show that the proposed IM scheme improves the system capacity in different CSI scenarios, and its performance is better than existing IM benchmarks without UAV-RIS.
Distributed mobility management (DMM) has been proposed to address the downsides of centralized mobility management protocols. Standard DMM is designed for flat architectures and always selects the anchor point from the access layer. Numerical analysis is used in this paper to show that dynamic anchor point selection can improve the performance of standard DMM in terms of packet signalling and delivery cost. Next, an SDN-based DMM solution that we refer to as SD-DMM is presented to provide dynamic anchor point selection for hierarchical mobile network architectures. In SD-DMM, the anchor point is dynamically selected for each mobile node by a virtual function implemented as an application on top of the SDN controller, which has a global view of the network. The main advantage of SD-DMM is a decrease in packet delivery cost.
Seamless mobility support is a key technical requirement to motivate the market acceptance of femtocells. The current 3GPP handover procedure may cause a large downlink service interruption time when users move from a macrocell to a femtocell, or vice versa, due to the data forwarding operation. In this letter, a practical scheme is proposed to enable seamless handover by reactively bicasting the data to both the source cell and the target cell after the handover is actually initiated. Numerical results show that the proposed scheme can significantly reduce the downlink service interruption time while still avoiding packet loss, with only limited extra resource requirements compared to the standard 3GPP scheme.
In cognitive radio networks, the licensed frequency bands of the primary users (PUs) are available to the secondary user (SU) provided that it does not cause significant interference to the PUs. In this study, the authors analyse the normalised throughput of the SU with multiple PUs coexisting under any frequency division multiple access communication protocol. The authors consider a cognitive radio transmission where the frame structure consists of sensing and data transmission slots. In order to achieve the maximum normalised throughput of the SU and control the interference level to the legal PUs, the optimal frame length of the SU is found via simulation. In this context, a new analytical formula is derived for the achievable normalised throughput of the SU with multiple PUs under perfect and imperfect spectrum sensing scenarios. Moreover, the impact of imperfect sensing, variable SU frame length and variable PU traffic loads on the normalised throughput is critically investigated. It is shown that the analytical and simulation results are in perfect agreement. The authors' analytical results are useful for determining how to select the frame duration subject to the parameters of the cognitive radio network, such as network traffic load, achievable sensing accuracy and the number of coexisting PUs.
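The sensing-throughput trade-off that makes the frame length optimizable can be sketched in a few lines. The script below assumes a single sensing slot per frame, a fixed false-alarm probability, and exponentially distributed PU idle periods; all numbers are illustrative, not the paper's.

```python
import numpy as np

# Longer frames spend a smaller fraction of time on sensing, but risk
# colliding with a PU that returns during the data slot, so the normalized
# throughput is concave in the frame length T.
tau = 1.0            # sensing duration (ms)
p_fa = 0.1           # false-alarm probability (imperfect sensing)
lam = 0.02           # PU return rate (1/ms): exponential idle periods

for T in (5, 10, 20, 50, 100):
    p_free = np.exp(-lam * (T - tau))   # band stays free over the data slot
    thr = (T - tau) / T * (1 - p_fa) * p_free
    print(f"T = {T:3d} ms -> normalized throughput {thr:.3f}")
```

Running this shows the throughput rising and then falling with T, so an interior optimum exists, consistent with the frame-length optimization described above.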
In this paper, we present a novel random access method for future mobile cellular networks that support machine type communications. Traditionally, such networks establish connections with the devices using a random access procedure; however, massive machine type communication poses several challenges to the design of random access for current systems. State-of-the-art random access techniques rely on predicting the traffic load to adjust the number of users allowed to attempt the random access preamble phase; however, this delays network access and is highly dependent on the accuracy of traffic prediction and fast signalling. We change this paradigm by using the preamble phase itself to estimate traffic and then adapting the network resources to the estimated load. We introduce Preamble Barring, which uses a probabilistic resource separation to allow load estimation in a wide range of load conditions, and combine it with multiple random access responses. This results in a load-adaptive method that can deliver near-optimal performance under any load condition without the need for traffic prediction or signalling, making it a promising solution to avoid network congestion and achieve fast uplink access for massive MTC.
In this paper we present a novel framework for spectral efficiency enhancement on the access link between relay stations and their donor base station through Self Organization (SO) of system-wide base station antenna tilts. The underlying idea of the framework is inspired by SO in biological systems. The proposed solution can improve the spectral efficiency by up to 1 bps/Hz.
Decentralized joint transmit power and beamforming selection for multiple antenna wireless ad hoc networks operating in a multi-user interference environment is considered. An important feature of the considered environment is that altering the transmit beamforming pattern at some node generally creates more significant changes to interference scenarios for neighboring nodes than variation of the transmit power. Based on this premise, a good neighbor algorithm is formulated in the way that, at the sensing node, a new beamformer is selected only if it needs less than the given portion of the transmit power required for the current beamformer. Otherwise, it keeps the current beamformer and achieves the performance target only by means of power adaptation. Equilibrium performance and convergence behavior of the proposed algorithm compared to the best response and regret matching solutions is demonstrated by means of semi-analytic Markov chain performance analysis for small scale and simulations for large scale networks.
A pilot-based spectrum sensing approach in the presence of unknown timing and frequency offset is proposed in this paper. Our major idea is to utilize the second order statistics of the received samples, such as the autocorrelation, to avoid the frequency offset problem. Based on the property of the pilot symbols, where different symbol blocks usually carry the same pilot symbols, some nonzero terms will appear in the frequency domain. To test the proposed approach, computer simulations are carried out for a typical Orthogonal Frequency-Division Multiplexing (OFDM) system. It is observed that the proposed approach always outperforms the classic time domain Neyman-Pearson approach by at least 4 dB. Moreover, the proposed approach achieves the same performance as the weighted linear combination based approach when the transmitted data block size is equal to 2048, while keeping the computational cost small at the same time. Therefore, the proposed approach can achieve a good trade-off between reliability, latency and computational cost when the transmitted data block size of the primary system is larger than 1000.
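The core second-order-statistics idea can be illustrated compactly: identical pilot blocks repeat, so the autocorrelation of the received samples at a lag of one block length has a strong component whose magnitude is unaffected by an unknown CFO (the CFO only rotates its phase). The sketch below uses illustrative parameters and a flat channel, far simpler than the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 256              # samples per OFDM block
n_blk = 64           # observed blocks
snr_dB = -5.0
cfo = 0.137          # unknown normalized CFO (illustrative)

pilot = np.exp(2j * np.pi * rng.uniform(size=N))   # same pilot block each time
tx = np.tile(pilot, n_blk)
n = np.arange(tx.size)
rx = tx * np.exp(2j * np.pi * cfo * n / N)          # CFO rotation
rx += 10 ** (-snr_dB / 20) / np.sqrt(2) * (
    rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size))

# Lag-N autocorrelation: |.| removes the CFO-induced constant phase
stat = np.abs(np.mean(rx[N:] * rx[:-N].conj()))
noise = (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size))
stat0 = np.abs(np.mean(noise[N:] * noise[:-N].conj()))
print(f"test statistic: signal present {stat:.3f} vs noise only {stat0:.3f}")
```

Even at -5 dB SNR the two statistics separate clearly, which is the robustness to frequency offset that the proposal exploits.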
5G is the next cellular generation and is expected to quench the growing thirst for taxing data rates and to enable the Internet of Things. Focused research and standardization work have been addressing the corresponding challenges from the radio perspective while employing advanced features, such as network densification, massive multiple-input-multiple-output antennae, coordinated multi-point processing, intercell interference mitigation techniques, carrier aggregation, and new spectrum exploration. Nevertheless, a new bottleneck has emerged: the backhaul. The ultra-dense and heavy-traffic cells should be connected to the core network through the backhaul, often with extreme requirements in terms of capacity, latency, availability, energy, and cost efficiency. This pioneering survey explains the 5G backhaul paradigm, presents a critical analysis of legacy, cutting-edge solutions, and new trends in backhauling, and proposes a novel consolidated 5G backhaul framework. A new joint radio access and backhaul perspective is proposed for the evaluation of backhaul technologies, which reinforces the belief that no single solution can solve the holistic 5G backhaul problem. This paper also reveals hidden advantages and shortcomings of backhaul solutions, which are not evident when backhaul technologies are inspected as an independent part of the 5G network. This survey is key in identifying essential catalysts that are believed to jointly pave the way to solving the beyond-2020 backhauling challenge. Lessons learned, unsolved challenges, and a new consolidated 5G backhaul vision are thus presented.
This paper proposes a novel method to enable mode and polarization OAM multiplexing at four different frequencies (30, 70, 90, and 110 GHz) through a reflectarray (RA) antenna. The suggested RA delivers almost matched electromagnetic (EM) responses for x- and y-polarized incident waves for both reflected OAM modes of l=2 . The designed antenna can considerably improve channel capacity and spectrum efficiency.
This paper presents the measurement results and analysis for outdoor wireless propagation channels at 26 GHz over a 2 GHz bandwidth for two receiver antenna polarization modes. The angular and wideband properties of directional and virtually omni-directional channels, such as angular spread, root-mean-square delay spread and coherence bandwidth, are analyzed. The results indicate that reflections can make a significant contribution in some realistic scenarios, increasing the angular and delay spreads and reducing the coherence bandwidth of the channel. The analysis in this paper also shows that using a directional transmission can result in an almost frequency-flat fading channel over the measured 2 GHz bandwidth, which consequently has a major impact on system design choices such as beamforming and transmission numerology.
Frequent handovers (HOs) in dense small cell deployment scenarios could lead to a dramatic increase in signalling overhead. This suggests a paradigm shift towards a signalling-conscious cellular architecture with intelligent mobility management. In this direction, a futuristic radio access network with a logical separation between control and data planes has been proposed in the research community. It aims to overcome limitations of the conventional architecture by providing high data rate services under the umbrella of a coverage layer in a dual connection mode. This approach enables signalling-efficient HO procedures, since the control plane remains unchanged when the users move within the footprint of the same umbrella. Considering this configuration, we propose a core-network-efficient radio resource control (RRC) signalling scheme for active state HO and develop an analytical framework to evaluate its signalling load as a function of network density, user mobility and session characteristics. In addition, we propose an intelligent HO prediction scheme with advance resource preparation in order to minimise the HO signalling latency. Numerical and simulation results show promising gains in terms of reduction in HO latency and signalling load as compared with conventional approaches.
Adequate and uniform network coverage provision is one of the main objectives of cellular service providers. Additionally, the densification of cells exacerbates coverage and service provision challenges, particularly at the cell-edges. In this paper, we present a new approach of cell-sweeping-based Base Stations (BSs) deployments in cellular Radio Access Networks (RANs) where the coverage is improved by enhancing the cell-edge performance. In essence, the concept of cell-sweeping rotates/sweeps the sectors of a site in azimuth continuously/discretely resulting in near-uniform distribution of the signal-to-interference-plus-noise ratio (SINR) around the sweeping site. This paper investigates the proposed concept analytically by deriving expressions for the PDF/CDF of SINR and achievable rate; and with the help of system-level simulations, it shows that the proposed concept can provide throughput gains of up to 125% at the cell-edge. Then, using a link-budget analysis, it is shown that the maximum allowable path loss (MAPL) increases by 2.1 dB to 4.1 dB corresponding to the gains in wideband SINR and post-equalized SINR, respectively. This increase in MAPL can be translated to cell-radius/area with the help of the Okumura-Hata propagation model and results in cell-coverage area enhancement by 30% to 66% in a Typical Urban cell deployment scenario.
This paper presents a novel approach to load balancing in ad hoc networks utilizing the properties of quantum game theory. The approach benefits from the instantaneous and information-less capability of entangled particles to synchronize load-balancing strategies in ad hoc networks. The Quantum Load Balancing (QLB) algorithm proposed by this work is implemented on top of OLSR as the baseline routing protocol; its performance is analyzed against the baseline OLSR, and considerable gain is reported regarding some of the main QoS metrics such as delay and jitter. Furthermore, it is shown that the QLB algorithm supports a solid stability gain in terms of throughput, which stands as a proof of concept for the load-balancing properties of the proposed theory.
The aim of this paper is to handle the multifrequency synchronization problem inherent in orthogonal frequency-division multiple access (OFDMA) uplink communications, where the carrier frequency offset (CFO) for each user may be different and can hardly be compensated at the receiver side. Our major contribution lies in the development of a novel OFDM receiver that is resilient to unknown random CFO thanks to the use of a CFO-compensator bank. Specifically, the whole CFO range is evenly divided into a set of sub-ranges, each supported by a dedicated CFO compensator. Given that the optimization of the CFO compensator is an NP-hard problem, a deep-learning approach is proposed to yield a good sub-optimal solution. It is shown that the proposed receiver is able to offer inter-carrier-interference-free performance for OFDMA systems operating at a wide range of SNRs.
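A toy version of the compensator-bank front end is sketched below: the CFO range is split into sub-ranges, each branch de-rotates the received samples by its sub-range centre, and a simple per-user in-band energy metric (standing in for the paper's deep-learning selection) picks the branch. The sizes, CFO value, and selection rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

N = 64
user_sc = np.arange(8, 16)              # subcarriers allocated to this user
cfo_true = 2.3                          # unknown CFO, in subcarrier spacings

# One OFDM symbol from the user, received with CFO and mild noise
x = np.zeros(N, complex)
x[user_sc] = rng.choice([1.0, -1.0], size=user_sc.size)
s = np.fft.ifft(x) * np.sqrt(N)
n = np.arange(N)
rx = s * np.exp(2j * np.pi * cfo_true * n / N)
rx += 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Compensator bank: K branches covering [-4, 4) subcarrier spacings
K = 8
centres = np.linspace(-4, 3, K) + 0.5   # sub-range centres: -3.5, ..., +3.5
energies = []
for f in centres:
    y = np.fft.fft(rx * np.exp(-2j * np.pi * f * n / N)) / np.sqrt(N)
    energies.append(np.sum(np.abs(y[user_sc]) ** 2))   # in-band energy
best = int(np.argmax(energies))
print(f"true CFO {cfo_true:+.2f}, selected branch centre {centres[best]:+.2f}")
```

The selected branch leaves only a small residual CFO to be absorbed by the subsequent per-branch processing, which is the intuition behind dividing the range into sub-ranges.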
Softwarization has been deemed a key feature of 5G networking in the sense that the support of network functions migrates from traditional hardware-based solutions to software-based ones. While the main rationale of 5G softwarization is to achieve a high degree of flexibility/programmability as well as a reduction of the total cost of ownership (TCO), it remains a significant open issue how to strike a desirable balance between system openness and necessary standardization in the context of 5G. The aim of this article is to systematically survey relevant enabling technologies, platforms and tools for 5G softwarization, together with ongoing standardization activities at relevant SDOs (Standards Developing Organizations). Based on these, we aim to shed light on the future evolution of 5G technologies in terms of softwarization versus standardization requirements and options.
Multi-User Multiple-Input, Multiple-Output (MU-MIMO) and massive-MIMO (mMIMO) have been central technologies in the evolution of the latest mobile generations, since they promise substantial throughput increase and enhanced connectivity capabilities. Still, the type of signal processing that will unlock the full potential of MU-MIMO developments has not yet been determined. Existing realizations typically employ linear processing that, despite its practical benefits, can leave capacity and connectivity gains unexploited. On the other hand, traditional non-linear processing solutions (e.g., sphere decoders) promise improved throughput and connectivity capabilities but can be computationally impractical, with exponentially increasing computational complexity, and with their implementability in 5G-NR systems still unverified. At the same time, emerging Open Radio Access Network (Open-RAN) designs call for physical layer (PHY) processing solutions that are also practical in terms of realization, even when implemented purely in software. In this work, we present a first, purely software-based, Open-RAN-compliant, 5G-NR MIMO PHY that operates in real-time and over-the-air, encompassing both linear and non-linear MIMO processing, and achieving support for 8 concurrently transmitted MIMO streams at a 10 MHz bandwidth with just 12 processing cores. Here, we not only demonstrate that implementing non-linear processing is feasible in software within the stringent real-time latency requirements of 5G-NR, but we also compare it side-by-side in a real-time and over-the-air environment against traditional linear methods. We show that the gains of non-linear processing include substantially enhanced throughput with insignificant computational power overhead, the halving of the base-station antennas without performance degradation, and overloading factors of up to 300%.
The aim of this letter is to exhibit some advantages of using real constellations in large multi-user (MU) MIMO systems. It is shown that a widely linear zero-forcing (WLZF) receiver with M-ASK modulation enjoys a spatial-domain diversity gain, which linearly increases with the MIMO size even in fully- and over-loaded systems. Using the decision of WLZF as the initial state, the likelihood ascent search (LAS) achieves near-optimal BER performance in fully-loaded large MIMO systems. Interestingly, for coded systems, WLZF shows a much closer BER to that of WLZF-LAS with a gap of only 0.9-2 dB in SNR.
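The widely linear trick itself is compact enough to sketch: because M-ASK symbols are real, stacking the real and imaginary parts of the observation doubles the effective receive dimension, which is where the diversity gain comes from. The sizes and noise level below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

Nr, Nt = 16, 16                              # fully loaded: users = antennas
ask = np.array([-3.0, -1.0, 1.0, 3.0])       # 4-ASK alphabet
x = rng.choice(ask, size=Nt)                 # real transmitted symbols

H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))
y = H @ x + noise

# Widely linear model: [Re(y); Im(y)] = [Re(H); Im(H)] x + real noise,
# i.e. a (2*Nr) x Nt real system even though the channel is complex.
H_wl = np.vstack([H.real, H.imag])
y_wl = np.concatenate([y.real, y.imag])
x_hat = np.linalg.lstsq(H_wl, y_wl, rcond=None)[0]   # WLZF = real LS solve
x_dec = ask[np.argmin(np.abs(x_hat[:, None] - ask[None, :]), axis=1)]
print("symbol errors:", int(np.sum(x_dec != x)))
```

In a conventional (strictly linear) ZF receiver the fully-loaded system would leave no excess dimensions; the real-valued stacking is what restores them.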
The Datagram Congestion Control Protocol (DCCP) has recently been proposed as a new transport protocol, suitable for use by applications such as multimedia streaming. Wireless mesh networks have promising commercial potential for a large variety of applications. In this paper, we evaluate the performance of DCCP with TCP Friendly Rate Control (TFRC) in wireless mesh networks using ns2 simulations, in terms of fairness and throughput smoothness. Our results show that in wireless mesh networks DCCP shares the limited wireless channel bandwidth fairly with competing flows and provides better throughput smoothness than TCP flows in isolation, i.e., with no competing flows. However, DCCP loses its ability to maintain this smoothness for streaming media applications when there are competing flows in the network.
IEEE 802.11ax Spatial Reuse (SR) is a new category in the IEEE 802.11 family, aiming at improving the spectrum efficiency and the network performance in dense deployments. The main, and perhaps the only, SR technique in that amendment is the Basic Service Set (BSS) Color. It aims at increasing the number of concurrent transmissions in a specific area, based on a newly defined Overlapping BSS/Preamble-Detection (OBSS/PD) threshold and the Received Signal Strength Indication (RSSI) from Overlapping BSSs (OBSSs). In this paper, we propose a Control OBSS/PD Sensitivity Threshold (COST) algorithm for adjusting the OBSS/PD threshold based on the interference level and the RSSI from the associated recipient(s). In contrast to the Dynamic Sensitivity Control (DSC) algorithm that was proposed for setting OBSS/PD, COST is fully aware of any changes in OBSSs and can be applied to any IEEE 802.11ax node. Simulation results in various scenarios show a clear performance improvement of up to 57% gain in throughput over a conservative fixed OBSS/PD for the legacy BSS Color and DSC.
The vision, as we move to future wireless communication systems, embraces diverse qualities targeting significant enhancements from the spectrum to the user experience. Newly-defined air-interface features, such as large numbers of base station antennas and computationally complex physical layer approaches, come with a non-trivial development effort, especially when scalability and flexibility need to be factored in. In addition, testing those features without commercial, off-the-shelf equipment has a high deployment, operational and maintenance cost. On one hand, industry-hardened solutions are inaccessible to the research community due to restrictive legal and financial licensing. On the other hand, research-grade real-time solutions are either lacking versatility, modularity and a complete protocol stack, or, for those that are full-stack and modular, only the most elementary transmission modes are on offer (e.g., very low number of base station antennas). Aiming to address these shortcomings towards an ideal research platform, this paper presents SWORD, a SoftWare Open Radio Design that is flexible, open for research, low-cost, scalable and software-driven, able to support advanced large and massive Multiple-Input Multiple-Output (MIMO) approaches. Starting with just a single-input single-output air-interface and commercial off-the-shelf equipment, we create a software-intensive baseband platform that, together with an acceleration/profiling framework, can serve as a research-grade base station for exploring advancements towards future wireless systems and beyond.
Performance of a next generation OFDM/OFDMA based Distributed Cellular Network (ODCN), where no cooperation-based interference management schemes are used, depends on four major factors: 1) the spectrum reuse factor, 2) the number of sectors per site, 3) the number of relay stations per site and 4) the modulation and coding efficiency achievable through link adaptation. The combined effect of these factors on the overall performance of a Deployment Architecture (DA) has not been studied in a holistic manner. In this paper we provide a framework to characterize the performance of various DAs by deriving two novel performance metrics for 1) spectral efficiency and 2) fairness among users. These metrics are designed to include the effect of all four contributing factors. We evaluate these metrics for a wide set of DAs through extensive system-level simulations. The results provide a comparison of various DAs for both cellular and relay-enhanced cellular systems in terms of the spectral efficiency and fairness they offer, and also provide an interesting insight into the tradeoff between the two performance metrics. Numerical results show that, in the interference-limited regime, the DAs with the highest spectral efficiency are not necessarily those that resort to full frequency reuse. In fact, a frequency reuse of 3 with 6 sectors per site is spectrally more efficient than full frequency reuse with 3 sectors. In the case of a relay-enhanced ODCN, a DA with full frequency reuse, six sectors and 3 relays per site is spectrally more efficient and can yield around 170% higher spectral efficiency compared to the counterpart DA without relay stations.
It has been claimed that filter bank multicarrier (FBMC) systems suffer only negligible performance loss in moderately dispersive channels in the absence of guard time protection between symbols. However, a theoretical and systematic explanation/analysis of this statement is missing in the literature to date. In this paper, based on one-tap minimum mean square error (MMSE) and zero-forcing (ZF) channel equalization, the impact of a doubly dispersive channel on the performance of FBMC systems is analyzed in terms of the mean square error (MSE) of received symbols. Based on this analytical framework, we prove that the circular convolution property between symbols and the corresponding channel coefficients in the frequency domain holds only loosely, with a set of inaccuracies. To facilitate analysis, we first model the FBMC system in a vector/matrix form and derive the estimated symbols as a sum of desired signal, noise, inter-symbol interference (ISI), inter-carrier interference (ICI), inter-block interference (IBI) and estimation bias in the MMSE equalizer. These terms are derived one-by-one and expressed as functions of the channel parameters. The numerical results reveal that in harsh channel conditions, e.g., with large Doppler spread or channel delay spread, the FBMC system performance may be severely deteriorated and an error floor will occur.
Quantization is the defining characteristic of analogue-to-digital converters (ADCs) in massive MIMO systems. The design of the quantization function, or quantization thresholds, relates to the quantization step, which is the factor that adapts to changes in transmit power and noise variance. Since the objective of using low-resolution ADCs is to reduce the cost of massive MIMO, we question whether an adaptive-threshold quantization function is actually necessary. It is found that when maximum-likelihood (ML) detection is employed, fixing the quantization thresholds of low-resolution ADCs does not cause significant performance loss. Moreover, such a fixed-threshold quantization function does not require any information about the signal power, which can reduce the hardware cost of ADCs. Simulations are carried out in this paper to compare fixed-threshold and adaptive-threshold quantization with respect to various factors.
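A minimal sketch of the fixed- versus adaptive-threshold comparison is given below, using a uniform mid-rise quantizer. It only exposes the raw quantization error across received powers; the point made above is that ML detection remains robust even when the fixed step is mismatched. All values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def quantize(x, step, bits=3):
    """Uniform mid-rise quantizer with 2**bits levels."""
    levels = 2 ** bits
    idx = np.clip(np.floor(x / step) + levels // 2, 0, levels - 1)
    return (idx - levels // 2 + 0.5) * step

for rx_power_dB in (-10, 0, 10):
    x = 10 ** (rx_power_dB / 20) * rng.standard_normal(100_000)
    q_fix = quantize(x, step=0.5)             # fixed step, power-agnostic
    q_ada = quantize(x, step=x.std() / 2)     # step adapted to signal power
    mse_fix = np.mean((x - q_fix) ** 2)
    mse_ada = np.mean((x - q_ada) ** 2)
    print(f"{rx_power_dB:+3d} dB: MSE fixed {mse_fix:.4f}, adaptive {mse_ada:.4f}")
```

The fixed quantizer shows a larger raw error away from its design power, yet dispensing with the power estimation is what removes the ADC hardware overhead discussed above.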
A high gain (20 dBi) Leaky-Wave Antenna (LWA) is presented at 26 GHz with beam steering capabilities (44°) for high data throughput in millimeter-wave (mm-wave) 5G systems, employing a tunable phase-shifting High Impedance Surface (HIS) exhibiting low loss.
Self-supervised monocular depth and visual odometry (VO) are often cast as coupled tasks: accurate depth contributes to precise pose estimation and vice versa. Existing architectures typically exploit stacked convolution layers and long short-term memory (LSTM) units to capture long-range dependencies. However, their intrinsic locality hinders the model from attaining the expected performance gain. In this article, we propose a Transformer-based architecture, named Transformer-based self-supervised monocular depth and VO (TSSM-VO), to tackle these problems. It comprises two main components: 1) a depth generator that leverages the powerful capability of multihead self-attention (MHSA) to model long-range spatial dependencies and 2) a pose estimator built upon a Transformer to learn long-range temporal correlations of image sequences. Moreover, a new data augmentation loss based on structural similarity (SSIM) is introduced to further constrain the structural similarity between the augmented depth and the augmented predicted depth. Rigorous ablation studies and exhaustive performance comparisons on the KITTI and Make3D datasets demonstrate the superiority of TSSM-VO over other self-supervised methods. We expect that TSSM-VO will enhance the ability of intelligent agents to understand their surrounding environments.
We focus on signal detection for large quasi-symmetric (LQS) multiple-input multiple-output (MIMO) systems, where the numbers of both service (M) and user (N) antennas are large and N/M → 1. It is challenging to achieve maximum-likelihood detection (MLD) performance with square-order complexity due to the ill-conditioned channel matrix. In the emerging MIMO paradigm termed the extremely large aperture array, the channel matrix can be even more ill-conditioned due to spatial non-stationarity. In this paper, projected Jacobi (PJ) is proposed for signal detection in (non-)stationary LQS-MIMO systems. It is theoretically and empirically demonstrated that PJ can achieve MLD performance, even when N/M = 1. Moreover, PJ has square-order complexity in N and supports parallel computation. The main idea of PJ is to add a projection step and to set a (quasi-)orthogonal initialization for the classical Jacobi iteration. Furthermore, the symbol error rate (SER) of PJ is mathematically derived and shown to be tight against simulation results.
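The flavour of the method can be sketched with a damped Jacobi iteration plus a box projection onto the real-decomposed QPSK alphabet. Note that the paper's PJ relies on a dedicated projection and (quasi-)orthogonal initialization to reach MLD performance at N/M = 1; the sketch below uses a mildly loaded system so that this plain variant converges, and all sizes and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

M, N = 64, 32                                 # service / user antennas
sigma = 0.05
x = rng.choice([-1.0, 1.0], size=2 * N)       # real-decomposed QPSK symbols
Hc = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
H = np.block([[Hc.real, -Hc.imag], [Hc.imag, Hc.real]])  # real-valued model
y = H @ x + sigma * rng.standard_normal(2 * M)

A = H.T @ H + sigma**2 * np.eye(2 * N)        # regularized Gram matrix
b = H.T @ y
d = np.diag(A)

x_hat = b / d                                 # matched-filter-like initial state
omega = 0.5                                   # damping for stability
for _ in range(50):
    x_hat = x_hat + omega * (b - A @ x_hat) / d   # damped Jacobi update
    x_hat = np.clip(x_hat, -1.0, 1.0)             # projection step
print("symbol errors:", int(np.sum(np.sign(x_hat) != x)))
```

Each iteration costs only matrix-vector products, which is what keeps the overall complexity square-order and parallelizable.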
This letter proposes a novel carrier frequency offset (CFO) estimation method for generalized multicarrier code-division multiple access systems in unknown frequency-selective channels utilizing hidden pilots. It is established that the CFO is identifiable in the frequency domain by employing cyclic statistics (CS) and linear regression (LR) algorithms. We show that the CS-based estimator is capable of reducing the normalized CFO (NCFO) estimation error to a small value. Then, the LR-based estimator can be employed to offer a more accurate estimate by removing the residual quantization error left by the CS-based estimator. Simulation results are presented together with the theoretical analysis, and a good match between them is observed.
In this paper, a novel spatially non-stationary fading channel model is proposed for multiple-input multiple-output (MIMO) systems with an extremely-large aperture service-array (ELAA). The proposed model incorporates three key factors which cause the channel spatial non-stationarity: 1) link-wise path-loss; 2) shadowing effect; 3) line-of-sight (LoS)/non-LoS state. With appropriate parameter configurations, the proposed model can be used to generate computer-simulated channel data that matches published measurement data from practical ELAA-MIMO channels. Given such appealing results, the proposed fading channel model is employed to study the cumulative distribution function (CDF) of the ELAA-MIMO channel capacity. For all of our studied scenarios, it is unveiled that the ELAA-MIMO channel capacity obeys the skew normal distribution. Moreover, the channel capacity is also found to be close to the Gaussian or Weibull distribution, depending on the users' geo-location and distribution. More specifically, for single-user equivalent scenarios or multiuser scenarios with short user-to-ELAA distances (e.g., 1 m), the channel capacity is close to the Gaussian distribution; for the others, it is close to the Weibull distribution. Finally, the proposed channel model is also employed to study the impact of channel spatial non-stationarity on linear MIMO receivers through computer simulations. The proposed fading channel model is available at https://github.com/ELAA-MIMO/non-stationary-fading-channel-model.
Millimeter wave (mmWave) communication is a promising technology in future wireless networks because of its wide bandwidths that can achieve high data rates. However, high beam directionality at the transceiver is needed due to the large path loss at mmWave. Therefore, in this paper, we investigate the beam alignment and power allocation problem in a non-orthogonal multiple access (NOMA) mmWave system. Different from the traditional beam alignment problem, we consider the NOMA scheme during the beam alignment phase when two users are at the same or a close angular direction from the base station. Next, we formulate an optimization problem of joint beamwidth selection and power allocation to maximize the sum rate, where the quality of service (QoS) of the users and total power constraints are imposed. Since it is difficult to directly solve the formulated problem, we start by fixing the beamwidth. Next, we transform the power allocation optimization problem into a convex one, and a closed-form solution is derived. In addition, a one-dimensional search algorithm is used to find the optimal beamwidth. Finally, simulation results are conducted to compare the performance of the proposed NOMA-based beam alignment and power allocation scheme with that of the conventional OMA scheme.
The fifth-generation (5G) new radio (NR) cellular system promises a significant increase in capacity with reduced latency. However, the 5G NR system will be deployed alongside legacy cellular systems such as the long-term evolution (LTE). Scarcity of spectrum resources in low frequency bands motivates adjacent-/co-carrier deployments. This approach comes with a wide range of practical benefits and improves spectrum utilization by re-using the LTE bands. However, such deployments restrict the 5G NR flexibility in terms of frame allocations, to avoid the most critical mutual adjacent-channel interference. This in turn prevents achieving the promised 5G NR latency figures. In this paper, we tackle this issue by proposing to use the mini-slot uplink feature of 5G NR to perform uplink acknowledgement and feedback to reduce the frame latency, with selective blind retransmission to overcome the effect of interference. Extensive system-level simulations under realistic scenarios show that the proposed solution can reduce the peak frame latency for feedback and acknowledgment by up to 33% and for retransmission by up to 25%, at a marginal cost of up to a 3% reduction in throughput.
Metamaterial-based antenna designs, such as the Reconfigurable Intelligent Surface (RIS), are expected to play a significant role in next generation communication networks (i.e., 6G) because of their ability to improve wireless communication environments. This letter investigates the ergodic capacity of RIS-aided multiple input multiple output (MIMO), a.k.a. MIMO-RIS, systems over Rayleigh-Rician fading channels. We consider that the transmitter-RIS and receiver-RIS links experience Rayleigh and Rician fading, respectively. An exact analytical expression of the ergodic capacity is derived based on closed-form expressions of the probability density function (pdf) of the cascaded channel. Moreover, a high-SNR expression and a large-RIS approximation are provided to unveil further system insights. Simulation results validate the correctness of our expressions and show the impact of the Rician fading and the number of RIS elements on the capacity.
Energy efficiency (EE) is undoubtedly an important criterion for designing power-limited systems, and yet in a context of global energy saving, its relevance for power-unlimited systems is steadily growing. Equally, resource allocation is a well-known method for improving the performance of cellular systems. In this paper, we propose an EE optimization framework for the downlink of planar cellular systems over frequency-selective channels. Relying on this framework, we design two novel low-complexity resource allocation algorithms for the single-cell and coordinated multi-cell scenarios, which are EE-optimal and EE-suboptimal, respectively. We then utilize our algorithms for comparing the EE performance of the classic non-coordinated, orthogonal and coordinated multi-cell approaches in realistic power and system settings. Our results show that coordination can be a simple and effective method for improving the EE of cellular systems, especially for medium to large cell sizes. Indeed, by using a coordinated rather than a non-coordinated resource allocation approach, the per-sector energy consumption and transmit power can be reduced by up to 15% and more than 90%, respectively.
Towards 6G networks, emerging applications such as virtual reality (VR), Industry 4.0 and automated driving demand mobile edge computing (MEC) techniques to offload computing tasks to nearby servers, which however causes fierce competition with traditional communication services. On the other hand, introducing millimeter wave (mmWave) communication can significantly improve the offloading capability of MEC, enabling low latency and high throughput. To this end, this paper investigates the resource management for the offload transmission of an mmWave MEC system, considering the data transmission demands of both communication-oriented users (CM-UEs) and computing-oriented users (CP-UEs). In particular, the joint consideration of user pairing, beamwidth allocation and power allocation is formulated as a multi-objective problem (MOP), which includes minimizing the offloading delay of CP-UEs and maximizing the transmission rate of CM-UEs. By using the ε-constraint approach, the MOP is converted into a single-objective optimization problem (SOP) without losing Pareto optimality, and a three-stage iterative resource allocation algorithm is then proposed. Our simulation results show that the gap between the Pareto front generated by the three-stage iterative resource allocation algorithm and the real Pareto front is less than 0.16%. Further, the proposed algorithm, with much lower complexity, can achieve performance similar to the benchmark scheme of NSGA-II, while significantly outperforming the other traditional schemes.
This paper proposes a reconfigurable wideband artificial magnetic conductor (AMC), insensitive to the tilt-angle of linear polarization, that offers an overall AMC bandwidth of 550 MHz from 3.55 GHz to 4.1 GHz. The operating frequency of the proposed AMC can be altered by varying the reverse biasing of the varactor diodes. The proposed AMC is evaluated for variations in the tilt-angle of linear polarization and also as a reflector for a standard bowtie antenna due to its wideband characteristics. The results show its voltage-controlled wideband operation for obtaining a directional radiation pattern suitable for a typical wideband 5G base station antenna.
A polarization-insensitive circular reflectarray antenna (RA) for long-distance wireless communications is investigated. By combining patches, dipoles, and rings, a polarization-insensitive unit cell is achieved. With a phase variation of around 314° between 30 GHz and 32 GHz, a circular reflectarray with a radius of 400 mm is built. Simulation results indicate a maximum realized gain of 27.6 dB at 30 GHz.
It is well-established that transmitting at full power is the most spectrally efficient power allocation strategy for point-to-point (P2P) multi-input multi-output (MIMO) systems; however, can this strategy be energy efficient as well? In this letter, we address the most energy-efficient power allocation policy for symmetric P2P MIMO systems by accurately approximating, in closed form, their optimal transmit power when a realistic MIMO power consumption model is considered. In most cases, being energy efficient implies a reduction in transmit and overall consumed powers at the expense of a lower spectral efficiency.
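The trade-off is easy to visualize numerically. The sketch below sweeps the transmit power of a symmetric P2P MIMO link under a commonly used power consumption model (amplifier efficiency plus per-antenna circuit power); the crude rate expression and every constant are illustrative assumptions rather than the letter's exact model.

```python
import numpy as np

M = 4                 # antennas per side (symmetric MIMO)
g = 1e-8              # average channel gain (illustrative)
N0B = 1e-13           # noise power (W)
eta = 0.35            # power-amplifier efficiency
P_circ = 0.1 * M      # circuit power (W), scaling with antenna count
B = 10e6              # bandwidth (Hz)

P = np.logspace(-4, 1, 2000)                      # transmit power sweep (W)
rate = B * M * np.log2(1 + g * P / (M * N0B))     # crude M-stream rate model
ee = rate / (P / eta + P_circ)                    # energy efficiency (bit/J)
i = int(np.argmax(ee))
print(f"EE-optimal Ptx = {P[i]:.3f} W: "
      f"EE {ee[i]/1e6:.1f} Mbit/J, rate {rate[i]/1e6:.1f} Mbit/s")
print(f"full power {P[-1]:.0f} W: "
      f"EE {ee[-1]/1e6:.1f} Mbit/J, rate {rate[-1]/1e6:.1f} Mbit/s")
```

The EE-optimal power sits well below full power, at the cost of a lower rate, mirroring the letter's conclusion.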
Clustering algorithms have been extensively applied for energy conservation in wireless sensor networks (WSNs). Cluster-heads (CHs) play an important role and drain energy more rapidly than other member nodes. Numerous mechanisms to optimize CH selection and cluster formation during the set-up phase have been proposed for extending the stable operation period of the network, i.e., until any node depletes its energy. However, the existing mechanisms assume that the traffic load contributed by each node is the same; in other words, the same amount of data is sent to the CH by each member node during each scheduled round. This paper assumes that nodes contribute traffic load at different rates, and consequently proposes an energy-efficient clustering algorithm that considers both the residual node energy and the traffic load contribution of each node during the set-up phase. The proposed algorithm gives nodes with more residual energy and less traffic load contribution a greater chance of becoming CHs. Furthermore, clusters are adaptively organized in a way that limits the deviation of the ratio between the total cluster energy and the total cluster traffic load (ETRatio), in order to balance the energy usage among the clusters. Performance evaluation shows that the proposed algorithm extends the stable operation period of the network significantly.
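One plausible realization of the set-up-phase weighting (not necessarily the paper's exact formula) is sketched below: each node's cluster-head election probability is scaled up by its residual energy and down by its traffic load contribution.

```python
import numpy as np

rng = np.random.default_rng(8)

n_nodes, p_ch = 100, 0.1                      # desired CH fraction per round
energy = rng.uniform(0.2, 1.0, n_nodes)       # residual energy (illustrative)
load = rng.uniform(0.1, 1.0, n_nodes)         # traffic load contribution

# Favour high-energy, low-load nodes; normalize so the average stays at p_ch
weight = (energy / energy.mean()) / (load / load.mean())
prob = np.clip(p_ch * weight, 0, 1)
is_ch = rng.uniform(size=n_nodes) < prob      # probabilistic CH election
print(f"{is_ch.sum()} cluster-heads elected")
print(f"mean CH energy {energy[is_ch].mean():.2f} vs all {energy.mean():.2f}")
print(f"mean CH load   {load[is_ch].mean():.2f} vs all {load.mean():.2f}")
```

The elected CHs end up with above-average energy and below-average load, which is the balancing effect the set-up phase aims for.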
In this paper, we consider multi-relay cooperative networks for the Rayleigh fading channel, where each relay, upon receiving its own channel observation, independently compresses it and forwards the compressed information to the destination. Although the compression at each relay is distributed using Wyner-Ziv coding, there exists an opportunity for jointly optimizing compression at multiple relays to maximize the achievable rate. Considering Gaussian signalling, a primal optimization problem is formulated accordingly. We prove that the primal problem can be solved by resorting to its Lagrangian dual problem and an iterative optimization algorithm is proposed. The analysis is further extended to a hybrid scheme, where the employed forwarding scheme depends on the decoding status of each relay. The relays that are capable of successful decoding perform decode-and-forward and the rest conduct distributed compression. The hybrid scheme allows the cooperative network to adapt to the changes of the channel conditions and benefit from an enhanced level of flexibility. Numerical results from both spectrum and energy efficiency perspectives show that the joint optimization improves efficiency of compression and identify the scenarios where the proposed schemes outperform the conventional forwarding schemes. The findings provide important insights into the optimal deployment of relays in a realistic cellular network.
Energy efficiency (EE) is emerging as a key design criterion for both power-limited applications, i.e. mobile devices, and power-unlimited applications, i.e. the cellular network. Meanwhile, resource allocation is a well-known technique for improving the performance of communication systems. In this paper, we design a simple and optimal EE-based resource allocation method for the orthogonal multi-user channel by adapting the transmit power and rate to the channel condition such that the energy-per-bit consumption is minimized. We present our EE framework, i.e. the EE metric and node power consumption model, and utilize it to formulate our EE-based optimization problem with or without constraints. In both cases, we derive explicit formulations of the optimal energy-per-bit consumption as well as the optimal power and rate for each user. Our results indicate that EE-based allocation can substantially reduce the consumed power and increase the EE in comparison with spectral-efficiency-based allocation.
In this paper, using stochastic geometry, we investigate the average energy efficiency (AEE) of the user terminal (UT) in the uplink of a two-tier heterogeneous network (HetNet), where the two tiers are operated on separate carrier frequencies. In such a deployment, a typical UT must periodically perform inter-frequency small cell discovery (ISCD) process in order to discover small cells in its neighborhood and benefit from the high data rate and traffic offloading opportunity that small cells present. We assume that the base stations (BSs) of each tier and UTs are randomly located and we derive the average ergodic rate and UT power consumption, which are later used for our AEE evaluation. The AEE incorporates the percentage of time a typical UT missed small cell offloading opportunity as a result of the periodicity of the ISCD process. In addition to this, the additional power consumed by the UT due to the ISCD measurement is also included. Moreover, we derive the optimal ISCD periodicity based on the UT’s average energy consumption (AEC) and AEE. Our results reveal that ISCD periodicity must be selected with the objective of either minimizing UT’s AEC or maximizing UT’s AEE.
Enormous amounts of dynamic observation and measurement data are collected from sensors in Wireless Sensor Networks (WSNs) for Internet of Things (IoT) applications such as environmental monitoring. However, continuous transmission of the sensed data requires high energy consumption. Data transmission between sensor nodes and cluster heads (sink nodes) consumes much more energy than data sensing in WSNs. One way of reducing such energy consumption is to minimise the number of data transmissions. In this paper, we propose an Adaptive Method for Data Reduction (AM-DR). Our method is based on a convex combination of two decoupled Least-Mean-Square (LMS) windowed filters with differing sizes for estimating the next measured values both at the source and the sink node, such that sensor nodes have to transmit only those sensed values that deviate significantly (beyond a pre-defined threshold) from the predicted values. Experiments conducted on real-world data show that our approach achieves up to 95% communication reduction while retaining high accuracy (i.e., predicted values deviate by at most ±0.5 from the real data values).
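A compact AM-DR-style sketch follows: two LMS predictors with short and long windows are convexly combined, and a sample is transmitted only when the sink's reproducible prediction would be off by more than a threshold. The filter lengths, step size, threshold, synthetic signal, and the fixed combination weight are all illustrative assumptions (the actual method adapts the combination).

```python
import numpy as np

rng = np.random.default_rng(9)

L1, L2, mu, lam, thresh = 4, 12, 0.1, 0.5, 0.5
w1, w2 = np.zeros(L1), np.zeros(L2)

# Synthetic slowly varying sensor signal (e.g., temperature readings)
t = np.arange(2000)
signal = 20 + 3 * np.sin(2 * np.pi * t / 500) + 0.1 * rng.standard_normal(t.size)

rec = np.zeros_like(signal)       # sequence as reconstructed at the sink
rec[:L2] = signal[:L2]            # initial samples are sent uncompressed
sent = L2
for k in range(L2, signal.size):
    h1, h2 = rec[k - L1:k][::-1], rec[k - L2:k][::-1]
    pred = lam * (w1 @ h1) + (1 - lam) * (w2 @ h2)
    if abs(signal[k] - pred) > thresh:    # prediction too poor: transmit
        rec[k] = signal[k]
        sent += 1
    else:                                 # sink keeps the predicted value
        rec[k] = pred
    err = rec[k] - pred                   # identical at source and sink
    w1 += mu * err * h1 / (h1 @ h1 + 1e-9)   # normalized LMS updates
    w2 += mu * err * h2 / (h2 @ h2 + 1e-9)
print(f"transmitted {sent}/{signal.size} samples "
      f"({100 * (1 - sent / signal.size):.1f}% communication reduction)")
```

Because source and sink update their filters from the same reconstructed sequence, they stay synchronized without any extra signalling, which is what makes the threshold rule safe to apply.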
The doubly differential modem turns out to be a promising technology for coping with unknown frequency offsets, at the cost of a signal-to-noise ratio (SNR) penalty. In this paper, we propose to compensate for the SNR loss by employing a detection-forward cooperative relay. The receiver can employ two kinds of combiners to attain the achievable spatial diversity gain. The performance is carefully analyzed for the Rayleigh-fading channel. It is shown that the SNR compensation is achieved in the high-SNR range.
Virtual multiple-input-multiple-output (MIMO) systems using multiple antennas at the transmitter and a single antenna at each of the receivers have recently emerged as an alternative to point-to-point MIMO systems. This paper investigates the relationship between energy efficiency (EE) and spectral efficiency (SE) for a virtual-MIMO system that has one destination and one relay using compress-and-forward (CF) cooperation. To capture the cost of cooperation, the power allocation (between the transmitter and the relay) and the bandwidth allocation (between the data and cooperation channels) are studied. This paper derives a tight upper bound for the overall system EE as a function of SE, which exhibits good accuracy for a wide range of SE values. The EE upper bound is used to formulate an EE optimization problem. Given a target SE, the optimal power and bandwidth allocation can be derived such that the overall EE is maximized. Results indicate that the EE performance of virtual-MIMO is sensitive to many factors, including resource-allocation schemes and channel characteristics. When an out-of-band cooperation channel is considered, the performance of virtual-MIMO is close to that of the MIMO case in terms of EE. Considering a shared-band cooperation channel, virtual-MIMO with optimal power and bandwidth allocation is more energy efficient than the noncooperation case under most SE values.
This paper presents a parallel computing approach that is employed to reconstruct original information bits from a non-recursive convolutional codeword in noise, with the goal of reducing the decoding latency without compromising the performance. This goal is achieved by means of cutting a received codeword into a number of sub-codewords (SCWs) and feeding them into a two-stage decoder. At the first stage, SCWs are decoded in parallel using the Viterbi algorithm or, equivalently, the brute-force algorithm. A major challenge arises in determining the initial state of the trellis diagram for each SCW, which is uncertain except for the first SCW; this results in multiple decoding outcomes for every SCW. To eliminate, or more precisely exploit, this uncertainty, a Euclidean-distance minimization algorithm is employed to merge neighboring SCWs; this is called the merging stage, which can also run in parallel. Our work reveals that the proposed two-stage decoder is optimal and has its latency growing logarithmically, instead of linearly as for the Viterbi algorithm, with respect to the codeword length. Moreover, it is shown that the decoding latency can be further reduced by employing artificial neural networks for the SCW decoding. Computer simulations are conducted for two typical convolutional codes, and the results confirm our theoretical analysis.
In this paper, the problem of drone-assisted collaborative learning is considered. In this scenario, a swarm of intelligent wireless devices trains a shared neural network (NN) model with the help of a drone. Using its sensors, each device records samples from its environment to gather a local dataset for training. The training data is severely heterogeneous, as different devices have different amounts of data and sensor noise levels. The intelligent devices iteratively train the NN on their local datasets and exchange the model parameters with the drone for aggregation. For this system, the convergence rate of collaborative learning is derived while considering data heterogeneity, sensor noise levels, and communication errors; then, the drone trajectory that maximizes the final accuracy of the trained NN is obtained. The proposed trajectory optimization approach is aware of both the devices' data characteristics (i.e., local dataset size and noise level) and their wireless channel conditions, and significantly improves the convergence rate and final accuracy in comparison with baselines that only consider data characteristics or channel conditions. Compared to state-of-the-art baselines, the proposed approach achieves an average improvement of 3.85 in the final accuracy of the trained NN on benchmark datasets for image recognition and semantic segmentation tasks. Moreover, the proposed framework achieves a significant speedup in training, leading to average savings of 24% and 87% (for the two tasks, respectively) in the drone's hovering time, communication overhead, and battery usage.
The multi-service system is an enabler for flexibly supporting diverse communication requirements in next generation wireless communications. In such a system, multiple types of services co-exist in one baseband system, with each service having its optimal frame structure and a low out-of-band emission (OoBE) waveform operating on the service frequency band to reduce the inter-service-band-interference (ISvcBI). In this article, a framework for the multi-service system is established and the challenges and possible solutions are studied. The multi-service system implementation in both the time and frequency domains is discussed. Two representative subband filtered multicarrier (SFMC) waveforms, filtered orthogonal frequency division multiplexing (F-OFDM) and universal filtered multi-carrier (UFMC), are considered in this article. Specifically, the design methodology, criteria, orthogonality conditions and prospective application scenarios in the context of 5G are discussed. We consider both single-rate (SR) and multi-rate (MR) signal processing methods. Compared with the SR system, the MR system has significantly reduced computational complexity at the expense of a performance loss due to inter-subband-interference (ISubBI). The ISvcBI and ISubBI in MR systems are investigated, and low-complexity interference cancelation algorithms are proposed to enable multi-service operation at low interference levels.
Holographic beamforming is a promising concept for reducing the power consumption of Multiple Input Multiple Output (MIMO) antenna arrays. In a holographic approach, the impedance of antenna patches is varied through the inclusion of tuning elements, such as varactor diodes, which allow electronic control of the phase and amplitude of each antenna. In this work, we provide the electromagnetic framework for the design of a Holographic MIMO Surface (HMIMOS). We analyze its performance and compare its power consumption to passive Reconfigurable Intelligent Surfaces (RIS) and MIMO Active Phased Arrays (APA) at 5G Frequency Range (FR) 2. The results show that the power consumption of HMIMOS is lower than that of MIMO APAs, but significantly higher than that of RISs. However, a combination of active and passive elements on a RIS can offer many benefits in terms of environmental awareness and intelligence for Integrated Sensing and Communication (ISAC) in Beyond 5G (B5G) networks.
Recent advancements in diffusion models have led to a significant breakthrough in generative modeling. The combination of generative models and semantic communication (SemCom) enables high-fidelity semantic information exchange at ultra-low rates. In this paper, a novel generative SemCom framework for image tasks is proposed, utilizing pre-trained foundation models as semantic encoders and decoders for semantic feature extraction and image regeneration, respectively. The mathematical relationship between transmission reliability and the perceptual quality of regenerated images is modeled, and the semantic values of extracted features are defined accordingly. This relationship is derived through numerical simulations on the Kodak dataset. Furthermore, we investigate the semantic-aware power allocation problem, aiming to minimize total power consumption while guaranteeing semantic performance. To solve this problem, two semantic-aware power allocation methods are proposed, based on constraint decoupling and bisection search, respectively. Numerical results demonstrate that the proposed semantic-aware methods outperform the conventional approach in terms of total power consumption.
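As a small illustration of the bisection idea, the sketch below searches for the minimum transmit power meeting a performance target, assuming only that quality is non-decreasing in power. The quality curve used here is a hypothetical stand-in, not the paper's reliability-perception model.

    import math

    def min_power_bisection(quality, target, p_lo=0.0, p_hi=10.0, tol=1e-6):
        """Find the smallest power whose (monotone) quality meets the target.
        `quality` is any non-decreasing function of transmit power."""
        if quality(p_hi) < target:
            raise ValueError("target unreachable within [p_lo, p_hi]")
        while p_hi - p_lo > tol:
            mid = 0.5 * (p_lo + p_hi)
            if quality(mid) >= target:
                p_hi = mid          # feasible: try lower power
            else:
                p_lo = mid          # infeasible: need more power
        return p_hi

    # Illustrative reliability curve: quality grows with SNR (power / noise).
    noise = 0.1
    q = lambda p: 1.0 - math.exp(-p / noise)
    print(min_power_bisection(q, target=0.99))

Bisection is attractive here because each step halves the search interval, so the per-user cost is logarithmic in the required power resolution.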
Coping with the extreme growth in the number of users is one of the main challenges for future IEEE 802.11 networks. The high interference level, along with the conventional standardized carrier sensing approaches, will degrade the network performance. To tackle these challenges, the Dynamic Sensitivity Control (DSC) and the BSS Color scheme are considered in IEEE 802.11ax and IEEE 802.11ah, respectively. The main purpose of these schemes is to enhance the network throughput and improve the spectrum efficiency in dense networks. In this paper, we evaluate the DSC and the BSS Color scheme, along with the PARTIAL-AID (PAID) feature introduced in IEEE 802.11ac, in terms of throughput and fairness. We also evaluate the performance when the aforementioned techniques are combined. The simulations show a significant gain in total throughput when these techniques are applied.
Conventional transmit diversity schemes, such as the Alamouti scheme, use several radio frequency (RF) chains to transmit signals simultaneously from multiple antennas. In this paper, we propose a low-complexity repetition time-switched transmit diversity (RTSTD) algorithm, which employs only one RF chain as well as a low-complexity switch for transmission. A mathematical model is developed to assess the performance of the proposed scheme. In order to make it applicable for practical applications, we also investigate its joint application with orthogonal frequency division multiplexing (OFDM) and channel coding techniques to combat frequency selective fading.
This paper proposes a novel approach for enhancing video popularity prediction models. Using the proposed approach, we enhance three popularity prediction techniques so that they outperform the accuracy of prior state-of-the-art solutions. The major components of the proposed approach are two novel mechanisms for "user grouping" and "content classification". The user grouping method is an unsupervised clustering approach that divides the users into an adequate number of user groups with similar interests. The content classification approach identifies classes of videos with similar popularity growth trends. To predict the popularity of newly-released videos, our proposed popularity prediction model trains its parameters in each user group and its associated video popularity classes. Evaluations are performed through 5-fold cross validation on a dataset containing one month of video request records from 26,706 BBC iPlayer users. Using the proposed grouping technique, user groups of similar interest and up to 2 video popularity classes for each user group were detected. Our analysis shows that the accuracy of the proposed solution outperforms state-of-the-art models, including the SH, ML and MRBF models, on average by 45%, 33% and 24%, respectively. Finally, to illustrate the implications, we discuss how various systems in the network and service management domain, such as cache deployment, advertising and video broadcasting technologies, can benefit from our findings.
This paper demonstrates the ability of the reflectarray antenna (RA) to perform orbital angular momentum (OAM) beamsteering with low divergence angles at the fifth generation (5G) millimetre-wave (mmWave) bands. To provide steered OAM beams, it is necessary to regulate the scatterers' geometries smoothly throughout the focal area to follow the required twisted distribution. The traditional numerical method for phase compensation is modified to enable the 3D scanning property of OAM beams, so that feeder blockage can be avoided and high-gain steered OAM beams produced. Likewise, the inherent beam divergence of OAM beams can be reduced by finding the most satisfactory phase distribution of the scatterers through fitting the focal length. The simulated radiation pattern is validated by the measured radiation pattern of the fabricated RA in the frequency range between 28.5 GHz and 31.5 GHz.
Beyond-5G networks will require new technologies to deliver a smarter network. In accordance with these requirements, an electronically steerable compact antenna system capable of beam-switching in the azimuth plane is proposed. The design uses a monopole antenna as the main radiator surrounded by metasurface-based electronically reconfigurable reflector elements designed for the sub-6 GHz range. The reflector elements use a reconfigurable capacitively loaded loop (CLL) which can be electronically activated to work as an artificial magnetic conductor (AMC). The design offers a digitally controllable directional radiation pattern covering all 360° in the azimuth plane with a step size of 30°, a directional gain of ≥ 4.98 dBi and a high front-to-back lobe ratio (FBR) of ≥ 14.9 dB. The compact and modular nature of the design, combined with the use of commercial off-the-shelf (COTS) components and 3D printing, makes the design low-cost and easy to integrate with various Internet of Things (IoT) applications.
In this paper, we propose novel Hybrid Automatic Repeat reQuest (HARQ) strategies used in conjunction with hybrid relaying schemes, named H^2-ARQ-Relaying. The strategies allow the relay to dynamically switch between amplify-and-forward/compress-and-forward and decode-and-forward schemes according to its decoding status. The performance analysis is conducted from both the spectrum and energy efficiency perspectives. The spectrum efficiency of the proposed strategies, in terms of the maximum throughput, is significantly improved compared with their non-hybrid counterparts under the same constraints. The consumed energy per bit is optimized by manipulating the node activation time, the transmission energy and the power allocation between the source and the relay. The circuitry energy consumption of all involved nodes is taken into consideration. Numerical results shed light on how and when the energy efficiency can be improved in cooperative HARQ; for instance, cooperative HARQ is shown to be energy efficient in long-distance transmission only. Furthermore, we consider the fact that the compress-and-forward scheme requires the instantaneous signal-to-noise ratios of all three constituent links, a requirement that can be impractical in some cases. In this regard, we introduce an improved strategy where only partial and affordable channel state information feedback is needed.
Cross-layer scheduling is a promising solution for improving the efficiency of emerging broadband wireless systems. In this tutorial, various cross-layer design approaches are organized into three main categories, namely air interface-centric, user-centric and route-centric, and the general characteristics of each are discussed. Thereafter, by focusing on the air interface-centric approach, it is shown that the resource allocation problem can be formulated as an optimization problem with a certain objective function and some particular constraints. This is illustrated with the aid of a customer-provider model from the field of economics. Furthermore, the possible future evolution of scheduling techniques is described based on the characteristics of traffic and air interface in emerging broadband wireless systems. Finally, some further challenges are identified.
This paper describes several communication categories for personal and body-centric communications. It uses several application scenarios to give examples of these categories and thereby to concretise them. Further, the paper presents a first set of analyses for off-body communications.
Multiuser multiple-input multiple-output (MU-MIMO) nonlinear precoding techniques face the problem of poor computational scalability with the size of the network. In this paper, the fundamental problem of MU-MIMO scalability is tackled through a novel signal-processing approach, called degree-2 vector perturbation (D2VP). Unlike conventional VP approaches, which aim at minimizing the transmit-to-receive energy ratio through searching over an N-dimensional Euclidean space, D2VP pursues the same target through an iterative-optimization procedure. Each iteration performs vector perturbation over two optimally selected subspaces. By this means, the computational complexity is kept in the cubic order of the size of the MU-MIMO system, and this mainly comes from the inverse of the channel matrix. In terms of performance, it is shown that D2VP offers a bit-error-rate comparable to the sphere encoding approach for small MU-MIMO. For medium and large MU-MIMO, where sphere encoding does not apply due to its unimplementable complexity, D2VP outperforms lattice reduction VP by around 5-10 dB in Eb/No and 10-50 dB in normalized computational complexity.
This paper presents a novel approach for mobile positioning in IEEE 802.11a wireless LANs with acceptable computational complexity. The approach improves the positioning accuracy by utilizing the time and frequency domain channel information obtained from the orthogonal frequency-division multiplexing (OFDM) signals. The simulation results show that the proposed approach outperforms the multiple signal classification (MUSIC) algorithm and Ni's algorithm, achieving a positioning accuracy of 1 m with 97% probability in an indoor scenario.
Network performance optimization is among the most important tasks within the area of wireless communication networks. In a Self-Organizing Network (SON), with its capability of adaptively changing the parameters of a network, the optimization tasks are more feasible than in static networks. Moreover, with the increase of OPEX and CAPEX in new-generation telecommunication networks, the optimization tasks are inevitable. In this paper, it is proven that similarity among target and network parameters can produce lower Uncertainty Entropy (UEN) in a self-organizing system as a higher degree of organization is gained. The optimization task is carried out with the Adaptive Simulated Annealing method, enhanced with a Similarity Measure (SM) in the proposed approach (EASA). A Markov model of EASA is provided to assess the proposed approach. We also demonstrate higher performance through a simulation based on an LTE network scenario.
Hybrid networks consisting of both millimeter wave (mmWave) and microwave (μW) capabilities are strong contenders for next generation cellular communications. A similar avenue of current research is device-to-device (D2D) communications, where users establish direct links with each other rather than using central base stations (BSs). However, a hybrid network where D2D transmissions coexist requires special attention in terms of efficient resource allocation. This paper investigates dynamic resource sharing between network entities in a downlink (DL) transmission scheme to maximize the energy efficiency (EE) of the cellular users (CUs) served by either μW macrocells or mmWave small cells, while maintaining a minimum quality-of-service (QoS) for the D2D users. To address this problem, first a self-adaptive power control mechanism for the D2D pairs is formulated, subject to an interference threshold for the CUs while satisfying their minimum QoS level. Subsequently, an EE optimization problem, which aims at maximizing the EE for both CUs and D2D pairs, is solved. Simulation results demonstrate the effectiveness of our proposed algorithm and illustrate the inherent tradeoffs between system EE, system sum rate and outage probability for various QoS levels and varying densities of D2D pairs and CUs.
In this paper, a high-gain phased array antenna with wide-angle beam-scanning capability is proposed for fifth-generation (5G) millimeter-wave applications. First, a novel, end-fire, dual-port antenna element with the dual functionalities of radiator and power splitter is designed. The element is composed of a substrate integrated cavity (SIC) and a dipole based on it. The resonant frequencies of the SIC and dipole can be independently tuned to broaden the impedance bandwidth. Based on this dual-port element, a 4-element subarray can be easily constructed without resorting to a complicated feeding network. The end-fire subarray features a broad beam-width of over 180 degrees, high isolation, and a low profile, rendering it suitable for wide-angle beam-scanning applications in the H-plane. In addition, methods of steering the radiation pattern downwards or upwards in the E-plane are investigated. As a proof of concept, two phased array antennas, each consisting of eight subarrays, are designed and fabricated to achieve broadside and wide-angle beam-scanning radiation. Thanks to the elimination of surface waves, the mutual coupling between the subarrays can be reduced, improving the scanning angle while suppressing the side-lobe level. The simulated predictions are validated by measurement results, showing that the beam of the antenna can be scanned up to 65 degrees with a scanning loss of only 3.7 dB and grating lobes below -15 dB.
In this paper, we propose a novel energy-aware adaptive sectorisation strategy, where the base stations are able to adapt themselves to the temporal traffic variation by switching off some sectors and changing the beam-width of the remaining sectors. An event-based user traffic model is established according to a Markov-Modulated Poisson Process (MMPP). Adaptation is performed while taking into account the target Quality of Service (QoS), in terms of blocking probability. In addition, the coverage requirement is also considered. This work targets future cellular systems, in particular LTE systems. The results show that energy consumption can be reduced by at least 21% using the proposed adaptive sectorisation strategy.
Being able to accommodate multiple simultaneous transmissions on a single channel, non-orthogonal multiple access (NOMA) appears as an attractive solution to support massive machine type communication (mMTC), which faces a massive number of devices competing to access a limited number of shared radio resources. In this paper, we first analytically study the throughput performance of NOMA-based random access (RA), namely NOMA-RA. We show that while increasing the number of power levels in NOMA-RA leads to a further gain in maximum throughput, the growth of the throughput gain is slower than linear. This is due to the higher-power dominance characteristic of power-domain NOMA known in the literature. We explicitly quantify this throughput gain for the very first time in this paper. With our analytical model, we verify the performance advantage of the NOMA-RA scheme by comparing it with the baseline multi-channel slotted ALOHA (MS-ALOHA), with and without capture effect. Despite the higher-power dominance effect, the maximum throughput of NOMA-RA with four power levels achieves over three times that of MS-ALOHA. However, our analytical results also reveal the sensitivity of the NOMA-RA throughput to the offered load. To cope with the potentially bursty traffic in mMTC scenarios, we propose adaptive load regulation through a practical user barring algorithm. By estimating the current load based on the observable channel feedback, the algorithm adaptively controls user access to maintain the optimal loading of channels and achieve maximum throughput. When the proposed user barring algorithm is applied, simulations demonstrate that the instantaneous throughput of NOMA-RA always remains close to the maximum throughput, confirming the effectiveness of our load regulation.
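For intuition about feedback-driven load regulation, the toy simulation below retunes the access probability from per-slot channel feedback in a plain single-channel slotted-ALOHA setting, the baseline family the paper compares against. It omits power domains and capture entirely, and the load-estimation constants (including the 2.39 expected-collision-size value borrowed from classical pseudo-Bayesian ALOHA control) are illustrative; this is not the paper's NOMA-RA barring algorithm.

    import random

    def slot_outcome(n_active):
        """Channel feedback for one slot: 'idle', 'success', or 'collision'.
        (NOMA power levels and capture effects are abstracted away.)"""
        if n_active == 0:
            return "idle"
        return "success" if n_active == 1 else "collision"

    def simulate_barring(n_users=200, slots=2000, target_load=1.0):
        p = 1.0 / n_users        # access (non-barring) probability
        load_est = 1.0           # running estimate of attempts per slot
        successes = 0
        for _ in range(slots):
            n_tx = sum(random.random() < p for _ in range(n_users))
            fb = slot_outcome(n_tx)
            successes += fb == "success"
            # smoothed load estimate from ternary feedback (0 attempts for
            # idle, 1 for success, ~2.39 expected given a collision near load 1)
            obs = {"idle": 0.0, "success": 1.0, "collision": 2.39}[fb]
            load_est = 0.9 * load_est + 0.1 * obs
            # bar or admit users so the offered load tracks the target
            p = min(1.0, p * target_load / max(load_est, 1e-3))
        return successes / slots

    print(f"throughput ≈ {simulate_barring():.2f} packets/slot")

With the offered load held near one attempt per slot, the throughput should hover around the classical 1/e ceiling; the paper's contribution is that NOMA-RA raises this ceiling and that its barring algorithm keeps the system at the higher optimum.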
In this paper, a novel terahertz (THz) spectroscopy technique and a new graphene-based sensor are proposed. The proposed sensor consists of a graphene-based metasurface (MS) that operates in reflection mode over a broad frequency band (0.2-6 THz) and can detect relative permittivities of up to 4 with a resolution of 0.1 and thicknesses ranging from 5 μm to 600 μm with a resolution of 0.5 μm. To the best of the authors' knowledge, a THz sensor with such capabilities has not been reported before. Additionally, an equivalent circuit of the novel unit cell is derived and compared with two conventional grooved structures to showcase the superiority of the proposed unit cell. The proposed spectroscopy technique utilizes unique spectral features of a broadband reflected wave, including the Accumulated Spectral Power (ASP) and Averaged Group Delay (AGD), which are independent of resonance frequencies and can operate over a broad range of spectrum. ASP and AGD can be combined to analyse the magnitude and phase of the reflection diagram as a coherent technique for sensing purposes. This enables the capability to distinguish between different analytes with high precision, which, to the best of the authors' knowledge, is accomplished here for the first time.
One of the most significant applications of the Internet of Things is the smart meter with wireless capabilities. Smart gas and electricity meters can capture half-hourly pricing and consumption data and send automated meter readings to the energy provider, in contrast to regular meters that can only register a running total of energy used. However, legacy regular meters were not installed with wireless connectivity in mind and are usually found in places that are hard to reach for wireless radio coverage. To understand these scenarios, this paper provides signal strength measurements conducted at the Building Research Establishment to determine building penetration losses in both the 900 MHz and 2,100 MHz bands. We then present a building penetration loss model based on these measurements that is practical and cost-effective when compared to traditional statistical propagation loss models.
Orthogonal Frequency Division Multiple Access (OFDMA), like other orthogonal multiple access techniques, fails to achieve the system capacity limit in the uplink due to the exclusivity in resource allocation. This issue is more prominent when fairness among the users is considered in the system. Current Non-Orthogonal Multiple Access (NOMA) techniques introduce redundancy by coding/spreading to facilitate the separation of the users' signals at the receiver, which degrades the system spectral efficiency. Hence, in order to achieve higher capacity, more efficient NOMA schemes need to be developed. In this paper, we propose an uplink NOMA scheme that removes the resource allocation exclusivity and allows more than one user to share the same subcarrier without any coding/spreading redundancy. Joint processing is implemented at the receiver to detect the users' signals. However, to control the receiver complexity, an upper limit on the number of users per subcarrier needs to be imposed. In addition, a novel subcarrier and power allocation algorithm is proposed for the new NOMA scheme that maximizes the users' sum-rate. The link-level performance evaluation has shown that the proposed scheme achieves a bit error rate close to the single-user case. Numerical results show that the proposed NOMA scheme can significantly improve the system performance in terms of spectral efficiency and fairness compared to OFDMA.
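A minimal sketch of the allocation idea follows, using a greedy heuristic rather than the paper's actual sum-rate-maximizing algorithm: each user joins its best available subcarrier subject to a per-subcarrier user cap, which is what keeps the joint receiver tractable. All sizes and the channel model are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)

    def greedy_noma_allocation(gains, max_users_per_sc=2):
        """Greedy subcarrier assignment: each user takes its best subcarrier
        that still has room (the per-subcarrier cap bounds the complexity of
        joint detection). A toy stand-in for the paper's optimization."""
        n_users, n_sc = gains.shape
        load = np.zeros(n_sc, int)
        assign = {}
        for u in np.argsort(-gains.max(axis=1)):     # strongest users pick first
            for sc in np.argsort(-gains[u]):         # best channel first
                if load[sc] < max_users_per_sc:
                    assign[u] = sc
                    load[sc] += 1
                    break
        return assign

    gains = rng.exponential(1.0, size=(8, 4))        # 8 users, 4 subcarriers
    print(greedy_noma_allocation(gains))

With 8 users, 4 subcarriers and a cap of 2, every user is accommodated without exclusivity, which is exactly the degree of freedom the scheme adds over OFDMA.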
The first wave of IEEE 802.11ax capable devices has already hit the market, aiming to enhance the Quality of Experience (QoE) for users in dense deployments by enabling novel features to improve throughput and spectrum efficiency. One of these features is the Spatial Reuse (SR) mechanism, which is introduced for coping with the exposed node problem. Under SR operation, nodes belonging to different Basic Service Sets (BSSs) are allowed to initiate concurrent transmissions, utilising the spectrum resources and improving throughput. However, the main challenge for this enabling technology is the increased interference level introduced by the concurrent transmissions. Even though there are a few algorithms available in the literature that study this issue for IEEE 802.11ax, in this article we look into it from a different perspective. We propose an Interference-Aware scheduler for the Medium Access Control (MAC) queue based on the observed interference level and other characteristics that can be obtained from the channel and the inter-BSS frames. This paper considers only downlink traffic for the evaluation of the proposed scheme; simulation-based results show a clear performance improvement of up to 34% over the legacy First-In First-Out (FIFO) MAC queue by introducing new policies, and leave room for further exploration and enhancements.
In this paper, metamaterial loading on loop and open-loop microstrip filters is investigated, where both rectangular loop and open-loop structures are considered. Spiral resonators are loaded on the four sides of the square loop and result in a higher size reduction compared to conventional split-ring resonators with identical structural parameters. It is shown that, for both proposed filters, metamaterial loading provides size reduction, owing to the lower resonant frequency of the spiral resonators. The structures are analytically investigated through the transmission matrix method. In the designed rectangular loop filters, there are two nulls on both sides of the pass-band, which provide high out-of-band rejection and are preserved in the corresponding miniaturized metamaterial-loaded structures. Open-loop resonators, moreover, provide lower resonant frequencies and hence more compact filters. The proposed filter is fabricated and tested, and the measured results are in good agreement with the simulated ones.
MIMO mobile systems, with a large number of antennas at the base-station side, enable the concurrent transmission of multiple, spatially separated information streams and, therefore, improved network throughput and connectivity in both uplink and downlink transmissions. Traditionally, to facilitate such MIMO transmissions efficiently, linear base-station processing is adopted, which translates the MIMO channel into several single-antenna channels. Still, while such approaches are relatively easy to implement, they can leave a significant amount of MIMO capacity unexploited. Recently proposed non-linear base-station processing methods claim this unexplored capacity and promise a substantially increased network throughput. Still, to the best of the authors' knowledge, non-linear base-station processing methods not only have not yet been adopted by actual systems, but have not even been evaluated in a standard-compliant framework involving all the necessary algorithmic modules required by a practical system. This work outlines our experience in incorporating and evaluating the gains of non-linear base-station processing in a 3GPP standard environment. We discuss the corresponding challenges and our adopted solutions, together with their limitations. We report the gains that we have managed to verify, and we also discuss remaining challenges, missing algorithmic components and future research directions towards highly efficient future mobile systems that can exploit the gains of non-linear base-station processing.
Simultaneous improvement of matching and isolation for a modified two-element microstrip patch antenna array is proposed. Two simple patch antennas in a linear array structure are designed, and the impedance matching and isolation are improved without using any conventional matching networks. The presented low-profile, multifunctional, via-less structure comprises only two narrow T-shaped stubs connected to the feed lines, a narrow rectangular stub between them, and a narrow rectangular slot on the ground plane. This design provides a simple, compact structure with low mutual coupling, low cost and no adverse effects on the radiation and resonance. To validate the design, a compact very-closely-spaced antenna array prototype is fabricated at 5.5 GHz, which is suitable for multiple-input-multiple-output (MIMO) systems. The measured and simulated results are in good agreement, with improvements of 16 dB and 40 dB in matching and isolation, respectively.
Multicarrier Low Density Spreading Multiple Access (MC-LDSMA) is a promising technique for high data rate mobile communications. In this paper, the suitability of using MC-LDSMA in the uplink of next generation cellular systems is investigated. The performance of MC-LDSMA is evaluated and compared with current multiple access techniques, OFDMA and SC-FDMA. Specifically, the Peak-to-Average Power Ratio (PAPR), Bit Error Rate (BER), spectral efficiency and fairness are considered as performance metrics. The link- and system-level simulation results show that MC-LDSMA provides significant performance improvements over SC-FDMA and OFDMA. It is shown that using MC-LDSMA can significantly improve the system performance in terms of required transmission power, spectral efficiency and fairness among the users.
Device-to-device (D2D) communication has huge potential for capacity and coverage enhancements in next generation cellular networks. The number of potential nodes for D2D communication is an important parameter that directly impacts the system capacity. In this letter, we derive an analytic expression for the average coverage probability of a cellular user and the corresponding number of potential D2D users. In this context, the mature framework of stochastic geometry and the Poisson point process has been used. The retention probability has been incorporated in the Laplace functional to capture reduced path-loss and shortest-distance-criterion-based D2D pairing. The numerical results show a close match between the analytic expression and the simulation setup.
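The flavour of such an analysis can be checked numerically. The sketch below Monte Carlo-estimates the SIR coverage probability of a typical user at the origin served by the nearest base station of a Poisson point process, under Rayleigh fading and power-law path loss. The density, threshold and interference-limited assumption are illustrative, and the retention-probability thinning used in the letter for D2D pairing is omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    def coverage_prob(lam_bs=1e-5, alpha=4.0, snr_th=1.0, radius=5000.0, trials=2000):
        """Monte Carlo SIR coverage for a typical user at the origin served by
        the nearest base station of a PPP; all parameter values are illustrative."""
        hits = 0
        area = np.pi * radius**2
        for _ in range(trials):
            n = rng.poisson(lam_bs * area)
            if n == 0:
                continue
            r = radius * np.sqrt(rng.random(n))      # uniform points in a disc
            h = rng.exponential(1.0, n)              # Rayleigh fading power
            p_rx = h * r**(-alpha)
            k = np.argmin(r)                         # nearest-BS association
            sir = p_rx[k] / (p_rx.sum() - p_rx[k] + 1e-30)
            hits += sir > snr_th
        return hits / trials

    print(f"coverage probability ≈ {coverage_prob():.3f}")

Comparing such a simulation against the closed-form expression is exactly the "close match between analytic expression and simulation" validation step described above.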
The cumulative distribution function (CDF) of a non-central χ2-distributed random variable (RV) is often used when measuring the outage probability of communication systems. For adaptive transmitters, it is important but mathematically challenging to determine the outage threshold for an extreme target outage probability (e.g., 10^-5 or less). This motivates us to investigate lower bounds of the outage threshold, and it is found that the one derived from the Chernoff inequality (named Cher-LB) is the most effective lower bound. The Cher-LB is then employed to predict the multi-antenna transmitter beamforming gain in ultra-reliable and low-latency communication, concerning the first-order Markov time-varying channel. It is shown that, with the proposed Cher-LB, pessimistic prediction of the beamforming gain is made sufficiently accurate for guaranteed reliability as well as transmit-energy efficiency.
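The generic Chernoff construction behind such a bound can be reproduced in a few lines. Writing M_X for the moment generating function of the non-central χ2 RV, the textbook lower-tail bound is P(X <= a) <= min over t > 0 of exp(t a) M_X(-t); inverting it in a gives a pessimistic (safe) outage threshold. The sketch below implements this textbook bound and compares it with the exact quantile from SciPy; it is not claimed to coincide exactly with the paper's Cher-LB.

    import numpy as np
    from scipy.optimize import minimize_scalar, brentq
    from scipy.stats import ncx2

    def chernoff_lower_tail(a, k, lam):
        """Generic Chernoff bound on P(X <= a) for X ~ noncentral chi^2 with
        k degrees of freedom and noncentrality lam: min_t exp(t*a) * M_X(-t)."""
        def log_bound(t):
            return t * a - lam * t / (1 + 2 * t) - 0.5 * k * np.log1p(2 * t)
        res = minimize_scalar(log_bound, bounds=(1e-9, 1e3), method="bounded")
        return np.exp(res.fun)

    def threshold_lower_bound(eps, k, lam):
        """Largest a whose Chernoff bound stays below eps: since the bound
        dominates the true CDF, this a is guaranteed not to exceed the
        exact eps-quantile (a pessimistic outage threshold)."""
        return brentq(lambda a: chernoff_lower_tail(a, k, lam) - eps, 1e-9, k + lam)

    k, lam, eps = 4, 10.0, 1e-5
    print("Chernoff threshold:", threshold_lower_bound(eps, k, lam))
    print("exact quantile:    ", ncx2.ppf(eps, k, lam))

The appeal of such a bound for extreme targets is that it stays numerically stable where direct inversion of the CDF becomes delicate, at the price of some pessimism in the threshold.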
Signal detection in large multiple-input multiple-output (large-MIMO) systems presents greater challenges compared to conventional massive-MIMO for two primary reasons. First, large-MIMO systems lack favorable propagation conditions, as they do not require a substantially greater number of service antennas relative to user antennas. Second, the wireless channel may exhibit spatial non-stationarity when an extremely large aperture array (ELAA) is deployed in a large-MIMO system. In this paper, we propose a scalable iterative large-MIMO detector named ANPID, which simultaneously delivers 1) close-to-maximum-likelihood detection performance, 2) low computational complexity (i.e., square-order in the number of transmit antennas), 3) fast convergence, and 4) robustness to the spatial non-stationarity in ELAA channels. ANPID incorporates a damping demodulation step into stationary iterative (SI) methods and alternates between two distinct demodulated SI methods. Simulation results demonstrate that ANPID fulfills all four features concurrently and outperforms existing low-complexity MIMO detectors, especially in highly-loaded large-MIMO systems.
In this paper, we optimize both a very-low Earth orbit (VLEO) satellite mega-constellation and a low Earth orbit (LEO) satellite mega-constellation to achieve a high-rank channel matrix in line-of-sight (LOS) conditions for satellite-to-mobile communications. The optimization of these constellations is achieved through the spacing of the satellites in adjacent planes. The significance of the optimization is that it creates a high-rank channel matrix that delivers a spatial gain in the distributed multiple-input-multiple-output (MIMO) setting. The distributed MIMO system is modeled as the 4 closest satellites to an unmodified mobile phone, where each satellite is assumed to be a single transmit antenna. For each second that the satellites fly over the receiver's location, a Monte Carlo simulation of the Rician flat fading channel was run to obtain the mean channel, from which the capacity is calculated. The simulations show achievable peak data rates of 177.2 Mbps for the n255 frequency band and 140.9 Mbps for the n256 frequency band at a VLEO altitude of 360 km; for the LEO constellation, we obtained peak data rates of 135.1 Mbps and 102.0 Mbps for the n255 and n256 frequency bands, respectively.
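A simplified version of one such Monte Carlo step is sketched below for a four-transmitter Rician link with a single receive antenna, treating each satellite as one transmit antenna with an equal power split. Orbital geometry, per-satellite path loss, and the actual n255/n256 link budgets are all abstracted away, so the numbers it prints are illustrative only.

    import numpy as np

    rng = np.random.default_rng(1)

    def ergodic_capacity(n_tx=4, k_factor_db=10.0, snr_db=10.0, trials=5000):
        """Mean capacity of a 4-satellite distributed link (one transmit
        antenna per satellite, one receive antenna) over a Rician flat
        fading channel; K-factor and SNR are assumed values."""
        k = 10 ** (k_factor_db / 10)
        snr = 10 ** (snr_db / 10)
        los = np.exp(1j * rng.uniform(0, 2 * np.pi, n_tx))  # unit-modulus LoS phases
        caps = []
        for _ in range(trials):
            nlos = (rng.standard_normal(n_tx)
                    + 1j * rng.standard_normal(n_tx)) / np.sqrt(2)
            h = np.sqrt(k / (k + 1)) * los + np.sqrt(1 / (k + 1)) * nlos
            caps.append(np.log2(1 + snr / n_tx * np.linalg.norm(h) ** 2))
        return float(np.mean(caps))

    print(f"ergodic capacity ≈ {ergodic_capacity():.2f} bit/s/Hz")

Repeating such a step for every second of a satellite pass, with geometry-driven per-satellite gains, yields the capacity-versus-time traces from which peak data rates are read off.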
Reconfigurable intelligent surface (RIS) has emerged as a promising technology for enhancing the performance of wireless communication systems. However, the extent of this enhancement has yet to be defined in a simple and insightful manner, especially when the RIS amplitude and phase responses are coupled. In this paper, we characterize the fundamental ergodic capacity limits of RIS-aided multiple-input multiple-output (MIMO), a.k.a. MIMO-RIS, when considering a practical amplitude response for the RIS that is coupled to its phase shift response. By studying these fundamental limits, we provide insights into the performance of MIMO-RIS systems and inform the design and optimization of future wireless communications. Accordingly, we first derive a novel expression for the MIMO-RIS ergodic capacity from a closed-form expression of the probability density function (pdf) of the cascaded channel eigenvalues. We then provide upper and lower bounds, alongside low-SNR, high-SNR, and large-number-of-RIS-elements approximations, to illustrate the dependence of the MIMO-RIS ergodic capacity on the amplitude and phase of the RIS elements. These expressions allow us to define the maximum SNR gain of MIMO-RIS over MIMO systems. Next, simulations are used to validate the accuracy of our various capacity expressions. Furthermore, we investigate the impact of environmental factors, such as near-field or far-field path loss, on the MIMO-RIS ergodic capacity. Numerical results confirm the accuracy of our MIMO-RIS SNR gain expression and provide valuable insights into the performance of RIS-based systems in realistic scenarios. Consequently, this can contribute to the design of future wireless communications based on MIMO-RIS.
This paper presents a novel frequency-domain energy detection scheme based on extreme statistics for robust sensing of OFDM sources in the low SNR region. The basic idea is to exploit the frequency diversity gain inherent in frequency selective channels with the aid of extreme statistics of the differential energy spectral density (ESD). Thanks to the differential stage, the proposed spectrum sensing is robust to the noise uncertainty problem. The low computational complexity of the proposed technique makes it suitable even for machine-to-machine sensing. Analytical performance analysis is carried out in terms of two classical metrics, i.e. the probability of detection and the probability of false alarm. The computer simulations carried out further show that the proposed scheme outperforms energy detection and a second-order-cyclostationarity-based approach by up to 10 dB in the low SNR range.
This paper introduces a millimeter-wave multiple-input-multiple-output (MIMO) antenna for autonomous (self-driving) cars. The antenna is a modified four-port balanced antipodal Vivaldi which produces four directional beams and provides pattern diversity to cover a 90-degree angle of view. By using four antennas of this kind on the four corners of the car's bumper, it is possible to have a full 360-degree view around the car. The designed antenna is simulated with two commercial full-wave packages, and the results indicate that the proposed method can successfully deliver the required 90-degree angle of view.
This work addresses joint transceiver optimization for multiple-input, multiple-output (MIMO) systems. In practical systems, complete knowledge of the channel state information (CSI) is rarely available at the transmitter. To tackle this problem, we resort to the codebook approach to precoding design, where the receiver selects a precoding matrix from a finite set of pre-defined precoding matrices based on the instantaneous channel condition and delivers the index of the chosen precoding matrix to the transmitter via a bandwidth-constrained feedback channel. We show that, when the symbol constellation is improper, the joint codebook-based precoding and equalization can be designed accordingly to achieve improved performance compared to the conventional system.
Recent advancements in sensing and networking technologies, and in collecting real-world data at large scale from various environments, have created an opportunity for new forms of real-world services and applications, known under the umbrella term of the Internet of Things (IoT). Physical sensor devices constantly produce very large amounts of data. Methods are needed that give the raw sensor measurements a meaningful interpretation for building automated decision support systems. To extract actionable information from real-world data, we propose a method that uncovers hidden structures and relations between multiple IoT data streams. Our novel solution uses Latent Dirichlet Allocation (LDA), a topic extraction method that is generally used in text analysis. We apply LDA to meaningful abstractions that describe the numerical data in human-understandable terms. We use Symbolic Aggregate approXimation (SAX) to convert the raw data into string-based patterns and create higher-level abstractions based on rules. We finally investigate how heterogeneous sensory data from multiple sources can be processed and analysed to create near real-time intelligence, and how our proposed method provides an efficient way to interpret patterns in the data streams. The proposed method uncovers the correlations and associations between different patterns in IoT data streams. The evaluation results show that the proposed solution is able to identify the correlations efficiently, with an F-measure of up to 90%.
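As a concrete illustration of the SAX step, the snippet below z-normalises a series, applies piecewise aggregate approximation, and maps segment means to a four-symbol alphabet using the standard Gaussian breakpoints. The segment count and alphabet size are arbitrary choices here, not the settings used in the paper.

    import numpy as np

    def sax(series, n_segments=8, alphabet="abcd"):
        """Symbolic Aggregate approXimation: z-normalise, piecewise-aggregate,
        then map segment means to symbols via Gaussian breakpoints."""
        x = np.asarray(series, float)
        x = (x - x.mean()) / (x.std() + 1e-12)          # z-normalisation
        paa = x[: len(x) // n_segments * n_segments]
        paa = paa.reshape(n_segments, -1).mean(axis=1)  # piecewise aggregate
        # breakpoints splitting N(0,1) into 4 equiprobable regions
        breakpoints = [-0.6745, 0.0, 0.6745]
        idx = np.searchsorted(breakpoints, paa)
        return "".join(alphabet[i] for i in idx)

    t = np.linspace(0, 2 * np.pi, 64)
    print(sax(np.sin(t)))   # a short symbolic word tracing the sine shape

Strings produced this way become the "documents" over which a topic model such as LDA can then look for co-occurring patterns across streams.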
In this paper, a novel spatially non-stationary channel model is proposed for link-level computer simulations of massive multiple-input multiple-output (mMIMO) with an extremely large aperture array (ELAA). The proposed channel model allows a mix of non-line-of-sight (NLoS) and line-of-sight (LoS) links between a user and the service antennas. The NLoS/LoS state of each link is characterized by a binary random variable, which obeys a correlated Bernoulli distribution. The correlation is described in the form of an exponentially decaying window. In addition, the proposed model incorporates shadowing effects which are non-identical for NLoS and LoS states. It is demonstrated, through computer emulation, that the proposed model can capture almost all spatially non-stationary fading behaviors of the ELAA-mMIMO channel. Moreover, it has a low implementation complexity. With the proposed channel model, Monte-Carlo simulations are carried out to evaluate the channel capacity of ELAA-mMIMO. It is shown that the ELAA-mMIMO channel capacity has considerably different stochastic characteristics from conventional mMIMO due to the presence of channel spatial non-stationarity.
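One simple way to generate such correlated binary LoS/NLoS states, sketched below, is to threshold an AR(1) Gaussian process whose neighbour correlation decays exponentially with antenna spacing (a Gaussian-copula construction). This is an assumed construction for illustration; the paper's own exponentially decaying window may differ in detail.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)

    def correlated_los_states(n_ant=128, p_los=0.7, d_corr=10.0, spacing=1.0):
        """Binary LoS/NLoS state per service antenna with exponentially
        decaying spatial correlation, obtained by thresholding an AR(1)
        Gaussian process; all parameter values are illustrative."""
        rho = np.exp(-spacing / d_corr)          # neighbour correlation
        g = np.empty(n_ant)
        g[0] = rng.standard_normal()
        for i in range(1, n_ant):
            g[i] = rho * g[i - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
        return (g < norm.ppf(p_los)).astype(int)  # 1 = LoS with prob ~ p_los

    states = correlated_los_states()
    print("LoS fraction:", states.mean())

Because the underlying Gaussian process is Markov, long runs of LoS antennas interrupted by NLoS clusters emerge naturally, which is the visibility-region behaviour an ELAA model needs to reproduce.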
A machine learning (ML) technique has been used to synthesize a linear millimetre wave (mmWave) phased array antenna using the phase-only synthesis approach. For the first time, a gradient boosting tree (GBT) is applied to estimate the phase values of a 16-element array antenna to generate different far-field radiation patterns. The GBT predicts the phases while the amplitude values are set equal, generating different beam patterns for various 5G mmWave transmission scenarios such as multicast, unicast, broadcast and unmanned aerial vehicle (UAV) applications.
In order to minimize the downloading time of short-lived applications like web browsing, web applications and short video clips, the recently standardized HTTP/2 adopts stream multiplexing on one single TCP connection. However, aggregating all content objects within one single connection suffers from the head-of-line blocking issue. QUIC, by eliminating this issue on the basis of UDP, is expected to further reduce the content downloading time. However, in mobile network environments, the single connection strategy still leads to a degraded and highly variable completion time due to the unexpected hindrance of congestion window growth caused by the common but uncertain fluctuations in round trip time and random loss events at the air interface. To retain a resilient congestion window against such network fluctuations, we propose an intelligent connection management scheme based on QUIC which not only adaptively employs multiple connections but also conducts a tailored state and congestion window synchronization between these parallel connections upon the detection of network fluctuation events. According to the performance evaluation results obtained from an LTE-A/Wi-Fi testing network, the proposed multiple-QUIC scheme can effectively overcome the limitations of different congestion control algorithms (e.g. the loss-based New Reno/CUBIC and the rate-based BBR), achieving substantial performance improvements in both the median (up to 59.1%) and 95th-percentile (up to 72.3%) completion times. The significance of this work lies in achieving highly robust short-lived content downloading performance against various uncertainties in network conditions as well as with different congestion control schemes.
This paper investigates adaptive implementation of the linear minimum mean square error (MMSE) detector in code division multiple access (CDMA). From linear algebra, Cimmino's reflection method is proposed as a possible way of achieving the MMSE solution blindly. Simulation results indicate that the proposed method converges four times faster than the blind least mean squares (LMS) algorithm and has roughly the same convergence performance as the blind recursive least squares (RLS) algorithm. Moreover, the proposed algorithm is numerically more stable than the RLS algorithm and also exhibits parallelism for pipelined implementation.
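Cimmino's method itself is compact: each iteration averages the reflections of the current iterate across the hyperplanes defined by the rows of the linear system, which is also why it parallelises so naturally. The sketch below applies it to a synthetic symmetric positive-definite system standing in for the MMSE filter equations; it is a plain batch solver, not the blind adaptive variant developed in the paper.

    import numpy as np

    rng = np.random.default_rng(5)

    def cimmino(A, b, iters=2000):
        """Cimmino's reflection method for A x = b: average the reflections
        of the current iterate across every equation's hyperplane."""
        m, n = A.shape
        x = np.zeros(n)
        row_norm2 = (A * A).sum(axis=1)
        for _ in range(iters):
            residual = b - A @ x
            # each term below is independent, hence the pipelined parallelism
            x = x + (2.0 / m) * (A.T @ (residual / row_norm2))
        return x

    # Stand-in for MMSE filter equations (R + sigma^2 I) w = p of a CDMA
    # detector; R and p here are synthetic, just to exercise the solver.
    n = 8
    S = rng.standard_normal((n, n)) / np.sqrt(n)
    R = S @ S.T + 0.1 * np.eye(n)     # well-conditioned SPD system
    p = rng.standard_normal(n)
    w = cimmino(R, p)
    print("residual norm:", np.linalg.norm(R @ w - p))

For a consistent system the averaged-reflection update is a strict contraction, so the residual norm printed at the end shrinks steadily with the iteration count.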
The current Web and data indexing and search mechanisms are mainly tailored to processing text-based data and are limited in addressing the intrinsic characteristics of distributed, large-scale and dynamic Internet of Things (IoT) data networks. The IoT demands novel indexing solutions for large-scale data to create an ecosystem of systems; however, IoT data are often numerical, multi-modal and heterogeneous. We propose a distributed and adaptable mechanism that allows indexing and discovery of real-world data in IoT networks. Compared to state-of-the-art approaches, our model does not require any prior knowledge about the data or their distributions. We address the problem of distributed, efficient indexing and discovery of voluminous IoT data by applying an unsupervised machine learning algorithm. The proposed solution aggregates and distributes the indexes in hierarchical networks. We have evaluated our distributed solution on a large-scale dataset, and the results show that our proposed indexing scheme is able to efficiently index and enable discovery of the IoT data with 71% to 92% better response time than a centralised approach.
Network slicing has been identified as one of the most important features of 5G and beyond, enabling operators to utilize networks on an as-a-service basis and meet a wide range of use cases. In the physical layer, the frequency and time resources are split into slices to cater for services with individual optimal designs, resulting in services/slices having different baseband numerologies (e.g., subcarrier spacing) and/or radio frequency (RF) front-end configurations. In such a system, multi-service signal multiplexing and isolation among the services/slices are critical for Physical-Layer Network Slicing (PNS), since orthogonality is destroyed and significant inter-service/slice-band-interference (ISBI) may be generated. In this paper, we first categorize four PNS cases according to the baseband and RF configurations among the slices. The system model is established by considering a low out-of-band emission (OoBE) waveform operating in the service/slice frequency band to mitigate the ISBI. The desired signal and interference for the two slices are derived. Consequently, one-tap channel equalization algorithms are proposed based on the derived model. The developed system models establish a framework for further interference analysis, ISBI cancelation algorithms, system design and parameter selection (e.g., guard band), to enable spectrum-efficient network slicing.
Cooperative communications can exploit a distributed spatial diversity gain to improve link performance. When the message is coded at a low rate, the source and relay can send different parts of a codeword to the destination. This is referred to as coded cooperation. In this paper, we propose two novel coded cooperation schemes for three-node relay networks, i.e., adaptive coded cooperation and ARQ-based coded cooperation. The former requires channel quality information to be available at the source; the codeword splits adaptively to minimize the overall BER. The latter is devised for relay networks with erasures: in the first time slot, the source sends a high-rate sub-codeword, and once the destination reports decoding errors, either the source or the relay can send one or two new bits selected from the mother codeword. Unlike random rateless erasure codes, such as Fountain codes, the proposed scheme is based on a deterministic code generator and puncture pattern. It is experimentally shown that the proposed scheme can offer improved throughput in comparison with the conventional approach.
One of the key research issues in wireless systems is how to improve the system capacity, and MIMO has been proven to be an effective method for achieving this. Previously, the focus of MIMO-OFDM research in High Performance Metropolitan Area Network (HIPERMAN) systems was on Space Time Coding (STC) and beamforming. Recently, Multi-User Detection (MUD) has emerged as a novel approach in MIMO-OFDM-based HIPERMAN systems. In this paper, we propose a new MAC design, which includes a new and flexible MAC frame structure and an efficient dynamic resource allocation algorithm, in order to accommodate MUD techniques in uplink transmission in HIPERMAN systems. The performance of the new MAC design has been evaluated via simulation. The simulation results show that the new MAC design based on MUD can significantly increase the system capacity.
Energy efficiency (EE) is a key enabler for the next generation of communication systems. Equally, resource allocation and cooperative communication are effective techniques for improving communication system performance. In this paper, we propose an optimal energy-efficient joint resource allocation method for the multi-hop multiple-input-multiple-output (MIMO) amplify-and-forward (AF) system. We define the joint source and multiple-relay optimization problem and prove that its objective function, which is not generally quasiconvex, can be lower-bounded by a convex function. Moreover, all the minima of this objective function are strict minima. Based on these two properties, we then simplify the original multivariate optimization problem into a single-variable problem and design a novel approach for optimally solving it in both the unconstrained and power-constrained cases. In addition, we provide a sub-optimal approach with reduced complexity; the latter reduces the computational complexity by a factor of up to 40 with near-optimal performance. We finally utilize our novel approach for comparing the optimal energy-per-bit consumption of multi-hop MIMO-AF and MIMO systems; the results indicate that MIMO-AF can help to save energy when the direct link quality is poor.
The filtered orthogonal frequency division multiplexing (F-OFDM) system is a promising waveform for 5G and beyond to enable multi-service systems and spectrum-efficient network slicing. However, the performance of F-OFDM systems has not been systematically analyzed in the literature. In this paper, we first establish a mathematical model for the F-OFDM system and derive the conditions to achieve interference-free one-tap channel equalization. For the practical cases (e.g., insufficient guard interval, asynchronous transmission, etc.), analytical expressions for the inter-symbol-interference (ISI), inter-carrier-interference (ICI) and adjacent-carrier-interference (ACI) are derived, where the last term is considered one of the key factors for asynchronous transmissions. Based on this framework, an optimal power compensation matrix is derived to ensure that all subcarriers have the same ergodic performance. Another key contribution of the paper is a proposed multi-rate F-OFDM system that enables low-complexity, low-cost communication scenarios such as narrowband Internet of Things (IoT), at the cost of generating inter-subband-interference (ISubBI). Low-complexity algorithms are proposed to cancel the ISubBI. The results show that the derived analytical expressions match the simulation results, and the proposed ISubBI cancelation algorithms can reduce the original F-OFDM complexity by up to 100 times without significant performance loss.
This paper presents spatio-temporally resolved wideband measurements of Sub-Terahertz (Sub-THz) reflection coefficients in the frequency range of 92-110 GHz. A stochastic model for single-reflection fixed links is presented, capable of modelling random scattering from small-scale discontinuities such as those encountered in complex structures in walls and partitions of buildings. The model auto-regressively produces filter coefficients that are fed into an Infinite-Impulse-Response (IIR) filter, which convolves them with the spatio-temporal series in order to generate the next output sample based on previous observations. The IIR filter allows for flexible stochastic generation of samples, and its parameters can be adjusted as needed to suit different channel conditions. A total of 20 suitable start-up filter coefficients are generated from 21.7% of the sample size for each complex delay tap distribution, which corresponds to 50 complex instances of the channel. These coefficients are then utilised to validate the remaining measured sample set. The model is in quantitative agreement with measurement statistics and can be used to construct relatively simple modified reflection coefficients for use in micro-cellular ray-optical network planning tools.
This paper proposes a low-complexity joint source and relay energy-efficient resource allocation scheme for the two-hop multiple-input-multiple-output (MIMO) amplify-and-forward (AF) system when channel state information is available. We first simplify the multivariate unconstrained energy efficiency (EE)-based problem and derive a convex closed-form approximation of its objective function, as well as closed-form expressions of the subchannel rates in both the unconstrained and power-constrained cases. We then rely on these expressions for designing a low-complexity energy-efficient joint resource allocation algorithm. Our approach has been compared with a generic nonlinear constrained optimization solver, and the results indicate the low complexity and accuracy of our approach. As an application, we have also compared our EE-based approach against the optimal spectral efficiency (SE)-based joint resource allocation approach, and the results show that our EE-based approach provides a good trade-off between power consumption and SE.
In this paper, we propose a finite-state Markov model for the per-user service of an opportunistic scheduling scheme over Rayleigh fading channels, where a single base station serves an arbitrary number of users. By approximating the power gain of Rayleigh fading channels as finite-state Markov processes, we develop an algorithm to obtain a dynamic stochastic model of the transmission service received by an individual user in a saturated scenario, where user data queues are highly loaded. The proposed analytical model is a finite-state Markov process. We provide a comprehensive comparison between the results predicted by the proposed analytical model and the simulation results, which demonstrates a close match between the two.
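The finite-state construction can be illustrated generically (this is the textbook FSMC recipe, not the paper's per-user service model): simulate correlated Rayleigh fading, quantize the power gain into equiprobable states, and estimate the transition matrix by counting:

```python
# Generic finite-state Markov channel construction (illustrative parameters).
import numpy as np

rng = np.random.default_rng(2)
rho, n, K = 0.95, 50_000, 4                # AR(1) correlation, samples, states

h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
gains = np.empty(n)
for t in range(n):                         # first-order Gauss-Markov fading
    w = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    h = rho * h + np.sqrt(1 - rho ** 2) * w
    gains[t] = abs(h) ** 2

# Equiprobable state boundaries from the Exp(1) power-gain inverse CDF.
edges = -np.log(1 - np.arange(1, K) / K)
states = np.digitize(gains, edges)         # state index 0..K-1 per sample

P = np.zeros((K, K))
for s, s_next in zip(states[:-1], states[1:]):
    P[s, s_next] += 1
P /= P.sum(axis=1, keepdims=True)          # estimated transition matrix
print(np.round(P, 3))
```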
In this paper, the mutual information transfer characteristics of the turbo multiuser detector (MUD) for a novel air interface scheme, called Low Density Signature Orthogonal Frequency Division Multiplexing (LDS-OFDM), are investigated using Extrinsic Information Transfer (EXIT) charts. LDS-OFDM uses a low-density signature structure for spreading the data symbols in the frequency domain. This technique benefits from frequency diversity in addition to its ability to support more parallel data streams than the number of subcarriers (the overloaded condition). The turbo MUD couples the data-symbol detector of the LDS scheme with the users' FEC (Forward Error Correction) decoders through the message-passing principle. The effect of overloading on the LDS scheme's performance is evaluated using EXIT charts. The results show that at Eb/N0 as low as 0.3, LDS-OFDM can support loads of up to 300%.
Besides the well-established spectral efficiency (SE), energy efficiency (EE) is becoming an important performance evaluation metric, which in turn makes the EE-SE trade-off a prominent criterion for efficiently designing future communication systems. In this letter, we propose a very tight closed-form approximation (CFA) of this trade-off over the single-input single-output (SISO) Rayleigh flat fading channel. We first derive an improved approximation of the SISO ergodic capacity by means of a parametric function and then utilize it to obtain our novel EE-SE trade-off CFA, which is also generalized to the symmetric multi-input multi-output channel. We compare our CFA with existing CFAs and show its improved accuracy.
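For intuition, the EE-SE trade-off itself can be traced numerically (Monte-Carlo inversion of the ergodic capacity with an assumed static power term, rather than the paper's closed-form approximation):

```python
# Numerical EE-SE trade-off over SISO Rayleigh fading (assumed power model).
import numpy as np

rng = np.random.default_rng(3)
g = rng.exponential(1.0, 20_000)              # |h|^2 samples, Rayleigh fading

def ergodic_se(snr):
    return np.mean(np.log2(1 + snr * g))      # ergodic capacity (b/s/Hz)

def snr_for_se(target, lo=1e-4, hi=1e6):
    for _ in range(60):                       # bisect the monotone SE(SNR) map
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if ergodic_se(mid) < target else (lo, mid)
    return mid

p_circuit = 0.1                               # assumed static power (normalized)
for se in [0.5, 1.0, 2.0, 4.0, 6.0]:
    snr = snr_for_se(se)
    ee = se / (snr + p_circuit)               # bits per joule, normalized units
    print(f"SE={se:3.1f} b/s/Hz -> SNR={10 * np.log10(snr):6.1f} dB, EE={ee:.3f}")
```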
This paper presents two contributions towards incremental decode-and-forward relaying over asymmetric fading channels. The first is the outage probability of an incremental relay network accommodating i.n.d. cooperative paths; our contribution here is a closed-form expression for the outage probability obtained through the inverse Laplace transform and Euler summation. The second is a transmit-power-efficient relay-selection strategy that exploits the relationship between relay positions and the outage probability.
In this paper, an ultra-wideband dielectric resonator antenna (DRA) is proposed. The proposed antenna is based on an isosceles triangular DRA (TDRA), which is fed from the base side using a 50Ω probe. For bandwidth enhancement and improved radiation characteristics, a partially cylindrical hole is etched from the base side, which brings the probe feed closer to the centre of the TDRA. The dielectric resonator (DR) is located over an extended conducting ground plane. This technique significantly enhances the antenna's bandwidth from 48.8% to 80% (5.29-12.35 GHz); without it, the radiation characteristics are the main shortcoming, with the basic antenna exhibiting negative gain, down to -13.8 dBi, over a wide portion of the band from 7.5 GHz to 10.5 GHz. The proposed technique raises the antenna gain above 1.6 dBi across the whole bandwidth, with a peak gain of 7.2 dBi.
5G definition and standardization projects are well underway, and their governing characteristics and major challenges have been identified. A critical network element impacting the potential performance of 5G networks is the backhaul, which is expected to expand in length and breadth to cater for the exponential growth of small cells while offering throughput in the order of Gbps and less than one millisecond of latency with high resilience and energy efficiency. Such performance may only be possible with direct optical fibre connections, which are often not available countrywide and are cumbersome and expensive to deploy. On the other hand, a prime 5G characteristic is diversity, which describes the radio access network, the backhaul, and also the types of user applications and devices. Thus, we propose a novel, distributed, self-optimized, end-to-end user-cell-backhaul association scheme that intelligently associates users with potential cells based on the corresponding dynamic radio and backhaul conditions while abiding by users' requirements. Radio cells broadcast multiple bias factors, each reflecting a dynamic performance indicator (DPI) of the end-to-end network performance, such as capacity, latency, resilience, and energy consumption. A given user employs these factors to derive a user-centric cell ranking that motivates it to select the cell whose radio and backhaul performance conforms to the user's requirements. Reinforcement learning is used at the radio cell to optimize the bias factors for each DPI in a way that maximizes the system throughput while minimizing the gap between the users' achievable and required end-to-end quality of experience (QoE). Preliminary results show considerable improvement in users' QoE and cumulative system throughput when compared to state-of-the-art user-cell association schemes.
When dealing with a large number of devices, existing indexing solutions for the discovery of IoT sources often fall short of providing adequate scalability. This is due to the high computational complexity and communication overhead required to create and maintain the indices of the IoT sources, particularly when their attributes are dynamic. This paper presents a novel approach for indexing distributed IoT sources and paves the way for a data discovery service to search and gain access to their data. The proposed method creates concise references to IoT sources by using Gaussian Mixture Models (GMMs). Furthermore, a summary update mechanism is introduced to handle changes in source availability and mitigate the overhead of updating the indices frequently. The proposed approach is benchmarked against a standard centralized indexing and discovery solution. The results show that the proposed solution reduces the communication overhead required for indexing by three orders of magnitude, although, depending on the IoT network architecture, it may slightly increase the discovery time.
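A minimal sketch of the GMM summarization idea, with hypothetical gateway names and attribute vectors (the discovery service itself is out of scope here): each gateway compresses its sources' attributes into a small GMM, and a query is routed to the gateway whose summary scores it highest.

```python
# GMM-based source summarization sketch (hypothetical gateways/attributes).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Two hypothetical gateways with different attribute distributions,
# e.g., [latitude, longitude, sampling-rate] of their IoT sources.
gw_a = rng.normal([51.2, -0.6, 10.0], [0.05, 0.05, 2.0], size=(500, 3))
gw_b = rng.normal([48.8,  2.3,  1.0], [0.05, 0.05, 0.5], size=(500, 3))

# The concise "index" of each gateway is just its fitted GMM parameters.
summaries = {name: GaussianMixture(n_components=3, random_state=0).fit(data)
             for name, data in {"gw_a": gw_a, "gw_b": gw_b}.items()}

query = np.array([[51.19, -0.58, 9.0]])       # "sources near here, ~9 Hz"
best = max(summaries, key=lambda n: summaries[n].score(query))
print("route discovery to:", best)            # expected: gw_a
```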
Future wireless local area networks (WLANs) are expected to serve thousands of users in diverse environments. To address the new challenges that WLANs will face, and to overcome the limitations introduced by previous IEEE standards, a new IEEE 802.11 amendment is under development. IEEE 802.11ax aims to enhance spectrum efficiency in dense deployments and hence improve system throughput. Dynamic Sensitivity Control (DSC) and BSS Color are the main schemes under consideration in IEEE 802.11ax for improving spectrum efficiency. In this paper, we evaluate the DSC and BSS Color schemes when physical layer capture (PLC) is modelled. PLC refers to the case where a receiver successfully decodes the stronger frame when a collision occurs. It is shown that PLC could potentially lead to fairness issues and higher throughput in specific cases. We study PLC in small- and large-scale scenarios, and show that PLC could also improve fairness in specific scenarios.
In this paper, we study an enhanced subspace-based approach for the mitigation of multiple access interference (MAI) in direct-sequence code-division multiple-access (DS-CDMA) systems over frequency-selective channels. Blind multiuser detection based on signal subspace estimation is of special interest for mitigating MAI in CDMA systems, since it is impractical to assume perfect knowledge of parameters such as the spreading codes, time delays and amplitudes of all users in a rapidly changing mobile environment. We develop a new blind multiuser detection scheme that requires only a priori knowledge of the signature waveform and timing of the user of interest. By exploiting the improper nature of the MAI and intersymbol interference (ISI), the enhanced detector shows clear superiority over the conventional subspace-based blind multiuser detector. The performance advantages become more pronounced in heavily loaded systems where the number of active users is large.
The performance of SIR-based closed-loop power control (CLPC) is analysed analytically. The evaluation uses the standard deviation of the power control error (PCE) as the performance metric. A non-linear control theory method is applied to the feedback system under fast fading, and an analytical expression for the CLPC under fast fading is produced. Finally, a quantized step-size power control algorithm, replacing the hard limiter, is considered. The proposed method is found to work considerably better for high-speed mobile stations as well as being a powerful tool for optimising the loop performance.
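A toy simulation of the metric used above, assuming a generic fixed-step up/down SIR-tracking loop (the parameters are illustrative, not those of the analysed system):

```python
# Toy SIR-based closed-loop power control with PCE standard deviation metric.
import numpy as np

rng = np.random.default_rng(5)
n, step_db, target_db = 20_000, 1.0, 0.0
rho = 0.99                                     # fading correlation per PC slot

h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
p_db, pce = 0.0, np.empty(n)
for t in range(n):
    w = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    h = rho * h + np.sqrt(1 - rho ** 2) * w    # slow Rayleigh fading
    sir_db = p_db + 10 * np.log10(abs(h) ** 2)
    pce[t] = sir_db - target_db                # power control error in dB
    p_db += -step_db if sir_db > target_db else step_db   # 1-bit command

print(f"PCE std: {pce.std():.2f} dB")          # smaller means better tracking
```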
This paper investigates a learning-based approach for autonomously and jointly optimizing the trajectory of an unmanned aerial vehicle (UAV), the phase shifts of reconfigurable intelligent surfaces (RIS), and the aggregation weights for federated learning (FL) in wireless communications, forming an autonomous RIS-assisted UAV-enabled network. The proposed network considers practical RIS reflection models and FL transmission errors in wireless communications. To optimize the RIS phase shifts, a double cascade correlation network (DCCN) is introduced. Additionally, the deep deterministic policy gradient (DDPG) algorithm is employed to address the optimization of the UAV trajectory and FL aggregation weights based on the results obtained from the DCCN. Simulation results demonstrate the substantial improvement in FL performance achieved by the proposed algorithms compared to the benchmarks in the autonomous RIS-assisted UAV-enabled network setting.
This paper proposes an ultra-compact near-field focusing (NFF) setup at 60 GHz. The proposed configuration comprises a planar substrate integrated waveguide (SIW) slot array as a feeder for a three-layer transmissive coded metasurface lens. A comprehensive design methodology is presented, encompassing unit cell design, coded metasurface lens synthesis, and planar slot array design. The performance of the transmissive metasurface lens is studied and validated by numerical and analytical approaches. Then, the planar slot array design considerations are elaborated, investigating the amplitude and phase of the resulting waves to ensure that quasi-plane waves are produced in the near-field region. Finally, the slot array is employed to illuminate the designed metasurface, with a distance of 2 mm (0.4λ) between the feeder and the metasurface lens, demonstrating the compactness and integrability of the proposed setup, in contrast to conventional metasurface-based NFF structures. Analytical, numerical, and measured results are in fair agreement, verifying the presented approach. The proposed device can focus waves close to the diffraction limit, resulting in a high resolution efficiency.
This letter presents a reduced-complexity algorithm for coordinated beamforming aimed at solving the multicell downlink max-min signal-to-interference-plus-noise ratio problem under per-base-station power constraints. It is shown that the proposed algorithm achieves performance close to the optimum algorithm with faster convergence and lower complexity.
The abundant spectrum resources and low beam divergence of the terahertz (THz) band can be combined with the orthogonal propagation property of orbital angular momentum (OAM) beams to multiply the capacity of wireless communication systems. Here, a reflective metasurface (RMTS) is utilized to enhance the coverage of high-gain THz OAM beams by enabling a non-line-of-sight (NLoS) component: the planar wavefront of the incident wave is reshaped into a helical wavefront and redirected towards the direction of interest. This helps alleviate the concern of small aperture size, since improved channel capacity can be achieved in the low spectrum blocks of the THz band (larger aperture size). For validation, three 90 × 90 mm RMTSs are simulated, fabricated, and tested in the frequency range 90-110 GHz, redirecting single and dual OAM beams towards the desired location.
Fog-cloud computing has become a promising platform for executing Internet of Things (IoT) tasks with different requirements. Although the fog environment provides low latency due to its proximity to IoT devices, it suffers from resource constraints; the reverse holds for the cloud environment. Therefore, efficiently utilizing fog-cloud resources for executing tasks offloaded from IoT devices is a fundamental issue. To cope with this, we propose a novel scheduling algorithm for fog-cloud computing, named PGA, to optimize a multi-objective function that is a weighted sum of the overall computation time, energy consumption, and percentage of deadline-satisfied tasks (PDST). We take into account the different requirements of the tasks and the heterogeneous nature of the fog and cloud nodes. We propose a hybrid approach based on task prioritization and a genetic algorithm to find a preferable computing node for each task. Extensive simulations demonstrate the superiority of our proposed algorithm over state-of-the-art strategies.
Here, we first explain the practical considerations in designing and implementing a reconfigurable intelligent surface (RIS) in the sub-6 GHz band and then demonstrate its real-world performance. The wave manipulation procedure is explored with a discussion of the relevant electromagnetic (EM) concepts and background. Based on that, the RIS is designed and fabricated to operate at a centre frequency of 3.5 GHz. The surface is composed of 2430 unit cells, where the engineered reflecting response is obtained by governing the microscopic characteristics of the conductive patches printed on each unit cell. To achieve this goal, the patches are not only geometrically customized to properly reflect the local waves, but are also equipped with varactor diodes so that their response can be reconfigured when required. An equivalent circuit model is presented to analytically evaluate the unit cell's performance, along with a method to measure the unit cell's characteristics from the macroscopic response of the RIS. The patches are printed on six standard-size substrates which are then placed together to make a relatively large aperture with approximate planar dimensions of 120 × 120 cm². The manufactured RIS possesses a control unit with a custom-built system that regulates the varactor diode on each printed patch across the structure. Furthermore, with the introduction of our test-bed system, the functionality of the developed RIS is assessed in an indoor real-world scenario. Finally, we showcase the capability of the RIS to reconfigure itself in order to anomalously reflect the incoming EM waves towards a direction of interest in which a receiver could be experiencing poor coverage.
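The equivalent-circuit evaluation can be illustrated with a generic series R-L-C surface-impedance model (component values are assumed for illustration and are not those of the fabricated design): tuning the varactor capacitance moves the resonance and sweeps the reflection phase, which is the basic RIS phase-control knob.

```python
# Generic varactor-loaded unit-cell reflection model (assumed R, L, C values).
import numpy as np

eta0 = 376.73                                  # free-space impedance (ohm)
f = 3.5e9                                      # operating frequency (Hz)
w = 2 * np.pi * f
L, R = 2.0e-9, 1.0                             # assumed inductance and loss

for C in np.array([0.6, 0.8, 1.0, 1.2, 1.5]) * 1e-12:   # varactor sweep (F)
    zs = R + 1j * w * L + 1 / (1j * w * C)     # series RLC surface impedance
    gamma = (zs - eta0) / (zs + eta0)          # reflection coefficient
    print(f"C={C * 1e12:.1f} pF -> phase {np.degrees(np.angle(gamma)):7.1f} deg, "
          f"|G|={abs(gamma):.2f}")
```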
Non-terrestrial networks (NTNs) will become an indispensable part of future wireless networks. Integration with terrestrial networks will provide new opportunities for both the satellite and terrestrial telecommunication industries, and there is therefore a need to harmonize them in a unified technological framework. Among the different NTNs, low earth orbit (LEO) satellites have gained increasing attention in recent years, and several companies have filed Federal Communications Commission (FCC) proposals to deploy their LEO constellations in space. This is mainly due to several desirable features such as large capacity and low latency; in addition, recent successful LEO network deployments such as Starlink have motivated other companies. In the past, satellite and terrestrial wireless networks evolved separately, but they are now joining forces to enhance coverage and the connectivity experience in future wireless networks. The 3rd Generation Partnership Project (3GPP) is one of the dominant standardization bodies working on the various technical aspects of providing ubiquitous access to 5G networks with the aid of NTNs. Initial steps have been taken to adopt state-of-the-art 5G technologies and concepts and harmonize them with the conditions met in non-terrestrial networks. In this article, we review some of the important technical considerations in 5G NTNs, with emphasis on the radio access network (RAN) part, and provide simulation-based results to assess the required modifications and shed light on the design considerations.
Institute of Electrical and Electronics Engineers (IEEE) 802.11ax Spatial Reuse (SR) is a new feature in the IEEE 802.11 family, aiming at improving spectrum efficiency and network performance in dense deployments. The main, and perhaps the only, SR technique in that amendment is the Basic Service Set (BSS) Color. It aims at increasing the number of concurrent transmissions in a specific area, based on a newly defined Overlapping BSS/Preamble-Detection threshold. In this paper, we overview the latest developments introduced in IEEE 802.11ax for SR and propose a rate control algorithm developed to exploit the BSS Color scheme. Our proposed algorithm, Damysus, is specifically designed to function in dense environments where other off-the-shelf algorithms show poor performance. Simulation results in various dense scenarios show a clear performance improvement, with up to 113% throughput gain over the well-known MinstrelHT algorithm.
A reconfigurable metamaterial-inspired unit cell is proposed that can be reconfigured to behave either as a perfect magnetic conductor (PMC) or as a perfect electric conductor (PEC), and its application to waveguide miniaturisation is demonstrated. The unit cell is designed to operate in the sub-6 GHz band at 3.6 GHz with a PMC bandwidth of 150 MHz, and has a simple construction that makes the design easy to fabricate. The phase response of the reconfigurable unit cell is presented, and a prototype design of a miniaturised waveguide using the proposed unit cell is also proposed. The performance and field distribution of the waveguide are analysed, demonstrating the existence of a pass-band spanning 160 MHz below the cutoff frequency and the presence of a quasi-TEM mode.
Decentralized dynamic spectrum allocation (DSA) that exploits adaptive antenna array interference mitigation (IM) diversity at the receiver is studied for interference-limited environments with a high level of frequency reuse. The system consists of base stations (BSs) that can optimize the uplink frequency allocation to their user equipments (UEs) to minimize the impact of interference on the useful signal, assuming no control over the band allocation of other BSs sharing the same bands. To this end, "good neighbor" (GN) rules allow an effective trade-off between the equilibrium and transient decentralized DSA behavior if the performance targets are adequate for the interference scenario. In this paper, we extend the GN rules by including a spectrum occupation control that allows adaptive selection of the performance targets corresponding to potentially "interference-free" DSA; define a semi-analytic absorbing Markov chain model for the GN DSA with occupation control and study its convergence properties, including the effects of possible breaks of the GN rules; and, for higher-dimension networks, develop simplified search GN algorithms with occupation and power control (PC) and demonstrate their efficiency by means of simulations in a scenario with unlimited requested network occupation.
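The semi-analytic machinery rests on standard absorbing-Markov-chain results, which the following toy example illustrates (the transition matrix is illustrative, not the DSA model itself): with the canonical form P = [[Q, R], [0, I]], the fundamental matrix N = (I - Q)^-1 gives the expected steps to absorption and the absorption probabilities.

```python
# Textbook absorbing Markov chain quantities (toy transition probabilities).
import numpy as np

# 3 transient states (unsettled spectrum allocations), 2 absorbing states
# (stable "interference-free" equilibria); values chosen for illustration.
Q = np.array([[0.5, 0.2, 0.1],
              [0.1, 0.6, 0.1],
              [0.2, 0.1, 0.4]])
R = np.array([[0.2, 0.0],
              [0.1, 0.1],
              [0.0, 0.3]])

N = np.linalg.inv(np.eye(3) - Q)       # fundamental matrix
t = N @ np.ones(3)                     # expected steps until convergence
B = N @ R                              # probability of reaching each equilibrium
print("expected steps:", np.round(t, 2))
print("absorption probabilities:\n", np.round(B, 3))
```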
This paper presents details of indoor wideband and directional propagation measurements at 26 GHz, in which a wideband channel sounder using a millimeter wave (mmWave) signal analyzer and vector signal generator was employed. The setup provided 2 GHz of bandwidth, and the mechanically steerable directional lens antenna with a 5-degree beamwidth provided 5 degrees of directional resolution over the azimuth. The measurements provide the path loss, delay and spatial spread of the channel. Angular and delay dispersion are presented for line-of-sight (LoS) and non-line-of-sight (NLoS) scenarios.
In this paper, a high flat-gain waveguide-fed aperture antenna is proposed. Two layers of FR4 dielectric are located in front of the aperture as superstrates to enhance the bandwidth and gain of the antenna. Moreover, a conductive shield, connected to the edges of the ground plane and surrounding the aperture and superstrates, is applied to the proposed structure to improve its radiation characteristics. The proposed antenna has been simulated with HFSS and optimized through a parametric study, with the following results. A maximum gain of 13.0 dBi and a 0.5-dB gain bandwidth of 25.9% (8.96-11.63 GHz) are achieved. The 3-dB gain bandwidth of the proposed antenna is 40.7% (8.07-12.20 GHz), with a suitable reflection coefficient (≤ -10 dB) across the whole bandwidth. The antenna has a compact size of 1.5λ × 1.5λ, a simple structure and low-cost fabrication.
This paper investigates an intra-cell overlay opportunistic spectrum sharing scheme employing 1-bit feedback beamforming. In the considered setup, a base station broadcasts independent signal messages to two relay stations (RS-1 and RS-2). RS-2 decodes the signal messages in sub-cell 2 and attempts to share the spectrum of sub-cell 1 for its own transmission. To do so, RS-2 makes a deal with RS-1 in sub-cell 1 to help RS-1 send its signal messages. As presented in the paper, by employing 1-bit feedback transmit beamforming, RS-2 can further improve RS-1's achievable rate while automatically eliminating the interference from RS-2 to sub-cell 1. The achievable sum-rate upper bound of RS-2 is also analyzed.
In this paper, a single-input multiple-output (SIMO) system employing a massive binary array-receiver is investigated, where noise is observed to act constructively in the detection of higher-order QAM modulated signals in the single-user case. To understand this phenomenon, a mathematical model is established and analyzed. Theorems on signal detectability are studied to identify the best operating signal-to-noise ratio (SNR) range based on the error behaviour of the single-user SIMO system. Building on this analysis, a novel multiuser SIMO structure with a binary array-receiver is proposed as a solution to the high complexity of maximum likelihood (ML) detection in the traditional model. The key idea is to cast the multiuser multiple-input multiple-output (MIMO) model into a frequency division multiple access (FDMA) scenario and treat each user as a single-user SIMO link, thereby avoiding the exponential growth of ML detection complexity with the number of users. Numerical results show that each user in this system can achieve promising error behaviour within the specific best operating SNR range.
The paper presents a time-difference-of-arrival (TDOA) position estimation algorithm for indoor positioning in the presence of clock drift in a mobile terminal. A new Cramér-Rao bound is then derived as a benchmark for the algorithm. The simulation results show that an acceptable positioning accuracy can be achieved when at least five access points in wireless local area networks are involved in positioning. Moreover, when the clock drift or the TDOA is considerably large, the proposed algorithm outperforms the algorithm that does not consider the clock drift.
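A sketch of the joint estimation idea, using generic nonlinear least squares with the clock error folded into the TDOA model as a constant bias (a simplification of the drift model; the geometry, noise level and solver are assumptions, not the paper's algorithm or its Cramér-Rao bound):

```python
# Joint position / clock-error estimation from TDOA (generic least squares).
import numpy as np
from scipy.optimize import least_squares

c = 3e8
aps = np.array([[0, 0], [30, 0], [0, 30], [30, 30], [15, 45]], float)  # 5 APs
true_pos, true_bias = np.array([12.0, 18.0]), 20e-9     # 20 ns clock error

d = np.linalg.norm(aps - true_pos, axis=1)
tdoa = (d - d[0]) / c + true_bias                       # reference: AP 0
tdoa += np.random.default_rng(6).normal(0, 1e-9, d.size)  # 1 ns timing noise

def residual(theta):
    pos, bias = theta[:2], theta[2]                     # unknowns: x, y, bias
    r = np.linalg.norm(aps - pos, axis=1)
    return (r - r[0]) / c + bias - tdoa

est = least_squares(residual, x0=[15.0, 15.0, 0.0]).x
print("position error: %.2f m, clock error: %.2f ns"
      % (np.linalg.norm(est[:2] - true_pos), (est[2] - true_bias) * 1e9))
```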
Conventional cellular systems are designed to ensure ubiquitous coverage with an always-present wireless channel, irrespective of the spatial and temporal demand for service. This approach raises several problems due to the tight coupling between network and data access points, as well as the paradigm shift towards data-oriented services, heterogeneous deployments and network densification. A logical separation between the control and data planes is seen as a promising solution that could overcome these issues by providing data services under the umbrella of a coverage layer. This article presents a holistic survey of the existing literature on the control-data separation architecture (CDSA) for cellular radio access networks. As a starting point, we discuss the fundamentals, concept and general structure of the CDSA. Then, we point out the limitations of the conventional architecture in futuristic deployment scenarios. In addition, we present and critically discuss the work that has been done to investigate the potential benefits of the CDSA, as well as its technical challenges and enabling technologies. Finally, an overview of standardisation proposals related to this research vision is provided.
The first generation of femtocells is evolving to a next generation with many more capabilities in terms of better utilisation of radio resources and support of high data rates. It is thus logical to conjecture that, with these abilities and their inherent suitability for the home environment, they stand out as an ideal enabler for the delivery of high-efficiency multimedia services. This paper presents a comprehensive vision towards this objective, extends the concept of femtocells from indoor to outdoor environments, and strongly couples femtocells to emergency and safety services. It also identifies the relevant issues and challenges that have to be overcome in realising this vision.
Energy efficiency (EE) is growing in importance as a key performance indicator for designing the next generation of communication systems. Equally, resource allocation is an effective approach for improving the performance of communication systems. In this paper, we propose a low-complexity energy-efficient resource allocation method for the orthogonal multi-antenna multi-carrier channel. We derive explicit formulations of the optimal rate and energy-per-bit consumption for the per-antenna transmit-power-constrained and per-antenna rate-constrained EE optimization problems, and provide a low-complexity algorithm for optimally allocating resources over the orthogonal multi-antenna multi-carrier channel. We then compare our approach against a classic optimization tool in terms of energy efficiency as well as complexity, and the results indicate the optimality and low complexity of our approach. Comparing EE-optimal allocation with spectral-efficiency-optimal and power-optimal allocation over the orthogonal multi-antenna multi-carrier channel indicates that the former provides a good trade-off between power consumption and sum-rate performance.
Decentralized dynamic spectrum allocation (DSA) that exploits adaptive antenna array interference mitigation diversity at the receiver is studied for interference-limited environments with a high level of frequency reuse. The system consists of base stations (BSs) that can optimize the uplink frequency allocation to their user equipments (UEs) to minimize the impact of interference on the useful signal, assuming no control over the resource allocation of other BSs sharing the same bands. To this end, "good neighbor" (GN) rules allow an effective trade-off between the equilibrium and transient decentralized DSA behavior if the performance targets are adequate for the interference scenario. In this paper, we 1) extend the GN rules by including a spectrum occupation control that allows adaptive selection of the performance targets; 2) derive estimates of absorbing state statistics that allow formulation of applicability areas for different DSA algorithms; 3) define a semi-analytic absorbing Markov chain model and study the convergence probabilities and rates of DSA with occupation control, including networks with possible partial breaks of the GN rules. For higher-dimension networks, we develop simplified search GN algorithms with occupation and power control and demonstrate their efficiency by means of simulations.
In this paper, a novel spatially non-stationary fading channel model is proposed for multiple-input multiple-output (MIMO) systems with an extremely-large aperture service-array (ELAA). The proposed model incorporates three key factors which cause the channel spatial non-stationarity: 1) link-wise path-loss; 2) shadowing effect; 3) line-of-sight (LoS)/non-LoS state. With appropriate parameter configurations, the proposed model can be used to generate computer-simulated channel data that matches published measurement data from practical ELAA-MIMO channels. Given such appealing results, the proposed fading channel model is employed to study the cumulative distribution function (CDF) of ELAA-MIMO channel capacity. For all of the studied scenarios, it is found that the ELAA-MIMO channel capacity obeys the skew-normal distribution. Moreover, the channel capacity is also found to be close to the Gaussian or Weibull distribution, depending on the users' geo-location and distribution. More specifically, for single-user equivalent scenarios or multiuser scenarios with short user-to-ELAA distances (e.g., 1 m), the channel capacity is close to the Gaussian distribution; for the others, it is close to the Weibull distribution. Finally, the proposed channel model is employed to study the impact of channel spatial non-stationarity on linear MIMO receivers through computer simulations. The proposed fading channel model is available at https://github.com/ELAA-MIMO/non-stationary-fading-channel-model.
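The distribution-fitting step can be reproduced in miniature on an i.i.d. Rayleigh channel (a simplification; the paper's spatially non-stationary generator is available at the repository linked above):

```python
# Fit a skew-normal distribution to simulated MIMO capacity samples.
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(7)
nt, nr, snr, trials = 8, 8, 10.0, 2000
caps = np.empty(trials)
for i in range(trials):
    H = (rng.standard_normal((nr, nt)) +
         1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    caps[i] = np.log2(np.linalg.det(np.eye(nr) +
                      (snr / nt) * H @ H.conj().T)).real

a, loc, scale = skewnorm.fit(caps)             # shape, location, scale
print(f"skew-normal fit: shape={a:.2f}, loc={loc:.2f}, scale={scale:.2f}")
```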
The first 5G (5th generation wireless systems) New Radio release, Release-15, was recently completed. However, the specification only considers the use of unicast technologies; the extension to point-to-multipoint (PTM) scenarios is not yet considered. To this end, we first present a technical overview of the state-of-the-art LTE (Long Term Evolution) PTM technology, i.e., eMBMS (evolved Multimedia Broadcast Multicast Services), and investigate its physical layer performance via link-level simulations. Based on the simulation analysis, we then discuss potential improvements for the two current eMBMS solutions, i.e., MBSFN (MBMS over Single Frequency Networks) and SCPTM (Single-Cell PTM). This work explicitly focuses on equipping the current eMBMS solutions with 5G candidate techniques, e.g., multiple antennas and millimeter wave, and on their potential to meet the requirements of next-generation PTM transmissions.
Ultra-densification in heterogeneous networks (HetNets) and the advent of millimeter wave (mmWave) technology for fifth generation (5G) networks have led researchers to redesign existing resource management techniques. A salient feature of this activity is to accentuate the importance of computationally intelligent (CI) resource allocation schemes offering less complexity and overhead. This paper overviews the existing literature on resource management in mmWave-based HetNets, with a special emphasis on CI techniques, and further proposes frameworks that ensure quality-of-service requirements for all network entities. More specifically, HetNets with mmWave-based small cells pose different challenges compared to an all-microwave-based system. Similarly, various modes of small cell access policies and the operation of base stations in dual mode, i.e., operating both mmWave and microwave links simultaneously, offer unique challenges to resource allocation. Furthermore, the use of multi-slope path loss models becomes inevitable for analysis, owing to irregular cell patterns and the blocking characteristics of mmWave communications. This paper amalgamates the unique challenges posed by the aforementioned recent developments and proposes various CI-based techniques, including game theory and optimization routines, to perform efficient resource management.
The ever-growing computation and storage capability of mobile phones has given rise to mobile-centric context recognition systems, which are able to sense and analyze the context of the carrier so as to provide an appropriate level of service. As non-intrusive autonomous sensing and context recognition are desirable characteristics of a personal sensing system, efforts have been made to develop opportunistic sensing techniques on mobile phones. The resulting combination of these approaches has ushered in a new realm of applications, namely opportunistic user context recognition with mobile phones. This article surveys the existing research and approaches towards the realization of such systems. In doing so, the typical architecture of a mobile-centric user context recognition system is introduced as a sequential process of sensing, preprocessing, and context recognition phases. The main techniques used for the realization of the respective processes during these phases are described, and their strengths and limitations are highlighted. In addition, lessons learned from previous approaches are presented as motivation for future research. Finally, several open challenges are discussed as possible ways to extend the capabilities of current systems and improve their real-world experience.
Mobile communications are increasingly contributing to global energy consumption. The EARTH (Energy Aware Radio and neTworking tecHnologies) project tackles the important issue of reducing CO2 emissions by enhancing the energy efficiency of cellular mobile networks. EARTH is a holistic approach to developing a new generation of energy-efficient products, components, deployment strategies and energy-aware network management solutions. In this paper, the holistic EARTH approach to energy-efficient mobile communication systems is introduced. Performance metrics are studied to assess the theoretical bounds of energy efficiency as well as the practically achievable limits. A vast potential for energy savings lies in the operation of radio base stations, which consume a considerable share of the available power budget even when operating at low load. Energy-efficient radio resource management (RRM) strategies need to take into account slowly changing daily load patterns, as well as highly dynamic traffic fluctuations. Moreover, various deployment strategies are examined, focusing on their potential to reduce energy consumption whilst providing uncompromised coverage and user experience. This includes heterogeneous networks with a sophisticated mix of different cell sizes, which may be further enhanced by energy-efficient relaying and base station cooperation technologies. Finally, scenarios leveraging the capability of advanced terminals to operate on multiple radio access technologies (RATs) are discussed with respect to their energy savings potential.
High-mobility scenarios are typical of applications such as low earth orbit (LEO) satellite and vehicle-to-everything (V2X) communications. A standardized approach to dealing with high-mobility scenarios is to use flexible sub-frame structures with a higher pilot density in the time domain, which reduces spectrum efficiency. We propose a supplementary algorithm to improve multiple-antenna receiver performance in high-mobility scenarios for a given sub-frame structure, compared to the conventional 3GPP pilot- and data-based interference rejection receivers. The main feature of high-mobility (non-stationary) scenarios is that different symbols in the desired signal sub-frame may be received under different propagation and/or interference conditions. Recently, we addressed a non-stationary interference rejection scenario in a slowly varying propagation environment with asynchronous (intermittent) interference by developing an interference rejection combining algorithm in which the pilot-based estimate of the interference-plus-noise covariance matrix is regularized by the data-based estimate of the covariance matrix. In this paper, we 1) extend the data-regularized solution to general high-mobility scenarios, and 2) demonstrate its efficiency compared to the conventional pilot- and data-based receivers for different sub-frame formats in uplink transmissions in a LEO satellite scenario with high residual Doppler frequency, with and without hardware impairments.
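The regularization idea can be sketched as follows (the notation, blend factor and toy data are assumptions; this is not the paper's exact algorithm): the pilot-based interference-plus-noise covariance estimate is blended with a data-based covariance estimate before the combining weights are computed.

```python
# Data-regularized interference rejection combining sketch (assumed notation).
import numpy as np

def irc_weights(h, y_pilot, x_pilot, y_data, alpha=0.5):
    """h: (nr,) channel; y_pilot: (nr, Np) received pilots; x_pilot: (Np,)
    known pilots; y_data: (nr, Nd) received data; alpha: blend factor."""
    e = y_pilot - np.outer(h, x_pilot)           # interference+noise on pilots
    r_pilot = e @ e.conj().T / e.shape[1]
    r_data = y_data @ y_data.conj().T / y_data.shape[1]   # includes the signal
    r = r_pilot + alpha * r_data                 # regularized covariance
    return np.linalg.solve(r, h)                 # IRC/MMSE-style weights

# Toy usage with a 4-antenna receiver and random pilots/data.
rng = np.random.default_rng(8)
h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)
yp = np.outer(h, np.ones(16)) + 0.3 * (rng.standard_normal((4, 16)) +
                                       1j * rng.standard_normal((4, 16)))
yd = np.outer(h, rng.choice([-1, 1], 64)) + 0.3 * (rng.standard_normal((4, 64)) +
                                                   1j * rng.standard_normal((4, 64)))
w = irc_weights(h, yp, np.ones(16), yd)
print("combined response |w^H h| =", abs(w.conj() @ h))
```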
The Wireless Hybrid Enhanced Mobile Radio Estimators (WHERE) consortium researches radio positioning techniques to improve various aspects of communications systems. To make the benefits of position information available to communications systems, hybrid data fusion (HDF) techniques estimate reliable position information. In this paper, we first present the scenarios and radio technologies evaluated by the WHERE consortium for wireless positioning. We compare conventional HDF approaches with two novel approaches developed within the framework of WHERE. Yet HDF may still provide insufficient localization accuracy and reliability. Hence, we will research and develop new cooperative positioning algorithms, which exploit the available communications links among mobile terminals of heterogeneous wireless networks, to further enhance positioning accuracy and reliability.
Full-duplex (FD) communication can achieve higher spectrum efficiency than conventional half-duplex (HD) communication; however, self-interference (SI) is the key hurdle. This paper is the first work to propose intelligent omni-surface (IOS)-assisted FD multi-input single-output (MISO) communication systems to mitigate SI, solving the frequency-selectivity issue. In particular, two types of IOS are proposed: the energy splitting (ES)-IOS and the mode switching (MS)-IOS. We aim to maximize the data rate and minimize the SI power by optimizing the beamforming vectors, amplitudes and phase shifts for the ES-IOS, and the mode selection and phase shifts for the MS-IOS. However, the formulated problems are non-convex and challenging to tackle directly. Thus, we design alternating optimization algorithms to solve the problems iteratively. Specifically, quadratically constrained quadratic programming (QCQP) is employed for the beamforming optimization, the amplitude and phase-shift optimization for the ES-IOS and the phase-shift optimization for the MS-IOS. Nevertheless, the binary variables of the MS-IOS render the mode selection optimization intractable, and we therefore resort to semidefinite relaxation (SDR) and Gaussian randomization procedures to solve it. Simulation results validate the proposed algorithms' efficacy and show the effectiveness of both IOSs in mitigating SI compared to the case without an IOS.
The open radio access network (Open-RAN) is becoming a key component of cellular networks, and optimizing its architecture is therefore vital. The Open-RAN is a distributed architecture that lets the virtualized networking functions be split between Distributed Units (DUs) and Centralized Units (CUs); as a result, there is a wide range of design options. We propose an optimization problem to choose the split points. The objective is to balance the load across CUs as well as midhaul links while respecting delay requirements. The resulting formulation is an NP-hard problem that is solved with a novel heuristic algorithm. Performance evaluation shows that the gap between the optimal and heuristic solutions does not exceed 2%. An in-depth analysis of different centralization levels shows that using multiple CUs could reduce the total bandwidth usage by up to 20%. Moreover, multipath routing can improve load balancing between midhaul links at the cost of increased bandwidth usage.
Automatic Repeat reQuest (ARQ) is implemented to ensure reliable transmission when channel state information (CSI) is not available at the source and the selected transmission rate is not supported by the current channel realization. We consider a relay system with a hybrid relay scheme, where the relay switches between decode-and-forward (DF) and compress-and-forward (CF) according to the decoding status. For this setting, we propose a new ARQ strategy and analyze its performance in terms of maximum throughput, average reward and inter-renewal time. Compared with pure DF, the hybrid relay scheme shows considerable gain.
In future heterogeneous cellular networks (HCNs), cognitive radio (CR) combined with device-to-device (D2D) communication can further enhance system spectral and energy efficiency. Unlicensed smart devices (SDs) are allowed to detect the available licensed spectrum and utilise the resources detected as unused by the licensed users (LUs). In this work, we propose such a system and provide a comprehensive analysis of the effect of the selection of the SDs' frame structure on energy efficiency, throughput and interference. Moreover, an uplink power control strategy is considered in which the LUs and SDs adapt their transmit power based on the distance from their reference receivers. The optimal frame structure with power control is investigated under high-SNR and low-SNR network environments. The impact of power control and of the optimal sensing time and frame length on the achievable energy efficiency, throughput and interference is illustrated and analysed through simulation results. It is also shown that the optimal sensing time and frame length that maximize the energy efficiency of the SDs strictly depend on the power control factor employed in the underlying network, such that the considered power control strategy may decrease the energy efficiency of the SDs in the very low SNR regime.
Motivated by increased interest in energy-efficient communication systems, the relation between energy efficiency (EE) and spectral efficiency (SE) for multiple-input multiple-output (MIMO) systems is investigated in this paper. To provide insight into the design of practical MIMO systems, we adopt a realistic power model and consider both independent Rayleigh fading and semi-correlated fading channels. We derive a novel closed-form upper bound for the system EE as a function of the SE. This upper bound exhibits great accuracy over a wide range of SE values, and can thus be utilized to explicitly assess the influence of SE on EE and to analytically address EE optimization problems. Using this tight EE upper bound, our analysis addresses two EE optimization issues: given the number of transmit and receive antennas, an optimum value of SE is derived such that the overall EE is maximized; and given a specific value of SE, the optimal number of antennas is derived for maximizing the system EE.
This paper presents an innovative Intrusion Detection System (IDS) architecture using Deep Reinforcement Learning (DRL). To accomplish this, we start by analysing the DRL problem for IoT devices, followed by designing intruder attacks using the Label Flipping Attack (LFA). We propose an artificial-intelligence DRL model for IoT attack detection, along with two defence strategies: Label-based Semi-supervised Defence (LSD) and Clustering-based Semi-supervised Defence (CSD). Finally, we provide evaluation results for the adaptive attack and defence models on multiple IoT scenarios with the NSL-KDD, IoT-23, and N-BaIoT datasets. The research shows that DRL functions effectively with dynamically produced traffic, in contrast to existing conventional techniques.
This letter proposes a novel graph-based multi-cell scheduling framework to efficiently mitigate downlink inter-cell interference in small cell OFDMA networks. The framework incorporates dynamic clustering combined with channel-aware resource allocation to provide tunable quality-of-service measures at different levels. Our extensive evaluation study shows that a significant improvement in users' spectral efficiency is achievable, while relatively high cell spectral efficiency is also maintained via empirical tuning of the re-use factor across the cells according to the required QoS constraints.
With the increased complexity of today's webpages, the computation latency incurred by webpage processing during downloading has become a newly identified factor that may substantially affect user experience in a mobile network. To tackle this issue, we propose a simple but effective transport-layer optimization technique which requires context information to be disseminated from the mobile edge computing (MEC) server to the user devices where the algorithm is actually executed. The key novelty is the mobile edge's knowledge of webpage content characteristics, which can increase downloading throughput and thus enhance user QoE. Our experimental results, based on a real LTE-A test-bed, show that when the proportion of computation latency varies between 20% and 50% (typical for today's webpages), the downloading throughput can be improved by up to 34.5%, with the downloading time reduced by up to 25.1%.
This letter describes the impact of unknown channel access delay on the timeline of the Hybrid Automatic Repeat Request (HARQ) process in the 3rd Generation Partnership Project Long Term Evolution (3GPP LTE) system when a Relay Node (RN) is used for coverage extension of Machine Type Communication (MTC) devices. A solution is also proposed for determining the unknown channel access delay when the RN operates in an unlicensed spectrum band. The proposed mechanism is expected to help MTC operation in typical coverage-hole areas, such as smart meters located in the basements of buildings.
This paper highlights the limitations of the ECPC (Each Carrier Power Control) concept [1], originally proposed for OFDM-DS-CDMA, when extended to MC-CDMA (Multi-Carrier Code Division Multiple Access) systems: first, its impractical signaling overhead of 80%; second, its inability to be used as an uplink power control mechanism. We then propose BBPC (Band Based Power Control) as a practical alternative to ECPC for MC-CDMA systems. Unlike ECPC, which controls power on a per-carrier basis, BBPC assigns the same power level to a band of carriers lying within the coherence bandwidth of the channel. It is shown that, with a nominal performance loss, BBPC reduces the signaling overhead to 2.5% and, by employing the control index estimator after de-spreading, can be used as an uplink power control mechanism for MC-CDMA. We use the SIR (Signal to Interference Ratio) as the power control index, and the BER and the standard deviation of the power control error as performance metrics.
Full-duplex transceivers enable transmission and reception at the same time on the same frequency, and have the potential to double the spectral efficiency of a wireless system. Recent studies have shown the feasibility of full-duplex transceivers. In this paper, we address the radio resource allocation problem for a full-duplex system. Due to the self-interference and inter-user interference, the problem is coupled between the uplink and downlink channels, and can be formulated as joint uplink and downlink sum-rate maximization. As the problem is non-convex, an iterative algorithm is proposed based on game theory, modelling the problem as a non-cooperative game between the uplink and downlink channels. The algorithm iteratively carries out optimal uplink and downlink resource allocation until a Nash equilibrium is reached. Simulation results show that the algorithm achieves fast convergence and can significantly improve full-duplex performance compared to the equal resource allocation approach. Furthermore, the full-duplex system with the proposed algorithm achieves considerable gains in spectral efficiency, of up to 40%, compared to a half-duplex system.
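The best-response dynamics can be sketched on a scalar-channel simplification of the game (toy gains and a plain water-filling response; the paper's formulation is richer): each side water-fills its power over the subchannels against the interference the other side currently creates, iterating towards a Nash equilibrium.

```python
# Toy uplink/downlink non-cooperative game via iterated water-filling.
import numpy as np

rng = np.random.default_rng(9)
K, P, noise = 8, 1.0, 0.05
g_ul, g_dl = rng.exponential(1, K), rng.exponential(1, K)      # direct gains
x_ul, x_dl = rng.exponential(0.1, K), rng.exponential(0.1, K)  # cross gains

def waterfill(gain, interf, P):
    inv = interf / gain                       # effective noise per channel
    order = np.sort(inv)
    for m in range(K, 0, -1):                 # find the active-channel set
        mu = (P + order[:m].sum()) / m        # candidate water level
        if mu > order[m - 1]:
            break
    return np.maximum(mu - inv, 0.0)

p_ul = p_dl = np.full(K, P / K)
for _ in range(50):                           # best-response iteration
    p_ul = waterfill(g_ul, noise + x_dl * p_dl, P)
    p_dl = waterfill(g_dl, noise + x_ul * p_ul, P)

rate = (np.log2(1 + g_ul * p_ul / (noise + x_dl * p_dl)).sum() +
        np.log2(1 + g_dl * p_dl / (noise + x_ul * p_ul)).sum())
print(f"equilibrium sum-rate: {rate:.2f} b/s/Hz")
```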
Architecture Description Languages enable the formalization of system architectures and the execution of preliminary analyses on them, aiming at the identification and resolution of design problems in the early stages of development. Such problems can be incompatibilities and mismatches in the connections between system components and in the format and type of information exchanged between them. Architecture Description Languages were initially developed to validate the correctness of software architectures; however, their applicability has been extended to cover many diverse areas over the past few years. In this paper, we aim to show how Architecture Description Languages can be applied to, and be a useful tool for, validating the correctness of architectures and configurations of future internet networking environments. We do so by using a recently proposed architectural approach and a recently proposed deployment approach, implemented by means of network virtualization, as case studies.
The coordinated multi-point (CoMP) architecture has proved to be very effective in improving the user fairness and spectral efficiency of cellular communication systems; however, its energy efficiency remains to be evaluated. In this paper, the CoMP system is idealized as a distributed antenna system by assuming perfect backhauling and cooperative processing. This simplified model allows us to express the capacity of the idealized CoMP system with a simple and accurate closed-form approximation. In addition, a framework for the energy efficiency analysis of CoMP systems is introduced, which includes a power consumption model and an energy efficiency metric, i.e. bit-per-joule capacity. This framework, along with our closed-form approximation, is utilized to assess both the channel and bit-per-joule capacities of the idealized CoMP system. Results indicate that multi-base-station cooperation can be energy efficient for cell-edge communication, and that the backhauling and cooperative processing power should be kept low. Overall, it is shown that the potential improvement of CoMP in terms of bit-per-joule capacity is not as high as in terms of channel capacity, due to the associated energy cost of cooperative processing and backhauling.
It has been envisaged that in future 5G networks user devices will become an integral part of the network by participating in the transmission of mobile content traffic, typically through device-to-device (D2D) technologies. In this context, we promote the concept of Mobility as a Service (MaaS), where a content-aware mobile network edge is equipped with the necessary knowledge of device mobility in order to distribute popular mobile content items to interested clients via a small number of helper devices. Towards this end, we present a device-level Information Centric Networking (ICN) architecture that is able to perform intelligent content distribution operations according to the necessary context information on mobile user mobility and content characteristics. Based on such a platform, we further introduce device-level online content caching and offline helper selection algorithms in order to optimise the overall system efficiency. In particular, this paper sheds distinct light on the importance of user mobility data analytics, based on which helper selection can lead to overall system optimality. Based on representative user mobility models, we conducted realistic simulation experiments and modelling which demonstrate efficiency in terms of both network traffic offloading gains and user-oriented performance improvements. In addition, we show how the framework can be flexibly configured to meet specific delay tolerance constraints according to specific context policies.
Network coverage is an increasing concern for the Quality of Service (QoS) targets of new mobile technologies. New solutions designed to fulfil the requirements of existing fifth-generation (5G) and upcoming sixth-generation (6G) scenarios are based on deploying a high number of network access points (APs), which tends to considerably degrade coverage and cell-edge performance due to added interference, and to increase the energy consumption of cellular systems. In this paper, we present new results on our recently proposed concept of cell-sweeping, which aims to minimize coverage dead-spots and improve cell-edge user performance. More specifically, the concept is explored further by analyzing the impact of different cell-sweeping configurations and evaluating the potential benefits towards achieving energy efficiency. By means of system-level computer simulations, it is shown that cell-sweeping provides energy savings of 11% and 26.5% for a similar average and cell-edge user throughput performance, respectively, when compared to a conventional static cell deployment in a typical urban macro-cell scenario.
In broadcast wireless networks, the options for reliable delivery are limited when there is no return link, or when a return link is not deemed cost-efficient due to the system resource requirements it introduces. In this paper, we focus our attention on two reliable transport mechanisms that are relevant for the non-real-time delivery of files: packet-level Forward Error Correction (FEC) and data carousels. The two techniques perform error recovery at the expense of redundant data transmission and content repetition, respectively. We demonstrate that their joint design may lead to significant resource savings.
Future mobile communication systems will be designed to support a wide range of data rates with a complex quality-of-service matrix. It is becoming more challenging to optimize radio resource management and maximise system capacity whilst meeting the required quality of service from the user's point of view. Traditional schemes have approached this problem by focusing mainly on resources within a cell, to a large extent ignoring the effects of the multi-cell architecture. This paper addresses the influence of multi-cell interference on overall radio resource utilisation and proposes a novel approach, setting a new direction for future research on resource scheduling strategies in multi-cell systems. It introduces a concept called the Load Matrix (LM), which facilitates the joint management of interference within and between cells for the allocation of radio resources. Simulation results show significant improvement in resource utilization and overall network performance. Using the LM strategy, the average cell throughput can be increased by as much as 30% compared to a benchmark algorithm. The results also show that maintaining cell interference within a margin, instead of at a hard target, can significantly improve resource utilization.
In this paper, we investigate the hybrid precoding design for a joint multicast-unicast millimeter wave (mmWave) system, where simultaneous wireless information and power transfer is considered at the receivers. A subarray-based sparse radio frequency chain structure is considered at the base station (BS). We formulate a joint hybrid analog/digital precoding and power splitting ratio optimization problem to maximize the energy efficiency of the system, subject to the maximum transmit power at the BS and the minimum harvested energy at the receivers. Due to the difficulty of solving the formulated problem, we first design a codebook-based analog precoding approach, after which only the digital precoding and power splitting ratio need to be jointly optimized. Next, we equivalently transform the fractional objective function of the optimization problem into a subtractive form and propose a two-loop iterative algorithm to solve it. For the outer loop, the classic bisection iterative algorithm is applied. For the inner loop, we transform the formulated problem into a convex one by successive convex approximation techniques, which is solved by a proposed iterative algorithm. Finally, simulation results are provided to show the performance of the proposed algorithm.
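For readers unfamiliar with the fractional-to-subtractive transform used above, a generic sketch of the outer bisection loop is given below. This is a toy stand-in, not the paper's actual precoding solver: the inner problem and all numbers are illustrative only.

    import numpy as np

    def bisection_ee(solve_subtractive, lam_lo=0.0, lam_hi=10.0, tol=1e-6):
        """Outer loop of the fractional-programming transform: find the
        energy-efficiency value lam* at which the subtractive problem
        max_x { rate(x) - lam * power(x) } attains the value zero."""
        while lam_hi - lam_lo > tol:
            lam = 0.5 * (lam_lo + lam_hi)
            if solve_subtractive(lam) > 0:   # lam is still below the optimal EE
                lam_lo = lam
            else:
                lam_hi = lam
        return 0.5 * (lam_lo + lam_hi)

    # Toy inner problem: rate = log2(1+p), power = p + 0.1, p in [0, 1].
    inner = lambda lam: max(np.log2(1 + p) - lam * (p + 0.1)
                            for p in np.linspace(0, 1, 1001))
    print("optimal EE ~", bisection_ee(inner))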
Energy efficiency (EE) is a key figure of merit for designing the next generation of communication systems. Meanwhile, relay-based cooperative communication, through machine-to-machine and other related technologies, is also playing an important part in the development of these systems. This paper designs an energy-efficient precoding method for optimizing the EE/energy consumption of two-way multi-input multi-output (MIMO) amplify-and-forward (AF) relay systems, using pseudo-convexity analysis to design EE-optimal precoding matrices. More precisely, we derive an EE-optimal source precoding matrix in closed form, design a numerical approach for obtaining an optimal relay precoding matrix, prove the optimality of these matrices when treated separately, and provide low-complexity bespoke algorithms to generate them. These matrices are then jointly optimized through an alternating optimization process that is proved to be systematically convergent. Performance evaluation indicates that our method can be globally optimal in some scenarios and that it is significantly more energy efficient (i.e., up to 60% more energy efficient) than existing EE-based one-way or two-way MIMO-AF precoding methods.
Many methods have been applied previously to improve the fairness of a wireless communication system. In this paper, we propose using hybrid schemes, where more than one transmission scheme is used in one system, to achieve this objective. These schemes consist of cooperative transmission schemes, maximal ratio transmission and interference alignment, and non-cooperative schemes, orthogonal and non-orthogonal, used alongside each other and in combination in the same system to improve fairness. We provide different weight calculation methods to vary the output of the fairness problem. We show the solution of the radio resource allocation problem for the transmission schemes used. Finally, simulation results are provided to show the fairness achieved, in terms of Jain's fairness index, by applying the proposed hybrid schemes and the different weight calculation methods at different inter-site distances.
Spectrum sharing and employing highly directional antennas in the mm-wave bands are considered among the key enablers for 5G networks. Conventional interference avoidance techniques like listen-before-talk (LBT) may not be efficient for such coexisting networks. In this paper, we address a coexistence mechanism by means of distributed beam scheduling with minimum cooperation between spectrum-sharing subsystems and without any direct data exchange between them. We extend a "Good Neighbor" (GN) principle initially developed for decentralized spectrum allocation to the distributed beam scheduling problem. To do so, we introduce relative performance targets, develop a GN beam scheduling algorithm, and demonstrate its efficiency in terms of the performance/complexity trade-off compared to the conventional selfish (SLF) and recently proposed distributed learning scheduling (DLS) solutions by means of simulations in highly directional antenna mm-wave scenarios.
This paper presents a novel method to estimate the frequency offset between a mobile phone and the infrastructure when the mobile phone initially attaches to the LTE network. The proposed scheme is based on PRACH (Physical Random Access Channel) preambles and can significantly reduce the complexity of preamble detection at the eNodeB side.
By performing a Floquet-mode analysis of a periodic slotted waveguide, a multiple-beam leaky wave antenna is proposed in the millimetre-wave (mmW) band. By considering the direction of the surface current lines on the broad/side walls of the waveguide, the polarization of the constructed beams is also controlled. The simulation results match well with the initial mathematical analysis.
This paper proposes a low-complexity hybrid beamforming design for multi-antenna communication systems. The hybrid beamformer comprises a baseband digital beamformer and a constant-modulus analog beamformer in the radio frequency (RF) part of the system. As in Singular-Value-Decomposition (SVD) based beamforming, the hybrid beamforming design aims to generate parallel data streams in multi-antenna systems; however, due to the constant-modulus constraint of the analog beamformer, the problem cannot be solved in the same way. To address this problem, mathematical expressions for the parallel data streams are derived in this paper, and the desired and interfering signals are specified per stream. The analog beamformers are designed by maximizing the power of the desired signal while minimizing the sum-power of the interfering signals. Finally, the digital beamformers are derived by defining the equivalent channel observed by the transmitter/receiver. Regardless of the number of antennas or the type of channel, the proposed approach can be applied to a wide range of MIMO systems with a hybrid structure wherein the number of antennas exceeds the number of RF chains. In particular, the proposed algorithm is verified for sparse channels that emulate mm-wave transmission as well as for rich scattering environments. To validate its optimality, the results are compared with those of the state-of-the-art, and it is demonstrated that the proposed method outperforms state-of-the-art techniques regardless of the type of channel and/or system configuration.
The paper addresses the TCP performance enhancing proxy techniques broadly deployed in wireless networks. Drawing on available models for TCP latency, we describe an analytical model for the latency and the buffer requirements related to the split-TCP mechanism. Although the model applicability is broad, we present and evaluate the model in the context of geostationary satellite networks, where buffering requirements may become more dramatic. Simulation results are compared with the analytical model estimates and show that the model captures the impact of various parameters affecting the dynamics of the component connections traversing the terrestrial and the satellite network.
With the advent of Network Function Virtualization (NFV) techniques, a subset of the Internet traffic will be treated by a chain of virtual network functions (VNFs) during its journey, while the rest of the background traffic will still be carried based on traditional routing protocols. Under such a multi-service network environment, we consider the co-existence of heterogeneous traffic control mechanisms, including flexible, dynamic service function chaining (SFC) traffic control and static, plain IP routing, for the aforementioned two types of traffic that share common network resources. Depending on the traffic patterns of the background traffic, which is statically routed through the traditional IP routing platform, we aim to perform dynamic service function chaining for the foreground traffic requiring VNF treatment, so that both the end-to-end SFC performance and the overall network resource utilization can be optimized. Towards this end, we propose a deep reinforcement learning based scheme to enable intelligent SFC routing decision-making in dynamic network conditions. The proposed scheme is ready to be deployed on both hybrid SDN/IP platforms and future advanced IP environments. Based on the real GEANT network topology and its one-week traffic traces, our experiments show that the proposed scheme is able to improve significantly on the traditional routing paradigm and achieve close-to-optimal performance quickly while satisfying the end-to-end SFC requirements.
In the research community, a new radio access network architecture with a logical separation between the control plane (CP) and data plane (DP) has been proposed for future cellular systems. It aims to overcome limitations of the conventional architecture by providing high data rate services under the umbrella of a coverage layer in a dual connection mode. This configuration could provide significant savings in signalling overhead. In particular, mobility robustness with minimal handover (HO) signalling is considered one of the most promising benefits of this architecture. However, DP mobility remains an issue that needs to be investigated. We consider predictive DP HO management as a solution that could minimise the out-of-band signalling related to the HO procedure. Thus we propose a mobility prediction scheme based on Markov chains. The developed model predicts the user's trajectory in terms of an HO sequence in order to minimise the interruption time and the associated signalling when the HO is triggered. Depending on the prediction accuracy, numerical results show that the predictive HO management strategy can significantly reduce the signalling cost compared with the conventional non-predictive mechanism.
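As a rough sketch of the prediction step (the paper's exact model is richer; all data here are hypothetical), a first-order Markov chain over cell IDs can be fitted from observed HO histories and queried for the most likely next cell:

    from collections import defaultdict

    def fit_transition_matrix(ho_sequences):
        """Estimate first-order Markov transition probabilities from
        observed handover sequences (lists of visited cell IDs)."""
        counts = defaultdict(lambda: defaultdict(int))
        for seq in ho_sequences:
            for a, b in zip(seq[:-1], seq[1:]):
                counts[a][b] += 1
        return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
                for a, nxt in counts.items()}

    def predict_next_cell(trans, current_cell):
        """Return the most probable next cell, or None if unseen."""
        nxt = trans.get(current_cell)
        return max(nxt, key=nxt.get) if nxt else None

    # Toy handover histories: users in cell 2 most often move to cell 3.
    P = fit_transition_matrix([[1, 2, 3, 2], [1, 2, 3, 4]])
    print(predict_next_cell(P, 2))   # -> 3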
In this paper, a novel approach, namely real-complex hybrid modulation (RCHM), is proposed to scale up multiuser multiple-input multiple-output (MU-MIMO) detection, with particular attention to the use of equal or approximately equal numbers of service antennas and user terminals. By RCHM, we mean that user terminals transmit their data sequences with a mix of real and complex modulation symbols interleaved in the spatial and temporal domains. It is shown, through the system outage probability, that RCHM can combine the merits of real and complex modulations to achieve the best spatial diversity-multiplexing trade-off that minimizes the required transmit-power for a given sum-rate. The signal pattern of RCHM is optimized with respect to the real-to-complex symbol ratio as well as the power allocation. It is also shown that RCHM equips the successive interference cancelling MU-MIMO receiver with near-optimal performance and fast convergence in Rayleigh fading channels. This result is validated through our mathematical analysis of the average bit-error-rate as well as extensive computer simulations considering the case with single or multiple base-stations.
Deep learning is driving a radical paradigm shift in wireless communications, all the way from the application layer down to the physical layer. Despite this, there is an ongoing debate as to what additional value artificial intelligence (or machine learning) could bring to us, particularly in physical layer design, and what penalties there may be. These questions motivate a fundamental rethinking of wireless modem design in the artificial intelligence era. Through several physical-layer case studies, we argue for a significant role that machine learning could play, for instance in parallel error-control coding and decoding, channel equalization, interference cancellation, as well as multiuser and multiantenna detection. In addition, we also discuss the fundamental bottlenecks of machine learning as well as their potential solutions in this paper.
Nowadays, the system architecture of the fifth generation (5G) cellular system is attracting increasing interest. To reach the ambitious 5G targets, a dense base station (BS) deployment paradigm is being considered. In this case, the conventional always-on service approach may not be suitable due to the linear energy/density relationship when the BSs are always kept on. This suggests a dynamic on/off BS operation to reduce energy consumption. However, this approach may create coverage holes, and the BS activation delay, in terms of hardware transition latency and software reloading, could result in service disruption. To tackle these issues, we propose a predictive BS activation scheme under the control/data separation architecture (CDSA). The proposed scheme exploits user context information, network parameters, BS sleep depth and measurement databases to send timely predictive activation requests in advance, before the connection is switched to the sleeping BS. An analytical model is developed and closed-form expressions are provided for the predictive activation criteria. Analytical and simulation results show that the proposed scheme achieves high BS activation accuracy with low errors w.r.t. the optimum activation time.
The proposed structure is presented to improve the inherently limited bandwidth and to reduce the production of grating lobes in reflectarray antennas (RAs). A 200 mm × 200 mm dielectric RA is designed and simulated to produce a second-mode orbital angular momentum (OAM) beam with an averaged realized gain of around 20 dBi in the 25-40 GHz band, which covers most of the 5G mm-wave bands (n257, n258, n260, and n261). To achieve the mentioned specifications, an inter-element spacing of 0.25λ is adopted.
Concerning ultra-reliable low-latency communication (URLLC) for the downlink operating in frequency-division multiple-access with random channel assignment, a lightweight power allocation approach is proposed to maximize the number of URLLC users subject to transmit-power and individual user-reliability constraints. Given perfect channel-state information at the transmitter (CSIT), the proposed approach is proven to maximize the number of supported URLLC users. Assuming imperfect CSIT, the proposed approach still aims to maximize the number of URLLC users without compromising the individual user reliability, by using a pessimistic evaluation of the channel gain. It is demonstrated, through numerical results, that the proposed approach can significantly improve the user capacity and the transmit-power efficiency in Rayleigh fading channels. With imperfect CSIT, the proposed approach can still provide remarkable user capacity at a limited cost in transmit-power efficiency.
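The flavour of such an allocation can be illustrated with a minimal greedy sketch (my own toy construction, not the paper's proof-backed algorithm): each user needs a certain power to hit its reliability-driven SNR target, so admitting the cheapest users first maximises the user count under the power budget; the pessimistic CSIT case simply scales the gains down.

    import numpy as np

    def max_urllc_users(gains, snr_req, p_max, csit_margin=1.0):
        """Greedy user-capacity maximisation: user i needs power
        snr_req / (gains[i] * csit_margin) to meet its reliability target;
        admit the cheapest users first until the budget p_max is exhausted.
        csit_margin < 1 models a pessimistic channel-gain evaluation."""
        p_need = np.sort(snr_req / (gains * csit_margin))
        return int(np.searchsorted(np.cumsum(p_need), p_max, side='right'))

    g = np.random.default_rng(1).exponential(1.0, 50)   # Rayleigh power gains
    print("perfect CSIT  :", max_urllc_users(g, 2.0, 20.0), "users")
    print("imperfect CSIT:", max_urllc_users(g, 2.0, 20.0, csit_margin=0.5), "users")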
It is well known that the values of the IEEE 802.11 MAC parameters directly affect the utilization of the channel capacity and link layer throughput, as well as higher-layer performance. This paper first studies the throughput of an ad hoc network under various 802.11 MAC parameters by developing a 3-dimensional Markov chain. Based on this model, it is mathematically proved that the current values of the 802.11 parameters result in dramatic throughput degradation. The optimum values of the 802.11 parameters that lead to maximum 802.11 MAC throughput are then proposed.
In this paper, a novel supplementary index bit aided transmit diversity (SIB-TD) approach is proposed for an enhanced discrete cosine transform based orthogonal frequency division multiplexing with index modulation (EDCT-OFDM-IM) system. Specifically, conventional index modulation (IM) is employed for the first antenna, i.e., the main branch, and the non-activated subcarriers' indices in one index modulation group are utilized for the IM mapping on the same index modulation group for the second antenna, i.e., the diversity branch. Hence, each subcarrier cannot be synchronously activated on the two antennas. Finally, the non-activated subcarriers of one antenna transmit the same modulated symbols as the other antenna to exploit the diversity gain. At the receiver, a maximum likelihood group-wise receiver is also developed by detecting the main and diversity branches jointly. Simulation results demonstrate the superiority of the proposed scheme over both the conventional EDCT-OFDM-IM scheme and DFT-OFDM with Alamouti coding, with or without IM, even under imperfect channel estimation.
Channel reciprocity in time-division duplexing (TDD) massive MIMO (multiple-input multiple-output) systems can be exploited to reduce the overhead required for the acquisition of channel state information (CSI). However, perfect reciprocity is unrealistic in practical systems due to random radio-frequency (RF) circuit mismatches in the uplink and downlink channels. This can result in a significant degradation in the performance of linear precoding schemes, which are sensitive to the accuracy of the CSI. In this paper, we model and analyse the impact of RF mismatches on the performance of linear precoding in a TDD multi-user massive MIMO system, taking the channel estimation error into consideration. We use the truncated Gaussian distribution to model the RF mismatch, and derive closed-form expressions for the output SINR (signal-to-interference-plus-noise ratio) of maximum ratio transmission and zero forcing precoders. We further investigate the asymptotic performance of the derived expressions to provide valuable insights into practical system design, including useful guidelines for the selection of effective precoding schemes. Simulation results are presented to demonstrate the validity and accuracy of the proposed analytical results.
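A quick Monte Carlo sanity check of this kind of mismatch model is easy to sketch (a toy amplitude-only mismatch, not the paper's full analysis; all parameter values are illustrative):

    import numpy as np
    from scipy.stats import truncnorm

    rng = np.random.default_rng(0)
    M, sigma, bound = 64, 0.1, 0.3   # antennas, mismatch std, truncation bound

    # Truncated-Gaussian multiplicative RF mismatch per antenna:
    # effective downlink gain = uplink gain * (1 + e), e ~ TN(0, sigma^2).
    e = truncnorm.rvs(-bound / sigma, bound / sigma, scale=sigma,
                      size=M, random_state=1)

    h_ul = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    h_dl = h_ul * (1.0 + e)                     # non-reciprocal effective channel

    w = np.conj(h_ul) / np.linalg.norm(h_ul)    # MRT precoder from the UL estimate
    print("beamforming gain, ideal reciprocity:", np.abs(h_ul @ w) ** 2)
    print("beamforming gain, RF mismatch      :", np.abs(h_dl @ w) ** 2)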
In this work, we provide the first attempt to evaluate the error performance of Rate-Splitting (RS) based transmission strategies with constellation-constrained coding/modulation. The considered scenario is an overloaded multigroup multicast, where RS can mitigate the inter-group interference and thus achieve a better max-min fair group rate than conventional transmission strategies. We bridge the RS-based rate optimization with modulation-coding scheme selection, and implement them in a developed transceiver framework with either a linear or a non-linear receiver, the latter equipped with a generalized sphere decoder. Simulation results of the coded bit error rate demonstrate that, while the conventional strategies suffer from an error floor in the considered scenario, the RS-based strategy delivers superior performance even with low-complexity receiver techniques. The proposed analysis, transceiver framework and evaluation methodology provide a generic baseline solution to validate the effectiveness of RS-based system design in practice.
This paper presents a machine learning (ML) based model to predict the diffraction loss around the human body. In practice, it is not feasible to measure the diffraction loss for all possible body rotation angles, builds and line-of-sight (LoS) elevation angles. A diffraction loss variation prediction model based on a non-parametric learning technique called the Gaussian process (GP) is introduced. The analysis shows that a correlation of 86% and a normalised mean square error (NMSE) of 0.3 on the test data are achieved using only 40% of the measured data. This allows a 60% reduction in the required measurements in order to obtain a well-fitted ML loss prediction model. It also confirms the model's generalizability to non-measured rotation angles.
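A minimal sketch of such a GP regressor (using scikit-learn; the angles and losses below are made-up placeholders, not the paper's measured data) could be:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical training subset: measured rotation angles (degrees)
    # and the diffraction loss (dB) observed at each.
    angles = np.array([0, 30, 60, 120, 180, 240, 300], dtype=float)[:, None]
    loss_db = np.array([3.1, 6.4, 11.8, 14.2, 16.9, 13.7, 7.2])

    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=40.0) + WhiteKernel(noise_level=0.5),
        normalize_y=True)
    gp.fit(angles, loss_db)

    # Predict the loss (with uncertainty) at non-measured rotation angles.
    mean, std = gp.predict(np.array([[90.0], [150.0], [210.0]]), return_std=True)
    print(np.round(mean, 1), np.round(std, 1))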
In this paper, the capacity of OFDM/OQAM with isotropic orthogonal transform algorithm (IOTA) pulse shaping is evaluated through information-theoretic analysis. In conventional OFDM systems, the insertion of a cyclic prefix (CP) decreases the system's spectral efficiency. As an alternative to OFDM, filter bank based multicarrier systems adopt proper pulse shaping with good time and frequency localisation properties to avoid interference and maintain orthogonality in the real field among sub-carriers without the use of a CP. We evaluate the spectral efficiency of OFDM/OQAM systems with IOTA pulse shaping in comparison with conventional OFDM/QAM systems, and our analytical model is further extended in order to gain insights into the effect of utilizing the intrinsic interference on the performance of our system. Furthermore, the spectral efficiency of OFDM/OQAM systems is analyzed when the effect of inter-symbol and inter-carrier interference is considered.
Beyond-3G and 4G mobile systems envision heterogeneous infrastructures comprising diverse wireless systems, e.g., 2G, 3G, DVB, WLAN, and various transmission approaches, e.g., "one-to-one" and "one-to-many". In this context, a network selection (NS) problem emerges in determining the appropriate Access Network (AN), as users are reachable through several different ANs. This paper addresses the issue of provisioning "one-to-many" services over heterogeneous wireless networks in terms of how to choose the AN that satisfies the bandwidth requirement of the services while maximizing the system profit obtained in the combined network. A heterogeneous network comprising the Multimedia Broadcast Multicast Service (MBMS) of the third generation mobile terrestrial network and the digital video broadcasting transmission system for handheld terminals (DVB-H) is adopted in this study. Both networks cooperate and complement each other to improve resource usage and to support "one-to-many" services with their multicast and broadcast transmission capabilities. Based on this architecture, an algorithmic framework is defined and proposed to solve the NS problem for "one-to-many" services. Six schemes based on the algorithmic framework are then evaluated by simulation.
To avoid unnecessarily using a massive number of base station antennas to support a large number of users in spatially multiplexed multi-user MIMO systems, optimal detection methods are required to demultiplex the mutually interfering information streams. Sphere decoding (SD) can achieve this, but its complexity and latency become impractical for large MIMO systems. Low-complexity detection solutions, such as linear detectors (e.g., MMSE) or likelihood ascendant search (LAS) approaches, have significantly lower latency requirements than SD, but their achievable throughput is far from optimal. This work presents the concept of Antipodal detection and decoding, which can deliver very high throughput with practical latency requirements, even in systems where the number of user antennas reaches the number of base station antennas. The Antipodal detector either returns a highly reliable vector solution or does not find a vector solution at all (i.e., it results in an erasure), skipping the heavy processing load related to finding vector solutions that have a very high likelihood of being erroneous. Then, a belief-propagation-based decoder is proposed that restores these erasures and further corrects remaining erroneous vector solutions. We show that for 32×32, 64-QAM modulated systems, and for packet error rates below 10%, Antipodal detection and decoding requires 9 dB less transmitted power than systems employing soft MMSE or LAS detection and LDPC decoding with similar complexity requirements. For the same scenario, our Antipodal method achieves practical throughput gains of more than 50% compared to soft MMSE and soft LAS-based methods.
We consider a reconfigurable intelligent surface (RIS) aided wireless powered Internet of Things (WP IoT) network. To address the energy-limitation issue, IoT devices in such a network can be wirelessly powered by a power station (PS) first and then connect with an access point (AP) using their own harvested energy. The RIS helps enhance energy and information reception in the downlink wireless energy transfer (WET) and uplink wireless information transfer (WIT), respectively. This work unveils the impact of phase shift error (PSE) and transceiver hardware impairment (THI) on the considered network. Our investigation starts with a scenario where only the impact of the PSE on the system under study is considered, then moves to a scenario with the compound effect of both PSE and THI. A maximization problem of the system sum throughput is formulated to evaluate the overall performance for these two scenarios, subject to the constraints of the adjustable RIS phase shifts, the statistical PSE and the transmission time scheduling. To handle the non-convexity of the formulated problem due to the coupled variables, we first adopt the Lagrange dual method and Karush-Kuhn-Tucker (KKT) conditions to derive the optimal time scheduling in closed form. Next, we recast the stochastic PSE into its deterministic counterpart for tractability. Then, we adopt successive convex approximation (SCA) to iteratively derive the optimal WIT phase shifts, and element-wise block coordinate descent (EBCD) and complex circle manifold (CCM) methods to iteratively derive the optimal WET phase shifts. Finally, we complete our solution approach for the scenario with both PSE and THI. Simulation results highlight the performance of the proposed scheme and the benefits induced by the RIS in comparison to benchmark schemes.
Good network coverage is an important element of the Quality of Service (QoS) provision that mobile cellular operators aim to deliver. The established requirements for the existing Fifth Generation (5G) and the emerging scenarios for the upcoming Sixth Generation (6G) cellular communication technologies depend heavily on the coverage quality that the network is able to provide. In addition, some proposed 5G solutions, such as densification, are complex, costly, and tend to degrade network coverage due to increased interference, which is critical for cell-edge performance. In this direction, we present a novel concept of cell-sweeping for coverage enhancement in cellular networks. One of the main objectives behind this mechanism is to overcome the cell-edge problem, which directly translates into better network coverage. The operation of the concept is then introduced and compared to conventional static cell scenarios. These comparisons target mostly the benefits at cell-edge locations. Additionally, the use of schedulers that take advantage of the sweeping system is expected to extend the cell-edge benefits to the entire network. This is observed when deploying cell-sweeping with the Proportional Fair (PF) scheduler. A 5th-percentile improvement of 125% and an average throughput increase of 35% were obtained through system-level simulations. The preliminary results presented in this paper suggest that cell-sweeping can be adopted as an emerging technology for future Radio Access Network (RAN) deployments.
A cell-free massive multiple-input multiple-output (MIMO) uplink is investigated in this paper. We address a power allocation design problem that considers two conflicting metrics, namely the sum rate and fairness. Different weights are allocated to the sum rate and fairness of the system, based on the requirements of the mobile operator. The knowledge of the channel statistics is exploited to optimize power allocation. We propose to employ large-scale fading (LSF) coefficients as the input of a twin delayed deep deterministic policy gradient (TD3) agent. This enables us to solve the non-convex sum rate fairness trade-off optimization problem efficiently. Then, we exploit a use-and-then-forget (UatF) technique, which provides a closed-form expression for the achievable rate. The sum rate fairness trade-off optimization problem is subsequently solved through a sequential convex approximation (SCA) technique. Numerical results demonstrate that the proposed algorithms outperform conventional power control algorithms in terms of both the sum rate and the minimum user rate. Furthermore, the TD3-based approach can increase the median sum rate by 16%-46% and the median minimum user rate by 11%-60% compared to the proposed SCA-based technique. Finally, we investigate the complexity and convergence of the proposed scheme.
This paper presents empirically based ultra-wideband and directional channel measurements performed in the Terahertz (THz) frequency range over a 250 GHz bandwidth, from 500 GHz to 750 GHz. A measurement setup calibration technique is presented for free-space measurements taken under Line-of-Sight (LoS) conditions between the transmitter (Tx) and receiver (Rx) in an indoor environment. The atmospheric effects on signal propagation, in terms of molecular absorption by oxygen and water molecules, are calculated and normalized. Channel impulse responses (CIRs) are acquired for the LoS scenario at different antenna separation distances. From the CIRs, the Power Delay Profile (PDP) is presented, where multiple delay taps caused by group-delay products and reflections from the measurement bench can be observed.
We are on the brink of a new era for wireless telecommunications, an era that will change the way business is done. The fifth generation (5G) systems will be the first realization in this new digital era, where various networks will be interconnected to form a unified system. With support for higher capacity as well as low-delay and machine-type communication services, 5G networks will significantly improve performance over the current fourth generation (4G) systems and will also offer seamless connectivity to numerous devices by integrating different technologies, intelligence, and flexibility. In addition to the ongoing 5G standardization activities and technologies under consideration in the Third Generation Partnership Project (3GPP), technologies based on Institute of Electrical and Electronics Engineers (IEEE) standards operating in unlicensed bands will also be an integral part of the 5G ecosystem. Along with the 3GPP-based cellular technology, IEEE standards and technologies are also evolving to keep pace with user demands and new 5G services. In this article, we provide an overview of the evolution of the cellular and Wi-Fi standards over the last decade, with particular focus on the Medium Access Control (MAC) and Physical (PHY) layers, and highlight the ongoing activities in both camps driven by the 5G requirements and use cases.
Wideband millimeter-wave (mmWave) directional propagation measurements were conducted in the 32 GHz and 39 GHz bands in outdoor line-of-sight (LoS) small cell scenarios. The measurements provide spatial and temporal statistics that will be useful for small-cell outdoor wireless networks in future mmWave bands. Measurements were performed at two outdoor environments and repeated for all polarization combinations. The measurement results show little spread in the angular and delay domains for the LoS scenario. Moreover, the root-mean-squared (RMS) delay spread at different polarizations shows small differences, which can be attributed to specific scatterers in the channel.
Future cellular systems demand higher throughput as an important requirement, along with smaller cell sizes, to characterize the performance of network services. This paper proposes a way to optimize multihop cellular network (MCN) deployment in LTE-A/Mobile WiMAX broadband wireless access systems. A simple way to optimize the MCN is to associate direct and multihop users based on maximum channel quality and to allocate the resource blocks dynamically based on traffic load balancing as the adjustment variable. Changing traffic demands require dynamic network reconfiguration to maintain proportional fairness in the achieved throughput. A self-optimizing network based on a genetic algorithm (GA) is designed to adaptively resize the cell coverage limit and dynamically allocate resources based on active user demands. A policy control scheme governing resource allocation between direct and multihop users can be either fixed resource allocation (FRA) or dynamic resource allocation (DRA).
This paper presents details of wideband directional propagation measurements of millimetre-wave (mmWave) channels in the 26 GHz, 32 GHz, and 39 GHz frequency bands in a typical indoor office environment. More than 14400 power delay profiles (PDPs) were measured across the 26 GHz band, and over 9000 PDPs were recorded for the 32 GHz and 39 GHz bands at each measurement point. A mmWave wideband channel sounder was used, in which a signal analyzer and a vector signal generator were employed. Measurements were conducted for both co- and cross-antenna polarization. The setup provided 2 GHz of bandwidth; the mechanically steerable directional horn antenna with an 8-degree beamwidth provides 8 degrees of directional resolution over the azimuth for 32 GHz and 39 GHz, while the 26 GHz measurement setup provides an angular resolution of 5 degrees. The measurements provide the path loss, delay and spatial spread of the channel. Large-scale fading characteristics, RMS delay spread, RMS angular spread, and angular and delay dispersion are presented for the three mmWave bands for the line-of-sight (LoS) scenario.
In this paper, we investigate channel estimation for MC-CDMA in the presence of time and frequency synchronization errors. Channel estimation in MC-CDMA requires the transmission of pilot tones, based on which MMSE interpolation or FFT-based interpolation algorithms are applied. Most channel estimation methods in the literature assume perfect synchronization. However, this condition is not guaranteed in practice, and channel estimators are always expected to work as a fine synchronizer with some ability to compensate synchronization errors [1]. Multicarrier systems are very sensitive to synchronization errors: uncorrected errors cause inter-carrier interference (ICI) and degrade the performance significantly. In this paper, we analyze the effect of synchronization errors on the performance of pilot-aided channel estimators, and propose low-complexity methods to compensate the residual timing and frequency offsets, respectively. We estimate the timing offset in the frequency domain by a single-frequency estimation technique, and iteratively search for the frequency offset based on the interference power at a certain set of subcarriers. Simulation results show that our methods considerably improve the performance of channel estimators under imperfect synchronization conditions.
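For the timing-offset part, the underlying idea is that a residual delay of tau samples appears in the frequency domain as a linear phase ramp exp(-j*2*pi*k*tau/N) across the pilot subcarriers k, so tau can be read off the average phase step between adjacent pilots, which is exactly a single-frequency estimation problem. A minimal sketch (flat toy channel, uniform pilot spacing assumed):

    import numpy as np

    def estimate_timing_offset(h_pilots, pilot_idx, nfft):
        """Estimate the residual timing offset (in samples) from the
        linear phase ramp across pilot-tone channel estimates."""
        spacing = pilot_idx[1] - pilot_idx[0]            # uniform pilot spacing
        corr = np.sum(h_pilots[1:] * np.conj(h_pilots[:-1]))
        phase_step = np.angle(corr)                      # ~ -2*pi*spacing*tau/N
        return -phase_step * nfft / (2 * np.pi * spacing)

    # Toy check: a pure delay of 3 samples over a flat channel.
    N, tau = 64, 3
    pilots = np.arange(0, N, 4)
    h = np.exp(-2j * np.pi * pilots * tau / N)
    print(estimate_timing_offset(h, pilots, N))          # ~ 3.0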
In this paper, an orthogonal stochastic gradient descent (O-SGD) based learning approach is proposed to tackle the wireless channel over-training problem inherent in artificial neural network (ANN)-assisted MIMO signal detection. Our basic idea lies in the discovery and exploitation of the training-sample orthogonality between the current training epoch and past training epochs. Unlike the conventional SGD, which updates the neural network simply based upon current training samples, O-SGD discovers the correlation between current training samples and historical training data, and then updates the neural network with the uncorrelated components. The network update occurs only in the identified null subspaces. By such means, the neural network can understand and memorize uncorrelated components between different wireless channels, and thus is more robust to wireless channel variations. This hypothesis is confirmed through our extensive computer simulations as well as a performance comparison with the conventional SGD approach.
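A linear-model caricature of the projection idea (my own simplification for exposition, not the authors' network) keeps an orthonormal basis of past sample directions and lets only the orthogonal residual of each new sample drive the update:

    import numpy as np

    def orthogonal_residual(x, basis):
        """Subtract from x its components in the span of the stored
        orthonormal past-sample directions; return the novel part."""
        r = x.astype(float).copy()
        for b in basis:
            r -= (r @ b) * b
        return r

    def osgd_step(w, x, y, basis, lr=0.1):
        """One O-SGD step for a linear model w: the update is confined
        to the null space of previously seen training samples."""
        r = orthogonal_residual(x, basis)
        if np.linalg.norm(r) > 1e-8:
            basis.append(r / np.linalg.norm(r))   # remember this direction
        err = float(w @ x - y)                    # ordinary SGD error term
        return w - lr * err * r                   # update only along r

    w, basis = np.zeros(4), []
    w = osgd_step(w, np.array([1.0, 0, 0, 0]), 1.0, basis)
    w = osgd_step(w, np.array([1.0, 1.0, 0, 0]), 2.0, basis)  # only the novel
    print(w, len(basis))                                      # direction updates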
Software-defined data centers (SDDCs) are an emerging softwarized model that can monitor the allocation of virtual machines atop cloud servers. An SDDC consists of softwarized entities like Virtual Machines (VMs) and hardware entities like servers and connected switches. SDDCs apply VM deployment algorithms to preserve efficient placement and to process the data traffic generated by Connected and Autonomous Vehicles (CAVs). To enhance user satisfaction, SDDC providers are always looking for an intelligent model to monitor large-scale incoming traffic, such as Internet of Things (IoT) and CAV applications, by optimizing service quality and the service level agreement (SLA). Motivated by this, this paper proposes an energy-efficient VM cluster placement algorithm named EVCT to handle service quality and SLA issues in an SDDC in a CAV environment. The EVCT algorithm leverages the similarity between VMs and models the VM deployment problem as a weighted directed graph. Based on the amount of traffic between VMs, EVCT adopts the "maximum flow and minimum cut" theory to cut the directed graph and achieve highly energy-efficient placement of VMs. The proposed algorithm can efficiently reduce the energy consumption cost, provide a high quality of service (QoS) to users, and scale well under variable workloads. We have also carried out a series of experiments using real-world workloads to evaluate the performance of EVCT. The results illustrate that EVCT surpasses state-of-the-art algorithms in terms of energy consumption cost and efficiency.
A new way to model a CDMA system employing relaying is proposed in this paper. This method makes it possible to directly compare performance with and without relaying. The outage probability, which represents the ability of the users to reach the base station, is chosen as the criterion for this comparison. The model is based on the single-mode WCDMA FDD air interface with a two-hop relay. When relaying is applied, the simulation results show that, even using single-mode FDD, the uplink capacity is significantly improved, by 82%. A new relay node selection strategy is also proposed, and the results show how important it is to choose the relay node appropriately. Finally, different relaying scenarios are simulated to show when it is, and when it is not, beneficial to apply relaying.
In this paper, we investigate the optimal inter-frequency small cell discovery (ISCD) periodicity for small cells deployed on a carrier frequency other than that of the serving macro cell. We consider that the small cell and user terminal (UT) positions are modeled according to a homogeneous Poisson Point Process (PPP). We utilize polynomial curve fitting to approximate the percentage of time the typical UT misses small cell offloading opportunities, for a fixed small cell density and fixed UT speed. We then derive analytically the optimal ISCD periodicity that minimizes the average UT energy consumption (EC). Furthermore, we also derive the optimal ISCD periodicity that maximizes the average energy efficiency (EE), i.e., bit-per-joule capacity. Results show that the EC-optimal ISCD periodicity always exceeds the EE-optimal ISCD periodicity, except when the average ergodic rates in the two tiers are equal, in which case the two optimal ISCD periodicities coincide.
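The curve-fitting-plus-optimisation pipeline can be mimicked in a few lines (all numbers and the energy model below are toy placeholders, not the paper's analytical derivation):

    import numpy as np

    # Hypothetical simulation samples: ISCD period T (s) versus the fraction
    # of time the typical UT misses small-cell offloading opportunities.
    T_samples    = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
    miss_samples = np.array([0.02, 0.04, 0.10, 0.19, 0.35])
    poly = np.polyfit(T_samples, miss_samples, 2)    # curve-fitting step

    def avg_energy(T, e_scan=0.8, p_macro=1.2, p_small=0.6):
        """Toy EC model: per-period scanning energy amortised over T, plus
        radio power weighted by the time spent on macro vs small cell."""
        miss = np.polyval(poly, T)
        return e_scan / T + miss * p_macro + (1.0 - miss) * p_small

    grid = np.linspace(1.0, 20.0, 400)
    print("EC-optimal ISCD period ~", round(grid[np.argmin(avg_energy(grid))], 2), "s")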
In a cognitive radio network, secondary (unlicensed) users (SUs) are allowed to utilize the licensed spectrum when it is not used by the primary (licensed) users (PUs). Because of the dynamic nature of cognitive radio networks, the activities of SUs, such as "how long to sense" and "how long to transmit", significantly affect both the service quality of the cognitive radio network and the protection of PUs. In this work, we formulate and analyze the spectrum utilization efficiency problem in the cognitive radio network for various periodic SU frame structures, which consist of sensing and data transmission slots. Energy detection is considered as the spectrum sensing algorithm. To achieve higher spectrum utilization efficiency, the optimal sensing and data transmission lengths are investigated and found numerically. Simulation results are presented to verify our analysis and to evaluate the interference to the PU, which should be kept within a tolerable level.
This paper investigates a generic downlink symbiotic radio (SR) system, where a Base Station (BS) establishes a direct (primary) link with a receiver having an integrated backscatter device (BD). In order to accurately characterize the backscatter link, the backscattered signal packets are designed to have finite block length; as such, the backscatter link in this SR system employs finite block-length channel codes. According to different types of backscatter symbol period and transmission rate, we investigate the non-cooperative and cooperative SR (i.e., NSR and CSR) systems, and derive the average achievable rates of the direct and backscatter links, respectively. We formulate two optimization problems, i.e., transmit power minimization and energy efficiency maximization. Due to the non-convexity of these formulated optimization problems, semidefinite programming (SDP) relaxation and successive convex approximation (SCA) are adopted to design the transmit beamforming vector. Moreover, a low-complexity transmit beamforming structure is constructed to reduce the computational complexity of the SDP-relaxed solution. Finally, simulation results are presented to validate the proposed schemes.
This paper investigates self-backhauling with dual antenna selection at multiple small cell base stations. Both half and full duplex transmissions at the small cell base station are considered. Depending on instantaneous channel conditions, the full duplex transmission can have higher throughput than the half duplex transmission, but it is not always the case. Closed-form expressions of the average throughput are obtained, and validated by simulation results. In all cases, the dual receive and transmit antenna selection significantly improves backhaul and data transmission, making it an attractive solution in practical systems.
A generic cooperative MIMO BICM system is described. Achievable rates are computed based on the extended equivalent binary input channel model of the original BICM system. Full decode-and-forward is assumed at the relay node. Two types of two-phase transmission/reception protocols are employed to establish orthogonal transmission/reception at the relay node. Achievable rate results are provided for different combinations of modulation orders and numbers of antennas used at the source and relay nodes. The quantitative results provided in this paper could serve as a guide on when to engage cooperative transmission and how to choose proper constellations and puncturing ratios for practical BICM coded systems. A comparison of the considered BICM system with other possible cooperative coded systems is also of interest but, due to lack of space, is not addressed in this paper.
We investigate the capacity of an Intelligent Quadrifilar Helix Antenna (IQHA) based multiple-input multiple-output (MIMO) communication system. We show that an IQHA-based MIMO system is able to offer larger capacity than a MIMO system without IQHAs, while at the same time reducing the number of RF chains behind the antenna and hence the total cost. Two sub-optimal algorithms are proposed to adjust the weights of the IQHA to maximize the capacity.
In this paper, a compact circularly polarized (CP) multi-mode antenna for global navigation satellite system reflectometry (GNSS-R) is presented. The design comprises two Quadrifilar Helical Antennas (QHAs), each fed with a grounded coplanar waveguide (GCPW) and quarter-wavelength power divider (QWPD) integrated feed. A hybrid staircase-shaped (SSR) QHA radial is proposed, formed by serially arranging several vertical and diagonal elements. The electric field lines from the vertical elements converge constructively to radiate along the axis normal. Besides, the circular spatial offsets between the adjacent diagonal and vertical elements induce a 90° delay in the radiated field. This hybrid shape introduces a new principle facilitating the normal mode of operation (MOOp) in the QHA and generates CP over broad elevations and azimuths (0
We investigate the use of fixed-point methods for predicting the performance of multiple TCP flows sharing geostationary satellite links. The problem formulation is general in that it can address both error-free and error-prone links and proxy mechanisms such as split-TCP connections, and it can account for asymmetry and different satellite network configurations. We apply the method in the specific context of bandwidth-on-demand (BoD) satellite links. The analytical approximations show good agreement with simulation results, although they tend to be optimistic when the link is not saturated. The main constraint on the method's applicability is the limited availability of analytical models for the MAC-induced packet delay under non-homogeneous load and prioritization mechanisms.
A statistical model is derived for the equivalent signal-to-noise ratio (SNR) of the Source-to-Relay-to-Destination (S-R-D) link for Amplify-and-Forward (AF) relaying systems subject to block Rayleigh fading. The probability density function and the cumulative distribution function of the S-R-D link SNR involve modified Bessel functions of the second kind. Using fractional-calculus mathematics, a novel approach is introduced to rewrite those Bessel functions (and the statistical model of the S-R-D link SNR) in series form using simple elementary functions. Moreover, a statistical characterization of the total receive SNR at the destination, corresponding to the S-R-D and S-D link SNRs, is provided for a more general relaying scenario in which the destination receives signals from both the relay and the source and processes them using maximum ratio combining (MRC). Using the novel statistical model for the total receive SNR at the destination, accurate and simple analytical expressions for the outage probability, the bit error probability, and the ergodic capacity are obtained. The analytical results presented in this paper provide a theoretical framework to analyze the performance of AF cooperative systems with an MRC receiver.
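A widely used form of the AF end-to-end SNR is gamma_eq = gamma_1 * gamma_2 / (gamma_1 + gamma_2 + 1), which makes models of this kind easy to cross-check by Monte Carlo (the average SNRs and the threshold below are arbitrary illustrative values):

    import numpy as np

    rng = np.random.default_rng(0)
    n, g1b, g2b, gsdb, g_th = 1_000_000, 10.0, 10.0, 5.0, 4.0

    # Exponential (block-Rayleigh) instantaneous SNRs on each link.
    g1, g2, gsd = (rng.exponential(b, n) for b in (g1b, g2b, gsdb))

    g_srd = g1 * g2 / (g1 + g2 + 1.0)   # equivalent S-R-D SNR for AF relaying
    g_mrc = gsd + g_srd                 # MRC adds the direct S-D link SNR
    print("outage probability:", np.mean(g_mrc < g_th))
    print("ergodic capacity  :", np.mean(np.log2(1.0 + g_mrc)), "bit/s/Hz")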
This paper investigates full-duplex wireless-powered two-way communication networks, where two hybrid access points (HAPs) and a number of amplify-and-forward (AF) relays all operate in full-duplex mode. We use time switching (TS) and static power splitting (SPS) schemes in two-way full-duplex wireless-powered networks as a benchmark. New time division duplexing static power splitting (TDD SPS) and full-duplex static power splitting (FDSPS) schemes, as well as a simple relay selection strategy, are then proposed to improve the system performance. For TS, SPS and FDSPS, the best relay harvests energy using the received RF signal from the HAPs and uses the harvested energy to transmit the signal to each HAP at the same frequency and time; therefore only partial self-interference (SI) cancellation needs to be considered in the FDSPS case. For the proposed TDD SPS, the best relay harvests energy from the HAP and from its own self-interference. We then derive closed-form expressions for the throughput and outage probability for delay-limited transmissions over Rayleigh fading channels. Simulation results are presented to evaluate the effectiveness of the proposed schemes with different key system parameters, such as the time allocation, power splitting ratio and residual SI.
In this paper, a novel unsupervised deep learning approach is proposed to tackle the multiuser frequency synchronization problem inherent in orthogonal frequency-division multiple-access (OFDMA) uplink communications. The key idea lies in the use of a feed-forward deep neural network (FF-DNN) for multiuser interference (MUI) cancellation, taking advantage of its strong classification capability. The proposed FF-DNN consists of two essential functional layers: a carrier-frequency-offset (CFO) classification layer that is responsible for identifying the users' CFO range, and an MUI-cancellation layer responsible for joint multiuser detection (MUD) and frequency synchronization. By such means, the proposed FF-DNN approach showcases remarkable MUI-cancellation performance without the need for multiuser CFO estimation. In addition, we also exhibit an interesting phenomenon occurring at the CFO-classification stage, where the CFO-classification performance improves exponentially as the number of users increases. This is called the multiuser diversity gain in the CFO-classification stage, and it is carefully studied in this paper.
Holographic teleportation is an emerging media application allowing people or objects to be teleported in a real-time and immersive fashion into the virtual space of the audience. Compared to traditional video content, the network requirements for supporting such applications are much more challenging. In this paper, we present a 5G edge computing framework for enabling remote production functions for live holographic teleportation applications. The key idea is to offload complex holographic content production functions from end user premises to the 5G mobile edge in order to substantially reduce the cost of running such applications on the user side. We comprehensively evaluated how specific network-oriented and application-oriented factors may affect the performance of remote production operations based on 5G systems. Specifically, we tested the application performance along the following four dimensions: (1) different data rate requirements with multiple content resolution levels, (2) different transport-layer mechanisms over the 5G uplink radio, (3) different indoor/outdoor location environments with imperfect 5G connections and (4) different object capturing scenarios, including the number of teleported objects and the number of sensor cameras required. Based on these evaluations we derive useful guidelines and policies for future remote production operations for holographic teleportation through 5G systems.
This paper investigates a wireless powered intelligent radio environment, where a fractional non-linear energy harvesting (NLEH) model is proposed to enable an intelligent reflecting surface (IRS) assisted wireless powered Internet of Things (WP IoT) network. The IRS engages in downlink wireless energy transfer (WET) and uplink wireless information transfer (WIT). We aim to improve the overall performance of the considered network by maximizing its sum throughput subject to constraints on two different types of IRS beam patterns and on the time durations. To solve the formulated problem, we first use the Lagrange dual method and Karush-Kuhn-Tucker (KKT) conditions to optimally design the time durations in closed form. Then, a quadratic transformation (QT) is proposed to iteratively transform the fractional NLEH model into a subtractive form, where the IRS phase shifts are optimally derived by the Complex Circle Manifold (CCM) method in each iteration. Finally, numerical results demonstrate the advantages of the proposed scheme, and the benefits induced by the IRS, in comparison with benchmark schemes.
Energy efficiency has become increasingly important in wireless communications, with significant environmental and financial benefits. This paper studies the achievable capacity region of a single-carrier uplink channel consisting of two transmitters and a single receiver, and uses average energy efficiency contours to find the optimal rate pair based on four different targets: maximum energy efficiency; a trade-off between maximum energy efficiency and rate fairness; achieving an energy efficiency target with maximum sum-rate; and achieving an energy efficiency target with fairness. In addition to the transmit power, the circuit power is also accounted for, with the maximum transmit power constrained to a fixed value. Simulation results demonstrate the achievability of the optimal energy-efficient rate pair within the capacity region, and illustrate the trade-off between energy efficiency, fairness and maximum sum-rate.
Recently, the fifth-generation (5G) cellular system has been standardised. As opposed to legacy cellular systems geared towards broadband services, the 5G system identifies key use cases for ultra-reliable and low latency communications (URLLC) and massive machine-type communications (mMTC). These intrinsic 5G capabilities enable promising sensor-based vertical applications and services such as industrial process automation. The latter includes autonomous fault detection and prediction, optimised operations and proactive control. Such applications enable equipping industrial plants with a sixth sense (6S) for optimised operations and fault avoidance. In this direction, we introduce an inter-disciplinary approach integrating wireless sensor networks with machine learning-enabled industrial plants as a step towards developing this 6S technology. We develop a modular system that can be adapted to vertical-specific elements. Without loss of generality, exemplary use cases are developed and presented, including a fault detection/prediction scheme and a sensor density-based boundary between orthogonal and non-orthogonal transmissions. The proposed schemes and modelling approach are implemented in a real chemical plant for testing purposes, and high fault detection and prediction accuracy is achieved, coupled with optimised sensor density analysis.
High Speed Downlink Packet Access (HSDPA) is the front-line technology within the 3rd Generation Partnership Project (3GPP) and represents a mid-term evolution of the standard. This paper presents simple equalizer structures, based on the Minimum Mean Square Error criterion, that are suitable for Adaptive Modulation and Coding (AMC), which is one of the key features of HSDPA. The performance of the equalizer structures under AMC has been shown to provide significant gain over the Rake receiver, in terms of HSDPA throughput, by enabling the use of higher CQI (Channel Quality Indicator) indices whilst showing stability against the changing input signal statistics caused by AMC. The LMMSE equalizer has been found to roughly double the HSDPA throughput in a variety of radio channels with a relatively small increase in complexity.
This paper describes a distributed, cooperative and real-time rental protocol for DCA operations in a multi-system and multi-cell context for OFDMA systems. A credit-token based rental protocol using auctioning is proposed in support of dynamic spectrum sharing between cells. The proposed scheme can be tuned adaptively as a function of the context by specifying the credit token usage in the radio etiquette. The application of the rental protocol is illustrated with an ascending-bid auction. The paper also describes two approaches for BS-BS communications in support of the rental protocol. Finally, it is described how the proposed mechanisms contribute to the current approaches followed in the IEEE 802.16h and IEEE 802.22 standards efforts addressing cognitive radio.
The Internet-of-Things (IoT) paradigm envisions billions of devices all connected to the Internet, generating low-rate monitoring and measurement data to be delivered to application servers or end-users. Recently, the possibility of applying in-network data caching techniques to IoT traffic flows has been discussed in research forums. The main challenge, as opposed to the content typically cached at routers (e.g., multimedia files), is that IoT data are transient and therefore require different caching policies. In fact, the emerging location-based services can also benefit from new caching techniques that are specifically designed for small transient data. This paper studies in-network caching of transient data at content routers, considering a key temporal data property: the data item lifetime. An analytical model that captures the trade-off between multihop communication costs and data item freshness is proposed. Simulation results demonstrate that caching transient data is a promising information-centric networking technique that can reduce the distance between content requesters and the location in the network where the content is fetched from. To the best of our knowledge, this is a pioneering research work aiming to systematically analyse the feasibility and benefit of using Internet routers to cache transient data generated by IoT applications.
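The lifetime-versus-cost trade-off admits a back-of-the-envelope caching rule (a toy decision model of my own, not the paper's analytical one): cache an item only if the hops saved by the requests expected within its remaining lifetime outweigh the storage cost.

    def cache_benefit(req_rate, lifetime, hops_saved, store_cost=1.0):
        """Expected net benefit of caching a transient item: requests
        expected before expiry, times hops saved per hit, minus storage cost."""
        return req_rate * lifetime * hops_saved - store_cost

    # Hypothetical items: (name, requests/s, remaining lifetime s, hops saved).
    items = [("temp-sensor", 0.2, 5.0, 3), ("parking-spot", 0.05, 2.0, 4)]
    for name, lam, ttl, hops in items:
        print(name, "-> cache" if cache_benefit(lam, ttl, hops) > 0 else "-> skip")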
In this article, we consider the joint subcarrier and power allocation problem for an uplink orthogonal frequency division multiple access system with the objective of weighted sum-rate maximization. Since the resource allocation problem is not convex, due to the discrete nature of subcarrier allocation, the complexity of finding the optimal solution is extremely high. We use the optimality conditions of this problem to propose a suboptimal allocation algorithm. A simplified implementation of the proposed algorithm is provided, which significantly reduces the algorithm's complexity. Numerical results show that the presented algorithm outperforms existing algorithms and achieves performance very close to the optimal solution.
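For a fixed subcarrier assignment, the per-user power step in this kind of problem is classical water-filling; a minimal sketch (the standard textbook form, not the article's exact optimality-condition algorithm) is:

    import numpy as np

    def waterfill(gains, p_total):
        """Water-filling over subcarrier gain-to-noise ratios `gains`
        with total power budget p_total; returns per-subcarrier powers."""
        order = np.argsort(gains)[::-1]                  # strongest first
        g = gains[order]
        for k in range(len(g), 0, -1):
            mu = (p_total + np.sum(1.0 / g[:k])) / k     # candidate water level
            if mu > 1.0 / g[k - 1]:                      # weakest active user ok?
                break
        p = np.maximum(mu - 1.0 / g, 0.0)
        out = np.empty_like(p)
        out[order] = p                                   # undo the sorting
        return out

    p = waterfill(np.array([2.0, 0.5, 1.0]), 1.0)
    print(p, p.sum())   # allocated powers sum to the budget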
This article presents a comprehensive survey of the literature on the self-interference management schemes required to achieve single-frequency full-duplex communication in wireless networks. A single-frequency full-duplex system, often referred to as an in-band full-duplex (FD) system, has emerged as an attractive solution for next-generation mobile networks, where the scarcity of available radio spectrum is an important issue. Although studies on the mitigation of self-interference have been documented in the literature, this is the first holistic survey of the techniques available for handling the self-interference that arises when a full-duplex device is enabled; it also discusses other system impairments that significantly affect self-interference management, covering not only terrestrial systems but also satellite communication systems. The survey provides a taxonomy of self-interference management schemes and compares the strengths and limitations of the various schemes. It also quantifies the amount of self-interference cancellation required for different access schemes, from the 1st generation to the candidate 5th generation of mobile cellular systems. Importantly, the survey summarises the lessons learnt and identifies open research questions and key research areas for the future. This paper is intended as a guide and take-off point for further work on self-interference management towards full-duplex transmission in mobile networks, including the heterogeneous cellular networks that will undeniably form part of future wireless systems.
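To give a feel for the cancellation figures such a survey tabulates, a back-of-envelope calculation (our illustration, with assumed transmit power, bandwidth and noise figure rather than values from the article) estimates the suppression needed to push self-interference down to the receiver noise floor:

```python
import math

# Back-of-envelope estimate of the self-interference (SI) cancellation an
# in-band full-duplex node needs: the SI must be suppressed roughly to the
# receiver noise floor. All numbers below are illustrative assumptions.
def required_si_cancellation_db(tx_power_dbm, bandwidth_hz, noise_figure_db):
    thermal = -174 + 10 * math.log10(bandwidth_hz)   # kTB thermal noise, dBm
    noise_floor = thermal + noise_figure_db
    return tx_power_dbm - noise_floor

# e.g. a 23 dBm transmitter over 20 MHz with a 5 dB noise figure must
# cancel roughly 119 dB of self-interference.
print(f"{required_si_cancellation_db(23, 20e6, 5):.0f} dB")
```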
In this letter, we analyse the trade-off between collision probability and code ambiguity when devices transmit a sequence of preambles as a codeword, instead of a single preamble, to reduce the collision probability during random access to a mobile network. We point out that the network may not have sufficient resources to allocate to every possible codeword and that, even if it does, this results in low utilisation of the allocated uplink resources. We derive the optimal preamble set size that maximises the probability of success in a single attempt for a given number of devices and uplink resources.
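The trade-off can be reproduced with a small Monte-Carlo sketch; the granting model below (the network grants resources to every codeword consistent with the per-slot active preambles, capped at R grants) is our simplification of the letter's setup, and all parameter values are illustrative:

```python
import itertools, random

# Monte-Carlo sketch of the collision/ambiguity trade-off: N devices each
# send a codeword of L preambles drawn from a set of size n. The network
# only sees which preambles were active in each slot, so it must grant
# resources to every codeword in the Cartesian product of the active sets
# (including "phantom" codewords), capped at R uplink grants.
def success_prob(n, L, N, R, trials=20000):
    ok = 0
    for _ in range(trials):
        words = [tuple(random.randrange(n) for _ in range(L)) for _ in range(N)]
        active = [set(w[s] for w in words) for s in range(L)]  # per-slot preambles
        candidates = list(itertools.product(*active))          # codewords to grant
        if len(candidates) > R:
            candidates = random.sample(candidates, R)          # grants exhausted
        granted = set(candidates)
        target = words[0]
        unique = words.count(target) == 1                      # no collision
        ok += unique and target in granted
    return ok / trials

# Larger n lowers collisions but inflates the candidate set beyond R,
# so an intermediate preamble set size maximises single-attempt success.
for n in (2, 4, 8, 16):
    print(n, round(success_prob(n, L=2, N=10, R=30), 3))
```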
A clear understanding of how mixed-numerology signals are multiplexed and isolated in the physical layer is important for spectrum-efficient radio access network (RAN) slicing, where the available access resource is divided into slices to serve services/users with an optimal individual design. In this paper, a RAN slicing framework is proposed and systematically analyzed from a physical-layer perspective. According to the baseband and radio frequency (RF) configuration disparities among slices, we categorize four scenarios and elaborate on the numerology relationships of the slice configurations. Considering the most generic scenario, system models are established for both uplink and downlink transmissions. In addition, a low out-of-band emission (OoBE) waveform is implemented in the system for signal isolation and inter-service/slice-band interference (ISBI) mitigation. We propose two theorems, which generalize the circular convolution property of the discrete Fourier transform (DFT), as the basis of the algorithm design in the established system. Moreover, ISBI cancellation algorithms are proposed based on a collaborative detection scheme in which joint slice signal models are used. The framework proposed in this paper establishes a foundation for supporting the extremely diverse 5G use cases that are implemented on a common infrastructure.
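The classical property that the paper's theorems generalize can be checked numerically in a few lines; the sketch below is a standard textbook verification, not the generalized theorems themselves:

```python
import numpy as np

# Numerical check of the standard DFT circular-convolution property:
# the DFT of a circular convolution equals the pointwise product of
# the individual DFTs.
rng = np.random.default_rng(1)
N = 8
x, h = rng.standard_normal(N), rng.standard_normal(N)

# Direct circular convolution: circ[k] = sum_m x[m] * h[(k - m) mod N].
circ = np.array([np.sum(x * np.roll(h[::-1], k + 1)) for k in range(N)])

lhs = np.fft.fft(circ)
rhs = np.fft.fft(x) * np.fft.fft(h)
print(np.allclose(lhs, rhs))   # True
```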