Dr Chuan Foh
Academic and research departments
Institute for Communication Systems, School of Computer Science and Electronic Engineering.
About
Biography
Chuan Heng Foh received his MSc degree from Monash University, Australia, in 1999 and his PhD degree from the University of Melbourne, Australia, in 2002. After his PhD, he spent six months as a lecturer at Monash University in Australia. In December 2002, he joined Nanyang Technological University, Singapore, as an Assistant Professor, where he remained until 2012.
He is now a Senior Lecturer at the University of Surrey. His research interests include protocol design and performance analysis of various computer networks including wireless local area and mesh networks, mobile ad hoc and sensor networks, 5G networks, and data center networks.
He has authored or co-authored over 100 refereed papers in international journals and conferences. He actively participates in the organisation of IEEE conferences and workshops, including the International Workshop on Cloud Computing Systems, Networks, and Applications (CCSNA), where he is a steering member. He is an Associate Editor for IEEE Access, IEEE Wireless Communications and the International Journal of Communications Systems, and a Guest Editor for various international journals. Currently, he is the Vice-Chair (Europe/Africa) of the IEEE Technical Committee on Green Communications and Computing (TCGCC) and the Chair of the Special Interest Group on Green Data Center and Cloud Computing under TCGCC. He is a Senior Member of the IEEE.
Publications
In recent years, there has been a notable surge in Internet of Things (IoT) applications, and IoT devices are increasingly being attacked. Network intrusion detection is a tool for detecting malicious activities in a network, and machine learning (ML) techniques are increasingly used to classify network traffic. However, state-of-the-art studies have shown that training ML classifiers with imbalanced datasets affects their classification performance, causing network categories with fewer training instances to be misclassified. This study presents a stack ensemble ML classifier for network intrusion detection in an IoT network, using the Bot-IoT dataset for the classifier evaluation. Preliminary results showed lower metric scores for minority network categories, so we applied the Synthetic Minority Oversampling Technique (SMOTE) to address the class imbalance. Follow-up experiments showed that the resulting SMOTE-Stack classifier outperformed the plain Stack classifier and other state-of-the-art classifiers.
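The abstract above does not give implementation details, but a minimal sketch of the SMOTE-plus-stacking idea can be put together with scikit-learn and imbalanced-learn. The synthetic dataset, choice of base learners and all parameters below are illustrative assumptions, not the configuration used in the paper.

```python
# Hedged sketch: oversample minority classes with SMOTE, then train a stacking
# ensemble. Dataset, base learners and parameters are illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

# Imbalanced toy traffic dataset standing in for Bot-IoT features.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           n_classes=3, weights=[0.85, 0.10, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Rebalance only the training split so evaluation reflects the true distribution.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

stack = StackingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=8)),
                ("rf", RandomForestClassifier(n_estimators=100))],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_bal, y_bal)
print(classification_report(y_te, stack.predict(X_te)))
```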
Access control is one of the major security concerns for wireless sensor networks. However, applying conventional access control models that rely on a central Certificate Authority and sophisticated cryptographic algorithms to wireless sensor networks poses new challenges, as wireless sensor networks are highly distributed and resource-constrained. In this paper, a distributed and fine-grained access control model based on trust and centrality degree is proposed (TC-BAC). Our design uses the combination of trust and risk to grant access. To meet the security requirements of an access control system in the absence of a Certificate Authority, a distributed trust mechanism is developed to allow access by trusted nodes. Centrality degree is then used to assess the risk factor of a node and grant access accordingly, which can reduce the risk ratio of the access control scheme and provide a certain protection level. Finally, our design also takes multi-domain access control into account and addresses it by utilizing a mapping mechanism and group access policies. We show with simulation that TC-BAC can achieve both the intended level of security and the high efficiency suitable for wireless sensor networks.
Satellite communication systems are expected to play a vital role in realizing various remote Internet of Things (IoT) applications in the 6G vision. Due to the unique characteristics of the satellite environment, one of the main challenges in such systems is to accommodate massive random access (RA) requests from IoT devices while minimizing their energy consumption. In this paper, we focus on the reliable design and detection of the RA preamble to effectively enhance access efficiency in high-dynamic low-earth-orbit (LEO) scenarios. To avoid additional signaling overhead and detection processing, a long preamble sequence is constructed by concatenating the conjugated and circularly shifted replicas of a single root Zadoff-Chu (ZC) sequence in the RA procedure. Moreover, we propose a novel impulse-like timing metric based on length-alterable differential cross-correlation (LDCC) that is immune to carrier frequency offset (CFO) and capable of mitigating the impact of noise on timing estimation. Statistical analysis of the proposed metric reveals that increasing the correlation length clearly improves the output signal-to-noise power ratio, and that the first-path detection threshold is independent of the noise statistics. Simulation results in different LEO scenarios validate the robustness of the proposed method to severe channel distortion, and show that it achieves significant gains in timing estimation accuracy, success probability of first access, and mean normalized access energy compared with existing RA methods.
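As a rough illustration of the preamble construction described above, the snippet below generates a root Zadoff-Chu sequence, appends a conjugated and circularly shifted replica, and locates the preamble with a simple CFO-robust differential correlation. The sequence length, root index, shift and the correlation metric itself are illustrative assumptions and not the paper's exact LDCC design.

```python
# Hedged sketch: build a preamble from a root Zadoff-Chu (ZC) sequence and a
# conjugated, circularly shifted replica, then locate it with a simple
# differential cross-correlation. Parameters are illustrative only.
import numpy as np

N_ZC, ROOT, SHIFT = 139, 1, 17          # assumed preamble parameters

n = np.arange(N_ZC)
zc = np.exp(-1j * np.pi * ROOT * n * (n + 1) / N_ZC)    # root ZC sequence
replica = np.conj(np.roll(zc, SHIFT))                    # conjugated, shifted copy
preamble = np.concatenate([zc, replica])                 # long RA preamble

# Received signal: unknown delay, carrier frequency offset (CFO) and noise.
delay, cfo = 40, 0.002
rx = np.zeros(512, dtype=complex)
rx[delay:delay + len(preamble)] = preamble
rx *= np.exp(2j * np.pi * cfo * np.arange(len(rx)))
rx += 0.1 * (np.random.randn(len(rx)) + 1j * np.random.randn(len(rx)))

# Correlating the despread halves against each other cancels the constant
# CFO-induced phase ramp, producing an impulse-like timing metric.
L = N_ZC
metric = np.array([abs(np.vdot(rx[d:d + L] * np.conj(zc),
                               rx[d + L:d + 2 * L] * np.conj(replica)))
                   for d in range(len(rx) - 2 * L)])
print("estimated timing offset:", int(np.argmax(metric)), "(true:", delay, ")")
```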
This paper designs an efficient distributed intrusion detection system (DIDS) for Internet of Things (IoT) data traffic. The proposed DIDS is implemented at IoT network gateways and edge sites to detect and raise alarms on anomalous traffic. We implement different machine learning (ML) algorithms to classify the traffic as benign or malicious, and perform an in-depth parametric study of the models using multiple real-time IoT datasets so that model deployment can be made consistent with the demands of a specific IoT network. Specifically, we develop a decentralized method using federated learning (FL) for collecting data from IoT sensor nodes to address the data privacy issues associated with centralizing data at the gateway DIDS. We propose two poisoning attacks on the perception layer of these IoT networks that use generative adversarial networks (GAN) to show how threats arising from the unpredictable authenticity of IoT sensors can be triggered. To address such attacks, we design a defence algorithm, implemented at the gateways, that helps separate anomalous from benign data and preserves the system's robustness. The proposed defence algorithm successfully classifies anomalies with high accuracy, demonstrating the system's immunity against poisoning attacks. We confirm that the Random Forest classifier performs best across all ML key performance indicators (KPIs) and can be implemented at the edge to reduce false alarm rates.
In this paper, federated learning (FL) over wireless networks is investigated. In each communication round, a subset of devices is selected to participate in the aggregation with limited time and energy. In order to minimize the convergence time, global loss and latency are jointly considered in a Stackelberg game based framework. Specifically, age of information (AoI) based device selection is considered at leader-level as a global loss minimization problem, while sub-channel assignment, computational resource allocation, and power allocation are considered at follower-level as a latency minimization problem. By dividing the follower-level problem into two sub-problems, the best response of the follower is obtained by a monotonic optimization based resource allocation algorithm and a matching based sub-channel assignment algorithm. By deriving the upper bound of convergence rate, the leader-level problem is reformulated, and then a list based device selection algorithm is proposed to achieve Stackelberg equilibrium. Simulation results indicate that the proposed device selection scheme outperforms other schemes in terms of the global loss, and the developed algorithms can significantly decrease the time consumption of computation and communication.
Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN). Due to the massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve a certain location estimation accuracy with a small number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for cooperative WSN localization, and perform simulation experiments to illustrate its localization accuracy.
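The spring-relaxation idea mentioned above has a compact iterative form: each node is pulled along a virtual spring towards or away from each neighbour in proportion to the mismatch between the ranged distance and the currently estimated distance. The sketch below, with an assumed topology, noise level and step size, illustrates this update rule only and is not the paper's algorithm.

```python
# Hedged sketch of spring-relaxation localization: nodes iteratively move along
# "spring forces" proportional to (measured distance - estimated distance).
# Topology, noise level and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_pos = rng.uniform(0, 50, size=(20, 2))        # 20 nodes in a 50x50 m area
anchors = [0, 1, 2, 3]                             # nodes with known positions
dist = np.linalg.norm(true_pos[:, None] - true_pos[None, :], axis=2)
meas = dist + rng.normal(0, 0.5, dist.shape)       # noisy RSSI-derived ranges
neighbours = dist < 25                             # ranging only within 25 m

est = rng.uniform(0, 50, size=(20, 2))             # random initial guesses
est[anchors] = true_pos[anchors]

step = 0.1
for _ in range(300):
    for i in range(len(est)):
        if i in anchors:
            continue
        force = np.zeros(2)
        for j in range(len(est)):
            if i == j or not neighbours[i, j]:
                continue
            d_est = np.linalg.norm(est[i] - est[j]) + 1e-9
            # A stretched spring (measured > estimated) pushes node i away from j.
            force += (meas[i, j] - d_est) * (est[i] - est[j]) / d_est
        est[i] += step * force

print("mean error (m):", np.linalg.norm(est - true_pos, axis=1).mean())
```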
Wireless communication between sensors allows the formation of flexible sensor networks, which can be deployed rapidly over wide or inaccessible areas. However, the need to gather data from all sensors in the network imposes constraints on the distances between sensors. This survey describes the state of the art in techniques for determining the minimum density and optimal locations of relay nodes and ordinary sensors to ensure connectivity, subject to various degrees of uncertainty in the locations of the nodes.
In this paper, we propose a hybrid approach to the wireless sensor network (WSN) localization problem. The proposed approach harnesses the strengths of two techniques, RF mapping and cooperative ranging, to overcome the potential weaknesses of each. The idea is to first allow every node to obtain an initial estimate of its own position in a neighbor-independent way using a coarse-grained RF map acquired with minimal effort. Each node then iteratively refines its own position through distance ranging to each of its neighbors, regardless of their positions relative to itself. Through simulation performance experiments, we show the potential of this hybrid approach as a practical localization system for WSNs that can achieve reasonable localization accuracy without significant deployment effort.
The parameters of the Physical (PHY) layer radio frame for 5th Generation (5G) mobile cellular systems are expected to be flexibly configured to cope with the diverse requirements of different scenarios and services. This paper presents a frame structure and design specifically targeting Internet of Things (IoT) provision in 5G wireless communication systems. We design a suitable radio numerology to support the typical characteristics, namely massive connection density and small, bursty packet transmissions, under the constraint of low-cost and low-complexity operation of IoT devices. We also elaborate on the design of parameters for the Random Access Channel (RACH), enabling massive connection requests by IoT devices to support the required connection density. The proposed design is validated by link-level simulation results showing that the proposed numerology can cope with transceiver imperfections and channel impairments. Furthermore, results are presented to show the impact of different guard band values on system performance when different subcarrier spacing sizes are used for data and random access channels, demonstrating the effectiveness of the selected waveform and guard bandwidth. Finally, we present system-level simulation results that validate the proposed design under realistic cell deployments and inter-cell interference conditions.
Providing a certain quality of service (QoS) for multimedia transmissions over a noisy wireless channel has always been a challenge. The IEEE 802.11 standardization effort dedicates a working group, group e, to investigating and proposing a solution for enabling IEEE 802.11 networks to provide multimedia transmissions with certain QoS support. In its latest draft release, the IEEE 802.11e working group proposes a contention-based mechanism for the transmission of prioritized traffic, which in turn provides a framework to support multimedia transmissions over IEEE 802.11 networks. However, such a contention-based priority scheme does not deliver a strong QoS capability. In this paper, we first study the characteristics of the IEEE 802.11e network by investigating the capacity characteristics of all four defined priorities. We then design a resource allocation technique to better utilize the bandwidth and improve the performance of video transmissions. Our design uses a QoS mapping scheme based on the IEEE 802.11e protocol characteristics to deliver scalable video. In addition, we design an appropriate cross-layer video adaptation mechanism for the scalable video that, combined with our proposed resource allocation technique, further improves video quality. We evaluate our proposed technique via simulations (NS2), using PSNR as the video quality measure. Our results show improvements in video quality and resource usage when our proposed technique is applied.
Due to its support for low latency and high data rates, mmWave communication has become an important player in vehicular communication. However, it carries disadvantages such as shorter transmission distances and an inability to transmit through obstacles. This work presents a contextual multi-armed bandit based beam selection algorithm to improve connection stability in next-generation vehicular networks. Through machine learning (ML), the algorithm learns the mobility contexts of the vehicles (location and route) and helps the base station decide which of its beam sectors will provide a connection to a vehicle. In addition, the proposed algorithm smartly extends, via relay vehicles, beam coverage to outage vehicles that are either in NLOS conditions due to blockages or not served by any available beam. Through a set of experiments on a city map, the effectiveness of the algorithm is demonstrated and the best possible solution is presented.
On-demand routing protocols are an important category of current ad-hoc routing protocols, in which a route between a communicating node pair is discovered only on demand. However, due to the dynamic and mobile nature of the nodes, intermediate nodes in the route tend to lose connection with each other during the communication process. When this occurs, an end-to-end route discovery is typically performed to establish a new connection for the communication. Such a route repair mechanism causes high control overhead and long packet delay. In this paper, we propose a Proximity Approach To Connection Healing (PATCH) local recovery mechanism, which aims to reduce the control overhead and achieve fast recovery when route breakage happens. It is shown that PATCH is simple, robust and effective. We present simulation results to illustrate the performance benefits of using the PATCH mechanism.
LT codes provide an efficient way to transfer information over erasure channels. Past research has shown that LT codes perform well for a large number of input symbols; however, their performance is poor when the number of input symbols is small. We observe that this poor performance is due to the design of the LT decoding process. In this respect, we present a decoding algorithm called full rank decoding that extends the decodability of LT codes by using the Wiedemann algorithm. We provide a detailed mathematical analysis of the rank of the random coefficient matrix to evaluate the probability of successful decoding for our proposed algorithm. Our studies show that the proposed method significantly reduces the overhead for small numbers of input symbols while preserving the simplicity of the original LT decoding process.
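To make the full-rank idea concrete, the toy example below decodes LT-style coded symbols by Gaussian elimination over GF(2) whenever the random coefficient matrix has full rank. The paper uses the Wiedemann algorithm for this step, so plain elimination here is a simplifying assumption, as are the code parameters.

```python
# Hedged sketch: decode LT-style coded symbols by solving the GF(2) system
# G x = y when G has full rank. Plain Gaussian elimination stands in for the
# Wiedemann algorithm used in the paper; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
k = 8                                                # number of input symbols
x = rng.integers(0, 256, size=k, dtype=np.uint8)     # input symbols (bytes)

# Each coded symbol XORs a random subset of inputs (rows of G over GF(2)).
m = 12                                               # received coded symbols
G = rng.integers(0, 2, size=(m, k), dtype=np.uint8)
y = np.array([np.bitwise_xor.reduce(x[G[i] == 1], initial=0) for i in range(m)])

def gf2_solve(G, y, k):
    """Row-reduce [G | y] over GF(2); return the solution or None if rank < k."""
    A = np.column_stack([G.copy(), y.copy()]).astype(np.uint8)
    row = 0
    for col in range(k):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            continue
        A[[row, pivot]] = A[[pivot, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]                       # XOR row elimination
        row += 1
    if row < k:
        return None                                  # coefficient matrix rank-deficient
    return A[:k, k]

decoded = gf2_solve(G, y, k)
print("decoded OK:", decoded is not None and np.array_equal(decoded, x))
```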
Drone networks offer rapid network deployment to areas that are difficult to access. This paper investigates the deployment of multi-hop drone-based unmanned aerial vehicle networks with a focus on the self-organization aspect. When rescue drones carry out operations that may take them far from the gateway, relay drones are autonomously deployed to maintain connectivity. We first study the use of multiple dedicated connections, where each rescue drone is connected to the gateway via dedicated relay drones, and show that this approach lacks sharing of relay drones and thus requires more of them. We then propose a centralized greedy algorithm and a distributed solution to significantly reduce the number of relay drones. We show that while the distributed self-organized drones (DSOD) solution requires slightly more relay drones than the greedy algorithm, it eliminates the need for global message exchange, which makes it attractive for practical use.
It is envisaged that 5G can enable many vehicular use cases that require high capacity, ultra-low latency and high reliability. To support this, 5G proposes the use of dense small cell technology as well as highly directional mmWave systems, among many other advanced communication technologies, to boost network capacity, reduce latency and provide high reliability. In such systems, enabling vehicular communication, where the nodes are highly mobile, requires robust mobility management techniques to minimise signalling cost and interruptions during frequent handovers. This presents a major challenge that communication system engineers need to address to realise the promise of 5G systems for V2X and similar applications. In this paper, we provide an overview of recent progress in the development of handover and beam management techniques in 5G communication systems. We conduct a critical appraisal of current research on beam-level and cell-level mobility management in 5G mmWave networks, considering the ultra-reliable and low-latency communication requirements within the context of V2X applications. We also provide insight into the open challenges and emerging trends, as well as the possible evolution beyond the horizon of 5G.
Future communication networks promise to provide ubiquitous high-speed services to numerous users via densely deployed small cells. They should offer a good user experience to all users while incurring a low operational cost to operators. User scheduling is a well-known approach to deliver good user experience, and recent works further demonstrate that it is also beneficial for improving energy efficiency (EE). However, existing EE-based scheduling schemes tend to favor users with good channel conditions, which leads to unfair user experiences. In this paper, we introduce a new concept of resource allocation boundary whereby EE and user fairness can be addressed simultaneously. We derive the boundary that effectively partitions the users into different groups. By applying an appropriate scheduling strategy to each group of users, not only can users with poorer channel conditions be served fairly, but the EE of the system can also be further improved. We also provide a low-complexity energy-efficient power allocation algorithm designed to fully exploit the transmit power reduction capability of small cells. Simulation results show that our new scheduling scheme can improve the EE and user fairness by up to 63% and 56%, respectively, compared to existing approaches.
The concept of Ultra Dense Networks (UDNs) is often seen as a key enabler of the next generation of mobile networks. The massive number of BSs in UDNs represents a deployment challenge, and there is a need to understand the performance behaviour and benefit of a network when BS locations are carefully selected. This is of particular importance to network operators who deploy their networks in large indoor open spaces such as exhibition halls, airports or train stations, where the locations of BSs often follow a regular pattern. In this paper we study the downlink performance of UDNs for a regular network produced by careful BS site selection and compare it to an irregular network with random BS placement. We first develop an analytical model describing the performance of regular networks, which exhibits many performance behaviours similar to those of the irregular networks widely studied in the literature. We also show the potential performance gain resulting from proper site selection. Our analysis further reveals the interesting finding that even for over-densified regular networks, a non-negligible system performance can be achieved.
Energy consumption of sensor nodes is a key factor affecting the lifetime of wireless sensor networks (WSNs). Prolonging network lifetime requires not only energy-efficient operation but also even dissipation of energy among sensor nodes. On the other hand, spatial and temporal variations in sensor activities create energy imbalance across the network. Therefore, routing algorithms should make an appropriate trade-off between energy efficiency and energy consumption balancing to extend the network lifetime. In this paper, we propose a Distributed Energy-aware Fuzzy Logic based routing algorithm (DEFL) that simultaneously addresses energy efficiency and energy balancing. Our design captures network status through appropriate energy metrics and maps them into corresponding cost values for the shortest-path calculation. We adopt a fuzzy logic approach for this mapping to incorporate human reasoning. We compare the network lifetime performance of DEFL with other popular solutions, including MTE, MDR and FA. Simulation results demonstrate that the network lifetime achieved by DEFL exceeds the best of all tested solutions under various traffic load conditions. We further numerically compute the upper-bound performance and show that DEFL performs near the upper bound.
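As an illustration of how fuzzy logic can map energy metrics to link costs for shortest-path routing, the sketch below uses triangular membership functions and a tiny rule base. The specific metrics, membership functions and rules are assumptions made for illustration; they are not DEFL's actual rule base.

```python
# Hedged sketch: map residual energy and drain rate to a routing link cost with
# a tiny Mamdani-style fuzzy system. Membership functions and rules are
# illustrative assumptions, not the DEFL rule base.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(0.0, min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)))

def link_cost(residual_energy, drain_rate):
    """Both inputs are normalised to [0, 1]; returns a cost in [0, 1]."""
    # Fuzzify the two inputs with overlapping low/high sets.
    energy = {"low": tri(residual_energy, -1.0, 0.0, 1.0),
              "high": tri(residual_energy, 0.0, 1.0, 2.0)}
    drain = {"low": tri(drain_rate, -1.0, 0.0, 1.0),
             "high": tri(drain_rate, 0.0, 1.0, 2.0)}
    # Rule base: each rule fires with strength min(antecedents) and asserts a
    # crisp cost level (0 = cheap link, 1 = expensive link).
    rules = [(min(energy["high"], drain["low"]), 0.1),
             (min(energy["high"], drain["high"]), 0.5),
             (min(energy["low"], drain["low"]), 0.7),
             (min(energy["low"], drain["high"]), 1.0)]
    num = sum(w * c for w, c in rules)
    den = sum(w for w, c in rules)
    return num / den if den > 0 else 0.5     # weighted-average defuzzification

for e, d in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    print(f"energy={e:.1f} drain={d:.1f} -> cost={link_cost(e, d):.2f}")
```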
This paper makes use of a fluid-based approach to model the throughput of a TCP Veno flow over wired/wireless networks. A generalized formula is derived relating Veno's throughput to its window evolution parameters, packet loss rate and round-trip time. Simulation experiments and real network measurements are conducted to validate the accuracy of this model.
This paper surveys the literature relating to the application of machine learning to fault management in cellular networks from an operational perspective. We summarise the main issues as 5G networks evolve, and their implications for fault management. We describe the relevant machine learning techniques through to deep learning, and survey the progress which has been made in their application, based on the building blocks of a typical fault management system. We review recent work to develop the abilities of deep learning systems to explain and justify their recommendations to network operators. We discuss forthcoming changes in network architecture which are likely to impact fault management and offer a vision of how fault management systems can exploit deep learning in the future. We identify a series of research topics for further study in order to achieve this.
The introduction of physical-layer network coding gives rise to the concept of turning a collision of transmissions on a wireless channel into something useful. In physical-layer network coding, two synchronized simultaneous packet transmissions are carefully encoded such that the superimposed transmission can be decoded to produce a packet identical to the bitwise binary sum of the two transmitted packets. This paper explores the decoding of the superimposed transmission resulting from multiple synchronized simultaneous transmissions. We devise a coding scheme that achieves the identification of each individual transmission from the synchronized superimposed transmission, and give a mathematical proof for the existence of such a coding scheme.
Fountain-code based cloud storage systems provide reliable online storage by placing unlabeled content blocks onto multiple storage nodes. The Luby Transform (LT) code is one of the popular fountain codes for storage systems due to its efficient recovery. However, to ensure a high probability of successful decoding in fountain-code based storage, retrieval of additional fragments is required, and this requirement can introduce additional delay. In this paper, we show that multiple-stage retrieval of fragments is effective in reducing the file-retrieval delay. We first develop a delay model for various multiple-stage retrieval schemes applicable to our considered system. With the developed model, we study optimal retrieval schemes given requirements on successful decodability. Our numerical results suggest a fundamental trade-off between the file-retrieval delay and the target probability of successful file decoding, and show that the file-retrieval delay can be significantly reduced by optimally scheduling packet requests in a multi-stage fashion.
Intrusion detection systems (IDS) protect networks by continuously monitoring data flows and taking immediate action when anomalies are detected. However, due to redundancy and significant correlation in network data, classical IDS suffer from shortcomings such as poor detection rates and high computational complexity. This paper proposes a novel feature selection and extraction technique (FI-PCA) in which Feature Importance (FI) and Principal Component Analysis (PCA) are used to preprocess the network dataset: FI identifies the most important features in the data, while PCA reduces dimensionality and denoises the data. To detect anomalies, we employ three single classifiers: Decision Tree (DT), Naive Bayes and Logistic Regression. Preliminary results, however, show that these classifiers achieve only average classification metric scores. On this basis, we use the Stack Ensemble Learning Classifier (ELC) method of combining single classifiers to further improve performance. Experimental results on varied feature dimensions of an IoT (Bot-IoT) dataset indicate that our proposed technique combined with the Stack ELC can maintain the same level of classification performance with reduced dataset features. A comparison with state-of-the-art classifiers shows that our classifier is superior in terms of accuracy and detection rate, while a remarkable decrease is recorded in both training and test time.
Generative foundation AI models have recently shown great success in synthesizing natural signals with high perceptual quality using only textual prompts and conditioning signals to guide the generation process. This enables semantic communications at extremely low data rates in future wireless networks. In this paper, we develop a latency-aware semantic communications framework with pre-trained generative models. The transmitter performs multi-modal semantic decomposition on the input signal and transmits each semantic stream with the appropriate coding and communication scheme based on the intent. For the prompt, we adopt a retransmission-based scheme to ensure reliable transmission, and for the other semantic modalities we use an adaptive modulation/coding scheme to achieve robustness to the changing wireless channel. Furthermore, we design a semantic- and latency-aware scheme to allocate transmission power to different semantic modalities based on their importance, subject to semantic quality constraints. At the receiver, a pre-trained generative model synthesizes a high-fidelity signal using the received multi-stream semantics. Simulation results demonstrate ultra-low-rate, low-latency, and channel-adaptive semantic communications.
Open Radio Access Networks (O-RANs) have revolutionized the telecom ecosystem by bringing intelligence into the disaggregated RAN and implementing functionalities as Virtual Network Functions (VNFs) through open interfaces. However, dynamic traffic conditions in real-life O-RAN environments may require VNF reconfigurations at run-time, which introduce additional overhead costs and traffic instability. To address this challenge, we propose a multi-objective optimization problem that simultaneously minimizes VNF computational costs and the overhead of periodic reconfigurations. Our solution uses constrained combinatorial optimization with deep reinforcement learning, where an agent minimizes a penalized cost function calculated from the proposed optimization problem. The evaluation of our proposed solution demonstrates significant enhancements, achieving up to a 76% reduction in VNF reconfiguration overhead with only a slight increase of up to 23% in computational costs. In addition, when compared to the most robust O-RAN system that does not require VNF reconfigurations, namely the Centralized RAN (C-RAN), our solution offers up to 76% savings in bandwidth while showing up to 27% overprovisioning of CPU.
An accurate and low-cost hybrid solution to the problem of autonomous self-localization in wireless sensor networks (WSNs) is presented. The solution is designed to perform robustly under challenging radio propagation conditions while requiring low deployment effort and utilizing only low-cost hardware and lightweight distributed algorithms for location computation. Our solution harnesses the strengths of two approaches for environments with complex propagation characteristics: RF mapping, which provides an initial estimate of each sensor's position based on a coarse-grain RF map acquired with minimal effort, and a cooperative lightweight spring-relaxation technique by which each sensor refines its estimate using Kalman-filtered inter-node distance measurements. Using Kalman filtering to pre-process the noisy distance measurements inherent in complex propagation environments is found to have a significant positive impact on the subsequent accuracy and convergence of our spring-relaxation algorithm. Through extensive simulations using realistic settings and a real data set, we show that our approach is a practical localization solution that can achieve sub-meter accuracy and fast convergence under harsh propagation conditions, with no specialized hardware or significant deployment effort required.
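The Kalman pre-processing step mentioned above can be pictured with a scalar filter that tracks a slowly varying inter-node distance from noisy range measurements. The process and measurement noise values below are illustrative assumptions, and the constant-distance model is a simplification rather than the paper's exact formulation.

```python
# Hedged sketch: scalar Kalman filter smoothing noisy inter-node distance
# measurements before they are fed to spring relaxation. Noise variances and
# the constant-distance model are illustrative assumptions.
import numpy as np

def kalman_smooth(measurements, process_var=1e-3, meas_var=1.0):
    """Filter a sequence of noisy range measurements (constant-value model)."""
    est, est_var = measurements[0], 1.0          # initial state and uncertainty
    smoothed = []
    for z in measurements:
        # Predict: the true distance is assumed to drift only slightly.
        est_var += process_var
        # Update: blend prediction and new measurement by the Kalman gain.
        gain = est_var / (est_var + meas_var)
        est += gain * (z - est)
        est_var *= (1 - gain)
        smoothed.append(est)
    return np.array(smoothed)

rng = np.random.default_rng(2)
true_dist = 7.5                                   # metres between two sensors
raw = true_dist + rng.normal(0, 1.0, size=100)    # noisy RSSI-derived ranges
filtered = kalman_smooth(raw)
print("raw std: %.2f m, filtered error: %.2f m" %
      (raw.std(), abs(filtered[-1] - true_dist)))
```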
This paper introduces a new centralized hybrid scheme for high-speed wireless local area networks (WLANs), which combines Out-of-Band Signaling (OBS), the Enhanced Distributed Coordination Function (EDCF) in IEEE 802.11 and Deficit Round Robin (DRR) to achieve better system performance for high-speed WLANs. We show via simulation that the overall system performance is higher than that of the EDCF in IEEE 802.11, while a certain quality of service and fairness are achieved.
Energy efficiency (EE) is considered a key enabler for the next generation of communication systems. Equally, scheduling is an important aspect of efficient and reliable communication in multi-user systems. In this paper, we propose a low-complexity green scheduling algorithm for the downlink of orthogonal frequency division multiple access (OFDMA) cellular systems in which base stations (BSs) can coordinate their transmissions. More specifically, our aim is to design a practical, low-complexity and low-power-consumption solution based on a realistic EE scheduling criterion that takes into account the time dependence of the scheduling process. Numerical results indicate that our scheme reduces both the computational complexity (by a factor of at least 25) and the transmit power (by at least 30%) while achieving EE performance similar to that of existing schemes in a typical cellular environment. Moreover, they confirm the benefit of BS coordination for power and energy consumption reduction.
In this paper, a new approach is proposed and investigated that uses a variational auto-encoder (VAE) as a probabilistic model to reconstruct the transmitted symbol without sending the data bits out of the transmitter. The novelty of the proposed end-to-end (E2E) wireless system lies in representing the symbol as an image hot vector (IHV) that contains shape features such as spikes, a closed square frame, pixel index locations and pixel grey-scale colours. These features are inferred by latent random variables (LRVs), which are used for fronthaul and backhaul data representation. Only the LRV parameters are transmitted through the physical wireless channel, instead of the original bits as in classical modulation or the hot vectors in autoencoder (AE) E2E systems. The proposed VAE architecture achieves reconstruction of the symbol from the received LRVs. The results show that the VAE with a simple classifier can provide a better symbol error rate (SER) than both the AE baseline and a classical Hamming code with hard-decision decoding, especially at high Eb/N0.
This paper studies the joint optimization of convergence time in federated learning over wireless networks (FLOWN). We consider the criterion and protocol for the selection of participating devices in FLOWN under an energy constraint and derive its impact on device selection. To improve training efficiency, age-of-information (AoI) enables FLOWN to assess the freshness of gradient updates among participants. Aiming to speed up convergence, we jointly investigate global loss minimization and latency minimization in a Stackelberg game based framework. Specifically, we formulate global loss minimization as a leader-level problem to reduce the number of required rounds, and latency minimization as a follower-level problem to reduce the time consumption of each round. By decoupling the follower-level problem into two sub-problems, resource allocation and sub-channel assignment, we obtain an optimal strategy for the follower through monotonic optimization and matching theory. At the leader level, we derive an upper bound on the convergence rate, reformulate the global loss minimization problem accordingly, and propose a new age-of-update (AoU) based device selection algorithm. Simulation results indicate the superior performance of the proposed AoU based device selection scheme in terms of convergence rate, as well as efficient utilization of available sub-channels.
Retransmission based on packet acknowledgement (ACK/NAK) is a fundamental error control technique employed in IEEE 802.11-2007 unicast networks. However, the 802.11-2007 standard falls short of proposing a reliable MAC-level recovery protocol for multicast frames. In this paper, we propose a latency- and bandwidth-efficient coding algorithm based on the principles of network coding for retransmitting lost packets in a single-hop wireless multicast network, and demonstrate its effectiveness over previously proposed network coding based retransmission algorithms.
This paper investigates a joint intelligent transmissive surface (ITS) and intelligent reflecting surface (IRS) assisted cell-free network. Specifically, various ITS-assisted base stations (BSs), handled via a central processing unit (CPU), broadcast information signals to multiple IoT devices, carried by active transmit beamforming and transmissive reflecting phase shifts. Meanwhile, an IRS passively reflects signals from the ITS-assisted BSs to the IoT devices. To examine the network performance, the sum rate over all users is maximized by jointly optimizing the active beamforming and the ITS and IRS passive beampatterns. The coupling of these variables leads to the non-convexity of the formulated optimization problem, which cannot be solved directly. To deal with this issue, we begin by applying the Lagrange dual transformation (LDT) and quadratic transformation (QT) to recast the sum of multiple logarithmically fractional objectives into a subtractive and then quadratic form. Next, an alternating optimization (AO) algorithm is presented to separately optimize the active beamforming and the ITS and IRS passive beampatterns in an iterative fashion. Each sub-optimal solution can be derived iteratively in closed form by solving the quadratic objective function with a convex constraint or a unit-modulus constraint via the dual method with bisection search or the Alternating Direction Method of Multipliers (ADMM) algorithm. Finally, simulation results are provided to confirm the performance of the proposed algorithm compared to several benchmark schemes.
Localization of mobile phones is important to location-based mobile services, but achieving a good location estimate of a mobile phone is difficult, especially in environments whose path loss exponent is unknown. In this paper, we present a Wi-Fi localization solution specifically designed for dense WLANs with an unknown path loss exponent. In order to balance computational cost against localization accuracy, our solution establishes a neighbor selection scheme based on the Voronoi diagram to identify a subset of Access Points (APs) to participate in localization. It treats the identified subset of APs and the mobile phone to be located as a mass-spring system. Provided with the known coordinates of the APs, the solution estimates the path loss exponent of the physical environment, infers the distances between the APs and the mobile phone from the received Wi-Fi signals, and applies a spring relaxation algorithm to approximate the geographical location of the mobile phone, with this location estimate fed back to iteratively refine the estimated exponent. Extensive simulation results confirm that our solution provides location estimation with an attractive average accuracy of below 2 m in a typical Wi-Fi setup.
In this paper, we propose an optimal clipping/blanking nonlinearity technique for impulsive noise reduction in narrowband (9 kHz-490 kHz) PLC systems, in which the optimal thresholds are found by a minimum bit error rate (BER) search. For our simulation, we derive the transfer function of a typical low voltage (LV) PLC network using the common bottom-up approach and the scattering matrix method. Our simulation results, in terms of BER versus signal-to-noise ratio (SNR), show that the proposed technique improves the BER performance of the narrowband PLC system.
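To illustrate the clipping and blanking nonlinearities discussed above, the snippet below applies both to an oversampled BPSK waveform corrupted by impulsive noise and detected by integrate-and-dump. The Bernoulli-Gaussian impulse model, the waveform and the brute-force threshold sweep are illustrative assumptions that only gesture at the paper's minimum-BER search over a PLC channel.

```python
# Hedged sketch: clipping and blanking nonlinearities against impulsive noise.
# The oversampled BPSK waveform, impulse model and thresholds are illustrative.
import numpy as np

rng = np.random.default_rng(3)
N_BITS, L = 5000, 8                              # bits and samples per bit
bits = rng.integers(0, 2, N_BITS)
tx = np.repeat(2.0 * bits - 1.0, L)              # oversampled BPSK waveform

noise = 0.3 * rng.normal(size=tx.size)           # background noise
impulses = (rng.random(tx.size) < 0.02) * rng.normal(scale=10.0, size=tx.size)
rx = tx + noise + impulses

def clip(y, T):
    """Limit |y| to T, keeping the sign (clipping nonlinearity)."""
    return np.clip(y, -T, T)

def blank(y, T):
    """Zero samples whose magnitude exceeds T (blanking nonlinearity)."""
    return np.where(np.abs(y) > T, 0.0, y)

def ber(y):
    """Integrate-and-dump detection over the L samples of each bit."""
    decisions = y.reshape(-1, L).sum(axis=1) > 0
    return np.mean(decisions.astype(int) != bits)

# Crude stand-in for the minimum-BER threshold search described in the paper.
thresholds = np.linspace(0.5, 5.0, 10)
T_clip = min(thresholds, key=lambda T: ber(clip(rx, T)))
T_blank = min(thresholds, key=lambda T: ber(blank(rx, T)))
print("no processing BER: %.4f" % ber(rx))
print("clipping (T=%.1f) BER: %.4f" % (T_clip, ber(clip(rx, T_clip))))
print("blanking (T=%.1f) BER: %.4f" % (T_blank, ber(blank(rx, T_blank))))
```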
Cooperative communication techniques have previously been applied to the design of the IEEE 802.11 medium access control (MAC) and shown to improve performance: high-rate stations can help relay packets from low-rate stations, resulting in better throughput for the entire network. However, this also involves additional energy costs on the part of the relay, which can reduce the network lifetime. We propose a cooperative MAC protocol, NetCoop, with the objective of maximizing the network lifetime while achieving high throughput. Based on this design, we also propose a flexible strategy that allows cooperation through more than one relay. We show that this can achieve at least as good a throughput as single-relay cooperation while maintaining a high network lifetime.
Effective traffic management has always been one of the key considerations in datacenter design. It plays an even more important role today in the face of the increasingly widespread deployment of communication-intensive applications and cloud-based services, as well as the adoption of multipath datacenter topologies to cope with the enormous bandwidth requirements arising from those applications and services. Of central importance in traffic management for multipath datacenters is the timely detection of elephant flows, i.e., flows that carry huge amounts of data, so that the best paths can be selected for them; otherwise they might cause serious network congestion. In this paper, we propose FuzzyDetec, a novel control architecture for the adaptive detection of elephant flows in multipath datacenters based on fuzzy logic. We develop, perhaps for the first time, a closed-loop elephant flow detection framework with an automated fuzzy inference module that can continually compute an appropriate threshold for elephant flow detection based on current information fed back from the network. The novelty and practical significance of the idea lie in allowing multiple imprecise and possibly conflicting criteria to be incorporated into the elephant flow detection process through simple fuzzy rules emulating human expertise in elephant flow threshold classification. The proposed approach is simple, intuitive and easily extensible, providing a promising direction towards intelligent datacenter traffic management for autonomous high-performance datacenter networks. Simulation results show that, in comparison with an existing state-of-the-art elephant flow detection framework, our proposed approach can provide considerable throughput improvements in datacenter network routing.
Transmission range plays an important role in the deployment of a practical underwater acoustic sensor network (UWSN), where sensor nodes equipped with only basic functions are deployed at random locations with no particular geometrical arrangement. The selection of the transmission range directly influences the energy efficiency and the network connectivity of such a random network. In this paper, we use analytical modeling to investigate the trade-off between energy efficiency and network connectivity through the selection of the transmission range. Our formulation offers a design guideline for energy-efficient packet transmission given a certain network connectivity requirement.
Recently, we have witnessed remarkable development in high-speed railways around the world. To provide a robust and fast wireless network for onboard passengers, we earlier proposed smart collaborative networking for railways (SCN-R). In the realization of SCN-R, security is challenged by the potential exploitation of authentication vulnerabilities, since traditional authentication mechanisms are unsuitable for scenarios with fast-moving objects due to their complex and relatively time-consuming operations. In this paper, we address this issue by proposing a new efficient authentication mechanism based on a new design of chaotic random number generator (RNG). Compared with a recent proposal that relies on the precise boundaries of chaotic map state-spaces, our RNG uses two logistic maps to avoid the time-consuming boundary location process. The proposed authentication mechanism uses the RNG to generate and validate one-time passwords (OTPs). To support different authentication applications, OTPs of different lengths can be used to differentiate and identify the applications. We have implemented the proposed authentication mechanism under real-world conditions, with results showing its feasibility and effectiveness.
The IEEE 802.15.4 protocol is widely adopted as the MAC sub-layer standard for wireless sensor networks, with two available modes: beacon-enabled and non-beacon-enabled. The non-beacon-enabled mode is simpler and does not require time synchronisation; however, it lacks an explicit energy saving mechanism, which is crucial for its deployment on energy-constrained sensors. This paper proposes a distributed sleep mechanism for non-beacon-enabled IEEE 802.15.4 networks that provides energy savings to energy-limited nodes. The proposed mechanism introduces a sleep state that follows each successful packet transmission. Besides energy savings, the mechanism produces a traffic shaping effect that reduces the overall contention in the network, effectively improving the packet delivery ratio. Based on the traffic arrival rate and the level of network contention, a node can adjust its sleep period to achieve the highest packet delivery ratio. Performance results obtained by ns-3 simulations validate these improvements compared to the IEEE 802.15.4 standard.
In this paper, we study erasure coding for ultra-low-power wireless networks with power consumption on the order of milliwatts. We propose a sparse parallel concatenated coding (SPCC) scheme, in which we adopt a concatenated code over different field sizes so that the total energy cost of the network is minimized. We optimize the sparsity and the ratio of coded packets over GF(2) (i.e., the Galois field of size 2) and a larger field size such as GF(32) for different values of k. While high sparsity decreases the energy cost of encoding, it comes at the cost of high reception redundancy, which also results in a larger matrix that the receiver needs to invert for decoding. The use of GF(2) packets minimizes the computational cost of encoding and decoding, while the use of a small fraction of packets over GF(32) minimizes reception redundancy. Testbed implementation shows that the SPCC energy gain increases with increasing packet generation size k compared with the next best performing coding scheme. We show that for k ≤ 40, SPCC reduces energy cost by up to 100% compared with the next best performing coding scheme.
The grant-free non-orthogonal multiple access (GF-NOMA) technique is considered a promising solution to address the bottleneck of ubiquitous connectivity in massive machine-type communication (mMTC) scenarios. One of the challenging problems in uplink GF-NOMA systems is how to efficiently perform user activity detection and data detection. In this paper, a novel complexity-reduction weighted block coordinate descent (CR-WBCD) algorithm is proposed to address this problem. Specifically, we formulate the multi-user detection (MUD) problem in uplink GF-NOMA systems as a weighted l2-minimization problem. Based on the block coordinate descent (BCD) framework, a closed-form solution involving dynamic user-specific weights is derived to adaptively identify the active users with high accuracy. Furthermore, a complexity reduction mechanism is developed for substantial computational cost savings. Simulation results demonstrate that the proposed algorithm achieves bound-approaching detection performance with more than three orders of magnitude reduction in computational complexity.
The index coding problem is a fundamental transmission problem arising in a wide range of multicast networks. Network coding over a large finite field has been shown to be a theoretically efficient solution to the index coding problem. However, the high computational complexity of packet encoding and decoding over a large finite field, its subsequent penalty on encoding and decoding throughput, and its higher energy cost make it unsuitable for practical implementation in processor- and energy-constrained devices such as mobile phones and wireless sensors. While network coding over GF(2) can alleviate these concerns, it comes at the cost of degraded throughput performance. To address this trade-off, we propose a throughput-optimal triangular network coding scheme over GF(2). We show that such a coding scheme can supply an unlimited number of innovative packets and that decoding involves simple back substitution. This coding scheme provides an efficient solution to the index coding problem, and its lower computation and energy cost make it suitable for practical implementation on devices with limited processing and energy capacity.
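The back-substitution property referred to above can be illustrated with a toy GF(2) example in which the coefficient matrix of the coded packets is lower-triangular, so each original packet is recovered by XORing out the packets already decoded. The bit-padding construction that produces such a pattern in the actual triangular network coding scheme is not reproduced here, and the packet sizes and coefficients below are assumptions.

```python
# Hedged sketch: decoding GF(2) coded packets by back substitution when the
# coefficient matrix is lower-triangular, as in triangular network coding.
# The coefficient pattern and packet contents are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
k, plen = 4, 6
packets = rng.integers(0, 2, size=(k, plen), dtype=np.uint8)   # original packets

# Lower-triangular GF(2) coefficient matrix with unit diagonal: coded packet i
# combines original packets 0..i, so packet 0 effectively arrives uncoded.
C = np.tril(rng.integers(0, 2, size=(k, k), dtype=np.uint8), k=-1)
np.fill_diagonal(C, 1)
coded = (C @ packets) % 2                                        # XOR combinations

# Back substitution: peel already-decoded packets off each coded packet.
decoded = np.zeros_like(packets)
for i in range(k):
    acc = coded[i].copy()
    for j in range(i):
        if C[i, j]:
            acc ^= decoded[j]
    decoded[i] = acc

print("decoded correctly:", np.array_equal(decoded, packets))
```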
The evolution of network technologies has witnessed a paradigm shift toward open and intelligent networks, with the Open Radio Access Network (O-RAN) architecture emerging as a promising solution. O-RAN introduces disaggregation and virtualization, enabling network operators to deploy multi-vendor and interoperable solutions. However, managing and automating the complex O-RAN ecosystem presents numerous challenges. To address this, machine learning (ML) techniques have gained considerable attention in recent years, offering promising avenues for network automation in O-RAN. This paper presents a comprehensive survey of current research efforts on network automation using ML in O-RAN. We begin by providing an overview of the O-RAN architecture and its key components, highlighting the need for automation. Subsequently, we delve into O-RAN support for ML techniques. The survey then explores challenges in network automation using ML within the O-RAN environment, followed by existing research studies discussing the application of ML algorithms and frameworks for network automation in O-RAN. The survey further discusses research opportunities by identifying important aspects where ML techniques can be of benefit.
In this paper, we present a novel random access method for future mobile cellular networks that support machine-type communications. Traditionally, such networks establish connections with devices using a random access procedure; however, massive machine-type communication poses several challenges to the design of random access for current systems. State-of-the-art random access techniques rely on predicting the traffic load to adjust the number of users allowed to attempt the random access preamble phase; however, this delays network access and is highly dependent on the accuracy of traffic prediction and fast signalling. We change this paradigm by using the preamble phase to estimate the traffic and then adapting the network resources to the estimated load. We introduce Preamble Barring, which uses a probabilistic resource separation to allow load estimation in a wide range of load conditions, and combine it with multiple random access responses. This results in a load-adaptive method that can deliver near-optimal performance under any load condition without the need for traffic prediction or signalling, making it a promising solution to avoid network congestion and achieve fast uplink access for massive MTC.
Although Power Line Communication (PLC) is not a new technology, its use to support low-rate communication on low voltage (LV) distribution networks is still the focus of ongoing research. In this paper, we propose a PLC channel modeling method based on the bottom-up approach for LV PLC in a narrow, low-frequency band between 9 kHz and 490 kHz. We employ the model to derive the transfer function of a typical LV PLC network comprising two common cable types (copper cables and aluminum conductor steel reinforced). We then investigate the multipath effect of LV PLC in the studied low-frequency bandwidth using numerical computations. Our simulation results based on the proposed channel model show acceptable performance between neighboring nodes, in terms of bit error rate versus signal-to-noise ratio, which enables the communication required for smart grid applications. Furthermore, we show that data transmission beyond one-hop communication in LV PLC networks will have to rely on upper-layer protocols.
One of the cutting-edge requirements envisioned for next-generation mobile networks is to support ultra-reliable and low latency communication (URLLC), as well as to meet massive traffic demand in the coming years. Although network densification has been considered one of the promising solutions to boost capacity and throughput, the impact of mobility on latency and reliability in dense networks has not been well investigated. Moreover, handovers, especially in dense networks, can add extra delay to the communication and degrade reliability. In this paper, we analyse the impact of different handover hysteresis parameters on performance metrics such as end-to-end delay and packet loss ratio (PLR). In this regard, we compare latency and PLR performance around cell borders, including the handover process, with that over the overall simulation period. Simulation results show that the impact of mobility becomes more significant in dense networks due to frequent exposure to cell borders and handovers.
We propose a dynamic cache control mechanism, HCache, for a hybrid storage device consisting of next-generation non-volatile memory (NVM) such as STT-MRAM/PCRAM and conventional Flash. HCache works by distributing the scarce NVM capacity among multiple applications to meet their QoS requirements. The dynamic adaptation of the cache size is based on the access pattern and cache demands of each application, tracked through a hit-rate histogram over a simple chain of virtual LRU queues. We show that our method can achieve a 14%-46% improvement in latency compared to popular control mechanisms in the literature. It can also reduce the number of QoS violations, particularly compared to the PID control mechanism commonly used in many recent and some earlier works.
In this paper we present a cross-layer solution to the problem of unreliability in IEEE 802.11 wireless multicast networks, where an Access Point (AP) multicasts a data file to a group of receivers over independent wireless erasure channels. We first present a practical scheme for collecting feedback frames from the receivers by means of simultaneous acknowledgment (ACK) frame collision. Based on this feedback, we design an online linear XOR coding algorithm to retransmit the lost packets. Simulation results show that our proposed coding algorithm outperforms existing XOR coding algorithms in terms of retransmission rate, and that it has the lowest average decoding delay of all the known network coding schemes. XOR coding and decoding require only addition over GF(2), and hence enjoy low encoding and decoding computational complexity. Because of these features, such an online XOR coding algorithm is also of interest for delay-sensitive applications such as multicast audio-video (AV) streaming and for battery-constrained devices such as smartphones.
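A simple way to picture XOR-coded retransmission is the greedy combination below: given a reception matrix reporting which receivers lost which packets, each retransmission XORs together packets such that no targeted receiver is missing more than one of them, so every receiver can recover its lost packet by XORing with packets it already holds. This is a generic textbook heuristic under an assumed loss pattern, not the online algorithm proposed in the paper.

```python
# Hedged sketch: greedy XOR-coded retransmission for single-hop multicast.
# lost[r][p] is True when receiver r lost packet p. Each coded retransmission
# XORs packets such that every receiver misses at most one of them.
# The loss pattern is an assumed example; this is not the paper's algorithm.

def greedy_xor_rounds(lost):
    n_rx, n_pkt = len(lost), len(lost[0])
    pending = {(r, p) for r in range(n_rx) for p in range(n_pkt) if lost[r][p]}
    rounds = []
    while pending:
        combo = []                      # packets XOR-ed in this retransmission
        covered = set()                 # receivers already targeted this round
        for r, p in sorted(pending):
            # Add packet p only if none of its missing receivers is already
            # targeted, so each targeted receiver misses exactly one packet.
            rx_missing_p = {r2 for r2 in range(n_rx) if lost[r2][p]}
            if p not in combo and not (rx_missing_p & covered):
                combo.append(p)
                covered |= rx_missing_p
        rounds.append(combo)
        pending -= {(r, p) for r in covered for p in combo if lost[r][p]}
    return rounds

# Example: 3 receivers, 4 packets; True marks a loss.
lost = [[True,  False, False, False],   # receiver 0 lost packet 0
        [False, True,  False, False],   # receiver 1 lost packet 1
        [False, False, True,  True ]]   # receiver 2 lost packets 2 and 3
for i, combo in enumerate(greedy_xor_rounds(lost)):
    print(f"retransmission {i}: XOR of packets {combo}")
```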
Current storage systems face a performance bottleneck due to the gap between fast CPU computing speed and the slow response time of hard disks. Recently, a multi-tier hybrid storage system (MTHS), which uses fast flash devices such as solid-state drives (SSDs) as one of the high-performance storage tiers, has been proposed to boost storage system performance. In order to maintain the overall performance of the MTHS, optimal disk storage assignment has to be designed so that the data migrated to the high-performance tier, such as the SSD, is the optimal set of data. In this paper we propose an optimal data allocation algorithm for disk storage in the MTHS. The data allocation problem (DAP) is to find the optimal lists of data files for each storage tier in the MTHS that achieve maximal benefit values without exceeding the available size of each tier. We formulate the DAP as a special multiple-choice knapsack problem (MCKP) and propose a multiple-stage dynamic programming (MDP) approach to find the optimal solutions. The results show that the MDP can achieve improvements of up to 6 times compared with existing greedy algorithms.
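As background for the formulation above, the snippet below solves a small multiple-choice knapsack instance with standard dynamic programming: each file forms a class whose items are its possible placements, exactly one item per class is chosen, and the capacity of a single fast tier is respected. The instance data are made up, and this textbook DP stands in for, rather than reproduces, the paper's multiple-stage dynamic programming.

```python
# Hedged sketch: a textbook multiple-choice knapsack (MCKP) DP for the data
# allocation problem. Each class is one data file; its options are candidate
# placements (fast-tier space used, benefit). Instance data are assumptions.

def mckp(classes, capacity):
    """classes: list of option lists [(size, benefit), ...], one option chosen
    per class; returns (max total benefit, chosen option index per class)."""
    NEG = float("-inf")
    dp = [0] * (capacity + 1)                  # best benefit before any class
    picks = []                                 # option chosen per class and budget
    for options in classes:
        new_dp = [NEG] * (capacity + 1)
        pick = [None] * (capacity + 1)
        for c in range(capacity + 1):
            for idx, (size, benefit) in enumerate(options):
                if size <= c and dp[c - size] != NEG:
                    cand = dp[c - size] + benefit
                    if cand > new_dp[c]:
                        new_dp[c], pick[c] = cand, idx
        dp = new_dp
        picks.append(pick)
    # Backtrack the chosen placement of each file from the full budget.
    chosen, c = [], capacity
    for options, pick in zip(reversed(classes), reversed(picks)):
        idx = pick[c]
        chosen.append(idx)
        c -= options[idx][0]
    return dp[capacity], list(reversed(chosen))

# Three files; option 0 keeps the file on the slow tier (no space, no benefit),
# option 1 migrates it to the fast tier (uses space, yields benefit).
files = [[(0, 0), (4, 9)],
         [(0, 0), (3, 7)],
         [(0, 0), (5, 8)]]
benefit, placement = mckp(files, capacity=8)
print("total benefit:", benefit, "placements:", placement)   # 16, [1, 1, 0]
```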
The grant-free non-orthogonal multiple access (NOMA) scheme is a promising candidate to accommodate massive connectivity with reduced signalling overhead for Internet of Things (IoT) services in massive machine-type communication (mMTC) networks. In this paper, we propose a low-complexity compressed sensing (CS) based sparsity adaptive block gradient pursuit (SA-BGP) algorithm for uplink grant-free NOMA systems. Our proposed SA-BGP algorithm is capable of jointly carrying out channel estimation (CE), user activity detection (UAD) and data detection (DD) without knowing the user sparsity level. By exploiting the inherent sparsity of the transmitted signal and gradient descent, the proposed method enjoys decent detection performance with a substantial reduction in computational complexity. Simulation results demonstrate that the proposed method achieves a balanced trade-off between computational complexity and detection performance, rendering it a viable solution for future IoT applications.
Security and privacy in vehicular networks is a vital issue that attracts increasing attention to addressing the security vulnerabilities of vehicular networks. Authentication solutions are introduced for vehicular networks to ensure that network access is only given to authorized users. Among these solutions, group signature not only offers authentication services but also provides conditional privacy preservation. However, the current group signature approach for authentication in vehicular networks exhibits time-consuming signature verification and poor scalability. To overcome these shortcomings, we propose a flexible and efficient delay-aware authentication scheme (FEDAS) that utilizes the edge computing paradigm. In the proposed architecture, we design an authentication group maintenance mechanism and develop a collaborative CRL management method. Moreover, we introduce a transition zone to solve the reliable authentication problem in the border area of a group. To implement the proposed architecture, we propose a model for calculating the length of the local CRL, which establishes the relationship between the size of a sub-area and the length of the local CRL. We also design a method for area division based on the length of the local CRL, which provides the division principle for our authentication scheme. We conduct extensive simulations to verify the effectiveness of our proposed scheme.
The 5G technology has tapped into the millimeter wave (mmWave) spectrum to create additional bandwidth for improved network capacity. The use of mmWave for specific applications, including vehicular networks, has been widely discussed. However, applying mmWave to vehicular networks faces the challenges of high-mobility nodes and narrow coverage along the mmWave beams. In this paper, we focus on a mmWave small cell base station deployed in a city area to support vehicular network applications. We propose profiling vehicle mobility so that a machine learning agent can learn the performance of serving vehicles with different mobility profiles and utilize past experiences to select an appropriate mmWave beam to serve a vehicle. Our machine learning agent is based on the multi-armed bandit learning model, where both the classical multi-armed bandit and the contextual multi-armed bandit are used. For the contextual multi-armed bandit in particular, the contexts are vehicle mobility information. We show that the local street layout naturally constrains vehicle movement, creating distinct mobility information for vehicles, and that this mobility information is highly related to communication performance. By using vehicle mobility information, the machine learning agent is able to identify vehicles that can remain within a beam for a longer period of time, thus avoiding frequent handovers.
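As a rough illustration of the learning agent described above, the snippet below keeps an independent value estimate per (mobility profile, beam) pair and selects beams epsilon-greedily; the profile keys, beam identifiers and reward definition are assumptions made only for this sketch, not the paper's exact design.

```python
# Hedged sketch: per-context epsilon-greedy bandit that picks an mmWave beam
# for a vehicle given a coarse mobility profile.
import random
from collections import defaultdict

class ContextualBandit:
    def __init__(self, beams, epsilon=0.1):
        self.beams = beams
        self.epsilon = epsilon
        self.value = defaultdict(float)   # (context, beam) -> running mean reward
        self.count = defaultdict(int)

    def select(self, context):
        if random.random() < self.epsilon:
            return random.choice(self.beams)                              # explore
        return max(self.beams, key=lambda b: self.value[(context, b)])    # exploit

    def update(self, context, beam, reward):
        key = (context, beam)
        self.count[key] += 1
        self.value[key] += (reward - self.value[key]) / self.count[key]

# Usage: the context is a coarse mobility profile (entry street, heading, speed bin).
agent = ContextualBandit(beams=[0, 1, 2, 3])
ctx = ("street_A", "northbound", "30-40kmh")
beam = agent.select(ctx)
agent.update(ctx, beam, reward=1.0)   # e.g. 1 if the vehicle stayed in-beam long enough
```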
The ability to manage the distributed functionality of large multi-vendor networks will be an important step towards ultra-dense 5G networks. Managing distributed scheduling functionality is particularly important, due to its influence over inter-cell interference and the lack of standardization for schedulers. In this paper, we formulate a method of managing distributed scheduling across a small cluster of cells by dynamically selecting the scheduler to be implemented at each cell. We use deep reinforcement learning methods to identify suitable joint scheduling policies, based on the current state of the network observed from data already available in the RAN. Additionally, we explore three methods of training the deep reinforcement learning based dynamic scheduler selection system. We compare the performance of these training methods in a simulated environment against each other, as well as against homogeneous scheduler deployment scenarios, where each cell in the network uses the same type of scheduler. We show that, by using deep reinforcement learning, the dynamic scheduler selection system is able to identify scheduler distributions that increase the number of users achieving their quality of service requirements in up to 77% of the simulated scenarios, when compared to homogeneous scheduler deployments.
Being able to accommodate multiple simultaneous transmissions on a single channel, non-orthogonal multiple access (NOMA) appears as an attractive solution to support massive machine type communication (mMTC) that faces a massive number of devices competing to access the limited number of shared radio resources. In this paper, we first analytically study the throughput performance of NOMA-based random access (RA), namely NOMA-RA. We show that while increasing the number of power levels in NOMA-RA leads to a further gain in maximum throughput, the growth of throughput gain is slower than linear. This is due to the higher-power dominance characteristic in power-domain NOMA known in the literature. We explicitly quantify the throughput gain for the very first time in this paper. With our analytical model, we verify the performance advantage of NOMA-RA scheme by comparing with the baseline multi-channel slotted ALOHA (MS-ALOHA), with and without capture effect. Despite the higher-power dominance effect, the maximum throughput of NOMA-RA with four power levels achieves over three times that of the MS-ALOHA. However, our analytical results also reveal the sensitivity of load on the throughput of NOMA-RA. To cope with the potential bursty traffic in mMTC scenarios, we propose adaptive load regulation through a practical user barring algorithm. By estimating the current load based on the observable channel feedback, the algorithm adaptively controls user access to maintain the optimal loading of channels to achieve maximum throughput. When the proposed user barring algorithm is applied, simulations demonstrate that the instantaneous throughput of NOMA-RA always remains close to the maximum throughput confirming the effectiveness of our load regulation.
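A toy version of the load-regulation idea is sketched below: the offered load is estimated from the fraction of idle channel-slots (for slotted ALOHA, P(idle) = e^(-G)) and an access probability is broadcast to steer the admitted load towards a target; the estimator and target value are illustrative assumptions, not the paper's exact user barring algorithm.

```python
# Illustrative access-barring controller driven by observable channel feedback.
import math

def update_barring(idle, success, collision, g_opt=1.0, prev_factor=1.0):
    """Return a new access probability in (0, 1] from per-channel-slot feedback."""
    slots = idle + success + collision
    if slots == 0:
        return prev_factor
    p_idle = max(idle / slots, 1e-6)
    g_est = -math.log(p_idle)                  # admitted load per channel (P(idle) = e^-G)
    offered = g_est / max(prev_factor, 1e-6)   # back out the pre-barring offered load
    return min(1.0, g_opt / max(offered, 1e-6))

# Devices transmit in the next frame with the broadcast probability:
factor = update_barring(idle=10, success=25, collision=15)
print(round(factor, 2))
```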
In this work, we present a new generic polymorphic routing protocol tailored for vehicular ad hoc networks (VANETs). Similar to the case of mobile ad hoc networks, the routing task in VANETs comes under various constraints that can be environmental, operational, or performance based. The proposed Polymorphic Unicast Routing Protocol (PURP) uses the concept of polymorphic routing as a means to describe dynamic, multi-behavioral, multi-stimuli, adaptive, and hybrid routing, that is applicable in various contexts, which empowers the protocol with great flexibility in coping with the timely requirements of the routing tasks. Polymorphic routing protocols, in general, are equipped with multi-operational modes (e.g., grades of proactive, reactive, and semi-proactive), and they are expected to tune in to the right mode of operation depending on the current conditions (e.g., battery residue, vicinity density, traffic intensity, mobility level of the mobile node, and other user-defined conditions). The objective is commonly maximizing and/or improving certain metrics such as maximizing battery life, reducing communication delays, improving deliverability, and so on. We give a detailed description and analysis of the PURP protocol. Through comparative simulations, we show its superiority in performance to its peers and demonstrate its suitability for routing in VANETs. © 2011 John Wiley & Sons, Ltd.
Rateless codes such as LT codes have become increasingly popular due to their ability to handle varying channel conditions without much feedback. However, rateless codes have the drawback of being unable to provide intermediate outputs when delivering layer-coded media content. Although some methods have been proposed to produce intermediate outputs by adjusting the degree distribution of LT codes, they are typically content-dependent and unable to guarantee that a more important layer can always be decoded before a less important layer. In this paper, we propose a simple joint unequal loss protection (ULP) and LT coding (ULP-LT) scheme for layered media delivery, where different amounts of FEC are allocated to different layers to guarantee their priority and LT codes are used to deal with varying channel conditions. Simulation results show that with a small amount of overhead allocated to ULP, the ULP-LT scheme can produce good intermediate performance while still enjoying the nice features provided by LT codes. ©2010 IEEE.
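For readers unfamiliar with LT coding, the sketch below forms one encoded symbol by XOR-ing a randomly chosen set of source blocks, with the set size drawn from the ideal soliton degree distribution; it illustrates only the rateless-coding part, not the ULP allocation proposed in the paper.

```python
# Illustrative LT encoder (ideal soliton distribution); data values are made up.
import random

def ideal_soliton(k):
    # P(degree=1) = 1/k, P(degree=d) = 1/(d(d-1)) for d = 2..k
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode_symbol(blocks):
    k = len(blocks)
    d = random.choices(range(1, k + 1), weights=ideal_soliton(k))[0]
    chosen = random.sample(range(k), d)            # neighbours of this symbol
    sym = bytes(blocks[chosen[0]])
    for i in chosen[1:]:
        sym = bytes(a ^ b for a, b in zip(sym, blocks[i]))   # XOR the blocks together
    return chosen, sym                             # neighbour indices + encoded payload

source = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
print(lt_encode_symbol(source))
```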
The currently implemented Spanning Tree Protocol (STP) cannot meet the requirements of a data center due to its poor bandwidth utilization and lack of multipathing capability. In this paper, we propose a layer-2 multipathing solution, namely dynamic load balancing multipathing (DLBMP), for data center Ethernets. With DLBMP, traffic between two communicating nodes can be spread among multiple paths. The traffic load of all paths is continuously monitored so that the traffic split to each path can be dynamically adjusted. In addition, per-flow forwarding is preserved to guarantee in-order frame delivery. Computer simulations show that DLBMP gives much better performance than STP due to its multipathing and dynamic load balancing capability. © 2010 IEEE.
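A much-simplified sketch of the DLBMP idea follows: new flows are pinned to the currently least-loaded path (preserving per-flow forwarding and hence in-order delivery), while the monitored path loads decay periodically so the split adapts over time; the class and method names here are placeholders.

```python
# Simplified multipath load balancer with per-flow forwarding.
class MultipathBalancer:
    def __init__(self, paths):
        self.load = {p: 0.0 for p in paths}   # monitored utilisation per path
        self.flow_table = {}                  # flow id -> path (per-flow forwarding)

    def route(self, flow_id, frame_bytes):
        path = self.flow_table.get(flow_id)
        if path is None:                      # new flow: pick the least-loaded path
            path = min(self.load, key=self.load.get)
            self.flow_table[flow_id] = path
        self.load[path] += frame_bytes        # counted traffic, decayed by the monitor
        return path

    def decay(self, alpha=0.5):
        for p in self.load:                   # periodic load-monitor update
            self.load[p] *= alpha

lb = MultipathBalancer(paths=["p1", "p2", "p3"])
print(lb.route(("srcA", "dstB", 80), 1500))
```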
Measurements show that 85% of TCP flows in the Internet are short-lived flows that spend most of their operation in the TCP startup phase. However, many previous studies indicate that the traditional TCP Slow Start algorithm does not perform well, especially in long fat networks. Two obvious problems are known to impact Slow Start performance: the blind initial setting of the Slow Start threshold and the aggressive increase of the probing rate during the startup phase regardless of the buffer sizes along the path. Current efforts focusing on tuning the Slow Start threshold and/or the probing rate during the startup phase have not been considered very effective, which has prompted an investigation with a different approach. In this paper, we present a novel TCP startup method, called threshold-less slow start or SSthreshless Start, which does not need the Slow Start threshold to operate. Instead, SSthreshless Start uses the backlog status at the bottleneck buffer to adaptively adjust the probing rate, which allows it to better seize the available bandwidth. Compared to the traditional and other major modified startup methods, our simulation results show that SSthreshless Start achieves significant performance improvement during the startup phase. Moreover, SSthreshless Start scales well over a wide range of buffer sizes, propagation delays and network bandwidths. Besides, it shows excellent friendliness when operating simultaneously with the currently popular TCP NewReno connections.
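The backlog-driven probing idea can be illustrated with a Vegas-style estimate of how many packets are queued at the bottleneck, as in the sketch below; this is a simplified stand-in under assumed parameters, not the SSthreshless Start algorithm itself.

```python
# Hedged sketch: grow the window aggressively only while the estimated bottleneck
# backlog stays below a small target; otherwise back off to drain the queue.

def next_cwnd(cwnd, rtt_min, rtt_sample, mss=1, target_backlog=4, growth=2.0):
    """Return the congestion window (in segments) to use in the next RTT."""
    expected = cwnd / rtt_min                 # rate if nothing is queued
    actual = cwnd / rtt_sample                # observed delivery rate
    backlog = (expected - actual) * rtt_min   # segments sitting in the bottleneck buffer
    if backlog < target_backlog:
        return cwnd * growth                  # keep probing for bandwidth
    return max(cwnd - backlog, 2 * mss)       # back off to drain the queue

cwnd = 10.0
for rtt in (0.05, 0.05, 0.06, 0.09):          # RTT samples in seconds
    cwnd = next_cwnd(cwnd, rtt_min=0.05, rtt_sample=rtt)
print(round(cwnd, 1))
```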
In 5G networks, dense deployment and millimetre wave (mmWave) are among the key approaches to boost network capacity. Dense deployment of mmWave small cells using narrow directional beams will escalate cell- and beam-related handovers for high-mobility vehicles, which may in turn limit the performance gain promised by 5G. One of the research issues in mmWave handover is to minimise handover needs by identifying long-lasting connections. In this paper, we first develop an analytical model to derive the vehicle sojourn time within a beam coverage. When multiple connections offered by nearby mmWave small cells are available upon a handover event, we further derive the longest sojourn time among all potential connections, which represents the theoretical upper bound of the sojourn time performance. We then design a Fuzzy Logic (FL) based distributed beam-centric handover decision algorithm to maximise vehicle sojourn time. Simulation experiments are conducted to validate our analytical model and show the performance advantage of our proposed FL-based solution when compared with the commonly used approach of connecting to the strongest connection.
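As a toy counterpart to the analytical model, the sketch below computes a vehicle's sojourn time when the beam footprint is approximated by a disc on the road plane and the vehicle drives a straight line with a lateral offset from the footprint centre; the paper's model of beam coverage is more detailed than this.

```python
# Toy sojourn-time calculation under a simplified circular-footprint geometry.
import math

def sojourn_time(radius_m, offset_m, speed_mps):
    if offset_m >= radius_m:
        return 0.0                                          # path misses the footprint
    chord = 2.0 * math.sqrt(radius_m**2 - offset_m**2)      # distance travelled inside the beam
    return chord / speed_mps

print(round(sojourn_time(radius_m=50.0, offset_m=10.0, speed_mps=13.9), 1))  # ~7 s at 50 km/h
```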
Infrastructure-as-a-service (IaaS) cloud technology has attracted much attention from users who have demands on large amounts of computing resources. Current IaaS clouds provision resources in terms of virtual machines (VMs) with homogeneous resource configurations, where different types of resources in VMs have similar shares of the capacity in a physical machine (PM). However, most user jobs demand different amounts of different resources. For instance, high-performance-computing jobs require more CPU cores while big data processing applications require more memory. The existing homogeneous resource allocation mechanisms cause resource starvation, where dominant resources are starved while non-dominant resources are wasted. To overcome this issue, we propose a heterogeneous resource allocation approach, called skewness-avoidance multi-resource allocation (SAMR), to allocate resources according to diversified requirements on different types of resources. Our solution includes a VM allocation algorithm to ensure heterogeneous workloads are allocated appropriately to avoid skewed resource utilization in PMs, and a model-based approach to estimate the appropriate number of active PMs to operate SAMR. We show that our model-based approach has relatively low complexity for practical operation and provides accurate estimation. Extensive simulation results show the effectiveness of SAMR and its performance advantages over its counterparts.
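A minimal sketch of skewness-avoiding placement is given below, assuming skewness is measured as the dispersion of per-resource utilisation on a physical machine (the exact metric and allocation algorithm in SAMR may differ): each VM goes to the feasible PM whose post-placement skewness is smallest.

```python
# Hedged sketch: skewness-aware VM placement across physical machines (PMs).
import math

def skewness(util):                     # util: per-resource utilisation values in [0, 1]
    mean = sum(util) / len(util)
    if mean == 0:
        return 0.0
    return math.sqrt(sum((u / mean - 1.0) ** 2 for u in util))

def place(vm, pms):
    """vm: resource demand dict; pms: list of {'cap': {...}, 'used': {...}} dicts."""
    best, best_skew = None, None
    for pm in pms:
        new_used = {r: pm["used"][r] + vm[r] for r in vm}
        if any(new_used[r] > pm["cap"][r] for r in vm):
            continue                                            # VM does not fit
        s = skewness([new_used[r] / pm["cap"][r] for r in vm])
        if best is None or s < best_skew:
            best, best_skew = pm, s
    if best is not None:
        for r in vm:
            best["used"][r] += vm[r]                            # commit the placement
    return best

pms = [
    {"cap": {"cpu": 16, "mem": 64}, "used": {"cpu": 12, "mem": 8}},
    {"cap": {"cpu": 16, "mem": 64}, "used": {"cpu": 4, "mem": 40}},
]
chosen = place({"cpu": 4, "mem": 16}, pms)
```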
This paper provides an accurate model of the General Packet Radio Service (GPRS). GPRS is modeled as a single server queue in a Markovian environment. The queueing performance of data packets is evaluated by matrix geometric methods. The arrival process is assumed to follow a two state Markov modulated Poisson process (MMPP), and the service rate fluctuates based on voice loading. The analytical results are confirmed by simulation.
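The sketch below simulates the modelled system directly as a continuous-time Markov chain, assuming for simplicity that the service rate tracks the same two-state environment that modulates the arrivals; it is a sanity-check companion to the matrix-geometric analysis, not the analysis itself, and all rate values are placeholders.

```python
# Event-driven simulation of a two-state MMPP arrival process feeding one server.
import random

def mmpp_queue_sim(horizon=200000.0, lam=(0.2, 0.8), r=(0.005, 0.01), mu=(1.0, 0.6)):
    """lam[s]: arrival rate in state s, r[s]: rate of leaving state s,
    mu[s]: service rate in state s (tied to the environment state for simplicity)."""
    t, state, n, area = 0.0, 0, 0, 0.0
    while t < horizon:
        rate_arr, rate_sw = lam[state], r[state]
        rate_srv = mu[state] if n > 0 else 0.0
        total = rate_arr + rate_sw + rate_srv
        dt = random.expovariate(total)
        area += n * dt                        # accumulate for the time-average queue size
        t += dt
        u = random.random() * total
        if u < rate_arr:
            n += 1                            # packet arrival
        elif u < rate_arr + rate_srv:
            n -= 1                            # service completion
        else:
            state = 1 - state                 # environment (voice load) switches
    return area / t                           # time-average number of packets in the system

print(round(mmpp_queue_sim(), 3))
```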
Energy efficiency is becoming an important feature for designing the next generation of communication networks, as are the multiplication of access points and the reduction of their coverage area. In this article we survey the latest development in energy-efficient scheduling, a.k.a. green scheduling, for both classic and heterogeneous cellular networks. We first introduce the main system model and framework that are considered in most of the existing green scheduling works. We then describe the main contributions on green scheduling as well as summarize their key findings. For instance, green scheduling schemes have demonstrated that they can significantly reduce transmit power and improve the energy efficiency of cellular systems. We also provide a performance analysis of some of the existing schemes in order to highlight some of the challenges that need to be addressed to make green scheduling more effective in heterogeneous networks. Indeed, the coordination between tiers and the rate fairness between the users of different tiers are important issues that have not yet been addressed. In addition, most existing designs exhibit a computational complexity that is too high for being deployed in a real system.
One of the cutting edge requirements envisioned for next-generation mobile networks is to support ultra-reliable and low latency communication (URLLC), as well as to meet massive traffic demand in the next few years. Although network densification has been considered as one of the promising solutions to boost capacity and high throughput, the impact of mobility on latency and reliability in dense networks has not been well investigated. Moreover, handovers, especially in dense networks, can cause extra delay to the communication and degrade reliability performance. In this paper, we aim to analyse the impact of different handover hysteresis parameters on the performance metrics, such as end-to-end delay and packet loss ratio (PLR). In this regard, we compare latency and PLR performance around cell borders including the handover process with the overall period of simulation. Simulation results show that the impact of mobility becomes more significant in dense networks due to frequent exposure to cell borders and handovers.
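The role of the hysteresis parameter can be seen in a 3GPP A3-style trigger: a handover is initiated only when the neighbour cell exceeds the serving cell by the hysteresis margin for the whole time-to-trigger window. The values in the sketch below are illustrative and are not the simulation settings of the paper.

```python
# Simplified A3-style handover trigger with hysteresis and time-to-trigger.

def a3_triggered(serving_rsrp, neighbour_rsrp, hysteresis_db=3.0, ttt_samples=4):
    """RSRP inputs are lists of per-sample measurements in dBm (most recent last)."""
    window = list(zip(serving_rsrp[-ttt_samples:], neighbour_rsrp[-ttt_samples:]))
    if len(window) < ttt_samples:
        return False
    # Fire only if the neighbour beats the serving cell by the margin in every sample.
    return all(n > s + hysteresis_db for s, n in window)

print(a3_triggered([-96, -97, -98, -99], [-92, -92, -93, -93]))  # True
```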
As the "biggest big data", video streaming contributes the largest portion of global network traffic today and will continue to do so in the future. Due to heterogeneous mobile devices, networks and user preferences, the demand for transcoding source videos into different versions has increased significantly. However, video transcoding is a time-consuming task, and guaranteeing quality-of-service (QoS) for large volumes of video data is very challenging, particularly for real-time applications with strict delay requirements such as live TV. In this paper, we propose a cloud-based online video transcoding system (COVT) aiming to offer an economical and QoS-guaranteed solution for online large-volume video transcoding. COVT utilizes a performance profiling technique to obtain the performance of transcoding tasks on different infrastructures. Based on the profiles, we model the cloud-based transcoding system as a queue and derive the QoS values of the system using queueing theory. With the analytically derived relationship between QoS values and the number of CPU cores required for the transcoding workload, COVT is able to solve the optimization problem and obtain the minimum resource reservation for specific QoS constraints. A task scheduling algorithm is further developed to dynamically adjust the resource reservation and schedule the tasks so as to guarantee QoS at runtime. We implement a prototype system of COVT and experimentally study its performance on real-world workloads. Experimental results show that COVT effectively provisions the minimum number of resources for predefined QoS. To validate the effectiveness of our proposed method on large-scale video data, we further perform a simulation evaluation, which again shows that COVT is capable of achieving economical and QoS-aware video transcoding in the cloud.
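One simple way to realise the queueing-based reservation idea is to model the transcoding cluster as an M/M/c queue and choose the smallest number of cores for which the probability of exceeding the delay budget stays below a target, as sketched below; the M/M/c assumption and parameter values are illustrative only, not the queueing model used in COVT.

```python
# Hedged sketch: size an M/M/c transcoding cluster against a waiting-time budget.
import math

def erlang_c(c, a):
    """Probability an arriving task waits; a = lambda / mu is the offered load."""
    s = sum(a**k / math.factorial(k) for k in range(c))
    last = a**c / (math.factorial(c) * (1 - a / c))
    return last / (s + last)

def min_cores(lam, mu, delay_budget, eps=0.05, c_max=512):
    """Smallest c with P(wait > delay_budget) <= eps for an M/M/c queue."""
    a = lam / mu
    for c in range(1, c_max + 1):
        if a >= c:
            continue                        # unstable, need more cores
        p_exceed = erlang_c(c, a) * math.exp(-(c * mu - lam) * delay_budget)
        if p_exceed <= eps:
            return c
    return None

# e.g. 30 tasks/s arriving, each core completes 2 tasks/s, 2 s delay budget
print(min_cores(lam=30.0, mu=2.0, delay_budget=2.0))
```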
Cloud computing provides a promising solution to cope with the increased complexity of new video compression standards and the increased data volume of video sources. It not only saves the cost of overly frequent equipment upgrades but also gives individual users the flexibility to choose the amount of computing according to their needs. To facilitate cloud computing for real-time video encoding, in this paper we evaluate the amount of computing resource needed for H.264 and H.264 SVC encoding. We focus on evaluating the complexity-rate-distortion (C-R-D) relationship with a fixed encoding process but under different external configuration parameters. We believe such an empirical study is meaningful for eventually realizing real-time video encoding in an optimal way in a cloud environment. © 2011 IEEE.
Interference in wireless networks is one of the key capacity-limiting factors. The multicast capacity of an ad-hoc wireless network decreases with an increasing number of transmitting and/or receiving nodes within a fixed area. Digital Network Coding (DNC) has been shown to improve the multicast capacity of non-interfering wireless networks. However, the recently proposed Physical-layer Network Coding (PNC) and Analog Network Coding (ANC) have shown that it is possible to decode an unknown packet from the collision of two packets when one of the colliding packets is known a priori. Taking advantage of such a collision decoding scheme, in this paper we propose a Joint Network Coding based Cooperative Retransmission (JNC-CR) scheme, where we show that ANC along with DNC can offer a much higher retransmission gain than that attainable through either ANC, DNC or Automatic Repeat reQuest (ARQ) based retransmission. This scheme can be applied to two wireless multicast groups interfering with each other. Because of the broadcast nature of wireless transmission, receivers of different multicast groups can opportunistically listen to and cache packets from the interfering transmitter. These cached packets, along with the packets the receiver receives from its own transmitter, can then be used for decoding the JNC packet. We validate the higher retransmission gain of JNC against an optimal DNC scheme using simulation.
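The digital network coding component of the gain can be seen in a two-line example: a single coded retransmission of P1 XOR P2 lets a receiver that has cached P1 recover P2 (and vice versa), so one coded transmission replaces two separate ARQ retransmissions in this case.

```python
# Tiny XOR network-coding example with made-up packet contents.
p1 = bytes([0x10, 0x22, 0x35])
p2 = bytes([0xA0, 0x0B, 0xFF])
coded = bytes(a ^ b for a, b in zip(p1, p2))            # broadcast once
recovered_p2 = bytes(a ^ b for a, b in zip(coded, p1))  # receiver that cached p1
assert recovered_p2 == p2
```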
2022 13th International Conference on Information and Communication Technology Convergence (ICTC), 19-21 October 2022, Jeju Island, Republic of Korea. Protecting information systems against intruders' attacks requires utilising intrusion detection systems. Over the past several years, many open-source intrusion datasets have been made available so that academics and researchers can analyse and assess the effectiveness of various detection classifiers. These datasets are made available with a full complement of illustrative network features. In this research, we investigate the issue of Network Intrusion Detection (NID) by utilising an Internet of Things (IoT) dataset called Bot-IoT to evaluate the detection efficiency and effectiveness of five different Ensemble Learning Classifiers (ELCs). Our experimental results showed that although all ELCs recorded high classification metric scores, CatBoost performed the best in terms of Accuracy, Precision, F1-Score, and Training and Test Time.
This paper analyzes the performance of IEEE 802.11 MAC protocol under a disaster scenario. The performance is measured in terms of the recovery time and the throughput of the protocol when a network disaster occurs. To make the problem amenable to analysis, some approximations are used, and a new technique to collapse a very large state space is introduced. The analytical results are found to agree with simulations.
In this paper, we propose a scalable video adaptation mechanism to improve the overall quality of service (QoS) in wireless home networks with a mixture of IPTV and VoD users. Unlike most of the existing studies on video streaming over WLANs, which usually focus on only one type of video streams, either stored videos or live videos, here we consider a mixture of live and stored videos. We make use of the pre-buffering time of VoD users in the rate adaptation for both IPTV and VoD users so as to achieve an overall optimal QoE for all the users. In addition, we employ the standard H.264 SVC and consider a practical multi-rate scenario, where the physical data rate of a wireless user is determined according to its distance to the access point (AP). The corresponding multi-rate multi-queue MAC-layer throughput is analyzed so as to accurately estimate the bandwidth for the video streaming. The ns-2 simulations verify the effectiveness of the proposed scalable video adaptation. © 2011 IEEE.
Future wireless local area networks (WLANs) promise bit rates higher than 100 Mbps. Previous research by Xiao et al. reported that the current IEEE 802.11 medium access control (MAC) protocol does not scale well to high bit rate channels. In this letter, we propose an enhancement that uses contention-tone transmitted on a separate narrow band signaling channel. The proposed contention tone mechanism avoids more than 96% of transmission collisions, hence achieving near to the theoretical maximum throughput of a WLAN MAC protocol. © 2006 IEEE.
Interference in wireless networks is one of the key capacity-limiting factors. Recently developed interference-embracing techniques show promising performance on turning collisions into useful transmissions. However, the interference-embracing techniques are hard to apply in practical applications due to their strict requirements. In this paper, we consider utilising the interference-embracing techniques in a common scenario of two interfering sender-receiver pairs. By employing opportunistic listening and analog network coding (ANC), we show that compared to traditional ARQ retransmission, a higher retransmission throughput can be achieved by allowing two interfering senders to cooperatively retransmit selected lost packets at the same time. This simultaneous retransmission is facilitated by a simple handshaking procedure without introducing additional overhead. Simulation results demonstrate the superior performance of the proposed cooperative retransmission.
Existing ad-hoc network routing strategies base their operations on flooding route requests throughout the network and choosing the shortest path thereafter. However, this typically results in a large number of unnecessary transmissions, which could be expensive for resource-constrained nodes such as those in a sensor network. In this paper, we propose a new mechanism, HopAlert, which optimizes route establishment and packet routing by limiting the number of nodes taking part in the route discovery process while still establishing routes with low hop counts. Using analysis and simulations, we show that this results in more routes with shorter hop counts than a reactive flooding protocol such as AODV, while achieving higher transmission savings. ©2010 IEEE.
The present authors would like to point out that the observation where the channel collision probability depends on whether the channel was busy or idle discussed by Kuan and Dimyati has been reported earlier by the present authors, Foh and Tantra. © The Institution of Engineering and Technology 2007.
Software-defined networking (SDN) is a promising network paradigm for the future Internet. The centralized controller and simplified switches replace traditional complex forwarding devices and make network management convenient. However, the switches in SDN currently have limited ternary content addressable memory (TCAM) to store specific routing rules from the controller. This bottleneck invites cyber attacks that overload the switches. Although some countermeasures for such attacks exist, they are based on simplified attack patterns. In this paper, we examine the table-overflow attack under a sophisticated attack pattern in which attack flows target switches at their middle hops instead of their endpoints. We first define potential targets in the network topology, and then we propose three specific traffic features and a monitoring mechanism to detect and locate the attackers. Further, we propose a mitigation mechanism that limits the attack rate using the token bucket model. By controlling the token add rate and the bucket capacity, it avoids table overflow on the victim switch. Extensive simulations in different types of topologies and experiments in our testbed are provided to show the performance of our proposal.
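A minimal token-bucket rate limiter of the kind used in the mitigation is sketched below; the fill rate and capacity values are placeholders, and the mapping from detected attackers to bucket parameters is omitted here.

```python
# Minimal token bucket: flow-rule installations from a suspect source are admitted
# only while tokens remain, so the add rate and burst size are bounded by the
# fill rate and bucket capacity respectively.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate                    # tokens added per second
        self.capacity = capacity            # maximum burst of rule installations
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost=1.0):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True                     # install the flow rule
        return False                        # drop / rate-limit the request

bucket = TokenBucket(rate=5.0, capacity=20.0)
print(bucket.allow())
```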
The grant-free non-orthogonal multiple access (NOMA) scheme is considered a promising candidate for enabling massive connectivity with reduced signalling overhead for Internet of Things (IoT) applications in massive machine-type communication (mMTC) networks. Exploiting the inherently sporadic nature of transmissions in grant-free NOMA systems, compressed sensing based multiuser detection (CS-MUD) has been deemed a powerful solution for user activity detection (UAD) and data detection (DD). In this paper, the block coordinate descent (BCD) method is employed in CS-MUD to reduce the computational complexity. We propose two modified BCD based algorithms, called enhanced BCD (EBCD) and complexity-reduction enhanced BCD (CR-EBCD), respectively. To be specific, by incorporating a novel candidate set pruning mechanism into the original BCD framework, our proposed EBCD algorithm achieves remarkable CS-MUD performance improvement. In addition, the proposed CR-EBCD algorithm further ameliorates EBCD by eliminating redundant matrix multiplications during the iteration process. As a consequence, compared with the proposed EBCD algorithm, our CR-EBCD algorithm enjoys two orders of magnitude complexity saving without any CS-MUD performance degradation, rendering it a viable solution for future mMTC scenarios. Extensive simulation results demonstrate the bound-approaching performance as well as the ultra-low computational complexity.
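To illustrate the coordinate-descent principle that EBCD and CR-EBCD build on, the sketch below runs a plain (scalar) coordinate-descent LASSO solver on a toy sparse recovery problem; the actual CS-MUD algorithms operate on user blocks and add the candidate-set pruning and complexity-reduction refinements described above, so this is only a didactic stand-in.

```python
# Coordinate-descent LASSO for y ≈ A x with a sparse x (toy sparse recovery).
import numpy as np

def cd_lasso(A, y, lam=0.1, iters=100):
    n = A.shape[1]
    x = np.zeros(n)
    col_norm2 = (A ** 2).sum(axis=0)
    r = y - A @ x
    for _ in range(iters):
        for j in range(n):
            r += A[:, j] * x[j]                   # remove coordinate j from the residual
            rho = A[:, j] @ r
            x[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_norm2[j]  # soft-threshold
            r -= A[:, j] * x[j]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))                # 100 potential users, 40 observations
x_true = np.zeros(100)
x_true[[3, 17, 60]] = [1.0, -2.0, 1.5]            # only a few users are active (sparse)
x_hat = cd_lasso(A, A @ x_true, lam=0.5)
```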
In 5G and beyond networks, accurate localization services and nanosecond time synchronization are crucial to enabling mission-critical wireless communications technologies and techniques such as autonomous vehicles and distributed multiple-input and multiple-output (MIMO) antenna systems. This paper investigates how to improve wireless time synchronization by studying time correction based on the Real-Time Kinematics (RTK) positioning algorithm. We use multiple Global Navigation Satellite System (GNSS) receiver references and the proposed binary GNSS satellite formation to reduce the effect of the ionosphere and troposphere delays and to reduce the measurement phase-range and pseudorange errors. As a result, the approach improves the user equipment's (UE) localization and measures the time difference between the Base Station (BS) and UE local clocks. The results show that the positioning accuracy has been increased and millimetre accuracy has been achieved, while attaining a sub-nanosecond time error (TE) between the UE and BS local clocks.
Energy efficiency (EE) has become a critical metric for next generation mobile networks. Scheduling plays a key role in offering not only spectrally efficient but also energy efficient operation in mobile networks. In this paper, we address the problem of EE scheduling for the downlink of a two-tier Heterogeneous Network (HetNet) with orthogonal frequency division multiple access (OFDMA). Contrary to existing contributions on this research topic, we propose a coordinated green scheduling scheme that maximizes EE for the entire HetNet rather than a particular tier. Moreover, our novel scheduling scheme uses a more realistic EE criterion in which the time dependence of the scheduling process is taken into account. Numerical results are presented showing the competitive EE performance of our proposed scheduling scheme with improved user fairness and reduced complexity compared with existing non-HetNet schemes. In dense small-cell situations, our scheme reduces the scheduling processing time by as much as 25 times.
In this paper, we study the source mobility problem that exists in the current named data networking (NDN) architecture and propose a proxy-based mobility support approach named PMNDN to overcome it. PMNDN uses a proxy to efficiently manage source mobility. In addition, the functionalities of the NDN access routers are extended to track the mobility status of a source and signal the proxy about a handoff event. With this design, a mobile source does not need to participate in handoff signaling, which reduces the consumption of limited wireless bandwidth. PMNDN also features an ID that is structurally similar to the content name, so that the routing scalability of the NDN architecture is maintained and the addressing efficiency of Interest packets is improved. We illustrate the performance advantages of our proposed solution by comparing its handoff performance with that of the mobility support approaches in the NDN architecture and the current Internet architecture via analytical and simulation investigation. We show that PMNDN offers lower handoff cost, shorter handoff latency, and fewer packet losses during the handoff process.
Grant-free non-orthogonal multiple access (GF-NOMA) is considered a promising technique to address the bottleneck of ubiquitous connectivity in massive machine-type communication (mMTC) scenarios. One of the challenging problems in uplink GF-NOMA systems is how to efficiently perform user activity detection and data detection. In this paper, a novel complexity-reduction weighted block coordinate descent (CR-WBCD) algorithm is proposed to address this problem. To be specific, we formulate the multiuser detection (MUD) problem in uplink GF-NOMA systems as a weighted l2 minimization problem. Based on the block coordinate descent (BCD) framework, a closed-form solution involving dynamic user-specific weights is derived to adaptively identify the active users with high accuracy. Furthermore, a complexity reduction mechanism is developed for substantial computational cost saving. Simulation results demonstrate that the proposed algorithm enjoys bound-approaching detection performance with more than three orders of magnitude computational complexity reduction.
The current storage system is facing a performance bottleneck due to the gap between fast CPU computing speed and the slow response time of hard disks. Recently, a multi-tier hybrid storage system (MTHS), which uses fast flash devices such as a solid-state drive (SSD) as one of the high-performance storage tiers, has been proposed to boost storage system performance. In order to maintain the overall performance of the MTHS, optimal disk storage assignment has to be designed so that the data migrated to the high-performance tier such as the SSD is the optimal set of data. In this paper we propose an optimal data allocation algorithm for disk storage in the MTHS. The data allocation problem (DAP) is to find the optimal lists of data files for each storage tier in the MTHS to achieve maximal benefit values without exceeding the available size of each tier. We formulate the DAP as a special multiple-choice knapsack problem (MCKP) and propose multiple-stage dynamic programming (MDP) to find the optimal solutions. The results show that the MDP can achieve improvements of up to 6 times compared with existing greedy algorithms. © 2012 DSI.
The emergence of high speed WLANs, such as IEEE 802.11a and IEEE 802.11g, has provided an alternative solution for mobile users to access a network in addition to the popular IEEE 802.11b solution. Although the channel data rate of these emerging high speed WLANs is five times higher than that of 802.11b, some recent studies have shown that the throughput of IEEE 802.11 drops as the channel data rate increases, and that a throughput upper limit exists even when the channel data rate goes to infinity. These findings indicate that the performance of a WLAN will not be efficiently improved by merely increasing the channel data rate. In this paper, we propose a new protocol scheme that makes use of an out-of-band signaling (OBS) technique. The proposed scheme provides better bandwidth usage compared to the in-band signaling technique in the existing scheme and is compatible with the existing IEEE 802.11 standard.