Professor Ning Wang
Academic and research departments
Institute for Communication Systems, School of Computer Science and Electronic Engineering
About
Biography
Professor Ning Wang received his BEng degree in computing from Changchun University of Science and Technology, China, in 1996 and his MEng degree in electronic engineering from Nanyang Technological University, Singapore, in 2000.
He obtained his PhD degree in electronic engineering from the Centre for Communication Systems Research (CCSR, currently known as the Institute of Communication Systems - ICS), University of Surrey, UK in 2004.
University roles and responsibilities
- Coordinator of the EuroMaster Programme
- Coordinator of the Communication Networks and Software (CNS) pathway (MSc level)
- 5GIC Work Area 1 leader: Content, user and network context
Research
Research interests
Since 2009, Professor Wang has been the principal investigator on EU, UK EPSRC, Innovate UK (DCMS) and Royal Society research projects, covering the technical areas of Future Internet design, network intelligence, networked content and smart grid communications.
To date, Professor Wang has published more than 130 research papers, and he has been actively engaged in international standardisation activities including the IETF, 3GPP, ITU-T and ETSI. Since 2012, his work on network management and 5G mobile video delivery has been featured in IEEE ComSoc Technology News (CTN) three times.
Research projects
Current projects
- DCMS TUDOR (Towards Ubiquitous 3D Open Resilient Network) Project
- EPSRC NG-CDI (Next Generation Converged Digital Infrastructures) Project
- EU Horizon Europe SPIRIT (Scalable Platform for Innovations for Real-time Immersive Telepresence) Project
- ESA TINA (5G Functions in Space) Project
Previous research projects
- Industry-funded Project on Network 2030
- EU H2020 5GPPP SAT5G (Satellite and Terrestrial Networking for 5G) Project
- DCMS Worcester LEP (WLEP) on 5G testing
- Royal Society NewTRIP (New Transport-layer Intelligence and Protocols) Project
- DCMS 5GUK Testbed Hub 1 Project
- EPSRC KCN (Knowledge Centric Networking) Project
- EPSRC/ChistEra CONCERT (A Context-Adaptive Content Ecosystem Under Uncertainty) Project
- FP7 C-DAX (Cyber-Secure Data and Control Cloud for Power Grids) Project
- Industry-funded Information Management Project
- FP7/IRSES EVANS (End-to-end Virtual Resource Management Across Heterogeneous Networks and Services) Project
- FP7 COMET (Content Mediator Architecture for Content-aware Networks) Project
- FP7 UniverSelf (Realising Autonomics for Future Networks) Project
- EPSRC Mobile VCE Flexible Networks Project
- FP7 4WARD Project
- FP6 EMANICS (European Network of Excellence for the Management of Internet Technologies and Complex Services) Project
- FP6 Enthrone-II (End-to-End QoS through Integrated Management of Content, Networks and Terminals) Project
- FP6 AGAVE (A Lightweight Approach for Viable End-to-end IP based QoS Services) Project
- FP5 MESCAL (Management of End-to-end Quality of Service in the Internet at Large) Project
- FP5 TEQUILA (Traffic Engineering for Quality of Service in the Internet, At Large Scale) Project
Research collaborations
My research collaborations are mainly established through current and previously funded research projects.
Current collaborations
- UK 5G Innovation Centre (5GIC) and 5GUK partners
- EPSRC KCN (Knowledge Centric Networking) Project: University College London, University of Bristol
- EPSRC NG-CDI (Next Generation Converged Digital Infrastructures) Project: BT, Lancaster University, Cambridge University, University of Bristol
- EU H2020 5GPPP SAT5G (Satellite and Terrestrial Networking for 5G) Project
Previous collaborations
- EPSRC/ChistEra CONCERT (A Context-Adaptive Content Ecosystem Under Uncertainty) Project: University College London (UK), AAU Klagenfurt (Austria), EPFL (Switzerland)
- FP7 C-DAX (Cyber-Secure Data and Control Cloud for Power Grids) Project: Alliander (Netherlands), National Instrument (Sweden), University College London, University of Ghent (iMinds, Belgium), Tubingen University (Germany), EPFL, Radboud University (Netherlands)
- FP7/IRSES EVANS (End-to-end Virtual Resource Management Across Heterogeneous Networks and Services) Project: University of Essex (UK), Simula (Norway), UPC (Spain), Tsinghua University (China), BUPT (China)
- InterDigital Information Management Project (2013)
- FP7 COMET (Content Mediator Architecture for Content-aware Networks) Project: Telefonica (Spain), UCL (UK), IntraCom (Greece), Warsaw University of Technology (Poland), PrimeTel (Cyprus).
Teaching
- EEEM023 (Level M): Network and Service Management and Control - module coordinator
- Engineering Professional Study (EPS) modules for Euromaster students
- Undergraduate tutor.
Publications
Holographic-type Communication (HTC) is widely regarded as an emerging form of augmented reality (AR) media that offers Internet users deeply immersive experiences. In contrast to traditional video content transmission, the characteristics and network requirements of HTC have received much less study in the literature. Due to the high bandwidth requirements and various limitations of today's HTC platforms, large-scale HTC streaming had never been systematically attempted and comprehensively evaluated. In this paper, we introduce a novel HTC-based teleportation platform that leverages cloud-based remote production functions, supported by newly proposed adaptive frame buffering and end-to-end signalling techniques against network uncertainties, and which for the first time is able to provide assured user experiences at the public Internet scale. Through real-life experiments based on strategically deployed cloud sites for remote production functions, we demonstrate the feasibility of supporting assured user performance for such applications at the global Internet scale.
Integrating Low Earth Orbit (LEO) satellites with terrestrial network infrastructures to support ubiquitous Internet service coverage has recently gained increasing research momentum. One distinct challenge is the frequent topology change caused by the constellation behaviour of LEO satellites. In the context of software defined networking (SDN), the controller function originally required to control the conventional data plane of terrestrial SDN switches will need to expand its responsibility to cover their counterparts in space, namely the LEO satellites used for data forwarding. As such, seamless integration of the fixed control plane on the ground and the mobile data plane fulfilled by constellation LEO satellites becomes a distinct challenge. In this paper, we propose the Virtual Data-Plane Addressing (VDPA) scheme, which uses IP addresses to represent virtual switches at fixed space locations that are periodically instantiated by the LEO satellites traversing them in a predictable manner. With such a scheme, the changing data-plane network topology incurred by the LEO satellite constellation can be made completely agnostic to the control plane on the ground, thus enabling a native approach to supporting seamless communication between the two planes. Our testbed-based experiment results prove the technical feasibility of the proposed VDPA-based flow rule manipulation mechanism in terms of data plane performance.
Live holographic teleportation is an emerging media application that allows Internet users to communicate with each other in a fully immersive manner. One distinct feature of such an application is the capability of simultaneously teleporting multiple objects from different network locations to the receiver's field of view, mimicking the effect of group-based communications in a common physical space. In this case, teleportation frames from individual sources need to be stringently synchronized in order to assure user Quality of Experiences (QoE) in terms of avoiding the perception of motion misalignment at the receiver side. In this paper, we carry out systematic performance evaluations on how different Internet path conditions may affect the teleportation frame synchronisation performances. Based on this, we present a lightweight, edge-computing based scheme that is able to achieve controllable frame synchronisation operations for multi-source based teleportation applications at the Internet scale.
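The core synchronisation principle in this abstract, holding frames at the edge until every source has contributed one for the same timestamp, can be sketched as follows (an illustrative Python sketch; the class and its names are hypothetical, not from the paper):

```python
from collections import defaultdict

class FrameSynchroniser:
    """Illustrative multi-source frame synchroniser: a frame set for a given
    timestamp is released only once every source has contributed, so the
    receiver never renders misaligned motion."""

    def __init__(self, sources):
        self.sources = set(sources)
        self.pending = defaultdict(dict)  # timestamp -> {source: frame}

    def push(self, source, timestamp, frame):
        """Buffer a frame; return the complete, aligned set if ready, else None."""
        self.pending[timestamp][source] = frame
        if set(self.pending[timestamp]) == self.sources:
            return self.pending.pop(timestamp)
        return None
```

Pushing a frame returns None until the set for that timestamp is complete, at which point the aligned frames are released together; a real edge deployment would also need timeouts for late or lost frames.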
The introduction of the Bitcoin cryptocurrency has inspired businesses and researchers to investigate the technical aspects of blockchain and DLT systems. However, today's blockchain technologies still have distinct limitations in scalability and flexibility, particularly with respect to large network sizes and dynamic reconfigurability. Sharding appears to be a promising solution for scaling out a blockchain system horizontally by dividing the entire network into multiple shards or clusters. However, the flexibility and reconfigurability of these clusters need further research and investigation. In this paper, we propose two efficient mechanisms to enable flexible, dynamic re-clustering of the blockchain network, including blockchain cluster merging and splitting operations. Such mechanisms offer a solution for specific application scenarios such as microgrids and other edge-based applications where clusters of autonomous systems potentially require structural reconfiguration. The proposed mechanisms offer three-stage procedures to merge and split multiple clusters. Based on our simulation experiments, we show that the proposed merging and splitting operations based on the proof-of-work (PoW) consensus algorithm can be optimized to reduce the merging time considerably (by a factor in the order of 22,000 based on 100 blocks), which effectively reduces the overall merging and splitting completion time, interruption time and required computation power.
As over-the-top (OTT) applications such as video dominate global mobile traffic, conventional content delivery techniques such as caching no longer suffice to meet mobile network users' requirements, due, for example, to fluctuating radio conditions. In legacy 4G networks, the mobile network operator (MNO) and OTT service providers (OSPs) are logically decoupled from each other, preventing them from sharing the necessary context and enabling in-network, context-aware intelligence. The softwarized and virtualized nature of the recently standardized 5G network architecture opens up new opportunities for flexible deployment of MNO- and OSP-operated network functions. In this work, we first extend the current 5G standard to enable third-party stakeholders to deploy their own user-plane functions (UPFs) within the MNO infrastructure. Based on this, we propose a service function chaining (SFC) framework within the 5G core network, which allows the MNO to dynamically determine the optimal set of UPFs that each flow should traverse based on its real-time context. The proposed framework has been implemented in a testbed network. Through realistic experiments, we demonstrate that the UPF deployment strategy plays a crucial role in the resulting SFC performance, and that our proposed scheme can achieve performance close to the benchmark. Furthermore, we establish recommendations on best practices for UPF deployment strategies in 5G networks.
In emerging on-demand and live surveillance video applications, end users may actively change content resolutions, which can trigger sudden and potentially substantial changes in data rate requirements. Traditional static IP paths may not seamlessly handle such changes of user intent in video applications, potentially leading to user QoE deterioration. In this paper, we propose an SRv6-enabled SDN framework that allows on-the-fly changes of video delivery paths (when necessary) upon detection of dynamic user intent for different video resolutions. This is achieved through offline definition of possible user intent scenarios for specific video resolutions, which can be captured by an edge-computing-based intent framework before the path switching action is triggered. We demonstrate a use case of a 4K video quality switch on an implemented framework, and the results show substantially reduced resolution switching delay upon user intent changes during ongoing video sessions.
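The offline intent-to-path definition described above can be illustrated with a minimal sketch, assuming a precomputed table of SRv6 segment lists per resolution class (the segment IDs and function names are hypothetical, not from the paper):

```python
# Hypothetical mapping from user video-resolution intent to precomputed
# SRv6 segment lists (segment IDs are illustrative placeholders).
SEGMENT_LISTS = {
    "1080p": ["fc00::a1", "fc00::d1"],             # default path
    "4k":    ["fc00::a1", "fc00::b2", "fc00::d1"], # detour via higher-capacity core
}

def select_path(current_resolution, requested_resolution):
    """Return (segments, switched): switch the delivery path only when the
    detected intent actually changes the resolution class."""
    if requested_resolution not in SEGMENT_LISTS:
        raise ValueError(f"no offline-defined intent for {requested_resolution}")
    switched = requested_resolution != current_resolution
    return SEGMENT_LISTS[requested_resolution], switched
```

Defining the scenarios offline means the edge intent framework only performs a table lookup at run time, which is what keeps the resolution-switching delay low.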
The current Internet has been founded on the architectural premise of a simple network service used to interconnect relatively intelligent end systems. While this simplicity allowed it to reach an impressive scale, the predictive manner in which ISP networks are currently planned and configured through external management systems and the uniform treatment of all traffic are hampering its use as a unifying multi-service network. The future Internet will need to be more intelligent and adaptive, optimizing continuously the use of its resources and recovering from transient problems, faults and attacks without any impact on the demanding services and applications running over it. This article describes an architecture that allows intelligence to be introduced within the network to support sophisticated self-management functionality in a coordinated and controllable manner. The presented approach, based on intelligent substrates, can potentially make the Internet more adaptable, agile, sustainable, and dependable given the requirements of emerging services with highly demanding traffic and rapidly changing locations. We discuss how the proposed framework can be applied to three representative emerging scenarios: dynamic traffic engineering (load balancing across multiple paths); energy efficiency in ISP network infrastructures; and cache management in content-centric networks.
Software-Defined Networking (SDN) is a promising paradigm of computer networks, offering a programmable and centralised network architecture. However, although such a technology supports the ability to dynamically handle network traffic based on real-time and flexible traffic control, SDN-based networks can be vulnerable to dynamic change of flow control rules, which causes transmission disruption and packet loss in SDN hardware switches. This problem can be critical because the interruption and packet loss in SDN switches can bring additional performance degradation for SDN-controlled traffic flows in the data plane. In this paper, we propose a novel robust flow control mechanism referred to as Priority-based Flow Control (PFC) for dynamic but disruption-free flow management when it is necessary to change flow control rules on the fly. PFC minimizes the complexity of flow modification process in SDN switches by temporarily adapting the priority of flow rules in order to substantially reduce the time spent on control-plane processing during run-time. Measurement results show that PFC is able to successfully prevent transmission disruption and packet loss events caused by traffic path changes, thus offering dynamic and lossless traffic control for SDN switches.
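The PFC idea of temporarily adapting rule priorities so that a matching rule always exists during a change can be illustrated with a toy flow table in Python (a minimal sketch under simplifying assumptions, not the actual switch implementation):

```python
class FlowTable:
    """Toy flow table: lookup returns the action of the highest-priority
    rule matching the flow, mimicking an SDN switch's matching behaviour."""

    def __init__(self):
        self.rules = []  # (priority, match, action)

    def install(self, priority, match, action):
        self.rules.append((priority, match, action))
        self.rules.sort(key=lambda r: -r[0])  # highest priority first

    def remove(self, match, priority):
        self.rules = [r for r in self.rules if (r[1], r[0]) != (match, priority)]

    def lookup(self, flow):
        for priority, match, action in self.rules:
            if match == flow:
                return action
        return None  # table miss: the packet would be dropped in a real switch

def priority_based_update(table, flow, new_action, old_priority=10):
    """Disruption-free rule change in the spirit of PFC: the replacement rule
    is installed at a temporarily higher priority *before* the stale rule is
    removed, so every lookup during the transition still finds a match."""
    table.install(old_priority + 1, flow, new_action)  # new rule wins immediately
    assert table.lookup(flow) == new_action            # no window with a table miss
    table.remove(flow, old_priority)                   # now safe to purge old rule
```

The key property is ordering: because installation precedes removal and the new rule outranks the old one, there is no instant at which the flow has no matching rule.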
The next generation Internet is expected to focus more on large-scale media/content distribution rather than the communication infrastructure. In this article, we present CURLING, a Content-Ubiquitous Resolution and Delivery Infrastructure for Next Generation Services. The proposed architecture will support the realization of a future content-centric Internet that will overcome the current intrinsic constraints by efficiently diffusing media content of massive scale. We propose a holistic approach that natively supports content manipulation capabilities which encompass the entire content lifecycle, from content publication to content resolution and finally, to content delivery at Internet-wide scale. The CURLING infrastructure offers to both content providers and customers high flexibility in expressing their location preferences when publishing and requesting content respectively, thanks to the proposed scoping and filtering functions. Content manipulation operations can be driven by a variety of factors, including business relationships between Internet Service Providers (ISPs), local ISP policies, and specific content provider and customer preferences. Content resolution is also natively coupled with optimized content routing techniques that enable efficient unicast and multicast-based content delivery across the global Internet.
With the fast development of the Internet, the size of the Forwarding Information Base (FIB) maintained at backbone routers is experiencing exponential growth, making the storage support and lookup process of FIBs a severe challenge. One effective way to address the challenge is FIB compression, and various solutions have been proposed in the literature. The main shortcoming of FIB compression is the overhead of updating the compressed FIB when routing update messages arrive. Only when the update time of a FIB compression algorithm has a tight worst-case bound can the packet loss incurred by FIB compression operations during updates be completely avoided. However, no prior FIB compression algorithm achieves a tightly bounded worst-case update time, and hence a mature solution that completely avoids packet loss is still to be identified. To address this issue, we propose the Unite and Split (US) compression algorithm to enable fast updates with a controlled worst-case update time. Further, we use the US algorithm to improve the performance of a number of classic software and hardware lookup algorithms. Simulation results show that the average update speed of the US algorithm is slightly faster than that of the binary trie without any compression, whereas prior compression algorithms inevitably and seriously degrade update performance. After applying the US algorithm, the evaluated lookup algorithms exhibit significantly smaller on-chip memory consumption with little additional update overhead.
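The "unite" half of such a compression scheme can be illustrated on a binary trie: when two sibling leaf prefixes share the same next hop, they can be pulled up into their parent, shrinking the FIB. The sketch below is a simplified illustration of that general idea, not the US algorithm itself:

```python
class Node:
    __slots__ = ("nh", "left", "right")
    def __init__(self, nh=None):
        self.nh, self.left, self.right = nh, None, None

def insert(root, prefix, nh):
    """Insert a binary prefix string (e.g. '101') with its next hop."""
    node = root
    for bit in prefix:
        attr = "left" if bit == "0" else "right"
        if getattr(node, attr) is None:
            setattr(node, attr, Node())
        node = getattr(node, attr)
    node.nh = nh

def compress(node):
    """Illustrative 'unite' step: if both children are leaves carrying the
    same next hop, pull that next hop up and drop the children."""
    if node is None:
        return None
    node.left, node.right = compress(node.left), compress(node.right)
    l, r = node.left, node.right
    if (l and r and l.nh is not None and l.nh == r.nh
            and not (l.left or l.right or r.left or r.right)):
        node.nh, node.left, node.right = l.nh, None, None
    return node

def count(node):
    """Number of trie nodes, a proxy for FIB memory footprint."""
    return 0 if node is None else 1 + count(node.left) + count(node.right)
```

The update-cost concern in the abstract arises exactly here: a routing update that changes one child's next hop may force a compressed node to be split again, so bounding that rework is what distinguishes a practical compression algorithm.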
The launch of the Starlink project has recently stimulated a new wave of research on integrating Low Earth Orbit (LEO) satellite networks with the terrestrial Internet infrastructure. In this context, one distinct technical challenge is the frequent topology change caused by the constellation behaviour of LEO satellites. Frequent changes of the peering IP connection between the space and terrestrial Autonomous Systems (ASes) inevitably disrupt Border Gateway Protocol (BGP) routing stability at the network boundaries, which can be further propagated into the internal routing infrastructures within ASes. To tackle this problem, we introduce the Geosynchronous Network Grid Addressing (GNGA) scheme, which decouples IP addresses from physical network elements such as individual LEO satellites. Specifically, according to the density of LEO satellites on the orbits, IP addresses are allocated to a number of stationary "grids" in the sky and dynamically bound to the interfaces of the specific satellites moving through those grids over time. Such a scheme allows a static peering connection between a terrestrial BGP speaker and a fixed external BGP (e-BGP) peer in space, and hence circumvents the exposure of routing disruptions to legacy terrestrial ASes. This work-in-progress specifically addresses a number of fundamental technical issues pertaining to the design of the GNGA scheme.
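The grid-to-satellite binding idea can be illustrated with a small sketch, assuming a single circular orbit with evenly spaced satellites and one stationary grid per satellite slot (a deliberate simplification of the scheme described above):

```python
def grid_occupant(num_sats, period_s, grid_index, t_s):
    """Hypothetical GNGA-style binding: num_sats satellites are evenly spaced
    on one orbit with the given period; the satellite whose angular position
    falls inside stationary grid `grid_index` at time t_s owns that grid's
    (fixed) IP address."""
    grid_width = 360.0 / num_sats                  # one grid per satellite slot
    offset_deg = (t_s / period_s) * 360.0          # constellation rotation so far
    # Satellite i sits at angle (i * grid_width + offset) mod 360; invert that
    # relation to find which satellite currently occupies the requested grid.
    return int(((grid_index * grid_width - offset_deg) % 360.0) // grid_width)
```

At t = 0 each grid is owned by its same-numbered satellite; as the constellation rotates, ownership of a fixed grid (and hence its IP address) hands over predictably to the trailing satellite, which is why the terrestrial BGP peer never sees the change.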
Energy consumption in ISP backbone networks has been rapidly increasing with the advent of increasingly bandwidth-hungry applications. Network resource optimization through sleeping reconfiguration and rate adaptation has been proposed for reducing energy consumption when the traffic demands are at their low levels. It has been observed that many operational backbone networks exhibit regular diurnal traffic patterns, which offers the opportunity to apply simple time-driven link sleeping reconfigurations for energy-saving purposes. In this work, an efficient optimization scheme called Time-driven Link Sleeping (TLS) is proposed for practical energy management which produces an optimized combination of the reduced network topology and its unified off-peak configuration duration in daily operations. Such a scheme significantly eases the operational complexity at the ISP side for energy saving, but without resorting to complicated online network adaptations. The GÉANT network and its real traffic matrices were used to evaluate the proposed TLS scheme. Simulation results show that up to 28.3% energy savings can be achieved during off-peak operation without network performance deterioration. In addition, considering the potential risk of traffic congestion caused by unexpected network failures based on the reduced topology during off-peak time, we further propose a robust TLS scheme with Single Link Failure Protection (TLS-SLFP) which aims to achieve an optimized trade-off between network robustness and energy efficiency performance.
Optimizing servers' power consumption in content distribution infrastructures has attracted increasing research effort. The technical challenge is the trade-off between server power consumption and content service capability on both the server and the network side. This paper proposes and evaluates a novel approach that optimizes content servers' power consumption in large-scale content distribution platforms across multiple ISP domains. Specifically, our approach strategically puts servers into sleep mode without violating the load capacities of virtual content delivery links and active servers in the infrastructure. The problem is formulated as a nonlinear programming model. The efficiency of our approach is evaluated on a content distribution topology covering two real interconnected domains. Simulations show that our approach is capable of reducing servers' power consumption by up to 62.2%, while keeping actual service performance within an acceptable range.
This letter highlights the combined advantages of Open Radio Access Network (O-RAN) and distributed Artificial Intelligence (AI) in network slicing. O-RAN's virtualization and disaggregation techniques enable efficient resource allocation, while AI-driven networks optimize performance and decision-making. We propose a federated Deep Reinforcement Learning (DRL) approach to offload dynamic RAN disaggregation to edge sites to enable local data processing and faster decision-making. Our objective is to optimize dynamic RAN disaggregation by maximizing resource utilization and minimizing reconfiguration overhead. Through performance evaluation, our proposed approach surpasses the distributed DRL approach in the training phase. By modifying the learning rate, we can influence the variance of rewards and enhance the convergence of training. Moreover, fine-tuning the reward function's weighting factor enables us to attain the targeted network Key Performance Indicators (KPIs).
With the increasing importance of the Internet for delivering personal and business applications, the slow re-convergence of existing routing protocols after network failure becomes a significant problem. This is especially true for real-time multimedia services, where service disruption cannot generally be tolerated. To ensure fast network failure recovery, IP Fast Reroute (FRR) can be adopted to immediately reroute affected customer traffic from the default path onto a backup path when a link failure occurs, thus avoiding slow Interior Gateway Protocol (IGP) re-convergence. We observe that IGP link weight settings play an important role in the protection coverage achieved for intra-domain link failures. Therefore, in this paper we present an IGP link weight optimization scheme for backup path provisioning, which works on top of a multi-plane enabled routing platform. The scheme aims to optimize path diversity among multiple routing planes. Due to the large search space of possible intra-domain link weights, we adopt a global search method based on a genetic algorithm to optimize the IGP link weights. Evaluation results show that in most cases a set of optimal link weights can be found that ensures there are no more critical shared links among the diverse paths on each routing plane. As a result, backup paths are always available in the case of single link failures.
Taking advantage of its spontaneous and infrastructure-less behaviour, a mobile ad hoc network (MANET) can be integrated with various networks to extend communication to different types of network services. In such an integrated system, the design of the gateway is vital for providing interconnection between different networks and for data aggregation. In integrated networks with multiple gateways, proper gateway selection guarantees the desired QoS and optimizes network resource utilization. However, efficient gateway selection remains challenging in integrated MANET systems with distributed terminals and limited network resources. In this paper, we examine the gateway selection problem from different aspects, including information discovery behaviour, selection criteria and the decision-making entity. The benefits and drawbacks of each method are illustrated and compared. Based on this discussion, points of consideration are highlighted for future studies.
Node clustering has been widely studied in recent years for Wireless Sensor Networks (WSNs) as a technique to form a hierarchical structure and prolong network lifetime by reducing the number of packet transmissions. Cluster Heads (CHs) are elected in a distributed way among the sensors, but are often heavily overloaded, and therefore re-clustering operations should be performed to share the resource-intensive CH role. Existing protocols involve periodic, network-wide re-clustering operations performed simultaneously, which requires global time synchronisation. To address this issue, some recent studies have proposed asynchronous node clustering for networks with direct links from CHs to the data sink. However, large-scale WSNs require multihop packet delivery to the sink, since long-range transmissions are costly for sensor nodes. In this paper, we present an asynchronous node clustering protocol designed for multihop WSNs, considering dynamic conditions such as residual node energy levels and the unbalanced data traffic loads caused by packet forwarding. Simulation results demonstrate that similar levels of lifetime extension can be achieved by re-clustering a multihop WSN via independently made decisions at CHs, without the global time synchronisation required by existing synchronous protocols.
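The asynchronous trigger can be sketched as a purely local decision at each cluster head, so no network-wide synchronised re-clustering round is needed (an illustrative sketch; the threshold rule below is an assumption, not the protocol's exact criterion):

```python
def should_handover(ch_energy, member_energies, margin=0.1):
    """Illustrative asynchronous re-clustering trigger: a cluster head gives
    up its role, independently of other clusters, once its residual energy
    falls a `margin` fraction below the cluster average."""
    cluster = [ch_energy] + list(member_energies)
    avg = sum(cluster) / len(cluster)
    return ch_energy < (1.0 - margin) * avg

def elect_new_head(member_energies):
    """Hand the CH role to the member with the most residual energy."""
    return max(range(len(member_energies)), key=lambda i: member_energies[i])
```

Because the decision depends only on energy levels visible within one cluster, each CH can re-cluster on its own schedule, which is what removes the need for global time synchronisation.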
As the Internet has grown in size and diversity of applications, the next generation is designed to accommodate flows that span over multiple domains with quality of service guarantees, and in particular bandwidth. In that context, a problem emerges when destinations for inter-domain traffic may be reachable through multiple egress routers. Selecting different egress routers for traffic flows can have diverse effects on network resource utilization. In this paper, we address a critical provisioning issue of how to select an egress router that satisfies the customer end-to-end bandwidth requirement while minimizing the total bandwidth consumption in the network.
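A minimal sketch of such an egress selection rule, assuming total bandwidth consumption is approximated by the flow's demand multiplied by the intra-domain hop count to each egress (the data structure and cost model are illustrative assumptions, not the paper's formulation):

```python
def select_egress(demand, egress_options):
    """Pick an egress router for an inter-domain flow: among egresses whose
    residual capacity can still carry `demand`, choose the one whose internal
    path consumes the least total bandwidth (demand x hop count).
    `egress_options` maps egress name -> (hop_count, residual_capacity)."""
    feasible = {e: hops for e, (hops, cap) in egress_options.items() if cap >= demand}
    if not feasible:
        return None  # no egress can satisfy the bandwidth requirement
    return min(feasible, key=lambda e: demand * feasible[e])
```

This captures the trade-off named in the abstract: the customer's bandwidth requirement acts as a feasibility filter, and total in-network bandwidth consumption is the objective being minimized.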
Satellite communication has recently been included as one of the key enabling technologies for 5G backhauling, especially for the delivery of bandwidth-demanding enhanced mobile broadband (eMBB) applications in 5G. In this paper, we present a 5G-oriented network architecture that is based on satellite communications and multi-access edge computing (MEC) to support eMBB applications, which is investigated in the EU 5GPPP Phase-2 SaT5G project. We specifically focus on using the proposed architecture to assure the Quality-of-Experience (QoE) of HTTP-based live streaming users by leveraging satellite links, where the main strategy is to realise transient holding and localization of HTTP-based (e.g., MPEG-DASH or HTTP Live Streaming) video segments at the 5G mobile edge while taking into account the characteristics of the satellite backhaul link. For the very first time in the literature, we carried out experiments and systematically evaluated the performance of live 4K video streaming over a 5G core network supported by a live geostationary satellite backhaul, validating the architecture's capability of assuring live streaming users' QoE under challenging satellite network scenarios.
We present an efficient multi-plane based fast network failure recovery scheme which can be realized using the recently proposed multi-path enabled BGP platforms. We mainly focus on a recovery scheme that takes into account the avoidance of BGP routing disruption at network boundaries, which can be caused by intra-AS failures due to the hot-potato routing effect. On top of this scheme, an intelligent IP crank-back operation is also introduced to further enhance network protection against failures. Our simulations, based on both real operational network topologies and synthetically generated ones, suggest that, through our proposed optimized backup egress point selection algorithm, as few as two routing planes are able to achieve a high degree of path diversity for fast recovery in any single-link failure scenario.
Reducing energy consumption in the Telecom industry has become a major research challenge for the Internet community. Towards this end, numerous research works have been carried out to mitigate the growth of energy consumption through intelligent network control mechanisms. This paper proposes a novel approach to achieving energy efficiency in ISP backbone networks according to dynamic traffic conditions. The main objective is to force as many links as possible to go to sleep during off-peak time, while, in the event of a traffic volume increase, only the minimum number of sleeping links should be woken up to handle this dynamicity, in a way that creates minimal or no traffic disruption. Based on our simulations with the GEANT and Abilene network topologies and their respective traffic traces, up to 47% and 44% energy gains can be achieved without degrading network performance. Secondly, we show that the activation of a small number of sleeping links is still sufficient to cope with any traffic surge, instead of reverting to the full topology or sacrificing energy savings as seen in some research proposals.
The energy consumption of backbone networks has become a primary concern for network operators and regulators due to the pervasive deployment of wired backbone networks to meet the requirements of bandwidth-hungry applications. While traditional optimization of IGP link weights has been used in IP-based load-balancing operations, in this paper we introduce a novel link weight setting algorithm, the Green Load-balancing Algorithm (GLA), which is able to jointly optimize both energy efficiency and load-balancing in backbone networks. Such a scheme can be directly applied on top of existing link sleeping techniques in order to achieve substantially improved energy saving gains. The contribution is a practical solution that opens a new dimension of energy efficiency optimization without sacrificing traditional traffic engineering performance in plain IP routing environments. In order to evaluate the efficiency of the proposed optimization scheme without losing generality, we applied it to a set of recently proposed but diverse algorithms for link sleeping operations in the literature. Evaluation results based on the European academic network topology, GÉANT, and its real traffic matrices show that GLA can achieve significantly improved energy efficiency compared to the original standalone algorithms, while also maintaining near-optimal load-balancing performance.
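A joint objective of this kind can be illustrated with a small sketch. The cost function below is not the published GLA; it simply combines a classic piecewise-linear load-balancing penalty with an energy term for awake links, using a hypothetical trade-off parameter `alpha`.

```python
# Minimal sketch (not the published GLA) of a joint objective that a
# link-weight search could minimise: a convex load penalty per link plus
# an energy cost proportional to the fraction of links kept awake.

def link_load_cost(utilization):
    """Piecewise-linear increasing penalty, in the spirit of classic TE."""
    if utilization < 0.6:
        return utilization
    if utilization < 0.9:
        return 3 * utilization - 1.2
    return 10 * utilization - 7.5

def joint_cost(link_utilizations, links_awake, total_links, alpha=0.7):
    load = sum(link_load_cost(u) for u in link_utilizations)
    energy = links_awake / total_links  # fraction of links consuming power
    return alpha * load + (1 - alpha) * energy

# Sleeping a lightly loaded link raises utilization elsewhere but cuts energy.
print(round(joint_cost([0.3, 0.4, 0.1], 3, 3), 3))  # all links awake
print(round(joint_cost([0.45, 0.55], 2, 3), 3))     # one link asleep
```

With the default `alpha` the load term dominates and keeping all three links awake is cheaper; a smaller `alpha` flips the decision in favour of sleeping the lightly loaded link, which is the trade-off a joint optimizer navigates.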
With the popularity of information and content items that can be cached within ISP networks, developing high-quality and efficient content distribution approaches has become an important task in future Internet architecture design. As one of the main techniques of content distribution, the in-network caching mechanism has attracted attention from both academia and industry. However, a general evaluation model of in-network caching is seldom discussed, and the trade-off between economic cost and the deployment of in-network caching remains largely unclear, especially for heterogeneous applications. In this paper, we take a first yet important step towards the design of a better evaluation model, based on the Application Adaptation CapaciTy (2ACT) of the architecture, to quantify this trade-off. Based on our evaluation model, we further clarify the deployment requirements for the in-network caching mechanism, so that ISPs and users can make their own choices according to their application scenarios.
This paper further develops an architecture and design elements for a resource management and signalling system to support the construction and maintenance of a mid-to-long-term hybrid multicast tree for multimedia distribution services with guaranteed QoS, over multiple IP domains. The system, called E-cast, is composed of an overlay part at the inter-domain level and, where possible, IP-level multicast at the intra-domain level. Each E-cast tree is associated with a given QoS class and is composed of unicast pipes established through Service Level Specification negotiations between the domain managers. The paper continues previous work by proposing an inter-domain signalling system to support the multicast management and control operations, and then defining the resource management for tree construction and adjustment procedures in order to assure the required static and dynamic properties of the tree.
Spectrum sensing is one of the key technologies for realizing dynamic spectrum access in cognitive radio (CR). In this paper, a novel database-augmented spectrum sensing algorithm is proposed for secondary access to the TV White Space (TVWS) spectrum. The proposed database-augmented sensing algorithm is based on an existing geo-location database approach for detecting incumbents such as Digital Terrestrial Television (DTT) and Programme Making and Special Events (PMSE) users, but is combined with spectrum sensing to further improve the protection of these primary users (PUs). A closed-form expression for the secondary users' (SUs) spectral efficiency is also derived for their opportunistic access to TVWS. By implementing the previously developed power-control-based geo-location database and adaptive spectrum sensing algorithm, the proposed database-augmented sensing algorithm demonstrates better spectrum efficiency for SUs, and better protection for incumbent PUs, than the existing stand-alone geo-location database model. Furthermore, we analyze the effect of unregistered PMSE on the reliable use of the channel for SUs.
Due to dynamic wireless network conditions and heterogeneous mobile web content complexities, web-based content services in mobile network environments often suffer from long loading times. The new HTTP/2.0 protocol adopts only a single TCP connection, but recent research reveals that in real mobile environments, web downloading over a single connection can experience long idle times and low bandwidth utilization, in particular under dynamic network conditions and varying web page characteristics. In this paper, by leveraging the Mobile Edge Computing (MEC) technique, we present the framework of Mobile Edge Hint (MEH) to enhance mobile web downloading performance. Specifically, the mobile edge collects and caches the meta-data of frequently visited web pages and also keeps monitoring the network conditions. Upon receiving requests for these popular webpages, the MEC server is able to hint back to HTTP/2.0 clients the optimized number of TCP connections that should be established for downloading the content. From test results on a real LTE testbed equipped with MEH, we observed up to 34.5% time reduction, and in the median case the improvement is 20.5%, compared to the plain over-the-top (OTT) HTTP/2.0 protocol.
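The hinting idea can be sketched as follows. The sizing rule, parameter names and defaults are invented for illustration and are not taken from the paper; the intuition is simply that many small objects on a high bandwidth-delay-product path leave a single connection idle.

```python
import math

# Hypothetical sketch of an MEH-style hint: given cached page metadata
# (object sizes) and current network estimates, suggest how many parallel
# TCP connections a client should open. The rule below is illustrative.

def hint_connection_count(object_sizes_kb, bandwidth_mbps, rtt_ms,
                          max_connections=6):
    if not object_sizes_kb:
        return 1
    avg_kb = sum(object_sizes_kb) / len(object_sizes_kb)
    # Bandwidth-delay product of the path, in KB.
    bdp_kb = bandwidth_mbps * 1000 / 8 * (rtt_ms / 1000)
    # If a typical object is much smaller than the BDP, one connection
    # idles between requests; open roughly enough to keep the pipe full.
    suggested = math.ceil(bdp_kb / max(avg_kb, 1))
    return max(1, min(max_connections, suggested))

# Small objects on a fat, long pipe: hint several connections.
print(hint_connection_count([20, 30, 25, 15], 50, 80))  # 6
# One large object: a single connection already fills the pipe.
print(hint_connection_count([500], 10, 20))  # 1
```

In the paper's setting this computation would run on the MEC server, which sees both the cached page metadata and the measured radio conditions, and the result would be conveyed to the client before it opens its connections.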
Wireless sensor networks usually have a massive number of randomly deployed sensor nodes that perform sensing and transmit data to a base station. This can cause sensor redundancy and data duplication. Sensor scheduling is a solution for reducing this enormous data load by selecting certain potential sensors to perform the tasks, while the quality of connectivity and coverage is still assured. This paper proposes a sensor scheduling method, called 4-Sqr, which uses a virtual square partition composed of consecutive square cells. Based on coordinates within a monitored area, sensors learn their position on the virtual partition themselves; they are divided into groups of target areas depending on their geographical locations, and are then ready for the node selection phase. In order to distribute energy consumption equally, the sensors with the highest residual energy within the same group usually have a greater chance of being active than the others. Compared to other existing methods, the proposed method stands out in many aspects, such as the quality of connected coverage, the chance of being selected and the network's lifetime.
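A toy version of the virtual-square selection might look like the following; the cell size, data layout and tie-breaking are hypothetical assumptions, not the published 4-Sqr method.

```python
# Illustrative sketch: map each sensor's coordinates to a square cell of
# a virtual partition, then keep active the sensor with the highest
# residual energy in each cell. Cell size and record format are invented.

def cell_of(x, y, cell_size):
    return (int(x // cell_size), int(y // cell_size))

def select_active(sensors, cell_size=10):
    """sensors: list of (sensor_id, x, y, residual_energy) tuples."""
    best = {}
    for sid, x, y, energy in sensors:
        c = cell_of(x, y, cell_size)
        if c not in best or energy > best[c][1]:
            best[c] = (sid, energy)
    return {c: sid for c, (sid, _) in best.items()}

sensors = [("a", 3, 4, 0.9), ("b", 7, 2, 0.5),   # both in cell (0, 0)
           ("c", 14, 6, 0.8)]                    # alone in cell (1, 0)
print(select_active(sensors))  # sensor 'a' wins cell (0, 0)
```

Note that each sensor can compute `cell_of` from its own coordinates without any central coordination, which is the property the virtual partition provides.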
This letter proposes an innovative energy-efficient Radio Access Network (RAN) disaggregation and virtualization method for Open RAN (O-RAN) that effectively addresses the challenges posed by dynamic traffic conditions. The energy consumption is first formulated as a multi-objective optimization problem and then solved by integrating the Advantage Actor-Critic (A2C) algorithm with a sequence-to-sequence model, owing to the sequential nature of RAN disaggregation and its long-term dependencies. According to the results, our proposed solution for dynamic Virtual Network Function (VNF) splitting outperforms approaches that do not involve VNF splitting, significantly reducing energy consumption: it achieves savings of up to 56% and 63% for business and residential areas, respectively, under dynamic traffic conditions.
Exploiting path diversity to enhance communication reliability is a key desired property of the Internet. While the existing routing architecture is reluctant to adopt changes, overlay routing has been proposed to circumvent the constraints of native routing by employing intermediary relays. However, selfish inter-domain relay placement may violate local routing policies at intermediary relays and thus affect their economic costs and performance. With the recent advance of the concept of network virtualization, it is envisioned that virtual networks should be provisioned in cooperation with infrastructure providers in a holistic view without compromising their profits. In this paper, the problem of policy-aware virtual relay placement is first studied to investigate the feasibility of provisioning policy-compliant multipath routing via virtual relays for inter-domain communication reliability. By evaluation on a real domain-level Internet topology, it is demonstrated that policy-compliant virtual relaying can achieve a protection gain against single link failures similar to that of its selfish counterpart. It is also shown that the presented heuristic placement strategies closely approach the optimal solution.
As a scalable paradigm for content distribution at Internet-wide scale, Peer-to-Peer (P2P) technologies have enabled a variety of networked services, such as distributed file-sharing and live video streaming. Most existing P2P systems employ non-intelligent peer selection algorithms for content swarming, which greedily consume Internet bandwidth resources. As a result, Internet service providers (ISPs) need efficient solutions for managing P2P traffic within their own networks. A common practice today is to block or shape P2P traffic in order to conserve bandwidth resources for carrying standard traffic from which revenue can be generated. In this paper, instead of looking at simple time-driven blocking/limiting approaches, we investigate how such limiting behaviors can be more gracefully performed by the ISP by taking into account the dynamics of both P2P traffic and standard Internet traffic. Specifically, our approach is to adaptively limit excessive P2P traffic on critical network links that are prone to congestion, based on periodic link load/utilization measurements by the ISP. The ultimate objective is to guarantee non-P2P service capability while trying to accommodate as much P2P traffic as possible based on the available bandwidth resources. This approach can be regarded as a complementary solution to recently proposed collaboration-based P2P paradigms such as P4P. Simulation results show that our approach not only eliminates the performance degradation of non-P2P services caused by overwhelming P2P traffic, but also accommodates P2P traffic efficiently in both existing and future collaboration-based P2P network scenarios.
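The adaptive limiting rule can be illustrated with a minimal sketch; the utilization target and parameter names are invented for illustration, not taken from the paper.

```python
# Toy sketch of the adaptive idea: given periodic link-load measurements,
# cap admitted P2P traffic on a link so that non-P2P traffic plus the
# admitted P2P stays below a (hypothetical) utilization target.

def p2p_allowance(capacity_mbps, non_p2p_mbps, p2p_demand_mbps,
                  target_utilization=0.9):
    """Mbps of P2P traffic to admit on this link in the next period."""
    headroom = max(0.0, capacity_mbps * target_utilization - non_p2p_mbps)
    return min(p2p_demand_mbps, headroom)

# 1 Gbps link, 700 Mbps of standard traffic, 400 Mbps of P2P demand:
# only 200 Mbps of P2P is admitted, protecting the standard traffic.
print(p2p_allowance(1000, 700, 400))
```

Recomputing the allowance each measurement period is what makes the limit adaptive: as standard traffic recedes overnight, P2P automatically reclaims the idle capacity instead of staying blocked.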
CURLING, a Content-Ubiquitous Resolution and Delivery Infrastructure for Next Generation Services, aims to enable a future content-centric Internet that will overcome the current intrinsic constraints by efficiently diffusing media content of massive scale. It entails a holistic approach, supporting content manipulation capabilities that encompass the entire content life cycle, from content publication to content resolution and, finally, to content delivery. CURLING provides both content providers and customers with high flexibility in expressing their location preferences when publishing and requesting content, respectively, thanks to the proposed scoping and filtering functions. Content manipulation operations can be driven by a variety of factors, including business relationships between ISPs, local ISP policies, and specific content provider and customer preferences. Content resolution is also natively coupled with optimized content routing techniques that enable efficient unicast- and multicast-based content delivery across the global Internet.
In this paper we introduce a new scheme to achieve fast failure recovery in IP multicast based content delivery, based on efficient extensions to the Not-via fast reroute (FRR) technique. The design of this approach takes into account the distinct characteristics of IP multicast routing, namely that it is receiver-initiated and state-based, and it offers comprehensive protection against both simple and complex network failures. We also specify moderate extensions to the standard PIM-SM routing protocol in order to equip individual repairing routers with the knowledge necessary for dynamically binding protected multicast trees to pre-established Not-via tunnels that automatically bypass failed network components. Our simulation experiments based on both real and synthetically generated topologies indicate promising scalability of the proposed multicast FRR approach.
A mobile ad hoc network (MANET) is a self-configuring, infrastructure-less network. Taking advantage of this spontaneous, infrastructure-less behavior, a MANET can be integrated with a satellite network to provide world-wide communication for emergency and disaster relief services, and can also be integrated with a cellular network for mobile data offloading. To achieve these different purposes, different integrated system architectures, protocols and mechanisms have been designed. For emergency services, ubiquitous and robust communications are of paramount importance; for mobile data offloading services, the emphasis is on the amount of offloaded data and the limited storage and energy of mobile devices. It is important to study the common features of, and distinctions between, the architecture and service considerations of the two integrated systems to guide further research. In this paper, we study common issues and differences between the two systems in terms of routing protocols, QoS provision, energy efficiency, privacy protection and resource management. Future research can benefit from exploiting the similarities between the two systems and addressing the relevant issues.
Internet video streaming applications have been demanding more bandwidth and higher video quality, especially with the advent of Virtual Reality (VR) and Augmented Reality (AR) applications. While adaptive streaming protocols like MPEG-DASH (Dynamic Adaptive Streaming over HTTP) allow video quality to be flexibly adapted, e.g., degraded when the mobile network condition deteriorates, this is not an option if the application itself requires guaranteed 4K quality at all times. On the other hand, conventional end-to-end TCP has been struggling to support 4K video delivery across long-distance Internet paths containing both fixed and mobile network segments with heterogeneous characteristics. In this paper, we present a novel and practically feasible system architecture named MVP (Mobile edge Virtualization with adaptive Prefetching), which enables content providers to embed their content intelligence as a virtual network function (VNF) into the mobile network operator's (MNO) infrastructure edge. Based on this architecture, we present a context-aware adaptive video prefetching scheme that achieves QoE-assured 4K video on demand (VoD) delivery across the global Internet. Through experiments based on a real LTE-A network infrastructure, we demonstrate that our proposed scheme is able to achieve QoE-assured 4K VoD streaming, especially when the video source is located remotely in the public Internet, in which case none of the state-of-the-art solutions is able to support such an objective at global Internet scale.
Information-centric networking (ICN) is an emerging networking paradigm that places content identifiers rather than host identifiers at the core of the mechanisms and protocols used to deliver content to end-users. Such a paradigm allows routers enhanced with content-awareness to play a direct role in the routing and resolution of content requests from users, without any knowledge of the specific locations of hosted content. However, to facilitate good network traffic engineering and satisfactory user QoS, content routers need to exchange advanced network knowledge to assist them with their resolution decisions. In order to maintain the location-independence tenet of ICNs, such knowledge (known as context information) needs to be independent of the locations of servers. To this end, we propose CAINE (Context-Aware Information-centric Network Ecosystem), which enables context-based operations to be intrinsically supported by the underlying ICN routing and resolution functions. Our approach has been designed to maintain the location-independence philosophy of ICNs by associating context information directly with content rather than with physical entities such as servers and network elements in the content ecosystem, while ensuring scalability. Through simulation, we show that, based on such location-independent context information, CAINE is able to facilitate traffic engineering in the network while not posing a significant control signalling burden on it.
The high volume of energy consumption has become a great concern to the Internet community because of the high energy waste on redundant network devices. One promising scheme for energy saving is to reconfigure network elements into sleep mode when traffic demand is low. However, due to the nature of today's traditional IP routing protocols, network reconfiguration is generally deemed harmful because of routing table re-convergence. To put network elements such as links to sleep robustly, we propose a novel online scheme, the designate-to-sleep algorithm, which aims to remove network links without causing traffic disruption during energy-saving periods. Considering the nature of diurnal traffic, traffic surges may occur in the network because of the reduced capacity. We therefore propose a complementary scheme, the dynamic wake-up algorithm, which intelligently wakes up the minimum number of sleeping links needed to handle such dynamicity. This is contrary to the usual paradigm of either reverting to the full topology and sacrificing energy savings, or employing on-the-fly link weight manipulation. Using the real topologies of the GEANT and Abilene networks, we show that the proposed schemes can save a substantial amount of energy without affecting network performance.
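The wake-up side of this idea can be sketched as a simple greedy rule; the capacities and largest-first ordering below are illustrative assumptions, not the paper's dynamic wake-up algorithm.

```python
# Toy sketch: when a traffic surge exceeds the awake capacity, wake the
# fewest sleeping links whose combined capacity covers the shortfall,
# rather than reverting to the full topology. Greedy largest-first is an
# assumption made for illustration.

def links_to_wake(shortfall, sleeping_link_capacities):
    """Return indices of sleeping links to wake, largest capacity first."""
    order = sorted(range(len(sleeping_link_capacities)),
                   key=lambda i: sleeping_link_capacities[i], reverse=True)
    woken, covered = [], 0.0
    for i in order:
        if covered >= shortfall:
            break
        woken.append(i)
        covered += sleeping_link_capacities[i]
    return woken

# A 15 Gbps surge with sleeping links of [10, 10, 5, 2.5] Gbps:
# waking just the two 10 Gbps links suffices.
print(links_to_wake(15, [10, 10, 5, 2.5]))
```

The other two sleeping links stay asleep, preserving most of the energy savings while absorbing the surge.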
Energy consumption has already become a major challenge for the current Internet. Most research aims at lowering energy consumption under certain fixed performance constraints. Since trade-offs exist between network performance and energy saving, Internet Service Providers (ISPs) may desire to achieve different Traffic Engineering (TE) goals corresponding to changing requirements. The major contributions of this paper are twofold: 1) we present an OSPF-based routing mechanism, Routing On Demand (ROD), that considers both performance and energy saving, and 2) we theoretically prove that a set of link weights always exists for each trade-off variant of the TE objective, under which solutions (i.e., routes) derived from ROD can be converted into shortest paths and realized through OSPF. Extensive evaluation results show that ROD can achieve various trade-offs between energy saving and performance in terms of Maximum Link Utilization, while maintaining better packet delay than energy-agnostic TE.
Open Radio Access Networks (O-RANs) have revolutionized the telecom ecosystem by bringing intelligence into the disaggregated RAN and implementing functionalities as Virtual Network Functions (VNFs) through open interfaces. However, dynamic traffic conditions in real-life O-RAN environments may require VNF reconfigurations at run-time, which introduce additional overhead costs and traffic instability. To address this challenge, we formulate a multi-objective optimization problem that simultaneously minimizes VNF computational costs and the overhead of periodic reconfigurations. Our solution uses constrained combinatorial optimization with deep reinforcement learning, where an agent minimizes a penalized cost function derived from the proposed optimization problem. The evaluation of our proposed solution demonstrates significant enhancements, achieving up to a 76% reduction in VNF reconfiguration overhead, with only a slight increase of up to 23% in computational costs. In addition, compared to Centralized RAN (C-RAN), the most robust O-RAN configuration in that it requires no VNF reconfigurations, our solution offers up to 76% savings in bandwidth at the cost of up to 27% CPU overprovisioning.
Femtocells are becoming a promising solution to the explosive growth of mobile broadband usage in cellular networks. While each femtocell only covers a small area, a massive deployment is expected in the near future, forming networked femtocells. An immediate challenge is to provide seamless mobility support for networked femtocells with minimal support from mobile core networks. In this paper, we propose efficient local mobility management schemes for networked femtocells based on X2 traffic forwarding under the 3GPP Long Term Evolution Advanced (LTE-A) framework. Instead of implementing the path switch operation at a core network entity for each handover, a local traffic forwarding chain is constructed to use the existing Internet backhaul and the local path between the local anchor femtocell and the target femtocell for ongoing session communications. Both analytical studies and simulation experiments are conducted to evaluate the proposed schemes and compare them with the original 3GPP scheme. The results indicate that the proposed schemes can significantly reduce the signaling cost and relieve the processing burden of mobile core networks at a reasonable distributed cost for local traffic forwarding. In addition, the proposed schemes enable fast session recovery to adapt to the self-deployment nature of femtocells.
In a content delivery network (CDN), the energy cost is dominated by its geographically distributed data centers (DCs). Within a DC, the energy consumption is generally dominated by its server infrastructure and cooling system, with each contributing approximately half. However, existing research has addressed energy efficiency on these two sides separately. In this paper, we jointly optimize the energy consumption of both server infrastructures and cooling systems in a holistic manner. This objective is achieved through two strategies: 1) putting idle servers to sleep within individual DCs; and 2) shutting down idle DCs entirely during off-peak hours. Based on these strategies, we develop a heuristic algorithm which concentrates user request resolution onto fewer DCs, so that some DCs may become completely idle and can be shut down to eliminate their cooling energy consumption. Meanwhile, QoS constraints are respected in the algorithm to assure service availability and end-to-end delay. Through simulations under realistic scenarios, our algorithm is able to achieve an energy-saving gain of up to 62.1% over an existing CDN energy-saving scheme, and this result is shown to be near-optimal via our theoretically derived lower bound on energy-saving performance.
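The consolidation strategy can be illustrated with a greedy toy sketch; the capacity model is hypothetical and the paper's QoS and delay constraints are omitted for brevity.

```python
# Illustrative greedy consolidation (not the paper's exact heuristic):
# pack the regional request load onto as few data centers as possible so
# the rest become idle and can be shut down, saving both server and
# cooling energy. Demand and capacity units are hypothetical.

def consolidate(demand, dc_capacities):
    """Return the indices of DCs kept on, chosen largest-capacity first."""
    order = sorted(range(len(dc_capacities)),
                   key=lambda i: dc_capacities[i], reverse=True)
    kept, served = [], 0.0
    for i in order:
        if served >= demand:
            break
        kept.append(i)
        served += dc_capacities[i]
    return kept

# Off-peak demand of 120 units over DCs of capacity [100, 80, 60, 40]:
# two DCs suffice, so the other two can be shut down entirely.
print(consolidate(120, [100, 80, 60, 40]))
```

Shutting a DC down entirely is what distinguishes this from pure server sleeping: an idle but powered DC still pays its cooling overhead, while a shut-down DC pays neither half.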
Satellite communication has recently been included as one of the enabling technologies for 5G backhauling, in particular for the delivery of bandwidth-demanding enhanced mobile broadband (eMBB) application data in 5G. In this paper we introduce a 5G-oriented network architecture empowered by satellite communications for supporting emerging mobile video delivery, which is investigated in the EU 5GPPP Phase 2 SaT5G Project. Two complementary use cases are introduced: (1) the use of satellite links to support offline multicasting and caching of popular video content at the 5G mobile edge, and (2) real-time prefetching of DASH (Dynamic Adaptive Streaming over HTTP) video segments by the 5G mobile edge through satellite links. In both cases, the objective is to localize content objects close to consumers in order to achieve assured Quality of Experience (QoE) in 5G content applications. In the latter case, in order to circumvent the large end-to-end propagation delay of satellite links, testbed-based experiments have been carried out to identify specific prefetching policies to be enforced by the Multi-access Edge Computing (MEC) server for minimizing user-perceived disruption during content consumption sessions.
Software-defined networking (SDN) enables centralized control of a network of programmable switches by dynamically updating flow rules, paving the way for dynamic and autonomous control of the network. In order to apply a suitable set of policies to the correct set of traffic flows, SDN needs input from traffic classification mechanisms. Today, there is a variety of classification algorithms in machine learning. However, recent studies have found that using an arbitrary algorithm does not necessarily provide the best classification outcome on a given dataset, and therefore ensemble methods, which combine individual algorithms to improve classification results, have gained traction. In this paper, we propose applying an ensemble algorithm as a machine-learning pre-processing tool that classifies ingress network traffic so that SDN can pick the right set of traffic policies. Performance evaluation results show that this ensemble classifier can achieve robust performance across all tested traffic types.
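A majority-vote ensemble of this general flavour can be sketched in a few lines; the base classifiers, feature names and labels below are toy assumptions, not the classifiers evaluated in the paper.

```python
from collections import Counter

# Minimal majority-vote ensemble: each base classifier labels a flow,
# and the ensemble returns the most common vote. The rule-based base
# classifiers and the flow record format are invented for illustration.

def ensemble_classify(flow, classifiers):
    votes = [clf(flow) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Three toy rule-based base classifiers over a hypothetical flow record.
by_port = lambda f: "video" if f["dst_port"] == 443 else "bulk"
by_size = lambda f: "video" if f["avg_pkt_bytes"] > 1000 else "bulk"
by_rate = lambda f: "bulk" if f["pkts_per_s"] < 10 else "video"

flow = {"dst_port": 443, "avg_pkt_bytes": 1200, "pkts_per_s": 5}
print(ensemble_classify(flow, [by_port, by_size, by_rate]))  # video
```

Two of the three base classifiers disagree with the third, and the ensemble sides with the majority, which is exactly the robustness argument for combining weak individual classifiers.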
Current edge cloud providers offer a wide range of on-demand private and public cloud services to customers. Predictive demand monitoring and supply optimisation are necessary to deliver truly elastic distributed edge cloud services with resizable resource and compute capacity that adapt to dynamically changing customer requirements. However, current state-of-the-art monitoring and provisioning systems remain reactive, which often results in over- or under-provisioning of services, incurring unnecessary costs for customers or deterioration in the quality of service for the end-user. This paper proposes an adaptive protocol, ARPP, that enables distributed real-time demand monitoring and automatic resource provisioning based on dynamically changing spatio-temporal workload patterns. ARPP leverages distributed predictive analytics and deep reinforcement learning at the edge to predict the dynamically changing spatio-temporal demand and allocate the appropriate amount of resources at the right times and in the right locations. We show that ARPP outperforms benchmark and state-of-the-art algorithms across a range of criteria in the face of dynamically changing real-world mobile topologies and user interest patterns.
Current practices for managing resources in fixed networks rely on off-line approaches, which can be sub-optimal in the face of changing or unpredicted traffic demand. To cope with the limitations of these off-line configurations, new traffic engineering (TE) schemes that can adapt to network and traffic dynamics are required. In this paper, we propose an intra-domain dynamic TE system for IP networks. Our approach uses multi-topology routing as the underlying routing protocol to provide path diversity, and supports adaptive resource management operations that dynamically adjust the volume of traffic sent across each topology. Re-configuration actions are performed in a coordinated fashion by an in-network overlay of network entities, without relying on a centralized management system. We analyze the performance of our approach using a realistic network topology, and our results show that the proposed scheme can achieve near-optimal network performance in terms of resource utilization in a responsive manner.
IP Fast ReRoute (FRR) mechanisms have been proposed to achieve fast failover in support of Quality of Service (QoS) assurance. However, these mechanisms do not consider network performance after affected traffic is rerouted onto repair paths. As a result, QoS deterioration may still occur due to post-failure traffic congestion in the network, which nullifies the effectiveness of IP FRR. In this paper, considering IP tunneling as the underlying IP FRR mechanism, we propose an efficient algorithm to judiciously select tunnel endpoints such that network performance is optimized after the repair paths are activated for rerouting. According to simulation results using real operational network topologies and traffic matrices, the algorithm achieves significant improvement in post-failure load balancing compared to traditional IGP re-convergence and plain tunnel endpoint selection without such consideration.
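The endpoint-selection criterion can be sketched directly; the candidate router names and utilization figures below are invented for illustration and the actual algorithm in the paper also accounts for the topology and traffic matrix.

```python
# Illustrative sketch: among candidate tunnel endpoints, pick the one
# whose repair path yields the lowest post-failure maximum link
# utilization (a standard load-balancing objective).

def pick_endpoint(candidates, post_failure_utilizations):
    """candidates: endpoint ids; post_failure_utilizations[e]: the
    per-link utilizations that would result if endpoint e is used."""
    return min(candidates,
               key=lambda e: max(post_failure_utilizations[e]))

utils = {"R2": [0.95, 0.60, 0.40],   # congests one link badly
         "R5": [0.70, 0.65, 0.55],   # spreads load more evenly
         "R7": [0.80, 0.75, 0.30]}
print(pick_endpoint(["R2", "R5", "R7"], utils))  # R5
```

Minimizing the worst-case link utilization is what prevents the rerouted traffic from simply moving the congestion somewhere else after the failure.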
With the recent development of Device-to-Device (D2D) communication technologies, mobile devices will no longer be treated as pure “terminals”; they could become an integral part of the network in specific application scenarios. In this paper, we introduce a novel scheme for using D2D communications to enable data relay services in partial not-spots, where a client without local network access may require data relay by other devices. Depending on the specific social application scenarios that can leverage D2D technology, we consider tailored algorithms in order to achieve optimised data relay service performance on top of our proposed network-coordinated communication framework. The approach is to exploit the network's knowledge of its local user mobility patterns in order to identify the best helper devices to participate in data relay operations. The framework also comes with our proposed helper selection optimization algorithm based on the predictability of individual users. According to our simulation analysis based on both theoretical mobility models and real human mobility data traces, the proposed scheme is able to flexibly support different service requirements in specific social application scenarios.
Power consumption by Information and Communication Technology (ICT) accounts for around 10% of the total energy consumed in industrial countries, and according to recent measurements this share has been growing rapidly. A variety of schemes have been proposed in the literature to save energy in operational communication networks. In this paper, we propose a novel optimization algorithm for network virtualization environments that puts the maximum number of physical links to sleep during off-peak hours, while still guaranteeing connectivity and off-peak bandwidth availability for the parallel virtual networks running on top. Simulation results based on the GÉANT network topology show that the algorithm is able to put a notable number of physical links to sleep during off-peak hours while still satisfying the bandwidth demands of ongoing traffic sessions in the virtual networks.
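The link-sleeping idea can be sketched as a simple greedy heuristic (an illustrative simplification, not the paper's optimization algorithm): repeatedly try to put links to sleep, lowest capacity first, while preserving substrate connectivity and a coarse aggregate bandwidth budget.

```python
from collections import defaultdict

def greedy_link_sleep(nodes, links, demand):
    """Greedily select physical links to put to sleep during off-peak hours.

    links  : (u, v) -> capacity of the physical link
    demand : total off-peak bandwidth that must remain available
    A link is slept only if the remaining links stay connected and their
    aggregate capacity still covers the demand (a coarse feasibility check;
    the real algorithm would verify per-virtual-network routability).
    """
    active = dict(links)

    def connected(edges):
        # BFS over the surviving links
        adj = defaultdict(list)
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {nodes[0]}, [nodes[0]]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == len(nodes)

    # Try to sleep low-capacity links first
    for link in sorted(links, key=links.get):
        trial = {e: c for e, c in active.items() if e != link}
        if trial and connected(trial) and sum(trial.values()) >= demand:
            active = trial
    return set(links) - set(active)
```

On a small ring-plus-chord topology, the heuristic sleeps the chord and one ring link while leaving a connected chain with enough residual capacity.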
This paper addresses delay/disruption-tolerant networking routing under a highly dynamic scenario, envisioned for communication in vehicular sensor networks (VSNs) suffering from intermittent connectivity. We focus on the design of a high-level routing framework rather than on dedicated encounter prediction. Based on an analytical utility metric that predicts nodal encounters, our routing framework considers three cases. First, messages are efficiently replicated to a better qualified candidate node, based on the utility metric with respect to the destination. Second, messages are conditionally replicated if a node with a better utility metric has not been met. Third, in the worst case, messages are probabilistically replicated if no information related to the destination is available. With this framework in mind, we propose two routing schemes covering the two major technique branches in the literature, namely: 1) encounter-based replication routing and 2) encounter-based spraying routing. Results under a scenario applicable to VSNs show that, in addition to achieving a high delivery ratio, our schemes are more efficient in terms of a lower overhead ratio. Our core investigation indicates that, apart from what information to use for encounter prediction, how to deliver messages based on a given utility metric is also important.
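The three-case replication rule above can be sketched as follows (hypothetical thresholds and parameter names; the actual utility metric and replication conditions are defined in the paper):

```python
import random

def replicate_decision(carrier_utility, candidate_utility, p_blind=0.1):
    """Three-case replication rule (illustrative sketch).

    Utilities predict how likely each node is to encounter the destination;
    `None` means no destination-related information is available.
    Returns True if the message should be replicated to the candidate.
    """
    if carrier_utility is None or candidate_utility is None:
        # Case 3: no information about the destination -> probabilistic copy
        return random.random() < p_blind
    if candidate_utility > carrier_utility:
        # Case 1: candidate is better qualified -> replicate
        return True
    # Case 2: no better node met so far -> conditional replication, e.g.
    # copy only if the candidate is almost as good as the current carrier
    return candidate_utility > 0.9 * carrier_utility
```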
Delay Tolerant Networks (DTNs), which have received great interest from the research community, are a type of Next Generation Network (NGN) proposed to bridge communication in challenged environments. In this paper, the message replication probability is proportionally sprayed for efficient routing, mainly under sparse scenarios. This methodology differs from spray-based algorithms that use message copy tickets to control replication. Our heuristic algorithm aims to overcome the scalability limitations of spray-based algorithms, since determining the initial value of the copy tickets requires assuming that either the number of nodes is known in advance or the underlying mobility model follows Random WayPoint (RWP) characteristics. Specifically, with the assistance of geographic information to estimate the movement range of the destination, the routing decision is based on the encounter angle between pairwise nodes and is dynamically switched between two designed routing phases, named geographic replication and replication probability spray. Furthermore, messages are transmitted with prioritisation, taking redundancy pruning into account. Simulation results show that our heuristic algorithm outperforms other well-known algorithms in terms of delivery ratio, transmission overhead, average latency and buffer occupancy time.
Since P2P applications account for a dominant share of overall Internet traffic, how to efficiently manage P2P traffic has become increasingly important. It has recently been proposed that underlying network information be shared between ISPs and P2P service providers in order to achieve efficient resource utilization, with locality-based peer selection being a specific example. Based on such collaboration, we propose a proportional traffic-exchange localization scheme for making efficient use of network resources. Our approach employs locality information to regulate the volume of traffic exchanged between peers according to their physical distance. The key objective is to further reduce both intra- and inter-autonomous-system (AS) traffic compared with basic locality-based peer selection solutions. Our simulation-based results show that this approach is not only able to reduce a significant amount of inter-AS P2P traffic, but also to balance network utilization better than existing approaches.
How to reduce power consumption within individual data centers has attracted major research efforts in the past decade, as energy bills contribute significantly to overall operating costs. In recent years, increasing research effort has also been devoted to the design of practical power-saving techniques in content delivery networks (CDNs), which involve thousands of globally distributed data centers hosting content server clusters. In this paper, we present a comprehensive survey of existing research aiming to save power in data centers and content delivery networks, which share a high degree of commonality in many respects. We first highlight the necessity of saving power in these two types of networks, followed by the identification of four major power-saving strategies that have been widely exploited in the literature. We then present a high-level overview of the literature by categorizing existing approaches with respect to their scopes and research directions. These schemes are subsequently analyzed with respect to their strategies, advantages and limitations. Finally, we summarize several key aspects that are considered crucial for effective power-saving schemes, and highlight a number of envisaged open research directions in the relevant areas that are of significance and hence require further elaboration.
Due to the explosive growth of mobile data traffic, it has become common practice for Mobile Network Operators (MNOs, also known as operators or carriers) to utilize cellular and WiFi resources simultaneously through mobile data offloading. However, existing offloading technologies are mainly established between operators and third-party WiFi resources, and cannot reflect users' dynamic traffic demands. MNOs therefore need an effective incentive framework that encourages users to reveal their valuations of resources. In this paper, we propose a novel bid-based Heterogeneous Resources Allocation (HRA) framework. It enables operators to efficiently utilize cellular and operator-owned WiFi resources simultaneously, while keeping users' decision costs strictly controlled. Through auction-based mechanisms it achieves dynamic offloading with awareness of users' valuations, and operator-domain offloading effectively avoids the anarchy caused by users' selfishness and lack of information. More specifically, HRA-Profit and HRA-Utility are proposed to achieve maximal profit and social utility, respectively. In addition, based on a Stochastic Multi-Armed Bandit model, the newly proposed HRA-UCB-Profit and HRA-UCB-Utility mechanisms are able to attain near-optimal profit and social utility under incomplete user context information. All mechanisms are proven to be truthful and to satisfy individual rationality, while the achieved profit is within a bounded difference from the optimum. Trace-based simulations demonstrate that HRA-Profit and HRA-Utility increase profit and social utility by up to 40% and 47%, respectively, compared with benchmarks, and the cellular utilization rate is kept at a favorable level under the proposed mechanisms. HRA-UCB-Profit and HRA-UCB-Utility keep pseudo-regret ratios under 20%.
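As a rough illustration of the bandit component, a textbook UCB1 selection rule is sketched below (the paper's HRA-UCB mechanisms combine this style of exploration bonus with auction logic; the resource names and values here are hypothetical):

```python
import math

def ucb_select(counts, rewards, t):
    """UCB1-style arm selection: pick the resource with the highest mean
    reward plus an exploration bonus that shrinks as the arm is sampled.

    counts  : arm -> number of times the arm has been played
    rewards : arm -> cumulative reward observed for the arm
    t       : total number of rounds played so far
    """
    best_arm, best_score = None, float("-inf")
    for arm in counts:
        if counts[arm] == 0:
            return arm  # play every arm at least once first
        mean = rewards[arm] / counts[arm]
        bonus = math.sqrt(2 * math.log(t) / counts[arm])
        if mean + bonus > best_score:
            best_arm, best_score = arm, mean + bonus
    return best_arm
```

A rarely tried arm with a decent observed reward gets a large bonus and is explored, which is how near-optimal profit can be approached under incomplete context information.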
The introduction of the Bitcoin cryptocurrency has inspired businesses and researchers to investigate the technical aspects of blockchain and distributed ledger technology (DLT) systems. However, today's blockchain technologies still have distinct limitations in scalability and flexibility for large networks requiring dynamic reconfigurability. Sharding appears to be a promising solution for scaling out a blockchain system horizontally by dividing the entire network into multiple shards or clusters; however, the flexibility and reconfigurability of these clusters need further research and investigation. In this paper, we propose two efficient mechanisms that enable flexible dynamic re-clustering of a blockchain network, namely blockchain cluster merging and splitting operations. Such mechanisms address specific application scenarios, such as microgrids and other edge-based applications, in which clusters of autonomous systems may require structural reconfiguration. The proposed mechanisms use three-stage procedures to merge and split multiple clusters. Our simulation experiments show that the proposed merging and splitting operations based on the proof-of-work (PoW) consensus algorithm can be optimized to reduce the merging time considerably (to roughly 1/22,000 of the original, based on 100 blocks), which effectively reduces overall merging and splitting completion time, interruption time and required computation power.
Integrating Low Earth Orbit (LEO) satellites with terrestrial network infrastructures to support ubiquitous Internet service coverage has recently gained increasing research momentum. One fundamental challenge is the frequent topology change caused by the constellation behaviour of LEO satellites. In the context of Software Defined Networking (SDN), the controller function that originally controls a conventional data plane of terrestrial SDN switches must expand its responsibility to cover its counterparts in space, namely the LEO satellites used for data forwarding. Seamless integration of the fixed control plane on the ground with the mobile data plane formed by LEO satellite constellations thus becomes a distinct challenge. For the first time in the literature, we propose the Virtual Data-Plane Addressing (VDPA) scheme, which uses IP addresses to represent virtual switches at fixed positions in space that are periodically instantiated by the LEO satellites traversing them in a predictable manner. With such a scheme, the changing data-plane topology incurred by LEO satellite constellations can be made completely agnostic to the control plane on the ground, enabling a native approach to seamless communication between the two planes. Our simulation results demonstrate the superiority of the proposed VDPA-based flow rule manipulation mechanism in terms of control plane performance.
The Distributed Mobility Management (DMM) protocol was proposed to address the shortcomings of centralized mobility management protocols. In DMM, unlike centralized protocols, flows can be routed optimally in the network through dynamic IP address allocation, which can yield lower signaling and packet delivery costs. In this paper, we introduce an SDN-based mobility management service with multiple controllers, called Hierarchical Software Defined Distributed Mobility Management (HSD-DMM). In contrast to standard DMM, which was proposed for flat architectures, HSD-DMM uses a dynamic anchor point selection method for each flow in a hierarchical mobile network architecture. The main criterion in selecting an anchor point is packet delivery cost reduction, based on the collaboration of multiple controllers responsible for different tiers of the hierarchy. Numerical analysis reveals that HSD-DMM can decrease signaling and packet delivery costs compared to an SDN-based standard DMM solution.
The design of an efficient charging management system for on-the-move Electric Vehicles (EVs) has become an emerging research problem in future connected-vehicle applications, given EVs' mobility uncertainties. The major technical challenges involve the decision-making intelligence for selecting Charging Stations (CSs), as well as the corresponding communication infrastructure for the necessary information dissemination between the power grid and mobile EVs. In this article, we propose a holistic solution that aims to substantially improve end users' driving experience (e.g., by minimizing EVs' charging waiting time during their journeys) and charging efficiency at the power grid side. In particular, the CS-selection decision on where to charge is made by individual EVs, for privacy and scalability benefits. The communication framework is based on a mobile Publish/Subscribe (P/S) paradigm to efficiently disseminate CS condition information to EVs on the move. To circumvent the rigidity of stationary Road Side Units (RSUs) for information dissemination, we promote the concept of Mobility as a Service (MaaS) by exploiting the mobility of public transportation vehicles (e.g., buses) to bridge the information flow to EVs through opportunistic encounters. We analyze various factors affecting the probability that EVs can access CS information via opportunistic Vehicle-to-Vehicle (V2V) communications, and demonstrate the advantage of introducing buses as mobile intermediaries for information dissemination, based on a common EV charging management system under the Helsinki city scenario. We further study the feasibility and benefit of enabling EVs to send the charging reservations used by the CS-selection logic via opportunistically encountered buses as well. Results show that this advanced management system improves performance at both the CS and EV sides.
Live holographic teleportation is an emerging media application that allows Internet users to communicate in a fully immersive environment. One distinguishing feature of such an application is the ability to teleport multiple objects from different network locations into the receiver's field of view at the same time, mimicking group-based communication in a common physical space. In this case, live teleportation frames originating from different sources must be precisely synchronised at the receiver side so that users do not perceive motion misalignment effects. For the first time in the literature, we quantify the motion misalignment between remote sources with different network contexts in order to justify the necessity of such frame synchronisation. Based on this motivation, we propose HoloSync, a novel edge-computing-based scheme capable of achieving controllable frame synchronisation performance for multi-source holographic teleportation applications. We carry out systematic experiments on a real system to evaluate the HoloSync scheme's frame synchronisation performance in specific network scenarios and its sensitivity to different control parameters.
The random access (RA) mechanism of Long Term Evolution (LTE) networks is prone to congestion when a large number of devices attempt RA simultaneously, due to the limited set of preambles. If each RA attempt is made by transmitting multiple consecutive preambles (codewords) picked from a subset of preambles, as proposed in [1], the collision probability can be significantly reduced. Selecting an optimal preamble set size [2] can maximise the RA success probability in the presence of a trade-off between codeword ambiguity and code collision probability, depending on load conditions. In light of this finding, this paper provides an adaptive algorithm, called Multipreamble RA, to dynamically determine the preamble set size under different load conditions, using only the minimum necessary uplink resources. This provides high RA success probability and makes it possible to isolate different network service classes by separating the whole preamble set into subsets, each associated with a different service class; a technique that cannot be applied effectively in LTE due to increased collision probability. This motivates the idea that preamble allocation could be implemented as a virtual network function, called vPreamble, as part of a radio access network (RAN) slice. The parameters of a vPreamble instance can be configured and modified according to the load conditions of the service class it is associated with.
Recent advances in smart connected vehicles and Intelligent Transportation Systems (ITS) are based upon the capture and processing of large amounts of sensor data. Modern vehicles contain many internal sensors to monitor a wide range of mechanical and electrical systems, and the move to semi-autonomous vehicles adds outward-looking sensors such as cameras, lidar, and radar. ITS is starting to connect existing sensors such as road cameras, traffic density sensors, traffic speed sensors, and emergency vehicle and public transport transponders. This disparate range of data is processed to produce a fused situation awareness of the road network, which is used to provide real-time management with much of the decision making automated. Road networks have quiet periods followed by peak traffic periods, and cloud computing can cope with the peaks by offloading processing and scaling up as required; however, in some situations the latency to traditional cloud data centres is too high or the bandwidth is too constrained. Cloud computing at the edge of the network, close to the vehicles and ITS sensors, can address these latency and bandwidth constraints, but the high mobility of vehicles and the heterogeneity of the infrastructure still need to be addressed. This paper surveys the literature on cloud computing for ITS and connected vehicles and provides taxonomies of the technologies and their use cases. We finish by identifying where further research is needed to enable vehicles and ITS to use edge cloud computing in a fully managed and automated way. The survey covers 496 papers over a seven-year span, from the first relevant paper in 2013 to the end of 2019.
In this paper, we design and evaluate a geographic-based spray-and-relay (GSaR) routing scheme for delay/disruption-tolerant networks. To the best of our knowledge, GSaR is the first spray-based geographic routing scheme to use historical geographic information for making routing decisions. Here, the term spray means that only a limited number of message copies are allowed for replication in the network. By estimating the movement range of the destination from historical geographic information, GSaR expedites message spraying toward this range, while preventing copies from being sprayed away from it and postponing those heading out of it. Together, these behaviours aim to spray the limited number of message copies toward the destination's range quickly and efficiently, and to spray them effectively within that range, thereby reducing delivery delay and increasing delivery ratio. Furthermore, GSaR exploits delegation forwarding to enhance the reliability of routing decisions and to handle the local maximum problem, both of which are considered key challenges in applying geographic routing to sparse networks. We evaluate GSaR under three city scenarios abstracted from the real world, alongside other routing schemes for comparison. Results show that GSaR is reliable in delivering messages before their expiration deadlines and efficient in achieving a low routing overhead ratio. Further observation indicates that GSaR also achieves low and fair energy consumption over the nodes in the network.
Network virtualization has been recognized as a promising solution for the rapid deployment of customized services by building multiple Virtual Networks (VNs) on a shared substrate network. While various VN embedding schemes have been proposed to allocate substrate resources to each VN request, little work has been done on backup mechanisms for substrate network failures. In a virtualized infrastructure, a single substrate failure affects all the VNs sharing that resource, yet provisioning a dedicated backup network for each VN is inefficient in terms of substrate resource utilization. In this paper, we investigate the problem of shared backup network provisioning for VN embedding and propose two schemes: shared on-demand and shared pre-allocation backup. Simulation experiments show that both proposed schemes make better use of substrate resources than a dedicated backup scheme without sharing, while each has its own advantages.
This chapter presents initial results from the European Commission H2020 5G PPP Phase 2 project SaT5G (Satellite and Terrestrial Network for 5G) [1]. It specifically elaborates on the selected use cases and scenarios for positioning satellite communications (SatCom) within the 5G usage scenario of eMBB (enhanced mobile broadband), which appears to be the most commercially attractive for SatCom. After a short introduction to the satellite role in the 5G ecosystem and the SaT5G project, the chapter addresses the selected satellite use cases for eMBB by presenting their relevance to the key research pillars (RPs), to 5G PPP key performance indicators (KPIs), to the 3rd Generation Partnership Project (3GPP) SA1 New Services and Markets Technology Enablers (SMARTER) use case families, and to key 5G market verticals, together with an assessment of their market size. The chapter then provides a qualitative high-level description of the multiple scenarios associated with each of the four selected satellite use cases for eMBB. Useful conclusions are drawn at the end of the chapter.
Emerging Peer-to-Peer (P2P) technologies have enabled various types of content to be efficiently distributed over the Internet. Most P2P systems adopt selfish peer selection schemes at the application layer that, in some sense, optimize user quality of experience. On the network side, traffic engineering (TE) is deployed by ISPs to achieve efficient overall network resource utilization, typically without distinguishing P2P flows from other types of traffic. Due to inconsistent or even conflicting objectives between the P2P overlay and network-level TE, their interactions and the impact on each other's performance are likely to be non-optimal, and have not yet been investigated in detail. In this paper, we study such non-cooperative interactions by modeling best-reply dynamics, in which the P2P overlay and network-level TE each optimize their own strategy based on the other player's decision in the previous round. According to our simulation results based on data from the ABILENE network, P2P overlays exhibit strong resilience to adverse TE operations in maintaining end-to-end performance at the application layer. In addition, we show that network-level TE may suffer performance deterioration caused by greedy peer (re-)selection in reaction to previous TE adjustments.
The evolution of network technologies has witnessed a paradigm shift toward open and intelligent networks, with the Open Radio Access Network (O-RAN) architecture emerging as a promising solution. O-RAN introduces disaggregation and virtualization, enabling network operators to deploy multi-vendor, interoperable solutions. However, managing and automating the complex O-RAN ecosystem presents numerous challenges. To address this, machine learning (ML) techniques have gained considerable attention in recent years, offering promising avenues for network automation in O-RAN. This paper presents a comprehensive survey of current research efforts on network automation using ML in O-RAN. We begin by providing an overview of the O-RAN architecture and its key components, highlighting the need for automation. Subsequently, we delve into O-RAN support for ML techniques. The survey then explores the challenges in network automation using ML within the O-RAN environment, followed by existing research studies discussing the application of ML algorithms and frameworks for network automation in O-RAN. The survey further discusses research opportunities by identifying important aspects where ML techniques can be beneficial.
The distributed mobility management (DMM) solution was proposed to address the downsides of centralized mobility management protocols. Standard DMM targets flat architectures and always selects the anchor point from the access layer. Numerical analysis in this paper shows that dynamic anchor point selection can improve the performance of standard DMM in terms of signalling and packet delivery cost. We then present an SDN-based DMM solution, which we refer to as SD-DMM, that provides dynamic anchor point selection for hierarchical mobile network architectures. In SD-DMM, the anchor point is dynamically selected for each mobile node by a virtual function implemented as an application on top of the SDN controller, which has a global view of the network. The main advantage of SD-DMM is a decreased packet delivery cost.
This paper presents initial results from the European Commission Horizon 2020 5G Public Private Partnership Phase 2 project SaT5G (Satellite and Terrestrial Network for 5G). After describing the concept, objectives, challenges, and research pillars addressed by the SaT5G project, the paper elaborates on the selected use cases and scenarios for positioning satellite communications in the 5G usage scenario of enhanced mobile broadband.
Node deployment in wireless sensor networks is typically very dense, which causes data duplication. Duty-cycling of sensors is therefore an important mechanism for reducing the data load and prolonging network lifetime: certain sensors are selected to be active while the others are put into sleep mode. However, quality of service in terms of network connectivity and sensing coverage must be guaranteed. This paper proposes a sensor selection method that guarantees connected coverage by using a hexagonal tessellation as a virtual partition consisting of many hexagonal cells across the network. The six equilateral triangles in each hexagonal cell are the target areas in which k sensors are selected to operate. The performance of the method is evaluated in terms of quality of connected coverage, number of active nodes, efficient coverage area and chance of node selection.
The integration of the Space Information Network (SIN) with terrestrial infrastructures has been attracting significant attention in the context of 5G, where satellite communications can provide additional capabilities such as backhauling between the core network and remote mobile edge sites. However, simply adding SIN capabilities to terrestrial 5G does not automatically lead to enhanced service performance without systematic scheduling of the coexisting resources. In this article, we focus on the scenario of multi-link video streaming over parallel Geostationary Earth Orbit (GEO) satellite and terrestrial 5G backhaul links for enhancing user Quality of Experience (QoE) and network efficiency. The distinct challenge is the complex optimization of scheduling video segment delivery over two parallel channels with very different characteristics, while striving to enhance video quality and resource optimality. We carried out systematic experiments based on a real-life 5G testing framework with integrated GEO satellite and terrestrial backhaul links. The experimental results demonstrate the effectiveness of our proposed 5G edge computing based solution in holistically achieving assured user experiences and optimised network resource efficiency in terms of video traffic offloading.
In-network content caching has recently emerged in the context of Information-Centric Networking (ICN), which allows content objects to be cached at the content router side. In this paper, we focus specifically on in-network caching of Peer-to-Peer (P2P) content objects for improving both service and operational efficiency. We propose an intelligent in-network caching scheme for P2P content chunks, aiming to reduce P2P-based content traffic load and to improve content distribution performance. Towards this end, the proposed holistic decision-making logic takes into account context information on P2P characteristics such as chunk availability. In addition, we analyse the benefit of coordination between neighbouring content routers when making caching decisions, in order to avoid duplicated caching of P2P chunks nearby. An analytical modelling framework is developed to quantitatively evaluate the efficiency of the proposed in-network caching scheme.
This article presents an approach to delivering qualitative end-to-end quality of service (QoS) guarantees across the multiprovider Internet. We propose that bilateral agreements between a number of autonomous systems (ASs) result in the establishment of QoS-class planes that potentially extend across the global Internet. The deployment of a QoS-enhanced Border Gateway Protocol (BGP) with different QoS-based route selection policies in each of the planes allows a range of interdomain QoS capabilities to coexist on the same network infrastructure. The article presents simulation results showing the benefits of the approach and discusses aspects of the performance of QoS-enhanced BGP.
Cooperation between peer-to-peer (P2P) overlays and underlying networks has been proposed as an effective approach to improve the efficiency of both the applications and the underlying networks. However, fundamental characteristics relating to ISP business relationships and inter-ISP routing information have not been sufficiently investigated in the context of collaborative ISP-P2P paradigms in multi-domain environments. In this paper, we focus on these issues and develop an analytical modelling framework for analysing optimized inter-domain peer selection schemes that take ISP policies into account, with the main purpose of mitigating cross-ISP traffic and enhancing the service quality of end users. In addition, we introduce an advanced hybrid scheme for peer selection based on the proposed analytical framework, in accordance with practical network scenarios wherein cooperative and non-cooperative behaviours coexist. Numerical results show that the proposed scheme, incorporating ISP policies, achieves desirable network efficiency as well as high service quality for P2P users. Our analytical modelling framework can serve as a guide for analysing and evaluating future network-aware P2P peer selection paradigms in general multi-domain scenarios.
Handling traffic dynamics in order to avoid network congestion and subsequent service disruptions is one of the key tasks performed by contemporary network management systems. Given the simple but rigid routing and forwarding functionality in IP-based environments, efficient resource management and control solutions against dynamic traffic conditions are still to be achieved. In this article, we introduce AMPLE, an efficient traffic engineering and management system that performs adaptive traffic control using multiple virtualized routing topologies. The proposed system consists of two complementary components. Offline link weight optimization takes the physical network topology as input and aims to produce maximum routing path diversity across multiple virtual routing topologies for long-term operation through optimized link weight settings. Based on these diverse paths, adaptive traffic control performs intelligent traffic splitting across individual routing topologies in reaction to monitored network dynamics at short timescales. According to our evaluation with real network topologies and traffic traces, the proposed system copes almost optimally with unpredicted traffic dynamics and, as such, constitutes a new proposal for achieving better quality of service and overall network performance in IP networks.
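The adaptive traffic-splitting step can be illustrated with a minimal sketch (a simplified inverse-load heuristic, not AMPLE's actual splitting algorithm): each virtual routing topology receives a share of the demand inversely proportional to the current load on its path.

```python
def split_traffic(path_loads, total_demand):
    """Split a traffic demand across virtual routing topologies.

    path_loads   : current load on the path offered by each topology
    total_demand : traffic volume to distribute across the topologies
    Returns one share per topology; lightly loaded paths get larger shares.
    (Illustrative only; AMPLE reacts to monitored link-level dynamics.)
    """
    weights = [1.0 / (load + 1e-9) for load in path_loads]  # avoid /0
    norm = sum(weights)
    return [total_demand * w / norm for w in weights]
```

For example, with one path three times as loaded as another, the lighter path receives three quarters of the demand.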
Softwarization has been deemed a key feature of 5G networking, in the sense that the support of network functions migrates from traditional hardware-based solutions to software-based ones. While the main rationale of 5G softwarization is to achieve a high degree of flexibility and programmability as well as a reduction of the total cost of ownership (TCO), how to strike a desirable balance between system openness and necessary standardization in the context of 5G remains a significant open issue. The aim of this article is to systematically survey the relevant enabling technologies, platforms and tools for 5G softwarization, together with ongoing standardization activities at the relevant SDOs (Standards Developing Organizations). Based on these, we aim to shed light on the future evolution of 5G technologies in terms of softwarization versus standardization requirements and options.
With the development of edge-cloud computing technologies, distributed data centers (DCs) have been extensively deployed across the global Internet. Since different users and applications have heterogeneous requirements on specific types of ICT resources in distributed DCs, how to optimize such heterogeneous resources under dynamic and even uncertain environments becomes a challenging issue. Traditional approaches are unable to provide effective solutions for multi-dimensional resource allocation that involves balanced utilization across different resource types in distributed DC environments. This paper presents a reinforcement learning based approach for multi-dimensional resource allocation (termed NESRL-MRM) that achieves balanced utilization and availability of resources in dynamic environments. To train NESRL-MRM's agent within acceptable wall-clock time but without loss of exploration diversity in the search space, a natural evolution strategy (NES) is employed to approximate the gradient of the reward function. To realistically evaluate the performance of NESRL-MRM, our simulation evaluations are based on real-world workload traces from Amazon EC2 and Google data centers. Our results show that NESRL-MRM achieves significant improvement over existing approaches in balancing the utilization of multi-dimensional DC resources, which leads to a substantially reduced blocking probability for future incoming workload demands.
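The NES gradient approximation at the heart of such a training loop can be sketched as below. This is a generic antithetic-sampling NES estimator, with the DC reward function replaced by a toy one; the function name and hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np

def nes_gradient(reward_fn, theta, sigma=0.1, n_samples=50, seed=0):
    """Natural-evolution-strategies estimate of the gradient of a
    reward function at parameters theta (minimal sketch; the real
    reward would be the balanced-utilization objective)."""
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        eps = rng.standard_normal(theta.shape)
        # antithetic sampling reduces the estimator's variance
        grad += (reward_fn(theta + sigma * eps)
                 - reward_fn(theta - sigma * eps)) * eps
    return grad / (2.0 * sigma * n_samples)
```

On a toy reward such as the negative squared norm, the estimate points toward the origin, as the true gradient does.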
In this paper, we present a satellite-integrated 5G testbed that was produced for the EU-commissioned Satellite and Terrestrial Networks for 5G (SaT5G) project. We first describe the testbed's 3GPP Rel. 15/6-compliant mobile core and radio access network (RAN), which have been established at the University of Surrey. We then detail how satellite NTN UE and gateway components were integrated into the testbed using virtualization and software-defined orchestration. The satellite element provides 5G backhaul, which in concert with the terrestrial/mobile segment of the testbed forms a fully integrated end-to-end (E2E) 5G network. This hybrid 5G network exercised and validated the four major use cases defined within the SaT5G project: cell backhaul, edge delivery of multimedia content, multicast and caching for media delivery, and multilink connectivity using satellite and terrestrial paths. In this document, we describe the MEC implementations developed to address each of these use cases and explore how each MEC system integrates into the 5G network. We also provide measurements from trials of the use cases over a live GEO satellite system and indicate in each case the improvements that result from the use of satellite in the 5G network.
Data collection is a fundamental yet challenging task in Wireless Sensor Networks (WSNs), owing to the inherently distinctive characteristics of sensor networks such as limited energy supply, self-organizing deployment and differing QoS requirements across applications. Mobile sink and virtual MIMO (vMIMO) techniques can be jointly applied to achieve data collection that is both time efficient and energy efficient. In this paper, we aim to minimize the overall data collection latency, including both sink moving time and sensor data uploading time. We formulate the problem and propose a multihop weighted revenue (MWR) algorithm to approximate the optimal solution. To achieve a trade-off between full utilization of concurrent uploading in vMIMO and the shortest moving tour of the mobile sink, the proposed algorithm combines the amount of concurrently uploaded data, the number of neighbours, and the sink's moving tour length into one metric for polling point selection. The simulation results show that the proposed MWR effectively reduces total data collection latency in different network scenarios with lower overall network energy consumption.
The recently proposed Application Layer Traffic Optimization (ALTO) framework has opened up a new dimension for Internet traffic management that is complementary to the traditional application-agnostic traffic engineering (AATE) solutions currently employed by ISPs. In this paper, we investigate how ALTO-assisted Peer-to-Peer (P2P) traffic management functions interact with the underlying AATE operations, given that different application-layer policies may exist in the P2P overlay. By considering specific P2P peer selection behaviors on top of a traffic-engineered ISP network, we conduct a performance analysis of how the respective performance at the application and network layers is influenced by different policies on the P2P side. Our empirical study offers significant insight for the future design and analysis of cross-layer network engineering approaches that involve multiple autonomous optimization entities with both consistent and non-consistent policies.
Backbone network energy efficiency has recently become a primary concern for Internet Service Providers and regulators. The common solutions for energy conservation in such an environment include sleep-mode reconfigurations and rate adaptation at network devices when the traffic volume is low. It has been observed that many ISP networks exhibit regular traffic dynamicity patterns which can be exploited for practical time-driven link sleeping configurations. In this work, we propose a joint optimization algorithm to compute the reduced network topology and its actual configuration duration during daily operations. The main idea is first to intelligently remove network links using a greedy heuristic, without causing network congestion during off-peak time. Following that, a robust algorithm is applied to determine the window size of the configuration duration of the reduced topology, ensuring that a unified configuration with optimized energy efficiency can be enforced at exactly the same time period on a daily basis. Our algorithm was evaluated on a Point-of-Presence representation of the GÉANT network and its real traffic matrices. According to our simulation results, the reduced network topology achieves an 18.6% energy reduction during the off-peak period without causing significant network performance deterioration. The contribution of this work is a practical yet efficient approach for energy savings in ISP networks, which can be directly deployed on legacy routing platforms without requiring any protocol extension.
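A toy version of the greedy removal idea is sketched below, assuming each sleeping link hands its traffic to a single pre-chosen alternative link. The paper's heuristic re-routes over the full topology and checks congestion far more carefully; all names and the threshold value here are assumptions.

```python
def greedy_link_sleep(links, alt, max_util=0.9):
    """links: {name: (load, capacity)}; alt: {name: alternative link}.
    Try to put links to sleep in ascending order of load, but only while
    the alternative link that absorbs the traffic stays below max_util
    (toy model of the off-peak link-removal heuristic)."""
    state = {k: list(v) for k, v in links.items()}
    asleep = []
    for name, (load, cap) in sorted(links.items(), key=lambda kv: kv[1][0]):
        a = alt.get(name)
        if a is None or name in asleep or a in asleep:
            continue
        a_load, a_cap = state[a]
        if (a_load + state[name][0]) / a_cap <= max_util:
            state[a][0] = a_load + state[name][0]   # shift the traffic
            state[name][0] = 0.0                    # link goes to sleep
            asleep.append(name)
    return asleep
```

With three links loaded at 10, 50 and 85 units (capacity 100 each), only the lightly loaded link can sleep without pushing its alternative over the threshold.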
The continuous growth in volume of Internet traffic, including VoIP, IPTV and user-generated content, requires improved routing mechanisms that satisfy the requirements of both the Internet Service Providers (ISPs) that manage the network and the end-users that are the sources and sinks of data. The objectives of these two players are different, since ISPs are typically interested in ensuring optimised network utilisation and high throughput, whereas end-users might require a low-delay or a high-bandwidth path. In this paper, we present our UAESR (Utilisation-Aware Edge Selected Routing) algorithm, which aims to satisfy both players' demands concurrently by selecting paths that are a good compromise between the two players' objectives. We demonstrate by simulation that this algorithm allows both actors to achieve their goals. The results support our argument that our cooperative approach achieves effective network resource engineering while also offering routing flexibility and good quality of service to end-users.
The integrated MANET and satellite network is a natural evolution in providing local and remote connectivity. The features of this integrated network, such as requiring no fixed infrastructure, ease of deployment and the provision of globally ubiquitous communication, have made it attractive. However, the unpredictable mobility of its nodes, the lack of central coordination and the limited available resources pose significant networking challenges. A large body of studies exists in the literature, yet some issues remain worth tackling, such as gateway selection mechanisms, satellite link management and resource management. As a basic step in internetworking, the issue of gateway selection is studied specifically, and a corresponding optimization scheme for achieving load balancing is described.
Quality of service in terms of network connectivity and sensing coverage is important in wireless sensor networks, and in sensor scheduling in particular it must be controlled to meet the required quality. In this paper, we present novel methods of connected coverage optimization for sensor scheduling using a virtual hexagon partition composed of hexagonal cells. We first investigate the optimum number of active sensors needed to fully cover an individual hexagonal cell. Based on the best case, a sensor selection method called the three-symmetrical-area method (3-Sym) is then proposed. Furthermore, we optimize coverage efficiency by reducing the overlapping coverage degree incurred by the 3-Sym method; this refinement, called the symmetrical area optimization method, considers coverage redundancy within a particular area, namely the sensor's territory. The simulation results show that we achieve not only complete connected coverage over the entire monitored area with a near-ideal number of active sensors but also the minimum overlapping coverage degree in each scheduling round.
Ever since the first automation provided by the introduction of the Strowger telephone exchange in the late 19th century, networks have been increasingly automated. Fast forward to 2022, and the challenge facing network providers is scaling up this level of automation considering massive increases in complexity, new levels of agility to operate services, and rising demand from customers within the modern telecommunications ecosystem. This article describes a significant new industry-academia partnership to address these challenges: Next Generation Converged Digital Infrastructure (NG-CDI) is creating a vision for the building and operation of a future-proof network infrastructure and its autonomic management. In this article, we highlight three exemplar activities within the NG-CDI research program that illustrate the benefits of taking a highly collaborative interdisciplinary approach and show how academia and industry working closely together have delivered a range of direct and positive impacts on business.
The future Internet will utilise numerous methods of communication, including an increasing amount of space-based Internet transport infrastructure. Control and communication across Earth-based and space-based networks present several problems: high dynamicity, spatial connectivity, continual movement tracking and prediction, line-of-sight obstruction, integration with existing Internet infrastructure, routing, and addressing. All of these challenge existing control architectures and protocol mechanisms. This chapter provides an overview of near- to mid-term space networking towards 2030; it outlines the key components, challenges, and requirements for integrating future space-based network infrastructure with existing networks and mechanisms. We highlight network control and transport interconnection and identify the resources and functions required for successful interconnection of space-based and Earth-based Internet infrastructure. Finally, we discuss the management implications of these integrated assets and resources and the potential technologies and capabilities that may be applied or extended.
Locality-based peer selection paradigms have recently been proposed based on cooperation between peer-to-peer (P2P) service providers, Internet Service Providers (ISPs) and end users, in order to achieve efficient resource utilization by P2P traffic. Building on this cooperation between different stakeholders, we introduce a more advanced paradigm with adaptive peer selection that takes into account traffic dynamics in the operational network. Specifically, peers associated with low path utilization, as measured by the ISP, are selected in order to reduce the probability of network congestion. This approach not only improves real-time P2P service assurance but also optimizes the overall use of network resources. Our simulations based on the GÉANT network topology and real traffic traces show that the proposed adaptive peer selection scheme achieves significant improvement in utilizing bandwidth resources compared to static locality-based approaches.
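In spirit, the selection rule reduces to ranking candidate peers by ISP-measured path utilization. A minimal sketch follows; the function and input names are assumptions, and the actual scheme also weighs locality:

```python
def select_peers(candidates, path_util, k=2):
    """Pick the k candidate peers whose ISP-measured path utilization
    is lowest (illustrative core of adaptive, network-aware peer
    selection; path_util would come from an ALTO-like interface)."""
    return sorted(candidates, key=lambda p: path_util[p])[:k]
```

For instance, among peers on paths loaded at 80%, 20% and 50%, the two least-loaded paths win.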
Software-defined networking (SDN) is the key technology enabling network softwarization by offering programmable and flexible network control capabilities. In order to dynamically manage and reconfigure the underlying network through SDN, network-based monitoring functionality needs to be in place. However, existing network monitoring schemes are typically heavyweight, which can cause substantial monitoring overhead when dealing with an entire network infrastructure and complex policies. Such a limitation can be critical in a software-based network system that enables the construction of multiple networks with various network policies designed by a network operator. In this paper, we propose a new lightweight monitoring mechanism, referred to as Active-port Aware Monitoring (APAM), to support the monitoring of complex networks with substantially reduced overhead. APAM monitors only active ports, i.e. the switch ports utilized by current flow rules. These active ports are dynamically monitored with reconfigurable monitoring intervals according to their port utilization. The measurement results show that APAM adapts to varying traffic routes caused by changes of flow rules and also adjusts its monitoring behaviour according to network traffic dynamicity, which reduces the monitoring overhead and improves monitoring accuracy.
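A reconfigurable monitoring interval might look like the following. The linear shrink rule is a guess, since the abstract does not give APAM's exact adaptation formula; the constants are placeholders.

```python
def monitoring_interval(port_util, base=10.0, min_iv=1.0):
    """Polling interval (seconds) for an active switch port: busier
    ports are polled more often. Hypothetical rule in the spirit of
    APAM's utilization-driven interval reconfiguration."""
    return max(min_iv, base * (1.0 - port_util))
```

An idle port is polled every 10 s, a saturated one every 1 s, with a linear ramp in between.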
Backup paths are usually pre-installed by network operators to protect against single link failures in backbone networks that use multi-protocol label switching. This paper introduces a new scheme called Green Backup Paths (GBP) that intelligently exploits these existing backup paths to perform energy-aware traffic engineering, without adversely impacting the backup paths' primary role of preventing traffic loss upon single link failures. This is in sharp contrast to most existing schemes, which tackle energy efficiency and link failure protection separately, resulting in substantially higher operational costs. GBP works in an online and distributed fashion, where each router periodically monitors its local traffic conditions and cooperatively determines how to reroute traffic so that the highest number of physical links can go to sleep for energy saving. Furthermore, our approach maintains quality of service by restricting the use of long backup paths to failure protection only; therefore, GBP avoids substantially increased packet delays. GBP was evaluated on the Point-of-Presence representation of two publicly available network topologies, namely GÉANT and Abilene, and their real traffic matrices. GBP was able to achieve significant energy saving gains, which are always within 15% of the theoretical upper bound.
The research in this letter focuses on geographic routing in Delay/Disruption Tolerant Networks (DTNs) under sparse network density. We explore the Delegation Forwarding (DF) approach to overcome the limitation of the geometric metric, which requires a mobile node to move towards the destination, and propose Delegation Geographic Routing (DGR). In addition, we address the local maximum problem of DGR by considering nodal mobility and message lifetime. Analysis and evaluation results show that DGR overcomes the limitation of the algorithm based on the given geometric metric. By overcoming the limited routing decision and handling the local maximum problem, DGR reliably delivers messages before their lifetimes expire. Meanwhile, the low overhead ratio of DGR stems from its use of DF.
In order to minimize the downloading time of short-lived applications such as web browsing, web applications and short video clips, the recently standardized HTTP/2 adopts stream multiplexing on one single TCP connection. However, aggregating all content objects within one single connection suffers from the Head-of-Line blocking issue. QUIC, by eliminating this issue on the basis of UDP, is expected to further reduce content downloading time. However, in mobile network environments, the single-connection strategy still leads to degraded and highly variable completion times, due to the unexpected hindrance of congestion window growth caused by common but unpredictable fluctuations in round-trip time and random loss events at the air interface. To keep the congestion window resilient against such network fluctuations, we propose an intelligent connection management scheme based on QUIC which not only employs multiple connections adaptively but also conducts tailored state and congestion window synchronization between these parallel connections upon the detection of network fluctuation events. According to the performance evaluation results obtained from an LTE-A/Wi-Fi testing network, the proposed multiple-QUIC scheme can effectively overcome the limitations of different congestion control algorithms (e.g. the loss-based New Reno/CUBIC and the rate-based BBR), achieving substantial performance improvement in both median (up to 59.1%) and 95th-percentile (up to 72.3%) completion times. The significance of this work lies in achieving highly robust short-lived content downloading performance against various uncertainties in network conditions and across different congestion control schemes.
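A hypothetical policy for choosing the number of parallel connections from observed RTT fluctuation is sketched below. The paper does not publish its exact rule; the threshold, cap and names here are all assumptions.

```python
def num_connections(rtt_samples, base_rtt, max_conns=4):
    """Choose how many parallel QUIC connections to open, based on
    observed RTT jitter: more fluctuation -> more connections, capped
    at max_conns (illustrative policy, not the paper's actual rule)."""
    if not rtt_samples:
        return 1
    jitter = max(rtt_samples) - min(rtt_samples)
    # one extra connection per half base-RTT of jitter (assumed ratio)
    return min(max_conns, 1 + int(jitter / (0.5 * base_rtt)))
```

A stable path keeps a single connection, while heavy jitter opens up to the cap.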
HTTP-based live streaming has become increasingly popular in recent years, and more users have started generating 4K live streams from their devices (e.g., mobile phones) through social-media service providers such as Facebook or YouTube. If the audience is located far from a live stream source across the global Internet, TCP throughput becomes substantially suboptimal due to slow-start and congestion control mechanisms. This is especially the case when the end-to-end content delivery path involves a radio access network (RAN) at the last mile. As a result, the data rate perceived by a mobile receiver may not meet the high requirement of 4K video streams, which causes deteriorated Quality of Experience (QoE). In this paper, we propose a scheme named Edge-based Transient Holding of Live sEgment (ETHLE), which addresses this issue by performing context-aware transient holding of video segments at the mobile edge with virtualized content caching capability. By holding the minimum number of live video segments at the mobile edge cache in a context-aware manner, the ETHLE scheme achieves seamless 4K live streaming experiences across the global Internet, eliminating buffering and substantially reducing initial startup delay and live stream latency. It has been deployed as a virtual network function in an LTE-A network, and its performance has been evaluated using real live stream sources distributed around the world. The significance of this paper is that, by leveraging virtualized caching resources at the mobile edge, we address the conventional transport-layer bottleneck and enable QoE-assured Internet-wide live streaming services with high data rate requirements.
Given that the vast majority of Internet interactions relate to content access and delivery, recent research has pointed to a potential paradigm shift from the current host-centric Internet model to an information-centric one. In information-centric networks, named content is accessed directly, with the best content copy delivered to the requesting user given content caching within the network. Here, we present an Internet-scale mediation approach for content access and delivery that supports content and network mediation. Content characteristics, server load, and network distance are taken into account in order to locate the best content copy and optimize network utilization while maximizing the user quality of experience. The content mediation infrastructure is provided by Internet service providers in a cooperative fashion, with both decoupled/two-phase and coupled/one-phase modes of operation. We present in detail the coupled mode of operation which is used for popular content and follows a domain-level hop-by-hop content resolution approach to optimally identify the best content copy. We also discuss key aspects of our content mediation approach, including incremental deployment issues and scalability. While presenting our approach, we also take the opportunity to explain key information-centric networking concepts.
Satellite communication has recently been included as one of the key enabling technologies for 5G backhauling, especially for the delivery of bandwidth-demanding enhanced mobile broadband (eMBB) applications in 5G. In this paper, we present a 5G-oriented network architecture that is based on satellite communications and multi-access edge computing to support eMBB applications, which is investigated in the EU 5GPPP phase-2 satellite and terrestrial network for 5G project. We specifically focus on using the proposed architecture to assure quality-of-experience (QoE) of HTTP-based live streaming users by leveraging satellite links, where the main strategy is to realize transient holding and localization of HTTP-based (e.g., MPEG-DASH or HTTP live streaming) video segments at 5G mobile edge while taking into account the characteristics of satellite backhaul link. For the very first time in the literature, we carried out experiments and systematically evaluated the performance of live 4K video streaming over a 5G core network supported by a live geostationary satellite backhaul, which validates its capability of assuring live streaming users' QoE under challenging satellite network scenarios.
Fast reroute (FRR) techniques have been designed and standardised in recent years for supporting sub-50-millisecond failure recovery in operational ISP networks. On the other hand, if the provisioning of FRR protection paths does not take into account traffic engineering (TE) requirements, customer traffic may still get disrupted due to post-failure traffic congestion. Such a situation could be more severe in operational networks with highly dynamic traffic patterns. In this paper we propose a distributed technique that enables adaptive control of FRR protection paths against dynamic traffic conditions, resulting in self-optimisation in addition to the self-healing capability. Our approach is based on the Loop-free Alternates (LFA) mechanism that allows non-deterministic provisioning of protection paths. The idea is for repairing routers to periodically re-compute LFA alternative next-hops using a lightweight algorithm for achieving and maintaining optimised post-failure traffic distribution in dynamic network environments. Our experiments based on a real operational network topology and traffic traces across 24 hours have shown that such an approach is able to significantly enhance relevant network performance compared to both TE-agnostic and static TE-aware FRR solutions.
In today’s BGP routing architecture, traffic delivery is in general based on single path selection paradigms. The lack of path diversity hinders the support for resilience, traffic engineering and QoS provisioning across the Internet. Some recently proposed multi-plane extensions to BGP offer a promising mechanism to enable diverse inter-domain routes towards destination prefixes. Based on these enhanced BGP protocols, we propose in this paper a novel technique to enable controlled fast egress router switching for handling network failures. In order to minimize the disruptions to real-time services caused by the failures, backup egress routers can be immediately activated through locally remarking affected traffic towards alternative routing planes without waiting for IGP routing re-convergence. According to our evaluation results, the proposed multi-plane based egress router selection algorithm is able to provide both high path diversity and balanced load distribution across inter-domain links with a small number of planes.
The vehicular cloud is a promising new paradigm where vehicular networking and mobile cloud computing are elaborately integrated to enhance the quality of vehicular information services. Pseudonyms are a resource for vehicles to protect their location privacy, and should be efficiently utilized to secure vehicular clouds. However, only a few existing architectures of pseudonym systems take flexibility and efficiency into consideration, leading to potential threats to location privacy. In this paper, we exploit software-defined networking technology to significantly extend the flexibility and programmability of pseudonym management in vehicular clouds. We propose a software-defined pseudonym system where distributed pseudonym pools are promptly scheduled and elastically managed in a hierarchical manner. In order to decrease the system overhead due to the cost of inter-pool communications, we leverage two-sided matching theory to formulate and solve the pseudonym resource scheduling problem. We conducted extensive simulations based on the real map of San Francisco. Numerical results indicate that the proposed software-defined pseudonym system significantly improves pseudonym resource utilization and, meanwhile, effectively enhances the vehicles' location privacy by raising their entropy.
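The two-sided matching machinery referred to above is classic deferred acceptance (Gale-Shapley). A one-to-one sketch follows, with vehicles' pseudonym requests proposing to pseudonym pools; the instance data in the example is purely illustrative.

```python
def deferred_acceptance(proposer_prefs, acceptor_prefs):
    """One-to-one Gale-Shapley deferred acceptance. Proposers (e.g.
    vehicles' pseudonym requests) propose down their preference lists;
    acceptors (e.g. pseudonym pools) tentatively hold the best offer.
    Returns a stable matching as {acceptor: proposer}."""
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                        # acceptor -> proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in match:
            match[a] = p              # pool is free: tentatively accept
        elif rank[a][p] < rank[a][match[a]]:
            free.append(match[a])     # pool prefers p: bump the holder
            match[a] = p
        else:
            free.append(p)            # rejected: p tries its next pool
    return match
```

With two vehicles contending for the same pool, the pool's own preference decides, and the loser settles for its second choice.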
Largely motivated by the proliferation of content-centric applications in the Internet, information-centric networking has attracted the attention of the research community. By tailoring network operations around named information objects instead of end hosts, ICN yields a series of desirable features such as the spatiotemporal decoupling of communicating entities and the support of in-network caching. In this article, we advocate the introduction of such ICN features in a new, rapidly transforming communication domain: the smart grid. With the rapid introduction of multiple new actors, such as distributed (renewable) energy resources and electric vehicles, smart grids present a new networking landscape where a diverse set of multi-party machine-to-machine applications are required to enhance the observability of the power grid, often in real time and on top of a diverse set of communication infrastructures. Presenting a generic architectural framework, we show how ICN can address the emerging smart grid communication challenges. Based on real power grid topologies from a power distribution network in the Netherlands, we further employ simulations to both demonstrate the feasibility of an ICN solution for the support of real-time smart grid applications and further quantify the performance benefits brought by ICN against the current host-centric paradigm. Specifically, we show how ICN can support real-time state estimation in the medium voltage power grid, where high volumes of synchrophasor measurement data from distributed vantage points must be delivered within a very stringent end-to-end delay constraint, while swiftly overcoming potential power grid component failures.
This paper introduces a new scheme called Green MPLS Fast ReRoute (GMFRR) for enabling energy-aware traffic engineering. The scheme intelligently exploits backup label switched paths, originally used for failure protection, in order to achieve energy saving during the normal failure-free operation period. GMFRR works in an online and distributed fashion where each router periodically monitors its local traffic condition and cooperatively determines how to efficiently reroute traffic onto the backup paths in order to exploit opportunities for power saving through link sleeping in the primary paths. According to our performance evaluations based on the academic network GÉANT and its traffic matrices, GMFRR is able to achieve significant power saving gains, which are within 15% of the theoretical upper bound.
In data center networks, traffic needs to be distributed among different paths using traffic optimization strategies for mixed flows. Most of the existing strategies consider either distributed or centralized mechanisms to optimize the latency of mice flows or the throughput of elephant flows. However, low network performance and scalability issues are intrinsic limitations of both strategies. In addition, the current elephant flow detection methods are inefficient. In this paper, we propose a high-performance and scalable traffic optimization strategy (HPSTOS) based on a hybrid approach that leverages the advantages of both centralized and distributed mechanisms. HPSTOS improves the efficiency of elephant flow detection through sampling and flow-table identification. HPSTOS guarantees preferential transmission of mice flows using priority scheduling and adjusts their transmission rate by coding-based congestion control on the end-host, reducing their latency. Additionally, HPSTOS schedules elephant flows by cost-aware dynamic flow scheduling on a centralized controller to improve their throughput. The controller handles only elephant flows, which constitutes the minority of the flows, allowing effective scalability. Evaluations show that HPSTOS outperforms existing schemes by realizing efficient elephant flow detection and improving network performance and scalability.
This article presents an architecture for supporting interdomain QoS across the multi-provider global Internet. While most research to date has focused on supporting QoS within a single administrative domain, mature solutions are not yet available for the provision of QoS across multiple domains administered by different organizations. The architecture described in this article encompasses the full set of functions required in the management (service and resource), control and data planes for the provision of end-to-end QoS-based IP connectivity services. We use the concept of QoS classes and show how these can be cascaded using service level specifications (SLSs) agreed between BGP peer domains to construct a defined end-to-end QoS. We illustrate the architecture by describing a typical operational scenario.
Current practices for managing resources in fixed networks rely on off-line approaches, which can be sub-optimal in the face of changing or unpredicted traffic demand. To cope with the limitations of these off-line configurations new traffic engineering (TE) schemes that can adapt to network and traffic dynamics are required. In this paper, we propose an intra-domain dynamic TE system for IP networks. Our approach uses multi-topology routing as the underlying routing protocol to provide path diversity and supports adaptive resource management operations that dynamically adjust the volume of traffic sent across each topology. Re-configuration actions are performed in a coordinated fashion based on an in-network overlay of network entities without relying on a centralized management system. We analyze the performance of our approach using a realistic network topology, and our results show that the proposed scheme can achieve near-optimal network performance in terms of resource utilization in a responsive manner.
In order to meet the requirements of emerging demanding services, network resource management functionality that is decentralized, flexible and adaptive to traffic and network dynamics is of paramount importance. In this paper we describe the main mechanisms of DACoRM, a new intra-domain adaptive resource management approach for IP networks. Based on the path diversity provided by multi-topology routing, our approach controls the distribution of traffic load in the network in an adaptive manner through periodic re-configurations that use real-time monitoring information. The re-configuration actions performed are decided in a coordinated fashion between a set of source nodes that form an in-network overlay. We evaluate the overall performance of our approach using realistic network topologies. Results show that near-optimal network performance in terms of resource utilization can be achieved in a scalable manner.
This paper presents a holistic peer selection scheme for multi-domain environments, aiming to mitigate Peer-to-Peer (P2P) traffic volumes over expensive inter-domain links while maintaining the desirable service quality perceived by P2P users. The mechanism combines traditional locality-aware peer selection with consideration of ISP business relationships. By striking a balance between the two peering strategies, the risk of congestion on critical inter-connected links, which arises when P2P traffic is concentrated over fewer inter-ISP links under purely cooperative peering schemes, can be effectively alleviated. According to our analytical modelling, the proposed hybrid approach achieves better performance for P2P users while retaining network efficiency comparable to that of the cooperative peer selection strategy. Our modelling-based analysis offers incentives to perform peer selection in multi-domain environments where non-cooperative and cooperative networks coexist.
Open radio access network (Open-RAN) is becoming a key component of cellular networks, and optimizing its architecture is therefore vital. Open-RAN is a distributed architecture that lets virtualized networking functions be split between Distributed Units (DUs) and Centralized Units (CUs); as a result, there is a wide range of design options. We propose an optimization problem to choose the split points. The objective is to balance the load across CUs as well as midhaul links while considering delay requirements. The resulting formulation is an NP-hard problem that is solved with a novel heuristic algorithm. Performance evaluation shows that the gap between the optimal and heuristic solutions does not exceed 2%. An in-depth analysis of different centralization levels shows that using multiple CUs could reduce the total bandwidth usage by up to 20%. Moreover, multipath routing can improve load balancing between midhaul links at the cost of increased bandwidth usage.
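The CU load-balancing objective can be illustrated by a simple greedy baseline. This is a hypothetical stand-in for the paper's heuristic, which additionally accounts for midhaul links and delay requirements:

```python
def assign_splits(du_loads, num_cus):
    """Greedy longest-processing-time heuristic (illustrative only).

    du_loads: dict mapping DU name -> offered load of the functions
              split towards the centralized side.
    Assigns DUs in decreasing load order, each to the currently
    least-loaded CU, to balance load across CUs.
    """
    cu_loads = [0.0] * num_cus
    assignment = {}
    for du, load in sorted(du_loads.items(), key=lambda kv: -kv[1]):
        cu = min(range(num_cus), key=lambda c: cu_loads[c])  # least-loaded CU
        assignment[du] = cu
        cu_loads[cu] += load
    return assignment, cu_loads
```

This classic heuristic keeps the maximum CU load within a small factor of optimal, which mirrors the small optimality gap reported in the evaluation.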
Traffic Engineering (TE) involves network configuration in order to achieve optimal IP network performance. The existing literature considers intra- and inter-AS (Autonomous System) TE independently. However, if these two aspects are considered separately, the overall network performance may not be truly optimized, owing to the interaction between intra- and inter-AS TE: a good solution for inter-AS TE may not be good for intra-AS TE. To remedy this situation, we propose a joint optimization of intra- and inter-AS TE that improves overall network performance by simultaneously finding the best egress points for inter-AS traffic and the best routing scheme for intra-AS traffic. Three strategies are presented to tackle the problem: sequential, nested and integrated optimization. Our evaluation shows that, in comparison to sequential and nested optimization, integrated optimization can significantly improve overall network performance, accommodating approximately 30%-60% more traffic demand.
With the increased complexity of today's webpages, computation latency incurred by webpage processing during downloading has become a newly identified factor that may substantially affect user experience in a mobile network. To tackle this issue, we propose a simple but effective transport-layer optimization technique that requires the dissemination of context information from the mobile edge computing (MEC) server to the user devices where the algorithm is actually executed. The key novelty is the mobile edge's knowledge of webpage content characteristics, which can be used to increase downloading throughput for user QoE enhancement. Our experimental results, based on a real LTE-A test-bed, show that when the proportion of computation latency varies between 20% and 50% (typical for today's webpages), downloading throughput can be improved by up to 34.5%, with downloading time reduced by up to 25.1%.
In this paper, we present a Mobile Edge Computing (MEC) scheme for enabling network edge-assisted video adaptation based on MPEG-DASH (Dynamic Adaptive Streaming over HTTP). In contrast to the traditional over-the-top (OTT) adaptation performed by DASH clients, the MEC server at the mobile network edge can capture radio access network (RAN) conditions through its intrinsic Radio Network Information Service (RNIS) function, and use this knowledge to provide guidance to clients so that they can perform more intelligent video adaptation. In order to support such MEC-assisted DASH video adaptation, the MEC server needs to locally cache the most popular content segments at the qualities that can be supported by the current network throughput. Towards this end, we introduce a two-dimensional user Quality-of-Experience (QoE)-driven algorithm for making caching/replacement decisions based on both content context (e.g., segment popularity) and network context (e.g., RAN downlink throughput). We conducted experiments by deploying a prototype MEC server on a real LTE-A network testbed. The results show that our QoE-driven algorithm achieves significant improvement in user QoE over two benchmark schemes.
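A minimal sketch of such a two-dimensional (content context x network context) scoring rule for cache replacement follows. The particular scoring function is our illustrative assumption; the paper's algorithm may weight the two contexts differently:

```python
def cache_score(popularity, segment_bitrate_kbps, downlink_kbps):
    """Score a cached DASH segment by popularity and deliverability.

    Segments whose bitrate exceeds the current RAN downlink throughput
    are penalized proportionally, since clients cannot sustain them.
    """
    sustainable = (1.0 if segment_bitrate_kbps <= downlink_kbps
                   else downlink_kbps / segment_bitrate_kbps)
    return popularity * sustainable

def choose_victim(cache, downlink_kbps):
    """Pick the lowest-scoring segment for replacement.

    cache: dict segment_id -> (popularity, bitrate_kbps).
    """
    return min(cache, key=lambda s: cache_score(cache[s][0], cache[s][1],
                                                downlink_kbps))
```

Under this rule, a very popular 4K segment can still be evicted before a less popular SD segment when the measured downlink throughput makes the high bitrate undeliverable.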
It has been envisaged that in future 5G networks user devices will become an integral part of the network by participating in the transmission of mobile content traffic, typically through device-to-device (D2D) technologies. In this context, we promote the concept of Mobility as a Service (MaaS), where a content-aware mobile network edge is equipped with the necessary knowledge of device mobility in order to distribute popular mobile content items to interested clients via a small number of helper devices. Towards this end, we present a device-level Information Centric Networking (ICN) architecture that is able to perform intelligent content distribution operations according to context information on user mobility and content characteristics. Based on this platform, we further introduce device-level online content caching and offline helper selection algorithms in order to optimise overall system efficiency. In particular, this paper sheds distinct light on the importance of user mobility data analytics, based on which helper selection can lead to overall system optimality. Based on representative user mobility models, we conducted realistic simulation experiments and modelling that demonstrate the scheme's efficiency in terms of both network traffic offloading gains and user-oriented performance improvements. In addition, we show how the framework can be flexibly configured to meet specific delay tolerance constraints according to specific context policies.
The Internet, the de facto platform for large-scale content distribution, suffers from two issues that limit its manageability, efficiency and evolution: (1) the IP-based Internet is host-centric and agnostic to the content being delivered, and (2) the tight coupling of the control and data planes restricts its manageability, and consequently the possibility of creating dynamic alternative paths for efficient content delivery. Here we present the CURLING system, which leverages the emerging Information-Centric Networking paradigm for enabling cost-efficient Internet-scale content delivery by exploiting multicasting and in-network caching. Following the software-defined networking concept that decouples the control and data planes, CURLING adopts an inter-domain hop-by-hop content resolution mechanism that allows network operators to dynamically enforce/change their network policies in locating content sources and optimizing content delivery paths. Content publishers and consumers may also control content access according to their preferences. Based on both analytical modelling and simulations using real domain-level Internet subtopologies, we demonstrate how CURLING supports efficient Internet-scale content delivery without the need for radical changes to the current Internet.
Data collection is a fundamental task of Wireless Sensor Networks (WSNs), supporting a variety of applications such as remote monitoring and emergency response, where collected information is relayed to an infrastructure network via packet gateways for processing and decision making. In large-scale monitoring scenarios, data packets need to be relayed over multi-hop paths to the gateways, and sensors are often randomly deployed, causing local differences in node density. As a result, imbalance in the data traffic load on the gateways is likely to occur. Furthermore, due to dynamic network conditions and differences in sensor data generation rates, congestion on some data paths is also often experienced. Numerous studies have focused on the problem of in-network traffic load balancing, while a few works have aimed at equalizing the loads on gateways. However, there is a potential trade-off between these two problems. In this paper, the dual objective of gateway and in-network load balancing is addressed and the RALB (Reactive and Adaptive Load Balancing) algorithm is presented. RALB is proposed as a generic solution for multi-hop networks and mesh topologies, especially in large-scale remote monitoring scenarios, to balance traffic loads.
Routing in delay/disruption-tolerant networks (DTNs) operates without the assumption of contemporaneous end-to-end connectivity for relaying messages. Geographic routing is an alternative approach that uses real-time geographic information instead of network topology information. However, when the destination is mobile, its real-time geographic information is often unavailable due to the sparse network density of DTNs. Using historical geographic information to overcome this problem, we propose converge-and-diverge (CaD), which combines two routing phases that depend on the proximity to the movement range estimated for the destination. The key insight is to promote message replication that first converges to the edge of this range and then diverges across the entire area of the range to achieve fast delivery within a limited message lifetime. Furthermore, the concept of delegation replication (DR) is explored to overcome the limitations of routing decisions and the local maximum problem. Evaluation results under the Helsinki city scenario show that CaD improves delivery ratio, average delivery latency, and overhead ratio. Since geographic routing in DTNs has not received much attention, beyond the design of CaD our novelty also lies in exploring DR to overcome the limitations of routing decisions and the local maximum problem, in addition to enhancing efficiency, as DR originally intended.
With the advent of Network Function Virtualization (NFV) techniques, a subset of Internet traffic will be treated by a chain of virtual network functions (VNFs) during its journey, while the rest of the background traffic will still be carried based on traditional routing protocols. Under such a multi-service network environment, we consider the co-existence of heterogeneous traffic control mechanisms: flexible, dynamic service function chaining (SFC) traffic control and static IP routing for these two types of traffic, which share common network resources. Depending on the traffic patterns of the background traffic, which is statically routed through the traditional IP routing platform, we aim to perform dynamic service function chaining for the foreground traffic requiring VNF treatment, so that both end-to-end SFC performance and overall network resource utilization can be optimized. Towards this end, we propose a deep reinforcement learning based scheme to enable intelligent SFC routing decision-making under dynamic network conditions. The proposed scheme is ready to be deployed on both hybrid SDN/IP platforms and future advanced IP environments. Based on the real GEANT network topology and its one-week traffic traces, our experiments show that the proposed scheme significantly improves upon the traditional routing paradigm and achieves close-to-optimal performance rapidly while satisfying the end-to-end SFC requirements.
Energy-aware traffic engineering (ETE) has been gaining increasing research attention due to the cost reduction benefits it can offer to network operators and for environmental reasons. While numerous approaches attempt to provide energy reduction benefits by intelligently manipulating network devices and their configurations, most of them suffer from one fundamental shortcoming: even minor adaptations to a given IP network topology configuration lead to temporary service disruptions incurred by routing re-convergence, which makes these schemes less appealing to network operators. The more frequently IP topology reconfigurations take place in order to optimize network performance against dynamic traffic demands, the more frequently service disruptions will occur for end users. Motivated by the essential requirement for network operators to provide seamless service assurance, we put forward a framework for disruption-free ETE, which leverages selective link sleeping and wake-up operations in a disruption-free manner. The framework maximizes the opportunities for disruption-free reconfigurations based on intelligent IGP link weight settings, assisted by a dynamic scheme that optimizes the reconfigurations in response to changing traffic conditions. As our simulation-based evaluation shows, the framework is capable of achieving significant energy saving gains while at the same time ensuring robustness in terms of disruption avoidance and resilience to congestion.
In recent years, Mobile Ad hoc Networks (MANETs) have attracted great interest all over the world for their high mobility and flexibility, while also posing some of the greatest challenges in wireless communications. As a special type of MANET, Vehicular Ad hoc Networks (VANETs) are considerably important in Next-Generation Networking (NGN). Unlike typical MANETs, VANETs are much more challenging due to high node velocity, which means classic MANET routing protocols cannot operate efficiently in such scenarios. This paper evaluates the performance of two routing protocols, DSDV and AODV, in various realistic scenarios, and proposes a DSDV optimization approach to improve DSDV's performance in VANETs.
In the coming era of telecommunications, the integration of satellite capabilities with emerging 5G technologies has been considered a promising solution to achieve assured user experiences in bandwidth-hungry content applications. In this paper, we present our design for emerging Multi-access Edge Computing (MEC) based Video on Demand (VoD) services, which efficiently utilizes an integrated satellite and terrestrial 5G network. Based on this framework, we propose and analyse the Video-segment Scheduling Network Function (VSNF), which is able to deliver an enhanced video consumption experience to end-users. We specifically consider the layered video scenario, where it is possible to intelligently schedule layers of video segments via parallel satellite and terrestrial backhaul links in 5G. The key technical challenge is to optimally schedule the layered video segments over the two network links, which have distinct characteristics, while attempting to enhance the Quality of Experience (QoE) for all end-users in a fair manner. We have conducted an extensive set of experiments using a real 5G testing framework in which the gNB is integrated with the core network using Geostationary Earth Orbit (GEO) satellite and terrestrial backhaul links. The results highlight the capability of our proposed content delivery framework to holistically deliver assured QoE, fairness among multiple video sessions, and optimised network resource efficiency. Index Terms: HTTP adaptive streaming, satellite and terrestrial network integration, 5G networks, quality of experience.
With the advent of various emerging network services in recent years, the current best-effort based Internet infrastructure has increasingly struggled to provide comprehensive support for these applications. Despite the QoS (Quality of Service) frameworks proposed in the 1990s, such as Integrated Services (IntServ) and Differentiated Services (DiffServ), large-scale deployments have not been seen across the global Internet to date, and this slow progress has significantly hindered the development of the relevant services. In addition, network resilience to failures has become another major concern for today's ISPs (Internet Service Providers), as QoS assurance to end-users may be severely impacted by the various failures that are common in operational networks today.
Holographic teleportation is an emerging media application allowing people or objects to be teleported in a real-time and immersive fashion into the virtual space on the audience side. Compared to traditional video content, the network requirements for supporting such applications are much more challenging. In this paper, we present a 5G edge computing framework for enabling remote production functions for live holographic teleportation applications. The key idea is to offload complex holographic content production functions from end-user premises to the 5G mobile edge in order to substantially reduce the cost of running such applications on the user side. We comprehensively evaluated how specific network-oriented and application-oriented factors may affect the performance of remote production operations based on 5G systems. Specifically, we tested the application performance along the following four dimensions: (1) different data rate requirements with multiple content resolution levels, (2) different transport-layer mechanisms over the 5G uplink radio, (3) different indoor/outdoor location environments with imperfect 5G connections, and (4) different object capturing scenarios, including the number of teleported objects and the number of sensor cameras required. Based on these evaluations we derive useful guidelines and policies for future remote production operations for holographic teleportation through 5G systems.
Multi-Protocol Label Switching (MPLS) has been considered a promising solution to achieve end-to-end QoS guarantees in Differentiated Services (DiffServ) domains [1]. Based on the Service Level Specification (SLS) between customers and the ISP, a traffic forecasting mechanism can predict traffic demands between ingress-egress routers, and hence bandwidth-guaranteed LSPs can be set up accordingly through the DiffServ domain. In this paper, we address the problem of computing multiple LSPs with heterogeneous bandwidth requirements while optimizing the overall network link cost. We first prove that finding a set of feasible bandwidth-constrained LSPs is NP-complete, and then propose an efficient heuristic with global network resource coordination over individual traffic aggregates. Through simulation we show that the proposed coordinated path selection (CPS) scheme achieves lower overall LSP cost and lower bandwidth consumption than existing bandwidth-constrained routing algorithms.
The energy consumption of backbone networks has risen exponentially during the past decade with the advent of various bandwidth-hungry applications. To address this serious issue, network operators are keen to identify new energy-saving techniques to green their networks. Up to this point, the optimization of IGP link weights has only been used for load-balancing operations in IP-based networks. In this paper, we introduce a novel link weight setting algorithm, the Green Load-balancing Algorithm (GLA), which is able to jointly optimize both energy efficiency and load-balancing in backbone networks without any modification to the underlying network protocols. The distinct advantage of GLA is that it can be directly applied on top of existing link-sleeping based Energy-aware Traffic Engineering (ETE) schemes in order to achieve substantially improved energy saving gains, while at the same time maintaining traditional traffic engineering objectives. In order to evaluate the performance of GLA without losing generality, we applied the scheme to a number of recently proposed but diverse ETE schemes based on link sleeping operations. Evaluation results based on the European academic network topology GÉANT and its real traffic matrices show that GLA is able to achieve significantly improved energy efficiency compared to the original standalone algorithms, while also achieving near-optimal load-balancing performance. In addition, we further consider end-to-end traffic delay requirements, since the optimization of link weights for load-balancing and energy savings may introduce substantially increased traffic delay after link sleeping. In order to solve this issue, we modified the existing ETE schemes to improve their end-to-end traffic delay performance. The evaluation of the modified ETE schemes together with GLA shows that it is still possible to save a significant amount of energy while achieving substantial load-balancing within a given traffic delay constraint.
The Internet-of-Things (IoT) paradigm envisions billions of devices, all connected to the Internet, generating low-rate monitoring and measurement data to be delivered to application servers or end-users. Recently, the possibility of applying in-network data caching techniques to IoT traffic flows has been discussed in research forums. The main challenge, compared with the content typically cached at routers (e.g. multimedia files), is that IoT data are transient and therefore require different caching policies. In fact, emerging location-based services can also benefit from new caching techniques specifically designed for small transient data. This paper studies in-network caching of transient data at content routers, considering a key temporal data property: data item lifetime. An analytical model that captures the trade-off between multi-hop communication costs and data item freshness is proposed. Simulation results demonstrate that caching transient data is a promising information-centric networking technique that can reduce the distance between content requesters and the location in the network from which the content is fetched. To the best of our knowledge, this is a pioneering research work aiming to systematically analyse the feasibility and benefit of using Internet routers to cache transient data generated by IoT applications.
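The lifetime-aware behaviour can be sketched as follows. The class and parameter names are hypothetical, and the paper's model additionally weighs multi-hop communication cost against freshness, which this sketch omits:

```python
import time

class TransientCache:
    """Toy cache for transient IoT data items with per-item lifetimes.

    A hit is only valid while the item's age is within its lifetime;
    expired items are evicted and must be refetched from the source.
    """

    def __init__(self):
        self.store = {}  # name -> (value, insertion_timestamp, lifetime_s)

    def put(self, name, value, lifetime_s):
        self.store[name] = (value, time.time(), lifetime_s)

    def get(self, name, now=None):
        entry = self.store.get(name)
        if entry is None:
            return None
        value, ts, lifetime = entry
        now = time.time() if now is None else now
        if now - ts > lifetime:      # stale: transient data has expired
            del self.store[name]
            return None
        return value
```

Unlike caching of static multimedia objects, the eviction decision here is driven by data freshness rather than by capacity pressure alone.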
In this letter, we analyse the trade-off between collision probability and code-ambiguity, when devices transmit a sequence of preambles as a codeword, instead of a single preamble, to reduce collision probability during random access to a mobile network. We point out that the network may not have sufficient resources to allocate to every possible codeword, and if it does, then this results in low utilisation of allocated uplink resources. We derive the optimal preamble set size that maximises the probability of success in a single attempt, for a given number of devices and uplink resources.
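Under a toy model where each of n devices independently picks one of M^k codewords uniformly at random (a simplification of the letter's analysis, with hypothetical function names), the single-attempt success probability and its trade-off with limited uplink grants can be sketched as:

```python
def p_success(n_devices, preamble_set_size, codeword_len):
    """Probability that a tagged device's codeword collides with no other
    device, assuming uniform independent choices over M^k codewords."""
    codewords = preamble_set_size ** codeword_len
    return (1 - 1 / codewords) ** (n_devices - 1)

def best_set_size(n_devices, codeword_len, uplink_grants, max_set_size=64):
    """Scan preamble set sizes M and pick the one maximizing success.

    A larger M cuts collisions, but once the codeword space exceeds the
    uplink grants the network can allocate, the extra codewords are
    unusable, so success probability stops improving.
    """
    best, best_p = None, -1.0
    for m in range(1, max_set_size + 1):
        usable = min(m ** codeword_len, uplink_grants)
        p = (1 - 1 / usable) ** (n_devices - 1) if usable > 1 else 0.0
        if p > best_p:
            best, best_p = m, p
    return best
```

The plateau once M^k reaches the grant budget is the resource-utilisation effect the letter points out: beyond that point, enlarging the preamble set only wastes codeword space.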