Dr Rizwan Asghar
Academic and research departments
Surrey Centre for Cyber Security, Computer Science Research Centre, School of Computer Science and Electronic Engineering.
Biography
Rizwan joined the Surrey Centre for Cyber Security (SCCS) at the University of Surrey as a Reader in 2022. He is also an Honorary Academic in the School of Computer Science at the University of Auckland in New Zealand. Before that, he was a Senior Lecturer (above the bar) at the University of Auckland, which he joined in 2015. Prior to that, he was a Postdoctoral Researcher at international research institutes, including the Center for IT-Security, Privacy, and Accountability (CISPA) at Saarland University in Germany and CREATE-NET (an international research centre) in Trento, Italy, where he also served as a Researcher.
He received his PhD degree in ICT (Security and Privacy) from the University of Trento, Italy in 2013. As part of his PhD programme, he was a Stanford Research Institute (SRI) Fellow at SRI International, California, USA. He obtained his MSc degree in Computer Science and Engineering - Information Security Technology from the Eindhoven University of Technology (TU/e), the Netherlands in 2009 and carried out his research as a Master Thesis Student at the Ericsson Research Eurolab, Germany. During his career, he also worked as a Software Engineer at international software companies.
He is an award-winning teacher and researcher who has received several awards, including the 2017 Dean's Award for Teaching Excellence, the Best Paper Award at TrustCom 2018, a Highly Commended Paper at ITNAC 2017, and the Best Paper Award at FiCloud 2015.
University roles and responsibilities
- Academic Representative on Senate
- MSc Admissions Lead
- Personal Tutor
Research
Research interests
His research interests include cyber resilience, privacy, cyber security, and access control. He is passionate about developing privacy-preserving systems and improving cyber resilience. He successfully led several research projects on secure storage, access control mechanisms, data provenance, consent management, usable authentication techniques, secure IoT networks, communications security, cyber security and privacy in social media, and cyber security education.
Teaching
COMM050: Information Security for Business and Government (Module Leader)
COMM037: Information Security Management (Module Convener)
COMM002: MSc Dissertation (Dissertation Supervisor)
COM3001: Professional Project (Project Supervisor)
Publications
Information-Centric Networking (ICN) has emerged as a paradigm to cope with the increasing demand for content delivery on the Internet. In contrast to the Internet Protocol (IP), the underlying architecture of ICN enables users to request contents based on their name rather than their hosting location (IP address). On the one hand, this preserves users’ anonymity since packet routing does not require source and destination addresses of the communicating parties. On the other hand, semantically rich names reveal information about users’ interests, which poses serious threats to their privacy. A curious ICN node can, for instance, monitor the traffic to profile users or censor specific contents. In this paper, we present PrivICN: a system that enhances users’ privacy in ICN by protecting the confidentiality of content names and content data. PrivICN relies on a proxy encryption scheme and has several features that distinguish it from existing solutions: it preserves full in-network caching benefits, it does not require end-to-end communication between consumers and providers, and it provides flexible user management (addition/removal of users). We evaluate PrivICN in a real ICN network (CCNx implementation), showing that it introduces an acceptable overhead and little delay. PrivICN is publicly available as an open-source library.
Teaching cyber security techniques can be challenging due to the complexity associated with building secure systems. The major issue is that such systems can easily be broken if proper protection techniques are not employed. This requires students to understand the offensive approaches that can be used to breach security in order to better understand how to properly defend against cyber attacks. We present a novel approach to teaching cyber security in a graduate course using an innovative assessment task that engages students in both software obfuscation and reverse engineering of obfuscated code. Students involved in the activities gain an appreciation of the challenges in defending against attacks. Our results demonstrate a positive change in the students' perception during the learning process.
A Content Delivery Network (CDN) is a distributed system composed of a large number of nodes that allows users to request objects from nearby nodes. A CDN not only reduces the end-to-end latency on the user side but also offloads Content Providers (CPs), providing resilience against Distributed Denial of Service (DDoS) attacks. However, by caching objects and processing users' requests, CDN service providers could infer user preferences and the popularity of objects, thus resulting in information leakage. Unfortunately, such information leakage may compromise users' privacy and reveal business-specific information to untrusted or potentially malicious CDN providers. State-of-the-art Searchable Encryption (SE) schemes can protect the content of sensitive objects but cannot prevent the CDN providers from inferring users' preferences and the popularity of objects. In this work, we present a privacy-preserving encrypted CDN system not only to hide the content of objects and users' requests, but also to protect users' preferences and the popularity of objects from curious CDN providers. We encrypt the objects and user requests in a way that both the CDNs and CPs can perform the search operations without accessing those objects and requests in cleartext. Our proposed system is based on a scalable key management approach for multi-user access, where no key regeneration and data re-encryption are needed for user revocation.
Cyber resilience quantification is the process of evaluating and measuring an organisation’s ability to withstand, adapt to, and recover from cyber-attacks. It involves assessing IT systems, networks, and response strategies to ensure robust defence and effective recovery mechanisms in the event of a cyber-attack. Quantifying cyber resilience can be difficult due to the constantly changing components of IT infrastructure, and traditional methods like vulnerability assessments and penetration testing may not be effective. Measuring cyber resilience is essential to evaluate and strengthen an organisation’s preparedness against evolving cyber-attacks. It helps identify weaknesses, allocate resources, and ensure the uninterrupted operation of critical systems and information. There are various methods for measuring cyber resilience, such as evaluation, teaming and testing, and simulation modelling. This article proposes a cyber resilience quantification framework for IT infrastructure that utilises a simulation approach. This approach enables organisations to simulate different attack scenarios, identify vulnerabilities, and improve their cyber resilience. The comparative analysis of cyber resilience factors highlights pre-configuration’s robust planning and adaptation (61.44%), buffering support’s initial readiness (44.53%), and network topologies’ robust planning but weak recovery and adaptation (60.04% to 77.86%), underscoring the need for comprehensive enhancements across all phases. The utilisation of the proposed factors is crucial in conducting a comprehensive evaluation of IT infrastructure in the event of a cyber-attack.
Searchable Encryption (SE) is a technique that allows Cloud Service Providers (CSPs) to search over encrypted datasets without learning the content of queries and records. In recent years, many SE schemes have been proposed to protect outsourced data from CSPs. Unfortunately, most of them leak sensitive information, from which the CSPs could still infer the content of queries and records by mounting leakage-based inference attacks, such as the count attack and file injection attack. In this work, first we define the leakage in searchable encrypted databases and analyse how the leakage is leveraged in existing leakage-based attacks. Second, we propose a Privacy-preserving Multi-cloud based dynamic symmetric SE (SSE) scheme for relational Database (P-McDb). P-McDb has minimal leakage, which not only ensures confidentiality of queries and records, but also protects the search, access, and size patterns from CSPs. Moreover, P-McDb ensures both forward and backward privacy of the database. Thus, P-McDb could resist existing leakage-based attacks, e.g., active file/record-injection attacks. We give security definition and analysis to show how P-McDb hides the aforementioned patterns. Finally, we implemented a prototype of P-McDb and test it using the TPC-H benchmark dataset. Our evaluation results show the feasibility and practical efficiency of P-McDb.
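The abstract above hinges on the idea that a server can match encrypted queries against encrypted records without seeing either in cleartext. The deliberately naive sketch below (all names are hypothetical; this is not the P-McDb construction) uses deterministic HMAC-based search tokens; note that it leaks exactly the search and access patterns that P-McDb is designed to hide, which is the attack surface the paper targets.

```python
import hmac
import hashlib

def search_token(key: bytes, keyword: str) -> str:
    """Deterministic token: the same keyword under the same key
    always yields the same token, so the server can match without
    learning the keyword itself."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).hexdigest()

class NaiveSSEIndex:
    """Server-side index mapping tokens to (encrypted) record ids."""
    def __init__(self):
        self.index = {}

    def add(self, token: str, record_id: str):
        self.index.setdefault(token, []).append(record_id)

    def search(self, token: str):
        # The server never sees the keyword, but it does learn which
        # records match (the access pattern) and when the same token
        # repeats (the search pattern).
        return self.index.get(token, [])

key = b"client-secret-key"
server = NaiveSSEIndex()
server.add(search_token(key, "diabetes"), "rec-001")
server.add(search_token(key, "diabetes"), "rec-007")
server.add(search_token(key, "asthma"), "rec-002")
```

Leakage-based attacks such as the count attack exploit precisely these repeated-token and matching-record patterns, which motivates the pattern-hiding goals described in the abstract.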
The process of identifying true anomalies in a given set of data instances is known as anomaly detection. It has been applied to address a diverse set of problems in multiple application domains, including cybersecurity. Deep learning has recently demonstrated state-of-the-art performance on key anomaly detection applications, such as intrusion detection, Denial of Service (DoS) attack detection, security log analysis, and malware detection. Despite the great successes achieved by neural network architectures, models with very low test error have been shown to be consistently vulnerable to small, adversarially chosen perturbations of the input. The existence of evasion attacks during the test phase of machine learning algorithms represents a significant challenge to both their deployment and understanding. Recent approaches in the literature have focused on three different areas: (a) generating adversarial examples in supervised machine learning in multiple domains; (b) countering the attacks with various defenses; and (c) providing theoretical guarantees on the robustness of machine learning models by understanding their security properties. However, they have not covered these areas from the perspective of the anomaly detection task in a black-box setting. The exploration of black-box attack strategies, which reduce the number of queries needed to find adversarial examples with high probability, is an important problem. In this paper, we study the security of black-box deep anomaly detectors with a realistic threat model. We propose a novel black-box attack in query-constrained settings. First, we run manifold approximation on samples collected at the attacker's end for query reduction and for understanding the various thresholds set by the underlying anomaly detector, and we use spherical adversarial subspaces to generate attack samples.
This method is well suited for attacking anomaly detectors where the decision boundaries of the nominal and abnormal classes are not very well defined and the decision process relies on a set of thresholds on anomaly scores. We validate our attack on state-of-the-art deep anomaly detectors and show that the attacker's goal is achieved under constrained settings. Our evaluation of the proposed approach shows promising results and demonstrates that our strategy can be successfully used against other anomaly detectors.
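The query-budgeted, sphere-based search described above can be roughly illustrated as follows: sample candidate perturbations on spheres of decreasing radius around an anomalous input until one falls below the detector's threshold. The detector, threshold, and radii below are invented for illustration; this is a conceptual sketch, not the paper's algorithm.

```python
import math
import random

def sample_on_sphere(center, radius, rng):
    """Sample a point uniformly on the sphere of the given radius
    around `center` by normalising a Gaussian direction vector."""
    direction = [rng.gauss(0.0, 1.0) for _ in center]
    norm = math.sqrt(sum(d * d for d in direction))
    return [c + radius * d / norm for c, d in zip(center, direction)]

def anomaly_score(x):
    # Toy stand-in for a deep anomaly detector: score is the
    # distance of the input from the origin.
    return math.sqrt(sum(v * v for v in x))

THRESHOLD = 1.0
rng = random.Random(42)
anomalous = [1.2, 0.9]   # flagged input: its score (1.5) exceeds THRESHOLD

# Query-budgeted search: probe candidates on shrinking spheres until
# one evades the detector (score at or below the threshold).
evading = None
queries = 0
for radius in (0.8, 0.6, 0.4):
    for _ in range(40):
        candidate = sample_on_sphere(anomalous, radius, rng)
        queries += 1
        if anomaly_score(candidate) <= THRESHOLD:
            evading = candidate
            break
    if evading is not None:
        break
```

The budget (here at most 120 queries) models the query-constrained setting: each probe is one black-box call to the detector, and the spherical sampling concentrates probes at a fixed perturbation size rather than searching the whole input space.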
Cyber resilience has become a major concern for both academia and industry due to the increasing number of data breaches caused by the expanding attack surface of existing IT infrastructure. Cyber resilience refers to an organisation’s ability to prepare for, absorb, recover from, and adapt to adverse effects typically caused by cyber-attacks that affect business operations. In this survey, we aim to identify the significant domains of cyber resilience and measure their effectiveness. We have selected these domains based on a literature review of frameworks, strategies, applications, tools, and technologies. We have outlined the cyber resilience requirements for each domain and explored solutions related to each requirement in detail. We have also compared and analysed different studies in each domain to find other ways of enhancing cyber resilience. Furthermore, we have compared cyber resilience frameworks and strategies based on technical requirements for various applications. We have also elaborated on techniques for improving cyber resilience. In the supplementary section, we have presented applications that have implemented cyber resilience. This survey comprehensively compares various popular cyber resilience tools to help researchers, practitioners, and organisations choose the best practices for enhancing cyber resilience. Finally, we have shared key findings, limitations, problems, and future directions.
Social media has become an integral part of modern-day society. With increasingly digital societies, individuals have become more familiar and comfortable in using Online Social Networks (OSNs) for just about every aspect of their lives. This higher level of comfort leads to users spilling their emotions on OSNs and eventually their private information. In this work, we aim to investigate the relationship between users’ emotions and private information in their tweets. Our research question is whether users’ emotions, expressed in their tweets, affect their likelihood to reveal their own private information (privacy leakage) in subsequent tweets. In contrast to existing survey-based approaches, we use an inductive, data-driven approach to answer our research question. We use state-of-the-art techniques to classify users’ emotions and privacy scoring, and employ a new technique involving BERT for binary detection of sensitive data. We use two parallel classification frameworks: one that takes the user’s emotional state into account and the other for the detection of sensitive data in tweets. Consecutively, we identify individual cases of correlation between the two. We bring the two classifiers together to interpret the changes in both factors over time during a conversation between individuals. Variations were found with respect to the kinds of private information revealed in different states. Our results show that being in negative emotional states, such as sadness, anger or fear, leads to higher privacy leakage than otherwise.
Searchable encryption allows users to execute encrypted queries over encrypted databases. Several encryption schemes have been proposed in the literature but they leak sensitive information that could lead to inference attacks. We propose ObliviousDB, a searchable encryption scheme for an outsourced database that limits information leakage. Moreover, our scheme allows users to execute SQL-like queries on encrypted data and efficiently supports multi-user access without requiring key sharing. We have implemented ObliviousDB and show its practical efficiency.
As the communication industry is progressing towards the fifth generation (5G) of cellular networks, the traffic it carries is also shifting from high data rate traffic from cellular users to a mixture of high data rate and low data rate traffic from Internet of Things (IoT) applications. Moreover, the need to efficiently access Internet data is also increasing across 5G networks. Caching contents at the network edge is considered a promising approach to reduce the delivery time. In this paper, we propose a marketplace for providing a number of caching options for a broad range of applications. In addition, we propose a security scheme to secure the caching contents, with the simultaneous potential of reducing duplicate contents on the caching server by dividing a file into smaller chunks. We model different caching scenarios in NS-3 and present the performance evaluation of our proposal in terms of latency and throughput gains for various chunk sizes.
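The chunk-level deduplication idea mentioned above can be sketched as follows: files are split into fixed-size chunks and stored by content hash, so chunks shared between files are stored only once. This is a minimal, unencrypted illustration of the dedup mechanism only; the paper's security scheme over the chunks is not reproduced here, and the tiny chunk size is purely for demonstration.

```python
import hashlib

CHUNK_SIZE = 4  # tiny for illustration; real systems use e.g. 4 KB or larger

def chunk(data: bytes):
    """Split data into fixed-size chunks."""
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

class DedupStore:
    """Content-addressed store: identical chunks are stored only once."""
    def __init__(self):
        self.chunks = {}   # SHA-256 hex digest -> chunk bytes

    def put(self, data: bytes):
        refs = []
        for c in chunk(data):
            digest = hashlib.sha256(c).hexdigest()
            self.chunks.setdefault(digest, c)   # duplicates are not re-stored
            refs.append(digest)
        return refs   # the recipe needed to reassemble the file

    def get(self, refs):
        return b"".join(self.chunks[d] for d in refs)

store = DedupStore()
r1 = store.put(b"AAAABBBBCCCC")   # 3 chunks
r2 = store.put(b"AAAABBBBDDDD")   # shares its first 2 chunks with the first file
```

Although six chunks are written in total, only four distinct chunks occupy storage, which is the space saving the abstract refers to.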
Cloud storage is a cheap and reliable solution for users to share data with their contacts. However, the lack of standardisation and migration tools makes it difficult for users to migrate to another Cloud Service Provider (CSP) without losing contacts, thus resulting in a vendor lock-in problem. In this work, we aim at providing a generic framework, named PortableCloud, that is flexible enough to enable users to migrate seamlessly to a different CSP keeping all their data and contacts. To preserve the privacy of users, the data in the portable cloud is concealed from the CSP by employing encryption techniques. Moreover, we introduce a migration agent that assists users in automatically finding a suitable CSP that can satisfy their needs.
The notion of patient's consent plays a major role in granting access to medical data. In typical healthcare systems, consent is captured by a form that the patient has to fill in and sign. In e-Health systems, the paper-form consent is being replaced by the integration of the notion of consent in the mechanisms that regulate the access to the medical data. This helps in empowering the patient with the capability of granting and revoking consent in a more effective manner. However, the process of granting and revoking consent greatly varies according to the situation in which the patient is. Our main argument is that such a level of detail is very difficult and error-prone to capture as a set of authorisation policies. In this paper, we present ACTORS, a goal-driven approach to manage consent. The main idea behind ACTORS is to leverage the goal-driven approach of Teleo-Reactive (TR) programming for managing consent that takes into account changes regarding the domains and contexts in which the patient is providing her consent.
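A Teleo-Reactive program is essentially an ordered list of condition-action rules in which the first rule whose condition holds fires, and the whole list is re-evaluated whenever the context changes. A minimal sketch of this evaluation style applied to consent is given below; the rule names and context keys are hypothetical, not taken from ACTORS.

```python
# Ordered (condition, action) rules; the first rule whose condition
# holds fires, mimicking Teleo-Reactive rule evaluation.
def tr_consent_decision(ctx: dict) -> str:
    rules = [
        # Emergency access overrides ordinary consent handling.
        (lambda c: c.get("emergency"), "grant_emergency_access"),
        # Revocation takes priority over any earlier grant.
        (lambda c: c.get("consent_revoked"), "deny"),
        # Ordinary case: consent given and requester is authorised.
        (lambda c: c.get("consent_given")
                   and c.get("requester_role") == "treating_doctor", "grant"),
        # Default: deny when nothing above applies.
        (lambda c: True, "deny"),
    ]
    for condition, action in rules:
        if condition(ctx):
            return action

# The decision is re-evaluated whenever the patient's context changes,
# which is how goal-driven behaviour adapts to domain and context shifts.
```

Because the rules are ordered, context changes such as a revocation or an emergency immediately change the outcome on the next evaluation, without rewriting any policy.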
Photo Response Non-Uniformity (PRNU) noise-based source camera attribution is a popular digital forensic method. In this method, a camera fingerprint computed from a set of known images of the camera is matched against the extracted noise of an anonymous questionable image to determine whether the camera took the anonymous image. The possibility of a privacy leak, however, is one of the main concerns of the PRNU-based method. Using the camera fingerprint (or the extracted noise), an adversary can identify the owner of the camera by matching the fingerprint with the noise of an image (or with the fingerprint computed from a set of images) crawled from a social media account. In this paper, we address this privacy concern by encrypting both the fingerprint and the noise using the Boneh-Goh-Nissim (BGN) encryption scheme, and performing the matching in the encrypted domain. To prevent leakage of privacy from the content of an image that is used in the fingerprint calculation, we compute the fingerprint within a trusted environment, such as ARM TrustZone. We present PANDORA, which aims at minimizing privacy loss and allows authorized forensic experts to perform camera attribution.
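The plaintext core of PRNU matching, estimating a fingerprint as the mean of noise residuals and comparing it to a query residual via normalized correlation, can be sketched on synthetic 1-D signals as below. Only the cleartext matching step is shown; the paper's contribution is performing this matching under BGN encryption inside a trusted environment, which is not reproduced here.

```python
import math
import random

def fingerprint(residuals):
    """Estimate the camera fingerprint as the element-wise mean of
    noise residuals extracted from known images of the camera."""
    n = len(residuals)
    return [sum(r[i] for r in residuals) / n for i in range(len(residuals[0]))]

def ncc(a, b):
    """Normalized cross-correlation between two signals."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den

rng = random.Random(0)
true_prnu = [rng.gauss(0, 1) for _ in range(256)]

# Residuals from the same camera = PRNU pattern + per-image noise.
known = [[f + rng.gauss(0, 0.5) for f in true_prnu] for _ in range(8)]
fp = fingerprint(known)

same_camera = [f + rng.gauss(0, 0.5) for f in true_prnu]   # matching residual
other_camera = [rng.gauss(0, 1) for _ in range(256)]        # unrelated residual
```

Attribution then reduces to thresholding the correlation: residuals from the same camera correlate strongly with the fingerprint, while unrelated residuals correlate near zero.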
Cloud computing is an established paradigm that attracts enterprises by offsetting costs to more competitive outsourced data centres. Considering the economic benefits offered by this paradigm, organisations could outsource data storage and computational services. However, data in the cloud environment is within easy reach of service providers. One of the strong obstacles to widespread adoption of the cloud is preserving the confidentiality of the data. Generally, confidentiality of the data can be guaranteed by employing existing encryption schemes. For regulating access to the data, organisations require access control mechanisms. Unfortunately, access policies in clear text might leak information about the data they aim to protect. The major research challenge is to enforce dynamic access policies at runtime, i.e., the enforcement of dynamic security constraints (including dynamic separation of duties and the Chinese wall) in the cloud. The main challenge lies in the fact that dynamic security constraints require a notion of sessions for managing access histories, which might leak information about the sensitive data if available as clear text in the cloud. In this paper, we present E-GRANT: an architecture able to enforce dynamic security constraints without relying on a trusted infrastructure, which can be deployed as Software-as-a-Service (SaaS). In E-GRANT, sessions' access histories are encrypted in such a way that enforcement of constraints is still possible. As a proof of concept, we have implemented a prototype and provide a preliminary performance analysis showing a limited overhead, thus confirming the feasibility of our approach.
Opportunistic networks have recently received considerable attention from both industry and researchers. These networks can be used for many applications without the need for a dedicated IT infrastructure. In the context of opportunistic networks, content sharing in particular has attracted significant attention. To support content sharing, opportunistic networks often implement a publish-subscribe system in which users may publish their own content and indicate interest in other content through subscriptions. Using a smartphone, any user can act as a broker by opportunistically forwarding both published content and interests within the network. Unfortunately, opportunistic networks are faced with serious privacy and security issues. Untrusted brokers can not only compromise the privacy of subscribers by learning their interests but can also gain unauthorised access to the disseminated content. This paper addresses the research challenges inherent in exchanging content and interests without (i) compromising the privacy of subscribers, and (ii) providing unauthorised access to untrusted brokers. Specifically, this paper presents an interest and content sharing solution that addresses these security challenges and preserves privacy in opportunistic networks. We demonstrate the feasibility and efficiency of the solution by implementing a prototype and analysing its performance on smartphones.
Automated and smart meters are devices that are able to monitor the energy consumption of electricity consumers in near real-time. They are considered key technological enablers of the smart grid, as the real-time consumption data that they can collect could enable new sophisticated billing schemes, could facilitate more efficient power distribution system operation and could give rise to a variety of value-added services. At the same time, the energy consumption data that the meters collect are sensitive consumer information; thus, privacy is a key concern and is a major inhibitor of real-time data collection in practice. In this paper, we review the different uses of metering data in the smart grid and the related privacy legislation. We then provide a structured overview of the security solutions needed for privacy-preserving meter data delivery and management, along with their shortcomings, recommendations, and research directions. We finally survey recent work on privacy-preserving technologies for meter data collection for three application areas: 1) billing; 2) operations; and 3) value-added services, including demand response.
Advances in computing and compression technology, coupled with high-speed networks, have ushered in an era of video streaming on the Internet. This has led to a need to enhance the security of communications transporting data without degrading performance. The Transport Layer Security (TLS) protocol negotiates configurations for securing communication channels. Such negotiations adversely impact latency, thereby presenting a fundamental tradeoff between security and efficiency. In this work, we present a conceptual framework, called SEC-QUIC (Secure and Efficient Configurations for QUIC), that focuses on optimizing this tradeoff specifically for video transmissions by investigating various factors in Quick UDP Internet Connections (QUIC). Transport-layer elements, such as maximum transmission unit (MTU) sizes, cipher suites, and the ACK timer, are examined to evaluate their impact on the security-efficiency tradeoff in QUIC-based video transmissions using platform-based experiments. Subsequently, we develop a conceptual framework to leverage QUIC's dynamics based on the context of a connection to optimize the security-efficiency tradeoff. Our findings demonstrate the need to alter default configurations based on the contextual factors of a connection (e.g., resource constraints and network conditions) in QUIC-based video transmissions to balance the tradeoff. Experiments reveal that an MTU of 1400 bytes achieves 60% better throughput than an MTU of 1200 bytes, while also using 4% less CPU on average, for the transmission of 100 MB video files. Overall, our experiments suggest that fine-tuning performance- and security-related configurations is an effective approach to optimizing the security-efficiency tradeoff in video transmissions.
In the Public Key Infrastructure (PKI) model, digital certificates play a vital role in securing online communication. Communicating parties exchange and validate these certificates; validation fails if a certificate has been revoked. In this paper, we propose the Certificate Revocation Guard (CRG) to efficiently check certificate revocation while minimising bandwidth, latency and storage overheads. CRG is based on OCSP and caches the status of certificates locally. CRG could be installed on the user's machine, at the organisational proxy or even at the ISP level. Compared to a naive approach (where a client checks the revocation status of all certificates in the chain on every request), CRG decreases the bandwidth overheads and network latencies by 95%. Using CRG incurs 69% lower storage overheads compared to the CRL method. Our results demonstrate the effectiveness of our approach in improving certificate revocation.
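The caching behaviour that drives CRG's bandwidth and latency savings can be illustrated with a minimal sketch: revocation status is fetched once per certificate and then served locally until a time-to-live expires. The class and simulated responder below are hypothetical stand-ins; real OCSP involves signed responses, nonces, and nextUpdate semantics that are omitted here.

```python
import time

class RevocationCache:
    """Caches certificate revocation status locally, OCSP-style,
    so repeated checks avoid a network round trip."""
    def __init__(self, fetch_status, ttl_seconds=3600):
        self.fetch_status = fetch_status   # would be a real OCSP query in practice
        self.ttl = ttl_seconds
        self.cache = {}                    # serial -> (status, fetched_at)
        self.remote_queries = 0

    def check(self, serial: str) -> str:
        entry = self.cache.get(serial)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]                # cache hit: no network traffic
        self.remote_queries += 1
        status = self.fetch_status(serial)
        self.cache[serial] = (status, time.time())
        return status

# Simulated responder standing in for a real OCSP endpoint.
revoked = {"0xDEAD"}
guard = RevocationCache(lambda s: "revoked" if s in revoked else "good")

guard.check("0xBEEF")   # first check: remote query
guard.check("0xBEEF")   # repeat check: served from cache
guard.check("0xDEAD")   # new serial: remote query
```

Every cache hit is a check that the naive approach would have sent over the network, which is where the claimed bandwidth and latency reductions come from.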
Most recent theoretical literature on program obfuscation is based on notions like Virtual Black Box (VBB) obfuscation and indistinguishability Obfuscation (iO). These notions are very strong and are hard to satisfy. Further, they offer far more protection than is typically required in practical applications. On the other hand, the security notions introduced by software security researchers are suitable for practical designs but are not formal or precise enough to enable researchers to provide a quantitative security assurance. Hence, in this paper, we introduce a new formalism for practical program obfuscation that still allows rigorous security proofs. We believe our formalism will make it easier to analyse the security of obfuscation schemes. To show the flexibility and power of our formalism, we give a number of examples. Moreover, we explain the close relationship between our formalism and the task of providing obfuscation challenges. This is the full version of the paper. In this version, we also give a new rigorous analysis of several obfuscation techniques and we provide directions for future research.
Global cybersecurity crises have compelled universities to address the demand for educated cybersecurity professionals. As no shared framework for cybersecurity as an academic discipline exists, growth has been unfocused and driven by training materials, which makes it harder to create a common body of knowledge. An international perspective is harder still, as different nations use different criteria to define local needs. As a result, new programs entering this space are on their own to conceptualize, design, package and market their programs, as there is no globally accepted reference model for cybersecurity to allow employers or students to understand the extent of a given cybersecurity program. Building on prior efforts at ITiCSE 2010 and 2011, other sources and participant experiences, this working group will develop a taxonomy of approaches to cybersecurity education, capture its dimensions, and develop a corresponding global reference model.
Network Intrusion Detection Systems (NIDSes) are crucial for securing various networks from malicious attacks. Recent developments in Deep Neural Networks (DNNs) have encouraged researchers to incorporate DNNs as the underlying detection engine for NIDS. However, DNNs are susceptible to adversarial attacks, where subtle modifications to input data result in misclassification, posing a significant threat to security-sensitive domains such as NIDS. Existing efforts in adversarial defenses predominantly focus on supervised classification tasks in Computer Vision, differing substantially from the unsupervised outlier detection tasks in NIDS. To bridge this gap, we introduce a novel method of generalized adversarial robustness and present NIDS-Vis, an innovative black-box algorithm that traverses the decision boundary of DNN-based NIDSes near given inputs. Through NIDS-Vis, we can visualize the geometry of the decision boundaries and examine their impact on performance and adversarial robustness. Our experiment uncovers a tradeoff between performance and robustness, and we propose two novel training techniques, feature space partition and distributional loss function, to enhance the generalized adversarial robustness of DNN-based NIDSes without significantly compromising performance.
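A black-box decision-boundary traversal of the kind NIDS-Vis performs can be sketched in one dimension: given a benign input and an anomalous input, bisect between them using only label queries to locate the point where the detector's decision flips. The toy detector and threshold below are invented for illustration and are not the paper's algorithm.

```python
def anomaly_score(x: float) -> float:
    # Toy stand-in for a DNN-based NIDS: score grows with input magnitude.
    return abs(x)

THRESHOLD = 1.0

def is_anomalous(x: float) -> bool:
    """Black-box label query: the attacker sees only the binary verdict."""
    return anomaly_score(x) > THRESHOLD

def find_boundary(benign: float, anomalous: float, queries: int = 30) -> float:
    """Bisect between a benign and an anomalous input to locate the
    decision boundary using only black-box label queries."""
    lo, hi = benign, anomalous
    for _ in range(queries):
        mid = (lo + hi) / 2
        if is_anomalous(mid):
            hi = mid      # boundary lies between lo and mid
        else:
            lo = mid      # boundary lies between mid and hi
    return (lo + hi) / 2

boundary = find_boundary(0.0, 3.0)
```

Repeating this probe along many directions around a given input yields a picture of the local boundary geometry, which is the kind of visualization the abstract describes for studying the performance-robustness tradeoff.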
With the evolution of cloud computing, organizations are outsourcing the storage and rendering of volume (i.e., 3D) data to cloud servers. Data confidentiality at the third-party cloud provider, however, is one of the main challenges. In this paper, we address this challenge by proposing 3DCrypt, a modified Paillier cryptosystem scheme for multi-user settings that allows cloud datacenters to render the encrypted volume. The rendering technique we consider in this work is pre-classification volume ray-casting. 3DCrypt is such that multiple users can render volumes without sharing any encryption keys. 3DCrypt's storage and computational overheads are approximately 66.3 MB and 27 seconds, respectively, when rendering is performed on a 256 x 256 x 256 volume for a 256 x 256 image space.
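3DCrypt builds on the Paillier cryptosystem, whose key property is that multiplying ciphertexts adds the underlying plaintexts, which is what lets a server combine encrypted voxel contributions without decrypting them. A toy Paillier with tiny primes (insecure, for illustration only; not the paper's modified multi-user scheme) demonstrates this additive homomorphism:

```python
import math
import random

# Toy Paillier parameters -- far too small to be secure.
p, q = 5, 7
n = p * q            # public modulus
n2 = n * n
g = n + 1            # standard generator choice g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m, rng):
    while True:
        r = rng.randrange(1, n)
        if math.gcd(r, n) == 1:       # r must be a unit mod n
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

rng = random.Random(1)
c1 = encrypt(3, rng)
c2 = encrypt(4, rng)

# Additive homomorphism: multiplying ciphertexts adds plaintexts (mod n).
c_sum = (c1 * c2) % n2
```

In a rendering setting, this property is what allows weighted sums over encrypted voxel values to be accumulated server-side, with decryption happening only at the client.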
The COVID‐19 pandemic introduced a new norm that changed the way we work and live. During these unprecedented times, most organizations expected their employees to work from home. Remote working created new opportunities for hackers since more users were making use of digital platforms for online shopping, Virtual Private Networks (VPNs), videoconferencing platforms, and similar software. Consequently, cybercrime increased due to the increase in the attack surface, and software vulnerabilities were exploited for launching cyberattacks. There is existing research that explores vulnerability disclosure on Twitter. However, there is a lack of study on opportunistic targeted attacks where specific vulnerabilities are exploited in a way that benefits adversaries the most in times such as COVID‐19. The primary aim of this work is to study the effectiveness of vulnerability disclosure patterns on Twitter during COVID‐19, and to discuss how Twitter can be leveraged as Open‐Source Intelligence (OSINT) during a pandemic, where global users can follow a coordinated approach to share security‐related information and conduct awareness campaigns. The study identifies Twitter as an apt source for conducting cybersecurity awareness campaigns, as 99.83% of the security vulnerabilities are found to be accurate. The information can help global cybersecurity agencies to proactively identify vulnerabilities, coordinate activities, and plan mitigation strategies, since releasing patches from the vendor might take time.
Nowadays, governmental and non-governmental health organisations and insurance companies invest in integrating an individual's genetic information into their daily practices. In this paper, we focus on an emerging area of genome analysis, called Disease Susceptibility (DS), in which an individual's susceptibility to a disease is calculated using her genetic information. Recent work by Danezis et al. [1] presents an approach for calculating DS in a privacy-preserving manner. However, the proposed solution has two drawbacks. First, it does not provide a mechanism to check the integrity of the genomic data used to calculate the susceptibility and, more importantly, of the computed result. Second, it lacks a mechanism to check the correctness of the performed DS test. In this paper, we present iGenoPri, which aims at addressing both problems by employing Message Authentication Codes (MACs) and verifiable computing.
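The integrity-checking building block can be sketched with a standard HMAC (a minimal illustration under assumed key handling, not iGenoPri's full construction, which also covers verifiable computing):

```python
import hmac
import hashlib

def tag(key: bytes, record: bytes) -> bytes:
    """Compute an integrity tag over a genomic record."""
    return hmac.new(key, record, hashlib.sha256).digest()

def verify(key: bytes, record: bytes, mac: bytes) -> bool:
    """Constant-time check that the record was not modified."""
    return hmac.compare_digest(tag(key, record), mac)

key = b"shared-secret-key"   # hypothetical pre-shared key
snp = b"rs429358:CC"         # hypothetical genomic marker record
mac = tag(key, snp)
assert verify(key, snp, mac)
assert not verify(key, b"rs429358:CT", mac)  # tampered record is rejected
```

A susceptibility computation would only proceed once every input record passes `verify`, so silent modification of the genomic data is detected before it can influence the result.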
In the Public Key Infrastructure (PKI) model, digital certificates play a vital role in securing online communication. Communicating parties exchange and validate these certificates, and the validation should fail if a certificate has been revoked. However, some existing studies [1,2] raise an alarm that the certificate revocation check is skipped in the existing PKI model for a number of reasons, including network latency overheads, bandwidth costs, storage costs, and privacy issues. In this article, we propose a Certificate Revocation Guard (CRG) to efficiently check certificate revocation while minimising bandwidth, latency, and storage overheads. CRG is based on OCSP and caches the revocation status of certificates locally, thus strengthening user privacy for subsequent requests. CRG is a plug-and-play component that could be installed on the user's machine, at the organisational proxy, or even in the ISP network. Compared to a naive approach (where a client checks the revocation status of all certificates in the chain on every request), CRG decreases the bandwidth overheads and network latencies by 95%. Using CRG incurs 69% lower storage overheads compared to the CRL method. Our results demonstrate the effectiveness of our approach in improving the certificate revocation process.
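The local caching idea behind CRG can be sketched as follows (hypothetical class and TTL policy for illustration only; the actual CRG component is not reproduced here):

```python
import time

class RevocationCache:
    """Hypothetical local cache of certificate revocation status,
    refreshed from OCSP responses to save bandwidth and latency."""

    def __init__(self, ttl_seconds: float = 3600.0):
        self.ttl = ttl_seconds
        self._cache = {}  # serial -> (revoked: bool, fetched_at: float)

    def lookup(self, serial: str):
        """Return the cached status, or None if missing or expired."""
        entry = self._cache.get(serial)
        if entry is None:
            return None
        revoked, fetched_at = entry
        if time.time() - fetched_at > self.ttl:
            del self._cache[serial]   # stale: force a fresh OCSP query
            return None
        return revoked

    def store(self, serial: str, revoked: bool):
        self._cache[serial] = (revoked, time.time())

cache = RevocationCache(ttl_seconds=3600)
cache.store("03:AB:42", revoked=False)
assert cache.lookup("03:AB:42") is False   # served locally, no network round trip
assert cache.lookup("unknown") is None     # cache miss: would trigger an OCSP request
```

Serving repeat lookups locally is also what strengthens privacy: the OCSP responder only sees the first query for a certificate within the TTL window.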
When we upload or create data in the cloud or on the web, we immediately lose control of that data. Most of the time, we will not know where the data is stored or how many copies of our files exist. Worse, we are unable to detect and stop malicious insiders from accessing possibly sensitive data. Despite being transferred across and within clouds over encrypted channels, data often has to be decrypted within the database for it to be processed. Exposing the data at some point in the cloud to a few privileged users is undoubtedly a vendor-centric approach, and hinges on the trust relationships data owners have with their cloud service providers. A recent example of the abuse of such trust relationships is the high-profile Edward Snowden case. In this paper, we propose a user-centric approach that returns data control to the data owners, empowering users with data provenance, transparency and auditability, homomorphic encryption, situation awareness, revocation, attribution, and data resilience. We also cover key elements of the concept of user data control. Finally, we introduce how we attempt to address these issues via the New Zealand Ministry of Business, Innovation and Employment (MBIE)-funded STRATUS (Security Technologies Returning Accountability, Trust and User-centric Services in the Cloud) research project.
Outsourcing sensitive data and operations to untrusted cloud providers is considered a challenging issue. To perform a search operation, even if both the data and the query are encrypted, attackers can still learn which data locations match the query and what results are returned to the user. This kind of leakage is referred to as the data access pattern. Indeed, using access pattern leakage, attackers can easily infer the content of the data and the query. Oblivious RAM (ORAM), Fully Homomorphic Encryption (FHE), and secure Multi-Party Computation (MPC) offer a higher level of security but incur high computation and communication overheads. One promising practical approach to process the outsourced data efficiently and securely is leveraging trusted hardware like Intel SGX. Recently, several SGX-based solutions have been proposed in the literature. However, those solutions suffer from side-channel attacks, high overheads of context switching, or limited SGX memory. In this paper, we present an SGX-assisted scheme for performing search over encrypted data. Our solution protects the access pattern against side-channel attacks while ensuring search efficiency. It can process large databases without requiring any long-term storage on SGX. We have implemented a prototype of the scheme and evaluated its performance using a dataset of 1 million records. Equality and range queries can be completed in 11 and 40 milliseconds, respectively. Compared with ORAM-based solutions, such as ObliDB, our scheme is more than 10x faster.
Searchable Symmetric Encryption (SSE) allows users to execute encrypted queries over encrypted databases. A large number of SSE schemes have been proposed in the literature. However, most of them leak a significant amount of information that could lead to inference attacks. In this work, we propose an SSE scheme for a Privacy-preserving Multi-cloud encrypted Database (P-McDb), which aims at preventing inference attacks. P-McDb allows users to execute queries in an efficient sub-linear manner without leaking search, access, and size patterns. We have implemented a prototype of P-McDb and show its practical efficiency.
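For background, a bare-bones SSE index looks like the sketch below (hypothetical names; note that this baseline is deterministic and therefore leaks exactly the search pattern that P-McDb is designed to hide):

```python
import hmac
import hashlib

def trapdoor(key: bytes, keyword: str) -> bytes:
    """Deterministic search token; the server never sees the keyword itself."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

def build_index(key: bytes, docs: dict) -> dict:
    """Client side: map keyword tokens to document ids."""
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(trapdoor(key, w), []).append(doc_id)
    return index

def search(index: dict, token: bytes) -> list:
    """Server side: look up by token without learning the keyword."""
    return index.get(token, [])

key = b"client-secret"
index = build_index(key, {1: ["cloud", "privacy"], 2: ["privacy"]})
assert search(index, trapdoor(key, "privacy")) == [1, 2]
assert search(index, trapdoor(key, "missing")) == []
```

Because identical queries produce identical tokens, a curious server can count how often each token recurs; hiding that repetition (along with access and size patterns) is the harder problem the abstract addresses.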
WiFi Direct is a new technology that enables direct Device-to-Device (D2D) communication. This technology has great potential to enable various proximity-based applications such as multimedia content distribution, social networking, cellular traffic offloading, mission-critical communications, and the Internet of Things (IoT). However, in such applications, the energy consumption of battery-constrained devices is a major concern. In this paper, we propose a novel power saving protocol that aims at optimizing the energy consumption and throughput of user devices by controlling the WiFi Direct group size and the transmit power of the devices. We model a content distribution scenario in NS-3 and present the performance evaluation. Our simulation results demonstrate that even a small modification in the network configuration can provide considerable energy gain with a minor effect on throughput. The observed energy saving can be as high as 1000% for a throughput loss of 12%.
Searchable Encryption (SE) makes it possible for users to outsource an encrypted database and search operations to cloud service providers without leaking the content of the data or queries to them. A number of SE schemes have been proposed in the literature; however, most of them leak a significant amount of information that could lead to inference attacks. To minimise information leakage, solutions such as Oblivious Random Access Memory (ORAM) and Private Information Retrieval (PIR) exist. Unfortunately, these solutions are prohibitively costly and impractical. A practical scheme should support not only a lightweight user client but also a flexible key management mechanism for multi-user access. In this position paper, we briefly analyse several leakage-based attacks and identify a set of requirements for a searchable encryption system for cloud database storage to be secure against these attacks while ensuring the usability of the system. We also discuss several possible solutions to fulfil the identified requirements.
The management of identities on the Internet has evolved from the traditional approach (where each service provider stores and manages identities) to a federated identity management system (where identity management is delegated to a set of identity providers). On one hand, federated identity ensures usability and provides economic benefits to service providers. On the other hand, it poses serious privacy threats to users as well as service providers. The current technology prevalently deployed on the Internet allows identity providers to track the user's behavior across a broad range of services. In this work, we propose PRIMA, a universal credential-based authentication system for supporting federated identity management in a privacy-preserving manner. Basically, PRIMA does not require any interaction between service providers and identity providers during the authentication process, thus preventing identity providers from profiling users' behavior. Moreover, throughout the authentication process, PRIMA provides a mechanism for controlled disclosure of the users' private information. We have conducted comprehensive evaluations of the system to show the feasibility of our approach. Our performance analysis shows that an identity provider can process 1,426 to 3,332 requests per second as the key size varies from 1024-bit to 2048-bit.
Advanced attack campaigns span multiple stages and stay stealthy for long time periods. There is a growing trend of attackers using off-the-shelf tools and pre-installed system applications (such as powershell and wmic) to evade detection, because the same tools are also used by system administrators and security analysts for legitimate routine tasks. Such a dual nature of these tools makes the analyst's task harder when it comes to spotting the difference between attack and benign activities. To start investigations, event logs can be collected from operational systems; however, these logs are generic enough that it often becomes impossible to attribute a potential attack to a specific attack group. Recent approaches in the literature have used anomaly detection techniques, which aim at distinguishing between malicious and normal behavior of computers or network systems. Unfortunately, anomaly detection systems based on point anomalies are too rigid, in the sense that they could miss malicious activity by failing to classify an attack as an outlier. Therefore, there is a research challenge to better detect malicious activities. To address this challenge, in this paper, we leverage Group Anomaly Detection (GAD), which detects anomalous collections of individual data points. Our approach is to build a neural network model utilizing an Adversarial Autoencoder (AAE-alpha) in order to detect the activity of an attacker who leverages off-the-shelf tools and system applications. In addition, we also build Behavior2Vec and Command2Vec sentence embedding deep learning models specifically for feature extraction tasks. We conduct extensive experiments to evaluate our models on real-world datasets collected over a period of two months. Our method discovered 2 new attack tools used by targeted attack groups and multiple instances of malicious activity. The empirical results demonstrate that our approach is effective and robust in discovering targeted attacks, pen-tests, and attack campaigns leveraging custom tools.
With the surge of data breaches, practitioner ignorance, and unprotected hardware, secure information management in healthcare environments is becoming a challenging problem. In the context of healthcare systems, the confidentiality of patient data is of particular sensitivity. For economic reasons, cloud services are spreading, but there is still no clear solution to the problem of truly secure data storage at a remote location. To tackle this issue, we first examine whether it is possible to store healthcare data securely without fully relying on trusted third parties, and without impeding system usability on the side of the caregivers. The novelty of this approach is that it offers a standards-based, deployable solution tailored for healthcare scenarios, using cloud services, but where trust is shifted from the cloud provider to the healthcare institution. This approach is unlike state-of-the-art solutions: there are secure cloud storage solutions that insist on having no knowledge of the stored data, but we discovered that they still require too much trust to manage user credentials; these credentials actually give them access to confidential data. In the paper, we present SPARER as a solution to the secure cloud storage problem and discuss the trade-offs of our approach. Moreover, we look at performance benchmarks that hint at the feasibility and cost of using off-the-shelf cryptographic tools as building blocks in SPARER.
Industrial Control Systems (ICSs) play an important role in today’s industry by providing process automation, distributed control, and process monitoring. ICSs were designed to be used in isolated areas or connected to other systems via specialised communication mechanisms or protocols. This setup allows manufacturers to manage their production processes with great flexibility and safety. However, this design does not meet today’s business requirements to work with state-of-the-art technologies such as the Internet of Things (IoT) and big data analytics. In order to fulfil industry requirements, many ICSs have been connected to enterprise networks that allow business users to access real-time data generated by power plants. At the same time, this new design opens up several cybersecurity challenges for ICSs. We review possible cyber attacks on ICSs, identify typical threats and vulnerabilities, and discuss unresolved security issues with existing ICS cybersecurity solutions. Then, we discuss how to secure ICSs (e.g., using risk assessment methodologies) and other protection measures. We also identify open security research challenges for ICSs, and we present a classification of existing security solutions along with their strengths and weaknesses. Finally, we provide future research directions in ICS security.
The emergence of New Data Sources (NDS) in healthcare is revolutionising traditional electronic health records in terms of data availability, storage, and access. Increasingly, clinicians are using NDS to build a virtual holistic image of a patient's health condition. This research is focused on a review and analysis of the current legislation and privacy rules available for healthcare professionals. NDS in this project refers to and includes patient-generated health data, consumer device data, wearable health and fitness data, and data from social media. This project reviewed legal and regulatory requirements for New Zealand, Australia, the European Union, and the United States to establish the ground reality of existing mechanisms in place concerning the use of NDS. The outcome of our research is a set of recommended changes and enhancements required to better prepare for the 'tsunami' of NDS and applications in the currently evolving data-driven healthcare area and precision or personalised health initiatives such as Precision Driven Health (PDH) in New Zealand.
User revocation is one of the main security issues in publish and subscribe (pub/sub) systems. Indeed, to ensure data confidentiality, the system should be able to remove malicious subscribers without affecting the functionalities and decoupling of authorised subscribers and publishers. Solutions for revoking a user exist, but existing schemes inevitably introduce high computation and communication overheads, which can ultimately affect the system's capabilities. In this paper, we propose a novel revocation technique for pub/sub systems that can efficiently remove compromised subscribers without requiring the regeneration and redistribution of new keys or the re-encryption of existing data with those keys. Our proposed solution is such that a subscriber's interest is not revealed to curious brokers and published data can only be accessed by authorised subscribers. Finally, the proposed protocol is secure against collusion attacks between brokers and revoked subscribers.
Content-Centric Networking (CCN) is an emerging paradigm that can anticipate growing demands for content delivery in the coming years. The underlying architecture of CCN enables users to search for content based on names. On one hand, this is a privacy-friendly feature that does not require source and destination addresses. On the other hand, semantically rich names reveal sufficient information about users' preferences. Unfortunately, a curious CCN node may learn and sell sensitive information to third parties, thus posing serious threats to users' privacy. In this paper, we present PROTECTOR, which aims at protecting content names as well as content, and allows a CCN network to add new users or remove existing ones without requiring any re-encryption of stored content and names. It is scalable and efficient, as it incurs very limited overhead for the required cryptographic operations. Our performance analysis reports that PROTECTOR can handle 34 and over 10 million requests per second at boundary and other CCN nodes, respectively.
The integration of Internet of Things (IoT) devices into commercial or industrial buildings to create smart environments, such as Smart Buildings (SBs), has enabled real-time data collection and processing to effectively manage building operations. Due to poor security design and implementation in IoT devices, SB networks face an array of security challenges and threats (e.g., botnet malware) that leverage IoT devices to conduct Distributed Denial of Service (DDoS) attacks on the Internet infrastructure. Machine Learning (ML)-based traffic classification systems aim to automatically detect such attacks by effectively differentiating attacks from benign traffic patterns in IoT networks. However, there is an inherent accuracy-efficiency tradeoff in network traffic classification tasks. To balance this tradeoff, we develop an accurate yet lightweight device-specific traffic classification model. This model classifies SB traffic flows into four types of coarse-grained flows, based on the locations of traffic sources and the directions of traffic transmissions. Through these four types of coarse-grained flows, the model can extract simple yet effective flow rate features to conduct learning and predictions. Our experiments show that the model achieves an overall accuracy of 96%, with only 32 features for the ML model to learn.
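The coarse-grained categorisation and flow-rate features can be sketched as follows (the 2x2 split by source location and direction, and the particular feature definitions, are assumptions for illustration; the paper's exact definitions may differ):

```python
def coarse_flow_type(src_internal: bool, outbound: bool) -> str:
    """Hypothetical 2x2 categorisation: traffic source location x direction."""
    loc = "internal" if src_internal else "external"
    direction = "outbound" if outbound else "inbound"
    return f"{loc}-{direction}"

def rate_features(byte_counts, window_s: float) -> dict:
    """Simple flow-rate features over equal sub-intervals of a window."""
    mean_bps = sum(byte_counts) / window_s
    per_slot = window_s / len(byte_counts)   # assumes equal-width slots
    peak_bps = max(byte_counts) / per_slot
    return {"mean_bps": mean_bps, "peak_bps": peak_bps}

# A burst of outbound traffic from an internal IoT device, e.g. a
# compromised camera participating in a DDoS attack:
assert coarse_flow_type(True, True) == "internal-outbound"
feats = rate_features([1000, 4000, 1000, 2000], window_s=4.0)
assert feats == {"mean_bps": 2000.0, "peak_bps": 4000.0}
```

Keeping the feature set this simple is what makes the classifier lightweight enough to run per device while still separating attack bursts from benign traffic.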
Cloud storage is a cheap and reliable solution for users to share data with their contacts. However, the lack of standardisation and migration tools makes it difficult for users to migrate to another Cloud Service Provider (CSP) without losing contacts, resulting in a vendor lock-in problem. In this work, we aim at providing a generic framework, named PortableCloud, that is flexible enough to enable users to migrate seamlessly to a different CSP while keeping all their data and contacts. To preserve users' privacy, the data in the portable cloud is concealed from the CSP by employing encryption techniques. Moreover, we introduce a migration agent that assists users in automatically finding a suitable CSP that can satisfy their needs.
Location-based services (LBS) in smart cities have drastically altered the way cities operate, giving a new dimension to the life of citizens. LBS rely on the location of a device, with proximity estimation at their core. The applications of LBS range from social networking and marketing to vehicle-to-everything communications. In many of these applications, there is an increasing need and trend to learn the physical distance between nearby devices. This paper elaborates upon the current needs of proximity estimation in LBS and compares them against the available Localization and Proximity (LP) finding technologies (LP technologies in short). These technologies are compared for their accuracy and performance based on various parameters, including latency, energy consumption, security, complexity, and throughput. Thereafter, a classification of these technologies, based on various smart city applications, is presented. Finally, we discuss some emerging LP technologies that enable proximity estimation in LBS and present some future research areas.
Device-to-Device (D2D) communication has emerged as a new technology that minimizes data transmission in radio access networks by leveraging direct interaction between nearby mobile devices. D2D communication has great potential to solve the capacity bottleneck problem of cellular networks by offloading cellular traffic of proximity-based applications to D2D links. This provides several benefits including, but not limited to, lower transfer delays, higher data rates, and better energy efficiency. However, security in D2D communication, which is equally essential for the success of D2D communication in future networks, is a less investigated topic in the literature. In this paper, we propose combining PGP and a reputation-based model to bootstrap trust in D2D environments. Our proposal aims at minimizing any suspicious connection with selfish users. Offloading cellular traffic to trusted D2D links provides significant throughput gain over the conventional cellular network. Our results show that the capacity gain can be as high as 133%.
Wireless body area networks (WBANs) play a vital role in shaping today's healthcare systems. Given the critical role of a WBAN in automatically monitoring and diagnosing one's health issues, the security and privacy of these healthcare systems need special attention. In this paper, we first propose a novel four-tier architecture for a remote health monitoring system and then identify the security requirements and challenges at each tier. We provide a concise survey of the literature aimed at improving the security and privacy of WBANs and then present a comprehensive overview of the problem. In particular, we stress that the inclusion of in vivo nano-networks in a remote healthcare monitoring system is imperative for its completeness. To this end, we elaborate on security threats and concerns in nano-networks and medical implants, and we emphasize a holistic framework of an overall ecosystem for WBANs, which is essential to ensure end-to-end security. Lastly, we discuss some limitations of current WBANs.
Passwords are widely used for client-to-server authentication as well as for encrypting data stored in untrusted environments, such as cloud storage. Both authentication and encrypted cloud storage are usually discussed in isolation. In this work, we propose AuthStore, a flexible authentication framework that allows users to securely reuse passwords for authentication as well as for encrypted cloud storage at a single or multiple service providers. Users can configure how securely passwords are protected using password stretching techniques. We present a compact password-authenticated key exchange protocol (CompactPAKE) that integrates the retrieval of password stretching parameters. We describe a parameter attack and show how existing solutions suffer from it. Furthermore, we introduce a password manager that supports CompactPAKE.
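The role of stretching parameters, and why downgrading them matters, can be sketched with the standard-library scrypt KDF (an illustration of the general idea only; CompactPAKE's actual protocol and its authenticated parameter retrieval are not reproduced here):

```python
import hashlib
import os

def stretch(password: bytes, salt: bytes, n: int = 2**14) -> bytes:
    """Derive a key with scrypt; n is the CPU/memory cost parameter."""
    return hashlib.scrypt(password, salt=salt, n=n, r=8, p=1, dklen=32)

salt = os.urandom(16)
strong = stretch(b"hunter2", salt)

# The gist of a parameter attack: if an attacker can downgrade n before
# the client derives its key, the derived key becomes far cheaper to
# brute-force. Hence stretching parameters must be authenticated along
# with the password exchange, not accepted blindly from the server.
weak = stretch(b"hunter2", salt, n=2**4)
assert strong != weak
assert stretch(b"hunter2", salt) == strong  # deterministic for fixed parameters
```

The derived key can then serve double duty, as AuthStore proposes: one derivation for authentication and another for encrypting cloud-stored data.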
The Internet has undergone dramatic changes in the past two decades and now forms a global communication platform that billions of users rely on for their daily activities. While this transformation has brought tremendous benefits to society, it has also created new threats to online privacy, such as omnipotent governmental surveillance. As a result, public interest in systems for anonymous communication has drastically increased. In this work, we survey previous research on designing, developing, and deploying systems for anonymous communication. Our taxonomy and comparative assessment provide important insights into the differences between the existing classes of anonymous communication protocols.
Network Intrusion Detection Systems (NIDSs) play a crucial role in detecting malicious activities within networks. Basically, an NIDS monitors network flows and compares them with pre-defined suspicious patterns. To be effective, different intrusion detection algorithms and packet capturing methods have been implemented. With rapidly increasing network speeds, NIDSs face the challenging problem of monitoring large and diverse traffic volumes; in particular, a high packet drop rate has a significant impact on detection accuracy. In this work, we investigate three popular open-source NIDSs: Snort, Suricata, and Bro, along with their comparative performance benchmarks. We investigate key factors (including system resource usage, packet processing speed, and packet drop rate) that limit the applicability of NIDSs to large-scale networks. Moreover, we analyse and compare the performance of NIDSs as configurations and traffic volumes change.
Modern cars have become quite complex and heavily connected. Today, they offer diverse services, such as infotainment, electric power-assisted steering, assisted driving, automated toll payment, and traffic-information sharing. Recent technologies have made these services possible. Unfortunately, these technologies also enlarge the attack surface. This survey covers the main security and privacy issues and reviews recent research on them. It summarizes the requirements of modern cars and classifies threats and solutions based on the underlying technologies. To the best of our knowledge, this is the first survey offering such an overall view.
Typical smart city applications generally require two different communication infrastructures: a wide-area cellular network to provide connectivity and long-range communications, and efficient communication strategies for transmitting short data packets, particularly in the case of Internet of Things (IoT) devices. The cellular infrastructure is optimized for high data rates and large data sizes, while IoT devices mostly exchange small data packets with high energy efficiency and low data rates. To fully exploit both communication infrastructures together, different strategies related to 5G and Device-to-Device (D2D) communications have been proposed in the literature. In this paper, we survey these strategies and provide useful considerations for the seamless integration of smart city applications in 5G networks. Moreover, we present smart city scenarios, their communication requirements, and their potential impact on the life of citizens. Finally, we elaborate on the impact of big data on smart cities, along with possible security and privacy concerns.
The evolution of cloud computing and a drastic increase in image sizes are making the outsourcing of image storage and processing an attractive business model. Although this outsourcing has many advantages, ensuring data confidentiality in the cloud is one of the main concerns. There are state-of-the-art encryption schemes for ensuring confidentiality in the cloud. However, such schemes do not allow cloud datacenters to perform operations over encrypted images. In this paper, we address this concern by proposing 2DCrypt, a modified Paillier cryptosystem-based image scaling and cropping scheme for multi-user settings that allows cloud datacenters to scale and crop an image in the encrypted domain. To mitigate the high storage overhead resulting from naive per-pixel encryption, we propose a space-efficient tiling scheme that allows tile-level image scaling and cropping operations. Basically, instead of encrypting each pixel individually, we encrypt a tile of pixels. 2DCrypt is such that multiple users can view or process the images without sharing any encryption keys, a requirement desirable for practical deployments in real organizations. Our analysis and results show that 2DCrypt is INDistinguishable under Chosen Plaintext Attack secure and incurs an acceptable overhead. When scaling a 512×512 image by a factor of two, 2DCrypt requires an image user to download approximately 5.3 times more data than unencrypted scaling and to spend approximately 2.3 s more to obtain the scaled image in plaintext.
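The homomorphic operation underlying encrypted scaling can be illustrated with a toy, textbook Paillier sketch (assumptions: insecure toy primes, no multi-user key management, and none of 2DCrypt's tiling machinery; this is an illustration of the additive property only, not the paper's scheme):

```python
import math
import random

# Textbook Paillier with toy parameters (NOT secure; illustration only).
p, q = 251, 257
n = p * q            # plaintext space Z_n; pixel sums must stay below n
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def add(c1: int, c2: int) -> int:
    """Homomorphic addition: the product of ciphertexts decrypts to m1 + m2."""
    return (c1 * c2) % n2

# Encrypted 2x downscaling of one 2x2 block: the server sums four
# encrypted pixels without decrypting; the client divides by 4 afterwards.
pixels = [100, 120, 140, 160]
cipher_sum = encrypt(0)
for px in pixels:
    cipher_sum = add(cipher_sum, encrypt(px))
assert decrypt(cipher_sum) == sum(pixels)   # client then averages: 520 // 4 = 130
```

The server only ever multiplies ciphertexts, so it learns nothing about the pixel values; the averaging division happens client-side after decryption.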
Cyber resilience quantification is the process of evaluating and measuring an organisation’s ability to withstand, adapt to, and recover from cyber-attacks. It involves assessing IT systems, networks, and response strategies to ensure robust defence and effective recovery mechanisms in the event of a cyber-attack. Quantifying cyber resilience can be difficult due to the constantly changing components of IT infrastructure, and traditional methods like vulnerability assessments and penetration testing may not be effective. Measuring cyber resilience is essential to evaluate and strengthen an organisation’s preparedness against evolving cyber-attacks. It helps identify weaknesses, allocate resources, and ensure the uninterrupted operation of critical systems and information. There are various methods for measuring cyber resilience, such as evaluation, teaming and testing, and creating simulated models. This article proposes a cyber resilience quantification framework for IT infrastructure that utilises a simulation approach. This approach enables organisations to simulate different attack scenarios, identify vulnerabilities, and improve their cyber resilience. The comparative analysis of cyber resilience factors highlights pre-configuration's robust planning and adaptation (61.44%), buffering support's initial readiness (44.53%), and network topologies' robust planning but weak recovery and adaptation (60.04% to 77.86%), underscoring the need for comprehensive enhancements across all phases. Utilising the proposed factors is crucial for conducting a comprehensive evaluation of IT infrastructure in the event of a cyber-attack.
The energy system is undergoing a radical transformation. The coupling of the energy system with advanced information and communication technologies is making it possible to monitor and control, in real time, the generation, transport, distribution, and consumption of energy. In this context, a key enabler is the smart meter, a device able to monitor the consumption of energy by consumers in near real time. If, on the one hand, smart meters automate the flow of information from endpoints to energy suppliers, on the other hand, they may leak sensitive information about consumers. In this paper, we review the issues at stake and the research challenges that characterise smart grids from a privacy and security standpoint.
Traditional address scanning attacks mainly rely on the naive 'brute forcing' approach, in which the entire IPv4 address space is exhaustively searched by enumerating different possibilities. However, such an approach is inefficient for IPv6 due to its vast subnet size (i.e., 2^64 addresses per subnet). As a result, it is widely assumed that address scanning attacks are less feasible in IPv6 networks. In this paper, we evaluate new IPv6 reconnaissance techniques in real IPv6 networks and expose how the Domain Name System (DNS) can be leveraged for IPv6 network reconnaissance. We collected IPv6 addresses from 5 regions and 100,000 domains by exploiting DNS reverse zones and DNSSEC records. We propose DNS Guard (DNSG) to efficiently detect DNS reconnaissance attacks in IPv6 networks. DNSG is a plug-and-play component that can be added to existing infrastructure. We implement DNSG using Bro and Suricata. Our results demonstrate that DNSG can effectively block DNS reconnaissance attacks.
In New Zealand, the demand for healthcare services has grown gradually in the last decade, and it is likely to increase further. This has led to issues such as increasing treatment costs and processing times for patients. To address the growing pressure in the healthcare sector, and its fragmented IT landscape that compounds the problems further, the New Zealand Ministry of Health aims to establish a shared Electronic Health Record (EHR) system that integrates all the major healthcare organisations, such as hospitals, medical centres, and specialists. Due to its characteristics, blockchain technology could be a potential platform for building such large-scale health systems. Here, we present MedBloc, a blockchain-based secure EHR system that enables patients and healthcare providers to access and share health records while providing usability, security, and privacy. MedBloc captures a longitudinal view of the patient's health story and empowers patients to regulate their own data by allowing them to give or withdraw consent for healthcare providers to access their records. To preserve patients' privacy and protect their health data, MedBloc uses an encryption scheme to secure records and smart contracts to enforce access control and prevent unauthorised access.
The emergence of New Data Sources (NDS) in healthcare is revolutionising traditional electronic health records in terms of data availability, storage, and access. Increasingly, clinicians are using NDS to build a virtual holistic image of a patient's health condition. This research is focused on a review and analysis of the current legislation and privacy rules available to healthcare professionals. NDS in this project refers to and includes patient-generated health data, consumer device data, wearable health and fitness data, and data from social media. This project reviewed the legal and regulatory requirements of New Zealand, Australia, the European Union, and the United States to establish the current state of the mechanisms in place concerning the use of NDS. The outcome of our research is a set of recommended changes and enhancements required to better prepare for the 'tsunami' of NDS and applications in the evolving data-driven healthcare area, and for precision or personalised health initiatives such as Precision Driven Health (PDH) in New Zealand.
Although the popularity of Software-Defined Networking (SDN) is increasing, it is also vulnerable to security attacks such as Denial of Service (DoS) attacks. Since the control plane in SDN is isolated from the data plane, DoS attackers can easily target the control plane to impair the network infrastructure, in addition to the data plane, to degrade the user's Quality of Service (QoS). In our previous work, we introduced SECO, an SDN Secure Controller algorithm to detect and defend SDN against DoS attacks; simulation results showed that SECO successfully defends SDN networks from DoS attacks. In this paper, we present SDN sEcure COntrol and Data Plane (SECOD), an improved version of SECO. SECOD introduces new triggers to detect and prevent DoS attacks in both the control and data planes. Moreover, SECOD is implemented and tested on an SDN-based hardware testbed with an OpenFlow-based switch and the Ryu controller, to capture the dynamics of realistic hardware and software. The results show that SECOD successfully detects and effectively mitigates DoS attacks on SDN networks, keeping data-plane performance at 99.72% of that of a network not under attack.
Information security has been an area of research and teaching within various computing disciplines in higher education almost since the beginning of modern computers. The need for security in computing curricula has steadily grown over this period. Recently, in response to an emerging global crisis caused by the limitations of security within the nascent information technology infrastructure, the field of "cybersecurity" has been emerging with international interest and support. The recent evolution of cybersecurity shows that it has begun to take shape as a true academic perspective, as opposed to simply being a training domain for certain specialized jobs. This report starts from the premise that cybersecurity is a "meta-discipline": that is, cybersecurity is used as an aggregate label for a wide variety of similar disciplines, much in the same way that the terms "engineering" and "computing" are commonly used. Thus, cybersecurity should be formally interpreted as a meta-discipline with a variety of disciplinary variants, also characterized through a generic competency model. The intention is that this simple organizational concept will improve the clarity with which the field matures, resulting in improved standards and goals for many different types of cybersecurity programs.
The enforcement of security policies in outsourced environments is still an open challenge for policy-based systems. On the one hand, taking the appropriate security decision requires access to the policies; on the other hand, if such access is allowed in an untrusted environment, the policies might leak confidential information. Current solutions are based on cryptographic operations that embed security policies within the security mechanism, so that the enforcement of such policies is performed by allowing the authorised parties to access the appropriate keys. We believe that such solutions are far too rigid because they strictly intertwine authorisation policies with the enforcing mechanism. In this paper, we address the issue of enforcing security policies in an untrusted environment while protecting policy confidentiality. Our solution, ESPOON, aims to provide a clear separation between security policies and the enforcement mechanism, while the enforcement mechanism learns as little as possible about both the policies and the requester attributes.
Search engines are the prevalently used tools to collect information about individuals on the Internet. Search results typically comprise a variety of sources that contain personal information — either intentionally released by the person herself, or unintentionally leaked or published by third parties without being noticed, often with detrimental effects on the individual’s privacy. To grant individuals the ability to regain control over their disseminated personal information, the European Court of Justice recently ruled that EU citizens have a right to be forgotten in the sense that indexing systems, such as Google, must offer them technical means to request removal of links from search results that point to sources violating their data protection rights. As of now, these technical means consist of a web form that requires a user to manually identify all relevant links herself upfront and to insert them into the web form, followed by a manual evaluation by employees of the indexing system to assess if the request to remove those links is eligible and lawful. In this work, we propose a universal framework Oblivion to support the automation of the right to be forgotten in a scalable, provable and privacy-preserving manner. First, Oblivion enables a user to automatically find and tag her disseminated personal information using natural language processing (NLP) and image recognition techniques and file a request in a privacy-preserving manner. Second, Oblivion provides indexing systems with an automated and provable eligibility mechanism, asserting that the author of a request is indeed affected by an online resource. The automated eligibility proof ensures censorship-resistance so that only legitimately affected individuals can request the removal of corresponding links from search results. 
We have conducted comprehensive evaluations of Oblivion, showing that the framework is capable of handling 278 removal requests per second on a standard notebook (2.5 GHz dual core), and is hence suitable for large-scale deployment.
Cloud computing is an emerging paradigm offering companies (virtually) unlimited data storage and computation at attractive costs. It is a cost-effective model because it does not require the deployment and maintenance of any dedicated IT infrastructure. Despite its benefits, it introduces new challenges for protecting the confidentiality of data. Sensitive data, such as medical records and business or governmental data, cannot be stored unencrypted on the cloud. Companies need new mechanisms to control access to outsourced data and to allow users to query the encrypted data without revealing sensitive information to the cloud provider. State-of-the-art schemes do not allow complex encrypted queries over encrypted data in a multi-user setting; instead, they are limited to keyword searches or conjunctions of keywords. This paper extends work on multi-user encrypted search schemes by supporting SQL-like encrypted queries on encrypted databases. Furthermore, we introduce access control on the data stored in the cloud, where administrative actions (such as updating access rights or adding/deleting users) do not require re-distributing keys or re-encrypting data. Finally, we have implemented our scheme and present its performance evaluation, showing the feasibility of our approach.
Data usage is of great concern for a user who owns the data. Users want assurance that their personal data will be used fairly, for the purposes for which they have provided their consent, and they should be able to withdraw that consent whenever they want. Consent is captured as a matter of legal record that can be used as legal evidence; it restricts the use and dissemination of information. Separating consent capture from the access control enforcement mechanism helps a user autonomously define the consent evaluation functionality necessary for automating consent decisions. In this paper, we present a solution that addresses how to capture, store, evaluate, and withdraw consent. The proposed solution preserves the integrity of consent, essential for providing digital evidence in legal proceedings. Furthermore, it accommodates emergency situations in which users cannot provide their consent.
Medical implants are an important part of Wireless Body Area Networks (WBANs) and play an important role in monitoring, diagnosing, and controlling various medical conditions. These tiny sensors are implanted inside the human body to measure and communicate various vital signs. Since the information transmitted by implants is very sensitive and critical in nature, both the availability and confidentiality of such information are of prime importance. One security threat that can breach the availability of medical implants is a Denial of Service (DoS) attack. In this work, we propose a solution to mitigate DoS attacks in the Medical Implant Communication Service (MICS) network. We propose a three-level trust model for a MICS network based on its environment and couple each environment with a threshold on the maximum allowed data rate. The simulation results show that DoS attacks can be seamlessly mitigated in many MICS settings.
To fully benefit from a cloud storage approach, privacy in outsourced databases needs to be preserved in order to protect information about individuals and organisations from malicious cloud providers. As shown in recent studies [1, 2], encryption alone is insufficient to prevent a malicious cloud provider from analysing data access patterns and mounting statistical inference attacks on encrypted databases. In order to thwart such attacks, actions performed on outsourced databases need to be oblivious to cloud service providers. Approaches such as Fully Homomorphic Encryption (FHE), Oblivious RAM (ORAM), and Secure Multi-Party Computation (SMC) have been proposed, but they are still not practical. This paper investigates and proposes a practical privacy-preserving scheme, named Long White Cloud (LWC), for outsourced databases, with a focus on providing security against statistical inferences. Performance is a key issue in the search and retrieval of encrypted databases. LWC supports logarithmic-time insert, search, and delete queries executed by outsourced databases with minimal information leakage to curious cloud service providers. As a proof of concept, we have implemented LWC and compared it with a plaintext MySQL database: even with a database size of 10M records, our approach shows only a 10-fold slowdown.
Wi-Fi Direct is a variant of infrastructure-mode Wi-Fi designed to enable direct Device-to-Device (D2D) communications between proximate devices. This new technology enables various proximity-based services such as social networking, multimedia content distribution, cellular traffic offloading, the Internet of Things (IoT), and mission-critical communications. However, the energy consumption of battery-constrained devices remains a major concern in all the aforementioned applications. In this paper, we model the energy consumption of the Wi-Fi Direct protocol, from device discovery to actual data transmissions, for intra-group D2D communications. We simulate a content distribution scenario in Matlab and analyse our model for the energy consumption of the devices. We argue that the energy spent in device discovery becomes significant for small data sizes. In particular, we find that for data sizes as small as 100 KB, roughly equal amounts of energy are spent in the device discovery and data transmission phases, even when the device discovery time is very small.
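The trade-off described in this abstract can be illustrated with a toy energy model: total energy is a fixed discovery cost plus a transmission cost proportional to data size. All parameter values below are assumed for illustration only; they are not the model or measurements from the paper.

```python
# Hypothetical parameter values, chosen only to illustrate the trade-off.
DISCOVERY_POWER_W = 0.8   # assumed average radio power during discovery
DISCOVERY_TIME_S = 2.0    # assumed average discovery duration
TX_POWER_W = 1.0          # assumed radio power while transmitting
TX_RATE_BPS = 10e6        # assumed effective application throughput

def energy_joules(data_bytes):
    """Return (discovery_energy, transmission_energy) in joules."""
    e_discovery = DISCOVERY_POWER_W * DISCOVERY_TIME_S       # fixed cost
    e_transmission = TX_POWER_W * (data_bytes * 8 / TX_RATE_BPS)
    return e_discovery, e_transmission

# For a small transfer, the fixed discovery cost dominates the total energy.
e_disc, e_tx = energy_joules(100 * 1024)  # 100 KB
```

With these assumed numbers, discovery costs 1.6 J while transmitting 100 KB costs under 0.1 J, showing why the discovery phase matters for small transfers.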
Data outsourcing is a growing business model offering services to individuals and enterprises for processing and storing huge amounts of data. It is not only economical but also promises higher availability, scalability, and more effective quality of service than in-house solutions. Despite all its benefits, data outsourcing raises serious security concerns for preserving data confidentiality. There are solutions for preserving the confidentiality of data while supporting search over data stored in outsourced environments. However, such solutions do not support access policies that regulate access to a particular subset of the stored data. For complex user management, large enterprises employ Role-Based Access Control (RBAC) models, making access decisions based on the role in which a user is active. However, RBAC models cannot be deployed in outsourced environments because they rely on a trusted infrastructure to regulate access to the data, and their deployment may reveal private information about the sensitive data they aim to protect. In this paper, we aim to fill this gap by proposing ESPOONERBAC for enforcing RBAC policies in outsourced environments. ESPOONERBAC enforces RBAC policies in an encrypted manner, where a curious service provider learns only very limited information about the RBAC policies. We have implemented ESPOONERBAC and provide its performance evaluation, which shows limited overhead, thus confirming the viability of our approach.
Providing high-speed Internet to connect anything anywhere on the globe at any time is becoming a need of future societies built on smart city concepts. As a pragmatic approach, an integration of terrestrial and satellite networks has been proposed for leveraging the combined benefits of both complementary technologies. In addition, with the quest to explore deep space and connect the solar system's planets with the Earth, the traditional satellite network has gone beyond the geosynchronous equatorial orbit (GEO), and the interplanetary Internet will play a key role. To this end, futuristic satellite networks will be an integration of inter-satellite and deep space networks (ISDSN), which on the one hand will connect thousands of entities on the Earth together and on the other hand will connect deep space satellites with the Earth and the solar planets. This chapter demystifies such dynamic networks and classifies them into different tiers. Most importantly, for each tier, we discuss the key requirements, research challenges, and potential security threats. Finally, we present open issues and new research directions in this emerging area of futuristic satellite networks.
Network-based Intrusion Detection Systems (NIDS) form the frontline defence against network attacks that compromise the security of data, systems, and networks. In recent years, Deep Neural Networks (DNNs) have been increasingly used in NIDS to detect malicious traffic due to their high detection accuracy. However, DNNs are vulnerable to adversarial attacks that modify an input example with an imperceptible perturbation, causing a misclassification by the DNN. In security-sensitive domains, such as NIDS, adversarial attacks pose a severe threat to network security. However, existing studies in adversarial learning against NIDS directly implement adversarial attacks designed for Computer Vision (CV) tasks, ignoring the fundamental differences in the detection pipeline and feature spaces between CV and NIDS. It remains a major research challenge to launch and detect adversarial attacks against NIDS. This article surveys the recent literature on NIDS, adversarial attacks, and network defences since 2015 to examine the differences in adversarial learning against deep neural networks in CV and NIDS. It provides the reader with a thorough understanding of DL-based NIDS, adversarial attacks and defences, and research trends in this field. We first present a taxonomy of DL-based NIDS and discuss the impact of this taxonomy on adversarial learning. Next, we review existing white-box and black-box adversarial attacks on DNNs and their applicability in the NIDS domain. Finally, we review existing defence mechanisms against adversarial examples and their characteristics.
Photo Response Non-Uniformity (PRNU) noise-based source camera attribution is a popular digital forensic method. In this method, a camera fingerprint computed from a set of known images of the camera is matched against the extracted noise of an anonymous questioned image to find out whether the camera took that image. The possibility of a privacy leak, however, is one of the main concerns of the PRNU-based method. Using the camera fingerprint (or the extracted noise), an adversary can identify the owner of the camera by matching the fingerprint with the noise of an image (or with the fingerprint computed from a set of images) crawled from a social media account. In this article, we address this privacy concern by encrypting both the fingerprint and the noise using the Boneh-Goh-Nissim (BGN) encryption scheme and performing the matching in the encrypted domain. To prevent privacy leakage from the content of the images used in the fingerprint calculation, we compute the fingerprint within a trusted environment, such as ARM TrustZone. We present e-PRNU, which aims at minimising privacy loss while allowing authorised forensic experts to perform camera attribution. The security analysis shows that the proposed approach is semantically secure. Experimental results show that the run-time computational overhead is 10.26 seconds when a cluster of 64 computing nodes is used.
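The matching step underlying PRNU attribution can be sketched in the plaintext domain as a normalised correlation between the camera fingerprint and an image's noise residual; e-PRNU performs equivalent inner-product computations under BGN encryption instead. The code below is an illustrative plaintext sketch (with a hypothetical threshold value), not the encrypted-domain protocol from the paper.

```python
import math

def normalized_correlation(fingerprint, noise):
    """Correlate a camera fingerprint with the noise residual of a
    questioned image; a value near 1 suggests the camera took the image."""
    n = len(fingerprint)
    mean_f = sum(fingerprint) / n
    mean_x = sum(noise) / n
    num = sum((f - mean_f) * (x - mean_x) for f, x in zip(fingerprint, noise))
    den = math.sqrt(sum((f - mean_f) ** 2 for f in fingerprint)
                    * sum((x - mean_x) ** 2 for x in noise))
    return num / den if den else 0.0

def same_camera(fingerprint, noise, threshold=0.1):
    """Attribution decision: correlation above a detection threshold
    (the threshold value here is hypothetical)."""
    return normalized_correlation(fingerprint, noise) > threshold
```

In practice the fingerprint and noise are pixel-sized arrays; the privacy issue arises because either vector, in the clear, identifies the camera owner.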
Customer reviews enable customers to share their experiences with others, allowing potential customers to learn more about products and consume them with confidence. However, online product sellers and service providers could manipulate customer reviews, for example by adding fake positive reviews and removing negative ones, to support their business. Manipulated reviews distort the original content of customer reviews and mislead customers. State-of-the-art solutions lack a customer review system that is secure, efficient, and usable. In this paper, we propose RevBloc, a customer review system offering a high level of security, efficiency, and usability. RevBloc is based on blockchain technology, which enables customer reviews to be preserved in a distributed ledger so that no single malicious party, or subset of malicious parties, can manipulate the reviews. To show the feasibility of our approach, we implement a proof-of-concept prototype of RevBloc and report its performance.
Most recent theoretical literature on program obfuscation is based on notions like virtual black box (VBB) obfuscation and indistinguishability obfuscation (iO). These notions are very strong and are hard to satisfy. Further, they offer far more protection than is typically required in practical applications. On the other hand, the security notions introduced by software security researchers are suitable for practical designs but are not formal or precise enough to enable researchers to provide a quantitative security assurance. Hence, in this paper, we introduce a new formalism for practical program obfuscation that still allows rigorous security proofs. We believe our formalism will make it easier to analyse the security of obfuscation schemes. To show the flexibility and power of our formalism, we give a number of examples. Moreover, we explain the close relationship between our formalism and the task of providing obfuscation challenges.
Vehicle-to-everything (V2X) communication is a powerful concept that not only ensures public safety (e.g., by avoiding road accidents) but also offers many economic benefits (e.g., by optimizing the macroscopic behavior of the traffic across an area). On the one hand, V2X communication brings new business opportunities for many stakeholders, such as vehicle manufacturers, retailers, Mobile Network Operators (MNOs), V2X service providers, and governments. On the other hand, the convergence of these stakeholders to a common platform poses many technical and business challenges. In this article, we identify the issues and challenges faced by V2X communications, while focusing on the business models. We propose different solutions to potentially resolve the identified challenges in the framework of 5G networks and propose a high-level hierarchy of a potential business model for a 5G-based V2X ecosystem. Moreover, we provide a concise overview of the legislative status of V2X communications across different regions of the world.
Software-Defined Networking (SDN) and the Internet of Things (IoT) are the trends of network evolution. SDN mainly focuses on the upper-level control and management of networks, while IoT aims to bring devices together to enable the sharing and monitoring of real-time behaviours through network connectivity. On the one hand, IoT enables us to gather the status of devices and networks and to control them remotely. On the other hand, the rapidly growing number of devices challenges management at the access and backbone layers and raises security concerns about network attacks, such as Distributed Denial of Service (DDoS) attacks. The combination of SDN and IoT is a promising approach that could alleviate the management issue; indeed, the flexibility and programmability of SDN could help simplify the network setup. However, security enhancements are needed in SDN-based IoT networks to mitigate attacks involving IoT devices. In this article, we discuss and analyse state-of-the-art DDoS attacks under SDN-based IoT scenarios. Furthermore, we verify our SDN sEcure COntrol and Data plane (SECOD) algorithm against DDoS attacks on a real SDN-based IoT testbed. Our results demonstrate that DDoS attacks in an SDN-based IoT network are easier to detect than in a traditional network due to the predictability of IoT traffic. We observed that random traffic (UDP or TCP) is more affected during DDoS attacks. Our results also show that the probability of the controller halting is 10%, while the probability of a switch becoming unresponsive is 40%.
The Publish and Subscribe (pub/sub) system is an established paradigm to disseminate the data from publishers to subscribers in a loosely coupled manner using a network of dedicated brokers. However, sensitive data could be exposed to malicious entities if brokers get compromised or hacked; or even worse, if brokers themselves are curious to learn about the data. A viable mechanism to protect sensitive publications and subscriptions is to encrypt the data before it is disseminated through the brokers. State-of-the-art approaches allow brokers to perform encrypted matching without revealing publications and subscriptions. However, if malicious brokers collude with malicious subscribers or publishers, they can learn the interests of innocent subscribers, even when the interests are encrypted. In this article, we present a pub/sub system that ensures confidentiality of publications and subscriptions in the presence of untrusted brokers. Furthermore, our solution resists collusion attacks between untrusted brokers and malicious subscribers (or publishers). Finally, we have implemented a prototype of our solution to show its feasibility and efficiency.
For the easy and flexible management of large-scale networks, Software-Defined Networking (SDN) is a strong candidate technology that offers centralisation and programmable interfaces for making complex decisions in a dynamic and seamless manner. On the one hand, SDN gives individuals and businesses the opportunity to build and improve services and applications based on their requirements. On the other hand, SDN poses a new array of privacy and security threats, such as Distributed Denial of Service (DDoS) attacks. For detecting and mitigating potential threats, Machine Learning (ML) is an effective approach with a quick response to anomalies. In this article, we analyse and compare the performance of different ML techniques for detecting DDoS attacks in SDN, evaluating both experimental datasets and self-generated traffic data. Moreover, we propose a simple supervised learning (SL) model to detect flooding DDoS attacks against the SDN controller via the fluctuation of flows. By dividing a test round into multiple pieces, the statistics within each time slot reflect the variation of network behaviours, and this "trend" can be used as training samples for a predictor that learns the network status and detects DDoS attacks. We verify the outcome through simulations and measurements over a real testbed. Our main goal is to find a lightweight SL model that detects DDoS attacks with data and features that can be easily obtained. Our results show that SL is able to detect DDoS attacks with a single feature. The performance of the analysed SL algorithms is influenced by the size of the training set and the parameters used; the accuracy of prediction using the same SL model can be entirely different depending on the training set.
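The single-feature idea can be illustrated with a minimal sketch: the per-slot fluctuation in flow counts serves as the one feature, and "training" reduces to choosing a decision boundary from labelled samples. The flow counts and the midpoint rule below are hypothetical illustrations, not the paper's model or dataset.

```python
def slot_fluctuations(flow_counts):
    """One feature per time slot: absolute change in the number of flows."""
    return [abs(b - a) for a, b in zip(flow_counts, flow_counts[1:])]

def train_threshold(normal_feats, attack_feats):
    """Minimal single-feature supervised learning: pick the boundary halfway
    between the largest normal fluctuation and the smallest attack one
    (assumes the training data is separable on this feature)."""
    return (max(normal_feats) + min(attack_feats)) / 2

def is_ddos(feature, threshold):
    return feature > threshold

# Hypothetical per-slot flow counts for illustration.
normal = slot_fluctuations([100, 104, 99, 103, 101])   # stable traffic
attack = slot_fluctuations([100, 450, 900, 1400])      # flooding onset
boundary = train_threshold(normal, attack)
```

A real deployment would replace the midpoint rule with one of the SL algorithms compared in the article, but the feature extraction stays this simple.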
Satellite communication is becoming a complementary technology in future 5G and beyond networks due to its wider coverage. As in any terrestrial network, security has become a major concern in satellite networks. Due to the long distance between ground stations (GS) and satellite transponders, and due to its inherent broadcast nature, satellite communication encounters certain limitations, such as a high bit error rate, high link delays, power control issues, and large round-trip delays. These limitations make security techniques proposed for terrestrial networks more challenging to apply in satellite settings. Denial-of-service (DoS) and distributed DoS (DDoS) attacks have become some of the most common security threats in both terrestrial and satellite networks. In this article, we present a DDoS mitigation technique that can be employed at the GS end in satellite networks. In particular, we simulate Internet Control Message Protocol echo request (ping) flooding across a satellite network and propose a proactive mitigation technique that restricts the number of echo requests a network entity can generate. The simulation results demonstrate that DDoS attacks can be mitigated in satellite networks without affecting the quality of experience of legitimate users.
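The proposed restriction on echo requests can be sketched as a per-entity sliding-window rate limiter at the ground station. This is an illustrative sketch with assumed parameter values (requests per window, window length), not the simulated mechanism itself.

```python
import collections
import time

class EchoRequestLimiter:
    """Proactive mitigation sketch: allow each network entity at most
    `max_requests` ICMP echo requests per sliding window of `window_s`
    seconds; requests beyond the budget are dropped."""

    def __init__(self, max_requests=10, window_s=1.0):
        self.max_requests = max_requests
        self.window_s = window_s
        self.history = collections.defaultdict(collections.deque)

    def allow(self, entity_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[entity_id]
        # Evict timestamps that have fallen out of the sliding window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        if len(q) < self.max_requests:
            q.append(now)
            return True
        return False  # entity exceeded its echo-request budget: drop
```

Because the budget is tracked per entity, a flooding host is throttled while legitimate users keep their quality of experience.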
Cybersecurity is an area of growing international importance. In response to global shortages of Cybersecurity skills, many universities have introduced degree programmes in Cybersecurity. These programmes aim to prepare students to become Cybersecurity practitioners with advanced skills in a timely manner. Several universities offer Cybersecurity degrees, but these have been developed ad hoc, as there is currently no internationally accepted Cybersecurity curriculum. Recently, an ITiCSE working group on global perspectives on Cybersecurity education developed a competency-based framework that aims to help institutions to implement Cybersecurity programmes. In this report, we present a case study of a Cybersecurity programme at the University of Auckland. We discuss how the curriculum and resource management of this programme evolved, and we present some challenges for the design and delivery of a Cybersecurity programme in the light of this competency-based framework.
Software-Defined Networking (SDN) is a promising virtualisation-based technology that is gaining attention from both academia and industry. On the one hand, the use of a centralised SDN controller provides dynamic configuration and management in an efficient manner; on the other hand, it raises several concerns, mainly related to scalability and availability. Unfortunately, a centralised SDN controller can be a Single Point Of Failure (SPOF), making SDN architectures vulnerable to Distributed Denial of Service (DDoS) attacks. In this paper, we design SMART, a scalable SDN architecture that aims at reducing the risk imposed by the centralised aspects of typical SDN deployments. SMART supports a decentralised control plane where the coordination between switches and controllers is provided using Tuple Spaces. SMART ensures a dynamic mapping between SDN switches and controllers without the need to execute the complex migration techniques required in typical load balancing approaches.
Social media has become an integral part of modern-day society. With increasingly digital societies, individuals have become more familiar and comfortable with using Online Social Networks (OSNs) for just about every aspect of their lives. This higher level of comfort leads to users spilling their emotions on OSNs, and eventually their private information. In this work, we investigate the relationship between users' emotions and the private information in their tweets. Our research question is whether users' emotions, expressed in their tweets, affect their likelihood of revealing their own private information (privacy leakage) in subsequent tweets. In contrast to existing survey-based approaches, we use an inductive, data-driven approach to answer our research question. We use state-of-the-art techniques for emotion classification and privacy scoring, and employ a new BERT-based technique for binary detection of sensitive data. We use two parallel classification frameworks: one that takes the user's emotional state into account, and another that detects sensitive data in tweets. We then identify individual cases of correlation between the two, and bring the two classifiers together to interpret how both factors change over time during a conversation between individuals. Variations were found with respect to the kinds of private information revealed in different states. Our results show that being in a negative emotional state, such as sadness, anger, or fear, leads to higher privacy leakage than otherwise.
Since concerns about privacy leakage strongly discourage users from sharing data, federated learning has gradually become a promising technique, for both academia and industry, for achieving collaborative learning without leaking information about local data. Unfortunately, most federated learning solutions cannot simultaneously verify the execution of each participant's local machine learning model and protect the privacy of user data. In this article, we first propose a Zero-Knowledge Proof-based Federated Learning (ZKP-FL) scheme on blockchain. It leverages zero-knowledge proofs for both the computation over local data and the aggregation of local model parameters, aiming to verify the computation process without requiring the plaintext of the local data. We further propose a Practical ZKP-FL (PZKP-FL) scheme to support fractions and non-linear operations. Specifically, we explore a Fraction-Integer mapping function and use Taylor expansion to efficiently handle non-linear operations while maintaining the accuracy of the federated learning model. We also analyze the security of PZKP-FL. Performance analysis demonstrates that the whole running time of the PZKP-FL scheme is less than one minute in parallel execution.
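The fraction-to-integer mapping and Taylor-expansion trick can be sketched in a few lines (the scale factor, the degree-3 expansion, and the function names here are illustrative choices, not the paper's actual parameters). Proof systems typically work over integers, so a fraction is mapped to a scaled integer and a non-linear function such as the sigmoid is replaced by a low-degree polynomial that can be evaluated entirely in integer arithmetic:

```python
from math import exp

SCALE = 10_000  # a fraction f is mapped to the integer round(f * SCALE)

def to_int(x):
    return round(x * SCALE)

def to_frac(n):
    return n / SCALE

def sigmoid_taylor_int(n):
    """Degree-3 Taylor approximation of the sigmoid around 0,
    sigmoid(x) ~ 1/2 + x/4 - x^3/48, evaluated entirely in
    scaled-integer arithmetic so a proof system could check it."""
    term0 = SCALE // 2
    term1 = n // 4
    term3 = (n * n * n) // (48 * SCALE * SCALE)
    return term0 + term1 - term3

# Close to the real sigmoid for small |x|, where the expansion is valid:
x = 0.5
approx = to_frac(sigmoid_taylor_int(to_int(x)))
exact = 1 / (1 + exp(-x))
assert abs(approx - exact) < 1e-2
```

The accuracy trade-off is controlled by the scale factor and the truncation degree of the expansion, which is why such schemes can keep model accuracy close to the plaintext baseline.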
Highlights:
• Proposed the concept of semantic-aware Cyber-Physical Systems (SCPSs) that enable semantic machine-to-machine (M2M) communications between CPSs in the context of collaborative smart manufacturing automation.
• Proposed a generic layered architecture for enabling SCPSs by developing a communication layer and a semantic layer on top of existing CPS architectures. The separation between a CPS's internal architecture and its external communication concerns ensures a smooth upgrade of a CPS to an SCPS.
• Verified the implementation of the proposed architecture via a case study enabling semantic M2M communication of production status between two machine tools, and a case study enabling distributed data-driven production automation in a workshop.
Machine-to-machine (M2M) communication is a crucial technology for collaborative manufacturing automation in the Industrial Internet of Things (IIoT)-empowered industrial networks. The new decentralized manufacturing automation paradigm features ubiquitous communication and interoperable interactions between machines. However, peer-to-peer (P2P) interoperable communications at the semantic level between industrial machines remain a challenge. To address this challenge, we introduce the concept of Semantic-aware Cyber-Physical Systems (SCPSs), based on which manufacturing devices can establish semantic M2M communications. In this work, we propose a generic system architecture for SCPSs and its enabling technologies. Our proposed architecture adds a semantic layer and a communication layer to the conventional cyber-physical system (CPS) in order to maximize compatibility with diverse CPS implementation architectures. With Semantic Web technologies as the backbone of the semantic layer, SCPSs can exchange semantic messages with maximum interoperability, following the same understanding of the manufacturing context.
A pilot implementation of the presented work is illustrated with a proof-of-concept case study between two semantic-aware cyber-physical machine tools. The semantic communication provided by the SCPS architecture makes ubiquitous M2M communication possible in a network of manufacturing devices, laying the foundation for collaborative manufacturing automation and, ultimately, smart manufacturing. Another case study, focusing on decentralized production control between machines in a workshop, also demonstrated the merits of semantic-aware M2M communication technologies.
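As a rough illustration of what the semantic layer adds over a raw status packet, here is a minimal JSON-LD-style sketch (the `mfg:` vocabulary URI and field names are invented for illustration; a real SCPS would use established Semantic Web ontologies). Both machines interpret the payload against a shared vocabulary rather than a machine-specific format:

```python
import json

def make_status_message(machine_id, state, job_id):
    """Wrap raw machine state in shared vocabulary terms so the receiving
    machine can interpret it without a machine-specific parser."""
    return {
        "@context": {"mfg": "http://example.org/manufacturing#"},
        "@id": machine_id,
        "@type": "mfg:MachineTool",
        "mfg:productionState": state,
        "mfg:currentJob": job_id,
    }

# Sender serialises; receiver parses against the same context, so
# "mfg:Running" means the same thing on both ends:
wire = json.dumps(make_status_message("machine-42", "mfg:Running", "job-7"))
decoded = json.loads(wire)
assert decoded["mfg:productionState"] == "mfg:Running"
```

Keeping the vocabulary in the `@context` rather than in the parser is what lets a conventional CPS be upgraded to an SCPS without touching its internal architecture.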
HTTPS refers to an application-specific implementation that runs the HyperText Transfer Protocol (HTTP) on top of Secure Sockets Layer (SSL) or Transport Layer Security (TLS). HTTPS is used to provide encrypted communication and secure identification of web servers and clients for purposes such as online banking and e-commerce. However, many HTTPS vulnerabilities have been disclosed in recent years. Although many studies have pointed out that these vulnerabilities can lead to serious consequences, domain administrators seem to ignore them. In this study, we evaluate the HTTPS security level of Alexa's top 1 million domains from two perspectives. First, we explore which popular sites are still affected by well-known security issues. Our results show that less than 0.1% of HTTPS-enabled servers in the measured domains are still vulnerable to known attacks, including Rivest Cipher 4 (RC4), Compression Ratio Info-leak Made Easy (CRIME), Padding Oracle On Downgraded Legacy Encryption (POODLE), Factoring RSA Export Keys (FREAK), Logjam, and Decrypting RSA with Obsolete and Weakened eNcryption (DROWN). Second, we assess the security level of the digital certificates used by each measured HTTPS domain. Our results highlight that less than 0.52% of domains use an expired certificate, 0.42% of HTTPS certificates contain mismatched hostnames, and 2.59% of HTTPS domains use a self-signed certificate. The domains we investigate cover five regions (ARIN, RIPE NCC, APNIC, LACNIC, and AFRINIC) and 61 different categories, such as online shopping, banking, educational, and government websites. Although our results show that the problem still exists, we find that changes have been taking place as HTTPS vulnerabilities were disclosed. Through this three-year study, we found that more attention has been paid to the use and configuration of HTTPS.
For example, more and more domains have begun to enable HTTPS to ensure a secure communication channel between users and websites. In the first measurement, we observed that many domains were still using the TLS 1.0, TLS 1.1, SSL 2.0, and SSL 3.0 protocols to support clients running outdated systems. As previous studies had revealed the security risks of these protocols, in the subsequent measurements we found that the majority of domains updated their TLS protocols promptly. Our 2020 results suggest that most HTTPS domains use TLS 1.2, while some HTTPS domains remain vulnerable to existing known attacks. As academics and industry professionals continue to disclose attacks against HTTPS and recommend secure HTTPS configurations, we found that the number of vulnerable domains is gradually decreasing every year.
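Certificate expiry, one of the checks above, can be reproduced with Python's standard library alone; this is a generic sketch, not the paper's measurement tooling. `ssl.cert_time_to_seconds` parses the `notAfter` timestamp format that `SSLSocket.getpeercert()` reports:

```python
import ssl
import time

def is_expired(not_after, now=None):
    """Return True if a certificate's notAfter timestamp (in the format
    reported by getpeercert(), e.g. 'Dec 31 23:59:59 2020 GMT') has passed."""
    expiry = ssl.cert_time_to_seconds(not_after)
    return (now if now is not None else time.time()) > expiry

# Checked against a fixed clock so the example is deterministic:
jan_2021 = ssl.cert_time_to_seconds("Jan 1 00:00:00 2021 GMT")
assert is_expired("Dec 31 23:59:59 2020 GMT", now=jan_2021)
assert not is_expired("Dec 31 23:59:59 2030 GMT", now=jan_2021)
```

In a real scan, `not_after` would come from `ssl.SSLSocket.getpeercert()['notAfter']` after completing a handshake with the target domain.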
Over the years, software applications have captured a large market, ranging from smart devices (smartphones and smart wearables) to enterprise software such as Enterprise Resource Planning and office applications, and the entertainment industry (video games and graphic design applications). Protecting the copyright of software applications and protecting against malicious software (malware) have been topics of utmost interest for academia and industry for many years. The standard solutions use software license keys or rely on Operating System (OS) protection mechanisms, such as Google Play Protect. However, some end users have broken these protections to bypass payments for applications that are not free, either by downloading software from unauthorised websites or by jailbreaking the OS protection mechanisms. As a result, they cannot determine whether the software they download is malicious or not. Further, if the software is uploaded to a third-party platform by malicious users, the software developer has no way of knowing about it; in such cases, the authenticity and integrity of the software cannot be guaranteed. There is also a problem of information transparency among software platforms. In this study, we propose an architecture based on blockchain technology for providing data transparency, release traceability, and auditability. Our goal is to provide an open framework that allows users, software vendors, and security practitioners to monitor misbehaviour and assess software vulnerabilities in order to prevent malicious software downloads. Specifically, the proposed solution makes it possible to identify software developers who have gone rogue and are potentially developing malicious software. Furthermore, we introduce an incentive policy for encouraging security engineers, victims, and software owners to participate in collaborative work.
The outcomes will support wide adoption of a software auditing ecosystem in software markets, especially for mobile device manufacturers that have been banned from using open-source OSs such as Android. Consequently, there is a demand for them to verify application security without relying entirely on OS-specific security mechanisms.
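At its core, the release-traceability idea reduces to publishing a tamper-evident digest of each release; here is a minimal sketch (using a plain dict as a stand-in for the blockchain ledger, with invented function names):

```python
import hashlib

ledger = {}  # stand-in for an append-only blockchain transaction log

def publish(name, version, package):
    """Developer records the digest of an official release on the ledger."""
    ledger[(name, version)] = hashlib.sha256(package).hexdigest()

def verify(name, version, package):
    """Any user recomputes the digest of what they downloaded and compares,
    detecting tampered copies re-uploaded to third-party platforms."""
    return ledger.get((name, version)) == hashlib.sha256(package).hexdigest()

publish("app", "1.0", b"official build bytes")
assert verify("app", "1.0", b"official build bytes")
assert not verify("app", "1.0", b"trojaned build bytes")
```

A real deployment would replace the dict with signed blockchain transactions so the record itself is auditable and cannot be silently rewritten, which is the transparency property the architecture targets.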
A Content Delivery Network (CDN) is a distributed system composed of a large number of nodes that allow users to request objects from nearby nodes. A CDN not only reduces end-to-end latency on the user side but also offloads Content Providers (CPs) and provides resilience against Distributed Denial of Service (DDoS) attacks. However, by caching objects and processing user requests, CDN providers can infer user preferences and object popularity, resulting in information leakage. Unfortunately, such leakage may cost users their privacy and reveal business-specific information to untrusted or compromised CDN providers. State-of-the-art solutions can protect the content of sensitive objects but cannot prevent CDN providers from inferring user preferences and object popularity. In this work, we present a privacy-preserving encrypted CDN system that hides not only the content of objects and user requests but also user preferences and object popularity from curious CDN providers. We employ encryption to protect objects and user requests in such a way that both CDNs and CPs can perform search operations without accessing objects or requests in cleartext. Our proposed system is based on a scalable key management approach for multi-user access, in which no key regeneration or data re-encryption is needed for user revocation. We have implemented a prototype of the system and show its practical efficiency.
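A common baseline for encrypted request matching is a deterministic keyed token (a generic sketch, not the paper's construction): the CDN matches requests to cached ciphertexts by token equality without ever seeing object names. Precisely because the token is deterministic, though, request frequencies still reveal object popularity, which is the extra leakage the proposed system is designed to hide:

```python
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(32)  # shared by the content provider and its users

def search_token(object_name):
    """Deterministic keyed token: the CDN matches a request to a cached
    encrypted object by token equality, never seeing the object name."""
    return hmac.new(KEY, object_name.encode(), hashlib.sha256).hexdigest()

# The provider publishes encrypted objects indexed by token:
cache = {search_token("video.mp4"): b"<ciphertext bytes>"}

# A user requests by token; the CDN resolves it blindly:
assert cache[search_token("video.mp4")] == b"<ciphertext bytes>"
assert search_token("other.mp4") not in cache
```

Hiding popularity on top of this requires breaking the one-to-one link between an object and its access pattern, which is where schemes beyond plain keyed tokens are needed.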
The rapid development of low-power circuits and biosensors has created a new area in Wireless Sensor Networks (WSNs), called Wireless Body Area Networks (WBANs). Medical implants are an integral part of a WBAN, used to measure, monitor, and control various medical conditions. These implants are generally placed inside the human body through an invasive surgical procedure and have very limited energy resources. If the battery is depleted, the patient potentially needs another surgery to replace the implant. It is therefore imperative to propose energy-efficient protocols that save energy in the implants. This work proposes a clustering mechanism for medical implants, wherein implants in immediate proximity form clusters. In particular, we analyze the performance of a customized version of Low-Energy Adaptive Clustering Hierarchy (LEACH), modified for the implant network, and compare the results with a contention-based Medium Access Control (MAC) protocol, pure ALOHA. In addition, we present a mathematical model capturing the energy consumption of an implant network. The simulation results demonstrate that a significant amount of energy can be saved using the proposed model. More precisely, the customized LEACH protocol consumes around 10 times less energy than pure ALOHA.
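The standard LEACH analysis rests on a first-order radio energy model and a rotating cluster-head election; a minimal sketch follows (parameter values are illustrative defaults from the generic model, not the paper's implant-specific figures):

```python
# Illustrative first-order radio model parameters:
E_ELEC = 50e-9     # J/bit spent in transmit/receive electronics
EPS_AMP = 100e-12  # J/bit/m^2 spent in the transmit amplifier

def tx_energy(bits, distance):
    """Transmitting costs electronics energy plus amplifier energy
    that grows with the square of the distance."""
    return E_ELEC * bits + EPS_AMP * bits * distance ** 2

def rx_energy(bits):
    return E_ELEC * bits

def leach_threshold(p, r):
    """LEACH election threshold for round r with desired cluster-head
    fraction p; a node becomes head if its random draw falls below it."""
    return p / (1 - p * (r % (1 / p)))

# Clustering saves energy because members transmit over a short hop to a
# nearby cluster head instead of directly to a distant sink:
member_via_cluster = tx_energy(2000, 10.0)  # 2000-bit frame, 10 m hop
direct_to_sink = tx_energy(2000, 100.0)     # same frame, 100 m hop
assert member_via_cluster < direct_to_sink
```

The quadratic distance term is why short intra-cluster hops dominate the savings, and the rising threshold over a cycle of 1/p rounds is what rotates the energy-hungry cluster-head role among implants.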
A major gap in cybersecurity studies, especially as they relate to cyber risk, is the lack of a comprehensive formal knowledge representation; existing views are often limited, based mainly on abstract security concepts with little context. Additionally, much of the focus is on the attack and the attacker, and a more complete view of risk assessment has been inhibited by the lack of knowledge of the defender landscape, especially regarding the impact and performance of compensating controls. In this study, we first define a conceptual ontology that integrates concepts modelling all cybersecurity entities. We then present an adaptive risk reasoning approach with a particular focus on defender activities. The main purpose is to provide a more complete view, from the defender's perspective, that bridges the gap between risk assessment theories and practical cybersecurity operations in real-world deployments.
Driven by growing data transfer needs, industry and research institutions are deploying 100 Gb/s networks. As such high-speed networks become prevalent, they also introduce significant technical challenges. In particular, an Intrusion Detection System (IDS) cannot process network activities at such a high rate when monitoring large and diverse traffic volumes, resulting in packet drops. Unfortunately, a high packet drop rate has a significant impact on detection accuracy. In this work, we investigate two popular open-source IDSs, Snort and Suricata, together with their comparative performance benchmarks, to better understand drop rates and detection accuracy in 100 Gb/s networks. More specifically, we study vital factors (including system resource usage, packet processing speed, packet drop rate, and detection accuracy) that limit the applicability of IDSs to high-speed networks. Furthermore, we provide a comprehensive analysis of the performance impact on IDSs of different configurations, traffic volumes, and flows. Finally, we identify challenges of using open-source IDSs in high-speed networks, offer suggestions to help network administrators address the identified issues, and give recommendations for developing new IDSs suited to high-speed networks.
Searchable Encryption (SE) is a technique that allows Cloud Service Providers to search over encrypted datasets without learning the content of queries and records. In recent years, many SE schemes have been proposed to protect outsourced data. However, most of them leak sensitive information, from which attackers can still infer the content of queries and records by mounting leakage-based inference attacks, such as the count attack and the file-injection attack. In this work, we first define the leakage in searchable encrypted databases and analyse how it is leveraged in existing leakage-based attacks. Second, we propose a Privacy-preserving Multi-cloud based dynamic symmetric SE scheme for relational Databases (P-McDb). P-McDb has minimal leakage: it not only ensures the confidentiality of queries and records but also protects the search, intersection, and size patterns. Moreover, P-McDb ensures both forward and backward privacy of the database, and can thus resist existing leakage-based attacks, e.g., active file/record-injection attacks. We give a security definition and analysis to show how P-McDb hides the aforementioned patterns. Finally, we implemented a prototype of P-McDb and tested it using the TPC-H benchmark dataset. Our evaluation results show that users can retrieve the required records in 2.16 s when searching over 4.1 million records.
The COVID-19 pandemic has changed the way people socially interact with each other. A huge increase in the usage of social media applications has been observed due to the quarantine strategies enforced by many governments across the globe, putting a great burden on already overloaded cellular networks. It is believed that direct Device-to-Device (D2D) communication can offload a significant amount of traffic from cellular networks, especially in scenarios where residents of a locality aim to share information among themselves. WiFi Direct is one of the enabling technologies of D2D communication, with great potential to facilitate various proximity-based applications. In this work, we propose power-saving schemes that aim at minimizing the energy consumption of user devices across D2D-based multi-hop networks. Further, we provide an analytical model formulating the energy consumption of such a network. The simulation results demonstrate that small modifications to the network configuration, such as group size and transmit power, can provide considerable energy gains: the observed energy consumption is reduced by a factor of 5 for a throughput loss of 12%. Additionally, we measure the energy per transmitted bit for different network configurations, and we analyze the behavior of the network, in terms of energy consumption and throughput, for different file sizes.
Recently, real-world attacks against the web Public Key Infrastructure (PKI) have arisen more frequently. The current PKI, based on the Registration Authority/Certificate Authority (RA/CA) model, suffers from notorious security vulnerabilities. Most of these vulnerabilities are due to compromises of RAs, which lead to impersonation attacks and cause CAs to misbehave by issuing bogus certificates. To counter this problem, many approaches, such as Certificate Transparency (CT), ARPKI, and PoliCert, have been proposed. Nonetheless, no solution has yet gained widespread acceptance, owing to complexity and deployability issues. Moreover, existing approaches still require complicated interactions and synchronisation among the entities involved in certificate issuance, updates, and revocation. In this paper, we propose a new Blockchain-Based PKI (BB-PKI) to address these vulnerabilities of CA misbehaviour caused by impersonation attacks against RAs. A Certificate Issuance Request (CIR) must be vouched for by multiple RAs, and multiple CAs then sign and issue the certificate using an out-of-band secure communication channel. Any RA that contributes to the verification of a user's request can publish the certificate on the blockchain by creating a smart-contract certificate transaction. BB-PKI offers strong security guarantees: compromising up to n - 1 of the RAs or CAs is not enough to launch impersonation attacks, since attackers cannot gather the threshold number of signatures required.
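The threshold-vouching idea can be sketched as follows (a toy illustration with invented names; real RAs would use digital signatures rather than MACs): a CIR is accepted only once enough distinct RAs have vouched for it, so compromising fewer RAs than the threshold achieves nothing:

```python
import hashlib
import hmac

# Four RAs with (toy) per-RA keys; issuance needs vouchers from at least
# THRESHOLD distinct RAs, so compromising fewer RAs achieves nothing.
RA_KEYS = {f"ra{i}": bytes([i]) * 32 for i in range(1, 5)}
THRESHOLD = 3

def vouch(ra_id, cir):
    """An RA endorses (here: MACs) the certificate issuance request."""
    return ra_id, hmac.new(RA_KEYS[ra_id], cir, hashlib.sha256).hexdigest()

def accept(cir, vouchers):
    """Accept only if THRESHOLD distinct RAs produced valid vouchers;
    the set() discards duplicate vouchers from the same RA."""
    valid = {ra for ra, sig in vouchers
             if hmac.compare_digest(sig, vouch(ra, cir)[1])}
    return len(valid) >= THRESHOLD

cir = b"example certificate issuance request"
assert not accept(cir, [vouch("ra1", cir), vouch("ra2", cir)])      # 2 < 3
assert accept(cir, [vouch(r, cir) for r in ("ra1", "ra2", "ra3")])  # 3 >= 3
```

Counting only distinct, valid endorsements is what gives the n - 1 compromise resistance: an attacker holding fewer than the threshold of RA keys can never assemble an acceptable voucher set.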
With the widespread popularity of smartphones and mobile applications, they have gradually penetrated our daily lives; we use them, for example, for online shopping and mobile banking. This has led to an increased demand for securing data processed and stored by smartphones. User authentication is an entry guard for ensuring secure access to smartphones, aiming to verify a user's identity, and the most common method is text-based authentication. However, existing text-based authentication solutions face a trade-off between security and usability: short text passwords are easy to remember but not secure enough, as they are vulnerable to password guessing and shoulder surfing attacks, while long text passwords can ensure security but raise usability issues owing to the difficulty of memorising, recalling, and inputting them. Moreover, graphical password solutions suffer from shoulder surfing attacks. In this article, we propose TIM, an image-based authentication solution for smartphone users that reduces the risk of shoulder surfing attacks. The proposed solution requires users to select and move predefined images to a designated position to pass the authentication check. In a laboratory experiment with 62 participants, we tested the robustness of TIM against existing attacks and compared its usability with other image-based solutions. An analysis of the collected results indicates that the proposed solution can resist password guessing and shoulder surfing attacks: more than 85% of participants believed that it could mitigate both. Further, 71% of participants found TIM more usable than existing solutions, 50% preferred it, and more than 50% reported that TIM's learning curve is very short and its configuration easy.