Research Papers On Network


Wenye Wang | Zhuo Lu

The Smart Grid, generally referred to as the next-generation power system, is considered a revolutionary and evolutionary regime of existing power grids. More importantly, with the integration of advanced computing and communication technologies, the Smart Grid is expected to greatly enhance the efficiency and reliability of future power systems with renewable energy resources, as well as distributed intelligence and demand response. Along with the salient features of the Smart Grid, cyber security emerges as a critical issue because millions of electronic devices are interconnected via communication networks throughout critical power facilities, which has an immediate impact on the reliability of such a widespread infrastructure. In this paper, we present a comprehensive survey of cyber security issues for the Smart Grid. Specifically, we focus on reviewing and discussing security requirements, network vulnerabilities, attack countermeasures, and secure communication protocols and architectures in the Smart Grid. We aim to provide a deep understanding of security vulnerabilities and solutions in the Smart Grid and shed light on future research directions for Smart Grid security. © 2012 Elsevier B.V. All rights reserved.


Rodrigo Roman | Jianying Zhou | Javier Lopez

In the Internet of Things, services can be provisioned using centralized architectures, where central entities acquire, process, and provide information. Alternatively, distributed architectures, where entities at the edge of the network exchange information and collaborate with each other in a dynamic way, can also be used. In order to understand the applicability and viability of this distributed approach, it is necessary to know its advantages and disadvantages - not only in terms of features but also in terms of security and privacy challenges. The purpose of this paper is to show that the distributed approach has various challenges that need to be solved, but also various interesting properties and strengths. © 2013 Elsevier B.V. All rights reserved.


Giuseppe Aceto | Alessio Botta | Walter De Donato | Antonio Pescapè

Nowadays, Cloud Computing is widely used to deliver services over the Internet for both technical and economic reasons. The number of Cloud-based services has increased rapidly and strongly in recent years, and so has the complexity of the infrastructures behind these services. To properly operate and manage such complex infrastructures, effective and efficient monitoring is constantly needed. Many works in the literature have surveyed Cloud properties, features, underlying technologies (e.g. virtualization), security and privacy. However, to the best of our knowledge, these surveys lack a detailed analysis of monitoring for the Cloud. To fill this gap, in this paper we provide a survey on Cloud monitoring. We start by analyzing the motivations for Cloud monitoring, also providing definitions and background for the contributions that follow. Then, we carefully analyze and discuss the properties of a monitoring system for the Cloud, the issues arising from such properties and how such issues have been tackled in the literature. We also describe current platforms, both commercial and open source, and services for Cloud monitoring, underlining how they relate to the properties and issues identified before. Finally, we identify open issues, main challenges and future directions in the field of Cloud monitoring. © 2013 Elsevier B.V. All rights reserved.


S. Sicari | A. Rizzardi | L. A. Grieco | A. Coen-Porisini

The Internet of Things (IoT) is characterized by heterogeneous technologies, which jointly enable the provisioning of innovative services in various application domains. In this scenario, the satisfaction of security and privacy requirements plays a fundamental role. Such requirements include data confidentiality and authentication, access control within the IoT network, privacy and trust among users and things, and the enforcement of security and privacy policies. Traditional security countermeasures cannot be directly applied to IoT technologies due to the different standards and communication stacks involved. Moreover, the large number of interconnected devices raises scalability issues; therefore, a flexible infrastructure able to deal with security threats in such a dynamic environment is needed. In this survey we present the main research challenges and the existing solutions in the field of IoT security, identifying open issues and suggesting some hints for future research. © 2014 Elsevier B.V.


Mark Berman | Jeffrey S. Chase | Lawrence Landweber | Akihiro Nakao | Max Ott | Dipankar Raychaudhuri | Robert Ricci | Ivan Seskar

GENI, the Global Environment for Networking Innovation, is a distributed virtual laboratory for transformative, at-scale experiments in network science, services, and security. Designed in response to concerns over Internet ossification, GENI is enabling a wide variety of experiments in a range of areas, including clean-slate networking, protocol design and evaluation, distributed service offerings, social network integration, content management, and in-network service deployment. Recently, GENI has been leading an effort to explore the potential of its underlying technologies, SDN and GENI racks, in support of university campus network management and applications. With the concurrent deployment of these technologies on regional and national R&E backbones, the result will be a revolutionary new national-scale distributed architecture, bringing to the entire network the shared, deeply programmable environment that the cloud has brought to the datacenter. This deeply programmable environment will support the GENI research mission as well as enable research in a wide variety of application areas. © 2014 Elsevier B.V. All rights reserved.


Tifenn Rault | Abdelmadjid Bouabdallah | Yacine Challal

The design of sustainable wireless sensor networks (WSNs) is a very challenging issue. On the one hand, energy-constrained sensors are expected to run autonomously for long periods, yet it may be cost-prohibitive, or in hostile environments even impossible, to replace exhausted batteries. On the other hand, unlike other networks, WSNs are designed for specific applications, which range from small-size healthcare surveillance systems to large-scale environmental monitoring. Thus, any WSN deployment has to satisfy a set of requirements that differs from one application to another. In this context, a host of research work has been conducted to propose a wide range of solutions to the energy-saving problem. This research covers several areas, from physical layer optimisation to network layer solutions. It is therefore not easy for the WSN designer to select the efficient solutions that should be considered in the design of an application-specific WSN architecture. We present a top-down survey of the trade-offs between application requirements and lifetime extension that arise when designing wireless sensor networks. We first identify the main categories of applications and their specific requirements. Then we present a new classification of energy-conservation schemes found in the recent literature, followed by a systematic discussion of how these schemes conflict with the specific requirements. Finally, we survey the techniques applied in WSNs to achieve trade-offs between multiple requirements, such as multi-objective optimisation. © 2014 Elsevier B.V. All rights reserved.


Ian F. Akyildiz | Ahyoung Lee | Pu Wang | Min Luo | Wu Chou

Software Defined Networking (SDN) is an emerging networking paradigm that separates the network control plane from the data forwarding plane, with the promise to dramatically improve network resource utilization, simplify network management, reduce operating cost, and promote innovation and evolution. Traffic engineering techniques have been widely exploited in past and current data networks, such as ATM and IP/MPLS networks, to optimize the performance of communication networks by dynamically analyzing, predicting, and regulating the behavior of the transmitted data. However, the unique features of SDN require new traffic engineering techniques that exploit the global network view, status, and flow patterns/characteristics available for better traffic control and management. This paper surveys the state of the art in traffic engineering for SDNs, focusing mainly on four thrusts: flow management, fault tolerance, topology update, and traffic analysis/characterization. In addition, some existing and representative traffic engineering tools from both industry and academia are explained. Moreover, open research issues for the realization of SDN traffic engineering solutions are discussed in detail. © 2014 Elsevier B.V. All rights reserved.


Mohamed Younis | Izzet F. Senturk | Kemal Akkaya | Sookyoung Lee | Fatih Senel

In wireless sensor networks (WSNs), nodes often operate unattended in a collaborative manner to perform some tasks. In many applications, the network is deployed in harsh environments, such as a battlefield, where the nodes are susceptible to damage. In addition, nodes may fail due to energy depletion or breakdown of the onboard electronics. The failure of nodes may leave some areas uncovered and degrade the fidelity of the collected data. The most serious consequence, however, is when the network gets partitioned into disjoint segments. Losing network connectivity has a very negative effect on the applications, since it prevents data exchange and hinders coordination among some nodes. Therefore, restoring overall network connectivity is crucial. Given the resource-constrained setup, the recovery should impose the least overhead and performance impact. This paper focuses on network topology management techniques for tolerating/handling node failures in WSNs. Two broad categories, reactive and proactive methods, have been identified for classifying the existing techniques. Considering these categories, a thorough analysis and comparison of the recent works is provided. Finally, the paper concludes by outlining open issues that warrant additional research. © 2013 Elsevier B.V. All rights reserved.


Luis Sanchez | Luis Muñoz | Jose Antonio Galache | Pablo Sotres | Juan R. Santana | Veronica Gutierrez | Rajiv Ramdhany | Alex Gluhak | Srdjan Krco | Evangelos Theodoridis | Dennis Pfisterer

This paper describes the deployment and experimentation architecture of the Internet of Things experimentation facility being deployed in the city of Santander. The facility is implemented within the SmartSantander project, one of the projects of the Future Internet Research and Experimentation initiative of the European Commission, and represents a city-scale experimental research facility that is unique in the world. Additionally, this facility supports the typical applications and services of a smart city. Tangible results are expected to influence the definition and specification of Future Internet architecture design from the viewpoints of the Internet of Things and the Internet of Services. The facility comprises a large number of Internet of Things devices deployed in several urban scenarios, which will be federated into a single testbed. This paper describes the deployment being carried out at the main location, namely the city of Santander. Besides presenting the current deployment, it also presents the main insights gained in the architectural design of a large-scale IoT testbed, and sketches the solutions adopted to implement the different components that provide the required testbed functionalities. The IoT experimentation facility described in this paper is conceived to provide a suitable platform for large-scale experimentation and evaluation of IoT concepts under real-life conditions. © 2013 Elsevier B.V. All rights reserved.


Murat Kuzlu | Manisa Pipattanasomporn | Saifur Rahman

Since the introduction of the smart grid, accelerated deployment of various smart grid technologies and applications has been experienced. This allows the traditional power grid to become more reliable, resilient, and efficient. Despite such widespread deployment, it is still not clear which communication technology solutions are the best fit to support grid applications. This is because different smart grid applications have different network requirements in terms of data payloads, sampling rates, latency and reliability. Based on a variety of smart grid use cases and selected standards, this paper compiles information about the communication network requirements of different smart grid applications, ranging from those used in a Home Area Network (HAN) and Neighborhood Area Network (NAN) to those in a Wide-Area Network (WAN). Communication technologies used to support the implementation of selected smart grid projects are also discussed. This paper is expected to serve as a comprehensive database of technology requirements and best practices for use by communication engineers when designing a smart grid network. © 2014 Elsevier B.V. All rights reserved.


Guoqiang Zhang | Yang Li | Tao Lin

Internet usage has drastically shifted from host-centric end-to-end communication to receiver-driven content retrieval. To adapt to this change, a handful of innovative information/content-centric networking (ICN) architectures have recently been proposed. One common and important feature of these architectures is to leverage built-in network caches to improve the transmission efficiency of content dissemination. Compared with traditional Web caching and CDN caching, ICN caching takes on several new characteristics: caches are transparent to applications, caches are ubiquitous, and the content to be cached is more fine-grained. These distinguishing features pose new challenges to ICN caching technologies. This paper presents a comprehensive survey of state-of-the-art techniques aiming to address these issues, with particular focus on reducing cache redundancy and improving the availability of cached content. Since this is a new research area, the paper also points out several interesting yet challenging research directions in this subject. © 2013 Elsevier B.V. All rights reserved.


Weiwei Fang | Xiangmin Liang | Shengxin Li | Luca Chiaraviglio | Naixue Xiong

In recent years, the power costs of cloud data centers have become a practical concern and have attracted significant attention from both industry and academia. Most of the early works on data center energy efficiency have focused on the biggest power consumers (i.e., computer servers and cooling systems) without taking the networking part into consideration. However, recent studies have revealed that the network elements consume 10-20% of the total power in the data center, which poses a great challenge to effectively reducing network power cost without adversely affecting overall network performance. Based on an analysis of the topology characteristics and traffic patterns of data centers, this paper presents a novel approach, called VMPlanner, for network power reduction in virtualization-based data centers. The basic idea of VMPlanner is to optimize both virtual machine placement and traffic flow routing so as to turn off as many unneeded network elements as possible for power saving. We formulate the optimization problem, analyze its hardness, and solve it by designing VMPlanner as a stepwise optimization approach with three approximation algorithms. VMPlanner is implemented and evaluated in a simulated environment with traffic traces collected from a data center test-bed, and the experimental results illustrate the efficacy and efficiency of this approach. © 2012 Elsevier B.V. All rights reserved.
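To make the flavor of such a joint optimization concrete, a simplified sketch of the underlying problem (an illustrative model with symbols introduced here, not VMPlanner's exact formulation) can be written as an integer program that pays a power cost only for the switches and links left powered on:

    \begin{align}
    \min_{x,\,f,\,y,\,z}\quad & \sum_{s \in S} P_s\, y_s + \sum_{l \in L} P_l\, z_l \\
    \text{s.t.}\quad & \textstyle\sum_{h \in H} x_{vh} = 1 \quad \forall v \in V
        && \text{(each VM placed on one host)} \\
    & \textstyle\sum_{v \in V} r_v\, x_{vh} \le C_h \quad \forall h \in H
        && \text{(host capacity)} \\
    & \textstyle\sum_{d \in D} f_l^{d} \le B_l\, z_l \quad \forall l \in L
        && \text{(traffic only on powered links)} \\
    & z_l \le y_s \quad \forall l \text{ incident to switch } s
        && \text{(links need their switches on)} \\
    & x_{vh},\, y_s,\, z_l \in \{0,1\}
    \end{align}

Here x places VMs on hosts, f routes the inter-VM traffic demands D (subject to the usual flow-conservation constraints, omitted for brevity), y and z turn network elements on or off, P denotes element power, r_v a VM's resource demand, C_h host capacity, and B_l link bandwidth. Since even restricted versions of such problems are NP-hard, stepwise heuristics and approximation algorithms of the kind VMPlanner adopts are the practical route.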


K. Giotis | C. Argyropoulos | G. Androulidakis | D. Kalogeras | V. Maglaris

Software Defined Networks (SDNs) based on the OpenFlow (OF) protocol export control-plane programmability of switched substrates. As a result, rich functionality in traffic management, load balancing, routing, firewall configuration, etc., pertaining to the specific flows they control, may be easily developed. In this paper we extend these functionalities with an efficient and scalable mechanism for performing anomaly detection and mitigation in SDN architectures. Flow statistics may reveal anomalies triggered by large-scale malicious events (typically massive Distributed Denial of Service attacks) and subsequently assist networked resource owners/operators in raising mitigation policies against these threats. First, we demonstrate that OF statistics collection and processing overloads the centralized control plane, introducing scalability issues. Second, we propose a modular architecture that separates the data collection process from the SDN control plane by employing sFlow monitoring data. We then report experimental results that compare its performance against native OF approaches that use standard flow table statistics. Both alternatives are evaluated using an entropy-based method on high-volume real network traffic data collected from a university campus network. The packet traces were fed to hardware and software OF devices in order to assess flow-based data-gathering and related anomaly detection options. We subsequently present experimental results that demonstrate the effectiveness of the proposed sFlow-based mechanism compared to the native OF approach, in terms of the overhead imposed on system resources. Finally, we conclude by demonstrating that once a network anomaly is detected and identified, the OF protocol can effectively mitigate it via flow table modifications. © 2013 Elsevier B.V. All rights reserved.
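As a rough illustration of the entropy-based detection idea used in this evaluation (a minimal sketch, not the authors' implementation; the window structure, feature choice and threshold are assumptions made here), one can compute the normalized Shannon entropy of the destination-address distribution in each monitoring window and flag windows where the entropy collapses, as happens when a DDoS concentrates traffic on a few victims:

    from collections import Counter
    from math import log2

    def normalized_entropy(counts):
        """Shannon entropy of a frequency distribution, scaled to [0, 1]."""
        total = sum(counts)
        if total == 0 or len(counts) < 2:
            return 1.0
        h = -sum((c / total) * log2(c / total) for c in counts if c > 0)
        return h / log2(len(counts))  # divide by the maximum possible entropy

    def flag_windows(windows, threshold=0.5):
        """windows: list of per-window destination-IP samples (e.g. from sFlow).
        Returns indices of windows whose entropy drops below the threshold."""
        return [i for i, dsts in enumerate(windows)
                if normalized_entropy(list(Counter(dsts).values())) < threshold]

    # A balanced window versus one dominated by a single victim address:
    normal = ["10.0.0.%d" % (i % 50) for i in range(1000)]
    attack = ["10.0.0.7"] * 950 + ["10.0.0.%d" % i for i in range(50)]
    print(flag_windows([normal, attack]))  # -> [1]

A flagged window would then trigger the mitigation step the paper describes, e.g. installing OpenFlow flow-table entries that drop or rate-limit the offending flows.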


Sérgio S.C. Silva | Rodrigo M.P. Silva | Raquel C.G. Pinto | Ronaldo M. Salles

Botnets, which are networks formed by malware-compromised machines, have become a serious threat to the Internet. Such networks have been created to conduct large-scale illegal activities, even jeopardizing the operation of private and public services in several countries around the world. Although research on the topic of botnets is relatively new, it has been the subject of increasing interest in recent years and has spawned a growing number of publications. However, existing studies remain somewhat limited in scope and do not generally include recent research and developments. This paper presents a comprehensive review that broadly discusses the botnet problem, briefly summarizes the previously published studies and supplements these with a wide-ranging discussion of recent works and solution proposals spanning the entire botnet research field. This paper also presents and discusses a list of the prominent and persistent research problems that remain open. © 2012 Elsevier B.V. All rights reserved.


Reduan H. Khan | Jamil Y. Khan

A robust communication infrastructure is the touchstone of a smart grid, differentiating it from the conventional electrical grid by transforming it into an intelligent and adaptive energy delivery network. To cope with the rising penetration of renewable energy sources and the expected widespread adoption of electric vehicles, the future smart grid needs to implement efficient monitoring and control technologies to improve its operational efficiency. However, the legacy communication infrastructures in the existing grid are quite insufficient for, if not incapable of, meeting the diverse communication requirements of the smart grid. Therefore, utilities all over the world now face the key challenge of finding the most appropriate technology that can satisfy their future communication needs. In order to properly assess the vast landscape of available communication technologies, architectures and protocols, it is very important to acquire detailed knowledge about the current and prospective applications of the smart grid. With a view to addressing this critical issue, this paper offers an in-depth review of the application characteristics and traffic requirements of several emerging smart grid applications and highlights some of the key research challenges in this arena. © 2012 Elsevier B.V. All rights reserved.


Muhammad Adeel Mahmood | Winston K.G. Seah | Ian Welch

Ensuring energy-efficient and reliable transport of data in resource-constrained Wireless Sensor Networks (WSNs) is one of the primary concerns in achieving a high degree of efficiency in monitoring and control systems. The two techniques typically used in WSNs to achieve reliability are retransmission and redundancy. Most of the existing research focuses on traditional retransmission-based reliability, where reliable transmission is ensured by recovering lost packets through retransmission. This can incur additional transmission overhead that not only wastes the sensors' limited energy resources but also congests the network, which in turn affects the reliable transmission of data. On the other hand, employing redundancy to achieve reliability in WSNs has received comparatively less emphasis from the research community [35], and this area warrants further investigation. In redundancy-based reliability mechanisms, bit losses within a packet can be recovered by utilizing some form of coding scheme. This ability to correct lost or corrupted bits within a packet significantly reduces the transmission overhead caused by retransmitting the entire packet. Both retransmission and redundancy can be performed on either a hop-by-hop or an end-to-end basis. The hop-by-hop method allows the intermediate nodes to perform retransmission or redundancy, whereas in the end-to-end approach retransmission or redundancy is performed only at the source and destination nodes. However, a hybrid mechanism that efficiently combines retransmission and redundancy techniques to achieve reliability has so far been neglected by the existing research. Depending on the nature of the application, it is also important to define the amount of data required to ensure reliability, which introduces the concept of packet-level versus event-level reliability. Packet reliability requires all the packets from all the relevant sensor nodes to reach the sink, whereas event reliability only requires that the sink receive enough information about a certain event taking place. Thus, retransmission or redundancy techniques using hop-by-hop or end-to-end mechanisms aim to achieve either packet-level or event-level reliability. This paper presents a survey of reliability protocols in WSNs. We review several reliability schemes based on retransmission and redundancy techniques, using different combinations of packet or event reliability, that recover lost data using hop-by-hop or end-to-end mechanisms. We further analyze these schemes by investigating the most suitable combination of these techniques, methods and required reliability levels in order to provide an energy-efficient reliability mechanism for resource-constrained WSNs. This paper also proposes a 3D reference model for classifying research in WSN reliability, which is used to perform an in-depth analysis of the unexplored areas. © 2015 Elsevier B.V. All rights reserved.
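To see why redundancy can spare the cost of retransmitting whole packets, consider the following minimal Python sketch (an illustration under assumptions made here, not a coding scheme from the surveyed literature): adding one XOR parity packet to a block of equal-length data packets lets the receiver reconstruct any single lost packet locally, with no retransmission request:

    from functools import reduce

    def xor_parity(packets):
        """Byte-wise XOR of a list of equal-length packets."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

    def recover_lost(received, parity):
        """received: the n-1 packets of a block that arrived (one was lost);
        parity: the block's XOR parity packet.
        XOR-ing all survivors with the parity yields the missing packet."""
        return xor_parity(list(received) + [parity])

    # A block of 4 sensor readings, 8 bytes each; packet 2 is lost in transit.
    data = [bytes([i] * 8) for i in range(4)]
    parity = xor_parity(data)
    arrived = [data[0], data[1], data[3]]
    assert recover_lost(arrived, parity) == data[2]

The trade-off surveyed above is visible even here: the parity packet costs one extra transmission per block up front, whereas retransmission costs nothing until a loss occurs but then resends an entire packet hop-by-hop or end-to-end.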


Manar Jammal | Taranpreet Singh | Abdallah Shami | Rasool Asal | Yiming Li

Network usage and demands are growing at a rapid pace, while network administrators face difficulties in tracking users' frequent accesses to the network. Consequently, managing the infrastructure supporting these demands has become a complicated and time-consuming task. Networks are also in a state of flux: they are not only expanding but also require reconfiguration to meet business needs. Software defined networking (SDN) and network function virtualization (NFV) have emerged as promising technologies that change the cost profile and agility of internet protocol (IP) networks. Conceptually, SDN separates the network control logic from its underlying hardware, enabling network administrators to exert more control over network functioning and providing a unified global view of the network. Moreover, SDN and NFV can be combined and together have the potential to mitigate the challenges of legacy networks. In this paper, our aim is to describe the benefits of using SDN in a multitude of environments, such as data centers, data center networks, and Network-as-a-Service offerings. We also present the various challenges facing SDN, from scalability to reliability and security concerns, and discuss existing solutions to these challenges. © 2014 Elsevier B.V. All rights reserved.


Akram Hakiri | Aniruddha Gokhale | Pascal Berthou | Douglas C. Schmidt | Thierry Gayraud

Currently, many aspects of the classical architecture of the Internet are etched in stone, a so-called ossification of the Internet, which has led to major obstacles in IPv6 deployment and difficulty in using IP multicast services. Yet there exist many reasons to extend the Internet, e.g., improving intra-domain and inter-domain routing for high availability of the network, providing end-to-end connectivity for users, and allowing dynamic QoS management of network resources for new applications such as data centers, cloud computing, and network virtualization. To address these requirements, the next-generation architecture for the Future Internet has introduced the concept of Software-Defined Networking (SDN). At the core of this emerging paradigm is the separation of the control plane from the forwarding elements in the network and its centralization, as opposed to the distributed control plane of existing networks. This decoupling allows control plane software components (e.g., an OpenFlow controller) to be deployed on computing platforms that are much more powerful than traditional network equipment (e.g., switches/routers), while protecting the data and intellectual property of the vendors of such equipment. A critical understanding of this emerging paradigm is necessary to address the multiple challenges in realizing the Future Internet and to resolve the ossification problem of the existing Internet. To address these requirements, this paper surveys existing technologies and the wide range of recent and state-of-the-art projects on SDN, followed by an in-depth discussion of the major challenges in this area. © 2014 Elsevier B.V. All rights reserved.


S. Salsano | N. Blefari-Melazzi | A. Detti | G. Morabito | L. Veltri

Information Centric Networking (ICN) is a new networking paradigm in which the network provides users with content instead of communication channels between hosts. Software Defined Networking (SDN) is an approach that promises to enable the continuous evolution of networking architectures. In this paper we propose and discuss solutions to support ICN using SDN concepts. We focus on an ICN framework called CONET, which has its roots in the CCN/NDN architecture and can interwork with its implementation (CCNx). Although some details of our solution have been specifically designed for the CONET architecture, its general ideas and concepts are applicable to a class of recent ICN proposals that follow the basic mode of operation of CCN/NDN. We approach the problem in two complementary ways. First, we discuss a general and long-term solution based on SDN concepts, without taking into account specific limitations of SDN standards and equipment. Then we focus on an experiment to support ICN functionality over a large-scale SDN testbed based on OpenFlow, developed in the context of the OFELIA European research project. The current OFELIA testbed is based on OpenFlow 1.0 equipment from a variety of vendors; therefore we had to design the experiment taking into account the features currently available on off-the-shelf OpenFlow equipment. © 2013 Elsevier B.V. All rights reserved.

Abstract

What are the big movements in networking that researchers should heed? A standout is the global spread of communities of interest (the networking analogue of the flat world) and their need for “dynamic virtual networks” that support rich applications requiring resources from several domains. The imperative for inter-networking, i.e., the enablement of coordinated sharing of resources across multiple domains, is certain. This challenge has many facets, ranging from the organizational (e.g., different, possibly competing, owners) to the technical (e.g., different technologies). Yet another key characteristic of the emerging networking environment is that the service provider is required to handle ever-increasing uncertainty in demand, both in volume and in time. On the other hand, there are new instruments available to handle this challenge. Thus, inter-networking and uncertainty management are important challenges of emerging networking that deserve attention from the research community.

We describe research that touches on both topics. First, we consider a model of data-optical inter-networking, where routes connecting end-points in data domains are concatenations of segments in the data and optical domains. The optical domain in effect acts as a carrier’s carrier for multiple data domains. The challenge to inter-networking stems from the limited view that the data and optical domains have of each other. Coordination has to be enabled through parsimonious and qualitatively restrictive information exchange across domains. Yet the overall optimization objective, which is to maximize end-to-end carried traffic with minimum lightpath provisioning cost, enmeshes the data and optical domains. This example of inter-networking also involves two technologies. A mathematical reflection of the latter fact is the integrality of some of the decision variables, due to wavelengths being the bandwidth unit in optical transmission. Through an application of Generalized Benders Decomposition, the problem of optimizing provisioning and routing is decomposed into sub-problems, which are solved by the different domains, with the results exchanged in iterations that provably converge to the global optimum.
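The decomposition pattern just described can be sketched as follows (an illustrative rendering under assumptions made here, not the exact formulation of this work). The optical domain solves a master problem over the integer lightpath-provisioning variables, the data domains solve a routing subproblem for the provisioned capacities, and the subproblem duals generate cuts for the master:

    \begin{align}
    \text{master (optical):}\quad & \min_{w \in \mathbb{Z}_{+}^{m},\; \theta}\; c^{\top} w - \theta
      \quad \text{s.t.}\quad \theta \le \phi(w^{k}) + \lambda_{k}^{\top} (w - w^{k}),\quad k = 1, \dots, K \\
    \text{subproblem (data):}\quad & \phi(w) = \max_{f \ge 0}\; u^{\top} f
      \quad \text{s.t.}\quad A f \le w
    \end{align}

Here w counts the lightpaths provisioned on each optical segment (integral because wavelengths are the bandwidth unit), c their provisioning costs, f the end-to-end traffic carried in the data domains with utilities u, A the routing structure, and \lambda_k the optimal subproblem duals at iterate w^k; since \phi is concave in the capacities w, each dual supplies a valid overestimating cut. Only the trial provisionings w^k and the duals \lambda_k cross the domain boundary, matching the parsimonious information exchange described above, and the cuts tighten the master's estimate \theta of the carried-traffic value as the iterates converge.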

In turning to uncertainty management, we begin by presenting a framework for stochastic traffic management. Traffic demands are uncertain and given by probability distributions. While there are alternative perspectives (and metrics) on resource usage, such as social welfare and network revenue, we adopt the latter, which is aligned with the service provider’s interests. Uncertainty introduces the risk of misallocation of resources. What is the right measure of risk in networking? We examine various definitions of risk, some taken from modern portfolio theory, and suggest a balanced solution. Next we consider the optimization of an objective which is a risk-adjusted measure of network revenue. We obtain conditions under which the optimization problem is an instance of convex programming. Studies of the properties of the solution show that it asymptotically meets the stochastic efficiency criterion. Risk mitigation policies for service providers are suggested. For instance, by selecting the appropriate mix of long-term contracts and opportunistic servicing of random demand, the service provider can optimize its risk-adjusted revenue. The “efficient frontier”, which is the set of Pareto-optimal pairs of mean revenue and revenue risk, is useful to the service provider in selecting its operating point.
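A common way to make a risk-adjusted revenue objective concrete (an illustrative form consistent with the portfolio-theoretic framing above, not necessarily the exact objective used in this work) is a mean-risk trade-off:

    \max_{x \in \mathcal{X}} \; \mathbb{E}[R(x)] - \lambda\, \rho(R(x))

where x is the resource-allocation decision (e.g., capacity committed to long-term contracts versus capacity held back for opportunistic servicing of random demand), R(x) the random network revenue induced by the demand distributions, \rho a risk measure (variance in classical mean-variance analysis, or a coherent measure such as conditional value-at-risk, for which convexity results of the kind mentioned above are typically obtained), and \lambda \ge 0 the provider's risk aversion. Sweeping \lambda over its range traces the efficient frontier of Pareto-optimal (mean revenue, revenue risk) pairs described above.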
