Network Function Virtualization (NFV) and Software-Defined Networking (SDN) are deeply changing the networking field by introducing software at every level, with the aim of decoupling the network logic from the underlying hardware. Together, they bring several benefits, mostly in terms of scalability and flexibility. Up to now, SDN has been used to support NFV from the routing and the architectural point of view. In this paper we present Kathará, a container-based framework that allows network operators to deploy Virtual Network Functions (VNFs) through the adoption of emerging data-plane programmable capabilities, such as P4-compliant switches. It also supports the coexistence of SDN and traditional routing protocols, in order to set up arbitrarily complex networks. As a side effect, thanks to Kathará, we demonstrate that implementing NFV by means of special-purpose equipment is feasible and provides a performance gain while preserving the benefits of NFV. We measure the resource consumption of Kathará and show that it outperforms frameworks that implement virtual networks using virtual machines by several orders of magnitude.
We introduce an ETSI NFV-compliant, scalable, and distributed architecture, called Megalos, that supports the implementation of virtual network scenarios consisting of virtual devices (VNFs), where each VNF may have several L2 interfaces assigned to virtual LANs. We rely on Docker containers to realize VNFs and we leverage Kubernetes for the management of the nodes of a distributed cluster. Our architecture isolates the traffic of each virtual LAN from that of the other LANs, from the cluster traffic, and from Internet traffic. Also, a packet is only sent to the cluster node hosting the recipient VNF. The allocation of VNFs to the nodes of the cluster is performed by the Megalos Scheduler, which takes the network topology into account in order to reduce the traffic among nodes. As an example application, we emulate a large network scenario, with thousands of VNFs and LANs, on a small cluster of 50 nodes. Finally, we experimentally show the scalability potential of Megalos by measuring the overhead of the distributed environment and of its signaling protocols.
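The abstract states that the Megalos Scheduler places VNFs on cluster nodes according to the network topology, so that LAN traffic tends to stay node-local. The paper's actual algorithm is not reproduced here; purely as an illustration of that objective, a minimal greedy placement could look like the following sketch, where `vnfs`, `lans`, `nodes`, and `capacity` are hypothetical inputs, not Megalos' API:

```python
from collections import defaultdict

def schedule(vnfs, lans, nodes, capacity):
    """Greedy topology-aware placement (illustrative only, not Megalos'
    actual algorithm): place each VNF on the node that already hosts most
    of its LAN neighbors, so LAN traffic tends to stay within one node.
    `lans` maps each VNF to the list of virtual LANs it is attached to."""
    placement = {}
    for vnf in vnfs:
        # score each node by how many already-placed LAN peers it hosts
        scores = defaultdict(int)
        for lan in lans.get(vnf, []):
            for peer, node in placement.items():
                if lan in lans.get(peer, []):
                    scores[node] += 1
        # pick the best-scoring node that still has capacity left;
        # a VNF stays unplaced if every node is full
        for node in sorted(nodes, key=lambda n: -scores[n]):
            if sum(1 for v in placement.values() if v == node) < capacity:
                placement[vnf] = node
                break
    return placement
```

With two LANs of two VNFs each and two nodes of capacity two, this heuristic co-locates each LAN on a single node, so no LAN traffic crosses the cluster network.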
In computer networks, tests to ensure the correct behaviour of network equipment or protocols are often required. Because of the high cost of physical hardware, these tests are typically performed in a virtual environment. Kathará is a network emulation system that accurately reproduces the behaviour of a real network. Thanks to its modular design, it can exploit several virtualization technologies. Recently, Kathará has been rewritten to overcome some implementation limitations and performance issues. This paper presents the Kathará model and its new architecture, demonstrating its value by comparing its scalability and performance with Netkit (another state-of-the-art tool for network emulation) and with the previous version of Kathará.
We introduce an open-source, scalable, and distributed architecture, called Megalos, that supports the implementation of virtual network scenarios consisting of virtual devices (VDs), where each VD may have several Layer 2 interfaces assigned to virtual LANs. We rely on Docker containers to realize vendor-independent VDs and we leverage Kubernetes for the management of the nodes of a distributed cluster. Our architecture does not require platform-specific configurations and supports a seamless interconnection between the virtual environment and the physical one. Also, it isolates the traffic of each virtual LAN from that of the other LANs, from the cluster traffic, and from Internet traffic. Further, a packet is only sent to the cluster node hosting the recipient VD. We present several example applications in which we emulate large network scenarios, with thousands of VDs and LANs. Finally, we experimentally show the scalability potential of Megalos by measuring the overhead of the distributed environment and of its signaling protocols.
Datacenters are a critical part of the Internet infrastructure, as they guarantee the efficient deployment of a wide range of services. Since a considerable number of datacenter failures is caused by software bugs and configuration errors, the management and testing of these networks is a crucial task. In this field, emulation-based digital twins have proven their effectiveness. To faithfully emulate the typical three-layer hierarchy, composed of physical servers, virtual machines, and containers, support for nested virtualization is a fundamental requirement. Further, the emulation of hyper-scale datacenters needs to leverage horizontal scaling over a cluster of nodes. Existing container-based proposals do not meet both requirements. Conversely, existing VM-based proposals meet such requirements, but they need complex configurations and have high resource demands. We propose a container-based framework to faithfully emulate datacenters. This is a fundamental building block for designing datacenter digital twins, allowing real software implementations to be tested in a lightweight, scalable, and easily configurable environment.
Data centers are a critical part of the Internet infrastructure. In fact, most of the relevant online services are hosted in a data center. Data center networks are complex, since they are characterized by a high-density architecture and by a high level of redundancy. Fat tree topologies are currently the most widely used in hyperscale data centers. Performing tests on such topologies with physical equipment would be unfeasible, because of the high costs of the required hardware and the amount of manual effort involved. This would limit the automation and reproducibility of tests, leading to a more error-prone testing pipeline. This paper presents VFTGen, a tool that, leveraging virtualization and the Software-Defined Data Center concept, automatically builds, deploys, and configures arbitrary fat tree topologies in a virtual environment. We demonstrate the ease of use of the tool and its value in supporting the study and development of networking protocols for fat trees.
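A tool such as VFTGen has to derive the number of devices to deploy from the topology parameters. For a classic k-ary fat tree (the structure targeted above, although VFTGen's exact parameterization may differ), the standard element counts follow directly from k; the function below is an illustrative sketch, not VFTGen's code:

```python
def fat_tree_sizes(k: int) -> dict:
    """Element counts for a classic k-ary fat tree (k must be even):
    k pods, each with k/2 edge and k/2 aggregation switches, (k/2)^2
    core switches, and (k/2)^2 servers per pod, i.e. k^3/4 in total."""
    assert k % 2 == 0, "k must be even"
    half = k // 2
    return {
        "pods": k,
        "edge_switches": k * half,
        "aggregation_switches": k * half,
        "core_switches": half ** 2,
        "servers": (k ** 3) // 4,
    }
```

These counts make the cost argument concrete: already for k = 4 a physical testbed needs 20 switches, which is exactly what a virtual deployment avoids.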
The Internet architecture has been undergoing a significant refactoring, in which the past preeminence of transit providers has been replaced by content providers, which have a ubiquitous presence throughout the world and seek to improve the user experience by bringing content closer to its final recipients. This restructuring is materialized in the worldwide emergence of Massive Scale Data Centers (MSDC), which enable the implementation of the Cloud Computing concept. MSDC usually deploy Fat-Tree topologies, with constant bisection bandwidth among servers and multi-path routing. To take full advantage of such characteristics, specific routing protocols are needed. Multi-path routing also calls for a revision of transport protocols and forwarding policies, which are further affected by the traffic characteristics of specific MSDC applications. Experimenting on these infrastructures is prohibitively expensive; therefore, scalable and realistic experimentation environments are needed to research and test solutions for MSDC. In this paper, we review several such environments, both single-host and distributed, which permit analyzing the pros and cons of different solutions.
Several data centers adopt fat-tree topologies, where high bisection bandwidth is achieved by interconnecting commodity hardware and by using specific routing solutions. These solutions, which include protocol implementations and configurations, are difficult to evaluate and test, both because of the density of fat-trees and because of the complexity of the protocols. Also, since most issues show up only when a fault happens, it is unfeasible to perform such tests in a production environment. Additionally, the lack of standard testing procedures motivates an effort in developing solutions for such a critical task. In this paper, we propose a methodology devised for testing fat-tree routing protocol implementations. It adopts a wall-clock-independent method to establish metrics, which permits normalizing the results of different routing protocol implementations independently of the execution environment. The methodology is implemented by Sibyl, a software framework developed to automatically perform repeatable tests on arbitrary fat-tree topologies. Sibyl also provides a set of tools to analyze the results and investigate implementation behaviors. We evaluate the methodology and Sibyl in three use cases, which cover a wide spectrum of situations where Sibyl is effective for analyzing, comparing, developing, and debugging routing protocol implementations.
Nowadays, inter-domain routing optimization is performed according to the so-called "Tweak and Pray" approach, which consists of making changes to the BGP configuration without knowing in advance the consequences of such modifications. This is due to the lack of cooperation among Network Operators in configuring BGP to optimize inter-domain routing. Inefficient resource usage, network anomalies, and outages are common consequences of wrong configuration changes performed by Network Operators in an attempt to improve the performance of their infrastructures. In this paper we propose a novel framework based on the Digital Twin technology to enable the execution of "what-if" analyses in the context of Traffic Engineering performed by tuning BGP parameters. Such a paradigm shift will allow Network Operators to be aware of the effects of BGP configuration changes before their actual execution. A proof of concept related to the balancing of inbound traffic in an Autonomous System network, based on the use of the AS Path Prepending technique, is realized to validate the feasibility of the proposed approach.
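AS Path Prepending, the technique used in the proof of concept, steers inbound traffic by artificially lengthening the AS_PATH advertised over the links to be de-preferred, so that remote routers select the other path. A toy model of the mechanism (with made-up ASNs, ignoring LOCAL_PREF, MED, and the other steps of the real BGP decision process) is sketched below:

```python
def prepend(as_path, own_asn, times):
    """Artificially lengthen an AS_PATH by repeating the local ASN,
    as the origin AS does when announcing over a de-preferred link."""
    return [own_asn] * times + as_path

def best_route(routes):
    """Simplified BGP decision: prefer the shortest AS_PATH.
    (Real BGP checks LOCAL_PREF first, then AS_PATH length, and more.)"""
    return min(routes, key=lambda r: len(r["as_path"]))

# AS 65001 is reachable through provider A (1 hop) or provider B (2 hops).
origin_path = [65001]
via_a = {"via": "A", "as_path": [65010] + prepend(origin_path, 65001, 2)}
via_b = {"via": "B", "as_path": [65020, 65030] + origin_path}
```

In this model, without prepending the shorter path through provider A attracts all inbound traffic; two prepends over A make its AS_PATH longer than B's, shifting remote best-path selection to B. A digital twin, as proposed above, lets an operator observe exactly this kind of effect before touching the production configuration.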