Impact Of Network Protocols On Data Center Applications
- Keywords: data center; jumbo frames; network protocols; TCP; Applied Science; Computer Sciences
Thesis / dissertation description
Data centers containing hundreds of thousands of servers have become the foundation of modern computing infrastructure. Enterprises are increasingly deploying new applications in, and moving existing applications to, these large-scale data centers. These networked applications communicate via a set of conventional network protocols that were originally designed for wide-area networks. In this dissertation, we study the impact of protocols at different network layers on the performance of data center applications.

At the transport layer, we observe that bandwidth sharing via TCP in commodity data center networks, which are organized in multi-rooted tree topologies, can lead to severe unfairness under many common traffic patterns. We term this phenomenon the TCP Outcast problem. When a large set of flows and a small set of flows arrive at two input ports of a switch and are destined for the same output port, the small set of flows loses a significant share of its throughput. The Outcast problem arises mainly from the tail-drop queues that commodity switches use. Through careful analysis, we show that tail-drop queues exhibit a phenomenon we call port blackout, in which a series of packets arriving on one port is dropped consecutively. Port blackout hurts the smaller set of flows disproportionately, because those flows lose more consecutive packets and therefore suffer TCP timeouts. We demonstrate the existence of the TCP Outcast problem and its impact on MapReduce and file-transfer applications.

At the data link layer, we focus on a simple question: should data center network operators turn on Ethernet jumbo frames? While prior work supports using jumbo frames for their throughput and CPU benefits, it is not clear whether those results apply directly to modern data center networks: most prior experiments were performed on older hardware and focused mostly on TCP performance rather than application-level performance.
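The port-blackout effect described above can be illustrated with a deliberately simplified tail-drop queue simulation (the queue depth, drain rate, burst size, and arrival ordering below are illustrative assumptions, not values or models from the dissertation):

```python
from collections import deque

QUEUE_CAP = 8   # assumed switch queue depth, in packets
DRAIN = 1       # packets serviced per time slot

def simulate(slots=100, burst_a=6):
    """Toy tail-drop queue shared by two input ports.

    Port A aggregates many flows (a burst of packets per slot);
    port B carries a single flow (one packet per slot). Within a
    slot, port A's burst happens to arrive first, a caricature of
    the synchronized arrivals behind port blackout.
    """
    q = deque()
    drops = {"A": 0, "B": 0}
    run = {"A": 0, "B": 0}       # current consecutive-loss run per port
    max_run = {"A": 0, "B": 0}   # longest consecutive-loss run per port
    for _ in range(slots):
        for _ in range(DRAIN):   # service the queue first
            if q:
                q.popleft()
        for port in ["A"] * burst_a + ["B"]:
            if len(q) < QUEUE_CAP:
                q.append(port)
                run[port] = 0    # an accepted packet ends this port's loss run
            else:
                drops[port] += 1
                run[port] += 1
                max_run[port] = max(max_run[port], run[port])
    return drops, max_run
```

In this toy model, the bursty port refills the queue in every slot before the other port's lone packet arrives, so the port carrying fewer flows sees long runs of consecutive drops. That loss pattern, concentrated consecutive losses on a small set of flows, is exactly what drives those flows into repeated TCP timeouts in the Outcast scenario.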
In this dissertation, we evaluate the benefits of jumbo frames on modern hardware with features such as large send/receive offload, and with canonical data center applications such as MapReduce and tiered Web services. We find that the throughput and CPU-utilization benefits still exist, although they are significantly smaller than in prior studies. Based on these results, we conclude that data center network operators can safely turn on jumbo frames, despite a small side effect that we discovered.
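As a practical sketch, turning on jumbo frames on a Linux host amounts to raising the interface MTU. The interface name `eth0` and the 9000-byte MTU below are assumptions; every host and switch along a path must be configured for the larger MTU, or jumbo frames will be dropped:

```shell
# Raise the MTU on one interface (interface name is an assumption;
# substitute your NIC's name)
ip link set dev eth0 mtu 9000

# Verify the configured MTU
ip link show dev eth0

# Check that jumbo frames traverse the path without fragmentation:
# 8972 = 9000 bytes minus 20 (IP header) minus 8 (ICMP header)
ping -M do -s 8972 <destination>
```

The `ping -M do` probe sets the don't-fragment bit, so it fails loudly if any hop on the path still enforces a smaller MTU, which is the most common operational pitfall when enabling jumbo frames incrementally.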