New system could reduce data-transmission delays across server farms by 99.6 percent
Big websites usually maintain their own “data centers,” banks of tens or even hundreds of thousands of servers, all passing data back and forth to field users’ requests. Like any big, decentralized network, data centers are prone to congestion: Packets of data arriving at the same router at the same time are put in a queue, and if the queues get too long, packets can be delayed.
At the annual conference of the ACM Special Interest Group on Data Communication, in August, MIT researchers will present a new network-management system that, in experiments, reduced the average queue length of routers in a Facebook data center by 99.6 percent — virtually doing away with queues. When network traffic was heavy, the average latency — the delay between the request for an item of information and its arrival — shrank nearly as much, from 3.56 microseconds to 0.23 microseconds.
Like the Internet, most data centers use decentralized communication protocols: Each node in the network decides, based on its own limited observations, how rapidly to send data and which adjacent node to send it to. Decentralized protocols have the advantage of being able to handle communication over large networks with little administrative oversight.
The MIT system, dubbed Fastpass, instead relies on a central server called an “arbiter” to decide which nodes in the network may send data to which others during which periods of time. “It’s not obvious that this is a good idea,” says Hari Balakrishnan, the Fujitsu Professor in Electrical Engineering and Computer Science and one of the paper’s coauthors.
With Fastpass, a node that wishes to transmit data first issues a request to the arbiter and receives a routing assignment in return. “If you have to pay these maybe 40 microseconds to go to the arbiter, can you really gain much from the whole scheme?” says Jonathan Perry, a graduate student in electrical engineering and computer science (EECS) and another of the paper’s authors. “Surprisingly, you can.”
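The request-and-assignment flow can be sketched in a few lines. This is a hypothetical toy model, not the Fastpass implementation: the `Arbiter` class and its method names are illustrative, and real Fastpass also chooses paths, not just timeslots. The core idea it shows is that the arbiter grants each sender a timeslot in which neither endpoint of the transfer is already booked, so no packet ever waits in a queue.

```python
# Toy sketch (assumed names) of centralized timeslot arbitration:
# a sender asks the arbiter before transmitting and gets back a slot
# in which both its link and the receiver's link are free.

class Arbiter:
    def __init__(self):
        # timeslot -> set of endpoints already scheduled in that slot
        self.busy = {}

    def request(self, src, dst, start_slot=0):
        """Return the first timeslot >= start_slot with src and dst both free."""
        slot = start_slot
        while True:
            occupied = self.busy.setdefault(slot, set())
            if src not in occupied and dst not in occupied:
                occupied.update({src, dst})
                return slot
            slot += 1

arbiter = Arbiter()
a = arbiter.request("server-1", "server-2")  # slot 0: both endpoints free
b = arbiter.request("server-3", "server-2")  # server-2 taken in slot 0, so slot 1
c = arbiter.request("server-3", "server-4")  # slot 0 is free for this pair
```

Note that the third request lands back in slot 0: transfers that share no endpoint proceed in parallel, which is how the scheme avoids queues without idling the network.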
Division of labor
Balakrishnan and Perry are joined on the paper by Amy Ousterhout, another graduate student in EECS; Devavrat Shah, the Jamieson Associate Professor of Electrical Engineering and Computer Science; and Hans Fugal of Facebook.
The researchers’ experiments indicate that an arbiter with eight cores, or processing units, can keep up with a network transmitting 2.2 terabits of data per second. That’s the equivalent of a 2,000-server data center with gigabit-per-second connections transmitting at full bore all the time.
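As a rough sanity check on that equivalence, the arithmetic is simple: 2,000 servers each saturating a 1-gigabit-per-second link generate about 2 terabits per second of aggregate traffic, on the order of the 2.2 terabits per second the eight-core arbiter sustained.

```python
# Back-of-the-envelope check of the claimed capacity.
servers = 2000
link_gbps = 1            # gigabits per second per server
total_tbps = servers * link_gbps / 1000
print(total_tbps)        # 2.0 Tb/s, within the arbiter's 2.2 Tb/s
```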
“This paper is not intended to show that you can build this in the world’s largest data centers today,” Balakrishnan says. “But the question as to whether a more scalable centralized system can be built, we think the answer is yes.”