New MIT Flash Memory System Promises to Improve Data Center Management
MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has unveiled a new system for data center caching, a flash memory solution that promises a more energy-efficient and economical approach to data center management.
According to MIT News, CSAIL researchers presented their flash caching system, named BlueCache, at last week’s International Conference on Very Large Data Bases (VLDB) in Munich, Germany. The flash-based cache is designed to replace conventional cache servers, which typically use RAM, a fast but power-hungry and relatively expensive technology.
Since flash memory is much slower than RAM, a flash-caching system may come as a surprise, particularly since the purpose of a cache system is to store the results of common queries for faster access. A data center for a major web service may have up to 1,000 caching-dedicated servers.
“That’s where the disbelief comes in,” Arvind, senior author of the VLDB conference paper, told MIT News. “People say, ‘Really? You can do this with flash memory?’ Access time in flash is 10,000 times longer than in DRAM [dynamic RAM].”
But while that’s a huge time difference to computers, people won’t notice it. According to Arvind, human users won’t detect the difference between the 0.0002 seconds it typically takes to process an internet query and a response that takes twice as long via a flash-caching system.
How does BlueCache do it? CSAIL researchers implemented a variety of innovations, including some savvy engineering tricks and the use of pipelining, a common computer science technique in which the system begins processing thousands of new queries before it has delivered the result of the first query to reach it, MIT says.
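The idea behind pipelining can be sketched in a few lines. This is a minimal illustration, not BlueCache's actual implementation: `flash_lookup` is a hypothetical stand-in for a slow flash read, and the point is simply that many lookups stay in flight at once instead of each request waiting for the previous result.

```python
from concurrent.futures import ThreadPoolExecutor

def flash_lookup(key):
    # Hypothetical stand-in for a slow flash-memory read.
    return f"value-for-{key}"

def pipelined_lookups(keys, depth=64):
    # Keep up to `depth` flash reads in flight at once, rather than
    # issuing each request only after the previous result arrives.
    with ThreadPoolExecutor(max_workers=depth) as pool:
        futures = [pool.submit(flash_lookup, k) for k in keys]
        return [f.result() for f in futures]

results = pipelined_lookups([f"q{i}" for i in range(8)])
```

With deep enough pipelining, throughput is set by how many requests are in flight, not by the latency of any single flash access, which is how a slow medium can still serve queries at a high aggregate rate.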
Another trick was to not abandon DRAM entirely. In fact, the CSAIL team added a few megabytes of DRAM per million megabytes of BlueCache flash cache. According to MIT, the DRAM stores a table that pairs a database query with the flash-memory address of its query result. This approach makes detecting cache misses — the identification of data not yet imported into the cache — more efficient, the researchers note.
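That DRAM lookup table can be sketched as follows. This is an illustrative sketch, not BlueCache's design: a Python dict stands in for the small DRAM table, a list stands in for flash storage, and the addresses are hypothetical. The key point is that a cache miss is detected with a fast DRAM lookup alone, never a flash access.

```python
class FlashCacheIndex:
    """Sketch of a small in-DRAM index over flash-resident results."""

    def __init__(self):
        self.index = {}   # "DRAM": query key -> flash address
        self.flash = []   # stand-in for flash storage

    def put(self, key, value):
        # Write the result to "flash" and record its address in "DRAM".
        self.index[key] = len(self.flash)
        self.flash.append(value)

    def get(self, key):
        addr = self.index.get(key)   # fast DRAM lookup
        if addr is None:
            return None              # miss detected without touching flash
        return self.flash[addr]      # a hit costs one flash read

cache = FlashCacheIndex()
cache.put("SELECT name FROM users WHERE id=7", "Alice")
```

Because the table holds only keys and addresses, not the results themselves, a few megabytes of DRAM can index a vastly larger flash store.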
The small DRAM addition doesn’t undercut the flash system’s power savings, says MIT: BlueCache’s energy-efficient design allows it to use just 4 percent as much power as a conventional caching system.
BlueCache also eschews the conventional software-based approach, using a dedicated hardware circuit for each of the caching system’s three main functions: reading data from the cache, writing data to it, and deleting it. The payoff: better performance and lower energy consumption.
Here’s another speed trick: to make the most of the bus, the conduit between flash memory and the central processor, BlueCache collects a large number of queries and sends them across in batches. This approach maximizes bus bandwidth efficiency and speeds up the system overall.
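The batching trick can be sketched like this. This is a hedged illustration, not BlueCache's hardware: `send_batch` is a hypothetical stand-in for one bus transaction (such as a DMA transfer), and the sketch just counts how many transactions a stream of queries costs when they are grouped.

```python
class BatchingBus:
    """Sketch: accumulate queries and ship them in one large bus
    transfer instead of many small ones."""

    def __init__(self, batch_size=128):
        self.batch_size = batch_size
        self.pending = []
        self.transfers = 0  # number of bus transactions issued

    def send_batch(self, batch):
        # Hypothetical stand-in for a single bus/DMA transaction.
        self.transfers += 1

    def submit(self, query):
        self.pending.append(query)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.send_batch(self.pending)
            self.pending = []

bus = BatchingBus(batch_size=4)
for q in range(10):
    bus.submit(q)
bus.flush()
# 10 queries cross the bus in 3 transfers instead of 10
```

Since each bus transaction carries fixed overhead regardless of payload size, amortizing that overhead over many queries raises effective bandwidth.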
Better Data Center Management
The CSAIL researchers’ innovations enable BlueCache to perform write operations as efficiently as a DRAM-based system, MIT says. For enterprises, the technology could mean significant cost savings in data center management — an important consideration as data center construction expands rapidly around the world.