Hyperscale: The Next Generation of Data Center Architecture
As enterprises continue to move toward cloud computing and colocated IT models to support growing IT demands and workloads, more and more focus is turning to hyperscale data centers. These facilities are typically big, sprawling, campus-level sites, often run by internet giants and cloud or colocation providers, though they can also be managed by large enterprises. And according to 451 Research, hyperscale is the fastest-growing data center segment.
Hyperscale vs. Traditional Data Centers
The differences between hyperscale and traditional data centers go beyond just size. Hyperscale facilities have distinct design and management requirements to support the complexity of new workloads and storage demands. Here are a few of the ways in which the two data center models differ:
- Servers: Many hyperscale operators, particularly the internet giants running hundreds of thousands of servers, construct what are called vanity-free servers built to their own specifications rather than purchasing name-brand servers, as CNET reports. These servers lack many of the components of traditional servers — such as displays and multiple interfaces — and are designed to be both high-speed and resilient. As a result, vanity-free servers can be up to 38 percent more efficient and up to 24 percent less expensive to build and run than traditional server hardware.
- Cooling: Hyperscale locations are starting to move toward more temperate and colder climates in an attempt to save on cooling costs. Inside, cooling systems seen in traditional data centers are replaced with custom air handlers, large metal boxes containing a blower and cooling elements that allow the servers to run at a higher ambient temperature.
- Application Portability: Hyperscale data centers run cloud applications that are highly portable, so if a server fails, workloads can easily be moved around. In a traditional data center, if a server running a critical application fails, it needs to be repaired before the application can effectively run.
- Power: As part of the commodification of subsystems under the Open Compute Project, power supply is being taken out of the servers and, in some instances, built directly into the individual custom racks. Alternatively, some operators prefer a centralized UPS solution to avoid some of the added maintenance of a distributed power architecture. In either case, hyperscale operators tend to favor lithium-ion batteries over traditional valve-regulated lead-acid (VRLA) batteries, because they pack a lot of energy into a much smaller footprint.
- Support: Because hyperscale environments use thousands of servers, staffing ratios can vary drastically from the average data center. In some instances, hyperscale operators employ dedicated teams just to maintain their servers. In an average data center, this granular level of support simply doesn’t exist due to the high cost and lack of available personnel.
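The application-portability point above can be sketched as a toy scheduler: when a server is marked failed, its workloads are immediately reassigned to healthy servers rather than waiting for a repair. This is a minimal illustration only — the class and workload names are hypothetical, and real hyperscale orchestration is far more involved.

```python
# Toy sketch of hyperscale-style workload portability: when a server
# fails, its workloads are rescheduled onto the surviving servers
# instead of waiting for the hardware to be repaired.
# All names here are hypothetical, for illustration only.

class Cluster:
    def __init__(self, servers):
        # map server name -> list of workloads placed on it
        self.placements = {s: [] for s in servers}

    def schedule(self, workload):
        # place the workload on the least-loaded healthy server
        target = min(self.placements, key=lambda s: len(self.placements[s]))
        self.placements[target].append(workload)
        return target

    def fail(self, server):
        # drain the failed server and reschedule its workloads elsewhere
        orphaned = self.placements.pop(server)
        for w in orphaned:
            self.schedule(w)
        return orphaned

cluster = Cluster(["s1", "s2", "s3"])
for app in ["web", "db", "cache", "queue"]:
    cluster.schedule(app)

cluster.fail("s1")
# every workload is still running somewhere on the surviving servers
survivors = [w for ws in cluster.placements.values() for w in ws]
assert sorted(survivors) == ["cache", "db", "queue", "web"]
```

In a traditional data center, by contrast, the equivalent of `fail("s1")` would leave those workloads offline until the server itself was repaired — which is the operational difference the bullet above describes.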
The Promise of a Hyperscale Data Center
Hyperscale provides a new approach to the way data centers are designed, operated and managed to handle the complexity of new workloads and the increasing demand on IT services. However, attempts at hyperscale standardization — bringing industry leaders together to collaborate on designs and discuss what has and hasn't worked for them — have not found much success, with many organizations instead creating their own custom designs and standards in silos. Realizing benefits like economies of scale, lower total cost of ownership and on-demand scalability will require tackling some of these big technology challenges, and doing so in a way that makes the environment more agile and efficient than today's mainstream data centers.