Hyperscale Data Centers

"Hyperscale is the ability of an architecture to scale appropriately as increased demand is added to the system." (Wikipedia)

Flexibility and Scalability

These are two critical factors in today's economy, where organisations can grow exponentially in a very short period of time, from garage start-up to industry juggernaut with hundreds of thousands of subscribers and users.

How can you deliver a reliable service and maintain performance when you are subject to such rapid growth?

Going Hyperscale: the architecture of choice for cloud-based organisations such as Facebook, Amazon and Google, and also for building high-spec supercomputers.

Data Center Evolution

In response, data centers are evolving to provide a single, massively scalable computer architecture. This architecture is typically made up of small, individual servers, called nodes, that provide compute, storage and networking resources. These nodes are clustered together and managed as if they were a single entity, and are typically deployed on inexpensive, off-the-shelf servers.

To keep investment costs as low as possible, you start small and follow demand, building up and expanding your estate by adding new nodes to the cluster. Adding new nodes and resources is seamless: the Hyperscale software automatically re-balances and re-positions workloads within the expanded architecture, as the sketch below illustrates.
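One common way to keep that rebalancing cheap is consistent hashing: when a node joins, only a small fraction of workloads move to it. Here is a minimal Python sketch of the idea (the node names, workload keys and virtual-node count are invented for the example; real Hyperscale schedulers are considerably more sophisticated):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Maps workload keys onto cluster nodes so that adding a node
    only relocates a small fraction of the existing workloads."""

    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes  # virtual points per physical node, for even spread
        self.ring = []        # sorted list of (hash, node) points
        for node in nodes:
            self.add_node(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        # Each physical node occupies many points on the ring,
        # which smooths out the load distribution.
        for i in range(self.vnodes):
            self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    def node_for(self, key):
        # Walk clockwise to the first ring point at or after the key's hash.
        idx = bisect.bisect(self.ring, (self._hash(key),)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["node-1", "node-2", "node-3"])
workloads = [f"vm-{i}" for i in range(1000)]
before = {w: ring.node_for(w) for w in workloads}

ring.add_node("node-4")  # scale out by one node
moved = sum(before[w] != ring.node_for(w) for w in workloads)
print(f"{moved} of {len(workloads)} workloads moved")  # roughly a quarter
```

Only about 1/N of the workloads relocate when the Nth node joins, which is what makes "start small and grow" practical.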

Rack Disaggregation

Physical kit will also have to change to meet Hyperscale requirements. We are already talking about Hyperscale servers and Hyperscale networks embedding optical modules.

With this new approach to the rack, a logical architecture disaggregates and pools compute, storage and network resources, providing a means to create a shared, automated rack architecture that enables higher performance, lower cost and rapid deployment of services. Running a disaggregated environment will require agile orchestration of the hardware layer (e.g. OpenStack).
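To make the idea concrete, here is a minimal, hypothetical Python sketch of rack-level resource pooling (all class names, resource figures and server shapes are invented; in practice an orchestrator such as OpenStack would manage this, not a toy composer):

```python
from dataclasses import dataclass

@dataclass
class ResourcePool:
    """Rack-level pool of disaggregated resources (units are illustrative)."""
    cpu_cores: int
    ram_gb: int
    storage_tb: int

@dataclass
class LogicalServer:
    name: str
    cpu_cores: int
    ram_gb: int
    storage_tb: int

class RackComposer:
    """Carves logical servers out of a shared pool instead of binding
    workloads to fixed physical boxes."""

    def __init__(self, pool):
        self.pool = pool
        self.servers = []

    def compose(self, name, cpu_cores, ram_gb, storage_tb):
        # Refuse the request if any resource dimension is exhausted.
        if (cpu_cores > self.pool.cpu_cores or
                ram_gb > self.pool.ram_gb or
                storage_tb > self.pool.storage_tb):
            raise RuntimeError("pool exhausted; add nodes to the rack")
        self.pool.cpu_cores -= cpu_cores
        self.pool.ram_gb -= ram_gb
        self.pool.storage_tb -= storage_tb
        server = LogicalServer(name, cpu_cores, ram_gb, storage_tb)
        self.servers.append(server)
        return server

# Compose two differently shaped servers from the same physical pool.
composer = RackComposer(ResourcePool(cpu_cores=512, ram_gb=4096, storage_tb=200))
composer.compose("db-1", cpu_cores=64, ram_gb=1024, storage_tb=50)
composer.compose("web-1", cpu_cores=16, ram_gb=64, storage_tb=1)
print(composer.pool)  # remaining capacity available for the next request
```

The design point is that capacity belongs to the pool rather than to individual boxes, so differently shaped servers can be carved out of the same hardware on demand.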

From a connectivity standpoint, expect high-speed interconnections between components, with less copper and more prevalent optical and wireless technologies, along with stronger security, telemetry and monitoring.

What industries are affected?

Google and Facebook have already converted several data centers into Hyperscale data centers. Ad hoc solutions are offered by Dell and Ericsson as well.

How about the financial services and banking industry: how would it benefit from a Hyperscale architecture? There is a big difference in resource demand when you compare an internet banking user to a Pokémon player, for obvious reasons. However, let's not forget that most of the big financial institutions are now investing in and implementing big data solutions, and in that case it may be critical to have access to extra computational resources and to be able to expand your estate as and when needed.

And what will happen when a blockchain solution is implemented across such organisations? How will they cope with the computational resources needed? We all know that this type of solution is decentralized and requires CPU resources from several nodes.

Many open questions and only one certainty: big changes are upon us and they are coming really fast!