Traditional data center architecture employs individual tiers of compute (servers) and storage that are connected by a dedicated network. The network is typically a storage area network (SAN) or a network attached storage (NAS) system.
SAN/NAS-based architectures physically separate the compute tier from the storage tier in order to centralize storage resources for data center services. This approach improves on the inefficient utilization of direct-attached storage (DAS) designs, which cannot share data among multiple servers.
While SAN- and NAS-based architectures have been the main pillars of many enterprise data centers, they are struggling to keep up with the demands of modern, virtualized environments. They simply were not designed to handle the explosion of VMs and the resulting I/O demands of enterprises today.
Provisioning new VMs on SAN-based storage is often a complex and laborious process involving manual configuration of a centralized storage array to accommodate the new workloads. In addition, the dependency on an intermediate network to continuously move data between the compute and storage tiers adds unnecessary latency, adversely affecting performance.
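The latency penalty described above can be illustrated with a back-of-envelope sketch. All figures below are illustrative assumptions chosen for the example, not measurements of any particular product: the point is only that a SAN read pays the local media latency plus fabric and array-controller overhead, while a local (DAS/converged) read pays the media latency alone.

```python
# Back-of-envelope comparison of read latency for storage local to the
# server versus storage reached over a SAN fabric.
# All figures are illustrative assumptions, not measurements.

LOCAL_READ_US = 100        # assumed flash media read latency, microseconds
NETWORK_RTT_US = 500       # assumed SAN fabric round trip (switch hops, HBA queues)
ARRAY_CONTROLLER_US = 200  # assumed centralized array controller overhead


def das_read_latency_us() -> int:
    """Latency of a read served from storage local to the server."""
    return LOCAL_READ_US


def san_read_latency_us() -> int:
    """Latency of the same read served by a remote array over the fabric."""
    return LOCAL_READ_US + NETWORK_RTT_US + ARRAY_CONTROLLER_US


if __name__ == "__main__":
    print(f"Local (DAS/converged) read: {das_read_latency_us()} us")
    print(f"SAN read:                   {san_read_latency_us()} us")
```

Under these assumed numbers the networked read is several times slower than the local one, and every additional hop or queueing point in the fabric widens that gap.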
While SANs performed exceptionally well in the past, they are quickly becoming a handicap for the next generation of data centers.
Fortunately, leading data center architects have developed a better approach. Google, Amazon, Facebook, and Microsoft have embraced converged infrastructure designs for their cloud services: designs that collapse disparate compute and storage tiers into unified systems, completely eliminating the need for network-based storage systems such as SANs.
The infrastructure convergence pioneered by these innovative companies retains the benefits of shared storage while reducing the cost and complexity of the storage network. This greatly increases overall data center performance, scalability, and efficiency.