How The Data Center Is Becoming Software-Defined

How pervasive is the concept of software-defined?

In mid-2012, VMware CTO Steve Herrod and others began to articulate the concept of the software-defined data center. Reception was split: some took it as vendor marketing, others as an observation about how the data center is evolving. At the time, I blogged about the basic concepts of the software-defined data center and followed up that initial post with another on storage challenges in the software-defined data center. Other posts addressed related topics, such as how cloud adoption contributes to the evolution of APIs.

Many months have passed since then, and months can be measured in dog years in high tech, so I would like to revisit the concept of software-defined as it pertains to storage, compute, and networking, and its status in 2013. I believe the software-defined data center has moved beyond concept, putting us on the cusp of a time when new architectures and product offerings will make it a reality.

Change Led by Compute Virtualization

All change has to start somewhere, and the software-defined change began with abstracting applications from physical servers. Compute virtualization allowed applications to be decoupled from specialized hardware, making it easier to move business processes onto industry-standard architectures. This fundamental change expanded the application universe: more applications could be created more quickly because code was written to open platforms. Concurrently, storage, networking, and security platforms adopted the same lower-cost, standardized multi-core x86 compute platforms as data center managers saw standardization yield better resource utilization. Standardization at the hardware level in turn spurred automation of configuration and management processes, because it freed software from dependencies on proprietary hardware. Before long, more and more functionality and intelligence moved from the physical plane into software, where functions are abstracted from physical constructs, automated, and presented through open interfaces.

Impact on the Data Center

There was a time when applications, hardware, and operating systems were tightly integrated, or coupled, systems optimized to deliver specific IT functions. While beneficial for certain needs, these systems were also rigid silos, not easy to change or to repurpose.

Moving functionality for common tasks to the software layer changes how components are designed, built, and integrated into the data center. For example:

  • Storage management moves away from microcode in an array-specific controller to software that abstracts system details into a control plane and exposes features through a well-defined service interface. In storage systems, the data plane moves and stores data to and from particular locations, while the control plane manages disk assignments for LUN and file-share sizes and implements functions such as snapshots, replication, disaster recovery, data migration, and encryption (a minimal sketch of such an interface follows this list).
  • Network configuration moves away from individual device upgrades to a service change request issued through consolidated, centralized interfaces in the control plane. In network devices, the data plane inspects packet headers and forwards packets based on a forwarding table, while the control plane calculates the forwarding tables and pushes updates to the devices (a second sketch, after the next paragraph, illustrates this split).
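
To make the control-plane/data-plane split concrete, here is a minimal Python sketch of the kind of well-defined service interface described above. Every class and method name is hypothetical and invented for illustration; real software-defined storage controllers expose far richer APIs, but the shape is the same: orchestration code talks to an abstract control plane rather than to array-specific microcode.

    # Illustrative sketch only: a hypothetical control-plane interface that hides
    # array-specific details behind a well-defined service API. All names are
    # invented for illustration and do not correspond to any real product.
    from abc import ABC, abstractmethod


    class StorageControlPlane(ABC):
        """Control plane: provisioning and data services are requested here,
        while the underlying data plane moves and stores the actual bits."""

        @abstractmethod
        def create_volume(self, name: str, size_gb: int) -> str:
            """Provision a LUN or file share and return its identifier."""

        @abstractmethod
        def create_snapshot(self, volume_id: str) -> str:
            """Take a point-in-time snapshot of a volume."""

        @abstractmethod
        def set_replication(self, volume_id: str, target_site: str) -> None:
            """Enable replication of a volume to another site."""


    class VendorArrayControlPlane(StorageControlPlane):
        """One possible backend: translates generic requests into calls against
        a specific array's management interface (details elided)."""

        def create_volume(self, name: str, size_gb: int) -> str:
            # ...array-specific provisioning calls would go here...
            return f"vol-{name}"

        def create_snapshot(self, volume_id: str) -> str:
            return f"snap-{volume_id}"

        def set_replication(self, volume_id: str, target_site: str) -> None:
            print(f"replicating {volume_id} to {target_site}")


    # Orchestration code works against the abstract interface, not the array:
    def provision_app_storage(cp: StorageControlPlane) -> None:
        vol = cp.create_volume("app-data", size_gb=500)
        cp.create_snapshot(vol)
        cp.set_replication(vol, target_site="dr-site")


    provision_app_storage(VendorArrayControlPlane())

Swapping in a different backend requires no change to the orchestration code, which is exactly the decoupling the control plane is meant to provide.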

The impact of these changes on the data center is fairly self-evident: separating management constructs from individual physical devices means data centers should be able to get work done with fewer tools and less staff. The greatest effect, however, is at the end-user level, in how business applications get deployed and consumed.
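
The same separation shows up in networking, as noted in the second bullet above. The sketch below, again with entirely hypothetical names, shows a central controller computing and pushing forwarding tables while the devices themselves only look up packet headers.

    # Illustrative sketch only: an SDN-style split between a controller that
    # computes forwarding tables and devices that merely look up headers.
    # All names are invented for illustration.
    from dataclasses import dataclass, field
    from typing import Dict


    @dataclass
    class Switch:
        """Data plane: forwards packets based on a table it did not compute."""
        name: str
        forwarding_table: Dict[str, str] = field(default_factory=dict)  # destination -> egress port

        def forward(self, dest: str) -> str:
            # Longest-prefix matching is elided; exact match keeps the sketch short.
            return self.forwarding_table.get(dest, "drop")


    class Controller:
        """Control plane: computes forwarding state centrally and pushes updates."""

        def __init__(self) -> None:
            self.switches: Dict[str, Switch] = {}

        def register(self, switch: Switch) -> None:
            self.switches[switch.name] = switch

        def push_route(self, switch_name: str, dest: str, port: str) -> None:
            # A single service change request updates devices through one
            # interface instead of box-by-box logins.
            self.switches[switch_name].forwarding_table[dest] = port


    controller = Controller()
    edge = Switch("edge-1")
    controller.register(edge)
    controller.push_route("edge-1", "10.0.0.0/24", "port-3")
    print(edge.forward("10.0.0.0/24"))  # -> port-3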

Effect on Applications

Compute virtualization made it possible to decouple applications from special-purpose, proprietary compute hardware, so applications were no longer constrained to particular servers or tied to dedicated single-purpose silos. This made it possible to get more out of existing servers and to scale applications more easily. It also exposed new challenges, though: with compute virtualization came the ability to create and move virtual machines more easily, which made it difficult to keep storage resources aligned with the virtual machines that depend on them.

Current State of Software-Defined

Software-defined first gained visibility at the Interop conference last May, where it was described as a layered approach spanning software-defined compute, networking, storage, and security. The term gained momentum in the months that followed, and in the weeks leading up to VMworld VMware leaned on it heavily, which made sense since its CTO was arguably the first to use it. Storage vendor Coraid was among the early wave to adopt the moniker, applying it to storage for marketing purposes. Others, such as Nutanix and Nexenta, have joined the fray in the storage segment, while Insieme (a Cisco spin-in company) and Nicira (acquired by VMware in a deal announced last July) suggest that the networking segment is slightly ahead of storage in aligning with compute. Industry analysts have all given their spin and made arguments for what constitutes the software-defined data center, including software-defined storage.

While storage startups have been quick to jump in, most of the major players, such as EMC, IBM, and HP, have been quiet until recently. Larger organizations tend to move slowly, but that does not mean they are asleep. Case in point: at a recent financial analyst meeting in New York, EMC joined VMware to talk about their shared vision of the software-defined data center, with EMC characterizing its plans for software-defined storage as "storage virtualization done right."

While it remains to be seen exactly what that means, it is clear that software-defined is now being applied in shipping products and is no longer just a concept. VMware and Microsoft (with Hyper-V) dominate compute, and Cisco and VMware (with Nicira) are chipping away at networking. With this activity, the time may be right for a major storage play. While security might seem to be the laggard in this mix, one could argue that it has always been software-defined. With software-defined compute, networking, and storage coming together, the software-defined data center may well be on the horizon and soon become a reality.

Author information

Mark Prahl

The post How The Data Center Is Becoming Software-Defined appeared first on EMC Emerging Tech Blog.

