Software-defined everything -- from SDN and NFV to software-defined storage -- has been in use in some form for decades. It's not as revolutionary as it seems.
Software-defined networking. Network functions virtualization. Virtual storage. These are the new buzzwords of the channel today. But are these trends actually as novel as they seem? Viewed from a historical perspective, not really.
SDN, NFV and scale-out storage -- which we can collectively call software-defined everything, or SDx if you like acronyms -- offer lots of benefits for data centers and the cloud. They abstract operations from underlying infrastructure, making workloads more portable, scalable and platform-agnostic. They also create new opportunities for building more secure infrastructure. And they can lower costs by letting you get next-generation functionality out of cheap commodity hardware.
It seems pretty certain that software-defined everything is the wave of the future. From Docker containers to carrier-grade SDN projects like ONOS, these technologies are progressing rapidly through the development and adoption stages and into production use. Some of them are not there yet, but they're on the way.
What's Old Is New Again
In many respects, though, software-defined everything is not really a new idea at all. Think, for example, about virtualization -- a technology that's almost as old as computers themselves, and which went mainstream in the data center world more than a decade ago.
Virtual servers -- here I mean the traditional kind that use hypervisors like VMware's or KVM, not containers -- abstract compute and storage from the host system, and they usually virtualize networking as well. In other words, they rely centrally on software-defined functionality, even though no one was thinking about it that way when virtualization became the next big thing in the 2000s.
Think, too, about VPNs, an even simpler technology, which has been in widespread use for a number of years. What is a VPN but a software-defined network? To be sure, traditional VPNs offered only a small slice of the functionality you can get from modern SDN infrastructure, but the core idea is the same.
Telecoms, too, have essentially been using SDN ever since they migrated from circuit-switched to packet-switched networks.
In a way, even plain old Network Address Translation, or NAT -- the thing that lets you connect dozens of devices in your house to the public Internet without having to assign a unique public IP address to each of them -- is also a form of software-defined networking.
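The core of NAT really is just software: a translation table that lets many private (address, port) pairs share one public address. Here's a minimal sketch of that idea -- the class name, port range, and addresses are all hypothetical, and a real NAT device handles checksums, timeouts, and protocols this toy ignores:

```python
# Illustrative sketch of NAT-style port mapping. Many private (ip, port)
# pairs share a single public IP by being assigned distinct public ports.
# Addresses and the starting port are arbitrary example values.

class NatTable:
    def __init__(self, public_ip, first_port=40000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.outbound = {}   # (private_ip, private_port) -> public_port
        self.inbound = {}    # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip, private_port):
        """Map an outgoing flow to a public (ip, port), reusing any
        existing mapping for the same private endpoint."""
        key = (private_ip, private_port)
        if key not in self.outbound:
            self.outbound[key] = self.next_port
            self.inbound[self.next_port] = key
            self.next_port += 1
        return self.public_ip, self.outbound[key]

    def translate_in(self, public_port):
        """Look up which private host an incoming reply belongs to."""
        return self.inbound.get(public_port)


nat = NatTable("203.0.113.5")
print(nat.translate_out("192.168.1.10", 51000))  # ('203.0.113.5', 40000)
print(nat.translate_out("192.168.1.11", 51000))  # ('203.0.113.5', 40001)
print(nat.translate_in(40001))                   # ('192.168.1.11', 51000)
```

Every device on the private side appears to the outside world as the single public IP -- which is exactly the kind of software-level abstraction of the physical network that SDN generalizes.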
As for software-defined storage, that has also long been in use. A local virtual storage volume, or a networked file system protocol like NFS, is just software-defined storage by another, older name.
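The same point can be made concretely: a "virtual volume" is, at bottom, just software presenting an ordinary file as if it were a block device. The sketch below is purely illustrative (the class and its block-addressing interface are invented for this example); real software-defined storage layers like LVM, NFS, or Ceph add replication, networking, and much more:

```python
# Illustrative sketch of a software-defined volume: a plain file on the
# host stands in for a block device, addressed by block number.
import os
import tempfile

class FileBackedVolume:
    def __init__(self, path, block_size=512, num_blocks=128):
        self.path = path
        self.block_size = block_size
        # Pre-allocate the backing file, like creating a virtual disk image.
        with open(path, "wb") as f:
            f.truncate(block_size * num_blocks)

    def write_block(self, block_no, data):
        """Write one block, zero-padded to the block size."""
        assert len(data) <= self.block_size
        with open(self.path, "r+b") as f:
            f.seek(block_no * self.block_size)
            f.write(data.ljust(self.block_size, b"\x00"))

    def read_block(self, block_no):
        """Read one full block."""
        with open(self.path, "rb") as f:
            f.seek(block_no * self.block_size)
            return f.read(self.block_size)


path = os.path.join(tempfile.mkdtemp(), "vol.img")
vol = FileBackedVolume(path)
vol.write_block(3, b"hello")
print(vol.read_block(3)[:5])  # b'hello'
```

Nothing about the "disk" here is physical -- it's defined entirely in software, which is the same trick today's storage platforms perform at data-center scale.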
What Makes Today's SDx Different
So, if software-defined infrastructure is not actually a very new idea, what is different about it today? Is it just a buzzword that, like the cloud (which was also not a new idea when it came into vogue, by the way; the Web and even old-school Unix terminals were also basically the cloud), has become trendy without meaning much of anything?
Well, no. Software-defined everything may not be different in kind from what has been happening already for decades. But it is different in scale and sophistication. Projects like OpenDaylight are enabling a new level of flexibility in the data center. They are making software-defined storage, networking and services not just complements to physical infrastructure, but complete replacements for it.
Still, we think it's worth bearing in mind that although software-defined everything may seem totally revolutionary, it's firmly rooted in the past. If you want to make the most of it, you should make sure you are using it to achieve new levels of functionality -- as opposed to just replacing your existing, first-generation SDx technology, like VPNs and virtual servers, with newer platforms.