At a time when IT budgets remain relatively flat, IT organisations face a number of major shifts in networking that cannot be ignored. Keeping pace with this shifting landscape and the rapid drumbeat of innovation is certainly a challenge, but even for those without limitless budgets it need not prove insurmountable. IT organisations must be pragmatic, consider their technology choices carefully, protect existing investments through the use of open technologies, and prioritise projects in line with the business’ priorities.
Broadly speaking, there are three major trends currently shaping the networking industry: convergence, distributed networking and software-defined networking. These topics are not new; having been discussed for some time, they are now beginning to hit the mainstream, both in the maturity of the technology and in their place on the corporate agenda.
What characterises and differentiates the networking space in its ability to respond to these trends is that, unlike the server space, prices haven’t been driven down by a standardised low-cost architecture like the x86 platform, which opened up and diversified that market. Despite CIOs being more cost-conscious than ever before, the networking market has instead remained dominated by one major player that has set the pricing agenda in its own favour and successfully locked customers into its technology.
New and open technologies are however emerging, and with them choice is now becoming an option. As CIOs begin to look at developing their networking strategy to benefit from new and innovative approaches, fresh and lower-cost options to traditional architectures are starting to change the game.
Convergence
Convergence is not wholly a networking issue, but its impact on the way networks are managed and – just as importantly – who manages them, should not be underestimated. Previously IT functioned in silos, with server, storage and network admins going about their business relatively independently. When someone within the organisation wanted a new resource provisioned, working across these silos to make that happen could be a painful and unnecessarily cumbersome experience.
Virtualisation eased this issue to an extent through its ability to cut across domains and break up these traditional silos. However, this move has created a new role – the virtualisation admin – challenged with managing a plethora of different technologies from a multitude of vendors. They have built their virtualisation infrastructure using the traditional approach of selecting individual platforms (server, storage, and networking) on a best-of-breed basis and managing the virtual infrastructure using existing system management tools optimised for the individual platforms and for physical environments.
In the last several years, a new paradigm for x86 virtual computing infrastructure called converged infrastructure has gained initial acceptance. An ideal converged infrastructure is an integrated system of compute, storage and networking that is managed holistically by a single software tool and provides pools of virtualised resources that can be used to run applications, virtual desktop infrastructure and private clouds.
There are a few key features which true converged infrastructure solutions should offer:
* Modular infrastructure – modular servers, virtual networking, and intelligent, automated storage platforms connected with merged SAN and Ethernet fabrics.
* Converged management – unified infrastructure operations for infrastructure teams using simple and intuitive tools for repetitive, common tasks.
* Delivery models – flexible means for customers to deploy converged systems, ranging from fully pre-integrated systems to a do-it-yourself (DIY) approach.
* Full reference architectures – flexible blueprints to deploy enterprise applications, VDI, and private cloud solutions.
Where there is some disagreement is around who ‘owns’ the converged infrastructure. Most networking vendors understandably want to keep control in the hands of network admins, or at least come at the issue from a very networking-centric perspective. These solutions will allow a network admin to manage servers, but not necessarily vice versa.
A truly converged solution should offer both options – server admins should be able to manage the network, while network professionals should have access to the server infrastructure. Flexible tools at the switching layer can be configured to fit a networking-based domain or to put control in the hands of server admins. Again, this openness is key to delivering the flexibility required by the business.
Software defined networking
Although the technology is still in its infancy, software defined networking (SDN) is widely touted to revolutionise network infrastructures on the same scale as virtualisation in the server market. Traditional networking has been unable to offer the flexibility that networking managers require today – there is little to no ability for developers to modify or transform networking devices to provide deep integration between applications and the network infrastructure. Networking switches have traditionally combined the control logic and the forwarding function in a single device, which has meant that IT staff have had relatively little control over the flow of data across a network.
The emergence of SDN has provided IT administrators with a controller that is decoupled from the switches, from which they can harness and shape data traffic flows without having to manually configure individual pieces of networking hardware. Administrators can take control of entire networks of switches from this single control plane, providing a flexible virtual network architecture that can keep pace with modern IT demands. This offers a far more pragmatic approach to network management, which eliminates hours of manual routing and policy management whilst providing the ability to respond far more quickly to business demands.
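The controller-and-switch relationship described above can be sketched in a few lines of Python. This is an illustrative model only, not a real SDN framework: the `Controller` and `Switch` classes, their method names and the match/action rule format are all hypothetical, chosen to show how one central policy push reconfigures every device in the fabric.

```python
# Illustrative sketch of SDN's central control plane: a controller holds
# network-wide policy and pushes match/action flow rules to every switch,
# so no device is configured by hand. All names here are hypothetical.

class Switch:
    """A forwarding element whose behaviour is programmed remotely."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []  # ordered list of (match, action) rules

    def install_rule(self, match, action):
        self.flow_table.append((match, action))

    def forward(self, packet):
        # First matching rule wins; unmatched packets are dropped.
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"

class Controller:
    """Single control plane managing every switch in the network."""
    def __init__(self, switches):
        self.switches = switches

    def apply_policy(self, match, action):
        # One call reconfigures the entire fabric at once.
        for sw in self.switches:
            sw.install_rule(match, action)

# Steer all traffic destined for VLAN 100 out port 2 on every switch
# with a single policy push, instead of logging into three devices.
net = [Switch("leaf1"), Switch("leaf2"), Switch("spine1")]
ctrl = Controller(net)
ctrl.apply_policy({"dst_vlan": 100}, "out_port_2")

print(net[0].forward({"dst_vlan": 100}))  # out_port_2
print(net[2].forward({"dst_vlan": 200}))  # drop
```

The point of the sketch is the shape of the architecture: policy lives in one place, and the switches become simple rule-driven forwarders.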
SDN is relatively new as a concept, but the benefits are widely anticipated – networking managers expect a far smaller reliance on expensive proprietary networking switches and routers, since SDN can run on less expensive hardware. However, the main benefit is managerial flexibility, and Dell has been working with SDN providers to build the technology into its Force10 portfolio so that customers are armed with the right tools when SDN becomes a more mainstream reality.
From traditional to distributed architectures
Several developments have rendered the traditional centralised, monolithic chassis-switched network unfit for the modern business’ requirements. Firstly, the workforce has become extremely dispersed and mobile. Secondly, virtualisation and cloud computing have resulted in much higher server-to-server traffic flow than before. Finally, enterprises now have vastly larger volumes of data to process, store and analyse than was previously the case.
Monolithic networks are simply not architected to efficiently handle this new type of dispersed ‘horizontal’ traffic. Traditional networks are designed to handle linear ‘north-south’ traffic in and out of the data centre. Scaling up these networks is a costly and painful process – adding capacity to a single vendor’s chassis until all the slots are filled and then performing a potentially disruptive rip-and-replace forklift upgrade. Core switches are the heartbeat of the network, so enterprises have invariably ended up being locked in to their switch vendor long-term.
Alternative distributed approaches, which are more easily scalable, are now beginning to hit the market. Compared to the traditional design, the distributed core architecture can be scaled through low-cost Ethernet switches while the architecture improves reliability by eliminating the single network point of failure and providing better performance for any-to-any traffic flow.
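The scaling argument can be made concrete with some back-of-the-envelope arithmetic for a two-tier leaf-spine fabric, the most common distributed core design: host capacity grows by adding fixed-form-factor switches rather than by replacing a chassis. The port counts below are illustrative assumptions, not figures from any specific product.

```python
# Back-of-the-envelope sizing for a two-tier leaf-spine (distributed
# core) fabric built from low-cost fixed-form-factor Ethernet switches.
# Port counts are illustrative assumptions only.

def leaf_spine_capacity(leaf_ports, uplinks_per_leaf, spine_ports):
    """Host capacity and oversubscription of a simple leaf-spine fabric.

    leaf_ports       -- total ports on each leaf switch
    uplinks_per_leaf -- leaf ports reserved as uplinks to spines
    spine_ports      -- ports on each spine switch (one per leaf)
    """
    host_ports_per_leaf = leaf_ports - uplinks_per_leaf
    max_leaves = spine_ports  # each leaf consumes one port on every spine
    max_hosts = max_leaves * host_ports_per_leaf
    # Ratio of host-facing to uplink bandwidth (same port speed assumed).
    oversubscription = host_ports_per_leaf / uplinks_per_leaf
    return max_hosts, oversubscription

# 48-port leaves with 8 uplinks each, into 32-port spines:
hosts, ratio = leaf_spine_capacity(48, 8, 32)
print(hosts, ratio)  # 1280 hosts at 5:1 oversubscription
```

Because every leaf connects to every spine, any two hosts are at most two hops apart, traffic is spread across all spines rather than funnelled through one core chassis, and capacity grows incrementally by adding leaves until the spine ports run out.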
However, not all distributed networks are equal, and many networking vendors have taken proprietary approaches to building distributed networking equipment, locking customers in just as completely as the monolithic approach. The core may be distributed, but with proprietary standards, protocols and operating systems, the network must be managed as a complete entity, without any scope for interoperability. By contrast, an open standards approach to distributed core architectures allows for a much greater degree of flexibility, letting IT organisations mix and match components based on their needs and budgetary capabilities.
Be open to be successful
The pace of change in networking is exciting and is creating opportunity for transformation. For too long, end-users have been locked into technologies and cost cycles which have stifled innovation. The rise of open standards, frameworks and architectures, and a growing realisation that proprietary models do not have the customers’ best interests in mind, is opening the way to new solutions to old and new challenges alike. In the new world of networking, the future is bright, the future is open.