

Network function virtualization needs to evolve to overcome challenges

Although NFV can help organizations be more agile, the technology still has a long way to go before it can meet evolving data center needs. Read this author Q&A to find out more.

There's no arguing that network function virtualization is a complex technology; when disagreements do occur, they're more about how to reduce costs, simplify implementation and make day-to-day management easier. A VM-centric plan with OpenStack orchestration is one option, while cloud-native and serverless installations are others. To outline the challenges of this technology and break down how it's being used, we consulted experts Thomas D. Nadeau and Ken Gray, who co-authored the book, Network Function Virtualization.

What are the advantages and disadvantages of NFV technology?

Thomas D. Nadeau and Ken Gray: The potential advantages of network function virtualization (NFV) are in service agility and, as a result, customization. Simply put, the benefits of network-function-virtualization-related technologies show up in the creation and operations/assurance aspects of a service offering. One of the most important and transformative aspects of this is that a service can be "analytics informed and continually optimized." That is a marked change from the present service creation, operation and deletion/modification loop.
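To make "analytics informed and continually optimized" concrete, here is a minimal sketch of that kind of closed loop. The telemetry source, orchestrator hook and thresholds are all hypothetical placeholders invented for illustration; none corresponds to a real NFV API.

```python
import random
import time

TARGET_UTILIZATION = 0.70  # assumed per-instance load target

def get_load(instances: int) -> float:
    """Stub telemetry: simulated aggregate demand spread across instances."""
    demand = random.uniform(0.5, 3.0)  # simulated total load, in 'instances worth'
    return demand / instances

def scale_to(instances: int) -> None:
    """Stub orchestrator call: a real system would resize the service here."""
    print(f"scaling service to {instances} instance(s)")

def control_loop(instances: int = 1, iterations: int = 5) -> None:
    """Monitor -> analyze -> optimize, repeated continually."""
    for _ in range(iterations):
        load = get_load(instances)
        if load > TARGET_UTILIZATION:
            instances += 1                    # scale out
        elif load < TARGET_UTILIZATION / 2 and instances > 1:
            instances -= 1                    # scale in
        scale_to(instances)
        time.sleep(0.1)                       # stand-in for a sampling interval

if __name__ == "__main__":
    control_loop()
```

The point is the shape of the loop, not the policy: the service is never "done" -- telemetry keeps feeding decisions back into it.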

The disadvantages of network function virtualization arise from increased complexity and from security concerns, both for the operator and in the open source software required to run these systems. Virtualization brings new challenges in resource management, orchestration, operation and security that have to be mastered. These are compounded when you consider that the overall approach is a mixture of technologies and practices. Don't assume that lifting and shifting existing services means simply turning them into virtual appliances. It's a necessary start, but it adds cost and complexity without solving many long-term problems. You can declare victory by virtualizing a readily virtualized appliance like an IP Multimedia Subsystem (IMS) entity and still have missed the point. There's a long evolution ahead of us in which individual "service" concepts will be reimagined and reformed. Who would have imagined serverless application functions when network function virtualization was being designed by the European Telecommunications Standards Institute (ETSI)?

The early ETSI architecture itself invited confusion because of its inherent complexity and its shortfalls in compliance, compatibility and interoperability. The proliferation of open source projects also distracted from ETSI development. You could argue that the ETSI design -- a specification that lacks real specifications -- doesn't really foster any manageable verification of interoperability, which is what spawned the Open Platform for NFV project. As for open source project proliferation, the management and orchestration portion of the architecture provides a good example: competing projects, some with questionable governance or motives, vied for attention. Though the Linux Foundation recently managed to solidify some of this, the bottom line of creating, running and deploying code is still unrealized.


Against this backdrop, there's the continual danger of overbuilding projects in the open source community around network function virtualization, as has happened with large projects like OpenStack. We need modular, specialized, focused projects that interoperate for an evolving space. To be clear, we think everyone can agree that open source software is here to stay for the long term in the network function virtualization marketplace, but the specifics of how things are executed in the public community require continual vigilance and oversight.

What is the difference between the way containers affect networking and NFV versus the way VMs affect networking and NFV?

Nadeau and Gray: On a base level, containers could allow the elimination of the virtual switch construct -- though, of course, you can run containers in a VM. Containers create a slightly different network "attach point" than the VM, which has spawned a couple of container-specific networking products. With this new "attach point" come different security paradigms. Beyond that, at the highest level, containers provide an additional layer of abstraction for the developer, which has downstream effects on deployment and operation. Of course, you could run your application whole-cloth inside a container and not take advantage of the abstraction, in which case the differences could be described trivially in orchestration and resource management.
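To picture the difference in attach points: a VM typically presents a virtual NIC wired into a virtual switch on the host, while a container is usually attached by pushing one end of a veth pair into its network namespace. The following sketch shows that container-side plumbing on a Linux host by driving standard iproute2 commands (root required); the namespace and interface names are invented for the example, with a bare network namespace standing in for a real container.

```python
import subprocess

def run(cmd: str) -> None:
    """Run an iproute2 command, raising if it fails. Requires root."""
    subprocess.run(cmd.split(), check=True)

# A network namespace stands in for the container.
run("ip netns add demo-ctr")

# Create a veth pair: one end stays on the host, the other becomes
# the container's attach point (no virtual switch involved).
run("ip link add veth-host type veth peer name veth-ctr")
run("ip link set veth-ctr netns demo-ctr")

# Address and bring up both ends.
run("ip addr add 10.0.0.1/24 dev veth-host")
run("ip link set veth-host up")
run("ip netns exec demo-ctr ip addr add 10.0.0.2/24 dev veth-ctr")
run("ip netns exec demo-ctr ip link set veth-ctr up")
```

Container network plugins automate exactly this kind of plumbing, which is part of why the security paradigms differ: policy attaches to namespace boundaries and veth ends rather than to virtual switch ports.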

With those thoughts in mind, if the developer does embrace cloud-native design, you could see a sea change -- compared to VMs -- in the concepts around availability and in messaging patterns as the functionality of the monolith is broken apart; both are major differences in networking. You'll also see changes in the concept of service function chaining, which was necessary for VM-based service offerings -- assuming a 'service' is more than one VM -- but may be challenged by cloud-native and/or serverless instantiations (see the sketch below). You will start to see these in a take on the "SDN/NFV continuum." That is, we think you will see deployments where a mix of VMs, containers and microservices exist in a harmonious way. How they mix is an exercise for the specific domain or user.
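To see why chaining changes, consider that a VM-based chain steers traffic through a sequence of appliances, while a cloud-native design can collapse the same pipeline into composed functions. A minimal sketch, with toy functions standing in for real virtual network functions:

```python
from functools import reduce
from typing import Callable, Optional

Packet = dict  # toy packet: {"src": ..., "dst": ..., "payload": ...}
VNF = Callable[[Optional[Packet]], Optional[Packet]]

def firewall(pkt: Optional[Packet]) -> Optional[Packet]:
    """Drop packets from a blocked source (None means dropped)."""
    if pkt and pkt["src"] == "10.0.0.66":
        return None
    return pkt

def nat(pkt: Optional[Packet]) -> Optional[Packet]:
    """Rewrite the source address."""
    return {**pkt, "src": "192.0.2.1"} if pkt else None

def chain(*vnfs: VNF) -> VNF:
    """Compose functions into a service chain, applied left to right."""
    return lambda pkt: reduce(lambda p, f: f(p), vnfs, pkt)

service = chain(firewall, nat)
print(service({"src": "10.0.0.5", "dst": "198.51.100.7", "payload": b"hi"}))
```

In the VM world each of those functions would be a separate appliance with traffic steered between them; here the "chain" is just function composition, which is why VM-era chaining mechanisms may not carry over to cloud-native or serverless designs.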

How does NFV, in turn, affect VM performance?

Nadeau and Gray: We believe the answer to this question is so important that we devoted two chapters to it. The general answer is that performance hasn't been what was originally imagined -- beyond basic packet-in/packet-out speed tests -- due to a number of factors. At their root, many functions that are targets for virtualization are network I/O centric, which wasn't in the optimization sweet spot of generic compute platforms. Vendors and researchers have been exploring numerous architectural changes in the generic compute platform or lower-level virtualization software to provide performance boosts. Additionally, an after-market of smart NICs and other acceleration technology has emerged. These developments have challenged the basic 'cloud' economics underpinning the original network function virtualization use case -- including the cost of power and cooling.
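A rough way to see why network-I/O-centric functions strain generic compute: even trivial per-packet work in software burns the cycle budget fast, which is exactly the overhead that kernel-bypass drivers and smart NICs try to remove. A toy measurement with synthetic packets rather than a real NIC:

```python
import time

PKTS = 1_000_000
payload = b"\x00" * 64  # minimum-size-ish frame

def process(pkt: bytes) -> int:
    """Stand-in per-packet work: a trivial checksum."""
    return sum(pkt) & 0xFFFF

start = time.perf_counter()
for _ in range(PKTS):
    process(payload)
elapsed = time.perf_counter() - start

print(f"{PKTS / elapsed / 1e6:.2f} Mpps of 'processing' in pure Python")
# For scale: a 10 GbE port at line rate with 64-byte frames
# carries roughly 14.88 million packets per second.
```

The toy loop does no real I/O at all and still falls far short of line rate -- that gap is what the acceleration after-market exists to close.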

The other important aspect is to understand that you have to consider carefully how the question of performance is answered. It's complicated, so high-level, aspirational statements don't cut it once you get into the details, as we did in those chapters -- and 'hero' forwarding tests are misleading. Examining the return on the investment of virtualization is problematic. What we show is that it's not a straightforward win for the network operator. The answer depends on a number of factors, and at the time we wrote the book, nobody seemed to have really done the math taking all the factors into account -- or, at least, they're not showing their work.
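In that spirit, here is a deliberately simple, hypothetical version of the math. Every number below is an assumption invented for illustration, not data from the book; the point is only that the comparison has more terms than a 'hero' test suggests.

```python
# Hypothetical five-year cost comparison: dedicated appliance vs. a
# virtualized function on shared servers. All figures are illustrative.

YEARS = 5

def tco(capex: int, annual_power: int, annual_ops: int, integration: int = 0) -> int:
    """Total cost of ownership over the modeled period."""
    return capex + integration + YEARS * (annual_power + annual_ops)

appliance = tco(capex=50_000, annual_power=1_200, annual_ops=8_000)
virtualized = tco(
    capex=20_000,        # assumed share of generic servers
    annual_power=2_000,  # assumed worse perf/watt for I/O-heavy work
    annual_ops=12_000,   # assumed orchestration/open source 'carrying cost'
    integration=25_000,  # assumed one-time integration services
)

print(f"appliance:   ${appliance:,}")    # $96,000
print(f"virtualized: ${virtualized:,}")  # $115,000
# With these made-up inputs the virtualized option loses; change the
# inputs and it wins. Showing the work is the whole exercise.
```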

How has NFV software evolved over time to meet data center needs?

Nadeau and Gray: The short answer is that it hasn't, but we hope it will. It's still amazingly complex and costly, and it doesn't scale to the expectations of most operators. There are also numerous moving parts needed to deploy and operate the infrastructure. Again, there's a point where it seems relatively 'easy' -- a single virtual function at moderate scale in a centralized location. The problem comes with large-scale, distributed deployments and more complex chains of two or more functions. There, the moving parts required for the most common types of deployments are numerous, complex to operate and costly. Even if you manage to build your infrastructure with your own staff, that's a cost factor. The bottom line is that we still have a long way to go. But there is still hope. As we get to cloud-native implementations of services and use of the container concept, the distributed aspects seem more manageable and scale/efficiency seem better -- even though we should still be aware we're shifting the focus of networking, security and other aspects of the technology, as described earlier.

How do you think NFV technology will continue to evolve over the next year or so?

Nadeau and Gray: We need to look at what we learned from our recent lift-and-shift period. While it hasn't really brought costs down -- the costs and complexities of the VM-centric network function virtualization plan and its accompanying OpenStack orchestration are problematic -- we learned a few things that should have been obvious. After the huge integration service costs of deployments in this period, we have to go back and look at the roots of operations complexity in open source software and figure out a way to end that 'carrying cost.' There are also attendant security aspects that we can't afford to overlook.

Going forward, we need to embrace the fact that technology moves forward in waves of adoption around a point of innovation -- waves that defy over-description and prescriptive architecture, which is what was attempted, and that encourage modularity and loose coupling. If that's what we thought we were doing with the ETSI NFV plan and OpenStack, we need to revisit the definition of those concepts. Network function virtualization is actually already succeeding in the bloom of over-the-top services consumption, so it's just a question of by whom and how it's being realized. Ultimately, what is winning the day is a reimagination of the marketplace. Cloud-native designs deploying containers, serverless, the evolution of edge compute -- already underway -- and the new application/service designs that come with them are snapping existing orchestration, resource management and security paradigms. We'll have to evolve policy and identity concepts in network function virtualization and software-defined networking to cope. It'll take longer than a year to see the results, though. Network function virtualization is an evolution, and we've just left the starting line.

Read an excerpt from Network Function Virtualization by Thomas D. Nadeau and Ken Gray to learn more about this technology. The book can be purchased online from ScienceDirect.
