
January 19, 2017 by Evan Parker

The lab is dead, long live the (new) lab!

During my days as a network engineer, I would spend weeks that turned into months, then years, in service providers’ labs getting products certified. I knew there had to be a better way to introduce new products into service providers’ networks, but the solution was not apparent at the time. Here are just a few of the many reasons why system certification and OSS integration take so long:

  • Early availability of new products. Even though the specification for the northbound interface used for OSS integration is available months ahead of the physical product, OSS developers aren’t comfortable integrating against a written document alone. They need to validate their code against a real API that responds to OSS queries.
  • Access is limited. There are always more people who need access to the systems under test than there are systems. OSS developers need the most hands-on time because of their long development and test cycles. Network engineers must test and evaluate functionality as well as validate interoperability. Field techs and NOC personnel must be trained on how to install and operate the new systems. Time-sharing the same systems prolongs the test and integration timeline and makes remote work nearly impossible.
  • Backend scalability testing. Functional integration testing with OSS and orchestration can be done with a small handful of lab systems, but scalability testing cannot. How can you be sure your OSS will be able to manage 10,000 instances of a system when you only test against five? You can’t. The best you can do is test with a simulator, exercising the configuration interfaces but not the control and data planes.
  • Labs cost too much. Power, space, cooling, equipment, and staff drive ongoing OPEX. At a time when new subscriber growth rates have slowed, ARPUs are under increasing pressure, and other functions in the business require incremental investment, maintaining a lab can be a significant expense.
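To make the simulator idea in the scalability bullet concrete, here is a minimal Python sketch of the approach: stand up thousands of in-process simulated devices, each answering configuration reads and writes the way a real system’s northbound interface would, so an OSS harness can be exercised at a scale no physical lab could match. All class and method names here are hypothetical illustrations, not any real vendor API.

```python
# Hypothetical sketch: simulate many device "northbound" configuration
# endpoints in-process so an OSS test harness can exercise them at scale.

class SimulatedDevice:
    """Responds to configuration queries like a real system's northbound API."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.config = {}

    def set_config(self, key, value):
        # Accept a configuration write, as the physical system would.
        self.config[key] = value
        return {"device": self.device_id, "status": "ok"}

    def get_config(self, key):
        # Answer a configuration read; OSS code can validate the response shape.
        return {"device": self.device_id, "key": key,
                "value": self.config.get(key)}


def build_fleet(count):
    """Stand up `count` simulated devices -- far more than a lab could hold."""
    return [SimulatedDevice(f"dev-{n}") for n in range(count)]


if __name__ == "__main__":
    fleet = build_fleet(10_000)
    for dev in fleet:
        dev.set_config("mgmt_vlan", 100)
    print(len(fleet), fleet[0].get_config("mgmt_vlan")["value"])
```

Note the caveat from the bullet still applies: a sketch like this exercises only the configuration interface, not the control and data planes.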

To be clear, you still need labs with physical hardware for certain things: emissions testing, throughput capacity testing, physical interface compatibility, and much more. But in this day and age of virtual everything, shouldn’t we leverage virtualization more for things like functional testing, network simulation, and OSS scalability testing? At Calix, we think you should. More on that in a future post.