A pre-release version of the Cisco Modeling Labs GUI.
[Updated with a few new details from Cisco Live Milan. See bottom of page.]
Virtual Internet Routing Lab (VIRL, or “viral”) has been a subject of discussion in my network geek corner of the Internet since Cisco announced it last year. Between then and now, the name has changed. Apparently someone didn’t like having a “viral” product, so now it’s called Cisco Modeling Labs (CML, or “camel”). Right now, it looks like release will probably come early in the second quarter of 2014.
I have been testing (read “playing”) with a hosted CML server for a couple of months and would like to share some of what I have learned about it.
The system comes in two primary forms. One is a standalone VM that can be run on a desktop or laptop. The other is a corporate version whose image will run on ESXi or on bare metal. The corporate version may eventually support clustering, which could allow you to lab some impressively large topologies, but that’s something they are looking at for post-FCS. The system runs in a client/server configuration with a front-end client built on Eclipse.
The standalone VM flavor will be an inexpensive version for individuals (probably in the $100 range) that will support up to 15 Cisco VMs and up to 100 VMs total. It is a VMware image and will run on a laptop; for Mac users, it will run in VMware Fusion. It does not work with VirtualBox, and I presume it will not work with Parallels. Neither will be supported, that much is certain. This version actually runs in a client/server configuration, too; there just isn’t a separate computer for the server.
Under the hood, the system is built on Linux using OpenStack, some “middleware”, and multiple VMs. The demo server I have been using is some variation of the corporate version and is hosted at Cisco. This cloud-hosted flavor probably will not happen at corporate scale, but they know individuals may want it. When I spoke with the Cisco team they said they have plans for this, but it definitely won’t be an option at FCS.
So what devices will you be able to lab with, anyway? The demo environment I’m working with has IOS-XRv, IOSv (a virtual version of the traditional IOS, not IOU-based), CSR 1000v, and NX-OS using Titanium. Whether Titanium will be released at FCS is still up in the air. Each business unit makes its own decisions about including their products in CML, so we’ll have to wait and see. Cisco says there is a project to add the ASA, but it definitely won’t be ready at FCS. You can, however, drop in a Linux machine, and you can add third-party machines via OpenStack (the Grizzly release) and KVM. This is not functionality I have been able to test.
To connect the devices you will have the options of Ethernet interfaces and Ethernet interfaces. Any interface type you want, as long as it’s Ethernet. Sorry, no serial interfaces.
The system is essentially layer 3 only. There are no ASIC simulations and since all the cool L2 stuff is done in ASICs, there are no L2 features. It all uses a software-based forwarding plane. It can do 802.1Q tagging, but none of the fancy stuff like pseudowire, FabricPath, VPLS, and the like. [L2 is planned for future release, see update section.]
You will also somehow be able to tie this in to an external network, but I can’t test that, either.
I believe that IOSv can have up to 32 interfaces and IOS-XRv supports 124 interfaces, but I’m not certain I have those numbers correct. I can’t/don’t want to build a topology to test them.
Scale is technically only limited by memory, but on a laptop that’s not going to get you far. One setup I was told about was running on something like a C210 UCS chassis: 37 IOS-XRv nodes with over 2,000 tunnels in 60GB of RAM, using about 12% CPU.
Memory isn’t as much of an issue as you might initially think. VMs running the same image share identical memory pages, which helps with memory efficiency. In English, this essentially means that if you are running multiple copies of IOS-XRv, there’s really only one copy of the IOS-XRv code in RAM. Only the data structures for each instance add to your RAM footprint.
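To make the page-sharing effect concrete, here’s a rough back-of-envelope sketch. The image and per-instance sizes below are made-up illustrative numbers, not Cisco figures:

```python
# Rough sketch of how same-page merging changes the RAM math.
# Image and per-instance sizes are illustrative assumptions, not Cisco figures.

def total_ram_gb(instances, image_gb, per_instance_data_gb, shared=True):
    """Estimate RAM used by `instances` copies of the same VM image.

    With page sharing, the identical image pages are counted once;
    each instance only adds its own data structures.
    """
    if shared:
        return image_gb + instances * per_instance_data_gb
    return instances * (image_gb + per_instance_data_gb)

# e.g. 20 identical VMs, assuming a 2GB image and 1GB of unique data each
print(total_ram_gb(20, 2.0, 1.0, shared=True))   # 22.0 GB with sharing
print(total_ram_gb(20, 2.0, 1.0, shared=False))  # 60.0 GB without
```

The point of the sketch: the image size is paid once, so the marginal cost of each extra node is just its unique data.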
CPU allocation is a bigger issue, and there are tradeoffs among the different VMs. IOSv is CPU hungry because it thinks the CPU is dedicated to it, but it has a small memory footprint at around 300MB. IOS-XRv, by contrast, was designed for a more modern environment: it is very light on the CPU but uses more RAM. The CSR should be similar, since it was designed to be a VM from the beginning.
This is just a quick overview. I’m working on another post covering some of CML’s capabilities that really take it beyond being just a way to run virtual routers. That’s where CML starts to strut its stuff and become really interesting.
Lastly, if you happen to be in the greater Seattle area on Wednesday, February 26th, I’ll be speaking on CML and demoing the product for the Seattle Network Experts Meetup at the INE office in Bellevue at 17:30 PST.
[Update: new details from Cisco Live Milan]
- Other Cisco virtual appliances (beyond the ASA) may be available later. This would cover things like vWLC, vWAAS, etc. Still up to the business units.
- Titanium (NX-OS) will not be in v1. Hopefully v1.1.
- The OpenStack implementation is using KVM (which is default for OpenStack).
- The information presented says IOSv uses 0.5GB of RAM, while the CSR 1000v and IOS-XRv both need 3GB.
- The code for each of these is shared with the hardware versions; it’s recompiled for the different target environment. That means the same features and the same bugs, which is very good for using CML to proof-of-concept a design or changes.
- There are plans to deliver L2 functionality for both NX-OSv and IOSv.
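Those per-VM RAM figures make for an easy worst-case capacity check. The sketch below ignores page sharing entirely (so real numbers should be better), and the host sizes and reserved-RAM overhead are my assumptions, not Cisco’s:

```python
# Worst-case node count per host from the per-VM RAM figures above
# (0.5GB IOSv, 3GB CSR 1000v and IOS-XRv). Page sharing is ignored,
# and host sizes / reserved overhead are my own assumptions.

RAM_PER_VM_GB = {"IOSv": 0.5, "CSR1000v": 3.0, "IOS-XRv": 3.0}

def max_nodes(vm_type, host_ram_gb, reserved_gb=4.0):
    """Worst-case node count: usable RAM divided by per-VM RAM."""
    usable = host_ram_gb - reserved_gb  # headroom for the host OS and OpenStack
    return int(usable // RAM_PER_VM_GB[vm_type])

print(max_nodes("IOSv", 16))     # a 16GB laptop: 24 IOSv nodes, worst case
print(max_nodes("IOS-XRv", 64))  # a 64GB server: 20 IOS-XRv nodes, worst case
```

With page sharing in play, the real ceiling sits somewhere above these numbers, which squares with the 37-node IOS-XRv setup in 60GB mentioned earlier.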