Above is a GitOps Blue/Green Deployment diagram, which is explained below among many other topics before transitioning to other Automation Testing related topics.
Infrastructure as Code including Blue / Green and Canary Testing examples
When working with Infrastructure as Code, one term to know is "immutability," which means being in a state that is not changeable. In DevOps it refers to maintaining systems entirely as code, with the goal of requiring no manual operations at all.
This segment will not focus on the mechanics of Infrastructure as Code (topics like Automation of Code / Idempotency / Code Storage and Transfer / Etc); instead I will explore the different ways to write Infrastructure as Code using Immutability for stability.
GitOps – Modern Infrastructure-as-Code
Embracing "Immutability" in your Codebase for the IaC Environment has two benefits:
- Knowing that the code in your Codebase describes what is running on the Bare Metal or off-site Cloud Servers
- You can use an Agile procedure for structured use of Version Control (Git) to keep things as clear and simple as possible
GitOps is also referred to as "Operations by Pull Request," as you would see in GitHub.
To read full documentation on GitHub about how Pull Requests work:
https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests
My GitHub currently only has a Master Branch; with GitOps, however, there would be several branches such as Master / Test / Development to pull / code / push against, like this:
The original Git Pull is exactly that, just "git pull …" from the Master Branch; from there it is Git Pull Requests up to the Test Branch, then finally to the Master Branch. Also not shown here is merging on every pull to avoid Merge / Branch Conflicts!
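As a rough command-line sketch of that flow (Branch names match the model above; the feature branch name is just an example):

git checkout development              # start from the Development Branch
git pull origin development           # sync before changing anything
git checkout -b my-feature            # example working branch
git commit -am "describe the change"  # commit work as you go
git push origin my-feature            # push, then open a Pull Request into Development
# from there: Pull Request Development -> Test, then Test -> Master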
The workflow can be broken down into three stages by Branch:
- Development – Developers make changes on their Branch, making pulls and commits in an Automated process called "gating," which queues changes for Automated Testing on each Commit until they pass the Tests, then hands them off to the Test Branch so they can be Tested more in depth
- Test – Changes are merged in via Git Pull Request, and the code is scrutinized by multiple Automated Tools in different Environments that mimic Production
- Production – The code and test results are reviewed, any additional testing is run, and then a Git Pull Request is made and merged into the Master Branch for Production
This brings the benefit of code that is as bug-free / flawless as possible, given the different levels of testing at each Branch (at least two levels of testing) before it is merged into the Production / Master Branch.
Once GitOps procedures like Workflows / Gating (CI/CD) / Automation Testing are in place, you can begin doing Enterprise-level Deployments / Models like the one shown here:
This allows the DevOps team to interact with the Load Balancer to gradually cut traffic over from the existing Production (Blue) Environment to the new Deployment (Green) Environment as users prove not to run into any bugs with the updated deployment.
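To make the cutover concrete, here is a minimal Python sketch of gradually shifting Load Balancer weights; the LoadBalancer class is a hypothetical stand-in for whatever API your real Load Balancer exposes:

import time

class LoadBalancer:
    # Hypothetical client; a real one would call the Load Balancer's API
    def set_weights(self, blue: int, green: int) -> None:
        print(f"Routing {blue}% of traffic to Blue, {green}% to Green")
    def error_rate(self, pool: str) -> float:
        return 0.0  # stand-in for real monitoring / health-check data

def blue_green_cutover(lb: LoadBalancer, step: int = 10, threshold: float = 0.01) -> bool:
    # Shift traffic toward Green in steps, rolling back to Blue on elevated errors
    for green_pct in range(step, 101, step):
        lb.set_weights(blue=100 - green_pct, green=green_pct)
        time.sleep(60)  # let real users exercise the Green pool before judging it
        if lb.error_rate("green") > threshold:
            lb.set_weights(blue=100, green=0)  # bugs found: immediate rollback
            return False
    return True  # Green now takes 100% of Production traffic

blue_green_cutover(LoadBalancer())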
Canary Testing
This has the same approach as the Blue / Green model within a single Enterprise, only instead of load-balancing random users, there are certain users who volunteer to get the latest releases pushed to them and to deal with any bugs found in the application.
So instead of Dual-Production environments and quietly cutting over traffic, you have a pool of test users who knowingly get experimental application / feature updates, and the app is deployed into production as more test users report that it is working.
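A sketch of how that pool of test users might be selected in code; the volunteer list and ramp percentage are made up for illustration:

import hashlib

CANARY_VOLUNTEERS = {"alice", "bob"}  # users who opted in to experimental releases
RAMP_PERCENT = 5                      # widen this as volunteers report things working

def release_channel(user: str) -> str:
    # Volunteers always get the canary build
    if user in CANARY_VOLUNTEERS:
        return "canary"
    # Deterministically ramp a slice of the remaining users into the canary
    bucket = int(hashlib.sha256(user.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < RAMP_PERCENT else "stable"

print(release_channel("alice"))    # canary
print(release_channel("charlie"))  # stable, unless charlie hashes into the ramp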
Switching gears to a different Topic – Automation Testing / Challenges of Testing
The purpose of Automation Testing is to bring the number of unknowns in the code or software as close to zero as possible. There are a variety of products for testing; here I will go over some of those methods and the challenges within them.
Testing Infrastructure and Challenges of Testing non-SDN Networks
Before discussing the hardships of testing our Automated Deployment, I'd like to talk about integrating Code Testing into the Codebase using Automation Tools like Ansible / Puppet / Chef, as these tools treat infrastructure as code.
Integrating a Unit-Testing framework into the Codebase will ensure tests run at every stage of GitOps, for example with every Push / Pull / Commit / Merge operation. This generally works best with a "Test Driven Development" Model, where testing actually drives the code forward to its end goal of reaching Production.
Using Unit-Testing tools like "pytest" in tandem with Hierarchical Automation and integrated into CI/CD, testing can be built into any code change, meaning changes must pass tests before they will be put into the Codebase.
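For example, a trivial pytest check that would gate a change (the hostname function is a made-up piece of the Codebase, just to show the pattern):

# test_naming.py - run with "pytest"; CI blocks the merge if any test fails
def device_hostname(site: str, role: str, index: int) -> str:
    return f"{site}-{role}-{index:02d}"

def test_device_hostname():
    assert device_hostname("nyc", "leaf", 1) == "nyc-leaf-01"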
Challenges of testing a Physical Network
Real-world networks are composed of individually configured hardware appliances, with little to no standardization of configurations, maintained as "discrete" pieces of equipment and software.
This means that making changes to the network requires time for configuration changes, risks misconfiguration on the network, and may mean downtime for the network; it is overall considered very high risk because it is difficult / error-prone / risky.
Networks are also changed in small ways that sometimes go undocumented, in a non-standardized way, in reaction to an outage or an end-user problem ticket fix. This can lead to less-secure network traffic being allowed in order to ensure reliability, and your application traffic may interact with a certain vendor device / appliance on the network using a certain service or protocol and somehow break the network. Catching this ahead of time is possible, but only with good (and time-consuming) network discovery across the entire Environment.
Testing SDN Networks with ease
Software-Defined Networks present much less trouble, as there are many solutions that Cisco (and others) have come up with to create a layer of Abstraction for Programmability and Network Functionality.
Some examples include:
- ACI (Application Centric Infrastructure) – A Data Center solution that runs on top of Cisco Nexus 9000 series devices and provides this Abstraction Layer via the Application Policy Infrastructure Controller (APIC)
- DNA Center – An Open and Extensible software-defined network architecture that easily integrates with Automated Testing
- REST API and SDKs – Integrations with Ansible / Puppet / Chef that perform testing
ACI works by converging on a "Model" of the current network in a YANG Data Model, which can work with Middleware to reconcile the running state harmoniously with the desired state after a software integration, in order to "Test Run" it on the network. This allows Engineers to work less with individual devices and more with models of the network (a rough sketch of that model comparison follows the list below).
This type of deployment provides:
- Immediate rollback if needed for a failure in the network upon deployment
- Portability from one Data Center to another that runs similar software
- Version Control / CI/CD / Tools to help manage the Codebase
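Here is the rough model-comparison sketch promised above; plain Python dictionaries stand in for the real YANG-modeled data, so this only illustrates the idea, not the APIC API:

desired = {"vlan10": "web", "vlan20": "db"}
running = {"vlan10": "web", "vlan30": "legacy"}

def model_diff(desired: dict, running: dict) -> dict:
    # What must be added, removed, or changed to converge the network on the model
    return {
        "add":    {k: desired[k] for k in desired.keys() - running.keys()},
        "remove": {k: running[k] for k in running.keys() - desired.keys()},
        "change": {k: (running[k], desired[k])
                   for k in desired.keys() & running.keys()
                   if desired[k] != running[k]},
    }

print(model_diff(desired, running))
# {'add': {'vlan20': 'db'}, 'remove': {'vlan30': 'legacy'}, 'change': {}}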
Although SDN Networks / Orchestration Tools cannot predict the final state of a deployment of new software into the network, they greatly reduce the possibility of an unexpected outcome.
pyATS as a Network Test Solution
pyATS is a Python-based network device test / validation solution, originally made for Cisco internal use but now available to the public, and partially open-source. pyATS does a pre-flight check on your changes before putting them into production, then continues to monitor them while in production to ensure everything remains running smoothly.
Python Automated Testing System = pyATS
Genie provides the APIs and libraries that drive and interact with network devices and perform the actual testing, so when you think of pyATS you should also think of Genie; however, the combination is generally just referred to simply as "pyATS".
pyATS key features:
- pyATS framework and libraries work within Python code
- It's modular, with components like AEtest (executes test scripts) / Easypy (allows multiple executions of scripts, logging, and a centralized point to inject changes); see the AEtest sketch after this list
- A CLI that enables rapid interrogation of live networks to extract facts and helps automate running scripts / forensics; this enables "no-code" debugging and correction of issues in network topologies created and maintained using these tools
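The AEtest sketch mentioned above; a minimal scaffold following the pattern in the pyATS docs, with the actual checks left as placeholders:

from pyats import aetest

class CommonSetup(aetest.CommonSetup):
    @aetest.subsection
    def connect_to_devices(self, testbed=None):
        # Connect to testbed devices here before any Testcase runs
        pass

class InterfaceHealth(aetest.Testcase):
    @aetest.test
    def no_interfaces_down(self):
        # Replace with a real check, e.g. parsed interface state
        assert True

if __name__ == "__main__":
    aetest.main()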
pyATS works extremely well in SDN / Cloud networks by providing huge libraries to pull from for Cisco and other vendor platforms, including REST APIs / ACI / Clouds.
pyATS can consume / parse / implement topologies described as JSON inside a YANG Data Model, from other sources including an Excel Spreadsheet.
It can also be used with the big 3 Automation platforms (Ansible / Puppet / Chef), either using those tools or being invoked by them, which will often require some kind of middleware (as ACI does) to allow them to define the network properly.
Attempt to Demo pyATS on my Ubuntu VM!
I start by setting up a Python Virtual Environment with all the prerequisite stuff installed like pip / git / venv, then jump into the virtual environment to install pyATS:
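Assuming the standard PyPI package, that install is just:

pip install "pyats[full]"   # the [full] extra pulls in the Genie libraries as well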
When running the sample script I was blown away by how robust it is!
That is absolutely incredible!
Though I would be sure to lab this in a Python Virtual Environment; setup guide here:
https://naysan.ca/2019/08/05/install-python-3-virtualenv-on-ubuntu/
Install Python 3, pip, and git on the Ubuntu machine as well; then you can simply go into the venv folder and type "source bin/activate" to start the Virtual Environment.
Once in the virtual environment, clone the DevNet Repo and cd into it as shown here:
git clone https://github.com/CiscoDevNet/pyats-sample-scripts.git
cd pyats-sample-scripts
This way you can actually view these types of scripts and the functions they perform. pyATS is very robust, however, so I would definitely understand a script thoroughly before running it, even on a VM you wouldn't want to accidentally kill with a pyATS script.
pyATS also has a "learn" command that extracts the entire network's device operational and protocol states and returns them in JSON-formatted files.
^^That seems like an exam day question, “pyats learn ..”
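With the Genie libraries installed, that looks something like the following (the feature names and output folder here are just examples):

pyats learn ospf interface --testbed-file testbed.yaml --output learned_state/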
To tell pyATS how to reach the network, you can create a YAML file that describes your devices and even their connections (driven by the ConnectionManager class), using something called a "Testbed," found here:
https://pubhub.devnetcloud.com/media/pyats/docs/topology/creation.html#testbed-file
A quick example along the lines of the one from that website is shown below:
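(This is a minimal sketch based on the documented Testbed format; the device name, address, and %ENV credential markup are stand-ins rather than the site's exact example.)

testbed:
  name: lab_testbed

devices:
  router1:
    os: iosxe
    platform: csr1000v
    type: router
    credentials:
      default:
        username: admin
        password: "%ENV{ROUTER1_PASSWORD}"   # pulled from the local machine, never hard-coded
    connections:
      cli:
        protocol: ssh                        # SSH, not Telnet, outside of a lab
        ip: 192.0.2.10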
Credentials (and any sensitive information that can be stored in a variable) should always be stored on the local machine, Telnet should never be used outside of a lab, and it is recommended to add a "platform" definition in the Testbed file to define the platform the target device is running on.
pyATS Testbeds can also be checked for compatibility / conformance to the schema with the command "pyats validate testbed testbed.yaml".
I will not be demo’ing Genie here for the sake of time / VM resources!
Some things to know about Genie: it is a higher-level library system that runs on top of pyATS, providing APIs for interacting with devices and a powerful CLI Tool for device management and interrogation of network devices.
For more info check out this DevNet Genie URL:
https://developer.cisco.com/docs/genie-docs/
Although it's not demoed here, I'd definitely recommend giving the Genie site a once-over to know the enhanced feature set it adds on top of pyATS for exam day!
Network Simulation – VIRL need to knows for exam day
VIRL provides a local CLI for system management, a REST interface for integration with automation, and a powerful UI that offers a complete graphical environment for building and configuring simulation topologies.
The UI comes with several sample topologies to get you started. Among these is a two-router IOS network simulation that can quickly be made active and explored. VIRL’s Design Perspective view lets you modify existing simulations (after stopping them) or compose new simulations by dragging, dropping, and connecting network entities, configuring them as you go.
The visualization has clickable elements that let you explore the configuration of entities and make changes via the WebUI or by connecting to network elements via console. You can also extract individual device configurations, or entire simulated network configs, as .virl files.
VIRL files
VIRL also enables you to define simulations as code, enabling two-way integration with other software platforms for network management and testing.
VIRL's native configuration format is called a .virl file, which is a human-readable YAML file. The .virl file contains a complete description of the IOS routers, their interface configurations and connections (plus other configuration information), credentials for accessing them, and other details. These files can be used to launch simulations via the VIRL REST API, and you can convert .virl files to and from "testbed" files for use with pyATS and Genie.
In the VIRL UI, you select a simulation, make VIRL read the device configurations, and it then composes a .virl file to represent them. VIRL offers to save the topology in a new file that you can then open in an editor for review.
The .virl file provides a method for determining if configuration drift has occurred on the simulation. A simple diff command can compare a newly-extracted .virl file with the original .virl file used to launch the simulation, and differences will be apparent.
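For example (file names here are hypothetical):

diff two-router-original.virl two-router-extracted.virl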
This technique (comparing a known-good configuration manifest with an extracted manifest describing current network state) helps debug real-world networks for which authoritative, complete pyATS topologies are available.
To connect to VIRL on a flat network, it is recommended to use their OpenVPN Client to connect to the VIRL Environment to avoid conflicting Subnetworks.
This seems like a good place to end, so I will end it here!
I’m going to continue playing with my AWS Cloud and resources I can utilize for labbing, happy studies, and almost Happy Monday!
Until next time!!!