Week 5 of the DevNet Grind – Automation Ecosystem, Full-Stack Automation, Distributed Applications, Software-Defined Infrastructure, and Core DevOps Principles!


To begin this week of Infrastructure Automation, let's look at the Automation Exchange

Pictured above is the front page of the "DevNet Automation Exchange", one of Cisco's website resources available to developers.


The DEVASC Blueprint not only requires candidates to know a bit about working with all of these systems, it also expects candidates to be familiar with resources like this one, which are covered in my list of Cisco Resources stickied to the top of this website for visibility and will be included in the Blueprint / Blog mapping.

Though there is an entire "Ecosystem" of items here, we will need to know the Cisco Infrastructure side of it.

From the above picture, I clicked on SD-WAN, which brings up a search engine for the "Automation Exchange Library" where you can choose criteria to browse user-submitted Automation scripts that you can grab and use within your own environment:


As seen, there are multiple other fields I won't demo here, but you can narrow your search by Scenario (Monitoring, Automation at Scale), Lifecycle (Day 0, Day N), and Domain (Collaboration, Security, Service Provider) to hone in on exactly what you need.

So I will review “Automation Exchange” as that will be the theme of this week!

Pictured above in the dropdown menu are 4 different options to know: Walk / Run / Fly / Other. These represent a progression of stages for identifying, configuring, and deploying changes into the environment, moving step by step from gathering information in the Walk phase to deploying changes in the Fly phase, with user-created use-case scenarios and next steps along the way.

Walk: Read-only Automation

When you run into a problem and you're wondering "What has changed in the environment?", these scripts may be able to help, as they perform read-only operations that gather data from your network. Before downloading a script you can also review its "Use cases" to see if it is right for you before running it.

This phase performs read-only operations like “GET” or “show …” throughout the Environment, so you can observe a change without necessarily reacting to it.
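As a minimal sketch of a Walk-phase script, the snippet below parses a few facts out of "show version"-style output. The sample text and field names are my own assumptions for illustration; in a real script the output would come from an SSH library like Netmiko or a read-only REST GET rather than a hard-coded string.

```python
import re

# Sample "show version"-style output; in a real Walk-phase script this text
# would come from an SSH session or a REST GET against the device.
SHOW_VERSION_OUTPUT = """\
Cisco IOS XE Software, Version 17.03.04a
Router uptime is 2 weeks, 3 days, 1 hour
System image file is "bootflash:packages.conf"
"""

def parse_show_version(output: str) -> dict:
    """Pull a few read-only facts out of 'show version' text."""
    facts = {}
    version = re.search(r"Version (\S+)", output)
    uptime = re.search(r"uptime is (.+)", output)
    if version:
        facts["version"] = version.group(1)
    if uptime:
        facts["uptime"] = uptime.group(1)
    return facts

facts = parse_show_version(SHOW_VERSION_OUTPUT)
print(facts)
```

Nothing here changes device state, which is exactly the point of the Walk phase: observe first, react later.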

Run: Activate policies / Provide Self-Service

This phase automates the creation of workflows, runs through Day 0 / Day 1 / Day N scenarios, and enables users to provision their own network updates from the material you create and share out to the community.
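A hedged sketch of what that self-service side might look like: a user requests a VLAN, the workflow validates it against an allowed range, and renders the config that would be pushed. The VLAN range and naming here are entirely hypothetical.

```python
# Hypothetical self-service sketch: a user submits a VLAN request and the
# workflow validates it, then renders the CLI config that would be pushed.
ALLOWED_VLAN_RANGE = range(100, 200)  # assumption: tenant-safe VLAN block

def render_vlan_config(vlan_id: int, name: str) -> list[str]:
    """Validate a self-service VLAN request and return CLI config lines."""
    if vlan_id not in ALLOWED_VLAN_RANGE:
        raise ValueError(f"VLAN {vlan_id} is outside the self-service range")
    return [f"vlan {vlan_id}", f" name {name}"]

config = render_vlan_config(110, "DEV-TEAM-A")
print("\n".join(config))
```

The guard rail is the important part: self-service only works when the workflow enforces what users are allowed to provision.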

Fly: Deploy Applications or Network Configurations using CI/CD

This might come into play if you are a "Private Cloud" provider with several tenants that need to be kept separate, or if you host a CUCM cluster and need to provision devices at scale; these types of workflows can be found in the "Fly" phase of the Automation Exchange.

Why the need for Automation? Why right now?

It's a good idea to think ahead when it comes to Automation and your Enterprise networks. Your current network may not support much in the way of Automation as it ages, but planning for a more open-source or "IT Ecosystem"-friendly future will cut costs dramatically over time by removing manual labor hours, human-error break/fix hours, and being a slave to one vendor's proprietary platform.

If a vendor you've come to know and count on for solid network products during your 3-5 year refresh cycle does not support integrating into the Automation Ecosystem, you are working with the wrong vendor, and it's better to start evaluating that well in advance of your next network refresh in case you need to consider a more automation-friendly vendor.

Though there are some inherent risks as well, such as developers no longer updating or patching an application or feature that is rarely used because it isn't worth the time to support. Your network will require flexibility to integrate with different open-source systems, and if you depend on "dependency-driven" software, trying to build an open-source stack around it may require cutting corners on security or other important features.

For this reason it may be worth considering Full-Stack Automation

This means the entire network and its servers, everything working together like an engine to drive your business forward, fueled by the power of Automation!

There are an absolute ton of benefits to Full-Stack Network Automation:

  • Speed
  • Repeatable operations
  • Working at scale with minimal risk
  • Infrastructure on demand (VPNs, Testing Platforms, Hardened Servers, Etc)
  • Scale on Demand (creating additional business resources almost immediately through the use of Virtualization Platforms and Container Technologies)
  • Automated problem handling such as Self-Healing, Event Monitors, and the ability to re-route workflows immediately when a problem is detected

That last one, "self-healing", is a really great concept where an application is aware of its own problem, such as a VM needing additional resources, and will provision those resources automatically rather than triggering a 3am phone call to the on-call IT guy!
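Here is a toy sketch of that self-healing idea, assuming a simple memory-utilization threshold. The metrics and the "provisioning" step are simulated; a real monitor would pull metrics from, and push changes through, a hypervisor or cloud-provider API.

```python
# Toy self-healing loop: if a VM's memory use crosses a threshold, the
# monitor "provisions" more memory instead of paging a human at 3am.
MEMORY_THRESHOLD = 0.90  # assumption: heal at 90% utilization

def heal(vm: dict) -> dict:
    """Check one VM's metrics and grow its memory if it is under pressure."""
    if vm["mem_used_gb"] / vm["mem_total_gb"] >= MEMORY_THRESHOLD:
        vm["mem_total_gb"] *= 2          # simulated provisioning action
        vm["healed"] = True
    else:
        vm["healed"] = False
    return vm

fleet = [
    {"name": "app-01", "mem_total_gb": 8, "mem_used_gb": 7.6},
    {"name": "app-02", "mem_total_gb": 8, "mem_used_gb": 3.0},
]
fleet = [heal(vm) for vm in fleet]
```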

Which transitions into a discussion around Software-Defined Infrastructure

Better known as "Cloud Computing" Infrastructure, this brings all the Infrastructure into a single domain, such as running multiple bare-metal hosts with enough resources to run virtual switches and virtual servers. This allows new network segments or environments to be brought into the network at a Developer's request, rather than purchasing hardware, warranties, and support contracts, staffing IT to manage the different pieces, suffering downtime from hardware or software failure; the list goes on!

This goes back to the "Platform on Demand" concept and Repeatable Operations (spinning up new servers automatically), but also adds "Platform Abstraction", where you are no longer held hostage by licensing for a certain platform because the applications run independent of any OS framework, patches, versions, or updates that might break them.

Some drawbacks to Cloud Computing: if you are not the Cloud Owner, there may be unforeseen quirks in the environment that need to be addressed; Access Control is critical, as leaked credentials are the keys to the entire kingdom; and cost as a metric can get out of hand if deploying an application properly requires extra resources, and thus extra cost.

This also brings with it the consideration of Distributed and Dynamic Applications

This simply means planning an architecture that lets your business's mission-critical applications run across multiple servers / hosts as "Microservices", rather than on one "Monolithic" server that can handle a heavy data load but, if it fails, takes that mission-critical application (possibly the cash cow of your business) down with it.

Most modern applications now run on this type of distributed and dynamic server architecture, as it brings Scalability, and with it Redundancy, Resource Distribution (so a natural disaster doesn't wipe out the single datacenter housing your application or service), and Infrastructure Automation tools like Kubernetes or Mesos that allow for Self-Healing and On-Demand Scaling of applications.
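The redundancy benefit can be sketched as a simple client-side failover: try each replica of a service until one answers. The replicas are plain Python callables here purely for illustration; in a real Kubernetes deployment the platform's Service load-balancing and health checks would handle this for you.

```python
# Failover sketch for a distributed service: the client tries each replica
# in turn and returns the first healthy answer.
def call_with_failover(replicas):
    """Return the first successful replica response, else raise."""
    last_error = None
    for replica in replicas:
        try:
            return replica()
        except ConnectionError as err:
            last_error = err      # note the failure, move to the next replica
    raise RuntimeError("all replicas failed") from last_error

def dead_replica():
    raise ConnectionError("replica unreachable")

def healthy_replica():
    return "200 OK"

result = call_with_failover([dead_replica, healthy_replica])
```

One dead replica is invisible to the caller; only when every replica fails does the application see an error, which is the whole argument for distributing over a monolith.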

The drawback is that this type of application delivery requires Automation Tools to properly handle the increased complexity it introduces; though the benefits outweigh the cons in the bigger picture, these are factors to consider.

These are just some examples and considerations of deploying Automation into your Infrastructure, as the benefits can easily outweigh the costs, but that all heavily depends on whether your environment or vendors are dependent on antiquated Infrastructure or services such as Novell Software or AS400 Terminals.

Which transitions into the question of who can run all this? DevOps Teams can!

DevOps is the bridge between Development and Operations, bringing two generally silo'd roles together so that the same person who is developing an application for Operations understands how it is used. Instead of two teams passing work back and forth, you have one team that defines and accomplishes a goal.

Historically, Development and Operations have held almost opposite roles and views within the company, much the same way IT Sales vs IT Support is seen.

Developers make the application that creates business and revenue, so they are viewed as a profit center; they care mainly about the software and how it performs; they are rarely "on call" for emergencies; their skill-set is very specific to developing the application; and Agile keeps them constantly active and in motion.

Operations supports, secures, and scales the Infrastructure the software runs in; they are considered a cost center, as keeping things working securely doesn't have an absolute dollar value; their skill-set is often much broader, to support the technologies that provide and secure the application; and on call is basically a must if the Infrastructure dies.

This reminds me of the IT Reseller model, where sales revenue is the profit center but customers often require IT Support for those solutions. A support contract might cover the cost of the support supplied, though sometimes the support a customer requires outweighs the contract price and starts eating into the sales profit, which is a fine line to walk.

Which is where DevOps comes into play, its like a team of sales people that go back to the office after closing a deal, and put their headset on to log into the IT Support Queue 🙂

The issues with separation of Development and Operations

When the teams are silo'd, one has to communicate what it needs for the development of an application (hardware, software, licensing), which can sometimes mean extremely long delays, as spending is planned quarterly or annually and the request may not be in the budget, or something already in the budget will have to be cut to make room.

It may also come down to Devs and Ops simply needing to share what resources are currently available for use, which can lead to one or the other being short on resources, and when it comes to “Profit” vs “Cost” you can guess how that is going to go.

Then came the idea, and very smart adoption, of baking roles from each side into the other's responsibilities: developers had to deploy and maintain their applications, while operations had to work within a virtual environment and treat Infrastructure as Code rather than as different configurations on multiple devices.

Some of the defining moments of the Evolution of fusing these roles together are:

  1. The creation of the SRE role, AKA "Site Reliability Engineer", the first defined role that created best practices for doing Operations with software methods. These include: shared responsibility, shared (embraced) risk, accepting failure as the new normal, using Automation to reduce "toil" in completing tasks, measuring metrics for both roles (whereas Ops would generally be the only ones measured against their work), and qualifying success by meeting Service-Level Objectives.
  2. Agile Infrastructure – This term was coined by Belgian developer Patrick Debois in a presentation on "Agile Infrastructure & Operations", where the two roles were still technically separate but he spoke about applying Development methods to solve Operations problems. This furthered enthusiasm in the IT community around Virtualization, Automation, and Version Control (Git), and around applying Agile methods to the development and maintenance of Infrastructure.
  3. Allspaw and Hammond – These two individuals gave a presentation in 2009 titled "10+ Deploys a Day: Dev and Ops Cooperation at Flickr" which cemented the idea of DevOps as the future of tying Development and Operations together. The topics that drove this presentation home were Automated Infrastructure, Shared Version Control, and Single-Step Builds and Deployments.

The third item went on to describe all things DevOps for the first time such as Automation / Collaboration / Responsibility Sharing / Trust / Transparency and just generally working as one cohesive team rather than separate roles on separate teams.

Core Principles of DevOps

At a high level view, this can be described in 6 separate bullet points:

  • Embrace Failure
  • Change is good
  • Collaborate actively
  • Empower Teams
  • Provide Feedback Systems
  • End-to-End Automation

This is very much modeled on the "Agile" methodology: fast-paced production where failure is okay, because if you never take risks you will never see rewards. It can be thought of as taking a step back and asking, "How can the DevOps team strive to be better while still meeting business needs?"

Instead of assigning blame and scapegoating individuals for "failure", the entire team takes a calculated risk together, which encourages new ideas that lead to better features, and the team learns from its mistakes together to better shape the next innovation.

Embracing Automation, Avoiding ‘Toil’, and Retaining Talent

DevOps teams rely heavily on one another to form one cohesive team, working as one single unit in both sharing the blame in risks, and sharing the reward of successes through developing skill-sets through collaboration and innovative thinking / taking risk.

For this reason, retaining talent is truly a core principle of a DevOps team. It's not hard to find a one-trick-pony Developer or "Ops" Engineer, but a hybrid, forward-thinking, product-driven mindset isn't a skill-set you find on a resume, so when you develop that talent within your team it's important to retain it through the principles listed above.

Embracing Automation is a major component as well, because it not only gives that forward-thinking hybrid DevOps Engineer more time to work on new things, it also avoids "toil": not toil in the sense of being slow and lazy, but the five-alarm fire-drill mode of dealing with critical failure issues by hand. Automation through CI/CD-triggered events that let workflows run themselves gives DevOps teams more time to step away from normal duties and be all hands on deck when it counts!

I will cap this post off here as next it gets into Automation Scripting and Tools

To summarize it all: if asked in question format which answers are DevOps Principles, look for the ones that best reflect team building, reacting as a team in fire-drill mode, treating failure as learning, and collaboration with shared responsibility.

Oh yeah – and Automation. Automate everything!

Until next time! 🙂
