Week 5 of the DevNet Grind – Overview of Automation Tools / SDKs / CLIs, Procedural Automation, Idempotency, and review of Bare Metal Clouds!


Overview of Automation Scripting (Tools, Cloud, SDKs, Execution, Etc)

The most popular tool for basic scripting is Bash (the default shell on most Linux distributions and on macOS), which can be used for single commands, or for “piping” different utilities together into one command chain that can then grow into a full Bash script. This is also an excellent way to verify the functionality of a piece of code: execute it first at the Bash prompt, then add it to a larger script once execution produces the expected results.
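The same “pipe one utility into another” pattern can be driven from Python as well. Below is a minimal sketch (assuming a Unix-like system with `echo` and `wc` on the PATH) that replicates `echo "$text" | wc -w` using the standard-library `subprocess` module:

```python
import subprocess

def pipeline_word_count(text: str) -> int:
    """Replicate the shell pipeline `echo "$text" | wc -w` in Python."""
    # First process produces output on a pipe, just like the left side of `|`
    producer = subprocess.Popen(["echo", text], stdout=subprocess.PIPE)
    # Second process consumes that pipe as stdin, like the right side of `|`
    consumer = subprocess.run(
        ["wc", "-w"], stdin=producer.stdout, stdout=subprocess.PIPE, text=True
    )
    producer.stdout.close()
    producer.wait()
    return int(consumer.stdout.strip())
```

Once the pipeline behaves as expected interactively, the same logic can be folded into a larger automation script.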

Automation at scale, with added complexity, will require more sophisticated options such as Python or the SDKs (Software Development Kits) often provided by public cloud providers. These offer libraries for parsing more complex data sets such as JSON, provide CLI access (for example, the AWS CLI), and also process errors, handle asynchronous API calls, and add lots of other functionality.
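As a small illustration of the JSON-parsing side of this, Python’s standard-library `json` module can deserialize the kind of response a cloud CLI or SDK returns. The payload below is a made-up example, loosely shaped like an EC2-style instance listing:

```python
import json

# Hypothetical API response text, similar in shape to what a cloud
# SDK or CLI might return (the field names here are illustrative).
payload = '{"Instances": [{"InstanceId": "i-0abc", "State": {"Name": "running"}},' \
          ' {"InstanceId": "i-0def", "State": {"Name": "stopped"}}]}'

data = json.loads(payload)  # deserialize JSON text into Python dicts/lists
running = [i["InstanceId"] for i in data["Instances"]
           if i["State"]["Name"] == "running"]
```

Real SDKs typically hand you these structures pre-parsed, but the pattern of filtering nested dicts and lists is the same.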

A very basic but important concept is executing system-level functions by bringing the “os” module into the script, whether via “import os” or “from os import …” in Python, which allows Bash-type functions to be added into, say, a Python script for the OS it is executing on.
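A quick sketch of what that looks like in practice — a few `os` calls that stand in for common shell operations:

```python
import os  # could equally be `from os import getcwd, environ, listdir`

cwd = os.getcwd()                      # like the shell's `pwd`
home = os.environ.get("HOME", "/tmp")  # read an environment variable, like `echo $HOME`
listing = os.listdir(".")              # like the shell's `ls`
```

`os.system()` and the `subprocess` module go further and let a Python script run arbitrary shell commands directly.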

Procedural Automation / Developing a Procedure for Automation

Writing in Bash, Python, or other conventional languages produces an “Imperative Procedure,” where an ordered sequence of commands is used to achieve the goal, executing step by step (which may call out to multiple functions / classes / etc.) until it completes.

This makes “Procedural Automation” extremely powerful, yet simple to create and work with for a knowledgeable developer who is familiar with the system’s utilities / CLIs / SDKs as well as the “State” of the target system the Procedural Automation is being run against.
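A toy sketch of an imperative procedure (the step names here are illustrative, not from any real tool) — each step must run in order, and a later step depends on the state an earlier step created:

```python
# An imperative procedure: an ordered sequence of steps run one after
# another, each mutating shared state toward the goal.
def check_prerequisites(state: dict) -> None:
    state["checked"] = True

def apply_config(state: dict) -> None:
    # Fails if the earlier step did not run first -- order matters.
    assert state.get("checked"), "must check prerequisites before applying"
    state["configured"] = True

def verify(state: dict) -> bool:
    return state.get("configured", False)

state = {}
check_prerequisites(state)
apply_config(state)
ok = verify(state)
```

Reordering or skipping a step breaks the run, which is exactly the fragility the next section on developing a procedure tries to address.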

Developing a Procedure for Automation

An “Imperative Procedure” run against a target system once may produce the correct result, but Procedural Automation must also address what happens if it is run multiple times: what happens if it starts to pile up errors or write redundant configs?

To make scripts more flexible, safer, and easier to use multiple times we can:

  • Determine the environment’s Linux distro to pick the correct package manager, such as “apt” on Debian-based distros and “yum” on CentOS / RedHat distros
  • Determine if the target app is in the appropriate environment, stopping if it is not
  • Make backup copies of each config file and edit them with text-processing tools such as “awk” and “sed,” which make changes cleanly and precisely, rather than just appending configurations and hoping nothing breaks in the process
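The first bullet can be sketched as a small Python helper that parses `/etc/os-release` (the standard file most modern distros ship) and maps the distro ID to a package manager. The sample text below is illustrative:

```python
def pick_package_manager(os_release_text: str) -> str:
    """Map an /etc/os-release ID field to the matching package manager."""
    fields = dict(
        line.split("=", 1) for line in os_release_text.splitlines() if "=" in line
    )
    distro = fields.get("ID", "").strip('"')
    if distro in ("debian", "ubuntu"):
        return "apt"
    if distro in ("centos", "rhel", "fedora"):
        return "yum"
    raise RuntimeError(f"unsupported distro: {distro!r}")

# Example content resembling a real /etc/os-release file
sample = 'NAME="Ubuntu"\nID=ubuntu\nVERSION_ID="22.04"'
```

On a live system you would read the real file with `open("/etc/os-release").read()` and branch your install commands on the result.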

During Script Development you will want to accomplish some tasks along the way:

  • Discover / inventory / compile information about the target systems (and ensure scripts do this by default)
  • Encapsulate the complexity of safely installing applications (config file backups / changes / service restarts) into reusable forms, such as a sub-set of scripts built from this information for later use (along with other info)
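The discovery/inventory step might start as simply as gathering a few facts about the local system with the standard library before any changes are made (a minimal sketch; real tools collect far more):

```python
import platform
import socket

def inventory() -> dict:
    """Collect a minimal fact set about the local system before changing it."""
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),           # e.g. "Linux" or "Darwin"
        "release": platform.release(),     # kernel / OS release string
        "python": platform.python_version(),
    }

facts = inventory()
```

Having these facts up front lets later phases branch safely (for example, skipping a Linux-only step on macOS) instead of failing midway.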

To ensure the scripts are efficient and reusable:

  • Standardize the ordering and presentation of parameters / flags / errors
  • Create a code hierarchy which divides tasks logically and efficiently
  • Create high-level scripts for “Entire Deployments” and low-level scripts for “Deployment Phases”
  • Separate out deployment specific code from overall code, to make it as generic as possible
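For the first bullet, Python’s standard-library `argparse` is one way to standardize how every script presents its parameters and flags (the flag names below are illustrative):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Standardize flag names and ordering so every script looks the same."""
    p = argparse.ArgumentParser(description="deploy helper (illustrative)")
    p.add_argument("--env", choices=["dev", "prod"], default="dev",
                   help="target environment")
    p.add_argument("--dry-run", action="store_true",
                   help="show what would change without changing it")
    p.add_argument("target", help="host or group to deploy to")
    return p

# Parse an example argument list instead of sys.argv for demonstration
args = build_parser().parse_args(["--env", "prod", "web01"])
```

Sharing one `build_parser()`-style helper across a family of scripts keeps `--help` output, error messages, and flag conventions consistent.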

“Idempotency” – Reviewing what it is / its Core Principles

Since the goal of most scripts is reaching the “Desired State” of an environment or resource regardless of the starting conditions, “Idempotency” is the result of carefully written Procedural Scripts and Declarative Configuration Tools that BOTH examine the target environment before performing tasks on it, making ONLY the changes that need to be made.

That quality of software is referred to as “Idempotency” and is based on some core principles:

  • Look before you leap! – If it’s not broken, don’t fix it. As a Network Engineer working on an IOS device I tend to remove old or unused configs unless they are part of an upcoming project, but with code it’s better to leave it in place (breaking a CI/CD Pipeline, for example, could have a major impact on the customer)
  • Get a “Known State” before making changes – When working with an Infrastructure as Code environment, immutability (immutable code) might prevent a future release of an application from using existing code, in which case the automation code is changed to build a new application from scratch
  • Test for Idempotency – Be diligent about building automation that has no unknown side effects or unknown outcomes
  • All procedures must be Idempotent – Again, this is the mark of quality software, as one small error can have a major impact on the software as a whole
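A tiny, concrete sketch of idempotency: a helper that ensures a config line exists in a file, appending it only when absent. Running it a second time makes no change, so the file never fills with redundant configs:

```python
def ensure_line(path: str, line: str) -> bool:
    """Ensure `line` exists in the file; return True only if a change was made."""
    try:
        with open(path) as f:
            # Look before you leap: already in the desired state? Do nothing.
            if line in (existing.rstrip("\n") for existing in f):
                return False
    except FileNotFoundError:
        pass  # file doesn't exist yet; we'll create it below
    with open(path, "a") as f:
        f.write(line + "\n")
    return True
```

Contrast this with a naive `echo "..." >> file` append, which duplicates the line on every run — the difference between the two is exactly what the principles above describe.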

It’s a very easy concept with some well-defined principles that should guide any project going into a production (or really any) environment: you want to bring the unknown factors, in both the target environment and your own code, as close to zero as possible. That is the definition of quality software / Idempotency in Automation.

To configure remote systems, you need secure access to deliver scripts to them and execute them!

There are many ways to deliver code to production environments, and I wanted to list them out in bullet-point fashion here as there are many secure options:

  • Using “scp” (Secure Copy Protocol) to push the automation scripts from the local machine to a remote machine, then using “ssh” to remotely execute the scripts
  • “Piping” scripts using “cat | ssh” to execute them in the proper sequence with any additional commands needed over SSH, thus executing and returning output results all in a single command to the remote environment
  • Using SFTP between local and remote sites to securely transfer the data, set appropriate security permissions / roles, and execute the code
  • Storing scripts on a web server so that a remote machine can retrieve them with “wget” or “curl,” or using a service such as GitHub / public cloud repositories coupled with Git to clone the repository and execute the scripts it contains
  • Using new or existing RMMs or remote operation tools (like VNC) to transmit or copy the scripts to the remote target and execute them through it
  • If the target device is provisioned to a web service (AWS / Azure / etc.), using the integrated web GUIs or CLI tools between the platforms to inject and run code from one environment in the other
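The first bullet (scp push, then ssh execute) can be sketched from Python by building the command argument lists; in a real script you would hand each list to `subprocess.run()`. The user, host, and paths below are placeholders:

```python
def scp_push_cmd(script: str, user: str, host: str, dest: str) -> list:
    """Build the argv list to copy a local script to a remote host over SCP."""
    return ["scp", script, f"{user}@{host}:{dest}"]

def ssh_exec_cmd(user: str, host: str, remote_script: str) -> list:
    """Build the argv list to execute the pushed script remotely over SSH."""
    return ["ssh", f"{user}@{host}", f"bash {remote_script}"]

push = scp_push_cmd("deploy.sh", "admin", "10.0.0.5", "/tmp/deploy.sh")
run = ssh_exec_cmd("admin", "10.0.0.5", "/tmp/deploy.sh")
```

Building argv lists (rather than interpolating into one shell string) avoids quoting bugs and keeps the transfer and execution steps easy to log and test.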

There is no one-size-fits-all secure method, as every environment has unique factors (maybe SFTP is already in use by another application on the network, for example), so you should always consider the target environment when planning code transfer / execution for remote scripted automation.

Cloud Automation / Cloud Automation Tool-Kits / SDKs / CLIs

As discussed previously, IaaS environments are usually targets for automation integration, for all the reasons previously described such as resources on demand or virtualized hosts spun up as needed — this type of environment is built for automation.

Public cloud providers have their own platform-specific deployment tools such as “Ansible Dynamic Inventory for AWS EC2 (Elastic Compute Cloud)” and OpenStack’s equivalents; as shown in the Cloud Demo post I created just before this one, these environments are nearly identical in function but different in branding / name.

Cloud resources can also be managed with Bash and Python scripting (among many other choices) and helped along with both CLIs and SDKs:

  • CLIs and SDKs in cloud computing roll together API feature sets, infrastructure entities, control-plane functions, and shell / Bash functions; these are all present in different formats throughout cloud management dashboards and environments (IaaS)
  • Command-line tools and Python’s parsers for JSON (built into the standard library) and YAML (via third-party libraries) for de-serialization come well presented, well documented (sometimes even with video training, as shown in my Cloud post), and in an easy-to-use format

SDKs and CLIs are generally platform specific, with public cloud environments having practically every type of tool and Software Development Kit (SDK) built in. An SDK can be thought of like the tool-box a handyman brings along for a project such as building a wall or installing a window — and in that analogy, a cloud is the handyman walking into an auto shop with every power tool / drill bit / screw / nut / bolt they could possibly need hanging on the walls around them, ready for use.

While I will review public cloud tools at length soon, I first want to review Bare Metal Clouds.

There are also “Bare Metal” clouds, like an ESXi host or a vendor-specific standalone hardware appliance that hosts a “cloud” environment outside of the AWS and Azure public clouds; these are found in many private clouds or enterprise data centers that the business hosts itself.

Bare Metal “Clouds” are fading out of existence rapidly as they lack fault tolerance or redundancy. These can be labbed through the Cisco Sandbox website (in my stickied post of Development URLs), but I don’t intend to sandbox-lab them, so I wanted to at least touch on them here for exam-day purposes — what is out there and what it is:

  • Cisco UCS (Unified Computing System) – There are different flavors such as HyperFlex, UCS Manager, and Cisco ‘Intersight’ (Infrastructure Management System). The Cisco Intersight RESTful API is an OpenAPI-compatible API that can be interrogated with Swagger and other OpenAPI tools, and its SDKs can be used within / for integration with other environments – Cisco provides a wide range of SDKs for Intersight, including ones for Ansible / PowerShell / Python libraries
  • VMware – Datacenter CLI is VMware’s main command-line interface that runs operations against both virtualized ESXi hosts and the AWS public cloud; it is written in Python and runs on either Linux or macOS. VMware vCenter has tons of tools for different aspects of host management, including vSphere CLI / PowerCLI (PowerShell) / vRealize Ops Mgr / vSAN / NSX-T / VMware Site Recovery Mgr / VMware HCX / VMware Horizon – VMware also has a robust set of SDKs written in Python for management of vSphere Automation / vCloud Suite / vSAN, among others
  • OpenStack – The “OpenStack Project” provides OSC (OpenStack Client), also written in Python, which allows access to and management of OpenStack Compute / Identity / Object Storage / Block Storage / Images; installing this client installs the OpenStack Python SDK, enabling a wide array of OpenStack commands in Python

Many other tools are not mentioned here for the sake of this being enough of a WALL OF TEXT on bare metal hypervisors. I don’t imagine I will write in-depth articles on them, but rather just lab them in either Cisco Learning Labs or Sandboxes — again, those links are in the Big List of Development Links post (and you need to know those sites for DEVASC exam purposes), so I will end that here.

With that I will end this topic here, as Public Cloud Automation will be a huge post!

I didn’t actually realize that this course would go into depth on public cloud interfaces before signing up, but now that I have signed up and the next section covers the public cloud tools, I plan to cover that one in major depth, at least for exam-day purposes if not fully labbing them to some extent — though I have limited time currently to get through material, as there is still weekly homework for my DevNet class.

Until Next Time!
