Week 1 of the Cisco DevNet Grind – SDLC Methods, Design Patterns, GIT, Coding Basics, Code review / Unit Testing / Integration, TDD, and tons of info!


Beginning with this Class I will be using a much more “scribbled notes” format!

I started out trying to use a pen and notepad, but I thought to myself: why in the world would I not type out my notes in my study blog, even if they do look like a notebook full of quick notes stuck into a blog – so I’ve created a category specifically for this course’s notes!

These may be a bit scattered, but they will probably be the best notes I can provide in terms of preparing for the DevNet exam, and I will still be writing up other posts in my free time on other technologies I need to learn for job functions, like the Meraki API / Python.

Week 1 – Setting up your local “Development PC” / Knowing your resources!

This is a core skill that sounds kind of silly: knowing all the resources provided by Cisco, like free videos / Learning Labs / Sandbox (test code) / Code Exchange / Network Automation / etc, is important so you know how to find and build the software or code needed for your specific use case.

Setting up the Lab PC is arguably just as important. I am not entirely sure how many remote tools are provided to talk to the Sandbox environment to test code (I know AnyConnect VPN is used to connect to your Sandbox), but I’m not sure about tools to talk to the environments themselves like SSH / Visual Studio Code / Postman / etc.

I really need to start ripping through Part 2 though, so I’m going to shut up and start ripping through that here!

Week 1 – Software Development and Design as shown in the top graphic!

Software Development Lifecycle

Requirements and Analysis:

Software Development per the SDLC (Software Development Life Cycle) is not just creating code / software, but rather gathering requirements, creating a Proof of Concept, testing, and fixing bugs.

Software Development Life Cycle (SDLC) has 6 elements to it:

  1. Requirements and Analysis
  2. Design
  3. Implementation
  4. Testing
  5. Deployment
  6. Maintenance

This is historically the first agreed-upon Software Development Methodology, called “Waterfall”, which I will note in more detail below – these steps are a high-level view of the Waterfall Software Dev Methodology.

The idea was essentially that by following this cycle to forecast project time / cost / resources / timelines, it would be easier to manage expectations, thus avoiding mistakes and conflicts during the deployment – hence the “Waterfall Method” of SD.

Of course in IT things rarely if ever go according to plan, and so the “Agile” model was born into SD Methodologies, allowing for faster and more flexible software development / deployment.

These models are more of a common-sense abstraction than a strict recipe: steps can be applied one by one, iterated, reversed, repeated, or even folded into one another, which I will touch on coming up.

The following is a “Classical” model of Software Development / Deployment, though note that Agile is worked into portions of this, noting where it differs as a more dynamic way of developing and delivering software features / bug fixes / updates.

Requirements and Analysis

Requirements should be gathered from the potential customers and stakeholders (Users, IT Support Teams, Managers, etc); some questions to ask are:

  • Who are the Stakeholders?
  • What are the challenges both globally and in terms of the software to be built?
  • How are the challenges being met now? What is the level of Technical Expertise? What solutions have already been tried? What does the new software need to displace / replace?
  • What are the Organizational / Cultural restraints? Tech Expertise? Acceptance of Risk? Tolerance for Change?

Once these are defined we can then look at more concrete issues, like determining where / in what context the new software will live and thrive, for example:

  • What is the Org’s current Infrastructure / Application Ecology / Dev Culture like? For example, are they 100% Windows / Mac PCs? RDS users only? Do all apps rely on Java?
  • What is the current IT Roadmap, Headcount, and Role breakdown? Where is the Org in terms of mobile / cloud / web / DevOps testing, monitoring, continuous delivery, and building / update / operating software?

Then we can begin exploring close-in requirements and focus on initial desired features / outcomes and expected User Experience such as:

  • What features should the software have?
  • How many users will the software need to support?
  • What should the user experience be like on each platform that requires support, such as Desktop Applications / Web Applications / Mobile Applications?
  • What other software does it need to integrate with?
  • How will the Software need to scale?

Then the focus can shift to Architectural Assessments for building the software itself, such as considering the Front End (what users interact with) and the Backend (data processing, storage, streaming), along with points of integration with other Applications / Services, and Lifecycle Management needs such as backups, non-disruptive updates, scaling, and any other tasks needed to continue supporting the software.

After all these questions are answered, a final assessment must be done of:

  • Is it possible to develop software according to requirements, in budget, on time?
  • Are there risks to development schedule, and if so what are they?
  • How will the software be tested?
  • When and how will the software be delivered?

After all of this is completed, an SRS or Software Requirement Specification is written, stating scope / requirements / etc for Stakeholders to review and approve.

More modern methods assume that requirement-gathering will be an iterative, ongoing process that grows over time, which is where the “Agile” Method comes in: an MVP or “Minimum Viable Product” can be used as the guide for delivering initial features and growing the software from there moving forward.

SRS (Software Requirement Specification) = Classic SDLC / Waterfall, MVP (Minimum Viable Product) = Agile


Design

Design is largely based upon the initial analysis of the SRS that has been provided to Developers, who may build Prototypes / Proofs of Concept / best-build design options for the software.

At the conclusion of the Design Phase the team “classically” will create a High Level Design (HLD) and Low Level Design (LLD). The HLD provides the 10,000-foot overview of the proposed software, written in non-technical language to describe the general architecture / components / relationships and any other significant information.

The LLD will be the technical nuts and bolts of the HLD, such as the architecture broken down into its individual components, the protocols used for communication, and an enumeration of classes and other aspects of the design at a very fine grain.

Modern Methods like Agile tend to document much less at this stage, and work directly toward design and delivery based on the MVP specifications.


Implementation

This phase typically takes the HLD and LLD as input.

Agile Dev Teams will do some initial whiteboarding before coding begins, as the MVP provides a brief and simplified set of tasks, and Agile teams communicate often.

The Implementation phase is often called the coding or development phase: developers take the design specs and write code according to the design, and all components / modules are built during this phase, which makes it the longest phase of the life cycle (different modules / components / features are usually assigned to various developers on the team to load-balance the work).

During this phase Testing Engineers are “classically” writing a test plan. More modern methods tend to stress testing earlier in the process, such as Test-Driven Development (TDD), which builds a test framework for testing before any code is ever written.
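As a quick hedged sketch of what TDD looks like in practice (the function and its test here are entirely made up for illustration, not from the course): the test is written first, fails because no implementation exists yet, and only then is just enough code written to make it pass.

```python
# Minimal TDD sketch: test first, then the smallest implementation that passes.

def test_normalize_hostname():
    # Step 1: this test was "written" before any implementation existed,
    # so the first run fails (red)
    assert normalize_hostname("  Router-1  ") == "router-1"

def normalize_hostname(name):
    # Step 2: the minimal implementation that makes the test above pass (green)
    return name.strip().lower()

test_normalize_hostname()   # re-run after every change: red -> green -> refactor
print("test passed")
```

The cycle then repeats: add a new failing test for the next requirement, write code to pass it, and refactor with the tests as a safety net.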


Testing

The Testing phase “classically” takes the software code from the Implementation phase as its input, where the Test Engineers install it into a Test Environment (such as the Cisco Developer Sandbox) to execute their test plan.

Test plans will include every test to be performed, covering all features and functionality of the software as specified by the customer requirements; this may also include:

  • Integration Testing
  • Performance Testing
  • Security Testing

When a test does not “pass” it is designated as a bug to be fixed, which is passed back to the developers, and then the software is ENTIRELY re-tested to ensure that the bug fixes did not introduce any other bugs into the software.

At the end of this phase, a production-ready / bug-free piece of software should be ready for deployment into the Production Environment. Though this is of course just the ideal scenario – it is commonly known and expected that no software goes into production completely bug-free. For this reason software is kept observable and testable while in Production, and made resilient so that it still performs essential functions while bugs are being fixed.

For this reason developers have developed, and continue to develop, ways to automate testing at several levels of the software, from individual functions up to large-scale component / module aggregations.


Deployment

During this phase the (ideally bug-free) software is installed into the Production Environment. If there are no deployment issues, the product manager works with infrastructure architects / engineers to decide if the software is ready for release.

At this final phase of deployment, the software is released to end users.

More modern efforts look at deployment differently, automating deployment / release management / updating / scaling / other tasks very early. The most advanced is the CD (Continuous Delivery) pipeline, which builds / deploys / tests an evolving software product frequently, and further automates configuring and deploying the underlying (virtualized) infrastructure to bring Testing / Staging / Production as close to identical as possible, limiting the unknowns as much as possible.


Maintenance

The development team will perform the following tasks during this phase:

  • Provide support to end users
  • Fix bugs found in production
  • Work on software improvements
  • Gather new requests from customers

Classically at the end of the Maintenance Phase (which never truly ends), the development team will be looking forward to the next iteration or version of the software being developed, starting the Development Cycle over from the beginning.

Teams using the “Agile” Method will close this phase almost immediately, as they push pieces and functions of the software into production in bursts (adding features and bug fixes along the way), which has been shown to accelerate development while keeping the software aligned with customer / stakeholder goals.

Reviewing the Software Development Methodologies Waterfall, Agile, and Lean

Waterfall Methodology

This Methodology is the traditional development model still in use today, which is basically the SDLC process described above, except the process flows only one way in a single iteration, and can be thought of via the following graphic:


It’s thought of as a “Relay Race”: a baton is passed to the next step, always moving forward toward the finish line, never backwards.

This means that one phase cannot overlap with the next, making it critical that the current phase be thoroughly documented when presented to the team or individual for the next phase, as a mistake made in one phase cannot be corrected in another and can derail the entire software development iteration!

For the sake of being thorough: this model was introduced by Winston W. Royce in a 1970 article called “Managing the Development of Large Software Systems”, though he never referred to it as the “Waterfall” Methodology in the article.

Agile Methodology

From a high level view Agile is considered the most popular modern SD Methodology.

Agile is flexible and customer-focused; long in use, it became an official Method in 2001, when a team of software developers created something called the “Agile Manifesto”, which addressed frustrations with the options / methods in use at the time.

The Agile Manifesto consists of the following values (as in motto / moral values):

  • Individual and Interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer Collaboration over contract negotiation
  • Responding to Change over following a plan

To accomplish these values, the Agile Manifesto lists 12 principles / guidelines:

  1. Customer Focus – Our highest priority is to satisfy the customer through early and continuous delivery of valuable software
  2. Embrace Change and Adapt – Welcome changing requirements, even late in development; Agile processes harness change for the customer’s competitive edge
  3. Frequent delivery of working software – Deliver working software frequently, from a couple of weeks to a couple of months, with a preference for the shorter timescale
  4. Collaboration – Business people and developers must work together daily throughout the lifespan of the project
  5. Motivated Teams – Build projects around motivated individuals, give them the environment and support they need, and trust them to get the job done
  6. Face-to-Face Conversations – The most efficient and effective method of conveying information to and within a development team is face to face conversations
  7. Working Software – Working software is the primary measure of progress
  8. Work at a sustainable pace – Agile processes promote sustainable development, the sponsors / developers / users should be able to maintain a constant pace indefinitely
  9. Agile Environment – Continuous attention to technical excellence and good design enhances agility
  10. Simplicity – The Art of Maximizing the amount of work not done is essential
  11. Self-Organizing teams – The best architectures / requirements / designs emerge from teams that can self organize
  12. Continuous Improvement – At regular intervals the team reflects on how to become more effective, then fine tunes behaviors to accomplish this effectiveness

As Agile is an ever evolving Methodology, there have been several different takes on the Methodology over time, as described below:

  • Agile Scrum – This plays on the term “Scrum” where in Rugby players will group together in an effort to take the ball, small Agile development teams will meet daily and work in iterative sprints, constantly adapting and delivering in changing environments
  • Lean – Based on Lean Manufacturing this method emphasizes eliminating wasted effort in planning and execution of tasks, and reduction of programmer cognitive load which will be touched on more in the Lean Method section
  • Extreme Programming (XP) – Is much like Scrum Methodology but has a focus on best practices and software engineering, uses methods such as Pair Programming and end-to-end process automation like Continuous Integration (CI)
  • Feature-Driven Development (FDD) – This method defines that the overall software model should be broken out / planned / designed / built feature by feature, focusing on feature delivery, utilizing both Core Development Team Engineers and specialized feature engineers referred to as Build Engineers, Toolsmiths, etc

Of all of the above methods, Agile Scrum is the most widely adopted Agile Method, and some of its terminology (defined below) has been adopted by a majority of the Agile community across all Agile Methodologies.

  • Sprints – These are essentially bursts / iterations of the SDLC in a very short amount of time, which is predefined or “time boxed” so there is a set amount of time for each sprint / task to be accomplished by the team within the sprint. During a Sprint the team takes on as many tasks (called “User Stories”) as they feel can be accomplished within the time-boxed duration, and when the Sprint is over there should be working and deliverable software (although this does not mean the software WILL be delivered) – the point is to ensure that the software remains constantly deliverable
  • Backlog – It is the role of the Product Owner to create the Backlog, which is a listing of all software features in prioritized order; this list is the result of the “Requirements and Analysis” phase of the SDLC, with new features able to be added to the Backlog and re-prioritized at any time
  • User Stories – These are the broken-down pieces of creating a feature, assigned to a team or individual to accomplish in a single “Sprint” session; if the task cannot be accomplished in a single Sprint, it should be further broken down into more granular User Stories.
  • Scrum Teams – These teams are made up of people with different roles to accomplish the SDLC, being collaborative / cross-functional / self-managed / self-empowered. They meet in what’s called a “Standup” for no longer than 15 minutes daily (hence the term Standup, as there should be no need to sit), with the focus of tracking daily progress and assigning User Stories / Roles / Updates. The “Scrum Master” is essentially the team lead who tracks this progress and breaks down any issues or roadblocks that are derailing Sprint / SDLC progress.

Lean Methodology

As mentioned, Lean software development is based on Lean Manufacturing principles, which focus on minimizing waste / maximizing value to the customer.

The principles of Lean are as described below:

  • Eliminate waste
  • Amplify learning
  • Decide as late as possible
  • Deliver as fast as possible
  • Empower the team
  • Build integrity in
  • Optimize the whole

Eliminate Waste

  1. Partially done work – Work that isn’t finished provides no value
  2. Extra Processes – Useless processes that add work but no value
  3. Extra Features – If the customer didn’t ask for it, don’t waste time on it
  4. Task Switching – This refers to working on projects that are not synchronous, requiring the developer to switch tasks and waste time refocusing on different contexts
  5. Waiting – Approvals, getting answers, making decisions, testing, etc
  6. Motion – This applies to both people and code, like when someone steps away from their desk to ask a team member a question (waste), or when code does not have complete documentation when passed on to the next team member (waste)
  7. Defects – Basically bugs that go unnoticed until they begin to snowball into a much larger issue during the SDLC, which ends up being a HUGE waste

Amplify Learning

Writing software takes practice, so smaller iterations of work are thought to be better for amplifying the developer’s ability to learn faster / add more value to their work.

Decide as late as possible

This is done in times of uncertainty (otherwise delaying would itself be waste): it is better to base your decision on facts as late as possible, rather than making incorrect decisions sooner in the process based on opinions or speculation.

Deliver as fast as possible

Enables customers to provide feedback / Amplification of Learning / Faster Decision Making / Less time for customers to change their mind / Produces less waste.

Empower the team

This essentially means let the experts be the experts: managers should not assign tasks, but rather let the experts make their own decisions, which will produce faster and better results, further reducing overall waste.

Build Integrity In

This essentially means delivering a complete / quality piece of software that can stand up to extreme workloads or complex situations and retain its integrity as working software.

Optimize the whole

This goes back to Empowering the Team: while each expert must be allowed to make their own decisions, they must also look at the bigger picture and work cohesively, considering the ramifications their actions can have on delivering the software as a whole.

Software Design Patterns – Original, Observer, and Model-View-Controller (MVC)

Software Design Patterns are best-practice solutions for solving common problems in software development. Design Patterns are language-independent, so they can be applied in any language, particularly object-oriented ones.

Some popular Design Patterns such as MVC have encouraged the developer community to create frameworks that simplify implementation in widely-used languages, like Django for Python, Ruby on Rails, Spring for Java, and Backbone.js for JavaScript.

Original Design Patterns

These are defined in three main categories:

  • Creational – Patterns used to guide, simplify, and abstract software object creation at scale
  • Structural – Patterns describing reliable ways of using objects and classes for different kinds of software projects
  • Behavioral – Patterns detailing how objects can communicate and work together to meet familiar challenges in software engineering

There are a total of 23 original Design Patterns that are considered the foundation of newer design patterns, and that is probably more than enough to cover for Original DPs.
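As one tiny illustration of the Creational category, here is a hedged sketch of a simple factory in Python (the device classes and names are hypothetical, not from the course): object creation is centralized in one place so callers never hard-code which class to instantiate.

```python
# Simple factory sketch: creation logic lives in one function.
class Router:
    kind = "router"

class Switch:
    kind = "switch"

def device_factory(kind):
    """Create the right device object from a string, in one place."""
    catalog = {"router": Router, "switch": Switch}
    return catalog[kind]()   # callers never name Router/Switch directly

dev = device_factory("switch")
print(dev.kind)   # prints "switch"
```

Adding a new device type then only means registering it in the catalog, rather than touching every caller.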

Observer Design Pattern

The “Observer Design Pattern” is a notification-based design that sends a notification to an Observer / Subscriber when a change is made to an object they are observing, called the Subject / Publisher.

For the Subscription Mechanism to work:

  1. The Subject must have the ability to store a list of its Observers
  2. The Subject must have methods to Add / Remove Observers
  3. All Observers implement a callback function for the Publisher to invoke when it sends a notification, using a standard interface if possible to simplify the process. This interface needs to declare its notification mechanism to Subjects or “Publishers” so they can send the necessary data back to the Observer or “Subscriber”

Now being a Network Engineer I am trying to keep networking language separate, but this is simply too close to the SNMP model not to point out – it basically works like SNMP:


Of course SNMP does work differently, but when writing this out myself to fully understand the relationship, that just screamed SNMP to me.

Anyways. The above graphic breaks down into the following flow of events:

  1. An Observer adds itself to a Subject’s list of Observers (by invoking the Subject’s method to add Observers to its list – so that is Subject-dependent!)
  2. When there is a change on the Subject, it notifies all Observers in its “Observer List” by invoking each Observer’s callback method to send the data
  3. The Observers’ callbacks are triggered and subsequently executed to process the notification data
  4. When an Observer is done receiving notifications, it again invokes the Subject’s method to remove that Observer from its Observer List
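That flow of events can be sketched in a few lines of Python (the class and method names here are my own invention, not from the course material):

```python
# Minimal Observer sketch: Subject keeps a list of Observers and invokes
# each Observer's callback when something changes.
class Subject:
    def __init__(self):
        self._observers = []            # 1. Subject stores its list of Observers

    def attach(self, observer):         # 2. Subject's method to Add an Observer
        self._observers.append(observer)

    def detach(self, observer):         # 2. ...and to Remove one
        self._observers.remove(observer)

    def notify(self, data):             # invoked whenever the Subject changes
        for observer in self._observers:
            observer.update(data)       # 3. each Observer's callback is triggered

class Observer:
    def __init__(self, name):
        self.name = name
        self.received = []

    def update(self, data):             # the standard callback interface
        self.received.append(data)

interface_status = Subject()
noc = Observer("NOC")
interface_status.attach(noc)                # Observer subscribes
interface_status.notify("Gig0/1 went down") # change -> callback fires
interface_status.detach(noc)                # 4. unsubscribe when done
```

After the detach, further notify() calls no longer reach that Observer, which is exactly the subscribe / unsubscribe lifecycle in the numbered steps above.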

A few takeaways from this for me: when a message is sent to the Subject or Observer, it’s up to the receiving side to invoke its method of doing things (for Observers it’s essentially the “Callback”, and for Subjects it’s Subscribing and Unsubscribing).

The subscription system provides better performance than polling (the way SNMP classically gathers data).

An interesting way to think of this is how Social Media works when you follow someone or connect with them on LinkedIn: you are the Observer, and they are the Subject / Publisher that added you to their Connections to get the notifications they create – and when you no longer want notifications, you end the subscription to that individual.

^^^ That is a really good way to look at Observer, which will finish the notes on that.

Model-View-Controller (MVC) Design Pattern

The Model-View-Controller is at times described as an Architectural Design Pattern, rather than a Design Pattern mechanism such as the Observer / Subject model, as its goal is to simplify development of software / applications that rely on GUIs. MVC abstracts code and roles into three different components – (you guessed it!) Model / View / Controller.

MVC Components:

  • Model – This is the Application’s data structure, which manages the data, logic, and rules of the Application via the input it receives from the Controller
  • View – This is the Visual representation that is presented of the data to the user, the same data can be represented in multiple ways.
  • Controller – The Controller takes the user input and manipulates it to format for both the Model and the View (final presentation), acting as a sort of middleman

This may at first sound odd when visually looking at the data flow, but do read the process below it for a full understanding of how the MVC Design Pattern works:


The steps in the MVC process are as follows, step by step:

  1. User provides data input to the Controller
  2. The Controller accepts the data, and manipulates it
  3. The Controller sends the manipulated Data to the Model
  4. The Model takes the manipulated data and sends the selected data, per the Controller’s instructions, to the View
  5. The View then receives the selected data from the Model and presents it to the user
  6. The user sees the updated / manipulated output of the data they initially input

Because the MVC Design Pattern divides up these roles, its components can be re-used, as long as each component gets its data through the proper input and output interfaces that the components understand.
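The six steps above can be sketched as a bare-bones MVC in Python (all class and method names here are my own invention, not any real framework):

```python
# Bare-bones MVC sketch following the numbered data flow.
class Model:
    def __init__(self):
        self.devices = []

    def add(self, device):              # 4. Model stores the manipulated data
        self.devices.append(device)

class View:
    @staticmethod
    def render(devices):                # 5. View presents the selected data
        return "\n".join(f"- {d}" for d in devices)

class Controller:
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def handle_input(self, raw):        # 1-3. accept input, manipulate, pass on
        self.model.add(raw.strip().upper())
        return self.view.render(self.model.devices)   # 6. updated output

app = Controller(Model(), View())
print(app.handle_input("  sw-access-01 "))   # prints "- SW-ACCESS-01"
```

Note the user only ever talks to the Controller; swapping the View for, say, an HTML renderer would not touch the Model or Controller at all, which is the re-use point made above.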

GIT – Version Control Systems

Given I’ve already dived fairly deep into “GIT” in other articles, I will be cherry picking some new information or updated information from previous articles, and I will still likely be writing a “GIT Collaboration” Article demonstrating using Remote Repositories.

That being said, I will likely burn through this portion with few notes in this post, though I am sure there are some “Explain the benefits of GIT” sort of things that need to be reviewed – That being said lets GIT into it!

GIT of course is used for Version Control or Source Control of data, taking snapshots in time of all files / data in its repository. You can begin your own local repository, or clone a repository from a remote source like GitHub to your local machine to work on that repository’s files locally.

Some of the benefits of GIT (told ya):

  • It enables collaboration – Multiple people (a team or complete strangers) can work on the same project or set of files without overwriting each other’s work
  • Accountability and Visibility – As shown in a recent article you can see the complete Commit timeline of GIT with the users “user.name” and “user.email” along with timestamps and the Hash ID it produces for that Commit
  • Work Isolation – New features can be independently created without impacting the currently existing software operations
  • Safety – Files can be reverted back to previous Commits in their history
  • Work Anywhere – You can either access your Local GIT Repository on your laptop, or use a Remote Repository like GitHub and “clone” your repository to another machine and be working on your files within minutes

Being this is a topic on the DEVASC Blueprint I would Commit those ^^^^ 🙂

There are 3 different models of GIT:

  • Local Version Control System (LVCS) – This is a Local GIT Repository
  • Centralized Version Control System (CVCS) – This is a Client-Server model in which multiple people can work on a Remote Repository by doing a “checkin” and “checkout” of files to ensure no one else is working on them (it essentially locks everyone else out of them); this would be something you might see on a server on your work LAN or a locked-down Enterprise Cloud workspace (like GitHub or Azure’s Enterprise solutions)
  • Distributed Version Control System (DVCS) – This is considered Peer to Peer with a Remote Repository host like GitHub as the GIT Hosting service, where I have my own GitHub repository that I plan to work with once I catch up on my DevNet Class homework 🙂

I would be sure to know those longer acronyms for these different types of GIT repository for exam day, as well as their applications. The “Centralized” model is one I don’t work with in my own lab – so that is one to watch for on exam day, as most people likely work with Local / Remote repos and don’t even consider a Centralized Repository unless they work out of one at their job!

Three states of a file to know for exam day as well in GIT Terminology:

  • Modified – A file is Modified but not yet staged
  • Staged – The Modified file is now ready for a Commit to the repository
  • Committed – The file version is updated, the HEAD of that working directory / branch is updated to a new Hash ID, and GIT basically takes a snapshot of the repo at the time of that commit, as demonstrated in previous GIT posts
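The three states can be watched in action with “git status --porcelain” in a throwaway repo (a sketch with made-up file names and commit message, assuming a local GIT install):

```shell
# Walking one file through Modified -> Staged -> Committed in a scratch repo.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email demo@example.com    # throwaway identity for the demo
git config user.name Demo
echo "hostname R1" > config.txt           # new / Modified (untracked) file
git status --porcelain                    # shows "?? config.txt"
git add config.txt                        # file is now Staged
git status --porcelain                    # shows "A  config.txt"
git commit -q -m "initial config"         # file is now Committed, snapshot taken
git status --porcelain                    # prints nothing: working tree clean
```

Each state change is visible purely through the status output, which is a handy way to internalize the terminology for exam day.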

“Branches” are also a major part of GIT. I have mainly worked with the “Master Branch” in more recent GIT posts, though as we’ve seen, “Detached HEAD” mode does create a “Pseudo Branch” that exists in the repository (until a cleanup operation is run against it); upon doing a “git checkout master” to revert back to the current Master Branch / attached HEAD, it gave this output in a previous post:


Seen at the bottom is, surprisingly, a warning INCLUDING THE EXACT COMMAND(!!!) for giving that “Pseudo Branch” its own name to retrieve at a later time – however I was merely demonstrating Detached HEAD mode. This is what is meant by “Branching” out from the Master Branch if you want to work with multiple branches!

Working within GitHub and other GIT Repository Hosts

Remember GitHub and other Repository Hosts are just that, hosting services, while GIT is the actual Version Control System.

GitHub hosts a ton of collaborative projects that people by the thousands collaborate on from different communities, and Cisco has many of these published on Code Exchange and their own GitHub, for things like Hank Preston’s video series – this GitHub can be found in my sticky post at the bottom of the list of URLs for Developers.

Updating a Remote Repo via “git push” and updating the Local Repo via “git pull”

(And other GIT Commands)

First things first: as discussed in another post I made, conflicts will not allow repositories to update properly if they have conflicting information / changes in their version history.

This will happen if two people clone the same Remote Repo and work on it, and the other person pushes their changes before you: your timeline no longer matches the existing timeline of the repository, so it will conflict and not take the update from your changes.

GIT Push command (Local to Remote)

That being said, to push your local working repository to a Remote Repository like GitHub to collaborate, you would use “git push origin master” to update the Master Branch of the remote repository, or “git push origin (branch name)” to push to a non-Master Branch of the remote repository.

GIT Pull command (Remote to Local)

This pulls from a remote repository branch to update a local branch. You can also “git clone (remote repo)” to clone a remote repository for the first time to your local machine; after that, use “git pull origin master” or “git pull origin (branch)” to update your local branch from the Remote Branch.
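As a hedged sketch (assuming a local GIT install), a local bare repository can stand in for GitHub so “git push origin master” and a fresh clone can be tried end to end without a real remote (all repo, file, and user names here are made up):

```shell
# Local bare repo as a stand-in "remote" to demo push / clone / pull.
set -e
base=$(mktemp -d) && cd "$base"
git init -q --bare remote.git                        # the stand-in "GitHub"
git -C remote.git symbolic-ref HEAD refs/heads/master  # make master the default branch
git clone -q remote.git alice && cd alice            # first collaborator
git config user.email alice@example.com && git config user.name Alice
echo "v1" > app.txt && git add app.txt && git commit -q -m "v1"
git branch -M master                                 # be explicit about the branch name
git push -q origin master                            # Local -> Remote
cd "$base" && git clone -q remote.git bob && cd bob  # second collaborator
cat app.txt                                          # "v1" arrived via the remote
git pull -q origin master                            # Remote -> Local (already up to date here)
```

If "alice" pushed again after "bob" cloned, bob's next pull is what brings his timeline back in sync, which is the conflict-avoidance point made above.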

GIT Merge command (Merge remote Branch to Local Branch)

To merge two branches together, the “git merge (branch)” command is used, where the branch name given is the source branch being merged into your current branch.

To ensure the merge lands in the right place, you will first use “git checkout (branch)” to check out the target GIT branch, and then “git merge (branch)” with the source branch name.

If you merge more than one branch at a time into the current branch, this is called an “Octopus Merge”, which is probably a good term worth knowing for exam day.

echo “message” >> filename.ext in GIT repo

This appends to the file so that it is back in a “Modified” state, which will then need to be staged with “git add (filename)” and committed with the “git commit -m “some message”” command.

I did find also that inside of Cisco’s lab environment, “git commit -m “comment”” works without needing a file name or * to stage all files. I am not sure if this is best practice, but I have to assume it is correct if that is how it works in the Cisco environment, as they have amazingly well spun up on-demand labs as shown here:


Also one other command I never touched was “git diff (hash id) (hash id)”:


To get the two commit points I of course issued “git log --oneline” and found the two points I wanted to compare, which is seen with the “git compare …” in the screen snip.

Also to demonstrate checking out a branch other than the Master Branch which I have not done in previous GIT posts yet:


So now the working Branch is not the Master but a different Branch, whereas I’ve just been working off a Master Branch so far in my studies.

Also to demonstrate the “git merge” and deleting a branch after the merge:


This also demonstrates what is called the “Fast Forward” merge which I actually looked up and there is a fairly important point on “Fast Forward” merging.

Fast Forward Merging when it WILL and WILL NOT work!

What was shown above was the first time I had really even played with merging branches much at all, however there is an important distinction about merging branches or this “Fast Forward” Merge more specifically that we saw here.

If a branch splits from the Master Branch, is worked on, and is then merged back before Master moves, a Fast Forward Merge will occur where Git simply moves the Master pointer forward to the branch’s Head / Hash ID. However, if Master receives another commit after the branch was created, a Fast Forward will not work, as the two branches have diverged in GIT time.

To illustrate this point about why the merge will fail due to conflict in timelines:


This actually will not work with any type of Merge, until your Heads are on straight!

The Head is the most recent commit point on a Branch, and a Fast Forward cannot happen if the Master Head has moved past the point where the Branch was created – this has to be manually fixed or it will not merge.

To do this, you will need to “amend” your most recent commit, with:

“git commit --amend -m “Amending to Merge Branches”” – This will Amend the last commit as though the extra commit had never happened, which in this scenario will allow the branches to merge. When you are several commits in, you will instead need to use “git rebase” to move the Master Branch Head multiple commits back, which I believe is beyond the scope of DEVASC and I am honestly too tired to go above and beyond at this point.

NOTE ON CONFLICT HANDLING IN GIT – It can be done using “vim (filename.ext)” to go into the VIM text editor, manually remove the conflicting lines from the file causing issues, and then exit VIM with esc / : / wq / enter!

You may want to practice VIM and manual conflict handling, I can’t imagine you will need to know much more than it is how you CAN manually fix GIT Branch Merge Conflicts – However this still is generally done with a software application.

Coding Basics – A look at creating “Clean Code” and revisiting the basics

Coding basics is not just about how to make code work, but how to make “clean code” for a real world project, using best practices, and will review many topics beyond that.

What constitutes writing Clean Code vs Messy Code

I actually covered, while Parsing JSON, how White Space does not matter and the Data File will still be understood, and I can basically sum up Clean Code vs Messy Code with two images from my JSON Parsing posts below.

Clean JSON Code:


While some languages like YAML and Python force good indentation and do not tolerate odd spacing, as seen below JSON can be a total mess and still be a valid file:


Though this is still valid because the Square / Curly Brackets open and close at the proper points on the proper lines, the rest of the code is all over the place – a PC will read this no problem, but on a larger scale beyond a few names this is a problem for humans.

Clean code is a file you can hand off to another Developer, and they can not only read your code fluently because your naming convention / indentation / values all make sense, but should also be able to continue creating the code from where you left off, following your scheme, because it is so easily readable.

Clean code is also easier to interpret for what are called “Linters” that help clean up and format code properly, it will likely result in fewer bugs, it will be “modular” so it is easier for frameworks that perform testing to interpret, and it is just a good practice to always follow.

Methods and Functions

Methods and Functions share the same concept in that they are blocks of code that perform a task when executed; they can exist in the code but, if never called upon, will not execute any kind of task. Some use cases and best practices include:

  • Code that performs a discrete task like an input / output task, and is also used in Data Serialization / Parsing with Python and Data Format files
  • Pieces of code that perform tasks and are used several times throughout the code should be made into a Method or Function to call out the action to execute – in Python this would be done with () at the end of any function to call it

Arguments and Parameters

Arguments are the values passed into Functions when they are called, which can be any data type as long as it is the data type the function is expecting, while Parameters are the named placeholders defined within the function itself that receive those values.

The difference between Methods and Functions is that Functions are literal code blocks that perform a task, while Methods are code blocks that are associated with an object, generally used in object-oriented programming (like Python!).

For visual example of where Parameters and Functions exist in code:

def someFunction(someParameter1, someParameter2, someParameter3, someParameter4):

someFunction('someVariable1', 'someVariable2', 12.3, {'somethingElse4': '3'})

Here “someFunction” is defined with four Parameters within it, then that Function is called with four Arguments, using multiple data types – here it is 2 strings / 1 float / 1 key-pair or Python Dictionary (curly brackets).

Note – It is good to define parameters even when not strictly necessary in more modern languages, as it makes scripts or code easier to understand in terms of what is trying to be accomplished, and it provides context to any error output received when the code is executed.

Return Statements are called exactly that, “return statements”, and generally the syntax is “return someVariable”. Any code below a return statement within the same code block is skipped once the output is returned, so a return generally ends that block of code (the script will continue to execute further down, but the remaining code within that block is ignored).
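To illustrate this in Python (the function and variable names here are just made up for the demo):

```python
# Hypothetical demo: code after a "return" inside a function body never runs
def add_numbers(a, b):
    total = a + b
    return total
    print("This line is never reached")  # skipped - the return above ends the block

result = add_numbers(2, 3)
print(result)  # prints 5
```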


These are referred to as “Libraries” (or Modules) in Python: imported pre-defined functions that are re-used across different scripts and code. This is done with a statement like “import math”, for example, to then be able to use all the pre-defined Mathematical functions in that Library.
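A quick sketch of importing and using the math Library:

```python
import math  # pull the pre-defined math Library into the script

print(math.sqrt(16))    # 4.0
print(math.floor(5.8))  # 5
print(math.pi)          # 3.141592653589793
```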


For clarity’s sake, in programming languages generally (not specifically Python), Classes are used for:

  • Encapsulation
  • Data Abstraction
  • Polymorphism
  • Inheritance

With Python, Classes are used to logically bundle data and functionality together within the script, with each class declaration defining a new object “type” in the code. For example, using “str(HelloWorld)” converts HelloWorld = ‘Hello World!’ to the string class, so the object takes on the properties of that class (meaning it is an object of the string class, so I cannot use arithmetic like adding an Integer to HelloWorld without converting first).
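A minimal sketch of a Python class bundling data and functionality – the Router class and its attributes are hypothetical, not from the course:

```python
# Hypothetical Router class - bundles data (attributes) with functionality (methods)
class Router:
    def __init__(self, hostname, ip):
        self.hostname = hostname  # data stored on the object
        self.ip = ip

    def describe(self):  # a Method: a function tied to this object "type"
        return self.hostname + " at " + self.ip

r1 = Router("edge-rtr-1", "10.0.0.1")  # r1 is an object of the new Router type
print(r1.describe())  # edge-rtr-1 at 10.0.0.1
```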

Notes from Python Hands on Learning Lab

While in a Bash Shell, “which python” shows the Path to the Python executable that will be used, and “python -V” determines which Python version you are using.

In Bash you can also jump into interactive interpreter mode using “python” (or “python -i”), and then return to Bash at any time by using the exit function with closed parentheses: exit()

First Python script in learning lab for visual demo:


Once saved, this can be run from bash with “python feedme.py” (or python3), and it will output the first line unless the Boolean value is changed to False, in which case it will hit the else: branch and return that the hunger pains are gone!
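The lab script itself isn’t reproduced here, so the following is only my guess at roughly what feedme.py looks like – the exact messages are assumptions:

```python
# Guessed reconstruction of the lab's feedme.py - exact message strings are assumed
hungry = True  # flip this Boolean to False to hit the else: branch

if hungry:
    print("Feed me!")
else:
    print("The hunger pains are gone!")
```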

Python Data Types review

Integer / int = 1, 5, -28, 5400201563 <- Whole numbers of any size (note Python does not allow commas inside a number literal)

float = 1.23, 5.8, -10.3 <- Any Numeric Value with a decimal value

Boolean / bool = True / False <- Either true or not true, 1 or 0, yes or no

String / str = ‘A statement value that is not numeric’ <- Can use single or double quotes (at least in 3.x Python, I believe in 2.x there are stricter rules on Strings and quote types)

bytes / b = b’A statement \x95\x38′ <- I am admittedly not entirely sure of the use case for bytes, except that it is more for machine reading / returning a value in Binary when working with Network Automation scripts in my labbing with David Bombal’s course
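All of the above can be confirmed in the interpreter with type():

```python
# Confirming each basic data type with type()
print(type(5400201563))  # <class 'int'>
print(type(-10.3))       # <class 'float'>
print(type(True))        # <class 'bool'>
print(type('Hello!'))    # <class 'str'>
print(type(b'\x95'))     # <class 'bytes'>
```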

Python Numeric Operators review

+ = Addition

– = Subtraction

*  = Multiplication

/ = Division (returns a float numeric value)

// = Floored Division (returns an integer numeric value)

% = Modulus (remainder)

** = Power / Exponent
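A quick run of each numeric operator, in particular showing the different results of / versus //:

```python
print(7 + 3)   # 10
print(7 - 3)   # 4
print(7 * 3)   # 21
print(7 / 3)   # 2.3333333333333335 - division always returns a float
print(7 // 3)  # 2 - floored division returns the integer part
print(7 % 3)   # 1 - modulus returns the remainder
print(7 ** 3)  # 343 - 7 to the power of 3
```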

Python String Operators

+ = Concatenation (str(3) + ‘Hello!’) = 3Hello! <- the Integer must first be cast to a String

* = Multiplication (3 * ‘Hello!’) = Hello!Hello!Hello!

\n = Line Break; not sure where this fits in, but it is important to know. This is basically like hitting the return key to go to the next line, and is used a lot in Network Automation – when sending commands to a Network Device you will put \n at the end of commands being written to the device to indicate you are ‘entering’ that command and moving down a line.
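These string operators can be demonstrated in the interpreter – note that 3 + 'Hello!' without the str() cast actually throws a TypeError:

```python
print(str(3) + 'Hello!')  # 3Hello! - the int has to be cast to a str first
print(3 * 'Hello!')       # Hello!Hello!Hello!

try:
    3 + 'Hello!'          # no cast - Python refuses to mix int and str
except TypeError:
    print('Concatenation needs strings on both sides')

print('show run\nshow ver')  # \n moves to the next line, like pressing Enter
```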

You can find what data type something is by using “type(variable)” shown below:


Here I demo in the Cisco Learning Lab environment jumping into interactive Python mode from their Linux Shell; I assign a = 5 and type(a) shows int, then assign a = “””Hello World!””” to demonstrate that Python allows triple single or double quotes to open and close a String.

I am glad I labbed this out a bit as can be seen the Multiplication String Operator just prints the String x amount of times, however when doing Concatenation you must turn the Integer or Float value into a String value by assigning it to that “Class” as described above – This is because Python will only perform Concatenation with String Values whereas Multiplication applies the math function to the String by printing it x times.

You can also do something like “one” + “two” without an Integer, and it will simply return ‘onetwo’ as the result; I probably should have started with that.

Below are some examples of what are called “Convenience Methods” and the different ways they can be written out in Python, including the output from the IDE:


Something I will need to review more later, but wanted to post it up here for review.


Integers, strings, operators, functions, absolutely everything is an object in Python.

Below is a demonstration of what are called attributes, accessed by adding a ‘.’ after a string / number / function to tap into additional functionality, as shown below:


I found the bit length one pretty cool, because 57 in binary is 111001 – 6 bits long!
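The bit length attribute, and a string method for good measure, can be checked like this:

```python
print(bin(57))            # 0b111001
print((57).bit_length())  # 6 - six binary digits are needed to hold 57
print('hello'.upper())    # HELLO - strings carry attributes / methods too
```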

I have to start to speed my way through the course, so I will likely be throwing up a lot of screen snips for later review of different Python Variables such as this Variable demo:



Comparison Operator Review

< = Less than

> = Greater Than

<= or >= = Less than or Equal to / Greater than or Equal to

== = Equal to (is exactly equal to)

!= = Not equal to (anything but this)

in = (contains element)
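Each of these operators returns a Boolean, which a quick interpreter session shows:

```python
print(5 < 10)           # True
print(5 >= 10)          # False
print(5 == 5)           # True
print(5 != 5)           # False
print('a' in ['a', 2])  # True - "in" checks whether an element is contained
```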

Creating and executing Python Functions (examples)


Native Python Data Structure review

list = [‘a’, 2, 12.6] = Ordered list of items that can be re-arranged after created, can contain duplicate items, can have all different data types within

tuple = (‘a’, 6, 3.59) = Just like a list except it is immutable / cannot be altered

dict = {“homework”: 50, “time”: 5, “studytime”: -45} = This is a lot like a JSON Key Pair, note there is no comma on the last Key-Pair before it is closed with a curly bracket.
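A small runnable demo of these three data structures, reusing the values above:

```python
my_list = ['a', 2, 12.6]   # ordered, mutable, duplicates allowed
my_list.append('a')
my_tuple = ('a', 6, 3.59)  # just like a list, but immutable
my_dict = {"homework": 50, "time": 5, "studytime": -45}

print(my_list)              # ['a', 2, 12.6, 'a']
print(my_tuple[0])          # a
print(my_dict["homework"])  # 50
```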

Demo of these:


Importing from Python Libraries / Collections example:


Data Structures in Python:




Loops and Iterations:


I have to stop the Python here, there is so much labbing I need to review, but I just have to push forward to get my DevNet Class quiz completed in time so I have to move on.

Code Review

The process of code review is done by one, but ideally more, developers reviewing a peer’s code with an understanding of the code’s goal, to give feedback that improves:

  • Ease of reading / understanding the code
  • Following best practices / correct formatting
  • Freedom from bugs
  • Good comments in the code and documentation
  • Being “clean code”

Code review done by peers is kind of like me proofreading my own material I write on here, as I find typos all over my articles, sometimes immediately after posting them and sometimes months / years later.

The flow chart not using MS Paint as I am hauling butt to finish this module:

Code Author -> Code Reviewer -> Sent back to Code Author with criticisms (if good, it is ready; if it needs fixes, it gets fixed and sent back to the reviewers) -> Reviewers review the code again -> if no further fixes are needed the code is considered “Final Code”

There are 4 types of code review

  • Formal Code Review – This consists of a series of meetings of Developers to review the whole codebase, line by line, discussing each one in detail.
  • Change-Based Code Review – Also known as tool-assisted code review, often driven by changes from a bug / user story (Agile) / feature / commit / etc.  This is generally done by a developer’s peers using tools that highlight the differences between the original code and the new code, so the review covers the changes themselves rather than going line by line.
  • Over the Shoulder code review – This is an informal peer review where the coder will go line by line explaining functions and the peer will give feedback.
  • Email pass-around – This works in conjunction with Change Management systems that generate an automated email every time a file is “checked in”; however, this has the downside that the review can be vague and there is no distinction between complete code changes and changing just a single line

Testing (Software)

There are two different methods of testing:

  • Functional Testing – This is to see if the software works right in general in terms of behaviors and functionality, from low-level tests of individual functions (“Unit Testing”) up to high-level things like Software Integration with other Systems
  • Non-Functional Testing – This focuses on usability / performance / security / resilience / compliance – Basically maximum value and minimal risk

Testing will generally happen in the SDLC defined at the top of this article (yes, waaaay back up there), though modern SDLC methods such as Agile – which uses the MVP (Minimum Viable Product) approach, creating software in feature-based “Sprints” using CI/CD (Continuous Integration / Continuous Delivery) – will generally be subject to both functional and non-functional testing methods.

There is also “Test-Driven Development”, which creates a framework for testing software first to drive its development, rather than developing software and then testing it.

Unit Testing

This is testing software down to individual lines of code – literally testing the individual components of the different scripts that make up the functions or pieces of the program, and verifying their output does not return unexpected results. Two tools exist to help with this:

  • unittest – A framework included in Python by default which enables creation of test collections as methods via extending the TestCase class in this Library / Module
  • PyTest – This is a framework added to Python via “pip install pytest” that can run unittest tests without needing to modify them, and is used for more complex Python suites such as pyATS (it also uses plain functions rather than classes, which simplifies the testing process for developers)
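A minimal unittest sketch – the add function and the test names are made up for this demo – showing a test collection created by extending the TestCase class:

```python
import unittest

def add(a, b):  # hypothetical "application code" unit under test
    return a + b

class TestAdd(unittest.TestCase):  # test collection made by extending TestCase
    def test_add_integers(self):
        self.assertEqual(add(2, 3), 5)  # verify no unexpected result

    def test_add_strings(self):
        self.assertEqual(add('dev', 'net'), 'devnet')

# Run the suite in-script; "python file.py" would normally call unittest.main()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```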

Integration Testing

This comes after Unit Testing to confirm that integrations with other vendors / URLs / APIs / 3rd party systems all work as expected, generally by using “PyTest” to perform.

Test-Driven Development

This is meant to test each small unit of the larger software or codebase individually, ensuring that the units work for the purpose they were built for, and catch bugs locally and fix them early on in the process rather than them “snowballing” into a larger issue down the SDLC for a piece of Software.

There is a 5 step process for TDD:

  1. Create new “test code” first
  2. Run tests to check for failures or unexpected behaviors
  3. Write “application code” to pass a newly created test
  4. Run tests to see if anything fails
  5. Refactor and improve application code with any issues found
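The steps above can be sketched in Python – normalize_hostname is a hypothetical function invented purely for this demo:

```python
# Step 1: write the test code first - the function it tests does not exist yet
def test_normalize_hostname():
    assert normalize_hostname('  Edge-RTR-1 ') == 'edge-rtr-1'

# Step 2: running the test here would fail (NameError - no application code yet)

# Step 3: write just enough application code to make the new test pass
def normalize_hostname(name):  # hypothetical function for this demo
    return name.strip().lower()

# Step 4: run the test again - it now passes (step 5 would refactor if needed)
test_normalize_hostname()
print('test passed')
```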

Essentially the concept of Test-Driven Development is to create “Test Code” before writing the “Application Code” that it tests, and if either fails, fix the bugs and re-test.

It’s as redundant as it sounds, but some developers advocate for this type of code testing when developing software – the main concept of writing test code before software code is the main takeaway from that!


I completely skipped posting the Data Formats review here as I went through XML / JSON / YAML in a separate article, I do need to touch up the YAML page, but I went through all the information contained in this page in the span of about two days.

So now that my brain is fried like an egg, I am going to stare at my TV until Monday AM!

Until next time! 🙂

