Design Patterns such as MVC (above) and Observer make software development a breeze!
Or at least that is the goal: standardized, well-known Design Patterns that have been refined by other developers and shared out to the community in different locations (perhaps at Cisco Developer Code Exchange?) so that Devs can use that code for their own projects without having to "reinvent the wheel" on every project if one Design Pattern works!
Model-View-Controller (MVC) Design Pattern
As seen in the picture at the top, the diagram shows the Data Flow of the MVC Design Pattern:
MVC is known for "Decoupling an application's interdependencies and functions" from its other components, with the goal of making the logical layers of an Application Modular, so that Presentation, for example, is separate from the Business Logic; when changes are made at one layer they do not impact the other logical / Modular layers.
The roles of the MVC Components are as follows:
- Model – Receives and manipulates data tied to a specific Database / Datastore (or possibly just a single file), performing read/write/update/delete/etc operations. The Controller sends the Model an Update with instructions on how to 'Manipulate Data' via the backend systems it uses for 'Business Logic', i.e. processing data operations for the end user
- View – The primary function of the View is to "Render Data" for end-user consumption via a Web Interface, GUI, CLI, etc, making it essentially the "Presentation Layer" that MVC keeps as a separate layer so as not to impact the other components' operation / logic
- Controller – This is the intermediary that receives User Input and translates it for the Model, passing those requests to the specified Datastore (or whatever the Model fronts) with instructions on the processing needed, then updating the View once that change is made
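To make that Update flow concrete, here is a minimal sketch in Python (the class and method names are my own for illustration, not from any framework): the Controller takes user input, tells the Model how to manipulate its data, then refreshes the View:

```python
# Minimal MVC sketch; names are illustrative, not from any real framework.

class Model:
    """Holds and manipulates the data (stand-in for a real Datastore)."""
    def __init__(self):
        self.data = {}

    def update(self, key, value):
        # "Business Logic" (validation, backend calls, etc.) would live here
        self.data[key] = value

class View:
    """Renders data for end-user consumption (CLI-style string here)."""
    def render(self, data):
        return ", ".join(f"{k}={v}" for k, v in sorted(data.items()))

class Controller:
    """Receives User Input, updates the Model, then refreshes the View."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle_input(self, key, value):
        self.model.update(key, value)             # Controller -> Model
        return self.view.render(self.model.data)  # Model -> View

controller = Controller(Model(), View())
print(controller.handle_input("hostname", "router1"))  # hostname=router1
```

Notice the View never touches the Datastore and the Model never formats output; swapping the CLI View for a Web GUI View would not require touching the Model at all.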
Analogy to think of MVC:
It reminds me of DNS: you type a website into a web browser (View), a DNS Server resolves the web address to an IP Address (Controller), and the server at that IP Address presents data back based on the criteria put into the URL (Model).
For exam day: this system keeps its components independent of each other so they can be reused, refined, or optimized over time, and it "decouples" operations from each other!
Observer (and Subject) Design Pattern
This is a very simple (visually explainable) Design Pattern as shown here:
I just noticed the “Subscriver” typo in that picture, I remember that was a long study day 🙂
Really, I would remember both terms for the only two components of this model: Observers are also known as "Subscribers" because they literally subscribe for Notifications from a Subject or "Publisher" device, which triggers Notifications to all Subscribers (Observers) whenever any kind of change event is detected on that device.
I keep it straight with how CUCM (VOIP) Servers work, where the Publisher is the main CUCM Server where changes are made, and the Subscribers are secondary servers that subscribe to the Publisher for info and process VOIP workloads (except in smaller deployments or server-down situations where the Publisher is somehow the last server standing in a cluster 🙂).
A couple exam day bullet points of how each component works in more detail:
- Subject – The component being 'Observed'; it allows remote software or systems to "subscribe" to monitor a service or process. The Subject holds or maintains the data that is to be Synchronized to the Observers via subscription, and when a change occurs a notification is pushed to all Observers (notifications are PUSH ONLY!)
- Observer – The "Observer" component registers with the Subject and tells the Subject how to call back to it, so that data can be Synchronized when the Subject notifies the Observer.
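A minimal sketch of that Subject/Observer relationship in Python (the names are my own, just to show the push mechanics): Observers register with the Subject, and a state change pushes a notification to every registered Observer:

```python
# Minimal Subject/Observer (Publisher/Subscriber) sketch; names are illustrative.

class Subject:
    """Maintains the data and PUSHES notifications to registered Observers."""
    def __init__(self):
        self._observers = []
        self.state = None

    def register(self, observer):
        # The Observer tells the Subject how to call it back (its notify method)
        self._observers.append(observer)

    def set_state(self, state):
        self.state = state
        # Change event detected: push a notification to ALL Observers
        for obs in self._observers:
            obs.notify(state)

class Observer:
    """Registers with a Subject and synchronizes data when called back."""
    def __init__(self, name):
        self.name, self.last_seen = name, None

    def notify(self, state):
        self.last_seen = state  # synchronize the local copy of the data

subject = Subject()
subscribers = [Observer("sub1"), Observer("sub2")]
for sub in subscribers:
    subject.register(sub)
subject.set_state("interface Gi0/1 down")
print([s.last_seen for s in subscribers])
```

Note the Subject just loops over every registered Observer on each change, which is exactly why a change can ripple out to heavy resource usage at scale.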
This works almost in reverse of SNMP (and similar to NETCONF notifications?): where SNMP Polls for Data Changes, this model Pushes notifications for events and changes in data. At scale (many Observers registered to one Subject) this can be inefficient, as a single change may cause one huge ripple in network / resource utilization.
However, this Design Pattern could actually be integrated with the MVC Design Pattern, say if the backend systems for Multiple Views (interfaces) had to render the same data in different formats (AS400 terminal, Web GUI, Graphs / Charts, etc); just because they are two different Design Patterns does not mean they cannot be used together!
However that rabbit hole goes very deep, so I will leave the summarized version there!
BASH (Bourne Again Shell) Line Commands for DevNet Exam Day!
BASH Prompts are really more native to Linux and Mac OS machines; however, the BASH Prompt is universal to many different systems, being considered the layer between the User and the underlying Operating System and its features.
I will focus on Linux BASH here, though it's fairly universal across most systems.
This is the Linux Directory Tree, with Root or / being the base of the Linux OS, much like C:\ is to the Windows OS. Before getting into Directory Navigation, I want to show off a command I amazingly didn't know about until reading the DevNet OCG by Cisco:
BASH has an amazing manual available right at the prompt, with extremely thorough explanations of every command or command modifier you can type into it!
Typing "man man" will give you a manual on how to use the manual system, and then you can type something like "man ls" to see a deeper explanation of the "ls" command; a really helpful utility I am sure I will be reading casually in some down time from DevNet study!
The "sudo" command modifier is of course used to run something as an Administrator (root) and should be used as minimally as needed, though you will generally need it regularly to edit / update files, and at times you will just want to change a folder's permissions so it can be used by any user to make it work with workflows (will cover soon).
I want to hit directory navigation first, with that quick tree picture up top; the commands are as follows, starting with "cd" or "Change Directory":
- “cd /” – Change back to the root directory (not home)
- “cd ~” – Change back to home directory for user
- “cd folder” – Change directory to target folder (case sensitive)
- “cd folder/folder” – Change directory to target folder within another folder
- “cd ..” – Change Directory back a folder in the Directory Tree
- “pwd” – Print Working Directory (shows the directory you are working in)
- “ls” – List Stuff… or something. It Lists Files and Folders in the directory
- “ls folder” – Lists Folders and Files in the target directory (called ‘folder’ here)
- "ls -a" – Lists "all" objects in the directory, including hidden files (those starting with a dot)
- “ls -l” – Lists the permissions and user / group ownership of the file or folder
- “ls -F” – Shows the List of Files and Folders but defines Folders with a “/” after them
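A quick sandbox run of those navigation commands (the /tmp/bashdemo paths are made up just for this demo):

```shell
# Build a small sandbox to navigate (demo paths only)
mkdir -p /tmp/bashdemo/folder/inner
cd /tmp/bashdemo
pwd                 # Print Working Directory: /tmp/bashdemo
cd folder/inner     # jump down two levels in one command
cd ..               # back up one folder (now in /tmp/bashdemo/folder)
cd /tmp/bashdemo    # absolute paths work from anywhere
touch file.txt
ls -F               # folders get a trailing "/" so you can tell them apart
```

The same steps work from your home directory; "cd ~" gets you back there from anywhere.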
I crammed all of the above commands into one giant screenshot here to review:
At the very top it can be seen that folders can be navigated either one step at a time, or by defining the path right to the target folder. Use "cd /" if you want to get back to the very Root of the tree; however, what you are likely looking for is "cd ~", which takes you back to your /home/user/ directory where BASH opens initially.
Note you can use the shortcuts "cd ~" or "cd /" from anywhere, and they will plant you back at the user or root directory respectively. The working directory is where you will make folders with "mkdir" or files with "nano" or "touch" unless a target is specified (covered later).
A note on the home directory (~) – when you see "~/", that refers to the home directory when used as a source or destination for copy / move / delete / etc commands!
Then I jump into looking at "ls", which I assume simply means list, but I say "list stuff" until someone tells me the "s" means something other than stuff… maybe it's something?
However – "ls" can be used on the current working dir or on remote directories, and it can be used with the modifiers described above to show regular files, all files, or the permissions associated with files. The -F modifier is interesting, as I didn't realize there was a command that specifies which type of object is which by putting a "/" after folder names to indicate a directory rather than a file or extension.
That covers the basics of Linux BASH, so let's take a step deeper into BASH commands!
It really doesn’t get too complex actually, I’ll bullet point style it here again:
- "cp" – The copy command does not move a file but duplicates it somewhere
- “cp testfile.txt test2.txt” – This would copy “testfile.txt” in the current directory into the defined file name, so you now have a second copy of that file
- “cp /home/loopedback/testfile.txt ~/test2.txt” – This does the same thing as above where it just makes a second copy of the file, just using directory names in two different formats
- "cp -r folder testfolder.old" – Copies a folder in the current working directory (the -r recursive flag is required for directories)
For the example file and folder I used "mkdir testfolder" and "touch testfile.txt" to just make them populate in the current directory; I expect most people reading this already know that, but just adding it in here somewhere 🙂
The copy function shown above is pretty self-explanatory; the main concept is that it makes duplicates, and it will not destroy the original copy of a file or folder, just duplicate it.
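Here is a quick sandbox run of those copy commands (the file and folder names are made up for the demo):

```shell
# Sandbox for the cp examples (demo paths only)
mkdir -p /tmp/cpdemo && cd /tmp/cpdemo
echo "hello" > testfile.txt
cp testfile.txt test2.txt        # duplicate a file in the same directory
mkdir testfolder
cp -r testfolder testfolder.old  # -r (recursive) is required for folders
ls                               # originals and copies both exist afterwards
```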
Another thing to "touch" on quick (har har har) is using the "nano" command to create a file; you will need to be logged in as a user with permission to the current directory (which can be checked via "ls -l" if you are not sure) to be able to save the file:
I wanted to verify that with nano you can create a new file to fill in for use with Automation or whatever you need in the working directory; but notice in "ls" it does not show a file extension, so if you intend this to be, say, an Ansible Playbook file you would want to name it "test.yaml" so it is understood by Ansible!
"cat (something)" prints whatever is in the target file to the BASH prompt; in this case I just wrote "test" into the file to demo it with "cat", which is actually our old friend "concatenate" abbreviated down to cat. There are a couple more uses for cat as well:
- “cat something.txt” – Displays content of target file
- "cat something.txt | more" – The piped "more" adds page breaks, so if it's a gigantic file you can parse your way through it, like a Cisco IOS running configuration
- “cat >something2.txt” – Drops you into almost a “nano” environment to paste in text or data and then exiting the file will copy whatever was pasted into it
This is fairly straightforward as well, I believe; cat basically prints the contents of a file. However, the "cat >file.txt" form was new to me, so that took a little playing with.
Essentially it drops you into a mode with no prompt, so I just copied and pasted the "ls" output from above and did a Ctrl + D (Ctrl + C also gets you out) to escape, and sure enough "cat test4.txt" shows the ls output I pasted in.
For precision changes use "nano" to update a file, but I can see how "cat >test.txt" could be handy for copying / pasting a huge chunk of information. I won't get into text placement within a file, like for example doing this in an Ansible Inventory file, but wanted to document that it is an option for adding chunks of text to a file as well.
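A sandbox run of those cat variations; note that "cat > file" normally reads from the keyboard until Ctrl + D, so it is scripted here with a here-doc to simulate the paste:

```shell
# Sandbox for the cat examples (demo paths only)
mkdir -p /tmp/catdemo && cd /tmp/catdemo
echo "test" > something.txt
cat something.txt            # prints the file contents
cat something.txt | more     # page breaks, handy for giant files
# Interactively, "cat > something2.txt" waits for typed/pasted input until
# Ctrl + D; a here-doc stands in for that paste when scripting:
cat > something2.txt <<'EOF'
pasted line one
pasted line two
EOF
cat something2.txt
```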
Next up are the Move / Remove / Touch commands, which are all pretty basic as well.
First I will bullet point how to move or “mv” files and folders within BASH:
- "mv test2.txt testtest2.txt" – This command just moves the file into a new file named something else, so it is essentially renaming the file
- “mv /home/loopedback/test2.txt ~/Documents” – Moves test2.txt from the home working directory to Documents directory
- "mv -i * ~/NewFolder/" – Moves all files from the current working directory to the NewFolder directory (-i prompts before overwriting anything), so if run in the home directory it would empty its contents
A note on that last command: "mv" will not auto-create the destination directory. If "NewFolder" does not exist and you are moving a single item, leaving the trailing "/" off will simply rename that item to "NewFolder"; the trailing "/" forces mv to treat the target as a directory (and error out if it doesn't exist):
So I tried moving a file to Documents and moving it around from there, so as not to start moving my entire home/user directory around, and found some interesting results that aren't too surprising, along with experimenting with the "touch" command as well:
Now both the files and the folder that were in "Documents" are moved to the user directory. One gotcha I ran into: you cannot move a directory down into one of its own sub-directories (mv will error out), though moving files into an existing sub-folder works fine.
I also played around a bit with Touch / Cat to verify that if a file exists, doing "touch" with the exact same file name will not erase or over-write it (or throw an error); it just updates the file's timestamp rather than creating a secondary copy like Windows might.
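A sandbox run of that move / touch behavior (the paths are made up for the demo):

```shell
# Sandbox for the mv / touch examples (demo paths only)
mkdir -p /tmp/mvdemo/Documents && cd /tmp/mvdemo
touch test2.txt
mv test2.txt testtest2.txt       # a move within one directory is a rename
mv testtest2.txt Documents/      # trailing "/" insists the target is a directory
touch Documents/testtest2.txt    # file already exists: no error, no duplicate,
                                 # just an updated timestamp
ls Documents
```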
Last a look at environment variables / echo / PATH / source commands!
You can see the environment variables in your BASH Shell by simply typing "env", or "env | more" to again add page breaks / parsing if there is a lot of output, as shown here:
Scrolling down, this shows all kinds of information about your session beyond the huge wall of text displayed here; however, I wanted to get that simple "env" in there.
Next is a discussion of more than just "echo", as that simply prints something to the BASH Terminal; in this case we are printing "$PATH", which is an Environment Variable used by BASH to look for executable files when going to execute a command. This can be verified and actually changed to suit your needs (the change does not survive if the session closes), shown below:
This can be made permanent by appending the export to the ".bashrc" file, which controls the BASH shell environment variables, using the following command:
echo 'export PATH=$PATH:/home/loopedback/bin' >> ~/.bashrc
You can then use "source ~/.bashrc" to reload the variables from the .bashrc file, or simply run ". ~/.bashrc" to accomplish the same thing; they are just different ways of achieving the same goal, and I am sure little nuggets like this will be sprinkled throughout studies!
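A quick sketch of checking and temporarily extending $PATH (the bin directory here is hypothetical); the change lasts only for the current session unless appended to .bashrc as shown above:

```shell
# Show the current search path BASH uses to find executables
echo "$PATH"
# Temporarily append a (hypothetical) personal bin directory for this session
export PATH="$PATH:/home/loopedback/bin"
echo "$PATH"
# To persist it you would append the export line to ~/.bashrc and then reload:
#   echo 'export PATH=$PATH:/home/loopedback/bin' >> ~/.bashrc
#   source ~/.bashrc     # or equivalently: . ~/.bashrc
```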
With that I am actually done with Bash Shell / Design Patterns, next up is GIT!
I am not exactly sure how robust the GIT section will be as I've covered that at fairly painful length in other posts, but I will fill in the knowledge gaps in the next article, and hopefully get to some Ansible labbing fairly soon here (I am having some VMware Network issues).
Until Next Time!!! 😀