
Time to do some GNS3 Switch Automation via Ansible on my Ubuntu Workstation VM!
This is the Topology I threw together to begin with. The 3rd octet changes with the Workstation NAT / DHCP on vmnet8; I still haven’t figured that out, but I’d rather spend the time labbing.
The Ubuntu “Ansible Control Node” and the “AutoVM1” host to be automated were set up in this post:
I set up “nano /etc/hosts” locally as a sort of “DNS” on my Ubuntu VM / “Ansible Control Node” to define the switches referenced in the Ansible Inventory file (“nano /etc/ansible/hosts”), which I will demo here by pinging the Host Names associated with the IPs configured on the GNS3 switches:

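For reference, the /etc/hosts entries are along these lines; the IPs below are just placeholders, since the 3rd octet depends on whatever vmnet8 DHCP hands out that day:

192.168.142.50 AutoVM1
192.168.142.101 SW1
192.168.142.102 SW2
(and so on for the rest of the switches in the Topology)
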
Next I make an SSH Key on the Ansible Control Node with “ssh-keygen” and leave all fields blank when it prompts for input, so it should then use the Ubuntu User / Pass:

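For anyone following along, that key generation is just the stock command with the defaults accepted at every prompt (default file location and an empty passphrase):

ssh-keygen
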
Highlighted is the public key that is copied to the Host VM to be automated. I am not quite sure how it will work with the switches yet, but I will add them into proper groupings now in the “Ansible Inventory File” in a way that makes sense for how I want to access logical groups.
(I then used the link at the top to set up the remote AutoVM1 to allow SSH for Automation)
As shown above the local host / DNS setup on the Ansible Control Node is working, and here we can see SSH is working to AutoVM1:

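The key copy over to AutoVM1 and the SSH test would look roughly like the below, assuming a user named “auto” on the remote VM (substitute whatever username the remote VM actually uses):

ssh-copy-id auto@AutoVM1
ssh auto@AutoVM1
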
I have not looked into how to SSH to the Switches yet. Before that I am going to get the Ansible Inventory File set up with a few different groupings of devices that suit my Automation needs, as shown below in “nano /etc/ansible/hosts”, first individually in the first picture and then by Host Groups or Parent Groups:


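I’ll defer to the screenshots for the exact contents, but the shape of the file is roughly the below, with hosts listed individually and then rolled up into Host Groups and a Parent Group (the group names here are just an example of the layout, not necessarily what I used):

server1

[core]
switch1
switch2

[access]
switch3
switch4

[switches:children]
core
access
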
Now even if the DHCP 3rd octet changes on the VMNet8 adapter, I have my hosts set up in the Ansible Inventory (sometimes called the Ansible Hosts) File, so this acts as the Ansible “DNS” from the Control Node to the network devices. However, I still haven’t worked out SSH to the GNS3 switches:

On SW1 I removed the “enable secret”, confirmed the local username, set “transport input all” on the VTY lines, and verified (seen at the top) that it can ping. Also worth pointing out above, before the big VIRL banner, is that the Ubuntu VM cannot ping the device by its Ansible Inventory name “switch1” the way “SW1” pings, as seen in the top pictures showing the local hosts file.
I tried just doing a regular SSH to SW1 from the Ansible Control Node as basic troubleshooting:
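For reference, the SW1 settings described above translate to something like the following IOS config, with a placeholder username / secret, and assuming the SSH prerequisites (domain name and RSA keys) are already present on the VIRL image:

conf t
no enable secret
username ansible privilege 15 secret ansible
line vty 0 4
 login local
 transport input all
end
wr mem
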

Since I would rather struggle a bit to understand it than watch a training video on how it is configured, I found something to enter into Ubuntu for the Ansible login creds in the “group_vars” file; however, I accidentally entered it on SW1 itself and got this:


That didn’t do the trick, but it was a good try. I at least turned it on and did a write mem, so I wanted to document it here in case I am wondering later why only SW1 is not working once I get this figured out 🙂
One demo that I found to resolve this was to put a username / password in the Inventory file:
[all:vars]
ansible_connection=ssh
ansible_user=vagrant
ansible_ssh_pass=vagrant
So I created my username / password variables within the Ansible Inventory File, however:

This is the kind of stuff I enjoy: seeing weird output and googling it until I figure out about 50 different things I wasn’t studying. It helps build out IT skillsets from others’ real world experience in forum discussions on tech and issues you may never have thought of.
Just to ensure this is something happening with my IOSv switches and not Ansible in general, I verify that the AutoVM (or “server1” in the Inventory file) can be pinged:

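That check is just the standard Ansible ping module against the Linux host, along the lines of:

ansible server1 -m ping
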
All the switches also have a local username / password database that matches my Ubuntu VM’s, and everything in this lab has the same creds, so it has to be either something with SSH Keys, some credential variable not set properly somewhere, or VIRL being difficult with Ansible.
After some brain cool down time I came back and got all switches pinging pretty fast!
I got pretty close setting the [all:vars] with a username and password, but there is specific Cisco IOS naming for these lines within the “Ansible Inventory File”, shown here:

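I’ll leave the exact lines to the screenshot, but the Cisco-flavored variables are along these lines with my own creds substituted in; treat this as a sketch, since the exact variable names shift a bit between Ansible versions:

[all:vars]
ansible_connection=network_cli
ansible_network_os=ios
ansible_user=ansible
ansible_password=ansible
ansible_port=22
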
After inputting my own username / password along with Port 22, as I am SSH’ing to the Switches in GNS3 rather than to a unique port on a Sandbox, I started hitting pings:

This demonstrates the power of Ansible with the Inventory File alone: I was able to group the Core / Access / All switches into 3 different Host groups, so I can apply appropriate changes to different design layers in the Topology as well as baseline configurations to ALL switches.
This was one of those issues I probably would not have slept over if I hadn’t figured it out, so seeing those pings was SUCH a sigh of relief. Next up will be actually making and running Playbooks!
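Based on the pings described here, targeting a single switch versus a group is just a matter of which Inventory name follows the ansible command; with the example group names from earlier, the checks would presumably look something like:

ansible switch1 -m ping
ansible core -m ping
ansible switches -m ping
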
Now that I have connectivity, I will halt this lab setup, and do some Automation next!
I was going to go a bit deeper into Ansible, but now that I have connectivity rocking to all my devices, as shown here by pinging either a single switch or a group of switches, I will start looking at how I might configure a switched network like this for the Core / Access layers.
Until next time!!! 🙂