This is going to be the ‘boring’ side of Intent-Based Networking. You will not find any mind-blowing information here, though it is very relevant – crucial, in fact, for making informed decisions about an Intent-Based Networking deployment!
To be clear, this discussion is still centered around Cisco DNA Center concepts.
Towards the bottom there is also a quick review of the migration from traditional network device management to a DNA Center network, to compare the differences.
To give a quick list of topics to be covered in this article:
- Distributed on-device Analytics
- Scalable Storage
- Analytics Engines
- Machine Learning (AI)
Let’s get right into it!
Software and hardware elements measure and collect data statistics. For example, the Cisco Catalyst 9000’s Cisco Unified Access Data Plane (UADP) 2.0 ASIC was developed to capture and store massive amounts of extremely granular information.
Even client devices that run iOS (Apple devices) can be used to provide information for network analytics if one wanted to get so granular (and you probably want to include a splash page on your wireless advising of the data collection for analytics).
There can also be dedicated sensors in the network, like Aironet Wireless Sensors.
Distributed On-Device Analytics
All this data being collected can actually be analyzed directly on the network devices themselves, because shipping every raw data point to a single controller would probably peg it. Instead, on-board analytics pinpoint KPIs, or Key Performance Indicators, so that only the main analysis points of those data sets are made available to the SDN Controller.
The term for getting this important data off the box and to the controller is “Telemetry”.
Some forms of telemetry we have all probably used on the job to extract network data are SNMP polling, Syslog, and NetFlow collectors, which rely on devices being polled or exporting data on intervals – but for a living, breathing, real-time network like a DNA Center network, we need data now!
For this reason DNA Center introduced an approach to telemetry called “Model-based streaming telemetry” which, as the name indicates, streams these key data points collected within your Network Fabric devices in real time.
The Fabric Devices stream KPIs and other interesting data by actually establishing TCP connections back to the DNA Center Controller using HTTP / gRPC (a Remote Procedure Call framework that runs over HTTP/2), meaning the KPI data is constantly being ‘pushed’ by the Fabric Devices for the Controller to review.
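To picture the ‘push’ model, here is a minimal sketch of what a device-side telemetry update might look like. This is purely illustrative: the function name, field names, and YANG-style paths are assumptions for the example, not DNA Center’s actual wire format (which is encoded against YANG models and streamed over gRPC/HTTP2).

```python
import json
import time

def build_telemetry_update(device_id, kpis):
    """Build a simplified model-based telemetry message.

    Real streaming telemetry encodes values against YANG model paths
    and pushes them over an established gRPC/HTTP2 session; this
    sketch only illustrates the shape of pushed data.
    """
    return {
        "device_id": device_id,
        "timestamp": int(time.time()),
        # KPIs keyed by a YANG-style path, e.g. interface counters
        "updates": [
            {"path": path, "value": value} for path, value in kpis.items()
        ],
    }

# A fabric edge switch pushing interface KPIs (illustrative values):
msg = build_telemetry_update(
    "edge-switch-01",
    {
        "interfaces/interface[name=Gi1/0/1]/in-octets": 1_234_567,
        "interfaces/interface[name=Gi1/0/1]/out-octets": 987_654,
    },
)
wire_bytes = json.dumps(msg).encode()  # what gets streamed to the controller
```

The key contrast with SNMP is who initiates: here the device builds and pushes the update itself, rather than waiting to be polled.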
There are a few different options for Scalable Storage (or storage in general):
- Centralized Collectors – One huge SAN with tons of RAID storage, but not really scalable
- Distributed Collectors – Lots of smaller collectors all over the network; the drawback is the complexity of deploying this solution
- Cloud-Based Collectors – It doesn’t get much more scalable and reliable than this, however it’s going to cost some decent money
With all this data collection and analytics to go through, a solution was needed (and sure enough was created) so that a human being could sift through this mountain of data and pick out the key parts that DNA Center has highlighted for review.
Cisco DNA Center provides what are called “Analytics Engines” which provide context along with any trouble issues detected on the network, including:
- Type of device experiencing the issue
- IP and MAC address of the device
- ISE security policy granted to the device (AAA, Permissions, Identity, etc.)
- Network connection endpoint, such as wired (switch) or wireless (WAP)
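The context an analytics engine attaches to a detected issue can be pictured as a simple record, one field per bullet above. The class and field names here are illustrative assumptions for the sketch, not DNA Center’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class IssueContext:
    """Illustrative context record for a detected network issue."""
    device_type: str          # e.g. access switch, AP, or client
    ip_address: str           # IP of the affected device
    mac_address: str          # MAC of the affected device
    ise_policy: str           # security policy granted by ISE
    connection_endpoint: str  # "wired" or "wireless"

# Example: a wireless client flagged by the analytics engine
issue = IssueContext(
    device_type="client",
    ip_address="10.1.20.37",
    mac_address="00:1b:44:11:3a:b7",
    ise_policy="Employee-Access",
    connection_endpoint="wireless",
)
```

Having all of this context bundled with the issue is what saves the engineer from manually correlating ISE, DHCP, and switchport data.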
Beyond providing context, Cisco DNA Center can ‘statistically learn’ from data collected without explicit programming to do so, whereas traditional network analytics either required Network Engineer time and expertise or simply wasn’t done (often the latter, in the past). Cisco DNA Center can also utilize AI / ML to automate “Baselining” of network performance (remember that concept we learned about during TSHOOT?).
In this way it quite literally uses AI and Machine Learning to suggest what is wrong with the network by comparing baseline statistics against current statistics, and to suggest fixes to the person standing at the Controller – and those fixes, of course, can and will be automated!
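The core idea of baselining can be shown in a toy sketch: learn what “normal” looks like from historical samples, then flag a current reading that strays too far from it. This is a deliberately simplified stand-in; DNA Center’s actual models are far more sophisticated, and the function name and threshold here are assumptions for illustration.

```python
from statistics import mean, stdev

def deviates_from_baseline(baseline_samples, current_value, threshold=3.0):
    """Flag a KPI whose current value strays too far from its baseline.

    Learns 'normal' from history, then flags readings more than
    `threshold` standard deviations away from the baseline mean.
    """
    mu = mean(baseline_samples)
    sigma = stdev(baseline_samples)
    if sigma == 0:
        return current_value != mu  # flat baseline: any change is anomalous
    return abs(current_value - mu) / sigma > threshold

# Interface utilisation (%) sampled over a quiet week (illustrative):
history = [12, 14, 13, 15, 11, 13, 14, 12, 13, 14]
deviates_from_baseline(history, 13)  # a normal reading -> False
deviates_from_baseline(history, 95)  # a sudden spike  -> True
```

A traditional network needs an engineer to remember what “normal” was; here the baseline is computed and compared automatically.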
Traditional Network Device Management vs DNA Center Network Mgmt
Traditional networks mainly differ in that they require manual work to resolve issues or pull statistics, whereas DNA Center does these things dynamically.
For example, if an IOS bug is found in a Traditional Network, this would require logging into every box separately via CLI, or using some kind of SFTP client to push the new IOS image to every separate device, whereas DNA Center will upgrade all devices dynamically.
Another point of comparison: the Underlay must have basic IP connectivity for either type of network. In a Traditional Network this is done manually by setting IP addressing and routing protocols via the CLI, whereas with DNA Center, a device that can reach the Internet can actually go out to its DNA Center cloud server to pull down a configuration for the specific network.
Another example – data analysis via SNMP to troubleshoot network issues. Normally a separate GUI for SNMP or Syslog info is reviewed for data pointing to an issue, which is then troubleshot by a Network Engineer, while DNA Center actually IS the SNMP Manager software itself and will dynamically resolve issues as it learns about them (via AI / Machine Learning comparison of baseline network performance against current performance).
I will end the article here on the note that DNA Center automates everything.
I could drone on for hours about everything DNA Center does dynamically, like making diagrams, discovering hosts, fixing issues, etc., but as can be seen, DNA Center automates and dynamically achieves nearly every task a network administrator manually performs today – the only exception I can think of is brewing the coffee in the break room of the Enterprise building.