NetFlow: Heavier ROUTE config on the top half, lightweight real-world config at the bottom, and some comparison of v5 to v9!

(Lab diagram: OSPF base topology)

I want to say up front that the top half of this post is the heavy-duty CCNP ROUTE detail needed for the exam. I’ll fill in the differences between v5 and v9 as best I can in the middle, and at the bottom is a real-world way for actual network admins to configure and view NetFlow on the fly, so if your job involves working on routers, seriously check it out!

NetFlow is a tool for viewing traffic statistics within the network: spotting “top talkers” consuming bandwidth, reviewing available bandwidth for future planning, and getting a general idea of your network’s bandwidth usage.

NetFlow consists of three components:

  • The “Monitor” – The ingress interface (the LAN interface) of your router with NetFlow configuration on it, which watches the traffic and caches it for a short period of time until it is exported to a network management server
  • The “Exporter” – Configuration on the router that exports the collected data on UDP port 9996 to a network management server that contains software to collect it
  • The “Collector” – An NMS running special software that captures the NetFlow data and puts it into graphical output that is more readable than the CLI for quick views of the info (a rough sketch of the receiving side follows this list)
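To make that Exporter-to-Collector handoff concrete, here is a minimal Python sketch of the receiving side: it binds to UDP 9996 (the same port we’ll configure on the exporter below) and decodes the fixed 20-byte NetFlow v9 packet header per RFC 3954. This is just an illustration, nowhere near a full collector, which would go on to parse the template and data FlowSets that follow the header:

import socket
import struct

# Minimal NetFlow v9 "collector" sketch: receive export packets on UDP 9996
# and decode the 20-byte header (RFC 3954). Illustration only, not an NMS.
COLLECTOR_PORT = 9996  # matches the "transport udp 9996" exporter setting

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", COLLECTOR_PORT))

while True:
    data, (src_ip, src_port) = sock.recvfrom(65535)
    if len(data) < 20:
        continue  # too short to be a v9 export packet
    # Header fields: version, record count, sysUptime (ms), UNIX seconds,
    # package sequence number, source ID -- all big-endian
    version, count, uptime, secs, seq, source_id = struct.unpack("!HHIIII", data[:20])
    print(f"export from {src_ip}:{src_port} - v{version}, {count} records, seq {seq}")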

I don’t actually have an NMS server set up, but I’ll show the configuration as if we did; I’ll point the export at R4 to see if there is any way to pull the export info off there:

R1(config)#flow exporter EXPORT1
R1(config-flow-exporter)#?
  default          Set a command to its defaults
  description      Provide a description for this Flow Exporter
  destination      Export destination configuration
  dscp             Optional DSCP
  exit             Exit from Flow Exporter configuration mode
  export-protocol  Export protocol version
  no               Negate a command or set its defaults
  option           Select an option for exporting
  output-features  Send export packets via IOS output feature path
  source           Originating interface
  template         Flow Exporter template configuration
  transport        Transport protocol
  ttl              Optional TTL or hop limit

R1(config-flow-exporter)#export-protocol ?
  netflow-v5  NetFlow Version 5
  netflow-v9  NetFlow Version 9

R1(config-flow-exporter)#export-protocol netflow-v9
R1(config-flow-exporter)#destination ?
  Hostname or A.B.C.D  Destination IPv4 address or hostname

R1(config-flow-exporter)#destination 172.12.34.4 ?
  vrf  Optional VRF label
  <cr>

R1(config-flow-exporter)#destination 172.12.34.4
R1(config-flow-exporter)#transport ?
  udp  UDP transport protocol

R1(config-flow-exporter)#transport udp ?
  <1-65535>  Port value

R1(config-flow-exporter)#transport udp 9996

R1(config-flow-exporter)#source fa0/1 ?
  <cr>

R1(config-flow-exporter)#source fa0/1
R1(config-flow-exporter)#exit
R1(config)#

So going from the top down, here are the four key settings inside the “flow exporter (name)” command, which I will highlight bullet-point style:

  • “export-protocol” – choose either NetFlow v5 or v9
  • “destination” – the IP address the NetFlow info is sent to
  • “transport” – the protocol (UDP is the only option) and port number
  • “source” – the interface you are collecting data from
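Pulled together without all the context-sensitive help, the entire exporter is just this handful of lines (same values as the walkthrough above):

R1(config)#flow exporter EXPORT1
R1(config-flow-exporter)#export-protocol netflow-v9
R1(config-flow-exporter)#destination 172.12.34.4
R1(config-flow-exporter)#transport udp 9996
R1(config-flow-exporter)#source fa0/1
R1(config-flow-exporter)#exit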

To verify your Netflow configuration:

R1(config)#do sh flow exporter EXPORT1
Flow Exporter EXPORT1:
  Description:              User defined
  Export protocol:          NetFlow Version 9
  Transport Configuration:
    Destination IP address: 172.12.34.4
    Source IP address:      172.12.15.1
    Source Interface:       FastEthernet0/1
    Transport Protocol:     UDP
    Destination Port:       9996
    Source Port:            56438
    DSCP:                   0x0
    TTL:                    255
    Output Features:        Not Used

R1(config)#

However, the configuration is not complete yet; we still need to define the traffic we want to capture and export to the collector:

Flow Monitor – Defining the traffic to capture

R1(config)#flow monitor MONITOR1
R1(config-flow-monitor)#?
  cache        Configure Flow Cache parameters
  default      Set a command to its defaults
  description  Provide a description for this Flow Monitor
  exit         Exit from Flow Monitor configuration mode
  exporter     Add an Exporter to use to export records
  no           Negate a command or set its defaults
  record       Specify Flow Record to use to define Cache
  statistics   Collect statistics

R1(config-flow-monitor)#record ?
  netflow           Traditional NetFlow collection schemes
  netflow-original  Traditional IPv4 input NetFlow with origin ASs

R1(config-flow-monitor)#record netflow ?
  ipv4  Traditional IPv4 NetFlow collection schemes
  ipv6  Traditional IPv6 NetFlow collection schemes

R1(config-flow-monitor)#record netflow ipv4 ?
  as                      AS aggregation schemes
  as-tos                  AS and TOS aggregation schemes
  bgp-nexthop-tos         BGP next-hop and TOS aggregation schemes
  destination-prefix      Destination Prefix aggregation schemes
  destination-prefix-tos  Destination Prefix and TOS aggregation schemes
  original-input          Traditional IPv4 input NetFlow with ASs
  original-output         Traditional IPv4 output NetFlow with ASs
  prefix                  Source and Destination Prefixes aggregation schemes
  prefix-port             Prefixes and Ports aggregation scheme
  prefix-tos              Prefixes and TOS aggregation schemes
  protocol-port           Protocol and Ports aggregation scheme
  protocol-port-tos       Protocol, Ports and TOS aggregation scheme
  source-prefix           Source AS and Prefix aggregation schemes
  source-prefix-tos       Source Prefix and TOS aggregation schemes

R1(config-flow-monitor)#record netflow ipv4 original-input ?
  peer  Traditional IPv4 input NetFlow with peer ASs
  <cr>

R1(config-flow-monitor)#record netflow ipv4 original-input

So looking at that, I think we could have gotten away with just using “record netflow-original”; however, I took the long way around. I also noticed the choice between IPv4 and IPv6 traffic collection, and I am assuming IPv6 support will be a difference between v5 and v9, but I have not verified that just yet!

Flow Monitor – Defining which “exporter” configuration to use to send the data

R1(config-flow-monitor)#exporter EXPORT1
R1(config-flow-monitor)#exit
R1(config)#

To verify the monitor as well (the command is a bit long-winded for my liking):

R1(config)#do sh flow monitor name MONITOR1
Flow Monitor MONITOR1:
  Description:       User defined
  Flow Record:       netflow ipv4 original-input
  Flow Exporter:     EXPORT1 (inactive)
  Cache:
    Type:              normal
    Status:            not allocated
    Size:              4096 entries / 0 bytes
    Inactive Timeout:  15 secs
    Active Timeout:    1800 secs
    Update Timeout:    1800 secs

R1(config)#

Now to put all this configuration together, the last step is to apply the Monitor to the interface being, well… monitored (I would assume this is also why the exporter showed as “inactive” above):

R1(config)#int fa0/1
R1(config-if)#ip flow monitor MONITOR1 input
R1(config-if)#exit
R1(config)#

Now it should be actively capturing traffic, so I’ll generate some from R5 with a continuous ping and see what we get in the CLI output of the cache:

R1(config)#do sh flow monitor name MONITOR1 cache
  Cache type:                               Normal
  Cache size:                                 4096
  Current entries:                               2
  High Watermark:                                2

  Flows added:                                   2
  Flows aged:                                    0
    - Active timeout      (  1800 secs)          0
    - Inactive timeout    (    15 secs)          0
    - Event aged                                 0
    - Watermark aged                             0
    - Emergency aged                             0

IPV4 SOURCE ADDRESS:       172.12.15.5
IPV4 DESTINATION ADDRESS:  224.0.0.5
TRNS SOURCE PORT:          0
TRNS DESTINATION PORT:     0
INTERFACE INPUT:           Fa0/1
FLOW SAMPLER ID:           0
IP TOS:                    0xC0
IP PROTOCOL:               89
ip source as:              0
ip destination as:         0
ipv4 next hop address:     0.0.0.0
ipv4 source mask:          /24
ipv4 destination mask:     /0
tcp flags:                 0x00
interface output:          Null
counter bytes:             1760
counter packets:           22
timestamp first:           04:07:03.643
timestamp last:            04:10:33.695

IPV4 SOURCE ADDRESS:       172.12.15.5
IPV4 DESTINATION ADDRESS:  172.12.23.100
TRNS SOURCE PORT:          0
TRNS DESTINATION PORT:     2048
INTERFACE INPUT:           Fa0/1
FLOW SAMPLER ID:           0
IP TOS:                    0x00
IP PROTOCOL:               1
ip source as:              0
ip destination as:         0
ipv4 next hop address:     172.12.123.3
ipv4 source mask:          /24
ipv4 destination mask:     /24
tcp flags:                 0x00
interface output:          Se0/0/0
counter bytes:             87900
counter packets:           879
timestamp first:           04:09:34.515
timestamp last:            04:10:36.799

R1(config)#

In this we can see both R5’s multicast traffic for OSPF to the “all OSPF routers” address 224.0.0.5 (IP protocol 89), and our ping flow (IP protocol 1, ICMP) with its packet count climbing fast, which is what makes this a nice feature to check directly on the CLI to identify top talkers in your network.

Note that in real traffic the source or destination addresses may be publicly routable addresses being NAT’d, and you will often see a well-known port like 80 as the source port paired with a random high-numbered destination port, which the host that sent the request uses to get the reply back to the LAN device.

Differences between v5 and v9

  • When configuring NetFlow on an interface, v5 only uses “ip flow ingress” on both the ingress and egress interfaces, whereas v9 uses “ip flow ingress” on the ingress interface and “ip flow egress” on the egress interface
  • A major difference (or not) between v5 and v9 is in the flow export itself: you can configure the v5 settings while using v9 as the exporter version (the traditional exporter syntax is sketched below)
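For comparison, the traditional (pre-Flexible-NetFlow) exporter syntax referenced in that last bullet uses the older “ip flow-export” commands in global config; this is a rough sketch from memory, so double-check it against your IOS version:

R1(config)#ip flow-export version 5
R1(config)#ip flow-export destination 172.12.34.4 9996
R1(config)#ip flow-export source fa0/1

Swap “version 5” for “version 9” and the same destination / source settings carry straight over.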

 

Configuring a quicker, lighter version of Netflow to see on the CLI

This type of configuration is more real-world for me: it is quickly configured and gives you a fast overview of network traffic, rather than exporting flows to be made into a wonderful chart on some Network Mgmt Server somewhere.

The configuration goes like this:

R1(config)#int fa0/1
R1(config-if)#ip flow ingress
R1(config-if)#ip route-cache flow
R1(config-if)#int s0/0/0
R1(config-if)#ip flow ingress
R1(config-if)#exit

Then you can see immediate results with this show command:

R1#sh ip cache flow
IP packet size distribution (9741 total packets):
   1-32   64   96  128  160  192  224  256  288  320  352  384  416  448  480
   .000 .000 .005 .994 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000

    512  544  576 1024 1536 2048 2560 3072 3584 4096 4608
   .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000

IP Flow Switching Cache, 278544 bytes
  4 active, 4092 inactive, 23 added
  1308 ager polls, 0 flow alloc failures
  Active flows timeout in 30 minutes
  Inactive flows timeout in 15 seconds
IP Sub Flow Cache, 34056 bytes
  0 active, 1024 inactive, 0 added, 0 added to flow
  0 alloc failures, 0 force free
  1 chunk, 1 chunk added
  last clearing of statistics never
Protocol         Total    Flows   Packets Bytes  Packets Active(Sec) Idle(Sec)
--------         Flows     /Sec     /Flow  /Pkt     /Sec     /Flow     /Flow
IP-other            19      0.0         1    80      0.0       0.0      15.2
Total:              19      0.0         1    80      0.0       0.0      15.2

SrcIf         SrcIPaddress    DstIf         DstIPaddress    Pr SrcP DstP  Pkts
Fa0/1         172.12.15.5     Se0/0/0       172.12.23.100   01 0000 0800  5277
Se0/0/0       172.12.23.100   Fa0/1         172.12.15.5     01 0000 0000  4410
Se0/0/0       172.12.123.2    Local         172.12.123.1    59 0000 0000     1
Fa0/1         172.12.15.5     Null          224.0.0.5       59 0000 0000    35
R1#

When you have a customer yelling at you about why their LAN (or the internet in general) is going so dag nab slow, running these commands can immediately tell you who the “top talkers” are in the network.

I find this information so important it bears repeating:

  • “ip flow ingress”
  • “ip route-cache flow”
  • “sh ip cache flow”

And that’s it: you have an immediate view of every node transmitting data through this router, can spot one of them blasting out packets (an employee watching HD movies, or a server doing a middle-of-the-day backup), and instantly know where your problem is.
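If you find yourself eyeballing that flow table a lot, it’s also easy to sort offline. Here’s a quick Python sketch (assuming the 8-column layout shown above; save the “sh ip cache flow” output to a file and pipe it in) that ranks flows by packet count:

import sys

# Read saved "sh ip cache flow" output on stdin and print flows sorted
# by packet count, highest first -- i.e., the "top talkers".
flows = []
for line in sys.stdin:
    fields = line.split()
    # Flow rows have 8 columns ending in an integer packet count; this
    # skips the header row and all the summary text above the table
    if len(fields) == 8 and fields[7].isdigit():
        flows.append((int(fields[7]), fields[1], fields[3]))

for pkts, src, dst in sorted(flows, reverse=True):
    print(f"{pkts:>8} pkts  {src} -> {dst}")

Run it as, say, “python3 top_talkers.py < flows.txt” (filename is just an example) and the worst offenders float to the top.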

It is nice if you have an enterprise network with a syslog / mgmt server to collect network information and make pretty charts, but for real network admins / engineers who need information RIGHT NOW, the bottom configuration works fantastically for pinpointing “network slowness” issues on the fly!

That is all the time I can put in tonight; I think that covers NetFlow pretty sufficiently. I may post one more thing regarding SNMP to commit it to memory, and then I will be doing rapid BGP review and labbing that I don’t believe I will be posting at all.

So this wraps it up for today. I wanted to squeeze SNMP in, but that will have to wait for tomorrow, then it’s a BGP blitz til exam day, about 5 days from now 🙂
