I will primarily be looking at this material through the lens of R3 to R4. This post is very heavy on MPLS verification commands, debug output, and WireShark capture screen snips, so it will not be for anyone looking for a light read.
I was planning on fitting LDP and LSP Selection into this article, but having gone off on a tangent playing with LDP and drawing comparisons to BGP / CEF, I will leave LSP review for another post at some point.
This is more than even I will probably ever need to know about LDP. I trimmed out some information, but I’ll mostly leave this post as one giant blob of information for later reference if needed – Enjoy!
A couple of quick terms I may never happen across again, so I want to jot them down here for future reference:
- Frame-Mode MPLS – MPLS running over connected Ethernet segments; this indicates that MPLS Labels are being distributed via Layer 2 “Frames” over the Ethernet segment
- Per-Platform Label Space – This simply means the same label for an IP destination is advertised out all interfaces (to all neighbors) as the same Label #, whereas ATM or Frame-Relay may use “Per-Interface Label Space”, where different interfaces might use different MPLS labels to refer to the same IP Destination network (a quick way to spot this is shown below)
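A quick way to spot Per-Platform Label Space in the lab: the LDP Identifier ends in “:0” when the platform-wide label space is in use (a non-zero suffix would indicate a per-interface label space). A rough sketch of what this looks like from R3 – note I am assuming Fa1/0 is the R2-facing interface here, as only Fa2/0 toward R4 is confirmed in the outputs below:

R3-P#sh mpls ldp discovery
 Local LDP Identifier:
    3.3.3.3:0
    Discovery Sources:
    Interfaces:
        FastEthernet1/0 (ldp): xmit/recv
            LDP Id: 2.2.2.2:0
        FastEthernet2/0 (ldp): xmit/recv
            LDP Id: 4.4.4.4:0
R3-P#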
Reviewing LDP Label Bindings via Conditional Debugging as MPLS starts
Debugging demonstrates the Label Binding / Exchange process very well when LDP starts and LSRs are building an LDP Session; however, I found WireShark to be a much easier way to show how the LDP Session is built.
That being said, I’ll first take a look at the initial LSR / LDP Label Binding and the initial Binding Exchanges between LSRs with a Conditional Debug, then review the actual building of the LDP Session below via WireShark for clarity’s sake.
Create ACL for 5.5.5.5 / Start Conditional Debug
R3-P(config)#access-list 10 permit 5.5.5.5
R3-P(config)#do debug mpls ?
access-list MPLS ACCESS
events MPLS events
ip Debug MPLS IP information
l2transport Any transport over MPLS debug
ldp Label Distribution Protocol
lspv LSP Verification debug
mldp Debug MLDP routing protocol parameters
netflow MPLS Egress NetFlow Accounting
packet mpls packet debug
static MPLS Static labels
traffic-eng MPLS Traffic Engineering debug
R3-P(config)#do debug mpls ldp ?
address-to-session Address to LDP session mapping services
advertisements LDP label and address advertisements
backoff LDP session establishment backoff
bindings            Label bindings and other Label Information Base (LIB) changes
capabilities Capabilities debug
cli LDP Configuration events
dev MPLS development interested events
errors ALL LDP errors
graceful-restart LDP Graceful Restart events
iccp InterChassis Control Protocol debug
igp IGP-related interactions
messages LDP messages
peer LDP peer
prev-label Previous label protocol changes
refcount LDP instance reference count tracking
session LDP session
targeted-neighbors LDP targeted neighbor events
transport LDP transport and discovery
R3-P(config)#do debug mpls ldp bindings ?
filter Local Label filtering changes
peer-acl Peer access list filter
prefix-acl Prefix access list filter
<cr>
R3-P(config)#do debug mpls ldp bindings prefix-acl 10
LDP Label Information Base (LIB) changes debugging is on for prefix ACL 10
I used a “Conditional Debug” because the output for even a single network prefix is huge once I disable and re-enable MPLS globally; however, I also wanted to demonstrate above all the different debug options there are for MPLS, and that we are interested in the LDP / Bindings output to see how all of this works.
I’ve disabled MPLS globally with “no mpls ip” but will spare you that output here, as there will be enough output as it is, and will now re-enable MPLS with the debug running:
R3-P(config)#mpls ip
R3-P(config)#
*Nov 24 17:51:23.171: ldpx_fwdg: invoke iprm to fib walk for if None
*Nov 24 17:51:23.215: ldpx_fwdg: path change upcall event from fwdg
*Nov 24 17:51:23.223: LIB: announced out label unknown for 1.1.1.1/32 (via 10.23.0.2)
*Nov 24 17:51:23.235: ldpx_fwdg: path change upcall event from fwdg
*Nov 24 17:51:23.243: LIB: announced out label unknown for 2.2.2.2/32 (via 10.23.0.2)
*Nov 24 17:51:23.255: ldpx_fwdg: path change upcall event from fwdg
*Nov 24 17:51:23.259: LIB: announced out label unknown for 3.3.3.3/32 (via 0.0.0.0)
*Nov 24 17:51:23.271: ldpx_fwdg: path change upcall event from fwdg
*Nov 24 17:51:23.279: LIB: announced out label unknown for 4.4.4.4/32 (via 10.34.0.4)
*Nov 24 17:51:23.291: LIB: prefix recurs walk start: 5.5.5.5/32, tableid: 0; tib entry not found
*Nov 24 17:51:23.295: ldpx_fwdg: path change upcall event from fwdg
*Nov 24 17:51:23.295: LIB: get path labels for route 5.5.5.5/32
*Nov 24 17:51:23.295: LIB: get path labels: 5.5.5.5/32(0), nh ctx_id: 0, Fa2/0, nh 10.34.0.4
*Nov 24 17:51:23.299: LDP LLAF: 5.5.5.5 accepted, absence of filtering config
*Nov 24 17:51:23.303: lcon: tibent(5.5.5.5/32): created; find route labels request
*Nov 24 17:51:23.303: tib: Assign 5.5.5.5/32 nh 10.34.0.4 real label
*Nov 24 17:51:23.307: lcon: tibent(5.5.5.5/32): label 302 (#10) assigned
*Nov 24 17:51:23.311: LIB: add a route info for 5.5.5.5/32(0, 10.34.0.4, Fa2/0), remote label Unknown
*Nov 24 17:51:23.311: LIB: update remote label for route info 5.5.5.5/32(0, 10.34.0.4, Fa2/0) remote label Unknown
R3-P(config)#
*Nov 24 17:51:23.315: lcon: announce labels for: 5.5.5.5/32; nh 10.34.0.4, Fa2/0, inlabel 302, outlabel unknown (from 0.0.0.0:0), fwdg upcall or intf event
*Nov 24 17:51:23.315: LIB: announced out label unknown for 5.5.5.5/32 (via 10.34.0.4)
*Nov 24 17:51:23.331: ldpx_fwdg: path change upcall event from fwdg
*Nov 24 17:51:23.335: LIB: announced out label unknown for 10.12.0.0/24 (via 10.23.0.2)
*Nov 24 17:51:23.347: ldpx_fwdg: path change upcall event from fwdg
*Nov 24 17:51:23.351: LIB: announced out label unknown for 10.23.0.0/24 (via 0.0.0.0)
*Nov 24 17:51:23.379: ldpx_fwdg: path change upcall event from fwdg
*Nov 24 17:51:23.383: LIB: announced out label unknown for 10.34.0.0/24 (via 0.0.0.0)
*Nov 24 17:51:23.451: ldpx_fwdg: path change upcall event from fwdg
*Nov 24 17:51:23.455: LIB: announced out label unknown for 10.45.0.0/24 (via 10.34.0.4)
R3-P(config)#
*Nov 24 17:51:23.787: lcon: (default) Assign peer id; 2.2.2.2:0: id 0
*Nov 24 17:51:23.791: %LDP-5-NBRCHG: LDP Neighbor 2.2.2.2:0 (1) is UP
*Nov 24 17:51:23.903: lcon: (default) Assign peer id; 4.4.4.4:0: id 1
*Nov 24 17:51:23.911: %LDP-5-NBRCHG: LDP Neighbor 4.4.4.4:0 (2) is UP
*Nov 24 17:51:23.959: lcon: 4.4.4.4:0: 10.45.0.4 added to addr<->ldp ident map
*Nov 24 17:51:23.963: lcon: 4.4.4.4:0: 4.4.4.4 added to addr<->ldp ident map
*Nov 24 17:51:23.967: lcon: 4.4.4.4:0: 10.34.0.4 added to addr<->ldp ident map
*Nov 24 17:51:23.971: LIB: 5.5.5.5/32:: learn binding 402 from 4.4.4.4:0
*Nov 24 17:51:23.971: LIB: a new binding 402 to be added
*Nov 24 17:51:23.975: lcon: tibent(5.5.5.5/32): label 402 from 4.4.4.4:0 added
*Nov 24 17:51:23.979: LIB: next hop for route 5.5.5.5/32(0, 10.34.0.4, Fa2/0) is mapped to peer 4.4.4.4:0, program label
*Nov 24 17:51:23.979: LIB: 5.5.5.5/32: LIB entry added to remote label programming list
*Nov 24 17:51:23.983: ldpx_fwdg: announced path label info for 10.45.0.0/24
R3-P(config)#
*Nov 24 17:51:23.983: ldpx_fwdg: path change upcall event from fwdg
*Nov 24 17:51:23.987: LIB: announced out label 3 for 10.45.0.0/24 (via 10.34.0.4)
*Nov 24 17:51:23.991: ldpx_fwdg: announced path label info for 4.4.4.4/32
*Nov 24 17:51:23.995: ldpx_fwdg: path change upcall event from fwdg
*Nov 24 17:51:23.995: LIB: announced out label 3 for 4.4.4.4/32 (via 10.34.0.4)
*Nov 24 17:51:23.999: lc_handle_rlbl_prgm handle 5.5.5.5/32
*Nov 24 17:51:24.003: ldpx_fwdg: announced path label info for 5.5.5.5/32
*Nov 24 17:51:24.007: LIB: prefix recurs walk start: 5.5.5.5/32, tableid: 0
*Nov 24 17:51:24.007: ldpx_fwdg: path change upcall event from fwdg
*Nov 24 17:51:24.007: LIB: get path labels for route 5.5.5.5/32
*Nov 24 17:51:24.011: LIB: get path labels: 5.5.5.5/32(0), nh ctx_id: 0, Fa2/0, nh 10.34.0.4
*Nov 24 17:51:24.011: LDP LLAF: 5.5.5.5 accepted, absence of filtering config
*Nov 24 17:51:24.015: tib: Assign 5.5.5.5/32 nh 10.34.0.4 real label
*Nov 24 17:51:24.015: LIB: found route info for 5.5.5.5/32(0, 10.34.0.4, Fa2/0), remote label 402 from 4.4.4.4:0
R3-P(config)#
*Nov 24 17:51:24.019: lcon: announce labels for: 5.5.5.5/32; nh 10.34.0.4, Fa2/0, inlabel 302, outlabel 402 (from 4.4.4.4:0), fwdg upcall or intf event
*Nov 24 17:51:24.019: LIB: announced out label 402 for 5.5.5.5/32 (via 10.34.0.4)
*Nov 24 17:51:24.087: lcon: 2.2.2.2:0: 10.12.0.2 added to addr<->ldp ident map
*Nov 24 17:51:24.087: lcon: 2.2.2.2:0: 2.2.2.2 added to addr<->ldp ident map
*Nov 24 17:51:24.091: lcon: 2.2.2.2:0: 10.23.0.2 added to addr<->ldp ident map
*Nov 24 17:51:24.095: LIB: 5.5.5.5/32:: learn binding 201 from 2.2.2.2:0
*Nov 24 17:51:24.099: LIB: a new binding 201 to be added
*Nov 24 17:51:24.099: lcon: tibent(5.5.5.5/32): label 201 from 2.2.2.2:0 added
*Nov 24 17:51:24.103: LIB: next hop for route 5.5.5.5/32(0, 10.34.0.4, Fa2/0) is not mapped to peer 2.2.2.2:0
*Nov 24 17:51:24.103: LIB: skip label announcement for 5.5.5.5/32
R3-P(config)#
*Nov 24 17:51:24.107: ldpx_fwdg: announced path label info for 2.2.2.2/32
*Nov 24 17:51:24.111: ldpx_fwdg: path change upcall event from fwdg
*Nov 24 17:51:24.111: LIB: announced out label 3 for 2.2.2.2/32 (via 10.23.0.2)
*Nov 24 17:51:24.115: ldpx_fwdg: announced path label info for 10.12.0.0/24
*Nov 24 17:51:24.119: ldpx_fwdg: path change upcall event from fwdg
*Nov 24 17:51:24.123: LIB: announced out label 3 for 10.12.0.0/24 (via 10.23.0.2)
*Nov 24 17:51:24.127: ldpx_fwdg: announced path label info for 1.1.1.1/32
*Nov 24 17:51:24.127: ldpx_fwdg: path change upcall event from fwdg
*Nov 24 17:51:24.131: LIB: announced out label 200 for 1.1.1.1/32 (via 10.23.0.2)
R3-P(config)#do u all
All possible debugging has been turned off
R3-P(config)#
There is a whole lot in there that is honestly still kind of a mystery to me, but the three sections I highlighted in blue are the ones of interest / that I can tell are relevant:
- R3 creates a Local Label Binding as LDP is starting up, calling out to its neighbors for the Remote Label Bindings so R3 can form its LIB
- LDP Session is ‘Up’ to R4 via its Loopback / Transport Address of 4.4.4.4, the LSRs exchange Label Mappings, and R3 now has R4’s Remote Binding for 5.5.5.5/32
- LDP Session is ‘Up’ to R2 via its Loopback / Transport Address of 2.2.2.2, the LSRs exchange Label Mappings, and R3 now has R2’s Remote Binding for 5.5.5.5/32
The debug info can be matched up with this verification command for 5.5.5.5:
R3-P#sh mpls ldp bindings 5.5.5.5 32
lib entry: 5.5.5.5/32, rev 10
local binding: label: 302
remote binding: lsr: 4.4.4.4:0, label: 402
remote binding: lsr: 2.2.2.2:0, label: 201
R3-P#
When an LSR sends 5.5.5.5/32 traffic to R3, it will swap the label to “302” before sending it, and R3 will use Label “402” for 5.5.5.5/32 traffic toward R4 (and would use “201” if its best path instead pointed at R2).
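That same mapping can be pulled straight from the LFIB for a single prefix as well; a sketch of the per-prefix lookup on R3, using the same numbers that appear in the forwarding-table detail snip further down:

R3-P#sh mpls forwarding-table 5.5.5.5
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop
Label      Label      or Tunnel Id     Switched      interface
302        402        5.5.5.5/32       3519          Fa2/0      10.34.0.4
R3-P#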
The output for all Label Mappings is absolutely massive, so I will leave the debug review at 5.5.5.5; however, R3’s full picture of the Topology via its Bindings is seen here:
R3-P#sh mpls ldp bindings
lib entry: 1.1.1.1/32, rev 2
local binding: label: 304
remote binding: lsr: 4.4.4.4:0, label: 404
remote binding: lsr: 2.2.2.2:0, label: 200
lib entry: 2.2.2.2/32, rev 4
local binding: label: 303
remote binding: lsr: 4.4.4.4:0, label: 403
remote binding: lsr: 2.2.2.2:0, label: imp-null
lib entry: 3.3.3.3/32, rev 6
local binding: label: imp-null
remote binding: lsr: 4.4.4.4:0, label: 400
remote binding: lsr: 2.2.2.2:0, label: 203
lib entry: 4.4.4.4/32, rev 8
local binding: label: 300
remote binding: lsr: 4.4.4.4:0, label: imp-null
remote binding: lsr: 2.2.2.2:0, label: 202
lib entry: 5.5.5.5/32, rev 10
local binding: label: 302
remote binding: lsr: 4.4.4.4:0, label: 402
remote binding: lsr: 2.2.2.2:0, label: 201
lib entry: 10.12.0.0/24, rev 12
local binding: label: 305
remote binding: lsr: 4.4.4.4:0, label: 405
remote binding: lsr: 2.2.2.2:0, label: imp-null
lib entry: 10.23.0.0/24, rev 14
local binding: label: imp-null
remote binding: lsr: 4.4.4.4:0, label: 401
remote binding: lsr: 2.2.2.2:0, label: imp-null
lib entry: 10.34.0.0/24, rev 16
local binding: label: imp-null
remote binding: lsr: 4.4.4.4:0, label: imp-null
remote binding: lsr: 2.2.2.2:0, label: 205
lib entry: 10.45.0.0/24, rev 18
local binding: label: 301
remote binding: lsr: 4.4.4.4:0, label: imp-null
remote binding: lsr: 2.2.2.2:0, label: 204
R3-P#
I found it extremely odd that there is an alternating “rev #” from 2–18 in this binding table, and I am having trouble pinning down exactly what it means (it appears to be an internal revision number that increments each time the LIB entry is updated); however, I did find a great verification command for the nitty-gritty details of label bindings:
“sh mpls forwarding-table detail”
The formatting of the copy / paste was so ugly that I figured a quick screen snip would be much easier, and it demonstrates all the information one could ever want to know about MPLS forwarding decisions, such as:
- Local / Outgoing Label
- The IP network prefix associated with it
- Bytes Label Switched since this session has started
- MRU (Maximum Receive Unit) rather than MTU (Maximum Transmission Unit) – see the quick note after this list
- Label Stack – this goes back to the BoS (Bottom of Stack) bit, and will show whether multiple labels are “stacked” on the MPLS packet (and in what order)
- Shows Layer 2 DST and SRC MAC Address exactly like IP CEF!
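On that MRU point: each MPLS label adds 4 bytes on top of the IP packet, so a full-size 1500-byte packet with a 2-label stack is 1508 bytes on the wire. If the segment can carry the extra bytes, the labeled MTU can be raised per interface – the command below is standard IOS, but the value of 1508 is just my assumed 2-label stack:

R3-P(config)#interface Fa2/0
R3-P(config-if)#mpls mtu 1508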
Given how similar this looked to the “sh adj detail” output for IP CEF (my post on that exact thing can be found here), I grabbed a quick screen snip of that info on R3 for comparison, as the copy / paste output again is just gross:
I specifically wanted to compare these two outputs as they relate to 5.5.5.5/32 – or really to Fa2/0, as that is the interface pointing at R4, which connects to R5:
MPLS (Layer 2.5) info for 5.5.5.5/32
302 402 5.5.5.5/32 3519 Fa2/0 10.34.0.4
MAC/Encaps=14/18, MRU=1500, Label Stack{402}
0000444444440000333333338847 00192000
No output feature configured
IP CEF (Layer 3 / Layer 2) info for R4
IP FastEthernet2/0
10.34.0.4(14)
16 packets, 1016 bytes
Protocol Interface Address
epoch 0
sourced in sev-epoch 0
Encap length 14
0000444444440000333333330800
L2 destination address byte offset 0
L2 destination address byte length 6
Link-type after encap: ip
ARP
MPLS – 0000444444440000333333338847
IP CEF – 0000444444440000333333330800
I changed each router’s interface MAC address to match its Router #, so R3 here shows as 0000.3333.3333 and R4 as 0000.4444.4444.
So from the view of R3, the format is: (DST-MAC)(SRC-MAC)(EthType)
The EtherType field sits at the front end of the frame’s payload and indicates which protocol the frame is carrying: 0800 = IPv4, whereas 8847 = MPLS Unicast.
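Breaking the MPLS encap string from above into its fields makes the comparison obvious:

0000.4444.4444   0000.3333.3333   8847
(DST MAC = R4)   (SRC MAC = R3)   (EtherType = MPLS Unicast)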
A full explanation (a very good read) can be found on Wikipedia here, which is why you should not ignore the donation emails they send – donate $5 so they stop asking me for the money 🙂
However, that is way off the rails now – back on track to reviewing LDP Session creation!
LDP Session creation explained using WireShark screen snips
Given WireShark is readily available in EVE-NG, I used it to capture that same “no mpls ip” / “mpls ip” sequence on the Fa2/0 Interface (you must select an Interface in WireShark, if you are not familiar with it), so this captures the LDP Session traffic specifically between R3 and R4.
The bright red Line 12 shows the MPLS disable command dropping the LDP Session and informing R4 that it is toast.
MPLS is then re-enabled, and the setup of the LDP Session initially shows no signs of LDP in the WireShark capture – only IP traffic talking over TCP Port 646 (the LDP port), starting with the SYN being sent on Line 14 of the packet capture:
However, once the TCP Handshake concludes and LDP Initialization begins on lines 18 and 19, WireShark now shows LDP as part of the traffic:
It doesn’t contain much in terms of raw “Label” information, but within the Initialization segment there is more detail than you may ever want for LDP.
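If you are following along in a capture of your own, a couple of standard WireShark display filters will isolate all of this LDP traffic (both the UDP Hellos and the TCP Session):

ldp
tcp.port == 646 or udp.port == 646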
One thing to note here is that the LDP Hello messages are sent out the physical interfaces of an LSR to the “All Routers” Multicast Address of 224.0.0.2 (as UDP traffic); however, inside the Hello payload it does contain the Loopback IP Address used in the LDP Session establishment, as shown here:
So these Hellos are not part of the LDP Session between R3 and R4 – they carry both Source and Destination ports of UDP 646 – this is just the LSR trying to make friends with any other LSR off its MPLS-enabled interfaces.
If you drill deeper into this packet, you will actually see the Loopback being advertised as a “Transport Address” to build an LDP Session, which is why it is seen being used for the TCP 3-way Handshake:
This is essentially the same as the OSPF RID concept for OSPF Neighbor Adjacencies, where the RID can be hard-coded into OSPF, otherwise the highest Loopback IP address is used as the RID – and the highest RID wins the DR / BDR election on a segment (when priorities tie).
The same idea applies to LDP: the neighbor with the higher “Transport Address” becomes the “Active” LSR for the LDP Session, while the LSR with the lower Transport Address becomes the “Passive” LSR.
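And just like OSPF, you can hard-code this rather than letting the router pick: the LDP Router ID (which doubles as the default Transport Address) can be pinned to an interface, or the Transport Address can be set per interface. Both commands below are real IOS, though Loopback0 is my assumption of where 3.3.3.3 lives – and be warned, “force” will bounce existing LDP Sessions:

R3-P(config)#mpls ldp router-id Loopback0 force
R3-P(config)#interface Fa2/0
R3-P(config-if)#mpls ldp discovery transport-address 3.3.3.3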
What are Active and Passive LDP Routers? What do they do, and why do I care?
The Active LSR in an LDP Neighborship is the router with the higher Transport Address; the “Active” part of this relationship means it initiates the Session, and thus it sends to TCP Port 646 as its Destination for the LDP Session while sourcing the connection from a randomly generated high-numbered TCP Port.
This is very similar to a review I did of BGP creating a TCP Session in my TSHOOT notes:
LDP does this exact same thing. The above graphic was intended to demonstrate how ACLs might block TCP Sessions from forming with BGP, but the same could be done to prevent the formation of LDP Neighborships between LSRs (a quick sketch of this follows below).
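Not that you would want to do this, but for illustration, a hedged sketch of an ACL that would break LDP in the same way (standard IOS ACL syntax – applying it inbound on Fa2/0 is just for the example):

R3-P(config)#access-list 100 deny tcp any any eq 646
R3-P(config)#access-list 100 deny udp any any eq 646
R3-P(config)#access-list 100 permit ip any any
R3-P(config)#interface Fa2/0
R3-P(config-if)#ip access-group 100 in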
However, that is not really the point of that graphic – it is only to demonstrate that the LSR with the highest Loopback (or hard-coded Transport Address) becomes the “Active” router, and the “Passive” router receives LDP Session traffic on TCP port 646.
So in terms of this Topology, the LDP Sessions of our 3 Core Routers would look like this:
Point being, the Active LSR is the one with the higher # Transport Address, and it therefore uses the higher # (ephemeral) port in maintaining the LDP Session. Keepalives are also used to maintain the LDP Session, and will bring the session down if they cease being received between neighbors, but I won’t go into further detail about them beyond that.
Takeaways from how LDP Sessions are built and maintained
A few things that may have gotten lost in all of that probably unnecessary detail:
- LDP Sessions are initialized and maintained via TCP Port 646 (plus a random high # TCP Port on the “Active” LSR’s side)
- LDP Hellos are UDP traffic (port 646) Multicast to the All Routers address 224.0.0.2
- Active routers initiate the session and source it from a random high Port #, while Passive routers receive the session on TCP Port 646 (see the verification sketch just below)
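All three of those takeaways can be verified in one shot with “sh mpls ldp neighbor”. A sketch of what this looks like on R3 for the R4 session – R4 (4.4.4.4, the higher Transport Address) is Active, so the peer side shows a random high port while R3’s side shows 646; the exact port #, message counts, and uptime here are illustrative, though the bound addresses match the debug output earlier:

R3-P#sh mpls ldp neighbor 4.4.4.4
    Peer LDP Ident: 4.4.4.4:0; Local LDP Ident 3.3.3.3:0
        TCP connection: 4.4.4.4.28012 - 3.3.3.3.646
        State: Oper; Msgs sent/rcvd: 24/25; Downstream
        Up time: 00:10:41
        LDP discovery sources:
          FastEthernet2/0, Src IP addr: 10.34.0.4
        Addresses bound to peer LDP Ident:
          10.45.0.4       4.4.4.4         10.34.0.4
R3-P#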
A quick note on what will keep LDP neighbors from forming
If the “Transport Address” of an LSR is not reachable via the interface the LDP Hello was received on, an LDP Adjacency will not form.
For example, if a Hello came into R3 on Fa2/0 with a Transport Address of 4.4.4.4, but I had a static route (or some other kind of routing goof-up) pointing to 4.4.4.4 out Interface Fa1/0, then that LDP Adjacency would not form.
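The tell-tale symptom (assuming IOS still flags it the way I remember) shows up in “sh mpls ldp discovery” – the neighbor is discovered via Hellos, but its Transport Address is marked unreachable, so the Session never comes up:

R3-P#sh mpls ldp discovery
 Local LDP Identifier:
    3.3.3.3:0
    Discovery Sources:
    Interfaces:
        FastEthernet2/0 (ldp): xmit/recv
            LDP Id: 4.4.4.4:0; no route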
That is probably more of a CCNP Service Provider type question / issue, but since I am wading into deep waters here with MPLS LDP, I thought it should be mentioned.
That will do it quite well for LDP Sessions and Label Binding Exchanges!
Please forgive any typos or paragraphs that don’t make sense; I did some editing on the fly as I thought of better ways to explain concepts, and I kind of went off the rails halfway through, drawing comparisons to other Protocols to help myself understand LDP one step further than I will probably ever need to know it.
Hope that all makes some kind of sense, and if not, at least it was fun to have Wireshark mixed into the lab for once – I hope to use it more going forward!
(I will work on getting the actual .pcap attach or linked in this post in the near future)