Junos Routing Tables – Part 1

Along the journey of learning Junos, you will run across different routing tables that are used for different purposes. Some tables make sense right away, but others take some digesting. I hope I can answer some questions about these tables that you might already have, or that might come up later as you progress in your studies.

I decided to split this topic into two different articles, as there is a lot to talk about:

  • In this first article, we will cover inet.0, inet6.0, inet.1, inet.2, and inet.4.
  • In the next article we will cover inet.3, mpls.0, bgp.l3vpn.0, and vrf-name.inet.0.

We are going to explore the different routing tables by gradually adding them to our lab routers, which start with no IP or protocol configuration.

Depending on the version of Junos that you are running, when you enter the show route command on a device with no configuration, the result might be empty output, as shown:

Does that mean there is no routing table? Yes, and no.

If we add the all knob at the end of the show route command, we find that there actually are some routing tables, with local and direct routes.

The name of these tables begins with __juniper_private, to indicate that they are internal Junos routing tables used only for internal operations. In fact, none of the interfaces and IP addresses that you see here are part of the router’s configuration, and there is a good chance you will not have to worry about them ever.

We can also enter the show route forwarding-table command to check the contents of the forwarding table, the table that is used to make forwarding decisions and that is derived from the routing table.

We can see here that, without any additional knobs, the command output shows forwarding information associated with different routing tables, including one named default.inet.

The second part of this name (inet) comes from BSD terminology, where inet is the term for the Internet protocol family comprising IP, ICMP, TCP, and UDP. You can think of inet as just IPv4.

The entries currently present in the default.inet forwarding table were added by default and do not provide any information that could be used to forward traffic.

The first entry in the example (default) is used for traffic that does not match any other route in the forwarding table. The type rjct indicates that traffic matching this entry would be rejected (dropped, with an ICMP unreachable message sent back).

The second entry (0.0.0.0/32) says that traffic with destination address exactly 0.0.0.0 will be discarded (silently dropped).

There are also a couple of multicast-related entries: 224/4 (the entire multicast address range) and 224.0.0.1/32 (the all-nodes multicast address), and an entry that simply indicates that traffic with destination address exactly 255.255.255.255 is broadcast.

Thus, there is nothing really useful here for forwarding transit traffic. The information that is there by default does not even give the router the ability to accept traffic through its interfaces, even if those interfaces are up, because we have not enabled IP traffic processing on those interfaces yet.

We are going to start adding some configuration to our router and see how different routing tables are created and populated with routes.

inet.0

inet.0 is the default IPv4 routing table and is created when we configure IPv4 addresses on the router interfaces.

Let’s configure the following addresses:
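As a sketch, the configuration could look like the following (the ge-0/0/0.0 address matches the routes we will see in inet.0; the second interface and the loopback address are assumptions for illustration):

```
set interfaces ge-0/0/0 unit 0 family inet address 10.10.4.1/24
set interfaces ge-0/0/1 unit 0 family inet address 10.10.5.1/24
set interfaces lo0 unit 0 family inet address 192.168.1.1/32
```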

And then try the show route command again:

We didn’t get any output before, but now we get the inet.0 routing table.  

The table contains two entries for each one of the interfaces that we configured with an IPv4 address: one direct route and one local route.

If we look at ge-0/0/0.0 for example:

The local route 10.10.4.1/32 represents the address of the ge-0/0/0.0 interface itself, while the direct route 10.10.4.0/24 represents the directly connected network that can be reached via ge-0/0/0.0.

The routes are also added to the forwarding table.

Let’s now configure a couple of routing-instances inside R1: vR1 and vR2.
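As a sketch, assuming virtual-router instances over lt-0/0/0 units and lo0 units (the unit numbers are assumptions), with those interfaces carrying the same addresses as ge-0/0/0.0 and lo0.0:

```
set routing-instances vR1 instance-type virtual-router
set routing-instances vR1 interface lt-0/0/0.1
set routing-instances vR1 interface lo0.1
set routing-instances vR2 instance-type virtual-router
set routing-instances vR2 interface lt-0/0/0.2
set routing-instances vR2 interface lo0.2
```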

Yes! I am using the exact same IP addresses I used on ge-0/0/0.0 and lo0.0, but as you will see, there are no conflicts because a couple of new routing tables are created:

The two new routing tables are named vR1.inet.0 and vR2.inet.0, the routing-instance’s name followed by inet.0.

These tables are independent of each other. Thus, there is no conflict even when the IP addresses assigned to interfaces fe-0/0/0.0 and lt-0/0/0.1, or to interfaces lo0.0 and lo0.1, are the same.

Thus far, we have created inet.0, vR1.inet.0, and vR2.inet.0. We can check by typing show route summary:

Let’s put aside the routing-instances for now, and configure some static routes, as well as OSPFv2 and BGP, in the main instance. 
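A minimal sketch of this kind of configuration; the prefix, next-hop, OSPF area, BGP group name, and peer AS are all assumptions:

```
set routing-options static route 172.20.0.0/16 next-hop 10.10.4.2
set protocols ospf area 0.0.0.0 interface ge-0/0/1.0
set protocols ospf area 0.0.0.0 interface lo0.0 passive
set protocols bgp group EXT type external peer-as 65002
set protocols bgp group EXT neighbor 10.10.4.2
```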

The static, OSPF and BGP routes were all added to the inet.0 routing table. In the case of BGP, the NLRI (Network Layer Reachability Information), or family, that is advertised by default is inet unicast (IPv4), thus the routes are installed in inet.0.

inet6.0

inet6.0 is the default IPv6 routing table.

The name of the table is inet6.0. People sometimes get confused and call it inet.6 but this is NOT correct.

On some devices, inet6.0 is displayed when you enter the show route command, even without any configuration. However, it only contains a route for ff02::2/128 (the all-routers multicast address).

In our routers, inet6.0 is not present and will only be created when we configure IPv6 addresses, as we will do next.

For each IPv6 address configured, the router installs a local and a direct route, the same way it does for IPv4. Also, each interface automatically gets a link-local address (from fe80::/10), for which the router also installs a local route.

We still have OSPF (version 2) and BGP configured. However, OSPFv2 does not support IPv6, and BGP only advertises family inet (IPv4) by default, as we saw before.

We can easily add IPv6 to our BGP sessions by configuring:
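For example, assuming a BGP group named EXT:

```
set protocols bgp group EXT family inet6 unicast
```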

The routers now negotiate the inet6-unicast NLRI.

Notice that family inet is no longer negotiated. It is important to remember that any time we explicitly configure an address family under BGP, family inet is no longer advertised by default. Thus, we need to add family inet as well.
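Assuming the same hypothetical group name, the session ends up with both families explicitly configured:

```
set protocols bgp group EXT family inet unicast
set protocols bgp group EXT family inet6 unicast
```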

Now the routers are negotiating both family inet and family inet6, and we can clearly see in the last couple of lines of the show bgp neighbor command output that any family inet route will be installed in inet.0, while any family inet6 route will be installed in inet6.0.

We can add some static routes also under inet6.0, and export them using BGP:
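A sketch of this step, with an assumed prefix and policy name:

```
set routing-options rib inet6.0 static route 2001:db8:100::/48 discard
set policy-options policy-statement EXPORT-STATIC from protocol static
set policy-options policy-statement EXPORT-STATIC then accept
set protocols bgp group EXT export EXPORT-STATIC
```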

Notice that we now have some hidden routes:

These routes are hidden because the next-hop cannot be resolved:

This can be easily solved by adding IPv4-mapped IPv6 addresses to interface ge-0/0/0.0:
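The idea is to give the interface an IPv6 address that covers the IPv4-mapped next-hops (::ffff:a.b.c.d) the IPv6 routes carry when learned over an IPv4 BGP session. A sketch; the prefix length is an assumption:

```
set interfaces ge-0/0/0 unit 0 family inet6 address ::ffff:10.10.4.1/120
```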

The important detail here is that depending on the address family, the routes are installed in inet.0 or inet6.0 automatically.

We could also create a BGP session using IPv6 as transport:
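For example, an internal session between assumed loopback IPv6 addresses:

```
set protocols bgp group INT-V6 type internal local-address 2001:db8::1
set protocols bgp group INT-V6 neighbor 2001:db8::2
```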

To make this work, we need to advertise the loopback interfaces’ IPv6 addresses. We could configure either OSPFv3 or ISIS. Let’s use ISIS, which advertises both IPv4 and IPv6 by default:
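A minimal ISIS sketch; the NET configured on lo0.0 and the interface list are assumptions:

```
set interfaces lo0 unit 0 family iso address 49.0001.1921.6800.1001.00
set interfaces ge-0/0/0 unit 0 family iso
set protocols isis interface ge-0/0/0.0
set protocols isis interface lo0.0 passive
```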

A new routing table named iso.0 is created, where the ISO address (the NET) of the loopback interface is installed.

After we configure ISIS, both the IPv4 and IPv6 addresses of the loopback interfaces are advertised. 

Because ISIS uses different TLVs to advertise IPv4 and IPv6 prefixes, the router knows which routing table the resulting ISIS routes should be installed in:

Once the loopback IPv6 addresses are advertised, the new BGP session is established.

Because this BGP session is configured using IPv6 as transport, the routes are advertised as NLRI inet6-unicast, and the receiver installs them into inet6.0 by default, without any address family being configured.

We can also configure IPv6 in the routing-instances that we created earlier:
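For example, reusing the same (assumed) IPv6 address on both instance loopbacks, just as we did with IPv4:

```
set interfaces lo0 unit 1 family inet6 address 2001:db8::1/128
set interfaces lo0 unit 2 family inet6 address 2001:db8::1/128
```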

As expected, two new routing tables are created:

The list of routing tables keeps growing:

But we can still add a few more: 😊

We will now look into how multicast traffic is forwarded, and the routing tables involved.

inet.2

Forwarding multicast traffic is not as simple as forwarding unicast traffic. When a unicast packet shows up, an L3 route lookup is performed, which means the router compares the destination address of the packet against entries in the routing table. Once a matching route has been found, the packet is sent out of the interface indicated by that route.

In the example below, if a unicast packet with destination address 10.2.2.1 arrives at R1, R1 performs a route lookup in inet.0, and finds a matching route indicating that 10.2.2/24 is reachable via ge-0/0/0.0 with next-hop 172.16.1.2. 

The packet is forwarded to R2 out of interface ge-0/0/0.0; done!  

On the other hand, if a multicast packet arrives at R1 with destination address 239.1.1.1, R1 has to decide what to do with the packet, because 239.1.1.1 actually represents multiple destinations (receivers), which are dispersed across multiple networks. This decision is based on the multicast routing protocol running on the routers.

Let’s say that the 4 routers in the example are running PIM in dense mode.

When R1 receives the packet, it floods it out of interfaces ge-0/0/0.0 and ge-0/0/1.0.

When R2 and R4 receive the multicast packet, they also flood it out of interfaces ge-0/0/0.0, ge-0/0/1.0, and ge-0/0/2.0.

You can see that R2 ends up receiving two copies of the same packet (from R1 and R4).

This has two potential problems:

1) R2 could forward the two copies of the same packet to R3, which is a waste of resources.

2) R2 could forward the copy of the packet from R4 back to R1 (causing a loop).

These two problems are avoided by performing RPF (Reverse Path Forwarding) checks on received multicast packets, to decide whether or not to forward them.

These RPF checks involve performing an L3 route lookup for a matching route for the source address of the packet. Yes, the source address is compared against entries in the routing table.

* If the interface the packet arrived on matches the outbound interface of this best route, the packet passes the RPF check, and is flooded out.

* If the interface the packet arrived on does not match the outbound interface of this best route, the packet fails the RPF check, and is dropped.

Now, every router along the path from the multicast source to the receivers performs RPF checks, forwards packets that pass the checks, and drops the ones that fail. Prune messages are sent out of the interfaces where RPF-failing packets are received, to stop the flooding on those interfaces.

Now, by default, RPF checks are performed using information in inet.0 when PIM is the multicast routing protocol. Yes, inet.0! If you think about it, it is not too crazy: RPF checks look at the source address, which is a unicast address.

You can check which routing table and which route is used for RPF checks for a particular source address, using show multicast rpf <address>
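On R3, the output looks roughly like this (the entry count, protocol, and addresses shown here are illustrative):

```
user@R3> show multicast rpf 10.1.1.1
Multicast RPF table: inet.0 , 12 entries

10.1.1.0/24
    Protocol: IS-IS
    Interface: ge-0/0/1.0
    Neighbor: 172.16.2.1
```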

Because the same routing table is used for unicast traffic forwarding, and for multicast RPF checks, both unicast and multicast traffic follow the best path indicated by inet.0, by default.

What if we wanted to change the path that multicast traffic follows?

In our sample topology, we might want to make the interface between R4 and R3 the best interface to reach 10.1.1.1 from R3, but only for RPF checks, so that unicast traffic continues to flow as before, but multicast traffic flows like this:

To achieve this, we need to have different routing information for unicast forwarding and for RPF checks, which is exactly the purpose of inet.2: to provide an alternative routing table for RPF checks.

I imagine that, now that you understand RPF checks and the fact that they perform unicast route lookups, it will not be a surprise to hear that inet.2 is NOT a multicast routing table, as is commonly assumed.

inet.2 is a UNICAST routing table; it is used for multicast traffic RPF checks, but it is still a unicast routing table.

As we just learned, this table is not used by default; inet.0 is. And if we check the contents of inet.2, we discover that the table is empty!

If we want to use this table, we need to:

  • tell PIM to use inet.2 instead of inet.0 for RPF checks.
  • populate the inet.2 table.  
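The first step can be sketched with a rib-group applied under PIM; the rib-group name here is an assumption (the first rib listed in import-rib is the one PIM uses for its RPF lookups):

```
set routing-options rib-groups MCAST-RPF import-rib inet.2
set protocols pim rib-group inet MCAST-RPF
```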

NOTE: you might want to read my article about rib-groups to learn more about them.

We now need to have ISIS routes installed in inet.2, making sure that interface ge-0/0/0.0 is preferred over ge-0/0/1.0 to reach 10.1.1.1, while the route in inet.0 still has interface ge-0/0/1.0 as the preferred interface.

We are going to enable multitopology and configure different metrics for unicast and for multicast.
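A sketch of the multitopology configuration; the metric values and the level are assumptions chosen so that ge-0/0/0.0 wins in the multicast topology:

```
set protocols isis topologies ipv4-multicast
set protocols isis interface ge-0/0/0.0 level 2 ipv4-multicast-metric 10
set protocols isis interface ge-0/0/1.0 level 2 ipv4-multicast-metric 100
```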

https://tools.ietf.org/html/rfc5120

Different TLVs are used to advertise prefixes for the unicast topology (the default) and for the multicast topology. Thus, the router is able to differentiate them and knows which routing table the routes should be installed in.

NOTE: for more details about ISIS TLVs you might want to read my article about ISIS metrics.

After the configuration changes, we now see that we have a route to 10.1.1.1 using ge-0/0/1.0 in inet.0 (for unicast traffic forwarding) and a route to 10.1.1.1 using ge-0/0/0.0 in inet.2 (for RPF checks), as we wanted.

Let’s look at one more thing before we talk about the next routing table. We are going to stop advertising 10.1.1/24 via ISIS and instead we are going to use BGP.

R1 is now advertising 10.1.1/24 via BGP. However, remember that by default when we configure a BGP session using IPv4 addresses, prefixes are advertised as NLRI = inet-unicast, and as a result the route is installed in inet.0 on the receiving side:

Also remember that we configured PIM to perform RPF checks using inet.2.

If multicast traffic from 10.1.1.1 arrived at R3 it would fail RPF checks because there is no route in inet.2.

We need to configure the router so that the BGP route is also installed in inet.2. We could do it using rib-groups, or we could configure BGP with the proper NLRI so that the route is automatically installed in inet.2. Let’s try the latter.

We are going to configure family inet multicast, and see what happens:
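Assuming the same hypothetical EXT group as before:

```
set protocols bgp group EXT family inet unicast
set protocols bgp group EXT family inet multicast
```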

NOTE: remember that when you configure a family under BGP, the default family is no longer advertised. Thus, we are also adding family inet unicast.

If we enter the show bgp neighbor command, we can see that the routers negotiated NLRIs: inet-unicast and inet-multicast.

Notice that the output shows tables inet.0 and inet.2!

BGP is configured to advertise family inet unicast and family inet multicast!

However, configuring family inet-multicast under BGP, does NOT mean BGP will now advertise multicast routes.

Configuring family inet-multicast means that BGP will advertise unicast routes as NLRI inet-multicast, as you can see in the packet capture below.

Family inet multicast prefixes are automatically installed in inet.2 on the receiving router and, as we learned before, are used for multicast traffic RPF checks.

When we enter the show route 10.1.1/24 command on R3 again, we now find a route in inet.0 and a route in inet.2.

You might find it interesting that the route in inet.0 prefers interface ge-0/0/1.0 while the route in inet.2 prefers interface ge-0/0/0.0, as we wanted, BUT without us doing any kind of BGP route manipulation. This is happening automatically because of next-hop resolution.

On R3, we already have routes for 192.168.1.1 in both inet.0 and inet.2 and, because of the multitopology metrics that we configured, the route in inet.0 prefers interface ge-0/0/1.0 while the route in inet.2 prefers interface ge-0/0/0.0.

When the router performs next-hop resolution for the BGP route to be installed in inet.0, it does a route lookup for the next-hop in inet.0. Likewise, when the router performs next-hop resolution for the BGP route to be installed in inet.2, it does a route lookup for the next-hop in inet.2. As simple as that!

NOTES:

Keep in mind that the route that is used for unicast forwarding is the route in inet.0. Thus, to establish the BGP session R1 and R3 look at inet.0 to find a route to each other’s loopback interface.

inet.2 is used for RPF checks for IPv4 multicast traffic only. It is not used to forward unicast nor multicast traffic.

The equivalent of inet.2 for IPv6 multicast traffic is called inet6.2. If you configure ISIS with multitopology inet6-multicast or BGP with family inet6 multicast, the routes automatically get installed in inet6.2.

You might now be asking: if inet.2 is not used for multicast traffic forwarding, is there a table for that purpose?

Yes, and we are going to talk about that now.

inet.1

inet.1 is used to forward multicast traffic, though it is not like the unicast routing table, where we perform a route lookup and send the packet out. This table keeps multicast forwarding decisions; it is created when you enable PIM, but it only gets populated when multicast traffic starts flowing through the network.

If we temporarily disable PIM on one of our routers and try show route table inet.1, we get no output:

As soon as we re-enable PIM we get the following output:

Let’s run a quick multicast traffic test and run the show command again:

We can see a new entry now.

This route shows multicast group 239.1.1.1, with source address 172.16.1.1: a multicast forwarding state (S,G),

where:

  • S refers to the unicast IP address of the source   
  • G refers to the multicast group IP address for which S is the source

R2 will perform an RPF check:

Since traffic is arriving on ge-0/0/0.0, it passes RPF and can be forwarded. We are using PIM dense mode, so the packet will be flooded out of ge-0/0/1.0 and ge-0/0/2.0.

The output of the show route table inet.1 command does not provide much information, other than the fact that we have an S,G entry for this traffic. The show multicast route and show pim join commands provide more details about the multicast state.

NOTE: There is also an equivalent routing table for IPv6, called inet6.1.

inet.4

The last table that we will cover in this article is one that you might not have heard about before: inet.4, which keeps Source Active information learned via MSDP (Multicast Source Discovery Protocol).

MSDP allows interdomain multicast connectivity, and anycast RP, for PIM Sparse mode.

Here is an example:

We have two multicast domains, each with its own rendezvous point (RP1 and RP2). The Multicast Sender is located in AS100, while the receivers are in both AS100 and AS200.

When the sender starts sending the multicast traffic, R11 sends a PIM register message to the local rendezvous point (RP1), which will now have an S,G entry in its table.

When local receivers send IGMP messages requesting traffic for group 239.1.1.1, R13 sends a PIM join message to the local RP, and because the local RP knows the source, it sends a join message towards that source. Traffic is then forwarded from R11 to the RP, from the RP to R13, and from R13 to the receivers. Life is good!

However, for the remote receivers things are not so great. The IGMP message is sent, R23 sends a PIM join message to RP2, but RP2 has no idea who the source of the traffic for group 239.1.1.1 is.

That’s why we configure MSDP between the two RPs. Now, when RP1 learns about the source for group 239.1.1.1, it advertises that information to RP2 using an MSDP Source Active message.
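On RP1, the MSDP peering could look like this, assuming 192.168.1.1 and 192.168.2.2 are the (hypothetical) RP loopback addresses:

```
set protocols msdp peer 192.168.2.2 local-address 192.168.1.1
```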

R22 will keep this MSDP Source Active information in inet.4.

NOTE: There is NOT an equivalent table for IPv6.

SUMMARY

To finish this article, I will leave you with the tables below, which summarize the routing tables that we learned about.

Stay tuned!
