IEEE 802.1Q tunneling can be used to achieve simple layer two VPN connectivity between sites by encapsulating one 802.1Q trunk inside another. The topology below illustrates a common scenario where 802.1Q (or "QinQ") tunneling can be very useful.
A service provider has infrastructure connecting two sites at layer two, and wants to offer its customers transparent layer two connectivity. A less-than-ideal solution would be to assign each customer a range of VLANs it may use. However, this is very limiting: it removes the customers' flexibility to choose their own VLAN numbers, and on a large network the provider may simply run out of VLAN numbers (the 12-bit VLAN ID field allows only 4,094 usable values).
802.1Q tunneling solves both of these issues by assigning each customer a single VLAN number, chosen by the service provider. Within each customer VLAN exists a secondary 802.1Q trunk, which is controlled by the customer. Each customer packet traversing the service provider network is tagged twice: the innermost 802.1Q header contains the customer-chosen VLAN ID, and the outermost header contains the VLAN ID assigned to the customer by the service provider.
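To make the double encapsulation concrete, here is a minimal Python sketch (not part of the original article) that assembles Ethernet headers with stacked 802.1Q tags. The MAC addresses and the inner customer VLAN (30) are purely illustrative; 118 is customer A's provider-assigned outer VLAN from the scenario above:

```python
import struct

TPID = 0x8100            # 802.1Q Tag Protocol Identifier
ETHERTYPE_IPV4 = 0x0800

def ethernet_frame(dst, src, vlan_ids, ethertype, payload):
    """Assemble an Ethernet frame with zero or more stacked 802.1Q tags.

    Each tag adds 4 bytes (2-byte TPID + 2-byte TCI; PCP/DEI zeroed here).
    vlan_ids is ordered outer-first, the order the tags appear on the wire.
    """
    header = dst + src
    for vid in vlan_ids:
        header += struct.pack("!HH", TPID, vid & 0x0FFF)
    header += struct.pack("!H", ethertype)
    return header + payload

dst = bytes.fromhex("ffffffffffff")   # illustrative addresses
src = bytes.fromhex("00005e005301")
payload = bytes(1500)                 # maximum standard Ethernet payload

# The frame as customer A sends it on its own trunk (hypothetical VLAN 30)...
customer = ethernet_frame(dst, src, [30], ETHERTYPE_IPV4, payload)
# ...and the same frame on the ISP backbone, outer tag = provider VLAN 118
backbone = ethernet_frame(dst, src, [118, 30], ETHERTYPE_IPV4, payload)

print(len(customer), len(backbone))   # 1518 1522: the extra tag costs 4 bytes
```

That 4-byte difference is exactly why the switches need a system MTU of 1504, as configured below: a frame already carrying a full 1500-byte payload grows by one extra tag inside the provider network.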
802.1Q Tunnel Configuration
Before we get started with the configuration, we must verify that all of our switches support the necessary maximum transmission unit (MTU) of 1504 bytes. We can use the command show system mtu to check this, and the global configuration command system mtu to modify the device MTU if necessary (note that a reload is required for a new MTU to take effect).
S1# show system mtu
System MTU size is 1500 bytes
S1# configure terminal
S1(config)# system mtu 1504
Changes to the System MTU will not take effect until the next reload is done.
Next, we'll configure our backbone trunk to carry the top-level VLANs for customers A and B, which have been assigned VLANs 118 and 209, respectively. We configure a normal 802.1Q trunk on both ISP switches. The last configuration line below restricts the trunk to carrying only VLANs 118 and 209; this is an optional step.
S1(config)# interface f0/13
S1(config-if)# switchport trunk encapsulation dot1q
S1(config-if)# switchport mode trunk
S1(config-if)# switchport trunk allowed vlan 118,209
S2(config)# interface f0/13
S2(config-if)# switchport trunk encapsulation dot1q
S2(config-if)# switchport mode trunk
S2(config-if)# switchport trunk allowed vlan 118,209
Now for the interesting bit: the customer-facing interfaces. We assign each interface to the appropriate upper-level (service provider) VLAN and set its operational mode to dot1q-tunnel. We'll also enable layer two protocol tunneling to transparently carry CDP and other layer two protocols between the CPE devices.
S1(config)# interface f0/1
S1(config-if)# switchport access vlan 118
S1(config-if)# switchport mode dot1q-tunnel
S1(config-if)# l2protocol-tunnel
S1(config-if)# interface f0/3
S1(config-if)# switchport access vlan 209
S1(config-if)# switchport mode dot1q-tunnel
S1(config-if)# l2protocol-tunnel
S2(config)# interface f0/2
S2(config-if)# switchport access vlan 118
S2(config-if)# switchport mode dot1q-tunnel
S2(config-if)# l2protocol-tunnel
S2(config-if)# interface f0/4
S2(config-if)# switchport access vlan 209
S2(config-if)# switchport mode dot1q-tunnel
S2(config-if)# l2protocol-tunnel
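As an aside, on many Catalyst platforms l2protocol-tunnel also accepts a per-protocol form, should you want to tunnel only specific protocols rather than all of them. A sketch of that variant (verify support on your platform; show l2protocol-tunnel displays the tunneled protocols and counters):

S1(config-if)# l2protocol-tunnel cdp
S1(config-if)# l2protocol-tunnel stp
S1(config-if)# l2protocol-tunnel vtp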
We can use the command show dot1q-tunnel on the ISP switches to get a list of all interfaces configured as 802.1Q tunnels:
S1# show dot1q-tunnel
dot1q-tunnel mode LAN Port(s)
-----------------------------
Fa0/1
Fa0/3
Now that our tunnel configurations have been completed, each customer VLAN has transparent end-to-end connectivity between sites. This packet capture shows how customer traffic is double-encapsulated inside two 802.1Q headers along the ISP backbone. Any traffic left untagged by the customer (i.e., traffic in the native VLAN 1) is tagged only once, by the service provider.
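One caveat: because untagged customer traffic is protected only by the outer tag, the provider must ensure that outer tag is never stripped in the core. If a customer's tunnel VLAN were ever to match the native VLAN of the backbone trunk, those frames would cross the core untagged and customer separation would be lost. On many Catalyst switches, a common safeguard (assuming your platform supports it) is to force tagging of the native VLAN globally:

S1(config)# vlan dot1q tag native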