Bandwidth versus maximum reserved bandwidth
By stretch | Thursday, February 12, 2009 at 2:00 a.m. UTC
By default, a QoS policy applied in IOS will only reserve up to 75% of a link's available bandwidth. The motivation behind this is to leave a bit of headroom for routing protocols and other critical traffic which might not otherwise be accounted for in a QoS policy. While this is effective for low-speed links, it leaves potential for a large amount of wasted bandwidth on high-speed links, particularly when policing is applied. For example, on a 10 Mbps Ethernet link, only 7.5 Mbps is available for CBWFQ or LLQ allocation.
This can present problems if you want to reserve, say, 90% of a link's bandwidth for RTP.
class-map match-all RTP
 match protocol rtp
!
!
policy-map VOIP
 class RTP
  priority 9000
We can see the 75% limitation manifest itself when applying the policy to an interface. We receive the following error:
R1(config-if)# service-policy output VOIP
I/f FastEthernet0/0 class RTP requested bandwidth 9000 (kbps), available only 7500 (kbps)
Oddly, at least on IOS 12.4(9)T1, a fair-queue configuration is applied as an apparent fallback:

fair-queue 64 256 256

Not quite what we wanted. One way to fix this would be to artificially increase the bandwidth of the interface to make the router think it has more than it does. To reserve a full 9 Mbps, we could set the interface bandwidth to 12 Mbps; 75% of 12 Mbps is 9 Mbps, so we stay within the 75% ceiling.
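The arithmetic behind this workaround can be sketched quickly (a minimal Python illustration; the function name is ours, not an IOS command):

```python
import math

# To fit a reservation of reserve_kbps under a ceiling of limit_pct percent,
# the configured interface bandwidth must be at least
# reserve_kbps * 100 / limit_pct.
def required_interface_bandwidth(reserve_kbps, limit_pct=75):
    """Smallest configured bandwidth (kbps) that admits the reservation."""
    return math.ceil(reserve_kbps * 100 / limit_pct)

print(required_interface_bandwidth(9000))  # -> 12000 kbps
```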
R1(config-if)# no fair-queue
R1(config-if)# bandwidth 12000
R1(config-if)# service-policy output VOIP
This time our policy is applied without incident, as we're only reserving 75% of the supposed bandwidth. While our QoS goal has been achieved, we've inadvertently broken something else: routing. Consider a simple lab running a single OSPF area across all routers. Our QoS policy is applied outbound on R1's link to R2. R4 is advertising a default route into the area.
With all equal-cost links, R1 should load-balance traffic following the default route across both of its uplinks toward R4. Here's what the OSPF interface table looks like before our bandwidth modification on FastEthernet0/0:
R1# show ip ospf int brief
Interface    PID   Area   IP Address/Mask   Cost  State  Nbrs F/C
Fa0/1        1     0      10.0.13.1/24      10    BDR    1/1
Fa0/0        1     0      10.0.12.1/24      10    BDR    1/1
But increasing the configured bandwidth of FastEthernet0/0 to 12 Mbps lowers its OSPF cost from 10 to 8, resulting in a lower overall metric to R4:
R1# show ip ospf int brief
Interface    PID   Area   IP Address/Mask   Cost  State  Nbrs F/C
Fa0/1        1     0      10.0.13.1/24      10    BDR    1/1
Fa0/0        1     0      10.0.12.1/24      8     BDR    1/1
What we've done is kill our equal-cost load balancing; all traffic from R1 to R4 will now flow over the link via R2, effectively cutting our actual available bandwidth in half.
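The cost change follows directly from OSPF's default formula, which can be sketched as follows (assuming the default auto-cost reference bandwidth of 100 Mbps; the function name is ours):

```python
# OSPF cost = reference bandwidth / interface bandwidth, truncated to an
# integer with a minimum of 1. The IOS default auto-cost reference-bandwidth
# is 100 Mbps (100,000 kbps).
def ospf_cost(bandwidth_kbps, reference_kbps=100_000):
    return max(1, reference_kbps // bandwidth_kbps)

print(ospf_cost(10_000))  # genuine 10 Mbps link -> cost 10
print(ospf_cost(12_000))  # bandwidth inflated to 12 Mbps -> cost 8
```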
Fortunately IOS provides a much more elegant solution to this problem: max-reserved-bandwidth. This command allows us to change the default limit of 75% to something better suited to our needs. We can lower the configured bandwidth on FastEthernet0/0 to match its actual bandwidth, and increase the maximum QoS-reserved bandwidth to 90%.
R1(config-if)# bandwidth 10000
CBWFQ: Not enough available bandwidth for all classes
  Available 7500 (kbps)  Needed 9000 (kbps)
CBWFQ: Removing service policy on FastEthernet0/0
R1(config-if)# max-reserved-bandwidth 90
R1(config-if)# service-policy output VOIP
With this solution, we can leave the bandwidth settings where they should be while still getting efficient use out of our links.
R1# show interface f0/0
FastEthernet0/0 is up, line protocol is up
  Hardware is Gt96k FE, address is c201.4188.0000 (bia c201.4188.0000)
  Internet address is 10.0.12.1/24
  MTU 1500 bytes, BW 10000 Kbit, DLY 1000 usec,
...
R1# show policy-map interface f0/0
 FastEthernet0/0

  Service-policy output: VOIP

    Class-map: RTP (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: protocol rtp
      Queueing
        Strict Priority
        Output Queue: Conversation 264
        Bandwidth 9000 (kbps) Burst 225000 (Bytes)
        (pkts matched/bytes matched) 0/0
        (total drops/bytes drops) 0/0

    Class-map: class-default (match-any)
      7 packets, 522 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
Posted in Routing
Comments
February 12, 2009 at 4:10 a.m. UTC
Hmm. I would have never thought of using the bandwidth command to make that work... but I guess that's just because I already knew about max-reserved-bandwidth... shrug
February 12, 2009 at 3:24 p.m. UTC
Nice, I haven't used the 'bandwidth' command to override any QoS stuff for a few years. The problem I had when using it, though, was in reverse. I had some Nokia PRI modems that only had Ethernet interfaces to them from the LAN side. Obviously the WAN speed was limited at far less than 10 or 100Mbps, so I had applied the bandwidth command at the Ethernet side of where QoS lived and also policed the outbound flow so as not to exceed the actual link speed... Hopefully nobody has the luxury of running into that situation any time soon though! :)
February 12, 2009 at 5:37 p.m. UTC
more qos articles would be great! no more blog drama ;)
February 13, 2009 at 2:16 p.m. UTC
Very nice blog entry. I am now digging into QoS, and this tip will help a lot.
February 14, 2009 at 9:18 p.m. UTC
Nice write-up. I was scratching my head at first because I was aware of the max-reserved-bandwidth command from my QoS studies. This made me think, however, which is always a good thing. Hope the job hunting picks up for you.
February 15, 2009 at 8:06 a.m. UTC
Nice tip! Yes fudging the bandwidth statement to reflect more than is available can have some quirky side effects :)
February 19, 2009 at 10:07 a.m. UTC
One other thing to note...
If you apply a QoS policy on a multilink, be sure that you consider the effect of one (or more) T1's going down.
For example, if you have two T1's multilinked together, and you have the following policy applied on the multilink...
policy-map VOIP
 class RTP
  priority 1544
What happens if one T1 goes down? The QoS policy becomes disabled on the interface, because your policy now exceeds the maximum reservable bandwidth (the multilink interface bandwidth has dropped from 3.088 Mbps to 1.544 Mbps).
Perhaps a more appropriate configuration for this type of scenario would be...
policy-map VOIP
 class RTP
  priority percent 50
Just something to consider.
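The commenter's point can be illustrated with a quick sketch of how a percentage-based priority scales with the multilink's current bandwidth (illustrative Python, not IOS behavior verbatim):

```python
# priority percent allocates a share of the current interface bandwidth,
# so the policy survives the loss of a member link instead of being removed.
def priority_kbps(interface_kbps, percent):
    return interface_kbps * percent // 100

print(priority_kbps(3088, 50))  # both T1s up: 1544 kbps for RTP
print(priority_kbps(1544, 50))  # one T1 down: allocation scales to 772 kbps
```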
February 24, 2009 at 3:34 a.m. UTC
Great article. More QoS article will be great.
August 10, 2011 at 4:24 a.m. UTC
Thanks Jeremy, it's a very good article. I realize this article is about two years old now. There have been some changes to max-reserved-bandwidth in newer IOS releases:
"Effective with Cisco IOS XE Release 2.6, Cisco IOS Release 15.0(1)S, and Cisco IOS Release 15.1(3)T, the max-reserved bandwidth command is hidden. "
"Effective with Cisco IOS XE Release 3.2S, the max-reserved bandwidth command is replaced by a modular QoS CLI (MQC) command (or sequence of MQC commands). "
From IOS XE Release 3.2S, the "max-reserved bandwidth" command has been replaced by the following MQC command sequence:

Router(config)# policy-map policy-map-name
Router(config-pmap)# class class-default
Router(config-pmap-c)# bandwidth {bandwidth-in-kbps | remaining percent percentage | percent percentage}
Have you had a chance to test how the new commands are used?

It looks like this increases the bandwidth (the way you described in the first part of your article) instead of modifying the reserved bandwidth percentage.

I'd suggest a follow-up article to further explain these new commands.
November 17, 2011 at 8:54 a.m. UTC
I have not tested or checked this. However, my understanding is that if a bandwidth statement is used under class-default, max-reserved-bandwidth becomes useless. By default, max-reserved-bandwidth reserves 25% for class-default, so if bandwidth is set explicitly for class-default, "max-reserved-bandwidth" will have no effect.
December 23, 2011 at 2:06 p.m. UTC
Hello,
What if I have 80% EF traffic and the circuit bandwidth is 100 Mbps? The interface is GigabitEthernet. Is it necessary to configure max-reserved-bandwidth to 80, or is it OK if I max it out to 100?
max-reserved-bandwidth 80 or 100?
Even though I don't configure it, the policy is still accepted under the interface.
Thank you!
January 6, 2012 at 4:57 a.m. UTC
Hi, good article. I understand that by default the remaining 25% of bandwidth is there to leave a bit of headroom for routing protocols and other critical traffic. But wouldn't "routing protocols and other critical traffic" fall into class-default in the policy anyway? So why bother?
Regards,
Alex
March 17, 2012 at 1:58 p.m. UTC
This is a wonderful forum. I have a question linking bandwidth and QoS:
If a customer has a CIR of 15 Mbps but wants to be able to burst up to 100 Mbps when necessary, is the config below OK:
)# policy-map SHAPE
)# class class-default
)# shape average 15000000 128000 100000000
)# int fa0/0
)# speed 100
)# duplex full
)# bandwidth 100000
)# service-policy out SHAPE
Thank you in anticipation of your response.
April 12, 2013 at 8:36 a.m. UTC
Useful topic. Thanks Jeremy.
June 5, 2013 at 5:20 p.m. UTC
Hi Jay,
No, this configuration will not work as intended.

First, one needs to understand that the terms bc and be are Tc-specific.

The maximum data that can be sent in one Tc, i.e. bc + be = access rate (here 100 Mbps) * Tc.

With CIR = 15,000,000 bps and bc = 128,000 bits, Tc = bc/CIR = 8.533 ms, approximately 9 ms.

So in this case the maximum value for be = 100,000,000 * 0.009 - 128,000 = 772,000 bits.

In fact, with the shape average command one won't be able to send more than 15 Mb of traffic in a one-second cycle.
July 4, 2014 at 7:16 a.m. UTC
Hi Jeremy,
I have a question:
In the formula BW Available= (BW * Max Reserv BW) - (Sum of all reserve), the "Max Reserv BW" for Cisco Interfaces is by default 75%.
If we use a virtual interface, for example a service instance (EVC), does the 75% Max Reserv BW rule still apply? Or do we consider 100%?