It can be hard to find extensive, easily digestible documentation on FTOS running on the Dell Force10 switch platform. Recently we had to apply simple rate limiting to ports generating more bandwidth than they should, without getting too far into the weeds with QoS policies. We also needed to enable controls to throttle multicast traffic. I hope this information proves useful for someone else.
Simple Bandwidth Rate Limiting
To enable simple rate limiting on a per-interface basis (in megabits), we do the following. This example is for a 10GbE port, rate-limited to 2500 Mb/s using the rate police command.
S4810-Leaf-5# conf t
S4810-Leaf-5(conf)# interface tengigabitethernet 0/12
S4810-Leaf-5(conf-if-te-0/12)# rate police 2500
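To confirm the policer took effect, you can pull the interface section of the running configuration afterwards. A minimal check (output abridged; on the releases I've used, rate police also accepts an optional burst size in KB after the committed rate, so run rate police ? to see what your image supports):

S4810-Leaf-5# show running-config interface tengigabitethernet 0/12
!
interface TenGigabitEthernet 0/12
 rate police 2500
...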
- Note that rate police will limit all traffic on the port, so if you know the type of traffic you want limited it’s better to use the storm-control command below or just set up a proper QoS policy.
Limiting Multicast Traffic
Sometimes multicast can be problematic; we had an issue with some R&D deployment tooling that generated an excessive amount of multicast traffic, upwards of several GB/second. Let’s rate limit multicast on the port down to 300 pps (packets per second).
S4810-Leaf-5# conf t
S4810-Leaf-5(conf)# interface tengigabitethernet 0/12
S4810-Leaf-5(conf-if-te-0/12)# storm-control multicast 300 in
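To confirm it’s applied, there’s a matching show storm-control command per traffic type; I’m leaving the output out here since the columns vary a bit between FTOS releases:

S4810-Leaf-5# show storm-control multicast tengigabitethernet 0/12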
Note that the storm-control command can also be applied to broadcast and unknown-unicast traffic, and can be enabled either globally or on a per-port basis.
S4810-Leaf-5(conf-if-te-0/3)# storm-control ?
broadcast              Broadcast Traffic
multicast              Multicast Traffic
unknown-unicast        Unknown Unicast Traffic
- It’s recommended to enable it on a per-port basis after you’ve determined where it’s needed; enabling it globally is probably overkill unless you’ve got a small, homogeneous environment (the global form is sketched below).
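For completeness, if a global limit does fit your environment, the same command is accepted from CONFIGURATION mode rather than under an individual interface. A sketch, reusing the 300 pps value from above (check storm-control ? at that level to confirm the options on your release):

S4810-Leaf-5# conf t
S4810-Leaf-5(conf)# storm-control multicast 300 in
S4810-Leaf-5(conf)# end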
Gathering Statistics or What to Throttle
If you are using a spine-leaf topology, you can observe which leaf switches are generating the most throttles and the most traffic that might impact the CPU. If you’re using another design, this will probably still apply.
Use the debug cpu-traffic-stats command (deb for short) to start collecting statistics:
01-Spine-1# deb cpu-traffic-stats
Started collecting CPU traffic statistics.
Use the show cpu-traffic-stats command to view the statistics:
01-Spine-1# show cpu-traffic-stats
Processor : CP
--------------
Received 28% traffic on fortyGigE 0/28     Total packets:374
    LLC:0, SNAP:0, IP:349, ARP:16, other:9
    Unicast:1, Multicast:358, Broadcast:15
Received 26% traffic on fortyGigE 0/24     Total packets:354
    LLC:0, SNAP:0, IP:337, ARP:9, other:8
    Unicast:1, Multicast:345, Broadcast:8
Received 15% traffic on fortyGigE 0/0      Total packets:207
    LLC:0, SNAP:0, IP:17, ARP:5, other:185
    Unicast:0, Multicast:202, Broadcast:5
Note the counts, then run the command again later and compare over time:
01-Spine-1# show cpu-traffic-stats
Processor : CP
--------------
Received 30% traffic on fortyGigE 0/28     Total packets:952
    LLC:0, SNAP:0, IP:891, ARP:37, other:24
    Unicast:1, Multicast:915, Broadcast:36
Received 29% traffic on fortyGigE 0/24     Total packets:920
    LLC:0, SNAP:0, IP:873, ARP:25, other:22
    Unicast:3, Multicast:895, Broadcast:22
Received 12% traffic on fortyGigE 0/0      Total packets:395
    LLC:0, SNAP:0, IP:52, ARP:27, other:316
    Unicast:0, Multicast:367, Broadcast:28
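For illustration only (the interval is simply however long you wait between the two show commands; I’m assuming roughly 60 seconds here), fortyGigE 0/28 went from 374 to 952 total packets between snapshots, which works out to about (952 - 374) / 60, or roughly 10 packets per second reaching the CPU from that port, almost all of it multicast. That’s the kind of port where a storm-control limit is worth considering.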
Make sure to use the un all (undebug all) command to turn off debugging when finished, as it’s a resource-intensive process and will bog things down if left running.
01-Spine-1# un all
All possible debugging has been turned off
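If you’d rather not turn off every debug on the box, the no form of the original command should stop just this collection; it’s worth double-checking against your release’s command reference, but it would look like:

01-Spine-1# no debug cpu-traffic-stats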