What is it all about?
Some time ago I got a question about how telemetry can help with a quick and easy understanding of packet size distribution across the network.
There are several reasons to be interested in this topic: to get a better understanding of the network and its traffic profiles; to make sure packets of different sizes are distributed evenly across routers’ ports for better performance and forwarding; or to have a fast and simple way to see whether a potential DoS attack, with streams of identically small packets, is going through your port(s).
How do you get this information? The first, simple way is to take the total number of bytes, the total number of packets, and divide. Sounds easy, but this approach lacks the level of granularity you would probably want.
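To see why the simple "divide" approach falls short, here is a small illustrative sketch (the traffic mixes below are made up for the example): two links with completely different packet size profiles can report exactly the same average.

```python
# Naive approach: average packet size = total bytes / total packets.
# Two very different traffic mixes can produce the same average,
# which is why this single number lacks granularity.

def avg_packet_size(total_bytes: int, total_packets: int) -> float:
    """Average packet size in bytes (the 'divide' approach)."""
    return total_bytes / total_packets if total_packets else 0.0

# A link carrying a 50/50 mix of 64-byte and 1500-byte packets...
mix_a = avg_packet_size(500 * 64 + 500 * 1500, 1000)

# ...and a link carrying uniform 782-byte packets:
mix_b = avg_packet_size(1000 * 782, 1000)

print(mix_a, mix_b)  # both averages come out to 782.0
```

The averages are identical, yet the first link is dominated by tiny and giant packets while the second carries only mid-size frames. A per-bucket distribution is what actually distinguishes them.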
Another option is to use Netflow or IPFIX. But telemetry today can’t substitute for Netflow. Yes, one can find MDT YANG models for Netflow, but they will just push Netflow operational data. (Netflow is still very helpful! If you want to read more about Netflow, please jump here and read the nice articles from Nicolas!)
So, there should be some better way. The easiest one is, probably, to use what you have already, by default, and without any configuration involved. Yes, I’m talking about interface controller stats!
Here is how it looks when you use a CLI command:
RP/0/RP0/CPU0:NCS5501_top#sh controllers tenGigE 0/0/0/1 stats
Sun Oct 21 12:25:01.001 PDT
Statistics for interface TenGigE0/0/0/1 (cached values):

Ingress:
    Input total bytes           = 7335080089921
    Input good bytes            = 7335080089921

    Input total packets         = 10171454431
    Input 802.1Q frames         = 0
    Input pause frames          = 0
    Input pkts 64 bytes         = 448
    Input pkts 65-127 bytes     = 340045843
    Input pkts 128-255 bytes    = 877281960
    Input pkts 256-511 bytes    = 1776379079
    Input pkts 512-1023 bytes   = 7177746466
    Input pkts 1024-1518 bytes  = 0
    Input pkts 1519-Max bytes   = 749

    Input good pkts             = 10171454431
    Input unicast pkts          = 10171445103
    Input multicast pkts        = 9416
    Input broadcast pkts        = 0

    Input drop overrun          = 0
    Input drop abort            = 0
    Input drop invalid VLAN     = 0
    Input drop invalid DMAC     = 0
    Input drop invalid encap    = 0
    Input drop other            = 0

    Input error giant           = 0
    Input error runt            = 0
    Input error jabbers         = 0
    Input error fragments       = 0
    Input error CRC             = 0
    Input error collisions      = 0
    Input error symbol          = 0
    Input error other           = 0

    Input MIB giant             = 0
    Input MIB jabber            = 0
    Input MIB CRC               = 0

Egress:
    Output total bytes          = 7329708508335
    Output good bytes           = 7329708508335

    Output total packets        = 10173563487
    Output 802.1Q frames        = 0
    Output pause frames         = 0
    Output pkts 64 bytes        = 243
    Output pkts 65-127 bytes    = 363717976
    Output pkts 128-255 bytes   = 871387990
    Output pkts 256-511 bytes   = 1751781169
    Output pkts 512-1023 bytes  = 7186675288
    Output pkts 1024-1518 bytes = 173
    Output pkts 1519-Max bytes  = 750

    Output good pkts            = 10173563487
    Output unicast pkts         = 10173554134
    Output multicast pkts       = 9420
    Output broadcast pkts       = 0

    Output drop underrun        = 0
    Output drop abort           = 0
    Output drop other           = 0

    Output error other          = 0
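As a quick sketch of the math a collector or dashboard would apply, here is how the per-bucket ingress counters from the output above can be turned into a percentage distribution (the dictionary below simply hard-codes the ingress bucket values for illustration):

```python
# Turn the per-bucket "Input pkts ..." counters from the CLI output
# into a percentage distribution across packet size ranges.

ingress_buckets = {
    "64":        448,
    "65-127":    340045843,
    "128-255":   877281960,
    "256-511":   1776379079,
    "512-1023":  7177746466,
    "1024-1518": 0,
    "1519-Max":  749,
}

total = sum(ingress_buckets.values())
distribution = {size: 100.0 * count / total
                for size, count in ingress_buckets.items()}

for size, pct in distribution.items():
    print(f"{size:>9} bytes: {pct:6.2f}%")
```

On this interface the 512–1023 byte bucket clearly dominates; that is exactly the kind of snapshot the dashboards below present, just continuously and for many interfaces at once.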
Basically, the router itself gives you the data you’re looking for! All the details about ingress and egress packets are available there. The only problem is that nobody wants to go to every router, collect that information from every interface, and then do calculations offline. We want this to be available in real time with minimum effort on our side. This is exactly the place where Model Driven Streaming Telemetry can help!
Gathering information about packet length distribution
The very first thing here is to configure the correct sensor path:
telemetry model-driven
 sensor-group size_distribution
  sensor-path Cisco-IOS-XR-drivers-media-eth-oper:ethernet-interface/statistics/statistic
 !
!
After you have specified that sensor path and configured the destination group and subscription, you will have this information pushed out from the router (check here how to quickly find out the information to be streamed!):
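For completeness, a dial-out destination group and subscription tying everything together might look like the sketch below. The collector address, port, encoding, and sample interval are hypothetical placeholders; adjust them to your own collector setup:

```
telemetry model-driven
 destination-group DGroup1
  address-family ipv4 192.0.2.10 port 57500
   encoding self-describing-gpb
   protocol grpc no-tls
  !
 !
 subscription size_distribution
  sensor-group-id size_distribution sample-interval 30000
  destination-id DGroup1
 !
!
```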
It is straightforward to see that the information streamed through that sensor path contains all the needed counters, and we can use that in our collector tool.
The final step here is how to process and show this information. Having raw data is fine, but still not what we want. That’s why there are two dashboards created for you! As always, feel free to mix and match dashboard panels any way you think is better for your goals.
The first dashboard gives you the flexibility to select a group of routers and a group of interfaces you’re interested in. With the help of the second dashboard you can get information about a single interface, but with more detail and in real time.
Group View Dashboard
The first dashboard lets you select one or several routers and a group of interfaces you want to monitor. This is done with the help of variables in Grafana (it is worth mentioning that interfaces are tied to the selected routers: if you pick Router A and Router B, you will see only interfaces from those two devices, not from your whole network).
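As a rough illustration of how such chained variables can be wired up, here is a hypothetical pair of Grafana template-variable queries in InfluxQL. The measurement name (`ethernet-interface-stats`) and tag keys (`source`, `interface-name`) are assumptions about how your collector writes the data, not something mandated by the router:

```
# Variable "router" -- list all routers seen in the data:
SHOW TAG VALUES FROM "ethernet-interface-stats" WITH KEY = "source"

# Variable "interface" -- chained to the router(s) selected above:
SHOW TAG VALUES FROM "ethernet-interface-stats" WITH KEY = "interface-name" WHERE "source" =~ /^$router$/
```

The `WHERE "source" =~ /^$router$/` clause is what ties the interface list to the currently selected routers.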
For example, you have a location (or several locations) with peering connections, and you want to understand your traffic profile. You can select all (or a subset) of your peering routers, then choose outbound interfaces and collect a summary view of your ingress as well as egress packet size distribution. Sounds easy, right?
Packet size distribution is shown as a percentage on this dashboard. This was done for convenience: you probably already collect the total traffic load by other means, and the percentage view gives you a quick snapshot of the packet length distribution.