[v2] net/pcap: improve rxtx statistics
Commit Message
In the receive direction, if mbuf allocation or jumbo-frame processing
fails, the err_pkts counter is not updated, which makes it difficult to
locate the problem.
In the transmit direction, if pcap_sendpacket() fails with EMSGSIZE, the
packet being sent exceeds the buffer size provided, and the corresponding
mbuf needs to be released, otherwise the mbuf leaks.
Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
---
v2:
Clear coding style issues.
---
drivers/net/pcap/pcap_ethdev.c | 14 +++++++++++---
1 file changed, 11 insertions(+), 3 deletions(-)
Comments
On 8/26/2021 4:23 AM, Qiming Chen wrote:
> In the receive direction, if mbuf allocation or jumbo-frame processing
> fails, the err_pkts counter is not updated, which makes it difficult to
> locate the problem.
ack, but the patch adds two places that update 'err_pkts'. One of them is
where mbuf allocation fails, and for that case there is a dedicated stats
field, 'rx_nombuf'. Can you please update the patch to use this field?
> In the sending direction, if the pcap_sendpacket function returns
> EMSGSIZE, it means that the size of the sent packet exceeds the buffer
> size provided, and the corresponding mbuf needs to be released, otherwise
> it will cause the mbuf to leak.
>
PMD Tx shouldn't free mbufs that are not sent; it is the application's
responsibility to free them (or resend, or whatever the application wants).
Please drop this part of the patch.
> Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
<...>
@@ -297,8 +297,10 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
break;
mbuf = rte_pktmbuf_alloc(pcap_q->mb_pool);
- if (unlikely(mbuf == NULL))
- break;
+ if (unlikely(mbuf == NULL)) {
+ pcap_q->rx_stat.err_pkts++;
+ continue;
+ }
if (header.caplen <= rte_pktmbuf_tailroom(mbuf)) {
/* pcap packet will fit in the mbuf, can copy it */
@@ -311,6 +313,7 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
mbuf,
packet,
header.caplen) == -1)) {
+ pcap_q->rx_stat.err_pkts++;
rte_pktmbuf_free(mbuf);
break;
}
@@ -490,8 +493,12 @@ eth_pcap_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
*/
ret = pcap_sendpacket(pcap,
rte_pktmbuf_read(mbuf, 0, len, temp_data), len);
- if (unlikely(ret != 0))
+ if (unlikely(ret != 0)) {
+ if (errno == EMSGSIZE)
+ rte_pktmbuf_free(mbuf);
break;
+ }
+
num_tx++;
tx_bytes += len;
rte_pktmbuf_free(mbuf);
@@ -784,6 +791,7 @@ eth_stats_reset(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
internal->rx_queue[i].rx_stat.pkts = 0;
internal->rx_queue[i].rx_stat.bytes = 0;
+ internal->rx_queue[i].rx_stat.err_pkts = 0;
queue_missed_stat_reset(dev, i);
}