From patchwork Fri Apr 12 14:48:43 2019
X-Patchwork-Submitter: Xiaolong Ye
X-Patchwork-Id: 52719
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xiaolong Ye
To: dev@dpdk.org, Ferruh Yigit, David Marchand
Cc: Qi Zhang, Karlsson Magnus, Topel Bjorn, Xiaolong Ye
Date: Fri, 12 Apr 2019 22:48:43 +0800
Message-Id: <20190412144844.50712-1-xiaolong.ye@intel.com>
X-Mailer: git-send-email 2.17.1
Subject: [dpdk-dev] [PATCH 1/2] net/af_xdp: enqueue buf ring when Tx queue allocation fails

When the driver fails to reserve enough slots in the Tx queue for the
packets to be transmitted, we need to return the dequeued addrs to the
buf ring; otherwise those umem buffers are lost.

Fixes: f1debd77efaf ("net/af_xdp: introduce AF_XDP PMD")

Signed-off-by: Xiaolong Ye
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 007a1c6b4..5cc643ce2 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -276,6 +276,7 @@ eth_af_xdp_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
         if (xsk_ring_prod__reserve(&txq->tx, nb_pkts, &idx_tx) != nb_pkts) {
                 kick_tx(txq);
+                rte_ring_enqueue_bulk(umem->buf_ring, addrs, nb_pkts, NULL);
                 return 0;
         }
 
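The one-line fix relies on a simple ownership rule for the umem buf ring:
addresses dequeued from the rte_ring belong to the caller until they are
handed to the kernel, so on any early exit they have to be enqueued back.
The following minimal sketch shows that pattern in isolation; it is not
driver code: tx_slots_reserve(), tx_burst_sketch() and BATCH_SIZE are
illustrative stand-ins, only the rte_ring calls mirror the real flow.

#include <stdint.h>
#include <rte_ring.h>

#define BATCH_SIZE 32   /* illustrative burst size */

/* Hypothetical stand-in for xsk_ring_prod__reserve() on the Tx ring:
 * returns the number of descriptors actually reserved (0 on failure). */
extern unsigned int tx_slots_reserve(unsigned int nb_slots);

static uint16_t
tx_burst_sketch(struct rte_ring *buf_ring, uint16_t nb_pkts)
{
        void *addrs[BATCH_SIZE];

        if (nb_pkts > BATCH_SIZE)
                nb_pkts = BATCH_SIZE;

        /* Take ownership of nb_pkts umem addresses from the buf ring. */
        nb_pkts = rte_ring_dequeue_bulk(buf_ring, addrs, nb_pkts, NULL);
        if (nb_pkts == 0)
                return 0;

        if (tx_slots_reserve(nb_pkts) != nb_pkts) {
                /* Tx ring full: nothing was handed to the kernel, so give
                 * the addresses back; skipping this leaks umem buffers. */
                rte_ring_enqueue_bulk(buf_ring, addrs, nb_pkts, NULL);
                return 0;
        }

        /* ... fill descriptors, submit, kick, free the mbufs ... */
        return nb_pkts;
}

The patch does exactly this with umem->buf_ring after the failed
xsk_ring_prod__reserve() call, as the hunk above shows.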
From patchwork Fri Apr 12 14:48:44 2019
X-Patchwork-Submitter: Xiaolong Ye
X-Patchwork-Id: 52720
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xiaolong Ye
To: dev@dpdk.org, Ferruh Yigit, David Marchand
Cc: Qi Zhang, Karlsson Magnus, Topel Bjorn, Xiaolong Ye
Date: Fri, 12 Apr 2019 22:48:44 +0800
Message-Id: <20190412144844.50712-2-xiaolong.ye@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190412144844.50712-1-xiaolong.ye@intel.com>
References: <20190412144844.50712-1-xiaolong.ye@intel.com>
Subject: [dpdk-dev] [PATCH 2/2] net/af_xdp: make reserve/submit and peek/release consistent

As David pointed out, if we reserve N slots but only submit n of them,
we end up with a wrong idea of how many slots are still available, and
we also get a wrong idx the next time we call xsk_ring_prod__reserve().
The same applies to xsk_ring_cons__peek()/xsk_ring_cons__release().

This patch ensures that reserve/submit and peek/release are used
consistently.

Fixes: f1debd77efaf ("net/af_xdp: introduce AF_XDP PMD")

Reported-by: David Marchand
Signed-off-by: Xiaolong Ye
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 80 +++++++++++++++--------------
 1 file changed, 41 insertions(+), 39 deletions(-)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 5cc643ce2..76a6a8331 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -138,22 +138,19 @@ reserve_fill_queue(struct xsk_umem_info *umem, int reserve_size)
 {
         struct xsk_ring_prod *fq = &umem->fq;
         uint32_t idx;
-        int i, ret;
-
-        ret = xsk_ring_prod__reserve(fq, reserve_size, &idx);
-        if (unlikely(!ret)) {
-                AF_XDP_LOG(ERR, "Failed to reserve enough fq descs.\n");
-                return ret;
-        }
+        int i;
 
         for (i = 0; i < reserve_size; i++) {
                 __u64 *fq_addr;
                 void *addr = NULL;
                 if (rte_ring_dequeue(umem->buf_ring, &addr)) {
-                        i--;
                         break;
                 }
-                fq_addr = xsk_ring_prod__fill_addr(fq, idx++);
+                if (unlikely(!xsk_ring_prod__reserve(fq, 1, &idx))) {
+                        AF_XDP_LOG(WARNING, "Failed to reserve 1 fq desc.\n");
+                        break;
+                }
+                fq_addr = xsk_ring_prod__fill_addr(fq, idx);
                 *fq_addr = (uint64_t)addr;
         }
 
@@ -179,6 +176,9 @@ eth_af_xdp_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
         nb_pkts = RTE_MIN(nb_pkts, ETH_AF_XDP_TX_BATCH_SIZE);
 
+        if (unlikely(rte_pktmbuf_alloc_bulk(rxq->mb_pool, mbufs, nb_pkts) != 0))
+                return 0;
+
         rcvd = xsk_ring_cons__peek(rx, nb_pkts, &idx_rx);
         if (rcvd == 0)
                 return 0;
@@ -186,9 +186,6 @@ eth_af_xdp_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
         if (xsk_prod_nb_free(fq, free_thresh) >= free_thresh)
                 (void)reserve_fill_queue(umem, ETH_AF_XDP_RX_BATCH_SIZE);
 
-        if (unlikely(rte_pktmbuf_alloc_bulk(rxq->mb_pool, mbufs, rcvd) != 0))
-                return 0;
-
         for (i = 0; i < rcvd; i++) {
                 const struct xdp_desc *desc;
                 uint64_t addr;
@@ -211,6 +208,10 @@ eth_af_xdp_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
         xsk_ring_cons__release(rx, rcvd);
 
+        /* free the extra mbufs */
+        for (; rcvd < nb_pkts; rcvd++)
+                rte_pktmbuf_free(mbufs[rcvd]);
+
         /* statistics */
         rxq->stats.rx_pkts += (rcvd - dropped);
         rxq->stats.rx_bytes += rx_bytes;
@@ -261,55 +262,56 @@ eth_af_xdp_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
         struct xsk_umem_info *umem = txq->pair->umem;
         struct rte_mbuf *mbuf;
         void *addrs[ETH_AF_XDP_TX_BATCH_SIZE];
+        struct rte_mbuf *valid_bufs[ETH_AF_XDP_TX_BATCH_SIZE];
         unsigned long tx_bytes = 0;
-        int i, valid = 0;
+        int i;
+        uint16_t nb_valid = 0;
         uint32_t idx_tx;
+        uint32_t buf_len = ETH_AF_XDP_FRAME_SIZE - ETH_AF_XDP_DATA_HEADROOM;
 
         nb_pkts = RTE_MIN(nb_pkts, ETH_AF_XDP_TX_BATCH_SIZE);
 
         pull_umem_cq(umem, nb_pkts);
 
-        nb_pkts = rte_ring_dequeue_bulk(umem->buf_ring, addrs,
-                                        nb_pkts, NULL);
-        if (nb_pkts == 0)
+        for (i = 0; i < nb_pkts; i++) {
+                if (bufs[i]->pkt_len <= buf_len)
+                        valid_bufs[nb_valid++] = bufs[i];
+                else
+                        rte_pktmbuf_free(bufs[i]);
+        }
+
+        nb_valid = rte_ring_dequeue_bulk(umem->buf_ring, addrs,
+                                         nb_valid, NULL);
+        if (nb_valid == 0)
                 return 0;
 
-        if (xsk_ring_prod__reserve(&txq->tx, nb_pkts, &idx_tx) != nb_pkts) {
+        if (xsk_ring_prod__reserve(&txq->tx, nb_valid, &idx_tx) != nb_valid) {
                 kick_tx(txq);
-                rte_ring_enqueue_bulk(umem->buf_ring, addrs, nb_pkts, NULL);
+                rte_ring_enqueue_bulk(umem->buf_ring, addrs, nb_valid, NULL);
                 return 0;
         }
 
-        for (i = 0; i < nb_pkts; i++) {
+        for (i = 0; i < nb_valid; i++) {
                 struct xdp_desc *desc;
                 void *pkt;
-                uint32_t buf_len = ETH_AF_XDP_FRAME_SIZE -
-                                        ETH_AF_XDP_DATA_HEADROOM;
                 desc = xsk_ring_prod__tx_desc(&txq->tx, idx_tx + i);
-                mbuf = bufs[i];
-                if (mbuf->pkt_len <= buf_len) {
-                        desc->addr = (uint64_t)addrs[valid];
-                        desc->len = mbuf->pkt_len;
-                        pkt = xsk_umem__get_data(umem->mz->addr,
-                                                 desc->addr);
-                        rte_memcpy(pkt, rte_pktmbuf_mtod(mbuf, void *),
-                                   desc->len);
-                        valid++;
-                        tx_bytes += mbuf->pkt_len;
-                }
+                mbuf = valid_bufs[i];
+                desc->addr = (uint64_t)addrs[i];
+                desc->len = mbuf->pkt_len;
+                pkt = xsk_umem__get_data(umem->mz->addr,
+                                         desc->addr);
+                rte_memcpy(pkt, rte_pktmbuf_mtod(mbuf, void *),
+                           desc->len);
+                tx_bytes += mbuf->pkt_len;
                 rte_pktmbuf_free(mbuf);
         }
 
-        xsk_ring_prod__submit(&txq->tx, nb_pkts);
+        xsk_ring_prod__submit(&txq->tx, nb_valid);
 
         kick_tx(txq);
 
-        if (valid < nb_pkts)
-                rte_ring_enqueue_bulk(umem->buf_ring, &addrs[valid],
-                                      nb_pkts - valid, NULL);
-
-        txq->stats.err_pkts += nb_pkts - valid;
-        txq->stats.tx_pkts += valid;
+        txq->stats.err_pkts += nb_pkts - nb_valid;
+        txq->stats.tx_pkts += nb_valid;
         txq->stats.tx_bytes += tx_bytes;
 
         return nb_pkts;
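Taken out of the driver context, the invariant this patch restores is
short: every xsk_ring_prod__reserve(n) must be matched by an
xsk_ring_prod__submit() of the same n, and every xsk_ring_cons__peek()
that returns rcvd must be matched by xsk_ring_cons__release(rcvd).
A minimal sketch of both sides follows, assuming libbpf's <bpf/xsk.h>
(the AF_XDP API the PMD used at the time); next_free_addr(),
refill_sketch() and drain_sketch() are hypothetical helpers, not
functions of the driver.

#include <bpf/xsk.h>

/* Hypothetical source of umem addresses to post to the fill queue;
 * returns 0 on success, non-zero when no address is available. */
extern int next_free_addr(__u64 *addr);

/* Producer side: reserve a slot only once we know we can fill it,
 * and submit exactly what was reserved. */
static void
refill_sketch(struct xsk_ring_prod *fq, unsigned int budget)
{
        __u32 idx;
        __u64 addr;
        unsigned int i;

        for (i = 0; i < budget; i++) {
                if (next_free_addr(&addr))
                        break;
                if (xsk_ring_prod__reserve(fq, 1, &idx) != 1)
                        break;
                *xsk_ring_prod__fill_addr(fq, idx) = addr;
                xsk_ring_prod__submit(fq, 1);
        }
}

/* Consumer side: release exactly the number of descriptors peeked,
 * never the number that was asked for. */
static unsigned int
drain_sketch(struct xsk_ring_cons *rx, unsigned int budget)
{
        __u32 idx;
        unsigned int rcvd, i;

        rcvd = xsk_ring_cons__peek(rx, budget, &idx);
        for (i = 0; i < rcvd; i++) {
                const struct xdp_desc *desc = xsk_ring_cons__rx_desc(rx, idx + i);

                (void)desc; /* copy desc->addr / desc->len out here */
        }
        xsk_ring_cons__release(rx, rcvd);

        return rcvd;
}

This is the same rule the patch applies in reserve_fill_queue() (reserve
one fill-queue slot at a time, only when there is an address to put in
it) and in the Rx path (allocate the mbufs before peeking, so that the
peeked count can always be released in full).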