From patchwork Thu Jun  2 09:13:03 2022
X-Patchwork-Submitter: Huichao Cai
X-Patchwork-Id: 112260
X-Patchwork-Delegate: thomas@monjalon.net
From: Huichao Cai
To: dev@dpdk.org
Cc: konstantin.ananyev@intel.com
Subject: [PATCH] ip_frag: add IPv4 fast fragment switch and test data
Date: Thu, 2 Jun 2022 17:13:03 +0800
Message-Id: <1654161183-5391-1-git-send-email-chcchc88@163.com>
Some NIC drivers support the DEV_TX_OFFLOAD_MBUF_FAST_FREE offload
(device supports optimization for fast release of mbufs; when set,
the application must guarantee that, per queue, all mbufs come from
the same mempool and have refcnt = 1). To adapt to this offload, we
need to modify the existing fragmentation logic (which attaches to
the input mbuf, so it is fast; call it fast fragment mode) and add
fragmentation logic in a non-attach mbuf mode (slow fragment mode).
Add some test data for this modification.

Signed-off-by: Huichao Cai
---
 app/test/test_ipfrag.c               | 14 +++++++--
 lib/ip_frag/rte_ipv4_fragmentation.c | 56 +++++++++++++++++++++++++-----------
 2 files changed, 51 insertions(+), 19 deletions(-)

diff --git a/app/test/test_ipfrag.c b/app/test/test_ipfrag.c
index 610a86b..f5fe4b8 100644
--- a/app/test/test_ipfrag.c
+++ b/app/test/test_ipfrag.c
@@ -407,12 +407,20 @@ static void ut_teardown(void)
 				pktid);
 		}
 
-		if (tests[i].ipv == 4)
-			len = rte_ipv4_fragment_packet(b, pkts_out, BURST,
+		if (tests[i].ipv == 4) {
+			if (i % 2)
+				len = rte_ipv4_fragment_packet(b,
+						pkts_out, BURST,
 						tests[i].mtu_size, direct_pool,
 						indirect_pool);
-		else if (tests[i].ipv == 6)
+			else
+				len = rte_ipv4_fragment_packet(b,
+						pkts_out, BURST,
+						tests[i].mtu_size,
+						direct_pool,
+						direct_pool);
+		} else if (tests[i].ipv == 6)
 			len = rte_ipv6_fragment_packet(b, pkts_out, BURST,
 					tests[i].mtu_size, direct_pool,
diff --git a/lib/ip_frag/rte_ipv4_fragmentation.c b/lib/ip_frag/rte_ipv4_fragmentation.c
index a562424..65bfad7 100644
--- a/lib/ip_frag/rte_ipv4_fragmentation.c
+++ b/lib/ip_frag/rte_ipv4_fragmentation.c
@@ -102,6 +102,11 @@ static inline uint16_t __create_ipopt_frag_hdr(uint8_t *iph,
  *   MBUF pool used for allocating direct buffers for the output fragments.
  * @param pool_indirect
  *   MBUF pool used for allocating indirect buffers for the output fragments.
+ *   If pool_indirect == pool_direct, the fragments are built in a mode
+ *   compatible with the DEV_TX_OFFLOAD_MBUF_FAST_FREE offload.
+ *   DEV_TX_OFFLOAD_MBUF_FAST_FREE: device supports optimization for fast
+ *   release of mbufs. When set, the application must guarantee that, per
+ *   queue, all mbufs come from the same mempool and have refcnt = 1.
  * @return
  *   Upon successful completion - number of output fragments placed
  *   in the pkts_out array.
@@ -123,6 +128,7 @@ static inline uint16_t __create_ipopt_frag_hdr(uint8_t *iph,
 	uint16_t frag_bytes_remaining;
 	uint8_t ipopt_frag_hdr[IPV4_HDR_MAX_LEN];
 	uint16_t ipopt_len;
+	bool is_fast_frag_mode = true;
 
 	/*
 	 * Formal parameter checking.
@@ -133,6 +139,9 @@ static inline uint16_t __create_ipopt_frag_hdr(uint8_t *iph,
 	    unlikely(mtu_size < RTE_ETHER_MIN_MTU))
 		return -EINVAL;
 
+	if (pool_indirect == pool_direct)
+		is_fast_frag_mode = false;
+
 	in_hdr = rte_pktmbuf_mtod(pkt_in, struct rte_ipv4_hdr *);
 	header_len = (in_hdr->version_ihl & RTE_IPV4_HDR_IHL_MASK) *
 	    RTE_IPV4_IHL_MULTIPLIER;
@@ -190,30 +199,45 @@ static inline uint16_t __create_ipopt_frag_hdr(uint8_t *iph,
 		out_seg_prev = out_pkt;
 		more_out_segs = 1;
 		while (likely(more_out_segs && more_in_segs)) {
-			struct rte_mbuf *out_seg = NULL;
 			uint32_t len;
 
-			/* Allocate indirect buffer */
-			out_seg = rte_pktmbuf_alloc(pool_indirect);
-			if (unlikely(out_seg == NULL)) {
-				rte_pktmbuf_free(out_pkt);
-				__free_fragments(pkts_out, out_pkt_pos);
-				return -ENOMEM;
-			}
-			out_seg_prev->next = out_seg;
-			out_seg_prev = out_seg;
-
-			/* Prepare indirect buffer */
-			rte_pktmbuf_attach(out_seg, in_seg);
 			len = frag_bytes_remaining;
 			if (len > (in_seg->data_len - in_seg_data_pos)) {
 				len = in_seg->data_len - in_seg_data_pos;
 			}
-			out_seg->data_off = in_seg->data_off + in_seg_data_pos;
-			out_seg->data_len = (uint16_t)len;
+
+			if (is_fast_frag_mode) {
+				struct rte_mbuf *out_seg = NULL;
+				/* Allocate indirect buffer */
+				out_seg = rte_pktmbuf_alloc(pool_indirect);
+				if (unlikely(out_seg == NULL)) {
+					rte_pktmbuf_free(out_pkt);
+					__free_fragments(pkts_out, out_pkt_pos);
+					return -ENOMEM;
+				}
+				out_seg_prev->next = out_seg;
+				out_seg_prev = out_seg;
+
+				/* Prepare indirect buffer */
+				rte_pktmbuf_attach(out_seg, in_seg);
+
+				out_seg->data_off = in_seg->data_off +
+					in_seg_data_pos;
+				out_seg->data_len = (uint16_t)len;
+				out_pkt->nb_segs += 1;
+			} else {
+				rte_memcpy(
+					rte_pktmbuf_mtod_offset(out_pkt, char *,
+						out_pkt->pkt_len),
+					rte_pktmbuf_mtod_offset(in_seg, char *,
+						in_seg_data_pos),
+					len);
+				out_pkt->data_len = (uint16_t)(len +
+					out_pkt->data_len);
+			}
+
 			out_pkt->pkt_len = (uint16_t)(len + out_pkt->pkt_len);
-			out_pkt->nb_segs += 1;
 			in_seg_data_pos += len;
 			frag_bytes_remaining -= len;