From patchwork Fri May  5 17:48:01 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Stephen Hemminger <stephen@networkplumber.org>
X-Patchwork-Id: 126705
X-Patchwork-Delegate: thomas@monjalon.net
From: Stephen Hemminger <stephen@networkplumber.org>
To: dev@dpdk.org
Cc: Stephen Hemminger <stephen@networkplumber.org>, Jiayu Hu
Subject: [PATCH 02/14] gso: use rte_pktmbuf_mtod_offset
Date: Fri, 5 May 2023 10:48:01 -0700
Message-Id: <20230505174813.133894-3-stephen@networkplumber.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230505174813.133894-1-stephen@networkplumber.org>
References: <20230505174813.133894-1-stephen@networkplumber.org>

Use the rte_pktmbuf_mtod_offset() macro instead of casting the result of
rte_pktmbuf_mtod() and adding the byte offset by hand. The change was
generated automatically by cocci/mtod-offset.cocci.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/gso/gso_common.h      | 12 ++++++------
 lib/gso/gso_tcp4.c        |  8 ++++----
 lib/gso/gso_tunnel_tcp4.c | 12 ++++++------
 lib/gso/gso_tunnel_udp4.c | 18 +++++++++---------
 4 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/lib/gso/gso_common.h b/lib/gso/gso_common.h
index 9456d596d3c5..4100765f2355 100644
--- a/lib/gso/gso_common.h
+++ b/lib/gso/gso_common.h
@@ -52,8 +52,8 @@ update_udp_header(struct rte_mbuf *pkt, uint16_t udp_offset)
 {
 	struct rte_udp_hdr *udp_hdr;
 
-	udp_hdr = (struct rte_udp_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-			udp_offset);
+	udp_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_udp_hdr *,
+			udp_offset);
 	udp_hdr->dgram_len = rte_cpu_to_be_16(pkt->pkt_len - udp_offset);
 }
 
@@ -77,8 +77,8 @@ update_tcp_header(struct rte_mbuf *pkt, uint16_t l4_offset, uint32_t sent_seq,
 {
 	struct rte_tcp_hdr *tcp_hdr;
 
-	tcp_hdr = (struct rte_tcp_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-			l4_offset);
+	tcp_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_tcp_hdr *,
+			l4_offset);
 	tcp_hdr->sent_seq = rte_cpu_to_be_32(sent_seq);
 	if (likely(non_tail))
 		tcp_hdr->tcp_flags &= (~(TCP_HDR_PSH_MASK |
@@ -104,8 +104,8 @@ update_ipv4_header(struct rte_mbuf *pkt, uint16_t l3_offset, uint16_t id)
 {
 	struct rte_ipv4_hdr *ipv4_hdr;
 
-	ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-			l3_offset);
+	ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+			l3_offset);
 	ipv4_hdr->total_length = rte_cpu_to_be_16(pkt->pkt_len - l3_offset);
 	ipv4_hdr->packet_id = rte_cpu_to_be_16(id);
 }
diff --git a/lib/gso/gso_tcp4.c b/lib/gso/gso_tcp4.c
index d31feaff95cd..e2ae4aaf6c5a 100644
--- a/lib/gso/gso_tcp4.c
+++ b/lib/gso/gso_tcp4.c
@@ -16,8 +16,8 @@ update_ipv4_tcp_headers(struct rte_mbuf *pkt, uint8_t ipid_delta,
 	uint16_t l3_offset = pkt->l2_len;
 	uint16_t l4_offset = l3_offset + pkt->l3_len;
 
-	ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char*) +
-			l3_offset);
+	ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+			l3_offset);
 	tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr + pkt->l3_len);
 	id = rte_be_to_cpu_16(ipv4_hdr->packet_id);
 	sent_seq = rte_be_to_cpu_32(tcp_hdr->sent_seq);
@@ -46,8 +46,8 @@ gso_tcp4_segment(struct rte_mbuf *pkt,
 	int ret;
 
 	/* Don't process the fragmented packet */
-	ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-			pkt->l2_len);
+	ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+			pkt->l2_len);
 	frag_off = rte_be_to_cpu_16(ipv4_hdr->fragment_offset);
 	if (unlikely(IS_FRAGMENTED(frag_off))) {
 		return 0;
diff --git a/lib/gso/gso_tunnel_tcp4.c b/lib/gso/gso_tunnel_tcp4.c
index 1a7ef30ddebf..3a9159774b27 100644
--- a/lib/gso/gso_tunnel_tcp4.c
+++ b/lib/gso/gso_tunnel_tcp4.c
@@ -23,13 +23,13 @@ update_tunnel_ipv4_tcp_headers(struct rte_mbuf *pkt, uint8_t ipid_delta,
 	tcp_offset = inner_ipv4_offset + pkt->l3_len;
 
 	/* Outer IPv4 header. */
-	ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-			outer_ipv4_offset);
+	ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+			outer_ipv4_offset);
 	outer_id = rte_be_to_cpu_16(ipv4_hdr->packet_id);
 
 	/* Inner IPv4 header. */
-	ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-			inner_ipv4_offset);
+	ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+			inner_ipv4_offset);
 	inner_id = rte_be_to_cpu_16(ipv4_hdr->packet_id);
 
 	tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr + pkt->l3_len);
@@ -65,8 +65,8 @@ gso_tunnel_tcp4_segment(struct rte_mbuf *pkt,
 	int ret;
 
 	hdr_offset = pkt->outer_l2_len + pkt->outer_l3_len + pkt->l2_len;
-	inner_ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-			hdr_offset);
+	inner_ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+			hdr_offset);
 	/*
 	 * Don't process the packet whose MF bit or offset in the inner
 	 * IPv4 header are non-zero.
diff --git a/lib/gso/gso_tunnel_udp4.c b/lib/gso/gso_tunnel_udp4.c
index 1fc7a8dbc5aa..4fb275484ca8 100644
--- a/lib/gso/gso_tunnel_udp4.c
+++ b/lib/gso/gso_tunnel_udp4.c
@@ -22,13 +22,13 @@ update_tunnel_ipv4_udp_headers(struct rte_mbuf *pkt, struct rte_mbuf **segs,
 	inner_ipv4_offset = outer_udp_offset + pkt->l2_len;
 
 	/* Outer IPv4 header. */
-	ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-			outer_ipv4_offset);
+	ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+			outer_ipv4_offset);
 	outer_id = rte_be_to_cpu_16(ipv4_hdr->packet_id);
 
 	/* Inner IPv4 header. */
-	ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-			inner_ipv4_offset);
+	ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+			inner_ipv4_offset);
 	inner_id = rte_be_to_cpu_16(ipv4_hdr->packet_id);
 
 	tail_idx = nb_segs - 1;
@@ -42,9 +42,9 @@ update_tunnel_ipv4_udp_headers(struct rte_mbuf *pkt, struct rte_mbuf **segs,
 		 *
 		 * Set IP fragment offset for inner IP header.
 		 */
-		ipv4_hdr = (struct rte_ipv4_hdr *)
-			(rte_pktmbuf_mtod(segs[i], char *) +
-			 inner_ipv4_offset);
+		ipv4_hdr = rte_pktmbuf_mtod_offset(segs[i],
+			struct rte_ipv4_hdr *,
+			inner_ipv4_offset);
 		is_mf = i < tail_idx ? IPV4_HDR_MF_BIT : 0;
 		ipv4_hdr->fragment_offset =
 			rte_cpu_to_be_16(frag_offset | is_mf);
@@ -67,8 +67,8 @@ gso_tunnel_udp4_segment(struct rte_mbuf *pkt,
 	int ret;
 
 	hdr_offset = pkt->outer_l2_len + pkt->outer_l3_len + pkt->l2_len;
-	inner_ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-			hdr_offset);
+	inner_ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+			hdr_offset);
 	/*
 	 * Don't process the packet whose MF bit or offset in the inner
 	 * IPv4 header are non-zero.
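
Not part of the patch itself: for readers who have not used the macro, below is a
minimal, self-contained C sketch of the before/after pattern the coccinelle script
rewrites. The helper function names are hypothetical; only rte_pktmbuf_mtod(),
rte_pktmbuf_mtod_offset() and the rte_mbuf/rte_udp_hdr types come from DPDK.

#include <rte_mbuf.h>
#include <rte_udp.h>

/* Old style, as removed by this patch: fetch the mbuf data pointer as
 * char *, add the byte offset by hand, then cast to the header type. */
static inline struct rte_udp_hdr *
udp_hdr_open_coded(struct rte_mbuf *pkt, uint16_t udp_offset)
{
	return (struct rte_udp_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
			udp_offset);
}

/* New style, as added by this patch: rte_pktmbuf_mtod_offset() takes the
 * mbuf, the target pointer type and the offset, folding the cast and the
 * pointer arithmetic into a single expression. */
static inline struct rte_udp_hdr *
udp_hdr_mtod_offset(struct rte_mbuf *pkt, uint16_t udp_offset)
{
	return rte_pktmbuf_mtod_offset(pkt, struct rte_udp_hdr *, udp_offset);
}

Both helpers return the same pointer; the macro form is shorter and drops the
intermediate char * cast, which is what makes the conversion purely mechanical.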