From patchwork Tue Jun 14 09:10:13 2022
X-Patchwork-Submitter: "Li, Xiaoyun" <xiaoyun.li@intel.com>
X-Patchwork-Id: 112718
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Xiaoyun Li <xiaoyun.li@intel.com>
To: ciara.loftus@intel.com, qi.z.zhang@intel.com, dev@dpdk.org
Cc: Xiaoyun Li <xiaoyun.li@intel.com>
Subject: [PATCH v2] net/af_xdp: allow using copy mode in XSK
Date: Tue, 14 Jun 2022 17:10:13 +0800
Message-Id: <20220614091013.1407008-1-xiaoyun.li@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220613151231.1359592-1-xiaoyun.li@intel.com>
References: <20220613151231.1359592-1-xiaoyun.li@intel.com>

DPDK assumes that users always want the AF_XDP socket (XSK) in zero-copy
mode when the kernel supports it. However, the kernel driver sometimes
doesn't support zero copy well, and copy mode is more stable and
preferred. This patch allows using the devarg "force_copy=1" to force
the AF_XDP socket into copy mode.

Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
v2:
 * Change name from no_zerocopy to force_copy.
---
 doc/guides/nics/af_xdp.rst          |  2 ++
 drivers/net/af_xdp/rte_eth_af_xdp.c | 25 ++++++++++++++++++++-----
 2 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/doc/guides/nics/af_xdp.rst b/doc/guides/nics/af_xdp.rst
index 56681c8365..d42e0f1f79 100644
--- a/doc/guides/nics/af_xdp.rst
+++ b/doc/guides/nics/af_xdp.rst
@@ -36,6 +36,8 @@ The following options can be provided to set up an af_xdp port in DPDK.
    default 0);
 *  ``xdp_prog`` - path to custom xdp program (optional, default none);
 *  ``busy_budget`` - busy polling budget (optional, default 64);
+*  ``force_copy`` - PMD will force AF_XDP socket into copy mode (optional,
+   default 0);
 
 Prerequisites
 -------------
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 1e37da6e84..fce649c2a1 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -150,6 +150,7 @@ struct pmd_internals {
 	bool shared_umem;
 	char prog_path[PATH_MAX];
 	bool custom_prog_configured;
+	bool force_copy;
 	struct bpf_map *map;
 
 	struct rte_ether_addr eth_addr;
@@ -168,6 +169,7 @@ struct pmd_process_private {
 #define ETH_AF_XDP_SHARED_UMEM_ARG	"shared_umem"
 #define ETH_AF_XDP_PROG_ARG		"xdp_prog"
 #define ETH_AF_XDP_BUDGET_ARG		"busy_budget"
+#define ETH_AF_XDP_FORCE_COPY_ARG	"force_copy"
 
 static const char * const valid_arguments[] = {
 	ETH_AF_XDP_IFACE_ARG,
@@ -176,6 +178,7 @@ static const char * const valid_arguments[] = {
 	ETH_AF_XDP_SHARED_UMEM_ARG,
 	ETH_AF_XDP_PROG_ARG,
 	ETH_AF_XDP_BUDGET_ARG,
+	ETH_AF_XDP_FORCE_COPY_ARG,
 	NULL
 };
 
@@ -1308,6 +1311,10 @@ xsk_configure(struct pmd_internals *internals, struct pkt_rx_queue *rxq,
 	cfg.xdp_flags = XDP_FLAGS_UPDATE_IF_NOEXIST;
 	cfg.bind_flags = 0;
 
+	/* Force AF_XDP socket into copy mode when users want it */
+	if (internals->force_copy)
+		cfg.bind_flags |= XDP_COPY;
+
 #if defined(XDP_USE_NEED_WAKEUP)
 	cfg.bind_flags |= XDP_USE_NEED_WAKEUP;
 #endif
@@ -1655,7 +1662,7 @@ xdp_get_channels_info(const char *if_name, int *max_queues,
 static int
 parse_parameters(struct rte_kvargs *kvlist, char *if_name, int *start_queue,
 		 int *queue_cnt, int *shared_umem, char *prog_path,
-		 int *busy_budget)
+		 int *busy_budget, int *force_copy)
 {
 	int ret;
 
@@ -1691,6 +1698,11 @@ parse_parameters(struct rte_kvargs *kvlist, char *if_name, int *start_queue,
 	if (ret < 0)
 		goto free_kvlist;
 
+	ret = rte_kvargs_process(kvlist, ETH_AF_XDP_FORCE_COPY_ARG,
+				 &parse_integer_arg, force_copy);
+	if (ret < 0)
+		goto free_kvlist;
+
 free_kvlist:
 	rte_kvargs_free(kvlist);
 	return ret;
@@ -1729,7 +1741,7 @@ get_iface_info(const char *if_name,
 static struct rte_eth_dev *
 init_internals(struct rte_vdev_device *dev, const char *if_name,
 	       int start_queue_idx, int queue_cnt, int shared_umem,
-	       const char *prog_path, int busy_budget)
+	       const char *prog_path, int busy_budget, int force_copy)
 {
 	const char *name = rte_vdev_device_name(dev);
 	const unsigned int numa_node = dev->device.numa_node;
@@ -1757,6 +1769,7 @@ init_internals(struct rte_vdev_device *dev, const char *if_name,
 	}
 #endif
 	internals->shared_umem = shared_umem;
+	internals->force_copy = force_copy;
 
 	if (xdp_get_channels_info(if_name, &internals->max_queue_cnt,
 				  &internals->combined_queue_cnt)) {
@@ -1941,6 +1954,7 @@ rte_pmd_af_xdp_probe(struct rte_vdev_device *dev)
 	int shared_umem = 0;
 	char prog_path[PATH_MAX] = {'\0'};
 	int busy_budget = -1, ret;
+	int force_copy = 0;
 	struct rte_eth_dev *eth_dev = NULL;
 	const char *name = rte_vdev_device_name(dev);
 
@@ -1986,7 +2000,7 @@ rte_pmd_af_xdp_probe(struct rte_vdev_device *dev)
 
 	if (parse_parameters(kvlist, if_name, &xsk_start_queue_idx,
 			     &xsk_queue_cnt, &shared_umem, prog_path,
-			     &busy_budget) < 0) {
+			     &busy_budget, &force_copy) < 0) {
 		AF_XDP_LOG(ERR, "Invalid kvargs value\n");
 		return -EINVAL;
 	}
@@ -2001,7 +2015,7 @@ rte_pmd_af_xdp_probe(struct rte_vdev_device *dev)
 
 	eth_dev = init_internals(dev, if_name, xsk_start_queue_idx,
 				 xsk_queue_cnt, shared_umem, prog_path,
-				 busy_budget);
+				 busy_budget, force_copy);
 	if (eth_dev == NULL) {
 		AF_XDP_LOG(ERR, "Failed to init internals\n");
 		return -1;
@@ -2060,4 +2074,5 @@ RTE_PMD_REGISTER_PARAM_STRING(net_af_xdp,
 			      "queue_count=<int> "
 			      "shared_umem=<int> "
 			      "xdp_prog=<string> "
-			      "busy_budget=<int>");
+			      "busy_budget=<int> "
+			      "force_copy=<int>");
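
For reference only (not part of the patch): the new "force_copy" devarg is
passed like any other af_xdp devarg. The sketch below shows one way to
attach an af_xdp vdev with force_copy=1 from application code via
rte_vdev_init(); the vdev name "net_af_xdp0", the interface name "eth0" and
queue_count=1 are example values, not anything mandated by the patch.

/* Illustrative sketch only: bring up an af_xdp port in forced copy mode. */
#include <rte_eal.h>
#include <rte_bus_vdev.h>

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* force_copy=1 asks the PMD to bind the XSK with XDP_COPY. */
	if (rte_vdev_init("net_af_xdp0",
			  "iface=eth0,queue_count=1,force_copy=1") != 0) {
		rte_eal_cleanup();
		return -1;
	}

	/* ... usual ethdev configure/start and packet I/O would follow ... */

	rte_eal_cleanup();
	return 0;
}

The same effect can be had from the command line of an existing DPDK
application, e.g. "--vdev=net_af_xdp0,iface=eth0,force_copy=1" (again with
an example interface name).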