From patchwork Sat Oct 29 03:27:25 2022
X-Patchwork-Submitter: "Xing, Beilei"
X-Patchwork-Id: 119281
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: beilei.xing@intel.com
To: andrew.rybchenko@oktetlabs.ru, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, Junfeng Guo, Xiaoyun Li
Subject: [PATCH v15 14/18] net/idpf: add support for RSS
Date: Sat, 29 Oct 2022 03:27:25 +0000
Message-Id: <20221029032729.22772-15-beilei.xing@intel.com>
In-Reply-To: <20221029032729.22772-1-beilei.xing@intel.com>
References: <20221027074729.1494529-1-junfeng.guo@intel.com>
 <20221029032729.22772-1-beilei.xing@intel.com>

From: Junfeng Guo

Add RSS support. Report the RSS offload capabilities, initialise the RSS
key, lookup table and hash configuration for each vport, and program them
into the device through the VIRTCHNL2_OP_SET_RSS_KEY/LUT/HASH virtchnl
messages.
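[Editor's note: the snippet below is an illustrative, application-side sketch
and is not part of this patch. It shows how an application might request RSS
from a PMD that advertises these capabilities; port_id, queue counts and the
chosen rss_hf flags are placeholder values. Passing a NULL rss_key lets the
driver fall back to a random default key, which is what idpf_init_rss() in
this patch does.]

/* Illustrative only -- not part of this patch. */
#include <rte_ethdev.h>

static int
example_enable_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			/* idpf_dev_configure() requires RSS mode when
			 * more than one Rx queue is configured.
			 */
			.mq_mode = RTE_ETH_MQ_RX_RSS,
		},
		.rx_adv_conf = {
			.rss_conf = {
				.rss_key = NULL, /* NULL: driver picks a random key */
				.rss_hf = RTE_ETH_RSS_IPV4 |
					  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
					  RTE_ETH_RSS_NONFRAG_IPV4_UDP,
			},
		},
	};

	/* rte_eth_dev_configure() reaches idpf_dev_configure(), which calls
	 * idpf_init_rss() when the adapter reports RSS capabilities.
	 */
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}

[With no key supplied, the driver then programs a random key (at most
IDPF_RSS_KEY_LEN, i.e. 52 bytes) and a lookup table that spreads entries
round-robin over the configured Rx queues, as implemented in idpf_init_rss()
below.]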
Signed-off-by: Beilei Xing
Signed-off-by: Xiaoyun Li
Signed-off-by: Junfeng Guo
---
 drivers/net/idpf/idpf_ethdev.c | 120 ++++++++++++++++++++++++++++++++-
 drivers/net/idpf/idpf_ethdev.h |  26 +++++++
 drivers/net/idpf/idpf_vchnl.c  | 113 +++++++++++++++++++++++++++++++
 3 files changed, 258 insertions(+), 1 deletion(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 957cc10616..58560ea404 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -59,6 +59,8 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = dev_info->max_rx_pktlen - IDPF_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
@@ -169,6 +171,8 @@ idpf_parse_devarg_id(char *name)
 	return val;
 }
 
+#define IDPF_RSS_KEY_LEN 52
+
 static int
 idpf_init_vport(struct rte_eth_dev *dev)
 {
@@ -189,6 +193,10 @@ idpf_init_vport(struct rte_eth_dev *dev)
 	vport->max_mtu = vport_info->max_mtu;
 	rte_memcpy(vport->default_mac_addr,
 		   vport_info->default_mac_addr, ETH_ALEN);
+	vport->rss_algorithm = vport_info->rss_algorithm;
+	vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+				      vport_info->rss_key_size);
+	vport->rss_lut_size = vport_info->rss_lut_size;
 	vport->sw_idx = idx;
 
 	for (i = 0; i < vport_info->chunks.num_chunks; i++) {
@@ -246,17 +254,110 @@ idpf_init_vport(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+idpf_config_rss(struct idpf_vport *vport)
+{
+	int ret;
+
+	ret = idpf_vc_set_rss_key(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+	ret = idpf_vc_set_rss_lut(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+		return ret;
+	}
+
+	ret = idpf_vc_set_rss_hash(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return ret;
+}
+
+static int
+idpf_init_rss(struct idpf_vport *vport)
+{
+	struct rte_eth_rss_conf *rss_conf;
+	uint16_t i, nb_q, lut_size;
+	int ret = 0;
+
+	rss_conf = &vport->dev_data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = vport->dev_data->nb_rx_queues;
+
+	vport->rss_key = rte_zmalloc("rss_key",
+				     vport->rss_key_size, 0);
+	if (vport->rss_key == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
+		ret = -ENOMEM;
+		goto err_alloc_key;
+	}
+
+	lut_size = vport->rss_lut_size;
+	vport->rss_lut = rte_zmalloc("rss_lut",
+				     sizeof(uint32_t) * lut_size, 0);
+	if (vport->rss_lut == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate RSS lut");
+		ret = -ENOMEM;
+		goto err_alloc_lut;
+	}
+
+	if (rss_conf->rss_key == NULL) {
+		for (i = 0; i < vport->rss_key_size; i++)
+			vport->rss_key[i] = (uint8_t)rte_rand();
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_INIT_LOG(ERR, "Invalid RSS key length in RSS configuration, should be %d",
+			     vport->rss_key_size);
+		ret = -EINVAL;
+		goto err_cfg_key;
+	} else {
+		rte_memcpy(vport->rss_key, rss_conf->rss_key,
+			   vport->rss_key_size);
+	}
+
+	for (i = 0; i < lut_size; i++)
+		vport->rss_lut[i] = i % nb_q;
+
+	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
+
+	ret = idpf_config_rss(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS");
+		goto err_cfg_key;
+	}
+
+	return ret;
+
+err_cfg_key:
+	rte_free(vport->rss_lut);
+	vport->rss_lut = NULL;
+err_alloc_lut:
+	rte_free(vport->rss_key);
+	vport->rss_key = NULL;
+err_alloc_key:
+	return ret;
+}
+
 static int
 idpf_dev_configure(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret;
 
 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
 		return -ENOTSUP;
 	}
 
-	if (dev->data->nb_rx_queues == 1 && conf->rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) {
+	if ((dev->data->nb_rx_queues == 1 && conf->rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) ||
+	    (dev->data->nb_rx_queues > 1 && conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)) {
 		PMD_INIT_LOG(ERR, "Multi-queue packet distribution mode %d is not supported",
 			     conf->rxmode.mq_mode);
 		return -ENOTSUP;
@@ -294,6 +395,17 @@ idpf_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	if (adapter->caps->rss_caps != 0 && dev->data->nb_rx_queues != 0) {
+		ret = idpf_init_rss(vport);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init rss");
+			return ret;
+		}
+	} else {
+		PMD_INIT_LOG(ERR, "RSS is not supported.");
+		return -1;
+	}
+
 	return 0;
 }
 
@@ -500,6 +612,12 @@ idpf_dev_close(struct rte_eth_dev *dev)
 
 	idpf_vc_destroy_vport(vport);
 
+	rte_free(vport->rss_lut);
+	vport->rss_lut = NULL;
+
+	rte_free(vport->rss_key);
+	vport->rss_key = NULL;
+
 	rte_free(vport->recv_vectors);
 	vport->recv_vectors = NULL;
 
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 811240c386..8d0804f603 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -48,6 +48,20 @@
 #define IDPF_ETH_OVERHEAD \
 	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + IDPF_VLAN_TAG_SIZE * 2)
 
+#define IDPF_RSS_OFFLOAD_ALL ( \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
+
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
 /* Message type read in virtual channel from PF */
@@ -90,11 +104,20 @@ struct idpf_vport {
 	uint16_t max_mtu;
 	uint8_t default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
 
+	enum virtchnl_rss_algorithm rss_algorithm;
+	uint16_t rss_key_size;
+	uint16_t rss_lut_size;
+
 	uint16_t sw_idx; /* SW idx */
 
 	struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
 	uint16_t max_pkt_len; /* Maximum packet length */
 
+	/* RSS info */
+	uint32_t *rss_lut;
+	uint8_t *rss_key;
+	uint64_t rss_hf;
+
 	/* MSIX info*/
 	struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
 	uint16_t max_vectors;
@@ -200,6 +223,9 @@ int idpf_get_pkt_type(struct idpf_adapter *adapter);
 int idpf_vc_get_caps(struct idpf_adapter *adapter);
 int idpf_vc_create_vport(struct idpf_adapter *adapter);
 int idpf_vc_destroy_vport(struct idpf_vport *vport);
+int idpf_vc_set_rss_key(struct idpf_vport *vport);
+int idpf_vc_set_rss_lut(struct idpf_vport *vport);
+int idpf_vc_set_rss_hash(struct idpf_vport *vport);
 int idpf_vc_config_rxqs(struct idpf_vport *vport);
 int idpf_vc_config_rxq(struct idpf_vport *vport, uint16_t rxq_id);
 int idpf_vc_config_txqs(struct idpf_vport *vport);
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 827689f7f5..9f72ae6264 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -224,6 +224,9 @@ idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	case VIRTCHNL2_OP_GET_CAPS:
 	case VIRTCHNL2_OP_CREATE_VPORT:
 	case VIRTCHNL2_OP_DESTROY_VPORT:
+	case VIRTCHNL2_OP_SET_RSS_KEY:
+	case VIRTCHNL2_OP_SET_RSS_LUT:
+	case VIRTCHNL2_OP_SET_RSS_HASH:
 	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
 	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
 	case VIRTCHNL2_OP_ENABLE_QUEUES:
@@ -525,6 +528,22 @@ idpf_vc_get_caps(struct idpf_adapter *adapter)
 
 	memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
 
+	caps_msg.rss_caps =
+		VIRTCHNL2_CAP_RSS_IPV4_TCP |
+		VIRTCHNL2_CAP_RSS_IPV4_UDP |
+		VIRTCHNL2_CAP_RSS_IPV4_SCTP |
+		VIRTCHNL2_CAP_RSS_IPV4_OTHER |
+		VIRTCHNL2_CAP_RSS_IPV6_TCP |
+		VIRTCHNL2_CAP_RSS_IPV6_UDP |
+		VIRTCHNL2_CAP_RSS_IPV6_SCTP |
+		VIRTCHNL2_CAP_RSS_IPV6_OTHER |
+		VIRTCHNL2_CAP_RSS_IPV4_AH |
+		VIRTCHNL2_CAP_RSS_IPV4_ESP |
+		VIRTCHNL2_CAP_RSS_IPV4_AH_ESP |
+		VIRTCHNL2_CAP_RSS_IPV6_AH |
+		VIRTCHNL2_CAP_RSS_IPV6_ESP |
+		VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
+
 	caps_msg.other_caps = VIRTCHNL2_CAP_WB_ON_ITR;
 
 	args.ops = VIRTCHNL2_OP_GET_CAPS;
@@ -615,6 +634,100 @@ idpf_vc_destroy_vport(struct idpf_vport *vport)
 	return err;
 }
 
+int
+idpf_vc_set_rss_key(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_key *rss_key;
+	struct idpf_cmd_info args;
+	int len, err;
+
+	len = sizeof(*rss_key) + sizeof(rss_key->key[0]) *
+		(vport->rss_key_size - 1);
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (rss_key == NULL)
+		return -ENOMEM;
+
+	rss_key->vport_id = vport->vport_id;
+	rss_key->key_len = vport->rss_key_size;
+	rte_memcpy(rss_key->key, vport->rss_key,
+		   sizeof(rss_key->key[0]) * vport->rss_key_size);
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_SET_RSS_KEY;
+	args.in_args = (uint8_t *)rss_key;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_KEY");
+
+	rte_free(rss_key);
+	return err;
+}
+
+int
+idpf_vc_set_rss_lut(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_lut *rss_lut;
+	struct idpf_cmd_info args;
+	int len, err;
+
+	len = sizeof(*rss_lut) + sizeof(rss_lut->lut[0]) *
+		(vport->rss_lut_size - 1);
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (rss_lut == NULL)
+		return -ENOMEM;
+
+	rss_lut->vport_id = vport->vport_id;
+	rss_lut->lut_entries = vport->rss_lut_size;
+	rte_memcpy(rss_lut->lut, vport->rss_lut,
+		   sizeof(rss_lut->lut[0]) * vport->rss_lut_size);
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_SET_RSS_LUT;
+	args.in_args = (uint8_t *)rss_lut;
+	args.in_args_size = len;
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
+		PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_LUT");
+
+	rte_free(rss_lut);
+	return err;
+}
+
+int
+idpf_vc_set_rss_hash(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_hash rss_hash;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_hash, 0, sizeof(rss_hash));
+	rss_hash.ptype_groups = vport->rss_hf;
+	rss_hash.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_SET_RSS_HASH;
+	args.in_args = (uint8_t *)&rss_hash;
+	args.in_args_size = sizeof(rss_hash);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err != 0)
command of OP_SET_RSS_HASH"); + + return err; +} + #define IDPF_RX_BUF_STRIDE 64 int idpf_vc_config_rxqs(struct idpf_vport *vport)