From patchwork Tue Nov 9 09:42:04 2021
X-Patchwork-Submitter: Satheesh Paul Antonysamy
X-Patchwork-Id: 104044
X-Patchwork-Delegate: jerinj@marvell.com
Subject: [dpdk-dev] [PATCH 22.02 2/2] net/cnxk: add devargs for configuring
 SDP channel mask
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: Satheesh Paul
Date: Tue, 9 Nov 2021 15:12:04 +0530
Message-ID: <20211109094204.2343402-2-psatheesh@marvell.com>
In-Reply-To: <20211109094204.2343402-1-psatheesh@marvell.com>
References: <20211109094204.2343402-1-psatheesh@marvell.com>
X-Mailer: git-send-email 2.25.4

From: Satheesh Paul

This patch adds support to configure the channel mask that rte_flow will use
when adding flow rules on SDP interfaces.
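As an aside, the masked matching this enables can be pictured with a small
sketch (not part of the patch; the helper below is hypothetical): a packet's
SDP channel matches a rule when the bits selected by the mask agree with the
configured channel value.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical helper: only the bits set in 'rule_mask' are compared, so a
     * single rule can cover a whole range of SDP channels (rings).
     */
    static bool
    sdp_channel_matches(uint16_t pkt_chan, uint16_t rule_chan, uint16_t rule_mask)
    {
            return (pkt_chan & rule_mask) == (rule_chan & rule_mask);
    }

With the 0x700/0xf00 example used in the documentation below, channels
0x700-0x7ff would match while 0x800 would not.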
Signed-off-by: Satheesh Paul
---
 doc/guides/nics/cnxk.rst               | 21 ++++++++++++++
 drivers/net/cnxk/cnxk_ethdev_devargs.c | 40 ++++++++++++++++++++++++--
 2 files changed, 59 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 837ffc02b4..470e01b811 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -276,6 +276,27 @@ Runtime Config Options
    set with this custom mask, inbound encrypted traffic from all ports with
    matching channel number pattern will be directed to the inline IPSec device.
 
+- ``SDP device channel and mask`` (default ``none``)
+   Set channel and channel mask configuration for the SDP device. This
+   will be used when creating flow rules on the SDP device.
+
+   By default, for rules created on the SDP device, the RTE Flow API sets the
+   channel number and mask to cover the entire SDP channel range in the channel
+   field of the MCAM entry. This behaviour can be modified using the
+   ``sdp_channel_mask`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:1d:00.0,sdp_channel_mask=0x700/0xf00
+
+   With the above configuration, RTE Flow rules API will set the channel
+   and channel mask as 0x700 and 0xF00 in the MCAM entries of the flow rules
+   created on the SDP device. This option needs to be used when more than one
+   SDP interface is in use and RTE Flow rules created need to distinguish
+   between traffic from each SDP interface. The channel and mask combination
+   specified should match all the channels(or rings) configured on the SDP
+   interface.
+
 .. note::
 
    Above devarg parameters are configurable per device, user needs to pass the
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index e068f55349..ad7babdf52 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -7,6 +7,12 @@
 
 #include "cnxk_ethdev.h"
 
+struct sdp_channel {
+	bool is_sdp_mask_set;
+	uint16_t channel;
+	uint16_t mask;
+};
+
 static int
 parse_outb_nb_desc(const char *key, const char *value, void *extra_args)
 {
@@ -164,6 +170,27 @@ parse_switch_header_type(const char *key, const char *value, void *extra_args)
 	return 0;
 }
 
+static int
+parse_sdp_channel_mask(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint16_t chan = 0, mask = 0;
+	char *next = 0;
+
+	/* next will point to the separator '/' */
+	chan = strtol(value, &next, 16);
+	mask = strtol(++next, 0, 16);
+
+	if (chan > GENMASK(11, 0) || mask > GENMASK(11, 0))
+		return -EINVAL;
+
+	((struct sdp_channel *)extra_args)->channel = chan;
+	((struct sdp_channel *)extra_args)->mask = mask;
+	((struct sdp_channel *)extra_args)->is_sdp_mask_set = true;
+
+	return 0;
+}
+
 #define CNXK_RSS_RETA_SIZE	"reta_size"
 #define CNXK_SCL_ENABLE		"scalar_enable"
 #define CNXK_MAX_SQB_COUNT	"max_sqb_count"
@@ -177,6 +204,7 @@ parse_switch_header_type(const char *key, const char *value, void *extra_args)
 #define CNXK_OUTB_NB_DESC	"outb_nb_desc"
 #define CNXK_FORCE_INB_INL_DEV	"force_inb_inl_dev"
 #define CNXK_OUTB_NB_CRYPTO_QS	"outb_nb_crypto_qs"
+#define CNXK_SDP_CHANNEL_MASK	"sdp_channel_mask"
 
 int
 cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
@@ -191,11 +219,14 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
 	uint16_t force_inb_inl_dev = 0;
 	uint16_t outb_nb_crypto_qs = 1;
 	uint16_t outb_nb_desc = 8200;
+	struct sdp_channel sdp_chan;
 	uint16_t rss_tag_as_xor = 0;
 	uint16_t scalar_enable = 0;
 	uint8_t lock_rx_ctx = 0;
 	struct rte_kvargs *kvlist;
 
+	memset(&sdp_chan, 0, sizeof(sdp_chan));
+
 	if (devargs == NULL)
 		goto null_devargs;
 
@@ -228,6 +259,8 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
 			   &parse_outb_nb_crypto_qs, &outb_nb_crypto_qs);
 	rte_kvargs_process(kvlist, CNXK_FORCE_INB_INL_DEV, &parse_flag,
 			   &force_inb_inl_dev);
+	rte_kvargs_process(kvlist, CNXK_SDP_CHANNEL_MASK,
+			   &parse_sdp_channel_mask, &sdp_chan);
 	rte_kvargs_free(kvlist);
 
 null_devargs:
@@ -246,8 +279,10 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
 	dev->npc.flow_prealloc_size = flow_prealloc_size;
 	dev->npc.flow_max_priority = flow_max_priority;
 	dev->npc.switch_header_type = switch_header_type;
+	dev->npc.sdp_channel = sdp_chan.channel;
+	dev->npc.sdp_channel_mask = sdp_chan.mask;
+	dev->npc.is_sdp_mask_set = sdp_chan.is_sdp_mask_set;
 	return 0;
-
 exit:
 	return -EINVAL;
 }
@@ -263,4 +298,5 @@ RTE_PMD_REGISTER_PARAM_STRING(net_cnxk,
 			      CNXK_IPSEC_IN_MAX_SPI "=<1-65535>"
 			      CNXK_OUTB_NB_DESC "=<1-65535>"
 			      CNXK_OUTB_NB_CRYPTO_QS "=<1-64>"
-			      CNXK_FORCE_INB_INL_DEV "=1"
+			      CNXK_FORCE_INB_INL_DEV "=1"
+			      CNXK_SDP_CHANNEL_MASK "=<1-4095>/<1-4095>");
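As a usage illustration of the devargs format accepted by
parse_sdp_channel_mask() above, the following standalone sketch (assumed code,
not taken from the driver) splits a "<channel>/<mask>" string with strtol()
and applies the same 12-bit bound, since GENMASK(11, 0) is 0xFFF:

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define SDP_CHAN_MAX 0xFFF /* 12-bit channel field, i.e. GENMASK(11, 0) */

    /* Hypothetical stand-in for the devargs callback: parse "<chan>/<mask>" in hex. */
    static int
    parse_chan_mask(const char *value, uint16_t *chan, uint16_t *mask)
    {
            char *sep = NULL;
            long c, m;

            c = strtol(value, &sep, 16);
            if (sep == NULL || *sep != '/')
                    return -EINVAL;
            m = strtol(sep + 1, NULL, 16);

            if (c < 0 || c > SDP_CHAN_MAX || m < 0 || m > SDP_CHAN_MAX)
                    return -EINVAL;

            *chan = (uint16_t)c;
            *mask = (uint16_t)m;
            return 0;
    }

    int
    main(void)
    {
            uint16_t chan, mask;

            /* Mirrors the documented devarg sdp_channel_mask=0x700/0xf00. */
            if (parse_chan_mask("0x700/0xf00", &chan, &mask) == 0)
                    printf("channel=0x%x mask=0x%x\n", chan, mask);
            return 0;
    }

Note that the in-tree callback assumes a well-formed "<chan>/<mask>" value;
the explicit separator check here exists only to keep the standalone demo
self-contained.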