From patchwork Wed Jan 18 12:55:53 2023
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 122307
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
CC: Viacheslav Ovsiienko
Subject: [PATCH 2/5] net/mlx5: remove code duplication
Date: Wed, 18 Jan 2023 14:55:53 +0200
Message-ID: <20230118125556.23622-3-getelson@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230118125556.23622-1-getelson@nvidia.com>
References: <20230118125556.23622-1-getelson@nvidia.com>

Replace duplicated code with dedicated functions

Signed-off-by: Gregory Etelson
---
 drivers/net/mlx5/mlx5.h         |   6 +-
 drivers/net/mlx5/mlx5_flow_hw.c | 182 ++++++++++++++++----------
 2 files changed, 95 insertions(+), 93 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index eaf2ad69fb..7c6bc91ddf 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -344,11 +344,11 @@ struct mlx5_lb_ctx {
 };
 
 /* HW steering queue job descriptor type. */
-enum {
+enum mlx5_hw_job_type {
 	MLX5_HW_Q_JOB_TYPE_CREATE, /* Flow create job type. */
 	MLX5_HW_Q_JOB_TYPE_DESTROY, /* Flow destroy job type. */
-	MLX5_HW_Q_JOB_TYPE_UPDATE,
-	MLX5_HW_Q_JOB_TYPE_QUERY,
+	MLX5_HW_Q_JOB_TYPE_UPDATE, /* Flow update job type. */
+	MLX5_HW_Q_JOB_TYPE_QUERY, /* Flow query job type. */
 };
 
 #define MLX5_HW_MAX_ITEMS (16)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index df5883f340..04d0612ee1 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7532,6 +7532,67 @@ flow_hw_action_handle_validate(struct rte_eth_dev *dev, uint32_t queue,
 	return 0;
 }
 
+static __rte_always_inline bool
+flow_hw_action_push(const struct rte_flow_op_attr *attr)
+{
+	return attr ? !attr->postpone : true;
+}
+
+static __rte_always_inline struct mlx5_hw_q_job *
+flow_hw_job_get(struct mlx5_priv *priv, uint32_t queue)
+{
+	return priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
+}
+
+static __rte_always_inline void
+flow_hw_job_put(struct mlx5_priv *priv, uint32_t queue)
+{
+	priv->hw_q[queue].job_idx++;
+}
+
+static __rte_always_inline struct mlx5_hw_q_job *
+flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+			const struct rte_flow_action_handle *handle,
+			void *user_data, void *query_data,
+			enum mlx5_hw_job_type type,
+			struct rte_flow_error *error)
+{
+	struct mlx5_hw_q_job *job;
+
+	MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
+	if (unlikely(!priv->hw_q[queue].job_idx)) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL,
+				   "Action destroy failed due to queue full.");
+		return NULL;
+	}
+	job = flow_hw_job_get(priv, queue);
+	job->type = type;
+	job->action = handle;
+	job->user_data = user_data;
+	job->query.user = query_data;
+	return job;
+}
+
+static __rte_always_inline void
+flow_hw_action_finalize(struct rte_eth_dev *dev, uint32_t queue,
+			struct mlx5_hw_q_job *job,
+			bool push, bool aso, bool status)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	if (likely(status)) {
+		if (push)
+			__flow_hw_push_action(dev, queue);
+		if (!aso)
+			rte_ring_enqueue(push ?
+					 priv->hw_q[queue].indir_cq :
+					 priv->hw_q[queue].indir_iq,
+					 job);
+	} else {
+		flow_hw_job_put(priv, queue);
+	}
+}
+
 /**
  * Create shared action.
  *
@@ -7569,21 +7630,15 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 	cnt_id_t cnt_id;
 	uint32_t mtr_id;
 	uint32_t age_idx;
-	bool push = true;
+	bool push = flow_hw_action_push(attr);
 	bool aso = false;
 
 	if (attr) {
-		MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
-		if (unlikely(!priv->hw_q[queue].job_idx)) {
-			rte_flow_error_set(error, ENOMEM,
-					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-					   "Flow queue full.");
+		job = flow_hw_action_job_init(priv, queue, NULL, user_data,
+					      NULL, MLX5_HW_Q_JOB_TYPE_CREATE,
+					      error);
+		if (!job)
 			return NULL;
-		}
-		job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
-		job->type = MLX5_HW_Q_JOB_TYPE_CREATE;
-		job->user_data = user_data;
-		push = !attr->postpone;
 	}
 	switch (action->type) {
 	case RTE_FLOW_ACTION_TYPE_AGE:
@@ -7646,17 +7701,9 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 		break;
 	}
 	if (job) {
-		if (!handle) {
-			priv->hw_q[queue].job_idx++;
-			return NULL;
-		}
 		job->action = handle;
-		if (push)
-			__flow_hw_push_action(dev, queue);
-		if (aso)
-			return handle;
-		rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
-				 priv->hw_q[queue].indir_iq, job);
+		flow_hw_action_finalize(dev, queue, job, push, aso,
+					handle != NULL);
 	}
 	return handle;
 }
@@ -7704,19 +7751,15 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
 	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
 	uint32_t idx = act_idx & ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1);
 	int ret = 0;
-	bool push = true;
+	bool push = flow_hw_action_push(attr);
 	bool aso = false;
 
 	if (attr) {
-		MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
-		if (unlikely(!priv->hw_q[queue].job_idx))
-			return rte_flow_error_set(error, ENOMEM,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				"Action update failed due to queue full.");
-		job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
-		job->type = MLX5_HW_Q_JOB_TYPE_UPDATE;
-		job->user_data = user_data;
-		push = !attr->postpone;
+		job = flow_hw_action_job_init(priv, queue, handle, user_data,
+					      NULL, MLX5_HW_Q_JOB_TYPE_UPDATE,
+					      error);
+		if (!job)
+			return -rte_errno;
 	}
 	switch (type) {
 	case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -7779,19 +7822,8 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
 						  "action type not supported");
 		break;
 	}
-	if (job) {
-		if (ret) {
-			priv->hw_q[queue].job_idx++;
-			return ret;
-		}
-		job->action = handle;
-		if (push)
-			__flow_hw_push_action(dev, queue);
-		if (aso)
-			return 0;
-		rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
-				 priv->hw_q[queue].indir_iq, job);
-	}
+	if (job)
+		flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
 	return ret;
 }
 
@@ -7830,20 +7862,16 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
 	struct mlx5_hw_q_job *job = NULL;
 	struct mlx5_aso_mtr *aso_mtr;
 	struct mlx5_flow_meter_info *fm;
-	bool push = true;
+	bool push = flow_hw_action_push(attr);
 	bool aso = false;
 	int ret = 0;
 
 	if (attr) {
-		MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
-		if (unlikely(!priv->hw_q[queue].job_idx))
-			return rte_flow_error_set(error, ENOMEM,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				"Action destroy failed due to queue full.");
-		job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
-		job->type = MLX5_HW_Q_JOB_TYPE_DESTROY;
-		job->user_data = user_data;
-		push = !attr->postpone;
+		job = flow_hw_action_job_init(priv, queue, handle, user_data,
+					      NULL, MLX5_HW_Q_JOB_TYPE_DESTROY,
+					      error);
+		if (!job)
+			return -rte_errno;
 	}
 	switch (type) {
 	case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -7906,19 +7934,8 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
 						  "action type not supported");
 		break;
 	}
-	if (job) {
-		if (ret) {
-			priv->hw_q[queue].job_idx++;
-			return ret;
-		}
-		job->action = handle;
-		if (push)
-			__flow_hw_push_action(dev, queue);
-		if (aso)
-			return ret;
-		rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
-				 priv->hw_q[queue].indir_iq, job);
-	}
+	if (job)
+		flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
 	return ret;
 }
 
@@ -8155,19 +8172,15 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
 	uint32_t age_idx = act_idx & MLX5_HWS_AGE_IDX_MASK;
 	int ret;
-	bool push = true;
+	bool push = flow_hw_action_push(attr);
 	bool aso = false;
 
 	if (attr) {
-		MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
-		if (unlikely(!priv->hw_q[queue].job_idx))
-			return rte_flow_error_set(error, ENOMEM,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				"Action destroy failed due to queue full.");
-		job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
-		job->type = MLX5_HW_Q_JOB_TYPE_QUERY;
-		job->user_data = user_data;
-		push = !attr->postpone;
+		job = flow_hw_action_job_init(priv, queue, handle, user_data,
+					      data, MLX5_HW_Q_JOB_TYPE_QUERY,
+					      error);
+		if (!job)
+			return -rte_errno;
 	}
 	switch (type) {
 	case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -8190,19 +8203,8 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 						  "action type not supported");
 		break;
 	}
-	if (job) {
-		if (ret) {
-			priv->hw_q[queue].job_idx++;
-			return ret;
-		}
-		job->action = handle;
-		if (push)
-			__flow_hw_push_action(dev, queue);
-		if (aso)
-			return ret;
-		rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
-				 priv->hw_q[queue].indir_iq, job);
-	}
+	if (job)
+		flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
 	return 0;
 }
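
The refactor gives every indirect-action entry point the same three-step
shape: reserve and fill a job descriptor with flow_hw_action_job_init(),
perform the action-specific work, then hand the outcome to
flow_hw_action_finalize(), which pushes/enqueues the job on success or
recycles it on failure. The self-contained C mock below only sketches that
shape; it is not mlx5 driver code, every identifier in it (struct queue,
job_init, job_finalize, ...) is invented for illustration, and the real
helpers additionally handle the ASO path, the postpone attribute, and the
indirect completion/in-flight rings.

/*
 * Illustrative sketch only -- not part of the patch and not mlx5 code.
 */
#include <stdbool.h>
#include <stdio.h>

enum job_type { JOB_CREATE, JOB_DESTROY, JOB_UPDATE, JOB_QUERY };

struct job {
	enum job_type type;
	void *user_data;
};

#define QUEUE_DEPTH 4

struct queue {
	struct job pool[QUEUE_DEPTH];
	struct job *free[QUEUE_DEPTH];
	unsigned int idx; /* number of free job descriptors left */
};

/* Counterpart of flow_hw_job_get(): pop a preallocated descriptor. */
static struct job *
job_get(struct queue *q)
{
	return q->free[--q->idx];
}

/* Counterpart of flow_hw_job_put(): return a descriptor to the pool. */
static void
job_put(struct queue *q)
{
	q->idx++;
}

/* Counterpart of flow_hw_action_job_init(): the common "reserve and fill"
 * step that used to be open-coded in create/update/destroy/query. */
static struct job *
job_init(struct queue *q, enum job_type type, void *user_data)
{
	struct job *j;

	if (q->idx == 0)
		return NULL; /* queue full */
	j = job_get(q);
	j->type = type;
	j->user_data = user_data;
	return j;
}

/* Counterpart of flow_hw_action_finalize(): keep the job in flight when
 * the operation succeeded, recycle it immediately when it failed. */
static void
job_finalize(struct queue *q, struct job *j, bool status)
{
	(void)j; /* the real helper would enqueue it for completion */
	if (!status)
		job_put(q);
}

int
main(void)
{
	struct queue q = { .idx = QUEUE_DEPTH };
	struct job *j;
	unsigned int i;

	for (i = 0; i < QUEUE_DEPTH; i++)
		q.free[i] = &q.pool[i];
	/* Every call site now follows the same three-step pattern. */
	j = job_init(&q, JOB_CREATE, NULL);
	if (j != NULL) {
		bool status = true; /* ... action-specific work here ... */
		job_finalize(&q, j, status);
	}
	printf("free descriptors left: %u\n", q.idx);
	return 0;
}

With this shape a failed operation cannot leak a descriptor: the only exit
paths are an early return before the descriptor is reserved, or a finalize
call with a failure status that returns it to the pool.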