From patchwork Wed Jan 3 07:16:14 2018
X-Patchwork-Submitter: Shahaf Shuler
X-Patchwork-Id: 32841
From: Shahaf Shuler
To: nelio.laranjeiro@6wind.com, yskoh@mellanox.com, adrien.mazarguil@6wind.com
Cc: dev@dpdk.org
Date: Wed, 3 Jan 2018 09:16:14 +0200
Message-Id: <39c804584f9c50892df300b98a78dc5d0c4a72ea.1514963302.git.shahafs@mellanox.com>
X-Mailer: git-send-email 2.12.0
In-Reply-To:
References: <20171123120252.143695-1-shahafs@mellanox.com>
Subject: [dpdk-dev] [PATCH v2 4/7] net/mlx5: convert to new Tx offloads API

The ethdev Tx offloads API has changed since:

commit cba7f53b717d ("ethdev: introduce Tx queue offloads API")

This commit adds support for the new Tx offloads API.

Signed-off-by: Shahaf Shuler
Acked-by: Nelio Laranjeiro
---
 doc/guides/nics/mlx5.rst         | 15 +++----
 drivers/net/mlx5/mlx5.c          | 18 ++------
 drivers/net/mlx5/mlx5.h          |  2 +-
 drivers/net/mlx5/mlx5_ethdev.c   | 37 ++++++++--------
 drivers/net/mlx5/mlx5_rxtx.c     |  6 ++-
 drivers/net/mlx5/mlx5_rxtx.h     |  7 +--
 drivers/net/mlx5/mlx5_rxtx_vec.c | 32 +++++++-------
 drivers/net/mlx5/mlx5_rxtx_vec.h | 12 ++++++
 drivers/net/mlx5/mlx5_txq.c      | 80 ++++++++++++++++++++++++++++++++---
 9 files changed, 142 insertions(+), 67 deletions(-)
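For context, the application side of the API being adopted here looks roughly as follows: port-wide Tx offloads go into rte_eth_conf.txmode.offloads and per-queue offloads into rte_eth_txconf, with ETH_TXQ_FLAGS_IGNORE marking use of the new API. This is an illustrative sketch only; the function name, descriptor count and offload selection are placeholders rather than part of this patch.

#include <rte_ethdev.h>

/* Illustrative only: request Tx offloads through the new offloads API. */
static int
configure_port_tx_offloads(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf = { 0 };
	struct rte_eth_txconf txconf;
	uint64_t wanted = DEV_TX_OFFLOAD_IPV4_CKSUM |
			  DEV_TX_OFFLOAD_TCP_CKSUM |
			  DEV_TX_OFFLOAD_MULTI_SEGS;
	uint16_t q;
	int ret;

	/* Only request what the port reports in tx_offload_capa. */
	rte_eth_dev_info_get(port_id, &dev_info);
	conf.txmode.offloads = wanted & dev_info.tx_offload_capa;
	ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
	if (ret < 0)
		return ret;
	/* ETH_TXQ_FLAGS_IGNORE tells the PMD to use the new per-queue API. */
	txconf = dev_info.default_txconf;
	txconf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
	txconf.offloads = conf.txmode.offloads;
	for (q = 0; q < nb_txq; q++) {
		ret = rte_eth_tx_queue_setup(port_id, q, 512,
					     rte_eth_dev_socket_id(port_id),
					     &txconf);
		if (ret < 0)
			return ret;
	}
	return 0;
}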
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 154db64d7..bdc2216c0 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -262,8 +262,9 @@ Run-time configuration
   Enhanced MPS supports hybrid mode - mixing inlined packets and pointers
   in the same descriptor.
 
-  This option cannot be used in conjunction with ``tso`` below. When ``tso``
-  is set, ``txq_mpw_en`` is disabled.
+  This option cannot be used with certain offloads such as ``DEV_TX_OFFLOAD_TCP_TSO,
+  DEV_TX_OFFLOAD_VXLAN_TNL_TSO, DEV_TX_OFFLOAD_GRE_TNL_TSO, DEV_TX_OFFLOAD_VLAN_INSERT``.
+  When those offloads are requested the MPS send function will not be used.
 
   It is currently only supported on the ConnectX-4 Lx and ConnectX-5
   families of adapters. Enabled by default.
@@ -284,17 +285,15 @@ Run-time configuration
 
   Effective only when Enhanced MPS is supported. The default value is 256.
 
-- ``tso`` parameter [int]
-
-  A nonzero value enables hardware TSO.
-  When hardware TSO is enabled, packets marked with TCP segmentation
-  offload will be divided into segments by the hardware. Disabled by default.
-
 - ``tx_vec_en`` parameter [int]
 
   A nonzero value enables Tx vector on ConnectX-5 only NIC if the number of
   global Tx queues on the port is lesser than MLX5_VPMD_MIN_TXQS.
 
+  This option cannot be used with certain offloads such as ``DEV_TX_OFFLOAD_TCP_TSO,
+  DEV_TX_OFFLOAD_VXLAN_TNL_TSO, DEV_TX_OFFLOAD_GRE_TNL_TSO, DEV_TX_OFFLOAD_VLAN_INSERT``.
+  When those offloads are requested the MPS send function will not be used.
+
   Enabled by default on ConnectX-5.
 
 - ``rx_vec_en`` parameter [int]
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index ca44a0a59..1c95f3520 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -85,9 +85,6 @@
 /* Device parameter to limit the size of inlining packet. */
 #define MLX5_TXQ_MAX_INLINE_LEN "txq_max_inline_len"
 
-/* Device parameter to enable hardware TSO offload. */
-#define MLX5_TSO "tso"
-
 /* Device parameter to enable hardware Tx vector. */
 #define MLX5_TX_VEC_EN "tx_vec_en"
 
@@ -406,8 +403,6 @@ mlx5_args_check(const char *key, const char *val, void *opaque)
 		config->mpw_hdr_dseg = !!tmp;
 	} else if (strcmp(MLX5_TXQ_MAX_INLINE_LEN, key) == 0) {
 		config->inline_max_packet_sz = tmp;
-	} else if (strcmp(MLX5_TSO, key) == 0) {
-		config->tso = !!tmp;
 	} else if (strcmp(MLX5_TX_VEC_EN, key) == 0) {
 		config->tx_vec_en = !!tmp;
 	} else if (strcmp(MLX5_RX_VEC_EN, key) == 0) {
@@ -440,7 +435,6 @@ mlx5_args(struct mlx5_dev_config *config, struct rte_devargs *devargs)
 		MLX5_TXQ_MPW_EN,
 		MLX5_TXQ_MPW_HDR_DSEG_EN,
 		MLX5_TXQ_MAX_INLINE_LEN,
-		MLX5_TSO,
 		MLX5_TX_VEC_EN,
 		MLX5_RX_VEC_EN,
 		NULL,
@@ -629,7 +623,6 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 			.cqe_comp = cqe_comp,
 			.mps = mps,
 			.tunnel_en = tunnel_en,
-			.tso = 0,
 			.tx_vec_en = 1,
 			.rx_vec_en = 1,
 			.mpw_hdr_dseg = 0,
@@ -793,10 +786,9 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 		priv_get_num_vfs(priv, &num_vfs);
 		config.sriov = (num_vfs || sriov);
-		if (config.tso)
-			config.tso = ((device_attr_ex.tso_caps.max_tso > 0) &&
-				      (device_attr_ex.tso_caps.supported_qpts &
-				      (1 << IBV_QPT_RAW_PACKET)));
+		config.tso = ((device_attr_ex.tso_caps.max_tso > 0) &&
+			      (device_attr_ex.tso_caps.supported_qpts &
+			      (1 << IBV_QPT_RAW_PACKET)));
 		if (config.tso)
 			config.tso_max_payload_sz =
 					device_attr_ex.tso_caps.max_tso;
@@ -805,10 +797,6 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 			     " (" MLX5_TXQ_MPW_EN ")");
 			err = ENOTSUP;
 			goto port_error;
-		} else if (config.mps && config.tso) {
-			WARN("multi-packet send not supported in conjunction "
-			     "with TSO. MPS disabled");
-			config.mps = 0;
 		}
 		INFO("%sMPS is %s",
 		     config.mps == MLX5_MPW_ENHANCED ? "Enhanced " : "",
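The mlx5.c change above stops gating TSO on the removed "tso" devarg and derives it purely from the device capabilities. A rough standalone sketch of that probe using plain libibverbs follows; the helper name is invented and error handling is reduced to the minimum.

#include <infiniband/verbs.h>

/* Illustrative: report whether the device can do TSO on raw packet QPs. */
static int
device_supports_raw_tso(struct ibv_context *ctx)
{
	struct ibv_device_attr_ex attr_ex;

	if (ibv_query_device_ex(ctx, NULL, &attr_ex))
		return 0;
	/* Same test as the patch: non-zero max TSO size on raw packet QPs. */
	return (attr_ex.tso_caps.max_tso > 0) &&
	       (attr_ex.tso_caps.supported_qpts & (1 << IBV_QPT_RAW_PACKET));
}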
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 171b3a933..8ee522069 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -112,7 +112,7 @@ struct mlx5_dev_config {
 	unsigned int tunnel_en:1; /* Whether tunnel is supported. */
 	unsigned int flow_counter_en:1; /* Whether flow counter is supported. */
 	unsigned int cqe_comp:1; /* CQE compression is enabled. */
-	unsigned int tso:1; /* Whether TSO is enabled. */
+	unsigned int tso:1; /* Whether TSO is supported. */
 	unsigned int tx_vec_en:1; /* Tx vector is enabled. */
 	unsigned int rx_vec_en:1; /* Rx vector is enabled. */
 	unsigned int mpw_hdr_dseg:1; /* Enable DSEGs in the title WQEBB. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index d2f98769e..8be4f43f7 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -551,7 +551,15 @@ dev_configure(struct rte_eth_dev *dev)
 	unsigned int reta_idx_n;
 	const uint8_t use_app_rss_key =
 		!!dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key;
-
+	uint64_t supp_tx_offloads = mlx5_priv_get_tx_port_offloads(priv);
+	uint64_t tx_offloads = dev->data->dev_conf.txmode.offloads;
+
+	if ((tx_offloads & supp_tx_offloads) != tx_offloads) {
+		ERROR("Some Tx offloads are not supported "
+		      "requested 0x%lx supported 0x%lx\n",
+		      tx_offloads, supp_tx_offloads);
+		return ENOTSUP;
+	}
 	if (use_app_rss_key &&
 	    (dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len !=
 	     rss_hash_default_key_len)) {
@@ -672,19 +680,7 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 		(priv->config.hw_vlan_strip ? DEV_RX_OFFLOAD_VLAN_STRIP : 0) |
 		DEV_RX_OFFLOAD_TIMESTAMP;
 
-	if (!config->mps)
-		info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
-	if (config->hw_csum)
-		info->tx_offload_capa |=
-			(DEV_TX_OFFLOAD_IPV4_CKSUM |
-			 DEV_TX_OFFLOAD_UDP_CKSUM |
-			 DEV_TX_OFFLOAD_TCP_CKSUM);
-	if (config->tso)
-		info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
-	if (config->tunnel_en)
-		info->tx_offload_capa |= (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-					  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					  DEV_TX_OFFLOAD_GRE_TNL_TSO);
+	info->tx_offload_capa = mlx5_priv_get_tx_port_offloads(priv);
 	if (priv_get_ifname(priv, &ifname) == 0)
 		info->if_index = if_nametoindex(ifname);
 	info->reta_size = priv->reta_idx_n ?
@@ -1392,16 +1388,23 @@ mlx5_set_link_up(struct rte_eth_dev *dev)
  *   Pointer to selected Tx burst function.
  */
 eth_tx_burst_t
-priv_select_tx_function(struct priv *priv, __rte_unused struct rte_eth_dev *dev)
+priv_select_tx_function(struct priv *priv, struct rte_eth_dev *dev)
 {
 	eth_tx_burst_t tx_pkt_burst = mlx5_tx_burst;
 	struct mlx5_dev_config *config = &priv->config;
+	uint64_t tx_offloads = dev->data->dev_conf.txmode.offloads;
+	int tso = !!(tx_offloads & (DEV_TX_OFFLOAD_TCP_TSO |
+				    DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+				    DEV_TX_OFFLOAD_GRE_TNL_TSO));
+	int vlan_insert = !!(tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT);
 
 	assert(priv != NULL);
 	/* Select appropriate TX function. */
+	if (vlan_insert || tso)
+		return tx_pkt_burst;
 	if (config->mps == MLX5_MPW_ENHANCED) {
-		if (priv_check_vec_tx_support(priv) > 0) {
-			if (priv_check_raw_vec_tx_support(priv) > 0)
+		if (priv_check_vec_tx_support(priv, dev) > 0) {
+			if (priv_check_raw_vec_tx_support(priv, dev) > 0)
 				tx_pkt_burst = mlx5_tx_burst_raw_vec;
 			else
 				tx_pkt_burst = mlx5_tx_burst_vec;
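The dev_configure() hunk boils down to a subset test: every requested offload bit must also appear in the supported mask, otherwise ENOTSUP is returned. A standalone sketch of that test (mask values below are made up for illustration):

#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

/* Subset test used by the dev_configure() hunk above. */
static int
tx_offloads_supported(uint64_t requested, uint64_t supported)
{
	/* Equivalent to (requested & ~supported) == 0. */
	return (requested & supported) == requested;
}

int
main(void)
{
	uint64_t supported = 0x2f; /* e.g. checksums + TSO */
	uint64_t ok = 0x03;        /* subset of supported */
	uint64_t bad = 0x43;       /* contains an unsupported bit */

	printf("0x%" PRIx64 ": %d\n", ok, tx_offloads_supported(ok, supported));
	printf("0x%" PRIx64 ": %d\n", bad, tx_offloads_supported(bad, supported));
	return 0;
}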
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 67e3db168..3b8f71c28 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -1994,16 +1994,18 @@ mlx5_rx_burst_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 }
 
 int __attribute__((weak))
-priv_check_raw_vec_tx_support(struct priv *priv)
+priv_check_raw_vec_tx_support(struct priv *priv, struct rte_eth_dev *dev)
 {
 	(void)priv;
+	(void)dev;
 	return -ENOTSUP;
 }
 
 int __attribute__((weak))
-priv_check_vec_tx_support(struct priv *priv)
+priv_check_vec_tx_support(struct priv *priv, struct rte_eth_dev *dev)
 {
 	(void)priv;
+	(void)dev;
 	return -ENOTSUP;
 }
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index e70d52361..2728e8d5e 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -201,7 +201,7 @@ struct mlx5_txq_data {
 	uint16_t inline_max_packet_sz; /* Max packet size for inlining. */
 	uint16_t mr_cache_idx; /* Index of last hit entry. */
 	uint32_t qp_num_8s; /* QP number shifted by 8. */
-	uint32_t flags; /* Flags for Tx Queue. */
+	uint64_t offloads; /* Offloads for Tx Queue. */
 	volatile struct mlx5_cqe (*cqes)[]; /* Completion queue. */
 	volatile void *wqes; /* Work queue (use volatile to write into). */
 	volatile uint32_t *qp_db; /* Work queue doorbell. */
@@ -293,6 +293,7 @@ int mlx5_priv_txq_release(struct priv *, uint16_t);
 int mlx5_priv_txq_releasable(struct priv *, uint16_t);
 int mlx5_priv_txq_verify(struct priv *);
 void txq_alloc_elts(struct mlx5_txq_ctrl *);
+uint64_t mlx5_priv_get_tx_port_offloads(struct priv *);
 
 /* mlx5_rxtx.c */
 
@@ -310,8 +311,8 @@ int mlx5_rx_descriptor_status(void *, uint16_t);
 int mlx5_tx_descriptor_status(void *, uint16_t);
 
 /* Vectorized version of mlx5_rxtx.c */
-int priv_check_raw_vec_tx_support(struct priv *);
-int priv_check_vec_tx_support(struct priv *);
+int priv_check_raw_vec_tx_support(struct priv *, struct rte_eth_dev *);
+int priv_check_vec_tx_support(struct priv *, struct rte_eth_dev *);
 int rxq_check_vec_support(struct mlx5_rxq_data *);
 int priv_check_vec_rx_support(struct priv *);
 uint16_t mlx5_tx_burst_raw_vec(void *, struct rte_mbuf **, uint16_t);
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 761ed4971..f0530efbe 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -160,15 +160,15 @@ mlx5_tx_burst_vec(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		uint16_t ret;
 
 		/* Transmit multi-seg packets in the head of pkts list. */
-		if (!(txq->flags & ETH_TXQ_FLAGS_NOMULTSEGS) &&
+		if ((txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS) &&
 		    NB_SEGS(pkts[nb_tx]) > 1)
 			nb_tx += txq_scatter_v(txq,
 					       &pkts[nb_tx],
 					       pkts_n - nb_tx);
 		n = RTE_MIN((uint16_t)(pkts_n - nb_tx), MLX5_VPMD_TX_MAX_BURST);
-		if (!(txq->flags & ETH_TXQ_FLAGS_NOMULTSEGS))
+		if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
 			n = txq_count_contig_single_seg(&pkts[nb_tx], n);
-		if (!(txq->flags & ETH_TXQ_FLAGS_NOOFFLOADS))
+		if (txq->offloads & MLX5_VEC_TX_CKSUM_OFFLOAD_CAP)
 			n = txq_calc_offload(txq, &pkts[nb_tx], n, &cs_flags);
 		ret = txq_burst_v(txq, &pkts[nb_tx], n, cs_flags);
 		nb_tx += ret;
@@ -253,24 +253,20 @@ mlx5_rx_burst_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
  *
  * @param priv
  *   Pointer to private structure.
+ * @param dev
+ *   Pointer to rte_eth_dev structure.
  *
  * @return
  *   1 if supported, negative errno value if not.
  */
 int __attribute__((cold))
-priv_check_raw_vec_tx_support(struct priv *priv)
+priv_check_raw_vec_tx_support(__rte_unused struct priv *priv,
+			      struct rte_eth_dev *dev)
 {
-	uint16_t i;
-
-	/* All the configured queues should support. */
-	for (i = 0; i < priv->txqs_n; ++i) {
-		struct mlx5_txq_data *txq = (*priv->txqs)[i];
+	uint64_t offloads = dev->data->dev_conf.txmode.offloads;
 
-		if (!(txq->flags & ETH_TXQ_FLAGS_NOMULTSEGS) ||
-		    !(txq->flags & ETH_TXQ_FLAGS_NOOFFLOADS))
-			break;
-	}
-	if (i != priv->txqs_n)
+	/* Doesn't support any offload. */
+	if (offloads)
 		return -ENOTSUP;
 	return 1;
 }
@@ -280,17 +276,21 @@ priv_check_raw_vec_tx_support(struct priv *priv)
  *
  * @param priv
  *   Pointer to private structure.
+ * @param dev
+ *   Pointer to rte_eth_dev structure.
 *
 * @return
 *   1 if supported, negative errno value if not.
 */
 int __attribute__((cold))
-priv_check_vec_tx_support(struct priv *priv)
+priv_check_vec_tx_support(struct priv *priv, struct rte_eth_dev *dev)
 {
+	uint64_t offloads = dev->data->dev_conf.txmode.offloads;
+
 	if (!priv->config.tx_vec_en ||
 	    priv->txqs_n > MLX5_VPMD_MIN_TXQS ||
 	    priv->config.mps != MLX5_MPW_ENHANCED ||
-	    priv->config.tso)
+	    offloads & ~MLX5_VEC_TX_OFFLOAD_CAP)
 		return -ENOTSUP;
 	return 1;
 }
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.h b/drivers/net/mlx5/mlx5_rxtx_vec.h
index 1f08ed0b2..7d7f016f1 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.h
@@ -40,6 +40,18 @@
 #include "mlx5_autoconf.h"
 #include "mlx5_prm.h"
 
+/* HW checksum offload capabilities of vectorized Tx. */
+#define MLX5_VEC_TX_CKSUM_OFFLOAD_CAP \
+	(DEV_TX_OFFLOAD_IPV4_CKSUM | \
+	 DEV_TX_OFFLOAD_UDP_CKSUM | \
+	 DEV_TX_OFFLOAD_TCP_CKSUM | \
+	 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+
+/* HW offload capabilities of vectorized Tx. */
+#define MLX5_VEC_TX_OFFLOAD_CAP \
+	(MLX5_VEC_TX_CKSUM_OFFLOAD_CAP | \
+	 DEV_TX_OFFLOAD_MULTI_SEGS)
+
 /*
  * Compile time sanity check for vectorized functions.
  */
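With the masks added in mlx5_rxtx_vec.h, vector Tx eligibility reduces to checking that no requested offload falls outside MLX5_VEC_TX_OFFLOAD_CAP. A small sketch of that test; the VEC_* macros below simply mirror the new ones for illustration and the scenarios are made up:

#include <stdint.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Local mirrors of the capability masks added in mlx5_rxtx_vec.h. */
#define VEC_TX_CKSUM_OFFLOAD_CAP \
	(DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM | \
	 DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
#define VEC_TX_OFFLOAD_CAP \
	(VEC_TX_CKSUM_OFFLOAD_CAP | DEV_TX_OFFLOAD_MULTI_SEGS)

int
main(void)
{
	uint64_t cksum_only = DEV_TX_OFFLOAD_TCP_CKSUM;
	uint64_t with_tso = DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_TCP_TSO;

	/* Any requested bit outside the mask forces the scalar Tx path. */
	printf("checksum only -> %s\n",
	       (cksum_only & ~(uint64_t)VEC_TX_OFFLOAD_CAP) ? "scalar" : "vector");
	printf("checksum+TSO  -> %s\n",
	       (with_tso & ~(uint64_t)VEC_TX_OFFLOAD_CAP) ? "scalar" : "vector");
	return 0;
}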
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 3e2075c79..b81c85fed 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -116,6 +116,63 @@ txq_free_elts(struct mlx5_txq_ctrl *txq_ctrl)
 }
 
 /**
+ * Returns the per-port supported offloads.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ *
+ * @return
+ *   Supported Tx offloads.
+ */
+uint64_t
+mlx5_priv_get_tx_port_offloads(struct priv *priv)
+{
+	uint64_t offloads = (DEV_TX_OFFLOAD_MULTI_SEGS |
+			     DEV_TX_OFFLOAD_VLAN_INSERT);
+	struct mlx5_dev_config *config = &priv->config;
+
+	if (config->hw_csum)
+		offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
+			     DEV_TX_OFFLOAD_UDP_CKSUM |
+			     DEV_TX_OFFLOAD_TCP_CKSUM);
+	if (config->tso)
+		offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+	if (config->tunnel_en) {
+		if (config->hw_csum)
+			offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		if (config->tso)
+			offloads |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+				     DEV_TX_OFFLOAD_GRE_TNL_TSO);
+	}
+	return offloads;
+}
+
+/**
+ * Checks if the per-queue offload configuration is valid.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ * @param offloads
+ *   Per-queue offloads configuration.
+ *
+ * @return
+ *   1 if the configuration is valid, 0 otherwise.
+ */
+static int
+priv_is_tx_queue_offloads_allowed(struct priv *priv, uint64_t offloads)
+{
+	uint64_t port_offloads = priv->dev->data->dev_conf.txmode.offloads;
+	uint64_t port_supp_offloads = mlx5_priv_get_tx_port_offloads(priv);
+
+	/* There are no Tx offloads which are per queue. */
+	if ((offloads & port_supp_offloads) != offloads)
+		return 0;
+	if ((port_offloads ^ offloads) & port_supp_offloads)
+		return 0;
+	return 1;
+}
+
+/**
  * DPDK callback to configure a TX queue.
  *
  * @param dev
@@ -143,6 +200,20 @@ mlx5_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	int ret = 0;
 
 	priv_lock(priv);
+	/*
+	 * Don't verify port offloads for application which
+	 * use the old API.
+	 */
+	if (!!(conf->txq_flags & ETH_TXQ_FLAGS_IGNORE) &&
+	    !priv_is_tx_queue_offloads_allowed(priv, conf->offloads)) {
+		ret = ENOTSUP;
+		ERROR("%p: Tx queue offloads 0x%lx don't match port "
+		      "offloads 0x%lx or supported offloads 0x%lx",
+		      (void *)dev, conf->offloads,
+		      dev->data->dev_conf.txmode.offloads,
+		      mlx5_priv_get_tx_port_offloads(priv));
+		goto out;
+	}
 	if (desc <= MLX5_TX_COMP_THRESH) {
 		WARN("%p: number of descriptors requested for TX queue %u"
 		     " must be higher than MLX5_TX_COMP_THRESH, using"
@@ -579,6 +650,7 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 	unsigned int inline_max_packet_sz;
 	eth_tx_burst_t tx_pkt_burst = priv_select_tx_function(priv, priv->dev);
 	int is_empw_func = is_empw_burst_func(tx_pkt_burst);
+	int tso = !!(txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_TCP_TSO);
 
 	txq_inline = (config->txq_inline == MLX5_ARG_UNSET) ?
 		0 : config->txq_inline;
@@ -603,8 +675,6 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 	txq_ctrl->txq.max_inline = ((txq_inline + (RTE_CACHE_LINE_SIZE - 1)) /
 				    RTE_CACHE_LINE_SIZE);
-	/* TSO and MPS can't be enabled concurrently. */
-	assert(!config->tso || !config->mps);
 	if (is_empw_func) {
 		/* To minimize the size of data set, avoid requesting
 		 * too large WQ.
@@ -614,7 +684,7 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 					     inline_max_packet_sz) +
 				  (RTE_CACHE_LINE_SIZE - 1)) /
 				 RTE_CACHE_LINE_SIZE) * RTE_CACHE_LINE_SIZE;
-	} else if (config->tso) {
+	} else if (tso) {
 		int inline_diff = txq_ctrl->txq.max_inline - max_tso_inline;
 
@@ -652,7 +722,7 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 					   RTE_CACHE_LINE_SIZE;
 		}
 	}
-	if (config->tso) {
+	if (tso) {
 		txq_ctrl->max_tso_header = max_tso_inline * RTE_CACHE_LINE_SIZE;
 		txq_ctrl->txq.max_inline = RTE_MAX(txq_ctrl->txq.max_inline,
 						   max_tso_inline);
@@ -692,7 +762,7 @@ mlx5_priv_txq_new(struct priv *priv, uint16_t idx, uint16_t desc,
 	if (!tmpl)
 		return NULL;
 	assert(desc > MLX5_TX_COMP_THRESH);
-	tmpl->txq.flags = conf->txq_flags;
+	tmpl->txq.offloads = conf->offloads;
 	tmpl->priv = priv;
 	tmpl->socket = socket;
 	tmpl->txq.elts_n = log2above(desc);
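The per-queue validation added in mlx5_tx_queue_setup() relies on priv_is_tx_queue_offloads_allowed(), which is two mask tests: the queue may not request anything the port cannot do, and, since mlx5 has no purely per-queue Tx offload, it may not differ from the port configuration in any supported bit. A standalone sketch with the mlx5 lookups replaced by plain parameters (mask values are made up for illustration):

#include <stdint.h>
#include <stdio.h>

/* Mirrors the two checks of priv_is_tx_queue_offloads_allowed() above. */
static int
queue_offloads_allowed(uint64_t queue, uint64_t port, uint64_t port_supported)
{
	/* Queue must not request something the port cannot do... */
	if ((queue & port_supported) != queue)
		return 0;
	/* ...and must match the port in every supported bit. */
	if ((port ^ queue) & port_supported)
		return 0;
	return 1;
}

int
main(void)
{
	uint64_t supported = 0x3f, port = 0x0b;

	printf("queue == port       : %d\n",
	       queue_offloads_allowed(0x0b, port, supported));
	printf("queue adds a bit    : %d\n",
	       queue_offloads_allowed(0x0f, port, supported));
	printf("unsupported bit set : %d\n",
	       queue_offloads_allowed(0x4b, port, supported));
	return 0;
}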