[v4,3/6] net: advertise no support for keeping flow rules
Commit Message
When the RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP capability bit is zero,
the specified behavior is the same as it had been before
this bit was introduced. Explicitly reset it in all PMDs
supporting the rte_flow API in order to attract the attention
of maintainers, who should eventually decide whether to advertise
the new capability. It is already known that
mlx4 and mlx5 will not support this capability.
For RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP
a similar action is not performed,
because no PMD except mlx5 supports indirect actions.
Any PMD that starts doing so will have to consider
all relevant APIs anyway, including this capability.
Suggested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
---
drivers/net/bnxt/bnxt_ethdev.c          | 1 +
drivers/net/bnxt/bnxt_reps.c            | 1 +
drivers/net/cnxk/cnxk_ethdev_ops.c      | 1 +
drivers/net/cxgbe/cxgbe_ethdev.c        | 2 ++
drivers/net/dpaa2/dpaa2_ethdev.c        | 1 +
drivers/net/e1000/em_ethdev.c           | 2 ++
drivers/net/e1000/igb_ethdev.c          | 1 +
drivers/net/enic/enic_ethdev.c          | 1 +
drivers/net/failsafe/failsafe_ops.c     | 1 +
drivers/net/hinic/hinic_pmd_ethdev.c    | 2 ++
drivers/net/hns3/hns3_ethdev.c          | 1 +
drivers/net/hns3/hns3_ethdev_vf.c       | 1 +
drivers/net/i40e/i40e_ethdev.c          | 1 +
drivers/net/i40e/i40e_vf_representor.c  | 2 ++
drivers/net/iavf/iavf_ethdev.c          | 1 +
drivers/net/ice/ice_dcf_ethdev.c        | 1 +
drivers/net/igc/igc_ethdev.c            | 1 +
drivers/net/ipn3ke/ipn3ke_representor.c | 1 +
drivers/net/mvpp2/mrvl_ethdev.c         | 2 ++
drivers/net/octeontx2/otx2_ethdev_ops.c | 1 +
drivers/net/qede/qede_ethdev.c          | 1 +
drivers/net/sfc/sfc_ethdev.c            | 1 +
drivers/net/softnic/rte_eth_softnic.c   | 1 +
drivers/net/tap/rte_eth_tap.c           | 1 +
drivers/net/txgbe/txgbe_ethdev.c        | 1 +
drivers/net/txgbe/txgbe_ethdev_vf.c     | 1 +
26 files changed, 31 insertions(+)
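Every driver change below applies the same one-line pattern in the
driver's dev_infos_get callback: clear the bit so that applications do
not assume flow rules are kept across restart. A minimal sketch of the
pattern, with my_dev_info_get standing in for the per-driver callback
name:

#include <rte_ethdev.h>

/* Illustrative dev_infos_get callback. The driver does not guarantee
 * that flow rules survive dev_stop()/dev_start(), so it clears the
 * capability bit reported to applications via rte_eth_dev_info_get().
 */
static int
my_dev_info_get(struct rte_eth_dev *dev __rte_unused,
		struct rte_eth_dev_info *dev_info)
{
	/* ... fill in queue limits, offloads, speed capabilities ... */
	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
	return 0;
}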
Comments
On Wed, Oct 20, 2021 at 11:36 PM Dmitry Kozlyuk <dkozlyuk@nvidia.com> wrote:
>
[...]
> Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
On Thu, 21 Oct 2021, 23:56 Ajit Khaparde, <ajit.khaparde@broadcom.com> wrote:
> On Wed, Oct 20, 2021 at 11:36 PM Dmitry Kozlyuk <dkozlyuk@nvidia.com> wrote:
[...]
> > Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
>
> -----Original Message-----
> From: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
> Sent: Thursday, October 21, 2021 3:35 PM
[...]
> Subject: [PATCH v4 3/6] net: advertise no support for keeping flow rules
>
[...]
> Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
> ---
For net/enic,
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
-Hyong
On 10/21/21 9:35 AM, Dmitry Kozlyuk wrote:
[...]
> Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
I'm sorry, but I still think that the patch is confusing.
No strong opinion, but personally I'd go without it.
On 11/1/2021 3:06 PM, Andrew Rybchenko wrote:
> On 10/21/21 9:35 AM, Dmitry Kozlyuk wrote:
[...]
>> Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
>
> I'm sorry, but I still think that the patch is confusing.
> No strong opinion, but personally I'd go without it.
It is confusing, all right, to add a flag that has no impact.
But again, my concern is some PMD maintainers not being aware of
this new capability flag, which does have an impact on user applications.
See the level of comments on the patch from PMD maintainers.
This way gives visibility to PMD maintainers that some action
is required, and gives visibility to application developers and maintainers
to track the support. In time, these lines should go away.
Updating ethdev without updating all drivers is hard to manage.
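To illustrate the application side: a program that wants its flow rules
to survive a port restart can probe the bit before relying on it. A
minimal sketch (flow_rules_kept_across_restart is an illustrative
helper, not a DPDK API; RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP can be
tested the same way for indirect actions):

#include <stdbool.h>
#include <rte_ethdev.h>

static bool
flow_rules_kept_across_restart(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return false; /* be conservative if the query fails */
	return (dev_info.dev_capa & RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP) != 0;
}

If the helper returns false, the application must re-create its rules
after rte_eth_dev_start(), which is exactly the pre-existing behavior
this patch makes explicit.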
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1008,6 +1008,7 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
dev_info->speed_capa = bnxt_get_speed_capabilities(bp);
dev_info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -526,6 +526,7 @@ int bnxt_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
dev_info->max_tx_queues = max_rx_rings;
dev_info->reta_size = bnxt_rss_hash_tbl_size(parent_bp);
dev_info->hash_key_size = 40;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
/* MTU specifics */
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -68,6 +68,7 @@ cnxk_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
devinfo->speed_capa = dev->speed_capa;
devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+ devinfo->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
return 0;
}
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -131,6 +131,8 @@ int cxgbe_dev_info_get(struct rte_eth_dev *eth_dev,
device_info->max_vfs = adapter->params.arch.vfcount;
device_info->max_vmdq_pools = 0; /* XXX: For now no support for VMDQ */
+ device_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
device_info->rx_queue_offload_capa = 0UL;
device_info->rx_offload_capa = CXGBE_RX_OFFLOADS;
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -254,6 +254,7 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->speed_capa = ETH_LINK_SPEED_1G |
ETH_LINK_SPEED_2_5G |
ETH_LINK_SPEED_10G;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
dev_info->max_hash_mac_addrs = 0;
dev_info->max_vfs = 0;
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1106,6 +1106,8 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
ETH_LINK_SPEED_1G;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
/* Preferred queue parameters */
dev_info->default_rxportconf.nb_queues = 1;
dev_info->default_txportconf.nb_queues = 1;
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -2174,6 +2174,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->tx_queue_offload_capa = igb_get_tx_queue_offloads_capa(dev);
dev_info->tx_offload_capa = igb_get_tx_port_offloads_capa(dev) |
dev_info->tx_queue_offload_capa;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
switch (hw->mac.type) {
case e1000_82575:
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -469,6 +469,7 @@ static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
device_info->rx_offload_capa = enic->rx_offload_capa;
device_info->tx_offload_capa = enic->tx_offload_capa;
device_info->tx_queue_offload_capa = enic->tx_queue_offload_capa;
+ device_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
device_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_free_thresh = ENIC_DEFAULT_RX_FREE_THRESH
};
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1220,6 +1220,7 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
infos->dev_capa =
RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+ infos->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_PROBED) {
struct rte_eth_dev_info sub_info;
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -751,6 +751,8 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
DEV_TX_OFFLOAD_TCP_TSO |
DEV_TX_OFFLOAD_MULTI_SEGS;
+ info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
info->hash_key_size = HINIC_RSS_KEY_SIZE;
info->reta_size = HINIC_RSS_INDIR_SIZE;
info->flow_type_rss_offloads = HINIC_RSS_OFFLOAD_ALL;
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2707,6 +2707,7 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
if (hns3_dev_get_support(hw, INDEP_TXRX))
info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+ info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
if (hns3_dev_get_support(hw, PTP))
info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -965,6 +965,7 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
if (hns3_dev_get_support(hw, INDEP_TXRX))
info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+ info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
info->rx_desc_lim = (struct rte_eth_desc_lim) {
.nb_max = HNS3_MAX_RING_DESC,
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3751,6 +3751,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->dev_capa =
RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
sizeof(uint32_t);
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -35,6 +35,8 @@ i40e_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
/* get dev info for the vdev */
dev_info->device = ethdev->device;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
dev_info->max_rx_queues = ethdev->data->nb_rx_queues;
dev_info->max_tx_queues = ethdev->data->nb_tx_queues;
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -960,6 +960,7 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->reta_size = vf->vf_res->rss_lut_size;
dev_info->flow_type_rss_offloads = IAVF_RSS_OFFLOAD_ALL;
dev_info->max_mac_addrs = IAVF_NUM_MACADDR_MAX;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_QINQ_STRIP |
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -673,6 +673,7 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
dev_info->hash_key_size = hw->vf_res->rss_key_size;
dev_info->reta_size = hw->vf_res->rss_lut_size;
dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1480,6 +1480,7 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->min_rx_bufsize = 256; /* See BSIZE field of RCTL register. */
dev_info->max_rx_pktlen = MAX_RX_JUMBO_FRAME_SIZE;
dev_info->max_mac_addrs = hw->mac.rar_entry_count;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
dev_info->rx_offload_capa = IGC_RX_OFFLOAD_ALL;
dev_info->tx_offload_capa = IGC_TX_OFFLOAD_ALL;
dev_info->rx_queue_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -96,6 +96,7 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
dev_info->dev_capa =
RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
dev_info->switch_info.name = ethdev->device->name;
dev_info->switch_info.domain_id = rpst->switch_domain_id;
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -1709,6 +1709,8 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
{
struct mrvl_priv *priv = dev->data->dev_private;
+ info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
info->speed_capa = ETH_LINK_SPEED_10M |
ETH_LINK_SPEED_100M |
ETH_LINK_SPEED_1G |
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -583,6 +583,7 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+ devinfo->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
return 0;
}
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1367,6 +1367,7 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
dev_info->max_rx_pktlen = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
dev_info->rx_desc_lim = qede_rx_desc_lim;
dev_info->tx_desc_lim = qede_tx_desc_lim;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
if (IS_PF(edev))
dev_info->max_rx_queues = (uint16_t)RTE_MIN(
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -186,6 +186,7 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
if (mae->status == SFC_MAE_STATUS_SUPPORTED ||
mae->status == SFC_MAE_STATUS_ADMIN) {
--- a/drivers/net/softnic/rte_eth_softnic.c
+++ b/drivers/net/softnic/rte_eth_softnic.c
@@ -93,6 +93,7 @@ pmd_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
dev_info->max_rx_pktlen = UINT32_MAX;
dev_info->max_rx_queues = UINT16_MAX;
dev_info->max_tx_queues = UINT16_MAX;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
return 0;
}
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1006,6 +1006,7 @@ tap_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
* functions together and not in partial combinations
*/
dev_info->flow_type_rss_offloads = ~TAP_RSS_HF_MASK;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
return 0;
}
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -2603,6 +2603,7 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_vfs = pci_dev->max_vfs;
dev_info->max_vmdq_pools = ETH_64_POOLS;
dev_info->vmdq_queue_num = dev_info->max_rx_queues;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
dev_info->rx_queue_offload_capa);
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -487,6 +487,7 @@ txgbevf_dev_info_get(struct rte_eth_dev *dev,
dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
dev_info->rx_queue_offload_capa);