[v2,2/2] net/bonding: add command to set dedicated queue size
Commit Message
From: Long Wu <long.wu@corigine.com>
The testpmd application cannot modify the size of the
dedicated hardware Rx/Tx queues, which is hardcoded
as 128/512. This causes the bonding port to fail to
start if a NIC requires more Rx/Tx descriptors than
the hardcoded number.
Therefore, add a command to the testpmd application to
support modifying the size of the dedicated hardware
Rx/Tx queues. Also export an external interface to let
other applications change it.
Signed-off-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
---
.../link_bonding_poll_mode_drv_lib.rst | 8 ++
doc/guides/rel_notes/release_24_11.rst | 4 +
drivers/net/bonding/bonding_testpmd.c | 84 +++++++++++++++++++
drivers/net/bonding/eth_bond_8023ad_private.h | 3 +
drivers/net/bonding/rte_eth_bond_8023ad.c | 41 +++++++++
drivers/net/bonding/rte_eth_bond_8023ad.h | 23 +++++
drivers/net/bonding/rte_eth_bond_pmd.c | 6 +-
drivers/net/bonding/version.map | 1 +
8 files changed, 168 insertions(+), 2 deletions(-)
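The control flow of the new ``rte_eth_bond_8023ad_dedicated_queue_size_set()`` can be sketched in Python (a simplified model for illustration only: the dict-based port and the helper name are mine, but the checks mirror the patch — valid port id, port stopped, dedicated queues enabled, then rxq/txq selection):

```python
# Illustrative model of the validation in the new API; returns 0 or -EINVAL
# like the C function in rte_eth_bond_8023ad.c.
EINVAL = 22

def dedicated_queue_size_set(port, queue_size, queue_type):
    """Mirror the checks of rte_eth_bond_8023ad_dedicated_queue_size_set()."""
    if not port.get("valid"):
        return -EINVAL                      # invalid bonding port id
    if port.get("started"):
        return -EINVAL                      # port must be stopped first
    if not port.get("dedicated_queues_enabled"):
        return -EINVAL                      # dedicated queues must be enabled
    if queue_type == "rxq":
        port["rx_queue_size"] = queue_size
    elif queue_type == "txq":
        port["tx_queue_size"] = queue_size
    else:
        return -EINVAL                      # unknown queue type
    return 0

port = {"valid": True, "started": False,
        "dedicated_queues_enabled": True,
        "rx_queue_size": 512, "tx_queue_size": 512}   # 512 = the new defaults
assert dedicated_queue_size_set(port, 1024, "rxq") == 0
assert port["rx_queue_size"] == 1024
assert dedicated_queue_size_set(port, 1024, "bad") == -EINVAL
```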
Comments
11/10/2024 05:24, Chaoyong He:
> From: Long Wu <long.wu@corigine.com>
>
> The testpmd application can not modify the value of
> dedicated hardware Rx/Tx queue size, and hardcoded
> them as (128/512). This will cause the bonding port
> start fail if some NIC requires more Rx/Tx descriptors
> than the hardcoded number.
>
> Therefore, add a command into testpmd application to
> support the modification of the size of the dedicated
> hardware Rx/Tx queue. Also export an external interface
> to also let other applications can change it.
>
> Signed-off-by: Long Wu <long.wu@corigine.com>
> Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
> Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
There is a lack of review in general for bonding patches.
Please could you review this one:
https://patches.dpdk.org/project/dpdk/patch/20241015082049.3910138-1-vignesh.purushotham.srinivas@ericsson.com/
Maybe he will do the same for you.
Very useful,
Acked-by: Huisong Li <lihuisong@huawei.com>
On 2024/10/11 11:24, Chaoyong He wrote:
> From: Long Wu <long.wu@corigine.com>
>
> The testpmd application can not modify the value of
> dedicated hardware Rx/Tx queue size, and hardcoded
> them as (128/512). This will cause the bonding port
> start fail if some NIC requires more Rx/Tx descriptors
> than the hardcoded number.
>
> Therefore, add a command into testpmd application to
> support the modification of the size of the dedicated
> hardware Rx/Tx queue. Also export an external interface
> to also let other applications can change it.
>
> Signed-off-by: Long Wu <long.wu@corigine.com>
> Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
> Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
> ---
> .../link_bonding_poll_mode_drv_lib.rst | 8 ++
> doc/guides/rel_notes/release_24_11.rst | 4 +
> drivers/net/bonding/bonding_testpmd.c | 84 +++++++++++++++++++
> drivers/net/bonding/eth_bond_8023ad_private.h | 3 +
> drivers/net/bonding/rte_eth_bond_8023ad.c | 41 +++++++++
> drivers/net/bonding/rte_eth_bond_8023ad.h | 23 +++++
> drivers/net/bonding/rte_eth_bond_pmd.c | 6 +-
> drivers/net/bonding/version.map | 1 +
> 8 files changed, 168 insertions(+), 2 deletions(-)
>
> diff --git a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
> index 60717a3587..6498cf7d3d 100644
> --- a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
> +++ b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
> @@ -637,3 +637,11 @@ in balance mode with a transmission policy of layer 2+3::
> Members (3): [1 3 4]
> Active Members (3): [1 3 4]
> Primary: [3]
> +
> +set bonding lacp dedicated_queue size
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Set hardware dedicated queue size for LACP control traffic in
> +mode 4 (link-aggregation-802.3ad)::
> +
> + testpmd> set bonding lacp dedicated_queues <port_id> (rxq|txq) queue_size <size>
> diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
> index 9bfc719e02..bec466f58d 100644
> --- a/doc/guides/rel_notes/release_24_11.rst
> +++ b/doc/guides/rel_notes/release_24_11.rst
> @@ -98,6 +98,10 @@ New Features
> * Added SR-IOV VF support.
> * Added recent 1400/14000 and 15000 models to the supported list.
>
> +* **Updated bonding driver.**
> +
> + * Added new function ``rte_eth_bond_8023ad_dedicated_queue_size_set``
> + to set hardware dedicated Rx/Tx queue size in mode-4.
>
> Removed Items
> -------------
> diff --git a/drivers/net/bonding/bonding_testpmd.c b/drivers/net/bonding/bonding_testpmd.c
> index fc0bfd8f74..ce0e47d8ea 100644
> --- a/drivers/net/bonding/bonding_testpmd.c
> +++ b/drivers/net/bonding/bonding_testpmd.c
> @@ -156,6 +156,85 @@ static cmdline_parse_inst_t cmd_set_lacp_dedicated_queues = {
> }
> };
>
> +/* *** SET BONDING SLOW_QUEUE HW QUEUE SIZE *** */
> +struct cmd_set_bonding_hw_dedicated_queue_size_result {
> + cmdline_fixed_string_t set;
> + cmdline_fixed_string_t bonding;
> + cmdline_fixed_string_t lacp;
> + cmdline_fixed_string_t dedicated_queues;
> + portid_t port_id;
> + cmdline_fixed_string_t queue_type;
> + cmdline_fixed_string_t queue_size;
> + uint16_t size;
> +};
> +
> +static void cmd_set_bonding_hw_dedicated_queue_size_parsed(void *parsed_result,
> + __rte_unused struct cmdline *cl, __rte_unused void *data)
> +{
> + int ret;
> + struct rte_port *port;
> + struct cmd_set_bonding_hw_dedicated_queue_size_result *res = parsed_result;
> +
> + port = &ports[res->port_id];
> +
> + /** Check if the port is not started **/
> + if (port->port_status != RTE_PORT_STOPPED) {
> + TESTPMD_LOG(ERR, "Please stop port %u first\n", res->port_id);
> + return;
> + }
> +
> + ret = rte_eth_bond_8023ad_dedicated_queue_size_set(res->port_id,
> + res->size, res->queue_type);
> + if (ret != 0)
> + TESTPMD_LOG(ERR, "Failed to set port %u hardware dedicated %s "
> + "ring size %u\n",
> + res->port_id, res->queue_type, res->size);
> +}
> +
> +static cmdline_parse_token_string_t cmd_setbonding_hw_dedicated_queue_size_set =
> + TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_hw_dedicated_queue_size_result,
> + set, "set");
> +static cmdline_parse_token_string_t cmd_setbonding_hw_dedicated_queue_size_bonding =
> + TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_hw_dedicated_queue_size_result,
> + bonding, "bonding");
> +static cmdline_parse_token_string_t cmd_setbonding_hw_dedicated_queue_size_lacp =
> + TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_hw_dedicated_queue_size_result,
> + lacp, "lacp");
> +static cmdline_parse_token_string_t cmd_setbonding_hw_dedicated_queue_size_dedicated =
> + TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_hw_dedicated_queue_size_result,
> + dedicated_queues, "dedicated_queues");
> +static cmdline_parse_token_num_t cmd_setbonding_hw_dedicated_queue_port_id =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_hw_dedicated_queue_size_result,
> + port_id, RTE_UINT16);
> +static cmdline_parse_token_string_t cmd_setbonding_hw_dedicated_queue_queue_type =
> + TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_hw_dedicated_queue_size_result,
> + queue_type, "rxq#txq");
> +static cmdline_parse_token_string_t cmd_setbonding_hw_dedicated_queue_queue_size =
> + TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_hw_dedicated_queue_size_result,
> + queue_size, "queue_size");
> +static cmdline_parse_token_num_t cmd_setbonding_hw_dedicated_queue_size_size =
> + TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_hw_dedicated_queue_size_result,
> + size, RTE_UINT16);
> +
> +static cmdline_parse_inst_t cmd_set_lacp_dedicated_hw_queue_size = {
> + .f = cmd_set_bonding_hw_dedicated_queue_size_parsed,
> + .help_str = "set bonding lacp dedicated_queues <port_id> (rxq|txq) "
> + "queue_size <size>: "
> + "Set hardware dedicated queue size for LACP control traffic",
> + .data = NULL,
> + .tokens = {
> + (void *)&cmd_setbonding_hw_dedicated_queue_size_set,
> + (void *)&cmd_setbonding_hw_dedicated_queue_size_bonding,
> + (void *)&cmd_setbonding_hw_dedicated_queue_size_lacp,
> + (void *)&cmd_setbonding_hw_dedicated_queue_size_dedicated,
> + (void *)&cmd_setbonding_hw_dedicated_queue_port_id,
> + (void *)&cmd_setbonding_hw_dedicated_queue_queue_type,
> + (void *)&cmd_setbonding_hw_dedicated_queue_queue_size,
> + (void *)&cmd_setbonding_hw_dedicated_queue_size_size,
> + NULL
> + }
> +};
> +
> /* *** SET BALANCE XMIT POLICY *** */
> struct cmd_set_bonding_balance_xmit_policy_result {
> cmdline_fixed_string_t set;
> @@ -747,6 +826,11 @@ static struct testpmd_driver_commands bonding_cmds = {
> "set bonding mode IEEE802.3AD aggregator policy (port_id) (agg_name)\n"
> " Set Aggregation mode for IEEE802.3AD (mode 4)\n",
> },
> + {
> + &cmd_set_lacp_dedicated_hw_queue_size,
> + "set bonding lacp dedicated_queues <port_id> (rxq|txq) queue_size <size>\n"
> + " Set hardware dedicated queue size for LACP control traffic.\n",
> + },
> { NULL, NULL },
> },
> };
> diff --git a/drivers/net/bonding/eth_bond_8023ad_private.h b/drivers/net/bonding/eth_bond_8023ad_private.h
> index ab7d15f81a..b0264e2275 100644
> --- a/drivers/net/bonding/eth_bond_8023ad_private.h
> +++ b/drivers/net/bonding/eth_bond_8023ad_private.h
> @@ -176,6 +176,9 @@ struct mode8023ad_private {
>
> uint16_t rx_qid;
> uint16_t tx_qid;
> +
> + uint16_t rx_queue_size;
> + uint16_t tx_queue_size;
> } dedicated_queues;
> enum rte_bond_8023ad_agg_selection agg_selection;
> };
> diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
> index 7f885ab521..37a3f8528d 100644
> --- a/drivers/net/bonding/rte_eth_bond_8023ad.c
> +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
> @@ -1254,6 +1254,8 @@ bond_mode_8023ad_conf_assign(struct mode8023ad_private *mode4,
> mode4->dedicated_queues.enabled = 0;
> mode4->dedicated_queues.rx_qid = UINT16_MAX;
> mode4->dedicated_queues.tx_qid = UINT16_MAX;
> + mode4->dedicated_queues.rx_queue_size = SLOW_RX_QUEUE_HW_DEFAULT_SIZE;
> + mode4->dedicated_queues.tx_queue_size = SLOW_TX_QUEUE_HW_DEFAULT_SIZE;
> }
>
> void
> @@ -1753,3 +1755,42 @@ rte_eth_bond_8023ad_dedicated_queues_disable(uint16_t port)
>
> return retval;
> }
> +
> +int
> +rte_eth_bond_8023ad_dedicated_queue_size_set(uint16_t port,
> + uint16_t queue_size,
> + const char *queue_type)
> +{
> + struct rte_eth_dev *dev;
> + struct bond_dev_private *internals;
> +
> + if (valid_bonding_port_id(port) != 0) {
> + RTE_BOND_LOG(ERR, "The bonding port id is invalid");
> + return -EINVAL;
> + }
> +
> + dev = &rte_eth_devices[port];
> +
> + /* Device must be stopped to set up slow queue */
> + if (dev->data->dev_started != 0) {
> + RTE_BOND_LOG(ERR, "Please stop the bonding port");
> + return -EINVAL;
> + }
> +
> + internals = dev->data->dev_private;
> + if (internals->mode4.dedicated_queues.enabled == 0) {
> + RTE_BOND_LOG(ERR, "Please enable dedicated queue");
> + return -EINVAL;
> + }
> +
> + if (strcmp(queue_type, "rxq") == 0) {
> + internals->mode4.dedicated_queues.rx_queue_size = queue_size;
> + } else if (strcmp(queue_type, "txq") == 0) {
> + internals->mode4.dedicated_queues.tx_queue_size = queue_size;
> + } else {
> + RTE_BOND_LOG(ERR, "Unknown queue type %s", queue_type);
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.h b/drivers/net/bonding/rte_eth_bond_8023ad.h
> index b2deb26e2e..a8c9dadbd0 100644
> --- a/drivers/net/bonding/rte_eth_bond_8023ad.h
> +++ b/drivers/net/bonding/rte_eth_bond_8023ad.h
> @@ -35,6 +35,9 @@ extern "C" {
> #define MARKER_TLV_TYPE_INFO 0x01
> #define MARKER_TLV_TYPE_RESP 0x02
>
> +#define SLOW_TX_QUEUE_HW_DEFAULT_SIZE 512
> +#define SLOW_RX_QUEUE_HW_DEFAULT_SIZE 512
> +
> typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t member_id,
> struct rte_mbuf *lacp_pkt);
>
> @@ -309,6 +312,26 @@ rte_eth_bond_8023ad_dedicated_queues_enable(uint16_t port_id);
> int
> rte_eth_bond_8023ad_dedicated_queues_disable(uint16_t port_id);
>
> +
> +/**
> + * Set hardware slow queue ring size
> + *
> + * This function set bonding port hardware slow queue ring size.
> + * Bonding port must be stopped to change this configuration.
> + *
> + * @param port_id Bonding device id
> + * @param queue_size Slow queue ring size
> + * @param queue_type Slow queue type, "rxq" or "txq"
> + *
> + * @return
> + * 0 on success, negative value otherwise.
> + *
> + */
> +__rte_experimental
> +int
> +rte_eth_bond_8023ad_dedicated_queue_size_set(uint16_t port,
> + uint16_t queue_size,
> + const char *queue_type);
> /*
> * Get aggregator mode for 8023ad
> * @param port_id Bonding device id
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
> index 34131f0e35..d995fe62b2 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -1688,7 +1688,8 @@ member_configure_slow_queue(struct rte_eth_dev *bonding_eth_dev,
> /* Configure slow Rx queue */
>
> errval = rte_eth_rx_queue_setup(member_eth_dev->data->port_id,
> - internals->mode4.dedicated_queues.rx_qid, 128,
> + internals->mode4.dedicated_queues.rx_qid,
> + internals->mode4.dedicated_queues.rx_queue_size,
> rte_eth_dev_socket_id(member_eth_dev->data->port_id),
> NULL, port->slow_pool);
> if (errval != 0) {
> @@ -1701,7 +1702,8 @@ member_configure_slow_queue(struct rte_eth_dev *bonding_eth_dev,
> }
>
> errval = rte_eth_tx_queue_setup(member_eth_dev->data->port_id,
> - internals->mode4.dedicated_queues.tx_qid, 512,
> + internals->mode4.dedicated_queues.tx_qid,
> + internals->mode4.dedicated_queues.tx_queue_size,
> rte_eth_dev_socket_id(member_eth_dev->data->port_id),
> NULL);
> if (errval != 0) {
> diff --git a/drivers/net/bonding/version.map b/drivers/net/bonding/version.map
> index a309469b1f..f9c935a04f 100644
> --- a/drivers/net/bonding/version.map
> +++ b/drivers/net/bonding/version.map
> @@ -30,6 +30,7 @@ DPDK_25 {
> EXPERIMENTAL {
> # added in 23.11
> global:
> + rte_eth_bond_8023ad_dedicated_queue_size_set;
> rte_eth_bond_8023ad_member_info;
> rte_eth_bond_active_members_get;
> rte_eth_bond_member_add;
On Fri, 11 Oct 2024 11:24:12 +0800
Chaoyong He <chaoyong.he@corigine.com> wrote:
> + .tokens = {
> + (void *)&cmd_setbonding_hw_dedicated_queue_size_set,
> + (void *)&cmd_setbonding_hw_dedicated_queue_size_bonding,
> + (void *)&cmd_setbonding_hw_dedicated_queue_size_lacp,
> + (void *)&cmd_setbonding_hw_dedicated_queue_size_dedicated,
> + (void *)&cmd_setbonding_hw_dedicated_queue_port_id,
> + (void *)&cmd_setbonding_hw_dedicated_queue_queue_type,
> + (void *)&cmd_setbonding_hw_dedicated_queue_queue_size,
> + (void *)&cmd_setbonding_hw_dedicated_queue_size_size,
> + NULL
No void * casts are needed here.
On Fri, 11 Oct 2024 11:24:12 +0800
Chaoyong He <chaoyong.he@corigine.com> wrote:
> From: Long Wu <long.wu@corigine.com>
>
> The testpmd application can not modify the value of
> dedicated hardware Rx/Tx queue size, and hardcoded
> them as (128/512). This will cause the bonding port
> start fail if some NIC requires more Rx/Tx descriptors
> than the hardcoded number.
>
> Therefore, add a command into testpmd application to
> support the modification of the size of the dedicated
> hardware Rx/Tx queue. Also export an external interface
> to also let other applications can change it.
>
> Signed-off-by: Long Wu <long.wu@corigine.com>
> Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
> Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
24.11 is released; this patch, if still of interest, will need to be rebased.
The definition of a "dedicated queue" is a bit confusing.
If it is only for LACP packets, it should never need to be very big.
Only under a misconfiguration or a DoS-style flood should there
ever be many packets.
> On Fri, 11 Oct 2024 11:24:12 +0800
> Chaoyong He <chaoyong.he@corigine.com> wrote:
>
> > From: Long Wu <long.wu@corigine.com>
> >
> > The testpmd application can not modify the value of dedicated hardware
> > Rx/Tx queue size, and hardcoded them as (128/512). This will cause the
> > bonding port start fail if some NIC requires more Rx/Tx descriptors
> > than the hardcoded number.
> >
> > Therefore, add a command into testpmd application to support the
> > modification of the size of the dedicated hardware Rx/Tx queue. Also
> > export an external interface to also let other applications can change
> > it.
> >
> > Signed-off-by: Long Wu <long.wu@corigine.com>
> > Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
> > Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
>
> 24.11 is released, this patch if still of interest will need to be rebased.
Okay, we will send a new version of the patch later.
>
> The definition of what a "dedicated queue" is a bit confusing.
> If it is only for LACP packets, it should never need to be very big.
> Only under a mis-configuration and DoS kind of flood should there ever be
> many packets.
Yes, the dedicated queue is only for LACP packets now, and it doesn't need to be set very big.
But if we use a hardware queue as the "dedicated queue", we must consider the hardware
capability. The minimum queue size of some NICs may be larger than the hardcoded dedicated
queue size. In this case, I think it is better to add an interface to set the dedicated queue size.
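The hardware-capability concern could also be addressed by clamping the requested size against the ethdev descriptor limits. A sketch (the nb_min/nb_max/nb_align field names follow DPDK's struct rte_eth_desc_lim as reported by rte_eth_dev_info_get(); the helper itself is illustrative and not part of the patch):

```python
# Pick a legal ring size for a NIC instead of a blind hard-coded 128/512.
def fit_queue_size(requested, nb_min, nb_max, nb_align):
    """Clamp 'requested' into [nb_min, nb_max] and round up to nb_align."""
    size = max(nb_min, min(requested, nb_max))
    if nb_align > 1 and size % nb_align:
        # round up to the descriptor alignment the NIC requires
        size = min(nb_max, size + (nb_align - size % nb_align))
    return size

# A NIC whose smallest ring is 256 descriptors: the old hard-coded
# Rx size of 128 would be rejected, but clamping repairs it.
assert fit_queue_size(128, nb_min=256, nb_max=4096, nb_align=32) == 256
assert fit_queue_size(512, nb_min=256, nb_max=4096, nb_align=32) == 512
assert fit_queue_size(1000, nb_min=256, nb_max=4096, nb_align=32) == 1024
```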
On Wed, 4 Dec 2024 06:21:00 +0000
Chaoyong He <chaoyong.he@corigine.com> wrote:
> > The definition of what a "dedicated queue" is a bit confusing.
> > If it is only for LACP packets, it should never need to be very big.
> > Only under a mis-configuration and DoS kind of flood should there ever be
> > many packets.
>
> Yes, the dedicated queue is only for LACP packets now and it doesn't need be set very big.
>
> But if we use a hardware queue as the "dedicated queue", we must consider the hardware
> capability. The minimum queue size of some NICs may be larger than the hardcode dedicated
> queue size. In this case, I think it is better to add an interface to set the dedicated queue size.
How about using the existing descriptor queue limits API for that?
It is reported by info get.
> On Wed, 4 Dec 2024 06:21:00 +0000
> Chaoyong He <chaoyong.he@corigine.com> wrote:
>
> > > The definition of what a "dedicated queue" is a bit confusing.
> > > If it is only for LACP packets, it should never need to be very big.
> > > Only under a mis-configuration and DoS kind of flood should there
> > > ever be many packets.
> >
> > Yes, the dedicated queue is only for LACP packets now and it doesn't need be
> set very big.
> >
> > But if we use a hardware queue as the "dedicated queue", we must
> > consider the hardware capability. The minimum queue size of some NICs
> > may be larger than the hardcode dedicated queue size. In this case, I think it
> is better to add an interface to set the dedicated queue size.
>
> How about using the existing descriptor queue limits api for that?
> It is reported by info get
Using the existing descriptor queue limits API is good enough for the current problem (hardware capability),
but I think it is not very flexible.
Now we use a macro as the default value for the dedicated queue size, but we could replace the macro with
the queue limit while still retaining the interface for modifying the queue size.
What do you think of this?
> > On Wed, 4 Dec 2024 06:21:00 +0000
> > Chaoyong He <chaoyong.he@corigine.com> wrote:
> >
> > > > The definition of what a "dedicated queue" is a bit confusing.
> > > > If it is only for LACP packets, it should never need to be very big.
> > > > Only under a mis-configuration and DoS kind of flood should there
> > > > ever be many packets.
> > >
> > > Yes, the dedicated queue is only for LACP packets now and it doesn't
> > > need be
> > set very big.
> > >
> > > But if we use a hardware queue as the "dedicated queue", we must
> > > consider the hardware capability. The minimum queue size of some
> > > NICs may be larger than the hardcode dedicated queue size. In this
> > > case, I think it
> > is better to add an interface to set the dedicated queue size.
> >
> > How about using the existing descriptor queue limits api for that?
> > It is reported by info get
>
> Using existing descriptor queue limits api is good enough for current
> problem(hardware capability), but I think it is not very flexible.
> Now we use a macro as a default value for dedicated queue size, but we can
> replace the macro with queue limit while still retaining the interface for
> modifying queue size.
> What do you think of this?
A gentle ping ~
On Fri, 11 Oct 2024 11:24:12 +0800
Chaoyong He <chaoyong.he@corigine.com> wrote:
> From: Long Wu <long.wu@corigine.com>
>
> The testpmd application can not modify the value of
> dedicated hardware Rx/Tx queue size, and hardcoded
> them as (128/512). This will cause the bonding port
> start fail if some NIC requires more Rx/Tx descriptors
> than the hardcoded number.
>
> Therefore, add a command into testpmd application to
> support the modification of the size of the dedicated
> hardware Rx/Tx queue. Also export an external interface
> to also let other applications can change it.
>
> Signed-off-by: Long Wu <long.wu@corigine.com>
> Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
> Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
> ---
Seems this could just be a devarg to the bonding driver rather
than all the added cruft of making it a real API.
On Thu, 19 Dec 2024 05:52:40 +0000
Chaoyong He <chaoyong.he@corigine.com> wrote:
> > > On Wed, 4 Dec 2024 06:21:00 +0000
> > > Chaoyong He <chaoyong.he@corigine.com> wrote:
> > >
> > > > > The definition of what a "dedicated queue" is a bit confusing.
> > > > > If it is only for LACP packets, it should never need to be very big.
> > > > > Only under a mis-configuration and DoS kind of flood should there
> > > > > ever be many packets.
> > > >
> > > > Yes, the dedicated queue is only for LACP packets now and it doesn't
> > > > need be
> > > set very big.
> > > >
> > > > But if we use a hardware queue as the "dedicated queue", we must
> > > > consider the hardware capability. The minimum queue size of some
> > > > NICs may be larger than the hardcode dedicated queue size. In this
> > > > case, I think it
> > > is better to add an interface to set the dedicated queue size.
> > >
> > > How about using the existing descriptor queue limits api for that?
> > > It is reported by info get
> >
> > Using existing descriptor queue limits api is good enough for current
> > problem(hardware capability), but I think it is not very flexible.
> > Now we use a macro as a default value for dedicated queue size, but we can
> > replace the macro with queue limit while still retaining the interface for
> > modifying queue size.
> > What do you think of this?
>
> A gentle ping ~
Should be a devargs parameter to bonding PMD, not a whole new API.
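A devargs spelling of that suggestion might look like the following (the ``dedicated_queue_size`` option is hypothetical and does not exist in the bonding PMD; ``mode`` and ``member`` are real bonding devargs):

```shell
# Hypothetical: pass the dedicated queue size as a vdev argument instead
# of adding a new API. Only "mode" and "member" are existing options.
dpdk-testpmd --vdev 'net_bonding0,mode=4,member=0000:01:00.0,dedicated_queue_size=1024' -- -i
```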