[v2] net/gve: fix RX buffer size alignment
Commit Message
In GVE, both queue formats have RX buffer size alignment requirements
that are not always respected when a user specifies an mbuf size. If an
mbuf size is greater than the DPDK recommended default (2048 + 128) and
the buffer size is not aligned the way the device expects, the device
will silently fail to create any transmit or receive queues.

Because no queues are created, there is no network traffic for the DPDK
program, and errors like the following are returned when attempting to
destroy queues:

gve_adminq_parse_err(): AQ command failed with status -11
gve_stop_tx_queues(): failed to destroy txqs
gve_adminq_parse_err(): AQ command failed with status -11
gve_stop_rx_queues(): failed to destroy rxqs

This change remedies the problem by restricting RX buffer sizes to
valid values for their respective queue formats, enforcing alignment as
well as the minimum and maximum supported buffer sizes.
Fixes: 4bec2d0b5572 ("net/gve: support queue operations")
Fixes: 1dc00f4fc74b ("net/gve: add Rx queue setup for DQO")
Cc: junfeng.guo@intel.com
Cc: stable@dpdk.org
Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Rushil Gupta <rushilg@google.com>
---
 drivers/net/gve/gve_ethdev.c | 5 ++++-
 drivers/net/gve/gve_ethdev.h | 8 +++++++-
 drivers/net/gve/gve_rx.c     | 7 ++++++-
 drivers/net/gve/gve_rx_dqo.c | 6 +++++-
 4 files changed, 22 insertions(+), 4 deletions(-)
Comments
> -----Original Message-----
> From: Joshua Washington <joshwash@google.com>
> Sent: Tuesday, November 14, 2023 07:12
> To: Guo, Junfeng <junfeng.guo@intel.com>; Jeroen de Borst
> <jeroendb@google.com>; Rushil Gupta <rushilg@google.com>; Joshua
> Washington <joshwash@google.com>; Li, Xiaoyun <xiaoyun.li@intel.com>
> Cc: dev@dpdk.org; stable@dpdk.org; Ferruh Yigit <ferruh.yigit@amd.com>
> Subject: [PATCH v2] net/gve: fix RX buffer size alignment
>
> [...]
Acked-by: Junfeng Guo <junfeng.guo@intel.com>
Regards,
Junfeng Guo
On 11/14/2023 2:41 AM, Guo, Junfeng wrote:
>> [...]
>
> Acked-by: Junfeng Guo <junfeng.guo@intel.com>
>
Applied to dpdk-next-net/main, thanks.
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -296,7 +296,10 @@ gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = 1;
 	dev_info->max_rx_queues = priv->max_nb_rxq;
 	dev_info->max_tx_queues = priv->max_nb_txq;
-	dev_info->min_rx_bufsize = GVE_MIN_BUF_SIZE;
+	if (gve_is_gqi(priv))
+		dev_info->min_rx_bufsize = GVE_RX_MIN_BUF_SIZE_GQI;
+	else
+		dev_info->min_rx_bufsize = GVE_RX_MIN_BUF_SIZE_DQO;
 	dev_info->max_rx_pktlen = priv->max_mtu + RTE_ETHER_HDR_LEN;
 	dev_info->max_mtu = priv->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -20,7 +20,13 @@
 #define GVE_DEFAULT_TX_RS_THRESH 32
 #define GVE_TX_MAX_FREE_SZ 512
 
-#define GVE_MIN_BUF_SIZE 1024
+#define GVE_RX_BUF_ALIGN_DQO 128
+#define GVE_RX_MIN_BUF_SIZE_DQO 1024
+#define GVE_RX_MAX_BUF_SIZE_DQO ((16 * 1024) - GVE_RX_BUF_ALIGN_DQO)
+
+#define GVE_RX_BUF_ALIGN_GQI 2048
+#define GVE_RX_MIN_BUF_SIZE_GQI 2048
+#define GVE_RX_MAX_BUF_SIZE_GQI 4096
 
 #define GVE_TX_CKSUM_OFFLOAD_MASK ( \
 	RTE_MBUF_F_TX_L4_MASK | \
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -301,6 +301,7 @@ gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
 	const struct rte_memzone *mz;
 	struct gve_rx_queue *rxq;
 	uint16_t free_thresh;
+	uint32_t mbuf_len;
 	int err = 0;
 
 	if (nb_desc != hw->rx_desc_cnt) {
@@ -344,7 +345,11 @@ gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
 	rxq->hw = hw;
 	rxq->ntfy_addr = &hw->db_bar2[rte_be_to_cpu_32(hw->irq_dbs[rxq->ntfy_id].id)];
 
-	rxq->rx_buf_len = rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM;
+	mbuf_len =
+		rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len =
+		RTE_MIN((uint16_t)GVE_RX_MAX_BUF_SIZE_GQI,
+			RTE_ALIGN_FLOOR(mbuf_len, GVE_RX_BUF_ALIGN_GQI));
 
 	/* Allocate software ring */
 	rxq->sw_ring = rte_zmalloc_socket("gve rx sw ring", sizeof(struct rte_mbuf *) * nb_desc,
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -220,6 +220,7 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	const struct rte_memzone *mz;
 	struct gve_rx_queue *rxq;
 	uint16_t free_thresh;
+	uint32_t mbuf_len;
 	int err = 0;
 
 	if (nb_desc != hw->rx_desc_cnt) {
@@ -264,8 +265,11 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	rxq->hw = hw;
 	rxq->ntfy_addr = &hw->db_bar2[rte_be_to_cpu_32(hw->irq_dbs[rxq->ntfy_id].id)];
 
-	rxq->rx_buf_len =
-		rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM;
+	mbuf_len =
+		rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len =
+		RTE_MIN((uint16_t)GVE_RX_MAX_BUF_SIZE_DQO,
+			RTE_ALIGN_FLOOR(mbuf_len, GVE_RX_BUF_ALIGN_DQO));
 
 	/* Allocate software ring */
 	rxq->sw_ring = rte_zmalloc_socket("gve rx sw ring",