[v5,1/5] net/virtio: add initial RSS support

Message ID 20211018102045.255831-2-maxime.coquelin@redhat.com (mailing list archive)
State Superseded, archived
Delegated to: Maxime Coquelin
Series: Virtio PMD RSS support & RSS fixes

Checks

Context Check Description
ci/checkpatch success coding style OK

Commit Message

Maxime Coquelin Oct. 18, 2021, 10:20 a.m. UTC
  Provide the capability to update the hash key, hash types
and RETA table on the fly (without needing to stop/start
the device). However, the key length and the number of RETA
entries are fixed to 40B and 128 entries respectively. This
is done in order to simplify the design, but may be
revisited later as the Virtio spec provides this
flexibility.

Note that only VIRTIO_NET_F_RSS support is implemented,
VIRTIO_NET_F_HASH_REPORT, which would enable reporting the
packet RSS hash calculated by the device into mbuf.rss, is
not yet supported.

Regarding the default RSS configuration, it has been
chosen to use the default Intel ixgbe key as default key,
and default RETA is a simple modulo between the hash and
the number of Rx queues.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 doc/guides/nics/features/virtio.ini    |   3 +
 doc/guides/nics/virtio.rst             |   3 +
 doc/guides/rel_notes/release_21_11.rst |   6 +
 drivers/net/virtio/virtio.h            |  31 +-
 drivers/net/virtio/virtio_ethdev.c     | 394 ++++++++++++++++++++++++-
 drivers/net/virtio/virtio_ethdev.h     |   3 +-
 drivers/net/virtio/virtqueue.h         |  25 ++
 7 files changed, 456 insertions(+), 9 deletions(-)
  

Comments

Chenbo Xia Oct. 19, 2021, 4:47 a.m. UTC | #1
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Monday, October 18, 2021 6:21 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>; amorenoz@redhat.com;
> david.marchand@redhat.com; andrew.rybchenko@oktetlabs.ru; Yigit, Ferruh
> <ferruh.yigit@intel.com>; michaelba@nvidia.com; viacheslavo@nvidia.com; Li,
> Xiaoyun <xiaoyun.li@intel.com>
> Cc: nelio.laranjeiro@6wind.com; yvugenfi@redhat.com; ybendito@redhat.com;
> Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v5 1/5] net/virtio: add initial RSS support
> 
> Provide the capability to update the hash key, hash types
> and RETA table on the fly (without needing to stop/start
> the device). However, the key length and the number of RETA
> entries are fixed to 40B and 128 entries respectively. This
> is done in order to simplify the design, but may be
> revisited later as the Virtio spec provides this
> flexibility.
> 
> Note that only VIRTIO_NET_F_RSS support is implemented,
> VIRTIO_NET_F_HASH_REPORT, which would enable reporting the
> packet RSS hash calculated by the device into mbuf.rss, is
> not yet supported.
> 
> Regarding the default RSS configuration, it has been
> chosen to use the default Intel ixgbe key as default key,
> and default RETA is a simple modulo between the hash and
> the number of Rx queues.
> 
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> 2.31.1

Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>

And I think the reported issue in patchwork is not related to
this series.
  
Andrew Rybchenko Oct. 19, 2021, 7:30 a.m. UTC | #2
On 10/18/21 1:20 PM, Maxime Coquelin wrote:
> Provide the capability to update the hash key, hash types
> and RETA table on the fly (without needing to stop/start
> the device). However, the key length and the number of RETA
> entries are fixed to 40B and 128 entries respectively. This
> is done in order to simplify the design, but may be
> revisited later as the Virtio spec provides this
> flexibility.
> 
> Note that only VIRTIO_NET_F_RSS support is implemented,
> VIRTIO_NET_F_HASH_REPORT, which would enable reporting the
> packet RSS hash calculated by the device into mbuf.rss, is
> not yet supported.
> 
> Regarding the default RSS configuration, it has been
> chosen to use the default Intel ixgbe key as default key,
> and default RETA is a simple modulo between the hash and
> the number of Rx queues.
> 
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>

See review notes below

> diff --git a/drivers/net/virtio/virtio.h b/drivers/net/virtio/virtio.h
> index e78b2e429e..7118e5d24c 100644
> --- a/drivers/net/virtio/virtio.h
> +++ b/drivers/net/virtio/virtio.h

[snip]

> @@ -100,6 +101,29 @@
>   */
>  #define VIRTIO_MAX_INDIRECT ((int)(rte_mem_page_size() / 16))
>  
> +/*  Virtio RSS hash types */
> +#define VIRTIO_NET_HASH_TYPE_IPV4	(1 << 0)
> +#define VIRTIO_NET_HASH_TYPE_TCPV4	(1 << 1)
> +#define VIRTIO_NET_HASH_TYPE_UDPV4	(1 << 2)
> +#define VIRTIO_NET_HASH_TYPE_IPV6	(1 << 3)
> +#define VIRTIO_NET_HASH_TYPE_TCPV6	(1 << 4)
> +#define VIRTIO_NET_HASH_TYPE_UDPV6	(1 << 5)
> +#define VIRTIO_NET_HASH_TYPE_IP_EX	(1 << 6)
> +#define VIRTIO_NET_HASH_TYPE_TCP_EX	(1 << 7)
> +#define VIRTIO_NET_HASH_TYPE_UDP_EX	(1 << 8)

I think it is a bit better to use RTE_BIT32() above.
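
For illustration, assuming RTE_BIT32() from rte_bitops.h, the defines
above would then read something like:

/* Virtio RSS hash types, expressed with RTE_BIT32() (sketch only) */
#define VIRTIO_NET_HASH_TYPE_IPV4	RTE_BIT32(0)
#define VIRTIO_NET_HASH_TYPE_TCPV4	RTE_BIT32(1)
#define VIRTIO_NET_HASH_TYPE_UDPV4	RTE_BIT32(2)
#define VIRTIO_NET_HASH_TYPE_IPV6	RTE_BIT32(3)
#define VIRTIO_NET_HASH_TYPE_TCPV6	RTE_BIT32(4)
#define VIRTIO_NET_HASH_TYPE_UDPV6	RTE_BIT32(5)
#define VIRTIO_NET_HASH_TYPE_IP_EX	RTE_BIT32(6)
#define VIRTIO_NET_HASH_TYPE_TCP_EX	RTE_BIT32(7)
#define VIRTIO_NET_HASH_TYPE_UDP_EX	RTE_BIT32(8)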

[snip]

> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index aff791fbd0..a8e2bf3e1a 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c

[snip]

> @@ -347,20 +357,51 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
>  }
>  
>  static int
> -virtio_set_multiple_queues(struct rte_eth_dev *dev, uint16_t nb_queues)
> +virtio_set_multiple_queues_rss(struct rte_eth_dev *dev, uint16_t nb_queues)
>  {
>  	struct virtio_hw *hw = dev->data->dev_private;
>  	struct virtio_pmd_ctrl ctrl;
> -	int dlen[1];
> +	struct virtio_net_ctrl_rss rss;
> +	int dlen, ret;
> +
> +	rss.hash_types = hw->rss_hash_types & VIRTIO_NET_HASH_TYPE_MASK;

RTE_BUILD_BUG_ON(!RTE_IS_POWER_OF_2(VIRTIO_NET_RSS_RETA_SIZE));

> +	rss.indirection_table_mask = VIRTIO_NET_RSS_RETA_SIZE - 1;

It relies on the fact that the RETA size is a power of 2.
So, I suggest adding the above check.

> +	rss.unclassified_queue = 0;
> +	memcpy(rss.indirection_table, hw->rss_reta, VIRTIO_NET_RSS_RETA_SIZE * sizeof(uint16_t));
> +	rss.max_tx_vq = nb_queues;

Is it guaranteed that the driver is configured with an equal number
of Rx and Tx queues? Or is it not a problem otherwise?

[snip]

> +static int
> +virtio_dev_get_rss_config(struct virtio_hw *hw, uint32_t *rss_hash_types)
> +{
> +	struct virtio_net_config local_config;
> +	struct virtio_net_config *config = &local_config;
> +
> +	virtio_read_dev_config(hw,
> +			offsetof(struct virtio_net_config, rss_max_key_size),
> +			&config->rss_max_key_size,
> +			sizeof(config->rss_max_key_size));
> +	if (config->rss_max_key_size < VIRTIO_NET_RSS_KEY_SIZE) {

Shouldn't it be
config->rss_max_key_size != VIRTIO_NET_RSS_KEY_SIZE ?
Or do we just ensure that the HW supports at least the required key
size and rely on the device rejecting the set request later
if our size is in fact not supported?

> +		PMD_INIT_LOG(ERR, "Invalid device RSS max key size (%u)",
> +				config->rss_max_key_size);
> +		return -EINVAL;
> +	}
> +
> +	virtio_read_dev_config(hw,
> +			offsetof(struct virtio_net_config,
> +				rss_max_indirection_table_length),
> +			&config->rss_max_indirection_table_length,
> +			sizeof(config->rss_max_indirection_table_length));
> +	if (config->rss_max_indirection_table_length < VIRTIO_NET_RSS_RETA_SIZE) {

Same question here.

> +		PMD_INIT_LOG(ERR, "Invalid device RSS max reta size (%u)",
> +				config->rss_max_indirection_table_length);
> +		return -EINVAL;
> +	}
> +
> +	virtio_read_dev_config(hw,
> +			offsetof(struct virtio_net_config, supported_hash_types),
> +			&config->supported_hash_types,
> +			sizeof(config->supported_hash_types));
> +	if ((config->supported_hash_types & VIRTIO_NET_HASH_TYPE_MASK) == 0) {
> +		PMD_INIT_LOG(ERR, "Invalid device RSS hash types (0x%x)",
> +				config->supported_hash_types);
> +		return -EINVAL;
> +	}
> +
> +	*rss_hash_types = config->supported_hash_types & VIRTIO_NET_HASH_TYPE_MASK;
> +
> +	PMD_INIT_LOG(DEBUG, "Device RSS config:");
> +	PMD_INIT_LOG(DEBUG, "\t-Max key size: %u", config->rss_max_key_size);
> +	PMD_INIT_LOG(DEBUG, "\t-Max reta size: %u", config->rss_max_indirection_table_length);
> +	PMD_INIT_LOG(DEBUG, "\t-Supported hash types: 0x%x", *rss_hash_types);
> +
> +	return 0;
> +}
> +
> +static int
> +virtio_dev_rss_hash_update(struct rte_eth_dev *dev,
> +		struct rte_eth_rss_conf *rss_conf)
> +{
> +	struct virtio_hw *hw = dev->data->dev_private;
> +	uint16_t nb_queues;
> +
> +	if (!virtio_with_feature(hw, VIRTIO_NET_F_RSS))
> +		return -ENOTSUP;
> +
> +	if (rss_conf->rss_hf & ~virtio_to_ethdev_rss_offloads(VIRTIO_NET_HASH_TYPE_MASK))
> +		return -EINVAL;
> +
> +	hw->rss_hash_types = ethdev_to_virtio_rss_offloads(rss_conf->rss_hf);
> +
> +	if (rss_conf->rss_key && rss_conf->rss_key_len) {
> +		if (rss_conf->rss_key_len != VIRTIO_NET_RSS_KEY_SIZE) {
> +			PMD_INIT_LOG(ERR, "Driver only supports %u RSS key length",
> +					VIRTIO_NET_RSS_KEY_SIZE);
> +			return -EINVAL;
> +		}
> +		memcpy(hw->rss_key, rss_conf->rss_key, VIRTIO_NET_RSS_KEY_SIZE);
> +	}
> +
> +	nb_queues = RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues);
> +	return virtio_set_multiple_queues_rss(dev, nb_queues);

Don't we need to roll back the data in hw in the case of failure?

[snip]

> +static int virtio_dev_rss_reta_update(struct rte_eth_dev *dev,
> +			 struct rte_eth_rss_reta_entry64 *reta_conf,
> +			 uint16_t reta_size)
> +{
> +	struct virtio_hw *hw = dev->data->dev_private;
> +	uint16_t nb_queues;
> +	int idx, pos, i;
> +
> +	if (!virtio_with_feature(hw, VIRTIO_NET_F_RSS))
> +		return -ENOTSUP;
> +
> +	if (reta_size != VIRTIO_NET_RSS_RETA_SIZE)
> +		return -EINVAL;
> +
> +	for (i = 0; i < reta_size; i++) {
> +		idx = i / RTE_RETA_GROUP_SIZE;
> +		pos = i % RTE_RETA_GROUP_SIZE;
> +
> +		if (((reta_conf[idx].mask >> pos) & 0x1) == 0)
> +			continue;
> +
> +		hw->rss_reta[i] = reta_conf[idx].reta[pos];
> +	}
> +
> +	nb_queues = RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues);
> +	return virtio_set_multiple_queues_rss(dev, nb_queues);

Question about rollback in the case of failure stands here as
well.

> +}

[snip]


> +static int
> +virtio_dev_rss_init(struct rte_eth_dev *eth_dev)
> +{
> +	struct virtio_hw *hw = eth_dev->data->dev_private;
> +	uint16_t nb_rx_queues = eth_dev->data->nb_rx_queues;
> +	struct rte_eth_rss_conf *rss_conf;
> +	int ret, i;
> +
> +	rss_conf = &eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
> +
> +	ret = virtio_dev_get_rss_config(hw, &hw->rss_hash_types);
> +	if (ret)
> +		return ret;
> +
> +	if (rss_conf->rss_hf) {
> +		/*  Ensure requested hash types are supported by the device */
> +		if (rss_conf->rss_hf & ~virtio_to_ethdev_rss_offloads(hw->rss_hash_types))
> +			return -EINVAL;
> +
> +		hw->rss_hash_types = ethdev_to_virtio_rss_offloads(rss_conf->rss_hf);
> +	}
> +
> +	if (!hw->rss_key) {
> +		/* Setup default RSS key if not already setup by the user */
> +		hw->rss_key = rte_malloc_socket("rss_key",
> +				VIRTIO_NET_RSS_KEY_SIZE, 0,
> +				eth_dev->device->numa_node);
> +		if (!hw->rss_key) {
> +			PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
> +			return -1;
> +		}
> +
> +		if (rss_conf->rss_key && rss_conf->rss_key_len) {
> +			if (rss_conf->rss_key_len != VIRTIO_NET_RSS_KEY_SIZE) {
> +				PMD_INIT_LOG(ERR, "Driver only supports %u RSS key length",
> +						VIRTIO_NET_RSS_KEY_SIZE);
> +				return -EINVAL;
> +			}
> +			memcpy(hw->rss_key, rss_conf->rss_key, VIRTIO_NET_RSS_KEY_SIZE);
> +		} else {
> +			memcpy(hw->rss_key, rss_intel_key, VIRTIO_NET_RSS_KEY_SIZE);
> +		}

The above if should work in the reconfigure case as well, when
the array is already allocated.

> +	}
> +
> +	if (!hw->rss_reta) {
> +		/* Setup default RSS reta if not already setup by the user */
> +		hw->rss_reta = rte_malloc_socket("rss_reta",
> +				VIRTIO_NET_RSS_RETA_SIZE * sizeof(uint16_t), 0,
> +				eth_dev->device->numa_node);
> +		if (!hw->rss_reta) {
> +			PMD_INIT_LOG(ERR, "Failed to allocate RSS reta");
> +			return -1;
> +		}
> +		for (i = 0; i < VIRTIO_NET_RSS_RETA_SIZE; i++)
> +			hw->rss_reta[i] = i % nb_rx_queues;

How should it work in the case of reconfigure if the number of Rx
queues changes?

Also I'm wondering how it works...
virtio_dev_rss_init() is called from eth_virtio_dev_init() as
well when the number of Rx queues is zero. I guess the reason is
VIRTIO_PMD_DEFAULT_GUEST_FEATURES, but it looks very fragile.

> +	}
> +
> +	return 0;
> +}
> +

[snip]

> @@ -2107,6 +2465,9 @@ virtio_dev_configure(struct rte_eth_dev *dev)
>  			return ret;
>  	}
>  
> +	if (rxmode->mq_mode == ETH_MQ_RX_RSS)
> +		req_features |= (1ULL << VIRTIO_NET_F_RSS);

RTE_BIT64
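
i.e. something along the lines of (sketch):

	if (rxmode->mq_mode == ETH_MQ_RX_RSS)
		req_features |= RTE_BIT64(VIRTIO_NET_F_RSS);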

> +
>  	if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
>  	    (rxmode->max_rx_pkt_len > hw->max_mtu + ether_hdr_len))
>  		req_features &= ~(1ULL << VIRTIO_NET_F_MTU);

[snip]

> @@ -2578,6 +2946,18 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  		(1ULL << VIRTIO_NET_F_HOST_TSO6);
>  	if ((host_features & tso_mask) == tso_mask)
>  		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
> +	if (host_features & (1ULL << VIRTIO_NET_F_RSS)) {

RTE_BIT64

> +		virtio_dev_get_rss_config(hw, &rss_hash_types);
> +		dev_info->hash_key_size = VIRTIO_NET_RSS_KEY_SIZE;
> +		dev_info->reta_size = VIRTIO_NET_RSS_RETA_SIZE;
> +		dev_info->flow_type_rss_offloads =
> +			virtio_to_ethdev_rss_offloads(rss_hash_types);
> +	} else {
> +		dev_info->hash_key_size = 0;
> +		dev_info->reta_size = 0;
> +		dev_info->flow_type_rss_offloads = 0;
> +	}
> +
>  
>  	if (host_features & (1ULL << VIRTIO_F_RING_PACKED)) {
>  		/*
> diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
> index 40be484218..c08f382791 100644
> --- a/drivers/net/virtio/virtio_ethdev.h
> +++ b/drivers/net/virtio/virtio_ethdev.h
> @@ -45,7 +45,8 @@
>  	 1u << VIRTIO_NET_F_GUEST_TSO6     |	\
>  	 1u << VIRTIO_NET_F_CSUM           |	\
>  	 1u << VIRTIO_NET_F_HOST_TSO4      |	\
> -	 1u << VIRTIO_NET_F_HOST_TSO6)
> +	 1u << VIRTIO_NET_F_HOST_TSO6      |	\
> +	 1ULL << VIRTIO_NET_F_RSS)

IMHO it should be converted to use RTE_BIT64().
Yes, it's a separate story, but right now it looks
confusing to see 1u and 1ULL mixed above.

>  
>  extern const struct eth_dev_ops virtio_user_secondary_eth_dev_ops;
>  
[

[snip]
  
Maxime Coquelin Oct. 19, 2021, 7:31 a.m. UTC | #3
On 10/19/21 06:47, Xia, Chenbo wrote:
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Sent: Monday, October 18, 2021 6:21 PM
>> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>; amorenoz@redhat.com;
>> david.marchand@redhat.com; andrew.rybchenko@oktetlabs.ru; Yigit, Ferruh
>> <ferruh.yigit@intel.com>; michaelba@nvidia.com; viacheslavo@nvidia.com; Li,
>> Xiaoyun <xiaoyun.li@intel.com>
>> Cc: nelio.laranjeiro@6wind.com; yvugenfi@redhat.com; ybendito@redhat.com;
>> Maxime Coquelin <maxime.coquelin@redhat.com>
>> Subject: [PATCH v5 1/5] net/virtio: add initial RSS support
>>
>> Provide the capability to update the hash key, hash types
>> and RETA table on the fly (without needing to stop/start
>> the device). However, the key length and the number of RETA
>> entries are fixed to 40B and 128 entries respectively. This
>> is done in order to simplify the design, but may be
>> revisited later as the Virtio spec provides this
>> flexibility.
>>
>> Note that only VIRTIO_NET_F_RSS support is implemented,
>> VIRTIO_NET_F_HASH_REPORT, which would enable reporting the
>> packet RSS hash calculated by the device into mbuf.rss, is
>> not yet supported.
>>
>> Regarding the default RSS configuration, it has been
>> chosen to use the default Intel ixgbe key as default key,
>> and default RETA is a simple modulo between the hash and
>> the number of Rx queues.
>>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> 2.31.1
> 
> Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
> 
> And I think the reported issue in patchwork is not related with
> this series.
> 

Thanks, I agree this does not seem related.

Maxime
  
Maxime Coquelin Oct. 19, 2021, 9:22 a.m. UTC | #4
Hi Andrew,

On 10/19/21 09:30, Andrew Rybchenko wrote:
> On 10/18/21 1:20 PM, Maxime Coquelin wrote:
>> Provide the capability to update the hash key, hash types
>> and RETA table on the fly (without needing to stop/start
>> the device). However, the key length and the number of RETA
>> entries are fixed to 40B and 128 entries respectively. This
>> is done in order to simplify the design, but may be
>> revisited later as the Virtio spec provides this
>> flexibility.
>>
>> Note that only VIRTIO_NET_F_RSS support is implemented,
>> VIRTIO_NET_F_HASH_REPORT, which would enable reporting the
>> packet RSS hash calculated by the device into mbuf.rss, is
>> not yet supported.
>>
>> Regarding the default RSS configuration, it has been
>> chosen to use the default Intel ixgbe key as default key,
>> and default RETA is a simple modulo between the hash and
>> the number of Rx queues.
>>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> 
> See review notes below
> 
>> diff --git a/drivers/net/virtio/virtio.h b/drivers/net/virtio/virtio.h
>> index e78b2e429e..7118e5d24c 100644
>> --- a/drivers/net/virtio/virtio.h
>> +++ b/drivers/net/virtio/virtio.h
> 
> [snip]
> 
>> @@ -100,6 +101,29 @@
>>    */
>>   #define VIRTIO_MAX_INDIRECT ((int)(rte_mem_page_size() / 16))
>>   
>> +/*  Virtio RSS hash types */
>> +#define VIRTIO_NET_HASH_TYPE_IPV4	(1 << 0)
>> +#define VIRTIO_NET_HASH_TYPE_TCPV4	(1 << 1)
>> +#define VIRTIO_NET_HASH_TYPE_UDPV4	(1 << 2)
>> +#define VIRTIO_NET_HASH_TYPE_IPV6	(1 << 3)
>> +#define VIRTIO_NET_HASH_TYPE_TCPV6	(1 << 4)
>> +#define VIRTIO_NET_HASH_TYPE_UDPV6	(1 << 5)
>> +#define VIRTIO_NET_HASH_TYPE_IP_EX	(1 << 6)
>> +#define VIRTIO_NET_HASH_TYPE_TCP_EX	(1 << 7)
>> +#define VIRTIO_NET_HASH_TYPE_UDP_EX	(1 << 8)
> 
> I think it is a bit better to use RTE_BIT32() above.

I did not know this macro existed; I will use it in the next revision.

> [snip]
> 
>> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
>> index aff791fbd0..a8e2bf3e1a 100644
>> --- a/drivers/net/virtio/virtio_ethdev.c
>> +++ b/drivers/net/virtio/virtio_ethdev.c
> 
> [snip]
> 
>> @@ -347,20 +357,51 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
>>   }
>>   
>>   static int
>> -virtio_set_multiple_queues(struct rte_eth_dev *dev, uint16_t nb_queues)
>> +virtio_set_multiple_queues_rss(struct rte_eth_dev *dev, uint16_t nb_queues)
>>   {
>>   	struct virtio_hw *hw = dev->data->dev_private;
>>   	struct virtio_pmd_ctrl ctrl;
>> -	int dlen[1];
>> +	struct virtio_net_ctrl_rss rss;
>> +	int dlen, ret;
>> +
>> +	rss.hash_types = hw->rss_hash_types & VIRTIO_NET_HASH_TYPE_MASK;
> 
> RTE_BUILD_BUG_ON(!RTE_IS_POWER_OF_2(VIRTIO_NET_RSS_RETA_SIZE));

Makes sense, it indeed relies on the reta size being a power of 2,
which the spec requires.

>> +	rss.indirection_table_mask = VIRTIO_NET_RSS_RETA_SIZE - 1;
> 
> It relies on the fact that device is power of 2.
> So, suggest to add above check.
> 
>> +	rss.unclassified_queue = 0;
>> +	memcpy(rss.indirection_table, hw->rss_reta, VIRTIO_NET_RSS_RETA_SIZE * sizeof(uint16_t));
>> +	rss.max_tx_vq = nb_queues;
> 
> Is it guaranteed that driver is configured with equal number
> of Rx and Tx queues? Or is it not a problem otherwise?

Virtio networking devices work with queue pairs.

> [snip]
> 
>> +static int
>> +virtio_dev_get_rss_config(struct virtio_hw *hw, uint32_t *rss_hash_types)
>> +{
>> +	struct virtio_net_config local_config;
>> +	struct virtio_net_config *config = &local_config;
>> +
>> +	virtio_read_dev_config(hw,
>> +			offsetof(struct virtio_net_config, rss_max_key_size),
>> +			&config->rss_max_key_size,
>> +			sizeof(config->rss_max_key_size));
>> +	if (config->rss_max_key_size < VIRTIO_NET_RSS_KEY_SIZE) {
> 
> Shouldn't it be
> config->rss_max_key_size != VIRTIO_NET_RSS_KEY_SIZE ?

I don't think so.

> Or do we just ensure that HW supports at least required key
> size and rely on the fact that it will reject set request later
> if our size is not supported in fact?

Exactly. The device advertises the max key length it supports in its PCI
config space. Then, the driver specifies the key length it uses when
sending the virtio_net_ctrl_rss message on the control queue.

If we later try to set a different key via .rss_hash_update(), its size
is checked there (see virtio_dev_rss_hash_update()).

>> +		PMD_INIT_LOG(ERR, "Invalid device RSS max key size (%u)",
>> +				config->rss_max_key_size);
>> +		return -EINVAL;
>> +	}
>> +
>> +	virtio_read_dev_config(hw,
>> +			offsetof(struct virtio_net_config,
>> +				rss_max_indirection_table_length),
>> +			&config->rss_max_indirection_table_length,
>> +			sizeof(config->rss_max_indirection_table_length));
>> +	if (config->rss_max_indirection_table_length < VIRTIO_NET_RSS_RETA_SIZE) {
> 
> Same question here.

Same answer: virtio_dev_rss_reta_update() ensures the table size is
VIRTIO_NET_RSS_RETA_SIZE.

>> +		PMD_INIT_LOG(ERR, "Invalid device RSS max reta size (%u)",
>> +				config->rss_max_indirection_table_length);
>> +		return -EINVAL;
>> +	}
>> +
>> +	virtio_read_dev_config(hw,
>> +			offsetof(struct virtio_net_config, supported_hash_types),
>> +			&config->supported_hash_types,
>> +			sizeof(config->supported_hash_types));
>> +	if ((config->supported_hash_types & VIRTIO_NET_HASH_TYPE_MASK) == 0) {
>> +		PMD_INIT_LOG(ERR, "Invalid device RSS hash types (0x%x)",
>> +				config->supported_hash_types);
>> +		return -EINVAL;
>> +	}
>> +
>> +	*rss_hash_types = config->supported_hash_types & VIRTIO_NET_HASH_TYPE_MASK;
>> +
>> +	PMD_INIT_LOG(DEBUG, "Device RSS config:");
>> +	PMD_INIT_LOG(DEBUG, "\t-Max key size: %u", config->rss_max_key_size);
>> +	PMD_INIT_LOG(DEBUG, "\t-Max reta size: %u", config->rss_max_indirection_table_length);
>> +	PMD_INIT_LOG(DEBUG, "\t-Supported hash types: 0x%x", *rss_hash_types);
>> +
>> +	return 0;
>> +}
>> +
>> +static int
>> +virtio_dev_rss_hash_update(struct rte_eth_dev *dev,
>> +		struct rte_eth_rss_conf *rss_conf)
>> +{
>> +	struct virtio_hw *hw = dev->data->dev_private;
>> +	uint16_t nb_queues;
>> +
>> +	if (!virtio_with_feature(hw, VIRTIO_NET_F_RSS))
>> +		return -ENOTSUP;
>> +
>> +	if (rss_conf->rss_hf & ~virtio_to_ethdev_rss_offloads(VIRTIO_NET_HASH_TYPE_MASK))
>> +		return -EINVAL;
>> +
>> +	hw->rss_hash_types = ethdev_to_virtio_rss_offloads(rss_conf->rss_hf);
>> +
>> +	if (rss_conf->rss_key && rss_conf->rss_key_len) {
>> +		if (rss_conf->rss_key_len != VIRTIO_NET_RSS_KEY_SIZE) {
>> +			PMD_INIT_LOG(ERR, "Driver only supports %u RSS key length",
>> +					VIRTIO_NET_RSS_KEY_SIZE);
>> +			return -EINVAL;
>> +		}
>> +		memcpy(hw->rss_key, rss_conf->rss_key, VIRTIO_NET_RSS_KEY_SIZE);
>> +	}
>> +
>> +	nb_queues = RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues);
>> +	return virtio_set_multiple_queues_rss(dev, nb_queues);
> 
> Don't we need to rollback data in hw in the case of failure?

Agree, it would be better.
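
For example, something like this untested sketch (reusing the names
from the patch) could restore the cached configuration when the
control queue command fails:

static int
virtio_dev_rss_hash_update(struct rte_eth_dev *dev,
		struct rte_eth_rss_conf *rss_conf)
{
	struct virtio_hw *hw = dev->data->dev_private;
	uint8_t old_rss_key[VIRTIO_NET_RSS_KEY_SIZE];
	uint32_t old_hash_types;
	uint16_t nb_queues;
	int ret;

	if (!virtio_with_feature(hw, VIRTIO_NET_F_RSS))
		return -ENOTSUP;

	if (rss_conf->rss_hf & ~virtio_to_ethdev_rss_offloads(VIRTIO_NET_HASH_TYPE_MASK))
		return -EINVAL;

	/* Save the current configuration so it can be restored on failure */
	old_hash_types = hw->rss_hash_types;
	memcpy(old_rss_key, hw->rss_key, VIRTIO_NET_RSS_KEY_SIZE);

	hw->rss_hash_types = ethdev_to_virtio_rss_offloads(rss_conf->rss_hf);

	if (rss_conf->rss_key && rss_conf->rss_key_len) {
		if (rss_conf->rss_key_len != VIRTIO_NET_RSS_KEY_SIZE) {
			PMD_INIT_LOG(ERR, "Driver only supports %u RSS key length",
					VIRTIO_NET_RSS_KEY_SIZE);
			ret = -EINVAL;
			goto restore;
		}
		memcpy(hw->rss_key, rss_conf->rss_key, VIRTIO_NET_RSS_KEY_SIZE);
	}

	nb_queues = RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues);
	ret = virtio_set_multiple_queues_rss(dev, nb_queues);
	if (ret < 0)
		goto restore;

	return 0;

restore:
	/* Roll back the cached key and hash types */
	hw->rss_hash_types = old_hash_types;
	memcpy(hw->rss_key, old_rss_key, VIRTIO_NET_RSS_KEY_SIZE);
	return ret;
}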

> [snip]
> 
>> +static int virtio_dev_rss_reta_update(struct rte_eth_dev *dev,
>> +			 struct rte_eth_rss_reta_entry64 *reta_conf,
>> +			 uint16_t reta_size)
>> +{
>> +	struct virtio_hw *hw = dev->data->dev_private;
>> +	uint16_t nb_queues;
>> +	int idx, pos, i;
>> +
>> +	if (!virtio_with_feature(hw, VIRTIO_NET_F_RSS))
>> +		return -ENOTSUP;
>> +
>> +	if (reta_size != VIRTIO_NET_RSS_RETA_SIZE)
>> +		return -EINVAL;
>> +
>> +	for (i = 0; i < reta_size; i++) {
>> +		idx = i / RTE_RETA_GROUP_SIZE;
>> +		pos = i % RTE_RETA_GROUP_SIZE;
>> +
>> +		if (((reta_conf[idx].mask >> pos) & 0x1) == 0)
>> +			continue;
>> +
>> +		hw->rss_reta[i] = reta_conf[idx].reta[pos];
>> +	}
>> +
>> +	nb_queues = RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues);
>> +	return virtio_set_multiple_queues_rss(dev, nb_queues);
> 
> Question about rollback in the case of failure stands here as
> well.
> 
>> +}

Yes.

> [snip]
> 
> 
>> +static int
>> +virtio_dev_rss_init(struct rte_eth_dev *eth_dev)
>> +{
>> +	struct virtio_hw *hw = eth_dev->data->dev_private;
>> +	uint16_t nb_rx_queues = eth_dev->data->nb_rx_queues;
>> +	struct rte_eth_rss_conf *rss_conf;
>> +	int ret, i;
>> +
>> +	rss_conf = &eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
>> +
>> +	ret = virtio_dev_get_rss_config(hw, &hw->rss_hash_types);
>> +	if (ret)
>> +		return ret;
>> +
>> +	if (rss_conf->rss_hf) {
>> +		/*  Ensure requested hash types are supported by the device */
>> +		if (rss_conf->rss_hf & ~virtio_to_ethdev_rss_offloads(hw->rss_hash_types))
>> +			return -EINVAL;
>> +
>> +		hw->rss_hash_types = ethdev_to_virtio_rss_offloads(rss_conf->rss_hf);
>> +	}
>> +
>> +	if (!hw->rss_key) {
>> +		/* Setup default RSS key if not already setup by the user */
>> +		hw->rss_key = rte_malloc_socket("rss_key",
>> +				VIRTIO_NET_RSS_KEY_SIZE, 0,
>> +				eth_dev->device->numa_node);
>> +		if (!hw->rss_key) {
>> +			PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
>> +			return -1;
>> +		}
>> +
>> +		if (rss_conf->rss_key && rss_conf->rss_key_len) {
>> +			if (rss_conf->rss_key_len != VIRTIO_NET_RSS_KEY_SIZE) {
>> +				PMD_INIT_LOG(ERR, "Driver only supports %u RSS key length",
>> +						VIRTIO_NET_RSS_KEY_SIZE);
>> +				return -EINVAL;
>> +			}
>> +			memcpy(hw->rss_key, rss_conf->rss_key, VIRTIO_NET_RSS_KEY_SIZE);
>> +		} else {
>> +			memcpy(hw->rss_key, rss_intel_key, VIRTIO_NET_RSS_KEY_SIZE);
>> +		}
> 
> Above if should work in the case of reconfigure as well when
> array is already allocated.

I'm not sure, because the rte_eth_dev_rss_hash_update() API does not update
eth_dev->data->dev_conf.rx_adv_conf.rss_conf, so we may lose an RSS key
the user has updated using this API. What do you think?

>> +	}
>> +
>> +	if (!hw->rss_reta) {
>> +		/* Setup default RSS reta if not already setup by the user */
>> +		hw->rss_reta = rte_malloc_socket("rss_reta",
>> +				VIRTIO_NET_RSS_RETA_SIZE * sizeof(uint16_t), 0,
>> +				eth_dev->device->numa_node);
>> +		if (!hw->rss_reta) {
>> +			PMD_INIT_LOG(ERR, "Failed to allocate RSS reta");
>> +			return -1;
>> +		}
>> +		for (i = 0; i < VIRTIO_NET_RSS_RETA_SIZE; i++)
>> +			hw->rss_reta[i] = i % nb_rx_queues;
> 
> How should it work in the case of reconfigure if a nubmer of Rx
> queue changes?

Hmm, good catch! I did not think about this, and so did not test it.

To avoid losing a user-provisioned reta in case of an unrelated change, maybe I
should save the number of Rx queues when setting the reta (here and in
virtio_dev_rss_reta_update), and perform a re-initialization if it is
different?
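
Roughly something like this (untested sketch; rss_rx_queues would be a
new field in struct virtio_hw remembering the Rx queue count used to
fill the default RETA):

	if (!hw->rss_reta) {
		hw->rss_reta = rte_malloc_socket("rss_reta",
				VIRTIO_NET_RSS_RETA_SIZE * sizeof(uint16_t), 0,
				eth_dev->device->numa_node);
		if (!hw->rss_reta) {
			PMD_INIT_LOG(ERR, "Failed to allocate RSS reta");
			return -ENOMEM;
		}
		hw->rss_rx_queues = 0;
	}

	/* Re-build the default RETA whenever the number of Rx queues
	 * changed since it was last filled (assumes nb_rx_queues != 0,
	 * see the check discussed later in this thread).
	 */
	if (hw->rss_rx_queues != nb_rx_queues) {
		for (i = 0; i < VIRTIO_NET_RSS_RETA_SIZE; i++)
			hw->rss_reta[i] = i % nb_rx_queues;
		hw->rss_rx_queues = nb_rx_queues;
	}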

> Also I'm wondering how it works...
> virtio_dev_rss_init() is called from eth_virtio_dev_init() as
> well when a number of Rx queues is zero. I guess the reason is
> VIRTIO_PMD_DEFAULT_GUEST_FEATURES, but it looks very fragile.

Yes, we only add VIRTIO_NET_F_RSS to the supported features in
virtio_dev_configure(), if Rx MQ mode is RSS. So virtio_dev_rss_init()
will never be called from eth_virtio_dev_init().

If I'm not mistaken, rte_eth_dev_configure() must be called, as stated
in the API documentation, so the feature will be negotiated if the
conditions are met.

>> +	}
>> +
>> +	return 0;
>> +}
>> +
> 
> [snip]
> 
>> @@ -2107,6 +2465,9 @@ virtio_dev_configure(struct rte_eth_dev *dev)
>>   			return ret;
>>   	}
>>   
>> +	if (rxmode->mq_mode == ETH_MQ_RX_RSS)
>> +		req_features |= (1ULL << VIRTIO_NET_F_RSS);
> 
> RTE_BIT64
> 
>> +
>>   	if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
>>   	    (rxmode->max_rx_pkt_len > hw->max_mtu + ether_hdr_len))
>>   		req_features &= ~(1ULL << VIRTIO_NET_F_MTU);
> 
> [snip]
> 
>> @@ -2578,6 +2946,18 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>>   		(1ULL << VIRTIO_NET_F_HOST_TSO6);
>>   	if ((host_features & tso_mask) == tso_mask)
>>   		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
>> +	if (host_features & (1ULL << VIRTIO_NET_F_RSS)) {
> 
> RTE_BIT64
> 
>> +		virtio_dev_get_rss_config(hw, &rss_hash_types);
>> +		dev_info->hash_key_size = VIRTIO_NET_RSS_KEY_SIZE;
>> +		dev_info->reta_size = VIRTIO_NET_RSS_RETA_SIZE;
>> +		dev_info->flow_type_rss_offloads =
>> +			virtio_to_ethdev_rss_offloads(rss_hash_types);
>> +	} else {
>> +		dev_info->hash_key_size = 0;
>> +		dev_info->reta_size = 0;
>> +		dev_info->flow_type_rss_offloads = 0;
>> +	}
>> +
>>   
>>   	if (host_features & (1ULL << VIRTIO_F_RING_PACKED)) {
>>   		/*
>> diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
>> index 40be484218..c08f382791 100644
>> --- a/drivers/net/virtio/virtio_ethdev.h
>> +++ b/drivers/net/virtio/virtio_ethdev.h
>> @@ -45,7 +45,8 @@
>>   	 1u << VIRTIO_NET_F_GUEST_TSO6     |	\
>>   	 1u << VIRTIO_NET_F_CSUM           |	\
>>   	 1u << VIRTIO_NET_F_HOST_TSO4      |	\
>> -	 1u << VIRTIO_NET_F_HOST_TSO6)
>> +	 1u << VIRTIO_NET_F_HOST_TSO6      |	\
>> +	 1ULL << VIRTIO_NET_F_RSS)
> 
> IMHO it should be converted to use RTE_BIT64().
> Yes, separate story, but right now it looks
> confusing to see 1u and 1ULL above.

Ok, I'll add that to my Todo list.

Thanks for the review,
Maxime

>>   
>>   extern const struct eth_dev_ops virtio_user_secondary_eth_dev_ops;
>>   
> [
> 
> [snip]
>
  
Andrew Rybchenko Oct. 19, 2021, 9:37 a.m. UTC | #5
Hi Maxime,

On 10/19/21 12:22 PM, Maxime Coquelin wrote:
> Hi Andrew,
> 
> On 10/19/21 09:30, Andrew Rybchenko wrote:
>> On 10/18/21 1:20 PM, Maxime Coquelin wrote:
>>> Provide the capability to update the hash key, hash types
>>> and RETA table on the fly (without needing to stop/start
>>> the device). However, the key length and the number of RETA
>>> entries are fixed to 40B and 128 entries respectively. This
>>> is done in order to simplify the design, but may be
>>> revisited later as the Virtio spec provides this
>>> flexibility.
>>>
>>> Note that only VIRTIO_NET_F_RSS support is implemented,
>>> VIRTIO_NET_F_HASH_REPORT, which would enable reporting the
>>> packet RSS hash calculated by the device into mbuf.rss, is
>>> not yet supported.
>>>
>>> Regarding the default RSS configuration, it has been
>>> chosen to use the default Intel ixgbe key as default key,
>>> and default RETA is a simple modulo between the hash and
>>> the number of Rx queues.
>>>
>>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>

[snip]

>>> +    rss.unclassified_queue = 0;
>>> +    memcpy(rss.indirection_table, hw->rss_reta,
>>> VIRTIO_NET_RSS_RETA_SIZE * sizeof(uint16_t));
>>> +    rss.max_tx_vq = nb_queues;
>>
>> Is it guaranteed that driver is configured with equal number
>> of Rx and Tx queues? Or is it not a problem otherwise?
> 
> Virtio networking devices works with queue pairs.

But it seems to me that I can still configure just 1 Rx queue
and many Tx queues, i.e. non-equal numbers.
The line is suspicious since I'd expect nb_queues to be
the number of Rx queues in this function, but we set the max_tx_vq
count here.

>>> +static int
>>> +virtio_dev_rss_init(struct rte_eth_dev *eth_dev)
>>> +{
>>> +    struct virtio_hw *hw = eth_dev->data->dev_private;
>>> +    uint16_t nb_rx_queues = eth_dev->data->nb_rx_queues;
>>> +    struct rte_eth_rss_conf *rss_conf;
>>> +    int ret, i;
>>> +
>>> +    rss_conf = &eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
>>> +
>>> +    ret = virtio_dev_get_rss_config(hw, &hw->rss_hash_types);
>>> +    if (ret)
>>> +        return ret;
>>> +
>>> +    if (rss_conf->rss_hf) {
>>> +        /*  Ensure requested hash types are supported by the device */
>>> +        if (rss_conf->rss_hf &
>>> ~virtio_to_ethdev_rss_offloads(hw->rss_hash_types))
>>> +            return -EINVAL;
>>> +
>>> +        hw->rss_hash_types =
>>> ethdev_to_virtio_rss_offloads(rss_conf->rss_hf);
>>> +    }
>>> +
>>> +    if (!hw->rss_key) {
>>> +        /* Setup default RSS key if not already setup by the user */
>>> +        hw->rss_key = rte_malloc_socket("rss_key",
>>> +                VIRTIO_NET_RSS_KEY_SIZE, 0,
>>> +                eth_dev->device->numa_node);
>>> +        if (!hw->rss_key) {
>>> +            PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
>>> +            return -1;
>>> +        }
>>> +
>>> +        if (rss_conf->rss_key && rss_conf->rss_key_len) {
>>> +            if (rss_conf->rss_key_len != VIRTIO_NET_RSS_KEY_SIZE) {
>>> +                PMD_INIT_LOG(ERR, "Driver only supports %u RSS key
>>> length",
>>> +                        VIRTIO_NET_RSS_KEY_SIZE);
>>> +                return -EINVAL;
>>> +            }
>>> +            memcpy(hw->rss_key, rss_conf->rss_key,
>>> VIRTIO_NET_RSS_KEY_SIZE);
>>> +        } else {
>>> +            memcpy(hw->rss_key, rss_intel_key,
>>> VIRTIO_NET_RSS_KEY_SIZE);
>>> +        }
>>
>> Above if should work in the case of reconfigure as well when
>> array is already allocated.
> 
> I'm not sure, because rte_eth_dev_rss_hash_update() API does not update
> eth_dev->data->dev_conf.rx_adv_conf.rss_conf, so we may lose the RSS key
> the user would have updated using this API. What do you think?

I think it should be used correctly by the
application. I.e. if the application reconfigures and
provides the same or a new key, it does not matter: the key should be
applied. The key can be part of the configure request, and we
should not ignore it in the case of reconfigure.

Yes, I agree that it is a bit tricky, but it is still the right
logic.

>>> +    }
>>> +
>>> +    if (!hw->rss_reta) {
>>> +        /* Setup default RSS reta if not already setup by the user */
>>> +        hw->rss_reta = rte_malloc_socket("rss_reta",
>>> +                VIRTIO_NET_RSS_RETA_SIZE * sizeof(uint16_t), 0,
>>> +                eth_dev->device->numa_node);
>>> +        if (!hw->rss_reta) {
>>> +            PMD_INIT_LOG(ERR, "Failed to allocate RSS reta");
>>> +            return -1;
>>> +        }
>>> +        for (i = 0; i < VIRTIO_NET_RSS_RETA_SIZE; i++)
>>> +            hw->rss_reta[i] = i % nb_rx_queues;
>>
>> How should it work in the case of reconfigure if a nubmer of Rx
>> queue changes?
> 
> Hmm, good catch! I did not think about this, and so did not test it.
> 
> Not to lose user-provisionned reta in case of unrelated change, maybe I
> should save the number of Rx queues when setting the reta (here and in
> virtio_dev_rss_reta_update), and perform a re-initialization if it is
> different?

Yes, it is much harder than the RSS key case since the data are not
provided in dev_conf explicitly, so we have to guess here what
to do. The simple solution is to reinitialize if the number of RxQs
changes. I'd go this way.

>> Also I'm wondering how it works...
>> virtio_dev_rss_init() is called from eth_virtio_dev_init() as
>> well when a number of Rx queues is zero. I guess the reason is
>> VIRTIO_PMD_DEFAULT_GUEST_FEATURES, but it looks very fragile.
> 
> Yes, we only add VIRTIO_NET_F_RSS to the supported features in
> virtio_dev_configure(), if Rx MQ mode is RSS. So virtio_dev_rss_init()
> will never be called from eth_virtio_dev_init().
> 
> If I'm not mistaken, rte_eth_dev_configure() must be called it is stated
> in the API documentation, so it will be negotiated if conditions are
> met.

As far as I know we can configure an ethdev with zero
Rx queues but a non-zero number of Tx queues, e.g. the traffic
generator use case.

Division by zero is a guaranteed crash, so I'd handle
it explicitly in any case. Just do not try to rely
on other checks far away.

Andrew.
  
Maxime Coquelin Oct. 27, 2021, 10:55 a.m. UTC | #6
Hi,

On 10/19/21 11:37, Andrew Rybchenko wrote:
> Hi Maxime,
> 
> On 10/19/21 12:22 PM, Maxime Coquelin wrote:
>> Hi Andrew,
>>
>> On 10/19/21 09:30, Andrew Rybchenko wrote:
>>> On 10/18/21 1:20 PM, Maxime Coquelin wrote:
>>>> Provide the capability to update the hash key, hash types
>>>> and RETA table on the fly (without needing to stop/start
>>>> the device). However, the key length and the number of RETA
>>>> entries are fixed to 40B and 128 entries respectively. This
>>>> is done in order to simplify the design, but may be
>>>> revisited later as the Virtio spec provides this
>>>> flexibility.
>>>>
>>>> Note that only VIRTIO_NET_F_RSS support is implemented,
>>>> VIRTIO_NET_F_HASH_REPORT, which would enable reporting the
>>>> packet RSS hash calculated by the device into mbuf.rss, is
>>>> not yet supported.
>>>>
>>>> Regarding the default RSS configuration, it has been
>>>> chosen to use the default Intel ixgbe key as default key,
>>>> and default RETA is a simple modulo between the hash and
>>>> the number of Rx queues.
>>>>
>>>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> 
> [snip]
> 
>>>> +    rss.unclassified_queue = 0;
>>>> +    memcpy(rss.indirection_table, hw->rss_reta,
>>>> VIRTIO_NET_RSS_RETA_SIZE * sizeof(uint16_t));
>>>> +    rss.max_tx_vq = nb_queues;
>>>
>>> Is it guaranteed that driver is configured with equal number
>>> of Rx and Tx queues? Or is it not a problem otherwise?
>>
>> Virtio networking devices works with queue pairs.
> 
> But it seems to me that I still can configure just 1 Rx queue
> and many Tx queues. Basically just non equal.
> The line is suspicious since I'd expect nb_queues to be
> a number of Rx queues in the function, but we set max_tx_vq
> count here.

The Virtio spec says:
"
A driver sets max_tx_vq to inform a device how many transmit
virtqueues it may use (transmitq1. . .transmitq max_tx_vq).
"

But looking at the QEMU side, its value is interpreted as the number of
queue pairs set up by the driver, the same way virtqueue_pairs of
struct virtio_net_ctrl_mq is handled when RSS is not supported.

In this patch we are compatible with what is done in Qemu, and what is
done for multiqueue when RSS is not enabled.

I don't get why the spec talks about transmit queues, Yan & Yuri, any
idea?

>>>> +static int
>>>> +virtio_dev_rss_init(struct rte_eth_dev *eth_dev)
>>>> +{
>>>> +    struct virtio_hw *hw = eth_dev->data->dev_private;
>>>> +    uint16_t nb_rx_queues = eth_dev->data->nb_rx_queues;
>>>> +    struct rte_eth_rss_conf *rss_conf;
>>>> +    int ret, i;
>>>> +
>>>> +    rss_conf = &eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
>>>> +
>>>> +    ret = virtio_dev_get_rss_config(hw, &hw->rss_hash_types);
>>>> +    if (ret)
>>>> +        return ret;
>>>> +
>>>> +    if (rss_conf->rss_hf) {
>>>> +        /*  Ensure requested hash types are supported by the device */
>>>> +        if (rss_conf->rss_hf &
>>>> ~virtio_to_ethdev_rss_offloads(hw->rss_hash_types))
>>>> +            return -EINVAL;
>>>> +
>>>> +        hw->rss_hash_types =
>>>> ethdev_to_virtio_rss_offloads(rss_conf->rss_hf);
>>>> +    }
>>>> +
>>>> +    if (!hw->rss_key) {
>>>> +        /* Setup default RSS key if not already setup by the user */
>>>> +        hw->rss_key = rte_malloc_socket("rss_key",
>>>> +                VIRTIO_NET_RSS_KEY_SIZE, 0,
>>>> +                eth_dev->device->numa_node);
>>>> +        if (!hw->rss_key) {
>>>> +            PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
>>>> +            return -1;
>>>> +        }
>>>> +
>>>> +        if (rss_conf->rss_key && rss_conf->rss_key_len) {
>>>> +            if (rss_conf->rss_key_len != VIRTIO_NET_RSS_KEY_SIZE) {
>>>> +                PMD_INIT_LOG(ERR, "Driver only supports %u RSS key
>>>> length",
>>>> +                        VIRTIO_NET_RSS_KEY_SIZE);
>>>> +                return -EINVAL;
>>>> +            }
>>>> +            memcpy(hw->rss_key, rss_conf->rss_key,
>>>> VIRTIO_NET_RSS_KEY_SIZE);
>>>> +        } else {
>>>> +            memcpy(hw->rss_key, rss_intel_key,
>>>> VIRTIO_NET_RSS_KEY_SIZE);
>>>> +        }
>>>
>>> Above if should work in the case of reconfigure as well when
>>> array is already allocated.
>>
>> I'm not sure, because rte_eth_dev_rss_hash_update() API does not update
>> eth_dev->data->dev_conf.rx_adv_conf.rss_conf, so we may lose the RSS key
>> the user would have updated using this API. What do you think?
> 
> I think, but I think it should be used correctly by the
> application. I.e. if application does reconfigure and
> provides same or new key, does not matter, it should be
> applied. Key could be a part of configure request and we
> should not ignore it in the case of reconfigure.
> 
> Yes, I agree that it is a bit tricky, but still right
> logic.

Ok. I'm applying your suggestion, which is to apply rss_conf whether or
not rss_key was initialized before.
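
In virtio_dev_rss_init(), that could look something like this (sketch
only, not necessarily the exact code of the next revision):

	if (!hw->rss_key) {
		hw->rss_key = rte_malloc_socket("rss_key",
				VIRTIO_NET_RSS_KEY_SIZE, 0,
				eth_dev->device->numa_node);
		if (!hw->rss_key) {
			PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
			return -ENOMEM;
		}
		/* Default key, overridden below if the application set one */
		memcpy(hw->rss_key, rss_intel_key, VIRTIO_NET_RSS_KEY_SIZE);
	}

	/* Always honour a key passed through dev_configure(), even on
	 * reconfigure.
	 */
	if (rss_conf->rss_key && rss_conf->rss_key_len) {
		if (rss_conf->rss_key_len != VIRTIO_NET_RSS_KEY_SIZE) {
			PMD_INIT_LOG(ERR, "Driver only supports %u RSS key length",
					VIRTIO_NET_RSS_KEY_SIZE);
			return -EINVAL;
		}
		memcpy(hw->rss_key, rss_conf->rss_key, VIRTIO_NET_RSS_KEY_SIZE);
	}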

>>>> +    }
>>>> +
>>>> +    if (!hw->rss_reta) {
>>>> +        /* Setup default RSS reta if not already setup by the user */
>>>> +        hw->rss_reta = rte_malloc_socket("rss_reta",
>>>> +                VIRTIO_NET_RSS_RETA_SIZE * sizeof(uint16_t), 0,
>>>> +                eth_dev->device->numa_node);
>>>> +        if (!hw->rss_reta) {
>>>> +            PMD_INIT_LOG(ERR, "Failed to allocate RSS reta");
>>>> +            return -1;
>>>> +        }
>>>> +        for (i = 0; i < VIRTIO_NET_RSS_RETA_SIZE; i++)
>>>> +            hw->rss_reta[i] = i % nb_rx_queues;
>>>
>>> How should it work in the case of reconfigure if a nubmer of Rx
>>> queue changes?
>>
>> Hmm, good catch! I did not think about this, and so did not test it.
>>
>> Not to lose user-provisionned reta in case of unrelated change, maybe I
>> should save the number of Rx queues when setting the reta (here and in
>> virtio_dev_rss_reta_update), and perform a re-initialization if it is
>> different?
> 
> Yes, it is much harder than RSS key case since the data are not
> provided in dev_conf explicitly. So, we should guess here what
> to do. The simple solution is to reinitialize if number of RxQs
> change. I'd go this way.

OK, this will be done in next revision.

>>> Also I'm wondering how it works...
>>> virtio_dev_rss_init() is called from eth_virtio_dev_init() as
>>> well when a number of Rx queues is zero. I guess the reason is
>>> VIRTIO_PMD_DEFAULT_GUEST_FEATURES, but it looks very fragile.
>>
>> Yes, we only add VIRTIO_NET_F_RSS to the supported features in
>> virtio_dev_configure(), if Rx MQ mode is RSS. So virtio_dev_rss_init()
>> will never be called from eth_virtio_dev_init().
>>
>> If I'm not mistaken, rte_eth_dev_configure() must be called it is stated
>> in the API documentation, so it will be negotiated if conditions are
>> met.
> 
> As far as I know we can configure ethdev with zero number of
> RxQs, but non-zero number of Tx queues. E.g. traffic generator
> usecase.
> 
> Division by zero is a guaranteed crash, so, I'd care
> about it explicitly in any case. Just do not try to rely
> on other checks faraway.

Sure, adding a check on nb_rx_queues before doing any RSS init.
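
i.e. something like this at the top of virtio_dev_rss_init() (sketch):

	if (nb_rx_queues == 0) {
		PMD_INIT_LOG(ERR, "RSS requested but no Rx queue configured");
		return -EINVAL;
	}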

Thanks,
Maxime
> Andrew.
>
  
Yuri Benditovich Oct. 27, 2021, 2:45 p.m. UTC | #7
On Wed, Oct 27, 2021 at 1:55 PM Maxime Coquelin <maxime.coquelin@redhat.com>
wrote:

> Hi,
>
> On 10/19/21 11:37, Andrew Rybchenko wrote:
> > Hi Maxime,
> >
> > On 10/19/21 12:22 PM, Maxime Coquelin wrote:
> >> Hi Andrew,
> >>
> >> On 10/19/21 09:30, Andrew Rybchenko wrote:
> >>> On 10/18/21 1:20 PM, Maxime Coquelin wrote:
> >>>> Provide the capability to update the hash key, hash types
> >>>> and RETA table on the fly (without needing to stop/start
> >>>> the device). However, the key length and the number of RETA
> >>>> entries are fixed to 40B and 128 entries respectively. This
> >>>> is done in order to simplify the design, but may be
> >>>> revisited later as the Virtio spec provides this
> >>>> flexibility.
> >>>>
> >>>> Note that only VIRTIO_NET_F_RSS support is implemented,
> >>>> VIRTIO_NET_F_HASH_REPORT, which would enable reporting the
> >>>> packet RSS hash calculated by the device into mbuf.rss, is
> >>>> not yet supported.
> >>>>
> >>>> Regarding the default RSS configuration, it has been
> >>>> chosen to use the default Intel ixgbe key as default key,
> >>>> and default RETA is a simple modulo between the hash and
> >>>> the number of Rx queues.
> >>>>
> >>>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> >
> > [snip]
> >
> >>>> +    rss.unclassified_queue = 0;
> >>>> +    memcpy(rss.indirection_table, hw->rss_reta,
> >>>> VIRTIO_NET_RSS_RETA_SIZE * sizeof(uint16_t));
> >>>> +    rss.max_tx_vq = nb_queues;
> >>>
> >>> Is it guaranteed that driver is configured with equal number
> >>> of Rx and Tx queues? Or is it not a problem otherwise?
> >>
> >> Virtio networking devices works with queue pairs.
> >
> > But it seems to me that I still can configure just 1 Rx queue
> > and many Tx queues. Basically just non equal.
> > The line is suspicious since I'd expect nb_queues to be
> > a number of Rx queues in the function, but we set max_tx_vq
> > count here.
>
> The Virtio spec says:
> "
> A driver sets max_tx_vq to inform a device how many transmit
> virtqueues it may use (transmitq1. . .transmitq max_tx_vq).
> "
>
> But looking at Qemu side, its value is interpreted as the number of
> queue pairs setup by the driver, same as is handled virtqueue_pairs of
> struct virtio_net_ctrl_mq when RSS is not supported.
>
> In this patch we are compatible with what is done in Qemu, and what is
> done for multiqueue when RSS is not enabled.
>
> I don't get why the spec talks about transmit queues, Yan & Yuri, any
> idea?
>
>
Indeed, the QEMU reference code uses max_tx_vq as the number of queue pairs,
the same way it uses the parameter of the _MQ command.
Mainly this is related to the vhost start flow, which assumes that there is
some number of ready vq pairs.
When the driver sets max_tx_vq, it guarantees that it does not use more than
max_tx_vq TX queues.
The actual RX queues that will be used can be taken from the indirection
table, which contains the indices of the RX queues.


> >>>> +static int
> >>>> +virtio_dev_rss_init(struct rte_eth_dev *eth_dev)
> >>>> +{
> >>>> +    struct virtio_hw *hw = eth_dev->data->dev_private;
> >>>> +    uint16_t nb_rx_queues = eth_dev->data->nb_rx_queues;
> >>>> +    struct rte_eth_rss_conf *rss_conf;
> >>>> +    int ret, i;
> >>>> +
> >>>> +    rss_conf = &eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
> >>>> +
> >>>> +    ret = virtio_dev_get_rss_config(hw, &hw->rss_hash_types);
> >>>> +    if (ret)
> >>>> +        return ret;
> >>>> +
> >>>> +    if (rss_conf->rss_hf) {
> >>>> +        /*  Ensure requested hash types are supported by the device
> */
> >>>> +        if (rss_conf->rss_hf &
> >>>> ~virtio_to_ethdev_rss_offloads(hw->rss_hash_types))
> >>>> +            return -EINVAL;
> >>>> +
> >>>> +        hw->rss_hash_types =
> >>>> ethdev_to_virtio_rss_offloads(rss_conf->rss_hf);
> >>>> +    }
> >>>> +
> >>>> +    if (!hw->rss_key) {
> >>>> +        /* Setup default RSS key if not already setup by the user */
> >>>> +        hw->rss_key = rte_malloc_socket("rss_key",
> >>>> +                VIRTIO_NET_RSS_KEY_SIZE, 0,
> >>>> +                eth_dev->device->numa_node);
> >>>> +        if (!hw->rss_key) {
> >>>> +            PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
> >>>> +            return -1;
> >>>> +        }
> >>>> +
> >>>> +        if (rss_conf->rss_key && rss_conf->rss_key_len) {
> >>>> +            if (rss_conf->rss_key_len != VIRTIO_NET_RSS_KEY_SIZE) {
> >>>> +                PMD_INIT_LOG(ERR, "Driver only supports %u RSS key
> >>>> length",
> >>>> +                        VIRTIO_NET_RSS_KEY_SIZE);
> >>>> +                return -EINVAL;
> >>>> +            }
> >>>> +            memcpy(hw->rss_key, rss_conf->rss_key,
> >>>> VIRTIO_NET_RSS_KEY_SIZE);
> >>>> +        } else {
> >>>> +            memcpy(hw->rss_key, rss_intel_key,
> >>>> VIRTIO_NET_RSS_KEY_SIZE);
> >>>> +        }
> >>>
> >>> Above if should work in the case of reconfigure as well when
> >>> array is already allocated.
> >>
> >> I'm not sure, because rte_eth_dev_rss_hash_update() API does not update
> >> eth_dev->data->dev_conf.rx_adv_conf.rss_conf, so we may lose the RSS key
> >> the user would have updated using this API. What do you think?
> >
> > I think, but I think it should be used correctly by the
> > application. I.e. if application does reconfigure and
> > provides same or new key, does not matter, it should be
> > applied. Key could be a part of configure request and we
> > should not ignore it in the case of reconfigure.
> >
> > Yes, I agree that it is a bit tricky, but still right
> > logic.
>
> Ok. I'm applying your suggestion, which is to apply rss_conf whatever
> rss_key was initialized or not before.
>
> >>>> +    }
> >>>> +
> >>>> +    if (!hw->rss_reta) {
> >>>> +        /* Setup default RSS reta if not already setup by the user */
> >>>> +        hw->rss_reta = rte_malloc_socket("rss_reta",
> >>>> +                VIRTIO_NET_RSS_RETA_SIZE * sizeof(uint16_t), 0,
> >>>> +                eth_dev->device->numa_node);
> >>>> +        if (!hw->rss_reta) {
> >>>> +            PMD_INIT_LOG(ERR, "Failed to allocate RSS reta");
> >>>> +            return -1;
> >>>> +        }
> >>>> +        for (i = 0; i < VIRTIO_NET_RSS_RETA_SIZE; i++)
> >>>> +            hw->rss_reta[i] = i % nb_rx_queues;
> >>>
> >>> How should it work in the case of reconfigure if a nubmer of Rx
> >>> queue changes?
> >>
> >> Hmm, good catch! I did not think about this, and so did not test it.
> >>
> >> Not to lose user-provisionned reta in case of unrelated change, maybe I
> >> should save the number of Rx queues when setting the reta (here and in
> >> virtio_dev_rss_reta_update), and perform a re-initialization if it is
> >> different?
> >
> > Yes, it is much harder than RSS key case since the data are not
> > provided in dev_conf explicitly. So, we should guess here what
> > to do. The simple solution is to reinitialize if number of RxQs
> > change. I'd go this way.
>
> OK, this will be done in next revision.
>
> >>> Also I'm wondering how it works...
> >>> virtio_dev_rss_init() is called from eth_virtio_dev_init() as
> >>> well when a number of Rx queues is zero. I guess the reason is
> >>> VIRTIO_PMD_DEFAULT_GUEST_FEATURES, but it looks very fragile.
> >>
> >> Yes, we only add VIRTIO_NET_F_RSS to the supported features in
> >> virtio_dev_configure(), if Rx MQ mode is RSS. So virtio_dev_rss_init()
> >> will never be called from eth_virtio_dev_init().
> >>
> >> If I'm not mistaken, rte_eth_dev_configure() must be called it is stated
> >> in the API documentation, so it will be negotiated if conditions are
> >> met.
> >
> > As far as I know we can configure ethdev with zero number of
> > RxQs, but non-zero number of Tx queues. E.g. traffic generator
> > usecase.
> >
> > Division by zero is a guaranteed crash, so, I'd care
> > about it explicitly in any case. Just do not try to rely
> > on other checks faraway.
>
> Sure, adding a check on nb_rx_queues before doing any RSS init.
>
> Thanks,
> Maxime
> > Andrew.
> >
>
>
  
Maxime Coquelin Oct. 27, 2021, 7:59 p.m. UTC | #8
Hi Yuri,

On 10/27/21 16:45, Yuri Benditovich wrote:
> 
> 
> On Wed, Oct 27, 2021 at 1:55 PM Maxime Coquelin 
> <maxime.coquelin@redhat.com> wrote:
> 
>     Hi,
> 
>     On 10/19/21 11:37, Andrew Rybchenko wrote:
>      > Hi Maxime,
>      >
>      > On 10/19/21 12:22 PM, Maxime Coquelin wrote:
>      >> Hi Andrew,
>      >>
>      >> On 10/19/21 09:30, Andrew Rybchenko wrote:
>      >>> On 10/18/21 1:20 PM, Maxime Coquelin wrote:
>      >>>> Provide the capability to update the hash key, hash types
>      >>>> and RETA table on the fly (without needing to stop/start
>      >>>> the device). However, the key length and the number of RETA
>      >>>> entries are fixed to 40B and 128 entries respectively. This
>      >>>> is done in order to simplify the design, but may be
>      >>>> revisited later as the Virtio spec provides this
>      >>>> flexibility.
>      >>>>
>      >>>> Note that only VIRTIO_NET_F_RSS support is implemented,
>      >>>> VIRTIO_NET_F_HASH_REPORT, which would enable reporting the
>      >>>> packet RSS hash calculated by the device into mbuf.rss, is
>      >>>> not yet supported.
>      >>>>
>      >>>> Regarding the default RSS configuration, it has been
>      >>>> chosen to use the default Intel ixgbe key as default key,
>      >>>> and default RETA is a simple modulo between the hash and
>      >>>> the number of Rx queues.
>      >>>>
>      >>>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>      >
>      > [snip]
>      >
>      >>>> +    rss.unclassified_queue = 0;
>      >>>> +    memcpy(rss.indirection_table, hw->rss_reta,
>      >>>> VIRTIO_NET_RSS_RETA_SIZE * sizeof(uint16_t));
>      >>>> +    rss.max_tx_vq = nb_queues;
>      >>>
>      >>> Is it guaranteed that driver is configured with equal number
>      >>> of Rx and Tx queues? Or is it not a problem otherwise?
>      >>
>      >> Virtio networking devices works with queue pairs.
>      >
>      > But it seems to me that I still can configure just 1 Rx queue
>      > and many Tx queues. Basically just non equal.
>      > The line is suspicious since I'd expect nb_queues to be
>      > a number of Rx queues in the function, but we set max_tx_vq
>      > count here.
> 
>     The Virtio spec says:
>     "
>     A driver sets max_tx_vq to inform a device how many transmit
>     virtqueues it may use (transmitq1. . .transmitq max_tx_vq).
>     "
> 
>     But looking at Qemu side, its value is interpreted as the number of
>     queue pairs setup by the driver, same as is handled virtqueue_pairs of
>     struct virtio_net_ctrl_mq when RSS is not supported.
> 
>     In this patch we are compatible with what is done in Qemu, and what is
>     done for multiqueue when RSS is not enabled.
> 
>     I don't get why the spec talks about transmit queues, Yan & Yuri, any
>     idea?
> 
> 
> Indeed QEMU reference code uses the max_tx_vq as a number of queue pairs 
> the same way it uses a parameter of _MQ command.
> Mainly this is related to vhost start flow which assumes that there is 
> some number of ready vq pairs.
> When the driver sets max_tx_vq it guarantees that it does not use more 
> than max_tx_vq TX queues.
> Actual RX queues that will be used can be taken from the indirection 
> table which contains indices of RX queues.

Thanks for the quick reply.
Then setting to MAX(nb_rx_queue, nb_tx_queue) is compliant with both the
spec and the Qemu implementation.

Maxime
  

Patch

diff --git a/doc/guides/nics/features/virtio.ini b/doc/guides/nics/features/virtio.ini
index 48f6f393b1..a5eab4932f 100644
--- a/doc/guides/nics/features/virtio.ini
+++ b/doc/guides/nics/features/virtio.ini
@@ -14,6 +14,9 @@  Promiscuous mode     = Y
 Allmulticast mode    = Y
 Unicast MAC filter   = Y
 Multicast MAC filter = Y
+RSS hash             = P
+RSS key update       = Y
+RSS reta update      = Y
 VLAN filter          = Y
 Basic stats          = Y
 Stats per queue      = Y
diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
index 82ce7399ce..98e0d012b7 100644
--- a/doc/guides/nics/virtio.rst
+++ b/doc/guides/nics/virtio.rst
@@ -73,6 +73,9 @@  In this release, the virtio PMD driver provides the basic functionality of packe
 
 *   Virtio supports using port IO to get PCI resource when UIO module is not available.
 
+*   Virtio supports RSS Rx mode with 40B configurable hash key length, 128
+    configurable RETA entries and configurable hash types.
+
 Prerequisites
 -------------
 
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index ec2a788789..6e2c7df4b9 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -194,6 +194,12 @@  New Features
   * Added tests to verify tunnel header verification in IPsec inbound.
   * Added tests to verify inner checksum.
 
+* **Added initial RSS support to Virtio PMD.**
+
+  Initial support for RSS receive mode has been added to the Virtio PMD,
+  with the capability for the application to configure the hash key, the
+  RETA and the hash types. Virtio hash reporting is yet to be added.
+
 
 Removed Items
 -------------
diff --git a/drivers/net/virtio/virtio.h b/drivers/net/virtio/virtio.h
index e78b2e429e..7118e5d24c 100644
--- a/drivers/net/virtio/virtio.h
+++ b/drivers/net/virtio/virtio.h
@@ -30,6 +30,7 @@ 
 #define VIRTIO_NET_F_GUEST_ANNOUNCE 21	/* Guest can announce device on the network */
 #define VIRTIO_NET_F_MQ		22	/* Device supports Receive Flow Steering */
 #define VIRTIO_NET_F_CTRL_MAC_ADDR 23	/* Set MAC address */
+#define VIRTIO_NET_F_RSS	60	/* RSS supported */
 
 /*
  * Do we get callbacks when the ring is completely used,
@@ -100,6 +101,29 @@ 
  */
 #define VIRTIO_MAX_INDIRECT ((int)(rte_mem_page_size() / 16))
 
+/*  Virtio RSS hash types */
+#define VIRTIO_NET_HASH_TYPE_IPV4	(1 << 0)
+#define VIRTIO_NET_HASH_TYPE_TCPV4	(1 << 1)
+#define VIRTIO_NET_HASH_TYPE_UDPV4	(1 << 2)
+#define VIRTIO_NET_HASH_TYPE_IPV6	(1 << 3)
+#define VIRTIO_NET_HASH_TYPE_TCPV6	(1 << 4)
+#define VIRTIO_NET_HASH_TYPE_UDPV6	(1 << 5)
+#define VIRTIO_NET_HASH_TYPE_IP_EX	(1 << 6)
+#define VIRTIO_NET_HASH_TYPE_TCP_EX	(1 << 7)
+#define VIRTIO_NET_HASH_TYPE_UDP_EX	(1 << 8)
+
+#define VIRTIO_NET_HASH_TYPE_MASK ( \
+	VIRTIO_NET_HASH_TYPE_IPV4 | \
+	VIRTIO_NET_HASH_TYPE_TCPV4 | \
+	VIRTIO_NET_HASH_TYPE_UDPV4 | \
+	VIRTIO_NET_HASH_TYPE_IPV6 | \
+	VIRTIO_NET_HASH_TYPE_TCPV6 | \
+	VIRTIO_NET_HASH_TYPE_UDPV6 | \
+	VIRTIO_NET_HASH_TYPE_IP_EX | \
+	VIRTIO_NET_HASH_TYPE_TCP_EX | \
+	VIRTIO_NET_HASH_TYPE_UDP_EX)
+
+
 /*
  * Maximum number of virtqueues per device.
  */
@@ -157,7 +181,9 @@  struct virtio_net_config {
 	 * Any other value stands for unknown.
 	 */
 	uint8_t duplex;
-
+	uint8_t rss_max_key_size;
+	uint16_t rss_max_indirection_table_length;
+	uint32_t supported_hash_types;
 } __rte_packed;
 
 struct virtio_hw {
@@ -190,6 +216,9 @@  struct virtio_hw {
 	rte_spinlock_t state_lock;
 	struct rte_mbuf **inject_pkts;
 	uint16_t max_queue_pairs;
+	uint32_t rss_hash_types;
+	uint16_t *rss_reta;
+	uint8_t *rss_key;
 	uint64_t req_guest_features;
 	struct virtnet_ctl *cvq;
 	bool use_va;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index aff791fbd0..a8e2bf3e1a 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -51,6 +51,16 @@  static int virtio_dev_info_get(struct rte_eth_dev *dev,
 static int virtio_dev_link_update(struct rte_eth_dev *dev,
 	int wait_to_complete);
 static int virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static int virtio_dev_rss_hash_update(struct rte_eth_dev *dev,
+		struct rte_eth_rss_conf *rss_conf);
+static int virtio_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+		struct rte_eth_rss_conf *rss_conf);
+static int virtio_dev_rss_reta_update(struct rte_eth_dev *dev,
+			 struct rte_eth_rss_reta_entry64 *reta_conf,
+			 uint16_t reta_size);
+static int virtio_dev_rss_reta_query(struct rte_eth_dev *dev,
+			 struct rte_eth_rss_reta_entry64 *reta_conf,
+			 uint16_t reta_size);
 
 static void virtio_set_hwaddr(struct virtio_hw *hw);
 static void virtio_get_hwaddr(struct virtio_hw *hw);
@@ -347,20 +357,51 @@  virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
 }
 
 static int
-virtio_set_multiple_queues(struct rte_eth_dev *dev, uint16_t nb_queues)
+virtio_set_multiple_queues_rss(struct rte_eth_dev *dev, uint16_t nb_queues)
 {
 	struct virtio_hw *hw = dev->data->dev_private;
 	struct virtio_pmd_ctrl ctrl;
-	int dlen[1];
+	struct virtio_net_ctrl_rss rss;
+	int dlen, ret;
+
+	rss.hash_types = hw->rss_hash_types & VIRTIO_NET_HASH_TYPE_MASK;
+	rss.indirection_table_mask = VIRTIO_NET_RSS_RETA_SIZE - 1;
+	rss.unclassified_queue = 0;
+	memcpy(rss.indirection_table, hw->rss_reta, VIRTIO_NET_RSS_RETA_SIZE * sizeof(uint16_t));
+	rss.max_tx_vq = nb_queues;
+	rss.hash_key_length = VIRTIO_NET_RSS_KEY_SIZE;
+	memcpy(rss.hash_key_data, hw->rss_key, VIRTIO_NET_RSS_KEY_SIZE);
+
+	ctrl.hdr.class = VIRTIO_NET_CTRL_MQ;
+	ctrl.hdr.cmd = VIRTIO_NET_CTRL_MQ_RSS_CONFIG;
+	memcpy(ctrl.data, &rss, sizeof(rss));
+
+	dlen = sizeof(rss);
+
+	ret = virtio_send_command(hw->cvq, &ctrl, &dlen, 1);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "RSS multiqueue configured but send command failed");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+virtio_set_multiple_queues_auto(struct rte_eth_dev *dev, uint16_t nb_queues)
+{
+	struct virtio_hw *hw = dev->data->dev_private;
+	struct virtio_pmd_ctrl ctrl;
+	int dlen;
 	int ret;
 
 	ctrl.hdr.class = VIRTIO_NET_CTRL_MQ;
 	ctrl.hdr.cmd = VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET;
 	memcpy(ctrl.data, &nb_queues, sizeof(uint16_t));
 
-	dlen[0] = sizeof(uint16_t);
+	dlen = sizeof(uint16_t);
 
-	ret = virtio_send_command(hw->cvq, &ctrl, dlen, 1);
+	ret = virtio_send_command(hw->cvq, &ctrl, &dlen, 1);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Multiqueue configured but send command "
 			  "failed, this is too late now...");
@@ -370,6 +411,17 @@  virtio_set_multiple_queues(struct rte_eth_dev *dev, uint16_t nb_queues)
 	return 0;
 }
 
+static int
+virtio_set_multiple_queues(struct rte_eth_dev *dev, uint16_t nb_queues)
+{
+	struct virtio_hw *hw = dev->data->dev_private;
+
+	if (virtio_with_feature(hw, VIRTIO_NET_F_RSS))
+		return virtio_set_multiple_queues_rss(dev, nb_queues);
+	else
+		return virtio_set_multiple_queues_auto(dev, nb_queues);
+}
+
 static uint16_t
 virtio_get_nr_vq(struct virtio_hw *hw)
 {
@@ -708,6 +760,16 @@  virtio_alloc_queues(struct rte_eth_dev *dev)
 
 static void virtio_queues_unbind_intr(struct rte_eth_dev *dev);
 
+static void
+virtio_free_rss(struct virtio_hw *hw)
+{
+	rte_free(hw->rss_key);
+	hw->rss_key = NULL;
+
+	rte_free(hw->rss_reta);
+	hw->rss_reta = NULL;
+}
+
 int
 virtio_dev_close(struct rte_eth_dev *dev)
 {
@@ -738,6 +800,7 @@  virtio_dev_close(struct rte_eth_dev *dev)
 	virtio_reset(hw);
 	virtio_dev_free_mbufs(dev);
 	virtio_free_queues(hw);
+	virtio_free_rss(hw);
 
 	return VIRTIO_OPS(hw)->dev_close(hw);
 }
@@ -976,6 +1039,10 @@  static const struct eth_dev_ops virtio_eth_dev_ops = {
 	.rx_queue_intr_enable    = virtio_dev_rx_queue_intr_enable,
 	.rx_queue_intr_disable   = virtio_dev_rx_queue_intr_disable,
 	.tx_queue_setup          = virtio_dev_tx_queue_setup,
+	.rss_hash_update         = virtio_dev_rss_hash_update,
+	.rss_hash_conf_get       = virtio_dev_rss_hash_conf_get,
+	.reta_update             = virtio_dev_rss_reta_update,
+	.reta_query              = virtio_dev_rss_reta_query,
 	/* collect stats per queue */
 	.queue_stats_mapping_set = virtio_dev_queue_stats_mapping_set,
 	.vlan_filter_set         = virtio_vlan_filter_set,
@@ -1719,6 +1786,291 @@  virtio_configure_intr(struct rte_eth_dev *dev)
 
 	return 0;
 }
+
+static uint64_t
+ethdev_to_virtio_rss_offloads(uint64_t ethdev_hash_types)
+{
+	uint64_t virtio_hash_types = 0;
+
+	if (ethdev_hash_types & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER))
+		virtio_hash_types |= VIRTIO_NET_HASH_TYPE_IPV4;
+
+	if (ethdev_hash_types & ETH_RSS_NONFRAG_IPV4_TCP)
+		virtio_hash_types |= VIRTIO_NET_HASH_TYPE_TCPV4;
+
+	if (ethdev_hash_types & ETH_RSS_NONFRAG_IPV4_UDP)
+		virtio_hash_types |= VIRTIO_NET_HASH_TYPE_UDPV4;
+
+	if (ethdev_hash_types & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER))
+		virtio_hash_types |= VIRTIO_NET_HASH_TYPE_IPV6;
+
+	if (ethdev_hash_types & ETH_RSS_NONFRAG_IPV6_TCP)
+		virtio_hash_types |= VIRTIO_NET_HASH_TYPE_TCPV6;
+
+	if (ethdev_hash_types & ETH_RSS_NONFRAG_IPV6_UDP)
+		virtio_hash_types |= VIRTIO_NET_HASH_TYPE_UDPV6;
+
+	if (ethdev_hash_types & ETH_RSS_IPV6_EX)
+		virtio_hash_types |= VIRTIO_NET_HASH_TYPE_IP_EX;
+
+	if (ethdev_hash_types & ETH_RSS_IPV6_TCP_EX)
+		virtio_hash_types |= VIRTIO_NET_HASH_TYPE_TCP_EX;
+
+	if (ethdev_hash_types & ETH_RSS_IPV6_UDP_EX)
+		virtio_hash_types |= VIRTIO_NET_HASH_TYPE_UDP_EX;
+
+	return virtio_hash_types;
+}
+
+static uint64_t
+virtio_to_ethdev_rss_offloads(uint64_t virtio_hash_types)
+{
+	uint64_t rss_offloads = 0;
+
+	if (virtio_hash_types & VIRTIO_NET_HASH_TYPE_IPV4)
+		rss_offloads |= ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER;
+
+	if (virtio_hash_types & VIRTIO_NET_HASH_TYPE_TCPV4)
+		rss_offloads |= ETH_RSS_NONFRAG_IPV4_TCP;
+
+	if (virtio_hash_types & VIRTIO_NET_HASH_TYPE_UDPV4)
+		rss_offloads |= ETH_RSS_NONFRAG_IPV4_UDP;
+
+	if (virtio_hash_types & VIRTIO_NET_HASH_TYPE_IPV6)
+		rss_offloads |= ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER;
+
+	if (virtio_hash_types & VIRTIO_NET_HASH_TYPE_TCPV6)
+		rss_offloads |= ETH_RSS_NONFRAG_IPV6_TCP;
+
+	if (virtio_hash_types & VIRTIO_NET_HASH_TYPE_UDPV6)
+		rss_offloads |= ETH_RSS_NONFRAG_IPV6_UDP;
+
+	if (virtio_hash_types & VIRTIO_NET_HASH_TYPE_IP_EX)
+		rss_offloads |= ETH_RSS_IPV6_EX;
+
+	if (virtio_hash_types & VIRTIO_NET_HASH_TYPE_TCP_EX)
+		rss_offloads |= ETH_RSS_IPV6_TCP_EX;
+
+	if (virtio_hash_types & VIRTIO_NET_HASH_TYPE_UDP_EX)
+		rss_offloads |= ETH_RSS_IPV6_UDP_EX;
+
+	return rss_offloads;
+}
+
+static int
+virtio_dev_get_rss_config(struct virtio_hw *hw, uint32_t *rss_hash_types)
+{
+	struct virtio_net_config local_config;
+	struct virtio_net_config *config = &local_config;
+
+	virtio_read_dev_config(hw,
+			offsetof(struct virtio_net_config, rss_max_key_size),
+			&config->rss_max_key_size,
+			sizeof(config->rss_max_key_size));
+	if (config->rss_max_key_size < VIRTIO_NET_RSS_KEY_SIZE) {
+		PMD_INIT_LOG(ERR, "Invalid device RSS max key size (%u)",
+				config->rss_max_key_size);
+		return -EINVAL;
+	}
+
+	virtio_read_dev_config(hw,
+			offsetof(struct virtio_net_config,
+				rss_max_indirection_table_length),
+			&config->rss_max_indirection_table_length,
+			sizeof(config->rss_max_indirection_table_length));
+	if (config->rss_max_indirection_table_length < VIRTIO_NET_RSS_RETA_SIZE) {
+		PMD_INIT_LOG(ERR, "Invalid device RSS max reta size (%u)",
+				config->rss_max_indirection_table_length);
+		return -EINVAL;
+	}
+
+	virtio_read_dev_config(hw,
+			offsetof(struct virtio_net_config, supported_hash_types),
+			&config->supported_hash_types,
+			sizeof(config->supported_hash_types));
+	if ((config->supported_hash_types & VIRTIO_NET_HASH_TYPE_MASK) == 0) {
+		PMD_INIT_LOG(ERR, "Invalid device RSS hash types (0x%x)",
+				config->supported_hash_types);
+		return -EINVAL;
+	}
+
+	*rss_hash_types = config->supported_hash_types & VIRTIO_NET_HASH_TYPE_MASK;
+
+	PMD_INIT_LOG(DEBUG, "Device RSS config:");
+	PMD_INIT_LOG(DEBUG, "\t-Max key size: %u", config->rss_max_key_size);
+	PMD_INIT_LOG(DEBUG, "\t-Max reta size: %u", config->rss_max_indirection_table_length);
+	PMD_INIT_LOG(DEBUG, "\t-Supported hash types: 0x%x", *rss_hash_types);
+
+	return 0;
+}
+
+static int
+virtio_dev_rss_hash_update(struct rte_eth_dev *dev,
+		struct rte_eth_rss_conf *rss_conf)
+{
+	struct virtio_hw *hw = dev->data->dev_private;
+	uint16_t nb_queues;
+
+	if (!virtio_with_feature(hw, VIRTIO_NET_F_RSS))
+		return -ENOTSUP;
+
+	if (rss_conf->rss_hf & ~virtio_to_ethdev_rss_offloads(VIRTIO_NET_HASH_TYPE_MASK))
+		return -EINVAL;
+
+	hw->rss_hash_types = ethdev_to_virtio_rss_offloads(rss_conf->rss_hf);
+
+	if (rss_conf->rss_key && rss_conf->rss_key_len) {
+		if (rss_conf->rss_key_len != VIRTIO_NET_RSS_KEY_SIZE) {
+			PMD_INIT_LOG(ERR, "Driver only supports %u RSS key length",
+					VIRTIO_NET_RSS_KEY_SIZE);
+			return -EINVAL;
+		}
+		memcpy(hw->rss_key, rss_conf->rss_key, VIRTIO_NET_RSS_KEY_SIZE);
+	}
+
+	nb_queues = RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues);
+	return virtio_set_multiple_queues_rss(dev, nb_queues);
+}
+
+static int
+virtio_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+		struct rte_eth_rss_conf *rss_conf)
+{
+	struct virtio_hw *hw = dev->data->dev_private;
+
+	if (!virtio_with_feature(hw, VIRTIO_NET_F_RSS))
+		return -ENOTSUP;
+
+	if (rss_conf->rss_key && rss_conf->rss_key_len >= VIRTIO_NET_RSS_KEY_SIZE)
+		memcpy(rss_conf->rss_key, hw->rss_key, VIRTIO_NET_RSS_KEY_SIZE);
+	rss_conf->rss_key_len = VIRTIO_NET_RSS_KEY_SIZE;
+	rss_conf->rss_hf = virtio_to_ethdev_rss_offloads(hw->rss_hash_types);
+
+	return 0;
+}
+
+static int virtio_dev_rss_reta_update(struct rte_eth_dev *dev,
+			 struct rte_eth_rss_reta_entry64 *reta_conf,
+			 uint16_t reta_size)
+{
+	struct virtio_hw *hw = dev->data->dev_private;
+	uint16_t nb_queues;
+	int idx, pos, i;
+
+	if (!virtio_with_feature(hw, VIRTIO_NET_F_RSS))
+		return -ENOTSUP;
+
+	if (reta_size != VIRTIO_NET_RSS_RETA_SIZE)
+		return -EINVAL;
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		pos = i % RTE_RETA_GROUP_SIZE;
+
+		if (((reta_conf[idx].mask >> pos) & 0x1) == 0)
+			continue;
+
+		hw->rss_reta[i] = reta_conf[idx].reta[pos];
+	}
+
+	nb_queues = RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues);
+	return virtio_set_multiple_queues_rss(dev, nb_queues);
+}
+
+static int virtio_dev_rss_reta_query(struct rte_eth_dev *dev,
+			 struct rte_eth_rss_reta_entry64 *reta_conf,
+			 uint16_t reta_size)
+{
+	struct virtio_hw *hw = dev->data->dev_private;
+	int idx, i;
+
+	if (!virtio_with_feature(hw, VIRTIO_NET_F_RSS))
+		return -ENOTSUP;
+
+	if (reta_size != VIRTIO_NET_RSS_RETA_SIZE)
+		return -EINVAL;
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		reta_conf[idx].reta[i % RTE_RETA_GROUP_SIZE] = hw->rss_reta[i];
+	}
+
+	return 0;
+}
+
+/*
+ * As default RSS hash key, it uses the default key of the
+ * Intel IXGBE devices. It can be updated by the application
+ * with any 40B key value.
+ */
+static uint8_t rss_intel_key[VIRTIO_NET_RSS_KEY_SIZE] = {
+	0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2,
+	0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0,
+	0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4,
+	0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C,
+	0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
+};
+
+static int
+virtio_dev_rss_init(struct rte_eth_dev *eth_dev)
+{
+	struct virtio_hw *hw = eth_dev->data->dev_private;
+	uint16_t nb_rx_queues = eth_dev->data->nb_rx_queues;
+	struct rte_eth_rss_conf *rss_conf;
+	int ret, i;
+
+	rss_conf = &eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
+
+	ret = virtio_dev_get_rss_config(hw, &hw->rss_hash_types);
+	if (ret)
+		return ret;
+
+	if (rss_conf->rss_hf) {
+		/*  Ensure requested hash types are supported by the device */
+		if (rss_conf->rss_hf & ~virtio_to_ethdev_rss_offloads(hw->rss_hash_types))
+			return -EINVAL;
+
+		hw->rss_hash_types = ethdev_to_virtio_rss_offloads(rss_conf->rss_hf);
+	}
+
+	if (!hw->rss_key) {
+		/* Setup default RSS key if not already setup by the user */
+		hw->rss_key = rte_malloc_socket("rss_key",
+				VIRTIO_NET_RSS_KEY_SIZE, 0,
+				eth_dev->device->numa_node);
+		if (!hw->rss_key) {
+			PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
+			return -1;
+		}
+
+		if (rss_conf->rss_key && rss_conf->rss_key_len) {
+			if (rss_conf->rss_key_len != VIRTIO_NET_RSS_KEY_SIZE) {
+				PMD_INIT_LOG(ERR, "Driver only supports %u RSS key length",
+						VIRTIO_NET_RSS_KEY_SIZE);
+				return -EINVAL;
+			}
+			memcpy(hw->rss_key, rss_conf->rss_key, VIRTIO_NET_RSS_KEY_SIZE);
+		} else {
+			memcpy(hw->rss_key, rss_intel_key, VIRTIO_NET_RSS_KEY_SIZE);
+		}
+	}
+
+	if (!hw->rss_reta) {
+		/* Setup default RSS reta if not already setup by the user */
+		hw->rss_reta = rte_malloc_socket("rss_reta",
+				VIRTIO_NET_RSS_RETA_SIZE * sizeof(uint16_t), 0,
+				eth_dev->device->numa_node);
+		if (!hw->rss_reta) {
+			PMD_INIT_LOG(ERR, "Failed to allocate RSS reta");
+			return -1;
+		}
+		for (i = 0; i < VIRTIO_NET_RSS_RETA_SIZE; i++)
+			hw->rss_reta[i] = i % nb_rx_queues;
+	}
+
+	return 0;
+}
+
 #define DUPLEX_UNKNOWN   0xff
 /* reset device and renegotiate features if needed */
 static int
@@ -1806,14 +2158,15 @@  virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 			config->status = 0;
 		}
 
-		if (virtio_with_feature(hw, VIRTIO_NET_F_MQ)) {
+		if (virtio_with_feature(hw, VIRTIO_NET_F_MQ) ||
+				virtio_with_feature(hw, VIRTIO_NET_F_RSS)) {
 			virtio_read_dev_config(hw,
 				offsetof(struct virtio_net_config, max_virtqueue_pairs),
 				&config->max_virtqueue_pairs,
 				sizeof(config->max_virtqueue_pairs));
 		} else {
 			PMD_INIT_LOG(DEBUG,
-				     "VIRTIO_NET_F_MQ is not supported");
+				     "Neither VIRTIO_NET_F_MQ nor VIRTIO_NET_F_RSS are supported");
 			config->max_virtqueue_pairs = 1;
 		}
 
@@ -1845,6 +2198,11 @@  virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 				VLAN_TAG_LEN - hw->vtnet_hdr_size;
 		}
 
+		hw->rss_hash_types = 0;
+		if (virtio_with_feature(hw, VIRTIO_NET_F_RSS))
+			if (virtio_dev_rss_init(eth_dev))
+				return -1;
+
 		PMD_INIT_LOG(DEBUG, "config->max_virtqueue_pairs=%d",
 				config->max_virtqueue_pairs);
 		PMD_INIT_LOG(DEBUG, "config->status=%d", config->status);
@@ -2087,7 +2445,7 @@  virtio_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "configure");
 	req_features = VIRTIO_PMD_DEFAULT_GUEST_FEATURES;
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE) {
+	if (rxmode->mq_mode != ETH_MQ_RX_NONE && rxmode->mq_mode != ETH_MQ_RX_RSS) {
 		PMD_DRV_LOG(ERR,
 			"Unsupported Rx multi queue mode %d",
 			rxmode->mq_mode);
@@ -2107,6 +2465,9 @@  virtio_dev_configure(struct rte_eth_dev *dev)
 			return ret;
 	}
 
+	if (rxmode->mq_mode == ETH_MQ_RX_RSS)
+		req_features |= (1ULL << VIRTIO_NET_F_RSS);
+
 	if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
 	    (rxmode->max_rx_pkt_len > hw->max_mtu + ether_hdr_len))
 		req_features &= ~(1ULL << VIRTIO_NET_F_MTU);
@@ -2141,6 +2502,12 @@  virtio_dev_configure(struct rte_eth_dev *dev)
 			return ret;
 	}
 
+	if ((rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG) &&
+			!virtio_with_feature(hw, VIRTIO_NET_F_RSS)) {
+		PMD_DRV_LOG(ERR, "RSS support requested but not supported by the device");
+		return -ENOTSUP;
+	}
+
 	if ((rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
 			    DEV_RX_OFFLOAD_TCP_CKSUM)) &&
 		!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_CSUM)) {
@@ -2538,6 +2905,7 @@  static int
 virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 {
 	uint64_t tso_mask, host_features;
+	uint32_t rss_hash_types = 0;
 	struct virtio_hw *hw = dev->data->dev_private;
 	dev_info->speed_capa = virtio_dev_speed_capa_get(hw->speed);
 
@@ -2578,6 +2946,18 @@  virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		(1ULL << VIRTIO_NET_F_HOST_TSO6);
 	if ((host_features & tso_mask) == tso_mask)
 		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+	if (host_features & (1ULL << VIRTIO_NET_F_RSS)) {
+		virtio_dev_get_rss_config(hw, &rss_hash_types);
+		dev_info->hash_key_size = VIRTIO_NET_RSS_KEY_SIZE;
+		dev_info->reta_size = VIRTIO_NET_RSS_RETA_SIZE;
+		dev_info->flow_type_rss_offloads =
+			virtio_to_ethdev_rss_offloads(rss_hash_types);
+	} else {
+		dev_info->hash_key_size = 0;
+		dev_info->reta_size = 0;
+		dev_info->flow_type_rss_offloads = 0;
+	}
+
 
 	if (host_features & (1ULL << VIRTIO_F_RING_PACKED)) {
 		/*
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index 40be484218..c08f382791 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -45,7 +45,8 @@ 
 	 1u << VIRTIO_NET_F_GUEST_TSO6     |	\
 	 1u << VIRTIO_NET_F_CSUM           |	\
 	 1u << VIRTIO_NET_F_HOST_TSO4      |	\
-	 1u << VIRTIO_NET_F_HOST_TSO6)
+	 1u << VIRTIO_NET_F_HOST_TSO6      |	\
+	 1ULL << VIRTIO_NET_F_RSS)
 
 extern const struct eth_dev_ops virtio_user_secondary_eth_dev_ops;
 
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 5baac221f7..3268bcec70 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -201,6 +201,28 @@  struct virtio_net_ctrl_mac {
 #define VIRTIO_NET_CTRL_VLAN_ADD 0
 #define VIRTIO_NET_CTRL_VLAN_DEL 1
 
+/**
+ * RSS control
+ *
+ * The RSS feature configuration message is sent by the driver when
+ * VIRTIO_NET_F_RSS has been negotiated. It provides the device with
+ * hash types to use, hash key and indirection table. In this
+ * implementation, the driver only supports fixed key length (40B)
+ * and indirection table size (128 entries).
+ */
+#define VIRTIO_NET_RSS_RETA_SIZE 128
+#define VIRTIO_NET_RSS_KEY_SIZE 40
+
+struct virtio_net_ctrl_rss {
+	uint32_t hash_types;
+	uint16_t indirection_table_mask;
+	uint16_t unclassified_queue;
+	uint16_t indirection_table[VIRTIO_NET_RSS_RETA_SIZE];
+	uint16_t max_tx_vq;
+	uint8_t hash_key_length;
+	uint8_t hash_key_data[VIRTIO_NET_RSS_KEY_SIZE];
+};
+
 /*
  * Control link announce acknowledgement
  *
@@ -292,7 +314,10 @@  struct virtqueue {
 
 /* If multiqueue is provided by host, then we suppport it. */
 #define VIRTIO_NET_CTRL_MQ   4
+
 #define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET        0
+#define VIRTIO_NET_CTRL_MQ_RSS_CONFIG          1
+
 #define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN        1
 #define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX        0x8000