net/tap: free mempool when closing
Checks
Commit Message
From: Yunjian Wang <wangyunjian@huawei.com>
When setting up Tx queues, a mempool is created for the 'gso_ctx'.
This mempool is not freed when the tap device is closed. If the tap
device is freed and then created again with a different name, a new
mempool is created while the old one is leaked. This may eventually
cause an OOM.
Fixes: 050316a88313 ("net/tap: support TSO (TCP Segment Offload)")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
---
drivers/net/tap/rte_eth_tap.c | 43 +++++++++++++++++++++--------------
drivers/net/tap/rte_eth_tap.h | 1 +
2 files changed, 27 insertions(+), 17 deletions(-)
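The ownership change the patch makes can be sketched in plain C. This is an illustrative model only: `struct pool`, `dev_setup_tx`, and `dev_close` are stand-ins, not DPDK APIs. Before the patch, the driver found its GSO mempool only via a lookup by name (`mp_<device>`), so a device re-created under a different name never found, and never freed, the old pool; the fix keeps the pointer in the per-device state so close can free it unconditionally.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative model of the fix, not DPDK code. */

struct pool { char name[64]; };

struct dev_state {
	struct pool *gso_ctx_mp;	/* owned, mirrors pmd->gso_ctx_mp */
};

static struct pool *
pool_create(const char *dev_name)
{
	struct pool *p = calloc(1, sizeof(*p));

	if (p != NULL)
		snprintf(p->name, sizeof(p->name), "mp_%s", dev_name);
	return p;
}

static void
dev_setup_tx(struct dev_state *st, const char *dev_name)
{
	if (st->gso_ctx_mp == NULL)	/* create once per device */
		st->gso_ctx_mp = pool_create(dev_name);
}

static void
dev_close(struct dev_state *st)
{
	free(st->gso_ctx_mp);		/* the fix: release on close */
	st->gso_ctx_mp = NULL;
}
```

Storing the pointer also lets creation be idempotent per device, which is why the diff drops the `rte_mempool_lookup()` by name entirely.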
Comments
29/07/2020 13:35, wangyunjian:
> From: Yunjian Wang <wangyunjian@huawei.com>
>
> When setup tx queues, we will create a mempool for the 'gso_ctx'.
> The mempool is not freed when closing tap device. If free the tap
> device and create it with different name, it will create a new
> mempool. This maybe cause an OOM.
While at it, please look at implementing RTE_ETH_DEV_CLOSE_REMOVE
behaviour in tap. Thanks
On 7/29/2020 12:35 PM, wangyunjian wrote:
> From: Yunjian Wang <wangyunjian@huawei.com>
>
> When setup tx queues, we will create a mempool for the 'gso_ctx'.
> The mempool is not freed when closing tap device. If free the tap
> device and create it with different name, it will create a new
> mempool. This maybe cause an OOM.
>
> Fixes: 050316a88313 ("net/tap: support TSO (TCP Segment Offload)")
> Cc: stable@dpdk.org
>
> Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
<...>
> @@ -1317,26 +1320,31 @@ tap_gso_ctx_setup(struct rte_gso_ctx *gso_ctx, struct rte_eth_dev *dev)
> {
> uint32_t gso_types;
> char pool_name[64];
> -
> - /*
> - * Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE bytes
> - * size per mbuf use this pool for both direct and indirect mbufs
> - */
> -
> - struct rte_mempool *mp; /* Mempool for GSO packets */
> + struct pmd_internals *pmd = dev->data->dev_private;
> + int ret;
>
> /* initialize GSO context */
> gso_types = DEV_TX_OFFLOAD_TCP_TSO;
> - snprintf(pool_name, sizeof(pool_name), "mp_%s", dev->device->name);
> - mp = rte_mempool_lookup((const char *)pool_name);
> - if (!mp) {
> - mp = rte_pktmbuf_pool_create(pool_name, TAP_GSO_MBUFS_NUM,
> - TAP_GSO_MBUF_CACHE_SIZE, 0,
> + if (!pmd->gso_ctx_mp) {
> + /*
> + * Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE
> + * bytes size per mbuf use this pool for both direct and
> + * indirect mbufs
> + */
> + ret = snprintf(pool_name, sizeof(pool_name), "mp_%s",
> + dev->device->name);
> + if (ret < 0 || ret >= (int)sizeof(pool_name)) {
> + TAP_LOG(ERR,
> + "%s: failed to create mbuf pool "
> + "name for device %s\n",
> + pmd->name, dev->device->name);
Overall looks good. The only issue is that the error above doesn't say why it
failed; informing the user that the device name is too long may help them
overcome the error.
> -----Original Message-----
> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> Sent: Thursday, August 6, 2020 12:36 AM
> To: wangyunjian <wangyunjian@huawei.com>; dev@dpdk.org
> Cc: keith.wiles@intel.com; ophirmu@mellanox.com; Lilijun (Jerry)
> <jerry.lilijun@huawei.com>; xudingke <xudingke@huawei.com>;
> stable@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] net/tap: free mempool when closing
>
> On 7/29/2020 12:35 PM, wangyunjian wrote:
> > From: Yunjian Wang <wangyunjian@huawei.com>
> >
> > When setup tx queues, we will create a mempool for the 'gso_ctx'.
> > The mempool is not freed when closing tap device. If free the tap
> > device and create it with different name, it will create a new
> > mempool. This maybe cause an OOM.
> >
> > Fixes: 050316a88313 ("net/tap: support TSO (TCP Segment Offload)")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
>
> <...>
>
> > <...>
>
> Overall looks good. Only above error doesn't say why it failed, informing the
> user that device name is too long may help her to overcome the error.
I found that the return value of snprintf was not checked
when modifying the code, so I fixed it.
I think it may fail because the maximum device name length is
RTE_DEV_NAME_MAX_LEN (64).
Do I need to split this into two patches?
Thanks,
Yunjian
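The truncation check under discussion can be demonstrated in isolation. `make_pool_name` below is a hypothetical helper mirroring the driver's pattern, with a fixed buffer like `pool_name[64]`:

```c
#include <stdio.h>

/* Hypothetical helper mirroring the driver's pattern: build
 * "mp_<device>" into a fixed-size buffer.  snprintf() returns the
 * length the string *would* have had, so ret >= buflen means the
 * output was truncated, and ret < 0 means an encoding error. */
static int
make_pool_name(char *buf, size_t buflen, const char *dev_name)
{
	int ret = snprintf(buf, buflen, "mp_%s", dev_name);

	if (ret < 0 || (size_t)ret >= buflen)
		return -1;	/* device name too long for the buffer */
	return 0;
}
```

With a 64-byte buffer and the 3-byte `mp_` prefix, any device name of 61 or more characters trips the check, which is why a name near RTE_DEV_NAME_MAX_LEN (64) can fail.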
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Wednesday, August 5, 2020 9:48 PM
> To: wangyunjian <wangyunjian@huawei.com>
> Cc: dev@dpdk.org; stable@dpdk.org; keith.wiles@intel.com;
> ophirmu@mellanox.com; Lilijun (Jerry) <jerry.lilijun@huawei.com>; xudingke
> <xudingke@huawei.com>; wangyunjian <wangyunjian@huawei.com>;
> stable@dpdk.org
> Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH] net/tap: free mempool when
> closing
>
> 29/07/2020 13:35, wangyunjian:
> > From: Yunjian Wang <wangyunjian@huawei.com>
> >
> > When setup tx queues, we will create a mempool for the 'gso_ctx'.
> > The mempool is not freed when closing tap device. If free the tap
> > device and create it with different name, it will create a new
> > mempool. This maybe cause an OOM.
>
> While at it, please look at implementing RTE_ETH_DEV_CLOSE_REMOVE
> behaviour in tap. Thanks
>
I read the code for the tap device. Currently, the tap PMD doesn't
use the RTE_ETH_DEV_CLOSE_REMOVE flag.
Thanks,
Yunjian
On 8/6/2020 1:45 PM, wangyunjian wrote:
>
>
>> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
>>
>> On 7/29/2020 12:35 PM, wangyunjian wrote:
>>> From: Yunjian Wang <wangyunjian@huawei.com>
>>>
>>> When setup tx queues, we will create a mempool for the 'gso_ctx'.
>>> The mempool is not freed when closing tap device. If free the tap
>>> device and create it with different name, it will create a new
>>> mempool. This maybe cause an OOM.
>>>
>>> Fixes: 050316a88313 ("net/tap: support TSO (TCP Segment Offload)")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
>>
>> <...>
>>
>>> <...>
>>
>> Overall looks good. Only above error doesn't say why it failed, informing the
>> user that device name is too long may help her to overcome the error.
>
> I found that the return value of functions snprintf was not checked
> when modifying the code, so fixed it.
> I think it maybe fail, because the max device name length is
> RTE_DEV_NAME_MAX_LEN(64).
+1 to the check.
My comment was on the log message: it says "failed to create mbuf pool", but
it doesn't say that it failed because of a long device name.
If the user knows the reason for the failure, they can prevent it by providing
a shorter device name.
My suggestion is to update the error log message to include the reason for the
failure.
>
> Do I need to split into two patches?
I think it is OK to have the change in this patch.
>
> Thanks,
> Yunjian
>
06/08/2020 14:47, wangyunjian:
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > 29/07/2020 13:35, wangyunjian:
> > > From: Yunjian Wang <wangyunjian@huawei.com>
> > >
> > > When setup tx queues, we will create a mempool for the 'gso_ctx'.
> > > The mempool is not freed when closing tap device. If free the tap
> > > device and create it with different name, it will create a new
> > > mempool. This maybe cause an OOM.
> >
> > While at it, please look at implementing RTE_ETH_DEV_CLOSE_REMOVE
> > behaviour in tap. Thanks
> >
>
> I read the codes about tap device. Currently, the tap pmd doesn't
> use RTE_ETH_DEV_CLOSE_REMOVE flag.
I know. That's why I suggest switching to RTE_ETH_DEV_CLOSE_REMOVE.
Please see this deprecation notice:
http://git.dpdk.org/dpdk/commit/?id=7efbaa7b4e423
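For context, opting a PMD into this behaviour is a one-line flag at probe time plus a close() that releases everything. A sketch, assuming the pre-20.11 ethdev API referenced in the notice:

```c
/* At device creation time (sketch, pre-20.11 ethdev API): */
dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;

/* With the flag set, tap_dev_close() must release every private
 * resource (queues, fds, the gso_ctx mempool), and the ethdev layer
 * then releases the port itself after close. */
```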
> -----Original Message-----
> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> Sent: Thursday, August 6, 2020 9:04 PM
> To: wangyunjian <wangyunjian@huawei.com>; dev@dpdk.org
> Cc: keith.wiles@intel.com; ophirmu@mellanox.com; Lilijun (Jerry)
> <jerry.lilijun@huawei.com>; xudingke <xudingke@huawei.com>;
> stable@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] net/tap: free mempool when closing
>
> On 8/6/2020 1:45 PM, wangyunjian wrote:
> >
> >
> >> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> >>
> >> On 7/29/2020 12:35 PM, wangyunjian wrote:
> >>> From: Yunjian Wang <wangyunjian@huawei.com>
> >>>
> >>> When setup tx queues, we will create a mempool for the 'gso_ctx'.
> >>> The mempool is not freed when closing tap device. If free the tap
> >>> device and create it with different name, it will create a new
> >>> mempool. This maybe cause an OOM.
> >>>
> >>> Fixes: 050316a88313 ("net/tap: support TSO (TCP Segment Offload)")
> >>> Cc: stable@dpdk.org
> >>>
> >>> Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
> >>
> >> <...>
> >>
> >>> <...>
> >>
> >> Overall looks good. Only above error doesn't say why it failed,
> >> informing the user that device name is too long may help her to overcome
> the error.
> >
> > I found that the return value of functions snprintf was not checked
> > when modifying the code, so fixed it.
> > I think it maybe fail, because the max device name length is
> > RTE_DEV_NAME_MAX_LEN(64).
>
> +1 to the check.
> My comment was on the log message, which says "failed to create mbuf pool",
> but it doesn't say it is failed because of long device name.
> If user knows the reason of the failure, can prevent it by providing shorter
> device name.
> My suggestion is update the error log message to have the reason of failure.
Thanks for your suggestion, I will include it in the next version.
>
> >
> > Do I need to split into two patches?
>
> I think OK to have the change in this patch.
OK
>
> >
> > Thanks,
> > Yunjian
> >
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Thursday, August 6, 2020 9:20 PM
> To: wangyunjian <wangyunjian@huawei.com>
> Cc: dev@dpdk.org; stable@dpdk.org; keith.wiles@intel.com;
> ophirmu@mellanox.com; Lilijun (Jerry) <jerry.lilijun@huawei.com>; xudingke
> <xudingke@huawei.com>; stable@dpdk.org
> Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH] net/tap: free mempool when
> closing
>
> 06/08/2020 14:47, wangyunjian:
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > 29/07/2020 13:35, wangyunjian:
> > > > From: Yunjian Wang <wangyunjian@huawei.com>
> > > >
> > > > When setup tx queues, we will create a mempool for the 'gso_ctx'.
> > > > The mempool is not freed when closing tap device. If free the tap
> > > > device and create it with different name, it will create a new
> > > > mempool. This maybe cause an OOM.
> > >
> > > While at it, please look at implementing RTE_ETH_DEV_CLOSE_REMOVE
> > > behaviour in tap. Thanks
> > >
> >
> > I read the codes about tap device. Currently, the tap pmd doesn't use
> > RTE_ETH_DEV_CLOSE_REMOVE flag.
>
> I know. That's why I suggest to switch to RTE_ETH_DEV_CLOSE_REMOVE.
> Please see this deprecation notice:
> http://git.dpdk.org/dpdk/commit/?id=7efbaa7b4e423
OK, I have sent a patch adding this feature for the tap device.
https://patchwork.dpdk.org/patch/76137/
Thanks,
Yunjian
>
28/08/2020 14:51, wangyunjian:
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > 06/08/2020 14:47, wangyunjian:
> > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > > 29/07/2020 13:35, wangyunjian:
> > > > > From: Yunjian Wang <wangyunjian@huawei.com>
> > > > >
> > > > > When setup tx queues, we will create a mempool for the 'gso_ctx'.
> > > > > The mempool is not freed when closing tap device. If free the tap
> > > > > device and create it with different name, it will create a new
> > > > > mempool. This maybe cause an OOM.
> > > >
> > > > While at it, please look at implementing RTE_ETH_DEV_CLOSE_REMOVE
> > > > behaviour in tap. Thanks
> > > >
> > >
> > > I read the codes about tap device. Currently, the tap pmd doesn't use
> > > RTE_ETH_DEV_CLOSE_REMOVE flag.
> >
> > I know. That's why I suggest to switch to RTE_ETH_DEV_CLOSE_REMOVE.
> > Please see this deprecation notice:
> > http://git.dpdk.org/dpdk/commit/?id=7efbaa7b4e423
>
> OK, I have sent a patch to add this feature for tap device.
>
> https://patchwork.dpdk.org/patch/76137/
Thanks a lot
@@ -1070,6 +1070,9 @@ tap_dev_close(struct rte_eth_dev *dev)
&internals->remote_initial_flags);
}
+ rte_mempool_free(internals->gso_ctx_mp);
+ internals->gso_ctx_mp = NULL;
+
if (internals->ka_fd != -1) {
close(internals->ka_fd);
internals->ka_fd = -1;
@@ -1317,26 +1320,31 @@ tap_gso_ctx_setup(struct rte_gso_ctx *gso_ctx, struct rte_eth_dev *dev)
{
uint32_t gso_types;
char pool_name[64];
-
- /*
- * Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE bytes
- * size per mbuf use this pool for both direct and indirect mbufs
- */
-
- struct rte_mempool *mp; /* Mempool for GSO packets */
+ struct pmd_internals *pmd = dev->data->dev_private;
+ int ret;
/* initialize GSO context */
gso_types = DEV_TX_OFFLOAD_TCP_TSO;
- snprintf(pool_name, sizeof(pool_name), "mp_%s", dev->device->name);
- mp = rte_mempool_lookup((const char *)pool_name);
- if (!mp) {
- mp = rte_pktmbuf_pool_create(pool_name, TAP_GSO_MBUFS_NUM,
- TAP_GSO_MBUF_CACHE_SIZE, 0,
+ if (!pmd->gso_ctx_mp) {
+ /*
+ * Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE
+ * bytes size per mbuf use this pool for both direct and
+ * indirect mbufs
+ */
+ ret = snprintf(pool_name, sizeof(pool_name), "mp_%s",
+ dev->device->name);
+ if (ret < 0 || ret >= (int)sizeof(pool_name)) {
+ TAP_LOG(ERR,
+ "%s: failed to create mbuf pool "
+ "name for device %s\n",
+ pmd->name, dev->device->name);
+ return -ENAMETOOLONG;
+ }
+ pmd->gso_ctx_mp = rte_pktmbuf_pool_create(pool_name,
+ TAP_GSO_MBUFS_NUM, TAP_GSO_MBUF_CACHE_SIZE, 0,
RTE_PKTMBUF_HEADROOM + TAP_GSO_MBUF_SEG_SIZE,
SOCKET_ID_ANY);
- if (!mp) {
- struct pmd_internals *pmd = dev->data->dev_private;
-
+ if (!pmd->gso_ctx_mp) {
TAP_LOG(ERR,
"%s: failed to create mbuf pool for device %s\n",
pmd->name, dev->device->name);
@@ -1344,8 +1352,8 @@ tap_gso_ctx_setup(struct rte_gso_ctx *gso_ctx, struct rte_eth_dev *dev)
}
}
- gso_ctx->direct_pool = mp;
- gso_ctx->indirect_pool = mp;
+ gso_ctx->direct_pool = pmd->gso_ctx_mp;
+ gso_ctx->indirect_pool = pmd->gso_ctx_mp;
gso_ctx->gso_types = gso_types;
gso_ctx->gso_size = 0; /* gso_size is set in tx_burst() per packet */
gso_ctx->flag = 0;
@@ -1842,6 +1850,7 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
pmd->type = type;
pmd->ka_fd = -1;
pmd->nlsk_fd = -1;
+ pmd->gso_ctx_mp = NULL;
pmd->ioctl_sock = socket(AF_INET, SOCK_DGRAM, 0);
if (pmd->ioctl_sock == -1) {
@@ -91,6 +91,7 @@ struct pmd_internals {
struct tx_queue txq[RTE_PMD_TAP_MAX_QUEUES]; /* List of TX queues */
struct rte_intr_handle intr_handle; /* LSC interrupt handle. */
int ka_fd; /* keep-alive file descriptor */
+ struct rte_mempool *gso_ctx_mp; /* Mempool for GSO packets */
};
struct pmd_process_private {