From patchwork Sun Nov 5 07:41:14 2017
X-Patchwork-Submitter: Ilya Matveychikov
X-Patchwork-Id: 31175
From: Ilya Matveychikov
To: dev@dpdk.org
Date: Sun, 5 Nov 2017 11:41:14 +0400
Subject: [dpdk-dev] net/pcap: remove single interface constraint
List-Id: DPDK patches and discussions

Hello folks,

This patch removes the single-interface constraint from the libpcap-based
PMD. It allows a PCAP vdev to consist of more than one interface:

  # testpmd --vdev net_pcap0,iface=vethA,iface=vethB,iface=vethC

and so on. I found the issue while implementing RSS emulation on top of the
PCAP PMD: for that task I had to create a multi-queue PCAP device backed by
a number of veth devices, which in turn were combined into a bonding device.

---
 drivers/net/pcap/rte_eth_pcap.c | 77 +++++++++++++++++++++++------------------
 1 file changed, 43 insertions(+), 34 deletions(-)

--
2.7.4

diff --git a/drivers/net/pcap/rte_eth_pcap.c b/drivers/net/pcap/rte_eth_pcap.c
index defb3b4..ae03c3b 100644
--- a/drivers/net/pcap/rte_eth_pcap.c
+++ b/drivers/net/pcap/rte_eth_pcap.c
@@ -94,7 +94,6 @@ struct pmd_internals {
 	struct pcap_rx_queue rx_queue[RTE_PMD_PCAP_MAX_QUEUES];
 	struct pcap_tx_queue tx_queue[RTE_PMD_PCAP_MAX_QUEUES];
 	int if_index;
-	int single_iface;
 };
 
 struct pmd_devargs {
@@ -441,15 +440,19 @@ eth_dev_start(struct rte_eth_dev *dev)
 	struct pcap_rx_queue *rx;
 
 	/* Special iface case. Single pcap is open and shared between tx/rx. */
-	if (internals->single_iface) {
-		tx = &internals->tx_queue[0];
-		rx = &internals->rx_queue[0];
-
-		if (!tx->pcap && strcmp(tx->type, ETH_PCAP_IFACE_ARG) == 0) {
-			if (open_single_iface(tx->name, &tx->pcap) < 0)
-				return -1;
-			rx->pcap = tx->pcap;
+	if (internals->tx_queue[0].pcap == internals->rx_queue[0].pcap) {
+		RTE_ASSERT(dev->data->nb_tx_queues == dev->data->nb_rx_queues);
+		for (i = 0; i < dev->data->nb_tx_queues; i++) {
+			tx = &internals->tx_queue[i];
+			rx = &internals->rx_queue[i];
+
+			if (!tx->pcap && strcmp(tx->type, ETH_PCAP_IFACE_ARG) == 0) {
+				if (open_single_iface(tx->name, &tx->pcap) < 0)
+					return -1;
+				rx->pcap = tx->pcap;
+			}
 		}
+
 		goto status_up;
 	}
@@ -504,12 +507,15 @@ eth_dev_stop(struct rte_eth_dev *dev)
 	struct pcap_rx_queue *rx;
 
 	/* Special iface case. Single pcap is open and shared between tx/rx. */
-	if (internals->single_iface) {
-		tx = &internals->tx_queue[0];
-		rx = &internals->rx_queue[0];
-		pcap_close(tx->pcap);
-		tx->pcap = NULL;
-		rx->pcap = NULL;
+	if (internals->tx_queue[0].pcap == internals->rx_queue[0].pcap) {
+		RTE_ASSERT(dev->data->nb_tx_queues == dev->data->nb_rx_queues);
+		for (i = 0; i < dev->data->nb_tx_queues; i++) {
+			tx = &internals->tx_queue[i];
+			rx = &internals->rx_queue[i];
+			pcap_close(tx->pcap);
+			tx->pcap = NULL;
+			rx->pcap = NULL;
+		}
 		goto status_down;
 	}
@@ -730,6 +736,7 @@ open_tx_pcap(const char *key, const char *value, void *extra_args)
 static inline int
 open_rx_tx_iface(const char *key, const char *value, void *extra_args)
 {
+	unsigned int i;
 	const char *iface = value;
 	struct pmd_devargs *tx = extra_args;
 	pcap_t *pcap = NULL;
@@ -737,9 +744,14 @@ open_rx_tx_iface(const char *key, const char *value, void *extra_args)
 	if (open_single_iface(iface, &pcap) < 0)
 		return -1;
 
-	tx->queue[0].pcap = pcap;
-	tx->queue[0].name = iface;
-	tx->queue[0].type = key;
+	for (i = 0; i < tx->num_of_queue; i++) {
+		if (tx->queue[i].pcap == NULL) {
+			tx->queue[i].pcap = pcap;
+			tx->queue[i].name = iface;
+			tx->queue[i].type = key;
+			break;
+		}
+	}
 
 	return 0;
 }
@@ -901,8 +913,7 @@ static int
 eth_from_pcaps(struct rte_vdev_device *vdev,
 		struct pmd_devargs *rx_queues, const unsigned int nb_rx_queues,
 		struct pmd_devargs *tx_queues, const unsigned int nb_tx_queues,
-		struct rte_kvargs *kvlist, int single_iface,
-		unsigned int using_dumpers)
+		struct rte_kvargs *kvlist, unsigned int using_dumpers)
 {
 	struct pmd_internals *internals = NULL;
 	struct rte_eth_dev *eth_dev = NULL;
@@ -914,9 +925,6 @@ eth_from_pcaps(struct rte_vdev_device *vdev,
 	if (ret < 0)
 		return ret;
 
-	/* store weather we are using a single interface for rx/tx or not */
-	internals->single_iface = single_iface;
-
 	eth_dev->rx_pkt_burst = eth_pcap_rx;
 
 	if (using_dumpers)
@@ -935,7 +943,6 @@ pmd_pcap_probe(struct rte_vdev_device *dev)
 	struct rte_kvargs *kvlist;
 	struct pmd_devargs pcaps = {0};
 	struct pmd_devargs dumpers = {0};
-	int single_iface = 0;
 	int ret;
 
 	name = rte_vdev_device_name(dev);
@@ -953,19 +960,21 @@ pmd_pcap_probe(struct rte_vdev_device *dev)
 	 * If iface argument is passed we open the NICs and use them for
 	 * reading / writing
 	 */
-	if (rte_kvargs_count(kvlist, ETH_PCAP_IFACE_ARG) == 1) {
+	if (rte_kvargs_count(kvlist, ETH_PCAP_IFACE_ARG)) {
+		int queues = rte_kvargs_count(kvlist, ETH_PCAP_IFACE_ARG);
 
-		ret = rte_kvargs_process(kvlist, ETH_PCAP_IFACE_ARG,
-				&open_rx_tx_iface, &pcaps);
+		pcaps.num_of_queue = queues;
+		dumpers.num_of_queue = queues;
 
-		if (ret < 0)
-			goto free_kvlist;
+		for (int i = 0; i < queues; i++) {
+			ret = rte_kvargs_process(kvlist, ETH_PCAP_IFACE_ARG,
+					&open_rx_tx_iface, &pcaps);
 
-		dumpers.queue[0] = pcaps.queue[0];
+			if (ret < 0)
+				goto free_kvlist;
 
-		single_iface = 1;
-		pcaps.num_of_queue = 1;
-		dumpers.num_of_queue = 1;
+			dumpers.queue[i] = pcaps.queue[i];
+		}
 
 		goto create_eth;
 	}
@@ -1020,7 +1029,7 @@ pmd_pcap_probe(struct rte_vdev_device *dev)
 
 create_eth:
 	ret = eth_from_pcaps(dev, &pcaps, pcaps.num_of_queue, &dumpers,
-		dumpers.num_of_queue, kvlist, single_iface, is_tx_pcap);
+		dumpers.num_of_queue, kvlist, is_tx_pcap);
 
 free_kvlist:
 	rte_kvargs_free(kvlist);