Message ID | 20201008091729.4321-1-ciara.loftus@intel.com |
---|---|
State | Superseded, archived |
Delegated to: | Ferruh Yigit |
Series | net/af_xdp: Don't allow umem sharing for xsks with same netdev, qid |
Context | Check | Description |
---|---|---
ci/iol-mellanox-Performance | success | Performance Testing PASS |
ci/iol-testing | success | Testing PASS |
ci/iol-intel-Performance | success | Performance Testing PASS |
ci/travis-robot | success | Travis build: passed |
ci/iol-broadcom-Performance | success | Performance Testing PASS |
ci/iol-broadcom-Functional | success | Functional Testing PASS |
ci/Intel-compilation | success | Compilation OK |
ci/checkpatch | success | coding style OK |
On 10/8/2020 10:17 AM, Ciara Loftus wrote:
> Supporting this would require locks, which would impact the performance of
> the more typical cases - xsks with different qids and netdevs.
>
> Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
> Fixes: 74b46340e2d4 ("net/af_xdp: support shared UMEM")

Off topic: this is patch 80000 in patchwork! That was fast. Thanks to everyone who contributed!
https://patches.dpdk.org/patch/80000/

The historical numbers from DPDK patchwork:

80000 - Oct. 8, 2020 (153 days) [ 5 months / 21 weeks and 6 days ]
70000 - May 8, 2020 (224 days)
60000 - Sept. 27, 2019 (248 days)
50000 - Jan. 22, 2019 (253 days)
40000 - May 14, 2018 (217 days)
30000 - Oct. 9, 2017 (258 days)
20000 - Jan. 25, 2017 (372 days)
10000 - Jan. 20, 2016 (645 days)
00001 - April 16, 2014

This is the fastest 10K patches yet: the historical average of ~250 days per 10K patches was beaten by roughly 100 days. v20.11 being an ABI-break release must have helped, but the size of the difference may also mean the project is still growing.

To put it in another perspective, this works out to an average of about 65 patches per day, continuously, over the last few months.

Thanks again to all contributors.
On 10/8/2020 10:17 AM, Ciara Loftus wrote: > Supporting this would require locks, which would impact the performance of > the more typical cases - xsks with different qids and netdevs. > > Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> > Fixes: 74b46340e2d4 ("net/af_xdp: support shared UMEM") Hi Ciara, 'check-git-log.sh' script is giving some errors, can you please fix them, you can run script as: "./devtools/check-git-log.sh -n1" And can you please give some sample in the description what is not supported now? Also does this require any documentation update, as kind of limitation or known issues?
> On 10/8/2020 10:17 AM, Ciara Loftus wrote:
> > Supporting this would require locks, which would impact the performance of
> > the more typical cases - xsks with different qids and netdevs.
> >
> > Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
> > Fixes: 74b46340e2d4 ("net/af_xdp: support shared UMEM")
>
> Hi Ciara,
>
> The 'check-git-log.sh' script is reporting some errors, can you please fix
> them? You can run the script as: "./devtools/check-git-log.sh -n1"
>
> Also, can you please give a sample in the description of what is not
> supported now?
>
> And does this require any documentation update, as a kind of limitation or
> known issue?

Thanks for the suggestions Ferruh, all valid. I've implemented them in the v2.

Ciara
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index eaf2c9c873..9e0e5c254a 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -634,16 +634,35 @@ find_internal_resource(struct pmd_internals *port_int)
 	return list;
 }
 
+/* Check if the netdev,qid context already exists */
+static inline bool
+ctx_exists(struct pkt_rx_queue *rxq, const char *ifname,
+		struct pkt_rx_queue *list_rxq, const char *list_ifname)
+{
+	bool exists = false;
+
+	if (rxq->xsk_queue_idx == list_rxq->xsk_queue_idx &&
+			!strncmp(ifname, list_ifname, IFNAMSIZ)) {
+		AF_XDP_LOG(ERR, "ctx %s,%i already exists, cannot share umem\n",
+					ifname, rxq->xsk_queue_idx);
+		exists = true;
+	}
+
+	return exists;
+}
+
 /* Get a pointer to an existing UMEM which overlays the rxq's mb_pool */
-static inline struct xsk_umem_info *
-get_shared_umem(struct pkt_rx_queue *rxq) {
+static inline int
+get_shared_umem(struct pkt_rx_queue *rxq, const char *ifname,
+			struct xsk_umem_info **umem)
+{
 	struct internal_list *list;
 	struct pmd_internals *internals;
-	int i = 0;
+	int i = 0, ret = 0;
 	struct rte_mempool *mb_pool = rxq->mb_pool;
 
 	if (mb_pool == NULL)
-		return NULL;
+		return ret;
 
 	pthread_mutex_lock(&internal_list_lock);
 
@@ -655,20 +674,25 @@ get_shared_umem(struct pkt_rx_queue *rxq) {
 			if (rxq == list_rxq)
 				continue;
 			if (mb_pool == internals->rx_queues[i].mb_pool) {
+				if (ctx_exists(rxq, ifname, list_rxq,
+						internals->if_name)) {
+					ret = -1;
+					goto out;
+				}
 				if (__atomic_load_n(
 					&internals->rx_queues[i].umem->refcnt,
 						__ATOMIC_ACQUIRE)) {
-					pthread_mutex_unlock(
-							&internal_list_lock);
-					return internals->rx_queues[i].umem;
+					*umem = internals->rx_queues[i].umem;
+					goto out;
 				}
 			}
 		}
 	}
 
+out:
 	pthread_mutex_unlock(&internal_list_lock);
 
-	return NULL;
+	return ret;
 }
 
 static int
@@ -913,7 +937,9 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
 	uint64_t umem_size, align = 0;
 
 	if (internals->shared_umem) {
-		umem = get_shared_umem(rxq);
+		if (get_shared_umem(rxq, internals->if_name, &umem) < 0)
+			return NULL;
+
 		if (umem != NULL &&
 			__atomic_load_n(&umem->refcnt, __ATOMIC_ACQUIRE) <
 					umem->max_xsks) {
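For readers following the interface change above: get_shared_umem() now returns an int and hands any matching UMEM back through an out-parameter, so the caller in xdp_umem_configure() can tell a netdev,qid conflict apart from simply not finding a shareable UMEM. Below is a minimal, self-contained sketch of that three-way contract; lookup_shared_umem() and struct umem are hypothetical stand-ins, not the driver's actual types.

```c
/* Sketch of the new get_shared_umem() calling convention; lookup_shared_umem()
 * and struct umem are hypothetical stand-ins, not the af_xdp PMD's types. */
#include <stddef.h>
#include <stdio.h>

struct umem { int refcnt; };

/* Returns -1 on a netdev,qid conflict; 0 otherwise.  When it returns 0,
 * *out is either NULL (no shareable UMEM yet) or an existing UMEM to reuse. */
static int
lookup_shared_umem(int conflict, struct umem *existing, struct umem **out)
{
	if (conflict)
		return -1;
	*out = existing;
	return 0;
}

int
main(void)
{
	struct umem shared = { .refcnt = 1 };
	struct umem *umem = NULL;

	if (lookup_shared_umem(0, &shared, &umem) < 0)
		printf("ctx conflict: configuration fails\n");
	else if (umem == NULL)
		printf("no existing UMEM: create a new one\n");
	else
		printf("reuse existing UMEM (refcnt=%d)\n", umem->refcnt);
	return 0;
}
```

Under this convention, a conflict aborts UMEM configuration (the `return NULL` in the diff above), while the other two outcomes proceed to create a new UMEM or reuse the existing one.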
Supporting this would require locks, which would impact the performance of
the more typical cases - xsks with different qids and netdevs.

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Fixes: 74b46340e2d4 ("net/af_xdp: support shared UMEM")
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 44 +++++++++++++++++++++++------
 1 file changed, 35 insertions(+), 9 deletions(-)
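To make concrete what is no longer supported (the sample requested in the review above): two xsks backed by the same mempool can share a UMEM only when they sit on different netdev,qid contexts. The standalone sketch below condenses the ctx_exists() comparison; struct xsk_ctx, ctx_conflict() and the interface names are simplified, made-up stand-ins for the PMD's internals.

```c
/* Standalone sketch, not driver code: struct xsk_ctx, ctx_conflict() and the
 * interface names are simplified stand-ins for the af_xdp PMD's internals. */
#include <net/if.h>     /* IFNAMSIZ */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct xsk_ctx {
	char ifname[IFNAMSIZ];
	int qid;
};

/* Mirrors the rule added by ctx_exists(): UMEM sharing is refused when two
 * xsks map to the same netdev,qid pair. */
static bool
ctx_conflict(const struct xsk_ctx *a, const struct xsk_ctx *b)
{
	return a->qid == b->qid && strncmp(a->ifname, b->ifname, IFNAMSIZ) == 0;
}

int
main(void)
{
	struct xsk_ctx xsk0 = { "eth0", 0 };
	struct xsk_ctx xsk1 = { "eth0", 0 };    /* same netdev, same qid */
	struct xsk_ctx xsk2 = { "eth0", 1 };    /* same netdev, different qid */

	printf("xsk0 + xsk1: %s\n", ctx_conflict(&xsk0, &xsk1) ?
	       "cannot share umem" : "may share umem");
	printf("xsk0 + xsk2: %s\n", ctx_conflict(&xsk0, &xsk2) ?
	       "cannot share umem" : "may share umem");
	return 0;
}
```

Running it reports that xsk0 and xsk1 (same netdev, same queue) cannot share a UMEM, while xsk0 and xsk2 (same netdev, different queue) still can.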