From patchwork Sun Sep 28 17:52:16 2014
X-Patchwork-Submitter: "Wiles, Roger Keith"
X-Patchwork-Id: 619
From: "Wiles, Roger Keith"
To: dev@dpdk.org
Date: Sun, 28 Sep 2014 17:52:16 +0000
Message-ID: <3B9A624B-ABBF-4A20-96CD-8D5607006FEA@windriver.com>
Subject: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()

Here is a request for comments on the __mempool_get_bulk() routine. I believe
I am seeing a few more issues in this routine. Please look at the code below
and see whether these changes address some concerns in how the ring is
handled.

The first issue: cache->len should be increased by ret, not req, because we do
not know whether ret == req. This also means cache->len may still be too small
to satisfy the request from the cache, so that case has to fall back to the
ring as well.

The second issue: if you accept the change above, then the statistics also
have to account for a short dequeue.

Let me know what you think.

++Keith

--
Keith Wiles, Principal Technologist with CTO office, Wind River
mobile 972-213-5533

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 199a493..b1b1f7a 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 		   unsigned n, int is_mc)
 {
 	int ret;
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	unsigned n_orig = n;
-#endif
+
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
 	struct rte_mempool_cache *cache;
 	uint32_t index, len;
@@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 			goto ring_dequeue;
 		}
 
-		cache->len += req;
+		cache->len += ret;	// Need to adjust len by ret, not req, as ret may differ from req
+
+		if (cache->len < n) {
+			/*
+			 * (ret + cache->len) may not be >= n, as the 'ret'
+			 * value may be zero or less than 'req'.
+			 *
+			 * Note:
+			 * Ordering of objects from the cache and the common
+			 * pool could be an issue if (cache->len != 0 and less
+			 * than n), but in the normal case it should be OK. If
+			 * the user needs to preserve packet order, he must set
+			 * cache_size == 0.
+			 */
+			goto ring_dequeue;
+		}
 	}
 
 	/* Now fill in the response ... */
@@ -1002,9 +1014,12 @@ ring_dequeue:
 		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
 
 	if (ret < 0)
-		__MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
-	else
+		__MEMPOOL_STAT_ADD(mp, get_fail, n);
+	else {
 		__MEMPOOL_STAT_ADD(mp, get_success, ret);
+		// Catch the case when ret != n; adding zero should not be a problem.
+		__MEMPOOL_STAT_ADD(mp, get_fail, n - ret);
+	}
 
 	return ret;
 }
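
To make the first issue concrete outside of DPDK, here is a minimal,
self-contained sketch of the refill logic the first hunk proposes. All names
here (toy_cache, backend_dequeue, toy_get_bulk) are hypothetical stand-ins,
and backend_dequeue is assumed to return the number of objects it actually
delivered, possibly fewer than requested; that assumption is the premise
behind the (ret != req) concern, not a description of the rte_ring API
itself.

#include <stdint.h>

/*
 * Hypothetical backing store: returns how many objects it actually put
 * into obj_table, which may be fewer than 'req' (or negative on error).
 * This models the ret != req case the RFC is concerned with.
 */
extern int backend_dequeue(void **obj_table, unsigned req);

struct toy_cache {
	uint32_t len;      /* objects currently in the cache          */
	uint32_t size;     /* target fill level (cache_size)          */
	void *objs[512];   /* assumes size + n never exceeds capacity */
};

static int
toy_get_bulk(struct toy_cache *cache, void **obj_table, unsigned n)
{
	if (cache->len < n) {
		/* Ask for enough to serve 'n' and top the cache back up. */
		unsigned req = n + (cache->size - cache->len);
		int ret = backend_dequeue(&cache->objs[cache->len], req);

		if (ret < 0)
			goto backend_direct;

		/* Credit the cache with what we actually got: ret, not req. */
		cache->len += ret;

		/* Partial refill: the cache may still be short of 'n'. */
		if (cache->len < n)
			goto backend_direct;
	}

	/* Serve the request LIFO from the back of the cache. */
	for (unsigned i = 0; i < n; i++)
		obj_table[i] = cache->objs[--cache->len];
	return 0;

backend_direct:
	/* Bypass the cache; objects already cached stay cached. */
	return backend_dequeue(obj_table, n);
}

The second goto is exactly the new check in the patch: after a short refill,
cache->len alone decides whether the cache can still serve the request.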
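
The statistics change in the last hunk is plain arithmetic on n and ret. A
small sketch, again with hypothetical names (toy_stats, account_dequeue),
mirrors the proposed accounting and shows the "adding zero" case:

#include <stdio.h>

/* Toy counters standing in for the fields behind __MEMPOOL_STAT_ADD(). */
struct toy_stats {
	unsigned long get_success;  /* objects delivered                   */
	unsigned long get_fail;     /* objects requested but not delivered */
};

/*
 * Account for a dequeue that asked for 'n' objects and received 'ret'
 * (ret < 0 meaning total failure): on a short dequeue, n - ret objects
 * are charged as failures; when ret == n, zero is added to get_fail.
 */
static void
account_dequeue(struct toy_stats *st, unsigned n, int ret)
{
	if (ret < 0) {
		st->get_fail += n;
	} else {
		st->get_success += (unsigned)ret;
		st->get_fail += n - (unsigned)ret;
	}
}

int main(void)
{
	struct toy_stats st = { 0, 0 };

	account_dequeue(&st, 32, 32);  /* full success: +32 success          */
	account_dequeue(&st, 32, 20);  /* short by 12: +20 success, +12 fail */
	account_dequeue(&st, 32, -1);  /* hard failure: +32 fail             */

	printf("success=%lu fail=%lu\n", st.get_success, st.get_fail);
	/* Prints: success=52 fail=44 */
	return 0;
}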