[v2] app/testpmd: fix lcore ID restriction
Commit Message
With modern CPUs, it is possible to have a higher
CPU count and thus a higher RTE_MAX_LCORES.
In the testpmd application, the forwarding-cores option
"--nb-cores" is hard-limited to 255.
The patch removes this constraint and also widens the lcore
data type to 32 bits to align with the rte lcore APIs.
Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
v2:
* Fixed type mismatch in comparison
* Fixed incorrect format specifier
---
app/test-pmd/config.c | 6 +++---
app/test-pmd/parameters.c | 4 ++--
app/test-pmd/testpmd.h | 2 +-
3 files changed, 6 insertions(+), 6 deletions(-)
Comments
On 4/16/2024 10:55 AM, Sivaprasad Tummala wrote:
> With modern CPUs, it is possible to have a higher
> CPU count and thus a higher RTE_MAX_LCORES.
> In the testpmd application, the forwarding-cores option
> "--nb-cores" is hard-limited to 255.
>
> The patch removes this constraint and also widens the lcore
> data type to 32 bits to align with the rte lcore APIs.
>
> Fixes: af75078fece3 ("first public release")
> Cc: stable@dpdk.org
>
> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
>
Recheck-request: iol-unit-amd64-testing
On 4/16/2024 10:55 AM, Sivaprasad Tummala wrote:
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index ba1007ace6..6b28c22c96 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -4785,9 +4785,9 @@ fwd_stream_on_other_lcores(uint16_t domain_id, lcoreid_t src_lc,
> continue;
> printf("Shared Rx queue group %u queue %hu can't be scheduled on different cores:\n",
> share_group, share_rxq);
> - printf(" lcore %hhu Port %hu queue %hu\n",
> + printf(" lcore %u Port %hu queue %hu\n",
> src_lc, src_port, src_rxq);
> - printf(" lcore %hhu Port %hu queue %hu\n",
> + printf(" lcore %u Port %hu queue %hu\n",
> lc_id, fs->rx_port, fs->rx_queue);
> printf("Please use --nb-cores=%hu to limit number of forwarding cores\n",
> nb_rxq);
> @@ -5159,7 +5159,7 @@ icmp_echo_config_setup(void)
> lcoreid_t lc_id;
> uint16_t sm_id;
>
> - if ((nb_txq * nb_fwd_ports) < nb_fwd_lcores)
> + if ((lcoreid_t)(nb_txq * nb_fwd_ports) < nb_fwd_lcores)
> cur_fwd_config.nb_fwd_lcores = (lcoreid_t)
> (nb_txq * nb_fwd_ports);
>
Hi Sivaprasad,

Is this '(lcoreid_t)' cast required? Because of integer promotion, I
think the result will be correct without the cast.

(And even leaving integer promotion aside, the cast would need to be
applied to one of the operands, not to the result, because the result
may already have been truncated by then. Anyway, that is not a concern
here since both variables are u16.)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -4785,9 +4785,9 @@ fwd_stream_on_other_lcores(uint16_t domain_id, lcoreid_t src_lc,
continue;
printf("Shared Rx queue group %u queue %hu can't be scheduled on different cores:\n",
share_group, share_rxq);
- printf(" lcore %hhu Port %hu queue %hu\n",
+ printf(" lcore %u Port %hu queue %hu\n",
src_lc, src_port, src_rxq);
- printf(" lcore %hhu Port %hu queue %hu\n",
+ printf(" lcore %u Port %hu queue %hu\n",
lc_id, fs->rx_port, fs->rx_queue);
printf("Please use --nb-cores=%hu to limit number of forwarding cores\n",
nb_rxq);
@@ -5159,7 +5159,7 @@ icmp_echo_config_setup(void)
lcoreid_t lc_id;
uint16_t sm_id;

- if ((nb_txq * nb_fwd_ports) < nb_fwd_lcores)
+ if ((lcoreid_t)(nb_txq * nb_fwd_ports) < nb_fwd_lcores)
cur_fwd_config.nb_fwd_lcores = (lcoreid_t)
(nb_txq * nb_fwd_ports);
else
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -1071,8 +1071,8 @@ launch_args_parse(int argc, char** argv)
break;
case TESTPMD_OPT_NB_CORES_NUM:
n = atoi(optarg);
- if (n > 0 && n <= nb_lcores)
- nb_fwd_lcores = (uint8_t) n;
+ if (n > 0 && (lcoreid_t)n <= nb_lcores)
+ nb_fwd_lcores = (lcoreid_t) n;
else
rte_exit(EXIT_FAILURE,
"nb-cores should be > 0 and <= %d\n",
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -84,7 +84,7 @@ extern volatile uint8_t f_quit;
/* Maximum number of pools supported per Rx queue */
#define MAX_MEMPOOL 8

-typedef uint8_t lcoreid_t;
+typedef uint32_t lcoreid_t;
typedef uint16_t portid_t;
typedef uint16_t queueid_t;
typedef uint16_t streamid_t;