From patchwork Fri Aug 30 14:00:47 2024
X-Patchwork-Submitter: Sriharsha Basavapatna
X-Patchwork-Id: 143497
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Sriharsha Basavapatna
To: dev@dpdk.org
Cc: Kishore Padmanabha, Shuanglin Wang, Michael Baucom, Sriharsha Basavapatna
Subject: [PATCH 45/47] net/bnxt: tf_ulp: support a few feature extensions
Date: Fri, 30 Aug 2024 19:30:47 +0530
Message-Id: <20240830140049.1715230-46-sriharsha.basavapatna@broadcom.com>
In-Reply-To: <20240830140049.1715230-1-sriharsha.basavapatna@broadcom.com>
References: <20240830140049.1715230-1-sriharsha.basavapatna@broadcom.com>
List-Id: DPDK patches and discussions

From: Kishore Padmanabha

This patch supports the following features:

- add support for port table write operation

  Added support for a port table write operation from the template, so
  that the template can write mirror id details into the port database.
- support generic template for socket direct

  Support the socket direct feature, which is disabled by default. Users
  can enable it with the truflow feature bit in the meson configuration.

- add support for truflow promiscuous mode

  The truflow application supports promiscuous mode to enable or disable
  receiving packets with unknown destination MAC addresses.

- set metadata for profile tcam entry

  The higher bits of the metadata are currently used for the profile tcam
  entry. To make better use of EM entries, use the metadata fully instead
  of only its higher bits.

- support the group miss action

  The generic template supports setting a group miss action with the
  following rte command:

    flow group 0 group_id 1 ingress set_miss_actions jump group 3 / end

- fix some build failures

  This change resolves a build issue seen with some OSes and compiler
  versions.

Signed-off-by: Kishore Padmanabha
Signed-off-by: Shuanglin Wang
Reviewed-by: Michael Baucom
Signed-off-by: Sriharsha Basavapatna
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h       |  31 +++
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c  |  27 ++-
 drivers/net/bnxt/tf_ulp/ulp_alloc_tbl.c  |   4 +
 drivers/net/bnxt/tf_ulp/ulp_def_rules.c  | 286 ++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_mapper.c     |  43 +++-
 drivers/net/bnxt/tf_ulp/ulp_port_db.c    |  89 +++++++
 drivers/net/bnxt/tf_ulp/ulp_port_db.h    |  28 +++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c |  17 ++
 8 files changed, 520 insertions(+), 5 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index 758b9deb63..a35f79f167 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -92,9 +92,19 @@ enum bnxt_rte_flow_action_type {
 	BNXT_RTE_FLOW_ACTION_TYPE_LAST
 };
 
+#define BNXT_ULP_MAX_GROUP_CNT	8
+struct bnxt_ulp_grp_rule_info {
+	uint32_t	group_id;
+	uint32_t	flow_id;
+	uint8_t		dir;
+	uint8_t		valid;
+};
+
 struct bnxt_ulp_df_rule_info {
 	uint32_t	def_port_flow_id;
+	uint32_t	promisc_flow_id;
 	uint8_t		valid;
+	struct bnxt_ulp_grp_rule_info	grp_df_rule[BNXT_ULP_MAX_GROUP_CNT];
 };
 
 struct bnxt_ulp_vfr_rule_info {
@@ -291,4 +301,25 @@ bnxt_ulp_cntxt_entry_acquire(void *arg);
 void
 bnxt_ulp_cntxt_entry_release(void);
 
+int32_t
+bnxt_ulp_promisc_mode_set(struct bnxt *bp, uint8_t enable);
+
+int32_t
+bnxt_ulp_set_prio_attribute(struct ulp_rte_parser_params *params,
+			    const struct rte_flow_attr *attr);
+
+void
+bnxt_ulp_set_dir_attributes(struct ulp_rte_parser_params *params,
+			    const struct rte_flow_attr *attr);
+
+void
+bnxt_ulp_init_parser_cf_defaults(struct ulp_rte_parser_params *params,
+				 uint16_t port_id);
+
+int32_t
+bnxt_ulp_grp_miss_act_set(struct rte_eth_dev *dev,
+			  const struct rte_flow_attr *attr,
+			  const struct rte_flow_action actions[],
+			  uint32_t *flow_id);
+
 #endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index eea05e129a..334eda99ce 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -66,7 +66,7 @@ bnxt_ulp_flow_validate_args(const struct rte_flow_attr *attr,
 	return BNXT_TF_RC_SUCCESS;
 }
 
-static inline void
+void
 bnxt_ulp_set_dir_attributes(struct ulp_rte_parser_params *params,
 			    const struct rte_flow_attr *attr)
 {
@@ -86,7 +86,7 @@ bnxt_ulp_set_dir_attributes(struct ulp_rte_parser_params *params,
 	}
 }
 
-static int32_t
+int32_t
 bnxt_ulp_set_prio_attribute(struct ulp_rte_parser_params *params,
 			    const struct rte_flow_attr *attr)
 {
@@ -117,7 +117,7 @@ bnxt_ulp_set_prio_attribute(struct ulp_rte_parser_params *params,
 	return 0;
 }
 
-static inline void
+void
 bnxt_ulp_init_parser_cf_defaults(struct ulp_rte_parser_params *params,
 				 uint16_t port_id)
 {
@@ -268,6 +268,26 @@ bnxt_ulp_init_mapper_params(struct bnxt_ulp_mapper_parms *mparms,
 		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_SOCKET_DIRECT_VPORT,
 				    (vport == 1) ? 2 : 1);
 	}
+
+	/* Update the socket direct svif when socket_direct feature enabled. */
+	if (ULP_BITMAP_ISSET(bnxt_ulp_feature_bits_get(params->ulp_ctx),
+			     BNXT_ULP_FEATURE_BIT_SOCKET_DIRECT)) {
+		enum bnxt_ulp_intf_type intf_type;
+		/* For ingress flow on trusted_vf port */
+		intf_type = bnxt_pmd_get_interface_type(params->port_id);
+		if (intf_type == BNXT_ULP_INTF_TYPE_TRUSTED_VF) {
+			uint16_t svif;
+			/* Get the socket direct svif of the given dev port */
+			if (unlikely(ulp_port_db_dev_port_socket_direct_svif_get(params->ulp_ctx,
+										 params->port_id,
+										 &svif))) {
+				BNXT_DRV_DBG(ERR, "Invalid port id %u\n",
+					     params->port_id);
+				return;
+			}
+			ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_SOCKET_DIRECT_SVIF, svif);
+		}
+	}
 }
 
 /* Function to create the rte flow. */
@@ -305,6 +325,7 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 	/* Initialize the parser params */
 	memset(&params, 0, sizeof(struct ulp_rte_parser_params));
 	params.ulp_ctx = ulp_ctx;
+	params.port_id = dev->data->port_id;
 
 	if (unlikely(bnxt_ulp_cntxt_app_id_get(params.ulp_ctx, &params.app_id))) {
 		BNXT_DRV_DBG(ERR, "failed to get the app id\n");
diff --git a/drivers/net/bnxt/tf_ulp/ulp_alloc_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_alloc_tbl.c
index 6401a7a80f..f09d072ef3 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_alloc_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_alloc_tbl.c
@@ -181,6 +181,8 @@ ulp_allocator_tbl_list_alloc(struct bnxt_ulp_mapper_data *mapper_data,
 		BNXT_DRV_DBG(ERR, "unable to alloc index %x\n", idx);
 		return -ENOMEM;
 	}
+	/* Not using zero index */
+	*alloc_id += 1;
 	return 0;
 }
 
@@ -210,6 +212,8 @@ ulp_allocator_tbl_list_free(struct bnxt_ulp_mapper_data *mapper_data,
 		BNXT_DRV_DBG(ERR, "invalid table index %x\n", idx);
 		return -EINVAL;
 	}
+	/* not using zero index */
+	index -= 1;
 	if (index < 0 || index > entry->num_entries) {
 		BNXT_DRV_DBG(ERR, "invalid alloc index %x\n", index);
 		return -EINVAL;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_def_rules.c b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
index 17d2daeea3..b7a893a04f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
@@ -12,7 +12,7 @@
 #include "ulp_port_db.h"
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
-
+#include "ulp_rte_parser.h"
 static void
 ulp_l2_custom_tunnel_id_update(struct bnxt *bp,
 			       struct bnxt_ulp_mapper_parms *params);
@@ -485,6 +485,24 @@ ulp_default_flow_destroy(struct rte_eth_dev *eth_dev, uint32_t flow_id)
 	return rc;
 }
 
+static void
+bnxt_ulp_destroy_group_rules(struct bnxt *bp, uint16_t port_id)
+{
+	struct bnxt_ulp_grp_rule_info *info;
+	struct bnxt_ulp_grp_rule_info *grp_rules;
+	uint16_t idx;
+
+	grp_rules = bp->ulp_ctx->cfg_data->df_rule_info[port_id].grp_df_rule;
+
+	for (idx = 0; idx < BNXT_ULP_MAX_GROUP_CNT; idx++) {
+		info = &grp_rules[idx];
+		if (!info->valid)
+			continue;
+		ulp_default_flow_destroy(bp->eth_dev, info->flow_id);
+		memset(info, 0, sizeof(struct bnxt_ulp_grp_rule_info));
+	}
+}
+
 void
 bnxt_ulp_destroy_df_rules(struct bnxt *bp, bool global)
 {
@@ -505,8 +523,14 @@ bnxt_ulp_destroy_df_rules(struct bnxt *bp, bool global)
 		if (!info->valid)
 			return;
 
+		/* Delete the group default rules */
+		bnxt_ulp_destroy_group_rules(bp, port_id);
+
 		ulp_default_flow_destroy(bp->eth_dev,
 					 info->def_port_flow_id);
+		if (info->promisc_flow_id)
+			ulp_default_flow_destroy(bp->eth_dev,
+						 info->promisc_flow_id);
 		memset(info, 0, sizeof(struct bnxt_ulp_df_rule_info));
 		return;
 	}
@@ -517,8 +541,14 @@ bnxt_ulp_destroy_df_rules(struct bnxt *bp, bool global)
 		if (!info->valid)
 			continue;
 
+		/* Delete the group default rules */
+		bnxt_ulp_destroy_group_rules(bp, port_id);
+
 		ulp_default_flow_destroy(bp->eth_dev,
 					 info->def_port_flow_id);
+		if (info->promisc_flow_id)
+			ulp_default_flow_destroy(bp->eth_dev,
+						 info->promisc_flow_id);
 		memset(info, 0, sizeof(struct bnxt_ulp_df_rule_info));
 	}
 }
@@ -552,6 +582,7 @@ bnxt_create_port_app_df_rule(struct bnxt *bp, uint8_t flow_type,
 int32_t
 bnxt_ulp_create_df_rules(struct bnxt *bp)
 {
+	struct rte_eth_dev *dev = bp->eth_dev;
 	struct bnxt_ulp_df_rule_info *info;
 	uint16_t port_id;
 	int rc = 0;
@@ -581,6 +612,9 @@ bnxt_ulp_create_df_rules(struct bnxt *bp)
 	if (rc || BNXT_TESTPMD_EN(bp))
 		bp->tx_cfa_action = 0;
 
+	/* set or reset the promiscuous rule */
+	bnxt_ulp_promisc_mode_set(bp, dev->data->promiscuous);
+
 	info->valid = true;
 	return 0;
 }
@@ -709,3 +743,253 @@ ulp_l2_custom_tunnel_id_update(struct bnxt *bp,
 					    ULP_WP_SYM_TUN_HDR_TYPE_UPAR2);
 	}
 }
+
+/*
+ * Function to execute a specific template, this does not create flow id
+ *
+ * bp [in] Ptr to bnxt
+ * param_list [in] Ptr to a list of parameters (Currently, only DPDK port_id).
+ * ulp_class_tid [in] Class template ID number.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+static int32_t
+ulp_flow_template_process(struct bnxt *bp,
+			  struct ulp_tlv_param *param_list,
+			  uint32_t ulp_class_tid,
+			  uint16_t port_id,
+			  uint32_t flow_id)
+{
+	struct ulp_rte_hdr_field hdr_field[BNXT_ULP_PROTO_HDR_MAX];
+	uint64_t comp_fld[BNXT_ULP_CF_IDX_LAST];
+	struct bnxt_ulp_mapper_parms mapper_params = { 0 };
+	struct ulp_rte_act_prop act_prop;
+	struct ulp_rte_act_bitmap act = { 0 };
+	struct bnxt_ulp_context *ulp_ctx;
+	uint32_t type;
+	int rc = 0;
+
+	memset(&mapper_params, 0, sizeof(mapper_params));
+	memset(hdr_field, 0, sizeof(hdr_field));
+	memset(comp_fld, 0, sizeof(comp_fld));
+	memset(&act_prop, 0, sizeof(act_prop));
+
+	mapper_params.hdr_field = hdr_field;
+	mapper_params.act_bitmap = &act;
+	mapper_params.act_prop = &act_prop;
+	mapper_params.comp_fld = comp_fld;
+	mapper_params.class_tid = ulp_class_tid;
+	mapper_params.port_id = port_id;
+
+	ulp_ctx = bp->ulp_ctx;
+	if (!ulp_ctx) {
+		BNXT_DRV_DBG(ERR,
+			     "ULP is not init'ed. Fail to create dflt flow.\n");
+		return -EINVAL;
+	}
+
+	type = param_list->type;
+	while (type != BNXT_ULP_DF_PARAM_TYPE_LAST) {
+		if (ulp_def_handler_tbl[type].vfr_func) {
+			rc = ulp_def_handler_tbl[type].vfr_func(ulp_ctx,
+								param_list,
+								&mapper_params);
+			if (rc) {
+				BNXT_DRV_DBG(ERR,
+					     "Failed to create default flow\n");
+				return rc;
+			}
+		}
+
+		param_list++;
+		type = param_list->type;
+	}
+	/* Protect flow creation */
+	if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx)) {
+		BNXT_DRV_DBG(ERR, "Flow db lock acquire failed\n");
+		return -EINVAL;
+	}
+
+	mapper_params.flow_id = flow_id;
+	rc = ulp_mapper_flow_create(ulp_ctx, &mapper_params,
+				    NULL);
+	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
+	return rc;
+}
+
+int32_t
+bnxt_ulp_promisc_mode_set(struct bnxt *bp, uint8_t enable)
+{
+	uint32_t flow_type;
+	struct bnxt_ulp_df_rule_info *info;
+	uint16_t port_id;
+	int rc = 0;
+
+	if (!BNXT_TRUFLOW_EN(bp) || BNXT_ETH_DEV_IS_REPRESENTOR(bp->eth_dev) ||
+	    !bp->ulp_ctx)
+		return rc;
+
+	if (!BNXT_CHIP_P5(bp))
+		return rc;
+
+	port_id = bp->eth_dev->data->port_id;
+	info = &bp->ulp_ctx->cfg_data->df_rule_info[port_id];
+
+	/* create the promiscuous rule */
+	if (enable && !info->promisc_flow_id) {
+		flow_type = BNXT_ULP_TEMPLATE_PROMISCUOUS_ENABLE;
+		rc = bnxt_create_port_app_df_rule(bp, flow_type,
+						  &info->promisc_flow_id);
+		BNXT_DRV_DBG(DEBUG, "enable ulp promisc mode on port %u:%u\n",
+			     port_id, info->promisc_flow_id);
+	} else if (!enable && info->promisc_flow_id) {
+		struct ulp_tlv_param param_list[] = {
+			{
+				.type = BNXT_ULP_DF_PARAM_TYPE_DEV_PORT_ID,
+				.length = 2,
+				.value = {(port_id >> 8) & 0xff, port_id & 0xff}
+			},
+			{
+				.type = BNXT_ULP_DF_PARAM_TYPE_LAST,
+				.length = 0,
+				.value = {0}
+			}
+		};
+
+		flow_type = BNXT_ULP_TEMPLATE_PROMISCUOUS_DISABLE;
+		if (ulp_flow_template_process(bp, param_list, flow_type,
+					      port_id, 0))
+			return -EIO;
+
+		rc = ulp_default_flow_destroy(bp->eth_dev,
+					      info->promisc_flow_id);
+		BNXT_DRV_DBG(DEBUG, "disable ulp promisc mode on port %u:%u\n",
+			     port_id, info->promisc_flow_id);
+		info->promisc_flow_id = 0;
+	}
+	return rc;
+}
+
+/* Function to create the rte flow for miss action. */
+int32_t
+bnxt_ulp_grp_miss_act_set(struct rte_eth_dev *dev,
+			  const struct rte_flow_attr *attr,
+			  const struct rte_flow_action actions[],
+			  uint32_t *flow_id)
+{
+	struct bnxt_ulp_mapper_parms mparms = { 0 };
+	struct ulp_rte_parser_params params;
+	struct bnxt_ulp_context *ulp_ctx;
+	int ret = BNXT_TF_RC_ERROR;
+	uint16_t func_id;
+	uint32_t fid;
+	uint32_t group_id;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
+	if (unlikely(!ulp_ctx)) {
+		BNXT_DRV_DBG(ERR, "ULP context is not initialized\n");
+		goto flow_error;
+	}
+
+	/* Initialize the parser params */
+	memset(&params, 0, sizeof(struct ulp_rte_parser_params));
+	params.ulp_ctx = ulp_ctx;
+	params.port_id = dev->data->port_id;
+	/* classid is the group action template */
+	params.class_id = BNXT_ULP_TEMPLATE_GROUP_MISS_ACTION;
+
+	if (unlikely(bnxt_ulp_cntxt_app_id_get(params.ulp_ctx, &params.app_id))) {
+		BNXT_DRV_DBG(ERR, "failed to get the app id\n");
+		goto flow_error;
+	}
+
+	/* Set the flow attributes */
+	bnxt_ulp_set_dir_attributes(&params, attr);
+
+	if (unlikely(bnxt_ulp_set_prio_attribute(&params, attr)))
+		goto flow_error;
+
+	bnxt_ulp_init_parser_cf_defaults(&params, params.port_id);
+
+	/* Get the function id */
+	if (unlikely(ulp_port_db_port_func_id_get(ulp_ctx,
+						  params.port_id,
+						  &func_id))) {
+		BNXT_DRV_DBG(ERR, "conversion of port to func id failed\n");
+		goto flow_error;
+	}
+
+	/* Protect flow creation */
+	if (unlikely(bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx))) {
+		BNXT_DRV_DBG(ERR, "Flow db lock acquire failed\n");
+		goto flow_error;
+	}
+
+	/* Allocate a Flow ID for attaching all resources for the flow to.
+	 * Once allocated, all errors have to walk the list of resources and
+	 * free each of them.
+	 */
+	ret = ulp_flow_db_fid_alloc(ulp_ctx, BNXT_ULP_FDB_TYPE_DEFAULT,
+				    func_id, &fid);
+	if (unlikely(ret)) {
+		BNXT_DRV_DBG(ERR, "Unable to allocate flow table entry\n");
+		goto release_lock;
+	}
+
+	/* Update the implied SVIF */
+	ulp_rte_parser_implicit_match_port_process(&params);
+
+	/* Parse the rte flow action */
+	ret = bnxt_ulp_rte_parser_act_parse(actions, &params);
+	if (unlikely(ret != BNXT_TF_RC_SUCCESS))
+		goto free_fid;
+
+	/* Verify the jump target group id */
+	if (ULP_BITMAP_ISSET(params.act_bitmap.bits, BNXT_ULP_ACT_BIT_JUMP)) {
+		memcpy(&group_id,
+		       &params.act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_JUMP],
+		       BNXT_ULP_ACT_PROP_SZ_JUMP);
+		if (rte_cpu_to_be_32(group_id) == attr->group) {
+			BNXT_DRV_DBG(ERR, "Jump action cannot jump to its own group.\n");
+			ret = BNXT_TF_RC_ERROR;
+			goto free_fid;
+		}
+	}
+
+	mparms.flow_id = fid;
+	mparms.func_id = func_id;
+	mparms.port_id = params.port_id;
+
+	/* Perform the rte flow post process */
+	bnxt_ulp_rte_parser_post_process(&params);
+
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG_PARSER
+	/* Dump the rte flow action */
+	ulp_parser_act_info_dump(&params);
+#endif
+#endif
+
+	ret = ulp_matcher_action_match(&params, &params.act_tmpl);
+	if (unlikely(ret != BNXT_TF_RC_SUCCESS))
+		goto free_fid;
+
+	bnxt_ulp_init_mapper_params(&mparms, &params,
+				    BNXT_ULP_FDB_TYPE_DEFAULT);
+	/* Call the ulp mapper to create the flow in the hardware. */
+	ret = ulp_mapper_flow_create(ulp_ctx, &mparms, NULL);
+	if (unlikely(ret))
+		goto free_fid;
+
+	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
+
+	*flow_id = fid;
+	return 0;
+
+free_fid:
+	ulp_flow_db_fid_free(ulp_ctx, BNXT_ULP_FDB_TYPE_DEFAULT, fid);
+release_lock:
+	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
+flow_error:
+	return ret;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index c595e7cfc3..721e8f4992 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -858,6 +858,37 @@ ulp_mapper_field_port_db_process(struct bnxt_ulp_mapper_parms *parms,
 	return 0;
 }
 
+static int32_t
+ulp_mapper_field_port_db_write(struct bnxt_ulp_mapper_parms *parms,
+			       uint32_t port_id,
+			       uint16_t idx,
+			       uint8_t *val,
+			       uint32_t length)
+{
+	enum bnxt_ulp_port_table port_data = idx;
+	uint32_t val32;
+
+	switch (port_data) {
+	case BNXT_ULP_PORT_TABLE_PHY_PORT_MIRROR_ID:
+		if (ULP_BITS_2_BYTE(length) > sizeof(val32)) {
+			BNXT_DRV_DBG(ERR, "Invalid data length %u\n", length);
+			return -EINVAL;
+		}
+		memcpy(&val32, val, ULP_BITS_2_BYTE(length));
+		if (unlikely(ulp_port_db_port_table_mirror_set(parms->ulp_ctx,
+							       port_id,
+							       val32))) {
+			BNXT_DRV_DBG(ERR, "Invalid port id %u\n", port_id);
+			return -EINVAL;
+		}
+		break;
+	default:
+		BNXT_DRV_DBG(ERR, "Invalid port_data %d\n", port_data);
+		return -EINVAL;
+	}
+	return 0;
+}
+
 static int32_t
 ulp_mapper_field_src_process(struct bnxt_ulp_mapper_parms *parms,
 			     enum bnxt_ulp_field_src field_src,
@@ -3569,6 +3600,10 @@ ulp_mapper_func_info_process(struct bnxt_ulp_mapper_parms *parms,
 		process_src1 = 1;
 	case BNXT_ULP_FUNC_OPC_COND_LIST:
 		break;
+	case BNXT_ULP_FUNC_OPC_PORT_TABLE:
+		process_src1 = 1;
+		process_src2 = 1;
+		break;
 	default:
 		break;
 	}
@@ -3680,6 +3715,12 @@ ulp_mapper_func_info_process(struct bnxt_ulp_mapper_parms *parms,
 				       &res, sizeof(res)))
 			return -EINVAL;
 		break;
+	case BNXT_ULP_FUNC_OPC_PORT_TABLE:
+		rc = ulp_mapper_field_port_db_write(parms, res1,
+						    func_info->func_dst_opr,
+						    (uint8_t *)&res2,
+						    func_info->func_oper_size);
+		return rc;
 	default:
 		BNXT_DRV_DBG(ERR, "invalid func code %u\n",
 			     func_info->func_opc);
@@ -3842,7 +3883,7 @@ ulp_mapper_cond_execute_list_process(struct bnxt_ulp_mapper_parms *parms,
 {
 	struct bnxt_ulp_mapper_cond_list_info *execute_info;
 	struct bnxt_ulp_mapper_cond_list_info *oper;
-	int32_t cond_list_res, cond_res = 0, rc = 0;
+	int32_t cond_list_res = 0, cond_res = 0, rc = 0;
 	uint32_t idx;
 
 	/* set the execute result to true */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index 384b89da46..6907771725 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -513,6 +513,49 @@ ulp_port_db_phy_port_svif_get(struct bnxt_ulp_context *ulp_ctxt,
 	return 0;
 }
 
+/*
+ * Api to get the socket direct svif for a given device port.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * port_id [in] device port id
+ * svif [out] the socket direct svif of the given device index
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_dev_port_socket_direct_svif_get(struct bnxt_ulp_context *ulp_ctxt,
+					    uint32_t port_id,
+					    uint16_t *svif)
+{
+	struct bnxt_ulp_port_db *port_db;
+	uint32_t ifindex;
+	uint16_t phy_port_id, func_id;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || port_id >= RTE_MAX_ETHPORTS) {
+		BNXT_DRV_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	if (!port_db->dev_port_list[port_id])
+		return -ENOENT;
+
+	/* Get physical port id */
+	ifindex = port_db->dev_port_list[port_id];
+	func_id = port_db->ulp_intf_list[ifindex].drv_func_id;
+	phy_port_id = port_db->ulp_func_id_tbl[func_id].phy_port_id;
+
+	/* Calculate physical port id for socket direct port */
+	phy_port_id = phy_port_id ? 0 : 1;
+	if (phy_port_id >= port_db->phy_port_cnt) {
+		BNXT_DRV_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	*svif = port_db->phy_port_list[phy_port_id].port_svif;
+	return 0;
+}
+
 /*
  * Api to get the port type for a given ulp ifindex.
  *
@@ -812,3 +855,49 @@ ulp_port_db_port_table_scope_get(struct bnxt_ulp_context *ulp_ctxt,
 	}
 	return -EINVAL;
 }
+
+/* Api to get the PF Mirror Id for a given port id
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * port_id [in] dpdk port id
+ * mirror id [in] mirror id
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_port_table_mirror_set(struct bnxt_ulp_context *ulp_ctxt,
+				  uint16_t port_id, uint32_t mirror_id)
+{
+	struct ulp_phy_port_info *port_data;
+	struct bnxt_ulp_port_db *port_db;
+	struct ulp_interface_info *intf;
+	struct ulp_func_if_info *func;
+	uint32_t ifindex;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db) {
+		BNXT_DRV_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	if (ulp_port_db_dev_port_to_ulp_index(ulp_ctxt, port_id, &ifindex)) {
+		BNXT_DRV_DBG(ERR, "Invalid port id %u\n", port_id);
+		return -EINVAL;
+	}
+
+	intf = &port_db->ulp_intf_list[ifindex];
+	func = &port_db->ulp_func_id_tbl[intf->drv_func_id];
+	if (!func->func_valid) {
+		BNXT_DRV_DBG(ERR, "Invalid func for port id %u\n", port_id);
+		return -EINVAL;
+	}
+
+	port_data = &port_db->phy_port_list[func->phy_port_id];
+	if (!port_data->port_valid) {
+		BNXT_DRV_DBG(ERR, "Invalid phy port\n");
+		return -EINVAL;
+	}
+
+	port_data->port_mirror_id = mirror_id;
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index ef164f1e9b..8a2c08fe67 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -70,6 +70,7 @@ struct ulp_phy_port_info {
 	uint16_t port_spif;
 	uint16_t port_parif;
 	uint16_t port_vport;
+	uint32_t port_mirror_id;
 };
 
 /* Structure for the Port database */
@@ -240,6 +241,20 @@ ulp_port_db_phy_port_svif_get(struct bnxt_ulp_context *ulp_ctxt,
 			      uint32_t phy_port,
 			      uint16_t *svif);
 
+/*
+ * Api to get the socket direct svif for a given device port.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * port_id [in] device port id
+ * svif [out] the socket direct svif of the given device index
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_dev_port_socket_direct_svif_get(struct bnxt_ulp_context *ulp_ctxt,
+					    uint32_t port_id,
+					    uint16_t *svif);
+
 /*
  * Api to get the port type for a given ulp ifindex.
  *
@@ -379,4 +394,17 @@ ulp_port_db_port_vf_fid_get(struct bnxt_ulp_context *ulp_ctxt,
 int32_t
 ulp_port_db_port_table_scope_get(struct bnxt_ulp_context *ulp_ctxt,
 				 uint16_t port_id, uint8_t **tsid);
+
+/* Api to get the PF Mirror Id for a given port id
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * port_id [in] dpdk port id
+ * mirror id [in] mirror id
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_port_table_mirror_set(struct bnxt_ulp_context *ulp_ctxt,
+				  uint16_t port_id, uint32_t mirror_id);
+
 #endif /* _ULP_PORT_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index dbd8a118df..dd5985cd7b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -307,6 +307,14 @@ bnxt_ulp_comp_fld_intf_update(struct ulp_rte_parser_params *params)
 				    BNXT_ULP_CF_IDX_VF_FUNC_PARIF,
 				    parif);
+		/* Set VF func SVIF */
+		if (ulp_port_db_svif_get(params->ulp_ctx, ifindex,
+					 BNXT_ULP_CF_IDX_VF_FUNC_SVIF, &svif)) {
+			BNXT_DRV_DBG(ERR, "ParseErr:ifindex is not valid\n");
+			return;
+		}
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_VF_FUNC_SVIF,
+				    svif);
 	} else {
 		/* Set DRV func PARIF */
 		if (ulp_port_db_parif_get(params->ulp_ctx, ifindex,
@@ -319,6 +327,15 @@ bnxt_ulp_comp_fld_intf_update(struct ulp_rte_parser_params *params)
 		ULP_COMP_FLD_IDX_WR(params,
 				    BNXT_ULP_CF_IDX_DRV_FUNC_PARIF,
 				    parif);
+
+		/* Set DRV SVIF */
+		if (ulp_port_db_svif_get(params->ulp_ctx, ifindex,
+					 BNXT_ULP_DRV_FUNC_SVIF, &svif)) {
+			BNXT_DRV_DBG(ERR, "ParseErr:ifindex is not valid\n");
+			return;
+		}
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_DRV_FUNC_SVIF,
+				    svif);
 	}
 
 	if (mtype == BNXT_ULP_INTF_TYPE_PF) {
 		ULP_COMP_FLD_IDX_WR(params,