From patchwork Thu Jul 30 14:42:20 2020
From: Manish Chopra
Date: Thu, 30 Jul 2020 07:42:20 -0700
Message-ID: <20200730144221.29051-6-manishc@marvell.com>
In-Reply-To: <20200730144221.29051-1-manishc@marvell.com>
References: <20200730144221.29051-1-manishc@marvell.com>
Subject: [dpdk-dev] [PATCH v5 5/6] net/qede: initialize VF MAC and link

This patch configures VFs with a random MAC address if no MAC is provided
by the PF/bulletin. It also adds the bulletin APIs required by the PF-PMD
driver to communicate link properties/changes to the VFs through the
bulletin update mechanism.
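For illustration only (not part of this patch): because the generated
address sets RTE_ETHER_LOCAL_ADMIN_ADDR, an application can tell a
PF-assigned MAC apart from a VF-generated one. A minimal sketch against
DPDK 20.08-era APIs (rte_eth_macaddr_get() returned void at the time);
check_vf_mac() is a hypothetical helper name:

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Print whether a port's MAC came from the PF or was generated
     * locally by the VF driver (locally-administered bit set).
     */
    static void check_vf_mac(uint16_t port_id)
    {
            struct rte_ether_addr mac;
            char buf[RTE_ETHER_ADDR_FMT_SIZE];

            rte_eth_macaddr_get(port_id, &mac);
            rte_ether_format_addr(buf, sizeof(buf), &mac);

            if (mac.addr_bytes[0] & RTE_ETHER_LOCAL_ADMIN_ADDR)
                    printf("port %u: locally administered MAC %s\n",
                           port_id, buf);
            else
                    printf("port %u: PF-assigned MAC %s\n", port_id, buf);
    }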
With these changes, the VF-PMD instance is able to run fastpath over the
PF-PMD driver instance.

Signed-off-by: Manish Chopra
Signed-off-by: Igor Russkikh
Signed-off-by: Rasesh Mody
---
 drivers/net/qede/qede_ethdev.c | 34 ++++++++++++++++++++-
 drivers/net/qede/qede_main.c   |  7 ++++-
 drivers/net/qede/qede_sriov.c  | 55 ++++++++++++++++++++++++++++++++++
 drivers/net/qede/qede_sriov.h  |  1 +
 4 files changed, 95 insertions(+), 2 deletions(-)

diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 210a3b10f..e785f3fb0 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2479,6 +2479,24 @@ static void qede_update_pf_params(struct ecore_dev *edev)
 	qed_ops->common->update_pf_params(edev, &pf_params);
 }
 
+static void qede_generate_random_mac_addr(struct rte_ether_addr *mac_addr)
+{
+	uint64_t random;
+
+	/* Set Organizationally Unique Identifier (OUI) prefix. */
+	mac_addr->addr_bytes[0] = 0x00;
+	mac_addr->addr_bytes[1] = 0x09;
+	mac_addr->addr_bytes[2] = 0xC0;
+
+	/* Force indication of locally assigned MAC address. */
+	mac_addr->addr_bytes[0] |= RTE_ETHER_LOCAL_ADMIN_ADDR;
+
+	/* Generate the last 3 bytes of the MAC address with a random number. */
+	random = rte_rand();
+
+	memcpy(&mac_addr->addr_bytes[3], &random, 3);
+}
+
 static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 {
 	struct rte_pci_device *pci_dev;
@@ -2491,7 +2509,7 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 	uint8_t bulletin_change;
 	uint8_t vf_mac[RTE_ETHER_ADDR_LEN];
 	uint8_t is_mac_forced;
-	bool is_mac_exist;
+	bool is_mac_exist = false;
 	/* Fix up ecore debug level */
 	uint32_t dp_module = ~0 & ~ECORE_MSG_HW;
 	uint8_t dp_level = ECORE_LEVEL_VERBOSE;
@@ -2669,6 +2687,20 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
 				DP_ERR(edev, "No VF macaddr assigned\n");
 			}
 		}
+
+		/* If MAC doesn't exist from PF, generate random one */
+		if (!is_mac_exist) {
+			struct rte_ether_addr *mac_addr;
+
+			mac_addr = (struct rte_ether_addr *)&vf_mac;
+			qede_generate_random_mac_addr(mac_addr);
+
+			rte_ether_addr_copy(mac_addr,
+					    &eth_dev->data->mac_addrs[0]);
+
+			rte_ether_addr_copy(&eth_dev->data->mac_addrs[0],
+					    &adapter->primary_mac);
+		}
 	}
 
 	eth_dev->dev_ops = (is_vf) ?
 			   &qede_eth_vf_dev_ops : &qede_eth_dev_ops;
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 0afacc064..805a95e3c 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -651,10 +651,15 @@ void qed_link_update(struct ecore_hwfn *hwfn)
 	struct ecore_dev *edev = hwfn->p_dev;
 	struct qede_dev *qdev = (struct qede_dev *)edev;
 	struct rte_eth_dev *dev = (struct rte_eth_dev *)qdev->ethdev;
+	int rc;
+
+	rc = qede_link_update(dev, 0);
+	qed_inform_vf_link_state(hwfn);
 
-	if (!qede_link_update(dev, 0))
+	if (!rc) {
 		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC,
 					      NULL);
+	}
 }
 
 static int qed_drain(struct ecore_dev *edev)
diff --git a/drivers/net/qede/qede_sriov.c b/drivers/net/qede/qede_sriov.c
index 6d620dde8..93f7a2a55 100644
--- a/drivers/net/qede/qede_sriov.c
+++ b/drivers/net/qede/qede_sriov.c
@@ -126,6 +126,28 @@ static void qed_handle_vf_msg(struct ecore_hwfn *hwfn)
 	ecore_ptt_release(hwfn, ptt);
 }
 
+static void qed_handle_bulletin_post(struct ecore_hwfn *hwfn)
+{
+	struct ecore_ptt *ptt;
+	int i;
+
+	ptt = ecore_ptt_acquire(hwfn);
+	if (!ptt) {
+		DP_NOTICE(hwfn, true, "PTT acquire failed\n");
+		qed_schedule_iov(hwfn, QED_IOV_WQ_BULLETIN_UPDATE_FLAG);
+		return;
+	}
+
+	/* TODO - at the moment update bulletin board of all VFs.
+	 * If this proves too costly, we can mark VFs that need their
+	 * bulletins updated.
+	 */
+	ecore_for_each_vf(hwfn, i)
+		ecore_iov_post_vf_bulletin(hwfn, i, ptt);
+
+	ecore_ptt_release(hwfn, ptt);
+}
+
 void qed_iov_pf_task(void *arg)
 {
 	struct ecore_hwfn *p_hwfn = arg;
@@ -134,6 +156,13 @@ void qed_iov_pf_task(void *arg)
 		OSAL_CLEAR_BIT(QED_IOV_WQ_MSG_FLAG, &p_hwfn->iov_task_flags);
 		qed_handle_vf_msg(p_hwfn);
 	}
+
+	if (OSAL_GET_BIT(QED_IOV_WQ_BULLETIN_UPDATE_FLAG,
+			 &p_hwfn->iov_task_flags)) {
+		OSAL_CLEAR_BIT(QED_IOV_WQ_BULLETIN_UPDATE_FLAG,
+			       &p_hwfn->iov_task_flags);
+		qed_handle_bulletin_post(p_hwfn);
+	}
 }
 
 int qed_schedule_iov(struct ecore_hwfn *p_hwfn, enum qed_iov_wq_flag flag)
@@ -144,3 +173,29 @@ int qed_schedule_iov(struct ecore_hwfn *p_hwfn, enum qed_iov_wq_flag flag)
 	OSAL_SET_BIT(flag, &p_hwfn->iov_task_flags);
 	return rte_eal_alarm_set(1, qed_iov_pf_task, p_hwfn);
 }
+
+void qed_inform_vf_link_state(struct ecore_hwfn *hwfn)
+{
+	struct ecore_hwfn *lead_hwfn = ECORE_LEADING_HWFN(hwfn->p_dev);
+	struct ecore_mcp_link_capabilities caps;
+	struct ecore_mcp_link_params params;
+	struct ecore_mcp_link_state link;
+	int i;
+
+	if (!hwfn->pf_iov_info)
+		return;
+
+	rte_memcpy(&params, ecore_mcp_get_link_params(lead_hwfn),
+		   sizeof(params));
+	rte_memcpy(&link, ecore_mcp_get_link_state(lead_hwfn), sizeof(link));
+	rte_memcpy(&caps, ecore_mcp_get_link_capabilities(lead_hwfn),
+		   sizeof(caps));
+
+	/* Update bulletin of all future possible VFs with link configuration */
+	for (i = 0; i < hwfn->p_dev->p_iov_info->total_vfs; i++) {
+		ecore_iov_set_link(hwfn, i,
+				   &params, &link, &caps);
+	}
+
+	qed_schedule_iov(hwfn, QED_IOV_WQ_BULLETIN_UPDATE_FLAG);
+}
diff --git a/drivers/net/qede/qede_sriov.h b/drivers/net/qede/qede_sriov.h
index 8b7fa7daa..e58ecc2a5 100644
--- a/drivers/net/qede/qede_sriov.h
+++ b/drivers/net/qede/qede_sriov.h
@@ -17,5 +17,6 @@ enum qed_iov_wq_flag {
 	QED_IOV_WQ_DB_REC_HANDLER,
 };
 
+void qed_inform_vf_link_state(struct ecore_hwfn *hwfn);
 int qed_schedule_iov(struct ecore_hwfn *p_hwfn, enum qed_iov_wq_flag flag);
 void qed_iov_pf_task(void *arg);
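
Not part of the patch, but for context: once the PF posts link state
through the bulletin (qed_inform_vf_link_state() above), a VF application
sees it through the normal ethdev link API. A hypothetical poll loop,
assuming 20.08-era DPDK where rte_eth_link_get_nowait() returned void;
wait_for_vf_link() is an illustrative helper name:

    #include <stdio.h>
    #include <rte_cycles.h>
    #include <rte_ethdev.h>

    /* Poll (with a small delay) until the VF port reports link up;
     * on qede VFs the status originates from the PF's bulletin updates.
     */
    static void wait_for_vf_link(uint16_t port_id)
    {
            struct rte_eth_link link;

            do {
                    rte_eth_link_get_nowait(port_id, &link);
                    if (link.link_status == ETH_LINK_UP)
                            break;
                    rte_delay_ms(100);
            } while (1);

            printf("VF port %u: link up, %u Mbps\n",
                   port_id, link.link_speed);
    }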