[dpdk-dev] In DPDK 1.7.1, the link status of the interface using virtio driver is always down.
Message ID: 2680B515A539A446ACBEC0EBBDEC3DF80E938465@SGSIMBX001.nsn-intra.net (mailing list archive)
State: Not Applicable, archived
Commit Message
Fu, Weiyi (NSN - CN/Hangzhou)
Dec. 11, 2014, 11:41 a.m. UTC
Hi Changchun,
I found you had made the following change to allow the virtio interface to start up when the link is down. Is there any scenario causing link down for a virtio interface?

Brs,
Fu Weiyi

-----Original Message-----
From: Fu, Weiyi (NSN - CN/Hangzhou)
Sent: Thursday, December 11, 2014 4:43 PM
To: 'ext Ouyang, Changchun'; dev@dpdk.org
Subject: RE: [dpdk-dev] In DPDK 1.7.1, the link status of the interface using virtio driver is always down.

Hi,
The result is still the same.

[root@EIPU-0(KVMCluster) /root]
# ./testpmd -c 3 -n 4 -- --burst=64 -i --txq=1 --rxq=1 --txqflags=0xffff
EAL: Cannot read numa node link for lcore 0 - using physical package id instead
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Cannot read numa node link for lcore 1 - using physical package id instead
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Cannot read numa node link for lcore 2 - using physical package id instead
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Cannot read numa node link for lcore 3 - using physical package id instead
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Cannot read numa node link for lcore 4 - using physical package id instead
EAL: Detected lcore 4 as core 4 on socket 0
EAL: Cannot read numa node link for lcore 5 - using physical package id instead
EAL: Detected lcore 5 as core 5 on socket 0
EAL: Cannot read numa node link for lcore 6 - using physical package id instead
EAL: Detected lcore 6 as core 6 on socket 0
EAL: Support maximum 64 logical core(s) by configuration.
EAL: Detected 7 lcore(s)
EAL: Searching for IVSHMEM devices...
EAL: No IVSHMEM configuration found!
EAL: Setting up memory...
EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0
EAL: Ask a virtual area of 0x13400000 bytes
EAL: Virtual area found at 0x7fb8e2600000 (size = 0x13400000)
EAL: Ask a virtual area of 0x1f000000 bytes
EAL: Virtual area found at 0x7fb8c3400000 (size = 0x1f000000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8c3000000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fb8c2a00000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8c2600000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8c2200000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fb8c1c00000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8c1800000 (size = 0x200000)
EAL: Requesting 410 pages of size 2MB from socket 0
EAL: TSC frequency is ~2792867 KHz
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: Master core 0 is ready (tid=f6998800)
EAL: Core 1 is ready (tid=c0ffe710)
EAL: PCI device 0000:00:03.0 on NUMA socket -1
EAL: probe driver: 1af4:1000 rte_virtio_pmd
EAL: 0000:00:03.0 not managed by UIO driver, skipping
EAL: PCI device 0000:00:04.0 on NUMA socket -1
EAL: probe driver: 1af4:1000 rte_virtio_pmd
EAL: PCI memory mapped at 0x7fb8f6959000
PMD: eth_virtio_dev_init(): PCI Port IO found start=0xc020 with size=0x20
PMD: virtio_negotiate_features(): guest_features before negotiate = 438020
PMD: virtio_negotiate_features(): host_features before negotiate = 489f7c26
PMD: virtio_negotiate_features(): features after negotiate = 30020
PMD: eth_virtio_dev_init(): PORT MAC: FF:FF:00:00:00:00
PMD: eth_virtio_dev_init(): VIRTIO_NET_F_MQ is not supported
PMD: virtio_dev_cq_queue_setup(): >>
PMD: virtio_dev_queue_setup(): selecting queue: 2
PMD: virtio_dev_queue_setup(): vq_size: 64 nb_desc:0
PMD: virtio_dev_queue_setup(): vring_size: 4612, rounded_vring_size: 8192
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212bdd000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31dd000
PMD: eth_virtio_dev_init(): config->max_virtqueue_pairs=1
PMD: eth_virtio_dev_init(): config->status=0
PMD: eth_virtio_dev_init(): PORT MAC: FF:FF:00:00:00:00
PMD: eth_virtio_dev_init(): hw->max_rx_queues=1 hw->max_tx_queues=1
PMD: eth_virtio_dev_init(): port 0 vendorID=0x1af4 deviceID=0x1000
EAL: PCI device 0000:00:05.0 on NUMA socket -1
EAL: probe driver: 1af4:1000 rte_virtio_pmd
EAL: PCI memory mapped at 0x7fb8f6958000
PMD: eth_virtio_dev_init(): PCI Port IO found start=0xc000 with size=0x20
PMD: virtio_negotiate_features(): guest_features before negotiate = 438020
PMD: virtio_negotiate_features(): host_features before negotiate = 489f7c26
PMD: virtio_negotiate_features(): features after negotiate = 30020
PMD: eth_virtio_dev_init(): PORT MAC: FF:FF:00:00:00:00
PMD: eth_virtio_dev_init(): VIRTIO_NET_F_MQ is not supported
PMD: virtio_dev_cq_queue_setup(): >>
PMD: virtio_dev_queue_setup(): selecting queue: 2
PMD: virtio_dev_queue_setup(): vq_size: 64 nb_desc:0
PMD: virtio_dev_queue_setup(): vring_size: 4612, rounded_vring_size: 8192
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212be0000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31e0000
PMD: eth_virtio_dev_init(): config->max_virtqueue_pairs=1
PMD: eth_virtio_dev_init(): config->status=0
PMD: eth_virtio_dev_init(): PORT MAC: FF:FF:00:00:00:00
PMD: eth_virtio_dev_init(): hw->max_rx_queues=1 hw->max_tx_queues=1
PMD: eth_virtio_dev_init(): port 1 vendorID=0x1af4 deviceID=0x1000
Interactive-mode selected
Configuring Port 0 (socket 0)
PMD: virtio_dev_configure(): configure
PMD: virtio_dev_tx_queue_setup(): >>
PMD: virtio_dev_queue_setup(): selecting queue: 1
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:512
PMD: virtio_dev_queue_setup(): Warning: nb_desc(512) is not equal to vq size (256), fall to vq size
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212be3000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31e3000
PMD: virtio_dev_rx_queue_setup(): >>
PMD: virtio_dev_queue_setup(): selecting queue: 0
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:128
PMD: virtio_dev_queue_setup(): Warning: nb_desc(128) is not equal to vq size (256), fall to vq size
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212be7000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31e7000
PMD: virtio_dev_vring_start(): >>
PMD: virtio_dev_rxtx_start(): >>
PMD: virtio_dev_vring_start(): >>
PMD: virtio_dev_vring_start(): Allocated 256 bufs
PMD: virtio_dev_vring_start(): >>

Port: 0 Link is DOWN
PMD: virtio_dev_start(): nb_queues=1
PMD: virtio_dev_start(): Notified backend at initialization
PMD: rte_eth_dev_config_restore: port 0: MAC address array not supported
PMD: rte_eth_promiscuous_disable: Function not supported
PMD: rte_eth_allmulticast_disable: Function not supported
Port 0: FF:FF:00:00:00:00
Configuring Port 1 (socket 0)
PMD: virtio_dev_configure(): configure
PMD: virtio_dev_tx_queue_setup(): >>
PMD: virtio_dev_queue_setup(): selecting queue: 1
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:512
PMD: virtio_dev_queue_setup(): Warning: nb_desc(512) is not equal to vq size (256), fall to vq size
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212bea000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31ea000
PMD: virtio_dev_rx_queue_setup(): >>
PMD: virtio_dev_queue_setup(): selecting queue: 0
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:128
PMD: virtio_dev_queue_setup(): Warning: nb_desc(128) is not equal to vq size (256), fall to vq size
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212bee000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31ee000
PMD: virtio_dev_vring_start(): >>
PMD: virtio_dev_rxtx_start(): >>
PMD: virtio_dev_vring_start(): >>
PMD: virtio_dev_vring_start(): Allocated 256 bufs
PMD: virtio_dev_vring_start(): >>

Port: 1 Link is DOWN
PMD: virtio_dev_start(): nb_queues=1
PMD: virtio_dev_start(): Notified backend at initialization
PMD: rte_eth_dev_config_restore: port 1: MAC address array not supported
PMD: rte_eth_promiscuous_disable: Function not supported
PMD: rte_eth_allmulticast_disable: Function not supported
Port 1: FF:FF:00:00:00:00
Checking link statuses...
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down

Brs,
Fu Weiyi

-----Original Message-----
From: ext Ouyang, Changchun [mailto:changchun.ouyang@intel.com]
Sent: Thursday, December 11, 2014 4:11 PM
To: Fu, Weiyi (NSN - CN/Hangzhou); dev@dpdk.org
Cc: Ouyang, Changchun
Subject: RE: [dpdk-dev] In DPDK 1.7.1, the link status of the interface using virtio driver is always down.
Hi,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Fu, Weiyi (NSN -
> CN/Hangzhou)
> Sent: Thursday, December 11, 2014 3:57 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] In DPDK 1.7.1, the link status of the interface using
> virtio driver is always down.
>
> Hi,
> We are using l2fwd based on DPDK 1.7.1 and found that the link status of
> the interface using the virtio driver is always down.
> Is there any precondition to bring the link up?
>

I suggest you use testpmd instead of l2fwd; virtio needs to transmit some packets before it can forward any. In testpmd, you can use the following command:

start tx_first

Thanks
Changchun
Comments
Hi,
I have seen this issue happen on older kernels like 2.6.32-220.el6.x86_64, while it works with no issues on a recent kernel like 3.10.x. Further, I found this issue was happening because the /sys/bus/pci/devices/<virtio_nic_dev>/msi_irqs directory is not enumerated in older kernels, resulting in hw->use_msix=0. This changes the VIRTIO_PCI_CONFIG() offset, and hence the link status issue shows up. Setting hw->use_msix=1 helped me get past this issue on 2.6.32-220.el6.x86_64.

Thanks,
Vijay

On Thu, Dec 11, 2014 at 3:41 AM, Fu, Weiyi (NSN - CN/Hangzhou) <weiyi.fu@nsn.com> wrote:

> Hi Changchun,
> I found you had made the following change to allow the virtio interface
> to start up when the link is down. Is there any scenario causing link
> down for a virtio interface?
>
> diff --git a/lib/librte_pmd_virtio/virtio_ethdev.c
> b/lib/librte_pmd_virtio/virtio_ethdev.c
> index 78018f9..4bff0fe 100644
> --- a/lib/librte_pmd_virtio/virtio_ethdev.c
> +++ b/lib/librte_pmd_virtio/virtio_ethdev.c
> @@ -1057,14 +1057,12 @@ virtio_dev_start(struct rte_eth_dev *dev)
>  		vtpci_read_dev_config(hw,
>  			offsetof(struct virtio_net_config, status),
>  			&status, sizeof(status));
> -		if ((status & VIRTIO_NET_S_LINK_UP) == 0) {
> +		if ((status & VIRTIO_NET_S_LINK_UP) == 0)
>  			PMD_INIT_LOG(ERR, "Port: %d Link is DOWN",
>  				     dev->data->port_id);
> -			return -EIO;
> -		} else {
> +		else
>  			PMD_INIT_LOG(DEBUG, "Port: %d Link is UP",
>  				     dev->data->port_id);
> -		}
>  	}
>
>  	vtpci_reinit_complete(hw);
>
> Brs,
> Fu Weiyi
>
> -----Original Message-----
> From: Fu, Weiyi (NSN - CN/Hangzhou)
> Sent: Thursday, December 11, 2014 4:43 PM
> To: 'ext Ouyang, Changchun'; dev@dpdk.org
> Subject: RE: [dpdk-dev] In DPDK 1.7.1, the link status of the interface
> using virtio driver is always down.
>
> Hi,
> The result is still the same.
> [snip: testpmd log and earlier exchange, quoted in full above]
Hi,

> -----Original Message-----
> From: Fu, Weiyi (NSN - CN/Hangzhou) [mailto:weiyi.fu@nsn.com]
> Sent: Thursday, December 11, 2014 7:42 PM
> To: Fu, Weiyi (NSN - CN/Hangzhou); Ouyang, Changchun; dev@dpdk.org
> Subject: RE: [dpdk-dev] In DPDK 1.7.1, the link status of the interface
> using virtio driver is always down.
>
> Hi Changchun,
> I found you had made the following change to allow the virtio interface
> to start up when the link is down. Is there any scenario causing link
> down for a virtio interface?
>

Not in my environment; that code is RFC code from Brocade, not merged into mainline yet. You can apply this patch and ignore the link state to see if rx and tx still work.

Thanks
Changchun
Hi,
I have ignored the link status, but rx and tx still can't work. I will try the method Vijay suggested. Thanks!

Brs,
Fu Weiyi

-----Original Message-----
From: ext Ouyang, Changchun [mailto:changchun.ouyang@intel.com]
Sent: Friday, December 12, 2014 9:00 AM
To: Fu, Weiyi (NSN - CN/Hangzhou); dev@dpdk.org
Cc: Ouyang, Changchun
Subject: RE: [dpdk-dev] In DPDK 1.7.1, the link status of the interface using virtio driver is always down.

[snip: previous message quoted in full above]
diff --git a/lib/librte_pmd_virtio/virtio_ethdev.c b/lib/librte_pmd_virtio/virtio_ethdev.c
index 78018f9..4bff0fe 100644
--- a/lib/librte_pmd_virtio/virtio_ethdev.c
+++ b/lib/librte_pmd_virtio/virtio_ethdev.c
@@ -1057,14 +1057,12 @@ virtio_dev_start(struct rte_eth_dev *dev)
 		vtpci_read_dev_config(hw,
 			offsetof(struct virtio_net_config, status),
 			&status, sizeof(status));
-		if ((status & VIRTIO_NET_S_LINK_UP) == 0) {
+		if ((status & VIRTIO_NET_S_LINK_UP) == 0)
 			PMD_INIT_LOG(ERR, "Port: %d Link is DOWN",
 				     dev->data->port_id);
-			return -EIO;
-		} else {
+		else
 			PMD_INIT_LOG(DEBUG, "Port: %d Link is UP",
 				     dev->data->port_id);
-		}
 	}

 	vtpci_reinit_complete(hw);