Message ID | 54C23265.8090403@allegro-packets.com (mailing list archive) |
---|---|
State | Not Applicable, archived |
Headers |
From: Martin Weiser <martin.weiser@allegro-packets.com>
Date: Fri, 23 Jan 2015 12:37:09 +0100
To: Bruce Richardson <bruce.richardson@intel.com>
Cc: dev@dpdk.org
Message-ID: <54C23265.8090403@allegro-packets.com>
In-Reply-To: <20150121134921.GA2592@bricha3-MOBL3>
Subject: Re: [dpdk-dev] Segmentation fault in ixgbe_rxtx_vec.c:444 with 1.8.0
List-Id: patches and discussions about DPDK <dev.dpdk.org> |
Commit Message
Martin Weiser
Jan. 23, 2015, 11:37 a.m. UTC
Hi Bruce,

I now had the chance to reproduce the issue we are seeing with a DPDK
example app.
I started out with a vanilla DPDK 1.8.0 and only made the following changes:

 			end->next = rx_bufs[buf_idx];
@@ -448,7 +448,6 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
 			rx_bufs[buf_idx]->data_len += rxq->crc_len;
 			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
 		}
-		buf_idx++;
 	}

 	/* save the partial packet for next time */

This includes your previously posted fix and makes a small modification
to the l2fwd example app to enable jumbo frames of up to 9000 bytes.
The system is equipped with a two port Intel 82599 card and both ports
are hooked up to a packet generator. The packet generator produces
simple Ethernet/IPv4/UDP packets.
I started the l2fwd app with the following command line:

$ ./build/l2fwd -c f -n 4 -- -q 8 -p 3

Both build variants that I have tested (CONFIG_RTE_IXGBE_INC_VECTOR=y
and CONFIG_RTE_IXGBE_INC_VECTOR=n) now give me the same result:
As long as the packet size is <= 2048 bytes the application behaves
normally and all packets are forwarded as expected.
As soon as the packet size exceeds 2048 bytes the application will only
forward some packets and then stop forwarding altogether. Even small
packets will not be forwarded anymore.

If you want me to try out anything else just let me know.

Best regards,
Martin

On 21.01.15 14:49, Bruce Richardson wrote:
> On Tue, Jan 20, 2015 at 11:39:03AM +0100, Martin Weiser wrote:
>> Hi again,
>>
>> I did some further testing and it seems like this issue is linked to
>> jumbo frames. I think a similar issue has already been reported by
>> Prashant Upadhyaya with the subject 'Packet Rx issue with DPDK1.8'.
>> In our application we use the following rxmode port configuration:
>>
>> 	.mq_mode = ETH_MQ_RX_RSS,
>> 	.split_hdr_size = 0,
>> 	.header_split = 0,
>> 	.hw_ip_checksum = 1,
>> 	.hw_vlan_filter = 0,
>> 	.jumbo_frame = 1,
>> 	.hw_strip_crc = 1,
>> 	.max_rx_pkt_len = 9000,
>>
>> and the mbuf size is calculated like the following:
>>
>> (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
>>
>> This works fine with DPDK 1.7 and jumbo frames are split into buffer
>> chains and can be forwarded on another port without a problem.
>> With DPDK 1.8 and the default configuration (CONFIG_RTE_IXGBE_INC_VECTOR
>> enabled) the application sometimes crashes as described in my first
>> mail and sometimes packet receiving stops with subsequently arriving
>> packets counted as rx errors. When CONFIG_RTE_IXGBE_INC_VECTOR is
>> disabled the packet processing also comes to a halt as soon as jumbo
>> frames arrive, with the slightly different effect that now
>> rte_eth_tx_burst refuses to send any previously received packets.
>>
>> Is there anything special to consider regarding jumbo frames when moving
>> from DPDK 1.7 to 1.8 that we might have missed?
>>
>> Martin
>>
>>
>>
>> On 19.01.15 11:26, Martin Weiser wrote:
>>> Hi everybody,
>>>
>>> we quite recently updated one of our applications to DPDK 1.8.0 and are
>>> now seeing a segmentation fault in ixgbe_rxtx_vec.c:444 after a few minutes.
>>> I just did some quick debugging and I only have a very limited
>>> understanding of the code in question but it seems that the 'continue'
>>> in line 445 without increasing 'buf_idx' might cause the problem. In one
>>> debugging session when the crash occurred the value of 'buf_idx' was 2
>>> and the value of 'pkt_idx' was 8965.
>>> Any help with this issue would be greatly appreciated. If you need any
>>> further information just let me know.
>>>
>>> Martin
>>>
>>>
> Hi Martin, Prashant,
>
> I've managed to reproduce the issue here and had a look at it.
> Could you
> both perhaps try the proposed change below and see if it fixes the problem for
> you and gives you a working system? If so, I'll submit this as a patch fix
> officially - or go back to the drawing board, if not. :-)
>
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> index b54cb19..dfaccee 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> @@ -402,10 +402,10 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
>  	struct rte_mbuf *pkts[RTE_IXGBE_VPMD_RX_BURST]; /*finished pkts*/
>  	struct rte_mbuf *start = rxq->pkt_first_seg;
>  	struct rte_mbuf *end = rxq->pkt_last_seg;
> -	unsigned pkt_idx = 0, buf_idx = 0;
> +	unsigned pkt_idx, buf_idx;
>
>
> -	while (buf_idx < nb_bufs) {
> +	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
>  		if (end != NULL) {
>  			/* processing a split packet */
>  			end->next = rx_bufs[buf_idx];
> @@ -448,7 +448,6 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
>  			rx_bufs[buf_idx]->data_len += rxq->crc_len;
>  			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
>  		}
> -		buf_idx++;
>  	}
>
>  	/* save the partial packet for next time */
>
>
> Regards,
> /Bruce
>
Comments
On Fri, Jan 23, 2015 at 12:37:09PM +0100, Martin Weiser wrote:
> Hi Bruce,
>
> I now had the chance to reproduce the issue we are seeing with a DPDK
> example app.
> I started out with a vanilla DPDK 1.8.0 and only made the following changes:
>
> diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
> index e684234..48e6b7c 100644
> --- a/examples/l2fwd/main.c
> +++ b/examples/l2fwd/main.c
> @@ -118,8 +118,9 @@ static const struct rte_eth_conf port_conf = {
>  		.header_split = 0, /**< Header Split disabled */
>  		.hw_ip_checksum = 0, /**< IP checksum offload disabled */
>  		.hw_vlan_filter = 0, /**< VLAN filtering disabled */
> -		.jumbo_frame = 0, /**< Jumbo Frame Support disabled */
> +		.jumbo_frame = 1, /**< Jumbo Frame Support disabled */
>  		.hw_strip_crc = 0, /**< CRC stripped by hardware */
> +		.max_rx_pkt_len = 9000,
>  	},
>  	.txmode = {
>  		.mq_mode = ETH_MQ_TX_NONE,
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> index b54cb19..dfaccee 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> @@ -402,10 +402,10 @@ reassemble_packets(struct igb_rx_queue *rxq,
> struct rte_mbuf **rx_bufs,
>  	struct rte_mbuf *pkts[RTE_IXGBE_VPMD_RX_BURST]; /*finished pkts*/
>  	struct rte_mbuf *start = rxq->pkt_first_seg;
>  	struct rte_mbuf *end = rxq->pkt_last_seg;
> -	unsigned pkt_idx = 0, buf_idx = 0;
> +	unsigned pkt_idx, buf_idx;
>
>
> -	while (buf_idx < nb_bufs) {
> +	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
>  		if (end != NULL) {
>  			/* processing a split packet */
>  			end->next = rx_bufs[buf_idx];
> @@ -448,7 +448,6 @@ reassemble_packets(struct igb_rx_queue *rxq, struct
> rte_mbuf **rx_bufs,
>  			rx_bufs[buf_idx]->data_len += rxq->crc_len;
>  			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
>  		}
> -		buf_idx++;
>  	}
>
>  	/* save the partial packet for next time */
>
>
> This includes your previously posted fix and makes a small modification
> to the l2fwd example app to enable jumbo
> frames of up to 9000 bytes.
> The system is equipped with a two port Intel 82599 card and both ports
> are hooked up to a packet generator. The packet generator produces
> simple Ethernet/IPv4/UDP packets.
> I started the l2fwd app with the following command line:
>
> $ ./build/l2fwd -c f -n 4 -- -q 8 -p 3
>
> Both build variants that I have tested (CONFIG_RTE_IXGBE_INC_VECTOR=y
> and CONFIG_RTE_IXGBE_INC_VECTOR=n) now give me the same result:
> As long as the packet size is <= 2048 bytes the application behaves
> normally and all packets are forwarded as expected.
> As soon as the packet size exceeds 2048 bytes the application will only
> forward some packets and then stop forwarding altogether. Even small
> packets will not be forwarded anymore.
>
> If you want me to try out anything else just let me know.
>
> Best regards,
> Martin
>

I think the txq flags are at fault here. The default txq flags setting for
the l2fwd sample application includes the flag ETH_TXQ_FLAGS_NOMULTSEGS which
disables support for sending packets with multiple segments i.e. jumbo frames
in this case. If you change l2fwd to explicitly pass a txqflags parameter in
as part of the port setup (as was the case in previous releases), and set txqflags
to 0, does the problem go away?

/Bruce

>
> On 21.01.15 14:49, Bruce Richardson wrote:
> > On Tue, Jan 20, 2015 at 11:39:03AM +0100, Martin Weiser wrote:
> >> Hi again,
> >>
> >> I did some further testing and it seems like this issue is linked to
> >> jumbo frames. I think a similar issue has already been reported by
> >> Prashant Upadhyaya with the subject 'Packet Rx issue with DPDK1.8'.
> >> In our application we use the following rxmode port configuration:
> >>
> >> 	.mq_mode = ETH_MQ_RX_RSS,
> >> 	.split_hdr_size = 0,
> >> 	.header_split = 0,
> >> 	.hw_ip_checksum = 1,
> >> 	.hw_vlan_filter = 0,
> >> 	.jumbo_frame = 1,
> >> 	.hw_strip_crc = 1,
> >> 	.max_rx_pkt_len = 9000,
> >>
> >> and the mbuf size is calculated like the following:
> >>
> >> (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> >>
> >> This works fine with DPDK 1.7 and jumbo frames are split into buffer
> >> chains and can be forwarded on another port without a problem.
> >> With DPDK 1.8 and the default configuration (CONFIG_RTE_IXGBE_INC_VECTOR
> >> enabled) the application sometimes crashes as described in my first
> >> mail and sometimes packet receiving stops with subsequently arriving
> >> packets counted as rx errors. When CONFIG_RTE_IXGBE_INC_VECTOR is
> >> disabled the packet processing also comes to a halt as soon as jumbo
> >> frames arrive, with the slightly different effect that now
> >> rte_eth_tx_burst refuses to send any previously received packets.
> >>
> >> Is there anything special to consider regarding jumbo frames when moving
> >> from DPDK 1.7 to 1.8 that we might have missed?
> >>
> >> Martin
> >>
> >>
> >> On 19.01.15 11:26, Martin Weiser wrote:
> >>> Hi everybody,
> >>>
> >>> we quite recently updated one of our applications to DPDK 1.8.0 and are
> >>> now seeing a segmentation fault in ixgbe_rxtx_vec.c:444 after a few minutes.
> >>> I just did some quick debugging and I only have a very limited
> >>> understanding of the code in question but it seems that the 'continue'
> >>> in line 445 without increasing 'buf_idx' might cause the problem. In one
> >>> debugging session when the crash occurred the value of 'buf_idx' was 2
> >>> and the value of 'pkt_idx' was 8965.
> >>> Any help with this issue would be greatly appreciated. If you need any
> >>> further information just let me know.
> >>>
> >>> Martin
> >>>
> >>>
> > Hi Martin, Prashant,
> >
> > I've managed to reproduce the issue here and had a look at it. Could you
> > both perhaps try the proposed change below and see if it fixes the problem for
> > you and gives you a working system? If so, I'll submit this as a patch fix
> > officially - or go back to the drawing board, if not. :-)
> >
> > diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> > index b54cb19..dfaccee 100644
> > --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> > +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> > @@ -402,10 +402,10 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
> >  	struct rte_mbuf *pkts[RTE_IXGBE_VPMD_RX_BURST]; /*finished pkts*/
> >  	struct rte_mbuf *start = rxq->pkt_first_seg;
> >  	struct rte_mbuf *end = rxq->pkt_last_seg;
> > -	unsigned pkt_idx = 0, buf_idx = 0;
> > +	unsigned pkt_idx, buf_idx;
> >
> >
> > -	while (buf_idx < nb_bufs) {
> > +	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
> >  		if (end != NULL) {
> >  			/* processing a split packet */
> >  			end->next = rx_bufs[buf_idx];
> > @@ -448,7 +448,6 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
> >  			rx_bufs[buf_idx]->data_len += rxq->crc_len;
> >  			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
> >  		}
> > -		buf_idx++;
> >  	}
> >
> >  	/* save the partial packet for next time */
> >
> >
> > Regards,
> > /Bruce
> >
Hi Bruce,

yes, you are absolutely right. That resolves the problem.
I was really happy to see that DPDK 1.8 includes proper default
configurations for each driver and I made use of this. But unfortunately
I was not aware that the default configuration did include the
ETH_TXQ_FLAGS_NOMULTSEGS flag for ixgbe and i40e.
I now use rte_eth_dev_info_get to get the default config for the port
and then modify the txq_flags to not include ETH_TXQ_FLAGS_NOMULTSEGS.
With your fix this now works for CONFIG_RTE_IXGBE_INC_VECTOR=y, too.

Sorry for missing this and thanks for the quick help.

Best regards,
Martin

On 23.01.15 12:52, Bruce Richardson wrote:
> On Fri, Jan 23, 2015 at 12:37:09PM +0100, Martin Weiser wrote:
>> Hi Bruce,
>>
>> I now had the chance to reproduce the issue we are seeing with a DPDK
>> example app.
>> I started out with a vanilla DPDK 1.8.0 and only made the following changes:
>>
>> diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
>> index e684234..48e6b7c 100644
>> --- a/examples/l2fwd/main.c
>> +++ b/examples/l2fwd/main.c
>> @@ -118,8 +118,9 @@ static const struct rte_eth_conf port_conf = {
>>  		.header_split = 0, /**< Header Split disabled */
>>  		.hw_ip_checksum = 0, /**< IP checksum offload disabled */
>>  		.hw_vlan_filter = 0, /**< VLAN filtering disabled */
>> -		.jumbo_frame = 0, /**< Jumbo Frame Support disabled */
>> +		.jumbo_frame = 1, /**< Jumbo Frame Support disabled */
>>  		.hw_strip_crc = 0, /**< CRC stripped by hardware */
>> +		.max_rx_pkt_len = 9000,
>>  	},
>>  	.txmode = {
>>  		.mq_mode = ETH_MQ_TX_NONE,
>> diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
>> b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
>> index b54cb19..dfaccee 100644
>> --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
>> +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
>> @@ -402,10 +402,10 @@ reassemble_packets(struct igb_rx_queue *rxq,
>> struct rte_mbuf **rx_bufs,
>>  	struct rte_mbuf *pkts[RTE_IXGBE_VPMD_RX_BURST]; /*finished pkts*/
>>  	struct rte_mbuf *start = rxq->pkt_first_seg;
>>  	struct rte_mbuf *end = rxq->pkt_last_seg;
>> -	unsigned pkt_idx = 0, buf_idx = 0;
>> +	unsigned pkt_idx, buf_idx;
>>
>>
>> -	while (buf_idx < nb_bufs) {
>> +	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
>>  		if (end != NULL) {
>>  			/* processing a split packet */
>>  			end->next = rx_bufs[buf_idx];
>> @@ -448,7 +448,6 @@ reassemble_packets(struct igb_rx_queue *rxq, struct
>> rte_mbuf **rx_bufs,
>>  			rx_bufs[buf_idx]->data_len += rxq->crc_len;
>>  			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
>>  		}
>> -		buf_idx++;
>>  	}
>>
>>  	/* save the partial packet for next time */
>>
>>
>> This includes your previously posted fix and makes a small modification
>> to the l2fwd example app to enable jumbo frames of up to 9000 bytes.
>> The system is equipped with a two port Intel 82599 card and both ports
>> are hooked up to a packet generator. The packet generator produces
>> simple Ethernet/IPv4/UDP packets.
>> I started the l2fwd app with the following command line:
>>
>> $ ./build/l2fwd -c f -n 4 -- -q 8 -p 3
>>
>> Both build variants that I have tested (CONFIG_RTE_IXGBE_INC_VECTOR=y
>> and CONFIG_RTE_IXGBE_INC_VECTOR=n) now give me the same result:
>> As long as the packet size is <= 2048 bytes the application behaves
>> normally and all packets are forwarded as expected.
>> As soon as the packet size exceeds 2048 bytes the application will only
>> forward some packets and then stop forwarding altogether. Even small
>> packets will not be forwarded anymore.
>>
>> If you want me to try out anything else just let me know.
>>
>>
>> Best regards,
>> Martin
>>
> I think the txq flags are at fault here. The default txq flags setting for
> the l2fwd sample application includes the flag ETH_TXQ_FLAGS_NOMULTSEGS which
> disables support for sending packets with multiple segments i.e. jumbo frames
> in this case.
> If you change l2fwd to explicitly pass a txqflags parameter in
> as part of the port setup (as was the case in previous releases), and set txqflags
> to 0, does the problem go away?
>
> /Bruce
>
>>
>> On 21.01.15 14:49, Bruce Richardson wrote:
>>> On Tue, Jan 20, 2015 at 11:39:03AM +0100, Martin Weiser wrote:
>>>> Hi again,
>>>>
>>>> I did some further testing and it seems like this issue is linked to
>>>> jumbo frames. I think a similar issue has already been reported by
>>>> Prashant Upadhyaya with the subject 'Packet Rx issue with DPDK1.8'.
>>>> In our application we use the following rxmode port configuration:
>>>>
>>>> 	.mq_mode = ETH_MQ_RX_RSS,
>>>> 	.split_hdr_size = 0,
>>>> 	.header_split = 0,
>>>> 	.hw_ip_checksum = 1,
>>>> 	.hw_vlan_filter = 0,
>>>> 	.jumbo_frame = 1,
>>>> 	.hw_strip_crc = 1,
>>>> 	.max_rx_pkt_len = 9000,
>>>>
>>>> and the mbuf size is calculated like the following:
>>>>
>>>> (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
>>>>
>>>> This works fine with DPDK 1.7 and jumbo frames are split into buffer
>>>> chains and can be forwarded on another port without a problem.
>>>> With DPDK 1.8 and the default configuration (CONFIG_RTE_IXGBE_INC_VECTOR
>>>> enabled) the application sometimes crashes as described in my first
>>>> mail and sometimes packet receiving stops with subsequently arriving
>>>> packets counted as rx errors. When CONFIG_RTE_IXGBE_INC_VECTOR is
>>>> disabled the packet processing also comes to a halt as soon as jumbo
>>>> frames arrive, with the slightly different effect that now
>>>> rte_eth_tx_burst refuses to send any previously received packets.
>>>>
>>>> Is there anything special to consider regarding jumbo frames when moving
>>>> from DPDK 1.7 to 1.8 that we might have missed?
>>>>
>>>> Martin
>>>>
>>>>
>>>>
>>>> On 19.01.15 11:26, Martin Weiser wrote:
>>>>> Hi everybody,
>>>>>
>>>>> we quite recently updated one of our applications to DPDK 1.8.0 and are
>>>>> now seeing a segmentation fault in ixgbe_rxtx_vec.c:444 after a few minutes.
>>>>> I just did some quick debugging and I only have a very limited
>>>>> understanding of the code in question but it seems that the 'continue'
>>>>> in line 445 without increasing 'buf_idx' might cause the problem. In one
>>>>> debugging session when the crash occurred the value of 'buf_idx' was 2
>>>>> and the value of 'pkt_idx' was 8965.
>>>>> Any help with this issue would be greatly appreciated. If you need any
>>>>> further information just let me know.
>>>>>
>>>>> Martin
>>>>>
>>>>>
>>> Hi Martin, Prashant,
>>>
>>> I've managed to reproduce the issue here and had a look at it. Could you
>>> both perhaps try the proposed change below and see if it fixes the problem for
>>> you and gives you a working system? If so, I'll submit this as a patch fix
>>> officially - or go back to the drawing board, if not.
>>> :-)
>>>
>>> diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
>>> index b54cb19..dfaccee 100644
>>> --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
>>> +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
>>> @@ -402,10 +402,10 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
>>>  	struct rte_mbuf *pkts[RTE_IXGBE_VPMD_RX_BURST]; /*finished pkts*/
>>>  	struct rte_mbuf *start = rxq->pkt_first_seg;
>>>  	struct rte_mbuf *end = rxq->pkt_last_seg;
>>> -	unsigned pkt_idx = 0, buf_idx = 0;
>>> +	unsigned pkt_idx, buf_idx;
>>>
>>>
>>> -	while (buf_idx < nb_bufs) {
>>> +	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
>>>  		if (end != NULL) {
>>>  			/* processing a split packet */
>>>  			end->next = rx_bufs[buf_idx];
>>> @@ -448,7 +448,6 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
>>>  			rx_bufs[buf_idx]->data_len += rxq->crc_len;
>>>  			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
>>>  		}
>>> -		buf_idx++;
>>>  	}
>>>
>>>  	/* save the partial packet for next time */
>>>
>>>
>>> Regards,
>>> /Bruce
>>>
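Martin's description of his fix (fetch the driver defaults with rte_eth_dev_info_get, then clear ETH_TXQ_FLAGS_NOMULTSEGS before setting up the TX queue) corresponds roughly to the fragment below against the DPDK 1.8 API. This is an illustrative sketch, not code posted in the thread; `port_id`, `nb_txd` and the queue index are assumed to be defined by the surrounding application:

```
struct rte_eth_dev_info dev_info;
struct rte_eth_txconf txconf;

/* fetch the driver's per-queue defaults (introduced in DPDK 1.8) */
rte_eth_dev_info_get(port_id, &dev_info);
txconf = dev_info.default_txconf;

/* the ixgbe/i40e defaults set ETH_TXQ_FLAGS_NOMULTSEGS, which disables
 * multi-segment transmit; clear it so chained jumbo-frame mbufs can be
 * sent with rte_eth_tx_burst() */
txconf.txq_flags &= ~ETH_TXQ_FLAGS_NOMULTSEGS;

rte_eth_tx_queue_setup(port_id, 0 /* queue id */, nb_txd,
                       rte_eth_dev_socket_id(port_id), &txconf);
```

Passing a txconf with `txq_flags = 0`, as Bruce suggests, has the same effect of enabling multi-segment TX.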
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index e684234..48e6b7c 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -118,8 +118,9 @@ static const struct rte_eth_conf port_conf = {
 		.header_split = 0, /**< Header Split disabled */
 		.hw_ip_checksum = 0, /**< IP checksum offload disabled */
 		.hw_vlan_filter = 0, /**< VLAN filtering disabled */
-		.jumbo_frame = 0, /**< Jumbo Frame Support disabled */
+		.jumbo_frame = 1, /**< Jumbo Frame Support disabled */
 		.hw_strip_crc = 0, /**< CRC stripped by hardware */
+		.max_rx_pkt_len = 9000,
 	},
 	.txmode = {
 		.mq_mode = ETH_MQ_TX_NONE,
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
index b54cb19..dfaccee 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
@@ -402,10 +402,10 @@ reassemble_packets(struct igb_rx_queue *rxq, struct rte_mbuf **rx_bufs,
 	struct rte_mbuf *pkts[RTE_IXGBE_VPMD_RX_BURST]; /*finished pkts*/
 	struct rte_mbuf *start = rxq->pkt_first_seg;
 	struct rte_mbuf *end = rxq->pkt_last_seg;
-	unsigned pkt_idx = 0, buf_idx = 0;
+	unsigned pkt_idx, buf_idx;


-	while (buf_idx < nb_bufs) {
+	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
 		if (end != NULL) {
 			/* processing a split packet */