Message ID: 20220330134956.18927-1-david.marchand@redhat.com (mailing list archive)
Headers:
From: David Marchand <david.marchand@redhat.com>
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com
Subject: [RFC PATCH v2 0/9] vhost lock annotations
Date: Wed, 30 Mar 2022 15:49:47 +0200
Message-Id: <20220330134956.18927-1-david.marchand@redhat.com>
In-Reply-To: <20220328121758.26632-1-david.marchand@redhat.com>
References: <20220328121758.26632-1-david.marchand@redhat.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>
Series: vhost lock annotations
Message
David Marchand
March 30, 2022, 1:49 p.m. UTC
Vhost internals involve multiple locks to protect data accessed by multiple threads. This series is an attempt at using clang thread safety checks [1] to catch issues at compile time: the EAL spinlock and rwlock are annotated, and the vhost code is instrumented so that clang can statically check correctness.

This is still a work in progress (some documentation and a release note update are missing).

These annotations are quite heavy to maintain because the full code path must be annotated, but I think they are worth using.

1: https://clang.llvm.org/docs/ThreadSafetyAnalysis.html
Comments
On Wed, Mar 30, 2022 at 3:50 PM David Marchand <david.marchand@redhat.com> wrote:
>
> vhost internals involves multiple locks to protect data access by
> multiple threads.
>
> This series is a try at using clang thread safety checks [1] to catch
> issues during compilation: EAL spinlock and rwlock are annotated and
> vhost code is instrumented so that clang can statically check
> correctness.
>
> This is still a work in progress (some documentation and a release note
> update are missing).
>
> Those annotations are quite heavy to maintain because the full path of
> code must be annotated, but I think it is worth using.
>
> 1: https://clang.llvm.org/docs/ThreadSafetyAnalysis.html

It looks like Mimecast shot the first patch (which I sent in place of Maxime, because this series should go through the main repo).

Looking at the mail source, I see:

X-Mimecast-Impersonation-Protect: Policy=CLT - Impersonation Protection Definition;Similar Internal Domain=false;Similar Monitored External Domain=false;Custom External Domain=false;Mimecast External Domain=false;Newly Observed Domain=false;Internal User Name=false;Custom Display Name List=false;Reply-to Address Mismatch=false;Targeted Threat Dictionary=false;Mimecast Threat Dictionary=false;Custom Threat Dictionary=false

I don't know how to understand this...
But as a result, the series is missing this patch in patchwork.

Patch 1 is still available from RFC v1:
https://patchwork.dpdk.org/project/dpdk/patch/20220328121758.26632-2-david.marchand@redhat.com/
or via next-virtio.
[..]
> It looks like mimecast shot the first patch (which I sent in place of
> Maxime, because this series should go through the main repo).
>
> Looking at the mail source, I see:
>
> X-Mimecast-Impersonation-Protect: Policy=CLT - Impersonation
> Protection Definition;Similar Internal Domain=false;Similar Monitored
> External Domain=false;Custom External Domain=false;Mimecast External
> Domain=false;Newly Observed Domain=false;Internal User
> Name=false;Custom Display Name List=false;Reply-to Address
> Mismatch=false;Targeted Threat Dictionary=false;Mimecast Threat
> Dictionary=false;Custom Threat Dictionary=false
>
> I don't know how to understand this...
> But as a result, the series is missing this patch in patchwork.

I believe it was ignored by Patchwork because of its content-type (application/octet-stream), which indicates that the message contains binary data instead of text:

> Content-Transfer-Encoding: 8bit
> Content-Type: application/octet-stream; x-default=true

Regards,
Ali
On Wed, Mar 30, 2022 at 4:37 PM Ali Alnubani <alialnu@nvidia.com> wrote:
>
> [..]
> > It looks like mimecast shot the first patch (which I sent in place of
> > Maxime, because this series should go through the main repo).
> >
> > Looking at the mail source, I see:
> >
> > X-Mimecast-Impersonation-Protect: Policy=CLT - Impersonation
> > Protection Definition;Similar Internal Domain=false;Similar Monitored
> > External Domain=false;Custom External Domain=false;Mimecast External
> > Domain=false;Newly Observed Domain=false;Internal User
> > Name=false;Custom Display Name List=false;Reply-to Address
> > Mismatch=false;Targeted Threat Dictionary=false;Mimecast Threat
> > Dictionary=false;Custom Threat Dictionary=false
> >
> > I don't know how to understand this...
> > But as a result, the series is missing this patch in patchwork.
>
> I believe it was ignored by Patchwork because of its content-type (application/octet-stream), which indicates that the message contains binary data instead of text:
> > Content-Transfer-Encoding: 8bit
> > Content-Type: application/octet-stream; x-default=true

Probably the consequence of some Mimecast mangling.

I noticed similar issues in my inbox for some of Luca's mails on stable@, for some mails from @Intel people on dev@, and even on netdev@. In all cases where From: contains a name != sender, my inbox got the same "content stripped and attached" symptom. On the other hand, those mails end up fine in public mail archives.

Looking at the headers of publicly archived mails and comparing with what I got, there is an additional trace of Mimecast between the dpdk.org server and my inbox.

I opened an IT ticket internally. I hope it will get fixed quickly.