This is an archive of the discontinued LLVM Phabricator instance.

[docs] LLVM Security Group and Process
ClosedPublic

Authored by jfb on Nov 15 2019, 10:57 AM.

Details

Summary

See the corresponding RFC on llvm-dev for a discussion of this proposal.

In this review we're looking for the following feedback: going into specific details, what do you think should be done differently, and what is exactly right in the draft proposal?

Diff Detail

Event Timeline

jfb created this revision.Nov 15 2019, 10:57 AM
jfb edited the summary of this revision. (Show Details)Nov 16 2019, 2:34 PM

We should explicitly state that patches to LLVM sent to the group are subject to the standard LLVM developer policy/license. This is important so members of the security group can use any patches.

We should prominently state that all messages and attachments will be publicly disclosed after any embargo expires. This is important so issue reporters don't send code under NDAs/etc.

mattd added a subscriber: mattd.Nov 22 2019, 8:50 AM
kcc added a subscriber: kcc.Nov 26 2019, 5:48 PM
kcc added inline comments.
llvm/docs/Security.rst
181

crbug.org has been working well for us, e.g. for oss-fuzz, or for one-off cases like
https://bugs.chromium.org/p/chromium/issues/detail?id=994957
https://bugs.chromium.org/p/chromium/issues/detail?id=606626

GitHub's security advisories are very recent, and it's unclear whether the workflow is polished.
E.g., I can't seem to add comments to an advisory once it's public.
I didn't check if these advisories have an API (they should).

Yet, I think we should consider GitHub as the primary candidate because this is where LLVM is and where the majority of OSS people are.
We may need to ask GitHub to implement missing features, if any.

I've not read this in detail or followed the list, but I wanted to add that I believe it's important that we have some form of public acknowledgement for the people who have reported security vulnerabilities there as well.

jfb updated this revision to Diff 236761.Jan 7 2020, 9:30 PM
  • Address feedback
jfb added a comment.Jan 7 2020, 9:32 PM

We should explicitly state that patches to LLVM sent to the group are subject to the standard LLVM developer policy/license. This is important so members of the security group can use any patches.

We should prominently state that all messages and attachments will be publicly disclosed after any embargo expires. This is important so issue reporters don't send code under NDAs/etc.

I'm not aware of projects pointing out their contribution policy in a different manner for security patches. Certainly we want the contributor policy to be prominent; for example, if we use GitHub we can add a CONTRIBUTING.md file to do this. I'm just not sure I understand how it should be different for the purpose of security issues.

jfb added a comment.Jan 7 2020, 9:33 PM

I've not read this in detail or followed the list, but I wanted to add that I believe it's important that we have some form of public acknowledgement for the people who have reported security vulnerabilities there as well.

CVEs have that property. Folks on the list are worried about "security theater", so I don't think I want to maintain a public leaderboard.

We (Microsoft) are interested in participating in this process.

I have one concern, which is that most of the security issues arising from LLVM are not necessarily security issues in LLVM itself. For example, a miscompilation that breaks invariants that a sandboxing technique depends on will appear to most LLVM developers as a simple miscompilation and may be a security issue only for downstream consumers. Presumably this group should be involved if the issue may apply to multiple downstream consumers?

Where do things like the null-check elision that caught Linux fall? This was a Linux dependence on undefined behaviour that caused compilers to emit code that introduced security vulnerabilities into Linux. Is that in scope for this group, or would we regard that as a Linux vulnerability independent of LLVM? What about, for example, a change to if-conversion that introduces branches in C code believed to be constant time, and so introduces vulnerabilities in crypto libraries? Is that in scope?

I don't think that we can characterize security issues based on where in the code they exist, but rather based on the kinds of behaviour that they trigger and we need to provide very clear advice on what that should be.

jfb added a comment.Jan 8 2020, 10:23 AM

We (Microsoft) are interested in participating in this process.

Thanks David.

I have one concern, which is that most of the security issues arising from LLVM are not necessarily security issues in LLVM itself. For example, a miscompilation that breaks invariants that a sandboxing technique depends on will appear to most LLVM developers as a simple miscompilation and may be a security issue only for downstream consumers. Presumably this group should be involved if the issue may apply to multiple downstream consumers?

Where do things like the null-check elision that caught Linux fall? This was a Linux dependence on undefined behaviour that caused compilers to emit code that introduced security vulnerabilities into Linux. Is that in scope for this group, or would we regard that as a Linux vulnerability independent of LLVM? What about, for example, a change to if-conversion that introduces branches in C code believed to be constant time, and so introduces vulnerabilities in crypto libraries? Is that in scope?

I don't think that we can characterize security issues based on where in the code they exist, but rather based on the kinds of behaviour that they trigger and we need to provide very clear advice on what that should be.

I absolutely agree! This complexity is why I'm not trying to decide what's in / out right now, and would rather have that process occur as follow-up RFCs (with updates to this document explaining what's in / out and why). Folks were expressing similar worries on the mailing list.

aadg added a subscriber: aadg.Jan 9 2020, 3:11 PM
aadg added inline comments.
llvm/docs/Security.rst
26

I understand we have to solve a chicken-and-egg problem here to get the group started; I think we should instead say that a call for applications to the initial security group will be made, and the board will pick 10 candidates from among the applicants. The board cannot possibly know everyone in the community, and to be effective, this group needs volunteers, not people who have been volunteered.

10 seems like a big number of people for an initial group, given the number of people who expressed interest in the forming of this group. What should we do if there are fewer than 10 volunteers?

The initial task for this group will probably be to finish fleshing out this proposal.

Shayne added a subscriber: Shayne.Jan 21 2020, 1:25 PM
In D70326#1810421, @jfb wrote:

We (Microsoft) are interested in participating in this process.

Thanks David.

I have one concern, which is that most of the security issues arising from LLVM are not necessarily security issues in LLVM itself. For example, a miscompilation that breaks invariants that a sandboxing technique depends on will appear to most LLVM developers as a simple miscompilation and may be a security issue only for downstream consumers. Presumably this group should be involved if the issue may apply to multiple downstream consumers?

Where do things like the null-check elision that caught Linux fall? This was a Linux dependence on undefined behaviour that caused compilers to emit code that introduced security vulnerabilities into Linux. Is that in scope for this group, or would we regard that as a Linux vulnerability independent of LLVM? What about, for example, a change to if-conversion that introduces branches in C code believed to be constant time, and so introduces vulnerabilities in crypto libraries? Is that in scope?

I don't think that we can characterize security issues based on where in the code they exist, but rather based on the kinds of behaviour that they trigger and we need to provide very clear advice on what that should be.

I absolutely agree! This complexity is why I'm not trying to decide what's in / out right now, and would rather have that process occur as follow-up RFCs (with updates to this document explaining what's in / out and why). Folks were expressing similar worries on the mailing list.

Hello. As David mentioned, we are interested in being involved, and I'll be the contact from Microsoft. We have quite a few teams using LLVM to compile parts of their products, and it's going to be important to figure out how to ensure security. As David said, this often will not mean changes to LLVM, but rebuilding the project source with new security fixes/features. Thinking about a previous security vulnerability, Spectre: if this group had handled it, we could have learned about LLVM features to apply to our projects prior to disclosure.

We (MathWorks) are interested in participating in this process.

I would be the contact from MathWorks. We ship LLVM as part of MATLAB and Simulink; these products are released twice a year.

The proposed document looks great. My only minor suggestion is that the LLVM Security Group have an odd number of members to limit the chance of a tie vote.
I agree with theraven and jfb that analysis of security issues should be based on the kinds of behavior that they trigger.

aadg added a comment.EditedApr 27 2020, 7:18 AM
This comment has been deleted.
aadg accepted this revision.Apr 27 2020, 7:22 AM

Considering that:

  • this topic and associated review has been open for quite some time already,
  • comments have been addressed when possible,
  • the LLVM Foundation board believes it is important for our community to have some security process in place, and this proposal is a step in the right direction,
  • keeping this stalled will not bring any benefit to the LLVM community (especially since face-to-face discussions at EuroLLVM'20 could not take place, and no one can be sure if the US conference will enable more face-to-face discussions),

I am approving this on behalf of the LLVM Foundation board so that the forming of a security group and the finalization of this process can start, with the community and especially those who have expressed interest in this.

Thanks for the effort in pushing this!

This revision is now accepted and ready to land.Apr 27 2020, 7:22 AM
jfb added a comment.Jun 11 2020, 8:39 AM

This has received good feedback, I'll therefore commit it in the next few days.

rengolin added inline comments.
llvm/docs/Security.rst
26

I agree on both points. We shouldn't burden the foundation with it, nor should we restrict the number of members to a fixed size.

53

Redundant wording. Perhaps sub-bullet points?

113

What if the group doesn't have a member from an affected vendor? How do we handle external vendor/country embargoes?

mattdr added a subscriber: mattdr.Jun 18 2020, 4:35 PM
psmith added a subscriber: psmith.Jun 19 2020, 1:51 AM
psmith added inline comments.
llvm/docs/Security.rst
47

There are many vendors that build products from LLVM and would like to be informed about vulnerabilities, but they may not be able to provide a security expert for the group. We risk putting off smaller vendors, who largely want to be informed but may not be able to contribute fixes, from putting names forward.

I don't think this needs changing in the text though. We'll have to see how it goes.

154

Is it worth documenting what happens when the decision is that the issue is not security-related? For example, update "What is a security issue?" if necessary.

We have time limits and a place for communicating fixes. How and where do we communicate a non-security issue? For example, is there an llvm-dev post?

I'm sure that there will be some decisions that will need revisiting due to community feedback or further information. I don't think that there needs to be a formal appeals procedure, I think that if the arguments are persuasive the committee can change their mind.

theraven added inline comments.Jun 19 2020, 2:16 AM
llvm/docs/Security.rst
47

I think that's a great point. We may want to have a two-ring approach, with a group that will coordinate the response and patch, and a wider distribution group that has access to the embargoed patch so that they can do package builds and coordinated releases. My concern over this approach is that it's much lower overhead to be in the second group and so the incentive is to only be in the second group, where you benefit from the process but don't contribute.

This also needs to be balanced with the fact that a leak of an embargoed patch is more likely the more people are exposed to it (for example, OpenBSD leaked the fix for the KRACK WPA2 attack before the embargo, which put everyone else at risk and got the project banned from access to embargoed fixes for a few things).

From the project's perspective, what is the benefit of having these small vendors participating? For the vendor to benefit, they must have a process for handling embargoed fixes and doing coordinated releases. It seems quite unlikely that such a vendor would not have someone who can help out, at least in coordinating the response, if not in assessing the security.

jfb updated this revision to Diff 277107.Jul 10 2020, 11:24 AM
jfb marked 14 inline comments as done.

Address more comments, add names.

llvm/docs/Security.rst
26

The board's expressed preferred direction is to start with those who have self-identified in the RFC discussion. I'll go with that.

47

Agreed that this is a valid concern. I would like to figure it out as we move forward, not in the first steps of setting up our process.

53

Using sub-bullets would make it unclear which items are joined by "and" and which aren't.

113

It won't be perfect at the start, but it's even more imperfect right now. We should do outreach (such as on this review and the RFC), and over the first few months I think we'll be in a position where what you ask isn't an issue anymore.

154

An issue that isn't deemed part of the security surface area is opened to the public once any embargo expires. In most cases, we'd decide that there's no embargo.

That being said, we can keep an embargo if, for example, it's not part of LLVM's security surface area but is for some non-LLVM project. Say we use a 3rd-party library in a non-secure manner, we're told of an issue, and we say "not in our threat model", but other open-source projects have it in their threat model. In such a case we don't want to lift the embargo, because doing so would affect the non-LLVM project.

Ultimately, we'll also do a periodic report where those issues show up.

181

That's been my thinking as well, and one of the first follow-ups I'd like to come after this initial commit.

jfb added a comment.Jul 10 2020, 11:24 AM

I believe this is now ready to go, with more to do afterwards.

jfb marked an inline comment as done.Jul 10 2020, 11:28 AM
jfb added subscribers: ojhunt, peter.smith, dim and 3 others.
jfb added inline comments.
llvm/docs/Security.rst
41

Tagging @dim @emaste @kristof.beyls @mattdr @ojhunt @probinson @peter.smith @reames @Shayne, they're the folks who have Phabricator accounts from this list.

jkorous added inline comments.Jul 10 2020, 12:43 PM
llvm/docs/Security.rst
204

We should probably include tools that need to be run with elevated privileges of some sort, for example lldb running as root.

jfb marked 2 inline comments as done.Jul 10 2020, 1:57 PM
jfb added inline comments.
llvm/docs/Security.rst
204

We'd need LLDB maintainers signing up to do this maintenance. Not that we can't / shouldn't, but we ought to consider these one at a time, with proper support.

This revision was automatically updated to reflect the committed changes.
jfb marked an inline comment as done.

Hi, your git commit contains extra Phabricator tags. You can drop the Reviewers:, Subscribers:, and Tags: sections and the text Summary: from the git commit with the following script:

arcfilter () {
        # Pull in the latest revision state from Phabricator.
        arc amend
        # Take the current commit message, drop the Reviewers:/Subscribers:
        # block and the "Summary:" prefix while keeping "Reviewed By:" and
        # "Differential Revision:", then amend the commit with the result.
        git log -1 --pretty=%B | awk '/Reviewers:|Subscribers:/{p=1} /Reviewed By:|Differential Revision:/{p=0} !p && !/^Summary:$/ {sub(/^Summary: /,"");print}' | git commit --amend --date=now -F -
}

Reviewed By: is considered important by some people; please keep that tag. (--date=now is my personal preference: author dates are usually not useful, and using committer dates can make the log almost monotonic in time.)

llvm/utils/git/pre-push.py can validate that the message does not include unneeded tags.

hfinkel added inline comments.
llvm/docs/Security.rst
177

I recommend that part of this process, presumably at the end, be directed at fulfilling goal #6 above ("Strive to improve security over time, for example by adding additional testing, fuzzing, and hardening after fixing issues."). Maybe something along the lines of: LLVM bug reports will be filed against fuzz testers and/or other components to detail the gaps in testing coverage that let the issue slip through, so that similar cases are less likely to arise in the future.