
[docs] LLVM Security Group and Process
Needs Review · Public

Authored by jfb on Nov 15 2019, 10:57 AM.
This revision needs review, but there are no reviewers specified.

Details

Reviewers
None
Summary

See the corresponding RFC on llvm-dev for a discussion of this proposal.

On this review we're looking for feedback on the specific details: what do you think should be done differently, and what do you think is exactly right in the draft proposal?

Event Timeline

jfb created this revision. Nov 15 2019, 10:57 AM
jfb edited the summary of this revision. Nov 16 2019, 2:34 PM

We should explicitly state that patches to LLVM sent to the group are subject to the standard LLVM developer policy/license. This is important so members of the security group can use any patches.

We should prominently state that all messages and attachments will be publicly disclosed after any embargo expires. This is important so issue reporters don't send code under NDAs/etc.

mattd added a subscriber: mattd. Nov 22 2019, 8:50 AM
kcc added a subscriber: kcc. Nov 26 2019, 5:48 PM
kcc added inline comments.
llvm/docs/Security.rst, line 181

crbug.org has been working well for us, e.g. for oss-fuzz or for one-off cases like:
https://bugs.chromium.org/p/chromium/issues/detail?id=994957
https://bugs.chromium.org/p/chromium/issues/detail?id=606626

GitHub's security advisories are very recent, and it's unclear whether the workflow is polished.
E.g. I can't seem to add comments to an advisory once it's public.
I didn't check whether these advisories have an API (they should).

Still, I think we should consider GitHub the primary candidate, because that's where LLVM is hosted and where the majority of OSS people are.
We may need to ask GitHub to implement any missing features.

I've not read this in detail or followed the list, but I wanted to add that I believe it's important that we have some form of public acknowledgement for the people who have reported security vulnerabilities as well.

jfb updated this revision to Diff 236761. Jan 7 2020, 9:30 PM
  • Address feedback
jfb added a comment. Jan 7 2020, 9:32 PM

> We should explicitly state that patches to LLVM sent to the group are subject to the standard LLVM developer policy/license. This is important so members of the security group can use any patches.
>
> We should prominently state that all messages and attachments will be publicly disclosed after any embargo expires. This is important so issue reporters don't send code under NDAs/etc.

I'm not aware of projects calling out their contribution policy differently for security patches. Certainly we want the contributor policy to be prominent; for example, if we use GitHub we can add a CONTRIBUTING.md file to do this. I'm just not sure I understand how it should be different for security issues.

jfb added a comment. Jan 7 2020, 9:33 PM

> I've not read this in detail or followed the list, but I wanted to add that I believe it's important that we have some form of public acknowledgement for the people who have reported security vulnerabilities as well.

CVEs have that property. Folks on the list are worried about "security theater", so I don't think I want to maintain a public leaderboard.

We (Microsoft) are interested in participating in this process.

I have one concern, which is that most of the security issues arising from LLVM are not necessarily security issues in LLVM itself. For example, a miscompilation that breaks invariants that a sandboxing technique depends on will appear to most LLVM developers as a simple miscompilation and may be a security issue only for downstream consumers. Presumably this group should be involved if the issue may apply to multiple downstream consumers?

Where do things like the null-check elision that caught Linux fall? That was a Linux dependence on undefined behaviour that caused compilers to emit code that introduced security vulnerabilities into Linux. Is that in scope for this group, or would we regard it as a Linux vulnerability independent of LLVM? And what about, for example, a change to if-conversion that introduces branches into C code believed to be constant time, introducing vulnerabilities in crypto libraries: is that in scope?
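
For readers less familiar with these two cases, here is a rough, hypothetical sketch of both patterns in C (illustrative only; the names and code are made up and are not the actual Linux or crypto-library sources):

  /* Null-check elision: the dereference on the first line lets the compiler
   * assume `dev` is non-null, so it may legally delete the later check. */
  struct device { int status; };

  int poll_status(struct device *dev) {
      int status = dev->status;  /* undefined behaviour if dev == NULL */
      if (!dev)                  /* may be removed as dead code */
          return -1;
      return status;
  }

  /* "Constant-time" selection as often written in crypto code; a transform
   * that recognizes this as a select and later lowers it to a branch would
   * reintroduce a timing side channel. `mask` is all-ones or all-zeros. */
  unsigned ct_select(unsigned mask, unsigned a, unsigned b) {
      return (a & mask) | (b & ~mask);
  }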

I don't think we can characterize security issues based on where in the code they exist, but rather on the kinds of behaviour they trigger, and we need to provide very clear advice on what that should be.

jfb added a comment. Jan 8 2020, 10:23 AM

> We (Microsoft) are interested in participating in this process.

Thanks David.

> I have one concern, which is that most of the security issues arising from LLVM are not necessarily security issues in LLVM itself. For example, a miscompilation that breaks invariants that a sandboxing technique depends on will appear to most LLVM developers as a simple miscompilation and may be a security issue only for downstream consumers. Presumably this group should be involved if the issue may apply to multiple downstream consumers?
>
> Where do things like the null-check elision that caught Linux fall? That was a Linux dependence on undefined behaviour that caused compilers to emit code that introduced security vulnerabilities into Linux. Is that in scope for this group, or would we regard it as a Linux vulnerability independent of LLVM? And what about, for example, a change to if-conversion that introduces branches into C code believed to be constant time, introducing vulnerabilities in crypto libraries: is that in scope?
>
> I don't think we can characterize security issues based on where in the code they exist, but rather on the kinds of behaviour they trigger, and we need to provide very clear advice on what that should be.

I absolutely agree! This complexity is why I'm not trying to decide what's in / out right now, and would rather have that process occur as follow-up RFCs (with updates to this document explaining what's in / out and why). Folks were expressing similar worries on the mailing list.

aadg added a subscriber: aadg. Jan 9 2020, 3:11 PM
aadg added inline comments.
llvm/docs/Security.rst, line 25

I understand we have to solve a chicken-and-egg problem here to get the group started; I think we should instead say that a call for applications to the initial security group will be made, and that the board will pick 10 candidates from among the applicants. The board cannot possibly know everyone in the community, and to be effective this group needs volunteers, not people who have been volunteered.

10 seems like a large number of people for an initial group, given how many people expressed interest in forming it; what should we do if there are fewer than 10 volunteers?

The initial task for this group will probably be to finish fleshing out this proposal.

Shayne added a subscriber: Shayne. Jan 21 2020, 1:25 PM
> In D70326#1810421, @jfb wrote:
>
>> We (Microsoft) are interested in participating in this process.
>
> Thanks David.
>
>> I have one concern, which is that most of the security issues arising from LLVM are not necessarily security issues in LLVM itself. For example, a miscompilation that breaks invariants that a sandboxing technique depends on will appear to most LLVM developers as a simple miscompilation and may be a security issue only for downstream consumers. Presumably this group should be involved if the issue may apply to multiple downstream consumers?
>>
>> Where do things like the null-check elision that caught Linux fall? That was a Linux dependence on undefined behaviour that caused compilers to emit code that introduced security vulnerabilities into Linux. Is that in scope for this group, or would we regard it as a Linux vulnerability independent of LLVM? And what about, for example, a change to if-conversion that introduces branches into C code believed to be constant time, introducing vulnerabilities in crypto libraries: is that in scope?
>>
>> I don't think we can characterize security issues based on where in the code they exist, but rather on the kinds of behaviour they trigger, and we need to provide very clear advice on what that should be.
>
> I absolutely agree! This complexity is why I'm not trying to decide what's in / out right now, and would rather have that process occur as follow-up RFCs (with updates to this document explaining what's in / out and why). Folks were expressing similar worries on the mailing list.

Hello. As David mentioned, we are interested in being involved, and I'll be the contact from Microsoft. We have quite a few teams using LLVM to compile parts of their products, and it's going to be important to figure out how to ensure their security. As David said, this often won't mean changes to LLVM, but rather rebuilding the project source with new security fixes/features. Thinking about a previous security vulnerability, Spectre: if this group had handled it, we could have learned about LLVM features to apply to our projects prior to disclosure.
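
(As one illustration of the kind of LLVM feature meant here: Clang offers a -mspeculative-load-hardening flag for hardening Spectre-v1-style patterns. The snippet below is a hypothetical sketch of such a pattern, not code from any product, and the names are made up.)

  /* A bounds-checked load whose result feeds a second, attacker-observable
   * load. Under speculation the bounds check can be bypassed, leaking
   * table1[idx] via the cache. Building with
   *   clang -O2 -mspeculative-load-hardening victim.c
   * asks LLVM to harden loads reached under misspeculation. */
  unsigned char table1[256];
  unsigned char table2[256 * 64];

  unsigned char victim(unsigned idx, unsigned len) {
      if (idx < len)                        /* may be bypassed speculatively */
          return table2[table1[idx] * 64];
      return 0;
  }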

We (MathWorks) are interested in participating in this process.

I would be the contact from MathWorks. We ship LLVM as part of MATLAB and Simulink; these products are released twice a year.

The proposed document looks great. My only minor suggestion is that the LLVM Security Group have an odd number of members, to limit the chance of a tie vote.
I agree with theraven and jfb that analysis of security issues should be based on the kinds of behavior that they trigger.