This is an archive of the discontinued LLVM Phabricator instance.

LTO: Keep file handles open for memory mapped files.
ClosedPublic

Authored by pcc on Jun 11 2018, 1:36 PM.

Details

Summary

On Windows we've observed that if you open a file,
write to it, map it into memory and close the file handle, the
contents of the memory mapping can sometimes be incorrect. That
was what we did when adding an entry to the ThinLTO cache using the
TempFile and MemoryBuffer classes, and it was causing intermittent
build failures on Chromium's ThinLTO bots on Windows. More details
are in the associated Chromium bug (crbug.com/786127).

We can prevent this from happening by keeping a handle to the
file open while the mapping is active. So this patch changes the
mapped_file_region class to duplicate the file handle when mapping
the file and close it upon unmapping it.
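The approach described above can be sketched with the raw Win32 calls involved. This is a hypothetical, Windows-only illustration with simplified error handling; the class name `MappedFile` is invented for the example, and the actual patch modifies `llvm::sys::fs::mapped_file_region` rather than introducing a new class:

```cpp
#include <windows.h>

// Illustrative wrapper: the mapping duplicates the caller's file handle
// and keeps the duplicate open until the view is unmapped.
class MappedFile {
  HANDLE File = INVALID_HANDLE_VALUE; // our duplicate, owned by this object
  HANDLE Mapping = NULL;
  void *Addr = nullptr;

public:
  bool map(HANDLE CallerFile, size_t Size) {
    // Duplicate the handle so the file stays open even after the caller
    // closes its own handle -- the crux of the fix.
    if (!DuplicateHandle(GetCurrentProcess(), CallerFile,
                         GetCurrentProcess(), &File, 0,
                         /*bInheritHandle=*/FALSE, DUPLICATE_SAME_ACCESS))
      return false;
    Mapping = CreateFileMappingW(File, nullptr, PAGE_READONLY, 0, 0, nullptr);
    if (!Mapping)
      return false;
    Addr = MapViewOfFile(Mapping, FILE_MAP_READ, 0, 0, Size);
    return Addr != nullptr;
  }

  ~MappedFile() {
    if (Addr)
      UnmapViewOfFile(Addr);
    if (Mapping)
      CloseHandle(Mapping);
    // Close our duplicated handle only after the view is gone.
    if (File != INVALID_HANDLE_VALUE)
      CloseHandle(File);
  }
};
```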

One gotcha is that the file handle that we keep open must not have
been created with FILE_FLAG_DELETE_ON_CLOSE, as otherwise the operating
system will prevent other processes from opening the file. We can achieve
this by avoiding the use of FILE_FLAG_DELETE_ON_CLOSE altogether.
Instead, we use SetFileInformationByHandle with FileDispositionInfo to manage
the delete-on-close bit. This also lets us remove the hack we previously needed
to clear the delete-on-close bit on a file opened with FILE_FLAG_DELETE_ON_CLOSE.
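Under this scheme, delete-on-close becomes an ordinary property toggled through the handle rather than a creation flag. A hedged, Windows-only sketch with error handling elided (the helper name `setDeleteOnClose` is illustrative, not LLVM's actual API):

```cpp
#include <windows.h>

// Mark or unmark the file behind H for deletion when its last handle
// closes. This replaces passing FILE_FLAG_DELETE_ON_CLOSE to CreateFile.
bool setDeleteOnClose(HANDLE H, bool Delete) {
  FILE_DISPOSITION_INFO Info;
  Info.DeleteFile = Delete ? TRUE : FALSE;
  return SetFileInformationByHandle(H, FileDispositionInfo, &Info,
                                    sizeof(Info)) != 0;
}

// A typical temporary-file lifecycle under this scheme:
//   HANDLE H = CreateFileW(...);   // no FILE_FLAG_DELETE_ON_CLOSE
//   setDeleteOnClose(H, true);     // clean up if the process dies early
//   ... write the file, map it, use it ...
//   setDeleteOnClose(H, false);    // keep(): persist the file
//   CloseHandle(H);
```

Because no handle was ever created with FILE_FLAG_DELETE_ON_CLOSE, other processes can open the file normally once the bit is cleared.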

A downside of using SetFileInformationByHandle/FileDispositionInfo as opposed
to FILE_FLAG_DELETE_ON_CLOSE is that it prevents us from using CreateFile to open
the file while the flag is set, even within the same process. This doesn't seem to matter
for most clients of TempFile; the exception is LockFileManager, which calls sys::fs::create_link
to create a hard link from the lock file and, in the process of doing so, tries to open the
file. To prevent this change from breaking LockFileManager, I changed it to stop using
TempFile, effectively reverting r318550.

Event Timeline

pcc created this revision.Jun 11 2018, 1:36 PM

FWIW, I'm no longer convinced this is a Windows bug, based on my reading of the blog post linked in comment 52 of the crbug.

But this comment worries me: "One gotcha is that the file handle that we keep open must not have been created with FILE_FLAG_DELETE_ON_CLOSE, as otherwise the operating system will prevent other processes from opening the file". The aforementioned blog post hints that if other processes are being prevented from opening the file, then you've already gone astray.

Is there any way we can just transfer ownership of the original handle (with FILE_FLAG_DELETE_ON_CLOSE) from whoever had it before to the TempFile? That way we can just not close that handle until we're sure we're done with it.

Thanks for the fix! A couple of suggestions:

I wouldn't call it a bug unless we are sure that it is. My reading of the documentation of FILE_FLAG_DELETE_ON_CLOSE is that what we're hitting may actually be the intended behavior. Instead of "bug", can we describe it as "observed behavior" or words to that effect? That way, we aren't implying that the behavior is incorrect while still explaining why our code needs to be the way it is.

The documentation for FILE_FLAG_DELETE_ON_CLOSE also mentions "Subsequent open requests for the file fail, unless the FILE_SHARE_DELETE share mode is specified." I wonder if we could use that to avoid having to split keep(). I think that would make the TempFile API a little nicer, although I'm OK splitting keep() if that's what we have to do to make it correct.

pcc added a comment.Jun 11 2018, 3:06 PM

Is there any way we can just transfer ownership of the original handle (with FILE_FLAG_DELETE_ON_CLOSE) from whoever had it before to the TempFile? That way we can just not close that handle until we're sure we're done with it.

I'm not sure what you mean. The FILE_FLAG_DELETE_ON_CLOSE handle is created and owned by TempFile. Do you mean transferring ownership from TempFile to MemoryBuffer when we map the file?

pcc added a comment.Jun 11 2018, 3:08 PM

Thanks for the fix! A couple of suggestions:

I wouldn't call it a bug unless we are sure that it is. My reading of the documentation of FILE_FLAG_DELETE_ON_CLOSE is that what we're hitting may actually be the intended behavior. Instead of "bug", can we describe it as "observed behavior" or words to that effect? That way, we aren't implying that the behavior is incorrect while still explaining why our code needs to be the way it is.

Okay.

The documentation for FILE_FLAG_DELETE_ON_CLOSE also mentions "Subsequent open requests for the file fail, unless the FILE_SHARE_DELETE share mode is specified." I wonder if we could use that to avoid having to split keep(). I think that would make the TempFile API a little nicer, although I'm OK splitting keep() if that's what we have to do to make it correct.

By "that" do you mean the "unless the FILE_SHARE_DELETE share mode is specified" part? Unfortunately that doesn't seem to work; even if both processes pass FILE_SHARE_DELETE the open still fails.

In D48051#1129014, @pcc wrote:

Is there any way we can just transfer ownership of the original handle (with FILE_FLAG_DELETE_ON_CLOSE) from whoever had it before to the TempFile? That way we can just not close that handle until we're sure we're done with it.

I'm not sure what you mean. The FILE_FLAG_DELETE_ON_CLOSE handle is created and owned by TempFile. Do you mean transferring ownership from TempFile to MemoryBuffer when we map the file?

Yea, sorry. I'm just worried that by copying and duplicating the handle, and cancelling the delete on close, and all this other stuff we're just exposing ourselves to even more strange problems. It seems like the "correct" thing to do is to just keep the original handle open until we're absolutely done with that file.

pcc added a comment.Jun 11 2018, 3:45 PM
In D48051#1129014, @pcc wrote:

Is there any way we can just transfer ownership of the original handle (with FILE_FLAG_DELETE_ON_CLOSE) from whoever had it before to the TempFile? That way we can just not close that handle until we're sure we're done with it.

I'm not sure what you mean. The FILE_FLAG_DELETE_ON_CLOSE handle is created and owned by TempFile. Do you mean transferring ownership from TempFile to MemoryBuffer when we map the file?

Yea, sorry. I'm just worried that by copying and duplicating the handle, and cancelling the delete on close, and all this other stuff we're just exposing ourselves to even more strange problems. It seems like the "correct" thing to do is to just keep the original handle open until we're absolutely done with that file.

As I wrote in the commit message, that would prevent other processes from opening the file. What that would mean is that if we have two concurrent link processes they would not be able to share results.

I guess what we could do instead is exclusively use SetFileInformationByHandle/FileDispositionInfo for delete-on-close to avoid needing to do weird things to cancel FILE_FLAG_DELETE_ON_CLOSE. That should solve the sharing problem as well.

Also it would mean that we would need to change all other clients of MemoryBuffer::getOpenFile{,Slice}() to pass FD ownership to the MemoryBuffer. While that would seem to work for most in-tree clients there does appear to be at least one case where the FD is owned by external code: http://llvm-cs.pcc.me.uk/tools/gold/gold-plugin.cpp#498

So we'd probably either want the FD to be optionally owned or require the client to duplicate the handle if it doesn't own it. But that would mean that someone would still end up needing to duplicate on Windows in the non-owned case.

In D48051#1129059, @pcc wrote:
In D48051#1129014, @pcc wrote:

Is there any way we can just transfer ownership of the original handle (with FILE_FLAG_DELETE_ON_CLOSE) from whoever had it before to the TempFile? That way we can just not close that handle until we're sure we're done with it.

I'm not sure what you mean. The FILE_FLAG_DELETE_ON_CLOSE handle is created and owned by TempFile. Do you mean transferring ownership from TempFile to MemoryBuffer when we map the file?

Yea, sorry. I'm just worried that by copying and duplicating the handle, and cancelling the delete on close, and all this other stuff we're just exposing ourselves to even more strange problems. It seems like the "correct" thing to do is to just keep the original handle open until we're absolutely done with that file.

As I wrote in the commit message, that would prevent other processes from opening the file. What that would mean is that if we have two concurrent link processes they would not be able to share results.

I guess what we could do instead is exclusively use SetFileInformationByHandle/FileDispositionInfo for delete-on-close to avoid needing to do weird things to cancel FILE_FLAG_DELETE_ON_CLOSE. That should solve the sharing problem as well.

Also it would mean that we would need to change all other clients of MemoryBuffer::getOpenFile{,Slice}() to pass FD ownership to the MemoryBuffer. While that would seem to work for most in-tree clients there does appear to be at least one case where the FD is owned by external code: http://llvm-cs.pcc.me.uk/tools/gold/gold-plugin.cpp#498

So we'd probably either want the FD to be optionally owned or require the client to duplicate the handle if it doesn't own it. But that would mean that someone would still end up needing to duplicate on Windows in the non-owned case.

I don't think that would prevent other processes from opening the file. Both processes just have to make sure they both specify FILE_SHARE_DELETE.

Do we have any kind of guarantee that one entity will outlive all other entities that may attempt to share the file?

pcc added a comment.Jun 11 2018, 4:01 PM
In D48051#1129059, @pcc wrote:
In D48051#1129014, @pcc wrote:

Is there any way we can just transfer ownership of the original handle (with FILE_FLAG_DELETE_ON_CLOSE) from whoever had it before to the TempFile? That way we can just not close that handle until we're sure we're done with it.

I'm not sure what you mean. The FILE_FLAG_DELETE_ON_CLOSE handle is created and owned by TempFile. Do you mean transferring ownership from TempFile to MemoryBuffer when we map the file?

Yea, sorry. I'm just worried that by copying and duplicating the handle, and cancelling the delete on close, and all this other stuff we're just exposing ourselves to even more strange problems. It seems like the "correct" thing to do is to just keep the original handle open until we're absolutely done with that file.

As I wrote in the commit message, that would prevent other processes from opening the file. What that would mean is that if we have two concurrent link processes they would not be able to share results.

I guess what we could do instead is exclusively use SetFileInformationByHandle/FileDispositionInfo for delete-on-close to avoid needing to do weird things to cancel FILE_FLAG_DELETE_ON_CLOSE. That should solve the sharing problem as well.

Also it would mean that we would need to change all other clients of MemoryBuffer::getOpenFile{,Slice}() to pass FD ownership to the MemoryBuffer. While that would seem to work for most in-tree clients there does appear to be at least one case where the FD is owned by external code: http://llvm-cs.pcc.me.uk/tools/gold/gold-plugin.cpp#498

So we'd probably either want the FD to be optionally owned or require the client to duplicate the handle if it doesn't own it. But that would mean that someone would still end up needing to duplicate on Windows in the non-owned case.

I don't think that would prevent other processes from opening the file. Both processes just have to make sure they both specify FILE_SHARE_DELETE.

As I wrote in my message to Bob that doesn't seem to work. According to process monitor the second process fails to open the file with "DELETE PENDING".

In D48051#1129100, @pcc wrote:
In D48051#1129059, @pcc wrote:
In D48051#1129014, @pcc wrote:

Is there any way we can just transfer ownership of the original handle (with FILE_FLAG_DELETE_ON_CLOSE) from whoever had it before to the TempFile? That way we can just not close that handle until we're sure we're done with it.

I'm not sure what you mean. The FILE_FLAG_DELETE_ON_CLOSE handle is created and owned by TempFile. Do you mean transferring ownership from TempFile to MemoryBuffer when we map the file?

Yea, sorry. I'm just worried that by copying and duplicating the handle, and cancelling the delete on close, and all this other stuff we're just exposing ourselves to even more strange problems. It seems like the "correct" thing to do is to just keep the original handle open until we're absolutely done with that file.

As I wrote in the commit message, that would prevent other processes from opening the file. What that would mean is that if we have two concurrent link processes they would not be able to share results.

I guess what we could do instead is exclusively use SetFileInformationByHandle/FileDispositionInfo for delete-on-close to avoid needing to do weird things to cancel FILE_FLAG_DELETE_ON_CLOSE. That should solve the sharing problem as well.

Also it would mean that we would need to change all other clients of MemoryBuffer::getOpenFile{,Slice}() to pass FD ownership to the MemoryBuffer. While that would seem to work for most in-tree clients there does appear to be at least one case where the FD is owned by external code: http://llvm-cs.pcc.me.uk/tools/gold/gold-plugin.cpp#498

So we'd probably either want the FD to be optionally owned or require the client to duplicate the handle if it doesn't own it. But that would mean that someone would still end up needing to duplicate on Windows in the non-owned case.

I don't think that would prevent other processes from opening the file. Both processes just have to make sure they both specify FILE_SHARE_DELETE.

As I wrote in my message to Bob that doesn't seem to work. According to process monitor the second process fails to open the file with "DELETE PENDING".

That's because a handle has been opened with FILE_FLAG_DELETE_ON_CLOSE and then subsequently closed. As soon as you close a file that has been opened with that flag, no other opens can happen on the file. That's why I suggested transferring ownership of it.

Are we guaranteed that there is one entity that will outlive all other entities who may wish to share the file? As long as that one keeps the file open for its entire life, there shouldn't be any problems.

pcc added a comment.Jun 11 2018, 4:07 PM

Do we have any kind of guarantee that one entity will outlive all other entities that may attempt to share the file?

I don't think so; basically, the process that creates a cache entry could exit at any time before or after any processes that might read the cache entry.

pcc added a comment.Jun 11 2018, 4:22 PM
In D48051#1129100, @pcc wrote:

I don't think that would prevent other processes from opening the file. Both processes just have to make sure they both specify FILE_SHARE_DELETE.

As I wrote in my message to Bob that doesn't seem to work. According to process monitor the second process fails to open the file with "DELETE PENDING".

That's because a handle has been opened with FILE_FLAG_DELETE_ON_CLOSE and then subsequently closed. As soon as you close a file that has been opened with that flag, no other opens can happen on the file. That's why I suggested transferring ownership of it.

But we need to close the handle to clear FILE_FLAG_DELETE_ON_CLOSE. Or are you suggesting that we do this:

  • open file with FILE_FLAG_DELETE_ON_CLOSE
  • map the file
  • use it
  • unmap it
  • do magic to clear FILE_FLAG_DELETE_ON_CLOSE

? I'm not sure that would be simpler because it would mean transferring ownership back to the TempFile.

In D48051#1129112, @pcc wrote:

Do we have any kind of guarantee that one entity will outlive all other entities that may attempt to share the file?

I don't think so; basically, the process that creates a cache entry could exit at any time before or after any processes that might read the cache entry.

What happens if process A creates a cache entry, then exits, then process B tries to re-use the same cache entry? Since it got closed (and deleted) before anyone could try to re-use it, does it just create it again? It seems like it would be more efficient from a hit/miss ratio perspective if once a cache file was created, it would never be created again.

That would also sidestep this problem because we wouldn't even have to use FILE_FLAG_DELETE_ON_CLOSE in the first place, you could wipe the cache at the beginning of a multi-process link operation and let it build up over time as processes spawn and die.

I'm concerned that by tacking on more and more complexity and strange combinations of flags and sequences of operations, we're just inviting even more (and even harder to find) problems.

In D48051#1129116, @pcc wrote:
In D48051#1129100, @pcc wrote:

I don't think that would prevent other processes from opening the file. Both processes just have to make sure they both specify FILE_SHARE_DELETE.

As I wrote in my message to Bob that doesn't seem to work. According to process monitor the second process fails to open the file with "DELETE PENDING".

That's because a handle has been opened with FILE_FLAG_DELETE_ON_CLOSE and then subsequently closed. As soon as you close a file that has been opened with that flag, no other opens can happen on the file. That's why I suggested transferring ownership of it.

But we need to close the handle to clear FILE_FLAG_DELETE_ON_CLOSE. Or are you suggesting that we do this:

  • open file with FILE_FLAG_DELETE_ON_CLOSE
  • map the file
  • use it
  • unmap it
  • do magic to clear FILE_FLAG_DELETE_ON_CLOSE

? I'm not sure that would be simpler because it would mean transferring ownership back to the TempFile.

If you could guarantee that the person who originally opened it with FILE_FLAG_DELETE_ON_CLOSE could outlive any other process that may wish to share it, then you could just do:

  1. Process A - Open it with FILE_FLAG_DELETE_ON_CLOSE.
  2. Processes B1, B2, ..., Bn - Open it with FILE_SHARE_DELETE (this should work)
  3. Wait for all those processes to exit.
  4. Process A - close the file.
pcc added a comment.Jun 11 2018, 4:38 PM
In D48051#1129112, @pcc wrote:

Do we have any kind of guarantee that one entity will outlive all other entities that may attempt to share the file?

I don't think so; basically, the process that creates a cache entry could exit at any time before or after any processes that might read the cache entry.

What happens if process A creates a cache entry, then exits, then process B tries to re-use the same cache entry? Since it got closed (and deleted) before anyone could try to re-use it, does it just create it again?

It doesn't normally get deleted because we clear the delete-on-close bit. It might get deleted later as a result of cache pruning, but that's a separate mechanism. But if it does get deleted through cache pruning, we do create it again.

It seems like it would be more efficient from a hit/miss ratio perspective if once a cache file was created, it would never be created again.

That's exactly what happens, provided that the cache entry isn't pruned.

In D48051#1129121, @pcc wrote:
In D48051#1129112, @pcc wrote:

Do we have any kind of guarantee that one entity will outlive all other entities that may attempt to share the file?

I don't think so; basically, the process that creates a cache entry could exit at any time before or after any processes that might read the cache entry.

What happens if process A creates a cache entry, then exits, then process B tries to re-use the same cache entry? Since it got closed (and deleted) before anyone could try to re-use it, does it just create it again?

It doesn't normally get deleted because we clear the delete-on-close bit. It might get deleted later as a result of cache pruning, but that's a separate mechanism. But if it does get deleted through cache pruning, we do create it again.

It seems like it would be more efficient from a hit/miss ratio perspective if once a cache file was created, it would never be created again.

That's exactly what happens, provided that the cache entry isn't pruned.

So after the entire link operation is finished, there are still cache files lying around on the hard drive (because their delete on close bit was cleared)? If that's the case, can we just not even specify it in the first place? What would be the implications of that?

pcc added a comment.Jun 11 2018, 4:48 PM
In D48051#1129121, @pcc wrote:
In D48051#1129112, @pcc wrote:

Do we have any kind of guarantee that one entity will outlive all other entities that may attempt to share the file?

I don't think so; basically, the process that creates a cache entry could exit at any time before or after any processes that might read the cache entry.

What happens if process A creates a cache entry, then exits, then process B tries to re-use the same cache entry? Since it got closed (and deleted) before anyone could try to re-use it, does it just create it again?

It doesn't normally get deleted because we clear the delete-on-close bit. It might get deleted later as a result of cache pruning, but that's a separate mechanism. But if it does get deleted through cache pruning, we do create it again.

It seems like it would be more efficient from a hit/miss ratio perspective if once a cache file was created, it would never be created again.

That's exactly what happens, provided that the cache entry isn't pruned.

So after the entire link operation is finished, there are still cache files lying around on the hard drive (because their delete on close bit was cleared)? If that's the case, can we just not even specify it in the first place? What would be the implications of that?

It would mean that if a link process is terminated forcefully or if there is a power cut, the link process's temporary files would be left on the disk.

That's why I was thinking about using SetFileInformationByHandle/FileDispositionInfo instead of FILE_FLAG_DELETE_ON_CLOSE to manage the delete-on-close bit for temporary files. It would mean that if the process is terminated in the window between CreateFile and SetFileInformationByHandle the file would be left on the disk, but other than that we should basically get the same behaviour. Since getting terminated in that window is pretty unlikely, it seems okay to me.

In D48051#1129127, @pcc wrote:
In D48051#1129121, @pcc wrote:
In D48051#1129112, @pcc wrote:

Do we have any kind of guarantee that one entity will outlive all other entities that may attempt to share the file?

I don't think so; basically, the process that creates a cache entry could exit at any time before or after any processes that might read the cache entry.

What happens if process A creates a cache entry, then exits, then process B tries to re-use the same cache entry? Since it got closed (and deleted) before anyone could try to re-use it, does it just create it again?

It doesn't normally get deleted because we clear the delete-on-close bit. It might get deleted later as a result of cache pruning, but that's a separate mechanism. But if it does get deleted through cache pruning, we do create it again.

It seems like it would be more efficient from a hit/miss ratio perspective if once a cache file was created, it would never be created again.

That's exactly what happens, provided that the cache entry isn't pruned.

So after the entire link operation is finished, there are still cache files lying around on the hard drive (because their delete on close bit was cleared)? If that's the case, can we just not even specify it in the first place? What would be the implications of that?

It would mean that if a link process is terminated forcefully or if there is a power cut, the link process's temporary files would be left on the disk.

That's why I was thinking about using SetFileInformationByHandle/FileDispositionInfo instead of FILE_FLAG_DELETE_ON_CLOSE to manage the delete-on-close bit for temporary files. It would mean that if the process is terminated in the window between CreateFile and SetFileInformationByHandle the file would be left on the disk, but other than that we should basically get the same behaviour. Since getting terminated in that window is pretty unlikely, it seems okay to me.

Would the temporary files potentially be re-used on a subsequent link (if it were restarted by the user) and then cause mysterious failures? If not, I'm honestly fine to just never set the flag at all. We've been burned by it hard enough that who knows if we're triggering some additional strange / subtle codepath. That said, your idea is probably fine. At least it solves all the problems and eliminates all the duplicate handles and complexity.

pcc added a comment.Jun 11 2018, 5:07 PM
In D48051#1129127, @pcc wrote:
In D48051#1129121, @pcc wrote:
In D48051#1129112, @pcc wrote:

Do we have any kind of guarantee that one entity will outlive all other entities that may attempt to share the file?

I don't think so; basically, the process that creates a cache entry could exit at any time before or after any processes that might read the cache entry.

What happens if process A creates a cache entry, then exits, then process B tries to re-use the same cache entry? Since it got closed (and deleted) before anyone could try to re-use it, does it just create it again?

It doesn't normally get deleted because we clear the delete-on-close bit. It might get deleted later as a result of cache pruning, but that's a separate mechanism. But if it does get deleted through cache pruning, we do create it again.

It seems like it would be more efficient from a hit/miss ratio perspective if once a cache file was created, it would never be created again.

That's exactly what happens, provided that the cache entry isn't pruned.

So after the entire link operation is finished, there are still cache files lying around on the hard drive (because their delete on close bit was cleared)? If that's the case, can we just not even specify it in the first place? What would be the implications of that?

It would mean that if a link process is terminated forcefully or if there is a power cut, the link process's temporary files would be left on the disk.

That's why I was thinking about using SetFileInformationByHandle/FileDispositionInfo instead of FILE_FLAG_DELETE_ON_CLOSE to manage the delete-on-close bit for temporary files. It would mean that if the process is terminated in the window between CreateFile and SetFileInformationByHandle the file would be left on the disk, but other than that we should basically get the same behaviour. Since getting terminated in that window is pretty unlikely, it seems okay to me.

Would the temporary files potentially be re-used on a subsequent link (if it were restarted by the user) and then cause mysterious failures? If not, I'm honestly fine to just never set the flag at all. We've been burned by it hard enough that who knows if we're triggering some additional strange / subtle codepath. That said, your idea is probably fine. At least it solves all the problems and eliminates all the duplicate handles and complexity.

As I mentioned I think we'd still need to duplicate in cases where we didn't create the file handle (and there do seem to be at least a couple), so from an API simplicity perspective we might as well do it unconditionally. But I think we should at least be able to eliminate the magic for clearing the delete-on-close bit and the two-stage keep/close for temporary files.

zturner added a subscriber: pcc.Jun 11 2018, 5:17 PM

We can’t transfer ownership of the handle in that case? With the flag
cleared, we shouldn’t have to worry about the pending delete error.

pcc added a comment.Jun 11 2018, 5:26 PM

We can’t transfer ownership of the handle in that case? With the flag
cleared, we shouldn’t have to worry about the pending delete error.

No, because in those cases another program owns the file handle.

In this case gold owns the file handle: http://llvm-cs.pcc.me.uk/tools/gold/gold-plugin.cpp#498
and in this case the user of the C LTO API owns it: http://llvm-cs.pcc.me.uk/lib/LTO/LTOModule.cpp#134

In both cases the owner of the file handle is on the other end of a stable API.

Alright, if there is no way to get rid of the duplicated handles then we can live with them. As you said, the proposed solution is already better than before. I mildly prefer just never setting the delete-on-close flag in the first place, because the case that it addresses is so rare that it almost doesn't even matter IMO, and I think just reducing the surface area of the Windows kernel code paths that we touch is a win given how subtle this bug was and how long it took us to figure out. But if you think we should have it then I won't make a big fuss over it :)

Alright, sounds like we are in agreement about the code. @pcc, can you still change the wording from "bug" to something else (like "observed behavior")?

pcc updated this revision to Diff 151092.Jun 12 2018, 7:52 PM
  • Replace FILE_FLAG_DELETE_ON_CLOSE with SetFileInformationByHandle/FileDispositionInfo
pcc retitled this revision from LTO: Work around a Windows kernel bug by keeping file handles open for memory mapped files. to LTO: Keep file handles open for memory mapped files..Jun 12 2018, 8:06 PM
pcc edited the summary of this revision. (Show Details)
inglorion accepted this revision.Jun 13 2018, 9:54 AM

LGTM, thanks!

This revision is now accepted and ready to land.Jun 13 2018, 9:54 AM
This revision was automatically updated to reflect the committed changes.