zturner (Zachary Turner)
User

Projects

User does not belong to any projects.

User Details

User Since
May 26 2014, 12:49 PM (212 w, 2 d)

Recent Activity

Today

zturner updated subscribers of D48393: Make DWARFParsing more thread-safe.

Related question: Is the laziness done to save memory, startup time, or both?

Thu, Jun 21, 7:48 AM

Yesterday

zturner updated the diff for D48240: Try again to implement a FIFO task queue.

Updated with suggestions from Chandler. I'm not sure why all my tests passed despite the bug he noticed; it would be nice if there were a way to write a test for that case, but I'm not sure how. I also made the suggested cleanup to eliminate the extra cost of copying the task. This was a little annoying, because I had to go back to using a callable object instead of a lambda so that I could get move semantics for the callable. But that means I can't easily capture *this*, so I moved the callable up to class scope and made it a friend to give it access to TaskQueue's internal state. Anyway, it all works now.

Wed, Jun 20, 5:48 PM
zturner added a comment to D48240: Try again to implement a FIFO task queue.

Really nice approach for THREADS=0 with the deferred future that checks that "enough" of the queue has been processed. I like it.

If you want to avoid the address stability stuff, you could always keep an integer next to the things in the queue and track the integers...

I want to think a bit harder to understand whether using the address creates an A-B-A problem. I feel like it does, but my brain is exhausted and I'm not able to pin it down yet.

Wed, Jun 20, 5:13 PM
zturner added a comment to D48393: Make DWARFParsing more thread-safe.

Long term, I think one interesting possibility for solving a lot of these threading and lazy-evaluation issues is a task queue that runs all related operations on a single thread and returns a future you can wait on. That way the core data structures themselves need no synchronization, because they are only ever accessed from one thread. I actually have a patch in progress for this kind of infrastructure, which I needed for a different reason, but it would work equally well here.

Wed, Jun 20, 2:42 PM
zturner added a comment to D48349: [ADT] Add llvm::unique_function which is like std::function but supporting move-only closures..

In that case, maybe FunctionExtras.h, in keeping with the LLVM naming spirit.

Wed, Jun 20, 12:03 PM
zturner updated subscribers of D48349: [ADT] Add llvm::unique_function which is like std::function but supporting move-only closures..

Not to suggest the obvious, but UniqueFunction.h?

Wed, Jun 20, 11:29 AM
zturner updated subscribers of D48140: Fix padding with custom character in formatv.

Sorry this slipped by me. Lgtm and thanks for the fix!

Wed, Jun 20, 6:23 AM

Tue, Jun 19

zturner updated the diff for D48240: Try again to implement a FIFO task queue.

This gets the LLVM_ENABLE_THREADS=0 case working. It does so by taking advantage of the pointer-stability guarantees of std::deque (insertion at either end never invalidates pointers to existing elements), so I can save off a pointer to the newly inserted element and just run tasks until the pointers match.

Tue, Jun 19, 9:02 PM
zturner updated the diff for D48240: Try again to implement a FIFO task queue.

Updated based on suggestions from Chandler. I don't have the LLVM_ENABLE_THREADS=0 case working yet, but wanted to make sure this is generally what you had in mind.

Tue, Jun 19, 7:03 PM
zturner added a comment to D48240: Try again to implement a FIFO task queue.

Yeah, I'm not really able to get anything working using the suggested local-class method. The Task class here doesn't have any logic to wait on the parent task to complete, and furthermore we just return a future obtained from a promise, without using std::async with deferred launch, so waiting on the future doesn't run the function. All of this is fixable, but ultimately you just end up pretty close to what I already have, I think.

Tue, Jun 19, 2:11 PM

Mon, Jun 18

zturner added a comment to D48240: Try again to implement a FIFO task queue.

I can't get your suggestion to work; it's very bizarre. It's claiming that I can't even *move*-construct a std::function because Task has a deleted copy constructor.

See for yourself here:

https://godbolt.org/g/x1eCnX

Any ideas?

Mon, Jun 18, 8:42 PM
zturner added a comment to D48240: Try again to implement a FIFO task queue.

I can't get your suggestion to work; it's very bizarre. It's claiming that I can't even *move*-construct a std::function because Task has a deleted copy constructor.

Mon, Jun 18, 8:39 PM
zturner added a comment to D48240: Try again to implement a FIFO task queue.

I also wonder whether you want to decouple this serialized execution from the thread creation and management. Could you accept a thread pool instead, and enqueue work on the thread pool? You can do this by having each task on the thread pool, on completion, check the queue for more work, and if there is some, schedule a new task on the thread pool. The reason why this is interesting is because if you have, say, a system thread pool that maybe gets a lot of other work, you can intersperse these tasks on it without overloading the system. It also allows the client of this class to dictate which threads are allowable threads for these tasks to run by providing a correct thread pool. My hope is that worst case, creating a one-thread thread pool and handing it to this class would be the equivalent of what you have here, but with a more clean separation of roles. I understand it would add some complexity, for example you'd have to handle the case where no work is currently in-flight when asyncImpl finishes adding the task to the queue and start a new batch. But this seems reasonably tolerable.

Isn't using multiple threads strictly incompatible with the idea of it being serialized or synchronized? With multiple threads you will have work overlapping, which defeats the purpose of what this is for. If someone comes up with a use case for it, then maybe, but otherwise, YAGNI?

It isn't about *using* multiple threads; it's about selecting which threads run things.

Whether tasks are serialized in their execution is independent of how many threads they run on. It's just that there doesn't *need* to be more than one if they are serialized.

But a system thread pool might expose the equivalent of <1 thread.

Think of this example. You have three different systems which each want to use this infrastructure to run a series of asynchronous tasks, but for each system the series of tasks must run in-order (FIFO) and without overlap. You can re-use a single threadpool with potentially fewer than three threads for all three of these systems. This allows the client to control the rate of work -- perhaps only allowing at most two operations to happen concurrently.

While with three systems and two threads this seems somewhat odd, if you have *many* systems (maybe 100s) all using this to coordinate a series of tasks, you might not want hundreds of threads, all spinning on their own queues. Instead, it seems nicer to have a single threadpool, sized appropriately for the amount of resources on the system, and have all of the users of this class enqueue their serial sequences of tasks into this shared threadpool.

Mon, Jun 18, 8:10 PM
zturner added a comment to D48240: Try again to implement a FIFO task queue.

I also wonder whether you want to decouple this serialized execution from the thread creation and management. Could you accept a thread pool instead, and enqueue work on the thread pool? You can do this by having each task on the thread pool, on completion, check the queue for more work, and if there is some, schedule a new task on the thread pool. The reason why this is interesting is because if you have, say, a system thread pool that maybe gets a lot of other work, you can intersperse these tasks on it without overloading the system. It also allows the client of this class to dictate which threads are allowable threads for these tasks to run by providing a correct thread pool. My hope is that worst case, creating a one-thread thread pool and handing it to this class would be the equivalent of what you have here, but with a more clean separation of roles. I understand it would add some complexity, for example you'd have to handle the case where no work is currently in-flight when asyncImpl finishes adding the task to the queue and start a new batch. But this seems reasonably tolerable.

Mon, Jun 18, 7:24 PM
zturner added a dependent revision for D48240: Try again to implement a FIFO task queue: D48306: Make TaskQueue support tasks that return values.
Mon, Jun 18, 5:23 PM
zturner created D48306: Make TaskQueue support tasks that return values.
Mon, Jun 18, 5:23 PM
zturner updated the diff for D48240: Try again to implement a FIFO task queue.
Mon, Jun 18, 4:39 PM
zturner added inline comments to D48240: Try again to implement a FIFO task queue.
Mon, Jun 18, 4:39 PM

Sat, Jun 16

zturner added a comment to D48240: Try again to implement a FIFO task queue.

Before I dig into the code too much, I want to better understand the underlying goal...

The core primitive here is a *serialized* task queue from what I understand...

Is it important that it starts a separate thread and eagerly (as opposed to lazily) processes the enqueued tasks?

Sat, Jun 16, 5:12 PM

Fri, Jun 15

zturner updated the diff for D48240: Try again to implement a FIFO task queue.

Made the test cases a little stronger. Before I was simply assigning to the local variables in the test case. Now I increment them. This way the test will fail if a task were to ever get executed more than once, for example, whereas before it would pass since it would just be re-assigning the same value.

Fri, Jun 15, 5:15 PM
zturner updated the diff for D48240: Try again to implement a FIFO task queue.

Added 2 more tests, the first of which exposed a bug in the wait() function. All wait() was doing was joining the work thread, but that is incorrect. What it's supposed to do is simply drain the queue, i.e., keep processing work until the queue is empty; it's not supposed to join the thread. Joining the thread doesn't work anyway, since we weren't setting the EnableFlag to false, so the method was deadlocking.

Fri, Jun 15, 5:09 PM
zturner added a reviewer for D48240: Try again to implement a FIFO task queue: chandlerc.
Fri, Jun 15, 3:03 PM
zturner created D48240: Try again to implement a FIFO task queue.
Fri, Jun 15, 2:52 PM
zturner added a comment to D48215: Remove dependency from Host to python.

It can assert when we pass the python or clang directory enumeration. We could also remove the values from the internal enumeration but keep them in the public enumeration. However, we can’t be forced into providing a stable internal API just because we must provide a stable public API.

Fri, Jun 15, 8:33 AM
zturner updated subscribers of D48215: Remove dependency from Host to python.

The internal API makes no guarantees as to its stability.

Fri, Jun 15, 8:22 AM

Wed, Jun 13

zturner committed rL334658: Revert "Enable ThreadPool to queue tasks that return values.".
Revert "Enable ThreadPool to queue tasks that return values."
Wed, Jun 13, 2:28 PM
zturner committed rL334644: Add missing #include..
Add missing #include.
Wed, Jun 13, 12:42 PM
zturner committed rL334643: Enable ThreadPool to support tasks that return values..
Enable ThreadPool to support tasks that return values.
Wed, Jun 13, 12:33 PM
zturner closed D48115: Make ThreadPool support running tasks that return values.
Wed, Jun 13, 12:33 PM
zturner added a comment to D48115: Make ThreadPool support running tasks that return values.

Looks fine to me. Do you plan to use it in a subsequent patch?

Wed, Jun 13, 12:32 PM

Tue, Jun 12

zturner created D48115: Make ThreadPool support running tasks that return values.
Tue, Jun 12, 9:00 PM
zturner committed rC334518: Refactor ExecuteAndWait to take StringRefs..
Refactor ExecuteAndWait to take StringRefs.
Tue, Jun 12, 10:48 AM
zturner committed rLLD334518: Refactor ExecuteAndWait to take StringRefs..
Refactor ExecuteAndWait to take StringRefs.
Tue, Jun 12, 10:48 AM
zturner committed rL334518: Refactor ExecuteAndWait to take StringRefs..
Refactor ExecuteAndWait to take StringRefs.
Tue, Jun 12, 10:48 AM
zturner accepted D48084: [FileSpec] Delegate common operations to llvm::sys::path.

Looks like good cleanup to me, thanks!

Tue, Jun 12, 10:29 AM

Mon, Jun 11

zturner added a comment to D48051: LTO: Keep file handles open for memory mapped files..

Alright, if there is no way to get rid of the duplicated handles then we can live with them. As you said, the proposed solution is already better than before. I mildly prefer just never setting the delete-on-close flag in the first place, because the case it addresses is so rare that it almost doesn't even matter IMO, and I think just reducing the surface area of the Windows kernel code paths that we touch is a win, given how subtle this bug was and how long it took us to figure out. But if you think we should have it then I won't make a big fuss over it :)

Mon, Jun 11, 8:12 PM
zturner updated subscribers of D48051: LTO: Keep file handles open for memory mapped files..

We can’t transfer ownership of the handle in that case? With the flag cleared, we shouldn’t have to worry about the pending-delete error.

Mon, Jun 11, 5:18 PM
zturner added a comment to D48051: LTO: Keep file handles open for memory mapped files..
In D48051#1129127, @pcc wrote:
In D48051#1129121, @pcc wrote:
In D48051#1129112, @pcc wrote:

Do we have any kind of guarantee that one entity will outlive all other entities that may attempt to share the file?

I don't think so, basically the process that creates a cache entry could exit at any time before or after any processes that might read the cache entry.

What happens if process A creates a cache entry, then exits, then process B tries to re-use the same cache entry? Since it got closed (and deleted) before anyone could try to re-use it, does it just create it again?

It doesn't normally get deleted because we clear the delete-on-close bit. It might get deleted later as a result of cache pruning, but that's a separate mechanism. But if it does get deleted through cache pruning, we do create it again.

It seems like it would be more efficient from a hit/miss ratio perspective if once a cache file was created, it would never be created again.

That's exactly what happens, provided that the cache entry isn't pruned.

So after the entire link operation is finished, there are still cache files lying around on the hard drive (because their delete on close bit was cleared)? If that's the case, can we just not even specify it in the first place? What would be the implications of that?

It would mean that if a link process is terminated forcefully or if there is a power cut, the link process's temporary files would be left on the disk.

That's why I was thinking about using SetFileInformationByHandle/FileDispositionInfo instead of FILE_FLAG_DELETE_ON_CLOSE to manage the delete-on-close bit for temporary files. It would mean that if the process is terminated in the window between CreateFile and SetFileInformationByHandle the file would be left on the disk, but other than that we should basically get the same behaviour. Since getting terminated in that window is pretty unlikely, it seems okay to me.

Mon, Jun 11, 4:58 PM
zturner added a comment to D48051: LTO: Keep file handles open for memory mapped files..
In D48051#1129121, @pcc wrote:
In D48051#1129112, @pcc wrote:

Do we have any kind of guarantee that one entity will outlive all other entities that may attempt to share the file?

I don't think so, basically the process that creates a cache entry could exit at any time before or after any processes that might read the cache entry.

What happens if process A creates a cache entry, then exits, then process B tries to re-use the same cache entry? Since it got closed (and deleted) before anyone could try to re-use it, does it just create it again?

It doesn't normally get deleted because we clear the delete-on-close bit. It might get deleted later as a result of cache pruning, but that's a separate mechanism. But if it does get deleted through cache pruning, we do create it again.

It seems like it would be more efficient from a hit/miss ratio perspective if once a cache file was created, it would never be created again.

That's exactly what happens, provided that the cache entry isn't pruned.

Mon, Jun 11, 4:43 PM
zturner added a comment to D48051: LTO: Keep file handles open for memory mapped files..
In D48051#1129116, @pcc wrote:
In D48051#1129100, @pcc wrote:

I don't think that would prevent other processes from opening the file. Both processes just have to make sure they both specify FILE_SHARE_DELETE.

As I wrote in my message to Bob that doesn't seem to work. According to process monitor the second process fails to open the file with "DELETE PENDING".

That's because a handle has been opened with FILE_FLAG_DELETE_ON_CLOSE and then subsequently closed. As soon as you close a file that has been opened with that flag, no other opens can happen on the file. That's why I suggested transferring ownership of it.

But we need to close the handle to clear FILE_FLAG_DELETE_ON_CLOSE. Or are you suggesting that we do this:

  • open file with FILE_FLAG_DELETE_ON_CLOSE
  • map the file
  • use it
  • unmap it
  • do magic to clear FILE_FLAG_DELETE_ON_CLOSE?

I'm not sure that would be simpler because it would mean transferring ownership back to the TempFile.
Mon, Jun 11, 4:26 PM
zturner added a comment to D48051: LTO: Keep file handles open for memory mapped files..
In D48051#1129112, @pcc wrote:

Do we have any kind of guarantee that one entity will outlive all other entities that may attempt to share the file?

I don't think so, basically the process that creates a cache entry could exit at any time before or after any processes that might read the cache entry.

Mon, Jun 11, 4:26 PM
zturner added a comment to D48051: LTO: Keep file handles open for memory mapped files..
In D48051#1129100, @pcc wrote:
In D48051#1129059, @pcc wrote:
In D48051#1129014, @pcc wrote:

Is there any way we can just transfer ownership of the original handle (with FILE_FLAG_DELETE_ON_CLOSE) from whoever had it before to the TempFile? This way we can just not close that handle until we're sure we're done with it.

I'm not sure what you mean. The FILE_FLAG_DELETE_ON_CLOSE handle is created and owned by TempFile. Do you mean transferring ownership from TempFile to MemoryBuffer when we map the file?

Yeah, sorry. I'm just worried that by copying and duplicating the handle, cancelling the delete-on-close, and all this other stuff, we're just exposing ourselves to even more strange problems. It seems like the "correct" thing to do is to just keep the original handle open until we're absolutely done with that file.

As I wrote in the commit message, that would prevent other processes from opening the file. What that would mean is that if we have two concurrent link processes they would not be able to share results.

I guess what we could do instead is exclusively use SetFileInformationByHandle/FileDispositionInfo for delete-on-close to avoid needing to do weird things to cancel FILE_FLAG_DELETE_ON_CLOSE. That should solve the sharing problem as well.

Also it would mean that we would need to change all other clients of MemoryBuffer::getOpenFile{,Slice}() to pass FD ownership to the MemoryBuffer. While that would seem to work for most in-tree clients there does appear to be at least one case where the FD is owned by external code: http://llvm-cs.pcc.me.uk/tools/gold/gold-plugin.cpp#498

So we'd probably either want the FD to be optionally owned or require the client to duplicate the handle if it doesn't own it. But that would mean that someone would still end up needing to duplicate on Windows in the non-owned case.

I don't think that would prevent other processes from opening the file. Both processes just have to make sure they both specify FILE_SHARE_DELETE.

As I wrote in my message to Bob that doesn't seem to work. According to process monitor the second process fails to open the file with "DELETE PENDING".

Mon, Jun 11, 4:04 PM
zturner added a comment to D48051: LTO: Keep file handles open for memory mapped files..

Do we have any kind of guarantee that one entity will outlive all other entities that may attempt to share the file?

Mon, Jun 11, 3:55 PM
zturner added a comment to D48051: LTO: Keep file handles open for memory mapped files..
In D48051#1129059, @pcc wrote:
In D48051#1129014, @pcc wrote:

Is there any way we can just transfer ownership of the original handle (with FILE_FLAG_DELETE_ON_CLOSE) from whoever had it before to the TempFile? This way we can just not close that handle until we're sure we're done with it.

I'm not sure what you mean. The FILE_FLAG_DELETE_ON_CLOSE handle is created and owned by TempFile. Do you mean transferring ownership from TempFile to MemoryBuffer when we map the file?

Yeah, sorry. I'm just worried that by copying and duplicating the handle, cancelling the delete-on-close, and all this other stuff, we're just exposing ourselves to even more strange problems. It seems like the "correct" thing to do is to just keep the original handle open until we're absolutely done with that file.

As I wrote in the commit message, that would prevent other processes from opening the file. What that would mean is that if we have two concurrent link processes they would not be able to share results.

I guess what we could do instead is exclusively use SetFileInformationByHandle/FileDispositionInfo for delete-on-close to avoid needing to do weird things to cancel FILE_FLAG_DELETE_ON_CLOSE. That should solve the sharing problem as well.

Also it would mean that we would need to change all other clients of MemoryBuffer::getOpenFile{,Slice}() to pass FD ownership to the MemoryBuffer. While that would seem to work for most in-tree clients there does appear to be at least one case where the FD is owned by external code: http://llvm-cs.pcc.me.uk/tools/gold/gold-plugin.cpp#498

So we'd probably either want the FD to be optionally owned or require the client to duplicate the handle if it doesn't own it. But that would mean that someone would still end up needing to duplicate on Windows in the non-owned case.

Mon, Jun 11, 3:50 PM
zturner added a comment to D48051: LTO: Keep file handles open for memory mapped files..
In D48051#1129014, @pcc wrote:

Is there any way we can just transfer ownership of the original handle (with FILE_FLAG_DELETE_ON_CLOSE) from whoever had it before to the TempFile? This way we can just not close that handle until we're sure we're done with it.

I'm not sure what you mean. The FILE_FLAG_DELETE_ON_CLOSE handle is created and owned by TempFile. Do you mean transferring ownership from TempFile to MemoryBuffer when we map the file?

Mon, Jun 11, 3:11 PM
zturner added a comment to D48051: LTO: Keep file handles open for memory mapped files..

FWIW, I'm no longer convinced this is a Windows bug, based on my reading of the blog post linked in comment 52 of the crbug.

Mon, Jun 11, 2:48 PM
zturner added a comment to D48050: [lit] Split test_set_working_dir TestProcessLaunch into two tests and fix it on Windows.

Interestingly, I've been working on LLVM's low-level process-launching APIs so that we can hopefully replace some of this code with them. I don't think they're quite there yet, but hopefully we can converge soon. I'll keep this test case in mind.

Mon, Jun 11, 1:30 PM

Sun, Jun 10

zturner committed rL334375: Attempt 3: Resubmit "[Support] Expose flattenWindowsCommandLine.".
Attempt 3: Resubmit "[Support] Expose flattenWindowsCommandLine."
Sun, Jun 10, 2:01 PM

Sat, Jun 9

zturner committed rL334356: Revert "Resubmit "[Support] Expose flattenWindowsCommandLine."".
Revert "Resubmit "[Support] Expose flattenWindowsCommandLine.""
Sat, Jun 9, 8:20 PM
zturner committed rL334355: Resubmit "[Support] Expose flattenWindowsCommandLine.".
Resubmit "[Support] Expose flattenWindowsCommandLine."
Sat, Jun 9, 7:53 PM
zturner committed rL334354: Revert "[Support] Expose flattenWindowsCommandLine.".
Revert "[Support] Expose flattenWindowsCommandLine."
Sat, Jun 9, 4:12 PM
zturner committed rL334353: [Support] Expose flattenWindowsCommandLine..
[Support] Expose flattenWindowsCommandLine.
Sat, Jun 9, 3:49 PM

Fri, Jun 8

zturner added a comment to D47913: [Support] Introduce a new utility for testing child process execution.

Hey Zach,

Thanks for looking into/improving test coverage!

My knee-jerk/gut reaction to the idea of adding a tool binary for testing a fairly low-level API like this is that it doesn't *feel* quite right. I've looked over the patch a bit, but not in quite enough detail to provide really actionable/specific feedback yet.

My broad/high-level question would be: could the existing unit test approach be made more/sufficiently readable by implementing helper functions (essentially the kind of code that's in the new tool you're introducing, provided as utility functions with parameters rather than as a program with command line parameters)? It seems like that might be a simpler alternative, but I'm certainly not 100% sure of it.

  • Dave
Fri, Jun 8, 3:46 PM
zturner added a comment to D47708: PDB support of function-level linking and splitted functions.

Doesn't the LIT-based test drop the split-function case (originally produced with PGO)?

Sorry for being late to the party, but it seems beneficial to have both LIT *and* checked-in binaries, since in general they are complementary: checking against freshly built binaries only covers a matching set of toolchain components (in particular, it's hard to cover the cross-targeting scenarios).

Other than the inconvenience with Phabricator, is there a reason not to include the original tests as well? The size of the binaries?

Fri, Jun 8, 10:04 AM
zturner committed rL334294: Clean up some code in Program..
Clean up some code in Program.
Fri, Jun 8, 8:21 AM
zturner committed rL334293: Add a file open flag that disables O_CLOEXEC..
Add a file open flag that disables O_CLOEXEC.
Fri, Jun 8, 8:21 AM

Thu, Jun 7

zturner added a comment to D47406: [ADT] Add `StringRef::rsplit(StringRef Separator)`..

lgtm

Thu, Jun 7, 7:48 PM
zturner added a reviewer for D47913: [Support] Introduce a new utility for testing child process execution: davide.
Thu, Jun 7, 7:40 PM
zturner added a reviewer for D47913: [Support] Introduce a new utility for testing child process execution: rnk.

Also, one thing that's a little strange here is that this is now the one and only Support test that is *not* a unit test. Is that weird? I feel like this is a much better pattern for testing process launching, though, so I'd feel pretty bad if I had to re-write all of this as unit tests.

Thu, Jun 7, 5:08 PM
zturner committed rL334246: Expose a single global file open function..
Expose a single global file open function.
Thu, Jun 7, 4:32 PM
zturner created D47913: [Support] Introduce a new utility for testing child process execution.
Thu, Jun 7, 3:31 PM
zturner abandoned D47863: [Support] Add support for the OF_Delete open flag on posix platforms.

FWIW, it turns out this doesn't work as nicely as I thought it did, so I'm going to bench this one for now. createUniqueFile uses this flag, and there are several places in the codebase that assume they can refer to the path afterwards, which breaks here since this effectively creates an open, unlinked file on posix platforms. I'll think about the best way to handle this in the future, but it's not high priority right now.

Thu, Jun 7, 2:01 PM
zturner committed rL334231: Try to fix build..
Try to fix build.
Thu, Jun 7, 1:41 PM
zturner accepted D47406: [ADT] Add `StringRef::rsplit(StringRef Separator)`..
Thu, Jun 7, 1:36 PM
zturner committed rL334223: Fix unused private variable..
Fix unused private variable.
Thu, Jun 7, 1:11 PM
zturner committed rCTE334221: [FileSystem] Split up the OpenFlags enumeration..
[FileSystem] Split up the OpenFlags enumeration.
Thu, Jun 7, 1:05 PM
zturner committed rC334221: [FileSystem] Split up the OpenFlags enumeration..
[FileSystem] Split up the OpenFlags enumeration.
Thu, Jun 7, 1:05 PM
zturner committed rL334221: [FileSystem] Split up the OpenFlags enumeration..
[FileSystem] Split up the OpenFlags enumeration.
Thu, Jun 7, 1:05 PM
zturner accepted D47892: [lit, windows] Disable a number of tests that are failing on Windows.
Thu, Jun 7, 9:38 AM
zturner added a comment to D47887: Move VersionTuple from clang/Basic to llvm/Support, llvm part.

It would be better if you used a monorepo; that way both parts can be a single patch.

Thu, Jun 7, 8:35 AM
zturner added inline comments to D47406: [ADT] Add `StringRef::rsplit(StringRef Separator)`..
Thu, Jun 7, 8:34 AM
zturner added a comment to D47708: PDB support of function-level linking and splitted functions.

As a general rule, lld-link is command line compatible with MSVC and clang-cl is command line compatible with cl. So the /order option should work exactly the same with lld-link as it does with link.
Thu, Jun 7, 7:16 AM

Wed, Jun 6

zturner created D47863: [Support] Add support for the OF_Delete open flag on posix platforms.
Wed, Jun 6, 7:57 PM
zturner updated subscribers of D47708: PDB support of function-level linking and splitted functions.

Do you just need a PDB, or does it really need to be a VS PDB? lld can generate high-quality PDBs now, so it might be possible to use lld to link and produce a PDB when you run the test.
Wed, Jun 6, 7:06 PM
zturner created D47852: [Support] Re-work the Flags parameter of the FileSystem open APIs.
Wed, Jun 6, 4:41 PM
zturner added a comment to D47708: PDB support of function-level linking and splitted functions.

The problem is that Phabricator doesn't preserve binary in diffs

Wed, Jun 6, 7:50 AM

Tue, Jun 5

zturner committed rL334046: [FileSystem] Remove OpenFlags param from several functions..
[FileSystem] Remove OpenFlags param from several functions.
Tue, Jun 5, 1:02 PM
zturner closed D47789: [FileSystem] Delete OpenFlags parameter from several file system functions.
Tue, Jun 5, 1:02 PM
zturner created D47789: [FileSystem] Delete OpenFlags parameter from several file system functions.
Tue, Jun 5, 11:00 AM

Mon, Jun 4

zturner updated subscribers of D47746: [lit, pdb] Fix func-symbols.test (on Windows).

Lgtm

Mon, Jun 4, 5:27 PM
zturner created D47741: Rename openFileForRead/Write to openFileDescriptorForRead/Write.
Mon, Jun 4, 1:05 PM
zturner committed rL333945: [Support] Add functions that operate on native file handles on Windows..
[Support] Add functions that operate on native file handles on Windows.
Mon, Jun 4, 12:42 PM
zturner closed D47688: [Support] Add functions that return native file handles on Windows instead of fds..
Mon, Jun 4, 12:42 PM
zturner updated the diff for D47688: [Support] Add functions that return native file handles on Windows instead of fds..

Changes from first revision:

Mon, Jun 4, 11:37 AM
zturner committed rL333933: Remove dependency from Host to clang..
Remove dependency from Host to clang.
Mon, Jun 4, 10:45 AM
This revision was not accepted when it landed; it landed in state Needs Review.
Mon, Jun 4, 10:45 AM
zturner added a comment to D47653: [lit, pdb] Fix two failing PDB tests on Windows.

Thanks!

In that case, will FileCheck only match the label that was specified, or will it match plain CHECK and CHECK-DAG as well?

Mon, Jun 4, 10:20 AM
zturner added a comment to D47653: [lit, pdb] Fix two failing PDB tests on Windows.

These two sections switch order between builds. It looks like the CHECK command checks in order, so the two CHECKs will fail based on the order in which the symbols are produced in the output. I could make them both CHECK-DAG, but I assume they are CHECK on purpose.

CHECK: {{.*}}:   CompileUnit{{.*}}, language = "c++", file = '{{.*}}\FuncSymbols.cpp'
CHECK-DAG: Function{[[UID30]]}, mangled = ?FunctionCall@@YAXXZ, demangled = {{.*}}, type = [[TY30]]
CHECK-DAG: Function{[[UID31]]}, demangled = {{.*}}`anonymous namespace'::StaticFunction{{.*}}, type = [[TY31]]
CHECK-DAG: Function{[[UID32]]}, demangled = {{.*}}InlinedFunction{{.*}}, type = [[TY32]]

CHECK: {{.*}}:   CompileUnit{{.*}}, language = "c++", file = '{{.*}}\FuncSymbolsTestMain.cpp'
CHECK-DAG: Function{[[UID0]]}, mangled = ?Func_arg_array@@YAHQAH@Z, demangled = {{.*}}, type = [[TY0]]
CHECK-DAG: Function{[[UID1]]}, mangled = ?Func_arg_void@@YAXXZ, demangled = {{.*}}, type = [[TY1]]
CHECK-DAG: Function{[[UID2]]}, mangled = ?Func_arg_none@@YAXXZ, demangled = {{.*}}, type = [[TY2]]
Mon, Jun 4, 10:10 AM
zturner added a comment to D47653: [lit, pdb] Fix two failing PDB tests on Windows.

newer DIA SDKs annotate the function names with their return type and inputs

Does this mean that the tests will now fail for people who happen to have older/different versions of MSVS/DIA SDK installed?

It should not fail for people with newer versions, because I updated the matching pattern to include both.

One thing I did notice, though, is that the order in which some of the symbol sections are generated differs, but FileCheck looks for matches in the order given in the test file. Do you know of a way to make it match either order? Right now the section for FuncSymbols.cpp is sometimes before and sometimes after the section for FuncSymbolsTestMain.cpp.

Mon, Jun 4, 10:02 AM
zturner added inline comments to D47688: [Support] Add functions that return native file handles on Windows instead of fds..
Mon, Jun 4, 9:07 AM

Sun, Jun 3

zturner added a comment to D47406: [ADT] Add `StringRef::rsplit(StringRef Separator)`..

rsplit(char) already exists. That said, another way to reuse code would be to have rsplit(char) call the StringRef version.

Sun, Jun 3, 5:37 PM
zturner created D47688: [Support] Add functions that return native file handles on Windows instead of fds..
Sun, Jun 3, 10:08 AM

Fri, Jun 1

zturner committed rL333798: Move some function declarations out of WindowsSupport.h.
Move some function declarations out of WindowsSupport.h
Fri, Jun 1, 3:28 PM
zturner closed D47662: Move some function declarations out of WindowsSupport.h.
Fri, Jun 1, 3:28 PM
zturner created D47662: Move some function declarations out of WindowsSupport.h.
Fri, Jun 1, 3:15 PM
zturner accepted D47653: [lit, pdb] Fix two failing PDB tests on Windows.
Fri, Jun 1, 2:25 PM
zturner accepted D47609: [lldb, process] Fix occasional hang when launching a process in LLDB.

I trust that this doesn't regress the test suite on Windows. You guys are more active in the Windows stuff for the time being, so if this works for you and doesn't break anything, I'm good with it.

Fri, Jun 1, 10:15 AM

Thu, May 31

zturner added a reviewer for D47609: [lldb, process] Fix occasional hang when launching a process in LLDB: jingham.
Thu, May 31, 3:49 PM

Wed, May 30

zturner updated subscribers of D47535: [FileSpec] Add support for lambdas to EnumerateDirectory. NFC.

+1. I’d like to get rid of all the EnumerateXXX functions with callbacks and replace them with iterator-based equivalents. Given that in this case the iterator version already exists, I definitely think we should try to use it instead.

Wed, May 30, 9:35 AM

Tue, May 29

zturner accepted D47478: [CodeView] Add prefix to CodeView registers..
Tue, May 29, 7:15 AM