This is an archive of the discontinued LLVM Phabricator instance.

[ThreadPool] Do not return shared futures.
ClosedPublic

Authored by fhahn on Nov 22 2021, 5:15 AM.

Details

Summary

The only user of the futures returned by ThreadPool is llvm-reduce, after
D113857.

There should be no cases where multiple threads wait on the same future,
so there should be no need to return std::shared_future<>. Instead return
plain std::future<>.

If users need to share a future between multiple threads, they can share
the future themselves (e.g. by calling std::future::share()).
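
As a minimal plain-C++ sketch of the pattern the summary describes (using std::async rather than the ThreadPool API itself): a caller that genuinely needs multiple waiters can produce a shared future on its own via std::future::share().

```
#include <future>
#include <iostream>

int main() {
  std::future<int> F = std::async(std::launch::async, [] { return 42; });

  // share() hands the shared state over; F becomes invalid afterwards.
  std::shared_future<int> SF = F.share();

  // A shared_future can be copied freely and get() may be called repeatedly.
  std::shared_future<int> Copy = SF;
  std::cout << SF.get() + Copy.get() << "\n"; // prints 84
  return 0;
}
```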

Diff Detail

Event Timeline

fhahn created this revision. Nov 22 2021, 5:15 AM
fhahn requested review of this revision. Nov 22 2021, 5:15 AM
Herald added a project: Restricted Project. Nov 22 2021, 5:15 AM
mehdi_amini accepted this revision. Nov 22 2021, 11:43 AM

I don't remember why this API was written with shared_future in the first place; it was part of the original implementation. The change makes sense, though.

This revision is now accepted and ready to land. Nov 22 2021, 11:43 AM
Meinersbur accepted this revision. Nov 22 2021, 9:24 PM

LGTM as well. Indeed, users that need a shared_future can call .share() themselves. Nothing other than the caller of async can retrieve the future anyway.

fhahn updated this revision to Diff 389102. Nov 23 2021, 12:12 AM

Rebase so this can be landed independently of D113857.

It also updates the only place I could find that was storing futures in a container (in mlir/include/mlir/IR/Threading.h). The futures are only used by the main thread for waiting, so they shouldn't need to be shared.
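
A hedged sketch of that container pattern in plain C++ (not the actual mlir/IR/Threading.h code, and using std::async instead of the ThreadPool): the launching thread holds plain std::future objects and is the only waiter, so nothing needs to be shared.

```
#include <functional>
#include <future>
#include <vector>

void runAll(const std::vector<std::function<void()>> &Work) {
  std::vector<std::future<void>> Futures;
  Futures.reserve(Work.size());

  // Launch each piece of work and keep its (move-only) future.
  for (const auto &Fn : Work)
    Futures.push_back(std::async(std::launch::async, Fn));

  // Only this thread ever waits on the futures.
  for (auto &F : Futures)
    F.wait();
}
```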

Herald added a project: Restricted Project. Nov 23 2021, 12:12 AM
This revision was automatically updated to reflect the committed changes.

Funnily enough this is causing a build failure when threads are disabled, because Tasks.push([Future]() { Future.get(); }); won't work. I reverted the patch for now.

Any ideas on how to best deal with that? :)
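
For context, a hedged illustration of why a by-copy capture stops compiling once the future is no longer shared: std::shared_future is copyable, but std::future is move-only (its copy constructor is deleted).

```
#include <future>

void sketch() {
  // std::shared_future is copyable, so the old-style by-copy capture is fine.
  std::shared_future<void> Shared = std::async(std::launch::async, [] {}).share();
  auto OldStyle = [Shared]() { Shared.get(); };
  OldStyle();

  // A plain std::future is move-only; the equivalent capture does not compile:
  std::future<void> Plain = std::async(std::launch::async, [] {});
  //   auto NewStyle = [Plain]() { Plain.get(); };
  Plain.get();
}
```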

Ah, that may be why I used shared_future in the first place! :)

Yeah, it seems a shame though if we have to use std::shared_future to support the single-threaded use case :(

Sounds like it might be due to lambda capture by value/copy (but if you could copy/paste the error it might help diagnose) - maybe capturing by move is all that's required?

[Future = std::move(Future)]() { Future.get(); }
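
A hedged follow-up sketch of that suggestion: move-capturing the future makes the lambda itself well-formed, but the closure becomes move-only. If the threads-disabled task queue stores std::function (an assumption about the ThreadPool internals, not confirmed here), such a closure still cannot be stored, because std::function requires a copy-constructible callable (std::move_only_function only arrives in C++23).

```
#include <functional>
#include <future>

void sketch() {
  std::future<void> Future = std::async(std::launch::async, [] {});

  // The move capture itself is fine; `mutable` is needed because
  // std::future::get() is non-const.
  auto Task = [F = std::move(Future)]() mutable { F.get(); };

  // Would not compile if uncommented: std::function cannot hold a
  // move-only closure.
  // std::function<void()> Stored = std::move(Task);

  Task();
}
```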

I put the error message in a comment on this review: D114183, since that is the one that introduced the failure.

fhahn added a comment. Nov 25 2021, 9:37 AM

Are you sure? IIUC the problem was the change to not using shared_future. It built fine with LLVM_ENABLE_THREADS=Off with *this* patch.

No, I was mistaken. I assumed that since it was still broken after rGfb46e64a013a2c7457db3312da07db1b25ad5219, this also needed to be reverted. Looks like the problem was actually solved by bc41653a1f289a63a1472b7364437515c9659f77 though.