Diff Detail
Repository: rL LLVM

Event Timeline
The patch itself is also reasonable, but it would probably be good to
give some motivation in the commit message why we need this cache.
Does this already improve compile time for cases we see today? Or
is this only an improvement if/when we add more complexity to the
SCEVAffinator e.g. to model modulo wrapping?
+ /// @brief Map to rememember cached expressions.
remembered
Tobias
There might also be some saving in memory, since the isl_pw_aff can be reused.
It might be interesting to see how many instances are reused. I think the ratio of cache hits to lookups might be more meaningful than a speed difference, which may disappear in noise.
lib/Support/SCEVAffinator.cpp:50 ↗ (On Diff #31912)

Taking a reference s.t. there is no second lookup required when the value is set afterwards?

    isl_pw_aff *&PWA = CachedExpressions[Key];
    PWA = ...
I haven't profiled reuse. Speed difference looked like noise to me (no meaningful difference in either direction).
I will add the same as given below to the commit message. Is that enough?
Does this already improve compile time for cases we see today? Or
is this only an improvement if/when we add more complexity to the
SCEVAffinator e.g. to model modulo wrapping?
There is no difference in compile time today; however, with bigger SCoPs there might be.
Additionally, the cache will allow us to build all modulo wrapping constraints after we created the SCoP (before or during is not possible as the domains might not be ready).
+ /// @brief Map to rememember cached expressions.
remembered
Thx
lib/Support/SCEVAffinator.cpp:50 ↗ (On Diff #31912)

That doesn't work (it was my initial approach) as the visit function is recursive. Hence, we modify the map and invalidate old iterators (or references to members).
That's why I suggested a hit ratio.
lib/Support/SCEVAffinator.cpp:50 ↗ (On Diff #31912)

Uhhh, thanks. Then maybe use DenseMap::lookup() to avoid inserting an empty value? Yes, it will be overwritten again eventually, but until then there are no empty values in the map.
jdoerfert marked an inline comment as done.
jdoerfert added a comment.

In http://reviews.llvm.org/D11975#222559, @grosser wrote:
The patch itself is also reasonable, but it would probably be good to
give some motivation in the commit message why we need this cache.

I will add the same as given below to the commit message. Is that enough?
Adding a cache that does not give a speedup seems like a bad idea, but as
the code is rather simple we can probably justify it if it simplifies
the development of the modulo wrapping.
However, I did not yet understand how you will use this cache precisely.
The relevant patches (and some speedup data) would be interesting to see
(but not required for this patch to go in).
I personally would probably just discuss the modulo wrapping all together;
that would make it easier to understand how you use this cache.
Does this already improve compile time for cases we see today? Or
is this only an improvement if/when we add more complexity to the
SCEVAffinator e.g. to model modulo wrapping?

There is no difference in compile time today; however, with bigger SCoPs there might be.
Additionally, the cache will allow us to build all modulo wrapping constraints after we created the SCoP (before or during is not possible as the domains might not be ready).
I am not sure how this will work in the end. If we build the wrappings later,
I see two issues:
- Besides the SCEVs, we probably also want the context in which they were built/used
to simplify them, right?
- Do we simplify with context constraints that are only valid after the simplification (and get a causality loop)?
I mean, if we construct the context with the non-wrapping domains and then use it
to simplify the non-wrapping constraints we add to the assumed context, does this
not mean that we used information in this simplification that we only had because
we assumed the simplification to be valid in the first place? This could possibly
eliminate important constraints/restrictions, no?
Tobias
final comment.
lib/Support/SCEVAffinator.cpp:50 ↗ (On Diff #31912)

I was hoping to get better cache behaviour this way but I don't know. I'll commit it now but feel free to change it (should not make a difference anyway).
I also just realized this patch did not have llvm-commits or polly-dev copied.
It would be good to add them for the next patches.
Best,
Tobias
jdoerfert marked an inline comment as done.
jdoerfert added a comment.

final comment.
Comment at: lib/Support/SCEVAffinator.cpp:50
@@ +49,3 @@
+  auto Key = std::make_pair(Expr, Stmt);
+  isl_pw_aff *PWA = CachedExpressions[Key];
+
Meinersbur wrote:
jdoerfert wrote:
Meinersbur wrote:
Taking a reference s.t. there is no second lookup required when the value is set afterwards?

    isl_pw_aff *&PWA = CachedExpressions[Key];
    PWA = ...

That doesn't work (it was my initial approach) as the visit function is recursive. Hence, we modify the map and invalidate old iterators (or references to members).
Uhhh, thanks.
Then maybe using DenseMap::lookup() to avoid creating an empty key? Yes, it will be overwritten again eventually, but until then there are no empty values in the map.
I was hoping to get better cache behaviour this way but I don't know. I'll commit it now but feel free to change it (should not make a difference anyway).
Could you add the pollydev Phabricator user to the polly group? That way
adding the polly group will suffice :)
jdoerfert added a subscriber: jdoerfert.
jdoerfert added a comment.

Could you add the pollydev Phabricator user to the polly group? That way
adding the polly group will suffice :)
Done. Thanks for the idea.
Tobias