- User Since: Jul 21 2014, 12:07 PM (195 w, 5 d)
Tue, Apr 17
Looks pretty straightforward.
Mar 20 2018
While it's preferred to use ORE as an analysis pass, sometimes that's hard (e.g. because it's a function pass, or simply because it's hard to thread the ORE instance through the many layers). In these cases it's fine to construct one inline. When remarks are requested, this will amount to recomputing BFI for the function as the ORE instance is created.
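A minimal, self-contained sketch of this trade-off, using hypothetical stand-in types (`MockRemarkEmitter`, `MockRemark`) rather than LLVM's actual `OptimizationRemarkEmitter` API: constructing the emitter inline pays the BFI recomputation cost only when remarks were requested, and a closure-style `emit` avoids even building the remark object otherwise.

```cpp
#include <functional>
#include <iostream>
#include <string>

// Hypothetical stand-in for a remark; the real thing carries a debug
// location, pass name, and streamed arguments.
struct MockRemark { std::string Msg; };

// Hypothetical stand-in for an inline-constructed remark emitter.
class MockRemarkEmitter {
public:
  explicit MockRemarkEmitter(bool RemarksRequested)
      : Enabled(RemarksRequested) {
    if (Enabled)
      recomputeBFI(); // the cost paid when constructing the emitter inline
  }

  // Closure API: the remark is only built (the closure only runs)
  // when remarks are enabled, so the disabled path stays cheap.
  void emit(const std::function<MockRemark()> &MakeRemark) {
    if (!Enabled)
      return;
    std::cout << MakeRemark().Msg << "\n";
  }

  int BFIRecomputations = 0; // exposed for illustration only

private:
  void recomputeBFI() { ++BFIRecomputations; }
  bool Enabled;
};
```

The point of the closure shape is that callers pay nothing beyond a branch when remarks are off; neither the profile data nor the remark object is materialized.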
Mar 13 2018
@inglorion, I am inclined to recommit this unless I hear from you in a few days:
Mar 12 2018
Mar 7 2018
@inglorion Is this from a bot? I didn't see any failures. A bit more info would be helpful.
Mar 6 2018
Feb 26 2018
Feb 24 2018
Feb 21 2018
Feb 20 2018
Feb 12 2018
Jan 20 2018
This is used by Swift. Providing a macro instead of a static value may be a better solution; if you are willing to do that, I am OK with it, but I have a strong objection to simply removing this.
Jan 9 2018
Jan 5 2018
LGTM too. Thanks for getting back to this!
Jan 2 2018
Dec 20 2017
Dec 14 2017
I had to further tweak this in rL320725. Let me know if you see any issues.
Dec 6 2017
Dec 1 2017
Looks like it's a test problem. When I tweak the sample profile file according to https://clang.llvm.org/docs/UsersManual.html#sample-profile-text-format, I do get hotness in the remarks.
@modocache, @davide, are you guys sure this feature is working? The test does not actually check whether hotness is included in the remarks, and when I run it manually the hotness values are missing. In D40678, I am filtering out remarks with no hotness, so when any threshold is set, all the remarks in this new test are filtered out.
Nov 30 2017
Nov 29 2017
Nov 28 2017
Nov 27 2017
Thanks, Chris! This moves the cmake bits to config-ix.cmake.
Nov 17 2017
Nov 15 2017
Nov 14 2017
I get two failures; can you please take a look?
This looks great, with some minor nits (go ahead and commit after fixing them). Thanks for your work! And sorry about the delay.
Nov 13 2017
Nov 6 2017
This was committed a while ago.
Nov 3 2017
Also, by any chance, did you run this on some real code base? Some of these may trigger quite a bit, and I want to make sure they are not at the top of the list. You can use opt-viewer/opt-stats.py to get a sense of how frequently your remark is generated.
I will look at the rest of the patch in more detail later unless Florian beats me to it. Thanks for tackling this!
Please use the new closure API to emit remarks.
Oct 13 2017
It's unintuitive why you need to fix this at the IR level. Both the load and the prefetch should be uses of the address, and there should be no dependence between them.
Oct 12 2017
Seems reasonable to me. I don't know anything about the MIR parser's use of diagnostics, though.
Oct 11 2017
Oct 10 2017
Thanks for working on this!
Sorry about the delay! I remembered something similar for Python 2 as well, so I wanted to double-check. Turns out that was https://reviews.llvm.org/D29802, which is unrelated.