- User Since: Feb 4 2016, 3:03 PM (97 w, 2 d)
Wed, Dec 6
Tue, Dec 5
@jkorous-apple Got it. I agree that it would be better to move the comments to the header. Will land it soon. Thanks!
Mon, Dec 4
Thanks @jkorous-apple for the comment. I think your suggestion is a more
precise description of the implementation, so I adjusted the comments accordingly.
Wed, Nov 29
@jkorous-apple Thanks for the comments! Yeah, I was thinking of the O(length_of_string) approach, but considering the complexity of the implementation (I guess the real implementation would be a bit more complex than your pseudo implementation, to handle quotes and the '\n\r' and '\r\n' cases), I decided to stay with O(length_of_string * number_of_endlines_in_string) while optimizing the number of move operations.
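For illustration, the O(length_of_string) alternative discussed above could look roughly like the sketch below: a single pass that copies each character at most once and collapses any of "\n", "\r", "\r\n", or "\n\r" into one target ending. This is a hypothetical sketch, not the code from the actual patch; in particular it omits the quote handling mentioned above, and the function name is invented for this example.

```cpp
#include <string>

// Hypothetical single-pass (O(length_of_string)) line-ending rewrite.
// Each input character is examined once; a two-character ending
// ("\r\n" or "\n\r") is consumed as a single line break.
std::string normalizeLineEndings(const std::string &Input,
                                 const std::string &Ending) {
  std::string Out;
  Out.reserve(Input.size());
  for (size_t I = 0, E = Input.size(); I != E; ++I) {
    char C = Input[I];
    if (C == '\n' || C == '\r') {
      // Skip the second half of a "\r\n" or "\n\r" pair.
      if (I + 1 != E && (Input[I + 1] == '\n' || Input[I + 1] == '\r') &&
          Input[I + 1] != C)
        ++I;
      Out += Ending;
    } else {
      Out += C;
    }
  }
  return Out;
}
```

A real implementation inside a formatting tool would additionally have to leave line breaks inside string literals untouched, which is where most of the extra complexity comes from.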
Mon, Nov 27
Thanks @jkorous-apple for your comments. I modified the type for the variables and replaced unnecessary inserts and erases with updates.
Mon, Nov 20
Addressing @vsapsai's comments. Thank you for the suggestion! The added test case actually found an off-by-one error in the original patch. I improved the comments as well.
Nov 1 2017
Oct 25 2017
Oct 24 2017
Sep 29 2017
Sep 27 2017
Sep 25 2017
Sep 19 2017
Aug 28 2017
Fix a test.
Aug 25 2017
Aug 16 2017
Friendly ping. @davidxl, I think there's no harm in making clang consistent with gcc for compiler options, and I wonder if you have any concerns that I may have missed. Thanks!
Aug 10 2017
Aug 8 2017
Addressing dblaikie's comments. Thanks!
Aug 7 2017
Aug 5 2017
I think it is generally good to match what GCC does, so as not to confuse people.
Aug 3 2017
Jul 31 2017
Update documentation. Please let me know if I need to update other documents as well. Thanks!
@davidxl I think it is theoretically possible, if the if branch is not taken on line 294. Did I miss something? Thanks!
Jul 28 2017
Delete unnecessary line from the test.
Jun 29 2017
@wmi Good call! I fixed the code per your suggestion. Thanks!
Addressing comments from @wmi. Thank you for the suggestion!
Jun 28 2017
https://reviews.llvm.org/D34796 is clang side change.
Jun 25 2017
Jun 20 2017
Addressing @wmi's concern by limiting the targets to recurrence cycles in which only the last instruction of the recurrence (the one that feeds the PHI instruction) can have uses outside of the recurrence. This is not an ideal solution yet, and a more fundamental solution (such as having recurrence optimization as a separate pass and/or using live range analysis for it) should follow. But I still think it is worth having here.
Jun 19 2017
I think this is the right approach, but I am concerned that the experimental results I shared on D32451 show that it is generally better not to vectorize low trip count loops. @Ayal, I wonder if you have any results showing that this patch actually improves performance. Thanks!
Addressing comments from @tejohnson. Thanks!
Jun 18 2017
@tejohnson Sorry for the late reply. I was without internet access for days. I have no objection to setting the default value to true, but I'd prefer to have that as a separate patch so that we can track the performance impact of each change more easily.
Jun 12 2017
Jun 10 2017
@wmi Not at all! Thanks for your comments.
Jun 7 2017
Jun 4 2017
Jun 2 2017
May 31 2017
FYI, below is the compile difference in percentage for spec2006:
Add missing reference.
Reimplement the feature in the peephole optimizer. This patch ties the values in the recurrence cycle to each other where possible.
May 22 2017
Comparing the existing implementation and this patch, I don't observe a noticeable compile time difference with Video-SIMDBench. I compiled the benchmark suite 3 times each; the median was 27.12 sec vs 27.55 sec, while the average was 27.96 sec vs 27.89 sec. There were some code size differences, but more vectorization simply results in bigger code. There's no difference between OptForSize and non-vectorized.
May 19 2017
Addressing Ayal's comments.
@wmi Thank you for your reply. I agree with you that we should consider the tied operand group, and as @qcolombet mentioned in the previous comment, if tied operand information is already available before this pass, I'd like to discuss where would be the best place to implement this.
May 17 2017
@qcolombet Actually you're right. With tied operands, this can be done within SSA. Do you have a suggestion for a better place to implement this? I thought what isRevCopyChain does is closest to what this patch does.
May 16 2017
@MatzeB Thank you for your comments. I'm collecting compile time numbers now with spec2006. For our internal benchmark, which takes about ~40 min to compile, the compile time difference was negligible.
Addressing comments from @MatzeB.
May 13 2017
@Ayal Thanks for the pointer.
May 12 2017
Ping. Can someone please let me know what would be the best way to have this patch reviewed? Thanks!
May 11 2017
I don't have a strong opinion about whether to treat static trip counts and profile-based trip counts differently. I treated them separately because the existing implementation sets OptForSize to true instead of returning false when it makes a profile-based decision. It would be great if someone could explain the reason behind that.
May 7 2017
Thanks @Ayal for your comments! If the profile-based trip count checking is moved above the line
Addressing comments from Ayal.
May 5 2017
Apr 28 2017
Ping. If you're too busy to review this patch, could you please recommend someone else? Thanks!
Apr 27 2017
So I evaluated loop vectorizer with https://github.com/malvanos/Video-SIMDBench, which is introduced in this paper: http://ieeexplore.ieee.org/document/7723550/. I first built it without profile data and ran linux perf with a command of
Use profile-based trip count estimation only when trip count cannot be computed.
Apr 24 2017
@mkuper I can do that, but the issue is that the vectorized part doesn't dominate the execution time of the benchmark (the profile is pretty flat across multiple functions), so it may not be suitable for showing the impact of the vectorization change. Let me think about a good way to measure it. Thanks!
@mkuper This helps with our internal benchmarks, but I don't have numbers with public benchmarks. What would be good benchmarks to evaluate the vectorization? Thanks!
Apr 21 2017
@MatzeB I was considering other passes as well for the sake of simplicity, but I concluded that the two-address instruction pass is the right place to do this. The problem doesn't appear while we're in SSA, but becomes an issue when we make a decision about which operand register should be the destination register. I think this is why the isProfitableToCommute function is in the two-address instruction pass.
Apr 14 2017
Friendly ping. Thanks!
Apr 12 2017
With these numbers, it doesn't seem worth turning it on by default. @dberlin, just curious, do you have any rough estimate of how much compilation time could be saved by implementing rank-order based iteration optimization?
Below are the numbers with spec2006 (The numbers in the last row are overall for SPEC score and total for compile time and binary size):
Apr 11 2017
@dberlin Thank you for your comments. I agree with you that it is not easy to invent a good heuristic around this knob, but I still find this useful for some cases that actually matter. Assuming that this doesn't make the implementation much more complex, wouldn't it be better for compiler users to have more options?
No perf numbers with public benchmarks yet, but with our internal benchmark I confirmed that aggressive load PRE removes redundant instructions in a hot loop. I'll report spec2006 numbers later. This patch is meant to quickly unlock the feature before making a more fundamental improvement.