- User Since
- Jul 1 2015, 10:19 AM (216 w, 2 d)
Tue, Aug 20
Mon, Aug 19
Removed casts and default constructors; added all operators where it makes sense (unary *, ->, ->*, unary &, comma, and () were not added because a default return type is not obvious); placed the operators in their own namespace to avoid collisions; rewrote the example; updated tests.
Fri, Aug 16
Thu, Aug 15
Wed, Aug 14
Tue, Aug 13
LGTM, though are we sure this is true for all targets? The comments in the referenced patch only consider X86. I'm pretty sure it is true for common architectures like AArch64 but I'm not as sure for more exotic things.
Oops, I missed that this landed already. Perhaps a later commit can improve the debug message.
This LGTM but I think someone else should probably sign off on it as well.
Fri, Aug 2
I wonder if this should have a test that ensures we generate VL-scaled addressing modes for SVE object addressing. If there's not enough codegen yet to emit the asm, then we should probably add such a test when we can. After all, it's the stated goal of this patch. :)
I can't really comment on the correctness of this but other than the one comment I'd like to see added, LGTM.
Thu, Aug 1
This was accepted. Did it land?
I wouldn't really worry about optimizing this; dynamic stack allocation is rare in most C and C++ codebases, and one integer register likely doesn't matter much.
Wed, Jul 31
What's the status of this?
Jul 24 2019
Jul 18 2019
Jul 17 2019
What about downstream users that have added directories in their local forks? Having git suddenly ignore them would be surprising. We are in that situation.
Jul 10 2019
Jul 5 2019
I know this now says "ready to land" but is one review really sufficient?
Updated to latest master and removed comments on implementations
Jul 4 2019
Jun 27 2019
Jun 21 2019
Jun 20 2019
Unfortunately, I've never worked in the IntrinsicEmitter so I can't really comment on the correctness of the patch. I will make some inline comments on non-correctness things.
This is the subset of D58736 covering changes to TTI and related classes. It does not introduce any new functionality, only reorganizes things a bit to move implementations into subtargets in preparation for defining system models for targets.
I just posted D63614, the subset of this patch covering only the changes to TTI and related classes.
Jun 14 2019
Updated to address comments.
May 28 2019
May 26 2019
May 24 2019
Oops, needs a testcase. Will add.
May 13 2019
May 9 2019
What's the status of this? It seems like discussion has died down a bit. I think Graham's idea to change from <scalable 2 x float> to <vscale x 2 x float> will make the IR more readable/understandable but it's not a show-stopper for me.
May 8 2019
May 7 2019
Added a test for catchswitch and fixed a bug with falling off the end of a basic block.
May 1 2019
Updated to account for isEHPad including catchswitch. I'm not very happy with the hacky use of FoundCatchSwitch but could not think of a way to do this that keeps things relatively clear/readable and doesn't put the for loop into a deeper nesting level or completely reformat the function.
Apr 30 2019
Bail out of the loop that found an existing neg if there is a catchswitch, and just create a new neg instead.
Updated to use isEHPad.
Apr 29 2019
Rebased on latest master and fixed test name.
Apr 24 2019
Apr 22 2019
Apr 17 2019
We need to clarify the semantics of insertelement/extractelement. That may already be done in some other patches, but the clarification should be part of this patch.
Under these semantics, is the "length of val" for <scalable n x ElemTy> equal to scalable * n? Or is it still n?
Apr 3 2019
Mar 20 2019
Mar 14 2019
Mar 8 2019
This all LGTM.
Mar 7 2019
I know this isn't ready for merge, but since the mailing list discussion has died down it seems like maybe we should move the discussion here. If so, it would be helpful to have comments on all the routines explaining what they do and how they differ from the existing routines, in order to aid discussion.
Mar 6 2019
Feb 28 2019
Cool. I've wanted this for a while. LGTM.
Feb 27 2019
A larger design question I have about this is the proper place to put the software prefetching configuration. Right now it lives at the memory model level, in that a memory model specifies a cache hierarchy along with a software prefetch configuration. I wonder if we should allow a software prefetching configuration for each cache level, since targets might want different policies depending on which cache level they are prefetching into. I don't think we have any examples of that in the codebase today, but I can imagine cases where targets might want it.
Feb 22 2019
I believe https://reviews.llvm.org/D56266 is working for me.