Tue, Feb 20
Feb 18 2018
Given that the original motivation for these lifetime intrinsics was inlining, I don't see a reason to support multiple start/ends on the same pointer.
Or is there another new use case that I'm unaware of? Also, we don't have any transformation that splits (or even shrinks, I think) these lifetimes.
Feb 11 2018
Oh wow, so many bugs in the DAG combiner's handling of undef… Nice table, btw!
Jan 17 2018
Before accepting this patch, we really need to see benchmark results. I'm not going to change clang to start emitting non-UB divs if the perf is going to be horrible. We need data.
Otherwise I don't see the need for this poison version of division. Could you elaborate if your plan is to expose this somehow to the application developer?
Dec 13 2017
Dec 12 2017
Nov 30 2017
This change is incorrect. null can be a valid pointer in a non-0 address space, and alloca may return it.
If your target's address space guarantees that alloca doesn't return null, then we can probably make this target-dependent. But we cannot simply make it unconditional; that's not correct.
Nov 28 2017
Nov 23 2017
Nov 22 2017
This patch LGTM, except for the changes in tryFactorization(). It seems there's some code missing.
The concept looks interesting to me. I'll let the experts review the patch, though (I just skimmed over it).
Out of curiosity, what are the clients you envision could use this functionality?
I can imagine that we can do store forwarding with a mustalias MemUse if stored size >= load size. Also, a mustalias MemDef can kill the previous MemDef if it has no other uses (2 stores to same location).
Are there any other use cases? Can it be used to simplify more complex optimizations? Even better, are there missing building blocks that would be needed to simplify these optimizations? (Tricky question; just wondering if you have some thoughts on that.)
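The two rewrites mentioned above can be sketched as predicates. This is a toy model only; the function names and the "must"/"may" strings are illustrative, not LLVM's actual MemorySSA API:

```python
# Toy model of the two MemorySSA-based rewrites sketched above.
# Names are illustrative only, not LLVM's API.

def can_forward_store(alias, store_size, load_size):
    """Store-to-load forwarding: legal when the store must-aliases
    the load and writes at least as many bytes as the load reads."""
    return alias == "must" and store_size >= load_size

def can_kill_prev_def(alias, prev_def_has_other_uses):
    """Dead-store elimination: a later must-alias store kills the
    earlier MemDef if nothing else uses it in between."""
    return alias == "must" and not prev_def_has_other_uses

print(can_forward_store("must", 8, 4))   # True: 8-byte store covers 4-byte load
print(can_forward_store("may", 8, 4))    # False: may-alias is not enough
print(can_kill_prev_def("must", False))  # True: the earlier store is dead
```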
Nov 21 2017
Nov 9 2017
Nov 8 2017
Oct 28 2017
Sep 21 2017
Ok, mea culpa, I thought CreateBitOrPointerCast() would simply create a bitcast.
Then, what we have here is:
v = phi(ptr2int(p), ptr2int(q)) => v = ptr2int(phi(p, q))
Sep 20 2017
Sep 19 2017
Ok, so after another scan through the patch and a discussion with Gil, I must say the transformation is not fully correct.
Sep 18 2017
This transformation looks correct to me and goes in the right direction (remove unneeded int2ptr/ptr2int, since they block many optimizations). The patch can be made a bit more general later.
Before the patch goes in, please add a few negative tests; you only have positive ones. Thanks!
Sep 13 2017
You're right: the Boolean connector for the second transformation was flipped. All good now :)
Sep 12 2017
I was trying to prove this in Alive, but the proof doesn't go through.
Some corner cases are not correct: http://rise4fun.com/Alive/iVs
For example: 'INT_MIN / something' would be replaced with 0, but shouldn't.
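The exact transform isn't reproduced here (it's in the Alive link), but a minimal sketch of why INT_MIN is the classic corner case, assuming a magnitude-style precondition and modeling the type at 8 bits:

```python
# In two's complement, abs(INT_MIN) overflows and wraps back to
# INT_MIN itself, so any precondition that compares magnitudes can
# misfire on it (modeled here at 8 bits).
BITS = 8
INT_MIN = -(1 << (BITS - 1))            # -128

def wrap(x):
    """Reduce x into the signed 8-bit two's-complement range."""
    return ((x + (1 << (BITS - 1))) & ((1 << BITS) - 1)) - (1 << (BITS - 1))

print(wrap(abs(INT_MIN)))   # -128: |INT_MIN| is INT_MIN again
print(wrap(INT_MIN // 1))   # -128: INT_MIN / 1 must stay INT_MIN, not become 0
```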
Sep 9 2017
Sep 8 2017
Sep 6 2017
Aug 12 2017
LGTM, with minor comments: missing the CHECK comment above, and missing a test for the assume removal.
Aug 10 2017
Aug 9 2017
Aug 8 2017
Jul 29 2017
Jul 28 2017
Jul 27 2017
Just realized there's a complicated isGEPKnownNonNull() in ValueTracking.cpp, which is used solely by isKnownNonZero().
I don't believe such a thing is necessary, since the input pointer to an inbounds GEP has to be non-null in address space 0. If others agree, I'll also remove that function in a subsequent patch.
Jul 26 2017
Jul 25 2017
Seems good enough to me as a temporary hack.
Jul 15 2017
Jun 6 2017
May 30 2017
This one, for example (stolen from Dave; I don't have my list handy):
rewrite (1 << Y) * X into X << Y:
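The rewrite above is easy to sanity-check by brute force at a small bit width (a sketch, not a proof; Alive handles the general case):

```python
# Exhaustively verify the rewrite (1 << Y) * X -> X << Y over all
# 8-bit values, modeling LLVM's modular (wrapping) mul/shl semantics.
# Shift amounts >= the bit width yield poison in LLVM, so skip them.
BITS = 8
MASK = (1 << BITS) - 1

for X in range(1 << BITS):
    for Y in range(BITS):
        lhs = ((1 << Y) * X) & MASK   # (1 << Y) * X, wrapped to 8 bits
        rhs = (X << Y) & MASK         # X << Y, wrapped to 8 bits
        assert lhs == rhs, (X, Y)

print("ok")
```

Both sides compute X * 2^Y modulo 2^8, which is why the check passes for every pair.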
May 29 2017
May 5 2017
Mar 27 2017
Mar 26 2017
Let me maybe zoom out and give a different perspective:
Right now call site and function attributes are an AND of predicates that are always guaranteed to hold for that specific call site or for all call sites, respectively.
Predicates include things like doesn't write to memory, only writes to memory that is not observable, etc. Using attributes we can state that several of these predicates hold.
In an ideal world, predicates wouldn't overlap, although since we can only state ANDs of predicates and not ORs, some overlap may be needed in practice.
Mar 15 2017
Mar 11 2017
Mar 9 2017
Mar 8 2017
Feb 12 2017
The easy way to check for overflow in Alive: http://rise4fun.com/Alive/MH
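The linked Alive snippet isn't reproduced here; as a generic illustration of what such a check boils down to, a signed-add overflow test can be done by computing in unbounded precision and comparing against the type's range (modeled at 8 bits; the helper name is made up for this sketch):

```python
# Signed-add overflow check at 8 bits: compute exactly, then test
# whether the result fits the signed range.
BITS = 8
INT_MIN = -(1 << (BITS - 1))        # -128
INT_MAX = (1 << (BITS - 1)) - 1     # 127

def sadd_overflows(a, b):
    """True iff a + b overflows signed 8-bit arithmetic."""
    return not (INT_MIN <= a + b <= INT_MAX)

print(sadd_overflows(INT_MAX, 1))        # True: 127 + 1 wraps
print(sadd_overflows(INT_MIN, INT_MAX))  # False: -128 + 127 = -1 fits
```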
Feb 5 2017
Jan 28 2017
Let me give just 2 more Z3-related suggestions:
- instead of re-creating the solver, it might be faster to do Z3_solver_reset
- "once in a while" it might be helpful to delete everything (all solvers, ASTs, the context) and call Z3_reset_memory. Z3's small-object pool is not very good and keeps growing /ad infinitum/, so cleaning it up once in a while is a good thing (improves cache usage and reduces memory consumption).
Jan 26 2017
- add UB semantics to indirectbr and switch
Jan 25 2017
Regarding incremental solving with Z3 (or with most SMT solvers in general), let me just lower the expectations a bit:
In Z3, when you do push(), there are a few things that change immediately: 1) it uses a different SAT solver (one that supports incremental reasoning), and 2) some preprocessing steps are disabled completely.
The advantage of incremental solving is that it shares the state of the SAT solver between queries. On the other hand, Z3 disables a bunch of nice preprocessors, and it also switches to eager bit-blasting. So whether going incremental pays off depends on the workload: I've seen cases where it's faster to create a new solver for each query and cases where incremental is a win.
It's in our plans to "fix" Z3 to do better preprocessing in incremental mode, but there's no ETA at the moment for starting this work.
Jan 24 2017
Alive is still not great for this kind of patch; I guess it needs some macro support.
But you can use slightly more general preconditions, like this: http://rise4fun.com/Alive/Tz0
Jan 23 2017
Jan 22 2017
Cool work Dominic!
Just a few random comments from the peanut gallery regarding the usage of Z3:
- I would definitely split Z3Expr into expr/solver/model classes. Several advantages: plugging in new solvers becomes easier, and reference counting can be handled more safely and automatically. Right now your solver+model-related code handles ref-counting "by hand", which can easily lead to bugs.
- I would add an rvalue constructor to the expr class to avoid ref-counting when unneeded.
- Function Z3Expr::fromData() seems to be able to create arbitrarily-sized bit-vectors. I would limit it to e.g. 512 bits or something like that; Z3 is super slow with huge bit-vectors. If you feel fancy (for a following version of this feature), you could encode this information lazily (based on the model).
- Z3Expr::fromInt() -- why take a string as input and not a uint64_t?
- Why call simplify before asserting a constraint? That's usually a pretty slow thing to do (in my experience).
- You should replace Z3_mk_solver() with something fancier. If you don't feel fancy, just use Z3_mk_simple_solver() ("simple" is short for "simplifying", btw). If you know the theory of the constraints (e.g., without floats it would be QF_BV), you should use Z3_mk_solver_for_logic(). If you feel really fancy, you could roll your own tactic (in my tools I often have a tactic class to build my own tactics; Z3 even lets you case-split on the theory after simplifications).
- Instead of asserting multiple constraints to the solver, I would and() them and assert just once (plays nicer with some preprocessors of Z3)
- Timeout: SMT solvers sometimes go crazy and take forever. I would advise adding a timeout to the solver to avoid getting stuck. The timeout can be different for the two use cases: checking whether a path is feasible (lower timeout) vs. checking whether you actually have a bug (higher timeout).
- Also, I didn't see anything regarding memory (I might have missed it). Are you using the array theory? If so, I would advise benchmarking it carefully, since Z3's support for arrays is not great (even though it has improved a bit in the last year or so).
Oct 6 2016
Apr 9 2016
Mar 16 2016
Could you please explain the motivation for this transformation?
Doesn't seem very profitable to me unless strlen(s) has already been computed before. Otherwise we end up visiting more memory positions than with the original code.
Feb 29 2016
Feb 28 2016
Feb 22 2016
Feb 21 2016
Just a quick comment on this bug. We can see it from two angles:
- TBAA is just doing its job since it is exploiting the fact that 2 pointers with different types don't alias (the metadata has 2 different type branches). So TBAA claims the pointers don't alias, while they do in practice (it's undefined behavior, though). -- in this case LICM needs patching to account for this aggressiveness.
Dec 30 2015
I agree with you re: InferFunctionAttrs. I don't see any reason to have InferFunctionAttrs recognize all those library functions instead of having the frontend do it. But removing it at this point may break someone's code.
Dec 16 2015
Dec 15 2015
I was wondering whether we could go a bit more generic and support arbitrary expressions (like llvm.assume). This design only supports allocation functions that return a pointer to the beginning of a buffer with size p1 * p2 * ... * pn, where pi are the parameters.