This patch adds IPSCCP-like interprocedural analysis to the generic sparse conditional propagation solver. It allows tracking of function return values and the values stored in global variables, and it gives clients the ability to mark basic blocks executable. The patch introduces separate state maps, distinct from the existing value state map, for maintaining function return values and the values stored in global variables. This allows clients to distinguish, for example, a global variable from its in-memory value, and a function from the values it returns.
Event Timeline
The sparse propagation utility isn't currently used by any passes, so it can't really be tested by itself. I'm attempting to use it as the solver in D37355. I split these changes out to make the review of that patch easier. I've added some tests there for the review, but I expect I'll add some more as the review of that pass progresses.
Ah, I didn't think about adding tests to unittests/. That makes sense - I'll give it a shot. I do wonder if it'd be easier to just move SCCP over to the generic engine, though.
Added some unit tests. Thanks for the suggestion. I briefly considered moving SCCP over to the generic solver (as a simple way to re-purpose the existing tests), but this might not be that straightforward after all. The re-solving for undef values will need to be considered. In the end, adding unit tests for this patch wasn't that difficult to do.
Ideally we should consider moving to a model where the resolver doesn't really exist.
There are lots of subtleties in the way solver and resolver work in lockstep.
I'm pretty sure the only reason SCCP didn't fall apart like GVN (the old one) or many other passes in tree is that Zhendong didn't fire up his fuzzer on it :)
I didn't have time to work on this (I had a couple of prototypes) and it's unlikely things will change in the near future (1-2 months).
So, for now I'm fine to leave the constant solver where it is but I'd still like to point out you just uncovered the tip of the iceberg.
Yes, and I just found your post about this very issue on llvm-dev from many months ago. Thanks!
It's worth noting: you can make propagation much faster by keeping a separate worklist for overdefined values and prioritizing it over the normal one.
This will move things to overdefined as quickly as possible.
(Because anything met with overdefined is just about always overdefined.)
This is what SCCP does.
Again, just recording it here, i can do it in a followup.
include/llvm/Analysis/SparsePropagation.h | ||
---|---|---|
29 | Is this, and the associated changes you make to do the mapping, still needed if lattice values can be something other than void pointers? | |
161 | SmallVector please | |
165 | SmallVector please | |
lib/Analysis/SparsePropagation.cpp | ||
58 ↗ | (On Diff #115282) | auto |
58 ↗ | (On Diff #115282) | Given this occurs so often, can you please just pull it out into a templated helper. template <class T, class V> or whatever. |
64 ↗ | (On Diff #115282) | auto |
70 ↗ | (On Diff #115282) | auto |
Thanks again for taking a look at this! Right, we should eventually make a separate worklist for overdefined, like SCCP does.
include/llvm/Analysis/SparsePropagation.h | ||
---|---|---|
29 | I added StateTy here only as shorthand - it's not really needed, especially with the auto suggestions you have. The main changes I'm making to the mapping are adding distinct maps for values loaded-from/stored-to global variables and function return values. I also changed the existing mapping from Instruction -> LatticeVal to Value -> LatticeVal to handle arguments. This mimics the behavior of IPSCCP. | |
161 | Sounds good. | |
165 | Sounds good. | |
lib/Analysis/SparsePropagation.cpp | ||
58 ↗ | (On Diff #115282) | Sounds good. |
FYI: I'm about to rename getOrInitValueState to getValueState.
That is what SCCP calls it, and IMHO, we should be consistent.
I think you should do roughly the same (IE name the return-only ones as getExisting or getLattice, and not add "orInit" to the initializing ones)
Addressed comments from Danny.
- Because AbstractLatticeFunction and SparseSolver are now templates, I had to move the member functions from the .cpp file over to the header. The patch that does that is D38561.
lib/Analysis/SparsePropagation.cpp | ||
---|---|---|
318 ↗ | (On Diff #115282) | This assumes a very specific set of lattices being used here. I guess i feel like this kind of mapping function should provide, not something part of the sparse solver. IE maybe you need SparseSolverTraits, and this is one of those functions. (that would avoid the overhead of virtual calling). Essentially, i feel like this is one step too specific. I'm not sure it's worth going all the way to having the clients own *all* the state maps, and do their initialization. Maybe halfway? The function and value states are managed by the sparse solver, and other state maps are owned/initialized by the client, and this function is the only interface to get the right map? |
I agree, getStateMapForInstruction is a bit strange. I've been thinking about what to do here. And one of the simplest things I came up with was to just define a custom LatticeKey type like we do a LatticeVal. This would allow the solver to maintain a single map (meaning we no longer need something like getStateMapForInstruction), and give the client the flexibility to separate the kinds of values it cares about.
So for example, to replicate what I currently have, the LatticeKey could be a PointerIntPair of a Value* and an enum indicating the grouping (value, function, global variable). getValueState(Value *V) and the family of functions I added for functions and global variables that may not exist yet in the map would be replaced by a single getValueState(LatticeKey Key). If Key doesn't exist in the map it would delegate to a single LatticeFunc->ComputeLatticeValue(LatticeKey Key) that would produce a new LatticeVal for Key. Currently, we have separate compute methods for arguments, functions, global variables, etc., which would also go away. It seems reasonable to move that complexity out of the generic classes. Additionally, the ComputeInstructionState functions would then be responsible for coming up with the right key. For example, visitStore would look something like:
```cpp
auto *GV = dyn_cast<GlobalVariable>(Store.getPointerOperand());
if (!GV)
  return;
LatticeKey Key(GV, LatticeKey::GlobalVariable);
ChangedValues[Key] = MergeValues(...);
```
What do you think?
This actually sounds like a great idea, IMHO.
Let's try it and see how well it works in practice.
It certainly would address my current concerns.
I believe most users should be able to construct effective and cheap keys without real trouble.
Implement the previously discussed approach using generic LatticeKeys. A few things to note about this version of the patch:
First, LatticeKeys must be usable as the key_type of whatever mapping structure the generic solver uses internally (i.e., DenseMap). If a client uses a non-trivial type for LatticeKey, it will have to provide a specialization of DenseMapInfo. But we already have specializations for pointers and PointerIntPairs, which I think probably covers most potential clients. I've added a comment at the top of AbstractLatticeFunction mentioning this.
Second, there are cases when the generic solver needs to translate between LatticeKeys and their corresponding LLVM Values. One case is when it needs to add LLVM Values to the work list after the state of a LatticeKey changes. Another case is when the solver tries to determine the executable blocks, and needs to get the state of an LLVM Value (e.g., a branch or switch condition or phi node). It needs to know the LatticeKey of a Value before it can get its associated state. I originally thought about adding two virtual functions to AbstractLatticeFunction that would let the client define the conversions between LatticeKeys and LLVM Values. But I thought the overhead of the calls might be too large, especially for simple cases, like when the LatticeKey is just an LLVM Value. I instead went with a LatticeKeyInfo template that clients can specialize for their chosen LatticeKey type.
The test cases show how this works for simple ipa-cp-like problems. If the approach looks good, I'll update the summary and the CalledValuePropagation pass (D37355).
This seems a reasonable restriction; it's true in other parts of LLVM as well anyway, so ..
> Second, there are cases when the generic solver needs to translate between LatticeKeys and their corresponding LLVM Values. [...]
SGTM
> The test cases show how this works for simple ipa-cp-like problems. [...]
At this point, i think it looks great to start. I don't think it's worth making any more changes until there are more clients. Thanks for working through this, i think the sparse propagation solver is a *lot* more useful thanks to your work.