
Derive GEP index type from Data Layout (cont)
Needs Review · Public

Authored by delena on Jun 19 2018, 7:32 AM.

Details

Summary

This patch fixes some places that were not covered by https://reviews.llvm.org/D42123.
I added a test for ptrtoint operations.
Some minor changes are in the backend; they can't be covered by tests, since the target is out-of-tree.
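As a hypothetical sketch (not the actual test from this revision — the datalayout string and function name are invented for illustration), a ptrtoint test under a "custom" GEP index width might look like this, with 64-bit pointers but a 32-bit index type:

```llvm
; Illustrative only; names and datalayout are not taken from the patch.
; "p:64:64:64:32" declares 64-bit pointers whose GEP index type is i32.
target datalayout = "p:64:64:64:32"

define i64 @example(i16* %p) {
  ; ptrtoint must still produce the full 64-bit pointer value,
  ; even though GEP indices in this datalayout are only 32 bits wide.
  %i = ptrtoint i16* %p to i64
  ret i64 %i
}
```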

Diff Detail

Repository
rL LLVM

Event Timeline

delena created this revision.Jun 19 2018, 7:32 AM
lebedev.ri added inline comments.
lib/Transforms/InstCombine/InstCombineCasts.cpp
1727

I realize that it is existing behaviour, but this creates a new extra instruction,
and does not check for uses of the previous instruction.
That is not good for instcombine, although I suppose this is a very special case.

You might want to check that there are tests that verify this behavior, or add such tests.

test/Transforms/InstCombine/ptr-int-cast_custom_gep.ll
2

Instcombine tests use utils/update_test_checks.py.
(And ideally, at least when committing, the diff should be a diff against the previous results.)

delena added inline comments.Jun 19 2018, 11:09 PM
lib/Transforms/InstCombine/InstCombineCasts.cpp
1727

It shouldn't create an extra instruction. I fixed an incompatibility between IntPtrType and the expected type of the source.
The current version just entered an endless loop, because the IntPtrType is the index type and not the pointer type.

test/Transforms/InstCombine/ptr-int-cast_custom_gep.ll
2

I can generate it that way. I just took the original test and changed it.

delena updated this revision to Diff 152026.Jun 19 2018, 11:53 PM

The new test is auto-generated now.

lebedev.ri added inline comments.
lib/Transforms/InstCombine/InstCombineCasts.cpp
1727

Hm.
@test1 had just ptrtoint, and ended up with ptrtoint+and+icmp.
@test2 had just inttoptr, and ended up with trunc+inttoptr.
Those look like new instructions to me.
But like I said, I suspect this is a very special edge case.

delena added inline comments.Jun 20 2018, 1:14 AM
lib/Transforms/InstCombine/InstCombineCasts.cpp
1727

The and+icmp is not new; it is just the way to return an i1.
@test2 returns a different type, and that is common; nothing special here.
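A hedged illustration of the shapes being discussed (function names and types are invented for this example, not taken from the revision): a ptrtoint to i1 is canonicalized into a full-width ptrtoint followed by and+icmp, and an inttoptr from a wider integer gets an explicit trunc inserted:

```llvm
; Illustrative only; these are not the tests from the revision.
define i1 @cast_to_i1(i8* %p) {
  ; Canonicalized to a full-width ptrtoint, then 'and %x, 1'
  ; plus 'icmp ne' to materialize the i1 result.
  %b = ptrtoint i8* %p to i1
  ret i1 %b
}

define i8* @cast_from_wide(i128 %i) {
  ; Canonicalized by inserting a trunc down to the pointer
  ; (or index) width before the inttoptr.
  %p = inttoptr i128 %i to i8*
  ret i8* %p
}
```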

bjope added a subscriber: bjope.Jun 20 2018, 1:25 AM

AFAIK BasicAA assumes that all GEP indices have a common type. So "normalization" is, for example, needed before passes like GVN that use MemoryDependenceResults, which in turn uses BasicAA. Or maybe that is a bug in BasicAA?

Passes like InstCombine might skip doing rewrites in some basic blocks (e.g. if they are unreachable from entry). So I do not think that we can rely on InstCombine doing a normalization for all GEPs in the function.
So are we moving towards always using the "normalized" type from scratch (in all passes) when creating a GEP, or what is the plan here?

> AFAIK BasicAA assumes that all GEP indices have a common type. So "normalization" is, for example, needed before passes like GVN that use MemoryDependenceResults, which in turn uses BasicAA. Or maybe that is a bug in BasicAA?
>
> Passes like InstCombine might skip doing rewrites in some basic blocks (e.g. if they are unreachable from entry). So I do not think that we can rely on InstCombine doing a normalization for all GEPs in the function.
> So are we moving towards always using the "normalized" type from scratch (in all passes) when creating a GEP, or what is the plan here?

I don't think that we are moving towards always-normalized GEPs, at least not within these changes. We say that, when normalized, the index type should not always be the pointer-width type; each target may specify the index type in the DataLayout.
That is what my patches are about.

I don't know how important canonicalization of the GEP form is for alias analysis. It matters more for loop transformations, SCEV, and GEP combining/simplification, I think.
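As a concrete (illustrative) datalayout fragment: the optional fourth field of a `p` specification sets the index width, so a target can declare wide pointers addressed with narrower GEP indices. The specific strings below are examples, not from this patch:

```llvm
; p[n]:<size>:<abi>:<pref>:<idx> -- the optional fourth field is the
; bit width of the index type used for GEP arithmetic in that
; address space. When omitted, it defaults to the pointer size.
target datalayout = "p:64:64:64:32"   ; default AS: 64-bit pointers, 32-bit indices
```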

arsenm added a subscriber: arsenm.Jun 24 2018, 11:50 PM
arsenm added inline comments.
lib/CodeGen/AsmPrinter/AsmPrinter.cpp
2071 ↗(On Diff #152026)

This shouldn't be using the default address space

lib/Transforms/InstCombine/InstCombineCasts.cpp
1727

The InstCombine change is a separate patch from the codegen changes

bjope added a comment.Jun 25 2018, 1:51 AM

> AFAIK BasicAA assumes that all GEP indices have a common type. So "normalization" is, for example, needed before passes like GVN that use MemoryDependenceResults, which in turn uses BasicAA. Or maybe that is a bug in BasicAA?
>
> Passes like InstCombine might skip doing rewrites in some basic blocks (e.g. if they are unreachable from entry). So I do not think that we can rely on InstCombine doing a normalization for all GEPs in the function.
> So are we moving towards always using the "normalized" type from scratch (in all passes) when creating a GEP, or what is the plan here?

> I don't think that we are moving towards always-normalized GEPs, at least not within these changes. We say that, when normalized, the index type should not always be the pointer-width type; each target may specify the index type in the DataLayout.
> That is what my patches are about.
>
> I don't know how important canonicalization of the GEP form is for alias analysis. It matters more for loop transformations, SCEV, and GEP combining/simplification, I think.

I only know that once upon a time I made a change in SeparateConstOffsetFromGEP to avoid doing rewrites on unreachable code (so the canonicalization that pass does for GEP indices did not happen in unreachable code). After that we got asserts like this one:

opt: ../lib/Support/APInt.cpp:193: llvm::APInt& llvm::APInt::operator+=(const llvm::APInt&): Assertion `BitWidth == RHS.BitWidth && "Bit widths must be the same"' failed.
...
opt(llvm::BasicAAResult::GetLinearExpression(llvm::Value const*, llvm::APInt&, llvm::APInt&, unsigned int&, unsigned int&, llvm::DataLayout const&, unsigned int, llvm::AssumptionCache*, llvm::DominatorTree*, bool&, bool&) 0x357)[0x1650837]
opt(llvm::BasicAAResult::GetLinearExpression(llvm::Value const*, llvm::APInt&, llvm::APInt&, unsigned int&, unsigned int&, llvm::DataLayout const&, unsigned int, llvm::AssumptionCache*, llvm::DominatorTree*, bool&, bool&) 0x345)[0x1650825]
opt(llvm::BasicAAResult::GetLinearExpression(llvm::Value const*, llvm::APInt&, llvm::APInt&, unsigned int&, unsigned int&, llvm::DataLayout const&, unsigned int, llvm::AssumptionCache*, llvm::DominatorTree*, bool&, bool&) 0x345)[0x1650825]
opt(llvm::BasicAAResult::GetLinearExpression(llvm::Value const*, llvm::APInt&, llvm::APInt&, unsigned int&, unsigned int&, llvm::DataLayout const&, unsigned int, llvm::AssumptionCache*, llvm::DominatorTree*, bool&, bool&) 0x345)[0x1650825]
opt(llvm::BasicAAResult::GetLinearExpression(llvm::Value const*, llvm::APInt&, llvm::APInt&, unsigned int&, unsigned int&, llvm::DataLayout const&, unsigned int, llvm::AssumptionCache*, llvm::DominatorTree*, bool&, bool&) 0x345)[0x1650825]
opt(llvm::BasicAAResult::GetLinearExpression(llvm::Value const*, llvm::APInt&, llvm::APInt&, unsigned int&, unsigned int&, llvm::DataLayout const&, unsigned int, llvm::AssumptionCache*, llvm::DominatorTree*, bool&, bool&) 0x345)[0x1650825]
opt(llvm::BasicAAResult::constantOffsetHeuristic(llvm::SmallVectorImpl<llvm::BasicAAResult::VariableGEPIndex> const&, unsigned long, unsigned long, long, llvm::AssumptionCache*, llvm::DominatorTree*) 0x17e)[0x1650dde]
opt(llvm::BasicAAResult::aliasGEP(llvm::GEPOperator const*, unsigned long, llvm::AAMDNodes const&, llvm::Value const*, unsigned long, llvm::AAMDNodes const&, llvm::Value const*, llvm::Value const*) 0x7d4)[0x16553a4]
opt(llvm::BasicAAResult::aliasCheck(llvm::Value const*, unsigned long, llvm::AAMDNodes, llvm::Value const*, unsigned long, llvm::AAMDNodes, llvm::Value const*, llvm::Value const*) 0x49f)[0x1653c7f]
opt(llvm::BasicAAResult::alias(llvm::MemoryLocation const&, llvm::MemoryLocation const&) 0x26a)[0x165431a]
opt(llvm::AAResults::alias(llvm::MemoryLocation const&, llvm::MemoryLocation const&) 0x38)[0x1631a58]
opt(llvm::MemoryDependenceResults::getSimplePointerDependencyFrom(llvm::MemoryLocation const&, bool, llvm::ilist_iterator<llvm::ilist_detail::node_options<llvm::Instruction, true, false, void>, false, false>, llvm::BasicBlock*, llvm::Instruction*, unsigned int*) 0x1fa)[0x174823a]
opt(llvm::MemoryDependenceResults::getSimplePointerDependencyFrom(llvm::MemoryLocation const&, bool, llvm::ilist_iterator<llvm::ilist_detail::node_options<llvm::Instruction, true, false, void>, false, false>, llvm::BasicBlock*, llvm::Instruction*, unsigned int*) 0x57c)[0x17485bc]
opt(llvm::MemoryDependenceResults::getPointerDependencyFrom(llvm::MemoryLocation const&, bool, llvm::ilist_iterator<llvm::ilist_detail::node_options<llvm::Instruction, true, false, void>, false, false>, llvm::BasicBlock*, llvm::Instruction*, unsigned int*) 0x8b)[0x1749b4b]
opt(llvm::MemoryDependenceResults::GetNonLocalInfoForBlock(llvm::Instruction*, llvm::MemoryLocation const&, bool, llvm::BasicBlock*, std::vector<llvm::NonLocalDepEntry, std::allocator<llvm::NonLocalDepEntry> >*, unsigned int) 0xd9)[0x174a799]

I don't remember exactly why I thought that the assert above was connected to not having a canonical type for the GEP indices. Somehow I think this code was involved:

bool BasicAAResult::constantOffsetHeuristic(
    const SmallVectorImpl<VariableGEPIndex> &VarIndices, LocationSize V1Size,
    LocationSize V2Size, int64_t BaseOffset, AssumptionCache *AC,
    DominatorTree *DT) {
  if (VarIndices.size() != 2 || V1Size == MemoryLocation::UnknownSize ||
      V2Size == MemoryLocation::UnknownSize)
    return false;

  const VariableGEPIndex &Var0 = VarIndices[0], &Var1 = VarIndices[1];

  if (Var0.ZExtBits != Var1.ZExtBits || Var0.SExtBits != Var1.SExtBits ||
      Var0.Scale != -Var1.Scale)
    return false;

  unsigned Width = Var1.V->getType()->getIntegerBitWidth();

  // We'll strip off the Extensions of Var0 and Var1 and do another round
  // of GetLinearExpression decomposition. In the example above, if Var0
  // is zext(%x + 1) we should get V1 == %x and V1Offset == 1.

  APInt V0Scale(Width, 0), V0Offset(Width, 0), V1Scale(Width, 0),
      V1Offset(Width, 0);
  bool NSW = true, NUW = true;
  unsigned V0ZExtBits = 0, V0SExtBits = 0, V1ZExtBits = 0, V1SExtBits = 0;
  const Value *V0 = GetLinearExpression(Var0.V, V0Scale, V0Offset, V0ZExtBits,
                                        V0SExtBits, DL, 0, AC, DT, NSW, NUW);
  NSW = true;
  NUW = true;
  const Value *V1 = GetLinearExpression(Var1.V, V1Scale, V1Offset, V1ZExtBits,
                                        V1SExtBits, DL, 0, AC, DT, NSW, NUW);

The same Width is used for both V0 and V1, but if I remember correctly, Var1.V->getType()->getIntegerBitWidth() != Var0.V->getType()->getIntegerBitWidth() when the assert triggered. When letting SeparateConstOffsetFromGEP::canonicalizeArrayIndicesToPointerSize rewrite all GEPs again (even unreachable ones), I no longer got the assertion.
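The failure mode described above can be sketched (with invented names; this is not the original reproducer) as two GEPs whose variable indices have different integer widths. constantOffsetHeuristic derives a single Width from Var1.V and sizes all four APInts with it, so an i32 index on one side and an i64 index on the other can feed APInt operands of mismatched widths into GetLinearExpression:

```llvm
; Invented example of the problematic shape: the two variable GEP
; indices have different bit widths (i32 vs i64), while
; constantOffsetHeuristic sizes its APInts from just one of them.
define void @mixed_widths(i8* %base, i32 %i, i64 %j) {
  %g0 = getelementptr i8, i8* %base, i32 %i
  %g1 = getelementptr i8, i8* %base, i64 %j
  %a = load i8, i8* %g0
  %b = load i8, i8* %g1
  ret void
}
```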

delena updated this revision to Diff 152692.Jun 25 2018, 7:44 AM

I separated the CodeGen and middle-end changes.
I added one more test for SeparateConstOffsetFromGEP.

I looked at the code in SeparateConstOffsetFromGEP::canonicalizeArrayIndicesToPointerSize.
DL->getIntPtrType() does the right job for the "custom" GEP. I added a test that covers canonicalizeArrayIndicesToPointerSize().
I'll try to add a test for constantOffsetHeuristic() as well.

delena updated this revision to Diff 152838.Jun 25 2018, 10:44 PM

Added more tests for BasicAA.

Ping. Can anybody complete the review, please?