This is an archive of the discontinued LLVM Phabricator instance.

[mlir][sparse] group tensor id and levels into pairs in loop emitter
ClosedPublic

Authored by Peiming on Apr 17 2023, 1:09 PM.

Diff Detail

Event Timeline

Peiming created this revision.Apr 17 2023, 1:09 PM
Herald added a project: Restricted Project. · View Herald Transcript
Peiming requested review of this revision.Apr 17 2023, 1:09 PM
Peiming edited the summary of this revision. (Show Details)Apr 17 2023, 1:10 PM
Peiming added reviewers: wrengr, bixia.
Peiming updated this revision to Diff 514380.Apr 17 2023, 1:12 PM

remove unused code.

wrengr added inline comments.Apr 17 2023, 2:05 PM
mlir/lib/Dialect/SparseTensor/Transforms/LoopEmitter.h
46

I think it'd be better to define using TensorLevel = unsigned; and compress things like we do for TensorLoopId.

215

I think you should just define the struct directly, rather than using std::tuple. That way client code can use nice names for accessing the fields directly, rather than defining all those wrappers around std::get<N>.

Also, the struct should be named LoopSliceInfo for consistency with LoopInfo, SliceInfo, etc. And, of course, the struct should be final.

If you are concerned about the ability to use structured-binding in the foreach-loops, then take a look at the "Case 3: binding to data members" section of https://en.cppreference.com/w/cpp/language/structured_binding . tl;dr: the syntax/semantics should be exactly the same as for the std::tuple case.

224

I think the name tidLvls would be a lot cleaner here, and consistent with elsewhere.

229–232

I think it'd be best to leave this as "operating on" like it was before. Although this is "iterating" in the sparse-compiler sense of enumerating the stored values of sparse tensors, I feel that using that term here would cause confusion with the C++ iterator framework.

231

Grammatically, that should be "information" (singular, and lowercase). However, I think it'd be better to leave it as "slice-driven loop conditions" like it was before, since the term "information" doesn't really tell us very much about what exactly that means/contains.

mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp
954–955

Why keep this definition of ld around but commented out? If it's no longer needed, then we should remove it (along with my FIXME comment about trying to figure out what it's actually supposed to be).

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
1303–1307

should be "tidLvl" to match the names used elsewhere

1304

It seems worth adding a comment explaining why we discard tlPair.second and use ldx instead; i.e., a comment saying that isSparse is checking whether any tensor is sparse for the loop ldx, regardless of what level that corresponds to and regardless of what level is stored in tidLvls. (Or, if the levels in tidLvls are supposed to agree with the loop, then say that instead. Assuming we actually maintain that invariant, then we don't need to add assertions to verify that; but we do want a comment explaining that this invariant holds.)

1547

Why condTidLvls instead of just tidLvls? (I'm not saying it should be changed, just curious why the different name)

1650

I think the affineTidLvls and affines should actually be combined into a single SmallVector<TLA> where struct TLA final { TensorId; Level; AffineExpr } (or struct TLA final { TensorLevel; AffineExpr } with the appropriate getters). That helps capture the invariant that translateBitsToTidLvlPairs keeps them the same length, and thus also helps avoid needing to zip them together later on.

Of course, you'll need to come up with a better name than my "TLA" ;)

Peiming added inline comments.Apr 17 2023, 2:23 PM
mlir/lib/Dialect/SparseTensor/Transforms/LoopEmitter.h
215

Actually, I deleted the wrappers; I found them useless.

We only use it in structured bindings in the foreach loop, and this is a private type, so it should be okay? WDYT?

Peiming updated this revision to Diff 514413.Apr 17 2023, 2:35 PM
Peiming marked 7 inline comments as done.

address some comments.

Peiming added inline comments.Apr 17 2023, 2:39 PM
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
1547

Because the call sites were using condTid (to distinguish it from affineTid).

But yeah, cond- is not an accurate prefix, since not all (tid, lvl) pairs appear in the loop condition (most of the dense levels do not). I changed it to tidLvls.

wrengr added inline comments.Apr 17 2023, 3:00 PM
mlir/lib/Dialect/SparseTensor/Transforms/LoopEmitter.h
215

I still think we should define our own struct rather than reusing std::tuple. I feel like the std::pair/std::tuple classes are really only intended for those cases where folks need an ad-hoc struct; hence, whenever things aren't ad-hoc, they should be given their own definitions.

Peiming updated this revision to Diff 514462.Apr 17 2023, 4:56 PM

use single unsigned for tensor lvl pairs

Peiming marked an inline comment as done.Apr 17 2023, 5:01 PM
aartbik accepted this revision.Apr 20 2023, 2:00 PM
aartbik added inline comments.
mlir/lib/Dialect/SparseTensor/Transforms/CodegenEnv.h
68 ↗(On Diff #514462)

please put this empty line back; all "// Section" comments have whitespace around them

89 ↗(On Diff #514462)

empty line

91 ↗(On Diff #514462)

agree (since the subject is plural)

mlir/lib/Dialect/SparseTensor/Transforms/LoopEmitter.h
195

Gets

209

Give this one a doc string too

This revision is now accepted and ready to land.Apr 20 2023, 2:00 PM
Peiming updated this revision to Diff 515486.Apr 20 2023, 2:27 PM
Peiming marked 5 inline comments as done.

address comments.

Peiming updated this revision to Diff 515491.Apr 20 2023, 2:36 PM
Peiming marked 2 inline comments as done.

address comments.

Peiming added inline comments.Apr 20 2023, 2:38 PM
mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
1650

I want to keep them separate, because affineTidLvls is used independently from affines later. See L1662.

wrengr added inline comments.Apr 21 2023, 12:00 PM
mlir/lib/Dialect/SparseTensor/Transforms/CodegenEnv.h
92 ↗(On Diff #515491)

This should be TensorId

92 ↗(On Diff #515491)

This should be Level

mlir/lib/Dialect/SparseTensor/Transforms/LoopEmitter.cpp
763

It's more appropriate to use push_back here, since the return type of makeTensorLevel is exactly what's being stored (i.e., it's not being passed to some constructor for what's actually stored)

769

ditto, should be push_back

mlir/lib/Dialect/SparseTensor/Transforms/LoopEmitter.h
204

I think a name like unpackTensorLevel would better match makeTensorLevel.

209

This should use "///" since it's documentation not a comment

209

I get what you're after by saying "Container<>" and "range<>", however since those are not actual names of templates, it'd be better to rephrase this to "Converts a range of TensorLevel to a range of std::pair<TensorId, Level>."

211

I think you'll want to add std::remove_const_t there too, since that's what all the analogous SFINAE code in MLIR does for this sort of thing.

215

Should be final.

215

Mirroring the earlier suggestion, this would then be unpackTensorLevelRange

216–218

the style guide says that the fields should be at the end of the definition

217

this should be std::forward<ContainerTy>(c)

229

Should be just "tensor"

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
1492

push_back

1561

push_back

1579

push_back

1582

push_back

1620

push_back

1632

push_back

1658–1660

I still think it'd be better to have a single SmallVector<std::pair<TensorLevel, AffineExpr>> since that helps capture the invariant that these should be the same length.

But if you're not going to do that, then you should use llvm::zip_equal so that it checks that they have the same length rather than silently truncating things to the shorter list

1666–1667

If this concat is what's making you reluctant to use SmallVector<std::pair<TensorLevel, AffineExpr>>, then recall that you can use llvm::concat<TensorLevel>(tidLvls, llvm::make_first_range(affineTidLvlExprs)).

Peiming updated this revision to Diff 515885.Apr 21 2023, 1:21 PM
Peiming marked 21 inline comments as done.

address comments.

Peiming added inline comments.Apr 21 2023, 1:22 PM
mlir/lib/Dialect/SparseTensor/Transforms/LoopEmitter.h
216–218

Good to know!

mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
1658–1660

Good to know there is a zip_equal.

Peiming updated this revision to Diff 515894.Apr 21 2023, 1:30 PM

Use a pair for affineTidLvls and AffineExpr.

Peiming updated this revision to Diff 515903.Apr 21 2023, 1:40 PM

minor improvement.

wrengr accepted this revision.Apr 21 2023, 2:00 PM
wrengr added inline comments.
mlir/lib/Dialect/SparseTensor/Transforms/LoopEmitter.h
211

It should be in the order remove_reference_t<remove_const_t<...>>; at least that's the order I see elsewhere. I haven't checked to see if/how it matters.

Peiming marked an inline comment as done.Apr 21 2023, 2:19 PM
Peiming added inline comments.
mlir/lib/Dialect/SparseTensor/Transforms/LoopEmitter.h
211

I don't think it matters, but yeah, good to be consistent anyway.

Peiming marked 2 inline comments as done.Apr 21 2023, 2:20 PM
Peiming added inline comments.Apr 21 2023, 2:26 PM
mlir/lib/Dialect/SparseTensor/Transforms/LoopEmitter.h
211
Peiming updated this revision to Diff 515918.Apr 21 2023, 2:31 PM

using llvm::remove_cvref_t

This revision was landed with ongoing or failed builds.May 4 2023, 9:15 AM
This revision was automatically updated to reflect the committed changes.

@Peiming, can you please check? The build is failing. Pasting the link to the premerge check: https://buildkite.com/llvm-project/premerge-checks/builds/150384#0187e78d-a047-440e-9018-0eecabff91c8

In file included from /home/scratch/llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/CodegenEnv.h:17,
                 from /home/scratch/llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp:13:
/home/scratch/llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/LoopEmitter.h: In member function ‘constexpr mlir::sparse_tensor::TensorLevel mlir::sparse_tensor::LoopEmitter::makeTensorLevel(mlir::sparse_tensor::TensorId, mlir::sparse_tensor::Level) const’:
/home/scratch/llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/LoopEmitter.h:199:29: error: call to non-‘constexpr’ function ‘unsigned int mlir::sparse_tensor::LoopEmitter::getNumTensors() const’
  199 |     return l * getNumTensors() + t;
      |                ~~~~~~~~~~~~~^~
/home/scratch/llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/LoopEmitter.h:195:12: note: ‘unsigned int mlir::sparse_tensor::LoopEmitter::getNumTensors() const’ declared here
  195 |   unsigned getNumTensors() const { return tensors.size(); }
      |            ^~~~~~~~~~~~~
In file included from /usr/include/c++/11/cassert:44,
                 from /home/scratch/llvm-project/llvm/include/llvm/Support/CommandLine.h:34,
                 from /home/scratch/llvm-project/mlir/include/mlir/Pass/PassOptions.h:21,
                 from /home/scratch/llvm-project/mlir/include/mlir/Pass/PassRegistry.h:17,
                 from /home/scratch/llvm-project/mlir/include/mlir/Pass/Pass.h:13,
                 from /home/scratch/llvm-project/mlir/include/mlir/Dialect/SparseTensor/Transforms/Passes.h:21,
                 from /home/scratch/llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/CodegenEnv.h:21,
                 from /home/scratch/llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp:13:
/home/scratch/llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/CodegenEnv.h: In member function ‘constexpr mlir::sparse_tensor::TensorLevel mlir::sparse_tensor::CodegenEnv::makeTensorLevel(mlir::sparse_tensor::TensorId, mlir::sparse_tensor::Level) const’:
/home/scratch/llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/CodegenEnv.h:95:37: error: call to non-‘constexpr’ function ‘unsigned int mlir::sparse_tensor::LoopEmitter::getNumTensors() const’
   95 |     assert(loopEmitter.getNumTensors() == linalgOp->getNumOperands() &&
      |            ~~~~~~~~~~~~~~~~~~~~~~~~~^~
In file included from /home/scratch/llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/CodegenEnv.h:17


On it

vzakhari added inline comments.
mlir/lib/Dialect/SparseTensor/Transforms/CodegenEnv.h
92 ↗(On Diff #519522)

getNumTensors cannot be called from a constexpr function.