This is an archive of the discontinued LLVM Phabricator instance.

[mlir][linalg] Add support for folding pack(fill) into fill.
ClosedPublic

Authored by hanchung on May 3 2023, 4:46 PM.
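For context, a minimal sketch of the rewrite this pattern performs. The function names, shapes, and tile sizes below are illustrative assumptions, not code taken from the revision or its tests:

```mlir
// Before: a linalg.fill whose only use is a tensor.pack.
func.func @fill_then_pack(%cst: f32) -> tensor<8x16x16x16xf32> {
  %empty = tensor.empty() : tensor<128x256xf32>
  %fill = linalg.fill ins(%cst : f32)
      outs(%empty : tensor<128x256xf32>) -> tensor<128x256xf32>
  %dest = tensor.empty() : tensor<8x16x16x16xf32>
  %pack = tensor.pack %fill inner_dims_pos = [0, 1] inner_tiles = [16, 16]
      into %dest : tensor<128x256xf32> -> tensor<8x16x16x16xf32>
  return %pack : tensor<8x16x16x16xf32>
}

// After the fold: the fill writes directly into the packed layout, and the
// now-unused pack of the fill result can be dropped.
func.func @fill_packed(%cst: f32) -> tensor<8x16x16x16xf32> {
  %dest = tensor.empty() : tensor<8x16x16x16xf32>
  %fill = linalg.fill ins(%cst : f32)
      outs(%dest : tensor<8x16x16x16xf32>) -> tensor<8x16x16x16xf32>
  return %fill : tensor<8x16x16x16xf32>
}
```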

Diff Detail

Event Timeline

hanchung created this revision. May 3 2023, 4:46 PM
Herald added a project: Restricted Project. · View Herald Transcript · May 3 2023, 4:46 PM
hanchung requested review of this revision. May 3 2023, 4:46 PM
hanchung updated this revision to Diff 519311. May 3 2023, 5:26 PM

clang-format

qedawkins accepted this revision. May 3 2023, 6:03 PM

Mostly LGTM. Also I have a question about how best to handle patterns for named linalg ops going forward.

mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
491–492

Can we forward the new pack op destination to the fill op if the result is a tensor.empty (same as we're doing for generic elementwise in D149250)? This should be fine because the padding value and fill value are the same.

492

Once padding is supported (with some toggle option) in propagation through generics, would it be better to let the linalg.fill building happen via a callback specified in BubbleUpPackOpThroughFillOpPattern? Then we could reuse the generic-op pattern for any linalg op, which would help consolidate code at the expense of extra analysis in cases like this one.

This revision is now accepted and ready to land. May 3 2023, 6:03 PM
chelini added inline comments. May 4 2023, 1:14 AM
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
481

nit, I would write: SmallVector<Value> operands = {fillOp.getDpsInitOperand(0)->get(), packOpDest};

486

nit: operands.append(packOp.getInnerTiles().begin(), packOp.getInnerTiles().end());

mlir/test/Dialect/Linalg/data-layout-propagation.mlir
843

Do we plan to fold the pack into the fill op in another PR? I would avoid the packing and make the fill operation work directly on the packed layout. Do you have some use cases where you want to preserve the packing? If so, we could do this folding in populateSimplifyTensorPack.

hanchung marked 4 inline comments as done. May 4 2023, 3:51 PM
hanchung added inline comments.
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
491–492

After reading chelini's comment, I think we can drop the pack op directly. I can't think of a case where we'd want to preserve the new pack op, because it is only used by the new fill op. What do you think?

492

Propagation with a padding value is tricky, especially for generic ops. The fill op case is fairly simple, so I'd like to keep it scoped to this pattern for now.

mlir/test/Dialect/Linalg/data-layout-propagation.mlir
843

Sounds really good to me; maybe we can do it in this PR.

hanchung updated this revision to Diff 519685. May 4 2023, 3:51 PM
hanchung marked 2 inline comments as done.

address comments

hanchung retitled this revision from [mlir][linalg] Add support for propagating pack op through linalg.fill to [mlir][linalg] Add support for folding pack(fill) into fill. May 4 2023, 3:52 PM
qedawkins added inline comments. May 4 2023, 4:52 PM
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
491–492

That's probably fine too, although I think they work out to be essentially the same thing whenever the fill is on an empty tensor (with an extra floating pack that can get DCE'd). Correct me if I'm wrong about that though.

My only concern was if the fill op destination was a non-empty tensor for whatever reason and we would then want to maintain the use-def chain similar to bubbling up through pure elementwise (as noted in D149250, given that fill is just a specific kind of pure elementwise). The difference is that filling a tensor is only sensible on an empty tensor then? Basically I don't see why a named op gets to be propagated differently than an equivalent generic (without padding). At a minimum I would expect us to be using the LinalgFillOpInterface for this then.
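A sketch, under the same illustrative assumptions as above, of the alternative that keeps the use-def chain: the original fill destination is packed first and the new fill writes into that packed tensor.

```mlir
// Keeping the pack chain: pack the original destination, then fill it.
func.func @fill_keep_pack_chain(%orig_dest: tensor<128x256xf32>,
                                %cst: f32) -> tensor<8x16x16x16xf32> {
  %pack_dest = tensor.empty() : tensor<8x16x16x16xf32>
  %packed = tensor.pack %orig_dest inner_dims_pos = [0, 1] inner_tiles = [16, 16]
      into %pack_dest : tensor<128x256xf32> -> tensor<8x16x16x16xf32>
  %fill = linalg.fill ins(%cst : f32)
      outs(%packed : tensor<8x16x16x16xf32>) -> tensor<8x16x16x16xf32>
  return %fill : tensor<8x16x16x16xf32>
}
```

When %orig_dest is itself a tensor.empty, the pack carries no data, so the fill destination can be forwarded to %pack_dest and the floating pack becomes dead, which reduces to the folded form.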

hanchung added inline comments. May 4 2023, 5:55 PM
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
491–492

> My only concern was if the fill op destination was a non-empty tensor for whatever reason...

It looks fine to me because everything in the tensor world is destructive: filling any tensor (maybe except a bufferization.alloc_tensor op) creates a new tensor holding the fill value. The pass is intended to be applied at the tensor graph level, which is higher than bufferization, so I think it is fine.

A generic-op version of the fill op seems very rare to me. I agree that it could come up, but we don't have to expose it at this moment. I have an idea for supporting padding in limited cases, and it could cover this case.

> At a minimum I would expect us to be using the LinalgFillOpInterface for this then.

I was not aware of the interface. I studied it and found that we can't use it here. The property it guarantees is having a scalar value operand and one output operand, but the computation body can be different; e.g., we could define a FillExpOp using the interface that fills arith.exp(scalar) into the output operand. In that context, we'd need a different check between the padding value and the filled value. The interface is only used by the linalg.fill op today, so I think we can scope this to linalg.fill at the moment. If we really have other needs, it should be easy to extend.
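To make the concern concrete, here is a rough sketch (names, maps, and shapes are assumptions) of such a fill-like computation written as a generic; the stored value is the exponential of the scalar (math.exp upstream), so it would not match a pack padding value equal to the scalar itself:

```mlir
#scalar_map = affine_map<(d0, d1) -> ()>
#id_map = affine_map<(d0, d1) -> (d0, d1)>
// A hypothetical "fill exp" computation: every element of the output is
// exp(%scalar), not %scalar, so comparing the padding value against the
// scalar operand alone would be insufficient.
func.func @fill_exp(%scalar: f32, %init: tensor<128x256xf32>) -> tensor<128x256xf32> {
  %res = linalg.generic
      {indexing_maps = [#scalar_map, #id_map],
       iterator_types = ["parallel", "parallel"]}
      ins(%scalar : f32) outs(%init : tensor<128x256xf32>) {
  ^bb0(%in: f32, %out: f32):
    %e = math.exp %in : f32
    linalg.yield %e : f32
  } -> tensor<128x256xf32>
  return %res : tensor<128x256xf32>
}
```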

hanchung added inline comments. May 4 2023, 6:00 PM
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
491–492

I'm open to keeping the pack chain and folding it away only if the dest tensor is a tensor.empty op. There could be other use cases that I haven't seen before.

qedawkins added inline comments. May 4 2023, 6:30 PM
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
491–492

I see, this is fine then; I didn't intend to block. I just figured named linalg ops should try to stay as close as possible to the generic patterns (the only difference being the pack chain in this case), but agreed, it would be a strange usage of a fill op that I haven't seen either. I can go either way on keeping the pack chain, because the pattern looks cleaner without it. Thanks for the discussion!

hanchung added inline comments. May 4 2023, 7:52 PM
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
491–492

I'm happy that we had this nice discussion! (I planned to wait a day in case others would like to review it. No worries at all!)

chelini added inline comments. May 5 2023, 1:35 AM
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
491–492

> That's probably fine too, although I think they work out to be essentially the same thing whenever the fill is on an empty tensor (with an extra floating pack that can get DCE'd). Correct me if I'm wrong about that though.
>
> My only concern was if the fill op destination was a non-empty tensor for whatever reason and we would then want to maintain the use-def chain similar to bubbling up through pure elementwise (as noted in D149250, given that fill is just a specific kind of pure elementwise). The difference is that filling a tensor is only sensible on an empty tensor then? Basically I don't see why a named op gets to be propagated differently than an equivalent generic (without padding). At a minimum I would expect us to be using the LinalgFillOpInterface for this then.

Thanks for all these discussions! In D149250, I attempted to avoid packing an 'init' operand if the 'init' is already a tensor.empty; in that case the empty carries only the shape, so we can simply update it to the packed version. I am starting to wonder if the into keyword in tensor.pack is confusing. Internally there was some discussion about it because people think we are writing into the destination tensor. Let's discuss this offline in a call!

mlir/test/Dialect/Linalg/data-layout-propagation.mlir
843

Why do we want to pass %dest as an argument for the fill? It seems a bit weird at the tensor level. I would create a tensor.empty inside the func.

859

same here.

qedawkins added inline comments. May 5 2023, 9:07 AM
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
491–492

> I am starting to wonder if the into keyword in tensor.pack is confusing. Internally there was some discussion about it because people think we are writing into the destination tensor. Let's discuss this offline in a call!

Agreed, that has confused me as well; also happy to discuss in a call!

hanchung marked an inline comment as done. May 5 2023, 11:30 AM
hanchung added inline comments.
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
491–492

Agreed, into is a confusing term. Maybe we can use ins and outs instead, since the destination is only intended to capture the shape information. Happy to discuss it!

mlir/test/Dialect/Linalg/data-layout-propagation.mlir
843

It is not a big concern to me because a function can sometimes be inlined into a program; my intention was to create a minimal test for the pass. We can do this for static cases, but not for dynamic shapes, where we have to tie the dim sizes to something; passing the destination as an argument is the simplest way I've found. Let's do it for static shapes and leave the dynamic case as it is.
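For illustration, a rough sketch (function name, element type, and tile sizes are assumptions, not the revision's actual test) of why the dynamic case keeps the destination as a function argument: the packed outer dims are unknown, so the argument is what ties them to concrete SSA values.

```mlir
// Dynamic shapes: the ?x? outer dims of the packed destination have no other
// anchor in the IR, so the destination comes in as a function argument.
func.func @fill_pack_dynamic(%dest: tensor<?x?x16x16xf32>) -> tensor<?x?x16x16xf32> {
  %cst = arith.constant 0.000000e+00 : f32
  %fill = linalg.fill ins(%cst : f32)
      outs(%dest : tensor<?x?x16x16xf32>) -> tensor<?x?x16x16xf32>
  return %fill : tensor<?x?x16x16xf32>
}
```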

hanchung updated this revision to Diff 519930. May 5 2023, 11:32 AM

update tests

hanchung updated this revision to Diff 519931. May 5 2023, 11:33 AM

update tests

This revision was landed with ongoing or failed builds. May 5 2023, 11:42 AM
This revision was automatically updated to reflect the committed changes.
nicolasvasilache added inline comments.
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
453

I had the impression that @qcolombet had written something like this ~1 month ago?

Signaling in case there is a dedup that needs to happen here.

qcolombet added inline comments. May 11 2023, 2:09 PM
mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp
453

We definitely talked about it, but no, I didn't land anything on that.