This is an archive of the discontinued LLVM Phabricator instance.

[mlir][MemRefToLLVM] Remove the code for lowering subview
ClosedPublic

Authored by qcolombet on Oct 20 2022, 1:57 PM.

Details

Summary

Subviews are supposed to be expanded before we hit the lowering code.
The expansion is done with the pass called expand-strided-metadata.

Add a test that demonstrates how these passes can be linked up to achieve the desired lowering.

This patch is NFC in spirit but not in practice: subview now gets lowered into reinterpret_cast(extract_strided_metadata, <some math>), which lowers into two memref descriptors (one for reinterpret_cast and one for extract_strided_metadata). This creates some noise of the form extractvalue(unrealized_cast(extractvalue[0]))[0], which is currently not simplified within MLIR but is really just a noop in this case.
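
For illustration, a minimal sketch of the expansion for a rank-1 subview (hypothetical shapes and SSA names; the exact IR depends on the pass revision):

  // Before expand-strided-metadata:
  %v = memref.subview %src[%o] [4] [1]
      : memref<8xf32> to memref<4xf32, strided<[1], offset: ?>>

  // After expand-strided-metadata: metadata extraction feeding a
  // reinterpret_cast (in general with some affine index math in between).
  %base, %offset, %size, %stride = memref.extract_strided_metadata %src
      : memref<8xf32> -> memref<f32>, index, index, index
  %v2 = memref.reinterpret_cast %base to offset: [%o], sizes: [4], strides: [1]
      : memref<f32> to memref<4xf32, strided<[1], offset: ?>>

After memref-to-llvm, each of these two ops materializes its own memref descriptor, which is where the extractvalue(unrealized_cast(extractvalue[0]))[0] chains come from.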

Diff Detail

Event Timeline

qcolombet created this revision.Oct 20 2022, 1:57 PM
qcolombet requested review of this revision.Oct 20 2022, 1:57 PM

I am really not a fan of the layering here. This goes back to what we had with uber-passes "convert-X-Y-Z-to-A-B-C" with big dependencies. It is very surprising that running convert-memref-to-llvm will also convert any affine operations directly to LLVM, instead of lowering them to Arith where they can be jointly optimized with other arithmetic operations.

ftynse requested changes to this revision.Oct 25 2022, 2:26 AM
This revision now requires changes to proceed.Oct 25 2022, 2:26 AM

While I don't have an immediate suggestion on what to do here, I don't think memref-to-llvm should subsume affine-to-llvm and arith-to-llvm. Ideally, there shouldn't be a dependency on the Affine dialect either, but I can live with that if it is minimal.

Hi @ftynse ,

Thanks for the review.

I don't think memref-to-llvm should subsume affine-to-llvm and arith-to-llvm.

We had this exact same discussion with @nicolasvasilache and thought it was fine.
The alternative I can see is to declare affine.apply legal, so that it doesn't get expanded in the process. The problem with that is that it is oddly specific.

Let me check if that would work to begin with.

Cheers,
-Quentin

Ideally, there shouldn't be a dependency on the Affine dialect either, but I can live with that if it is minimal.

To comment on that, the alternative would be to produce arith ops instead of affine.apply in the lowering of subview, expand_shape, etc., but Nicolas and I thought the affine.apply was easier to reason about.

The alternative I can see is to declare affine.apply legal, so that it doesn't get expanded in the process.

To report on this, if I add target.addLegalOp<AffineApplyOp>(); we can avoid pulling in the affine-to-llvm and arith-to-llvm conversions. The downside is that memref-to-llvm may create affine.apply ops.
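
For concreteness, a minimal sketch of that alternative in the conversion setup (hypothetical pass body; only the addLegalOp call above is from the actual experiment, and names assume the MLIR revision of the time):

  // Sketch: mark affine.apply as legal so the conversion leaves it alone.
  LLVMTypeConverter typeConverter(&getContext());
  RewritePatternSet patterns(&getContext());
  populateMemRefToLLVMConversionPatterns(typeConverter, patterns);
  LLVMConversionTarget target(getContext());
  target.addLegalOp<AffineApplyOp>();
  // Any affine.apply produced while lowering view-like ops now survives the
  // conversion and must be handled by a later lower-affine + arith-to-llvm run.
  if (failed(applyPartialConversion(getOperation(), target,
                                    std::move(patterns))))
    signalPassFailure();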

@ftynse what do you think?

We had this exact same discussion with @nicolasvasilache and thought it was fine.

I really don't think so. I have lost track of the time I have spent helping people debug surprising lowerings back in the day when we had the "standard" dialect that conflated everything. This was one of the driving factors that ultimately led to splitting the standard dialect into smaller pieces. We've collectively spent a significant amount of time implementing that change, as well as splitting the lowerings into isolated sets of patterns, exercised by the respective passes. As much as I dislike that process, I am going to have to ask you to post an RFC on the forum if you really want to revert that decision entirely, or to say that A->B lowerings should be allowed to subsume lowerings from dialects other than A.

Now, I am aware that some downstream compilers, including the one that @nicolasvasilache is contributing to, have an uber-lowering pass that runs lowering patterns from a dozen dialects to LLVM in a single rewriter sweep. For that pass, it is okay to just depend on everything and emit intermediate ops from any dialect, but we haven't really designed upstream passes for this. Furthermore, having the memref-to-llvm _pass_ subsume patterns that populateMemRefToLLVM does not contain will break such uber-lowering passes and require modifications that are, in my opinion, surprising.

Just to give an additional concrete example, the "affine-to-standard" patterns include the conversion from affine.for to scf.for, among other things. So it is not sufficient to just have "affine" as a dependent dialect; "scf" and a bunch of other things would be needed too. Without that, the pass may trigger an assertion and stop the pipeline. Even with that, it will introduce ops from other dialects that need a two-stage lowering before they reach the LLVM dialect.
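
As a minimal sketch of that two-stage effect (hypothetical loop body):

  // Input:
  affine.for %i = 0 to 8 {
    ...
  }

  // After affine-to-standard: still not LLVM; the scf.for needs its own
  // lowering (scf-to-cf, then cf/arith-to-llvm) to reach the LLVM dialect.
  %c0 = arith.constant 0 : index
  %c8 = arith.constant 8 : index
  %c1 = arith.constant 1 : index
  scf.for %i = %c0 to %c8 step %c1 {
    ...
  }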

The alternative I can see is to declare affine.apply legal, so that it doesn't get expanded in the process. The problem with that is that it is oddly specific.

This is slightly better. Conceptually, the right solution for me would be to separate the part that produces non-LLVM operations into a separate "expand-memref-ops" pass like we have for math, and then just lower the resulting mix with the existing passes.

Thanks for the history here @ftynse !

To give a little bit of context, Nicolas realized that some code that currently lives in memref-to-llvm has to be reimplemented when doing memref-to-something-else (e.g., memref to VMVX in the IREE compiler: https://github.com/iree-org/iree/blob/main/compiler/src/iree/compiler/Dialect/VMVX/Transforms/ResolveBufferDescriptors.cpp#L21), in particular all the code for figuring out the strides, sizes, and offset of view-like operations. The goal was to move the common part of this lowering into core MLIR, i.e., what we do in populateSimplifyExtractStridedMetadataOpPatterns, but that common part obviously cannot target the final dialect (LLVM or something else). We settled on affine for the intermediate step (using arith would yield the exact same problems), hence the need for an additional lowering step.

Furthermore, having the memref-to-llvm _pass_ subsume patterns that populateMemRefToLLVM does not contain will break such uber-lowering passes and require modifications that are, in my opinion, surprising.

I didn't get that part. populateMemRefToLLVMConversionPatterns is augmented to include all the required patterns in that patch. Thus uber-lowering passes would work just fine unless I missed something.
Anyhow, that doesn't address the core issue that we don't want to pull in affine-to-std and/or arith-to-llvm.

I don't think that's possible in the current framework, but to be NFC(-ish) here we would need to apply the affine-to-std and arith-to-llvm patterns only to the view-like operations, not to all operations.

Conceptually, the right solution for me would be to separate the part that produces non-LLVM operations into a separate "expand-memref-ops" pass like we have for math, and then just lower the resulting mix with the existing passes.

That sounds exactly like what we did with the SimplifyExtractStridedMetadata pass (which we should probably rename to expand-memref-ops given how it evolved).

It looks like I am missing only one thing: how do you connect the pipeline so that it works for all the users of memref-to-llvm? We cannot expect all of them to call expand-memref-ops before calling memref-to-llvm, can we?

I didn't see how it works for the math dialect: I see the expand pass, but I don't see how it is connected with the math-to-llvm conversion.

Cheers,
-Quentin

To give a little bit of context, Nicolas realized that some code that currently lives in memref-to-llvm has to be reimplemented when doing memref-to-something-else (e.g., memref to VMVX in the IREE compiler: https://github.com/iree-org/iree/blob/main/compiler/src/iree/compiler/Dialect/VMVX/Transforms/ResolveBufferDescriptors.cpp#L21), in particular all the code for figuring out the strides, sizes, and offset of view-like operations. The goal was to move the common part of this lowering into core MLIR, i.e., what we do in populateSimplifyExtractStridedMetadataOpPatterns, but that common part obviously cannot target the final dialect (LLVM or something else). We settled on affine for the intermediate step (using arith would yield the exact same problems), hence the need for an additional lowering step.

Thanks. The implementation of "extract metadata" was a bit rushed and this explanation wasn't sufficiently clear before. This approach to factoring out the common code may be overusing the pattern idea; surely there are other ways of sharing the implementation. IMO, we could have just introduced a helper function along the lines of "emit arith dialect IR equivalent to applying the given affine map to the given operands" and called it from several patterns, instead of emitting an affine.apply (with all the baggage it brings) and then applying another pattern to lower it.
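
A minimal sketch of such a helper, assuming the expandAffineMap entry point from the AffineToStandard conversion (the emitAffineMapAsArith name and call site are hypothetical):

  // Sketch: emit arith-level index computations for `map` applied to
  // `operands`, instead of materializing an affine.apply first.
  // Assumes expandAffineMap from mlir/Conversion/AffineToStandard.
  static Value emitAffineMapAsArith(OpBuilder &builder, Location loc,
                                    AffineMap map, ValueRange operands) {
    auto maybeExpanded = expandAffineMap(builder, loc, map, operands);
    assert(maybeExpanded && maybeExpanded->size() == 1 &&
           "expected a single-result map");
    return maybeExpanded->front();
  }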

I didn't get that part. populateMemRefToLLVMConversionPatterns is augmented to include all the required patterns in that patch.

My bad, I haven't noticed what is modified exactly. But you are right that it doesn't matter for the actual issue.

I don't think that's possible in the current framework, but to be NFC(-ish) here we would need to apply the affine-to-std and arith-to-llvm patterns only to the view-like operations, not to all operations.

Yeah, apply some additional patterns only to the operations produced by specific other patterns. I'd also like this feature to run "local cleanups" instead of calling the global canonicalizer, but we don't have this, and the driver code is already highly complex.

It looks like I am missing only one thing: how do you connect the pipeline so that it works for all the users of memref-to-llvm? We cannot expect all of them to call expand-memref-ops before calling memref-to-llvm, can we?
I didn't see how it works for the math dialect: I see the expand pass, but I don't see how it is connected with the math-to-llvm conversion.

The user is expected to run expand-math or convert-math-to-libm in their pass pipeline, or otherwise get rid of those operations, before calling convert-math-to-llvm. So I guess we can expect users to call expand-memref-ops; we just have to broadcast this requirement on the communication channels.
In some distant future, we could have some abstract description of the state of the IR before and after applying a group of patterns and then have some logic figure out the order of groups to reach a certain state...

nicolasvasilache added a comment.EditedOct 27 2022, 7:42 AM

To give a little bit of context, Nicolas realized that some code that currently lives in memref-to-llvm has to be reimplemented when doing memref-to-something-else (e.g., memref to VMVX in the IREE compiler: https://github.com/iree-org/iree/blob/main/compiler/src/iree/compiler/Dialect/VMVX/Transforms/ResolveBufferDescriptors.cpp#L21), in particular all the code for figuring out the strides, sizes, and offset of view-like operations. The goal was to move the common part of this lowering into core MLIR, i.e., what we do in populateSimplifyExtractStridedMetadataOpPatterns, but that common part obviously cannot target the final dialect (LLVM or something else). We settled on affine for the intermediate step (using arith would yield the exact same problems), hence the need for an additional lowering step.

Thanks. The implementation of "extract metadata" was a bit rushed and this explanation wasn't sufficiently clear before.

How was this rushed? It was deliberately done at a snail's pace over the 3+ weeks between the beginning of the discussion and the op implementation landing.
The explanation is pretty clearly discussed here: https://discourse.llvm.org/t/extracting-dynamic-offsets-strides-from-memref/64170/9 (LLVM is but one target of interest).

This approach to factoring out the common code may be overusing the pattern idea, surely there are other ways of sharing the implementation.
IMO, we could have just introduced a helper function along the lines of "emit arith dialect IR equivalent to applying the given affine map to the given operands" and called it from several patterns instead of emitting an "affine.apply" (with all the baggage it brings) and then applying another pattern to lower it.

I don't see value in jumping that abstraction gap and introducing a significantly more convoluted implementation for a small subset of the functionality.
This is akin to reimplementing canonicalizations inside a pass rather than using the proper infrastructure, and it should actively be discouraged.

Additionally, this helper function along the lines of "emit arith dialect IR equivalent..." still does not solve the arith-to-llvm part.

I am really not a fan of the layering here. This goes back to what we had with uber-passes "convert-X-Y-Z-to-A-B-C" with big dependencies. It is very surprising that running convert-memref-to-llvm will also convert any affine operations directly to LLVM, instead of lowering them to Arith where they can be jointly optimized with other arithmetic operations.

No question that we should avoid pulling the whole of affine-to-llvm and arith-to-llvm transitively with memref-to-llvm.
My thinking was to just bring in the AffineApplyLowering pattern, but (1) even that may be too intrusive and (2) it still does not solve the arith-to-llvm problem.

Separately, I don't think selectively adding rewrite patterns to conversions is a problem.
Are you suggesting this is a general problem?
If so, it seems you are proposing that conversion infra should never be able to call rewrite patterns (I didn't look deeply into the implications here, but my gut feeling is that this is too restrictive).

Stepping back, this is the classic ordering problem we get with dialects + progressive lowering: dialects percolate ops together into clusters, and dependences arise.
The traditional solution is to split ops into new dialects; I am not sure it would make sense to put memref.extract_strided_metadata into another dialect (but maybe it does?).

An orthogonal concern is that all these memref-to-llvm and other conversion passes remain test passes that we insist on string-chaining together via mlir-opt and handwaving into a "compiler".
As long as we don't have a ConvertToLLVM pass and insist on peer-to-peer conversions by proxy, these issues will keep popping up over and over again.
A notional ConvertToLLVM could detect which dialects are registered and add all the relevant lowerings in the proper order.
Not having such a batteries-included pass makes MLIR unnecessarily hard to use, with folks running around digging for unspoken phase orderings.

For the time being, to unblock progress, I would:

  1. avoid populating the world in this pass and I would update the test to run mlir-opt --expand-memref-ops --affine-to-std --arith-to-llvm --memref-to-llvm
  2. prefix all the "-to-llvm" passes with "test" to make it clear what their intent is
  3. start a convert-to-llvm if we care about making mlir-opt usable.

@ftynse any objection to 1? (Or a better proposal?)

Thanks @ftynse and @nicolasvasilache for your inputs. I feel we've reached an agreement (and an understanding on my end!): we'll use a two-step approach, first expand-memref-ops, then the actual memref-to-llvm.

@ftynse, for communicating this new requirement, should I just send a PSA on MLIR Discourse, or do you have something else in mind?

The user is expected to run expand-math or convert-math-to-libm in their pass pipeline, or otherwise get rid of those operations, before calling convert-math-to-llvm. So I guess we can expect users to call expand-memref-ops; we just have to broadcast this requirement on the communication channels.

Sounds like a plan. I didn't realize it was okay to have dotted lines like that; I was looking for a solution that works out of the box.

Unless I'm missing something, this matches what Nicolas is suggesting in #1:

  1. avoid populating the world in this pass and I would update the test to run mlir-opt --expand-memref-ops --affine-to-std --arith-to-llvm --memref-to-llvm

I'll keep a separate test for the full convert MemRef to LLVM "pipeline".

To give a little bit of context, Nicolas realized that some code that currently lives in memref-to-llvm has to be reimplemented when doing memref-to-something-else (e.g., memref to VMVX in the IREE compiler: https://github.com/iree-org/iree/blob/main/compiler/src/iree/compiler/Dialect/VMVX/Transforms/ResolveBufferDescriptors.cpp#L21), in particular all the code for figuring out the strides, sizes, and offset of view-like operations. The goal was to move the common part of this lowering into core MLIR, i.e., what we do in populateSimplifyExtractStridedMetadataOpPatterns, but that common part obviously cannot target the final dialect (LLVM or something else). We settled on affine for the intermediate step (using arith would yield the exact same problems), hence the need for an additional lowering step.

Thanks. The implementation of "extract metadata" was a bit rushed and this explanation wasn't sufficiently clear before.

How was this rushed? It was deliberately done at a snail's pace over the 3+ weeks between the beginning of the discussion and the op implementation landing.
The explanation is pretty clearly discussed here: https://discourse.llvm.org/t/extracting-dynamic-offsets-strides-from-memref/64170/9 (LLVM is but one target of interest).

The referenced post does not seem to discuss the lowering, only the operation motivated by downstream needs. Furthermore, it states that "Our objective is to get rid of that descriptor ourselves by folding it away before LLVM", which seems to be the exact opposite of what is being implemented: extracting elements from the descriptor while keeping it alive and hoping that LLVM will simplify it. What is being implemented is, however, consistent with the state of the lowering before that proposal, which I described as "the common wisdom was that something will do SRoA and DCE at a lower level (LLVM dialect or LLVM IR) anyway" in https://discourse.llvm.org/t/extracting-dynamic-offsets-strides-from-memref/64170/16. I did not have concerns with materializing the descriptor as a data structure or with doing address arithmetic at the arith level, potentially even at the affine level. I do have concerns with this happening inside the LLVM-specific lowering pass (contrary to the motivation that LLVM is only one of the targets) and with it being exercised only indirectly.

My thinking was to just bring in the AffineApplyLowering pattern, but (1) even that may be too intrusive and (2) it still does not solve the arith-to-llvm problem.

This still brings in a dependency on Affine, which is rather heavy and mostly unnecessary... We have repeatedly mentioned the idea of having a variant of affine.apply that wasn't part of the affine dialect, but it never materialized.

Separately, I don't think selectively adding rewrite patterns to conversions is a problem.
Are you suggesting this is a general problem?
If so, it seems you are proposing that conversion infra should never be able to call rewrite patterns (I didn't look deeply into the implications here, but my gut feeling is that this is too restrictive).

I specifically object to having affine-to-std conversion patterns subsumed by the memref-to-llvm conversion pattern set, because "memref-to-llvm" mentions neither affine nor any dialect that used to be std, and to this tendency to subsume one broad set of patterns in another in general. This has nothing to do with some patterns being implemented as ConversionPattern or RewritePattern. We have ~50 conversion passes and as many pattern groups in the main repo alone. This is already difficult to comprehend and manage.

While this was not a part of my reasoning above, I do think that conversion infra should not call rewrite patterns. Patterns are written taking into account the inner workings of a specific driver, whether we want it or not. The differences between the greedy and the conversion driver are numerous and subtle. Here are some that come to mind: rewrite patterns can check for single-use or no-use of a value, while conversion patterns cannot, because the original uses may or may not be removed immediately (depending on the ops being replaced or updated in place); rewrite patterns can be applied recursively provided there is a fixed point, while conversion patterns cannot, because they would produce an operation declared illegal (and if it is declared legal, the pattern would never apply); and there is the usual "thou shalt not mutate IR bypassing the rewriter", to which the conversion rewriter is sensitive differently than the greedy one, though both will crash with abstruse memory-corruption errors.

avoid populating the world in this pass and I would update the test to run mlir-opt --expand-memref-ops --affine-to-std --arith-to-llvm --memref-to-llvm

This seems to be what I suggested above; I certainly cannot object to what I said I thought was the right approach.

Conceptually, the right solution for me would be to separate the part that produces non-LLVM operations into a separate "expand-memref-ops" pass like we have for math, and then just lower the resulting mix with the existing passes.

  2. prefix all the "-to-llvm" passes with "test" to make it clear what their intent is
  3. start a convert-to-llvm if we care about making mlir-opt usable.

There doesn't seem to be a consensus on these, and they are outside the scope of this commit.

@ftynse, for communicating this new requirement, should I just send a PSA on MLIR Discourse, or do you have something else in mind?

A PSA should do.

I'll keep a separate test for the full convert MemRef to LLVM "pipeline".

You can define an actual named pass pipeline (https://mlir.llvm.org/docs/PassManagement/#pass-pipeline-registration); this may also help users who want a one-step solution for lowering all of memref to LLVM at any cost.
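
For instance, a minimal sketch of such a registration (the pipeline name is made up, and the pass-factory spellings are assumptions that depend on the MLIR revision):

  #include "mlir/Pass/PassManager.h"
  #include "mlir/Pass/PassRegistry.h"

  // Registers "lower-memref-to-llvm-pipeline" so users get the agreed-upon
  // two-step lowering (expand, then convert) as a single mlir-opt flag.
  void registerMemRefToLLVMPipeline() {
    mlir::PassPipelineRegistration<>(
        "lower-memref-to-llvm-pipeline",
        "Expand memref ops, then lower the resulting mix to the LLVM dialect",
        [](mlir::OpPassManager &pm) {
          pm.addPass(mlir::memref::createExpandStridedMetadataPass());
          pm.addPass(mlir::createLowerAffinePass());
          pm.addPass(mlir::createArithToLLVMConversionPass());
          pm.addPass(mlir::createMemRefToLLVMConversionPass());
          pm.addPass(mlir::createReconcileUnrealizedCastsPass());
        });
  }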

Also, IIRC there was a transform in the memref dialect (or maybe in affine) that would "normalize" memrefs with non-strided layout maps by shifting the layout into affine.load and/or affine.apply operations on operands and by making the memref have the default layout instead. If it is still around, we may consider a memref-to-affine transformation that combines that with what is being proposed here.

qcolombet updated this revision to Diff 475931.Nov 16 2022, 3:12 PM
qcolombet retitled this revision from [mlir][MemRefToLLVM] Reuse existing subview lowering to [mlir][MemRefToLLVM] Remove the code for lowering subview.
qcolombet edited the summary of this revision.
  • Update the description of the PR.
  • Use simplify-extract-strided-metadata as the expand-memref pass. (Will rename that pass in a different patch.)
  • Move the tests related to subviews from memref-to-llvm to a new file that first runs the expand pass.

Note: As far as codegen goes, the new lowering produces the same code after LLVM CSE or instcombine. Some examples for collapse_shape are available at https://reviews.llvm.org/D136483

Hi @ftynse,

I've finally updated this PR to do what we discussed: use a different pass for the expand-memref part.
I still need to rename the expand pass, but that will be done in parallel.

When we're happy with the PR, I'll do a PSA for how memref conversion to LLVM needs to be done.

Cheers,
-Quentin

ftynse accepted this revision.Nov 18 2022, 4:12 AM
ftynse added inline comments.
mlir/test/Conversion/MemRefToLLVM/expand-then-convert-to-llvm.mlir
8–13 ↗(On Diff #475931)

I'd expect -reconcile-unrealized-casts to fix that.
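
For context, -reconcile-unrealized-casts erases pairs of casts that cancel out; a minimal sketch with hypothetical types:

  %0 = builtin.unrealized_conversion_cast %arg : i64 to index
  %1 = builtin.unrealized_conversion_cast %0 : index to i64
  // -reconcile-unrealized-casts replaces uses of %1 with %arg and erases
  // both casts, since the round-trip is a noop.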

This revision is now accepted and ready to land.Nov 18 2022, 4:12 AM
qcolombet added inline comments.Nov 18 2022, 10:39 AM
mlir/test/Conversion/MemRefToLLVM/expand-then-convert-to-llvm.mlir
8–13 ↗(On Diff #475931)

Ah thanks, let me add that to the pipeline.

qcolombet added inline comments.Nov 21 2022, 9:55 AM
mlir/test/Conversion/MemRefToLLVM/expand-then-convert-to-llvm.mlir
8–13 ↗(On Diff #475931)

That fixes it, but only if I run convert-func-to-llvm first, because otherwise I end up with illegal unrealized casts for function arguments. (See the example below.)

@ftynse, I'm inclined to leave the test as is, since lowering func to llvm creates a lot of unnecessary noise IMHO. (E.g., memrefs are expanded into a bunch of arguments.)
What do you think?

error: failed to legalize operation 'builtin.unrealized_conversion_cast' that was explicitly marked illegal
    %0 = builtin.unrealized_conversion_cast %arg1 : index to i64
         ^
qcolombet updated this revision to Diff 477012.Nov 21 2022, 2:59 PM
qcolombet edited the summary of this revision.

Rebase to take into account the renaming of the simplify-extract-strided-metadata pass.

bkramer added a subscriber: bkramer.Dec 2 2022, 6:03 AM

This breaks most integration tests, which for some unfathomable reason are hidden behind an MLIR_INCLUDE_INTEGRATION_TESTS CMake flag.

Failed Tests (54):
  MLIR :: Integration/Dialect/Linalg/CPU/matmul-vs-matvec.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/rank-reducing-subview.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-conv-1d-call.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-conv-1d-nwc-wcf-call.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-conv-2d-call.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-conv-2d-nhwc-hwcf-call.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-conv-3d-call.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-conv-3d-ndhwc-dhwcf-call.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-one-shot-bufferize.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-padtensor.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-subtensor-insert-multiple-uses.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-subtensor-insert.mlir
  MLIR :: Integration/Dialect/Linalg/CPU/test-tensor-matmul.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/concatenate.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/dense_output.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/dense_output_f16.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_binary.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_codegen_foreach.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_constant_to_sparse_tensor.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conv_1d_nwc_wcf.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conv_2d.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conv_2d_nhwc_hwcf.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conv_3d.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conv_3d_ndhwc_dhwcf.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conversion.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conversion_dyn.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conversion_ptr.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conversion_sparse2dense.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_conversion_sparse2sparse.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_expand.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_insert_2d.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_insert_3d.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_matmul.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_matvec.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_out_mult_elt.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_out_reduction.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_reduce_custom.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_reductions.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_rewrite_push_back.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_rewrite_sort_coo.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_sampled_mm_fusion.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_scale.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_select.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_sign.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_sorted_coo.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_spmm.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_storage.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_tensor_ops.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_transpose.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_unary.mlir
  MLIR :: Integration/Dialect/SparseTensor/CPU/sparse_vector_ops.mlir
  MLIR :: Integration/Dialect/Standard/CPU/test_subview.mlir
  MLIR :: Integration/Dialect/Vector/CPU/test-transfer-read-1d.mlir

Thanks for the heads up!

Before the breakage, I was not even aware you could turn them on and off!

I reverted the change about an hour ago and relanded it with the proper test fixes in https://reviews.llvm.org/rG786cbb09edf2e4b6b057351444720e96c6dd2066.

Sorry about that.
