- User Since
- Aug 20 2012, 11:51 AM (461 w, 8 h)
Fri, Jun 18
Thank you, Matthias, for doing this! I apologize for leaving this in the sorry state it was in.
@HassanElDesouky sorry, I'm not that familiar with this code.
Thu, Jun 17
It makes me so sad that the memref.dim infection is forcing this to go in memref/transforms :'(
Tue, Jun 15
Hi @dfki-jugr, can we please split memref.dim into tensor.dim (and memref.rank into tensor.rank)? I've now independently heard complaints about this from 5+ people across the many different teams I interact with.
Mon, Jun 14
Fri, Jun 11
Thu, May 27
DimOp should already have a canonicalizer that will use InferShapedTypeOpInterface to rewrite dim(linalgop(x)) to some_expr_of(dim(x)). Do you have a test case where that doesn't happen?
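The canonicalization pattern described above can be sketched in Python. This is a hypothetical illustration only, loosely modeled on MLIR's InferShapedTypeOpInterface: the names `Dim`, `Transpose`, `reify_result_dim`, and `canonicalize_dim` are all invented for this sketch and are not the actual MLIR API.

```python
# Toy sketch of the dim(shaped_op(x)) -> expr_of(dim(x)) rewrite.
# All names here are invented; MLIR's real mechanism is the
# InferShapedTypeOpInterface used by DimOp's canonicalizer.

class Dim:
    """dim(value, index): query one dimension of a shaped value."""
    def __init__(self, value, index):
        self.value, self.index = value, index

class Transpose:
    """Toy shaped op: result dim i equals operand dim perm[i]."""
    def __init__(self, operand, perm):
        self.operand, self.perm = operand, perm

    def reify_result_dim(self, index):
        # Express a result dimension directly in terms of the operand,
        # bypassing the transpose itself.
        return Dim(self.operand, self.perm[index])

def canonicalize_dim(dim):
    """Rewrite dim(shaped_op(x), i) via the op's shape-reification hook,
    so the shaped op may become dead if only its shape was needed."""
    if hasattr(dim.value, "reify_result_dim"):
        return dim.value.reify_result_dim(dim.index)
    return dim

d = canonicalize_dim(Dim(Transpose("x", perm=[1, 0]), 0))
print(d.value, d.index)  # the dim now queries "x" directly
```

The key property, as in the real canonicalizer, is that the rewritten dim no longer refers to the producing op at all, only to its operands.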
May 19 2021
May 13 2021
LG once updated to not use the interface.
Nicolas and I had a long conversation regarding the "in place op interface". We managed to tease out two independent concepts.
May 12 2021
Code looks fine. Will let @sjarus review the rounding algorithm details.
May 11 2021
It seems there is agreement that this is a useful feature to support. Can someone LGTM?
May 10 2021
I have a dialect in npcomp that would greatly benefit from this. The basic problem is that we are importing a notion of "op" that is present in PyTorch, and has a "namespace" and "unqualified portion" of the op names. The "namespace" isn't really a dialect, but is useful to map to C++ namespaces in the code. We still model all these ops as part of a single dialect of "registered torch ops".
May 7 2021
We should commit this, but it is unfortunate: we now have a dependency cycle between std and memref, e.g. MemRefDialect::materializeConstant uses mlir::ConstantOp. (Actually, it seems like a bug that the memref dialect's dependentDialects doesn't declare a dependency on std.)
May 6 2021
May 5 2021
btw, I think you might need to reupload the patch. I don't see the new code.
Any progress on splitting memref.dim into tensor.dim?
May 4 2021
Apr 30 2021
Apr 27 2021
Apr 26 2021
LGTM from me. Please get LGTM from someone with sparse compiler expertise on the exact API for SparseTensorEncodingAttr (choice of default, what would be most ergonomic, etc.).
Apr 23 2021
I still caught a few places using bare "lattice" (noted some; more exist). There are a lot of cases where you say "lattice value" or "lattice state" as well which probably should all be normalized to "lattice element".
Apr 22 2021
LGTM. The code looks reasonable, assuming you verified the outputs are correct on a few cases (always tricky to statically inspect the code).
I still would rather see this as just attributes on the SparseTensorAttr class, until there is a demonstrated need to make this an interface (and that this particular interface is the right choice). But I don't have the energy to argue it further. Carry on.
Apr 20 2021
Apr 19 2021
Awesome. Looks great!
Apr 14 2021
Apr 12 2021
Any progress on splitting memref.dim into tensor.dim? Multiple downstream projects are now complaining about this.
Thanks! Great work!
Apr 9 2021
Apr 8 2021
This is not a correct xform because broadcastability is not transitive.
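The non-transitivity can be seen with a minimal sketch of the NumPy-style broadcast-compatibility check (the helper `broadcastable` is written here just for illustration): shape (2,) broadcasts with (1,), and (1,) broadcasts with (3,), yet (2,) does not broadcast with (3,).

```python
def broadcastable(a, b):
    """NumPy-style check: two shapes are broadcast-compatible if, aligned
    from the trailing dimension, each dim pair is equal or one of them is 1.
    Missing leading dims are implicitly 1, hence the truncating zip."""
    for x, y in zip(reversed(a), reversed(b)):
        if x != y and x != 1 and y != 1:
            return False
    return True

# (2,) ~ (1,) and (1,) ~ (3,), yet (2,) !~ (3,): broadcastability is not
# transitive, so a rewrite that chains broadcasts through a middle shape
# can produce an invalid (or differently-shaped) result.
print(broadcastable((2,), (1,)))  # True
print(broadcastable((1,), (3,)))  # True
print(broadcastable((2,), (3,)))  # False
```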
Apr 7 2021
Apr 6 2021
Not sure if you want to add folders in this patch.
Apr 5 2021
Apr 2 2021
Apr 1 2021
Mar 25 2021
Mar 24 2021
I think just a patch splitting the op into tensor.dim and memref.dim would be fine. There isn't much to discuss.
Mar 23 2021
It looks like this patch simply moved std.dim to memref.dim, without doing the "splitting" which we had discussed. I now have code that only operates on tensors that is needing to pull in the memref dialect just for this. Can you split memref.dim into tensor.dim for the tensor case?
I haven't audited it yet, but I suspect we can do better. In this case, the binding for PassManager::run would need to check that the op's context is the same as the PassManager's own context.