Setting up input data for benchmarks and integration tests can be tedious in
pure MLIR. With more sparse tensor work planned, this convenience library
simplifies reading sparse matrices in the popular Matrix Market Exchange
Format (see https://math.nist.gov/MatrixMarket). Note that this library
is *not* part of core MLIR. It is merely intended as a convenience library
for benchmarking and integration testing.
Diff Detail
- Repository
- rG LLVM Github Monorepo
Event Timeline
I'm new to MLIR, so sorry in advance for noob MLIR comments!
mlir/integration_test/Sparse/CPU/matrix-market-example.mlir
- Line 27: To make this more generic, shouldn't `a` be allocated after we know `m` and `n`?
- Line 48: I think there should be a separate routine to convert a sparse matrix to dense, i.e., one routine to read the Matrix Market entries into three arrays (i[nnz], j[nnz], and val[nnz], read and stored all at once instead of entry-by-entry), and another to convert these arrays to a dense matrix. If we materialize the dense matrix in the read routine itself, we wouldn't be able to read many of the large sparse matrices from real applications.
- Line 69: Noob question: why isn't there a CHECK-SAME in front of `( 1, 0, 0, 1.4, 0 )`, just like the four lines below?
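The three-array layout suggested above can be sketched as follows. This is a hedged illustration only: the `COOMatrix` struct and `readMatrixMarketCOO` name are hypothetical and not part of the patch, and the sketch handles only coordinate-format real matrices.

```cpp
#include <cstdint>
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Illustrative sketch: read Matrix Market coordinate data into three
// parallel COO arrays (row, col, val) instead of materializing a dense
// matrix, so large matrices from real applications stay readable.
struct COOMatrix {
  uint64_t rows = 0, cols = 0, nnz = 0;
  std::vector<uint64_t> i, j;  // coordinates, converted to 0-based
  std::vector<double> val;
};

COOMatrix readMatrixMarketCOO(std::istream &in) {
  COOMatrix m;
  std::string line;
  // Skip the banner and any '%' comment lines.
  while (std::getline(in, line) && !line.empty() && line[0] == '%') {
  }
  // The first non-comment line holds: rows cols nnz.
  std::istringstream header(line);
  header >> m.rows >> m.cols >> m.nnz;
  m.i.reserve(m.nnz);
  m.j.reserve(m.nnz);
  m.val.reserve(m.nnz);
  uint64_t r, c;
  double v;
  while (in >> r >> c >> v) {
    m.i.push_back(r - 1);  // Matrix Market coordinates are 1-based
    m.j.push_back(c - 1);
    m.val.push_back(v);
  }
  return m;
}
```

Reading all nnz entries into these three arrays in one pass also makes a later COO-to-CSR (or COO-to-dense) conversion a separate, reusable step.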
mlir/integration_test/data/test.mtx
- Line 13: Great idea making the coordinates not sorted! This makes for a good test case for when we want to convert to CSR, etc. :)
mlir/lib/ExecutionEngine/SparseUtils.cpp
- Line 37: (If we are still going to have this routine.)
- Line 81: Should this be inline?
Welcome to this repo, Penporn!
And thanks for the comments.
mlir/integration_test/Sparse/CPU/matrix-market-example.mlir
- Line 27: Yes, you are right. I was a bit lazy about anticipating the size, but for illustration purposes, adapting to the size is better. I changed this to read more like an example. Thanks!
- Line 48: Note that the purpose of this library is not to provide deep support for sparse computations, but merely something lightweight that makes it a bit easier to set up tests and benchmarks. I did it this way to avoid a difficult interaction between memory allocation at the MLIR level and in the C library. By "coming back first" after reading the header, all allocation can be done in MLIR.
- Line 69: The -SAME part indicates that the CHECK continues on the same line as the previous CHECK.
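As a minimal illustration of the distinction (these directives are invented for this example, not taken from the actual test file): if the program prints `( 1, 0, 0, 1.4, 0 )` on a single line, both directives below must match within that one line, because CHECK-SAME rejects the input if its match lands on a different line than the previous match:

```mlir
// CHECK: ( 1, 0
// CHECK-SAME: , 0, 1.4, 0 )
```

A plain CHECK, by contrast, is allowed to find its match further down in the output.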
mlir/lib/ExecutionEngine/SparseUtils.cpp
- Line 37: I strongly prefer our own very lightweight implementation over pulling in sources from elsewhere.
- Line 81: If you prefer, but given the static scope, the compiler can already decide to inline this based on its heuristics.
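The "coming back first" protocol described above can be sketched as two entry points. This is a hypothetical sketch, not the patch's actual API: the first call parses only the header so the MLIR side learns the sizes and performs all allocation itself; the second call then fills caller-owned buffers, so the C library never allocates.

```cpp
#include <cstdint>
#include <istream>
#include <sstream>
#include <string>

// Call 1 (sketch): parse only the header so the caller (MLIR) learns the
// sizes and can allocate all buffers on its side.
void readMMHeader(std::istream &in, uint64_t *rows, uint64_t *cols,
                  uint64_t *nnz) {
  std::string line;
  // Skip banner/comment lines starting with '%'.
  while (std::getline(in, line) && !line.empty() && line[0] == '%') {
  }
  std::istringstream(line) >> *rows >> *cols >> *nnz;
}

// Call 2 (sketch): fill caller-allocated arrays; no allocation here.
void readMMEntries(std::istream &in, uint64_t nnz, uint64_t *i, uint64_t *j,
                   double *val) {
  for (uint64_t k = 0; k < nnz; ++k) {
    uint64_t r, c;
    double v;
    in >> r >> c >> v;
    i[k] = r - 1;  // 1-based in the file, 0-based in memory
    j[k] = c - 1;
    val[k] = v;
  }
}
```

Splitting the header read from the entry read is what lets the MLIR test allocate `memref`s of exactly the right size before the data is streamed in.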
Very cool @aartbik , nice use of MLIR / C interop to avoid hardcoding stuff in IR prematurely.
mlir/integration_test/Sparse/CPU/matrix-market-example.mlir
- Line 59: This is a nice minimal first step to think about for adding sparse functionality to Linalg. There are a few things going on here that would be nice to separate. Let's chat in our next VC.
mlir/lib/ExecutionEngine/SparseUtils.cpp
- Line 43: This whole API forces us into using files, while we ought to work with a buffer abstraction in general (i.e., we can get a file or a buffer from another source, but the API should abstract over that). Can you refactor this to use something like llvm::MemoryBuffer, for example?
mlir/lib/ExecutionEngine/SparseUtils.cpp
- Line 43: I like this idea (I did not know about the built-in memory buffer support). Coming up in the next sparse utils enhancements...