This patch extends @llvm.vector.reduce.fadd to take an additional integer
argument indicating the reduction order to apply.
Diff Detail
- Repository
- rG LLVM Github Monorepo
Event Timeline
I don't get this change. We already use reassoc FMF to allow a non-ordered reduction, what's the purpose of the new flag?
This also needs a LangRef update.
Please note that this patch is WIP.
After we add a new argument to the intrinsic, how can we update the existing lit tests automatically? I haven't tried updating them all by hand, since that looks like a huge amount of work, and I suspect there is an easier way that I'm not aware of.
Thanks for taking a look at this!
The relevant context is here: https://reviews.llvm.org/D116736
I don't get this change. We already use reassoc FMF to allow a non-ordered reduction, what's the purpose of the new flag?
Well, maybe I didn't understand the reviewer's idea correctly? Any suggestions are appreciated!
reassoc implies 'any order', but in some cases it is desirable to specify a specific order, e.g. for the vector reduction builtin provided by Clang.
I'm not convinced about all this implied complexity.
I think clang will just have to emit expanded form of the reduction in that case.
Are we following someone else's standard here, or can we specify the behavior? If a specific reduction order is required, I would very much expect that to be "ordered reduction".
I'm not a fan of introducing a third order here if we can avoid it. The value of having something other than ordered/unordered is not clear to me.
I assume ordered here means sequential. The clang builtin specifies a tree-wise reduction order. The main motivation for that order is to enable faster execution and consistent results on different HW architectures. At the moment the existing intrinsic is either slow & consistent or fast & inconsistent.
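To make the "slow & consistent" vs. "fast & inconsistent" distinction concrete, here is a small Python sketch (not from the patch; the helper names are illustrative) showing that a strict sequential reduction and a fixed tree-wise reduction are each deterministic, yet can produce different results:

```python
def sequential_reduce(xs, init=0.0):
    # Strictly ordered: ((init + x0) + x1) + ... ; matches a scalar loop.
    acc = init
    for x in xs:
        acc += x
    return acc

def tree_reduce(xs):
    # Fixed pairwise (tree-wise) order: reproducible and parallelizable.
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return tree_reduce(xs[:mid]) + tree_reduce(xs[mid:])

xs = [1e16, 1.0, 1.0, 1.0]
print(sequential_reduce(xs))  # each +1.0 is absorbed by the large value
print(tree_reduce(xs))        # 1.0 + 1.0 survives as 2.0
```

Both orders give the same answer on every run and on every machine; only reassociating freely (as reassoc permits) makes the result order-dependent.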
I'm not convinced about all this implied complexity.
I think clang will just have to emit expanded form of the reduction in that case.
Sure it could, but it seems unfortunate that such a common reduction pattern cannot be expressed with the reduction intrinsic. The main benefit of using the reduction intrinsic here is to make instruction selection substantially easier. AFAICT the main complexity will be in the backends, which will now have to be able to deal with the new order. But given that at the moment only AArch64 and RISCV are affected, it shouldn't be too bad?
It's worth noting that "sequential" is only desirable for matching scalar code (which is important, but isn't everything). Explicit vector code essentially always prefers a tree reduction (either even + odd or low + high, depending on arch/uarch, but either is better than sequential). Either normal tree reduction delivers faster results with smaller average errors, and they are still reproducible. Having a defined tree reduction is a great tool for explicit vectorizers.
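The two tree shapes mentioned above could be sketched like this (hypothetical Python helpers, assuming a power-of-two element count; not tied to any particular backend):

```python
def tree_reduce_pairs(xs):
    # "even + odd": add adjacent lanes, halving the vector each step.
    # The association is fixed, so the result is reproducible.
    while len(xs) > 1:
        xs = [xs[i] + xs[i + 1] for i in range(0, len(xs), 2)]
    return xs[0]

def tree_reduce_halves(xs):
    # "low + high": add the upper half onto the lower half each step.
    # A different fixed association, also reproducible.
    while len(xs) > 1:
        half = len(xs) // 2
        xs = [xs[i] + xs[half + i] for i in range(half)]
    return xs[0]
```

Which shape is faster depends on the arch/uarch (e.g. whether lane shuffles or half-vector extracts are cheaper), but both take log2(n) steps instead of n.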