This is an archive of the discontinued LLVM Phabricator instance.

Masked load/store optimization for scalar code
ClosedPublic

Authored by delena on Oct 18 2015, 11:17 PM.

Details

Summary

When the target is unable to handle a masked load/store intrinsic, we convert the intrinsic to scalar code. I added an optimization for the constant-mask case.
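
For readers of this archive, a minimal sketch of the constant-mask short-cut for a masked load (not the committed code; Builder, CI, Mask, PassThru, NumElts, and FirstEltPtr are assumed local names, and the IRBuilder signatures are simplified to the era of the patch):

    // If the mask is a compile-time constant vector, no control flow is
    // needed: enabled lanes become plain scalar loads and disabled lanes
    // simply keep the pass-through value.
    if (auto *ConstMask = dyn_cast<ConstantVector>(Mask)) {
      Value *Result = PassThru;
      for (unsigned Lane = 0; Lane < NumElts; ++Lane) {
        // A zero mask element means the lane is disabled.
        if (ConstMask->getAggregateElement(Lane)->isNullValue())
          continue;
        Value *LanePtr = Builder.CreateGEP(FirstEltPtr, Builder.getInt32(Lane));
        Value *Elt = Builder.CreateLoad(LanePtr);
        Result = Builder.CreateInsertElement(Result, Elt, Builder.getInt32(Lane));
      }
      CI->replaceAllUsesWith(Result);
      CI->eraseFromParent();
      return;
    }

The generic path (non-constant mask) still has to emit a per-lane compare and branch.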

Diff Detail

Repository
rL LLVM

Event Timeline

delena updated this revision to Diff 37718. Oct 18 2015, 11:17 PM
delena retitled this revision to Masked load/store optimization for scalar code.
delena updated this object.
delena added a reviewer: mkuper.
delena set the repository for this revision to rL LLVM.
delena added a subscriber: llvm-commits.
mkuper accepted this revision. Oct 21 2015, 3:21 AM
mkuper edited edge metadata.

LGTM with a few nits.

../lib/CodeGen/CodeGenPrepare.cpp
1142 ↗(On Diff #37718)

"Shorten the way" -> "Short-cut"

1154 ↗(On Diff #37718)

You can load the first scalar with the original alignment, right? (Not that I'm sure it matters any)
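
A hypothetical illustration of the point, with Ptr0/Ptr1 standing for the lane-0 and lane-1 addresses of a <4 x i32> load that was originally 16-byte aligned (era-appropriate CreateAlignedLoad signature, simplified):

    // Lane 0 inherits the original vector alignment; later lanes can
    // only be assumed aligned to the element size.
    LoadInst *L0 = Builder.CreateAlignedLoad(Ptr0, /*Align=*/16);
    LoadInst *L1 = Builder.CreateAlignedLoad(Ptr1, /*Align=*/4);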

1166 ↗(On Diff #37718)

Why do you need the temp variable? I think it's just as clear with "if (isa<ConstantVector>(Mask)) { ..."
As a different option, you could have
ConstantVector *ConstMask = dyn_cast<ConstantVector>(Mask);

And then use ConstMask in both lines 1167 and 1169.
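
Spelled out, the two options the reviewer describes look roughly like this (hypothetical sketch, not the committed code):

    // Option 1: a pure type test, no temporary at all.
    if (isa<ConstantVector>(Mask)) {
      // ...
    }

    // Option 2: one dyn_cast whose typed result is then reused for every
    // subsequent per-lane inspection of the constant mask.
    if (auto *ConstMask = dyn_cast<ConstantVector>(Mask)) {
      if (ConstMask->getAggregateElement(0u)->isNullValue()) {
        // ...
      }
    }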

1286 ↗(On Diff #37718)

Same as above.

1304 ↗(On Diff #37718)

Same as above.

This revision is now accepted and ready to land. Oct 21 2015, 3:21 AM

Michael, thanks a lot for the review!

This revision was automatically updated to reflect the committed changes.