This is an archive of the discontinued LLVM Phabricator instance.

[MLIR][Linalg] Add max named op to linalg
ClosedPublic

Authored by rengolin on Jul 7 2023, 4:12 AM.

Details

Summary

I've been trying to come up with a simple and clean implementation for
ReLU. TOSA uses clamp, which is probably the end goal, but that means
using table-gen to make it efficient (attributes, lowering only min or max).

For now, max is a reasonable named op in its own right, independent of
ReLU, so we can start using it for tiling and fusion. Once that works,
we can create a more complete clamp op that doesn't need a whole tensor
filled with zeroes or ones to implement the different activation functions.

As with other named ops, we start by "requiring" type casts, broadcasts,
and zero-filled constant tensors alongside a more complex pattern-matcher,
and can slowly simplify with attributes or structured matchers (e.g. PDL)
in the future.
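The ReLU-as-max pattern described above can be sketched in linalg IR roughly as follows. This is a hypothetical example: the static 4x8xf32 shape and the function name are made up, and it assumes the standard elementwise named-op ins/outs syntax.

```mlir
// ReLU(x) = max(x, 0), expressed as an elementwise max
// against a zero-filled tensor (hypothetical shapes).
func.func @relu(%x : tensor<4x8xf32>) -> tensor<4x8xf32> {
  %c0 = arith.constant 0.0 : f32
  %init = tensor.empty() : tensor<4x8xf32>
  // Materialize the all-zeroes operand the summary mentions.
  %zeros = linalg.fill ins(%c0 : f32)
             outs(%init : tensor<4x8xf32>) -> tensor<4x8xf32>
  // The new named op: elementwise maximum of the two inputs.
  %relu = linalg.max ins(%x, %zeros : tensor<4x8xf32>, tensor<4x8xf32>)
            outs(%init : tensor<4x8xf32>) -> tensor<4x8xf32>
  return %relu : tensor<4x8xf32>
}
```

A future clamp op with min/max attributes would avoid materializing the %zeros tensor entirely.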

Diff Detail

Event Timeline

rengolin created this revision. Jul 7 2023, 4:12 AM
Herald added a project: Restricted Project. Jul 7 2023, 4:12 AM
rengolin requested review of this revision. Jul 7 2023, 4:12 AM
nicolasvasilache accepted this revision. Jul 7 2023, 4:48 AM

As with other named ops, we start by "requiring" type casts, broadcasts, and zero-filled constant tensors alongside a more complex pattern-matcher, and can slowly simplify with attributes or structured matchers (e.g. PDL) in the future.

sgtm!

This revision is now accepted and ready to land. Jul 7 2023, 4:48 AM
This revision was automatically updated to reflect the committed changes.