This is an archive of the discontinued LLVM Phabricator instance.

[NVPTX] Integrate ptxas to LIT tests
ClosedPublic

Authored by asavonic on Mar 15 2022, 12:03 PM.

Details

Summary

ptxas is a proprietary compiler from Nvidia that can compile PTX to
machine code (SASS). It has a lot of diagnostics to catch errors
in PTX, which can be used to verify PTX output from llc.

Set the -DPTXAS_EXECUTABLE=/path/to/ptxas CMake option to enable it.
If this option is not set, then ptxas is substituted with true, which
effectively disables all ptxas RUN lines.

The LLVM_PTXAS_EXECUTABLE environment variable takes precedence over
the CMake option, and allows overriding the ptxas executable used for LIT
without a complete re-configuration.
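The precedence described above can be sketched in Python, roughly mirroring what a lit configuration might do (the function name and fallback logic here are illustrative, not the actual lit.cfg from the patch):

```python
import os

def resolve_ptxas(cmake_ptxas_executable):
    """Pick the ptxas binary for lit: the LLVM_PTXAS_EXECUTABLE environment
    variable wins over the CMake-configured path, and the final fallback is
    `true`, so ptxas RUN lines become harmless no-ops when nothing is set."""
    env_override = os.environ.get("LLVM_PTXAS_EXECUTABLE")
    if env_override:
        return env_override
    return cmake_ptxas_executable or "true"
```

This keeps re-configuration unnecessary: exporting the environment variable is enough to switch the binary for a single lit run.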

Diff Detail

Event Timeline

asavonic created this revision.Mar 15 2022, 12:03 PM
Herald added a project: Restricted Project. · View Herald TranscriptMar 15 2022, 12:03 PM
asavonic requested review of this revision.Mar 15 2022, 12:03 PM

This patch is here to start a discussion and get feedback.
Once we agree on the implementation I will update all other LIT tests in the same way.

tra added a comment.Mar 15 2022, 12:12 PM

Substituting true may not always work. It's OK if we only need pass/fail result from ptxas, but we may want to check ptxas output, too.

Can we make ptxas a feature instead? The tests that need it would use REQUIRES: ptxas and would be guaranteed to have ptxas available.
It may also need to be versioned, too. Some tests may require a recent enough version of ptxas in order to work (e.g. we may want to use it to test new instructions that an older ptxas may not understand).

> Substituting true may not always work. It's OK if we only need pass/fail result from ptxas, but we may want to check ptxas output, too.
>
> Can we make ptxas a feature instead? The tests that need it would use REQUIRES: ptxas and would be guaranteed to have ptxas available.
> It may also need to be versioned, too. Some tests may require a recent enough version of ptxas in order to work (e.g. we may want to use it to test new instructions that an older ptxas may not understand).

We should probably have both: substitution to true works for *all* existing tests where we can use ptxas as a sanity check, and a feature allows us to write tests for machine code.

tra added a comment.Mar 15 2022, 2:52 PM

> We should probably have both: substitution to true works for *all* existing tests where we can use ptxas as a sanity check, and a feature allows us to write tests for machine code.

Wouldn't it be redundant? If REQUIRES: ptxas is satisfied, there's no need for using true as a substitute. If it's not satisfied, then we would not run such a test, and therefore true would not be useful either.

Using ptxas in a test without REQUIRES: ptxas would be a user error, IMO, as the test would be relying on something that's not expected to be available by default and would not do anything useful.
I would actually make this assertion even stronger -- allowing a test to run and return success when we in fact didn't test anything would be the wrong thing to do. We should correctly report such a test as not executed.

> We should probably have both: substitution to true works for *all* existing tests where we can use ptxas as a sanity check, and a feature allows us to write tests for machine code.
>
> Wouldn't it be redundant? If REQUIRES: ptxas is satisfied, there's no need for using true as a substitute. If it's not satisfied, then we would not run such a test, and therefore true would not be useful either.
>
> Using ptxas in a test w/o REQUIRES: ptxas would be a user error, IMO, as the test would be relying on something that's not expected to be available by default and would not do anything useful.
> I would actually make this assertion even stronger -- allowing a test to run and return success when we in fact didn't test anything would be the wrong thing to do. We should correctly report such a test as not executed.

ptxas in a test without REQUIRES: ptxas can still work as an optional verification step, similar to opt -verify or llc -verify-machineinstrs.
Yes, if ptxas is not available then these checks will do nothing, but that should be fine as long as there is a way to enable them. We can also set up a buildbot to run with ptxas.
The advantage here is that we can add these checks to all existing LIT tests and make sure that they indeed produce valid PTX code.

For tests that require ptxas to function - REQUIRES: ptxas is definitely needed.

If we want to distinguish these two cases, then we can have two substitutions: ptxas-verify that expands to ptxas -c -o /dev/null (or to true if ptxas is not available), and just ptxas that requires the corresponding feature.
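A hypothetical sketch of how a lit configuration could register these two tiers (names and structure are illustrative only, not the actual lit.cfg from this patch):

```python
def ptxas_substitutions(ptxas_path):
    """Build (substitutions, features) for the two-tier scheme described
    above: %ptxas-verify quietly degrades to `true` when ptxas is missing,
    while direct ptxas use is guarded by a REQUIRES-able feature."""
    substitutions = []
    features = set()
    if ptxas_path:
        substitutions.append(("%ptxas-verify", f"{ptxas_path} -c -o /dev/null"))
        features.add("ptxas")  # enables tests marked `REQUIRES: ptxas`
    else:
        # No ptxas available: verification RUN lines become no-ops.
        substitutions.append(("%ptxas-verify", "true"))
    return substitutions, features
```

In an actual lit.cfg the returned pairs would be appended to config.substitutions and the feature added to config.available_features.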

tra added a subscriber: echristo.Mar 16 2022, 11:28 AM

> The advantage here is that we can add these checks to all existing LIT tests and make sure that they indeed produce valid PTX code.

Without ptxas available those tests would not do anything useful, would they? Why not just use REQUIRES: ptxas and avoid adding this special-case substitution of ptxas with true?
I really don't see it buying us anything. There are no existing ptxas tests and adding REQUIRES: ptxas to the new ones sounds like the right thing to do.

> For tests that require ptxas to function - REQUIRES: ptxas is definitely needed.
>
> If we want to distinguish these two cases, then we can have two substitutions: ptxas-verify that expands to ptxas -c -o /dev/null (or to true if ptxas is not available), and just ptxas that requires the corresponding feature.

I'm not convinced we need those two cases. I would prefer to keep things simple. If there's no ptxas available -> don't run ptxas tests.
As a user I would be unpleasantly surprised to see a test supposedly launching ptxas passing in my sandbox (because it actually ran 'true', without my knowledge), yet LLVM producing invalid PTX.
I would much rather know upfront that PTX has not been tested with ptxas than risk a false positive if ptxas was not enabled.

@echristo - WDYT? Do we pretend to run any other external tools that we use in tests if they are not available?

asavonic updated this revision to Diff 415934.Mar 16 2022, 12:23 PM
  • Added ptxas and ptxas-X.Y features.
  • Added ptxas-verify substitution that gets replaced with either ptxas or true.

> The advantage here is that we can add these checks to all existing LIT tests and make sure that they indeed produce valid PTX code.
>
> Without ptxas available those tests would not do anything useful, would they? Why not just use REQUIRES: ptxas and avoid adding this special-case substitution of ptxas with true?
> I really don't see it buying us anything. There are no existing ptxas tests and adding REQUIRES: ptxas to the new ones sounds like the right thing to do.

The idea is to add ptxas RUN lines to existing ~200 LIT tests (see access-non-generic.ll as an example). Since we cannot add REQUIRES: ptxas to all of them, we can make these new checks optional.
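For illustration, the extra verification step added to an existing test could look like the following lit test fragment (a sketch; the exact llc flags used in the patch may differ):

```llvm
; RUN: llc < %s -march=nvptx64 -mcpu=sm_20 | FileCheck %s
; Optional verification step: a no-op when ptxas is unavailable.
; RUN: llc < %s -march=nvptx64 -mcpu=sm_20 | %ptxas-verify
```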

> For tests that require ptxas to function - REQUIRES: ptxas is definitely needed.
>
> If we want to distinguish these two cases, then we can have two substitutions: ptxas-verify that expands to ptxas -c -o /dev/null (or to true if ptxas is not available), and just ptxas that requires the corresponding feature.
>
> I'm not convinced we need those two cases. I would prefer to keep things simple. If there's no ptxas available -> don't run ptxas tests.
> As a user I would be unpleasantly surprised to see a test supposedly launching ptxas passing in my sandbox (because it actually ran 'true', without my knowledge), yet LLVM producing invalid PTX.
> I would much rather know upfront that PTX has not been tested with ptxas than risk a false positive if ptxas was not enabled.

Yes, I agree. Unfortunately, I don't see any other way to reuse the existing tests and RUN lines.
If PTXAS_EXECUTABLE is set, nothing should be ignored.

tra added a comment.Mar 16 2022, 1:28 PM

> The idea is to add ptxas RUN lines to existing ~200 LIT tests (see access-non-generic.ll as an example). Since we cannot add REQUIRES: ptxas to all of them, we can make these new checks optional.

Ah. I see. Being able to verify that the generated PTX produced by existing tests does get assembled would be useful.

> ... Unfortunately, I don't see any other way to reuse the existing tests and RUN lines.

Ideally we'd want lit to provide a way to make some RUN lines conditional on available features, so we could run more tests when we can, without having to rewrite the file.
And we don't have anything like that.

That said, I'm still not quite happy about substituting it with 'true'.
Perhaps we can do it slightly differently:

  • do not run RUN lines with ptxas in them unless the feature is available, and
  • document this auto-disabled behavior of ptxas in LLVM's lit docs.

This should give us more consistent behavior. E.g. if someone writes a test checking ptxas output with FileCheck, it would still work consistently. Replacing ptxas with true would cause the test to fail in this scenario, because the RUN line would still be executed and FileCheck would expect to see the output.
It's not quite like predicating RUN lines based on available features, but it would be close enough to be usable in this case. Combined with documentation that may do.

> Ideally we'd want lit to provide a way to make some RUN lines conditional on available features, so we could run more tests when we can, without having to rewrite the file.
> And we don't have anything like that.
>
> That said, I'm still not quite happy about substituting it with 'true'.
> Perhaps we can do it slightly differently:
>
>   • do not run RUN lines with ptxas in them unless the feature is available, and
>   • document this auto-disabled behavior of ptxas in LLVM's lit docs.
>
> This should give us more consistent behavior. E.g. if someone writes a test checking ptxas output with FileCheck, it would still work consistently. Replacing ptxas with true would cause the test to fail in this scenario, because the RUN line would still be executed and FileCheck would expect to see the output.
> It's not quite like predicating RUN lines based on available features, but it would be close enough to be usable in this case. Combined with documentation that may do.

Ok, so I made a draft implementation of conditional execution of RUN commands. We can now decide which approach is better:

RUN: ptxas-verify %t-nvptx.ptx
RUN: %(when ptxas ptxas -c %t-nvptx.ptx -o /dev/null)

The first one is substituted with true if the ptxas executable is not available. It does not expose output machine code, so there are no issues with a pipelined FileCheck invocation.
The second one is a new feature of LIT: %(when cond command) is expanded to command if cond feature is available.

If we decide to proceed with the second approach, I'll finalize and submit it as a separate patch.
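As a rough model of the proposed expansion (hypothetical; the real lit implementation in the draft may differ), the %(when cond command) form could behave like:

```python
import re

# Toy model of the draft %(when cond command) expansion: keep `command`
# when `cond` names an available feature, otherwise drop the whole RUN
# line (modeled here by returning None).
WHEN_RE = re.compile(r"%\(when\s+(\S+)\s+(.*)\)")

def expand_when(run_line, available_features):
    m = WHEN_RE.search(run_line)
    if not m:
        return run_line  # no %(when ...) present; leave the line untouched
    feature, command = m.group(1), m.group(2)
    if feature in available_features:
        return WHEN_RE.sub(command, run_line)
    return None
```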

asavonic updated this revision to Diff 416283.Mar 17 2022, 1:39 PM
  • Added a draft for %(when cond command) LIT syntax.
tra added a subscriber: mstorsjo.Mar 17 2022, 2:46 PM
>   • Added a draft for %(when cond command) LIT syntax.

> RUN: %(when ptxas ptxas -c %t-nvptx.ptx -o /dev/null)

That may work. I'd change the syntax a bit to separate the condition check from the command itself.
E.g. something like RUN: %when(ptxas) ptxas -c %t-nvptx.ptx -o /dev/null

Perhaps we can just expose available features via environment variables and then let shell do the work.
E.g. set LIT_HAS_PTXAS=1 and then use RUN: ((LIT_HAS_PTXAS)) && ptxas command

The downside of this approach is that it assumes that we do run tests using shell, which may not always be the case (on Windows?).
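For illustration only, the environment-variable idea might look like this in a POSIX shell (LIT_HAS_PTXAS is the hypothetical flag name from above; a POSIX test is used instead of bash's ((...)) arithmetic so it also works in minimal shells):

```shell
#!/bin/sh
# Hypothetical feature flag exported by the test harness.
LIT_HAS_PTXAS=1

# Gate the ptxas step on the flag; fall through quietly when unset.
if [ "${LIT_HAS_PTXAS:-0}" -eq 1 ]; then
  echo "ptxas enabled: would run ptxas here"
else
  echo "ptxas not available: skipping"
fi
```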

Another approach would be to use RUN-<feature>: to mark conditional lines. Or RUNIF:feature1,feature2:

I'm just thinking out loud. Some sort of %when predicate would work for me, but I'm not the owner of lit. Speaking of owners, we should add someone familiar to the review.
Summoning @mstorsjo -- WDYT?

>   • Added a draft for %(when cond command) LIT syntax.
>
> RUN: %(when ptxas ptxas -c %t-nvptx.ptx -o /dev/null)
>
> That may work. I'd change the syntax a bit to separate the condition check from the command itself.
> E.g. something like RUN: %when(ptxas) ptxas -c %t-nvptx.ptx -o /dev/null
>
> Perhaps we can just expose available features via environment variables and then let shell do the work.
> E.g. set LIT_HAS_PTXAS=1 and then use RUN: ((LIT_HAS_PTXAS)) && ptxas command
>
> The downside of this approach is that it assumes that we do run tests using shell, which may not always be the case (on Windows?).

Tests can either be executed by the lit internal shell, or by shelling out to an external shell. Within the llvm/clang testsuites, the lit internal shell is used by default on Windows, but not e.g. in the libcxx testsuite: https://github.com/llvm/llvm-project/blob/main/llvm/utils/lit/lit/llvm/config.py#L25-L42

The lit internal shell can handle a fair bit of expressions, but I'm not sure if it handles this feature.

> Another approach would be to use RUN-<feature>: to mark conditional lines. Or RUNIF:feature1,feature2:
>
> I'm just thinking out loud. Some sort of %when predicate would work for me, but I'm not the owner of lit. Speaking of owners, we should add someone familiar to the review.
> Summoning @mstorsjo -- WDYT?

Not many opinions on the suggested ways of implementing this. I think @jdenny is the code owner (or the closest thing) for lit; he'd have more input on the direction here.

> I think @jdenny is code owner (or the closest thing) for lit,

Not sure I am. :-)

> I think he'd have more input on the direction here.

Yes, I can offer an opinion. Thanks for tagging me.

I'm reluctant to add a new lit construct (%when or RUNIF:) based on only one use case (or are there more I missed?) when an existing construct (lit substitution) would work easily. A second or third use case might reveal more generality is needed. I'm also slightly concerned the proposed %when or RUNIF: might generally encourage people to use it when REQUIRES is the right thing to do (but I understand REQUIRES is not right for this use case).

If you go with a new construct, here are some additional capabilities that occur to me:

  • A block syntax would be nice so you don't have to repeat the condition across multiple RUN lines and don't have to string together commands with &&. Given that, a one-line syntax might not even be worthwhile.
  • Besides features, it might support lit substitutions within the condition. Some test suites (I'm thinking of libomptarget and some of my downstream work) instantiate the same lit test and the same lit RUN lines multiple times in different sub test suites for different configurations. If those configurations are reflected in lit substitutions, then the RUN lines could vary across instantiations to some degree based on the condition in a %when or RUNIF.
  • Ultimately, I imagine this would become a general preprocessor #if directive for lit. Maybe name it %if?
  • An additional operator syntax could sometimes be useful for changing parts of a command: RUN: FileCheck -DVAR=%if(cond, then, else).

But do we want to go down this road just yet? Maybe it needs to bake a bit longer? The lit substitution is simple and fits your current use case well, in my opinion.

If you go with the lit substitution, I have a few small comments. I like the proposal to discard its output. That makes it harder to misuse it, and it leads people back to REQUIRES in some cases. The substitution could be named to more clearly indicate it might not run: %optional-ptxas-verify is one possibility. When not available, it could expand to an explicit message instead of true to help people understand what's happening when debugging in lit's verbose mode. We already use the : command to report RUN line numbers, so that could be used here. The message might be "Skipping verification because ptxas is not available".

I've added a few other subscribers that have participated in lit discussions in the past.

tra added a comment.Mar 18 2022, 11:11 AM

> I'm reluctant to add a new lit construct (%when or RUNIF:) based on only one use case (or are there more I missed?)

It's a use case where we currently cannot easily verify the assembly we produce for a whole back-end (NVPTX). When we add new instructions, we have to test the output manually. The end result is that we've already had a handful of bugs that were detected only long after commit, and that has non-trivial logistical consequences. E.g. if a bug makes it into a compiler release, it's hard to get all the users to update their compiler even if we do make a patch release with the fix.

Making things easily testable would provide an important safeguard.

> when an existing construct (lit substitution) would work easily.

Could you elaborate? Do you mean the substitution of %ptxas with true if it's not available? Something else?

> A second or third use case might reveal more generality is needed. I'm also slightly concerned the proposed %when or RUNIF: might generally encourage people to use it when REQUIRES is the right thing to do (but I understand REQUIRES is not right for this use case).

Those were just strawman-style proposals. I don't have a particular attachment to any of them.
My main concern was that tests should not lie to the user. One should be able to tell a test that was not run from a test that ran and succeeded.

> But do we want to go down this [ complicated ] road just yet? Maybe it needs to bake a bit longer? The lit substitution is simple and fits your current use case well, in my opinion.

SGTM, modulo "no-lies" concern I've outlined above.

> If you go with the lit substitution, I have a few small comments. I like the proposal to discard its output.

Which output do you have in mind?

If we go with substitution, I think the sensible thing to do would be to discard the whole RUN line if the substitution fails. Unlike substituting %ptxas with true, it would not work with RUN lines that depend on ptxas output. E.g. %ptxas | FileCheck.

> That makes it harder to misuse it, and it leads people back to REQUIRES in some cases. The substitution could be named to more clearly indicate it might not run: %optional-ptxas-verify is one possibility. When not available, it could expand to an explicit message instead of true to help people understand what's happening when debugging in lit's verbose mode.

What if we introduce a pseudo-tool %if-<feature>. If the feature is available it would be substituted by an empty string and would not affect the result of the substitution of the RUN line. If the substitution fails, it would disable the RUN line and we could issue a diagnostic when run in verbose mode. It would be easy to tell what's expected to happen in the test source and, as a bonus, would allow predicating RUN lines based on multiple features.

E.g. RUN: %if-ptxas %ptxas whatever.

This would also be useful when we need to do some back-end specific tests in a largely generic test file. E.g. when we're testing generic LLVM intrinsics we may want to verify that they produce particular instruction for a given back-end, but we don't know whether that back-end is compiled in. Right now we have to copy/paste such tests per-back-end and use REQUIRES.
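A toy model of this %if-<feature> pseudo-tool (hypothetical; not part of the patch) could look like:

```python
def gate_run_line(run_line, available_features):
    """Model of the proposed %if-<feature> pseudo-tool: each leading
    %if-FOO token must name an available feature, otherwise the whole
    RUN line is disabled (None). Available prefixes expand to nothing,
    and multiple prefixes effectively AND together."""
    words = run_line.split()
    while words and words[0].startswith("%if-"):
        feature = words[0][len("%if-"):]
        if feature not in available_features:
            return None  # feature missing: disable the whole RUN line
        words = words[1:]  # feature present: the prefix expands to nothing
    return " ".join(words)
```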

> We already use the : command to report RUN line numbers, so that could be used here. The message might be "Skipping verification because ptxas is not available".

I'm not familiar with that. Could you give me an example of how/where this is used?

What I've seen done in other cases is to write a second test file that uses the first test file as input. This lets the second test file have different requirements than the first. In the example here, you wouldn't modify access-non-generic.ll but rather add a second file, say access-non-generic-ptxas.ll that contained just this:

REQUIRES: ptxas
RUN: llc < access-non-generic.ll -march=nvptx -mcpu=sm_20 -o %t.ptx
RUN: ptxas -c %t.ptx -o /dev/null

The upside is, this uses existing lit mechanisms (feature test based on discovery of "ptxas" on PATH, or some environment variable is set).
The downside is, a bit more effort to keep the two files consistent; although in this case, it looks like the "ptxas" commands are pretty simple so I shouldn't think it would be a real burden.

> when an existing construct (lit substitution) would work easily.
>
> Could you elaborate? Do you mean the substitution of %ptxas with true if it's not available? Something else?

Yes, I was referring to the %ptxas-verify proposal mentioned earlier.

> A second or third use case might reveal more generality is needed. I'm also slightly concerned the proposed %when or RUNIF: might generally encourage people to use it when REQUIRES is the right thing to do (but I understand REQUIRES is not right for this use case).
>
> Those were just strawman-style proposals. I don't have a particular attachment to any of them.
> My main concern was that tests should not lie to the user. One should be able to tell a test that was not run from a test that ran and succeeded.

I'm not seeing how %when or RUNIF: accomplishes that particular goal better than %ptxas-verify. Either way, the RUN line runs when ptxas is available and quietly doesn't run when it's not available. Am I misunderstanding?

> If you go with the lit substitution, I have a few small comments. I like the proposal to discard its output.
>
> Which output do you have in mind?

I was referring to the ptxas -c -o /dev/null proposal mentioned earlier, but now I realize that only helps with file output. More below.

> If we go with substitution, I think the sensible thing to do would be to discard the whole RUN line if the substitution fails. Unlike substituting %ptxas with true, it would not work with RUN lines that depend on ptxas output. E.g. %ptxas | FileCheck.

So the problem is someone might check ptxas diagnostic output with FileCheck? That seems like a bad idea even when you're sure ptxas is available given that it's an external tool whose diagnostics can change over time. At most, it seems like tests would be written to fail for a non-zero exit status and possibly a non-empty stderr, and that should work fine when expanding to true.

> What if we introduce a pseudo-tool %if-<feature>. If the feature is available it would be substituted by an empty string and would not affect the result of the substitution of the RUN line. If the substitution fails, it would disable the RUN line and we could issue a diagnostic when run in verbose mode. It would be easy to tell what's expected to happen in the test source and, as a bonus, would allow predicating RUN lines based on multiple features.
>
> E.g. RUN: %if-ptxas %ptxas whatever.

In comparison to the %when, RUNIF:, %if, etc. proposals, is this just a new syntax proposal? Or is there another difference to consider?

> This would also be useful when we need to do some back-end specific tests in a largely generic test file. E.g. when we're testing generic LLVM intrinsics we may want to verify that they produce particular instruction for a given back-end, but we don't know whether that back-end is compiled in. Right now we have to copy/paste such tests per-back-end and use REQUIRES.

It sounds like we're getting into more use cases now, so that makes a new construct more appealing.

> We already use the : command to report RUN line numbers, so that could be used here. The message might be "Skipping verification because ptxas is not available".
>
> I'm not familiar with that. Could you give me an example of how/where this is used?

Failed lit tests have output like the following so that, if there are many RUN lines, it's easier to tell which RUN line failed:

$ ":" "RUN: at line 1"

You might need LIT_OPTS=-vv to make it useful, depending on your config.

Another use case for some sort of "if X" construct (or RUNIF:X etc) is tests that have slightly different behaviours on Windows and Linux. One such example that already exists is in tools\llvm-objdump\X86\source-interleave-prefix-non-windows.test and its Windows equivalent tools\llvm-objdump\X86\source-interleave-prefix-windows.test. It would be great to be able to have the same test, but the behaviour to differ slightly for the specific checks. Using the RUNIF format as an example, it might look like:

RUN: ... commands to generate some output ... > output.txt
RUNIF:windows: FileCheck %s --input-file=output.txt --check-prefix=WINDOWS
RUNIF:!windows: FileCheck %s --input-file=output.txt --check-prefix=NON-WINDOWS

Of course, this above has a lot of duplicated code, so it might be nice to make the if an if/else ternary-style. Spitball idea:

RUN: ... | FileCheck %s --check-prefix={%if windows {WINDOWS} else {NON-WINDOWS}}

The specific case here could become:

RUN: {%if ptxas {ptxas some-args | FileCheck %s} else true}

Ideally, we'd even allow empty RUN lines, if a substitution on the line evaluates to an empty string, so that the else true isn't necessary. Bonus points would allow this if to span multiple lines. Maybe this:

RUN: {%if feature { \
RUN:   command1; \
RUN:   command2 | \
RUN:   command3}}

> Of course, this above has a lot of duplicated code, so it might be nice to make the if an if/else ternary-style. Spitball idea:
>
> RUN: ... | FileCheck %s --check-prefix={%if windows {WINDOWS} else {NON-WINDOWS}}

I like this direction as it seems more flexible. Why do we need the outermost braces?

> Bonus points would allow this if to span multiple lines. Maybe this:
>
> RUN: {%if feature { \
> RUN:   command1; \
> RUN:   command2 | \
> RUN:   command3}}

Might be nice if the \ is not required so it's possible to enclose multiple existing RUN lines without having to join them with ;, &&, etc.

> Of course, this above has a lot of duplicated code, so it might be nice to make the if an if/else ternary-style. Spitball idea:
>
> RUN: ... | FileCheck %s --check-prefix={%if windows {WINDOWS} else {NON-WINDOWS}}
>
> I like this direction as it seems more flexible. Why do we need the outermost braces?

Because the else might be optional, so it clearly delineates the if/else block, as opposed to something using the term "else" immediately after the if part. I'm open to other suggestions on this though.

> Bonus points would allow this if to span multiple lines. Maybe this:
>
> RUN: {%if feature { \
> RUN:   command1; \
> RUN:   command2 | \
> RUN:   command3}}
>
> Might be nice if the \ is not required so it's possible to enclose multiple existing RUN lines without having to join them with ;, &&, etc.

Agreed, though I don't have a clear view of how this might work, since the RUN lines are done pre-substitution (I think, not actually checked), and I see this "if" style as a kind of conditionalised substitution.

tra added a comment.EditedMar 18 2022, 2:59 PM

> I'm not seeing how %when or RUNIF: accomplishes that particular goal better than %ptxas-verify. Either way, the RUN line runs when ptxas is available and quietly doesn't run when it's not available. Am I misunderstanding?

The difference is that replacing the command itself with true still executes the RUN line.

I believe that the whole "RUN" line must be handled atomically. Either all commands are executed as specified by the user, or none of them, if we can't provide any of the substitutions.
We should not magically replace a command with true, execute the RUN line, and expect it not to fail for all possible commands users may put on the RUN line.
lit can control whether it runs the RUN line or not, but it has no control over what the user expected that command to do.
Sure, we can provide a restricted variant of the tool substitution which would explicitly require the RUN line not to depend on the command's outputs, only on its exit status.
That would be unnecessarily restrictive, IMO, but we can live with that. Still, I think executing (or not) the whole RUN line will give us more flexibility and consistency.

> If we go with substitution, I think the sensible thing to do would be to discard the whole RUN line if the substitution fails. Unlike substituting %ptxas with true, it would not work with RUN lines that depend on ptxas output. E.g. %ptxas | FileCheck.
>
> So the problem is someone might check ptxas diagnostic output with FileCheck?

It's just one possibility. In general, it's applicable to any output the program may produce, not just what it may print on the console. It's also not specific to ptxas either. In general it may be applicable to any optionally available external tool. ptxas just happened to be the first one where the benefit was worth the trouble implementing this patch.
Even in the case of ptxas, I can see how we may conceivably want to examine the ELF binary it produces. For instance, we may want to verify that it's got the correct ELF sections in it, or examine DWARF data, etc. Maybe disassemble the code (we'd need another optional tool for that).

> That seems like a bad idea even when you're sure ptxas is available given that it's an external tool whose diagnostics can change over time.
> At most, it seems like tests would be written to fail for a non-zero exit status and possibly a non-empty stderr, and that should work fine when expanding to true.

True, to an extent, but even potentially unstable tool output is better than nothing. I can think of using the output of ptxas -v to catch at least one kind of regression we've run into in the past.

I think we should figure out how such optional tools are supposed to work in lit and leave it up to users what they want to do with it. We don't know whether relying on external tool output is a good idea or not for all use cases.

> What if we introduce a pseudo-tool %if-<feature>. If the feature is available it would be substituted by an empty string and would not affect the result of the substitution of the RUN line. If the substitution fails, it would disable the RUN line and we could issue a diagnostic when run in verbose mode. It would be easy to tell what's expected to happen in the test source and, as a bonus, would allow predicating RUN lines based on multiple features.
>
> E.g. RUN: %if-ptxas %ptxas whatever.
>
> In comparison to the %when, RUNIF:, %if, etc. proposals, is this just a new syntax proposal? Or is there another difference to consider?

Mostly syntax. It's yet another way to express "don't run if the feature is not available", as opposed to "substitute command with true and run".

> Failed lit tests have output like the following so that, if there are many RUN lines, it's easier to tell which RUN line failed:
>
> $ ":" "RUN: at line 1"
>
> You might need LIT_OPTS=-vv to make it useful, depending on your config.

Thank you. This is good to know.

> Of course, this above has a lot of duplicated code, so it might be nice to make the if an if/else ternary-style. Spitball idea:
>
> RUN: ... | FileCheck %s --check-prefix={%if windows {WINDOWS} else {NON-WINDOWS}}
>
> I like this direction as it seems more flexible. Why do we need the outermost braces?
>
> Because the else might be optional, so it clearly delineates the if/else block, as opposed to something using the term "else" immediately after the if part. I'm open to other suggestions on this though.

I think else -> %else should fix the parsing issue.

> Bonus points would allow this if to span multiple lines. Maybe this:
>
> RUN: {%if feature { \
> RUN:   command1; \
> RUN:   command2 | \
> RUN:   command3}}
>
> Might be nice if the \ is not required so it's possible to enclose multiple existing RUN lines without having to join them with ;, &&, etc.
>
> Agreed, though I don't have a clear view of how this might work, since the RUN lines are done pre-substitution (I think, not actually checked), and I see this "if" style as a kind of conditionalised substitution.

In my downstream work, I managed to implement a lit %for data {...} syntax that unrolls its body's RUN lines before handing things to the shell. It does not require \. Based on that, we can probably get %if without \ to work somehow.

tra added a comment.Mar 18 2022, 3:09 PM

What I've seen done in other cases is to write a second test file that uses the first test file as input. This lets the second test file have different requirements than the first. In the example here, you wouldn't modify access-non-generic.ll but rather add a second file, say access-non-generic-ptxas.ll that contained just this:

REQUIRES: ptxas
RUN: llc < %S/access-non-generic.ll -march=nvptx -mcpu=sm_20 -o %t.ptx
RUN: ptxas -c %t.ptx -o /dev/null

The upside is, this uses existing lit mechanisms (feature test based on discovery of "ptxas" on PATH, or some environment variable is set).
The downside is, a bit more effort to keep the two files consistent; although in this case, it looks like the "ptxas" commands are pretty simple so I shouldn't think it would be a real burden.

As you've mentioned, we'll need to do it for all the 200 or so tests we potentially want to use ptxas on.

I believe that the whole "RUN" line must be handled atomically.

At this point, multiple use cases beyond the specific ptxas verification use case that started this thread are arising, so it sounds like a new construct that's general enough to handle them all is warranted. For that general case, I agree with your arguments along the above lines. Thanks for explaining your position.

I can think of using the output of ptxas -v to catch at least one kind of regression we've run into in the past.

Thanks. I wasn't aware of that use case.

ychen added a subscriber: ychen.Mar 23 2022, 12:36 PM

Published the LIT patch to: https://reviews.llvm.org/D122569 [lit] Support %if ... %else syntax for RUN lines

While D122569 is waiting for feedback or approval, I've submitted these patches to fix minor issues that were found during ptxas integration:

  • D123038 [NVPTX] 64-bit atom.{and,or,xor,min,max} require sm_32 or higher
  • D123039 [NVPTX] shfl.sync is introduced in PTX 6.0
  • D123040 [NVPTX] .attribute(.managed) is only supported for sm_30 and PTX 4.0
  • D123041 [NVPTX] Avoid dots in global names
tra added a comment.Apr 4 2022, 11:38 AM

While D122569 is waiting for feedback or approval, I've submitted these patches to fix minor issues that were found during ptxas integration:

  • D123038 [NVPTX] 64-bit atom.{and,or,xor,min,max} require sm_32 or higher
  • D123039 [NVPTX] shfl.sync is introduced in PTX 6.0
  • D123040 [NVPTX] .attribute(.managed) is only supported for sm_30 and PTX 4.0
  • D123041 [NVPTX] Avoid dots in global names

Thank you for the clean-ups.
You may want to arrange the patches into a stack of related revisions (see phabricator menu on the right side of patch description).

asavonic updated this revision to Diff 423984.Apr 20 2022, 12:18 PM
asavonic edited the summary of this revision. (Show Details)
  • Added LLVM_PTXAS_EXECUTABLE environment variable to simplify testing against different ptxas versions.
  • Rebased on top of D122569.
  • Integrated ptxas to all tests in CodeGen/NVPTX and DebugInfo/NVPTX.
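The override described above could be resolved in the site config roughly as follows. The helper name is made up; only the precedence (environment variable first, then the configure-time value, then true as the disable-everything fallback) reflects the behavior described in this revision:

```python
import os

def resolve_ptxas(cmake_ptxas_executable, environ=os.environ):
    # LLVM_PTXAS_EXECUTABLE from the environment wins, so different
    # ptxas versions can be tested without re-running CMake. Next comes
    # the value baked in at configure time, and if neither is set every
    # ptxas RUN line degrades to `true`, i.e. a no-op.
    return (
        environ.get("LLVM_PTXAS_EXECUTABLE")
        or cmake_ptxas_executable
        or "true"
    )

print(resolve_ptxas("", {"LLVM_PTXAS_EXECUTABLE": "/opt/cuda/bin/ptxas"}))
print(resolve_ptxas("/usr/local/cuda/bin/ptxas", {}))
print(resolve_ptxas("", {}))
```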
tra added inline comments.Apr 20 2022, 12:24 PM
llvm/test/CodeGen/NVPTX/bug41651.ll
2 ↗(On Diff #423984)

Here and everywhere -- I'd run conditional ptxas testing as the last step.
We should make sure that the actual tests have succeeded first. There is no point running ptxas on output that we already know is wrong.

I've tested the patch against every ptxas release from 9.0 to the latest, 11.6. Version 9.0 is the earliest release I was able to install on my Debian oldstable machine, so it seems like a reasonable minimum version to support. It is also worth noting that ptxas is not completely backwards compatible: newer versions can drop support for old SM architectures, so we sometimes need to account for this. For example:

; sm_30 is dropped by ptxas 11.0
; RUN: %if ptxas %{ %ptxas-verify %if !ptxas-11.0 %{-arch=sm_30%} %t.ptx %}
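Version-gated features like ptxas-11.0 imply the config knows which ptxas is installed. One plausible way to derive that is to parse `ptxas --version` output; the release list and the feature naming below are assumptions for illustration, not necessarily what the committed config does:

```python
import re

# CUDA toolkit releases shipping ptxas, from the 9.0 baseline mentioned
# above through 11.6.
KNOWN_RELEASES = ["9.0", "9.1", "9.2", "10.0", "10.1", "10.2",
                  "11.0", "11.1", "11.2", "11.3", "11.4", "11.5", "11.6"]

def ptxas_version_features(version_output):
    # `ptxas --version` prints e.g. "Cuda compilation tools, release
    # 11.2, V11.2.152". Emit one feature per release up to the detected
    # one, so "%if ptxas-11.0" means "at least 11.0" and
    # "%if !ptxas-11.0" means "older than 11.0".
    match = re.search(r"release (\d+\.\d+)", version_output)
    if not match or match.group(1) not in KNOWN_RELEASES:
        return set()
    idx = KNOWN_RELEASES.index(match.group(1))
    return {"ptxas-" + v for v in KNOWN_RELEASES[: idx + 1]}

feats = ptxas_version_features("Cuda compilation tools, release 11.2, V11.2.152")
print(sorted(feats))
```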

One disadvantage of this patch is that we can no longer pipe llc output to FileCheck: we have to use the -o option with a temporary file. This may break update_llc_test_checks.py, but I haven't confirmed this yet.
We could use tee to avoid this, but it is not portable. Perhaps we can write a simple version of tee and add it to the LLVM tools.
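A portable stand-in for tee is indeed only a few lines of Python. This is a hypothetical sketch, not an existing LLVM tool:

```python
import io
import sys

def tee(data, paths, out=sys.stdout):
    # Copy `data` to every file in `paths` and echo it to `out`,
    # close enough to tee(1) for RUN-line use on any host.
    for path in paths:
        with open(path, "w") as f:
            f.write(data)
    out.write(data)
    return data

# Demo with an in-memory stream instead of the real stdout.
buf = io.StringIO()
tee(".version 7.0\n.target sm_52\n", [], out=buf)
print(buf.getvalue(), end="")
```

Wrapped in a small driver that reads stdin and takes the file names from argv, it could slot into `llc ... | llvm-tee %t.ptx | FileCheck ...` without depending on a host tee.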

tra added a comment.Apr 20 2022, 12:40 PM

One disadvantage of this patch is that we can no longer pipe llc output to FileCheck: we have to use the -o option with a temporary file. This may break update_llc_test_checks.py, but I haven't confirmed this yet.

We don't have to use a temporary file, I think. The ptxas RUN line can get its input piped from its own llc invocation:
echo -e '.version 3.2\n.target sm_20\n.address_size 64' | $HOME/local/cuda-11.2/bin/ptxas - works for me.

One disadvantage of this patch is that we can no longer pipe llc output to FileCheck: we have to use the -o option with a temporary file. This may break update_llc_test_checks.py, but I haven't confirmed this yet.

We don't have to use a temporary file, I think. The ptxas RUN line can get its input piped from its own llc invocation:
echo -e '.version 3.2\n.target sm_20\n.address_size 64' | $HOME/local/cuda-11.2/bin/ptxas - works for me.

Right, but then we can't pipe it to file check. Something like this should work, but it is not portable:

llc ... | tee %t.ptx | FileCheck
ptxas %t.ptx
tra added a comment.Apr 20 2022, 12:57 PM

Right, but then we can't pipe it to file check. Something like this should work, but it is not portable:

llc ... | tee %t.ptx | FileCheck
ptxas %t.ptx

What stops us doing it the same way we do for all the test cases? E.g.

RUN: llc ... %s |  FileCheck
RUN: %if ptxas %{ llc... %s | ptxas - %}

Right, but then we can't pipe it to file check. Something like this should work, but it is not portable:

llc ... | tee %t.ptx | FileCheck
ptxas %t.ptx

What stops us doing it the same way we do for all the test cases? E.g.

RUN: llc ... %s |  FileCheck
RUN: %if ptxas %{ llc... %s | ptxas - %}

Well, this will probably double the execution time for all tests. The llc options would also have to be duplicated and kept in sync.

tra added a comment.Apr 20 2022, 2:11 PM

Well, this will probably double execution time for all tests. Options for llc have to be duplicated as well and kept in sync.

It's no different from using llc or any other tool on any other RUN line. In any case, for most users it will be a no-op, as ptxas will be disabled for almost everyone.

asavonic updated this revision to Diff 424908.Apr 25 2022, 7:56 AM
  • Reimplemented ptxas RUN lines to use pipes instead of temporary files
tra accepted this revision.Apr 25 2022, 11:07 AM
This revision is now accepted and ready to land.Apr 25 2022, 11:07 AM
This revision was landed with ongoing or failed builds.Apr 28 2022, 5:02 AM
This revision was automatically updated to reflect the committed changes.
thakis added a subscriber: thakis.Apr 28 2022, 5:46 AM

It looks very weird to me to have LLVM's test suite support calling some proprietary tool. I can see how this is useful, but imho it doesn't belong in LLVM itself.

This is both for practical and other reasons.

From a practical point of view, if we rely on this tool for testing in LLVM, it means we can't run LLVM's tests on platforms that that tool doesn't run on.

Independent of practical considerations, LLVM tries to be an open-source, standalone toolchain development suite. Making it depend on proprietary programs, even just optionally and at the test level, works against both of these goals.

How do others feel about this?

It looks very weird to me to have LLVM's test suite support calling some proprietary tool. I can see how this is useful, but imho it doesn't belong in LLVM itself.

This is both for practical and other reasons.

From a practical point of view, if we rely on this tool for testing in LLVM, it means we can't run LLVM's tests on platforms that that tool doesn't run on.

It is weird and inconvenient to have a proprietary tool used for testing, I agree. This is why the feature is completely optional and mainly intended for buildbots and for people who contribute to NVPTX regularly.
If I understand correctly, PTX is only defined by the ISA manual, which is sometimes incomplete or unclear, so ptxas serves as the reference implementation. In fact, requests to verify llc-generated code with ptxas (such as [1]) were frequent and helpful enough to justify this patch set. Before this integration, I had to dump llc output and run ptxas with the correct options for each test, for every test I could have broken.

[1]: https://reviews.llvm.org/D114367#3147361

Independent of practical considerations, LLVM tries to be an open-source, standalone toolchain development suite. Making it depend on proprietary programs, even just optionally and at the test level, works against both of these goals.

I personally wouldn't call it a dependency. If there were a free software tool that could compile and verify PTX assembly (I'm not aware of any), it should be fairly easy to support it as well.
In any case, ptxas is just a tool used for development. It is the same story as with static analyzers: yes, many of them are proprietary, but if some people are willing to run them, they can help make free software better.

tra added a comment.Apr 28 2022, 12:00 PM

I agree, with caveats, that in general LLVM should not depend on proprietary tools.
I'm also potentially biased as I'm one of the primary beneficiaries of these changes.

It's unfortunate that we want to verify that LLVM produces valid output yet are unable to do so
without ptxas, as it is the ultimate consumer of LLVM's output and the source of ground truth when it comes to the validity of PTX syntax.

Without this patch, the only choice we have is to land NVPTX back-end changes with limited testing of what *we* think is valid PTX.
ptxas does not always agree. In order to verify the correctness of the PTX, we do need to feed the generated PTX to ptxas.
Right now that has to be done manually. I frequently ask contributors to do it during code reviews, and it has allowed us to spot some errors early.
We've also discovered a handful of old errors by running the tests introduced by this patch.

We do have bots running the CUDA test-suite, but they are mostly concerned with the front-end being functional for
various combinations of GPU, CUDA version, and standard C++ library.
These tests are not particularly well-suited to verifying the syntax of the wide range of instructions we generate.

We could have set up yet another CPU-only buildbot for running ptxas compilation tests.
I actually do plan to do that, and this patch makes it relatively easy.
That would still be removed from the LLVM commits that introduce the changes in PTX output, though:
developers would still have to run their own tests manually, and we would still need some sort of solution for that.

This feels like (maybe is?) supporting a target whose assembler happens not to be generally distributed, but which LLVM still supports as a target. For those environments/bots that actually have the assembler installed, having that extra level of validation seems worthwhile.

But perhaps this discussion should be taking place on discourse, not in a review?

tra added a comment.May 2 2022, 2:34 PM

This feels like (maybe is?) supporting a target whose assembler happens not to be generally distributed, but which LLVM still supports as a target.
For those environments/bots that actually have the assembler installed, having that extra level of validation seems worthwhile.

That is exactly the case.

But perhaps this discussion should be taking place on discourse, not in a review?

Sure. If @thakis has further concerns or questions, I'll be happy to continue the discussion there.

thakis added a comment.May 4 2022, 7:30 AM

I think having a short thread on discourse might be a good idea.

If we had used the strategy here in the clang-cl project, I feel we would've ended up in a much less useful place. Now we have (cross-platform!) PDB writers and dumpers and whatnot in LLVM. If we had (optionally) relied on proprietary tools the way this patch does, that wouldn't have happened.