This is an archive of the discontinued LLVM Phabricator instance.

[lit] Implement non-pipelined ‘mkdir’, ‘diff’ and ‘rm’ commands internally
ClosedPublic

Authored by MaggieYi on Nov 2 2017, 2:10 PM.


Event Timeline

MaggieYi created this revision.Nov 2 2017, 2:10 PM
zturner edited edge metadata.Nov 2 2017, 2:20 PM

First off, let me say this is very cool. I look forward to the day when all commands are builtin, and we can switch every client to the builtin lit shell.

I have two primary thoughts upon reading the patch:

  1. I don't see any tests for the case where the commands *succeed*, only for the case where they fail. Shouldn't we have some tests that verify the commands actually work?
  2. Now that we have upwards of 6 builtin commands, I'm starting to think they should be separated into their own file, for example lit/test/ShellBuiltins.py. We could test them via unittests (e.g. see how BooleanExpression.py implements its unit tests).

What do you think?
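The ShellBuiltins.py unit-test idea mentioned above might look roughly like the sketch below. The helper function and its name are hypothetical (lit's builtins live in TestRunner.py at this point in the review); the pattern of keeping a unittest.TestCase next to the code it tests is the one BooleanExpression.py is said to use.

```python
import unittest

def parse_rm_args(args):
    """Hypothetical helper splitting 'rm' arguments into flags and paths."""
    force = recursive = False
    paths = []
    for arg in args:
        # Treat leading dash-groups (before any path) as combined flags.
        if arg.startswith('-') and len(arg) > 1 and not paths:
            force = force or 'f' in arg[1:]
            recursive = recursive or 'r' in arg[1:] or 'R' in arg[1:]
        else:
            paths.append(arg)
    return force, recursive, paths

class TestParseRmArgs(unittest.TestCase):
    def test_no_flags(self):
        self.assertEqual(parse_rm_args(['a', 'b']),
                         (False, False, ['a', 'b']))

    def test_combined_flags(self):
        force, recursive, paths = parse_rm_args(['-rf', 'dir'])
        self.assertTrue(force)
        self.assertTrue(recursive)
        self.assertEqual(paths, ['dir'])
```

Such a file could then be run directly with `python -m unittest`, independent of any lit test suite.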

Update: I actually do see where you're testing that they work, but I'm not convinced that the tests are sufficient, for several reasons:

  1. It doesn't test that the builtin command is actually run rather than the OS command; it only tests that whatever command is run doesn't fail.
  2. It doesn't test that the result of the command is as expected; it only tests that it doesn't return an error. For a somewhat unrealistic illustration of this, imagine that all the commands just returned success without actually doing anything. The test as it is written now would pass.
  3. Now that we have upwards of 6 builtin commands, I'm starting to think they should be separated into their own file, for example lit/test/ShellBuiltins.py. We could test them via unittests (e.g. see how BooleanExpression.py implements its unit tests).

What do you think?

Hi Zachary,

Many thanks for reviewing the patch, and sorry for my late reply.

Yes, you are right. It could be done as a follow-up patch.

Thanks,

Update: I actually do see where you're testing that they work, but I'm not convinced that the tests are sufficient, for several reasons:

  1. It doesn't test that the builtin command is actually run rather than the OS command; it only tests that whatever command is run doesn't fail.
  2. It doesn't test that the result of the command is as expected; it only tests that it doesn't return an error. For a somewhat unrealistic illustration of this, imagine that all the commands just returned success without actually doing anything. The test as it is written now would pass.

Good points. I am working on this issue now.

rnk added inline comments.Nov 6 2017, 2:50 PM
lit/lit/TestRunner.py
593–596

Can you refactor the handling of non-pipelined commands, either in this change, or split before or after? I'm imagining a dict of commands and methods that implement builtin shell commands, which would then be ripe for refactoring into a separate shell module or something.
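One possible shape for the dict-of-commands refactor suggested here is sketched below. The names are hypothetical, not lit's actual code; each handler takes an argument list, prints any diagnostic to stderr, and returns an exit code.

```python
import os
import shutil
import sys

def builtin_mkdir(args):
    """Create each named directory; non-zero exit code on failure."""
    for path in args:
        try:
            os.mkdir(path)
        except OSError as e:
            sys.stderr.write("Error: 'mkdir' command failed, %s\n" % e)
            return 1
    return 0

def builtin_rm(args):
    """Remove each named file or directory tree."""
    for path in args:
        try:
            if os.path.isdir(path):
                shutil.rmtree(path)
            else:
                os.remove(path)
        except OSError as e:
            sys.stderr.write("Error: 'rm' command failed, %s\n" % e)
            return 1
    return 0

# The registry rnk describes: command name -> handler.
BUILTIN_COMMANDS = {
    'mkdir': builtin_mkdir,
    'rm': builtin_rm,
}

def execute_builtin(name, args):
    """Dispatch to a builtin; return None for commands we don't implement,
    so the caller can fall back to the external program."""
    handler = BUILTIN_COMMANDS.get(name)
    if handler is None:
        return None
    return handler(args)
```

With this shape, adding a seventh builtin is one function plus one dict entry, and the whole table is easy to lift into a separate shell module later.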

600–603

I'd be surprised if this isn't used somewhere.

MaggieYi updated this revision to Diff 121882.Nov 7 2017, 6:14 AM

Updated the test (valid-shell.txt) to cover cases where the lit internal shell commands succeed.

  1. I don't see any tests for the case where the commands *succeed*, only for the case where they fail. Shouldn't we have some tests that verify the commands actually work?

I think it tests the lit builtin command for the following reasons:

  1. The lit test suite enables lit's internal shell for ShTests (llvm/utils/lit/tests/lit.cfg, line 14: config.test_format = lit.formats.ShTest(execute_external=False)), so the lit tests use the internal shell instead of an external shell.
  2. The function _executeShCmd returns once a shell builtin command finishes (lines 571, 578 and 585 in TestRunner.py); it never goes on to execute the OS command.
  3. Since all the failing tests print the builtin commands' error messages, the successful tests must also be running under lit's builtin shell.

What do you think?

  2. Now that we have upwards of 6 builtin commands, I'm starting to think they should be separated into their own file, for example lit/test/ShellBuiltins.py. We could test them via unittests (e.g. see how BooleanExpression.py implements its unit tests).

Thanks, I have updated the patch to solve this issue.

lit/lit/TestRunner.py
593–596

It is a good idea.

600–603

I'm sorry, I'm not sure about this. Could you possibly give me an explanation? Thanks.

MaggieYi updated this revision to Diff 122330.Nov 9 2017, 1:59 PM

Updated the builtin diff shell to support the ‘-u’, ‘-b’ and ‘-w’ options. Now all LLVM and Clang tests pass with both Python 2 and Python 3.

MaggieYi updated this revision to Diff 122523.Nov 10 2017, 2:08 PM

Optimised the patch by using the Python ‘getopt’ module.
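A sketch of how getopt can drive the option handling for the builtin diff, covering the -u, -b and -w flags mentioned above. The function and flag names are illustrative, not the actual patch code.

```python
import getopt

def parse_diff_args(args):
    """Parse the diff options this builtin understands via getopt.

    Unknown options raise getopt.GetoptError, which the caller can turn
    into the usual stderr message plus non-zero exit code.
    """
    flags = {'unified': False,
             'ignore_space_change': False,
             'ignore_all_space': False}
    opts, paths = getopt.getopt(args, 'ubw')
    for opt, _ in opts:
        if opt == '-u':
            flags['unified'] = True
        elif opt == '-b':
            flags['ignore_space_change'] = True
        elif opt == '-w':
            flags['ignore_all_space'] = True
    return flags, paths
```

getopt stops at the first non-option argument by default, which matches the `diff [options] file1 file2` shape these tests use.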

zturner accepted this revision.Nov 17 2017, 9:50 AM

looks good to me. rnk@, are you ok?

This revision is now accepted and ready to land.Nov 17 2017, 9:50 AM
rnk accepted this revision.Nov 17 2017, 1:37 PM

lgtm

MaggieYi updated this revision to Diff 123687.Nov 20 2017, 4:23 PM

The MC/AsmParser/preserve-comments-crlf.s test failed because it uses the unsupported diff option --strip-trailing-cr. I updated the internal diff command to support this option.
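The --strip-trailing-cr behaviour can be sketched as a small preprocessing step on the lines fed to the comparison. This is a hypothetical helper assuming GNU diff's semantics (drop a carriage return that sits just before the newline), not the patch's actual code.

```python
def strip_trailing_cr(lines):
    """Normalize CRLF line endings to LF, like diff --strip-trailing-cr,
    so files that differ only in line endings compare equal."""
    result = []
    for line in lines:
        if line.endswith('\r\n'):
            line = line[:-2] + '\n'
        elif line.endswith('\r'):
            # Final line without a newline.
            line = line[:-1]
        result.append(line)
    return result
```

Running both inputs through this before difflib-style comparison makes a CRLF file and its LF twin produce an empty diff.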

This revision was automatically updated to reflect the committed changes.
MaggieYi reopened this revision.Nov 23 2017, 5:26 AM

Reverted rL318911 since it broke the sanitizer-windows bot.

This revision is now accepted and ready to land.Nov 23 2017, 5:26 AM
MaggieYi updated this revision to Diff 124534.Nov 28 2017, 3:19 AM

The previous patch didn’t support sequential commands, for example “mkdir existing_folder || true”. This caused the sanitizer-windows tests to fail. The patch has been updated to support sequential commands.
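The short-circuit semantics needed for “mkdir existing_folder || true” can be sketched like this. The flat-list representation is hypothetical (lit's real parser builds a command AST); `run` stands in for whatever executes a single command and returns its exit code.

```python
def run_sequence(commands, run):
    """Execute [cmd, op, cmd, op, cmd, ...] with shell '&&'/'||' rules.

    '&&' runs the next command only if the previous one succeeded;
    '||' runs it only if the previous one failed. Skipped commands
    leave the exit code of the last executed command in place.
    """
    exit_code = run(commands[0])
    i = 1
    while i < len(commands):
        op, cmd = commands[i], commands[i + 1]
        if (op == '&&' and exit_code == 0) or \
           (op == '||' and exit_code != 0):
            exit_code = run(cmd)
        i += 2
    return exit_code
```

Under these rules, a failing mkdir followed by `|| true` yields exit code 0, which is exactly the pattern the sanitizer-windows tests rely on.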

rnk accepted this revision.Nov 29 2017, 1:09 PM

lgtm

I think the Windows ASan test cases should probably use mkdir -p instead, but I like the changes you made to have the internal shell commands simply print to stderr and return a non-zero exit code. I think that will be simpler and more readable when writing new test cases with the internal shell, and the test will get a FAIL status instead of an UNRESOLVED status.
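The error convention described here can be sketched as a tiny helper (hypothetical name, not lit's actual code): the builtin reports its diagnostic on stderr and hands back a non-zero exit code instead of raising, so the test is marked FAIL rather than UNRESOLVED.

```python
import sys

def builtin_error(command, message):
    """Report a builtin failure: diagnostic to stderr, non-zero return."""
    sys.stderr.write("Error: '%s' command failed, %s\n" % (command, message))
    return 1
```

A handler would then end its failure path with `return builtin_error('mkdir', str(e))` and let the surrounding pipeline logic treat the result like any other command's exit code.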

This revision was automatically updated to reflect the committed changes.