This is an archive of the discontinued LLVM Phabricator instance.

[LNT] Python 3 support: fix several bytes/str inconsistencies
ClosedPublic

Authored by danilaml on Jan 11 2021, 8:06 AM.

Details

Summary

Fix Profile.render() to return str. It's expected that .render()
methods return str, but on py3 it started returning bytes.

Also fix reading/writing profile as string when it should be bytes.
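
A minimal sketch of the shape of the fix (simplified, not the literal diff; the decode codec here is an assumption, though any ASCII-compatible one works, since base64 output is ASCII-only):

    import base64

    def render(profile_bytes):
        # On Python 3, b64encode returns bytes; decode to get str. (On Python 2
        # this yields unicode instead of str, which turns out not to matter.)
        # The base64 alphabet is ASCII-only, so decoding as 'ascii' cannot fail.
        return base64.b64encode(profile_bytes).decode('ascii')

    assert isinstance(render(b'ABCD'), str)  # holds on Python 3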

Diff Detail

Repository
rL LLVM

Event Timeline

danilaml created this revision.Jan 11 2021, 8:06 AM
danilaml requested review of this revision.Jan 11 2021, 8:06 AM
danilaml added inline comments.Jan 11 2021, 8:13 AM
lnt/testing/profile/profile.py
118

Without this change, on py3 _importProfile in test_suite.py would save "Data":"b'ABCD'" to report.json (note the b''), which would lead to garbage when trying to deserialize that JSON on import.
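
A standalone illustration of that symptom (not the actual _importProfile code):

    import json

    data = b'ABCD'  # what render() returns on Python 3 before the fix
    # str() on bytes produces the repr, which then gets serialized verbatim:
    print(json.dumps({"Data": str(data)}))  # {"Data": "b'ABCD'"}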

thopre added inline comments.Jan 11 2021, 9:41 AM
lnt/testing/profile/profile.py
118

Wasn't that already the case when calling serialize without a parameter? The abstract method has:

"Serializes the profile to the given filename (base). If fname is None, returns as a bytes instance."

Also, are we guaranteed to have ascii characters?

Could you add a test for this method as well? Clearly the testsuite did not exercise this code or it would have been caught earlier.

I'll try to come up with a test case. I thought there were already tests for profile data, so I need to look into why they didn't catch it.

lnt/testing/profile/profile.py
118

Not sure I got the "serialize" bit. The problem here is that on Python 2 b64encode returns str, but on Python 3 it returns bytes.
This matters in at least one place: https://github.com/llvm/llvm-lnt/blob/f98bc1b3018248fbe551b0e7491ac5ba388216d5/lnt/tests/test_suite.py#L113

Since str(b'foo') is "b'foo'", not "foo" as was expected (at least on py2).

I assumed (based on this usage and the somewhat vague docstring above) that the usage is correct and that render() is what needs to be fixed (this patch changes the return type to unicode on py2, but that doesn't seem to matter at any point of use as far as I can tell).

In short, this code works fine on Py2 but fails on Py3: b64decode(str(b64encode(b'foo'))).
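
A standalone demonstration of that failure:

    import base64

    encoded = base64.b64encode(b'foo')  # py2: str 'Zm9v'; py3: bytes b'Zm9v'
    wrapped = str(encoded)              # py3: "b'Zm9v'", the repr, not the payload
    base64.b64decode(wrapped)           # py2: returns 'foo'; py3: raises binascii.Error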

danilaml added inline comments.Jan 11 2021, 10:00 AM
lnt/testing/profile/profile.py
118

on python3* it returns bytes

thopre added inline comments.Jan 11 2021, 10:25 AM
lnt/testing/profile/profile.py
118

My bad, I misread the parenthesis grouping. I thought decode applied to serialize, not b64encode.

I think it is indeed common usage, at least in LNT, for render to return a string. That said, I'm a bit surprised the place you show causes any issue. The TestSamples constructor should already call str on the single-element list containing the profile.

danilaml updated this revision to Diff 316436.Jan 13 2021, 9:53 AM

Added a test.

Btw, I've also found that one lit test (testing/profilev1impl.py) currently fails with Python 3 (should be trivial to fix), but this raises the question - are the tests still being run under py2 only?

lnt/testing/profile/profile.py
118

The TestSamples constructor should already call str on the single-element list containing the profile.

And str(b'foo') == "b'foo'" in py3, not "foo" like in py2.

Added a test.

Btw, I've also found that one lit test (testing/profilev1impl.py) currently fails with Python 3 (should be trivial to fix), but this raises the question - are the tests still being run under py2 only?

Tests are run in Python 3, but this test simply does not seem to be run for some reason. I tried changing one of the assertTrue calls to assertFalse inside it, and neither Python 2 nor Python 3 testing failed.

lnt/testing/profile/profile.py
118

Right, then we ought to use something other than str.

Added a test.

Btw, I've also found that one lit test (testing/profilev1impl.py) currently fails with Python 3 (should be trivial to fix), but this raises the question - are the tests still being run under py2 only?

Tests are run in Python 3, but this test simply does not seem to be run for some reason. I tried changing one of the assertTrue calls to assertFalse inside it, and neither Python 2 nor Python 3 testing failed.

I was wrong: not only is the test run, but it also fails if one uses the latest source. With the old source, nothing past the import would get run. I've tracked it down to the import of cPerf in lnt/testing/profile/perf.py. This must have been fixed by @tnfchris' patch f98bc1b3018248fbe551b0e7491ac5ba388216d5.

Apply the proposed edit to profile.py as well as the following two changes to fix the profilev1impl test error (the round-trip these modes enable is sketched after the list):

  • in lnt/testing/profile/profilev1impl.py in serialize(), change with open(fname, 'w') as fd: into with open(fname, 'wb') as fd:
  • in tests/testing/profilev1impl.py in test_saveFromRendered(), change p2 = ProfileV1.deserialize(open(f.name)) into p2 = ProfileV1.deserialize(open(f.name, "rb"))
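
A standalone sketch of that round-trip (not LNT's actual code); in text mode, py3 rejects the write outright and may fail to decode the read:

    import tempfile

    payload = b'\x89LNT\x00\xff'  # serialized profiles are bytes, not text

    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
    with open(path, 'wb') as fd:  # 'w' on py3 raises TypeError when given bytes
        fd.write(payload)
    with open(path, 'rb') as fd:  # 'r' would decode as text and can raise UnicodeDecodeError
        assert fd.read() == payload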

I'm not sure whether we should do a separate patch for that or not, considering that your fix to render is also needed for the test case to pass. You could perhaps reword your title and/or description to mention that this fixes the failing test.

lnt/testing/profile/profile.py
83
danilaml updated this revision to Diff 316614.Jan 14 2021, 4:01 AM
danilaml retitled this revision from [LNT] Python 3 support: fix Profile.render() to return str to [LNT] Python 3 support: fix several bytes/str inconsistencies.
danilaml edited the summary of this revision. (Show Details)

Could you explain what the test you added does? I'm not familiar with lnt runtest and its options, nor with all the lit machinery.

danilaml marked an inline comment as done.Jan 14 2021, 4:26 AM

Fixed tests.

Btw, I've also found several places where files are opened without being closed (e.g. open(...).read(N) == b'...', some NamedTemporaryFile usages). It's not a big issue, but a candidate for a cleanup patch.
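
For reference, the cleanup is just the context-manager form (hypothetical call site, not a specific offender in the tree):

    import tempfile

    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b'LNT!')
        path = f.name

    # leaky form: open(path, 'rb').read(4) == b'LNT!' never closes the file object
    with open(path, 'rb') as fd:  # the context manager closes it deterministically
        assert fd.read(4) == b'LNT!'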

lnt/testing/profile/profile.py
118

Where? As I've described in the summary,

profilefile = pf.render()
return lnt.testing.TestSamples(name + '.profile', [profilefile], {}, str)

I think it makes more sense (and matches expectations) to have pf.render() return str, as on py2, instead of trying to convert it to a "proper" str here.

Fixed tests.

Btw, I've also found several places where files are opened without being closed (e.g. open(...).read(N) == b'...', some NamedTemporaryFile usages). It's not a big issue, but a candidate for a cleanup patch.

Be my guest :-) I'm trying to clean up all mypy and flake8 warnings.

lnt/testing/profile/profile.py
118

Ok

danilaml marked 3 inline comments as done.Jan 14 2021, 5:25 AM

Could you explain what the test you added does? I'm not familiar with lnt runtest and its options, nor with all the lit machinery.

lit here is used in a very simple way: it just reads the file and runs all the commands in RUN: lines (the test fails if a command's exit code is non-zero). Besides that, I only use some pre-defined substitutions (%s is the current file, %t a temporary name/file, %{shared_inputs} the path to SharedInputs).

As for runtest: I've copied the beginning of the test_suite-profile.shtest test, which runs the test-suite under a fake lit (fake-lit-profile) that just copies the results.json to the designated place, but I've removed the exec-multisample option (since I want the test results imported only once).
So the end result should be LNT treating fake-results-profile-import.json as if it were the result of running the test suite. Since the path to the perf_data profile is hardcoded in the JSON, I've also had to do the replacement in fake-lit-profile-import (I think the current way the profiles are stored/handled is unfortunate, especially once the number of profiles grows).
The result of the runtest is the report.json file, with one of the Data fields containing the encoded/rendered profile data produced by the _importProfile method.
After that, the test imports this report, checking that it was processed correctly (the converted data is stored in PATH_TO_DB/data/profiles/PROFILE_NAME.lntprof). To further ensure that LNT was able to decode the profile field, I assert ProfileV2.checkFile(profile).

I thought about simplifying the test by only checking that report.json contains valid base64 in Data, but I think this way it's more foolproof (i.e. it doesn't rely on the way the data is marshaled; it just checks that the data can successfully round-trip).
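
The core of that check looks roughly like this (the ProfileV2 import path is assumed; the glob line matches the test's traceback quoted later in this thread):

    import glob
    import sys

    from lnt.testing.profile.profilev2impl import ProfileV2  # import path assumed

    # After lnt import, the decoded profile should land under the DB's data
    # directory; checkFile passing shows the base64 Data field round-tripped.
    profile = glob.glob('%s/data/profiles/*.lntprof' % sys.argv[1])[0]
    assert ProfileV2.checkFile(profile)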

thopre accepted this revision.Jan 14 2021, 5:52 AM

LGTM, thanks!

This revision is now accepted and ready to land.Jan 14 2021, 5:52 AM

I see it landed on the upstream branch rather than the main branch. Was that a mistake?

thopre requested changes to this revision.Jan 14 2021, 7:00 AM

I've tried applying your patch and its test fails for me.

tests/runtest/test_suite-profile-import.py
23 ↗(On Diff #316614)

Redundant, please remove

This revision now requires changes to proceed.Jan 14 2021, 7:00 AM

I've tried applying your patch and its test fails for me.

+$ ":" "RUN: at line 2"
+$ "rm" "-rf" "/home/thomasp/repos/lnt/test_run_tmp/runtest/Output/test_suite-profile-import.py.tmp.SANDBOX"
+$ ":" "RUN: at line 3"
+$ "lnt" "runtest" "test-suite" "--sandbox" "/home/thomasp/repos/lnt/test_run_tmp/runtest/Output/test_suite-profile-import.py.tmp.SANDBOX" "--no-timestamp" "--test-suite" "/home/thomasp/repos/lnt/tests/ru
ntest/Inputs/test-suite-cmake" "--cc" "/home/thomasp/repos/lnt/tests/../tests/SharedInputs/FakeCompilers/clang-r154331" "--use-cmake" "/home/thomasp/repos/lnt/tests/runtest/Inputs/test-suite-cmake/fake-cm
ake" "--use-make" "/home/thomasp/repos/lnt/tests/runtest/Inputs/test-suite-cmake/fake-make" "--use-lit" "/home/thomasp/repos/lnt/tests/runtest/Inputs/test-suite-cmake/fake-lit-profile-import" "--use-perf=
all" "-j2" "--verbose"
+$ ":" "RUN: at line 15"
+$ "rm" "-rf" "/home/thomasp/repos/lnt/test_run_tmp/runtest/Output/test_suite-profile-import.py.tmp.DB"
+$ ":" "RUN: at line 16"
+$ "lnt" "create" "/home/thomasp/repos/lnt/test_run_tmp/runtest/Output/test_suite-profile-import.py.tmp.DB"
+$ ":" "RUN: at line 17"
+$ "lnt" "import" "/home/thomasp/repos/lnt/test_run_tmp/runtest/Output/test_suite-profile-import.py.tmp.DB" "/home/thomasp/repos/lnt/test_run_tmp/runtest/Output/test_suite-profile-import.py.tmp.SANDBOX/bu
ild/report.json" "--show-sample-count"
+$ ":" "RUN: at line 19"
+$ "python" "/home/thomasp/repos/lnt/tests/runtest/test_suite-profile-import.py" "/home/thomasp/repos/lnt/test_run_tmp/runtest/Output/test_suite-profile-import.py.tmp.DB"
+# command stderr:
+Traceback (most recent call last):
+  File "/home/thomasp/repos/lnt/tests/runtest/test_suite-profile-import.py", line 26, in <module>
+    profile = glob.glob('%s/data/profiles/*.lntprof' % sys.argv[1])[0]
+IndexError: list index out of range

Sorry, pushed to the wrong branch (Phabricator/arcanist workflow is annoying >_<) and didn't notice your comment.

Hm, I've made sure that all tests pass on both py2 and py3 before submitting and I can't reproduce it on my machine.
Could you check if test_suite-profile-import.py.tmp.err and test_suite-profile-import.py.tmp.log have any suspicious errors (they should be in <..>/test_run_tmp/runtest/Output/)?

Sorry, pushed to the wrong branch (Phabricator/arcanist workflow is annoying >_<) and didn't notice your comment.

Hm, I've made sure that all tests pass on both py2 and py3 before submitting and I can't reproduce it on my machine.
Could you check if test_suite-profile-import.py.tmp.err and test_suite-profile-import.py.tmp.log have any suspicious errors (they should be in <..>/test_run_tmp/runtest/Output/)?

Do you use tox to run the tests?

Sorry, pushed to the wrong branch (Phabricator/arcanist workflow is annoying >_<) and didn't notice your comment.

Hm, I've made sure that all tests pass on both py2 and py3 before submitting and I can't reproduce it on my machine.
Could you check if test_suite-profile-import.py.tmp.err and test_suite-profile-import.py.tmp.log have any suspicious errors (they should be in <..>/test_run_tmp/runtest/Output/)?

Do you use tox to run the tests?

I use lit directly (lit -svv ./tests), which, as far as I understand, is what tox does as well. I've rerun all tests using tox, and the only test that failed was server/ui/V4Pages.py (I believe it's because I already had an lnt instance running and that test doesn't account for that possibility).

Sorry, pushed to the wrong branch (Phabricator/arcanist workflow is annoying >_<) and didn't notice your comment.

Hm, I've made sure that all tests pass on both py2 and py3 before submitting and I can't reproduce it on my machine.
Could you check if test_suite-profile-import.py.tmp.err and test_suite-profile-import.py.tmp.log have any suspicious errors (they should be in <..>/test_run_tmp/runtest/Output/)?

Do you use tox to run the tests?

I use lit directly (lit -svv ./tests), which, as far as I understand, is what tox does as well. I've rerun all tests using tox, and the only test that failed was server/ui/V4Pages.py (I believe it's because I already had an lnt instance running and that test doesn't account for that possibility).

Are you testing your patch on top of 042938b5c332fa63f1fefdd7116f39e161b8c51b? My guess is you are testing on a commit prior to f98bc1b3018248fbe551b0e7491ac5ba388216d5, where tests involving profiles were prevented from running. Btw, using tox with a comparison of before/after logs allows you to spot code-style issues added by your patch, like the double glob import I pointed out.

Best regards,

Thomas

Sorry, pushed to the wrong branch (Phabricator/arcanist workflow is annoying >_<) and didn't notice your comment.

Hm, I've made sure that all tests pass on both py2 and py3 before submitting and I can't reproduce it on my machine.
Could you check if test_suite-profile-import.py.tmp.err and test_suite-profile-import.py.tmp.log have any suspicious errors (they should be in <..>/test_run_tmp/runtest/Output/)?

Do you use tox to run the tests?

I use lit directly (lit -svv ./tests), which, as far as I understand, is what tox does as well. I've rerun all tests using tox, and the only test that failed was server/ui/V4Pages.py (I believe it's because I already had an lnt instance running and that test doesn't account for that possibility).

Are you testing your patch on top of 042938b5c332fa63f1fefdd7116f39e161b8c51b? My guess is you are testing on a commit prior to f98bc1b3018248fbe551b0e7491ac5ba388216d5, where tests involving profiles were prevented from running. Btw, using tox with a comparison of before/after logs allows you to spot code-style issues added by your patch, like the double glob import I pointed out.

Best regards,

Thomas

Yes, I was testing on top of that (otherwise I wouldn't have caught the issue with the profilev1 test case). I've also retested on top of the current main with the same result.
What about the logs in your case?

Btw, I can't find a link to the lnt buildbot. I think it should have already finished the testing.

Sorry, pushed to the wrong branch (Phabricator/arcanist workflow is annoying >_<) and didn't notice your comment.

Hm, I've made sure that all tests pass on both py2 and py3 before submitting and I can't reproduce it on my machine.
Could you check if test_suite-profile-import.py.tmp.err and test_suite-profile-import.py.tmp.log have any suspicious errors (they should be in <..>/test_run_tmp/runtest/Output/)?

%t.err has:

2021-01-14 18:07:42 WARNING: Traceback (most recent call last):

File "/home/thomasp/repos/lnt/lnt/testing/profile/perf.py", line 32, in deserialize
  data = cPerf.importPerf(f, nm, objdump)

NameError: global name 'cPerf' is not defined

I'm not sure why cPerf appears not to be installed under tox; it should build everything. tests/testing/cPerf.py does not have this problem because it skips the test if cPerf is not found.
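
The skip pattern referred to looks roughly like this (a sketch, not the exact code in tests/testing/cPerf.py; the import path is an assumption):

    import unittest

    try:
        import cPerf  # C extension built by setup.py; absent if LNT wasn't built
    except ImportError:
        cPerf = None

    @unittest.skipIf(cPerf is None, "cPerf extension not available")
    class PerfTest(unittest.TestCase):
        def test_has_importPerf(self):
            self.assertTrue(hasattr(cPerf, 'importPerf'))

    if __name__ == '__main__':
        unittest.main()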

Sorry, pushed to the wrong branch (Phabricator/arcanist workflow is annoying >_<) and didn't notice your comment.

Hm, I've made sure that all tests pass on both py2 and py3 before submitting and I can't reproduce it on my machine.
Could you check if test_suite-profile-import.py.tmp.err and test_suite-profile-import.py.tmp.log have any suspicious errors (they should be in <..>/test_run_tmp/runtest/Output/)?

%t.err has:

2021-01-14 18:07:42 WARNING: Traceback (most recent call last):

File "/home/thomasp/repos/lnt/lnt/testing/profile/perf.py", line 32, in deserialize
  data = cPerf.importPerf(f, nm, objdump)

NameError: global name 'cPerf' is not defined

I'm not sure why cPerf appears not to be installed under tox; it should build everything. tests/testing/cPerf.py does not have this problem because it skips the test if cPerf is not found.

Got it, this is because of tests/lit.cfg, which adds the source tree to PYTHONPATH. This takes priority over what is in .tox/py27 and .tox/py3, which is where the extension is installed when running under tox. I'm guessing you ran setup.py in develop mode, which installed the cPerf extension into the repo. If I apply this change:

-config.environment['PYTHONPATH'] = '%s:%s' % (build_root, src_root)
+config.environment['PYTHONPATH'] = '%s' % build_root

then your test passes. I'm unsure how to proceed from there. I presume running lit outside of tox requires the source repo to be in PYTHONPATH, but then your test needs to be changed not to assume cPerf is available (since it only is when lit is installed). I'm personally more inclined to prevent lit from running without lit installed, because then some of the tests are skipped.

What do you think?

I'm personally more inclined to prevent lit from running without lit installed, because then some of the tests are skipped.

Do you mean without lnt installed? If so, I agree. It makes sense to expect LNT to be fully configured/installed before running the tests.
I think the initial test skip was done in case cPerf fails to build: https://github.com/llvm/llvm-lnt/commit/a57bf295f7988de9d9f5fbf30b429cf03d54271d
Not sure if it's still relevant for supported platforms (does it build on Mac?). Even if it doesn't, there's no harm in having those tests fail there.

Not sure about the PYTHONPATH manipulations. Go ahead if it fixes running under tox.

I see you've already fixed the issue I had with LNT :: server/ui/V4Pages.py.
Is the buildbot still running for LNT? I feel uneasy seeing how many tests were silently failing.

I'm personally more inclined to prevent lit from running without lit installed, because then some of the tests are skipped.

Do you mean without lnt installed? If so, I agree. It makes sense to expect LNT to be fully configured/installed before running the tests.
I think the initial test skip was done in case cPerf fails to build: https://github.com/llvm/llvm-lnt/commit/a57bf295f7988de9d9f5fbf30b429cf03d54271d
Not sure if it's still relevant for supported platforms (does it build on Mac?). Even if it doesn't, there's no harm in having those tests fail there.

Not sure about the PYTHONPATH manipulations. Go ahead if it fixes running under tox.

I see you've already fixed the issue I had with LNT :: server/ui/V4Pages.py.
Is the buildbot still running for LNT? I feel uneasy seeing how many tests were silently failing.

Sorry, I'm not aware of a buildbot for LNT. I did not even know there was one before.

thopre accepted this revision.Jan 15 2021, 2:54 AM
This revision is now accepted and ready to land.Jan 15 2021, 2:54 AM

Sorry, I'm not aware of a buildbot for LNT. I did not even know there was one before.

I remember triggering it sometime in 2019. Don't know what happened to it since then. Maybe @cmatthews remembers.

Also, I don't know if you can, but it'd be great to finally set up the LNT (and test-suite, for that matter) repositories in Phabricator (so it'd close submitted reviews automatically).