- User Since
- Oct 10 2012, 3:27 PM
Jan 3 2019
Jan 1 2019
Oct 18 2018
Apr 23 2018
Oct 12 2017
What is the motivation for using this, over using the --filter option? I have found --filter to be substantially more general and it is easier for us to support, so my preference would be to move it to be the standard way of solving this problem.
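For context, lit's --filter option takes a regular expression that is matched against test names, and only matching tests are run. A minimal self-contained sketch of that selection logic (the function name and test names here are illustrative, not lit's actual implementation):

```python
import re

def filter_tests(test_names, pattern):
    """Select tests whose names match the given regular expression,
    approximating lit's --filter behavior (unanchored search)."""
    rx = re.compile(pattern)
    return [name for name in test_names if rx.search(name)]

tests = ["suite/foo.test", "suite/bar.test", "other/foo-extra.test"]
print(filter_tests(tests, "foo"))  # → ['suite/foo.test', 'other/foo-extra.test']
```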
Jul 13 2017
LGTM, with one inline request, thanks!
Jul 6 2017
Jan 17 2017
LGTM, this seems like a great idea!
Dec 17 2016
Why don't we make this use a script we define to test the failure mode, rather than depending on an external tool, to make the test more reliable?
Oct 20 2016
Oct 9 2016
Oct 5 2016
Oct 4 2016
Cute, I like the idea.
Sep 27 2016
Ah, got it... that's a new feature to me, I am so out of the loop. :/
I think we should just wire it up with all the other tests; from lit's perspective it would just be another test suite (like we do with the regular tests and the unit tests).
I believe they do all pass, but they are *not* (IIRC) wired up to run as part of LLVM's tests. We should definitely do that.
Sep 26 2016
Greg, I assumed you had LLVM commit access and could land this?
I am too out of the loop on Clang development to be able to comment on the specific direction, but I will just say that I am highly in favor of adding new features in this direction. Thank you!
Sep 1 2016
This isn't correct, llvm-lit is something else (an llvm specific entry point to lit that knows the object directory). That doesn't belong in the base lit project.
This looks reasonable to me, can we get a test for the behavior though?
LGTM, but can you add a section on the format to lit.pod? I know we don't have complete docs on the format, but if you could at least add a section on the syntax, that would be good.
There might be clients of the capture() API that aren't expecting it to throw on error and will need to be updated, but I agree this makes more sense.
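A hedged sketch of the kind of caller update this change implies, using subprocess directly as a stand-in for lit's capture() (the exact lit API is assumed, not quoted):

```python
import subprocess

def capture(args):
    """Run a command and return its stdout as text. With check=True this
    raises CalledProcessError on a nonzero exit, mirroring the new
    throw-on-error behavior discussed above."""
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

# Callers that previously ignored failures now need explicit handling:
try:
    out = capture(["false"])
except subprocess.CalledProcessError:
    out = ""  # out is "" after the failure
print(repr(out))
```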
Jul 21 2016
Jun 26 2016
I have no objection.
Jun 9 2016
Jun 8 2016
Jun 7 2016
I merged this with corrections in 271608, 271610, and 272021.
Jun 2 2016
May 31 2016
I believe you are thinking of: http://reviews.llvm.org/D18185
May 26 2016
Mar 31 2016
In cases where this is triggered, wouldn't the warning always fire, and wouldn't the proper way to suppress it (so developers don't regularly see warnings) be to exclude that path? If so, then it seems like erroring out is OK; it just forces resolution of that issue immediately.
Applied in r265034
LGTM, thanks for the fix and the new test coverage!
Mar 24 2016
Feb 19 2016
Right, that is what I expected: something like MetricValueSignature that clients would anticipate is a unique but otherwise meaningless signature for some part of the test (and display accordingly, maybe by highlighting when it changes).
Feb 10 2016
I would just use addMetric for this. I see a binary signature as just another kind of metric data; sometimes people even use the term "measure" for the process of computing a hash signature for a binary.
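A sketch of treating a binary signature as metric data, as suggested above. The hashing is self-contained; the addMetric/toMetricValue usage in the comment is a hypothetical reconstruction of lit's metrics API, not quoted from it:

```python
import hashlib

def binary_signature(data: bytes) -> str:
    """Compute a stable hash signature for a binary artifact. The digest
    itself is meaningless; what matters is that it changes when the
    binary changes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical usage inside a lit test result (API names assumed):
#   result.addMetric("binary_signature",
#                    toMetricValue(binary_signature(exe_bytes)))
metrics = {}
metrics["binary_signature"] = binary_signature(b"example binary contents")
print(metrics["binary_signature"][:8])
```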
Jul 6 2015
I was just running the top-level suite, with lit -sv tests.
Jun 26 2015
May 25 2015
May 14 2015
May 11 2015
May 4 2015
Fine by me; can you also update the 'lit' docs in CommandGuide to note this?
Apr 30 2015
It doesn't look like ChildPIDs is ever added to?
Apr 5 2015
Mar 6 2015
Looks fine to me!
Jan 28 2015
Jan 23 2015
Jan 3 2015
Dec 10 2014
Dec 9 2014
I sympathize with the use case, and in fact if you look at LLVM and Clang's config files, they have code that tries to do a trick: if it finds it is running in the source tree without a lit.site.cfg, it tries to infer the location of the actual exec_root by checking whether you are the kind of user who puts such things in your path. It's messy, complicated, and only works for some users, so I'm ultimately not a big fan of that solution, but it was the first thing I came up with.
Looks good, some nits inline. I would like to see a test or two before this goes in, though.
LGTM, although it might be good to add a note about how require_script can be used. To be honest, I don't quite understand how you will use this yet.
In addition to the inline comments, could you include some tests that exercise this code?
The MetricValue abstraction doesn't currently provide any value really, but it is there in case we ever wanted to make use of the values (e.g., report performance changes). Given that, I think it probably makes sense to just call this JSONMetricValue and make it explicit that this is tied to values which are representable in the output, but which will otherwise be uninterpreted.
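A minimal sketch of the MetricValue/JSONMetricValue split described above (class shapes and method names are assumptions modeled on the comment, not lit's actual source):

```python
import json

class MetricValue:
    """Base abstraction: carries a value so it could be interpreted later
    (e.g., to report performance changes)."""
    def format(self):
        raise NotImplementedError
    def todata(self):
        raise NotImplementedError

class JSONMetricValue(MetricValue):
    """Explicitly tied to values representable in the JSON output, but
    otherwise uninterpreted."""
    def __init__(self, value):
        json.dumps(value)  # fail early if the value is not JSON-representable
        self.value = value
    def format(self):
        return json.dumps(self.value)
    def todata(self):
        return self.value

m = JSONMetricValue({"time": 1.25, "passes": 3})
print(m.format())
```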