
[Analyzer][Z3] Test fixes for Z3 constraint manager
ClosedPublic

Authored by vlad.tsyrklevich on Jun 4 2018, 9:43 AM.

Diff Detail

Repository
rL LLVM

Event Timeline

Impressive, really. So almost all tests still pass.
However, a few questions should be resolved before we can merge:

  • Do all tests for Z3 run when LLVM is configured to use Z3? I'm not sure that's the right approach: do all tests start to take 10x as long to run once Z3 is enabled?

Should we create a separate ninja target for those?

  • Is anyone interested in running a buildbot that would run those tests?
george.karpenkov accepted this revision. Jun 4 2018, 12:10 PM

Seems to be good to merge regardless.

This revision is now accepted and ready to land. Jun 4 2018, 12:10 PM
ddcc added a subscriber: dcoughlin. Jun 4 2018, 12:53 PM
  • Do all tests for Z3 run when LLVM is configured to use Z3? I'm not sure that's the right approach: do all tests start to take 10x as long to run once Z3 is enabled?

The test approach is what @dcoughlin suggested in D28952, where the tests are run using each constraint manager that is available. At that time, he measured an increase in test time from 25s to 90s (see https://reviews.llvm.org/D28952#686284).

  • Is anyone interested in running a buildbot that would run those tests?

@dcoughlin mentioned that he (Apple?) would set up a buildbot for this, but I don't know if that happened.

@ddcc To be completely honest, I see a few design issues with the current implementation of the Z3 backend,
the main one being that it checks satisfiability after every single exploded node.
To the best of my knowledge, reasonable scalability cannot be achieved with such an approach,
and I'm not sure how feasible it would be to change it without rewriting most of the checkers.

Thus we currently do not plan to set up a Z3 bot, but if you wish to maintain one, we can certainly provide pointers on how to do so.

The test approach is what @dcoughlin suggested in D28952, where the tests are run using each constraint manager that is available.

Apologies for the conflicting instructions. Now that @mikhail.ramalho is working on refutation with Z3, it might not make sense to force the user to run all tests with Z3 at all times.
What do you think about introducing ninja check-clang-analyzer to run the analyzer tests, and ninja check-clang-analyzer-z3 to run all the analyzer tests with Z3?
[there might be another ninja target for running all analyzer tests, I just don't remember which one. But at the end of the day there should be a way to run analyzer tests without Z3 even when Z3 is linked in]
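For context on how such gating typically works: LLVM's lit runner skips tests whose REQUIRES features were not registered by the site configuration, so a z3 feature could be added only when Z3 is linked in. The sketch below is a toy model of that filtering idea in plain Python, not actual lit internals; the test names and feature sets are hypothetical.

```python
# Toy model of lit-style feature gating (illustrative; not real lit code).
# Each test declares the features it requires; the runner selects only
# those tests whose requirements are all available in this build.

def select_tests(tests, available_features):
    """Return names of tests whose required features are all satisfied."""
    return [name for name, requires in tests
            if requires <= available_features]

tests = [
    ("plist-output.c", set()),              # ordinary analyzer test
    ("z3-refute.c", {"z3"}),                # needs the Z3 solver linked in
    ("z3-crosscheck.c", {"z3", "asserts"}),
]

# Without Z3, only the plain analyzer test runs.
print(select_tests(tests, {"asserts"}))  # ['plist-output.c']
# With Z3 enabled, the Z3-gated tests become eligible too.
print(select_tests(tests, {"asserts", "z3"}))
```

Under this scheme, a check-clang-analyzer-z3 target would simply invoke the same suite with the z3 feature (and a solver-selection flag) turned on, leaving the default target Z3-free.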

ddcc added a comment. Jun 4 2018, 6:23 PM

@ddcc To be completely honest, I see a few design issues with the current implementation of the Z3 backend,
the main one being that it checks satisfiability after every single exploded node.
To the best of my knowledge, reasonable scalability cannot be achieved with such an approach,
and I'm not sure how feasible it would be to change it without rewriting most of the checkers.

I agree, though a number of these are limitations of CSA itself, not of the backend specifically.

Thus we currently do not plan to set up a Z3 bot, but if you wish to maintain one, we can certainly provide pointers on how to do so.

I don't plan to set one up either. Just compiling clang/llvm is already very resource-intensive.

What do you think about introducing ninja check-clang-analyzer to run the analyzer tests, and ninja check-clang-analyzer-z3 to run all the analyzer tests with Z3?
[there might be another ninja target for running all analyzer tests, I just don't remember which one. But at the end of the day there should be a way to run analyzer tests without Z3 even when Z3 is linked in]

Sounds good.

I agree, though a number of these are limitations of CSA itself, not of the backend specifically.

Yeah. For instance, we always assume that for a given state we know whether it's feasible or not,
but in my opinion, efficient SMT solver support would require operating over "lazy" states which may or may not be feasible.

This revision was automatically updated to reflect the committed changes.