The front-end and the runtime currently use the unix logical
representation, but lowering did not. This inconsistency could have
caused issues.
The only place in lowering that defines the logical representation is
the translation from FIR to LLVM (FIR itself is agnostic to the
actual representation). More precisely, the LLVM implementation of
fir.convert between i1 and fir.logical is what defines the
representation, as sketched after the list below:
- fir.convert from i1 to fir.logical defines the .true. and .false.
canonical representations
- fir.convert from fir.logical to i1 decides what the test for
truth is.
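For illustration, here is a minimal FIR sketch of the two conversions
(the kind 4 and the value names are illustrative only; any
fir.logical kind behaves the same way):

  // i1 -> fir.logical: materializes the canonical .true./.false. values.
  %l = fir.convert %b : (i1) -> !fir.logical<4>
  // fir.logical -> i1: performs the test for truth.
  %t = fir.convert %l : (!fir.logical<4>) -> i1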
The unix representation is (see the LLVM-level sketch after this
list):
- .true. canonical integer representation is 1
- .false. canonical integer representation is 0
- the test for truth is "integer representation != 0"
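As a sketch of what codegen produces under the unix representation,
assuming a logical of kind 4 lowered to i32 (again, the kind and
value names are illustrative), the two conversions become:

  // i1 -> fir.logical<4>: zero-extension gives .true. = 1, .false. = 0.
  %l = llvm.zext %b : i1 to i32
  // fir.logical<4> -> i1: truth test is "integer representation != 0".
  %zero = llvm.mlir.constant(0 : i32) : i32
  %t = llvm.icmp "ne" %l, %zero : i32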
For the record, the previous representation used in codegen was:
- .true. canonical integer representation is -1 (all bits 1)
- .false. canonical integer representation is 0
- the test for truth is "integer representation lowest bit == 1"