When computing the base address of an array slice to make a
descriptor, codegen generated two LLVM GEPs: the first to compute
the address of the base character element, and a second one to
compute the substring base inside that element.
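
As an illustration only (the declaration, LLVM types, and value names
below are made up for this example rather than taken from actual flang
output), for character(10) :: c(20) and the slice c(:)(5:10) the two
GEPs are meant to compute something like:

  ; address of the first character element c(1)
  %elt = getelementptr [20 x [10 x i8]], ptr %c, i64 0, i64 0
  ; substring base four characters into that element, i.e. c(1)(5:)
  %base = getelementptr i8, ptr %elt, i64 4
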
The previous code did not compute the result type of the first GEP
correctly: it used the base array LLVM type as the result type.
This used to work when opaque pointers were not enabled (the actual
GEP result type was presumably fixed up in a later pass). But with
opaque pointers, the second GEP ends up computing an offset of
len * <LLVM array type size> instead of len * <character width>, as
sketched below. A previous attempt to fix the issue was made in
D129079, but it did not cover the cases where the array slice contains
subcomponents before the substring (e.g. array(:)%char_field(5:10)).
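
With opaque pointers, the offset a GEP applies is determined entirely
by its explicit source element type, so the stale result type of the
first GEP leaks into the second one. In the made-up example above, the
broken form looks roughly like:

  ; wrong: the index is scaled by the size of the array type
  ; instead of by the character width
  %base = getelementptr [20 x [10 x i8]], ptr %elt, i64 4
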
This patch fixes the issue by computing the actual GEP result type
in codegen. Enough type information is now available at that point
that a single GEP can be generated instead of two.
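
Since the element type is known when the address is emitted, the two
steps can be folded into one GEP whose trailing indices walk directly
to the substring base. Continuing the same made-up example, the result
is something like:

  ; element c(1), then character offset 4 inside it (substring start 5)
  %base = getelementptr [20 x [10 x i8]], ptr %c, i64 0, i64 0, i64 4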