Overflows are never fun.
In most cases (in most of the code), they are rare,
because you usually just don't have that many elements.
However, it's exceptionally easy to fall into this pitfall
in code that deals with images, because, assuming 4-channel 32-bit FP data,
you need *just* a ~269 megapixel image to cause an overflow
when merely computing the total byte count:
at 16 bytes per pixel, that is ~2^32 bytes, which no longer fits into 32 bits.
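To make that concrete, here is a minimal sketch of the bug pattern
(the image dimensions are hypothetical, but the arithmetic is exact):

```c
#include <stdio.h>

int main(void) {
  // A ~269 megapixel image, e.g. 16384 x 16384 (hypothetical dimensions),
  // with 4 channels of 32-bit FP data, i.e. 16 bytes per pixel.
  int width = 16384, height = 16384, bytes_per_pixel = 16;

  // BUG: all operands are 'int', so the whole product is computed in
  // 32 bits and only *then* widened to 'size_t'. 16384 * 16384 * 16
  // is 2^32, which does not fit: signed overflow, undefined behavior
  // (in practice it typically wraps around to 0).
  size_t nbytes_bad = width * height * bytes_per_pixel;

  // Fix: widen an operand first, so the multiplication itself is 64-bit.
  size_t nbytes_good = (size_t)width * height * bytes_per_pixel;

  printf("bad: %zu, good: %zu\n", nbytes_bad, nbytes_good);
  return 0;
}
```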
In darktable, there is a *long*, painful history of dealing with such bugs:
- https://github.com/darktable-org/darktable/pull/7740
- https://github.com/darktable-org/darktable/pull/7419
- https://github.com/darktable-org/darktable/commit/eea1989f2c9fa76710db07baaec4c19c1e40e81c
- https://github.com/darktable-org/darktable/commit/70626dd95bf0fab36f2d011dab075e3ebbf7aa28
- https://github.com/darktable-org/darktable/pull/670
- https://github.com/darktable-org/darktable/commit/38c69fb1b2bc90057c569242cb9945a10be0b583
and yet such bugs clearly keep resurfacing.
It would be immensely helpful to have a diagnostic for those patterns,
which is what this change proposes.
Currently, I only diagnose the most obvious case, where the multiplication
is directly widened, with no other expressions in between
(i.e. `long r = (int)a * (int)b`, but not even e.g. `long r = ((int)a * (int)b)`);
that might be worth relaxing later.
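For illustration, a rough sketch of what that strict matching does and does not
flag, going by the description above (function names are made up, and a 64-bit
`long` is assumed):

```c
// Diagnosed: the 'int' * 'int' multiplication is performed in 32 bits,
// and only its (possibly already overflowed) result is widened to 'long'.
long bad(int a, int b) {
  return a * b;
}

// Not diagnosed (and correct): one operand is widened first,
// so the multiplication itself happens in 'long'.
long good(int a, int b) {
  return (long)a * b;
}

// Currently not diagnosed either: the parentheses are an extra expression
// sitting between the multiplication and the implicit widening.
long parenthesized(int a, int b) {
  return (a * b);
}
```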
Does this only trigger when `sizeof(char*) > sizeof(int)`? (Judging by the test coverage, I think that's the case.)
(Ultimately, it might be worth committing the two diagnostics separately - the usual sort of reasons, separation of concerns, etc.)