[X86] The Code16GCC directive parses X86 assembly input in 32-bit mode and
outputs in 16-bit mode. Teach Parser to switch modes appropriately.
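As a rough illustration of what that means in practice (the instructions below are arbitrary examples, not taken from this patch): source written under .code16gcc uses 32-bit mnemonics and defaults, and the assembler makes the output executable in 16-bit mode by emitting operand- and address-size prefixes.

```
	.code16gcc              # parse as 32-bit-style asm, encode for 16-bit mode
	.text
	.globl	example
example:
	movl	$1, %eax        # encoded with a 0x66 operand-size prefix
	movl	4(%esp), %ecx   # also gets a 0x67 address-size prefix
	ret                     # unlike plain .code16, ret keeps its 32-bit size
```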
Diff Detail
- Repository: rL LLVM
Event Timeline
Hm, why? We already added -m16 support to both GCC and Clang. So why would anyone use .code16gcc any more? It's just legacy.
And the *only* reason you'd use it is by putting asm(".code16gcc") at the start of a C file (which has problems with ensuring it's *first* in the asm output, which is why -m16 is so much better even in GCC). And didn't I see separately you forbade inline asm from leaving the assembler in a different mode to the one it starts in?
Agreed. It's not particularly worthwhile, as one could rewrite the code. Further, that one use example (asm(".code16gcc")) doesn't do anything, since we do not preserve mode changes from inline assembly, which is the only reasonable way to deal with persistent inline asm mode changes (see: http://reviews.llvm.org/D20067). The only thing this does is let the integrated assembler do the rewriting automatically.
People do seem to use it in asm files, presumably for the same reason GCC used it: you get the same behavior from the textual assembly when targeting a 16-bit processor mode as you would if it were running in a 32-bit processor mode.
Since LLVM's assembler is intended not only for inline asm in C but also for use as an actual assembler, I think it makes good sense to support this directive.
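For instance, a standalone .s file of the kind this directive is meant for might look like the following sketch (file name, labels, and constants are made up for illustration; this is not taken from the patch or its tests):

```
# start16.s -- illustrative 16-bit startup stub assembled on its own
	.code16gcc
	.text
	.globl	start16
start16:
	cli                     # ordinary instructions work as in any 16-bit file
	movl	$0x7000, %esp   # 32-bit operands are fine; prefixes are added for us
	call	init16          # call/ret default to 32-bit size, matching -m16 codegen
hang:
	hlt
	jmp	hang

init16:                         # stand-in body so the sketch assembles by itself
	ret
```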
LGTM, modulo the one comment.
lib/Target/X86/AsmParser/X86AsmParser.cpp

| Line | Diff | Comment |
|---|---|---|
| 64 | On Diff #71839 | Please put this variable with the other instance variables above. |