- Tracks function body transformation
- Translates the "old" PC to the "new" one
- Parses .debug_line opcodes and modifies them according to the new PC
I haven't read this function yet, but this function is not OK because it is too long and doesn't have a comment to help readers understand it. Please read other code in lld carefully and follow local conventions, then reorganize the code in this patch so that a first-time reader can easily understand it.
Is there a way we can avoid needing to do this at all? How about we simply don't allow LEB compression and DWARF in the same output? It seems OK if binaries built with DWARF in them don't get this optimization.
It is not OK for the presence of debug info to affect the construction of the rest of the executable. Normally this is something only the compiler needs to worry about, but it appears the linker also needs to care here, if I understand the motivation correctly.
We can do that, yes. We can also consider that at a later point in time. I'll keep this patch open until we open a different patch that disables LEB compression when DWARF sections are present. (Do we want to track it per function or per file?)
Hmm, I do see your point. Maybe I can explain in a little more detail: there is an optional optimization that the wasm lld linker can perform which effectively shrinks the code section by converting padded LEB128s (used at relocation sites) to non-padded LEB128s. This only occurs if you pass -O2 to the linker. We could even make it a separate optional argument. Obviously when we do this code section compression we change the instruction offsets, which breaks the DWARF information.
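To make the padded-vs-compact distinction concrete, here is a minimal Python sketch (not lld's actual code; `encode_uleb128` is a hypothetical helper) showing why relocation sites reserve 5 bytes and how re-encoding shrinks them:

```python
def encode_uleb128(value, pad_to=0):
    """Encode an unsigned integer as ULEB128.

    With pad_to=N, the result is padded with continuation bytes to exactly
    N bytes, which is how a linker-patchable relocation site is emitted.
    With pad_to=0, the minimal (compact) encoding is produced.
    """
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value or (pad_to and len(out) + 1 < pad_to):
            out.append(byte | 0x80)  # set the continuation bit
        else:
            out.append(byte)
            break
    return bytes(out)

# A relocation target such as function index 3 is stored padded so the
# linker can patch it in place; -O2 would re-encode it minimally.
padded = encode_uleb128(3, pad_to=5)   # 5 bytes: 83 80 80 80 00
compact = encode_uleb128(3)            # 1 byte:  03
```

Every byte saved this way shifts all following instruction offsets, which is exactly what invalidates the PC values baked into the DWARF sections.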
So I see a few options here:
(1) Increase the complexity of the linker so it can parse, modify, and re-write the DWARF sections rather than just blindly copying them.
(2) Remove this LEB compression feature from the linker and make it into a separate tool
(3) Make LEB compression and debug sections mutually exclusive. This would mean that --compress-lebs automatically implies --strip-debug.
My understanding is that (1) generally goes against the philosophy of linkers in general, but maybe I'm wrong. If I'm correct, then perhaps (2) makes the most sense, as it keeps the linker dumb and fast.
@probinson After this more detailed explanation, would you still be strongly against (3)?
@ruiu Am I right about (1)? i.e. would DWARF parsing and re-writing be outside the normal purview of the linker?
I do imagine that "there's no way to create an executable with debug info if you are creating a release build" is unacceptable. So we need to support -O2 together with debug info. But is this the only way to do it? This seems a bit too complicated to me. Maybe just the code is complicated and the algorithm is not, but it is hard to tell because there is no explanation of the algorithm.
It looks like your patch recognizes all DWARF records that contain in-file offsets in order to adjust them. Is this the right approach? If DWARF is extended (which happens fairly frequently), do you have to make a change to the linker to produce non-broken debug info? If that's the case, I think it is too fragile.
I wonder if we can just emit DWARF as-is and let the debugger count the number of bytes in the expanded form, to compensate for the difference caused by the LEB128 compaction. Is this something you can do? Is that easier than making a change to the linker?
.debug_line uses delta encoding for PC and line/column to reduce the amount of space used; otherwise it would need to refer to each source statement/operator by a non-relative offset. At worst, that is every wasm operator, and adding relocation records would just increase the amount of space used. FWIW, the algorithm just fixes the "delta"-encoded values and cannot be simplified.
There's no notion of "PC" in wasm, no? You cannot take the address of a function and call it indirectly later. Well, you can do that, but what you get is not an address in the regular sense. It's an index, if I understand correctly. So, by PC, what do you mean?
How about my suggestion to make a change to the debugger, so that it computes offsets not against the compacted raw wasm text section but against the fully-expanded form?
As to making it a separate tool that compresses the wasm text section, I believe that's an option, though I'm not sure it is ideal. I imagine the tool would compress the text section by optimizing LEB128-encoded numbers and then strip debug sections to produce a production binary. That doesn't sound like a bad idea; maybe it's even a good one. But I'd like to hear from wasm developers about it, as it affects their workflow.
For now, the bytecode offset (function- or file-relative) is used in the role of the PC -- for the call stack, for the profiler, and we also successfully adapted it for source maps. I'm not sure what moving to an index (of a wasm operator?) base will change, but it will only add complexity and will not solve this particular issue.
I'm not pushing this idea, but what I was suggesting is not to use the function index or something like that instead of the PC. What I meant is to continue using the bytecode offset as before, but count the "bytecode offset" in the fully expanded form instead of the raw in-file form. So any LEB128 number is counted as 5 bytes long. Then that "virtual bytecode offset" should exactly match the original, uncompressed bytecode offset, eliminating the need to adjust any byte offset in DWARF.
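A rough sketch of what that arithmetic could look like on the debugger side (hypothetical helper, assuming the debugger already knows, from decoding the instruction stream, how many fixed bytes and how many LEB128 immediates each instruction has):

```python
def virtual_offsets(instrs):
    """Compute 'virtual bytecode offsets' over a compacted code stream.

    instrs: one (fixed_byte_count, leb128_immediate_count) pair per
            decoded instruction, in order.
    Every LEB128 immediate is counted as its maximal 5-byte form, so the
    resulting offsets match the uncompressed binary the DWARF describes.
    """
    off = 0
    offsets = []
    for fixed, lebs in instrs:
        offsets.append(off)
        off += fixed + 5 * lebs
    return offsets

# Three instructions: opcode+LEB immediate, bare opcode, opcode+LEB.
# Regardless of how small the immediates were encoded on disk, the
# virtual offsets are 0, 6, 7 -- as in the uncompressed section.
print(virtual_offsets([(1, 1), (1, 0), (1, 1)]))
```

The appeal of this scheme is that the linker stays dumb: it never touches DWARF, and the cost is a fixed re-counting rule that every consumer of the debug info has to agree on.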
Yes, that's a valid solution too -- we just need to specify a normalized form that every debugger will use when decoding the data. Thanks. I also had the idea of supplying "code transform" custom sections that reflect the same idea, but without the burden on the debugger of knowing about a normalized "virtual bytecode offset" format.
Per discussion at the WebAssembly toolchain meeting, this solution is too complex. I'm going to close this WIP in favor of simpler solutions, such as disabling debug info during optimization or moving the optimization out of lld.
That makes sense to me. Unlike compiler -O optimization, LEB128 compression by the linker doesn't change the semantics of the resulting executable. It just changes the encoding of immediates in the binary so that they are represented more compactly. So I can imagine it is a rare situation that you have to enable both the linker's -O2 and debug options.