Debuggers have different preferences regarding various bits of DWARF.
Invent a debugger "target" to clarify the choices, and also simplify testing.
A follow-up patch will plumb this through from Clang.
> Debuggers have different preferences regarding various bits of DWARF.
I think debuggers are mostly pretty consistent about these things from one version to the next. I haven't run into anything that would suggest knowing about debugger versions is really necessary.
"Default" works for me. I originally had some vague notion about a generic target, so "Any," but getting that to work right is surely more trouble than it's worth. "Default" better expresses that it will actually turn into something specific.
Maybe this should be "debugger-tune" instead of "target" so that it implies smaller kinds of differences. It's certainly a lot less radical than most uses of "target" within LLVM.
I have one more request: This should come with a guideline or at least some examples for what kind of changes constitute "tuning". Our primary goal should still be to emit standards-compliant DWARF. Or maybe the primary goal is to support existing debuggers and following the standard is second, but we probably should discuss this.
That should happen on the dev lists, and obviously I should kick it off... I think with a new topic, as the previous one had "target" in the subject and we're agreed not to use that term for what this is about.
It's also not really obvious to me where such a guideline ought to be placed. But we can debate that on-list as well.
Something odd is going on with clang right now, I'll refresh this patch tomorrow and start that discussion.
I might have lost this part of the discussion, but I thought the default was "no tuning"?
At least not in the backend.
This should be simplified to tuneDebugForGDB().
I don't think this is a tuning option, and that's the problem. This is a correctness issue in that it just won't work rather than being inefficient or superfluous.
I think I'd like this split out into a different test file.
That did come up, and it was not clear what it should mean, so I fell back on preserving the prior behavior, which effectively defaults to GDB except for Darwin and PS4.
So, what would no-tuning mean? "No non-standard things" seems reasonably clear, but should it include all the optional-but-standard things that LLVM knows how to do? Which I guess would be pubnames + aranges + type units (are there more?).
Presumably you'd want Clang to tell LLVM what tuning to use, though? Having Clang default to "no tuning" seems user-hostile.
Should this become tuneDebugForLLDB()?
Okay. Not being Darwin-aware or an LLDB user, I was a bit reluctant to convert all the Darwin decision-making right away.
Yeah, trying to work around a GDB bug. GDB can't tolerate the standard opcode; SCE can't tolerate the GNU opcode; I don't know what LLDB thinks, but Adrian liked having the standard opcode.
Ironically, this is one of the tests that would break if LLVM defaulted to no-tuning; the triples would need to be replaced by tuning options. And if tuning isn't premised on the triple, then the non-default triple/tuning combos wouldn't need testing, and the new RUN lines would just go away.
Refresh and address specific review comments:
- pubsections based on tuning for GDB (this is correct only if "no tuning" isn't an option)
- revert TLS opcode choice to platform-based instead of tuning-based; Eric and Adrian need to work that one out, right now I'm following the most recent expressed wish of the code owner even though I agree with Adrian
- move test to a separate file
- setting the default tuning remains triple-based; changing the defaults for other targets can wait for follow-up patches
I did experiment with "default to no tuning." While tests basically worked, it exposed one minor flaw in dwarf-public-names.ll (diff included here although it really ought to be committed separately). However, a no-tuning default would be a distinct functional change, not backward compatible, as pointed out by David Blaikie; so I didn't do that in the patch.
We might still want to have a more Target-based way to specify a default tuning, rather than using the triple, but that could be a followup patch. Note that people associated with FreeBSD and with Hexagon have both expressed a desire to default to LLDB.
Looks plausible. Though the command-line flag should really only be used for testing, I think; you'll want another way to specify it for actual exposure through Clang, etc., if you want/need to expose it.
What's the command line argument for? I mean convenience of testing LLVM, a bit, but in general we don't want to use backend options like this when actually interacting with LLVM from Clang and other frontends. This should probably end up as a part of the CU debug info metadata? Or a module-level debug info metadata flag? (ala DWARF version)
What happened to these two checks in this test?
The command line option is precisely for testing; it is not how Clang communicates to LLVM.
As originally written, this check did not fail when a debug_pubnames section was actually present, because it didn't insist that the empty line immediately follow the "debug_pubnames" header line. Instead it would match the empty line at the end of the debug_pubnames dump, and pass.
This test update could actually be committed independently if you'd rather, as it's fixing a bug in the test rather than actually testing something I did in the rest of the patch.
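Illustrating the FileCheck pitfall described above (the section-header text here is approximate): a plain CHECK can skip forward and match a blank line at the end of a non-empty dump, while CHECK-NEXT pins the blank line to the header:

```llvm
; Too lenient: passes even when pubnames entries are present, because the
; second CHECK may match any later blank line.
; CHECK: .debug_pubnames contents:
; CHECK: {{^$}}

; Fixed: the blank line must come immediately after the header, so any
; pubnames entry makes the test fail.
; CHECK: .debug_pubnames contents:
; CHECK-NEXT: {{^$}}
```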
David pointed out in email that using a TargetOptions field, it is hard to test that Clang got it right, and might not play nicely with LTO. I'll look at doing it with module-level metadata.
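For comparison, a module-level flag could look like the sketch below, by analogy with the existing "Dwarf Version" flag. The "Debugger Tuning" flag name and its integer encoding are purely hypothetical here:

```llvm
!llvm.module.flags = !{!0, !1}
!0 = !{i32 2, !"Dwarf Version", i32 4}
; Hypothetical flag: say 1 = GDB, 2 = LLDB, 3 = SCE. A merge behavior
; (the leading i32 2 means "warn on mismatch") would need to be chosen
; deliberately so LTO links of mixed modules behave sensibly.
!1 = !{i32 2, !"Debugger Tuning", i32 2}
```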
Committed separately as r240679.
Some inline comments.
Module-level metadata is a possibility if we think it's something that we'll need to make sure stays consistent at LTO time etc. You'll need to define a way for the metadata to merge, etc. I don't necessarily like encoding the data this way, but without a later LTO codegen pass that we can pass options to, it's your best bet.
Feel free to update this comment separately.
I don't like the use of TT.isPS4CPU() here on first look, can you explain a bit?
Hm? The comment update reflects the code change for setting HasDwarfPubSections (line 235). Did you have something else in mind?
IsPS4 had been a bool field in the class, but as it was never used outside the constructor that seemed unnecessary so I removed it.
Replied inline. Some of this probably feels a bit vague, let me know if you've got any questions :)
Yeah, anything you're changing under the default (or triple) mode should go into a separate patch I'd think?
I've actually come around to thoughts of a triple-based set of defaults. I.e., go ahead and use the triple to default behaviors like this, and use the tuning parameter otherwise; that is, pick features based on the default debugger for the platform. It'll involve splitting this patch up a bit (see previous comment), but that seems reasonable, yes?
For now yes. Post-dwarf5 it'll be more interesting.
Yeah. I agree with Adrian here. Let's use "isDebuggerLLDB()".
Yeah, let's put it with tuning, with what you just said as a comment (if you have a GDB bug to point at, that'd be good).
This could all go with the enum.
Possibly rename without using Debug. We're in the debug code already.
Sorry, I couldn't really work out which things you wanted directly from triples and which would be from tuning. I'm not 100% sure phab is lining up the inline comments with the right source lines. I think I got everything else you wanted. New patch coming directly.
I'm a bit unhappy with the TLS op tuning, but I don't have any better ideas. It's less tuning and more correctness, but...
Anyhow, an inline comment about one of your FIXMEs, otherwise LGTM.
Thanks for all of the iterations here.
Are you using the same accelerator tables?