This is an archive of the discontinued LLVM Phabricator instance.

[AutoUpgrade] Add flag to disable autoupgrading debug info
ClosedPublic

Authored by aeubanks on Feb 2 2023, 4:14 PM.

Details

Summary

Auto-upgrade can be expensive, especially UpgradeDebugInfo() since it runs the verifier.

With this patch, we can specify that the imported bitcode was built with the same revision of LLVM, meaning there's no need to run any auto-upgrading. For now, limit this to just debug info, since projects like Rust want to support multiple versions of LLVM at the same time (at compiler build time, not at Rust source code build time) and therefore still need to run a subset of the autoupgrade functionality; restricting the flag to debug info keeps things simple.

Diff Detail

Event Timeline

aeubanks created this revision.Feb 2 2023, 4:14 PM
Herald added a project: Restricted Project. · View Herald TranscriptFeb 2 2023, 4:14 PM
aeubanks requested review of this revision.Feb 2 2023, 4:14 PM

With ThinLTO, we only support cases where the input bitcode is built with the same LLVM revision

I am pretty sure ThinLTO was designed without this restriction, where is it coming from?

With ThinLTO, we only support cases where the input bitcode is built with the same LLVM revision

I am pretty sure ThinLTO was designed without this restriction, where is it coming from?

Hmm I thought I remembered this being true, I may be wrong though. Hopefully others can comment.
If my assumption is wrong, I can make this into a flag instead.

A flag seems a bit ad-hoc to me, can we detect from the bitcode itself if this is necessary?

With ThinLTO, we only support cases where the input bitcode is built with the same LLVM revision

I am pretty sure ThinLTO was designed without this restriction, where is it coming from?

Hmm I thought I remembered this being true, I may be wrong though. Hopefully others can comment.
If my assumption is wrong, I can make this into a flag instead.

Yeah, my understanding's with @mehdi_amini too - that ThinLTO should be/is currently compatible with archived bitcode (Apple folks would be the ones most likely to speak to that, perhaps @tejohnson knows who's best to rope in on their side these days), it's just that our (Google) internal use doesn't need this flexibility and is paying a penalty for it. The cost of running the IR verifier for debug info might interest @aprantl and @JDevlieghere in case they've got ideas about other ways to address this or might just be interested in knowing it's hot and might be worth improvements, even if Google still ends up turning it off entirely.

A flag seems a bit ad-hoc to me, can we detect from the bitcode itself if this is necessary?

Not really that I know of - if it's untrusted bitcode, everything's got to be validated/can't just check a version flag and assume it's good.

Having some generic flag for "this bitcode is trusted" might be a nice general purpose thing to have for users that can ensure that.

With ThinLTO, we only support cases where the input bitcode is built with the same LLVM revision

I am pretty sure ThinLTO was designed without this restriction, where is it coming from?

Hmm I thought I remembered this being true, I may be wrong though. Hopefully others can comment.
If my assumption is wrong, I can make this into a flag instead.

Yeah, my understanding's with @mehdi_amini too - that ThinLTO should be/is currently compatible with archived bitcode (Apple folks would be the ones most likely to speak to that, perhaps @tejohnson knows who's best to rope in on their side these days), it's just that our (Google) internal use doesn't need this flexibility and is paying a penalty for it. The cost of running the IR verifier for debug info might interest @aprantl and @JDevlieghere in case they've got ideas about other ways to address this or might just be interested in knowing it's hot and might be worth improvements, even if Google still ends up turning it off entirely.

Right, it is not a general requirement for ThinLTO that the input bitcode must be built with the same revision. Inside our Google builds this does end up being the case, however, because of the way the build system works. So we would need to put it under a flag.

A flag seems a bit ad-hoc to me, can we detect from the bitcode itself if this is necessary?

Not really that I know of - if it's untrusted bitcode, everything's got to be validated/can't just check a version flag and assume it's good.

The description in this revision isn't about "trust" but about "matching revision"; can you elaborate on what you have in mind as the "trust" issue here?

I don't think this restriction works for Apple. Compatibility is important for users that need external dependencies that they do not control. If the debug info upgrader is expensive for you, we can probably add a fast check to skip it, or just give users a flag to control that behavior.

aeubanks planned changes to this revision.Feb 3 2023, 10:22 AM

my assumption was wrong then

what about checking whether the llvm.idents (embedded by the frontend) of the modules match? That wouldn't work for cross-language LTO, but it's a start
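
For concreteness, a check along those lines might look roughly like the sketch below. This is illustrative only (not code from the patch), and, as arsenm points out further down, llvm.ident is a list that may contain many non-deduplicated entries, so this deliberately bails out unless both modules carry exactly one identical producer string.

#include "llvm/IR/Metadata.h"
#include "llvm/IR/Module.h"

// Return the single llvm.ident string of M, or nullptr if there isn't
// exactly one entry with exactly one string operand.
static const llvm::MDString *getSoleIdent(const llvm::Module &M) {
  const llvm::NamedMDNode *Ident = M.getNamedMetadata("llvm.ident");
  if (!Ident || Ident->getNumOperands() != 1 ||
      Ident->getOperand(0)->getNumOperands() != 1)
    return nullptr;
  return llvm::dyn_cast<llvm::MDString>(Ident->getOperand(0)->getOperand(0));
}

// True only when both modules carry one identical producer string; any
// ambiguity falls back to running the upgrade.
static bool identsMatch(const llvm::Module &Dst, const llvm::Module &Src) {
  const llvm::MDString *A = getSoleIdent(Dst);
  const llvm::MDString *B = getSoleIdent(Src);
  return A && B && A->getString() == B->getString();
}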

aeubanks updated this revision to Diff 494708.Feb 3 2023, 12:33 PM

check llvm.ident instead

aeubanks retitled this revision from [FunctionImporter] Don't upgrade debug info for ThinLTO compiles to [FunctionImporter] Don't upgrade debug info if llvm.idents match.Feb 3 2023, 12:34 PM
aeubanks edited the summary of this revision. (Show Details)

check llvm.ident instead

If my source module is bitcode-upgraded, it won't get a new llvm.ident, right? Then the upgrade pass will be wrongly skipped. I think we need an LLVM code generator version somewhere in the IR.

check llvm.ident instead

If my source module is bitcode-upgraded, it won't get a new llvm.ident, right? Then the upgrade pass will be wrongly skipped. I think we need an LLVM code generator version somewhere in the IR.

if the bitcode was upgraded, then during the upgrade it should already have had UpgradeDebugInfo called on it (with the expected LLVM revision), right?

arsenm added a subscriber: arsenm.Feb 3 2023, 1:39 PM

I don't think you can / should trust llvm.ident. It shouldn't be given a semantic meaning

arsenm added a comment.Feb 3 2023, 1:40 PM

I don't think you can / should trust llvm.ident. It shouldn't be given a semantic meaning

Plus this isn't even a single field, it's a list of many versions (unfortunately it's not deduplicated either)

check llvm.ident instead

If my source module is bitcode-upgraded, it won't get a new llvm.ident, right? Then the upgrade pass will be wrongly skipped. I think we need an LLVM code generator version somewhere in the IR.

if the bitcode was upgraded, then during the upgrade it should already have had UpgradeDebugInfo called on it (with the expected LLVM revision), right?

I am talking about A.bc importing a function from B.bc. Both bitcode files are out of date, so A.bc gets upgraded when loaded; then A tries to import B, and since they have the same ident the upgrade of B is skipped.

A flag seems a bit ad-hoc to me, can we detect from the bitcode itself if this is necessary?

Not really that I know of - if it's untrusted bitcode, everything's got to be validated/can't just check a version flag and assume it's good.

The description in this revision isn't about "trust" but about "matching revision"; can you elaborate on what you have in mind as the "trust" issue here?

In the sense that if I write some handcrafted IR, LLVM should verify it before moving forward because other things interacting with the IR will assume it's valid & may crash if it isn't. So there's a trust boundary, generally - like in-memory IR usually isn't verified, it's assumed the producer produced something valid by construction & it's a bug on the producer side if it didn't. But generally once we're reading things off disk we assume it might have come from anywhere & to be somewhat stable/not so crashy, we usually verify it on input (at least that was/is my rough understanding). So if you happen to be willing to sacrifice that extra stability guarantee because you can guarantee that the system outside the program will ensure that only valid IR is passed in - then maybe having a flag to communicate that guarantee would be appropriate.

I guess it depends on what the contract is for UpgradeDebugInfo, because from the name it didn't seem to me like a "verifier"?

I don't think you can / should trust llvm.ident. It shouldn't be given a semantic meaning

Plus this isn't even a single field, it's a list of many versions (unfortunately it's not deduplicated either)

Oh I misread the verifier code, yes you're right.

check llvm.ident instead

If my source module is bitcode-upgraded, it won't get a new llvm.ident, right? Then the upgrade pass will be wrongly skipped. I think we need an LLVM code generator version somewhere in the IR.

if the bitcode was upgraded, then during the upgrade it should already have had UpgradeDebugInfo called on it (with the expected LLVM revision), right?

I am talking about A.bc importing a function from B.bc. Both bitcode files are out of date, so A.bc gets upgraded when loaded; then A tries to import B, and since they have the same ident the upgrade of B is skipped.

I see, yes you're right.

In the sense that if I write some handcrafted IR, LLVM should verify it before moving forward because other things interacting with the IR will assume it's valid & may crash if it isn't. So there's a trust boundary, generally - like in-memory IR usually isn't verified, it's assumed the producer produced something valid by construction & it's a bug on the producer side if it didn't. But generally once we're reading things off disk we assume it might have come from anywhere & to be somewhat stable/not so crashy, we usually verify it on input (at least that was/is my rough understanding). So if you happen to be willing to sacrifice that extra stability guarantee because you can guarantee that the system outside the program will ensure that only valid IR is passed in - then maybe having a flag to communicate that guarantee would be appropriate.

At this point I agree that a flag would be what I'd try for now.

I guess it depends on what the contract is for UpgradeDebugInfo, because from the name it didn't seem to me like a "verifier"?

UpgradeDebugInfo runs the verifier, and the verifier has a mode that doesn't fail if the debug info is invalid, but rather reports it separately. We could separate out the debug info portion of the verifier into a separate DIVerifier and use that instead. But we're still doing unnecessary work verifying debug info that we can assume to be valid in our build environments.

Looking at the implementation:

/// Check the debug info version number, if it is out-dated, drop the debug
/// info. Return true if module is modified.
bool llvm::UpgradeDebugInfo(Module &M) {
  unsigned Version = getDebugMetadataVersionFromModule(M);
  if (Version == DEBUG_METADATA_VERSION) {
    bool BrokenDebugInfo = false;
    if (verifyModule(M, &llvm::errs(), &BrokenDebugInfo))
      report_fatal_error("Broken module found, compilation aborted!");
    if (!BrokenDebugInfo)
      // Everything is ok.
      return false;
    else {
      // Diagnose malformed debug info.
      DiagnosticInfoIgnoringInvalidDebugMetadata Diag(M);
      M.getContext().diagnose(Diag);
    }
  }
...
  1. Contrary to the name of the API, this does not seem to "upgrade" anything: it merely drops outdated debug info. (Am I missing something?)
  2. This runs the verifier but does not fail verification on debug-info-related issues; instead it'll drop metadata when something is wrong there.

So are we re-running the IR verifier independently? And if we don't what kind of inconsistency would we be setting up?

aeubanks updated this revision to Diff 495902.Feb 8 2023, 11:25 AM

put under a flag

aeubanks retitled this revision from [FunctionImporter] Don't upgrade debug info if llvm.idents match to [FunctionImporter] Add flag to disable upgrading debug info.Feb 8 2023, 11:25 AM
aeubanks edited the summary of this revision. (Show Details)

Looking at the implementation:

/// Check the debug info version number, if it is out-dated, drop the debug
/// info. Return true if module is modified.
bool llvm::UpgradeDebugInfo(Module &M) {
  unsigned Version = getDebugMetadataVersionFromModule(M);
  if (Version == DEBUG_METADATA_VERSION) {
    bool BrokenDebugInfo = false;
    if (verifyModule(M, &llvm::errs(), &BrokenDebugInfo))
      report_fatal_error("Broken module found, compilation aborted!");
    if (!BrokenDebugInfo)
      // Everything is ok.
      return false;
    else {
      // Diagnose malformed debug info.
      DiagnosticInfoIgnoringInvalidDebugMetadata Diag(M);
      M.getContext().diagnose(Diag);
    }
  }
...
  1. Contrary to the name of the API, this does not seem to "upgrade" anything: it merely drops outdated debug info. (Am I missing something?)
  2. This runs the verifier but does not fail verification on debug-info-related issues; instead it'll drop metadata when something is wrong there.

So are we re-running the IR verifier independently? And if we don't what kind of inconsistency would we be setting up?

We don't typically run the verifier in assert builds because it's expensive. We assume that input bitcode passes the verifier.

mehdi_amini added inline comments.Feb 8 2023, 12:11 PM
llvm/lib/Transforms/IPO/FunctionImport.cpp
88 ↗(On Diff #495902)

I'll repeat my earlier comment: this seems to me like a very ad-hoc flag for some overly specific use-case.

Looking at the implementation:

  1. Contrary to the name of the API, this does not seem to "upgrade" anything: it merely drops outdated debug info. (Am I missing something?)
  2. This runs the verifier but does not fail verification on debug-info-related issues; instead it'll drop metadata when something is wrong there.

So are we re-running the IR verifier independently? And if we don't what kind of inconsistency would we be setting up?

The name "upgrade" comes from a time when regularly bumped DEBUG_METADATA_VERSION. Back then it "upgraded" older debug info formats by dropping the outdated metadata. In the more recent past we are trying to make debug info upgradeable in the bit code reader. Unfortunately, because debug info metadata is a cyclic graph we keep finding bugs where the metadata produced by older versions of LLVM and/or 3rd party frontends is malformed in a way that causes problems (crashes, infinite loops, ...) with newer version of LLVM. Whenever we find such a problem we improve the IR Verifier to detect the malformed IR.

Because of this, the upgrade function insists on running the Verifier. If it finds malformed debug info it will perform an "upgrade" by dropping the malformed debug info.

aeubanks added inline comments.Feb 8 2023, 3:08 PM
llvm/lib/Transforms/IPO/FunctionImport.cpp
88 ↗(On Diff #495902)

I wouldn't consider this use case "overly specific"; I'd guess that many uses of LLVM don't care about bitcode upgrading.

Given that it seems quite hard to put a consistent LLVM revision in the bitcode that can be understood everywhere, I think this is something that requires a flag and bitcode version guarantees at a higher level.

There are two options I can think of:

  1. under a flag, when producing bitcode, add a marker saying "this bitcode will never need to be upgraded", then, when seeing that marker during importing, don't upgrade debug info (a rough sketch of such a marker follows this comment)
  2. under a flag, when reading/importing bitcode, assume it doesn't need to be upgraded (this patch)

1 seems more annoying than 2

do you have any other alternatives?
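
A hypothetical sketch of what option 1 could look like using module flags. The flag name "ThinLTO-no-upgrade-needed" and both helper functions are invented for illustration; nothing here is part of this patch or of LLVM.

#include "llvm/IR/Constants.h"
#include "llvm/IR/Metadata.h"
#include "llvm/IR/Module.h"

// Producer side (behind a compiler flag): stamp the module with a marker
// promising that it will never need auto-upgrading.
static void markNoUpgradeNeeded(llvm::Module &M) {
  M.addModuleFlag(llvm::Module::Error, "ThinLTO-no-upgrade-needed", 1);
}

// Importer side: only skip the debug-info upgrade when the marker is present.
static bool canSkipDebugInfoUpgrade(const llvm::Module &M) {
  auto *CI = llvm::mdconst::extract_or_null<llvm::ConstantInt>(
      M.getModuleFlag("ThinLTO-no-upgrade-needed"));
  return CI && !CI->isZero();
}

The producer-side marker would itself have to be gated behind a flag, so this mostly moves where the flag lives rather than removing the need for one.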

mehdi_amini added inline comments.Feb 8 2023, 5:06 PM
llvm/lib/Transforms/IPO/FunctionImport.cpp
88 ↗(On Diff #495902)

You're presenting these alternatives as if they were the only possibilities. Doing nothing is another possibility, for example...

Adding a flag is not free: I see it as bad practice in general, and an easy "no design" solution that too often overlooks the cost of supporting a matrix of configurations. There should be a high bar for this kind of thing.

It's not clear to me that there is a strong case here to justify a flag in the first place, and in particular I haven't seen any other avenue explored, starting with "why is it costly and what can we do to optimize it?".

Looking at the implementation:

  1. Contrary to the name of the API, this does not seem to "upgrade" anything: it merely drops outdated debug info. (Am I missing something?)
  2. This runs the verifier but does not fail verification on debug-info-related issues; instead it'll drop metadata when something is wrong there.

So are we re-running the IR verifier independently? And if we don't what kind of inconsistency would we be setting up?

The name "upgrade" comes from a time when regularly bumped DEBUG_METADATA_VERSION. Back then it "upgraded" older debug info formats by dropping the outdated metadata. In the more recent past we are trying to make debug info upgradeable in the bit code reader. Unfortunately, because debug info metadata is a cyclic graph we keep finding bugs where the metadata produced by older versions of LLVM and/or 3rd party frontends is malformed in a way that causes problems (crashes, infinite loops, ...) with newer version of LLVM. Whenever we find such a problem we improve the IR Verifier to detect the malformed IR.

It seems reasonable to me to provide a way to skip this for situations where the build system guarantees that the bitcode is always generated by the same compiler, e.g. in a distributed ThinLTO build where any change in compiler will force a rebuild anyway, which is the case @aeubanks is looking at.

@aeubanks are we also invoking UpgradeDebugInfo, or the Verifier separately, on the main input bitcode in the distributed ThinLTO backend clang invocation (e.g. the clang -x ir input file)? For the same reason, we should be able to skip that. So I wonder if it makes sense then for the disabling mechanism (e.g. flag or whatever) to be in UpgradeDebugInfo or the Verifier itself?

It seems reasonable to me to provide a way to skip this for situations where the build system guarantees that the bitcode is always generated by the same compiler, e.g. in a distributed ThinLTO build where any change in compiler will force a rebuild anyway, which is the case @aeubanks is looking at.

I'm having a hard time imagining how a build system would be able to guarantee this, especially if multiple compiler frontends are involved. The only scenario where this makes sense is if you are only doing clean-world builds with only clang as the compiler. Maybe that's what you are doing, but IMHO that is such a niche use-case that I'd rather see such a flag being implemented downstream and not in the main llvm-project. As @mehdi_amini pointed out, it complicates the qualification matrix quite a lot.

It seems reasonable to me to provide a way to skip this for situations where the build system guarantees that the bitcode is always generated by the same compiler, e.g. in a distributed ThinLTO build where any change in compiler will force a rebuild anyway, which is the case @aeubanks is looking at.

I'm having a hard time imagining how a build system would be able to guarantee this, especially if multiple compiler frontends are involved. The only scenario where this makes sense is if you are only doing clean-world builds with only clang as the compiler. Maybe that's what you are doing, but IMHO that is such a niche use-case that I'd rather see such a flag being implemented downstream and not in the main llvm-project. As @mehdi_amini pointed out, it complicates the qualification matrix quite a lot.

The compiler version being used is centrally controlled. And we're not always doing clean world builds, but Bazel's build cache is queried by a hash of all action inputs, including the compiler bits, so if anything changes it forces a rebuild. We use stock clang/LLVM, so adding an option downstream is not possible.

Taking a step back, though, the issue here is that we are verifying the same module multiple times - once when it is read to be compiled through its own ThinLTO backend (either via the linker for in-process ThinLTO, or via clang for distributed ThinLTO as we are doing), and then once per every other ThinLTO backend that imports anything from it. If the debug info is possibly out of date or untrusted then this is unavoidable. But in any build system where the inputs are expected to be from the same trusted source compilers it would be nice to have a mode whereby when the bitcode is read for compiling through its own ThinLTO backend we perform the UpgradeDebugInfo and if the verifier there fails, it is a hard error. Then skip the upgrading during the function importing process. This doesn't seem particularly niche to me. You are guaranteed that the build will simply fail if the assumption is wrong. This patch afaict only does the latter part of this, by skipping the upgrade/verification during function importing, so would need a bit more work to tie that to a guarantee that we do the upgrade/verification with a hard error during the read for each module's ThinLTO backend.

So your build system guarantees that all bitcode that is being consumed was generated by compilers linked against the same version of LLVM that is used in the LTO plugin, and if that version were to serialize malformed metadata, you wouldn't be any worse off than a non-LTO build, which also doesn't verify between passes.

Based on that assumption, this option does make sense to me.

As others pointed out, it would be nice if the version number could be encoded in the bitcode itself, however, that doesn't fix the aforementioned trust boundary issue, so you're probably no better off than with this flag.

Considering all this, this patch gets an unenthusiastic LGTM from my side :-)

aeubanks updated this revision to Diff 501219.Feb 28 2023, 10:37 AM
aeubanks retitled this revision from [FunctionImporter] Add flag to disable upgrading debug info to [AutoUpgrade] Add flag to disable auto-upgrade.
aeubanks edited the summary of this revision. (Show Details)

move flag to AutoUpgrade, gate more functionality under flag

In Chrome, we add a clang -D flag containing the toolchain revision for caching/determinism reasons, so we can also guarantee that bitcode inputs will always match.

I may be wrong, but I believe rust does inter-crate (Thin)LTO, all bitcode produced from the same rustc frontend, which could also use this.

@aeubanks are we also invoking UpgradeDebugInfo, or the Verifier separately, on the main input bitcode in the distributed ThinLTO backend clang invocation (e.g. the clang -x ir input file)? For the same reason, we should be able to skip that. So I wonder if it makes sense then for the disabling mechanism (e.g. flag or whatever) to be in UpgradeDebugInfo or the Verifier itself?

Yeah there is some time spent verifying the main input bitcode (although much less than function importing). Moved the flag to autoupgrade.

llvm/lib/Transforms/IPO/FunctionImport.cpp
88 ↗(On Diff #495902)

The "why is this costly" part was answered by aprantl (need to run the verifier to detect cycles)

I'm not understanding the matrix-of-configurations argument; this flag doesn't really interact with anything else.

For background, we spend about 2% of our overall compile time on autoupgrade because we do a lot of ThinLTO builds. 2% spent on code that doesn't need to run is pretty high and I think that deserves some sort of change in LLVM.

I agree that adding a flag is not ideal, but ultimately LLVM has a bitcode compatibility guarantee that isn't going to change. Assuming some subset of people want to avoid some of these autoupgrade costs if they have bitcode version guarantees, we have to have a flag somewhere since the default assumption is that old bitcode is upgradable. We could explore things like having an unstable bitcode format that only works at a single LLVM revision. But this is a nice stopgap in the meantime.

mehdi_amini added inline comments.Feb 28 2023, 11:02 AM
llvm/lib/Transforms/IPO/FunctionImport.cpp
88 ↗(On Diff #495902)

The "why is this costly" part was answered by aprantl (need to run the verifier to detect cycles)

I'm not totally seeing any satisfactory answer here: "it is costly because it does 'work'" is a bit under-detailed in terms of analysis. For example: what would it take to make it faster?

2% spent on code that doesn't need to run is pretty high and I think that deserves some sort of change in LLVM.

Again, you're eliding here that you're in a very niche use-case: ThinLTO in a very controlled build environment. Are there other identified users than Google on which we know this will have a significant impact that justified adding a flag in LLVM?

It still overall comes across to me like "reaching for the easiest stopgap" without due diligence on overall design exploration and mitigation (like optimizing the upgrade, embedding the information differently, etc.).
@aprantl mentioned in the first comment in this revision that if you need a stopgap for your own need, you can easily add this downstream.

tejohnson added inline comments.Mar 3 2023, 1:12 PM
llvm/lib/Transforms/IPO/FunctionImport.cpp
88 ↗(On Diff #495902)

I'm not totally seeing any satisfactory answer here: "it is costly because it does 'work'" is a bit under-detailed in terms of analysis. For example: what would it take to make it faster?

Even if the time to do this verification can be reduced, it's not going to go to 0, particularly given the number of modules that get collectively read in a ThinLTO backend (especially with distributed ThinLTO). In a build environment using a single compiler version per build, this is completely unnecessary overhead.

Are there other identified users than Google on which we know this will have a significant impact that justified adding a flag in LLVM?

As Arthur noted, at least Chromium. I would imagine there are more build environments in other companies that are similar, but I don't have visibility.

@aprantl mentioned in the first comment in this revision that if you need a stopgap for your own need, you can easily add this downstream.

As mentioned in my reply to him, we use unmodified LLVM.

nikic added a subscriber: nikic.Mar 3 2023, 1:19 PM

I may be wrong, but I believe rust does inter-crate (Thin)LTO, all bitcode produced from the same rustc frontend, which could also use this.

Conceptually yes, we don't need verification for compiler-driven LTO (which is the default). We wouldn't be able to use it as implemented though, because we do need to auto-upgrade intrinsics in the initial IR, while this effectively "breaks" UpgradeCallsToIntrinsic().

I may be wrong, but I believe rust does inter-crate (Thin)LTO, all bitcode produced from the same rustc frontend, which could also use this.

Conceptually yes, we don't need verification for compiler-driven LTO (which is the default). We wouldn't be able to use it as implemented though, because we do need to auto-upgrade intrinsics in the initial IR, while this effectively "breaks" UpgradeCallsToIntrinsic().

Perhaps then it should go back to the original version that just skipped the UpgradeDebugInfo during cross-module importing - would that work for you?

nikic added a comment.Mar 3 2023, 1:37 PM

I may be wrong, but I believe rust does inter-crate (Thin)LTO, all bitcode produced from the same rustc frontend, which could also use this.

Conceptually yes, we don't need verification for compiler-driven LTO (which is the default). We wouldn't be able to use it as implemented though, because we do need to auto-upgrade intrinsics in the initial IR, while this effectively "breaks" UpgradeCallsToIntrinsic().

Perhaps then it should go back to the original version that just skipped the UpgradeDebugInfo during cross-module importing - would that work for you?

I believe that should work, yes.

I may be wrong, but I believe rust does inter-crate (Thin)LTO, all bitcode produced from the same rustc frontend, which could also use this.

Conceptually yes, we don't need verification for compiler-driven LTO (which is the default). We wouldn't be able to use it as implemented though, because we do need to auto-upgrade intrinsics in the initial IR, while this effectively "breaks" UpgradeCallsToIntrinsic().

Why does Rust need to auto-upgrade intrinsics in the post-link LTO step?

nikic added a comment.Mar 4 2023, 12:44 AM

I may be wrong, but I believe rust does inter-crate (Thin)LTO, all bitcode produced from the same rustc frontend, which could also use this.

Conceptually yes, we don't need verification for compiler-driven LTO (which is the default). We wouldn't be able to use it as implemented though, because we do need to auto-upgrade intrinsics in the initial IR, while this effectively "breaks" UpgradeCallsToIntrinsic().

Why does Rust need to auto-upgrade intrinsics in the post-link LTO step?

Rust upgrades intrinsics after initial IR generation (i.e. before the pre-link pipeline). The pre-link and post-link pipelines are part of the same process, so they can't use different opt options.

I may be wrong, but I believe rust does inter-crate (Thin)LTO, all bitcode produced from the same rustc frontend, which could also use this.

Conceptually yes, we don't need verification for compiler-driven LTO (which is the default). We wouldn't be able to use it as implemented though, because we do need to auto-upgrade intrinsics in the initial IR, while this effectively "breaks" UpgradeCallsToIntrinsic().

Why does Rust need to auto-upgrade intrinsics in the post-link LTO step?

Rust upgrades intrinsics after initial IR generation (i.e. before the pre-link pipeline). The pre-link and post-link pipelines are part of the same process, so they can't use different opt options.

Then the next question is, why does Rust need to upgrade intrinsics after initial IR generation, rather than constructing initially up-to-date intrinsic calls?

nikic added a comment.Mar 6 2023, 9:19 AM

I may be wrong, but I believe rust does inter-crate (Thin)LTO, all bitcode produced from the same rustc frontend, which could also use this.

Conceptually yes, we don't need verification for compiler-driven LTO (which is the default). We wouldn't be able to use it as implemented though, because we do need to auto-upgrade intrinsics in the initial IR, while this effectively "breaks" UpgradeCallsToIntrinsic().

Why does Rust need to auto-upgrade intrinsics in the post-link LTO step?

Rust upgrades intrinsics after initial IR generation (i.e. before the pre-link pipeline). The pre-link and post-link pipelines are part of the same process, so they can't use different opt options.

Then the next question is, why does Rust need to upgrade intrinsics after initial IR generation, rather than constructing initially up-to-date intrinsic calls?

std::arch defines bindings to LLVM target intrinsics, and these need to work across multiple supported LLVM versions.
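
For context on why Rust still needs intrinsic upgrading even in a same-revision build, here is a minimal sketch of the kind of pass a frontend can run over freshly generated IR, using the existing llvm::UpgradeCallsToIntrinsic entry point. This is an assumed illustration, not how rustc is actually wired up.

#include "llvm/ADT/STLExtras.h"
#include "llvm/IR/AutoUpgrade.h"
#include "llvm/IR/Module.h"

// Rewrite calls to any outdated intrinsic declarations in M to their current
// forms. UpgradeCallsToIntrinsic is a no-op for functions that are not
// upgradable "llvm."-prefixed declarations.
static void upgradeIntrinsics(llvm::Module &M) {
  // make_early_inc_range: an outdated declaration may be erased while
  // iterating over the module's functions.
  for (llvm::Function &F : llvm::make_early_inc_range(M))
    llvm::UpgradeCallsToIntrinsic(&F);
}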

aeubanks updated this revision to Diff 503151.Mar 7 2023, 2:21 PM
aeubanks retitled this revision from [AutoUpgrade] Add flag to disable auto-upgrade to [AutoUpgrade] Add flag to disable autoupgrading debug info.
aeubanks edited the summary of this revision. (Show Details)

limit to just UpgradeDebugInfo
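
As a rough sketch of the shape the change takes at this point: a cl::opt in AutoUpgrade.cpp that short-circuits the debug-info upgrade. The flag name and the wrapper below are illustrative, written as a standalone sketch rather than a copy of the diff.

#include "llvm/IR/AutoUpgrade.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/CommandLine.h"

// Illustrative flag name; the actual option lives inside AutoUpgrade.cpp.
static llvm::cl::opt<bool> DisableAutoUpgradeDebugInfo(
    "disable-auto-upgrade-debug-info",
    llvm::cl::desc("Skip UpgradeDebugInfo(), assuming same-revision, "
                   "already-verified bitcode"));

// Wrapper showing where the early-out sits relative to the existing
// UpgradeDebugInfo() entry point quoted earlier in the thread.
static bool maybeUpgradeDebugInfo(llvm::Module &M) {
  if (DisableAutoUpgradeDebugInfo)
    return false; // Nothing verified, nothing dropped.
  return llvm::UpgradeDebugInfo(M);
}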

aprantl added inline comments.Mar 7 2023, 5:57 PM
llvm/lib/IR/AutoUpgrade.cpp
46

I suppose this text is no longer accurate?

aeubanks updated this revision to Diff 503200.Mar 7 2023, 6:04 PM

fix help text

nikic accepted this revision.Mar 10 2023, 12:42 AM

LGTM

This revision is now accepted and ready to land.Mar 10 2023, 12:42 AM

I've only skimmed the discussion, but FTR, Sony licensees are also in a single-compiler environment, which is enforced in a couple of ways. In particular we have a downstream patch to insert the compiler version into the bitcode (modeled on similar code for Darwin IIRC) and the linker will complain if it sees a different version, before starting the LTO machinery.

I'm thinking we should have (a downstream patch to make) the linker set this flag when it invokes the LTO machinery, because it has already done that check?

This revision was landed with ongoing or failed builds.Mar 14 2023, 12:46 PM
This revision was automatically updated to reflect the committed changes.